We have a new mandate on our team that any new code pushed to any of our Git repositories must have 100% code coverage. We already have existing code (actively used, not legacy or deprecatable) whose coverage is around 75-80%, depending on the repository.
We generate Sonar reports with JaCoCo as the underlying coverage analysis tool.
However, in the reports we cannot identify the coverage percentage for the new code alone. Is there a way to do that?
Please note that excluding modules or files is not always feasible, since some of the added code lives in existing classes.
Is there some setting that forces Sonar/JaCoCo to provide this information?
I don't know what your reports need to look like, but maybe the "differentials" help:
http://www.sonarqube.org/differentials-four-ways-to-see-whats-changed/
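With a differential period configured, the dashboards can show metrics such as coverage on new code for that period. In SonarQube versions of that era, this was controlled by the sonar.timemachine.* properties; a sketch (verify the exact property names and accepted values against your server version):
# Differential view periods (global or per-project server settings)
# 'previous_analysis' compares against the last analysis; a plain
# number means that many days back; a date or version string also works
sonar.timemachine.period1=previous_analysis
sonar.timemachine.period2=30
sonar.timemachine.period3=previous_version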
Is such a thing possible?
To explain: I'm on a project where the Acceptance Tests are run in a bespoke way from an actual webpage rather than as part of one of Maven's test phases. It's a long story why, so please consider criticisms of this to be off-topic!
I'd like to see the coverage after clicking a button that runs the tests. On-the-fly coverage while the tests are running isn't a necessity, but I'd at least like to be able to see line-by-line Java code coverage in Eclipse after the tests have finished - ideally with the page still up and the JVM still running.
I'd be grateful if any replies could include any needed tools, pom fragments, and setup information.
EDIT: Forgot to mention the customer is only interested in using open source tools.
Yes, this is possible and easy. Our (Semantic Designs) Java Test Coverage tools do this just fine.
You instrument the application and compile it. As it runs (anything: unit tests, ad hoc interactive exercises, etc.) it collects test coverage data. At any moment, you can cause the test coverage data to be dumped without disturbing the application. That dumped coverage data can be immediately imported and viewed.
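Since the question's edit asks for open source tools, a similar instrument/run/dump workflow can be sketched with the JaCoCo agent in TCP-server mode (the jar path, port, and plugin version below are assumptions to adapt):
# Start the webapp's JVM with the JaCoCo agent (path and port are placeholders)
java -javaagent:/path/to/jacocoagent.jar=output=tcpserver,address=localhost,port=6300 -jar mywebapp.jar

<!-- pom.xml: fetch a dump on demand with 'mvn jacoco:dump' -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.9</version> <!-- assumed version; use whatever is current -->
  <configuration>
    <address>localhost</address>
    <port>6300</port>
    <destFile>${project.build.directory}/jacoco.exec</destFile>
  </configuration>
</plugin>
The resulting .exec file can then be imported into Eclipse via the EclEmma plugin ("Import Session") for line-by-line highlighting, with the page still up and the JVM still running.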
I currently use Clover to measure the code coverage of my Java code. One feature which I rely on is the ability to exclude arbitrary sections of code from coverage reports:
///CLOVER:OFF because this case is simpler to verify by code read
if (lFile.isFile() &&
        lFile.getName().endsWith(FILE_EXTN) &&
        !lFile.delete())
{
    throw new IOException("delete() failed for: " + lFile);
}
///CLOVER:ON
I find this kind of exclusion makes it much easier to focus on testing the interesting logic while still achieving 100% code coverage.
Are there any other Java code coverage tools (either free or paid) which support this kind of fine grained exclusion? Whole class or whole method exclusions aren't good enough.
NOTE: I am currently investigating adding something suitable to JaCoCo (Issue #14).
The following are open source Java code coverage tools; they may help you:
NoUnit
InsECT
Jester
JVMDI Code Coverage Analyser
GroboCodeCoverage
jcoverage/gpl
JBlanket
Cobertura
Coverlipse
Hansel
CodeCover
EMMA
PIT
From my experience, the following all work well:
In terms of closed-source: Clover
In terms of open-source: Cobertura (but it does not work with Java 7), EMMA
Is there a tool for Java which, given a set of JUnit tests and a class to test, will tell you which lines of the class are tested by the tests? I.e., which lines are required to be present for the tests to run successfully. I don't mean "code coverage", which only tells you whether a line is executed, but something stronger than that: is the line required for the test to pass?
I often comment out a line of code and run a test to see if the test really is testing that line of code. I reckon this could be done automatically by a semi-smart tool (e.g. something like an IDE that can work out what can be removed from a method while keeping it compilable).
There's an open source mutation-testing tool called Jester that changes the lines of your source code, then runs your tests, and reports whether your tests passed anyway. Sounds closer to what you're looking for than code coverage tools.
Jester is a test tester for testing your Java JUnit tests (Pester is for Python PyUnit tests). It modifies your source code, runs the tests, and reports if the tests pass despite the changes to the code. This can indicate missing tests or redundant code.
WRT the discussion about whether these tools are needed in a pure TDD project, there is a link on the Jester project webpage to a posting about the benefits of using Jester on code written during a TDD session (Uncle Bob's infamous bowling TDD example).
What you are looking for might be referred to as mutation testing. While mutation testing won't tell you which lines of code are required for a test to pass, per se, what it does is modify your source code, looking for changes it can make such that your tests still pass. E.g. changing
if (a < b)
to
if (a >= b)
and seeing if the test still passes. This will highlight weaknesses in your test.
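For instance, consider this made-up sketch (class and method names are illustrative): the only test exercises equal inputs, so the mutated comparison passes too:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MaxTest {
    // Hypothetical method under test
    static int max(int a, int b) {
        if (a < b) {   // a mutation tool may flip this to (a >= b)
            return b;
        }
        return a;
    }

    @Test
    public void maxOfEqualValues() {
        // Passes against both the original and the mutant (5 >= 5 also
        // returns 5), so the surviving mutation flags the weak test
        assertEquals(5, max(5, 5));
    }
}
Adding a test like assertEquals(7, max(7, 3)) kills that mutant, because the mutated code would return 3.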
Another Java library for mutation testing is Jumble.
I use EMMA for most of my projects. I included it in my Ant build file, and it generates HTML files for the reports.
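The Ant wiring is roughly this (a sketch based on EMMA's Ant tasks; jar locations, paths, and target names are assumptions):
<!-- build.xml: minimal EMMA wiring -->
<path id="emma.lib">
  <pathelement location="lib/emma.jar"/>
  <pathelement location="lib/emma_ant.jar"/>
</path>
<taskdef resource="emma_ant.properties" classpathref="emma.lib"/>

<target name="coverage">
  <!-- instrument the compiled classes -->
  <emma enabled="true">
    <instr instrpath="build/classes" destdir="build/instr" metadatafile="coverage/metadata.emma"/>
  </emma>
  <!-- ... run the tests against build/instr here ... -->
  <!-- generate the HTML report from metadata plus runtime data -->
  <emma enabled="true">
    <report sourcepath="src">
      <fileset dir="coverage" includes="*.emma, *.ec"/>
      <html outfile="coverage/coverage.html"/>
    </report>
  </emma>
</target>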
Two other coverage projects I've read about but haven't tried yet are Clover and Cobertura.
I love Cobertura, because the generated reports are IMHO the most beautiful. And it has its own Ant target!
In comparison to EMMA, it also has branch coverage, not only line coverage, which is very often misleading.
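To illustrate with a contrived sketch (names are made up): one test executes every line of this method, so line coverage reports 100%, but the false branch is never taken:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {
    // Contrived example: two branches share one line
    static int discount(boolean member) {
        int percent = 0;
        if (member) percent = 10; // both branches live on this single line
        return percent;
    }

    @Test
    public void memberGetsDiscount() {
        // 100% line coverage, but only one of the two branches is exercised
        assertEquals(10, discount(true));
    }
}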
What code analysis tools do you use on your Java projects?
I am interested in all kinds:
static code analysis tools (FindBugs, PMD, and any others)
code coverage tools (Cobertura, Emma, and any others)
any other instrumentation-based tools
anything else, if I'm missing something
If applicable, also state what build tools you use and how well these tools integrate with both your IDEs and build tools.
If a tool is only available in a specific way (as an IDE plugin, or, say, a build tool plugin), that information is also worth noting.
For static analysis tools I often use CPD, PMD, FindBugs, and Checkstyle.
CPD is the PMD "Copy/Paste Detector" tool. I was using PMD for a little while before I noticed the "Finding Duplicated Code" link on the PMD web page.
I'd like to point out that these tools can sometimes be extended beyond their "out-of-the-box" set of rules. And not just because they're open source so that you can rewrite them. Some of these tools come with applications or "hooks" that allow them to be extended. For example, PMD comes with the "designer" tool that allows you to create new rules. Also, Checkstyle has the DescendantToken check that has properties that allow for substantial customization.
I integrate these tools with an Ant-based build. You can follow the link to see my commented configuration.
In addition to the simple integration into the build, I find it helpful to configure the tools to be somewhat "integrated" in a couple of other ways. Namely, report generation and warning suppression uniformity. I'd like to add these aspects to this discussion (which should probably have the "static-analysis" tag also): how are folks configuring these tools to create a "unified" solution? (I've asked this question separately here)
First, for warning reports, I transform the output so that each warning has the simple format:
/absolute-path/filename:line-number:column-number: warning(tool-name): message
This is often called the "Emacs format," but even if you aren't using Emacs, it's a reasonable format for homogenizing reports. For example:
/project/src/com/example/Foo.java:425:9: warning(Checkstyle):Missing a Javadoc comment.
My warning format transformations are done by my Ant script with Ant filterchains.
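For instance, the Checkstyle transformation looks roughly like this (a sketch; the exact regular expression depends on the plain-text output format of your tool versions):
<!-- build.xml: rewrite Checkstyle's plain output into the Emacs format -->
<copy file="reports/checkstyle-plain.txt" tofile="reports/checkstyle-emacs.txt">
  <filterchain>
    <tokenfilter>
      <!-- rewrite 'file:line:col: message' lines -->
      <replaceregex pattern="^(.*):([0-9]+):([0-9]+): (.*)$"
                    replace="\1:\2:\3: warning(Checkstyle): \4"/>
    </tokenfilter>
  </filterchain>
</copy>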
The second "integration" that I do is for warning suppression. By default, each tool supports comments or an annotation (or both) that you can place in your code to silence a warning that you want to ignore. But these various warning suppression requests do not have a consistent look which seems somewhat silly. When you're suppressing a warning, you're suppressing a warning, so why not always write "SuppressWarning?"
For example, PMD's default configuration suppresses warning generation on lines of code with the string "NOPMD" in a comment. Also, PMD supports Java's @SuppressWarnings annotation. I configure PMD to use comments containing "SuppressWarnings(PMD." instead of NOPMD so that PMD suppressions look alike. I fill in the particular rule that is violated when using the comment-style suppression:
// SuppressWarnings(PMD.PreserveStackTrace) justification: (false positive) exceptions are chained
Only the "SuppressWarnings(PMD." part is significant for a comment, but it is consistent with PMD's support for the #SuppressWarning annotation which does recognize individual rule violations by name:
#SuppressWarnings("PMD.CompareObjectsWithEquals") // justification: identity comparision intended
Similarly, Checkstyle suppresses warning generation between pairs of comments (no annotation support is provided). By default, comments to turn Checkstyle off and on contain the strings CHECKSTYLE:OFF and CHECKSTYLE:ON, respectively. Changing this configuration (with Checkstyle's "SuppressionCommentFilter") to use the strings "BEGIN SuppressWarnings(CheckStyle." and "END SuppressWarnings(CheckStyle." makes the controls look more like PMD:
// BEGIN SuppressWarnings(Checkstyle.HiddenField) justification: "Effective Java," 2nd ed., Bloch, Item 2
// END SuppressWarnings(Checkstyle.HiddenField)
With Checkstyle comments, the particular check violation (HiddenField) is significant because each check has its own "BEGIN/END" comment pair.
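The corresponding filter configuration in checkstyle.xml looks roughly like this (a sketch; see the SuppressionCommentFilter documentation for the exact property semantics in your version):
<module name="SuppressionCommentFilter">
  <property name="offCommentFormat" value="BEGIN SuppressWarnings\(Checkstyle\.([\w\|]+)\)"/>
  <property name="onCommentFormat" value="END SuppressWarnings\(Checkstyle\.([\w\|]+)\)"/>
  <!-- $1 scopes the suppression to the check named in the comment -->
  <property name="checkFormat" value="$1"/>
</module>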
FindBugs also supports warning generation suppression with a @SuppressWarnings annotation, so no further configuration is required to achieve some level of uniformity with the other tools. Unfortunately, FindBugs has to provide a custom @SuppressWarnings annotation because the built-in Java @SuppressWarnings annotation has a SOURCE retention policy, which is not strong enough to retain the annotation in the class file where FindBugs needs it. I fully qualify FindBugs warning suppressions to avoid clashing with Java's @SuppressWarnings annotation:
@edu.umd.cs.findbugs.annotations.SuppressWarnings("UWF_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR")
These techniques make things look reasonably consistent across tools. Note that having each warning suppression contain the string "SuppressWarnings" makes it easy to run a simple search to find all instances for all tools over an entire code base.
I use a combination of Cobertura, Checkstyle, (Ecl)Emma and Findbugs.
EclEmma is an awesome Eclipse plugin that shows the code coverage by coloring the Java source in the editor (screenshot) - the coverage is generated by running a JUnit test. This is really useful when you are trying to figure out which lines are covered in a particular class, or if you want to see just which lines are covered by a single test. This is much more user-friendly and useful than generating a report and then looking through the report to see which classes have low coverage.
The Checkstyle and FindBugs Eclipse plugins are also useful; they generate warnings in the editor as you type.
Maven2 has report plugins that work with the above tools to generate reports at build time. We use this to get overall project reports, which are more useful when you want aggregate numbers. These are generated by our CI builds, which run using Continuum.
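For reference, the Maven 2 wiring is just a reporting-section entry per tool; a sketch (version numbers are placeholders):
<!-- pom.xml -->
<reporting>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.2</version> <!-- placeholder version -->
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>2.5</version> <!-- placeholder version -->
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>2.3</version> <!-- placeholder version -->
    </plugin>
  </plugins>
</reporting>
Running mvn site then generates all of the reports together.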
We use all of the following, and they integrate easily into both our Maven 2.x builds and Eclipse/RAD 7:
Testing - JUnit/TestNG
Code analysis - FindBugs, PMD
Code coverage - Clover
In addition, in our Maven builds we have:
JDepend
Tag checker (TODO, FIXME, etc.)
Furthermore, if you're using Maven 2.x, CodeHaus has a collection of handy Maven plugins in their Mojo project.
Note: Clover has out-of-the-box integration with the Bamboo CI server (since they're both Atlassian products). There are also Bamboo plugins for FindBugs, PMD, and CheckStyle but, as noted, the free Hudson CI server has those too.
I use the static analysis built into IntelliJ IDEA. Perfect integration.
I use the code coverage built into Intellij IDEA (based on EMMA). Again, perfect integration.
This integrated solution is reliable, powerful, and easy-to-use compared to piecing together tools from various vendors.
Checkstyle is another one I've used at a previous company... it's mainly for style checking, but it can do some static analysis too. Also, Clover for code coverage, though be aware it is not a free tool.
We are using FindBugs and Checkstyle, as well as Clover for code coverage.
I think it's important to have some kind of static analysis supporting your development. Unfortunately, it's still not widely appreciated that these tools are important.
We use FindBugs and JDepend integrated with Ant. We use JUnit but we're not using any coverage tool.
I'm not using it integrated with Rational Application Developer (the IDE I'm using to develop J2EE applications) because I like how neat it looks when you run javac in the Windows console. :P
I've had good luck with Cobertura. It's a code coverage tool which can be executed via your Ant script as part of your normal build and can be integrated into Hudson.
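A minimal sketch of that Ant wiring (jar locations, paths, and target names are assumptions):
<!-- build.xml -->
<path id="cobertura.classpath">
  <fileset dir="lib">
    <include name="cobertura.jar"/>
    <include name="**/*.jar"/>
  </fileset>
</path>
<taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>

<target name="instrument" depends="compile">
  <!-- instrument the compiled classes into a separate directory -->
  <cobertura-instrument todir="build/instrumented">
    <fileset dir="build/classes" includes="**/*.class"/>
  </cobertura-instrument>
</target>

<target name="coverage-report" depends="test">
  <!-- run after the tests have executed against the instrumented classes -->
  <cobertura-report format="html" destdir="reports/coverage" srcdir="src"/>
</target>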
Our team uses PMD and Cobertura. Our projects are Maven projects, which makes it very simple to include plugins for code analysis. The real question is which analyses you need for a specific project; my opinion is that you can't use the same plugins for every project.
In our project we use Sonar in front of Checkstyle, PMD, etc. Together with the CI server (Bamboo, Hudson), we also get a nice history of our source quality and the direction we are heading. I like Sonar because it is one central tool in the CI stack that does it all for you, and you can easily customize the rules for each project.
Structure101 is good at code analysis and at finding cyclic package dependencies.
I am looking for many answers, to learn about new tools and consolidate this knowledge in one question/thread, so I doubt there will be one true answer to this question.
My answer to my own question is that we use:
FindBugs to look for common errors and bad coding practices - run from Maven, and it also integrates easily into Eclipse
Cobertura for our coverage reports - run from Maven
Hudson also has a task-scanner plugin that will display a count of your TODO and FIXMEs, as well as show where they are in the source files.
All are integrated with Maven 1.x in our case and tied into Hudson, which runs our builds on check-in as well as extra tasks nightly and weekly. Hudson trend-graphs our JUnit tests, coverage, FindBugs results, and open tasks. There is also a Hudson plugin that reports and graphs our compile warnings. We also have several performance tests with their own graphs of performance and memory use over time, using the Hudson plots plugin.