I'm working on a project with a lot of legacy code that is not covered by tests.
Is there any way to set up the CI server to check that all new commits have a minimum level of test coverage (say, >70%)?
Essentially, I see two options:
Option 1: Somehow set up the CI server to fail the build when the committed changes themselves are not covered by unit tests. This would ensure that every piece of new code has tests and that coverage of the legacy code increases with each change.
Option 2: Set a coverage threshold for the whole project and fail the build if the coverage percentage decreases after a commit. The problem with this is that the percentage can go up without anyone writing a test: if the project has, say, 700 of 1000 instructions covered (70%), and I delete an untested class of 100 instructions and add a new untested class of 50 instructions, coverage becomes 700/950 ≈ 74%.
I like option 1 more because it forces changes to legacy code to be unit tested as well, which should increase overall test coverage over time.
Right now we're using Jenkins as our CI server and JaCoCo for test coverage. Maven is used for building the project and SVN is our main source control.
I know you can configure Jenkins to verify that at least one test file is part of the commit. That wouldn't ensure good test coverage, but at least you'd know the commit included some kind of test-related change.
Some coverage tools (like Cobertura) support excluding packages. That way, you can exclude all the old code (assuming it can be pattern-matched) and have Cobertura check only the new code, which covers new commits.
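If you're on Maven, a minimal sketch of what that exclusion could look like with the cobertura-maven-plugin (the package pattern is a placeholder for wherever your legacy code lives):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <configuration>
    <instrumentation>
      <!-- exclude legacy code from instrumentation; the pattern is illustrative -->
      <excludes>
        <exclude>com/yourapp/legacy/**/*.class</exclude>
      </excludes>
    </instrumentation>
  </configuration>
</plugin>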
I hope this helps.
For option 2, you can use the Jenkins JaCoCo plugin to track the code coverage for each build and mark the build passed or failed depending on the coverage metrics.
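If you'd rather enforce the threshold in the build itself, so it fails even when run outside Jenkins, recent versions of the jacoco-maven-plugin also have a check goal. A rough sketch, with rule and counter names as I recall them from the plugin documentation:

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>check-coverage</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <!-- fail the build below 70% instruction coverage -->
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>INSTRUCTION</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.70</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>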
I like option 1 better too, but I don't know of a built-in way for Jenkins to do it. It should be fairly easy (at least at the class level) to post-process the coverage data and combine it with the SVN revision info. Something like:
Parse the JaCoCo output files and find classes that have 0% coverage
Get the file(s) changed in this build from the SVN revision details (Jenkins exposes the revision numbers in environment variables: SVN_REVISION if there is a single revision for the build, or SVN_REVISION_1, SVN_REVISION_2, ... for multiple)
Print an error message if any of the changed classes has 0% coverage
Use the Jenkins Text Finder plugin to fail the build if the error message is printed.
This isn't a full solution; it gets trickier for new methods or lines added to classes that already have some coverage. It gives me an idea for a new Jenkins plugin ;-)
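A very rough sketch of such a build step, assuming the standard jacoco.xml report location, xmllint on the PATH, and that COVERAGE-ERROR is whatever string you configure Text Finder to look for:

#!/bin/sh
# Classes with zero covered instructions, from the JaCoCo XML report,
# reduced to simple class names (nested classes and duplicate names
# across packages would need more care than this)
xmllint --xpath '//class[counter[@type="INSTRUCTION"]/@covered="0"]/@name' \
    target/site/jacoco/jacoco.xml \
    | tr ' ' '\n' | sed -e 's/.*"\(.*\)".*/\1/' -e 's#.*/##' \
    | sort -u > uncovered.txt
# Simple names of the .java files changed in this build's revision
svn log -v -q -c "$SVN_REVISION" \
    | grep '\.java$' | sed -e 's#.*/##' -e 's/\.java$//' \
    | sort -u > changed.txt
# Print the marker line the Text Finder plugin watches for
comm -12 changed.txt uncovered.txt | sed 's/^/COVERAGE-ERROR: /'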
I have built a tool which does exactly this:
https://github.com/exussum12/coverageChecker
You pass in the diff of the branch and the coverage report from the tests. The tool works out which lines in the diff also appear in the coverage file (Clover format, for PHPUnit) and fails the build if less than a certain percentage of them is covered.
To use it:
bin/diffFilter --phpunit diff.txt clover.xml 70
This fails the build when less than 70% of the diff is covered by a test.
I can add other formats if needed
Edit:
I have added JaCoCo support:
bin/diffFilter --jacoco diff.txt jacoco.xml
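In a Jenkins job against SVN, the diff input could be produced along these lines (PREVIOUS_REVISION is a placeholder you would have to supply yourself; any unified diff of the change works):

svn diff -r "$PREVIOUS_REVISION:$SVN_REVISION" > diff.txt
bin/diffFilter --jacoco diff.txt jacoco.xml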
Recently I upgraded SonarQube from 3.5 to 4.5.4 (LTS), and now a few users are complaining that some reports are missing from their project dashboards. The widgets missing reports/numbers are lines of code and complexity, and the unit test coverage widget displays nothing. Other widgets (like technical debt, issues, and directory tangle index) display 0, which is also suspicious. The project is in Java, using the Sonar way profile.
The user does:
mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install
mvn sonar:sonar -Dsonar.login=login -Dsonar.password=***** -Dcom.sun.jndi.ldap.connect.pool.prefsize=0 -Dcom.sun.jndi.ldap.connect.pool.timeout=3600000
The sonar:sonar step shows "0 files indexed".
The log is huge, so I don't want to paste it here, and I could not find anything helpful in it. What do I need to do to get back all the reports I used to have?
I have a test project where most of the missing data is displayed "out of the box".
Starting with version 4.3, SonarQube no longer runs automated tests itself. It expects your Jenkins/CI system to run the tests, generate the JUnit/PMD/JaCoCo/Clover etc. reports, and then tell SonarQube where to find them. (In older versions of SonarQube, this behavior could be achieved by setting the "reuseReports" flag to true.)
If the build is not configured to generate the reports, it will need to be adjusted to do so.
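In this case that boils down to keeping the prepare-agent step and telling the analysis where the JaCoCo data ended up, roughly like this (sonar.jacoco.reportPath was the property name in the 4.x era; check the docs for your exact version):

mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install
mvn sonar:sonar -Dsonar.jacoco.reportPath=target/jacoco.exec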
Currently I'm looking at integrating some build processes into my source control (Git hooks, specifically). I'm trying to write a pre-commit hook that checks for build errors in my Java project (a medium-large test development project) and refuses commits that contain build errors. This is turning out to be rather challenging.
The approach here uses a command-line Eclipse tool to build and output warnings and errors. It does technically work, but it's slow and can cause problems with the Eclipse IDE (I've already hit heap allocation errors). I've also looked at solutions using Ant, but those don't seem to offer a simple one-line solution and may still be slow.
My main question: what's the fastest (in compilation run time) way to build and validate a Java project from the command line? I'd like a command that returns 0 when there are no errors and something else when there are, but I'm willing to look at other approaches.
Let's start with some basics:
A Git pre-commit hook runs on the client, so you have to make sure every developer has javac available, at the correct version. If you enforce the check on the server side instead (a pre-receive or update hook), there is no working directory by default, so you'd have to check out a fresh copy before you could compile anything.
Either way, the hook freezes the user's terminal until it completes.
Now, how long will it take to check out a fresh copy of your Java project, run Ant, wait for it to compile, and then process the output of the compile? A minute or two? 20 seconds? 10 seconds? Even 10 seconds will feel like forever as you wait for the push to complete. And if other users want to push code, they have to wait too.
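For reference, the bare-bones client-side version of what's being asked for is only a few lines (a sketch, assuming a Maven project; Git aborts the commit on any non-zero exit), and it blocks the terminal for the full length of the build on every commit:

#!/bin/sh
# .git/hooks/pre-commit -- refuse the commit if the project does not compile
mvn -q compile || {
    echo "Compilation failed; commit aborted." >&2
    exit 1
}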
A better and easier approach is to use a continuous integration server like Jenkins. Jenkins is easy to set up (it comes with its own application server built in) and has hundreds of plugins you can use to help report the health of your project. If a compile fails, Jenkins will email the culprit and whomever else you specify.
We have Jenkins set up to do Ant builds and Maven builds, using either Git or Subversion as the repository (depending on the project). Jenkins builds the project, keeps the console log, and fails the build if build.xml fails. At our place, that means I start pestering the developer to fix the problem or undo their changes. At my last workplace, developers were given 10 minutes to fix the build, or I would undo their changes.
Not only can Jenkins let you know when a build fails, it has plugins that can report on Java compiler warnings and Javadoc warnings, run FindBugs and PMD, find duplicated lines of code (via CPD, which ships with PMD), and then chart everything across builds. You can also mark builds as unstable (the build completes, but is problematic) or simply fail them, based on the number of issues these tools find.
Jenkins can also run your unit tests, graph the results, and then run coverage analysis with JaCoCo, Cobertura, or Emma.
So take a look at Jenkins. It's easy to set up and will do exactly what you want and more.
Ant. There isn't going to be a "one-line solution". Write an Ant script that compiles the code and fails if there are any errors. It's not easy, but it's the best option.
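A minimal sketch of such a script (paths are illustrative): javac's failonerror defaults to true, and Ant itself exits non-zero on BUILD FAILED, which gives exactly the 0-versus-non-zero behavior asked for.

<project name="validate" default="compile">
  <target name="compile">
    <mkdir dir="build/classes"/>
    <!-- any compilation error fails the build (failonerror is true by default) -->
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
  </target>
</project>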
Out of the choices you mention, Ant is the best. But let's face it, writing XML is a pain. My guess is that any build tool will fail and return a non-zero exit code when compilation fails. My favorite is sbt, but there's a bit of a learning curve if you aren't into Scala (and even Scala people like to complain about sbt). Another great option, IMO, is Gradle: you write your build scripts in Groovy, a dynamically typed language whose syntax is largely a superset of Java's.
Jenkins may be something you could look at.
I am measuring code coverage in my project using the EclEmma plugin for Eclipse. This involves running coverage for the whole project, but due to some dependency issues, the tests in certain packages fail when everything runs together. When coverage for those packages is run individually, the tests pass and the packages show coverage correctly.
Is it possible to get a coverage report by running coverage for each package separately and then merging the reports into one?
Alternatively, are there any other free plugins which offer this capability?
Note: modifying the test methods to remove the dependency may not be possible due to logical and size constraints.
It's been a while since I switched to IDEA, but I seem to recall there was an option (as in "button in the EclEmma view") to merge several coverage runs.
A visit to http://www.eclemma.org/ confirms this: look for "Merge Sessions". See also:
http://www.eclemma.org/userdoc/sessions.html and
http://www.eclemma.org/userdoc/coverageview.html
It is the button to the right of the "double-X" (Remove All Sessions) button.
I have set up Eclipse to work just as I want with my Java web app, using the following instructions: https://stackoverflow.com/a/6189031/106261.
Is it also possible to get unit tests to run as part of the automatic build, without running a Maven install (or test)? That is, I make a change to a class, the tests get run, and if one fails I get some sort of indicator, all without manually running mvn test.
There are several ways to achieve this, all with their own limitations:
You could set up a CI server which builds your project every time you commit a new version. Very reliable, but not really "real time".
You can add your own builder to the project's list of builders (project properties -> Builders), for example an Ant builder which runs "ant test" or something similar (a sketch of such a target follows this list). This builder gets invoked every time you save. Every time. That means Eclipse will become a total slug unless running your unit tests takes no more than a few milliseconds.
You can use one of the plugins mentioned here: Is it possible to run incremental/automated JUnit testing in Eclipse?
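A sketch of the kind of "ant test" target such a builder could invoke (assumes JUnit is on Ant's classpath; haltonfailure makes any red test fail the build, which Eclipse then flags):

<target name="test" depends="compile">
  <junit haltonfailure="true" fork="true">
    <classpath>
      <pathelement location="build/classes"/>
      <!-- plus junit.jar and your other dependencies -->
    </classpath>
    <batchtest>
      <fileset dir="build/classes" includes="**/*Test.class"/>
    </batchtest>
  </junit>
</target>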
IDE misconfiguration is a big source of wasted time on our team.
I wanted to know whether other teams have tried checking the health of the Eclipse workspace with continuous integration.
Eclipse is open source and extensible, and most (all?) of its configuration files are XML, so it should not be difficult to add a step to continuous integration that checks the health of the workspace: no missing JAR files, no errors, and so on.
What we have is a separate Ant script that does the real builds that go to QA and to the customers. This Ant script runs under continuous integration, and we have put in place a few simple checks that catch the biggest showstoppers.
The workspace configuration is a different story, and we sometimes detect problems when it's too late (the dev has left for home).
EDIT: Note that we share our Eclipse config files.
There is some information on building with Eclipse from the command line here.
(Should be a comment, but I can't).
I don't see why you want to do that. Eclipse complains loudly if anything is broken, so leave it to the developer.
What you should consider instead, in my opinion, is writing checks around the build process itself: checks that the builds produced from the source code developers have checked into the repository are as you expect.
If a build breaks because a JAR is missing, add a check. If a build breaks because it depends on a certain feature of the JVM, add a check.
Only ship builds outside the development team that pass all the checks. Builds that fail should be fixed by the developer who introduced the change that broke the build.
Since you are using Ant, you can create a check that verifies the following files against pre-defined reference copies and reports a problem if they don't match:
workspace/.metadata/*.* (whichever configurations you think are important)
workspace/project/.classpath
workspace/project/.project
workspace/project/.settings/*.* (whichever configurations you think are important)
Of course, these files include some hard-coded paths, so you can use regular expressions in the pre-defined files.
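For the simple exact-match case you don't even need a custom task; Ant's built-in filesmatch condition will do (file names here are illustrative; the regular-expression variant would need the custom task described above):

<target name="check-workspace">
  <fail message=".classpath differs from the reference copy">
    <condition>
      <not>
        <filesmatch file1="project/.classpath"
                    file2="ci/reference/.classpath"
                    textfile="true"/>
      </not>
    </condition>
  </fail>
</target>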
If you want to check only simple things like "the project doesn't compile", then just compile the project in the Ant script (using the javac task) and see if there are errors.
Another thing: continuous integration should be IDE-agnostic. That is, you should have an IDE-less environment (a CI engine) that compiles the project. Imagine the following:
Three developers, one of whom accidentally removed a JAR from his Eclipse setup, while the project in the repository still compiles. No need to report a problem in that case.
One of the developers adds a new JAR and commits. The others have not updated yet. No problems are reported in their workspaces, although after they update, they might get one.
All that said, I think you'd better look at Hudson, which is a continuous integration engine. That way you won't be dependent on IDE settings for your builds.