Do coverage plugins need unit tests to be run beforehand? - java

I couldn't find this info anywhere. In order to get code coverage calculated using a plugin (like JaCoCo, Cobertura, etc.), do I need to run all the unit tests first? These look like related tasks, but I still think code coverage should not depend on running unit tests beforehand, unless the coverage plugin really relies on JUnit.

You do not need to run tests beforehand. The coverage tool instruments the code (if required), runs the tests (or your main method), and then reports the stats back to you.
Having said that, if your code relies on fancy reflection/bytecode manipulation, it may be a good idea to run the tests beforehand, just to make sure that any failures reported during the coverage run are the instrumentation's fault and not "real" test failures.
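To make "instruments the code" concrete: a coverage tool rewrites your classes so that each line or branch records when it runs. A rough conceptual sketch in Java source (hypothetical; real tools like JaCoCo insert probes at the bytecode level, and Probe is a made-up name):

public int abs(int x) {
    Probe.hit(0);       // hypothetical probe: method entered
    if (x < 0) {
        Probe.hit(1);   // probe: negative branch taken
        return -x;
    }
    Probe.hit(2);       // probe: non-negative branch taken
    return x;
}

The coverage report is then just a rendering of which probes fired while the tests ran, which is why the tool runs the tests itself (or attaches to the JVM that does).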

Related

JUnit tests pass but PIT says the suite isn't green

While trying to run a PIT mutation test I get the following error:
mutationCoverage failed: All tests did not pass without mutation when calculating line coverage. Mutation testing requires a green suite.
The tests run just fine when I do a normal test build, but during the mutation testing phase they supposedly fail, and no details are provided as to why. I have gone through the reasons listed in the PIT Testing FAQ but I still have no clue what could be wrong.
I tried:
adding the -Dthreads=1 option to rule out any multithreading issues
could not find any system properties unique to the couple of tests that are failing
the tests are not ignored under normal runs
What are some other things I should try? Or other ways to debug what could be going on here?
The common causes of tests failing at the coverage stage are:
PIT picking up tests that are not included in (or are excluded from) the normal test config
Tests rely on an environment variable or other property set in the test config, but not set in the pitest config
Tests have a hidden order dependency that is not revealed during the normal test run
PIT doesn't like something in your tech stack - possibly a JUnit test runner
It sounds like you've eliminated 1 and 2, so that leaves 3 and 4.
Test order dependencies can be hard to spot. If the answer is yes to any of these questions, you may have one (a minimal sketch of the first case follows below).
Does your codebase include mutable static state? (e.g. in singletons)
Do your tests hit a database (in memory or otherwise) where it is possible for state to persist between tests?
Do your tests modify files on disk?
There are probably also many other causes not listed above.
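To make the first point concrete, here is a minimal hypothetical example of a hidden order dependency through mutable static state (Counter is a made-up class):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderDependentTest {

    static class Counter {
        static int count = 0;                 // shared mutable static state
        static void increment() { count++; }
    }

    @Test
    public void firstIncrementYieldsOne() {
        Counter.increment();
        // Passes only if no other test has touched Counter first.
        // PIT may run tests in a different order, or more of them in
        // the same JVM, and so expose the dependency.
        assertEquals(1, Counter.count);
    }
}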
If you are confident that order dependencies are impossible in your code base, that leaves a problem with these particular tests.
It's hard to guess what this might be without some code. Can you post a simplified version of the test that still fails?

Code coverage for FitNesse

I have just inherited an old Java codebase (around 10-15 years old). It does not have any automated test coverage, or at least none that the contemporary world knows about. I am planning to write some FitNesse scripts around it, to begin with.
I know about Concordion etc., and I have my reasons for picking FitNesse; I will stay away from that debate since it is not the topic of this question.
My question is: I don't know of a quick way to measure the code coverage achieved by the FitNesse tests as I write them. I know that JaCoCo (or similar libraries) should be able to report it, but I just can't figure out exactly how.
So, if any of you have worked on FitNesse test scripts and have managed to have Jenkins report on the coverage achieved by the scripts, please help.
Thanks.
I have not done this myself, because I tend to use FitNesse to test deployed applications (and I cannot measure code coverage in the 'real installation'). But I believe this should be fairly straightforward if you run your FitNesse tests as part of a Jenkins (or any other build server's) JUnit run, which measures code coverage.
To have your FitNesse tests executed as part of a JUnit run: create a Java class annotated with @RunWith(FitNesseRunner.class) and give it a @Suite("MyPageOrSuite.That.IWantToRun") annotation to indicate which tests to run. This will execute the specified page(s) in the same Java process the JUnit run is using, so if that process is instrumented in some way to determine code coverage, the coverage of your FitNesse tests will be included in the report.
Sample JUnit test class, running FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable:
@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("../target/fitnesse-results")
public class FitNesseRunnerTest {
}

Using Maven and Jenkins, how do I verify that the programmers wrote real test cases?

I am working on a number of projects and we are using Java, Spring, Maven and Jenkins for CI, but I am running into an issue: some of the programmers are not adding real JUnit test cases to the projects. I want Maven and Jenkins to run the tests before deploying to the server. Some of the programmers made a blank test, so it just starts, stops, and passes.
Can someone please tell me how I can automate this check so Maven and Jenkins can see whether the tests actually produce some output.
I have not found any good solution to this issue, other than reviewing the code.
Code coverage fails to detect the worst unit tests I ever saw
Looking at the number of tests? That fails there too. Looking at the test names? You bet that fails.
If you have developers like the "Kevin" who writes tests like those, you'll only catch those tests by code review.
The summary of how "Kevin" defeats the checks:
Write a test called smokes. In this test you invoke every method of the class under test with differing combinations of parameters, each call wrapped in try { ... } catch (Throwable t) {/* ignore */}. This gives you great coverage, and the test never fails (see the sketch after this list).
Write a load of empty tests with names that sound like you have thought up fancy test scenarios, e.g. widgetsTurnRedWhenFlangeIsOff, widgetsCounterrotateIfFlangeGreaterThan50. These are empty tests, so they will never fail, and a manager inspecting the CI system will see lots of detailed test cases.
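A hypothetical sketch of what such a "Kevin" test class looks like (Widget and its methods are invented names):

import org.junit.Test;

public class WidgetTest {

    static class Widget {
        void turnRed() { }
        void setFlange(int n) { }
        void counterrotate() { }
    }

    @Test
    public void smokes() {
        Widget w = new Widget();
        // Every call is swallowed, so this "test" executes lots of
        // code (great coverage!) yet can never fail.
        try { w.turnRed(); } catch (Throwable t) { /* ignore */ }
        try { w.setFlange(51); } catch (Throwable t) { /* ignore */ }
        try { w.counterrotate(); } catch (Throwable t) { /* ignore */ }
    }

    // Impressive names, empty bodies: never fail, look great in CI.
    @Test public void widgetsTurnRedWhenFlangeIsOff() { }
    @Test public void widgetsCounterrotateIfFlangeGreaterThan50() { }
}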
Code review is the only way to catch "Kevin".
Hope your developers are not that bad
Update
I had a shower moment this morning. There is a type of automated analysis that can catch "Kevin". Unfortunately it can still be cheated around, so while it is not a solution to people writing bad tests, it does make it harder to write bad tests.
Mutation Testing
This is an old project and won't work on recent code, so I am not suggesting you use it. But I am suggesting that it hints at a type of automated analysis that would stop "Kevin".
If I were implementing this, what I would do is write a "JestingClassLoader" that uses, e.g. ASM, to rewrite the bytecode with one little "jest" at a time. Then run the test suite against your classes when loaded with this classloader. If the tests don't fail, you are in "Kevin" land. The issue is that you need to run all the tests against every branch point in your code. You could use automatic coverage analysis and test time profiling to speed things up, though. In other words, you know what code paths each test executes, so when you make a "jest" against one specific path, you only run the tests that hit that path, and you start with the fastest test. If none of those tests fail, you have found a weakness in your test coverage.
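To sketch the idea (a minimal hypothetical "jest" using the ASM library, not a working Jester replacement; a real tool would apply one mutation at a time, remember which one it applied, and sort out classloader delegation properly):

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.Label;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class JestingClassLoader extends ClassLoader {

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            byte[] original = getResourceAsStream(
                    name.replace('.', '/') + ".class").readAllBytes();
            ClassReader reader = new ClassReader(original);
            ClassWriter writer = new ClassWriter(reader, 0);
            reader.accept(new ClassVisitor(Opcodes.ASM9, writer) {
                @Override
                public MethodVisitor visitMethod(int access, String mName,
                        String desc, String sig, String[] ex) {
                    MethodVisitor mv = super.visitMethod(access, mName, desc, sig, ex);
                    return new MethodVisitor(Opcodes.ASM9, mv) {
                        @Override
                        public void visitJumpInsn(int opcode, Label label) {
                            // The "jest": invert every IFEQ branch. If the
                            // suite still passes, those branches are untested.
                            super.visitJumpInsn(
                                opcode == Opcodes.IFEQ ? Opcodes.IFNE : opcode,
                                label);
                        }
                    };
                }
            }, 0);
            byte[] mutated = writer.toByteArray();
            return defineClass(name, mutated, 0, mutated.length);
        } catch (Exception e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}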
So if somebody were to "modernize" Jester, you'd have a way to find "Kevin" out.
But that will not stop people writing bad tests, because you can pass that check by writing tests that verify the code behaves as it currently behaves, bugs and all. Heck, there are even companies selling software that will "write the tests for you". I will not give them the Google PageRank by linking to them from here, but my point is: if your developers get their hands on such software, you will have loads of tests that straitjacket your codebase and don't find any bugs, because as soon as you change anything the "generated" tests will fail. Making a change then requires arguing over the change itself as well as the changes to all the unit tests the change broke, increasing the business cost of making a change, even if that change fixes a real bug.
I would recommend using Sonar, which has a very useful build breaker plugin.
Within the Sonar quality profile you can set alerts on any combination of metrics, so, for example, you could mandate that your Java projects should have
"Unit tests" > 1
"Coverage" > 20
Forcing developers to have at least 1 unit test that covers a minimum of 20% of their codebase. (Pretty low quality bar, but I suppose that's your point!)
Setting up an additional server may seem like extra work, but the solution scales when you have multiple Maven projects. The Jenkins plugin for Sonar is all you'll need to configure.
JaCoCo is the default code coverage tool, and Sonar will also automatically run other tools like Checkstyle, PMD and FindBugs.
Finally, Stephen is completely correct about code review. Sonar has some basic, but useful, code review features.
You need to add a code coverage plugin such as JaCoCo, EMMA, Cobertura, or the like. Then you need to define, in the plugin's configuration, the percentage of code coverage (basically "code covered by the tests") that you would like to have in order for the build to pass. If it's below that number, you can have the build fail. And if the build fails, Jenkins (or whatever your CI is) won't deploy.
As others have pointed out, if your programmers are already out to cheat coding practices, using better coverage tools won't solve your problem; those can be cheated as well.
You need to sit down with your team and have an honest talk with them about professionalism and what software engineering is supposed to be.
In my experience, code reviews are great but they need to happen before the code is committed. But for that to work in a project where people are 'cheating', you'll need to at least have a reviewer you can trust.
http://pitest.org/ is a good solution for so-called "mutation testing":
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
The quality of your tests can be gauged from the percentage of mutations killed.
The good thing is that you can easily use it in conjunction with Maven, Jenkins and ... SonarQube!

Is there a tool for Java which finds which lines of code are tested by specific JUnit tests?

Is there a tool for Java which, given a set of JUnit tests and a class to test, will tell you which lines of the class are tested by the tests, i.e. required to be present for the tests to run successfully? I don't mean "code coverage", which only tells you whether a line is executed, but something stronger than that: is the line required for the test to pass?
I often comment out a line of code and run a test to see if the test really is testing that line of code. I reckon this could be done automatically by a semi-smart tool (e.g. something like an IDE that can work out what can be removed from a method while keeping it compilable).
There's an open-source mutation testing tool called Jester that changes the lines of your source code, then runs your tests, and reports whether your tests passed anyway. That sounds closer to what you're looking for than code coverage tools.
Jester is a test tester for testing your Java JUnit tests (Pester is for Python PyUnit tests). It modifies your source code, runs the tests, and reports if the tests pass despite the changes to the code. This can indicate missing tests or redundant code.
WRT the discussion about whether these tools are needed in a pure TDD project, there is a link on the Jester project webpage to a posting about the benefits of using Jester on code written during a TDD session (Uncle Bob's infamous bowling TDD example).
What you are looking for might be referred to as mutation testing. While mutation testing won't tell you which lines of code are required for a test to pass per se, what it does is modify your source code, looking for changes it can make after which your tests still pass. E.g. changing
if (a < b)
to
if (a >= b)
and seeing if the test still passes. This will highlight weaknesses in your test.
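For instance (a hypothetical max implementation), the following test executes the mutated line yet passes under both variants of the condition, so the mutation survives and the weakness is exposed:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MaxTest {

    static int max(int a, int b) {
        if (a < b) return b;   // the line the mutation flips to (a >= b)
        return a;
    }

    @Test
    public void maxOfEqualValues() {
        // With a == b both variants return 2, so line coverage is happy
        // but the mutation survives: the test pins down too little.
        assertEquals(2, max(2, 2));
    }
}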
Another Java library for mutation testing is Jumble.
I use Emma for most of my projects. I included it in my Ant build file and it generates HTML files for the reports.
Two other coverage projects I have read about but haven't tried yet are Clover and Cobertura.
I love Cobertura, because the generated reports are IMHO the most beautiful. And it has its own Ant target!
In comparison to Emma, it also has branch coverage, not only line coverage, which on its own is very often misleading.
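A small hypothetical example of how line coverage misleads where branch coverage does not:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ClampTest {

    static int clamp(int x) {
        if (x > 10 && x < 100) return x;   // one line, two conditions, four branch outcomes
        return 0;
    }

    @Test
    public void everyLineButNotEveryBranch() {
        assertEquals(50, clamp(50));   // in-range path
        assertEquals(0, clamp(5));     // out-of-range path
        // Every line has now executed: 100% line coverage. But the case
        // x >= 100 is never tried, so one branch outcome goes untested.
        // Branch coverage reports that gap; line coverage hides it.
    }
}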

unit testing - per-test code coverage for Java

We use JUnit for unit testing our Java code. Today we use Cobertura to get coverage numbers, but it does not have an easy way of getting per-test coverage numbers. Is there a tool to get per-test code coverage - commercial or free?
(Cobertura has a patch to get per-test coverage numbers, but it is out of date with the latest Cobertura.)
We used Clover to good effect. We wrote some Ant tasks that allowed us to run it from a dev box, so we could view the coverage numbers locally, and we also integrated it into our continuous integration so we had a site for the official numbers.
http://www.atlassian.com/software/clover/
The only issue we had was that it is a memory hog...
Emma provides detailed reports by overall/package/class for block and line coverage.
The obvious way to do this is to run one test and dump the test coverage data. (In fact, this is the only way to do it.)
Our SD Java Test Coverage Tool has explicit DumpVectors and ResetVectors procedures that can be called anytime. By adjusting the unit test framework to just call these two procedures between tests, you can get one test coverage vector per unit test.
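For example, with JUnit 4 you could wire the two calls into a base class that your tests extend (a hypothetical sketch; TestCoverage is a stand-in for whatever runtime class the tool actually exposes, so check its documentation for the real names):

import org.junit.After;
import org.junit.Before;

public abstract class PerTestCoverageCase {

    @Before
    public void resetCoverage() {
        TestCoverage.ResetVectors();   // start each test with a clean vector
    }

    @After
    public void dumpCoverage() {
        TestCoverage.DumpVectors();    // one coverage vector per test
    }
}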
The display tool will display any individual test coverage vector. It can also give you the union of the entire set (as if you had run all the tests) or compute how one test overlaps with another.
