I have just inherited an old Java codebase (around 10-15 years old). It does not have any automated test coverage, or at least none that the contemporary world knows about. I am planning to write some FitNesse scripts around it, to begin with.
I know about Concordion and the like, and I have my reasons for picking FitNesse; I will stay away from that debate since it is not the topic of this question.
My question is, I don't know of a quick way to measure the code coverage achieved by the FitNesse tests as I write them. I know that JaCoCo (or similar libraries) should be able to report it, but I just can't figure out exactly how.
So, if any of you have worked with FitNesse test scripts and have managed to get Jenkins to report the coverage achieved by those scripts, please help.
Thanks.
I have not done this myself, because I tend to use FitNesse to test deployed applications (and I cannot measure code coverage in the 'real installation'). But I believe this should be fairly straightforward if you run your FitNesse tests as part of a Jenkins (or any other build server's) JUnit run that measures code coverage.
To have your FitNesse tests executed as part of a JUnit run: create a Java class annotated with @RunWith(FitNesseRunner.class) and give it a @Suite("MyPageOrSuite.That.IWantToRun") annotation to indicate which tests to run. This will execute the specified page(s) in the same Java process that the JUnit run uses, so if that process is instrumented in some way to determine code coverage, the coverage of your FitNesse tests will be included in the report.
Sample JUnit test class, running FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable:
import org.junit.runner.RunWith;
import fitnesse.junit.FitNesseRunner;

@RunWith(FitNesseRunner.class)
@FitNesseRunner.Suite("FitNesse.SuiteAcceptanceTests.SuiteSlimTests.TestScriptTable")
@FitNesseRunner.FitnesseDir(".")
@FitNesseRunner.OutputDir("../target/fitnesse-results")
public class FitNesseRunnerTest {
}
Related
I'm currently struggling with a pretty hard problem: I'm working on a project that has around 8,000 unit tests (which take 15 minutes to execute on a pretty strong machine), and the tests that are currently failing don't fail when run on their own (or when run only with the other tests that failed), so I guess there is some test that passes but leaves some mess behind.
I'm currently trying to run those tests together with tests from specific packages, using Gradle:
test {
    filter {
        includeTestsMatching 'some.package.*'
        includeTestsMatching '*Test1'
        includeTestsMatching '*Test2'
    }
}
However, there are some things I don't know how to control, like the execution order of test classes (if someone has an idea of how to change that order, it would also help me).
Perhaps someone already knows a nice process for finding tests that affect other tests?
Assuming JUnit tests, then:
define a test suite that specifies the ordering of the test classes (see the sketch below)
ensure you are using JUnit 4.11 or later to get a deterministic method order
run only that test suite from Gradle (or just directly from your IDE)
adjust the ordering by editing the suite until you reproduce the problem
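A minimal sketch of such a suite, assuming JUnit 4 (the test class names are placeholders for classes in your own project):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Runs the listed classes in exactly this order; reshuffle the list
// (e.g. pair the failing test with one suspect at a time) until the
// failure reproduces.
@RunWith(Suite.class)
@Suite.SuiteClasses({
        SuspectedPollutingTest.class,   // hypothetical class names
        AnotherSuspectTest.class,
        FailingTest.class
})
public class PollutionHuntSuite {
}

On the individual test classes, @FixMethodOrder(MethodSorters.NAME_ASCENDING) (available since JUnit 4.11) makes the method order within each class deterministic as well.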
I couldn't find this info anywhere. In order to get code coverage calculated using a plugin (like JaCoCo, Cobertura, etc.), do I need to run all the unit tests first? These look like related tasks, but I still think code coverage should not depend on running the unit tests beforehand, unless the coverage plugin really relies on JUnit.
You do not need to run tests beforehand. The coverage tool instruments the code (if required), runs the tests (or your main) and then reports the stats back to you.
Having said that, if your code relies on fancy reflection/bytecode manipulation, it may be a good idea to run the tests beforehand, just to make sure that any failures reported during the coverage run are the instrumentation's fault and not "real" test failures.
I'm writing some system tests in Groovy and piggybacking on its unit testing infrastructure. It works pretty well, except that I don't like the default JUnit test runner, which prints a . for each test in the suite and waits until the end to report details about the errors and failures. The system tests can take a long time to run, so it's useful to be able to interrupt them in the middle once you know that a failure or error exists, but you can only do that if you can see which test case failed and what the failure was while the suite is still running.
Are there any alternate JUnit test runners which I could use that provide this functionality out of the box?
I'm not exactly sure what you mean by "textui", but IntelliJ IDEA 11 Community Edition has an integrated JUnit test runner that works with Groovy exactly as you describe (but it has a graphical user interface only, as far as I know).
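If you would rather stay with a plain text run, here is a minimal sketch of the same idea, assuming JUnit 4 (MySystemTestSuite is a placeholder for your own suite class): a RunListener that reports each failure the moment it happens.

import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Prints each test name as it starts and full failure details immediately,
// instead of the default "." output with a summary at the end.
public class ImmediateFailureListener extends RunListener {

    @Override
    public void testStarted(Description description) {
        System.out.println("Running " + description.getDisplayName());
    }

    @Override
    public void testFailure(Failure failure) {
        System.err.println("FAILED: " + failure.getTestHeader());
        failure.getException().printStackTrace();
    }

    public static void main(String[] args) {
        JUnitCore core = new JUnitCore();
        core.addListener(new ImmediateFailureListener());
        core.run(MySystemTestSuite.class);   // hypothetical suite class
    }
}

Because the failure is printed as soon as it occurs, you can kill the run at that point instead of waiting for the final summary.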
I've seen similar questions asked on this site, but this is a bit of a different scenario from what I have seen.
We have a PC client that executes JUnit 4 tests. However, we have a custom test runner that ships the JUnit 4 tests as JUnit 3 tests (using JUnit38ClassRunner) over Ethernet to a target system that is running a service that executes JUnit tests using JUnit 3.8. The tests execute as intended; however, when they are returned to the PC client they are marked as Unrooted Tests. Is there a way to organize these tests as "non-unrooted tests"? It is somewhat difficult to sift through the failed results when they are all returned in one group and you are not using Eclipse. Using JUnit 4 is not an option on the remote system, as the target embedded system uses Java 1.4.2, and this is not changing anytime in the near future. We really do not want to have to downgrade to JUnit 3.8 on the PC client side because of the @RunWith annotation, which will take us a little while to figure out how to re-implement.
Any assistance on this is appreciated, thanks in advance.
I wasn't able to find a solution for this in Eclipse, but using Ant to manage the JUnit execution generates XML reports, and I can view a summary in a web page by compiling those reports with the <junitreport> task.
Is there a tool for Java which, given a set of JUnit tests and a class to test, will tell you which lines of the class are tested by the tests, i.e. required to be present for the tests to run successfully? I don't mean "code coverage", which only tells you whether a line is executed, but something stronger: is the line required for the test to pass?
I often comment out a line of code and run a test to see whether the test really is testing that line of code. I reckon this could be done automatically by a semi-smart tool (e.g. something like an IDE that can work out what can be removed from a method while keeping it compilable).
There's an open source mutation-testing tool called Jester that changes the lines of your source code, then runs your tests, and reports whether your tests passed anyway. Sounds closer to what you're looking for than code coverage tools.
Jester is a test tester for testing your Java JUnit tests (Pester is for Python PyUnit tests). It modifies your source code, runs the tests, and reports if the tests pass despite the changes to the code. This can indicate missing tests or redundant code.
WRT the discussion about whether these tools are needed in a pure TDD project, there is a link on the Jester project webpage to a posting about the benefits of using Jester on code written during a TDD session (Uncle Bob's infamous bowling TDD example).
What you are looking for might be referred to as mutation testing. While mutation testing won't tell you which lines of code are required for a test to pass per se, what it does is modify your source code, looking for changes it can make while your tests still pass. E.g. changing
if (a < b)
to
if (a >= b)
and seeing if the test still passes. This will highlight weaknesses in your test.
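As a toy illustration, with made-up class and test names, here is the kind of gap a mutation tool surfaces: the test never checks the boundary, so weakening the comparison leaves it green.

import org.junit.Assert;
import org.junit.Test;

class Discount {
    // Intended rule: orders strictly below 100 get no discount.
    static int discount(int total) {
        return total < 100 ? 0 : 10;
    }
}

public class DiscountTest {
    // A weak test: it never checks total == 100, so a mutation that turns
    // "total < 100" into "total <= 100" still passes both assertions,
    // which is exactly what a mutation tool would report.
    @Test
    public void smallOrdersGetNoDiscount() {
        Assert.assertEquals(0, Discount.discount(50));
        Assert.assertEquals(10, Discount.discount(200));
    }
}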
Another Java library for mutation testing is Jumble.
I use EMMA for most of my projects. I included it in my Ant build file, and it generates HTML files for the reports.
Two other coverage projects I have read about but haven't tried yet are Clover and Cobertura.
I love Cobertura, because the generated reports are, IMHO, the most beautiful. And it has its own Ant target!
In comparison to EMMA, it also reports branch coverage, not only line coverage, which on its own is very often misleading.
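To illustrate why line coverage alone can mislead (the class and test below are made up for this example): the test executes every line of withdraw, so line coverage is 100%, but only one of the two branches of the if is ever taken.

import org.junit.Assert;
import org.junit.Test;

class Account {
    private int balance;

    Account(int balance) {
        this.balance = balance;
    }

    // Guard and early exit share one line, so a test that never triggers
    // the overdraft case still "covers" this line completely.
    void withdraw(int amount) {
        if (amount > balance) throw new IllegalArgumentException("overdrawn");
        balance -= amount;
    }

    int balance() {
        return balance;
    }
}

public class AccountTest {
    // 100% line coverage, but the "amount > balance" branch is never taken:
    // a branch coverage report flags the if as only half covered.
    @Test
    public void withdrawReducesBalance() {
        Account account = new Account(100);
        account.withdraw(40);
        Assert.assertEquals(60, account.balance());
    }
}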