I could easily exclude the class from execution using the build system (Gradle), but what I don't like about that is that it's silent. I want a failure that has to be fixed prior to check-in.
So we use the naming convention "*ITCase.java" for integration tests and "*Test.java" for unit tests. It's happened before that someone used "*ITest.java", which resulted in integration tests running in the unit test phase. We suspect this has caused us some mysterious transactional issues during parallel runs. I want a test that fails when this happens. Since we make copious use of parent classes, it would seem that we already have a means for doing this.
An idea I've had is to inject a property into the test JVM so that I know whether the "test" task or the "integrationTest" task is running, then check that property and fail if it's wrong. I'm not sure how to do that with Gradle.
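One way to wire that up, as a sketch: inject a marker system property from each Test task, then assert on it from a shared parent class. The property name test.phase and the task/class names below are assumptions, not anything standard.

// build.gradle: stamp each test JVM with the task that launched it
test {
    systemProperty 'test.phase', 'unit'
}

task integrationTest(type: Test) {
    systemProperty 'test.phase', 'integration'
}

// A parent class for the "*ITCase" tests (AbstractITCase is a stand-in for
// whatever parent you already have); it fails fast when run by the wrong task.
import org.junit.Assert;
import org.junit.BeforeClass;

public abstract class AbstractITCase {
    @BeforeClass
    public static void failIfWrongPhase() {
        Assert.assertEquals("integration test class ran outside the integrationTest task",
                "integration", System.getProperty("test.phase"));
    }
}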
I'm currently struggling with a pretty hard problem: I'm working on a project that has around 8,000 unit tests (which take 15 minutes to execute on a pretty strong machine), and the tests that are currently failing don't fail when run on their own (or when run only with the other tests that failed), so I guess there is some test that passes but leaves some mess behind.
I'm currently trying to run those tests together with tests from specific packages, using Gradle:
test {
    filter {
        includeTestsMatching 'some.package.*'
        includeTestsMatching '*Test1'
        includeTestsMatching '*Test2'
    }
}
However, there are some things I don't know how to control, like the execution order of the test classes (if someone has an idea how to change that order, it would also help me).
Perhaps someone already knows a good process for finding tests that affect other tests?
Assuming JUnit tests, then:
define a test suite that specifies the ordering of the test classes (see the sketch below)
ensure you are using JUnit 4.11 or later to get a deterministic method order
run only that test suite from Gradle (or just directly from your IDE)
adjust the ordering by editing the suite until you reproduce the problem
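A minimal JUnit 4 suite along those lines (the class names are placeholders for your own tests):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Classes run in exactly the order listed; reorder and re-run to hunt the culprit.
// Within each class, JUnit 4.11+ can give a deterministic method order (@FixMethodOrder).
@RunWith(Suite.class)
@Suite.SuiteClasses({
        SomePackageTest.class,
        Test1.class,
        Test2.class
})
public class OrderHuntingSuite {
}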
I couldn't find this info anywhere. In order to get code coverage calculated by a plugin (like JaCoCo, Cobertura, etc.), do I need to run all the unit tests first? These look like related tasks, but I still think code coverage should not depend on running the unit tests beforehand, unless the coverage plugin really relies on JUnit.
You do not need to run the tests beforehand. The coverage tool instruments the code (if required), runs the tests (or your main), and then reports the stats back to you.
Having said that, if your code relies on fancy reflection/bytecode manipulation, it may be a good idea to run the tests beforehand anyway, just to make sure that any failures reported during the coverage run are the instrumentation's fault and not "real" test failures.
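For example, with Gradle's JaCoCo plugin the wiring looks roughly like this (a sketch; report configuration omitted):

// build.gradle: JaCoCo instruments the test JVM itself, so generating the
// report just needs the test run that produces the coverage data
apply plugin: 'jacoco'

jacocoTestReport {
    dependsOn test
}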
I have 1000 JUnit tests and one "bad" test that modifies a shared resource, causing a subsequent test to fail. It passes if run alone. I'm looking for a Maven plugin, Java application, or tool that will take a test class name as input and then run the 1000 tests in various combinations until it finds the "bad" test. Assume there is exactly one "bad" test.
As you may have noticed, writing dependent tests IS the problem! Every solution is a workaround, possibly with side effects, added complexity being the first of them.
Assuming you REALLY can't change that, you have several strategies to tackle the issue, though they are not necessarily related to Maven:
Ordering
Order your tests to run the "bad one" at the end... but be careful: you may be affected again if you have a second bad test!
Use TestNG instead of JUnit, with its @Groups and @AfterGroups annotations, to split your tests and run them as you want
Use @AfterClass and @BeforeClass with a test suite: Cleanup after all junit tests
Manually describe a test suite (not sure whether you can achieve what you want that way)
Provide good data
Use setUp and tearDown methods to prepare and clean the data, so every test class always starts from a stable environment (sketched just after this list)
Roll back whatever the test does to your shared resource (pretty much the same thing, really)
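For the setUp/tearDown route, something like this, where SharedResource is a hypothetical stand-in for whatever the bad test touches:

import org.junit.After;
import org.junit.Before;

public class SomeResourceTest {
    @Before
    public void setUp() {
        SharedResource.reset();   // start every test from a known state
    }

    @After
    public void tearDown() {
        SharedResource.reset();   // undo whatever this test changed
    }
}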
Stepping back
Just one final thought:
If you cannot run tests independently, they are not unit tests.
Question
When I run all our JUnit tests from Eclipse, can I set a default timeout?
Background
My manager insists on writing unit tests that sometimes take up to 5 minutes to complete. When I try to run our entire test suite (only about 300 tests), it can take over 30 minutes. I want to put something in place that will stop any test that takes longer than 10 seconds.
I know an individual test can be annotated with:
@Test(timeout=10000)
But doing this would make his long tests always fail. I want them to keep working when he runs them on his box (if I have to make minor adjustments to the project before checking it in, that's acceptable; however, deleting the timeouts from 40 different test files is not practical).
I also know I can create an Ant task that sets a default timeout for all tests, along the lines of:
<junit timeout="10000">
...
</junit>
The problem with that is that we typically run our tests from inside Eclipse with Right Click > Run As > JUnit Test.
Summary
So, is there a relatively painless way to set a timeout for all tests, perhaps using a run configuration setting, a project setting, a JUnit preference, an environment variable, or something else? I'd even settle for installing some other plugin that lets me right-click particular test folders and run all their tests in some other manner, like through Ant or something...
Possible solution:
Extend all your test classes from a common base class, e.g. TestBase.
Add a global timeout rule to TestBase; it will apply to every subclass:
import org.junit.Rule;
import org.junit.rules.Timeout;

public class TestBase {
    // 10-second cap applied to every test method in every subclass
    @Rule
    public Timeout globalTimeout = new Timeout(10000);
}
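A subclass then inherits the rule with no extra code (hypothetical test shown):

import org.junit.Test;

public class SomethingTest extends TestBase {
    @Test
    public void finishesWithinTenSeconds() {
        // inherits globalTimeout from TestBase; aborts and fails past 10 seconds
    }
}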
So maybe a combination of Infinitest with the "Slow test warning" enabled, together with its filtering feature, would do the trick. You could identify tests that exceed your time limit and add them to the filter list; this would only affect testing from inside Eclipse. Running the tests via a build script from the CLI/CI would not be affected at all.
You can find more on setting this up here: http://improvingworks.com/products/infinitest/infinitest-user-guide/
If you want to configure the tests to run for a maximum of ten seconds, you can try this:
@Test(timeout=10000)
My manager insists on writing Unit tests that sometimes take up to 5 minutes to complete
This almost certainly indicates that those tests are not, in fact, unit tests. Cut that Gordian knot: try refactoring your test suite to provide equivalent test coverage without requiring a test case that runs for that long.
Almost certainly your boss's tests are system tests pretending to be unit tests. If they are supposed to be unit tests and are just slow, they should be refactored to use mocks so that they run quicker.
Anyway, a more pragmatic and diplomatic approach than confronting your boss over this might be to just run the faster ones yourself. I've seen a hack for this in a project where the slow tests had SystemTest in their names. Two Ant targets were then created in the build file: one that ran all tests, and one that filtered out the SystemTests by class name. To implement this, all you would have to do is rename some of the tests and write your Ant target.
It sounds like test suites would help you out.
You can have two test suites: QuickTests and AllTests. Include QuickTests in the AllTests suite, along with the tests that take a long time. All the other tests then go into the quick suite.
From Eclipse you can run an entire test suite at once, so you would run QuickTests and none of the slow tests would run (see the sketch below).
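Sketched out with placeholder class names (each suite in its own file):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ FastTestA.class, FastTestB.class })
public class QuickTests {
}

// in another file: AllTests nests QuickTests plus the slow ones
@RunWith(Suite.class)
@Suite.SuiteClasses({ QuickTests.class, FiveMinuteTest.class })
public class AllTests {
}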
Or see this question on how to apply a timeout to a suite, which will also apply to nested suites and classes within the suite. Combined with my suggestion above, that can achieve something close to what you want.
I know this doesn't really answer your question, but the simple answer is: don't!
Setting timeouts conditionally is wrong, because then you would have unit tests on your machine that are always going to fail. The point of unit tests is to be able to quickly see that you haven't broken anything. Having to check through the failed-test list to make sure it's just the long-running tests is only going to let some bugs slip through the cracks.
As some of the commenters mentioned, you should split the tests into unit tests that run quickly and slower-running integration tests, i.e. have a source folder called src/main/java for your code, src/test/java for the unit tests, and src/integration-test/java for the longer-running tests.
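With Gradle, that split can look roughly like this (a sketch; the task and directory names are common conventions, not requirements):

// build.gradle: separate source set and task for the slow tests
sourceSets {
    integrationTest {
        java.srcDir 'src/integration-test/java'
        compileClasspath += sourceSets.main.output + sourceSets.test.output
        runtimeClasspath += sourceSets.main.output + sourceSets.test.output
    }
}

configurations {
    integrationTestImplementation.extendsFrom testImplementation
    integrationTestRuntimeOnly.extendsFrom testRuntimeOnly
}

task integrationTest(type: Test) {
    testClassesDirs = sourceSets.integrationTest.output.classesDirs
    classpath = sourceSets.integrationTest.runtimeClasspath
}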