My testing framework has hundreds of tests. The average test takes about 30 seconds, so right there that's 50 minutes.
If I change ONE file I should only have to re-test the dependencies.
The way I was thinking about doing this was to check out rev0 from version control and compile it. Then update to rev1 and look at which unit tests need to be recompiled after Ant's dependency-checking task kicks in and deletes the out-of-date classes in the dependency graph.
In a trivial example I just did I found that I would only need to run 2 tests.
I imagine I can just do this with hashes of the compiled class files. Since comments aren't compiled into the bytecode, that way I can do cool things like tweak Javadoc without triggering lots of retesting.
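For illustration, a minimal sketch of the hashing idea, assuming the compiled classes live under some build output directory (the class and method names here are made up):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hashing the compiled .class files (rather than the sources) means a
// Javadoc-only edit produces identical bytecode and registers as "no change".
public class ClassHashes {

    // Relative .class path -> SHA-256 hex digest of its bytecode.
    static Map<String, String> hashClasses(Path classesDir) throws IOException {
        try (Stream<Path> files = Files.walk(classesDir)) {
            return files.filter(p -> p.toString().endsWith(".class"))
                        .collect(Collectors.toMap(
                                p -> classesDir.relativize(p).toString(),
                                ClassHashes::sha256));
        }
    }

    static String sha256(Path p) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                                         .digest(Files.readAllBytes(p));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Classes whose bytecode is new or differs between two snapshots
    // (rev0 vs rev1); only tests that depend on these would need to rerun.
    static Set<String> changed(Map<String, String> before, Map<String, String> after) {
        Set<String> out = new TreeSet<>(after.keySet());
        out.removeIf(k -> after.get(k).equals(before.get(k)));
        return out;
    }
}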
I could HACK something together, but I don't think there's any clean way to do this in JUnit/Ant.
Anyone have any ideas?
As said in a comment: if you have a unit test that takes 30 seconds, your tests are not good unit tests. They are probably not unit tests at all. You would be better off redesigning your tests.
That said, I have a large C++ software system with 25000 unit, integration and system tests. It uses make for building and cppunit for unit tests. Each module has its own suite of tests, which writes a report file for that module. I have the make dependencies set up so that only the modules that have changed rerun their tests.
Related
I built a backend server (ready-to-serve) that can load jar files as plugins and use the methods in them to serve different functionalities.
I want to write tests for it but I'm not sure what kind of tests I should write.
You should have a look at the different kinds of tests: unit tests, integration tests, end-to-end tests.
In case you are writing the code for the imported jars yourself, you could write unit tests for the helper functions and services inside. They should be small and pure (self-contained).
I guess that you have a generalised interface exposed by each jar, which your main application uses to communicate with the jar.
You could write a general integration test that imports one of the jars and calls a general method to show that the execution succeeded. Something like a health check. Then you could write more tests for other functions and the expected results, though they will probably become more focused on each separate jar.
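For example, a hedged sketch of such a health check, assuming a jar path and an entry class name; both are placeholders invented for the example:

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical health-check integration test: loads one plugin jar in
// isolation and calls an assumed healthCheck() method reflectively.
public class PluginHealthCheckTest {

    @Test
    public void pluginLoadsAndResponds() throws Exception {
        URL jar = Paths.get("plugins/sample-plugin.jar").toUri().toURL(); // assumed path
        try (URLClassLoader loader = new URLClassLoader(new URL[] { jar },
                getClass().getClassLoader())) {
            Class<?> cls = loader.loadClass("com.example.SamplePlugin"); // assumed entry class
            Object plugin = cls.getDeclaredConstructor().newInstance();
            Object healthy = cls.getMethod("healthCheck").invoke(plugin);
            assertEquals(Boolean.TRUE, healthy);
        }
    }
}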
When it comes to testing plugins, having only unit tests is not enough. You'll need integration tests too, otherwise it's not possible to test combinations of plugins and/or the dependencies between them.
Coming back to your GitHub project: from what I see, you basically need to test only the plugin API and maybe some shared resources: the configuration file, datasources and so on.
I've written a test automation framework where the items under test are files. I run each file through a JUnit parameterized suite. This is done because the contents of the file are read, and then each record in the file is injected into the framework to be run against the same set of tests.
Now sometimes these files are very big, and testing them as a single large file takes much more time than batching the same file into several smaller files and running them through individually.
This is all well and good, but I still have to kick off each test run manually. What I want is to create some kind of loop which runs each file during one session and reports the accumulated results as a whole to Maven.
I've read up on it and a lot of people point to JUnitCore; however, when I try this route, Maven only reports on the last run and doesn't retain any information about the previous test runs. (It also has some weird quirks, such as re-running the parameterized injection method twice before injecting.)
The only other thought I have had is to write a PowerShell script which runs the Maven test goal for each batched file, but ideally it would just be cleaner to have JUnit take care of this in-house.
Can anybody help with a possible solution to this?
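For concreteness, a minimal sketch of one way to get a single session: build one @Parameters list from every batch file, so one JUnit run (and one Maven Surefire report) covers them all. The directory name and the per-record check are placeholders:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Every record from every batch file becomes one parameter, so a single
// JUnit session covers all the files.
@RunWith(Parameterized.class)
public class FileRecordTest {

    @Parameters(name = "{0}")
    public static List<String> records() throws IOException {
        try (Stream<Path> files = Files.list(Paths.get("input-files"))) { // assumed directory
            return files.flatMap(f -> {
                        try {
                            return Files.readAllLines(f).stream();
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    })
                    .collect(Collectors.toList());
        }
    }

    private final String record;

    public FileRecordTest(String record) {
        this.record = record;
    }

    @Test
    public void recordPassesChecks() {
        // Stand-in for the real per-record checks in the framework.
        Assert.assertFalse(record.isEmpty());
    }
}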
I'm currently struggling with a pretty hard problem: I'm working on a project that has around 8 thousand unit tests (which take 15 minutes to execute on a pretty strong machine), and the tests that are currently failing don't fail when run on their own (or when run together with the other tests that failed), so I guess there is some test that passes but leaves some mess behind.
I'm currently trying to run those tests together with tests from specific packages, using Gradle:
test {
    filter {
        includeTestsMatching 'some.package.*'
        includeTestsMatching '*Test1'
        includeTestsMatching '*Test2'
    }
}
However, there are some things I don't know how to control, like the execution order of test classes (if someone has an idea how to change the order, that would also help me).
Perhaps someone already knows a nice process for finding tests that affect other tests?
Assuming JUnit tests, then:
define a test suite that specifies the ordering of the test classes
ensure you are using JUnit 4.11 or later to get a deterministic method order
run only the test suite from Gradle (or just directly from your IDE)
adjust the ordering by editing the suite until you reproduce the problem (see the sketch below)
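A minimal sketch of such a suite; the nested stubs stand in for your real test classes:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// The classes run in exactly the order listed, so reordering (or bisecting)
// this array helps pin down which earlier test leaves state behind.
@RunWith(Suite.class)
@Suite.SuiteClasses({
        OrderedSuite.SuspectedLeakerTest.class, // stand-in: the test suspected of leaving state
        OrderedSuite.FailingTest.class          // stand-in: the test that fails in the full run
})
public class OrderedSuite {

    // Stubs for illustration; in practice list your real test classes.
    public static class SuspectedLeakerTest {
        @Test public void runsFirst() { }
    }

    public static class FailingTest {
        @Test public void runsSecond() { }
    }
}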
I'm working on a project with a Maven structure and I want to begin doing some extended tests, but I'll need a large amount of resources to do so. Any recommendations on where I should place these resources? Should they go into 'src/test/resources', or be pulled from a different repo, or something else?
Recapping your question
Any recommendations on where I should place these resources? [...]
I'm aware that I can place them in 'src/test/resources' [...]
I'm thinking more of resources for integration tests. Developers won't necessarily want to pull hundreds of megs of resources from version control for tests they likely won't run.
Answer
You might change to a multi-module project layout, something like:
multi-module project
|
+--- source and unit test prj
|
+--- integration test prj
then your developers could pull only the source and unit test prj.
Clearly the integration test prj should have a compile-scoped dependency on the source and unit test prj.
Maven has an answer for you...
If you include the resources inside 'src/test/resources' (that's correct)...
Remember that when running an install, the tests are excluded from the final jar, and yes! The test resources are excluded too...
Moreover, you could skip the tests (and even their compilation) to improve build performance...
(more info at: http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html)
I hope this helps you...
UPDATE:
also have a look at "How can you display the Maven dependency tree for the *plugins* in your project?"
We should think about the reasons for excluding resources from the developer's environment. I just answered your question by describing the common practice for handling situations like yours in Maven...
If, as you say, it is actually a problem for you, one idea could be, for example, to separate all the tests and resources into another module... including that in your parent pom...
Or maybe to define a profile (integration test) which contains the bulk of the test resources, and a profile (developer test) that only loads the tests useful to the individual developer...
I personally think that unless there are special (critical) reasons, beyond simply the megabytes the tests occupy, you can safely proceed with all the resources in the test folder...
You should run this kind of large integration tests on a CI server, not on developer machines.
It's good practice to run long-running integration tests on a different machine to reduce the dev-test cycle time. You wouldn't want to run them in your normal builds anyway; make a profile and run it on CI.
Question
When I run all our JUnit tests using Eclipse, can I set a default timeout?
Background
My manager insists on writing Unit tests that sometimes take up to 5 minutes to complete. When I try to run our entire test suite (only about 300 tests) it can take over 30 minutes. I want to put something in place that will stop any test that takes longer than 10 seconds.
I know an individual test can be annotated with:
@Test(timeout=10000)
But doing this would make his long tests always fail. I want them to work when he runs them on his box (if I have to make minor adjustments to the project before checking it in, that's acceptable. However, deleting the timeouts from 40 different test files is not practical).
I also know I can create an Ant task to set a default timeout for all tests, along the lines of:
<junit timeout="10000">
    ...
</junit>
The problem with that is that we typically run our tests from inside Eclipse with Right Click > Run As > JUnit Test.
Summary
So is there a relatively painless way to set a timeout for all tests, perhaps using a Run Configuration setting, or project setting, or JUnit preference, or environment variable, or something? I'd even settle for installing some other plugin that lets me right click on particular test folders and run all the tests in some other manner like through ant or something...
Possible solution:
Extend all your test classes from another class: TestBase, for example.
Add a global timeout to TestBase. This timeout will be applied to all extending classes:
import org.junit.Rule;
import org.junit.rules.Timeout;

public class TestBase {
    @Rule
    public Timeout globalTimeout = new Timeout(10000); // 10 seconds, in milliseconds
}
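If the long tests still need to pass on the manager's box, a sketch of a variant that reads the limit from a system property, so each machine can set its own (the property name is made up here):

import org.junit.Rule;
import org.junit.rules.Timeout;

public class ConfigurableTimeoutBase {
    // Run with -Dtest.timeout.ms=10000 locally; the default stays generous
    // enough for the long-running tests.
    @Rule
    public Timeout globalTimeout = new Timeout(
            Integer.getInteger("test.timeout.ms", 600_000));
}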
So maybe a combination of using Infinitest with the "Slow test warning" enabled, together with the filtering feature, would do the trick. You could identify tests that exceed your time limit and add them to the filter list; this would only affect testing from inside Eclipse. Running the tests via a build script from CLI/CI etc. would not be affected at all.
You can find more on setting this up here: http://improvingworks.com/products/infinitest/infinitest-user-guide/
If you want to configure the tests to run for a maximum of ten seconds you can try this:
@Test(timeout=10000)
My manager insists on writing Unit tests that sometimes take up to 5 minutes to complete
This almost certainly indicates that those tests are not in fact unit tests. Cut that Gordian knot: try refactoring your test suite to provide equivalent test coverage without requiring a test case that runs for that long.
Almost certainly your boss's tests are system tests pretending to be unit tests. If they are supposed to be unit tests and are just slow, they should be refactored to use mocks so that they run quicker.
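To illustrate the mocking point, a small Mockito-style sketch; the service and repository here are invented for the example:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceServiceTest {

    // Invented collaborator: in a real suite this might be a DAO that hits
    // a database and makes the test slow.
    interface PriceRepository {
        int priceFor(String sku);
    }

    static class PriceService {
        private final PriceRepository repo;
        PriceService(PriceRepository repo) { this.repo = repo; }
        int total(String sku, int quantity) { return repo.priceFor(sku) * quantity; }
    }

    @Test
    public void multipliesPriceByQuantity() {
        PriceRepository repo = mock(PriceRepository.class); // no database needed
        when(repo.priceFor("widget")).thenReturn(100);
        assertEquals(300, new PriceService(repo).total("widget", 3));
    }
}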
Anyway, a more pragmatic and diplomatic approach than confronting your boss over this might be to just try to run the faster ones yourself. I've seen a hack to do this in a project where slow tests had SystemTest in their names. Two Ant targets were then created in the build file: one that ran all tests and one that filtered out the SystemTests by class name. To implement this, all you would have to do is rename some of the tests and write your Ant target.
It sounds like test suites would help you out.
You can have two test suites: QuickTests and AllTests. Include QuickTests in the AllTests suite, along with the tests that take a long time; all the other tests go into the QuickTests suite.
From Eclipse you can run an entire test suite at once. So you would run QuickTests, and that way none of the slow tests will run.
Or see this question on how to apply a timeout to a suite, which will apply to nested suites and classes in the suite. That can achieve something similar to what you want when combined with my suggestion above.
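A minimal sketch of the two-suite layout; FastTest1, FastTest2 and SlowSystemTest are placeholders for real test classes:

// QuickTests.java -- the fast tests only
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ FastTest1.class, FastTest2.class })
public class QuickTests {
}

// AllTests.java -- the quick suite plus the long-running tests
@RunWith(Suite.class)
@Suite.SuiteClasses({ QuickTests.class, SlowSystemTest.class })
public class AllTests {
}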
I know this doesn't really answer your question, but the simple answer is: don't!
Setting timeouts conditionally is wrong because then you would have unit tests on your machine that are always going to fail. The point of unit tests is to be able to quickly see that you haven't broken anything. Having to check through the failed-test list to make sure it's just the long-running tests is just going to let some bugs slip through the cracks.
As some of the commenters mentioned, you should split the tests into unit tests that run quickly and the slower-running integration tests, i.e. have a source folder called src/main/java for your code, src/test/java for unit tests and src/integration-test/java for the longer-running tests.