I've written a test automation framework where the application under test is a set of files. I run each file through a JUnit parameterized suite: the contents of the file are read, and each record in the file is injected into the framework to be run across the same set of tests.
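(For reference, a minimal sketch of that kind of setup, assuming JUnit 4's Parameterized runner; Record and RecordReader are hypothetical stand-ins for the framework's own types:)

import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class FileRecordTest {

    // Hypothetical helper that reads every record from the file under test
    // and wraps each one in an Object[] for the runner.
    @Parameters(name = "record {index}")
    public static List<Object[]> records() throws Exception {
        return RecordReader.readRecordsAsParameters("input/batch-01.dat");
    }

    private final Record record; // hypothetical domain type

    public FileRecordTest(Record record) {
        this.record = record;
    }

    @Test
    public void recordPassesTheSameChecks() {
        // the same set of tests is applied to each injected record
    }
}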
Sometimes these files are very big, and testing one as a single large file takes much more time than splitting it into several smaller batch files and running those through individually.
This is all well and good, but I still have to kick off each test run manually. What I want is some kind of loop which runs each file during one session and reports the accumulated results as a whole to Maven.
I've read up on it and a lot of people point to JUnitCore; however, when I try this route, Maven only reports on the last run and doesn't retain any information about the previous test runs. (It also has some weird quirks, such as running the parameterized injection method twice before injecting.)
The only other thought I have had is to write a PowerShell script which runs the Maven test goal for each batched file, but ideally it would be cleaner to have JUnit take care of this in-house.
Can anybody help with a possible solution to this?
Related
I found out about the rerunFailingTestsCount feature in Surefire (commit). When a test fails, the runner tries to rerun it up to a specified number of times. If any of these reruns succeeds, the test is considered PASSED, but FLAKY.
This feature implements an extension to the JUnit XML report format, with additional attributes in the test result.
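For reference, enabling it is a single Surefire configuration entry; a sketch, assuming a Surefire version recent enough to support the rerun feature:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- rerun each failing test up to 2 more times; a pass on rerun is reported as flaky -->
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>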
How can I configure Jenkins CI to meaningfully show the newly gained test data about my testing?
I would like to be able to monitor my flaky tests, so I can maintain a general overview of what's going on, and later prioritize fixing the ones that slow the build the most.
A build containing only flaky tests should be easily distinguishable from one that contains some failed tests, and from one containing only passing tests.
It looks like you've found almost all the answers by yourself :)
The only thing that IMO is missing is some Jenkins plugin that can actually show (visualize) the flaky tests based on the Surefire reports.
There is indeed such a plugin called Flaky Test Handler.
Disclaimer: I haven't tried it myself, but it seems it can do the job, and it would be my best bet for solving the issue.
An alternative would be writing a Jenkins plugin yourself, but that looks like a lot of hassle.
Yet another approach I can think of is creating a Maven plugin that parses the results of the Surefire plugin and produces an additional HTML report; you could then just display that HTML report in Jenkins (avoiding writing a Jenkins plugin).
One last approach, which I worked with a long time ago, is a Maven plugin that again parses the Surefire test results and adds them to some database (MongoDB or similar). It can be invoked only in Jenkins, so that Jenkins can supply some additional information like the build number.
Later on you could roll your own UI that queries MongoDB and gives statistics over builds (the previous build had 10 flaky tests, this build has 8, and so forth).
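A rough sketch of that parsing step (assuming Surefire's extended report format, where a test that only passed on rerun carries flakyFailure/flakyError child elements in its testcase entry; the database write is left out):

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FlakyReportScanner {

    // Counts the test cases in one Surefire report file that passed only after a rerun.
    static int countFlakyTests(File surefireXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(surefireXml);
        NodeList testcases = doc.getElementsByTagName("testcase");
        int flaky = 0;
        for (int i = 0; i < testcases.getLength(); i++) {
            Element testcase = (Element) testcases.item(i);
            if (testcase.getElementsByTagName("flakyFailure").getLength() > 0
                    || testcase.getElementsByTagName("flakyError").getLength() > 0) {
                flaky++;
            }
        }
        return flaky; // store this together with the Jenkins build number
    }
}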
In JUnit, I have 2 tests. If I run them individually, each takes about 4 seconds. But if I run them together, the total is still only 4-5 seconds; I expected 8 seconds.
In both tests, only the input varies.
It is not a Spring project, and it does not use databases.
It uses Aspose.Words to create an MS Word document from XML.
How can I run each test as if it were running fresh every time?
I have found that the JIT compiler in Java optimizes a method once it has been run frequently. This is the reason for the performance improvement.
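A naive illustration of that effect (not a rigorous benchmark): the same work gets noticeably faster once the JVM has seen the method often enough to compile it.

public class WarmupDemo {

    // Some repeatable work, comparable to running the same test body again.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0; // keep the result so the work is not optimized away entirely
        for (int round = 1; round <= 5; round++) {
            long start = System.nanoTime();
            sink += work();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("round " + round + ": " + elapsedMs + " ms");
        }
        System.out.println("(ignore) " + sink);
        // Typically the first round is the slowest; later rounds benefit from JIT compilation.
    }
}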
I'm currently reading Continuous Delivery, and in the book the author says that it is crucial to build the binaries only once and then use the same binaries for every deployment. What I'm having trouble understanding is how this can be done in practice. For example, in order to run the mocked unit tests, wouldn't there be a special build? What I'm referring to is the scope tag in Maven.
If you look at the Maven lifecycle you'll see that there is only one compile phase for your production code. Your tests are compiled and executed right after the source compilation. With mocked unit tests it is the same: two separate compilations for two objectives.
I think that the author of your book is referring to a problem that can appear when you deploy automatically to several environments: it creates more environments to debug. It is essential to have only one final binary for all environments. If you have several binaries split across your environments, you can be sure that you will forget what the differences between two of them are, or which argument you gave to one and not the other. For Continuous Delivery, it has to be the same everywhere.
Let's come back to Maven. Maven offers a lot of possibilities during its lifecycle. Sometimes you'll have to run several builds to complete everything (code coverage, for example). This can be useful in your continuous integration process and can be done through different build types (every hour for unit tests, every day for code coverage, quality analysis and integration tests).
But in the end, when you move to Continuous Delivery, you'll build one final binary: one unique binary copied across your environments.
My testing framework has hundreds of tests. The average test takes 30 seconds, so right there that's 50 minutes.
If I change ONE file I should only have to re-test the dependencies.
The way I was thinking of doing this was to check out rev0 from version control and compile it. Then update to rev1 and look at which unit tests need to be recompiled after the Ant task kicks in and deletes the classes in the dependency graph.
In a trivial example I just did I found that I would only need to run 2 tests.
I imagine I can just do this with hashes of the files. That way I can do cool things like tweak Javadoc without triggering lots of retesting.
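A minimal sketch of the hashing part (a hypothetical helper, not something JUnit or Ant provide): compare each file's content hash against the hash recorded for the previous revision, and only re-run tests whose dependencies actually changed.

import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class FileHashes {

    // Stable content hash for one source file; if it matches the previous
    // revision's hash, the file (and anything depending only on it) is unchanged.
    static String sha256(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(file));
        return new BigInteger(1, digest).toString(16);
    }
}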
I could HACK something together, but I don't think there's any clean way to do this with JUnit/Ant.
Anyone have any ideas?
As said in a comment: if you have a unit test that takes 30 seconds, your tests are not good unit tests. They are probably not unit tests at all. You would be better off redesigning your tests.
That said, I have a large C++ software system with 25000 unit, integration and system tests. It uses make for building and cppunit for unit tests. Each module has its own suite of tests, which write a report file for each module. I have the make dependencies set up so only the modules that have changed rerun their tests.
Question
When I run all our JUnit tests, using eclipse, can I set a default timeout?
Background
My manager insists on writing Unit tests that sometimes take up to 5 minutes to complete. When I try to run our entire test suite (only about 300 tests) it can take over 30 minutes. I want to put something in place that will stop any test that takes longer than 10 seconds.
I know an individual test can be annotated with:
@Test(timeout=10000)
But doing this would make his long tests always fail. I want them to work when he runs them on his box (if I have to make minor adjustments to the project before checking it in, that's acceptable. However, deleting the timeouts from 40 different test files is not practical).
I also know I can create an ant task to set a default timeout for all tests, along the lines of:
<junit timeout="10000">
...
</junit>
The problem with that is that we typically run our tests from inside Eclipse with Right Click > Run As > JUnit Test.
Summary
So is there a relatively painless way to set a timeout for all tests, perhaps using a Run Configuration setting, a project setting, a JUnit preference, an environment variable, or something? I'd even settle for installing some other plugin that lets me right-click particular test folders and run all the tests in some other manner, like through Ant or something...
Possible solution:
Extend all your test classes from another class, TestBase for example.
Add a global timeout to TestBase. The timeout will be applied to all extending classes:
import org.junit.Rule;
import org.junit.rules.Timeout;

public class TestBase {
    // applies a 10-second (10000 ms) timeout to every test in subclasses
    @Rule
    public Timeout globalTimeout = new Timeout(10000);
}
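An existing test class then just extends it (sketch; the class and method names are made up):

import org.junit.Test;

public class ReportGenerationTest extends TestBase {

    @Test
    public void generatesReport() {
        // inherits the 10-second timeout rule from TestBase
    }
}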
So maybe a combination of Infinitest with the "Slow test warning" enabled, together with its filtering feature, would do the trick. You could identify tests that exceed your time limit and add them to the filter list; this would only affect testing from inside Eclipse. Running the tests via a build script from the CLI/CI etc. would not be affected at all.
You can find more on setting this up here: http://improvingworks.com/products/infinitest/infinitest-user-guide/
If you want to configure the tests to run for a maximum of ten seconds you can try this:
@Test(timeout=10000)
My manager insists on writing Unit tests that sometimes take up to 5 minutes to complete
This almost certainly indicates that those tests are not in fact unit tests. Cut that Gordian knot: try refactoring your testsuite to provide equivalent test coverage without requiring a test-case that runs for that long.
Almost certainly your boss's tests are system tests pretending to be unit tests. If they are supposed to be unit tests and are just slow, they should be refactored to use mocks so that they run quicker.
Anyway, a more pragmatic and diplomatic approach than confronting your boss over this might be to just run the faster ones yourself. I've seen a hack to do this in a project where slow tests had SystemTest in their names: two Ant targets were created in the build file, one that ran all tests and one that filtered out the SystemTests by class name. To implement this, all you would have to do is rename some of the tests and write your Ant target.
It sounds like test suites would help you out.
You can have two test suites: QuickTests and AllTests. Include QuickTests in the AllTests suite, along with the tests that take a long time. All other tests go into the QuickTests suite.
From Eclipse you can run an entire test suite at once, so you would run QuickTests and that way none of the slow tests will run.
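A sketch of what the two suites could look like with JUnit 4 (each class in its own file; the member test classes are just examples):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// QuickTests.java - the fast tests; run this one from Eclipse day to day
@RunWith(Suite.class)
@Suite.SuiteClasses({ ParserTest.class, ValidatorTest.class })
public class QuickTests {
}

// AllTests.java - everything: the quick suite plus the slow tests
@RunWith(Suite.class)
@Suite.SuiteClasses({ QuickTests.class, LongRunningReportTest.class })
public class AllTests {
}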
Or see this question on how to apply a timeout to a suite, which will also apply to nested suites and classes in the suite. Combined with my suggestion above, that can achieve something similar to what you want.
I know this doesn't really answer your question, but the simple answer is: don't!
Setting timeouts conditionally is wrong because then you would have unit tests on your machine that are always going to fail. The point of unit tests is to be able to quickly see that you haven't broken anything. Having to check through the failed-test list to make sure it's just the long-running tests is just going to let some bugs slip through the cracks.
As some of the commenters mentioned, you should split the tests into unit tests that run quickly and the slower-running integration tests, i.e. have a source folder called src/main/java for your code, src/test/java for unit tests, and src/integration-test/java for the longer-running tests.
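One common way to wire that up in Maven is something like the following sketch (not a drop-in config): register the extra test source folder with the build-helper-maven-plugin and let the Failsafe plugin run the slow tests in the integration-test phase.

<!-- add src/integration-test/java as an extra test source root -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-integration-test-sources</id>
      <phase>generate-test-sources</phase>
      <goals>
        <goal>add-test-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/integration-test/java</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>

<!-- run the *IT test classes during integration-test/verify instead of with the unit tests -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>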