How to enable a global timeout for JUnit test case runs?

This question suggests using the timeout parameter of the @Test annotation to have JUnit forcefully stop tests after that timeout period.
But we have around 5000 unit tests so far, and we want to establish a policy that asks developers never to release tests that need more than 10 seconds to complete. The policy would probably say "aim for < 10 seconds", but then we would like to ensure that any test is stopped after, say, 30 seconds. (The numbers are just examples; the idea is to define something that is "good enough" for most use cases, but that also makes sure things don't run "forever".)
Now I am wondering if there is a way to enable such behavior without going into each test case and adding that annotation parameter.
The existing question doesn't help either: I am looking for one change to enable this, not a change per test class. One central, global switch, not one per file or method.

Although JUnit Jupiter (i.e., the programming and extension model introduced in JUnit 5) does not yet have built-in support for global timeouts, you can still implement global timeout support on your own.
The only catch is that a timeout extension cannot currently abort test execution preemptively. In other words, a timeout extension in JUnit Jupiter can currently only time the execution of tests and then throw an exception if the execution took too long (i.e., after waiting for the test to end, which may potentially never happen if the test hangs).
In any case, if you want to implement a non-preemptive global timeout extension for use with JUnit Jupiter, here's what you need to do.
Look at the TimingExtension example in the JUnit 5 User Guide for inspiration. You'll need code similar to that, but you'll want to throw an exception if the duration exceeds a configured timeout (a minimal sketch follows below). How you configure your global timeout is up to you: hard-code it, look up the value from a JVM system property, look it up from a custom annotation, etc.
Register your global timeout extension using Java's ServiceLoader mechanism. See Automatic Extension Registration for details.
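For illustration, here is a minimal sketch of such a non-preemptive extension, modeled on the TimingExtension example from the User Guide. The test.timeout.millis system property is an assumption made for the example, not an official JUnit parameter:

import java.time.Duration;
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;
import org.junit.jupiter.api.extension.ExtensionContext.Store;

public class GlobalTimeoutExtension
        implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final Namespace NAMESPACE = Namespace.create(GlobalTimeoutExtension.class);

    // Hypothetical system property; defaults to 30 seconds if not set.
    private static final long TIMEOUT_MILLIS =
            Long.getLong("test.timeout.millis", Duration.ofSeconds(30).toMillis());

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        getStore(context).put("start", System.currentTimeMillis());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        long duration = System.currentTimeMillis() - getStore(context).remove("start", long.class);
        if (duration > TIMEOUT_MILLIS) {
            // Non-preemptive: the test has already finished by the time we check.
            throw new AssertionError(String.format("Test [%s] took %d ms (limit: %d ms)",
                    context.getRequiredTestMethod().getName(), duration, TIMEOUT_MILLIS));
        }
    }

    private Store getStore(ExtensionContext context) {
        return context.getStore(NAMESPACE);
    }
}

For the ServiceLoader registration, list the class in META-INF/services/org.junit.jupiter.api.extension.Extension and set the configuration parameter junit.jupiter.extensions.autodetection.enabled=true.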
Happy Testing!

Check out my JUnit 4 extension library (https://github.com/Nordstrom/JUnit-Foundation). Among the features provided by this library is the ability to define a global timeout value, which will be automatically applied to each test method that doesn't already define a longer timeout interval.
This library uses the Byte Buddy bytecode generation library to install event hooks at strategic points in the test execution flow of JUnit 4. The global timeout is applied when JUnit has created a test class instance to run an "atomic" test.
To apply the global timeout, the library replaces the original @Test annotation with an object that implements the @Test interface. This approach utilizes all of JUnit's native timeout functionality, which provides pre-emptive termination of tests that run too long. The use of native timeout functionality eliminates the need for invasive implementation or special-case handling, and this functionality is activated without touching a single source file.
All of the updates needed to install and activate global timeout support are in the project file (POM / build.gradle) and an optional properties file. The timeout interval can be overridden via a system property, which enables adjustments to be made from the command line or programmatically. For scenarios where timeout failures are caused by transient conditions, you may want to pair the global timeout feature with the automatic retry feature.

What you're probably looking for is not implemented: https://github.com/junit-team/junit4/issues/140
However, you can achieve the same result with simple inheritance.
Define an abstract parent class, like BaseIntegrationTest, with the following @Rule field:
import org.junit.Rule;
import org.junit.rules.Timeout;

public abstract class BaseIntegrationTest {

    private static final int TEST_GLOBAL_TIMEOUT_VALUE = 10;

    @Rule  // JUnit 4 requires @Rule fields to be public
    public Timeout globalTimeout = Timeout.seconds(TEST_GLOBAL_TIMEOUT_VALUE);
}
Then make it a parent for every test class within the scope. For example:
public class BaseEntityTest extends BaseIntegrationTest {

    @Before
    public void init() {
        // init
    }

    @Test
    public void twoPlusTwoTest() throws Exception {
        assert 2 + 2 == 4;
    }
}
That's it.

Currently you may not be able to, because JUnit 5 removed Rules and replaced them with Extensions.
The extension approach above also cannot stop a hanging test: it relies on AfterTestExecutionCallback, which is only invoked after the test method has finished, so the timeout check never fires for a test that hangs.

Related

How to exclude execution duration from methods annotated with JUnit 4's Before and After annotations

I am running some simple performance tests with JUnit 4 and using Jenkins' perfReport so that I can generate a performance report.
While running these tests, I noticed that the test method execution time includes the execution time of methods annotated with JUnit 4's @Before and @After.
I came across a similar post: Exclude @Before method duration from JUnit test time. However, I require my output in a JUnit-style report, since Jenkins' perfReport parses the JUnit-style format only.
As such, is there a way to exclude the execution time of these annotated methods?
I solved it by performing the following:
Extending BlockJUnit4ClassRunner
Overriding the runChild() method, as this is where test notifiers are received
Writing a custom runLeaf() method, as this is where the test notifiers are fired to notify that the test has started or stopped.
Overriding the methodInvoker() method, as this is where the test method is invoked
Creating a new Statement class that functionally performs the same set of actions as the invokeMethod() statement created in methodInvoker(). This class, however, receives the test notifier, allowing you to control how and when the test is considered to have started.
One issue with this approach is that you will need to extract the code sections that run JUnit rules in order to preserve rule execution, as those methods are strangely private.
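To make the shape of that solution concrete, here is a condensed, hypothetical sketch. It omits the rule-handling code mentioned above, and it only moves the start notification, so @After time is still counted; excluding that as well would require firing the finished event from a statement in the same way:

import org.junit.internal.AssumptionViolatedException;
import org.junit.internal.runners.model.EachTestNotifier;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;

public class ExcludeSetupTimeRunner extends BlockJUnit4ClassRunner {

    // Notifier for the test currently being run; set in runChild, used in methodInvoker.
    private EachTestNotifier currentNotifier;

    public ExcludeSetupTimeRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
    }

    @Override
    protected void runChild(FrameworkMethod method, RunNotifier notifier) {
        Description description = describeChild(method);
        if (isIgnored(method)) {
            notifier.fireTestIgnored(description);
            return;
        }
        currentNotifier = new EachTestNotifier(notifier, description);
        try {
            // fireTestStarted is deliberately NOT called here; the statement
            // built by methodInvoker fires it after the @Before methods have run.
            methodBlock(method).evaluate();
        } catch (AssumptionViolatedException e) {
            currentNotifier.addFailedAssumption(e);
        } catch (Throwable e) {
            currentNotifier.addFailure(e);
        } finally {
            currentNotifier.fireTestFinished();
        }
    }

    @Override
    protected Statement methodInvoker(FrameworkMethod method, Object test) {
        EachTestNotifier notifier = currentNotifier;
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                notifier.fireTestStarted();  // timing starts here, after setup
                method.invokeExplosively(test);
            }
        };
    }
}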

Is it safe to call TimeZone.setDefault in a @Before method in JUnit?

Here is a comment about setting the default timezone for test code in a @Before method of a JUnit test. But TimeZone.setDefault is a static method. Can it affect other tests that run after a test whose @Before method called TimeZone.setDefault successfully?
There are many things to check here; it depends on how you run the tests.
The following factors may come into considerations:
Since you've tagged the question with "maven": Maven's Surefire/Failsafe plugins, which are responsible for running the tests, can run multiple tests simultaneously in one or many JVMs; it all depends on their configuration.
So tests may start failing sporadically during the build even if they pass locally.
@Before and @After are called before and after each test in the test case, respectively. @After is called even if the test fails. So remembering the default timezone and setting it back after the test should be OK, but not resetting the state in an @After block may lead to incorrect behavior in subsequent tests.
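A minimal sketch of that remember-and-restore pattern (the class name and the UTC choice are illustrative):

import java.util.TimeZone;
import org.junit.After;
import org.junit.Before;

public class TimeZoneDependentTest {

    private TimeZone originalTimeZone;

    @Before
    public void setUp() {
        // Remember the JVM-wide default so it can be restored afterwards.
        originalTimeZone = TimeZone.getDefault();
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    }

    @After
    public void tearDown() {
        // Restore even if the test failed; @After always runs.
        TimeZone.setDefault(originalTimeZone);
    }

    // ... tests that rely on the UTC default ...
}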
The better approach IMHO is using the java.time.Clock abstraction. See this question for examples.
Another possible option is refactoring the code to use some "factory" for providing the current date/time. Then in a unit test you could instantiate this factory and "inject" it as a dependency into the code under test. A kind of hand-crafted Clock; see the sketch below.
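For illustration, a sketch of that approach using java.time.Clock directly; InvoiceDater is a hypothetical class under test:

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

class InvoiceDater {
    private final Clock clock;

    InvoiceDater(Clock clock) {  // production code passes Clock.systemDefaultZone()
        this.clock = clock;
    }

    ZonedDateTime issuedAt() {
        return ZonedDateTime.now(clock);
    }
}

// In a test, inject a fixed clock; no JVM-wide state is touched:
// Clock fixed = Clock.fixed(Instant.parse("2020-01-01T00:00:00Z"), ZoneId.of("UTC"));
// new InvoiceDater(fixed).issuedAt();  // deterministic, independent of the default timezone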
It will affect the other tests (as you assumed), as it won't be reset after running a single test.
Either reset it to "normal" in an @After method, or change the code to take/inject the timestamp for "now" and make the code do its calculation from there. In my experience this will give you a lot more flexibility.

JUnit 5: Difference between BeforeEachCallback and BeforeTestExecutionCallback

I can't find any resources explaining what exactly the difference between BeforeEachCallback and BeforeTestExecutionCallback in the JUnit Jupiter extension model is. (I am of course also interested in the "After" variants.)
To my understanding, the following timeline describes what is happening:
BeforeEach - BeforeTestExecution - Actual execution of the test - AfterTestExecution - AfterEach
I suppose that BeforeTestExecution exists so you can execute code after all the BeforeEach callbacks have run but before the actual test execution. However, this is still unclear to me, because everyone could just use BeforeTestExecution instead of BeforeEach, and then the relative order of execution of these callbacks would be undefined again.
So what is BeforeTestExecution exactly for and what happens if you use this callback in multiple extensions at the same time?
The Javadocs (here and here) don't make a clear distinction between them, but the JUnit 5 docs include the following:
BeforeTestExecutionCallback and AfterTestExecutionCallback define the APIs for Extensions that wish to add behavior that will be executed immediately before and immediately after a test method is executed, respectively. As such, these callbacks are well suited for timing, tracing, and similar use cases. If you need to implement callbacks that are invoked around @BeforeEach and @AfterEach methods, implement BeforeEachCallback and AfterEachCallback instead.
So, if you want to wrap just the test execution without any of the setup then use BeforeTestExecutionCallback. The docs go on to suggest timing and logging test execution as possible use cases for BeforeTestExecutionCallback.
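To make the ordering concrete, here is a minimal sketch of an extension implementing all four callbacks; registering it on a test class with @ExtendWith prints the sequence around each test:

import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class LifecycleLoggingExtension implements
        BeforeEachCallback, BeforeTestExecutionCallback,
        AfterTestExecutionCallback, AfterEachCallback {

    @Override
    public void beforeEach(ExtensionContext ctx) {
        System.out.println("beforeEach (runs before the @BeforeEach methods)");
    }

    @Override
    public void beforeTestExecution(ExtensionContext ctx) {
        System.out.println("beforeTestExecution (after all @BeforeEach, right before the test)");
    }

    @Override
    public void afterTestExecution(ExtensionContext ctx) {
        System.out.println("afterTestExecution (right after the test, before @AfterEach)");
    }

    @Override
    public void afterEach(ExtensionContext ctx) {
        System.out.println("afterEach (runs after the @AfterEach methods)");
    }
}

As for multiple extensions implementing the same callback: they are wrapped, so the "before" callbacks run in registration order and the corresponding "after" callbacks in reverse registration order.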

How to write a unit test for a method that does a retry with back-off? (Using Failsafe in Java)

This is the method that I would like to test:
void someMethodThatRetries() {
    Failsafe.with(retryPolicy).get(() -> callServiceX());
}
The retry policy looks like this:
this.retryPolicy = new RetryPolicy()
        .retryIf(responseFromServiceXIsInvalid())
        .withBackoff(delay, MAX_DELAY, TimeUnit.MILLISECONDS);
This method calls a service X and retries the call to service X on a certain condition (the response from X does not have certain values). Each retry is done with a delay and back-off.
The test looks like this:
@Test
public void retriesAtMostThreeTimesIfResponseIsInvalid() throws Exception {
    // Code that verifies that service X got called 3 times. The service is
    // called through a stub, and I am verifying on that stub.
}
I am writing a test that verifies that service X gets called 3 times (the maximum number of allowed retries is 3) when the condition is met.
Because of the delay and back-off, the unit test takes too much time. How should we write the test in this case?
One solution I thought of is to have a separate test on the RetryPolicy verifying that it retries 3 times, and a separate test for the fact that it retries when the condition is met.
How should I do it?
I'd say that you should aim at unit-testing the functions callServiceX and responseFromServiceXIsInvalid, but apart from that you are in the realm of integration testing and subsystem testing (a.k.a. component testing). Everything of an algorithmic nature here is hidden behind the Failsafe and RetryPolicy classes and methods; your code is just calling them.
Therefore, many of the bugs that your code might contain lie in the interaction with, and proper use of, these external classes. For example, you might have messed up the order of the arguments delay and MAX_DELAY; you would find this only with integration testing.
There are also potential bugs at the unit-testing level, for example the value of delay might not match the specified time unit. The hassle of checking this with unit tests in these circumstances would be too big in my eyes. Check this in a review, or, again, use subsystem testing to see whether the durations are as you expect.
A word of warning: when doing integration testing and subsystem testing, be sure to keep the focus on the bugs you want to find. This will help you avoid effectively testing the Failsafe and RetryPolicy classes themselves, which have hopefully been tested already by the library developers.
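That said, if you do want a fast unit-level check of the retry count, as the question itself suggests, one option (an assumption on my part, not part of the answer above) is to make the RetryPolicy injectable and give the test a policy without back-off; the names below are hypothetical:

import net.jodah.failsafe.Failsafe;
import net.jodah.failsafe.RetryPolicy;

// Hypothetical: the policy is injected, so a test can supply a fast variant.
class ServiceXCaller {
    private final RetryPolicy retryPolicy;

    ServiceXCaller(RetryPolicy retryPolicy) {
        this.retryPolicy = retryPolicy;
    }

    String someMethodThatRetries() {
        return Failsafe.with(retryPolicy).get(() -> callServiceX());
    }

    String callServiceX() {
        return "...";  // stubbed or mocked in the test
    }
}

// In the test: the same retry condition, but no withBackoff(), so no waiting.
// RetryPolicy fastPolicy = new RetryPolicy()
//         .retryIf(response -> response == null)  // stand-in for the real check
//         .withMaxRetries(3);
// new ServiceXCaller(fastPolicy).someMethodThatRetries();

This keeps the "retries 3 times when the condition is met" test fast, while leaving the back-off values themselves to an integration or subsystem test, as recommended above.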

Cleanup after each test method in testng framework

I have 100 test methods. After each test, I need to perform some actions (data cleanup). Each of these 100 tests has different actions. These 100 tests are not in one package or class; they are distributed.
How can I achieve this?
Right now, if a test passes, the cleanup happens, since it is part of the test. However, if the test fails, the cleanup doesn't happen. How can I make this work?
Any pointers would help.
If the tests do not have any common cleanup, you can ensure the test gets cleaned up from within the test method using a try/finally block, something like:
try {
    // do test
}
finally {
    // do cleanup
}
If there is any common cleanup between the test methods, you could use @AfterMethod to do the cleanup.
In your case, it doesn't sound like there is much common cleanup, so the first approach may work better for you. It might also be worth considering whether you need 100 different cleanup methods or whether there can be some common setup/cleanup.
@AfterMethod would mean that every class needs this method, so you would need to go and edit each class/method. The same goes for @AfterGroups.
What I would suggest is implementing the IInvokedMethodListener interface. This gives you beforeInvocation and afterInvocation methods. In the afterInvocation method, implement your cleanup code, as in the sketch below.
Create a suite file with all of your tests which need this cleanup and specify this listener.
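A minimal sketch of such a listener; the cleanup dispatch is hypothetical and depends on how you associate cleanup actions with your tests:

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class CleanupListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // nothing to do before the test
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        // runs after every invoked method, whether the test passed or failed
        if (method.isTestMethod()) {
            cleanUpFor(testResult.getMethod().getMethodName());
        }
    }

    // Hypothetical dispatch: look up and run the cleanup registered for this test.
    private void cleanUpFor(String testMethodName) {
        // ...
    }
}

The listener can also be attached with the @Listeners annotation, but registering it in the suite file keeps the change in one place.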
Hope it helps.
It sounds like you may already be using @AfterMethod to clean up after the tests. To make @AfterMethod run even after a failure, you need to use:
@AfterMethod(alwaysRun = true)
You can use groups and run an @AfterGroups method somewhere. There's a @BeforeGroups as well. Setting it up with build tooling is a bit tedious, and there are some interactions with IDEs as well. There are @BeforeSuite and @AfterSuite annotations too, I believe.
An alternative could be using Spring and reusing the same Spring context in all your tests (the context gets cached and reused that way). You can then do some things when the context is destroyed after your tests.
