Cleanup after each test method in testng framework - java

I have 100 test methods. After each test, I need to perform some actions (data cleanup). Each of these 100 tests has different cleanup actions. These 100 tests are not in one package or class; they are distributed.
How can I achieve this?
Right now, if a test passes, the cleanup happens, since it is part of the test. However, if the test fails, the cleanup doesn't happen. How can I make this work?
Any pointers would help.

If the tests do not have any common cleanup, you can ensure the test gets cleaned up from within the test method using a try/finally block, something like:
try {
    // run the actual test logic
} finally {
    // per-test data cleanup; runs whether the test passed or failed
}
If there is any common cleanup between the test methods, you could use @AfterMethod to do the cleanup.
In your case, it doesn't sound like there is much common cleanup, so the first approach may work better for you. It might also be worth considering whether you really need 100 different cleanup methods or whether some setup/cleanup can be shared.

@AfterMethod would mean that every class needs such a method, so you would have to go in and edit each class/method. The same goes for @AfterGroups.
What I would suggest is to implement the IInvokedMethodListener. This would give you beforeInvocation and afterInvocation methods. In the afterInvocation method, implement your cleanup code.
Create a suite file with all of your tests which need this cleanup and specify this listener.
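For example, a minimal sketch of such a listener (the class name and the per-test dispatch are illustrative, not part of your code base):

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class CleanupListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // nothing to do before the test
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        // runs after every invoked method, whether it passed or failed
        if (method.isTestMethod()) {
            cleanUpFor(testResult.getTestClass().getRealClass(),
                       method.getTestMethod().getMethodName());
        }
    }

    private void cleanUpFor(Class<?> testClass, String methodName) {
        // hypothetical dispatch to the cleanup that matches this particular test
    }
}

The listener is then registered under the <listeners> element of the suite's testng.xml (or via the -listener command line option).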
Hope it helps.

It sounds like you may already be using @AfterMethod to clean up after the tests. To make @AfterMethod run after a failure as well, you need to use:
@AfterMethod(alwaysRun = true)
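For example (a sketch):

@AfterMethod(alwaysRun = true)
public void cleanUp() {
    // data cleanup; with alwaysRun=true this runs even when the test method failed
}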

You can use groups and run an @AfterGroups method somewhere; there is a @BeforeGroups as well. Setting it up with build tooling is a bit tedious, and there are some interactions with IDEs too. There are also @BeforeSuite and @AfterSuite, I believe.
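For example, a sketch of group-level cleanup (the group name is illustrative):

@AfterGroups(groups = "checkout", alwaysRun = true)
public void cleanUpCheckoutGroup() {
    // cleanup shared by all tests that belong to the "checkout" group
}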
An alternative could be using Spring and sharing the same Spring context across all your tests (the context gets reused that way). You can then perform cleanup when the context is destroyed after your tests.

Related

How to exclude execution duration from methods annotated with JUnit 4's Before and After annotations

I am running some simple performance tests with JUnit 4 and using Jenkins' perfReport so that I can generate a performance report.
While running these tests, I noticed that the reported test method execution time includes the execution time of methods annotated with JUnit 4's @Before and @After.
I came across a similar post, Exclude @Before method duration from JUnit test time, but I need my output in JUnit-style report format, since Jenkins' perfReport parses only that format.
As such, is there a way to exclude the execution time of these annotated methods?
I solved it by performing the following:
Extending BlockJUnit4ClassRunner
Overriding the runChild() method, as this is where test notifiers are received
Writing a custom runLeaf() method, as this is where the test notifiers are fired to notify the test has started or stopped.
Overriding the methodInvoker() method, as this is where the test method is invoked
Creating a new Statement class that functionally performs the same actions as the InvokeMethod statement object created in methodInvoker(). This class, however, receives the test notifier, which lets you control how and when the test is considered to have started.
One issue with the above approach is that you need to copy out the code sections that run JUnit rules in order to preserve rule execution, as these methods are strangely private.
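The core of the approach is the custom Statement; a minimal sketch follows (names are illustrative, and the surrounding custom runLeaf/runChild replacement still has to avoid firing the start/finish notifications a second time):

import org.junit.internal.AssumptionViolatedException;
import org.junit.internal.runners.model.EachTestNotifier;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

// Fires the start/finish notifications around the @Test method only, so the
// duration JUnit reports excludes @Before and @After execution time.
class NotifyingInvokeMethod extends Statement {

    private final FrameworkMethod testMethod;
    private final Object target;
    private final EachTestNotifier notifier;

    NotifyingInvokeMethod(FrameworkMethod testMethod, Object target, EachTestNotifier notifier) {
        this.testMethod = testMethod;
        this.target = target;
        this.notifier = notifier;
    }

    @Override
    public void evaluate() {
        notifier.fireTestStarted();                  // reported duration starts here, after @Before has run
        try {
            testMethod.invokeExplosively(target);    // the actual @Test method
        } catch (AssumptionViolatedException e) {
            notifier.addFailedAssumption(e);
        } catch (Throwable t) {
            notifier.addFailure(t);
        } finally {
            notifier.fireTestFinished();             // reported duration ends here, before @After runs
        }
    }
}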

Is it safe to call TimeZone.setDefault in a @Before method in JUnit?

Here is a comment about setting the default timezone for test code in a @Before method of a JUnit test. But TimeZone.setDefault is a static method. Can it affect other tests that run after a test whose @Before successfully called TimeZone.setDefault?
There are many things to check here; it depends on how you run the tests.
The following factors may come into considerations:
Since you've tagged the question with "maven": Maven's Surefire/Failsafe plugins, which are responsible for running the tests, can run multiple tests simultaneously in one or many JVMs; it all depends on their configuration.
So tests may start failing sporadically during the build even if they pass locally.
@Before and @After are called before and after each test in the test case, respectively. @After is called even if the test fails. So memorizing the default timezone and setting it back after the test should be fine, but failing to reset that state in an @After block can lead to a wrong default timezone in subsequent tests.
The better approach IMHO is using the java.time.Clock abstraction. See this question for examples.
Another possible option is refactoring the code to use some "factory" for providing the current date/time. Then in the unit test you can instantiate this factory and inject it as a dependency into the code under test: a kind of hand-crafted Clock.
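A minimal sketch of the Clock-based idea (class and method names are illustrative): the code under test asks an injected Clock for "now", and the test supplies a fixed Clock in a known time zone instead of touching the global default.

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

class InvoiceDates {                       // hypothetical code under test

    private final Clock clock;

    InvoiceDates(Clock clock) {
        this.clock = clock;
    }

    ZonedDateTime dueDate() {
        return ZonedDateTime.now(clock).plusDays(30);
    }
}

// In a test, no global state is modified:
// Clock fixed = Clock.fixed(Instant.parse("2020-01-01T00:00:00Z"), ZoneId.of("UTC"));
// assertEquals(31, new InvoiceDates(fixed).dueDate().getDayOfMonth());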
It will affect the other tests (as you assumed), since the default won't be reset after running a single test.
Either reset it to "normal" in an @After method, or change the code to take/inject the timestamp for "now" and have the code do its calculations from there. In my experience this gives you a lot more flexibility.
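A minimal save-and-restore sketch (assuming JUnit 4; the chosen zone is illustrative):

import java.util.TimeZone;
import org.junit.After;
import org.junit.Before;

public class TimeZoneDependentTest {

    private TimeZone originalDefault;

    @Before
    public void overrideDefaultTimeZone() {
        originalDefault = TimeZone.getDefault();
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    }

    @After
    public void restoreDefaultTimeZone() {
        // runs even when the test fails, so later tests see the original default again
        TimeZone.setDefault(originalDefault);
    }
}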

JUnit 5: Difference between BeforeEachCallback and BeforeTestExecutionCallback

I can't find any resources explaining what exactly the difference between BeforeEachCallback and BeforeTestExecutionCallback is in the JUnit Jupiter extension model. (I am of course also interested in the "After" variants.)
To my understanding, the following timeline describes what is happening:
BeforeEach - BeforeTestExecution - Actual execution of the test - AfterTestExecution - AfterEach
I suppose that BeforeTestExecution exists so you can execute code after all the BeforeEach callbacks have run but before the actual test execution. However, this is still unclear to me, because then everyone could simply use BeforeTestExecution instead of BeforeEach, and the relative execution order of those callbacks would be undefined again.
So what is BeforeTestExecution exactly for and what happens if you use this callback in multiple extensions at the same time?
The Javadocs (here and here) don't make a clear distinction between them but the JUnit5 docs include the following:
BeforeTestExecutionCallback and AfterTestExecutionCallback define the APIs for Extensions that wish to add behavior that will be executed immediately before and immediately after a test method is executed, respectively. As such, these callbacks are well suited for timing, tracing, and similar use cases. If you need to implement callbacks that are invoked around @BeforeEach and @AfterEach methods, implement BeforeEachCallback and AfterEachCallback instead.
So, if you want to wrap just the test execution without any of the setup then use BeforeTestExecutionCallback. The docs go on to suggest timing and logging test execution as possible use cases for BeforeTestExecutionCallback.
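For illustration, a timing extension along the lines of the user guide's example (a sketch, assuming JUnit Jupiter 5.x), which measures only the test method because it uses the "around test execution" callbacks:

import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class TimingExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final ExtensionContext.Namespace NAMESPACE =
            ExtensionContext.Namespace.create(TimingExtension.class);

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        // @BeforeEach methods (and BeforeEachCallback extensions) have already run at this point
        context.getStore(NAMESPACE).put(context.getRequiredTestMethod(), System.currentTimeMillis());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        long start = context.getStore(NAMESPACE).remove(context.getRequiredTestMethod(), long.class);
        System.out.printf("%s took %d ms%n",
                context.getRequiredTestMethod().getName(), System.currentTimeMillis() - start);
    }
}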

How to enable a global timeout for JUnit testcase runs?

This question suggests using the timeout parameter of the @Test annotation to have JUnit forcefully stop tests after that timeout period.
But we have about 5000 unit tests so far, and we want to establish a policy that asks developers to never release tests that need more than 10 seconds to complete. The policy would probably say "aim for < 10 seconds", but then we would like to ensure that any test is stopped after, say, 30 seconds. (The numbers are just examples; the idea is to define something that is "good enough" for most use cases, but that also makes sure things don't run "forever".)
Now I am wondering if there is a way to enable such behavior without going into each test case and adding that annotation parameter.
The existing question doesn't help either: I am looking for one change to enable this, not one change per test class. One central, global switch, not one per file or method.
Although JUnit Jupiter (i.e., the programming and extension model introduced in JUnit 5) does not yet have built-in support for global timeouts, you can still implement global timeout support on your own.
The only catch is that a timeout extension cannot currently abort test execution preemptively. In other words, a timeout extension in JUnit Jupiter can currently only time the execution of tests and then throw an exception if the execution took too long (i.e., after waiting for the test to end, which may potentially never happen if the test hangs).
In any case, if you want to implement a non-preemptive global timeout extension for use with JUnit Jupiter, here's what you need to do.
Look at the TimingExtension example in the JUnit 5 User Guide for inspiration. You'll need code similar to that, but you'll want to throw an exception if the duration exceeds a configured timeout. How you configure your global timeout is up to you: hard-code it, look up the value from a JVM system property, look up the value from a custom annotation, etc. (see the sketch after these steps).
Register your global timeout extension using Java's ServiceLoader mechanism. See Automatic Extension Registration for details.
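A non-preemptive sketch of such an extension (assumptions: JUnit Jupiter 5.x, timeout configured via a -Dtest.timeout.seconds system property; all names are illustrative):

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.TimeoutException;
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class GlobalTimeoutExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final Duration TIMEOUT =
            Duration.ofSeconds(Long.getLong("test.timeout.seconds", 30L));
    private static final ExtensionContext.Namespace NAMESPACE =
            ExtensionContext.Namespace.create(GlobalTimeoutExtension.class);

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        context.getStore(NAMESPACE).put("start", Instant.now());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) throws TimeoutException {
        Instant start = context.getStore(NAMESPACE).remove("start", Instant.class);
        Duration elapsed = Duration.between(start, Instant.now());
        if (elapsed.compareTo(TIMEOUT) > 0) {
            // only reported after the test finished; it cannot abort a test that hangs
            throw new TimeoutException("Test exceeded global timeout of " + TIMEOUT + ": " + elapsed);
        }
    }
}

Registered via the ServiceLoader mechanism (a META-INF/services/org.junit.jupiter.api.extension.Extension file plus junit.jupiter.extensions.autodetection.enabled=true), this applies to every test without touching the test classes.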
Happy Testing!
Check out my JUnit 4 extension library (https://github.com/Nordstrom/JUnit-Foundation). Among the features provided by this library is the ability to define a global timeout value, which will be automatically applied to each test method that doesn't already define a longer timeout interval.
This library uses the Byte Buddy byte code generation library to install event hooks at strategic points in the test execution flow of JUnit 4. The global timeout is applied when JUnit has created a test class instance to run an "atomic" test.
To apply the global timeout, the library replaces the original @Test annotation with an object that implements the @Test annotation interface. This approach uses JUnit's native timeout functionality, which provides pre-emptive termination of tests that run too long. Using the native timeout functionality eliminates the need for invasive implementation or special-case handling, and it is activated without touching a single source file.
All of the updates needed to install and activate global timeout support are in the project file (POM / build.gradle) and optional properties file. The timeout interval can be overridden via System property, which enables adjustments to be made from the command line or programmatically. For scenarios where timeout failures are caused by transient conditions, you may want to pair the global timeout feature with the automatic retry feature.
What you're probably looking for is not implemented: https://github.com/junit-team/junit4/issues/140
You can, however, achieve much the same result with simple inheritance.
Define an abstract parent class, like BaseIntegrationTest, with the following @Rule field:
import org.junit.Rule;
import org.junit.rules.Timeout;

public abstract class BaseIntegrationTest {
    private static final int TEST_GLOBAL_TIMEOUT_VALUE = 10;

    // JUnit 4 requires the @Rule field to be public
    @Rule
    public Timeout globalTimeout = Timeout.seconds(TEST_GLOBAL_TIMEOUT_VALUE);
}
Then make it a parent for every test class within the scope. For example:
public class BaseEntityTest extends BaseIntegrationTest {

    @Before
    public void init() {
        // init
    }

    @Test
    public void twoPlusTwoTest() throws Exception {
        assert 2 + 2 == 4;
    }
}
That's it.
Currently this approach is not available in JUnit 5, because JUnit 5 removed rules and replaced them with extensions.
Also note that a timeout implemented via AfterTestExecutionCallback (as in the extension-based suggestion above) is only invoked after the test method has finished, so it cannot abort a hanging test and the timeout is of limited use.

Unit test consumers in Vertx

I have a snippet of code that I want to unit test.
this.vertx.eventBus().consumer(VERTICLE_ID).toObservable()
.subscribe(msg -> doSomethingCool());
and my consumer method:
private void doSomethingCool(){
// Some cool stuff.
}
Now I want to unit test doSomethingCool() without using PowerMockito (I want to have code coverage), and I don't want to make my method public. How can I do that? Is there any hook in Vert.x for this?
It is actually hard to tell how you should write your test when nothing is known about the purpose of doSomethingCool:
Does it return a value? (i.e. via msg.reply())
Does it modify state of the Verticle? or global state?
Does it make a downstream call?
Does your method invoke a handler once it's done with whatever it does?
A unit test should verify an observable result. So write your unit test to verify one of these outcomes.
In case a handler is invoked, you could work with the vertx-unit TestContext and count down an Async, as in the sketch below.
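A hedged sketch of that (assuming Vert.x 3.x with vertx-unit; CoolVerticle is a hypothetical verticle that registers the consumer from the question, and it is assumed doSomethingCool() replies to the message or produces some other effect you can assert on):

import io.vertx.core.Vertx;
import io.vertx.ext.unit.Async;
import io.vertx.ext.unit.TestContext;
import io.vertx.ext.unit.junit.VertxUnitRunner;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(VertxUnitRunner.class)
public class CoolConsumerTest {

    private Vertx vertx;

    @Before
    public void setUp(TestContext context) {
        vertx = Vertx.vertx();
        vertx.deployVerticle(new CoolVerticle(), context.asyncAssertSuccess());
    }

    @After
    public void tearDown(TestContext context) {
        vertx.close(context.asyncAssertSuccess());
    }

    @Test
    public void consumerDoesSomethingCool(TestContext context) {
        Async async = context.async();
        // send a message to the consumer's address and assert on the observable outcome
        vertx.eventBus().send("VERTICLE_ID", "ping", reply -> {
            context.assertTrue(reply.succeeded());
            async.complete();
        });
    }
}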
... and stay away from PowerMockito.
