I know that needing to mock a static method usually indicates a design issue, but in my case it does not seem to be one.
BundleContext bundleContext = FrameworkUtil.getBundle(ConfigService.class).getBundleContext();
Here FrameworkUtil is a class from an API jar; using it in code can't be a design issue.
My problem is that while running this test, the line
FrameworkUtil.getBundle(ConfigService.class);
returns null. So my question is: is there any way I can replace that null at runtime?
I am using the Mockito framework, and my project does not allow me to use PowerMock.
If I use
doReturn(bundle).when(FrameworkUtil.class)
the getBundle method is not visible, since it's a static method.
You are correct that this is not a design issue on your part. Without PowerMock, though, your options become a bit murkier.
I would suggest creating a non-static wrapper for the FrameworkUtil class that you can inject and mock.
Update: (David Wallace)
So you add a new class to your application, something like this:
public class UtilWrapper {
public Bundle getBundle(Class<?> theClass) {
return FrameworkUtil.getBundle(theClass);
}
}
This class is so simple that you don't need to unit test it. As a general principle, you should only EVER write unit tests for methods that have some kind of logic to them - branching, looping or exception handling. One-liners should NOT be unit tested.
Now, within your application code, add a field of type UtilWrapper, and a setter for it, to every class that currently calls FrameworkUtil.getBundle. Add this line to the constructor of each such class:
utilWrapper = new UtilWrapper();
And replace every call to FrameworkUtil.getBundle with utilWrapper.getBundle.
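For illustration, a class that calls FrameworkUtil.getBundle might end up looking like the sketch below. ConfigManager is a hypothetical name; only the wrapper field, the constructor assignment and the setter matter here.
public class ConfigManager {
    private UtilWrapper utilWrapper;

    public ConfigManager() {
        // production code keeps using the real wrapper by default
        utilWrapper = new UtilWrapper();
    }

    // setter used by tests to inject a mock
    public void setUtilWrapper(UtilWrapper utilWrapper) {
        this.utilWrapper = utilWrapper;
    }

    public void doSomethingWithBundle() {
        // was: FrameworkUtil.getBundle(ConfigService.class).getBundleContext();
        BundleContext bundleContext = utilWrapper.getBundle(ConfigService.class).getBundleContext();
        // ...
    }
}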
Now in your test, you make a mock UtilWrapper and stub it to return whatever Bundle you like.
when(mockUtilWrapper.getBundle(ConfigService.class)).thenReturn(someBundleYouMade);
and for the class that you're testing, call setUtilWrapper(mockUtilWrapper) or whatever. You don't need this last step if you're using @InjectMocks.
Now your test should all hang together, but using your mocked UtilWrapper instead of the one that relies on FrameworkUtil.
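A minimal test sketch under those assumptions (ConfigManager is the hypothetical class from above; the Bundle is simply another Mockito mock):
UtilWrapper mockUtilWrapper = Mockito.mock(UtilWrapper.class);
Bundle someBundleYouMade = Mockito.mock(Bundle.class);
Mockito.when(mockUtilWrapper.getBundle(ConfigService.class)).thenReturn(someBundleYouMade);

// inject the stubbed wrapper into the class under test
ConfigManager manager = new ConfigManager();
manager.setUtilWrapper(mockUtilWrapper);
// manager now sees someBundleYouMade instead of the real FrameworkUtil lookup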
The unit test:
package x;
import static org.junit.Assert.*;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mockito;
public class GunTest {
@Before
public void setUp() throws Exception {
}
@Test
public void testFireTrue() {
final Gun unit = Mockito.spy(new Gun());
Mockito.doReturn(5).when(unit).getCount();
assertTrue(unit.fire2());
}
@Test
public void testFireFalse() {
final Gun unit = Mockito.spy(new Gun());
Mockito.doReturn(15).when(unit).getCount();
assertFalse(unit.fire2());
}
}
The unit: fire calls the static method directly, while fire2 factors out the static call to a protected method that can be stubbed:
package x;
public class Gun {
public boolean fire() {
if (StaticClass.getCount() > 10) {
return false;
}
else {
return true;
}
}
public boolean fire2() {
if (getCount() > 10) {
return false;
}
else {
return true;
}
}
protected int getCount() {
return StaticClass.getCount();
}
}
I am writing a JUnit 5 extension, but I cannot find a way to obtain the test result.
The extension looks like this:
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.TestExtensionContext;
public class TestResultExtension implements AfterTestExecutionCallback {
@Override
public void afterTestExecution(TestExtensionContext context) throws Exception {
//How to get test result? SUCCESS/FAILED
}
}
Any hints how to obtain test result?
This works for me:
public class RunnerExtension implements AfterTestExecutionCallback {
@Override
public void afterTestExecution(ExtensionContext context) throws Exception {
Boolean testResult = context.getExecutionException().isPresent();
System.out.println(testResult); //false - SUCCESS, true - FAILED
}
}
@ExtendWith(RunnerExtension.class)
public abstract class Tests {
}
As other answers point out, JUnit communicates failed tests with exceptions, so an AfterTestExecutionCallback can be used to glean what happened. Note that this is error-prone, as an extension running later might still fail the test.
Another way to do that is to register a custom TestExecutionListener. Both of these approaches are a little roundabout, though. There is an issue that tracks a specific extension point for reacting to test results, which would likely be the most straightforward answer to your question. If you can provide a specific use case, it would be great if you could head over to #542 and leave a comment describing it.
You can use SummaryGeneratingListener from org.junit.platform.launcher.listeners.
It contains a MutableTestExecutionSummary field, which implements the TestExecutionSummary interface, so you can obtain info about containers, tests, timing, failures, etc.
You can create a custom listener, for example:
Create a class that extends SummaryGeneratingListener:
public class ResultAnalyzer extends SummaryGeneratingListener {
@Override
public void testPlanExecutionFinished(TestPlan testPlan) {
// This method is invoked after all tests in all containers are finished
super.testPlanExecutionFinished(testPlan);
analyzeResult();
}
private void analyzeResult() {
var summary = getSummary();
var failures = summary.getFailures();
//Do something
}
}
Register the listener by creating the file
src\main\resources\META-INF\services\org.junit.platform.launcher.TestExecutionListener
and specifying your implementation in it:
path.to.class.ResultAnalyzer
Enable auto-detection of extensions by setting the parameter:
-Djunit.jupiter.extensions.autodetection.enabled=true
And that's it!
Docs
https://junit.org/junit5/docs/5.0.0/api/org/junit/platform/launcher/listeners/SummaryGeneratingListener.html
https://junit.org/junit5/docs/5.0.0/api/org/junit/platform/launcher/listeners/TestExecutionSummary.html
https://junit.org/junit5/docs/current/user-guide/#extensions-registration-automatic
I have only this solution:
String testResult = context.getTestException().isPresent() ? "FAILED" : "OK";
It seems that it works well. But I am not sure if it will work correctly in all situations.
Failures in JUnit are propagated with exceptions. There are several exceptions which indicate various types of errors.
So an exception in TestExtensionContext#getTestException() indicates an error. The method can't manipulate actual test results, so depending on your use case you might want to implement TestExecutionExceptionHandler, which allows you to swallow exceptions, thus changing whether a test succeeded or not.
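For completeness, a minimal sketch of such a handler; the "known flaky" message check is only a made-up example condition:
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestExecutionExceptionHandler;

public class SwallowKnownFailures implements TestExecutionExceptionHandler {

    @Override
    public void handleTestExecutionException(ExtensionContext context, Throwable throwable) throws Throwable {
        // swallowing the exception makes the test pass
        if (throwable.getMessage() != null && throwable.getMessage().contains("known flaky")) {
            return;
        }
        // rethrowing keeps the failure
        throw throwable;
    }
}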
You're almost there.
To implement a test execution callback and get the test result for logging (or generating a report) you can do the following:
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
public class TestResultExtension implements AfterTestExecutionCallback
{
@Override
public void afterTestExecution(ExtensionContext context) throws Exception
{
// check the context for an exception
Boolean passed = context.getExecutionException().isEmpty();
// if there isn't, the test passed
String result = passed ? "PASSED" : "FAILED";
// now that you have the result, you can do whatever you want
System.out.println("Test Result: " + context.getDisplayName() + " " + result);
}
}
And then you just add the TestResultExtension using the @ExtendWith() annotation for your test cases:
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import static org.junit.jupiter.api.Assertions.assertTrue;
@ExtendWith(TestResultExtension.class)
public class SanityTest
{
@Test
public void testSanity()
{
assertTrue(true);
}
@Test
public void testInsanity()
{
assertTrue(false);
}
}
It's a good idea to extend a base test that includes the extension:
import org.junit.jupiter.api.extension.ExtendWith;
@ExtendWith(TestResultExtension.class)
public class BaseTest
{}
And then you don't need to include the annotation in every test:
public class SanityTest extends BaseTest
{ //... }
I am using EasyMock 3.2. In order to unit test a UI, I have to mock some dependencies. One of them is Page. The base class for UI tests looks like this:
abstract class AbstractUiTest {
@Before
public void setUpUiDependencies() {
Page page = createNiceMock(Page.class);
Ui.setCurrentPage(page);
}
}
Most of the time I don't use the page explicitly; it's just there so that no NullPointerException is thrown when e.g. Ui calls getPage().setTitle("sth") etc.
However, in a few tests I want to explicitly check whether something has happened with the page, e.g.:
public class SomeTest extends AbstractUiTest {
@Test
public void testNotification() {
// do something with UI that should cause notification
assertNotificationHasBeenShown();
}
private void assertNotificationHasBeenShown() {
Page page = Ui.getCurrentPage(); // this is my nice mock
// HERE: verify somehow, that page.showNotification() has been called
}
}
How can I implement the assertion method? I would really like to implement it without recording behavior on the page, replaying and verifying it. My problem is a bit more complicated, but you should get the point.
EDIT: I think that perhaps this is not really needed, since simply using replay and verify should check that the expected methods were actually called. But you said you want to do this without replaying and verifying. Can you explain why you have that requirement?
I think that you can use andAnswer and an IAnswer. You don't mention what the return value of page.showNotification() is. Assuming it returns a String, you could do this:
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertTrue;
import java.util.concurrent.atomic.AtomicBoolean;
import org.easymock.IAnswer;
import org.junit.Ignore;
import org.junit.Test;
public class SomeTest extends AbstractUiTest {
@Test
public void shouldCallShowNotification() {
final AtomicBoolean showNotificationCalled = new AtomicBoolean();
expect(page.showNotification()).andAnswer(new IAnswer<String>() {
@Override
public String answer() {
showNotificationCalled.set(true);
return "";
}
});
replay(page);
Ui.getCurrentPage();
verify(page);
assertTrue("showNotification not called", showNotificationCalled.get());
}
}
If showNotification returns void, I believe you would need to do this:
import static org.easymock.EasyMock.expectLastCall;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertTrue;
import java.util.concurrent.atomic.AtomicBoolean;
import org.easymock.IAnswer;
import org.junit.Ignore;
import org.junit.Test;
public class SomeTest extends AbstractUiTest {
@Test
public void shouldCallShowNotification() {
final AtomicBoolean showNotificationCalled = new AtomicBoolean();
page.showNotification();
expectLastCall().andAnswer(new IAnswer<Void>() {
@Override
public Void answer() {
showNotificationCalled.set(true);
return null;
}
});
replay(page);
Ui.getCurrentPage();
verify(page);
assertTrue("showNotification not called", showNotificationCalled.get());
}
}
Note: I've used an AtomicBoolean to record whether the method was called. You could also use a boolean array of a single element, or your own mutable object. I used AtomicBoolean not for its concurrency properties, but simply because it is a handy mutable boolean object that is already present in the Java standard libraries.
The other thing that I have done to verify that a method was being called is to not use a mock at all, but to create an instance of Page as an anonymous inner class and override the showNotification method, and record somewhere that the call occurred.
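A rough sketch of that approach, assuming Page can be subclassed and showNotification() overridden (the flag array is just a mutable holder the anonymous class can write to):
final boolean[] notificationShown = { false };

Page page = new Page() {
    @Override
    public void showNotification() {
        // record that the call happened instead of doing real work
        notificationShown[0] = true;
    }
};
Ui.setCurrentPage(page);

// ... exercise the UI ...
assertTrue("showNotification not called", notificationShown[0]);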
Use a nice mock in the tests where you don't care what happens to the page, and a normal mock in those tests where you want to test something explicit, using expect, verify, etc. I.e. have two variables in your setup method: nicePage (acts as a stub) and mockPage (acts as a mock).
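A sketch of what that setup might look like, assuming the same EasyMock static imports as in the question (the field names are illustrative):
public abstract class AbstractUiTest {
    protected Page nicePage; // stub: returns defaults and ignores calls
    protected Page mockPage; // strict mock: used with expect/replay/verify

    @Before
    public void setUpUiDependencies() {
        nicePage = createNiceMock(Page.class);
        mockPage = createMock(Page.class);
        // install the stub by default; tests that verify behaviour switch to mockPage
        Ui.setCurrentPage(nicePage);
    }
}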
Is there currently a way to disable a TestNG test based on a condition?
I know you can currently disable a test in TestNG like so:
@Test(enabled=false, groups={"blah"})
public void testCurrency(){
...
}
I would like to disable the same test based on a condition, but don't know how. Something like this:
@Test(enabled=(isUk() ? false : true), groups={"blah"})
public void testCurrency(){
...
}
Does anyone have a clue whether this is possible or not?
An easier option is to use the @BeforeMethod annotation on a method which checks your condition. If you want to skip the tests, then just throw a SkipException, like this:
@BeforeMethod
protected void checkEnvironment() {
if (!resourceAvailable) {
throw new SkipException("Skipping tests because resource was not available.");
}
}
You have two options:
Implement an annotation transformer.
Use BeanShell.
Your annotation transformer would test the condition and then override the @Test annotation to add the attribute "enabled=false" if the condition is not satisfied.
There are two ways that I know of that allow you to control "disabling" tests in TestNG.
The important differentiation to note is that SkipException will break out of all subsequent tests, while implementing IAnnotationTransformer uses reflection to disable individual tests, based on a condition that you specify. I will explain both SkipException and IAnnotationTransformer.
SkipException example
import java.lang.reflect.Method;
import org.testng.*;
import org.testng.annotations.*;
public class TestSuite
{
// You set this however you like.
boolean myCondition;
// Execute before each test is run
@BeforeMethod
public void before(Method methodName){
// check the condition; note that once your condition is met, the rest of the tests will be skipped as well
if(myCondition)
throw new SkipException("Skipping tests");
}
@Test(priority = 1)
public void test1(){}
@Test(priority = 2)
public void test2(){}
@Test(priority = 3)
public void test3(){}
}
IAnnotationTransformer example
A bit more complicated but the idea behind it is a concept known as Reflection.
Wiki - http://en.wikipedia.org/wiki/Reflection_(computer_programming)
First implement the IAnnotationTransformer interface and save it in a *.java file.
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;
public class Transformer implements IAnnotationTransformer {
// Do not worry about calling this method yourself, as TestNG calls it behind the scenes before EVERY method (or test).
// It will disable single tests, not the entire suite like SkipException
public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod){
// If we have chosen not to run this test, then disable it.
if (disableMe()){
annotation.setEnabled(false);
}
}
// logic YOU control
private boolean disableMe() {
// placeholder: return true when this test should be disabled
return false;
}
}
Then, in your test suite Java file, do the following in the @BeforeClass method:
import org.testng.*;
import org.testng.annotations.*;
/* Execute before the tests run. */
@BeforeClass
public void before(){
TestNG testNG = new TestNG();
testNG.setAnnotationTransformer(new Transformer());
}
@Test(priority = 1)
public void test1(){}
@Test(priority = 2)
public void test2(){}
@Test(priority = 3)
public void test3(){}
One last step is to ensure that you add a listener in your build.xml file.
Mine ended up looking like this; this is just a single line from the build.xml:
<testng classpath="${test.classpath}:${build.dir}" outputdir="${report.dir}"
haltonfailure="false" useDefaultListeners="true"
listeners="org.uncommons.reportng.HTMLReporter,org.uncommons.reportng.JUnitXMLReporter,Transformer"
classpathref="reportnglibs"></testng>
I prefer this annotation-based way to disable/skip some tests based on environment settings. It is easy to maintain and does not require any special coding technique.
Using the IInvokedMethodListener interface
Create a custom annotation, e.g. @SkipInHeadlessMode (a sketch of such a marker annotation is shown after the listener below)
Throw SkipException
public class ConditionalSkipTestAnalyzer implements IInvokedMethodListener {
protected static PropertiesHandler properties = new PropertiesHandler();
@Override
public void beforeInvocation(IInvokedMethod invokedMethod, ITestResult result) {
Method method = result.getMethod().getConstructorOrMethod().getMethod();
if (method == null) {
return;
}
if (method.isAnnotationPresent(SkipInHeadlessMode.class)
&& properties.isHeadlessMode()) {
throw new SkipException("These Tests shouldn't be run in HEADLESS mode!");
}
}
@Override
public void afterInvocation(IInvokedMethod iInvokedMethod, ITestResult iTestResult) {
//Auto generated
}
}
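The custom annotation itself can be a minimal marker annotation; a sketch, assuming it only needs to be readable at runtime on test methods:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SkipInHeadlessMode {
    // marker annotation: no members needed
}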
Check for the details:
https://www.lenar.io/skip-testng-tests-based-condition-using-iinvokedmethodlistener/
A third option can also be assumptions.
Assumptions for TestNG: when an assumption fails, TestNG will be instructed to ignore the test case and will thus not execute it.
Using the @Assumption annotation
Using the AssumptionListener with the Assumes.assumeThat(...) method
You can use this example: Conditionally Running Tests In TestNG
Throwing a SkipException in a method annotated with @BeforeMethod did not work for me, because it skipped all the remaining tests in my test suite regardless of whether a SkipException was thrown for those tests.
I did not investigate it thoroughly, but I found another way: using the dependsOnMethods attribute of the @Test annotation:
import org.testng.SkipException;
import org.testng.annotations.Test;
public class MyTest {
private boolean conditionX = true;
private boolean conditionY = false;
@Test
public void isConditionX(){
if(!conditionX){
throw new SkipException("skipped because of X is false");
}
}
@Test
public void isConditionY(){
if(!conditionY){
throw new SkipException("skipped because of Y is false");
}
}
#Test(dependsOnMethods="isConditionX")
public void test1(){
}
#Test(dependsOnMethods="isConditionY")
public void test2(){
}
}
SkipException: this is useful when we have only one @Test method in the class. For example, in a data-driven framework, I have only one test method which needs to be either executed or skipped based on some condition. Hence I've put the logic for checking the condition inside the @Test method and get the desired result.
It helped me get the Extent Report with the test case result as Pass/Fail, and a particular Skip as well.
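A minimal sketch of that pattern; conditionMet() stands in for whatever check your framework performs:
@Test
public void dataDrivenTest() {
    if (!conditionMet()) {
        // marks this invocation as skipped rather than failed
        throw new SkipException("Condition not met, skipping this test");
    }
    // ... actual test steps ...
}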
We occasionally have bugs that appear once in every X runs. Before people check in their changes (which are automatically run through JUnit), our devs need to get the JUnit tests passing locally in Eclipse.
Is there some convenient way (built in or high-quality Plugin) to make Eclipse run the same test X times and stop if there's a failure? An alternative to just clicking Run X times?
Note that I'm looking for something in the UI (e.g., right click and say "Run X times" instead of just "Run").
If the for loop works, then I agree with nos.
If you need to repeat the entire setup-test-teardown, then you can use a TestSuite:
Right-click on the package containing the test to repeat
Go to New and choose to create a JUnit test SUITE
Make sure that only the test you want to repeat is selected and click through to finish.
Edit the file to run it multiple times.
In the file you just find the
addTestSuite(YourTestClass.class)
line, and wrap that in a for loop.
I'm pretty sure that you can use addTest instead of addTestSuite to get it to only run one test from that class if you just want to repeat a single test method.
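A sketch of what the edited suite method might look like (YourTestClass is a placeholder and is assumed to be a JUnit 3-style TestCase, as addTestSuite requires):
import junit.framework.Test;
import junit.framework.TestSuite;

public class RepeatSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Repeat YourTestClass");
        // add the whole test class 10 times so it runs back to back
        for (int i = 0; i < 10; i++) {
            suite.addTestSuite(YourTestClass.class);
        }
        return suite;
    }
}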
If you really want to run a test class until failure, you need your own runner.
@RunWith(RunUntilFailure.class)
public class YourClass {
// ....
}
which could be implemented as follows...
package com.example;
import org.junit.internal.runners.*;
import org.junit.runner.notification.*;
import org.junit.runner.*;
public class RunUntilFailure extends Runner {
private TestClassRunner runner;
public RunUntilFailure(Class<?> klass) throws InitializationError {
this.runner = new TestClassRunner(klass);
}
@Override
public Description getDescription() {
Description description = Description.createSuiteDescription("Run until failure");
description.addChild(runner.getDescription());
return description;
}
@Override
public void run(RunNotifier notifier) {
class L extends RunListener {
boolean fail = false;
public void testFailure(Failure failure) throws Exception { fail = true; }
}
L listener = new L();
notifier.addListener(listener);
while (!listener.fail) runner.run(notifier);
}
}
...releasing untested code, feeling TDD guilt :)
Based on @akuhn's answer, here is what I came up with: rather than running forever, this will run 50 times or until failure, whichever comes first.
package com.foo
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;
public class RunManyTimesUntilFailure extends Runner {
private static final int MAX_RUN_COUNT = 50;
private BlockJUnit4ClassRunner runner;
#SuppressWarnings("unchecked")
public RunManyTimesUntilFailure(final Class testClass) throws InitializationError {
runner = new BlockJUnit4ClassRunner(testClass);
}
@Override
public Description getDescription() {
final Description description = Description.createSuiteDescription("Run many times until failure");
description.addChild(runner.getDescription());
return description;
}
@Override
public void run(final RunNotifier notifier) {
class L extends RunListener {
boolean shouldContinue = true;
int runCount = 0;
@Override
public void testFailure(@SuppressWarnings("unused") final Failure failure) throws Exception {
shouldContinue = false;
}
@Override
public void testFinished(@SuppressWarnings("unused") Description description) throws Exception {
runCount++;
shouldContinue = (shouldContinue && runCount < MAX_RUN_COUNT);
}
}
final L listener = new L();
notifier.addListener(listener);
while (listener.shouldContinue) {
runner.run(notifier);
}
}
}
I know it doesn't answer the question directly, but if a test isn't passing every time it is run, that is a test smell known as Erratic Test. There are several possible causes for this (from xUnit Test Patterns):
Interacting Tests
Interacting Test Suites
Lonely Test
Resource Leakage
Resource Optimism
Unrepeatable Test
Test Run War
Nondeterministic Test
The details of each of these is documented in Chapter 16 of xUnit Test Patterns.
Here is a post I wrote that shows several ways of running the tests repeatedly with code examples:
http://codehowtos.blogspot.com/2011/04/run-junit-test-repeatedly.html
You can use the @Parameterized runner, or use the special runner included in the post.
There is also a reference to a @Retry implementation.
I don't believe there's a built-in way for JUnit to do exactly what you're asking for.
If multiple runs produce different results, you should have a unit test testing that case, which might be as simple as running a for loop in the relevant test cases.
It is possible to implement such a loop with TestRules (since JUnit 4.9).
A very simple implementation that runs every Test 10 times:
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;
public class SimpleRepeatRule implements TestRule {
private static class SimpleRepeatStatement extends Statement {
private final Statement statement;
private SimpleRepeatStatement(Statement statement) {
this.statement = statement;
}
@Override
public void evaluate() throws Throwable {
for (int i = 0; i < 10; i++) {
statement.evaluate();
}
}
}
@Override
public Statement apply(Statement statement, Description description) {
return new SimpleRepeatStatement(statement);
}
}
Usage:
public class Run10TimesTest {
@Rule
public SimpleRepeatRule repeatRule = new SimpleRepeatRule();
@Test
public void myTest(){...}
}
For a more useful implementation based on an annotation that defines which test method has to be executed how often, have a look at this blog:
http://www.codeaffine.com/2013/04/10/running-junit-tests-repeatedly-without-loops/
Like the title says, I'm looking for some simple way to run JUnit 4.x tests several times in a row automatically using Eclipse.
An example would be running the same test 10 times in a row and reporting back the result.
We already have a complex way of doing this but I'm looking for a simple way of doing it so that I can be sorta sure that the flaky test I've been trying to fix stays fixed.
An ideal solution would be an Eclipse plugin/setting/feature that I am unaware of.
The easiest (as in least amount of new code required) way to do this is to run the test as a parameterized test (annotate it with @RunWith(Parameterized.class) and add a method to provide 10 empty parameters). That way the framework will run the test 10 times.
This test would need to be the only test in the class, or better put all test methods should need to be run 10 times in the class.
Here is an example:
@RunWith(Parameterized.class)
public class RunTenTimes {
@Parameterized.Parameters
public static Object[][] data() {
return new Object[10][0];
}
public RunTenTimes() {
}
@Test
public void runsTenTimes() {
System.out.println("run");
}
}
With the above, it is possible to even do it with a parameter-less constructor, but I'm not sure if the framework authors intended that, or if that will break in the future.
If you are implementing your own runner, then you could have the runner run the test 10 times. If you are using a third party runner, then with 4.7, you can use the new @Rule annotation and implement the MethodRule interface so that it takes the statement and executes it 10 times in a for loop. The current disadvantage of this approach is that @Before and @After get run only once. This will likely change in the next version of JUnit (the @Before will run after the @Rule), but regardless you will be acting on the same instance of the object (something that isn't true of the Parameterized runner). This assumes that whatever runner you are running the class with correctly recognizes the @Rule annotations. That is only the case if it is delegating to the JUnit runners.
If you are running with a custom runner that does not recognize the @Rule annotation, then you are really stuck with having to write your own runner that delegates appropriately to that Runner and runs it 10 times.
Note that there are other ways to potentially solve this (such as the Theories runner) but they all require a runner. Unfortunately JUnit does not currently support layers of runners. That is a runner that chains other runners.
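For the MethodRule approach mentioned above, a rough sketch might look like this (assuming JUnit 4.7+, and keeping in mind the caveats about @Before/@After described in this answer):
import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

public class RepeatTenTimesRule implements MethodRule {

    @Override
    public Statement apply(final Statement base, FrameworkMethod method, Object target) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // run the wrapped test statement ten times on the same test instance
                for (int i = 0; i < 10; i++) {
                    base.evaluate();
                }
            }
        };
    }
}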
With IntelliJ, you can do this from the test configuration. Once you open this window, you can choose to run the test any number of times you want.
When you run the test, IntelliJ will execute all tests you have selected the number of times you specified.
For example, you can run 624 tests 10 times.
With JUnit 5 I was able to solve this using the @RepeatedTest annotation:
@RepeatedTest(10)
public void testMyCode() {
//your test code goes here
}
Note that the @Test annotation shouldn't be used along with @RepeatedTest.
I've found that Spring's repeat annotation is useful for that kind of thing:
@Repeat(value = 10)
Latest (Spring Framework 4.3.11.RELEASE API) doc:
org.springframework.test.annotation
Unit Testing in Spring
Inspired by the following resources:
blog post
this solution
commented version
Example
Create and use a @Repeat annotation as follows:
public class MyTestClass {
@Rule
public RepeatRule repeatRule = new RepeatRule();
@Test
@Repeat(10)
public void testMyCode() {
//your test code goes here
}
}
Repeat.java
import static java.lang.annotation.ElementType.ANNOTATION_TYPE;
import static java.lang.annotation.ElementType.METHOD;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Retention(RetentionPolicy.RUNTIME)
@Target({ METHOD, ANNOTATION_TYPE })
public @interface Repeat {
int value() default 1;
}
RepeatRule.java
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;
public class RepeatRule implements TestRule {
private static class RepeatStatement extends Statement {
private final Statement statement;
private final int repeat;
public RepeatStatement(Statement statement, int repeat) {
this.statement = statement;
this.repeat = repeat;
}
@Override
public void evaluate() throws Throwable {
for (int i = 0; i < repeat; i++) {
statement.evaluate();
}
}
}
@Override
public Statement apply(Statement statement, Description description) {
Statement result = statement;
Repeat repeat = description.getAnnotation(Repeat.class);
if (repeat != null) {
int times = repeat.value();
result = new RepeatStatement(statement, times);
}
return result;
}
}
PowerMock
Using this solution with @RunWith(PowerMockRunner.class) requires updating to PowerMock 1.6.5 (which includes a patch).
Anything wrong with:
@Test
void itWorks() {
// stuff
}
@Test
void itWorksRepeatably() {
for (int i = 0; i < 10; i++) {
itWorks();
}
}
Unlike the case where you are testing each of an array of values, you don't particularly care which run failed.
No need to do in configuration or annotation what you can do in code.
This works much more easily for me.
public class RepeatTests extends TestCase {
public static Test suite() {
TestSuite suite = new TestSuite(RepeatTests.class.getName());
for (int i = 0; i < 10; i++) {
suite.addTestSuite(YourTest.class);
}
return suite;
}
}
There's an Intermittent annotation in the tempus-fugit library which works with JUnit 4.7's @Rule to repeat a test several times, or with @RunWith.
For example,
@RunWith(IntermittentTestRunner.class)
public class IntermittentTestRunnerTest {
private static int testCounter = 0;
@Test
@Intermittent(repetition = 99)
public void annotatedTest() {
testCounter++;
}
}
After the test is run (with the IntermittentTestRunner in the @RunWith), testCounter would be equal to 99.
This is essentially the answer that Yishai provided above, rewritten in Kotlin:
@RunWith(Parameterized::class)
class MyTest {
companion object {
private const val numberOfTests = 200
@JvmStatic
@Parameterized.Parameters
fun data(): Array<Array<Any?>> = Array(numberOfTests) { arrayOfNulls<Any?>(0) }
}
@Test
fun testSomething() { }
}
I built a module that allows doing this kind of test. It is focused not only on repetition, but also on guaranteeing that some piece of code is thread-safe.
https://github.com/anderson-marques/concurrent-testing
Maven dependency:
<dependency>
<groupId>org.lite</groupId>
<artifactId>concurrent-testing</artifactId>
<version>1.0.0</version>
</dependency>
Example of use:
package org.lite.concurrent.testing;
import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
import ConcurrentTest;
import ConcurrentTestsRule;
/**
* Concurrent tests examples
*/
public class ExampleTest {
/**
* Create a new TestRule that will be applied to all tests
*/
@Rule
public ConcurrentTestsRule ct = ConcurrentTestsRule.silentTests();
/**
* Test using 10 threads making 20 requests, i.e. up to 10 simultaneous requests.
*/
@Test
@ConcurrentTest(requests = 20, threads = 10)
public void testConcurrentExecutionSuccess(){
Assert.assertTrue(true);
}
/**
* Test using 10 threads making 200 requests, with a 100 millisecond timeout.
*/
@Test
@ConcurrentTest(requests = 200, threads = 10, timeoutMillis = 100)
public void testConcurrentExecutionSuccessWaitOnly100Millissecond(){
}
@Test(expected = RuntimeException.class)
@ConcurrentTest(requests = 3)
public void testConcurrentExecutionFail(){
throw new RuntimeException("Fail");
}
}
This is an open source project. Feel free to improve it.
You could run your JUnit test from a main method and repeat it as many times as you need:
package tests;
import static org.junit.Assert.*;
import org.junit.Test;
import org.junit.runner.Result;
public class RepeatedTest {
@Test
public void test() {
fail("Not yet implemented");
}
public static void main(String args[]) {
boolean runForever = true;
while (runForever) {
Result result = org.junit.runner.JUnitCore.runClasses(RepeatedTest.class);
if (result.getFailureCount() > 0) {
runForever = false;
//Do something with the result object
}
}
}
}