Exactly, I mean using the @EnabledIfSystemProperty annotation together with @ParameterizedTest.
When I use @EnabledIfSystemProperty with @Test, the test method is disabled and is grayed out in the list after the test run (as I expect):
@Test
@EnabledIfSystemProperty(named = "env", matches = "test")
public void test() {
    System.out.println("Only for TEST env");
}
When I use @EnabledIfSystemProperty with @ParameterizedTest, however, the test shows green in the list after the run, even though it is not actually executed:
@ParameterizedTest
@EnabledIfSystemProperty(named = "env", matches = "test")
@ValueSource(strings = {"testData.json", "testData2.json"})
public void test(String s) {
    System.out.println("Only for TEST env");
}
I execute tests from IntelliJ IDEA.
I need the test to be grayed out in the list. Any ideas? Thanks...
You could move the parameterized test into a @Nested test class and apply the condition to it:
@Nested
@EnabledIfSystemProperty(named = "env", matches = "test")
class Inner {
    @ParameterizedTest
    @ValueSource(strings = {"testData.json", "testData2.json"})
    public void test(String s) {
        System.out.println("Only for TEST env: " + s);
    }
}
Related
I have a class of JUnit 5 tests that is not allowed to run in the main pipeline (for multiple reasons). To disable those tests in the pipeline while keeping them runnable on a developer machine, I added @DisabledIfEnvironmentVariable to the test class (and it works great):
@DisabledIfEnvironmentVariable(named = "USER", matches = "(.*jenkins.*|.*tomcat.*)")
@SpringBootTest(classes = {BigApplication.class}, webEnvironment = RANDOM_PORT)
class LongRunningApplicationTest { ... }
How can I override @DisabledIfEnvironmentVariable when I want to run the test class on occasion?
I tried adding @EnabledIfEnvironmentVariable, hoping it would override the @DisabledIfEnvironmentVariable annotation and so give me a convenient way to run the test in the pipeline on occasion:
@EnabledIfEnvironmentVariable(named = "applicationTest", matches = "true")
@DisabledIfEnvironmentVariable(named = "USER", matches = "(.*jenkins.*|.*tomcat.*)")
@SpringBootTest(classes = {BigApplication.class}, webEnvironment = RANDOM_PORT)
class LongRunningApplicationTest { ... }
However, the above approach doesn't work. Is there a way to override @DisabledIf...?
One solution is to introduce your own condition using the @EnabledIf or @DisabledIf annotations:
@EnabledIf("EnabledIfAnnotationUtils#shouldRun")
class ApplicationTest {
    @Test
    void renameMe() {
        assertThat(false).isTrue();
    }
}
Here EnabledIfAnnotationUtils is an external class (useful when multiple tests share the same condition) and shouldRun is the name of the static method to call. Example:
public class EnabledIfAnnotationUtils {

    static boolean shouldRun() {
        boolean override = getPropertySafely("run-long-tests").equalsIgnoreCase("true");
        if (override) {
            return true;
        }
        String user = getEnvSafely("USER");
        boolean isOnJenkins = user.toLowerCase().contains("jenkins") || user.toLowerCase().contains("tomcat");
        return !isOnJenkins;
    }

    private static String getPropertySafely(String name) {
        return "" + System.getProperty(name);
    }

    private static String getEnvSafely(String name) {
        return "" + System.getenv(name);
    }
}
Now the tests will NOT run on Jenkins unless the override parameter is passed, for example:
mvn test -Drun-long-tests=true
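As a quick sanity check outside JUnit, the override branch of that condition can be exercised directly. This is just a standalone sketch with the same logic copied from the class above:

```java
public class EnabledIfDemo {

    // Same logic as EnabledIfAnnotationUtils.shouldRun() above.
    static boolean shouldRun() {
        boolean override = getPropertySafely("run-long-tests").equalsIgnoreCase("true");
        if (override) {
            return true;
        }
        String user = getEnvSafely("USER");
        boolean isOnJenkins = user.toLowerCase().contains("jenkins") || user.toLowerCase().contains("tomcat");
        return !isOnJenkins;
    }

    // "" + ... turns a null lookup result into the string "null" instead of an NPE.
    private static String getPropertySafely(String name) {
        return "" + System.getProperty(name);
    }

    private static String getEnvSafely(String name) {
        return "" + System.getenv(name);
    }

    public static void main(String[] args) {
        System.setProperty("run-long-tests", "true");
        System.out.println(shouldRun()); // prints true: the override wins regardless of USER
    }
}
```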
I'm looking for a way to optimize running multiple parameterized tests with expensive setup.
My current code looks like this:
@ParameterizedTest
@MethodSource("test1CasesProvider")
void test1(String param) {
    // expensive setup code 1
    // execution & assertions
}

@ParameterizedTest
@MethodSource("test2CasesProvider")
void test2(String param) {
    // expensive setup code 2
    // execution & assertions
}
but in that shape the expensive setup runs for every test case, which is not great.
I could split this into two separate test classes and use @BeforeAll, so the setup runs only once per test, but I'm looking for a way to keep both cases in one test class.
You can use @Nested tests in this case, like this:
public class MyTests {

    @Nested
    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    class Test1Cases {

        @BeforeAll
        void setUpForTest1() {
            System.out.println("Test1Cases: setting up things!");
        }

        @AfterAll
        void tearDownForTest1() {
            System.out.println("Test1Cases: tear down things!");
        }

        @ParameterizedTest
        @MethodSource("source")
        void shouldDoSomeTests(String testCase) {
            System.out.println("Test1Cases: Doing parametrized tests: " + testCase);
        }

        Stream<Arguments> source() {
            return Stream.of(
                    Arguments.of("first source param!"),
                    Arguments.of("second source param!"),
                    Arguments.of("third source param!")
            );
        }
    }

    @Nested
    @TestInstance(TestInstance.Lifecycle.PER_CLASS)
    class Test2Cases {

        @BeforeAll
        void setUpForTest2() {
            System.out.println("Test2Cases: setting up things!");
        }

        @AfterAll
        void tearDownForTest2() {
            System.out.println("Test2Cases: tear down things!");
        }

        @ParameterizedTest
        @MethodSource("source")
        void shouldDoSomeTests(String testCase) {
            System.out.println("Test2Cases: Doing parametrized tests: " + testCase);
        }

        Stream<Arguments> source() {
            return Stream.of(
                    Arguments.of("first source param!"),
                    Arguments.of("second source param!"),
                    Arguments.of("third source param!")
            );
        }
    }
}
The output in this case was:
Test2Cases: setting up things!
Test2Cases: Doing parametrized tests: first source param!
Test2Cases: Doing parametrized tests: second source param!
Test2Cases: Doing parametrized tests: third source param!
Test2Cases: tear down things!
Test1Cases: setting up things!
Test1Cases: Doing parametrized tests: first source param!
Test1Cases: Doing parametrized tests: second source param!
Test1Cases: Doing parametrized tests: third source param!
Test1Cases: tear down things!
I am trying to run some parameterized unit tests programmatically using JUnitCore. My code works when the test is not parameterized, but it fails when it is. A minimal example:
Main.java:
public class Main {
    public static void main(final String[] args) {
        final Request request = Request.classes(TheTest.class);
        final JUnitCore core = new JUnitCore();
        final Result result = core.run(request);
        System.out.println("A total of " + result.getRunCount() + " tests were executed.");
        System.out.println("A total of " + result.getFailureCount() + " failed.");
        for (Failure f : result.getFailures()) {
            System.err.println(f.getMessage());
            System.err.println(f.getTrace());
        }
    }
}
TheTest.java:
public class TheTest {
    @ParameterizedTest
    @ValueSource(strings = {"A", "B"})
    public void readFile_1(final String s) {
        System.out.println(s);
    }
}
Output:
A total of 1 tests were executed.
A total of 1 failed.
Invalid test class 'TheTest':
1. No runnable methods
org.junit.runners.model.InvalidTestClassError: Invalid test class 'TheTest':
1. No runnable methods
at org.junit.runners.ParentRunner.validate(ParentRunner.java:525)
at org.junit.runners.ParentRunner.<init>(ParentRunner.java:102)
at org.junit.runners.BlockJUnit4ClassRunner.<init>(BlockJUnit4ClassRunner.java:84)
at org.junit.runners.JUnit4.<init>(JUnit4.java:23)
at org.junit.internal.builders.JUnit4Builder.runnerForClass(JUnit4Builder.java:10)
at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:70)
at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:37)
at org.junit.runner.Computer.getRunner(Computer.java:50)
at org.junit.runner.Computer$1.runnerForClass(Computer.java:31)
at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:70)
at org.junit.runners.model.RunnerBuilder.runners(RunnerBuilder.java:125)
at org.junit.runners.model.RunnerBuilder.runners(RunnerBuilder.java:111)
at org.junit.runners.Suite.<init>(Suite.java:81)
at org.junit.runner.Computer$2.<init>(Computer.java:33)
at org.junit.runner.Computer.getSuite(Computer.java:28)
at org.junit.runner.Request.classes(Request.java:77)
at org.junit.runner.Request.classes(Request.java:92)
at Main.main(Main.java:8)
Process finished with exit code 0
As you can see, JUnit does not recognize the method as a unit test. If I add @Test on top of @ParameterizedTest, then JUnit complains that the test has parameters, so it obviously does not understand that it is a parameterized test.
How can I achieve that? Thanks!
The problem was that the scheme I was using to invoke the tests only works for JUnit 4.
This works for JUnit 5:
public class Main {
    public static void main(final String[] args) {
        final LauncherDiscoveryRequest launcherDiscoveryRequest = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectClass(TheTest.class))
                .build();
        final Launcher launcher = LauncherFactory.create();
        TestPlan testPlan = launcher.discover(launcherDiscoveryRequest);
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.registerTestExecutionListeners(listener);
        launcher.execute(launcherDiscoveryRequest);
        TestExecutionSummary summary = listener.getSummary();
        summary.printTo(new PrintWriter(System.out));
    }
}
More info in https://www.baeldung.com/junit-tests-run-programmatically-from-java
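Note that the Launcher API lives in a separate artifact from the Jupiter engine. With Maven the dependency would look roughly like this (the version shown is only an example; pick the platform version matching your JUnit 5 release):

```xml
<dependency>
    <groupId>org.junit.platform</groupId>
    <artifactId>junit-platform-launcher</artifactId>
    <version>1.9.3</version>
    <scope>test</scope>
</dependency>
```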
We defined a TestNG result listener that sends the test result for each test case defined in testng.xml to an internal tool, like below:
public class TestResultsListener implements ITestListener, ISuiteListener {

    @Override
    public void onFinish(ISuite suite) {
        // some code to send the final suite result to the internal tool
    }

    @Override
    public void onTestSuccess(ITestResult iTestResult) {
        this.sendStatus(iTestResult, "PASS");
    }

    private void sendStatus(ITestResult iTestResult, String status) {
        // Set test case information
        ......
        jsonArr.add(testResult);
    }
}
We then integrated this listener into another project's testng.xml file like so:
<listeners>
<listener class-name="com.qa.test.listener.TestResultsListener" />
</listeners>
It worked as designed: once the test suite finishes, the test results are uploaded to the internal tool.
Now we have a new requirement: in one project, one test case in testng.xml corresponds to 3 test cases in the internal tool, meaning that for one test case in testng.xml we need to update 3 test cases in the internal tool. How can we update our current TestNG listener to handle this?
Thanks a lot.
You can annotate each of your tests with the list of corresponding internal-tool test ids.
Here I assume you have 2 TestNG tests: one related to internal test IT-1, and the other to internal tests IT-2, IT-3 and IT-4:
@Listeners(MyTestListener.class)
public class TestA {

    @Test
    @InternalTool(ids = "IT-1")
    public void test1() {
        System.out.println("test1");
        fail();
    }

    @Test
    @InternalTool(ids = {"IT-2", "IT-3", "IT-4"})
    public void test2() {
        System.out.println("test2");
    }
}
The annotation is simply defined like this:
@Retention(RetentionPolicy.RUNTIME)
public @interface InternalTool {
    String[] ids();
}
Then your listener just has to figure out which annotations are present on successful/failed tests:
public class MyTestListener extends TestListenerAdapter implements ITestListener {

    @Override
    public void onTestSuccess(ITestResult tr) {
        super.onTestSuccess(tr);
        updateInternalTool(tr, true);
    }

    @Override
    public void onTestFailure(ITestResult tr) {
        super.onTestFailure(tr);
        updateInternalTool(tr, false);
    }

    private void updateInternalTool(ITestResult tr, boolean success) {
        InternalTool annotation = tr.getMethod().getConstructorOrMethod().getMethod().getAnnotation(InternalTool.class);
        for (String id : annotation.ids()) {
            System.out.println(String.format("Test with id [%s] is [%s]", id, success ? "successful" : "failed"));
        }
    }
}
The following output is produced:
test1
Test with id [IT-1] is [failed]
test2
Test with id [IT-2] is [successful]
Test with id [IT-3] is [successful]
Test with id [IT-4] is [successful]
You can also extend this mechanism to suite listeners.
Disclaimer: the line InternalTool annotation = tr.getMethod().getConstructorOrMethod().getMethod().getAnnotation(InternalTool.class); is not bullet-proof (high risk of a NullPointerException when the annotation is absent) and should be made more robust.
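A minimal null-safe variant of that lookup can be sketched with plain reflection. The annotation and class names below just mirror the example above; in the listener you would feed in the Method obtained from tr.getMethod().getConstructorOrMethod().getMethod():

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class SafeAnnotationLookup {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface InternalTool {
        String[] ids();
    }

    // Returns the ids of @InternalTool, or an empty array when the annotation
    // is absent, so callers can loop without any null check.
    static String[] idsOf(Method method) {
        InternalTool annotation = method.getAnnotation(InternalTool.class);
        return annotation == null ? new String[0] : annotation.ids();
    }

    static class Sample {
        @InternalTool(ids = {"IT-2", "IT-3"})
        public void tagged() {}

        public void untagged() {}
    }

    public static void main(String[] args) throws Exception {
        for (String id : idsOf(Sample.class.getMethod("tagged"))) {
            System.out.println("would update " + id);
        }
        // untagged() simply produces no updates instead of throwing an NPE
        System.out.println(idsOf(Sample.class.getMethod("untagged")).length);
    }
}
```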
class A {

    @Test
    @CustomAnnotation(attrib1 = "foo", attrib2 = "moo", attrib3 = "poo")
    void methodA() { }

    @Test
    @CustomAnnotation(attrib1 = "blahblah", attrib2 = "flahflah", attrib3 = "klahklah")
    void methodB() { }

    @Test
    @CustomAnnotation(attrib1 = "foo", attrib2 = "flahflah", attrib3 = "poo")
    void methodC() { }
}
Now, using reflection, my annotation-processing class will return a SET/LIST of methods that match my criteria (say, attrib1 = "foo"): methodA and methodC will satisfy it. I need to add these to a test suite at runtime and run that suite.
How can I add them to the test suite?
Have a look at org.junit.runner.JUnitCore. You should be able to specify the set of tests to run (methods that have to be executed as tests) using org.junit.runner.Request: http://junit.sourceforge.net/javadoc/org/junit/runner/JUnitCore.html#run(org.junit.runner.Request)
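The reflection part of that could be sketched as follows. This is a standalone illustration (the annotation and class A are repeated inline for self-containment, and the @Test annotations are omitted since only the filtering is shown); each matching name could then be run via JUnit 4's Request.method(A.class, name) passed to JUnitCore.run:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class SuiteBuilder {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface CustomAnnotation {
        String attrib1();
        String attrib2();
        String attrib3();
    }

    static class A {
        @CustomAnnotation(attrib1 = "foo", attrib2 = "moo", attrib3 = "poo")
        public void methodA() {}

        @CustomAnnotation(attrib1 = "blahblah", attrib2 = "flahflah", attrib3 = "klahklah")
        public void methodB() {}

        @CustomAnnotation(attrib1 = "foo", attrib2 = "flahflah", attrib3 = "poo")
        public void methodC() {}
    }

    // Collect the names of all declared methods whose attrib1 matches the criterion.
    static List<String> matchingMethods(Class<?> clazz, String attrib1) {
        List<String> names = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods()) {
            CustomAnnotation ann = m.getAnnotation(CustomAnnotation.class);
            if (ann != null && ann.attrib1().equals(attrib1)) {
                names.add(m.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // With JUnit 4 on the classpath, each name could then be executed via:
        //   new JUnitCore().run(Request.method(A.class, name));
        System.out.println(matchingMethods(A.class, "foo"));
    }
}
```

Note that getDeclaredMethods() returns methods in no guaranteed order, so sort the names first if you need a deterministic suite.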