JUnit parameterized tests: getting the parameter after the test - java

So here's my dilemma. I have been using Selenium, TestNG, and iText to generate nice PDF reports from the results of an automated test run, but I was told recently that they didn't want competing libraries (TestNG vs. JUnit) and that I should start using JUnit instead.
I am running these JUnit tests with parameters, and am wondering whether there is a way to access the parameter during or after the test run. The parameters are Strings with browser names, which are used to tell Selenium which WebDriver to get, and it would be nice to know that a test passed/failed in a certain browser. JUnit seems to be very limited in the information you can access once a test run completes.
I have a class which implements junit.framework.TestListener and listens for the start/stop of each test; this is where I can gather information about the test.
currentTest is of type BaseTestResult, which is a class I wrote that simply stores test results in a list.
import junit.framework.AssertionFailedError;
import junit.framework.Test;
import junit.framework.TestListener;
import junit.framework.TestResult;
import utilities.reporting.BaseReporting;
import utilities.reporting.BaseTestResult;
import utilities.reporting.ResultsPerSuite;

public class BaseListener implements TestListener {

    private ResultsPerSuite resultsPerSuite;
    private BaseReporting baseReporter;
    private BaseTestResult currentSuite;
    private BaseTestResult currentTest;
    private long startTime;
    private long endTime;
    private long suiteStartTime;
    private long suiteEndTime;

    public BaseListener() {
        baseReporter = new BaseReporting();
        resultsPerSuite = new ResultsPerSuite();
        currentTest = new BaseTestResult(null, null);
    }

    public void startSuite(Test suite) {
        suiteStartTime = System.currentTimeMillis();
        currentSuite = new BaseTestResult(suite);
    }

    @Override
    public void startTest(Test arg0) {
        startTime = System.currentTimeMillis();
        currentTest = new BaseTestResult(arg0);
    }

    @Override
    public void addError(Test arg0, Throwable arg1) {
        currentTest.addError(new BaseTestResult(arg0, arg1));
    }

    @Override
    public void addFailure(Test arg0, AssertionFailedError arg1) {
        currentTest.addFailed(new BaseTestResult(arg0, arg1));
    }

    @Override
    public void endTest(Test arg0) {
        endTime = System.currentTimeMillis();
        // Elapsed time for this test (end minus start).
        currentTest.setRuntime(endTime - startTime);
        // If both empty, then test passed, so add to passed results.
        if (currentTest.getFailed().isEmpty()
                && currentTest.getErrors().isEmpty()) {
            resultsPerSuite.addPassed(currentTest);
        } else {
            resultsPerSuite.addFailed(currentTest);
        }
    }

    public void endSuite(TestResult testEventDriver) {
        suiteEndTime = System.currentTimeMillis();
        currentSuite.setRuntime(suiteEndTime - suiteStartTime);
        resultsPerSuite.setSuite(currentSuite);
        baseReporter.printToConsole(resultsPerSuite);
    }

    /**
     * @return the allTestResults
     */
    public ResultsPerSuite getAllTestResults() {
        return resultsPerSuite;
    }
}

I am not sure how to do what you are looking for trivially from inside the TestListener without creating your own Runner, but you might be able to handle it with a TestRule.
Since you appear to be using the listener to send the results to an external service, you might have better luck by rigging in a TestWatcher that communicates with your own listener. It would have access to the test class's member variables and could report on them fairly easily.
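For illustration, here is a minimal sketch of such a TestWatcher for a JUnit 4 parameterized test. BrowserReportingWatcher is a hypothetical name, and the println calls stand in for whatever reporting code you already have; it assumes the test class hands its browser parameter to the rule:
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class BrowserReportingWatcher extends TestWatcher {

    // The browser parameter the surrounding test was constructed with.
    private final String browser;

    public BrowserReportingWatcher(String browser) {
        this.browser = browser;
    }

    @Override
    protected void succeeded(Description description) {
        // Replace with a call into your own reporting code.
        System.out.println(description.getMethodName() + " PASSED in " + browser);
    }

    @Override
    protected void failed(Throwable e, Description description) {
        System.out.println(description.getMethodName() + " FAILED in " + browser + ": " + e);
    }
}
In a class run with the Parameterized runner, you would then declare @Rule public BrowserReportingWatcher watcher = new BrowserReportingWatcher(browser); where browser is the constructor parameter the runner injects.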

Related

Cucumber Java: How to force JSON plugin generation

Overview: There are instances wherein I want to stop a running Cucumber test pack midway -- say, for example, when x number of tests have failed.
I can do this just fine, but I want the JSON file (plugin = {json:...}) to be generated when the test stops. Is this doable?
What I've tried so far:
Debug and see where the reporting / plugin generation happens. It seems to be when this line executes:
Cucumber.java: runtime.getEventBus().send.....
@Override
protected Statement childrenInvoker(RunNotifier notifier) {
    final Statement features = super.childrenInvoker(notifier);
    return new Statement() {
        @Override
        public void evaluate() throws Throwable {
            features.evaluate();
            runtime.getEventBus().send(new TestRunFinished(runtime.getEventBus().getTime()));
            runtime.printSummary();
        }
    };
}
I was hoping to access the runtime field, but it has a private modifier. I also tried accessing it via reflection, but I'm not exactly getting what I need.
I found a quite dirty but working solution and got what I need. Posting my solution here in case anyone might need it.
Create a custom cucumber runner implementation to take the runtime instance.
public final class Foo extends Cucumber {

    static Runtime runtime;

    /**
     * Constructor called by JUnit.
     *
     * @param clazz the class with the @RunWith annotation.
     * @throws IOException if there is a problem
     * @throws InitializationError if there is another problem
     */
    public Foo(Class clazz) throws InitializationError, IOException {
        super(clazz);
    }

    @Override
    protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader, RuntimeOptions runtimeOptions) throws InitializationError, IOException {
        runtime = super.createRuntime(resourceLoader, classLoader, runtimeOptions);
        return runtime;
    }
}
Call the same line that generates the file depending on the plugin used:
public final class ParentHook {

    @Before
    public void beforeScenario(Scenario myScenario) {
    }

    @After
    public void afterScenario() {
        if (your condition to stop the test) {
            // custom handle to stop the test
            myHandler.pleaseStop();
            Foo.runtime.getEventBus().send(new TestRunFinished(Foo.runtime.getEventBus().getTime()));
        }
    }
}
This will however require you to run your test via Foo.class, e.g.:
@RunWith(Foo.class) instead of @RunWith(Cucumber.class)
Not so much value here but it fits what I need at the moment. I hope Cucumber provides a way to do this out of the box. If there's a better way, please do post it here so I can accept your answer once verified.
Why not quit?
import cucumber.api.Scenario;
import cucumber.api.java.After;
import cucumber.api.java.Before;
import cucumber.api.java.en.When;
import org.openqa.selenium.WebDriver;

public class StepDefinitions {

    private static int failureCount = 0;
    private int threshold = 20;
    // Assumed here: the shared WebDriver instance, managed elsewhere in the glue code.
    private static WebDriver driver;

    @When("^something$")
    public void do_something() {
        // something
    }

    @After
    public void after(Scenario s) {
        if (s.isFailed()) ++failureCount;
    }

    @Before
    public void before() {
        if (failureCount > threshold) {
            if (driver != null) {
                driver.quit();
                driver = null;
            }
        }
    }
}

JUnit5 - How to get test result in AfterTestExecutionCallback

I am writing a JUnit 5 extension, but I cannot find a way to obtain the test result.
Extension looks like this:
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.TestExtensionContext;

public class TestResultExtension implements AfterTestExecutionCallback {

    @Override
    public void afterTestExecution(TestExtensionContext context) throws Exception {
        // How to get test result? SUCCESS/FAILED
    }
}
Any hints how to obtain test result?
This works for me:
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class RunnerExtension implements AfterTestExecutionCallback {

    @Override
    public void afterTestExecution(ExtensionContext context) throws Exception {
        Boolean testResult = context.getExecutionException().isPresent();
        System.out.println(testResult); // false - SUCCESS, true - FAILED
    }
}

@ExtendWith(RunnerExtension.class)
public abstract class Tests {
}
As other answers point out, JUnit communicates failed tests with exceptions, so an AfterTestExecutionCallback can be used to glean what happened. Note that this is error prone, as extensions running later might still fail the test.
Another way to do that is to register a custom TestExecutionListener. Both of these approaches are a little roundabout, though. There is an issue that tracks a specific extension point for reacting to test results, which would likely be the most straightforward answer to your question. If you can provide a specific use case, it would be great if you could head over to #542 and leave a comment describing it.
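For reference, a minimal sketch of the listener route; the class name and the println are placeholders for your own reporting:
import org.junit.platform.engine.TestExecutionResult;
import org.junit.platform.launcher.TestExecutionListener;
import org.junit.platform.launcher.TestIdentifier;

public class ResultLoggingListener implements TestExecutionListener {

    @Override
    public void executionFinished(TestIdentifier identifier, TestExecutionResult result) {
        // Report only leaf tests, not containers such as classes.
        if (identifier.isTest()) {
            System.out.println(identifier.getDisplayName() + ": " + result.getStatus());
        }
    }
}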
You can use SummaryGeneratingListener from org.junit.platform.launcher.listeners
It contains a MutableTestExecutionSummary field, which implements the TestExecutionSummary interface, and this way you can obtain info about containers, tests, time, failures, etc.
You can create custom listener, for example:
Create a class that extends SummaryGeneratingListener:
public class ResultAnalyzer extends SummaryGeneratingListener {

    @Override
    public void testPlanExecutionFinished(TestPlan testPlan) {
        // This method is invoked after all tests in all containers have finished
        super.testPlanExecutionFinished(testPlan);
        analyzeResult();
    }

    private void analyzeResult() {
        var summary = getSummary();
        var failures = summary.getFailures();
        // Do something
    }
}
Register the listener by creating the file
src\main\resources\META-INF\services\org.junit.platform.launcher.TestExecutionListener
and specifying your implementation in it:
path.to.class.ResultAnalyzer
Enable auto-detection of extensions by setting the parameter
-Djunit.jupiter.extensions.autodetection.enabled=true
And that's it!
Docs
https://junit.org/junit5/docs/5.0.0/api/org/junit/platform/launcher/listeners/SummaryGeneratingListener.html
https://junit.org/junit5/docs/5.0.0/api/org/junit/platform/launcher/listeners/TestExecutionSummary.html
https://junit.org/junit5/docs/current/user-guide/#extensions-registration-automatic
I have only this solution:
String testResult = context.getTestException().isPresent() ? "FAILED" : "OK";
It seems that it works well. But I am not sure if it will work correctly in all situations.
Failures in JUnit are propagated with exceptions; there are several exception types, which indicate the various kinds of errors.
So an exception present in TestExtensionContext#getTestException() indicates a failure. The method can't manipulate actual test results, so depending on your use case you might want to implement TestExecutionExceptionHandler, which allows you to swallow exceptions, thus changing whether a test succeeded or not.
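A minimal sketch of such a handler; the condition that decides which exceptions to swallow is entirely made up here:
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestExecutionExceptionHandler;

public class KnownFailureSwallower implements TestExecutionExceptionHandler {

    @Override
    public void handleTestExecutionException(ExtensionContext context, Throwable throwable) throws Throwable {
        // Returning normally swallows the exception, so the test is reported
        // as passed; rethrowing keeps the failure.
        if (throwable instanceof AssertionError
                && String.valueOf(throwable.getMessage()).contains("known flaky")) {
            return;
        }
        throw throwable;
    }
}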
You're almost there.
To implement a test execution callback and get the test result for logging (or generating a report) you can do the following:
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class TestResultExtension implements AfterTestExecutionCallback {

    @Override
    public void afterTestExecution(ExtensionContext context) throws Exception {
        // check the context for an exception
        Boolean passed = context.getExecutionException().isEmpty();
        // if there isn't one, the test passed
        String result = passed ? "PASSED" : "FAILED";
        // now that you have the result, you can do whatever you want
        System.out.println("Test Result: " + context.getDisplayName() + " " + result);
    }
}
And then you just add the TestResultExtension using the @ExtendWith() annotation for your test cases:
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import static org.junit.jupiter.api.Assertions.assertTrue;

@ExtendWith(TestResultExtension.class)
public class SanityTest {

    @Test
    public void testSanity() {
        assertTrue(true);
    }

    @Test
    public void testInsanity() {
        assertTrue(false);
    }
}
It's a good idea to extend a base test that includes the extension
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(TestResultExtension.class)
public class BaseTest {
}
And then you don't need to include the annotation in every test:
public class SanityTest extends BaseTest {
    // ...
}

WebDriver datadriven (using TestNG) scripts takes a long time to start

I have extended Selenium using the Java WebDriver library and the TestNG framework. When running test scripts, I notice an inordinate amount of time before a test starts executing when it takes input parameters from an Excel file (using the @DataProvider annotation).
The delay can amount to about 10 min, which makes it time consuming to run and debug. Is there a reason for this significant delay?
Yes, it could be because of the way you are reading from Excel (a greedy data provider), and it depends on how big your Excel file is. There is something called a lazy data provider; I found an example of one here. Posting the code from the link.
For a better understanding, I would need to see your code.
import java.util.Iterator;

import org.testng.Reporter;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LazyDataProviderExample {

    @Test(dataProvider = "data-source")
    public void myTestMethod(String info) {
        Reporter.log("Data provided was: " + info, true);
    }

    @DataProvider(name = "data-source")
    public Iterator<Object[]> dataOneByOne() {
        return new MyData();
    }

    private static class MyData implements Iterator<Object[]> {

        private String[] data = new String[] { "Java", "TestNG", "JUnit" };
        private int index = 0;

        @Override
        public boolean hasNext() {
            return (index <= (data.length - 1));
        }

        @Override
        public Object[] next() {
            return new Object[] { data[index++] };
        }

        @Override
        public void remove() {
            throw new UnsupportedOperationException("Removal of items is not supported");
        }
    }
}
For some reason, this issue was resolved by rebuilding my custom Firefox profile - it may have gotten corrupt.
Just posting this as an answer for reference, in case any one is bogged down by this issue.

How to use JUnit4TestAdapter with objects

I am trying to write a test suite using JUnit 4 by relying on JUnit4TestAdapter. Having a look at the code of this class, I saw that it only works with a Class as input. I would like to build a test object and set a parameter on it before running it with my TestSuite. Unfortunately, JUnit4TestAdapter builds the test using reflection (not 100% sure about the mechanism behind it), which means that I cannot change my test class at runtime.
Has anybody done anything similar before? Is there any possible workaround to this issue? Thanks for your help!
public class SimpleTest {

    @Test
    public void testBasic() {
        TemplateTester tester = new TemplateTester();
        ActionIconsTest test = new ActionIconsTest();
        test.setParameter("New Param Value");
        tester.addTests(test);
        tester.run();
    }
}

/////

public class TemplateTester {

    private TestSuite suite;

    public TemplateTester() {
        suite = new TestSuite();
    }

    public void addTests(TemplateTest... tests) {
        for (TemplateTest test : tests) {
            suite.addTest(new JUnit4TestAdapter(test.getClass()));
        }
    }

    public void run() {
        suite.run(new TestResult());
    }
}

/////

public interface TemplateTest {
}

/////

public class ActionIconsTest extends BaseTestStrategy implements TemplateTest {

    @Test
    public void icons() {
        // Test logic here
    }

    @Override
    public void navigateToTestPage() {
        // Here I need the parameter
    }
}

/////

public abstract class BaseTestStrategy {

    protected String parameter;

    @Before
    public void init() {
        navigateToTestPage();
    }

    public abstract void navigateToTestPage();

    public void setParameter(String parameter) {
        this.parameter = parameter;
    }
}
I am trying to test a web application with Selenium. The way I want to test is by splitting the functionality, e.g., I want to test the available icons (ActionIconsTest), then I'd like to test other parts like buttons, etc.
The idea behind this is to have a better categorization of the functionality available in a certain screen. This is quite coupled with the way we are currently developing our web app.
With this in mind, TemplateTest is just an interface implemented by the different kinds of tests (ActionIconsTest, ButtonTest, etc.) available in my system.
TemplateTester is a JUnit test suite containing all the different tests that implement the TemplateTest interface.
The reason for this question is that I was trying to implement a Strategy pattern and then realized the inconvenience of passing a class to JUnit4TestAdapter at runtime.
Well, taking into account that JUnit needs your tester's Class object as an object factory (so it can create several instances of your tester), I can only suggest you pass parameters to your tester through system properties.
Moreover, it's the recommended way of passing parameters: http://junit.org/faq.html#running_7
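A sketch of that approach; the property name test.parameter and the fallback value are made up, so adapt them to your setup, and launch the JVM with -Dtest.parameter="New Param Value":
public class ActionIconsTest extends BaseTestStrategy implements TemplateTest {

    public ActionIconsTest() {
        // Read the parameter in the constructor so it is set before @Before runs.
        // "test.parameter" is a hypothetical property name with a fallback default.
        setParameter(System.getProperty("test.parameter", "default value"));
    }

    // ... tests as before
}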

Is there a way to make Eclipse run a JUnit test multiple times until failure?

We occasionally have bugs that appear once in every X runs. Before people check anything in (where it is automatically run through JUnit), our devs need to pass the JUnit tests locally via Eclipse.
Is there some convenient way (built in or high-quality Plugin) to make Eclipse run the same test X times and stop if there's a failure? An alternative to just clicking Run X times?
Note that I'm looking for something in the UI (e.g., right click and say "Run X times" instead of just "Run").
If the for loop works, then I agree with nos.
If you need to repeat the entire setup-test-teardown, then you can use a TestSuite:
Right-click on the package containing the test to repeat
Go to New and choose to create a JUnit test SUITE
Make sure that only the test you want to repeat is selected and click through to finish.
Edit the file to run it multiple times.
In the file you just find the
addTestSuite(YourTestClass.class)
line, and wrap that in a for loop (see the sketch below).
I'm pretty sure that you can use addTest instead of addTestSuite to get it to only run one test from that class if you just want to repeat a single test method.
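A sketch of what the edited suite could look like; the repeat count is arbitrary, and YourTestClass stands in for your JUnit 3-style TestCase:
import junit.framework.Test;
import junit.framework.TestSuite;

public class RepeatingSuite {

    public static Test suite() {
        TestSuite suite = new TestSuite("Repeat YourTestClass");
        // 50 repetitions is an arbitrary choice; raise it for rarer failures.
        for (int i = 0; i < 50; i++) {
            suite.addTestSuite(YourTestClass.class);
        }
        return suite;
    }
}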
If you really want to run a test class until failure, you need your own runner.
@RunWith(RunUntilFailure.class)
public class YourClass {
    // ....
}
which could be implemented as follows...
package com.example;

import org.junit.internal.runners.*;
import org.junit.runner.*;
import org.junit.runner.notification.*;

public class RunUntilFailure extends Runner {

    private TestClassRunner runner;

    public RunUntilFailure(Class<?> klass) throws InitializationError {
        this.runner = new TestClassRunner(klass);
    }

    @Override
    public Description getDescription() {
        Description description = Description.createSuiteDescription("Run until failure");
        description.addChild(runner.getDescription());
        return description;
    }

    @Override
    public void run(RunNotifier notifier) {
        class L extends RunListener {
            boolean fail = false;

            public void testFailure(Failure failure) throws Exception {
                fail = true;
            }
        }
        L listener = new L();
        notifier.addListener(listener);
        while (!listener.fail) runner.run(notifier);
    }
}
...releasing untested code, feeling TDD guilt :)
Based on @akuhn's answer, here is what I came up with - rather than running forever, this will run 50 times or until failure, whichever comes first.
package com.foo;

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;

public class RunManyTimesUntilFailure extends Runner {

    private static final int MAX_RUN_COUNT = 50;
    private BlockJUnit4ClassRunner runner;

    @SuppressWarnings("unchecked")
    public RunManyTimesUntilFailure(final Class testClass) throws InitializationError {
        runner = new BlockJUnit4ClassRunner(testClass);
    }

    @Override
    public Description getDescription() {
        final Description description = Description.createSuiteDescription("Run many times until failure");
        description.addChild(runner.getDescription());
        return description;
    }

    @Override
    public void run(final RunNotifier notifier) {
        class L extends RunListener {
            boolean shouldContinue = true;
            int runCount = 0;

            @Override
            public void testFailure(@SuppressWarnings("unused") final Failure failure) throws Exception {
                shouldContinue = false;
            }

            @Override
            public void testFinished(@SuppressWarnings("unused") Description description) throws Exception {
                runCount++;
                shouldContinue = (shouldContinue && runCount < MAX_RUN_COUNT);
            }
        }
        final L listener = new L();
        notifier.addListener(listener);
        while (listener.shouldContinue) {
            runner.run(notifier);
        }
    }
}
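Usage mirrors the first runner; MyFlakyTest is a placeholder class name:
@RunWith(RunManyTimesUntilFailure.class)
public class MyFlakyTest {
    // ...
}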
I know it doesn't answer the question directly but if a test isn't passing every time it is run it is a test smell known as Erratic Test. There are several possible causes for this (from xUnit Test Patterns):
Interacting Tests
Interacting Test Suites
Lonely Test
Resource Leakage
Resource Optimism
Unrepeatable Test
Test Run War
Nondeterministic Test
The details of each of these is documented in Chapter 16 of xUnit Test Patterns.
Here is a post I wrote that shows several ways of running tests repeatedly, with code examples:
http://codehowtos.blogspot.com/2011/04/run-junit-test-repeatedly.html
You can use the @Parameterized runner, or use the special runner included in the post.
There is also a reference to a @Retry implementation.
I don't believe there's a built-in way for JUnit to do exactly what you're asking for.
If multiple runs produce different results, you should have a unit test covering that case, which might be as simple as running a for loop in the relevant test case (see the sketch below).
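Something along these lines; the repeated call is a placeholder for whatever behaves nondeterministically:
@Test
public void flakyBehaviourIsStable() {
    // Arbitrary repeat count; the assertion fails on the first bad iteration.
    for (int i = 0; i < 100; i++) {
        assertTrue("failed on iteration " + i, doTheFlakyThing());
    }
}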
It is possible to implement such a loop with TestRules (since JUnit 4.9).
A very simple implementation that runs every Test 10 times:
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class SimpleRepeatRule implements TestRule {

    private static class SimpleRepeatStatement extends Statement {

        private final Statement statement;

        private SimpleRepeatStatement(Statement statement) {
            this.statement = statement;
        }

        @Override
        public void evaluate() throws Throwable {
            for (int i = 0; i < 10; i++) {
                statement.evaluate();
            }
        }
    }

    @Override
    public Statement apply(Statement statement, Description description) {
        return new SimpleRepeatStatement(statement);
    }
}
usage:
public class Run10TimesTest {

    @Rule
    public SimpleRepeatRule repeatRule = new SimpleRepeatRule();

    @Test
    public void myTest() {...}
}
For a more useful implementation based on an annotation that defines which test method has to be executed how often, have a look at this blog:
http://www.codeaffine.com/2013/04/10/running-junit-tests-repeatedly-without-loops/
