I have a RetryAnalyzer and a RetryListener. In the listener's onTestFailure I check whether the test is retryable and, if it is, I set the result to SUCCESS. I also call testResult.getTestContext().getFailedTests().removeResult(testResult) in this method.
I remove failed results again (guarded by appropriate if conditions) in the listener's onFinish method.
The problem I am running into: I split the test classes into groups. One test class does the WRITES and one test class does the READS, so the READS group depends on WRITES.
If a test case fails on the first attempt and succeeds on retry, all the test cases in the dependent group are SKIPPED, despite the failed result being removed in onTestFailure.
Is there a way to run the dependent methods when a test case succeeds on retry? I am fine with the current behavior when a test case fails on all attempts, so I am not looking to add alwaysRun = true to every dependent method.
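For reference, the setup described above looks roughly like this (a sketch; the method names are illustrative, not from the original post):

@Test(groups = "WRITES", retryAnalyzer = RetryAnalyzer.class)
public void writeRecord() {
    // writes the data that the READS group depends on
}

@Test(groups = "READS", dependsOnGroups = "WRITES")
public void readRecord() {
    // gets SKIPPED by TestNG when the WRITES group is considered failed
}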
On retry you should remove the test from the failed tests, and be sure to remove the ITestResult object (i.e. result, not result.getMethod()).
@Override
public boolean retry(ITestResult result) {
    if (currentCount < maxRetryCount) {
        result.getTestContext().getFailedTests().removeResult(result);
        currentCount++;
        return true;
    }
    return false;
}
I was using TestNG 6.8.7 and upgraded to 6.9.5.
After that, upon retry, TestNG was marking the test case as SKIPPED. I just had to create a listener that extends TestListenerAdapter and overrides onTestSkipped: if there are retries available, remove the method from the skipped tests:
result.getTestContext().getSkippedTests().removeResult(result.getMethod());
If not, set the test to FAILURE. Now it works as expected.
In the retry class, add a mechanism to check whether a retry is left for the test case.
In the custom listener, override onTestSkipped(); if a retry is left, remove the result from the skipped tests and return.
public class Retry implements IRetryAnalyzer {

    private int count = 0;
    private static final List<ITestResult> retriedTests = new CopyOnWriteArrayList<>();
    private static final ConcurrentHashMap<String, Boolean> retriedTestsMap = new ConcurrentHashMap<>();

    @Override
    public boolean retry(ITestResult iTestResult) {
        int maxTry = 3;
        if (!iTestResult.isSuccess()) { // Check whether the test did not succeed
            String name = getNameForTestResult(iTestResult);
            if (count < maxTry) { // Check whether the maxTry count is reached
                count++; // Increase the retry count by 1
                retriedTests.add(iTestResult);
                retriedTestsMap.put(name, true); // A retry is still left for this test
                RestApiUtil.println("**" + name + " retry count " + count + " **");
                iTestResult.setStatus(ITestResult.FAILURE); // Mark the attempt as failed
                return true; // Tell TestNG to re-run the test
            } else {
                iTestResult.setStatus(ITestResult.FAILURE); // maxTry reached, test marked as failed
                retriedTestsMap.put(name, false); // No retry left
            }
        } else {
            iTestResult.setStatus(ITestResult.SUCCESS); // Test passed, TestNG marks it as passed
        }
        return false;
    }

    public static List<ITestResult> getRetriedTests() {
        return retriedTests;
    }

    public static boolean isRetryLeft(ITestResult tr) {
        return retriedTestsMap.getOrDefault(getNameForTestResult(tr), false);
    }

    private static String getNameForTestResult(ITestResult tr) {
        return tr.getTestClass().getRealClass().getSimpleName() + "::" + tr.getName();
    }
}
public class CustomTestNGListener extends TestListenerAdapter {

    @Override
    public void onTestSkipped(ITestResult tr) {
        if (Retry.isRetryLeft(tr)) {
            tr.getTestContext().getSkippedTests().removeResult(tr);
            return;
        }
        super.onTestSkipped(tr);
    }
}
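The listener and the retry analyzer still have to be wired to the tests, either via <listeners> in testng.xml or on the test class. A minimal sketch, with illustrative class and method names:

@Listeners(CustomTestNGListener.class)
public class WriteTests {

    @Test(groups = "WRITES", retryAnalyzer = Retry.class)
    public void writeRecord() {
        // test body
    }
}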
I write Selenium tests and run them in parallel via TestNG.
Some tests need to use a resource, and that resource cannot be used by one test while another test is using it.
To put it another way: I have 10 resources, and when a test starts working with one of them, only the 9 remaining resources should be available to other tests. If all 10 resources are busy and another test attempts to get one, it should wait until some test finishes execution and frees its resource.
I am trying to create a provider that controls this behaviour, but it looks like I get deadlocks, because it sometimes hangs at a synchronized method call.
My plan is for the provider to have two methods, get() and remove():
get() is called in a test method to obtain a resource
remove() is called in a method annotated with @AfterMethod; this method is a default method of a specific interface that is implemented by any class that uses a resource
Here is the provider class:
public class ResourceProvider {
private static final Logger logger = LogManager.getLogger();
private static List<Resource> freeResources;
private static Map<String, List<Resource>> resourcesInUse;
static {
freeResources = new ArrayList<>();
//here is resource initialization to fill freeResources list
resourcesInUse = new HashMap<>();
}
public static synchronized Resource get() {
String testName = Thread.currentThread().getStackTrace()[2].getClassName()
+ "." + Thread.currentThread().getStackTrace()[2].getMethodName();
Resource resource = null;
logger.info(String.format("Attempt to get resource for %s test", testName));
for (int i = 0; i < 240; i++) {
if (freeResources.isEmpty()) {
try {
Thread.sleep(5_000);
} catch (InterruptedException e) {
e.printStackTrace();
}
} else {
resource = freeResources.get(0);
if (resourcesInUse.containsKey(testName)) {
resourcesInUse.get(testName).add(resource);
} else {
List<Resource> resources = new ArrayList<>();
resources.add(resource);
resourcesInUse.put(testName, resources);
}
freeResources.remove(resource);
break;
}
}
if (resource == null) {
throw new RuntimeException(String.format("There is no free resource for '%s' in 20 minutes", testName));
}
logger.info(String.format("Resource %s used in %s", resource, testName));
return resource;
}
public static synchronized boolean remove(ITestResult result) {
String testName = result.getMethod().getTestClass().getName() + "." + result.getMethod().getMethodName();
return remove(testName);
}
public static synchronized boolean remove(String testName) {
boolean isTestUseResource = resourcesInUse.containsKey(testName);
if (isTestUseResource) {
logger.info(String.format("Removing %s resources, used in %s", resourcesInUse.get(testName), testName));
freeResources.addAll(resourcesInUse.get(testName));
resourcesInUse.remove(testName);
}
return isTestUseResource;
}
Interface:
public interface RemoveResource {

    @AfterMethod
    default void removeResource(ITestResult result) {
        ResourceProvider.remove(result);
    }
}
But this code does not work well; it sometimes hangs at the remove() call.
Can you help me understand why it hangs and how to resolve it?
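For reference, a minimal sketch (not from the original post, and simplified to one resource per test) of the same get()/remove() protocol backed by a BlockingQueue. The blocking poll() waits without holding any monitor, so remove() is never blocked by a test that is waiting for a resource:

import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class BlockingResourceProvider {

    private static final BlockingQueue<Resource> FREE = new ArrayBlockingQueue<>(10);
    private static final Map<String, Resource> IN_USE = new ConcurrentHashMap<>();

    static {
        // fill FREE with the initialized resources here
    }

    public static Resource get(String testName) throws InterruptedException {
        // poll() blocks without holding a lock, so other threads can call remove() meanwhile
        Resource resource = FREE.poll(20, TimeUnit.MINUTES);
        if (resource == null) {
            throw new RuntimeException(
                    String.format("There is no free resource for '%s' in 20 minutes", testName));
        }
        IN_USE.put(testName, resource);
        return resource;
    }

    public static boolean remove(String testName) {
        Resource resource = IN_USE.remove(testName);
        if (resource != null) {
            FREE.add(resource); // return the resource to the pool
            return true;
        }
        return false;
    }
}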
I have a method that checks, within 10 attempts, whether a Jenkins job's next build exists.
void checkNextBuild10Attempts() {
int attempt = 0
while (attempt < 10) {
Job job = JenkinsUtils.getJob(jobName)
Run nextBuild = job.getBuildByNumber(nextBuildNumber)
if (nextBuild) {
break
}
attempt++
}
if (attempt == 10) {
error
}
}
I'm trying to write a unit test for it using PipelineSpockTestBase. My unit test in short:
given:
JenkinsUtils spy = GroovySpy(JenkinsUtils, constructorArgs: [script], global: true) {
getJob(_) >> new Job(null, 'foo') {
boolean isBuildable() {
return false
}
protected SortedMap _getRuns() {
return null
}
protected void removeRun(Run run) {}
Run getBuildByNumber(int buildNumber) {
return null
}
}
getNextBuildNumber(_) >> 100
}
releaseHelper.jenkinsUtils = spy
where:
releaseHelper.checkNextBuild10Attempts()
How can I return a dummy Run object from getBuildByNumber to break the while loop?
Javadoc for Job and Run
Thank you!
I have a test method like this:
@Test
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    reportPage().createReport();
}
My goal is to generate 100 reports. My solution right now is to loop the step createReport() 100 times, like this:
@Test
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    for (int i = 0; i < 100; i++) {
        reportPage().createReport();
    }
}
It does the task, but I would like to know if there is another way to achieve this, because with this approach the test terminates as soon as something goes wrong while creating a report. I want the test to carry on until the loop ends, even if creating a report fails.
I use Selenium and TestNG.
Thanks
Use try/catch:
@Test
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    for (int i = 0; i < 100; i++) {
        try {
            reportPage().createReport();
        } catch (Exception e) {
            System.out.println("Report creation failed!");
        }
    }
}
If you are using TestNG, you can use invocationCount instead of a for loop:
@Test(invocationCount = 100)
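A sketch of the test from the question with that attribute; note that the whole method body runs on every invocation, so the login steps would also repeat 100 times:

@Test(invocationCount = 100)
public void generateReport(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    reportPage().createReport();
}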
RetryAnalyzer.class
public class RetryAnalyzer implements IRetryAnalyzer {

    int counter = 0;

    @Override
    public boolean retry(ITestResult result) {
        // check if the test method had the RetryCountIfFailed annotation
        RetryCountIfFailed annotation = result.getMethod().getConstructorOrMethod().getMethod()
                .getAnnotation(RetryCountIfFailed.class);
        // based on the value of the annotation, see if the test needs to be rerun
        if ((annotation != null) && (counter < annotation.value())) {
            counter++;
            return true;
        }
        return false;
    }
}
RetryCountIfFailed.class
@Retention(RetentionPolicy.RUNTIME)
public @interface RetryCountIfFailed {
    // Specify how many times you want to
    // retry the test if it failed.
    // The default retry count is 0.
    int value() default 0;
}
Test.class
@Test
@RetryCountIfFailed(100)
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    reportPage().createReport();
}
If this answer is not sufficient, you can refer to this link: TestNG retryAnalyzer only works when defined in methods' @Test, does not work in class' @Test
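Also note that the analyzer only runs if it is attached to the tests, either per method via @Test(retryAnalyzer = RetryAnalyzer.class) or globally. A minimal sketch of the global approach, assuming you register the transformer as a TestNG listener (the class name is illustrative):

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class RetryTransformer implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        // attach the retry analyzer to every @Test method
        annotation.setRetryAnalyzer(RetryAnalyzer.class);
    }
}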
I want to be able to run a Test class a specified number of times. The class looks like :
@RunWith(Parameterized.class)
public class TestSmithWaterman {

    private static String[] args;
    private static SmithWaterman sw;
    private Double[][] h;
    private String seq1aligned;

    @Parameters
    public static Collection<Object[]> configs() {
        // h and seq1aligned values
    }

    public TestSmithWaterman(Double[][] h, String seq1aligned) {
        this.h = h;
        this.seq1aligned = seq1aligned;
    }

    @BeforeClass
    public static void init() {
        // run Smith-Waterman once and for all
    }

    @Test
    @Repeat(value = 20) // does nothing
    // see http://codehowtos.blogspot.gr/2011/04/run-junit-test-repeatedly.html
    public void testCalculateMatrices() {
        assertEquals(h, sw.getH());
    }

    @Test
    public void testAlignSeq1() {
        assertEquals(seq1aligned, sw.getSeq1Aligned());
    }

    // etc
}
Any of the tests above may fail (concurrency bugs; EDIT: the failures provide useful debug info), so I want to be able to run the class multiple times and preferably have the results grouped somehow. I tried the Repeat annotation, but it is test specific (and I did not really make it work, see above), and I struggled with RepeatedTest.class, which does not seem to carry over to JUnit 4; the closest I found on SO is this, but apparently it is JUnit 3. In JUnit 4 my suite looks like:
@RunWith(Suite.class)
@SuiteClasses({ TestSmithWaterman.class })
public class AllTests {}
and I see no way to run this multiple times.
Parameterized with empty options is not really an option, as I need my params anyway.
So I am stuck hitting Ctrl+F11 in Eclipse again and again.
Help
EDIT (2017.01.25): someone went ahead and flagged this as a duplicate of the question whose accepted answer, as I explicitly say, does not apply here.
As suggested by @MatthewFarwell in the comments, I implemented a test rule as per his answer
public static class Retry implements TestRule {

    private final int retryCount;

    public Retry(int retryCount) {
        this.retryCount = retryCount;
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {

            @Override
            @SuppressWarnings("synthetic-access")
            public void evaluate() throws Throwable {
                Throwable caughtThrowable = null;
                int failuresCount = 0;
                for (int i = 0; i < retryCount; i++) {
                    try {
                        base.evaluate();
                    } catch (Throwable t) {
                        caughtThrowable = t;
                        System.err.println(description.getDisplayName()
                                + ": run " + (i + 1) + " failed:");
                        t.printStackTrace();
                        ++failuresCount;
                    }
                }
                if (caughtThrowable == null) return;
                throw new AssertionError(description.getDisplayName()
                        + ": failures " + failuresCount + " out of "
                        + retryCount + " tries. See last throwable as the cause.",
                        caughtThrowable);
            }
        };
    }
}
as a nested class in my test class - and added
@Rule
public Retry retry = new Retry(69);
before my test methods in the same class.
This indeed does the trick: it repeats the test 69 times, and if any exception occurs, a new AssertionError is thrown with a message containing some statistics plus the original Throwable as the cause. The statistics are therefore also visible in the JUnit view of Eclipse.
I have an existing framework built on JFunc, which provides a facility to continue execution even when one of the asserts in a test case fails. JFunc uses the JUnit 3.x framework. Now we are migrating to JUnit 4, so I can no longer use JFunc and have replaced it with the junit 4.10 jar.
The problem is that we have used JFunc extensively in our framework, and with JUnit 4 we want the code to continue execution even when one of the asserts in a test case fails.
Does anyone have any suggestion/idea for this? I know that in JUnit tests should be more atomic, i.e. one assert per test case, but we cannot do that in our framework for some reason.
You can do this using an ErrorCollector rule.
To use it, first add the rule as a field in your test class:
public class MyTest {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    // ...tests...
}
Then replace your asserts with calls to collector.checkThat(...).
e.g.
@Test
public void myTest() {
    collector.checkThat("a", equalTo("b"));
    collector.checkThat(1, equalTo(2));
}
I use the ErrorCollector too, but I also use assertThat and place the calls in a try/catch block.
import static org.junit.Assert.*;
import static org.hamcrest.Matchers.*;

@Rule
public ErrorCollector collector = new ErrorCollector();

@Test
public void calculatedValueShouldEqualExpected() {
    try {
        assertThat(calculatedValue(), is(expected));
    } catch (Throwable t) {
        collector.addError(t);
        // do something
    }
}
You can also use AssertJ soft assertions:
@Test
public void testCollectErrors() {
    SoftAssertions softly = new SoftAssertions();
    softly.assertThat(true).isFalse();
    softly.assertThat(false).isTrue();
    // Don't forget to call the SoftAssertions global verification!
    softly.assertAll();
}
There are also ways to use it without manually invoking softly.assertAll():
with a rule
with AutoCloseable
using the static assertSoftly method (sketched below)
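A minimal sketch of the assertSoftly variant; it calls assertAll() for you when the lambda returns:

import org.assertj.core.api.SoftAssertions;

@Test
public void testCollectErrorsSoftly() {
    SoftAssertions.assertSoftly(softly -> {
        softly.assertThat(true).isFalse();
        softly.assertThat(false).isTrue();
        // assertAll() is invoked automatically at the end of the lambda
    });
}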
Use try/finally blocks. This worked in my case:
...
try {
assert(...)
} finally {
// code to be executed after assert
}
...
Try/catch: in the try, use the assertion; in the catch, add the possible error to a collection.
Then throw the exception at the end of the test, in tearDown().
So if there is a failure/error in an assert, it will be caught and the test will continue.
(The collection in the example is static; you can also create a new instance in setUp() for each @Test.)
public static List<String> errors = new ArrayList<>();

// inside each @Test method:
try {
    // some assert...
}
catch (AssertionError error) {
    errors.add(error.toString());
}

@After
public void tearDown() {
    try {
        if (!errors.isEmpty()) {
            throw new AssertionError(errors);
        }
    } finally {
        // empty the list because it is static; alternatively create a new instance for each test in setUp()
        errors.clear();
    }
}
I created my own simple assertions class. Easy to extend with your use-cases:
public class MyEquals {

    public static void checkTestSummary(MyTestSummary myTestSummary) {
        final List<MyTestResult> conditions = myTestSummary.getTestResults();
        final int total = conditions.size();
        final boolean isSuccessful = myTestSummary.isSuccessful();
        if (isSuccessful) {
            System.out.println(format("All [%s] conditions are successful!", total));
        } else {
            final List<MyTestResult> failedConditions =
                    conditions.stream().filter(c -> !c.isTestResult()).collect(Collectors.toList());
            System.out.println(format("\nNot yet.. [%s out of %s] conditions are failed", failedConditions.size(), total));
        }
        if (!isSuccessful) {
            for (int i = 0; i < total; i++) {
                final MyTestResult myTestResult = conditions.get(i);
                if (myTestResult.isTestResult()) {
                    System.out.println(format("   Success [%s of %s] => Expected %s Actual %s Good!", i + 1, total, myTestResult.getExpected(), myTestResult.getActual()));
                } else {
                    System.out.println(format("!! Failed  [%s of %s] => Expected %s Actual %s", i + 1, total, myTestResult.getExpected(), myTestResult.getActual()));
                }
            }
        }
        assertTrue(isSuccessful);
    }

    public static void myAssertEquals(MyTestSummary myTestSummary, Object expected, Object actual) {
        if (checkEquals(expected, actual)) {
            assertEquals(expected, actual);
            myTestSummary.addSuccessfulResult(expected, actual);
        } else {
            myTestSummary.addFailedResult(expected, actual);
            myTestSummary.setSuccessful(false);
        }
    }

    public static boolean checkEquals(Object value1, Object value2) {
        if (value1 == null && value2 == null) {
            return true;
        } else if (value1 == null || value2 == null) {
            return false;
        } else {
            return value1.equals(value2);
        }
    }
}
@Builder
@Value
public class MyTestResult {
    String expected;
    String actual;
    boolean testResult;
}
@Data
public class MyTestSummary {

    private boolean successful = true;
    private List<MyTestResult> testResults = new ArrayList<>();

    public MyTestSummary() {
    }

    public void addSuccessfulResult(Object expected, Object actual) {
        getTestResults().add(MyTestResult.builder()
                .expected(String.valueOf(expected))
                .actual(String.valueOf(actual))
                .testResult(true)
                .build()
        );
    }

    public void addFailedResult(Object expected, Object actual) {
        getTestResults().add(MyTestResult.builder()
                .expected(String.valueOf(expected))
                .actual(String.valueOf(actual))
                .testResult(false)
                .build()
        );
    }
}
Usage in the JUnit test:
@Test
public void testThat() {
    MyTestSummary myTestSummary = new MyTestSummary();
    myAssertEquals(myTestSummary, 10, 5 + 5);
    myAssertEquals(myTestSummary, "xxx", "x" + "x");
    checkTestSummary(myTestSummary);
}
Output:
Not yet.. [1 out of 2] conditions are failed
Success [1 of 2] => Expected 10 Actual 10 Good!
!! Failed [2 of 2] => Expected xxx Actual xx
org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
Expected :true
Actual :false
Another option is the observer pattern in conjunction with lambda expressions. You can use something like the code below.
public class MyTestClass {

    private final List<Consumer<MyTestClass>> AFTER_EVENT = new ArrayList<>();

    @After
    public void tearDown() {
        AFTER_EVENT.stream().forEach(c -> c.accept(this));
    }

    @Test
    public void testCase() {
        //=> Arrange
        AFTER_EVENT.add((o) -> {
            // do something after an assertion fails.
        });

        //=> Act

        //=> Assert
        Assert.assertTrue(false);
    }
}