Selenium to repeat a specific test step - java

I have a test method like this:
@Test
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    reportPage().createReport();
}
My goal is to generate 100 reports. My solution right now is to loop the step createReport() 100 times, like this:
@Test
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    for (int i = 0; i < 100; i++) {
        reportPage().createReport();
    }
}
It does the task, but I would like to know if there is another way to achieve this, because with this approach the test terminates as soon as something goes wrong while creating a report. I want the test to carry on when creating a report fails, until the loop ends.
I use Selenium and TestNG.
Thanks

Use try/catch:
@Test
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    for (int i = 0; i < 100; i++) {
        try {
            reportPage().createReport();
        } catch (Exception e) {
            System.out.println("Report creation failed!");
        }
    }
}
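If you also want the run to be reported as failed when any iteration broke, you can count the failures and assert once at the end. A minimal sketch of that idea, reusing the page objects from the question:

@Test
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    int failures = 0;
    for (int i = 0; i < 100; i++) {
        try {
            reportPage().createReport();
        } catch (Exception e) {
            failures++;
            System.out.println("Report " + i + " failed: " + e.getMessage());
        }
    }
    // Fail once at the end so a broken iteration is not silently swallowed
    org.testng.Assert.assertEquals(failures, 0, failures + " report creations failed");
}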

If you are using TestNG you can simply set an invocation count on the test instead of writing the for loop yourself:
#Test(invocationCount = 100)
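Note that invocationCount re-runs the whole method, so as written the login steps would also repeat 100 times. Moving them into a @BeforeClass method keeps the repeated body down to the single step, and each failing invocation is reported individually while the remaining invocations still run. A sketch, assuming the page objects and credentials from the question are available to the test class:

@BeforeClass
public void openReportManagement() {
    // performed once; clientname/username would come from e.g. @Parameters
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
}

@Test(invocationCount = 100)
public void generateReport() {
    reportPage().createReport(); // each of the 100 invocations passes or fails on its own
}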

RetryAnalyzer.class
public class RetryAnalyzer implements IRetryAnalyzer {
    int counter = 0;

    @Override
    public boolean retry(ITestResult result) {
        // check if the test method had the RetryCountIfFailed annotation
        RetryCountIfFailed annotation = result.getMethod().getConstructorOrMethod().getMethod()
                .getAnnotation(RetryCountIfFailed.class);
        // based on the value of the annotation, see if the test needs to be rerun
        if ((annotation != null) && (counter < annotation.value())) {
            counter++;
            return true;
        }
        return false;
    }
}
RetryCountIfFailed.class
@Retention(RetentionPolicy.RUNTIME)
public @interface RetryCountIfFailed {
    // Specify how many times you want to
    // retry the test if it failed.
    // Default value of the retry count is 0.
    int value() default 0;
}
Test.class
@Test(retryAnalyzer = RetryAnalyzer.class)
@RetryCountIfFailed(100)
public void generateReports(String clientname, String username) {
    loginPage().login(clientname, username);
    loginPage().chooseReportManagement();
    reportPage().createReport();
}
Note that the analyzer must also be registered with TestNG, which is what retryAnalyzer = RetryAnalyzer.class on the @Test annotation above does; the annotation alone is not picked up.
You can refer to this link if this answer isn't enough: TestNG retryAnalyzer only works when defined in methods #Test, does not work in class' #Test

Related

Hystrix CircuitBreakerSleepWindowInMilliseconds doesn't work as expected

I am testing a Hystrix CircuitBreaker implementation. This is what the command class looks like:
public class CommandOne extends HystrixCommand<String> {
    private MyExternalService service;
    public static int runCount = 0;

    public CommandOne(MyExternalService service) {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("AAA"))
                .andThreadPoolPropertiesDefaults(
                        HystrixThreadPoolProperties.Setter()
                                .withMetricsRollingStatisticalWindowInMilliseconds(10000))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        .withCircuitBreakerEnabled(true)
                        .withCircuitBreakerErrorThresholdPercentage(20)
                        .withCircuitBreakerRequestVolumeThreshold(10)
                        .withExecutionTimeoutInMilliseconds(30)
                        .withCircuitBreakerSleepWindowInMilliseconds(100000)));
        this.service = service;
    }

    @Override
    protected String run() {
        runCount++;
        return service.callMethod();
    }

    @Override
    protected String getFallback() {
        return "default";
    }
}
The command is called like this:
public class AnotherClass {
    private MyExternalService service;

    public String callCmd() {
        CommandOne command = new CommandOne(service);
        return command.execute();
    }
}
In the test I perform these steps:
@Test
public void test() {
    AnotherClass anotherClass = new AnotherClass();
    // stubbing an exception on my service
    when(service.callMethod()).thenThrow(new RuntimeException());
    for (int i = 0; i < 1000; i++) {
        anotherClass.callCmd();
    }
    System.out.println("Run method was called times = " + CommandOne.runCount);
}
What I expect with the given command configuration: MyExternalService.callMethod() should be called 10 times (the request volume threshold) and after that not be called for 100000 ms (a long time). So in my test case I expect CommandOne.runCount = 10.
But in reality I get 150 to 200 calls of MyExternalService.callMethod() (CommandOne.runCount = 150-200). Why is this happening? What did I do wrong?
According to the Hystrix docs, a health snapshot is taken once per 500 ms (by default). This means that whatever happens with Hystrix during the first 500 ms does not affect the circuit breaker status. In your example you got a random value of runCount because each time your machine executed a random number of requests within those 500 ms, and only after that interval was the circuit state updated and opened.
Please take a look at a slightly simplified example:
public class CommandOne extends HystrixCommand<String> {
    private String content;
    public static int runCount = 0;

    public CommandOne(String s) {
        super(Setter.withGroupKey
                (HystrixCommandGroupKey.Factory.asKey("SnapshotIntervalTest"))
                .andCommandPropertiesDefaults(
                        HystrixCommandProperties.Setter()
                                .withCircuitBreakerSleepWindowInMilliseconds(500000)
                                .withCircuitBreakerRequestVolumeThreshold(9)
                                .withMetricsHealthSnapshotIntervalInMilliseconds(50)
                                .withMetricsRollingStatisticalWindowInMilliseconds(100000)
                )
        );
        this.content = s;
    }

    @Override
    public String run() throws Exception {
        Thread.sleep(100);
        runCount++;
        if ("".equals(content)) {
            throw new Exception();
        }
        return content;
    }

    @Override
    protected String getFallback() {
        return "FAILURE-" + content;
    }
}
@Test
void test() {
    for (int i = 0; i < 100; i++) {
        CommandOne commandOne = new CommandOne(""); // empty content forces run() to throw
        commandOne.execute();
    }
    Assertions.assertEquals(10, CommandOne.runCount);
}
In this example I've added:
withMetricsHealthSnapshotIntervalInMilliseconds(50) to let Hystrix take a snapshot every 50 ms.
Thread.sleep(100); to make requests a bit slower; without it they complete faster than 50 ms and we hit the original issue again.
Despite all these modifications I still saw occasional random failures. This led me to the conclusion that testing Hystrix like this is not a good idea. Instead we could use:
1) Fallback/success flow behavior, by manually forcing the circuit open or closed (see the sketch after this list)
2) Configuration tests
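For option 1), Hystrix exposes a circuitBreaker.forceOpen property, so the fallback path can be exercised deterministically instead of waiting for metrics to accumulate. A minimal sketch under that assumption, reusing the CommandOne class above and Archaius' ConfigurationManager (which ships with Hystrix):

@Test
void fallbackIsUsedWhenCircuitIsForcedOpen() {
    // forceOpen short-circuits every execution: run() is never called
    // and the fallback is returned immediately
    ConfigurationManager.getConfigInstance()
            .setProperty("hystrix.command.default.circuitBreaker.forceOpen", true);
    try {
        Assertions.assertEquals("FAILURE-content", new CommandOne("content").execute());
    } finally {
        // reset so other tests see a normally behaving breaker
        ConfigurationManager.getConfigInstance()
                .setProperty("hystrix.command.default.circuitBreaker.forceOpen", false);
    }
}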

TestNG: RetryAnalyzer, dependent groups are skipped if a test succeeds upon retry

I have a RetryAnalyzer and a RetryListener. In the listener's onTestFailure I check if the test is retryable; if yes, I set the result to SUCCESS. I also call testResult.getTestContext().getFailedTests().removeResult(testResult) in this method.
I remove failed results again (with valid if conditions) in the listener's onFinish method.
The problem I am running into is this: I split the test classes into groups. One test class does the WRITES and one test class does the READS, so the READS group depends on WRITES.
If a test case fails on the first attempt and succeeds on a retry, then all the test cases in the dependent group are SKIPPED, despite the failed result being removed in onTestFailure.
Is there a way to run the dependent methods if a test case succeeds on retrying? I am fine with the behavior when the test case fails on all attempts, so I am not looking to add alwaysRun=true to each dependent method.
On retry you should be removing the test from the failed tests, and please be sure to remove the ITestResult object (i.e. result, not result.getMethod()):
private int currentCount = 0;
private static final int maxRetryCount = 3; // declared here for completeness; pick your own limit

@Override
public boolean retry(ITestResult result) {
    if (currentCount < maxRetryCount) {
        result.getTestContext().getFailedTests().removeResult(result);
        currentCount++;
        return true;
    }
    return false;
}
I was using TestNG 6.8.7 and upgraded to 6.9.5.
After that, upon retry, TestNG was marking the test case as SKIPPED. I just had to create a listener which extends TestListenerAdapter and overrides onTestSkipped: if there are retries left, remove the method from the skipped tests:
result.getTestContext().getSkippedTests().removeResult(result.getMethod());
If not, set the test to FAILURE. Now it works as expected.
In the retry class, add a mechanism to check whether a retry is left for the case.
In the custom listener, override onTestSkipped(), and if a retry is left, remove the result from the skipped tests and return:
public class Retry implements IRetryAnalyzer {
    private int count = 0;
    private static final List<ITestResult> retriedTests = new CopyOnWriteArrayList<>();
    private static final ConcurrentHashMap<String, Boolean> retriedTestsMap = new ConcurrentHashMap<>();

    @Override
    public boolean retry(ITestResult iTestResult) {
        int maxTry = 3;
        if (!iTestResult.isSuccess()) { // Check if the test did not succeed
            String name = getNameForTestResult(iTestResult);
            if (count < maxTry) { // Check if the maxTry count is reached
                count++; // Increase the retry count by 1
                retriedTests.add(iTestResult);
                retriedTestsMap.put(name, true);
                RestApiUtil.println("**" + name + " retry count " + count + " **");
                iTestResult.setStatus(ITestResult.FAILURE); // Mark test as failed
                return true; // Tells TestNG to re-run the test
            } else {
                iTestResult.setStatus(ITestResult.FAILURE); // If maxTry is reached, the test is marked as failed
                retriedTestsMap.put(name, true);
            }
        } else {
            iTestResult.setStatus(ITestResult.SUCCESS); // If the test passes, TestNG marks it as passed
        }
        return false;
    }

    public static List<ITestResult> getRetriedTests() {
        return retriedTests;
    }

    public static boolean isRetryLeft(ITestResult tr) {
        // getOrDefault avoids an NPE for tests that were never retried
        return retriedTestsMap.getOrDefault(getNameForTestResult(tr), false);
    }

    private static String getNameForTestResult(ITestResult tr) {
        return tr.getTestClass().getRealClass().getSimpleName() + "::" + tr.getName();
    }
}
public class CustomTestNGListener extends TestListenerAdapter {
    @Override
    public void onTestSkipped(ITestResult tr) {
        if (Retry.isRetryLeft(tr)) {
            tr.getTestContext().getSkippedTests().removeResult(tr);
            return;
        }
        super.onTestSkipped(tr);
    }
}
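To wire the two classes up, the analyzer goes on the @Test annotation and the listener on the class (or in the suite XML). A minimal sketch; the class and method names here are placeholders, only the annotations matter:

@Listeners(CustomTestNGListener.class)
public class ReportTests {

    @Test(groups = "writes", retryAnalyzer = Retry.class)
    public void writeReport() {
        // flaky write step that may need a retry
    }

    @Test(groups = "reads", dependsOnGroups = "writes")
    public void readReport() {
        // runs as long as the writes group ultimately passed
    }
}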

Can we use Spring-cloud-netflix and Hystrix to retry failed execution

I am using the Spring-Cloud-Netflix library.
I wonder if there is a way to take this code and configure it so that, instead of executing the fallback method right away, it retries the execution N times and only after N failures executes the fallback method:
@HystrixCommand(fallbackMethod = "defaultInvokcation")
public String getRemoteBro(String name) {
    return executeRemoteService(name);
}

private String defaultInvokcation(String name) {
    return "something";
}
Thanks,
ray.
From my comment:
Handle this behavior in your own code; it's not the job of Hystrix to know your "special" business logic. As an example:
private final static int MAX_RETRIES = 5;

@HystrixCommand(fallbackMethod = "defaultInvokcation")
public String getRemoteBro(String name) {
    return executeRemoteService(name);
}

private String executeRemoteService(String serviceName) {
    for (int i = 0; i < MAX_RETRIES; i++) {
        try {
            return reallyExecuteRemoteService(serviceName);
        } catch (ServiceException se) {
            // handle or log the exception
        }
    }
    throw new RuntimeException("bam");
}
I don't know if you prefer to use an exception inside the loop ;) You could also wrap the answer from reallyExecuteRemoteService in some kind of ServiceReturnMessage with a status code.
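A sketch of that wrapper idea; ServiceReturnMessage and its methods are hypothetical names for illustration, not part of any library:

// Hypothetical wrapper: carries either a payload or a failure status
public class ServiceReturnMessage {
    private final String payload;       // null on failure
    private final String statusMessage;

    private ServiceReturnMessage(String payload, String statusMessage) {
        this.payload = payload;
        this.statusMessage = statusMessage;
    }

    public static ServiceReturnMessage ok(String payload) {
        return new ServiceReturnMessage(payload, "OK");
    }

    public static ServiceReturnMessage failed(String reason) {
        return new ServiceReturnMessage(null, reason);
    }

    public boolean isOk() { return payload != null; }
}

private ServiceReturnMessage executeRemoteService(String serviceName) {
    for (int i = 0; i < MAX_RETRIES; i++) {
        try {
            return ServiceReturnMessage.ok(reallyExecuteRemoteService(serviceName));
        } catch (ServiceException se) {
            // log and retry
        }
    }
    return ServiceReturnMessage.failed("all " + MAX_RETRIES + " attempts failed");
}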

How to make a TestNG @DataProvider which returns several objects return only one?

I have the following @DataProvider:
@DataProvider(name = "CredentialsProvider", parallel = true)
public static Object[][] credentialsProvider() {
    ...
    for (int i = 0; i < login.size(); i++) {
        credentials[i] = new Object[] {login.get(i)[0], password.get(i)[0]};
    }
    return credentials;
}
It is used to generate credentials for tests which are run in parallel:
@Test(dataProvider = "CredentialsProvider")
public void Login(String login, String password)
But sometimes I want to use the same @DataProvider in a test with only a single run. I expected that using invocationCount on the @Test method would help with it, but I was wrong. Is there any way to have the @DataProvider supply only one data set, regardless of the number of objects the provider returns, without changing its sources?
AFAIK you can only deal with this issue on the data provider side.
@DataProvider(name = "CredentialsProvider", parallel = true)
public static Object[][] credentialsProvider(Method method) { ... }

@DataProvider(name = "CredentialsProvider", parallel = true)
public static Object[][] credentialsProvider(ITestContext context) { ... }
In both cases you can get information from the context of the test cases which use the data provider. In the first case, for example, method.getName() gives you the name of the @Test method. In the second case, context.getName() gives you the name of the test case (<test name="TestName">) inside the test suite.
And I meant something like this:
for (int i = 0; i < login.size(); i++) {
    credentials[i] = new Object[] {login.get(i)[0], password.get(i)[0]};
    if (i > MAX_COUNT && "EXPECTED_TEST_NAME".equals(context.getName())) { break; }
}
Make your DataProvider accept an argument of type Method, and write your own custom annotation to handle this. Something like:
@DataProvider(name = "CredentialsProvider", parallel = true)
public static Object[][] credentialsProvider(Method method) {
    //code to extract custom annotation value
    ....
}

@Test(dataProvider = "CredentialsProvider")
@RunCount(1)
public void test(String login, String password) {
    ....
}
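A sketch of what the annotation and the extraction code could look like; @RunCount is the custom annotation from this answer, loadAllCredentials() is a hypothetical helper standing in for the elided provider body, and the rest mirrors the RetryCountIfFailed pattern shown earlier:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface RunCount {
    int value() default Integer.MAX_VALUE; // default: use every row
}

@DataProvider(name = "CredentialsProvider", parallel = true)
public static Object[][] credentialsProvider(Method method) {
    Object[][] allCredentials = loadAllCredentials(); // hypothetical helper
    RunCount runCount = method.getAnnotation(RunCount.class);
    if (runCount == null || runCount.value() >= allCredentials.length) {
        return allCredentials;
    }
    // Trim the data set to the requested number of rows
    return Arrays.copyOfRange(allCredentials, 0, runCount.value());
}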

Continuing test execution in junit4 even when one of the asserts fails

I have my existing framework built with Jfunc, which provides a facility to continue execution even when one of the asserts in a test case fails. Jfunc uses the JUnit 3.x framework. But now we are migrating to JUnit 4, so I can't use Jfunc anymore and have replaced it with the junit 4.10 jar.
The problem is that we have used Jfunc extensively in our framework, and with JUnit 4 we want the code to continue execution even when one of the asserts in a test case fails.
Does anyone have any suggestions or ideas for this? I know that in JUnit tests should be more atomic, i.e. one assert per test case, but we can't do that in our framework for certain reasons.
You can do this using an ErrorCollector rule.
To use it, first add the rule as a field in your test class:
public class MyTest {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    // ... tests ...
}
Then replace your asserts with calls to collector.checkThat(...).
e.g.
@Test
public void myTest() {
    collector.checkThat("a", equalTo("b"));
    collector.checkThat(1, equalTo(2));
}
I use the ErrorCollector too, but I also use assertThat and place the calls in a try/catch block.
import static org.junit.Assert.*;
import static org.hamcrest.Matchers.*;

@Rule
public ErrorCollector collector = new ErrorCollector();

@Test
public void calculatedValueShouldEqualExpected() {
    try {
        assertThat(calculatedValue(), is(expected));
    } catch (Throwable t) {
        collector.addError(t);
        // do something
    }
}
You can also use AssertJ's soft assertions:
@Test
public void testCollectErrors() {
    SoftAssertions softly = new SoftAssertions();
    softly.assertThat(true).isFalse();
    softly.assertThat(false).isTrue();
    // Don't forget to call the SoftAssertions global verification!
    softly.assertAll();
}
There are also other ways to use it without manually invoking softly.assertAll():
with a rule
with auto-closeable
using the static assertSoftly method (see the sketch below)
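For example, the static variant calls assertAll() for you when the lambda returns. A minimal sketch, assuming AssertJ is on the classpath:

import static org.assertj.core.api.SoftAssertions.assertSoftly;

@Test
public void testCollectErrorsWithAssertSoftly() {
    // assertAll() is invoked automatically after the lambda body runs
    assertSoftly(softly -> {
        softly.assertThat(true).isFalse();
        softly.assertThat(false).isTrue();
    });
}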
Use try/finally blocks. This worked in my case:
...
try {
    assert(...)
} finally {
    // code to be executed after the assert
}
...
Try/catch: in the "try" use the assertion, in the "catch" add the possible error to a collection. Then throw the exception at the end of the test, in tearDown(). So if there is a failure/error in an assert, it will be caught and the test will continue.
(The collection in the example is static; you can also make a new instance in setUp() for each @Test.)
public static List<String> errors = new ArrayList<>();

// inside a @Test method:
try {
    // some assert...
} catch (AssertionError error) {
    errors.add(error.toString());
}

@After
public void tearDown() {
    try {
        if (!errors.isEmpty()) {
            throw new AssertionError(errors);
        }
    } finally {
        // empty the list because it's static; alternatively make an instance for each test in setUp()
        errors.clear();
    }
}
I created my own simple assertions class. It is easy to extend with your own use-cases:
public class MyEquals {

    public static void checkTestSummary(MyTestSummary myTestSummary) {
        final List<MyTestResult> conditions = myTestSummary.getTestResults();
        final int total = conditions.size();
        final boolean isSuccessful = myTestSummary.isSuccessful();
        if (isSuccessful) {
            System.out.println(format("All [%s] conditions are successful!", total));
        } else {
            // keep only the failed conditions (testResult == false)
            final List<MyTestResult> failedConditions = conditions.stream()
                    .filter(r -> !r.isTestResult())
                    .collect(Collectors.toList());
            System.out.println(format("\nNot yet.. [%s out of %s] conditions are failed", failedConditions.size(), total));
        }
        if (!isSuccessful) {
            for (int i = 0; i < total; i++) {
                final MyTestResult myTestResult = conditions.get(i);
                if (myTestResult.isTestResult()) {
                    System.out.println(format("   Success [%s of %s] => Expected %s Actual %s Good!", i + 1, total, myTestResult.getExpected(), myTestResult.getActual()));
                } else {
                    System.out.println(format("!! Failed  [%s of %s] => Expected %s Actual %s", i + 1, total, myTestResult.getExpected(), myTestResult.getActual()));
                }
            }
        }
        assertTrue(isSuccessful);
    }

    public static void myAssertEquals(MyTestSummary myTestSummary, Object expected, Object actual) {
        if (checkEquals(expected, actual)) {
            assertEquals(expected, actual);
            myTestSummary.addSuccessfulResult(expected, actual);
        } else {
            myTestSummary.addFailedResult(expected, actual);
            myTestSummary.setSuccessful(false);
        }
    }

    public static boolean checkEquals(Object value1, Object value2) {
        if (value1 == null && value2 == null) {
            return true;
        }
        if (value1 == null || value2 == null) {
            return false;
        }
        return value1.equals(value2);
    }
}
@Builder
@Value
public class MyTestResult {
    String expected;
    String actual;
    boolean testResult;
}

@Data
public class MyTestSummary {
    private boolean successful = true;
    private List<MyTestResult> testResults = new ArrayList<>();

    public MyTestSummary() {
    }

    public void addSuccessfulResult(Object expected, Object actual) {
        getTestResults().add(MyTestResult.builder()
                .expected(String.valueOf(expected))
                .actual(String.valueOf(actual))
                .testResult(true)
                .build()
        );
    }

    public void addFailedResult(Object expected, Object actual) {
        getTestResults().add(MyTestResult.builder()
                .expected(String.valueOf(expected))
                .actual(String.valueOf(actual))
                .testResult(false)
                .build()
        );
    }
}
Usage in the JUnit test:
@Test
public void testThat() {
    MyTestSummary myTestSummary = new MyTestSummary();
    myAssertEquals(myTestSummary, 10, 5 + 5);
    myAssertEquals(myTestSummary, "xxx", "x" + "x");
    checkTestSummary(myTestSummary);
}
Output:
Not yet.. [1 out of 2] conditions are failed
Success [1 of 2] => Expected 10 Actual 10 Good!
!! Failed [2 of 2] => Expected xxx Actual xx
org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
Expected :true
Actual :false
Another option is the observer pattern in conjunction with lambda expressions. You can use something like the following.
public class MyTestClass {
    private final List<Consumer<MyTestClass>> AFTER_EVENT = new ArrayList<>();

    @After
    public void tearDown() {
        AFTER_EVENT.stream().forEach(c -> c.accept(this));
    }

    @Test
    public void testCase() {
        //=> Arrange
        AFTER_EVENT.add((o) -> {
            // do something after an assertion fails.
        });
        //=> Act
        //=> Assert
        Assert.assertTrue(false);
    }
}
