Using SoftAssert with Parallel Execution in TestNG produces unexpected results

I am using the rest-assured library to test our REST API that serves sports data. In short, I have two different @Test methods per sport: one @Test method makes multiple GET requests to gather all athlete image URLs and store them in a static ArrayList, and the other instantiates a SoftAssert object, calls each of the URLs in a for loop, and soft-asserts a 200 response code. I then call assertAll() at the end of the second test method.
For example, I have a @Test getSoccerAthletes() which gathers all the URLs from the response; the method repeats until all athlete URLs are gathered, since each response is limited to 250 athletes. After this method finishes, the second @Test method for soccer, testSoccerAthletes(), executes; you can see that it uses dependsOnMethods. Below is the setup.
@Test(priority = 1)
public static void getSoccerAthletes() {
    baseURI = "https://XXXXX";
    basePath = "XXXXX";
    Response res = given().queryParam("apikey", "XXXXXXXX")
            .queryParam("top", "250")
            .queryParam("skip", soccerSkip)
            .when().get();
    Assert.assertEquals(res.statusCode(), 200);
    JsonPath jPath = res.jsonPath();
    System.out.println("Soccer: currentPageStart/totalCount " + jPath.getInt("currentPageStart")
            + "/" + jPath.getInt("totalCount"));
    if (soccerSkip == 0) {
        allSoccerAthletes = jPath.get("page.links.headshots.full");
    } else {
        ArrayList<String> athletes = jPath.get("page.links.headshots.full");
        allSoccerAthletes.addAll(athletes);
    }
    if (jPath.getInt("currentPageStart") + 250 < jPath.getInt("totalCount")) {
        soccerSkip += 250;
        getSoccerAthletes();
    }
}
@Test(priority = 2, dependsOnMethods = {"getSoccerAthletes"})
public static void testSoccerAthletes() {
    SoftAssert softAssert = new SoftAssert();
    for (int i = 0; i < allSoccerAthletes.size(); i++) {
        String url = allSoccerAthletes.get(i);
        System.out.println("Soccer Athlete: " + i + "/" + allSoccerAthletes.size());
        Response res = when().head(url);
        softAssert.assertEquals(res.statusCode(), 200, "Failed url: " + url);
        if (i == allSoccerAthletes.size() - 1)
            allSoccerAthletes.clear();
    }
    softAssert.assertAll();
}
I am seeing varying results, some of which have failures from the @Test testXXXXAthletes methods mixed up between sports. The first suggestion I found online was to instantiate a SoftAssert object in each @Test method, which I already do in every testXXXXAthletes method, so that can't be it.
I am starting to lean toward thread-safety issues, but I am not really sure how to move forward with a solution. My suspicion comes from some articles I have looked into but do not fully understand:
https://learn2automate.wordpress.com/2017/07/13/parallel-testng-soft-assertions-the-right-way/
Retrieve test name on TestNG
https://github.com/rest-assured/rest-assured/issues/1420
Any help in resolving my issues would be greatly appreciated! By the way, I have also looked into the @DataProvider annotation, and that raises other questions about the structure of these tests. The getXXXAthletes methods act as data providers for the testXXXAthletes methods, but they use the dependsOnMethods attribute. Should I be using both @DataProvider and dependsOnMethods, or one over the other?
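For comparison, here is a rough sketch of the @DataProvider shape this question asks about. The class name and the gatherAthleteUrls helper (and its stand-in body) are made up for illustration; only the TestNG and rest-assured calls are real API:
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import static io.restassured.RestAssured.when;

public class SoccerAthleteTest {

    // Replaces getSoccerAthletes(): gather every headshot URL up front and
    // hand each one to the test method as a parameter.
    @DataProvider(name = "soccerAthleteUrls")
    public Object[][] soccerAthleteUrls() {
        List<String> urls = gatherAthleteUrls(); // hypothetical helper doing the paged GETs
        Object[][] data = new Object[urls.size()][1];
        for (int i = 0; i < urls.size(); i++) {
            data[i][0] = urls.get(i);
        }
        return data;
    }

    // TestNG calls this once per URL, so failures are reported per athlete and
    // no SoftAssert, dependsOnMethods, or shared static list is needed.
    @Test(dataProvider = "soccerAthleteUrls")
    public void soccerAthleteHeadshotResponds(String url) {
        when().head(url).then().statusCode(200);
    }

    // Hypothetical helper: page through the API 250 athletes at a time, as
    // getSoccerAthletes() does, accumulating the page.links.headshots.full values.
    private List<String> gatherAthleteUrls() {
        return new ArrayList<>(); // stand-in body; the real paging logic goes here
    }
}
With this structure the provider replaces dependsOnMethods entirely, which is one answer to the "both or one over the other" question.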

Further research confirmed my theory about the thread-safety issues. I was able to conclude that the issue has nothing to do with SoftAssert and everything to do with rest-assured not being thread-safe. That information can be found here --> https://github.com/rest-assured/rest-assured/pull/851
I was able to find a thread-local branch in which someone kindly did the work to make rest-assured thread-safe. I downloaded the 2 files, RestAssuredThreadLocal and RestAssuredThreadLocalImpl, which can be seen here --> https://github.com/rest-assured/rest-assured/commit/3307ba6c79c5547e88cea286d38e5c8a6d679229
After downloading those 2 files, there were some errors that needed to be resolved and some deprecated methods that needed to be replaced. After that I was able to successfully run my rest-assured tests in parallel with TestNG and get the correct results.
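If pulling in that branch is not an option, another way to sidestep the shared static state is to build a fresh RequestSpecification per request instead of mutating the static baseURI/basePath fields. This is only a minimal sketch of that idea (not the thread-local branch itself), with placeholder host, path, and key values:
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;

import static io.restassured.RestAssured.given;

public class ThreadSafeRequestExample {

    // Build a new specification per call instead of mutating the static
    // RestAssured.baseURI/basePath fields that all threads share.
    private RequestSpecification newSpec(int skip) {
        return new RequestSpecBuilder()
                .setBaseUri("https://XXXXX")   // placeholder host
                .setBasePath("XXXXX")          // placeholder path
                .addQueryParam("apikey", "XXXXXXXX")
                .addQueryParam("top", "250")
                .addQueryParam("skip", skip)
                .build();
    }

    public Response fetchPage(int skip) {
        // given(spec) keeps all request state local to the calling thread
        return given(newSpec(skip)).when().get();
    }
}
Because nothing is stored in RestAssured's static fields, each TestNG thread works with its own specification object.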

Related

How to Create a JUnit Test of a List<Overview>

I am currently stuck trying to create a unit test for this piece of code I have. I honestly can't figure out at all how to create a unit test for these lines of code. I have looked in multiple places online and couldn't find anything. It's probably just because I don't understand unit tests, so could someone help me please?
public List<Overview> findOverviewByStatus(String status) throws CustomMongoException {
    List<Overview> scenarioList = new ArrayList<Overview>();
    LOGGER.info("Getting Scenario Summary Data for - {}", status);
    Query query = new Query(Criteria.where("status").is(status));
    if (mongoTemplate == null)
        throw new CustomMongoException("Connection issue - Try again in a few minutes",
                HttpStatus.FAILED_DEPENDENCY);
    LOGGER.info("Running Query - {}", query);
    scenarioList = mongoTemplate.find(query.with(new Sort(Sort.Direction.DESC, "lastUpdatedDate")), Overview.class);
    return scenarioList;
}
So you want to unit test the method. Start by pretending you don't know what the code looks like (black-box testing).
What happens if you call it with a status of null, and then a status of empty string?
What are some status strings that return expected values?
Add all of these as asserts to your test method, so that if someone changes this method in the future, the unit test verifies that it still returns the expected result.
That is all a unit test usually does: it makes sure that the code behaves in a predictable way and safeguards against changes that violate the contract you created for the method when you wrote it.
For example:
import org.junit.Assert;
import org.junit.Test;

public class MyObjectTest {

    // findOverviewByStatus declares a checked exception, so declare it here too
    @Test
    public void testMyObjectMethod() throws Exception {
        // Create the object that contains your method (not in the sample you provided)
        MyObjectToTest obj = new MyObjectToTest();
        // Check that for a null status you get some result (assuming you want this)
        Assert.assertNotNull(obj.findOverviewByStatus(null));
        // Let's assume that a null status returns an empty list; add a check for it
        Assert.assertTrue("null parameter size should be 0", obj.findOverviewByStatus(null).size() == 0);
        // etc...
    }
}
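Because findOverviewByStatus talks to MongoDB through mongoTemplate, keeping this a true unit test means replacing that dependency with a mock. Below is a sketch using Mockito; OverviewDao and its constructor injection are made-up names standing in for whatever class actually holds the method:
import static org.junit.Assert.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Collections;
import java.util.List;

import org.junit.Test;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Query;

public class OverviewDaoTest {

    @Test
    public void returnsWhateverTheTemplateFinds() throws Exception {
        MongoTemplate template = mock(MongoTemplate.class);
        Overview overview = new Overview(); // assumes a no-arg constructor
        // Stub the find(Query, Class) call the method makes internally
        when(template.find(any(Query.class), eq(Overview.class)))
                .thenReturn(Collections.singletonList(overview));

        OverviewDao dao = new OverviewDao(template); // hypothetical constructor injection

        List<Overview> result = dao.findOverviewByStatus("PASSED");
        assertEquals(1, result.size());
        assertEquals(overview, result.get(0));
    }

    @Test(expected = CustomMongoException.class)
    public void throwsWhenTemplateIsMissing() throws Exception {
        // The null check in the method should surface as a CustomMongoException
        new OverviewDao(null).findOverviewByStatus("PASSED");
    }
}
The second test exercises the null-template branch, which the black-box questions above would otherwise miss.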

SoftAssert in Selenium Test Class

I have a test method inside a test class where I want to verify a couple of things and only fail at the end, after I soft-assert everything in this specific test method.
But I feel my test method is getting messy with failure handling. I haven't been able to find any best practices for this. Any ideas? If I move the asserts into the page object class, it will be a bit messy there too.
@Test
public void test() {
    // steps here
    // then asserts here
    SoftAssert soft = new SoftAssert();
    String expectedHeaderText = "foo";
    soft.assertTrue(pageObjectClass.isHeaderPresent(), "Unable to find the Header page object.");
    soft.assertTrue(pageObjectClass.getHeader().contains(expectedHeaderText),
            String.format("Expected to find '%s'. Page actually shows '%s'", expectedHeaderText, pageObjectClass.getHeader()));
    // more asserts
    soft.assertAll();
}
Check the convention below:
@Test
public void test() {
    // steps here
    // then asserts here
    SoftAssert soft = new SoftAssert();
    String expectedHeaderText = "foo";
    boolean checkHeader = pageObjectClass.isHeaderPresent(); // change the method on the POM pageObjectClass so that it returns true or false
    soft.assertTrue(checkHeader, "Unable to find the Header page object.");
    String checkHeaderContent = pageObjectClass.getHeader(); // change the method on the POM pageObjectClass to return a String
    soft.assertTrue(checkHeaderContent.contains(expectedHeaderText),
            String.format("Expected to find '%s'. Page actually shows '%s'", expectedHeaderText, checkHeaderContent));
    // more asserts
    soft.assertAll();
}
You can check the QMetry Automation Framework, which provides assertion and verification methods. For example:
//verify element present
firstName.verifyPresent();
firstName.assertPresent();
//verify Text of Element
firstName.verifyText("First User");
firstName.assertText("First User");
//verify Text of element with StringMatchers conditions
firstName.verifyText(StringMatcher.contains("First User"));
firstName.assertText(StringMatcher.contains("First User"),"Username Validation");
In the case of an assert method, your test will not continue after an assertion fails.
In the case of a verify method, your test continues even if the verification fails, and the final status of the test will be failed if one or more verifications failed.
It's always a dilemma: have explicit checks and readable error messages, or omit something to make the code shorter or more generic and reusable.
Your example is plain SoftAssert usage, which is what many tutorials recommend.
It's a best practice to keep all assertions at the test level, not in page objects.
But what do you do when some assertions are huge and duplicated across several test methods?
I suggest following these rules:
Similar (duplicated) assert constructions can be moved to a new helper method within the current test class.
Try to split the tests into classes so that each test class contains only similar test methods.
If several test classes contain the same code, create a common parent for the group of classes and move the reusable code to that level.
Don't try to fully avoid duplication at the test level; tests change and go stale rapidly, and they should stay readable and easy to understand.
I don't use SoftAsserts, but can suggest something like this as a point for extension:
import org.testng.asserts.SoftAssert;

import static java.lang.String.format;

public class ProjectSoftAssert extends SoftAssert {

    public void assertElementVisibleAndContainsText(
            boolean isVisible, String actualText, String expectedText, String elementName
    ) {
        assertTrue(isVisible, format("Unable to find the '%s' page object.", elementName));
        assertTrue(
                actualText.contains(expectedText),
                format(
                        "Wrong '%s' page object text. Expected to find '%s'. Page actually shows '%s'",
                        elementName, expectedText, actualText
                )
        );
    }
}
And in your scenario:
@Test
public void test() {
    // steps here
    // then asserts here
    ProjectSoftAssert soft = new ProjectSoftAssert();
    soft.assertElementVisibleAndContainsText(
            pageObjectClass.isHeaderPresent(), pageObjectClass.getHeader(), "foo", "Header"
    );
    // more asserts
    soft.assertAll();
}

How to mock big ArrayList by Mockito?

I am going to write a method that analyzes a big ArrayList, and I want to write a test method for it in JUnit. The size of the ArrayList could reach up to a couple of million entries. I think it is not a good idea to connect to the database and get data from there for analysis, because a test is not a unit test if it talks to the database. So how should I act correctly in this situation? Or how is big data generally analyzed in unit tests?
Example:
public void analyze(List<Double> list) {
    double n1, n2, n3;
    for (int i = 3; i < list.size(); i += 3) {
        n1 = list.get(i - 3);
        n2 = list.get(i - 2);
        n3 = list.get(i - 1);
        if (/* Some condition here using n1, n2, n3 */) {
            list.remove(i);
        }
    }
}
@Test
public void analyzeTest() {
    List<Double> list = new ArrayList<Double>();
    // To add 1M data here.
    analyze(list);
    assertEquals(expected, list);
}
a test is not a unit test if it talks to the database
You are right.
So how should I act correctly in this situation?
Create an ArrayList object and fill it with data, then test against this data to assert that your production code behaves as intended. You don't need millions of entries, just the minimum to cover the different cases of analyze(); see the sketch after the list below.
how is big data generally analyzed in unit tests?
A good practice is to have multiple levels of test:
Unit tests - verify the logic of your code, without external resources such as a database.
Integration tests - verify that different parts of your system (e.g. database, web server, API) interact correctly with one another.
Performance tests - verify how your system behaves under stress or with large volumes of data. There are special tools for this (JMeter, Gatling).
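To make the first point concrete, here is a minimal JUnit sketch. Since the original condition in analyze() is elided, the version below assumes a made-up rule (remove the element at i when the three values before it are all positive), purely to show the shape of such a test:
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class AnalyzeTest {

    // Same structure as the method under test, with an assumed condition:
    // remove list.get(i) when the three values before it are all positive.
    private void analyze(List<Double> list) {
        for (int i = 3; i < list.size(); i += 3) {
            if (list.get(i - 3) > 0 && list.get(i - 2) > 0 && list.get(i - 1) > 0) {
                list.remove(i);
            }
        }
    }

    @Test
    public void removesElementAfterThreePositives() {
        List<Double> list = new ArrayList<>(Arrays.asList(1.0, 2.0, 3.0, 99.0, 4.0));
        analyze(list);
        // 99.0 follows three positive values, so it should be gone
        assertEquals(Arrays.asList(1.0, 2.0, 3.0, 4.0), list);
    }

    @Test
    public void leavesShortListsUntouched() {
        List<Double> list = new ArrayList<>(Arrays.asList(1.0, 2.0, 3.0));
        analyze(list);
        // Fewer than four elements: the loop body never runs
        assertEquals(Arrays.asList(1.0, 2.0, 3.0), list);
    }
}
A handful of elements is enough to hit both branches; list size is a performance-test concern, not a unit-test one.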
Hello, you can use it like this:
fleResult is your database or service object that returns the result, and that class is mocked as a method parameter:
@Test
public void testGetLogEvents(@Mocked final LogRecProcess fleResult,
Then add @RunWith(JMockit.class) to the test class, add an Expectations block, and return your sample data as a list:
new Expectations() {
    {
        fleResult.getEvents();
        result = Arrays.asList(new FilteredLogEvent[] { evnt });
    }
};

Test if object was properly created

I'm putting more attention into unit tests these days, and I have gotten into a situation which I'm not sure how to test well.
I have a function which creates and returns an object of class X. This X class is part of the framework, so I'm not very familiar with its implementation, and I don't have the freedom I have with my "regular collaborator classes" (the ones I have written myself). Also, when I pass some arguments I cannot check whether the X object was set to the right parameters, and I'm not able to pass a mock in some cases.
My question is: how do I check that this object was properly created, that is, which parameters were passed to its constructor? And how do I avoid the problem of the constructor throwing an exception when I pass a mock?
Maybe I'm not clear enough, here is a snippet:
public class InputSplitCreator {

    Table table;
    Scan scan;
    RegionLocator regionLocator;

    public InputSplitCreator(Table table, Scan scan, RegionLocator regionLocator) {
        this.table = table;
        this.scan = scan;
        this.regionLocator = regionLocator;
    }

    public InputSplit getInputSplit(String scanStart, String scanStop, Pair<byte[][], byte[][]> startEndKeys, int i) {
        String start = Bytes.toString(startEndKeys.getFirst()[i]);
        String end = Bytes.toString(startEndKeys.getSecond()[i]);
        String startSalt;
        if (start.length() == 0)
            startSalt = "0";
        else
            startSalt = start.substring(0, 1);
        byte[] startRowKey = Bytes.toBytes(startSalt + "-" + scanStart);
        byte[] endRowKey = Bytes.toBytes(startSalt + "-" + scanStop);
        TableSplit tableSplit;
        try {
            HRegionLocation regionLocation = regionLocator.getRegionLocation(startEndKeys.getFirst()[i]);
            String hostnamePort = regionLocation.getHostnamePort();
            tableSplit = new TableSplit(table.getName(), scan, startRowKey, endRowKey, hostnamePort);
        } catch (IOException ex) {
            throw new HBaseRetrievalException("Problem while trying to find region location for region " + i, ex);
        }
        return tableSplit;
    }
}
So, this creates an InputSplit. I would like to know whether this split is created with the correct parameters. How can I do that?
If the class is part of a framework, then you shouldn't test it directly, as the framework has tested it for you. If you still want to test the behaviour of this object, look at the cause and effect this object produces. More specifically: mock the object, have it do stuff, and check whether the affected objects (which you can control) carry out the expected behaviour or end up in the correct state.
For more details you should probably update your question with the framework you're using and the class of said framework you wish to test.
This is possibly one of those cases where you shouldn't be testing it directly. This object is presumably USED for something, yes? If it's not created correctly, some part of your code will break, no?
At some point or another, your application depends on this created object behaving in a certain way, so you can test it implicitly by testing that the procedures that depend on it work correctly.
This can save you from coupling more abstract use cases to the internal workings and types of the framework.
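For this specific snippet, one concrete option is to mock only your own collaborators and inspect the returned split through its public getters, rather than mocking TableSplit itself. A sketch with Mockito and JUnit, assuming HBase's TableSplit exposes getStartRow(), getEndRow(), and getRegionLocation() (as recent versions do):
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.TableSplit;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;
import org.junit.Test;

public class InputSplitCreatorTest {

    @Test
    public void buildsSplitWithSaltedKeysAndRegionHost() throws Exception {
        Table table = mock(Table.class);
        RegionLocator regionLocator = mock(RegionLocator.class);
        HRegionLocation location = mock(HRegionLocation.class);
        when(table.getName()).thenReturn(TableName.valueOf("athletes"));
        when(regionLocator.getRegionLocation(any(byte[].class))).thenReturn(location);
        when(location.getHostnamePort()).thenReturn("host:1234");

        byte[][] starts = { Bytes.toBytes("a-row") };
        byte[][] ends = { Bytes.toBytes("b-row") };
        Pair<byte[][], byte[][]> startEndKeys = new Pair<>(starts, ends);

        InputSplitCreator creator = new InputSplitCreator(table, new Scan(), regionLocator);
        TableSplit split = (TableSplit) creator.getInputSplit("start", "stop", startEndKeys, 0);

        // The salt is the first character of the region start key ("a")
        assertArrayEquals(Bytes.toBytes("a-start"), split.getStartRow());
        assertArrayEquals(Bytes.toBytes("a-stop"), split.getEndRow());
        assertEquals("host:1234", split.getRegionLocation());
    }
}
This checks exactly the constructor arguments the question cares about, without depending on TableSplit internals.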

Verifying partially ordered method invocations in JMockit

I'm trying to write a unit test (using JMockit) that verifies that methods are called according to a partial order. The specific use case is ensuring that certain operations are called inside a transaction, but more generally I want to verify something like this:
Method beginTransaction is called.
Methods operation1 through to operationN are called in any order.
Method endTransaction is called.
Method someOtherOperation is called some time before, during or after the transaction.
The Expectations and Verifications APIs don't seem to be able to handle this requirement.
If I have a @Mocked BusinessObject bo I can verify that the right methods are called (in any order) with this:
new Verifications() {{
    bo.beginTransaction();
    bo.endTransaction();
    bo.operation1();
    bo.operation2();
    bo.someOtherOperation();
}};
optionally making it a FullVerifications to check that there are no other side-effects.
To check the ordering constraints I can do something like this:
new VerificationsInOrder() {{
    bo.beginTransaction();
    unverifiedInvocations();
    bo.endTransaction();
}};
but this does not handle the someOtherOperation case. I can't replace the unverifiedInvocations with bo.operation1(); bo.operation2() because that puts a total ordering on the invocations. A correct implementation of the business method could call bo.operation2(); bo.operation1().
If I make it:
new VerificationsInOrder() {{
    unverifiedInvocations();
    bo.beginTransaction();
    unverifiedInvocations();
    bo.endTransaction();
    unverifiedInvocations();
}};
then I get a "No unverified invocations left" failure when someOtherOperation is called before the transaction. Trying bo.someOtherOperation(); minTimes = 0 also doesn't work.
So: is there a clean way to specify partial-ordering requirements on method calls using the Expectations/Verifications API in JMockit? Or do I have to use a MockClass and manually keep track of invocations, à la:
@MockClass(realClass = BusinessObject.class)
public class MockBO {
    private boolean op1Called = false;
    private boolean op2Called = false;
    private boolean beginCalled = false;

    @Mock(invocations = 1)
    public void operation1() {
        op1Called = true;
    }

    @Mock(invocations = 1)
    public void operation2() {
        op2Called = true;
    }

    @Mock(invocations = 1)
    public void someOtherOperation() {}

    @Mock(invocations = 1)
    public void beginTransaction() {
        assertFalse(op1Called);
        assertFalse(op2Called);
        beginCalled = true;
    }

    @Mock(invocations = 1)
    public void endTransaction() {
        assertTrue(beginCalled);
        assertTrue(op1Called);
        assertTrue(op2Called);
    }
}
If you really need such a test, then don't use a mocking library; create your own mock with state inside that can simply check the correct order of methods (a sketch of this follows below).
But testing the order of invocations is usually a bad sign. My advice would be: don't test it, refactor. You should test your logic and results rather than a sequence of invocations. Check that the side effects are correct (database content, service interactions, etc.). If you test the sequence, your test is basically an exact copy of your production code, so what's the added value of such a test? And such a test is also very fragile (as any duplication is).
Maybe you should make your code look like this:
beginTransaction()
doTransactionalStuff()
endTransaction()
doNonTransactionalStuff()
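To illustrate the hand-rolled option: a recording fake that captures the call order and lets the test assert only the partial-order constraints from the question. BusinessService and its run() method are hypothetical stand-ins for the code under test, and BusinessObject's methods are assumed overridable:
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class TransactionOrderTest {

    // Hand-rolled fake: records every call so the test can assert a partial order.
    static class RecordingBusinessObject extends BusinessObject {
        final List<String> calls = new ArrayList<>();

        @Override public void beginTransaction()   { calls.add("begin"); }
        @Override public void endTransaction()     { calls.add("end"); }
        @Override public void operation1()         { calls.add("op1"); }
        @Override public void operation2()         { calls.add("op2"); }
        @Override public void someOtherOperation() { calls.add("other"); }
    }

    @Test
    public void operationsRunInsideTheTransaction() {
        RecordingBusinessObject bo = new RecordingBusinessObject();
        new BusinessService(bo).run(); // hypothetical code under test

        int begin = bo.calls.indexOf("begin");
        int end = bo.calls.indexOf("end");
        // operation1 and operation2 must fall inside the transaction, in any order
        assertTrue(begin < bo.calls.indexOf("op1") && bo.calls.indexOf("op1") < end);
        assertTrue(begin < bo.calls.indexOf("op2") && bo.calls.indexOf("op2") < end);
        // someOtherOperation may happen before, during, or after, so presence is enough
        assertTrue(bo.calls.contains("other"));
    }
}
Asserting on indexes rather than a fixed sequence is what keeps this a partial-order check instead of a total-order one.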
From my usage of JMockit, I believe the answer is no, even in the latest version, 1.49.
You can implement this type of advanced verification using a MockUp extension with some internal fields that keep track of which functions get called, when, and in what order.
For example, I implemented a simple MockUp to track method call counts. The example comes from a real case where the times fields of Verifications and Expectations did not work when mocking a ThreadGroup (it is useful for other sensitive types as well):
public class CalledCheckMockUp<T> extends MockUp<T>
{
    private Map<String, Boolean> calledMap = Maps.newHashMap();
    private Map<String, AtomicInteger> calledCountMap = Maps.newHashMap();

    public void markAsCalled(String methodCalled)
    {
        if (methodCalled == null)
        {
            Log.logWarning("Caller attempted to mark a method string" +
                           " that is null as called, this is surely" +
                           " either a logic error or an unhandled edge" +
                           " case.");
        }
        else
        {
            calledMap.put(methodCalled, Boolean.TRUE);
            // computeIfAbsent (rather than putIfAbsent) guarantees a non-null
            // counter on the first call
            calledCountMap.computeIfAbsent(methodCalled, k -> new AtomicInteger())
                          .incrementAndGet();
        }
    }

    public int methodCallCount(String method)
    {
        return calledCountMap.computeIfAbsent(method, k -> new AtomicInteger()).get();
    }

    public boolean wasMethodCalled(String method)
    {
        if (method == null)
        {
            Log.logWarning("Caller attempted to query a method string" +
                           " that is null, this is surely either a logic" +
                           " error or an unhandled edge case.");
            return false;
        }
        return calledMap.containsKey(method) ? calledMap.get(method) :
               Boolean.FALSE;
    }
}
With usage like the following, where cut1 is a dynamic proxy type that wraps an actual ThreadGroup:
String methodId = "activeCount";
CalledCheckMockUp<ThreadGroup> calledChecker = new CalledCheckMockUp<ThreadGroup>()
{
    @Mock
    public int activeCount()
    {
        markAsCalled(methodId);
        return active; // 'active' is defined in the surrounding test context
    }
};
...
int callCount = 0;
int activeCount = cut1.activeCount();
callCount += 1;
Assertions.assertTrue(calledChecker.wasMethodCalled(methodId));
Assertions.assertEquals(callCount, calledChecker.methodCallCount(methodId));
I know the question is old and this example doesn't fit the OP's use case exactly, but I hope it helps guide others who come looking for a potential solution (or the OP, god forbid this is still unsolved for an important use case, which is unlikely).
Given the complexity of what the OP is trying to do, it may help to override the $advice method in your custom MockUp to ease differentiating and recording method calls. Docs here: Applying AOP-style advice.
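A minimal sketch of that idea, assuming the $advice hook described in those docs (it intercepts every method of the mocked type); the CallOrderMockUp name and firstIndexOf helper are made up here:
import java.util.ArrayList;
import java.util.List;

import mockit.Invocation;
import mockit.Mock;
import mockit.MockUp;

// Records the name of every method invoked on the mocked type, in order,
// while still executing the real implementation via proceed().
public class CallOrderMockUp<T> extends MockUp<T> {
    private final List<String> callOrder = new ArrayList<>();

    @Mock
    public Object $advice(Invocation invocation) {
        callOrder.add(invocation.getInvokedMember().getName());
        return invocation.proceed();
    }

    public int firstIndexOf(String methodName) {
        return callOrder.indexOf(methodName);
    }
}
A test could then assert, for example, that firstIndexOf("beginTransaction") < firstIndexOf("operation1") and firstIndexOf("operation1") < firstIndexOf("endTransaction"), without constraining the order of the operations relative to each other.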
