Continuing test execution in junit4 even when one of the asserts fails - java

I have an existing framework built on JFunc, which provides a facility to continue execution even when one of the asserts in a test case fails. JFunc uses the JUnit 3.x framework. But now we are migrating to JUnit 4, so I can't use JFunc anymore and have replaced it with the junit 4.10 jar.
The problem is that we have used JFunc extensively in our framework, and with JUnit 4 we want our code to continue executing even when one of the asserts in a test case fails.
Does anyone have any suggestions or ideas for this? I know that in JUnit tests should be more atomic, i.e. one assert per test case, but we can't do that in our framework for some reason.

You can do this using an ErrorCollector rule.
To use it, first add the rule as a field in your test class:
public class MyTest {
@Rule
public ErrorCollector collector = new ErrorCollector();
//...tests...
}
Then replace your asserts with calls to collector.checkThat(...).
e.g.
@Test
public void myTest() {
collector.checkThat("a", equalTo("b"));
collector.checkThat(1, equalTo(2));
}
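checkThat also has an overload that takes a reason string, which makes the collected failure report easier to read. A minimal sketch (the class name and values here are placeholders, not from the original answer):
import static org.hamcrest.CoreMatchers.equalTo;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class ErrorCollectorReasonExample {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void reportsAllFailuresWithReasons() {
        // both checks run; any failures are reported together when the test finishes
        collector.checkThat("first value should be b", "a", equalTo("b"));
        collector.checkThat("second value should be 2", 1, equalTo(2));
    }
}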

I use the ErrorCollector too, but I also use assertThat and place the calls in a try/catch block.
import static org.junit.Assert.*;
import static org.hamcrest.Matchers.*;
@Rule
public ErrorCollector collector = new ErrorCollector();
@Test
public void calculatedValueShouldEqualExpected() {
try {
assertThat(calculatedValue(), is(expected));
} catch (Throwable t) {
collector.addError(t);
// do something
}
}
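If many checks follow this try/catch pattern, one option (a sketch, not from the original answer) is to wrap the boilerplate in a small helper so each check stays a one-liner; the check method name and the sample assertions are illustrative:
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class CalculationTest {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    // routes any assertion failure (or other Throwable) to the collector and lets the test continue
    private void check(Runnable assertion) {
        try {
            assertion.run();
        } catch (Throwable t) {
            collector.addError(t);
        }
    }

    @Test
    public void calculatedValuesShouldEqualExpected() {
        check(() -> assertThat(2 + 2, is(4)));
        check(() -> assertThat(2 * 3, is(7))); // recorded as a failure, but the test keeps running
    }
}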

You can also use AssertJ soft assertions:
@Test
public void testCollectErrors(){
SoftAssertions softly = new SoftAssertions();
softly.assertThat(true).isFalse();
softly.assertThat(false).isTrue();
// Don't forget to call SoftAssertions global verification !
softly.assertAll();
}
There are also other ways to use it that do not require manually invoking softly.assertAll():
with a rule (JUnitSoftAssertions)
with try-with-resources (AutoCloseableSoftAssertions)
using the static assertSoftly method
The assertSoftly variant is sketched below.
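As a minimal sketch of the assertSoftly variant (assuming AssertJ is on the classpath; the assertions are just placeholders), the lambda is executed and assertAll() is called for you:
import static org.assertj.core.api.SoftAssertions.assertSoftly;
import org.junit.Test;

public class SoftAssertionExample {
    @Test
    public void collectsErrorsWithoutExplicitAssertAll() {
        assertSoftly(softly -> {
            softly.assertThat(true).isFalse();  // recorded, does not stop the test
            softly.assertThat(false).isTrue();  // recorded as well
        });                                     // all collected failures are reported here
    }
}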

Use try/finally blocks. This worked in my case:
...
try {
assert(...)
} finally {
// code to be executed after assert
}
...

Try/catch: in the try block run the assertion, in the catch block add the possible error to a collection.
Then throw the exception at the end of the test, in tearDown().
So if an assert fails or errors, it will be caught and the test will continue.
(The collection in the example is static; you can also create a new instance in setUp() for each @Test.)
public static List<String> errors = new ArrayList<>();

@Test
public void myTest() {
    try {
        //some assert...
    } catch (AssertionError error) {
        errors.add(error.toString());
    }
}

@After
public void tearDown() {
    try {
        if (!errors.isEmpty()) {
            throw new AssertionError(errors);
        }
    } finally {
        //empty the list because it's static; alternatively create an instance for each test in setUp()
        errors.clear();
    }
}

I created my own simple assertions class. Easy to extend with your use-cases:
import static java.lang.String.format;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.util.List;
import java.util.stream.Collectors;
public class MyEquals {
public static void checkTestSummary(MyTestSummary myTestSummary) {
final List<MyTestResult> conditions = myTestSummary.getTestResults();
final int total = conditions.size();
final boolean isSuccessful = myTestSummary.isSuccessful();
if (isSuccessful) {
System.out.println(format("All [%s] conditions are successful!", total));
} else {
final List<MyTestResult> failedConditions = conditions.stream().filter(result -> !result.isTestResult()).collect(Collectors.toList());
System.out.println(format("\nNot yet.. [%s out of %s] conditions are failed", failedConditions.size(), total));
}
if (!isSuccessful) {
for (int i = 0; i < total; i++) {
final MyTestResult myTestResult = conditions.get(i);
if (myTestResult.isTestResult()) {
System.out.println(format(" Success [%s of %s] => Expected %s Actual %s Good!", i + 1, total, myTestResult.getExpected(), myTestResult.getActual()));
} else {
System.out.println(format("!! Failed [%s of %s] => Expected %s Actual %s", i + 1, total, myTestResult.getExpected(), myTestResult.getActual()));
}
}
}
assertTrue(isSuccessful);
}
public static void myAssertEquals(MyTestSummary myTestSummary, Object expected, Object actual) {
if (checkEquals(expected, actual)) {
assertEquals(expected, actual);
myTestSummary.addSuccessfulResult(expected, actual);
} else {
myTestSummary.addFailedResult(expected, actual);
myTestSummary.setSuccessful(false);
}
}
public static boolean checkEquals(Object value1, Object value2) {
if (value1 == null && value2 == null) {
return true;
} else if (value1 != null && value2 == null) {
return false;
} else if (value1 == null && value2 != null) {
return false;
} else if (value1 != null && value2 != null) {
return value1.equals(value2);
}
return false;
}
}
@Builder
@Value
public class MyTestResult {
String expected;
String actual;
boolean testResult;
}
@Data
public class MyTestSummary {
private boolean successful = true;
private List<MyTestResult> testResults = new ArrayList<>();
public MyTestSummary() {
}
public void addSuccessfulResult(Object expected, Object actual) {
getTestResults().add(MyTestResult.builder()
.expected(String.valueOf(expected))
.actual(String.valueOf(actual))
.testResult(true)
.build()
);
}
public void addFailedResult(Object expected, Object actual) {
getTestResults().add(MyTestResult.builder()
.expected(String.valueOf(expected))
.actual(String.valueOf(actual))
.testResult(false)
.build()
);
}
}
Usage in the JUnit test:
@Test
public void testThat() {
MyTestSummary myTestSummary = new MyTestSummary();
myAssertEquals(myTestSummary, 10, 5 + 5);
myAssertEquals(myTestSummary, "xxx", "x" + "x");
checkTestSummary(myTestSummary);
}
Output:
Not yet.. [1 out of 2] conditions are failed
Success [1 of 2] => Expected 10 Actual 10 Good!
!! Failed [2 of 2] => Expected xxx Actual xx
org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
Expected :true
Actual :false

Another option is the observable pattern in conjunction with lambda expressions. You can use something like the code below.
public class MyTestClass {
private final List<Consumer<MyTestClass>> AFTER_EVENT = new ArrayList<>();
@After
public void tearDown() {
AFTER_EVENT.stream().forEach(c -> c.accept(this));
}
@Test
public void testCase() {
//=> Arrange
AFTER_EVENT.add((o) -> {
// do something after an assertion fails.
});
//=> Act
//=> Assert
Assert.assertTrue(false);
}
}

Related

JMockit Mock an object inside while loop

Below is the method in the class under test that I am writing a test for.
public String doSomething(Dependency dep) {
StringBuilder content = new StringBuilder();
String response;
while ((response = dep.get()) != null) {
content.append(response);
}
return content.toString();
}
Below is the test case I have. Basically I want the Dependency#get() method to return "content" in the first iteration and null in the second.
@Test
void test(@Mocked Dependency dep) {
new Expectations() {
{
dep.get();
result = "content";
times = 1;
}
};
Assertions.assertEquals("content", testSubject.doSomething(dep));
}
However, this results in JMockit throwing an "Unexpected invocation to: Dependency#get()" error. If I remove the times field, my test runs in an infinite loop.
How can I test this method?
It turns out that, apart from the result field, JMockit also provides a returns(args...) method to which we can pass a series of values that it will return sequentially. Modifying the test as follows worked for me.
@Test
void test(@Mocked Dependency dep) {
new Expectations() {
{
dep.get();
returns("content", null);
times = 2; // limiting to mock only twice as only 2 values are provided in returns
}
};
Assertions.assertEquals("content", testSubject.doSomething(dep));
}

Need design suggestions for nested conditions

I need to write logic with many conditions (up to 30) in one set of rules, with many if/else branches, and it could end in between or after all the conditions.
Here is sample code I have tried with a possible scenario. It gives me the result, but it doesn't look good, and any minor mistake in one condition would take forever to track down.
What I have tried so far: extracting common conditions into methods, and creating an interface for conditions that various sets would implement.
Any suggestion on how to design this would help me. I am not looking for a detailed solution; even a hint would be great.
private Boolean RunCondition(Input input) {
Boolean ret=false;
//First if
if(input.a.equals("v1")){
//Somelogic1();
//Second if
if(input.b.equals("v2"))
//Third if
if(input.c >1)
//Fourth if
//Somelogic2();
//Go fetch key Z1 from database and see if d matches.
if(input.d.equals("Z1"))
System.out.println("Passed 1");
// Fourth Else
else{
System.out.println("Failed at fourth");
}
//Third Else
else{
if(input.aa.equals("v2"))
System.out.println("Failed at third");
}
//Second Else
else{
if(input.bb.equals("v2"))
System.out.println("Failed at second");
}
}
//First Else
else{
if(input.cc.equals("v2"))
System.out.println("Failed aat first");
}
return ret;
}
public class Input {
String a;
String b;
int c;
String d;
String e;
String aa;
String bb;
String cc;
String dd;
String ee;
}
The flow is complicated because you have a normal flow, plus many possible exception flows when some of the values are exceptional (e.g. invalid).
This is a perfect candidate to be handled using a try/catch/finally block.
Your program can be rewritten as follows:
private Boolean RunCondition(Input input) {
Boolean ret=false;
try {
//First if
if(!input.a.equals("v1")) {
throw new ValidationException("Failed aat first");
}
//Somelogic1();
//Second if
if(!input.b.equals("v2")) {
throw new ValidationException("Failed at second");
}
//Somelogic2()
//Third if
if(input.c<=1) {
throw new ValidationException("Failed at third");
}
//Fourth if
//Somelogic2();
//Go fetch key Z1 from database and see if d matches.
if(!input.d.equals("Z1")) {
throw new ValidationException("Failed at fourth");
}
System.out.println("Passed 1");
} catch (ValidationException e) {
System.out.println(e.getMessage());
}
return ret;
}
You can define your own ValidationException (like below), or reuse an existing standard exception such as RuntimeException:
class ValidationException extends RuntimeException {
private static final long serialVersionUID = 1L;
public ValidationException(String message) {
super(message);
}
}
You can read more about this in
https://docs.oracle.com/javase/tutorial/essential/exceptions/index.html
Make a separate class for the condition:
package com.foo;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class App
{
static class Condition<T> {
final int idx;
final T compareValue;
public Condition(final int idx, final T compareValue) {
this.idx = idx;
this.compareValue = compareValue;
}
boolean satisfies(final T other) {
return other.equals(compareValue);
}
int getIdx() {
return idx;
}
}
public static void main( String[] args )
{
final List<Condition<String>> conditions = new ArrayList<Condition<String>>();
conditions.add(new Condition<String>(1, "v1"));
conditions.add(new Condition<String>(2, "v2"));
final List<String> inputs = new ArrayList<String>(Arrays.asList("v1", "xyz"));
boolean ret = true;
for (int i = 0; i < inputs.size(); i++) {
if (!conditions.get(i).satisfies(inputs.get(i)))
{
System.out.println("failed at " + conditions.get(i).getIdx());
ret = false;
break;
}
}
System.out.println("ret=" + ret);
}
}
@leeyuiwah's answer has a clear structure for the conditional logic, but exceptions aren't the right tool for the job here.
You shouldn't use exceptions to cope with non-exceptional conditions. For one thing, exceptions are really expensive to construct, because you have to walk all the way up the call stack to construct the stack trace; but you don't need the stack trace at all.
Check out Effective Java 2nd Ed Item 57: "Use exceptions only for exceptional conditions" for a detailed discussion of why you shouldn't use exceptions like this.
A simpler option is to define a little helper method:
private static boolean printAndReturnFalse(String message) {
System.out.println(message);
return false;
}
Then:
if(!input.a.equals("v1")) {
return printAndReturnFalse("Failed aat first");
}
// etc.
which I think is simpler, and it'll be a lot faster.
Think of each rule check as an object, or as a Strategy that returns whether or not the rule passes. Each check should implement the same IRuleCheck interface and return a RuleCheckResult, which indicates if the check passed or the reason for failure.
public interface IRuleCheck
{
public RuleCheckResult Check(Input input);
public String Name();
}
public class RuleCheckResult
{
private String _errorMessage;
public RuleCheckResult(){}//All Good
public RuleCheckResult(String errorMessage)
{
_errorMessage = errorMessage;
}
public String ErrorMessage()
{
return _errorMessage;
}
public Boolean Passed()
{
return _errorMessage == null || _errorMessage.isEmpty();
}
}
public class CheckOne implements IRuleCheck
{
public RuleCheckResult Check(Input input)
{
if (input.d.equals("Z1"))
{
return new RuleCheckResult();//passed
}
return new RuleCheckResult("d did not equal z1");
}
public String Name()
{
return "CheckOne"; // a descriptive name for this check
}
}
Then you can simply build a list of rules and loop through them,
and either jump out when one fails, or compile a list of failures.
for (IRuleCheck check : checkList)
{
System.out.println("checking: " + check.Name());
RuleCheckResult result = check.Check(input);
if(!result.Passed())
{
System.out.println("FAILED: " + check.Name()+ " - " + result.ErrorMessage());
//either jump out and return result or add it to failure list to return later.
}
}
And the advantage of using the interface is that the checks can be as complicated or simple as necessary, and you can create arbitrary lists for checking any combination of rules in any order.
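The "compile a list of failures" variant mentioned above could look roughly like the sketch below (runChecks is a hypothetical helper name, not part of the original answer):
// hypothetical helper: runs every check and collects all failures instead of stopping at the first
public static List<RuleCheckResult> runChecks(List<IRuleCheck> checkList, Input input)
{
    List<RuleCheckResult> failures = new ArrayList<>();
    for (IRuleCheck check : checkList)
    {
        RuleCheckResult result = check.Check(input);
        if (!result.Passed())
        {
            failures.add(result);
        }
    }
    return failures;
}
// usage, e.g.: List<RuleCheckResult> failures = runChecks(Arrays.asList(new CheckOne()), input);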

TestNG: RetryAnalyzer, dependent groups are skipped if a test succeeds upon retry

I have a RetryAnalyzer and RetryListener. In RetryListener onTestFailure, I check if the test is retryable, if yes I set the result to SUCCESS. I also do, testResult.getTestContext().getFailedMethods.removeResult(testResult) in this method.
I again remove failed results (with valid if conditions) in onFinish method in the listener.
Now the problem I am running into is, I made each test class into groups. One test class does the WRITES and one test class does the READS. So READs group depends on WRITES.
If a test case fails on 1st attempts and succeeds on retrying, then all the test cases in the dependent group are SKIPPED, despite removing failed result in onTestFailure method.
Is there a way to run dependent method if a test case succeeds on retrying? I am fine with the behavior if the test case fails in all attempts, so I am not looking to add "alwaysRun=true" on each dependent method.
On retry you should be removing the test from the failed tests. And please be sure to remove the ITestResult object (i.e., result, not result.getMethod()).
@Override
public boolean retry(ITestResult result) {
if (currentCount < maxRetryCount) {
result.getTestContext().getFailedTests().removeResult(result);
currentCount++;
return true;
}
return false;
}
I was using TestNG 6.8.7, upgraded it to 6.9.5.
After that, upon retry, TestNG was marking the test case as SKIPPED. I just had to create a listener extending TestListenerAdapter and override onTestSkipped: if there are retries available, remove the method from skippedTests.
result.getTestContext().getSkippedTests().removeResult(result.getMethod());
If not, set the test to FAILURE. Now it works as expected.
In the retry class, add a mechanism to see whether a retry is left for the test case.
In the custom listener, override onTestSkipped(); if a retry is left, remove the result from the skipped results and return.
public class Retry implements IRetryAnalyzer {
private int count = 0;
private static final List<ITestResult> retriedTests = new CopyOnWriteArrayList<>();
private static final ConcurrentHashMap<String, Boolean> retriedTestsMap = new ConcurrentHashMap<>();
@Override
public boolean retry(ITestResult iTestResult) {
int maxTry = 3;
if (!iTestResult.isSuccess()) { // Check if the test did not succeed
String name = getNameForTestResult(iTestResult);
if (count < maxTry) { // Check if maxTry count is reached
count++; // Increase the retry count by 1
retriedTests.add(iTestResult);
retriedTestsMap.put(name, true);
RestApiUtil.println("**" + name + " retry count " + count + " **");
iTestResult.setStatus(ITestResult.FAILURE); // Mark test as failed
return true; // Tells TestNG to re-run the test
} else {
iTestResult.setStatus(ITestResult.FAILURE); // If maxTry is reached, the test is marked as failed
retriedTestsMap.put(name, true);
}
} else {
iTestResult.setStatus(ITestResult.SUCCESS); // If test passes, TestNG marks it as passed
}
return false;
}
public static List<ITestResult> getRetriedTests() {
return retriedTests;
}
public static boolean isRetryLeft(ITestResult tr) {
return retriedTestsMap.getOrDefault(getNameForTestResult(tr), false); // avoids an NPE when the test was never retried
}
}
private static String getNameForTestResult(ITestResult tr) {
return tr.getTestClass().getRealClass().getSimpleName() + "::" + tr.getName();
}
}
public class CustomTestNGListener extends TestListenerAdapter {
@Override
public void onTestSkipped(ITestResult tr) {
if (Retry.isRetryLeft(tr)) {
tr.getTestContext().getSkippedTests().removeResult(tr);
return;
}
super.onTestSkipped(tr);
}
}

JUnit Test for Comparator in Java

How can I test the following class using JUnit? I am new to unit testing and just need a push to get started.
public class ComponentComparator implements Comparator< Component >
{
@Override
public int compare ( final Component c1, final Component c2 )
{
if ( c1.getBandwidthWithHeader() > c2.getBandwidthWithHeader() )
{
return -1;
}
else if ( c1.getBandwidthWithHeader() < c2.getBandwidthWithHeader() )
{
return 1;
}
return 0;
}
}
Part of the Component class is shown below; there is no constructor for this class.
public class Component
{
private float bandwidthwithHeader;
public void setBandwidthWithHeader ( float bandwidthwithHeader )
{
this.bandwidthwithHeader = bandwidthwithHeader;
}
public float getBandwidthWithHeader ()
{
return this.bandwidthwithHeader;
}
}
You should go through a tutorial on JUnit.
Morfic's comment points to a good tutorial.
To get you started: there are three possible return values from the comparator, so write a test case for each one.
import org.junit.Assert;
import org.junit.Test;
public class ComponentComparatorTest {
@Test
public void testCompare() throws Exception {
ComponentComparator comparator = new ComponentComparator();
Assert.assertEquals(0, comparator.compare(new Component(1), new Component(1)));
Assert.assertEquals(-1, comparator.compare(new Component(2), new Component(1)));
Assert.assertEquals(1, comparator.compare(new Component(1), new Component(2)));
}
}
I am using a dummy class
public class Component {
int bandwidth;
public Component(int bandwidth) {
this.bandwidth = bandwidth;
}
public int getBandwidthWithHeader(){
return bandwidth;
}
}
The unit test should test all possible outcomes.
A comparator has three success outcomes.
You need to decide how you want to handle null parameter values (your current solution: NullPointerException).
Here is a unit test of your current comparator:
public class Component
{
private int bandwidthWithHeader;
public int getBandwidthWithHeader()
{
return bandwidthWithHeader;
}
public void setBandwidthWithHeader(final int newValue)
{
bandwidthWithHeader = newValue;
}
}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import org.junit.Test;
public class ComponentTest
{
private final ComponentComparator componentComparator = new ComponentComparator();
@Test
public void negative1()
{
Component two = new Component();
try
{
componentComparator.compare(null, two);
fail("Expected exception was not thrown");
}
catch(NullPointerException exception)
{
// The NullPointerException is the expected result.
assertTrue(true);
}
}
@Test
public void negative2()
{
Component one = new Component();
try
{
componentComparator.compare(one, null);
fail("Expected exception was not thrown");
}
catch(NullPointerException exception)
{
// The NullPointerException is the expected result.
assertTrue(true);
}
}
@Test
public void negative3()
{
try
{
componentComparator.compare(null, null);
fail("Expected exception was not thrown");
}
catch(NullPointerException exception)
{
// The NullPointerException is the expected result.
assertTrue(true);
}
}
@Test
public void positive1()
{
Component one = new Component();
Component two = new Component();
// test one < two
one.setBandwidthWithHeader(7);
two.setBandwidthWithHeader(16);
assertEquals(-1, componentComparator.compare(one, two));
// test two < one
one.setBandwidthWithHeader(17);
two.setBandwidthWithHeader(16);
assertEquals(1, componentComparator.compare(one, two));
// test two == one
one.setBandwidthWithHeader(25);
two.setBandwidthWithHeader(25);
assertEquals(0, componentComparator.compare(one, two));
}
}
How about something like this:
package mypackage;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
public class ComponentComparatorTestCase {
@Test
public void testCompareExpectZero() {
ComponentComparator sut = new ComponentComparator();
// create some components to test with
Component c1 = new Component();
Component c2 = new Component();
// execute test
int result = sut.compare(c1, c2);
// verify
assertEquals("Did not get expected result.", result, 0);
}
}
There are numerous tests you should do for any Comparator implementation.
First off, there is the fact that a Comparator should define (as stipulated in the Comparator contract) a total order on the given type.
This comes down to 3 things :
the order should be antisymmetric : if a ≤ b and b ≤ a then a = b
the order should be transitive : if a ≤ b and b ≤ c then a ≤ c
the order should be total : either a ≤ b or b ≤ a
Secondly, a Comparator implementation may choose to accept null values. So we need tests that verify whether null values are treated correctly, if accepted. Or if they are not accepted that they properly result in a NullPointerException being thrown.
Lastly, if the type being compared may be subclassed, it is worth testing that the comparator properly compares instances of various subclasses mixed with instances of the class itself. (You may need to define some subclasses in test scope for these tests)
As these tests tend to repeat for every Comparator implementation, it may be worth extracting them in an abstract test superclass.
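As a concrete illustration for the ComponentComparator above, a contract check could look roughly like this sketch (the sample bandwidth values are placeholders; it spot-checks the sign-flip and transitivity requirements rather than proving a total order):
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class ComponentComparatorContractTest {
    private final ComponentComparator comparator = new ComponentComparator();

    private Component component(float bandwidth) {
        Component c = new Component();
        c.setBandwidthWithHeader(bandwidth);
        return c;
    }

    @Test
    public void compareFlipsSignAndIsTransitiveOnSampleValues() {
        List<Component> samples = Arrays.asList(component(1.0f), component(2.0f), component(2.0f), component(5.0f));
        for (Component a : samples) {
            for (Component b : samples) {
                // sgn(compare(a, b)) == -sgn(compare(b, a)), as the Comparator contract requires
                assertEquals(Integer.signum(comparator.compare(a, b)), -Integer.signum(comparator.compare(b, a)));
                for (Component c : samples) {
                    // transitivity spot check: if a <= b and b <= c (in comparator order) then a <= c
                    if (comparator.compare(a, b) <= 0 && comparator.compare(b, c) <= 0) {
                        assertTrue(comparator.compare(a, c) <= 0);
                    }
                }
            }
        }
    }
}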

Junit4 run a test class a fixed number of times and display results (eclipse)

I want to be able to run a Test class a specified number of times. The class looks like :
@RunWith(Parameterized.class)
public class TestSmithWaterman {
private static String[] args;
private static SmithWaterman sw;
private Double[][] h;
private String seq1aligned;
@Parameters
public static Collection<Object[]> configs() {
// h and seq1aligned values
}
public TestSmithWaterman(Double[][] h, String seq1aligned) {
this.h = h;
this.seq1aligned = seq1aligned;
}
@BeforeClass
public static void init() {
// run smith waterman once and for all
}
@Test
@Repeat(value = 20) // does nothing
// see http://codehowtos.blogspot.gr/2011/04/run-junit-test-repeatedly.html
public void testCalculateMatrices() {
assertEquals(h, sw.getH());
}
@Test
public void testAlignSeq1() {
assertEquals(seq1aligned, sw.getSeq1Aligned());
}
// etc
}
Any of the tests above may fail (concurrency bugs - EDIT: the failures provide useful debug info), so I want to be able to run the class multiple times and preferably have the results grouped somehow. I tried the Repeat annotation - but this is test specific (and I did not really make it work - see above) - and struggled with RepeatedTest.class, which does not seem to transfer to JUnit 4 - the closest I found on SO is this - but apparently it is JUnit 3. In JUnit 4 my suite looks like:
@RunWith(Suite.class)
@SuiteClasses({ TestSmithWaterman.class })
public class AllTests {}
and I see no way to run this multiple times.
Parameterized with empty options is not really an option, as I need my params anyway.
So I am stuck hitting Control + F11 in eclipse again and again
Help
EDIT (2017.01.25): someone went ahead and flagged this as a duplicate of the question whose accepted answer I explicitly say does not apply here.
As suggested by @MatthewFarwell in the comments, I implemented a test rule as per his answer:
public static class Retry implements TestRule {
private final int retryCount;
public Retry(int retryCount) {
this.retryCount = retryCount;
}
@Override
public Statement apply(final Statement base,
final Description description) {
return new Statement() {
@Override
@SuppressWarnings("synthetic-access")
public void evaluate() throws Throwable {
Throwable caughtThrowable = null;
int failuresCount = 0;
for (int i = 0; i < retryCount; i++) {
try {
base.evaluate();
} catch (Throwable t) {
caughtThrowable = t;
System.err.println(description.getDisplayName()
+ ": run " + (i + 1) + " failed:");
t.printStackTrace();
++failuresCount;
}
}
if (caughtThrowable == null) return;
throw new AssertionError(description.getDisplayName()
+ ": failures " + failuresCount + " out of "
+ retryCount + " tries. See last throwable as the cause.", caughtThrowable);
}
};
}
}
as a nested class in my test class - and added
@Rule
public Retry retry = new Retry(69);
before my test methods in the same class.
This indeed does the trick: it repeats the test 69 times, and if any run throws, an AssertionError with a message containing some statistics, plus the last Throwable as the cause, gets thrown at the end. So the statistics are also visible in the JUnit view of Eclipse.
