EasyMock verifying calls to mock in tearDown method after verify finished - java

I am seeing inconsistent behaviour in EasyMock tests that I don't understand.
My first test passes...
public class MockATest {

    private final AtomicLong aMock = createStrictMock(AtomicLong.class);

    @Before
    public void setUp() {
        aMock.set(101L);
    }

    @After
    public void tearDown() {
        aMock.set(999L);
    }

    @Test
    public void testA() {
        reset(aMock);
        replay(aMock);
        // TODO : test stuff here
        verify(aMock);
    }
}
... but my second test fails:
public class MockBTest {

    private final List<Long> bMock = createStrictMock(List.class);

    @Before
    public void setUp() {
        bMock.add(101L);
    }

    @After
    public void tearDown() {
        bMock.add(999L);
    }

    @Test
    public void testB() {
        reset(bMock);
        replay(bMock);
        // TODO : test stuff here
        verify(bMock);
    }
}
The failure reason is
Unexpected method call List.add(999)
I have 2 questions really...
Why is the behaviour different for the 2 tests?
Why is the add(999L) that happens in the tearDown method being verified after the verification in the testB method has already fully completed?
(I know I can make this work by adding another reset(bMock) after the verify(bMock), but I am not sure whether this is just avoiding the issue.)

Why is the behaviour different for the 2 tests?
Because AtomicLong.set is declared as void AtomicLong.set(long), so it is a void method and the recording is fine. List.add, however, is declared as boolean List.add(E), so it is not a void method. The correct way to record a non-void method is expect(bMock.add(101L)).andReturn(true).
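For illustration, a minimal sketch of that recording in the failing test's setUp(), assuming EasyMock's static imports and the bMock field from the question:
import static org.easymock.EasyMock.expect;

@Before
public void setUp() {
    // add(E) returns boolean, so the call has to be recorded via expect(...)
    // and given a return value.
    expect(bMock.add(101L)).andReturn(true);
}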
Why is the add(999L) that happens in the tearDown method being verified after the verification in the testB method has already fully completed?
Because execution never gets into testB(). EasyMock throws an error on the call to bMock.add(101L) in setUp(), so it goes directly to tearDown(), which fails as well and hides the exception from setUp().

Related

How to mark @Test as failed in @AfterMethod

I am trying to find out if there is any way in TestNG to mark a test method annotated with @Test as failed inside @AfterMethod.
@Test
public void sampleTest() {
    // do some stuff
}

@AfterMethod
public void tearDown() {
    // 1st operation
    try {
        // some operation
    } catch (Exception e) {
        // mark sampleTest as failed
    }
    // 2nd operation
    try {
        // perform some cleanup here
    } catch (Exception e) {
        // print something
    }
}
I have some verification to be done in all tests, which I am doing in the 1st try-catch block in tearDown(). If there is an exception in that block, the test should be marked as failed; then execution should proceed to the next try-catch block.
I cannot reverse the order of the try-catch blocks in tearDown(), because the 1st block depends on the 2nd.
To the best of my knowledge you cannot do it from within an @AfterMethod configuration method, because the ITestResult object that gets passed to your configuration method (yes, you can get access to the test method's result object by adding an ITestResult parameter to your @AfterMethod annotated method) is not used to update the original test method's result.
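For reference, here is a minimal sketch of that ITestResult injection (class and method names are only illustrative); reading the status works, but changing it at this point does not propagate back into the report:
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Test;

public class TearDownResultSample {

    @Test
    public void sampleTest() {
        // do some stuff
    }

    // TestNG injects the result of the test method that has just run.
    @AfterMethod
    public void tearDown(ITestResult result) {
        if (!result.isSuccess()) {
            System.err.println(result.getMethod().getMethodName()
                    + " failed with: " + result.getThrowable());
        }
        // Calling result.setStatus(...) here would not change the reported outcome.
    }
}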
But you can easily do this if you were to leverage the IHookable interface.
You can get more information on IHookable by referring to the official documentation here.
Here's an example that shows this in action.
import org.testng.IHookCallBack;
import org.testng.IHookable;
import org.testng.ITestResult;
import org.testng.annotations.Test;

public class TestClassSample implements IHookable {

    @Test
    public void testMethod1() {
        System.err.println("testMethod1");
    }

    @Test
    public void failMe() {
        System.err.println("failMe");
    }

    @Override
    public void run(IHookCallBack callBack, ITestResult result) {
        callBack.runTestMethod(result);
        if (result.getMethod().getMethodName().equalsIgnoreCase("failme")) {
            result.setStatus(ITestResult.FAILURE);
            result.setThrowable(new RuntimeException("Simulating a failure"));
        }
    }
}
Note: I am using TestNG 7.0.0-beta7 (latest released version as of today)

How do I stop automation if 50% of the test methods fail in TestNG?

I want to stop my execution if more than 50% of the @Test methods fail.
E.g.:
public class LoginTest {

    @Test
    public void ValidUserName() {
    }

    @Test
    public void InValidUserName() {
    }

    @Test
    public void ValidUserID() {
    }

    @Test
    public void ValidUserIDInvalidPassword() {
    }

    @Test
    public void EmptyUserNamePassword() {
    }
}
If ValidUserName(), InValidUserName() and ValidUserID() fail, that means more than 50% of LoginTest's @Test methods have failed, and now I do not want to execute ValidUserIDInvalidPassword() and EmptyUserNamePassword().
It would be great if anyone can help me on this.
Implement the IInvokedMethodListener interface and throw a SkipException when the threshold is reached. The code below uses a threshold of 30%.
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;
import org.testng.SkipException;

public class MyMethodInvoke implements IInvokedMethodListener {

    private int failure = 0;

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        int testCount = testResult.getTestContext().getAllTestMethods().length;
        if ((failure * 1.0) / testCount > 0.3) {
            throw new SkipException("Crossed the failure rate");
        }
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (testResult.getStatus() == ITestResult.FAILURE) {
            failure++;
        }
    }
}
@Listeners({MyMethodInvoke.class}) // or the fully-qualified class name
public class LoginTest {
    // ...
}
This works for tests in a single class; I have no idea how it behaves with tests spread across multiple classes in a suite, or with parallel execution.
What you want makes no sense.
Try restructuring your methods with dependencies. Example from the TestNG documentation:
@Test
public void serverStartedOk() {}

@Test(dependsOnMethods = { "serverStartedOk" })
public void method1() {}
see: http://testng.org/doc/documentation-main.html#dependent-methods
UPDATE:
There is the concept of successPercentage, but it is usually used per method in combination with invocationCount, for example with async invocations where one cannot guarantee that 100% of the invocations are successful. So one can do:
// Method is invoked 2 times. If 1 invocation passes, the test is considered OK/green.
@Test(timeOut = 2000, invocationCount = 2, successPercentage = 50)
public void waitForAnswer() throws InterruptedException {
    // ...
}
but this is not compatible with what you want.
UPDATE 2:
"What you want makes no sense." --> Read: "is not supported out of the box by TestNG". But there are some workarounds. See the nice answers in http://testng.1065351.n5.nabble.com/how-to-stop-a-test-suite-if-one-method-fails-td13441.html

dependsOnMethods for @AfterTest not finding the test method

I am trying the following code:
public class ShashiTest {

    @Test
    public void test1() {
        System.out.println("1===========");
    }

    @Test(dependsOnMethods = "test1")
    public void test2() {
        System.out.println("2===========");
    }

    @Test(dependsOnMethods = "test2")
    public void test3() {
        System.out.println("3===========");
    }

    @AfterMethod(dependsOnMethods = {"test2", "test3"})
    public void test4() {
        System.out.println("4===========");
    }
}
I am expecting the output to be:
1===========
2===========
4===========
3===========
4===========
But I am getting an exception saying the test method was not found:
com.ShashiTest.test4() is depending on method public void com.ShashiTest.test2(), which is not annotated with @Test or not included.
at org.testng.internal.MethodHelper.findDependedUponMethods(MethodHelper.java:111)
Where am I making the mistake? How can I achieve my goal?
@AfterMethod declares that this method is run after every method annotated with @Test. Right now you have a conflict: test4() is called after test1() and before test2(), while you also require it to run after test2(). Refer to this for a more in-depth discussion.
edit: I should probably make the call order more clear.
test1()->test4()
test2()->test4()
test3()->test4()
As you can see, requiring test4() to be run after test2() and test3() is in conflict with the @AfterMethod annotation requiring it to be called after every method.
dependsOnMethods does not work like that; it is only used to order test methods relative to each other.
The javadoc is clear enough IMO:
The list of methods this method depends on. There is no guarantee on the order on which the methods depended upon will be run, but you are guaranteed that all these methods will be run before the test method that contains this annotation is run. Furthermore, if any of these methods was not a SUCCESS, this test method will not be run and will be flagged as a SKIP. If some of these methods have been overloaded, all the overloaded versions will be run.
But the exception should not happen, so I opened an issue for it.
As for your need, which is running an @AfterMethod for specific methods only (it looks a bit odd, but why not), you can do the following:
import java.lang.reflect.Method;

public class ShashiTest {

    @Test
    public void test1() {
        System.out.println("1===========");
    }

    @Test(dependsOnMethods = "test1")
    public void test2() {
        System.out.println("2===========");
    }

    @Test(dependsOnMethods = "test2")
    public void test3() {
        System.out.println("3===========");
    }

    @AfterMethod
    public void test4(Method m) {
        switch (m.getName()) {
            case "test2":
            case "test3":
                System.out.println("4===========");
        }
    }
}
This should work as expected.
A bit late to answer, but I just faced this problem today. Error: com.expedia.FlightBooking.tearDown() is depending on method public void com.expedia.FlightBooking.flightBooking(), which is not annotated with @Test or not included.
Solution: changing dependsOnMethods to dependsOnGroups, e.g. @AfterTest(dependsOnGroups = {"flightBooking"}), solved my problem.
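For completeness, a minimal sketch of that arrangement (group and method names are only illustrative):
import org.testng.annotations.AfterTest;
import org.testng.annotations.Test;

public class FlightBookingTest {

    // Put the test method into a group ...
    @Test(groups = "flightBooking")
    public void flightBooking() {
        // booking steps
    }

    // ... and make the configuration method depend on the group instead of the method.
    @AfterTest(dependsOnGroups = {"flightBooking"})
    public void tearDown() {
        // cleanup
    }
}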

How can I get the JUnit assert failure reason in the tearDown method of a test

I have written one simple test with setUp, test and tearDown methods. In the test I have written 3 assert statements: the first one passes, the second one fails and the third one passes again.
Now I want to get the result of the test in tearDown, whether it passed or failed, and if it failed, what the reason was.
Help will be appreciated!!!
You could use a TestWatcher rule.
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class YourTest {

    @Rule
    public TestWatcher watchman = new TestWatcher() {
        @Override
        protected void failed(Throwable e, Description description) {
            // e is the AssertionError thrown by the failing assert;
            // log it (or its message) together with the test description here.
        }
    };

    @Test
    public void testSomething() {
        // your test
    }
}

How to run tearDown type method for a specific test in JUnit class with multiple tests?

I have a JUnit TestCase class with multiple test methods in it (as a requirement, we don't want to create a separate class for each test).
I want to create a tearDown-type method for EACH test method, which will run specifically for that test, not for ALL tests.
My problem is that in many tests I insert a record in the database, test it, and delete it after the test.
But if a test fails midway, control doesn't reach the end and my dummy record doesn't get deleted.
I think only ONE tearDown() is allowed per class, and this tearDown() doesn't know what object/record I created or inserted and what to delete.
I want to create a tearDown() or @After method just for one specific test, something like finally{} in Java for each method.
For example:
public class TestDummy extends TestCase {

    public void testSample1() {
        InsertSomeData1();
        assertFalse(true);
        runTearDown1();
    }

    public void testSample2() {
        InsertSomeData2();
        assertFalse(true);
        runTearDown2();
    }

    public void runTearDown1() {
        // delete dummy data from testSample1 ...
    }

    public void runTearDown2() {
        // delete dummy data from testSample2 ...
    }
}
Here control will never get to runTearDown1() or runTearDown2(), and I don't want one common tearDown() because it won't know what data I inserted; that is specific to each method.
It seems your test relies on a fixed database, and future tests will break if your current test breaks. What I'd recommend is not to focus on this particular problem (a test-specific tearDown method that runs for each test), but on your main problem: broken tests. Before your test runs, it should always work with a clean database, and this should be the case for each test. Right now, your first test has a relationship with the second (through the database).
The right approach is to recreate your database before each test, or at the very least reset it to a basic state. In this case, you'll want a test like this:
public class TestDummy {

    // This code runs (once) before any test in this class is run.
    // Note: @BeforeClass/@AfterClass methods must be static in JUnit 4.
    @BeforeClass
    public static void setupDatabase() {
        // code that creates the database schema
    }

    // This code runs after all tests in this class have run.
    @AfterClass
    public static void teardownDatabase() {
        // code that deletes your database, leaving no trace whatsoever
    }

    // This code runs before each test case. Use it to, for example, purge the
    // database and fill it with default data.
    @Before
    public void before() {
    }

    // You can use this method to delete all test data inserted by a test method too.
    @After
    public void after() {
    }

    // Now for the tests themselves: we should be able to assume the database will
    // always be in the correct state, independent of the previous or next test cases.
    @Test
    public void testSample2() {
        insertSomeData();
        assertTrue(isValid(someData)); // placeholder assertion from the original sketch
    }
}
Disclaimer: these are JUnit 4 tests (using annotations); they might not be exactly the right annotations, and might not even be the right answer(s).
You could have something like this:
interface DBTest {
    void setUpDB();
    void test();
    void tearDownDB();
}

class DBTestRunner {
    void runTest(DBTest test) throws Exception {
        test.setUpDB();
        try {
            test.test();
        } finally {
            test.tearDownDB();
        }
    }
}
public void test48() throws Exception {
    new DBTestRunner().runTest(new DBTest() {
        public void setUpDB() { /* ... */ }
        public void test() { /* ... */ }
        public void tearDownDB() { /* ... */ }
    });
}
@iluxa - great, your solution is perfect! In one test class I created two tests, test48 and test49 (the same as testSample1 and testSample2 in my code above), and voilà: every test method now gets its own setUp() and tearDown(). The only thing is that this solution looks a little complicated, as I need to use DBTestRunner in each method, but I don't see a better one. I was thinking JUnit might have some direct solution, like @After or tearDown() with some parameter or something.
Thanks a lot.
Use MethodRule:
import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

public class MyRule implements MethodRule {

    @Override
    public Statement apply(final Statement base, FrameworkMethod method, Object target) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                try {
                    base.evaluate();
                } catch (AssertionError e) {
                    doFail();    // placeholder: your failure handling
                } finally {
                    doAnyway();  // placeholder: your per-test cleanup
                }
            }
        };
    }
}
Then declare it in your test class:
public class TestDummy {

    @Rule
    public MethodRule rule = new MyRule();

    // ...
}
