I want to continue test run execution even when one or more assertions fail in TestNG.
I referred to the links below while trying to implement soft assertions in my project:
http://beust.com/weblog/2012/07/29/reinventing-assertions/
http://seleniumexamples.com/blog/guide/using-soft-assertions-in-testng/
http://www.seleniumtests.com/2008/09/soft-assertion-is-check-which-doesnt.html
But I don't understand the flow of code execution: which functions are called, and in what order.
Kindly help me understand how the soft assertions work.
Code:
import org.testng.asserts.Assertion;
import org.testng.asserts.IAssert;

// Implementation of the soft assertion
public class SoftAssertions extends Assertion {
    @Override
    public void executeAssert(IAssert a) {
        try {
            a.doAssert();
        } catch (AssertionError ex) {
            System.out.println(a.getMessage());
        }
    }
}
// Calling the soft assertion
SoftAssertions sa = new SoftAssertions();
sa.assertTrue(actualTitle.equals(expectedTitle),
        "Login Success, But Uname and Pwd are wrong");
Note:
Execution continues even though the above assertion fails.
Soft assertions work by storing the failure in local state (maybe logging them to stderr as they are encountered). When the test is finished it needs to check for any stored failures and, if any were encountered, fail the entire test at that point.
I believe what the maintainer of TestNG had in mind was a call to myAssertion.assertAll() at the end of the test which will run Assert.fail() and make the test fail if any previous soft-assertion checks failed.
You can make this happen yourself by adding a @BeforeMethod method that initializes your local soft-assertion object, using that object in your test, and adding an @AfterMethod method that runs assertAll() on it.
Be aware that this @BeforeMethod/@AfterMethod approach makes your test non-thread-safe, so each test must be run within a new instance of your test class. If your test needs to be thread-safe, it is preferable to create the soft-assertion object inside the test method itself and run the assertAll() check at the end of the method. One of the cool features of TestNG is its ability to run multi-threaded tests, so be aware of that as you implement these soft asserts.
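For reference, newer TestNG versions ship this pattern ready-made as org.testng.asserts.SoftAssert, so you don't have to hand-roll the subclass. A minimal thread-safe sketch (the test name and checked values are illustrative):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class LoginTitleTest {
    @Test
    public void titleChecks() {
        // Created inside the test method, so parallel runs don't share state.
        SoftAssert sa = new SoftAssert();
        sa.assertTrue("Home".equals("Home"), "title mismatch");  // recorded, not thrown
        sa.assertEquals(2 + 2, 4, "arithmetic check");           // recorded, not thrown
        sa.assertAll(); // replays recorded failures; fails the test if any occurred
    }
}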
Related
I'm executing a few hundred tests in test classes, each consisting of a single beforeMethod test, followed by a variable number of primary tests, and occasionally an afterMethod.
The purpose of the beforeMethod test is to populate the test environment with data used in the primary tests, while keeping its logging and recording separate from the primary tests, which we report on.
We have set up an automatic issue-creation tool using a listener. We've found that adding execution time to this tool would give great value, since it would show us how long it takes to reproduce the errors in those issues.
To this end, I have made a simple addition to this code, that uses ITestResult.getEndMillis() and getStartMillis() to get the execution time.
The problem we're experiencing with this approach is that if the test encounters a failure during the primary tests, ITestResult.getStartMillis() will not account for the start time of the before method, only that of the primary method.
How would we go about determining the start time of the test class itself (always the beforeMethod), rather than just the current method?
Since we're running hundreds of tests in a massive setup, a solution that allows this without changing each separate test class, would definitely be preferable.
The setup of the Java test classes looks something like this (scrubbed of business specifics):
package foobar;

import foobar.*;

@UsingTunnel
@Test
public class FLOWNAME_TESTNAME extends TestBase {

    private final Value<String> parameter;

    public FLOWNAME_TESTNAME(Value<String> parameter) {
        super(PropertyProviderImpl.get());
        this.parameter = parameter;
    }

    @StoryCreating(test = "TESTNAME")
    @BeforeMethod
    public void CONDITIONS() throws Throwable {
        new TESTNAME_CONDITIONS(parameter).executeTest();
    }

    @TestCoverage(test = "TESTNAME")
    public void PRIMARYTESTS() throws Throwable {
        TESTCASE1 testcase1 = new TESTCASE1(parameter.get());
        testcase1.executeTest();
        testcase1.throwSoftAsserts();

        TESTCASE2 testcase2 = new TESTCASE2(parameter.get());
        testcase2.executeTest();
        testcase2.throwSoftAsserts();
    }
}
So in this case, the problem arises when the listener detects a failure in either TESTCASE1 or TESTCASE2: the reported time will not include the execution time of TESTNAME_CONDITIONS, because that test is inside a different method. Yet practically speaking they are part of the same test flow, i.e. the same test class.
I found a solution to the issue.
It is possible to use ITestResult.getTestContext().getStartDate().getTime() to obtain the time of which the test class itself is run, rather than the current test method.
The final solution was quite simply:

(result.getEndMillis() - result.getTestContext().getStartDate().getTime()) / 60000

where "result" is the ITestResult.
This outputs the time between the start of the test and the end of the last executed method.
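A hedged sketch of how that plugs into a listener (the class name is illustrative; TestListenerAdapter and the ITestResult calls are standard TestNG API):

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class ExecutionTimeListener extends TestListenerAdapter {
    @Override
    public void onTestFailure(ITestResult result) {
        // Measure from the start of the test context rather than the current
        // method, so the beforeMethod time is included.
        long minutes = (result.getEndMillis()
                - result.getTestContext().getStartDate().getTime()) / 60000;
        System.out.println(result.getName() + " failed after ~" + minutes + " min");
    }
}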
I am trying to write a test method in TestNG such that, after it fails, the entire test suite stops running.
@Test
public void stopTestingIfThisFailed() throws Exception {
    someTestSteps();
    if (softAsserter.isOneFailed()) {
        asserter.fail("stopTestingIfThisFailed test Failed");
        throw new Exception("Test can't continue, fail here!");
    }
}
The exception is being thrown, but the other test methods still run.
How can this be solved?
You can use the dependsOnMethods or dependsOnGroups annotation parameter in your other test methods:
@Test(dependsOnMethods = {"stopTestingIfThisFailed"})
public void testAnotherTestMethod() {
}
JavaDoc of the dependsOnMethods parameter:
The list of methods this method depends on. There is no guarantee on the order on which the methods depended upon will be run, but you are guaranteed that all these methods will be run before the test method that contains this annotation is run. Furthermore, if any of these methods was not a SUCCESS, this test method will not be run and will be flagged as a SKIP. If some of these methods have been overloaded, all the overloaded versions will be run.
See https://testng.org/doc/documentation-main.html#dependent-methods
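For the dependsOnGroups variant mentioned above, a hedged fragment along the same lines (the group name is illustrative; someTestSteps() is the question's own method):

@Test(groups = {"showstopper"})
public void stopTestingIfThisFailed() throws Exception {
    someTestSteps(); // fails or throws on a fatal problem
}

// Skipped automatically (flagged SKIP) if anything in the "showstopper" group failed:
@Test(dependsOnGroups = {"showstopper"})
public void testAnotherTestMethod() {
}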
It depends on what you expect (there is no direct support for this in TestNG). You can create a ShowStopperException that is thrown in a @Test method, and then in your ITestListener implementation (see the docs) call System.exit(1) (or whatever number) when you find this exception in the result; but there will be no report, and in general it's not good practice. The second option is to have a base class that is the parent of all test classes, plus a context variable that records the ShowStopperException; a @BeforeMethod in the parent class checks it and throws SkipException, so the workflow looks like:
test passed
test passed
showstopper exception in some test
test skipped
test skipped
test skipped
...
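A minimal sketch of that second option, assuming a shared base class and a static flag (the names are illustrative):

import org.testng.SkipException;
import org.testng.annotations.BeforeMethod;

public abstract class ShowStopperAwareTestBase {
    // Set to true by whichever test hits the fatal failure.
    protected static volatile boolean showStopperHit = false;

    @BeforeMethod
    public void skipIfShowStopperHit() {
        if (showStopperHit) {
            // SkipException makes TestNG flag the method as SKIP instead of running it.
            throw new SkipException("Skipping: a show-stopper failure occurred earlier");
        }
    }
}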
I solved the problem like this: after a test that mustn't fail does fail, I write data to a temporary text file.
Later, in the next test, I added code in the @BeforeClass that checks the data in the aforementioned text file. If a show stopper was found, I kill the current process.
If a test that "can't" fail actually fails:
public static void saveShowStopper() {
    try {
        General.createFile("ShowStopper", "tempShowStopper.txt");
    } catch (ParseException e) {
        e.printStackTrace();
    }
}
The @BeforeClass validating code:
@BeforeClass(alwaysRun = true)
public void beforeClass(ITestContext testContext, @Optional String step,
        @Optional String suiteLoopData, @Optional String group) throws Exception {
    boolean wasShowStopperFound = APIUtils.loadShowStopper();
    if (wasShowStopperFound) {
        Thread.currentThread().interrupt();
        return;
    }
}
TestNG behaves as desired if you throw a specific exception, SkipException, from the @BeforeSuite setup method.
See (possible dupe)
TestNG - How to force end the entire test suite from the BeforeSuite annotation if a condition is met
If you want to do it from an arbitrary test, there doesn't appear to be a framework mechanism. But you could always flip a flag and check that flag in a @BeforeTest setup method. Before you jump to that, maybe have a think about whether you could check once before the whole suite runs and just abort there (i.e. @BeforeSuite); a sketch follows.
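A hedged sketch of the @BeforeSuite abort (the precondition check is a hypothetical stand-in):

import org.testng.SkipException;
import org.testng.annotations.BeforeSuite;

public class SuiteSetup {
    @BeforeSuite
    public void abortIfPreconditionFails() {
        if (!environmentIsReady()) {
            // Throwing SkipException from @BeforeSuite causes the dependent
            // configuration and test methods to be skipped.
            throw new SkipException("Aborting suite: precondition not met");
        }
    }

    private boolean environmentIsReady() {
        return false; // hypothetical check; replace with a real probe
    }
}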
I am using ExtentReports to add reporting to my tests, which are written with Java and Selenium.
I notice that if an ExtentTest has two logs, INFO and PASSED, and the PASS log is never actually reached, the test is still considered PASSED. How can I change this so that a test which did not pass is automatically marked as Failed?
It is not possible to change the behavior to fail the test case by default. It's a design decision that every test framework follows (TestNG, JUnit, NUnit, etc.), not just Extent Reports.
The assumption at the start of a test is that it will pass. So even if you have only INFO logs, the test case is marked as passed.
The reverse is not symmetric, though: even a single FAIL status anywhere in the test fails it, regardless of how many PASS logs you have.
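To illustrate that status precedence with the Extent Reports 2.x API (a minimal sketch; the file name and log messages are made up):

import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;

public class StatusPrecedenceDemo {
    public static void main(String[] args) {
        ExtentReports report = new ExtentReports("precedence-demo.html");
        ExtentTest logger = report.startTest("status precedence");
        logger.log(LogStatus.INFO, "navigated to login page"); // INFO alone => PASSED
        logger.log(LogStatus.PASS, "title verified");          // still PASSED
        logger.log(LogStatus.FAIL, "user menu missing");       // one FAIL => FAILED
        report.endTest(logger);
        report.flush();
    }
}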
You don't provide details of your code or of the Extent Reports version you are using.
Assuming you are using Extent Reports version 2.40, use this code:
@AfterMethod
public void tearDown(ITestResult result) {
    if (result.getStatus() == ITestResult.FAILURE) {
        // hopefully you know how to create the ExtentTest and ExtentReports instances;
        // "logger" is the ExtentTest instance
        logger.log(LogStatus.FAIL, "Title verification", image);
    }
    // "report" is the ExtentReports instance
    report.endTest(logger);
    report.flush();
}
I have searched around but have been unsuccessful in determining whether the following approach is possible / good practice. Basically what I would like to do is the following:
Create JUnit tests which use DBUnit to initialize the data. Run multiple test methods, each of which runs off the same initial dataset. Roll back after each test method to the state right after the initial setUp function. After all test methods have run, roll back any changes that were made in the setUp function. At this point the data in the database should be exactly the same as it was before the JUnit test class ran.
Ideally I would not have to reinitialize the data before each test case because I could just roll back to the state right after setUp.
I have been able to roll back individual test methods but have been unable to roll back changes made in setUp after all test methods have run.
NOTE: I am aware of the different functionality of DBUnit such as CLEAN_INSERT, DELETE etc. I am using the Spring framework to inject my dataSource.
An example layout would look like:
public class TestClass {

    @Before
    public void setUp() {
        // Calls a method in a different class which uses DBUnit to initialize the database
    }

    @Test
    public void runTest1() {
        // Runs a test which may insert/delete data in the database.
        // After running the test, the database is in the same state as it was after setUp.
    }

    @Test
    public void runTest2() {
        // Runs a test which may insert/delete data in the database.
        // After running the test, the database is in the same state as it was after setUp.
    }

    // After runTest1 and runTest2 have finished, the database is rolled back to the
    // state before any of the methods above ran.
    // The data is unchanged, as if this class had never been run.
}
I would be running the tests in a development database however I would prefer to not affect any data currently in the database. I am fine with running CLEAN_INSERT at the start to initialize the data however after all test methods have run I would like the data back to how it was before I ran my JUnit test.
Thanks in advance
Just as with setUp, JUnit offers a tearDown method executed after each test method, which you could use to roll back. Also, starting with JUnit 4, you have the following annotations (a rollback sketch follows the list):
@BeforeClass: run once before running any of your tests in the test case
@Before: run every time before a test method
@After: run every time after a test method
@AfterClass: run once after all your tests in the current suite have been executed
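A minimal sketch of the rollback-per-test part, assuming the tests and the DBUnit setUp share a single Spring-injected DataSource connection (the class name is illustrative):

import java.sql.Connection;
import javax.sql.DataSource;
import org.junit.After;
import org.junit.Before;

public abstract class RollbackTestBase {
    protected DataSource dataSource; // injected by Spring in the real setup
    protected Connection connection;

    @Before
    public void setUp() throws Exception {
        connection = dataSource.getConnection();
        connection.setAutoCommit(false); // open a transaction per test
        // run the DBUnit CLEAN_INSERT through this same connection here
    }

    @After
    public void tearDown() throws Exception {
        connection.rollback(); // undo everything the test (and setUp) changed
        connection.close();
    }
}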
We solved a similar problem at the oVirt open source project. Please take a look at the code residing at engine\backend\manager\modules\dal\src\test\java\org\ovirt\engine\core\dao\BaseDAOTestCase.java.
In general, look at what we did there in the @BeforeClass and @AfterClass methods. You can use it on a per-method basis. We used the Spring Test framework for that.
I'm having a strange problem with Easymock 3.0 and JUnit 4.8.2.
The problem only occurs when executing the tests from Maven and not from Eclipse.
This is the unit test (very simple):
...
protected ValueExtractorRetriever mockedRetriever;
...
@Before
public void before() {
    mockedRetriever = createStrictMock(ValueExtractorRetriever.class);
}

@After
public void after() {
    reset(mockedRetriever);
}

@Test
public void testNullValueExtractor() {
    expect(mockedRetriever.retrieve("PROP")).andReturn(null).once();
    replay(mockedRetriever);
    ValueExtractor retriever = mockedRetriever.retrieve("PROP");
    assertNull(retriever);
    assertTrue(true);
}
And I get:
java.lang.IllegalStateException: 1 matchers expected, 2 recorded.
The weird thing is that I'm not even using an argument matcher, and that is the only method of the test! And to make it even worse, it works from Eclipse and fails from Maven!
I found a few links which didn't provide me with an answer:
Another StackOverflow post
Expected Exceptions in JUnit
If I change the unit test and add one more method (which does use an argument matcher):
@Test
public void testIsBeforeDateOk() {
    expect(mockedRetriever.retrieve((String) anyObject()))
            .andReturn(new PofExtractor()).anyTimes();
    replay(this.mockedRetriever);

    FilterBuilder fb = new FilterBuilder();
    assertNotNull(fb);
    CriteriaFilter cf = new CriteriaFilter();
    assertNotNull(cf);
    cf.getValues().add("2010-12-29T14:45:23");
    cf.setType(CriteriaType.DATE);
    cf.setClause(Clause.IS_BEFORE_THE_DATE);

    CriteriaQueryClause clause = CriteriaQueryClause.fromValue(cf.getClause());
    assertNotNull(clause);
    assertEquals(CriteriaQueryClause.IS_BEFORE_THE_DATE, clause);
    clause.buildFilter(fb, cf, mockedRetriever);
    assertNotNull(fb);

    Filter[] filters = fb.getFilters();
    assertNotNull(filters);
    assertEquals(filters.length, 1);
    verify(mockedRetriever);
    logger.info("OK");
}
This last method passes the test, but the other one does not. How is this possible?!
Regards,
Nico
More links:
"bartling.blogspot.com/2009/11/using-argument-matchers-in-easymock-and.html"
"www.springone2gx.com/blog/scott_leberknight/2008/09/the_n_matchers_expected_m_recorded_problem_in_easymock"
"stackoverflow.com/questions/4605997/3-matchers-expected-4-recorded"
I had a very similar problem and wrote up my findings at the link below.
http://www.flyingtomoon.com/2011/04/unclosed-record-state-problem-in.html (just updated)
I believe the problem lies in another test that affects your current one: it lives in a different test class, yet it breaks your test. To locate the real source of the problem, I advise disabling the suspect tests one by one until you identify the culprit.
That is actually what I did: I disabled the failing tests one by one until I found the problematic test, one that throws an exception which is caught by the @Test(expected = ...) annotation without stopping the recording.
We had this problem recently, and it only reared its head when we ran the entire test suite (1100+ test cases). Eventually, I found that I could put a breakpoint on the test that was blowing up, and then step back in the list of tests that Eclipse had already executed, looking for the previous test case that had set up a mock incorrectly.
Our problem turned out to be somebody using EasyMock.anyString() outside of an EasyMock.expect(...) statement. Sure enough, it was done two tests before the one that was failing.
So essentially, what was happening is that the misuse of a matcher outside of an expect statement was poisoning EasyMock's state, and the next time we tried to create a mock, EasyMock would blow up.
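A hedged reconstruction of that failure mode (the class name is hypothetical; ValueExtractorRetriever is the interface from the question). Running something like badTest() before innocentTest() reproduces the "matchers expected/recorded" error:

import static org.easymock.EasyMock.anyString;
import static org.easymock.EasyMock.createStrictMock;
import static org.easymock.EasyMock.expect;

public class PoisonedStateDemo {

    public void badTest() {
        // WRONG: a matcher invoked outside expect() is recorded in EasyMock's
        // thread-local state and is never consumed...
        anyString();
    }

    public void innocentTest() {
        ValueExtractorRetriever mock = createStrictMock(ValueExtractorRetriever.class);
        // ...so this otherwise-correct expectation now sees a leftover matcher and
        // blows up with "IllegalStateException: 1 matchers expected, 2 recorded."
        expect(mock.retrieve("PROP")).andReturn(null);
    }
}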
I believe the first error message:

java.lang.IllegalStateException: 1 matchers expected, 2 recorded.

means your mockedRetriever method was called twice while the test expected it to be called once. So your Eclipse and Maven configurations must differ.

Also, I see no reason to reset the mock after the test. Just keep in mind that JUnit creates a new class instance for every single test method.
EDITED:
As for why the last test method passes, the answer is:

expect(mockedRetriever.retrieve((String) anyObject())).andReturn(new PofExtractor()).anyTimes();

whereas in your first test method it is:

expect(mockedRetriever.retrieve("PROP")).andReturn(null).once();

which is equivalent to:
expect(mockedRetriever.retrieve("PROP")).andReturn(null);