I am using ExtentReports to add reporting to my tests, which are written with Java and Selenium.
I notice that with an ExtentTest that can log both "INFO" and "PASS", if the pass log is never recorded the test is still considered PASSED. How can I change this so that a test that does not explicitly pass is automatically marked as Failed?
It is not possible to change this behavior so that a test case fails by default. That is a design decision every test framework follows (TestNG, JUnit, NUnit, etc.), not just ExtentReports.
The assumption at the start of a test is that it will pass. So even if you have only INFO logs, the test case is marked as passed.
A default of FAILED until a PASS is encountered would not work either, because a single fail status anywhere in the test fails it, regardless of how many pass logs you have.
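For illustration, here is how that resolution plays out, assuming the ExtentReports 2.x API (com.relevantcodes): a single FAIL log outweighs any number of INFO or PASS logs.

import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;

public class StatusResolutionDemo {
    public static void main(String[] args) {
        ExtentReports extent = new ExtentReports("StatusDemo.html", true);

        ExtentTest test = extent.startTest("Status resolution");
        test.log(LogStatus.INFO, "setup done");        // INFO alone -> test ends up PASSED
        test.log(LogStatus.PASS, "step 1 verified");
        test.log(LogStatus.FAIL, "step 2 broken");     // one FAIL -> whole test is FAILED

        extent.endTest(test);
        extent.flush();
    }
}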
You don't provide details of your code or the ExtentReports version you are using.
Assuming you are using ExtentReports version 2.40, you can use this code:
@AfterMethod
public void tearDown(ITestResult result) {
    if (result.getStatus() == ITestResult.FAILURE) {
        // hopefully you know how to create the ExtentTest and ExtentReports instances
        // logger is the ExtentTest instance
        logger.log(LogStatus.FAIL, "Title verification", image);
    }
    // report is the ExtentReports instance
    report.endTest(logger);
    report.flush();
}
This is more of a question on test automation framework design. Very hard indeed to summarize the whole question in one line :)
I am creating a test automation framework using Selenium. Mostly I am reading the data (method names) from an Excel file.
In my main Runner class I get a list of test cases. Each test case has a set of methods (which can be the same or different) that I have defined in a Java class, and I execute each method using the Java reflection API. Everything is fine up to this point.
Now I want to incorporate TestNG and reporting/logging into my automation suite. The problem is that I can't use @Test for each method, since TestNG considers one @Test to be one test case, but one of my test cases might have more than one method. My methods are more like test steps of a test case; the reason is that I don't want to repeat code. I want to create a @Test dynamically, calling different sets of methods and executing them in Java, or to define individual test steps for a @Test. I went through the TestNG documentation but could not locate any feature that handles this situation.
Any help is really appreciated, and if you have any other thoughts on handling this situation, I am here to listen.
Did you try the following?
@Test(priority = 1)
public void step1() {
    //code
}

@Test(priority = 2)
public void step2() {
    //code
}
You need to set "priority" on each method; otherwise the execution order is not guaranteed.
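If you need genuinely dynamic test cases (one @Test per row of your Excel sheet rather than a fixed set of step methods), TestNG's @Factory annotation may also fit. The sketch below is only an illustration of that idea; StepRunner, its testCaseId parameter, and the Excel lookup are hypothetical stand-ins for your own code.

import org.testng.annotations.Factory;
import org.testng.annotations.Test;

public class ExcelDrivenFactory {
    @Factory
    public Object[] createTests() {
        // one test instance per test case read from the Excel sheet (stubbed here)
        return new Object[] { new StepRunner("TC-001"), new StepRunner("TC-002") };
    }
}

class StepRunner {
    private final String testCaseId;

    StepRunner(String testCaseId) {
        this.testCaseId = testCaseId;
    }

    @Test
    public void runSteps() throws Exception {
        // look up the step method names for testCaseId and invoke them in order
        // with the reflection code you already have; a failing step fails this @Test
    }
}

Each StepRunner instance then shows up as its own test in the TestNG report.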
I have a JAR file from which I load test classes (JUnit tests) via reflection.
I created a TestSuite instance and added the tests to it.
In order to check my code, I tried to add only one example test class to the test suite.
TestSuite suite = new TestSuite();
suite.addTest(new JUnit4TestAdapter(ExampleTest.class));
Then I called suite.run() in order to run the test:
TestResult result = new TestResult();
suite.run(result);
The problem is that when the test is done, the failures list that should be in result is empty.
Enumeration<TestFailure> failures = result.failures(); // = empty list
How can I get the failures using a TestSuite? If I use JUnitCore.runClasses, I do get the failures list, but it can't be used with an instance of TestSuite, which I have to use since I get the test classes as input.
Edit-1:
If it is possible to create a class that extends TestSuite and add classes to the suite dynamically, that will work for my needs too.
Any suggestion would be great.
Edit-2:
From more searching on the web, I saw there is a difference between a failure and a TestFailure. How can I fail a test with a TestFailure and not a failure?
After some more debugging, I found out that the TestResult object did catch the failures but reported them as errors (AssertionErrors).
I also found out that TestResult.errors() returns an enumeration of TestFailure. Each TestFailure contains information about the thrown exception, so I can now distinguish between errors and failures by checking which of the errors are AssertionErrors and which are not.
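A minimal sketch of that check, using only the junit.framework API already shown above:

import java.util.Enumeration;
import junit.framework.TestFailure;

// after suite.run(result):
Enumeration<TestFailure> errors = result.errors();
while (errors.hasMoreElements()) {
    TestFailure tf = errors.nextElement();
    if (tf.thrownException() instanceof AssertionError) {
        // an assertion failure (a JUnit 4 "failure" surfaced here as an error)
        System.out.println("Failure: " + tf.exceptionMessage());
    } else {
        // a genuine error: the test threw an unexpected exception
        System.out.println("Error: " + tf.thrownException());
    }
}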
Let us have the following TestNG tests:
@Test
public void Test1() {
}

@Test(dependsOnMethods = {"Test1"})
public void Test2() {
}

@Test(dependsOnMethods = {"Test2"})
public void Test3() {
}
The tests serve as functional end-to-end web UI tests (with Selenium WebDriver). Each test method is a step in the context of a long e2e scenario.
How can we refactor the tests to make them more readable? The best solution may be to remove all these 'dependsOnMethods' parameters from the annotations and provide the 'dependsOnMethods' functionality implicitly. The question is: how?
Expectations in priority order:
Find a solution keeping TestNG on board
Keep TestNG, but involve another instrument, e.g. easyb, using Groovy instead of Java... Can I use TestNG groups with easyb? Is it possible to use easyb not in BDD style but in a 'JUnit' style, like:
given "user is logged in and sets expert mode", {
//... setup, a la #BeforeClass
}
then "user can enable bla bla bla" {
//...
}
then "user can check poo poo poo" {
//...
}
then "user save changes" {
//...
}
then "user revert changes", {
// Tear Down, a la #AfterClass
}
Are there any problems with 'just starting to write your other test classes in Groovy in the same Java project'?
Kick TestNG out, but use what instead? The TestNG groups feature is needed.
One crazy solution might be to break everything and move to Thucydides. But that is not an option in my case.
PS
I know dependent tests are a 'bad practice'. But I believe that 'testing the dependencies themselves' is also a good point in automation...
Yes.. there is an alternative to using dependsOnMethods... DON'T DO IT!
As I've answered here...
This is terrible test logic. As an experienced professional software test engineer, I advise you to immediately leave the automation path that you are on.
Good test architecture requires that each method be SELF-SUFFICIENT and not depend on other tests to complete before continuing. Why? Say Test 2 depends on Test 1, and Test 1 fails. Now Test 2 will fail. Eventually you'll have tests 1, 2, 3, 4, 5 failing, and you won't even know what the cause was.
My recommendation to you, sir, would be to create self-sufficient, maintainable, and short tests.
This is a great read which will help you in your endeavours: http://www.lw-tech.com/q1/ug_concepts.htm
Keep TestNG, but involve another instrument, e.g. easyb, using Groovy instead of Java... Can I use TestNG groups with easyb? Is it possible to use easyb not in BDD style but in a 'JUnit' style
This is the choice I used.
You will have no TestNG-like groups with easyb (at least out of the box), and so far I haven't found any way to 'annotate' easyb/Groovy 'test methods'.
But now I have:
implicitly dependent methods. Though they are not fully dependent: if some methods are 'disabled', the next ones will still be executed. But my actual goal is achieved: since the test file is a Groovy script, all 'test methods' are executed in the order they are written. Once you need to disable a 'test method', you can simply comment out its code; this disables the test from execution and shows it as 'pending' in the report.
test methods with readable "string" names.
This is how tests can look:
beforeAllSetup()
before "each method setup", {
//implementation
}
after "each method tear down", {
//...
}
it "first test this", {
//...
}
it "this will be shown as pending in the report", /*{
//...
}*/
it "then test this", {
//...
}
afterAllTearDown()
I want test run execution to continue even when one or more assertions fail in TestNG.
I referred to the links below in order to implement soft assertions in my project.
http://beust.com/weblog/2012/07/29/reinventing-assertions/
http://seleniumexamples.com/blog/guide/using-soft-assertions-in-testng/
http://www.seleniumtests.com/2008/09/soft-assertion-is-check-which-doesnt.html
But I don't understand the flow of code execution: the function calls, the FLOW.
Kindly help me understand the workflow of the soft assertions.
Code:
import org.testng.asserts.Assertion;
import org.testng.asserts.IAssert;

// Implementation of soft assertion
public class SoftAssertions extends Assertion {
    @Override
    public void executeAssert(IAssert a) {
        try {
            a.doAssert();
        } catch (AssertionError ex) {
            // swallow the failure so execution continues; just log the message
            System.out.println(a.getMessage());
        }
    }
}

// Calling the soft assertion
SoftAssertions sa = new SoftAssertions();
sa.assertTrue(actualTitle.equals(expectedTitle),
        "Login Success, But Uname and Pwd are wrong");
Note:
Execution continues even though the above assertion fails.
Soft assertions work by storing the failure in local state (maybe logging them to stderr as they are encountered). When the test is finished it needs to check for any stored failures and, if any were encountered, fail the entire test at that point.
I believe what the maintainer of TestNG had in mind was a call to myAssertion.assertAll() at the end of the test which will run Assert.fail() and make the test fail if any previous soft-assertion checks failed.
You can make this happen yourself by adding a @BeforeMethod that initializes your soft-assertion object, using it in your test, and adding an @AfterMethod that runs the assertAll() method on your soft-assertion object.
Be aware that this @BeforeMethod/@AfterMethod approach makes your test non-thread-safe, so each test must be run within a new instance of your test class. Creating your soft-assertion object inside the test method itself and running the assertAll() check at the end of the method is preferable if your test needs to be thread-safe. One of the cool features of TestNG is its ability to run multi-threaded tests, so be aware of that as you implement these soft asserts.
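If your TestNG version ships org.testng.asserts.SoftAssert, you get this pattern out of the box instead of subclassing Assertion yourself. A minimal sketch, with made-up title strings for illustration:

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class LoginPageTest {
    @Test
    public void titleAndHeaderChecks() {
        // created inside the test method, so the test stays thread-safe
        SoftAssert sa = new SoftAssert();

        // recorded but not thrown: execution continues past this failure
        sa.assertEquals("Dashboard - Acme", "Login - Acme", "wrong page title");

        // passes, nothing recorded
        sa.assertTrue(true, "header visible");

        // replays every recorded failure and fails the test if there were any
        sa.assertAll();
    }
}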
I'm having a strange problem with EasyMock 3.0 and JUnit 4.8.2.
The problem only occurs when executing the tests from Maven, not from Eclipse.
This is the unit test (very simple):
...
protected ValueExtractorRetriever mockedRetriever;
...
@Before
public void before() {
    mockedRetriever = createStrictMock(ValueExtractorRetriever.class);
}

@After
public void after() {
    reset(mockedRetriever);
}

@Test
public void testNullValueExtractor() {
    expect(mockedRetriever.retrieve("PROP")).andReturn(null).once();
    replay(mockedRetriever);
    ValueExtractor retriever = mockedRetriever.retrieve("PROP");
    assertNull(retriever);
    assertTrue(true);
}
And I get:
java.lang.IllegalStateException: 1 matchers expected, 2 recorded.
The weird thing is that I'm not even using an argument matcher, and that is the only method of the test! To make it even worse, it works from Eclipse and fails from Maven!
I found a few links which didn't provide me with an answer:
Another StackOverflow post
Expected Exceptions in JUnit
If I change the unit test and add one more method (which does use an argument matcher):
@Test
public void testIsBeforeDateOk() {
    expect(mockedRetriever.retrieve((String) anyObject())).andReturn(new PofExtractor()).anyTimes();
    replay(this.mockedRetriever);
    FilterBuilder fb = new FilterBuilder();
    assertNotNull(fb);
    CriteriaFilter cf = new CriteriaFilter();
    assertNotNull(cf);
    cf.getValues().add("2010-12-29T14:45:23");
    cf.setType(CriteriaType.DATE);
    cf.setClause(Clause.IS_BEFORE_THE_DATE);
    CriteriaQueryClause clause = CriteriaQueryClause.fromValue(cf.getClause());
    assertNotNull(clause);
    assertEquals(CriteriaQueryClause.IS_BEFORE_THE_DATE, clause);
    clause.buildFilter(fb, cf, mockedRetriever);
    assertNotNull(fb);
    Filter[] filters = fb.getFilters();
    assertNotNull(filters);
    assertEquals(filters.length, 1);
    verify(mockedRetriever);
    logger.info("OK");
}
This last method passes the test, but not the other one. How is this possible?!
Regards,
Nico
More links:
"bartling.blogspot.com/2009/11/using-argument-matchers-in-easymock-and.html"
"www.springone2gx.com/blog/scott_leberknight/2008/09/the_n_matchers_expected_m_recorded_problem_in_easymock"
"stackoverflow.com/questions/4605997/3-matchers-expected-4-recorded"
I had a very similar problem and wrote up my findings in the link below.
http://www.flyingtomoon.com/2011/04/unclosed-record-state-problem-in.html (just updated)
I believe the problem is in another test that affects your current test; it may even be in another test class. In order to find the real problem, I advise disabling the problematic tests one by one until you spot the one causing the failure.
Actually, this is what I did. I disabled the failing tests one by one till I found the problematic test: a test that throws an exception which is caught by the @Test(expected = ...) annotation without stopping the recording.
We had this problem recently, and it only reared its head when we ran the entire test suite (1100+ test cases). Eventually, I found that I could put a breakpoint on the test that was blowing up, and then step back through the list of tests that Eclipse had already executed, looking for the earlier test case that had set up a mock incorrectly.
Our problem turned out to be somebody using EasyMock.anyString() outside of an EasyMock.expect(...) statement. Sure enough, it was done two tests before the one that was failing.
So essentially, the misuse of a matcher outside of an expect statement was poisoning EasyMock's state, and the next time we tried to create a mock, EasyMock would blow up.
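A contrived sketch of that anti-pattern (using the retrieve() mock from the question; the exact matcher counts in the exception depend on how many were leaked):

import static org.easymock.EasyMock.*;

// WRONG: a matcher invoked outside an expect(...) call is still recorded
// into EasyMock's thread-local state, but never consumed:
String leaked = anyString();

// ...so a later, perfectly normal expectation such as
//   expect(mockedRetriever.retrieve("PROP")).andReturn(null);
// blows up with "N matchers expected, M recorded", because the leaked
// matcher is still sitting in that thread-local state.

// RIGHT: matchers appear only as arguments of the mocked call inside expect(...):
expect(mockedRetriever.retrieve(anyString())).andReturn(null);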
I believe the first error message
java.lang.IllegalStateException: 1 matchers expected, 2 recorded.
means your mockedRetriever method was called twice while the test expects it to be called only once. So your Eclipse and Maven configurations differ.
Also, I see no reason to reset the mock after the test. Just keep in mind that JUnit creates a new instance of the test class for every single test method.
EDITED:
As for why the last test method passes, the answer is:
expect(mockedRetriever.retrieve((String)anyObject())).andReturn(new PofExtractor()).anyTimes();
But in your first test method it is:
expect(mockedRetriever.retrieve("PROP")).andReturn(null).once();
which is equivalent to:
expect(mockedRetriever.retrieve("PROP")).andReturn(null);
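For completeness, a note on the matcher rule behind this exception: a raw value and an explicit eq(...) matcher are interchangeable for a single-argument call, but once any argument uses a matcher, every argument must be one. The lookup() method below is hypothetical, purely to illustrate the two-argument case:

import static org.easymock.EasyMock.*;

// interchangeable for a single-argument method:
expect(mockedRetriever.retrieve("PROP")).andReturn(null);        // raw value
expect(mockedRetriever.retrieve(eq("PROP"))).andReturn(null);    // explicit matcher

// for multi-argument calls, mixing raw values and matchers is illegal;
// wrap literals with eq(...):
// expect(mock.lookup("PROP", anyObject())).andReturn(null);     // WRONG: mixed
// expect(mock.lookup(eq("PROP"), anyObject())).andReturn(null); // RIGHT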