I am using TestNG to run data-driven tests. My data is read from an external file.
I have retry logic that is essentially a different test method in the same class, but it retries only the entities that failed in the previous test. I control the order using priority:
@Test(dataProvider = "customTestDataProvider", priority = 1)
public void testSomething(final ITestContext testContext, final CustomTestDataItem testData) throws CustomTestException {
    setTestData(testData, testContext);
    performStep1();
    performStep2();
    validateResult();
}

@Test(dataProvider = "customTestDataProvider", priority = 2)
public void testSomethingRetry1(final ITestContext testContext, final CustomTestDataItem testData) throws CustomTestException {
    testSomething(testContext, testData);
}

@Test(dataProvider = "customTestDataProvider", priority = 3)
public void testSomethingRetry2(final ITestContext testContext, final CustomTestDataItem testData) throws CustomTestException {
    testSomething(testContext, testData);
}
customTestDataProvider knows which test data items the method failed for, so in testSomethingRetry1 only the failed test data is supplied.
If a test fails in testSomething, it is retried in testSomethingRetry1, but TestNG still considers it failed because it failed in testSomething.
So I need custom logic to determine whether the suite has passed or failed. How do I override the TestNG result (pass/fail) with the result I have determined?
Instead of duplicating test methods, I would recommend using org.testng.IRetryAnalyzer, which basically runs a failed test again. You can see an example here: http://seleniumeasy.com/testng-tutorials/execute-only-failed-test-cases-using-iretryanalyzer
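For reference, a minimal IRetryAnalyzer sketch (the class name and the MAX_RETRIES value are my own choices, not anything from the question):

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Minimal sketch: re-run a failed test method up to MAX_RETRIES times.
public class RetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2;
    private int attempt = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (attempt < MAX_RETRIES) {
            attempt++;
            return true; // tell TestNG to run the failed method again
        }
        return false;    // give up; the failure stands
    }
}
```

You would attach it with something like `@Test(retryAnalyzer = RetryAnalyzer.class, dataProvider = "customTestDataProvider")`, which removes the need for the duplicated retry methods entirely.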
But if you really want to override the result, you can use listeners and implement methods that receive an ITestResult. On this object you can inspect the method's class/name/result/etc. and change some of these attributes (including the result):
http://testng.org/javadocs/org/testng/ITestListener.html
http://testng.org/javadocs/org/testng/IInvokedMethodListener.html
or for whole test suite
http://testng.org/javadocs/org/testng/ISuiteListener.html
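A minimal sketch of the listener approach, assuming TestNG 7+ (where the other ITestListener methods have default implementations); wasRecoveredByRetry is a hypothetical hook for your own pass/fail bookkeeping, not a TestNG API:

```java
import org.testng.ITestListener;
import org.testng.ITestResult;

// Sketch: flip a failed result to SUCCESS when our own logic decides
// the failure was recovered by a later retry method.
public class ResultOverrideListener implements ITestListener {
    @Override
    public void onTestFailure(ITestResult result) {
        if (wasRecoveredByRetry(result)) {
            result.setStatus(ITestResult.SUCCESS); // override the recorded result
        }
    }

    // Hypothetical bookkeeping hook; plug in your own decision here.
    private boolean wasRecoveredByRetry(ITestResult result) {
        return false;
    }
}
```

Register the listener via the `<listeners>` element in the suite XML or the `@Listeners` annotation.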
I am trying to write a test method in TestNG such that after it fails, the entire test suite stops running.
@Test
public void stopTestingIfThisFailed() throws Exception {
    someTestSteps();
    if (softAsserter.isOneFailed()) {
        asserter.fail("stopTestingIfThisFailed test failed");
        throw new Exception("Test can't continue, fail here!");
    }
}
The exception is being thrown, but the other test methods still run. How can I solve this?
You can use the dependsOnMethods or dependsOnGroups annotation parameters in your other test methods:
@Test(dependsOnMethods = {"stopTestingIfThisFailed"})
public void testAnotherTestMethod() {
}
JavaDoc of the dependsOnMethods parameter:
The list of methods this method depends on. There is no guarantee on the order on which the methods depended upon will be run, but you are guaranteed that all these methods will be run before the test method that contains this annotation is run. Furthermore, if any of these methods was not a SUCCESS, this test method will not be run and will be flagged as a SKIP. If some of these methods have been overloaded, all the overloaded versions will be run.
See https://testng.org/doc/documentation-main.html#dependent-methods
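The dependsOnGroups variant mentioned above can be sketched like this (the group name "critical" is my own choice):

```java
import org.testng.annotations.Test;

public class GroupDependencyExample {
    // The gate test: everything depending on "critical" is skipped if it fails.
    @Test(groups = "critical")
    public void stopTestingIfThisFailed() {
    }

    // Runs only after all "critical" tests passed; flagged SKIP otherwise.
    @Test(dependsOnGroups = {"critical"})
    public void anotherTest() {
    }
}
```

This scales better than dependsOnMethods when many tests should gate on the same precondition.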
It depends on what you expect (there is no direct support for this in TestNG). You can create a ShowStopperException that is thrown in a @Test method, and then in your ITestListener implementation (see docs) call System.exit(1) (or whatever number) when you find this exception in the result; but then there will be no report, and in general it's not good practice. The second option is to have a base class that is the parent of all test classes, plus some context variable that tracks the ShowStopperException: a @BeforeMethod in the parent class checks it and throws SkipException, so the workflow can look like:
test passed
test passed
showstopper exception in some test
test skipped
test skipped
test skipped
...
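A sketch of that base-class approach (the names are my own; markShowStopper would be called by whichever test detects the show-stopper condition):

```java
import org.testng.SkipException;
import org.testng.annotations.BeforeMethod;

public abstract class BaseTest {
    // Set once by the test that hits the show-stopper condition.
    private static volatile boolean showStopperSeen = false;

    public static void markShowStopper() {
        showStopperSeen = true;
    }

    @BeforeMethod(alwaysRun = true)
    public void skipAfterShowStopper() {
        if (showStopperSeen) {
            // Every remaining test method is reported as SKIP, not FAILURE.
            throw new SkipException("Skipping: a show-stopper failure occurred earlier");
        }
    }
}
```

All test classes extend BaseTest, which produces exactly the passed/skipped sequence shown above.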
I solved the problem like this: after a test that mustn't fail fails, I write data to a temporary text file.
Later, in the @BeforeClass of the next test, I added code that checks the data in that text file. If a show stopper was found, I kill the current process.
If a test that "can't" fail actually fails:
public static void saveShowStopper() {
    try {
        General.createFile("ShowStopper", "tempShowStopper.txt");
    } catch (ParseException e) {
        e.printStackTrace();
    }
}
The @BeforeClass validating code:
@BeforeClass(alwaysRun = true)
public void beforeClass(ITestContext testContext, @Optional String step, @Optional String suiteLoopData,
        @Optional String group) throws Exception {
    boolean wasShowStopperFound = APIUtils.loadShowStopper();
    if (wasShowStopperFound) {
        Thread.currentThread().interrupt();
        return;
    }
}
TestNG behaves that way if you throw a specific exception, SkipException, from the @BeforeSuite setup method.
See (possible dupe)
TestNG - How to force end the entire test suite from the BeforeSuite annotation if a condition is met
If you want to do it from an arbitrary test, there doesn't appear to be a framework mechanism. But you could always flip a flag and check that flag in the @BeforeTest setup method. Before you jump to that, consider whether you could check once before the whole suite runs and just abort there (i.e. in @BeforeSuite).
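A sketch of the @BeforeSuite abort described above (environmentReady is a hypothetical precondition check, not part of any API):

```java
import org.testng.SkipException;
import org.testng.annotations.BeforeSuite;

public class SuiteGuard {
    @BeforeSuite
    public void abortIfNotReady() {
        if (!environmentReady()) {
            // A SkipException thrown here makes TestNG skip the whole suite.
            throw new SkipException("Aborting suite: environment not ready");
        }
    }

    // Hypothetical precondition check; replace with your real one.
    static boolean environmentReady() {
        return true;
    }
}
```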
I have a sequence of tests which have to be fed input data in the form of a file. However, the exact data content to be fed into each one is test-specific.
I intend to use temporary files to achieve this.
The setup method does not take a parameter.
So, what could be done so that the setup can be made to read a specific fragment for each specific test?
The actual steps in the setup would be the same (creating a temporary file), but with a specific, tailored piece of data.
Setup methods (i.e., methods annotated with @Before) are designed for running the same steps before every test case. If this isn't the behavior you need, just don't use them.
At the end of the day, a JUnit test is just Java - you could just have a method that takes a parameter and sets up the test accordingly and call it explicitly with the different arguments you need:
public class MyTest {
    private void init(String fileName) {
        // Reads data from the file and sets up the test
    }

    @Test
    public void testSomething() {
        init("/path/to/some/file");
        // Perform the test and assert the result
    }

    @Test
    public void testSomethingElse() {
        init("/path/to/another/file");
        // Perform the test and assert the result
    }
}
Background: I'm executing tests with TestNG and I have a class annotated with #Test that generates a number, or ID if you will, and that same number is the input value of my second test. Is it possible to pass values between TestNG tests?
Sure. For example, if you have two related tests, you can pass values from one test to another via test context attributes:
@Test
public void test1(ITestContext context) { // will be injected by TestNG
    /* Do the test here */
    context.setAttribute("myOwnAttribute", "someTestResult");
}

@Test(dependsOnMethods = "test1")
public void test2(ITestContext context) { // will be injected by TestNG
    String prevResult = (String) context.getAttribute("myOwnAttribute");
}
You should create one test that handles the whole case. Tests shouldn't depend on each other; it's considered bad practice. If you are using Maven, the order of test execution can differ between environments.
Bad practice or not, it can be accomplished simply by using class fields. Just make sure your cases are executed in a predictable order (e.g. using @Test(priority) or the dependsOn TestNG features).
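A sketch of the class-field approach (TestNG uses one instance per test class by default, so the field is shared between the two methods; the ID value here is made up):

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class SharedStateTest {
    private String generatedId; // shared state between the two test methods

    @Test
    public void createId() {
        generatedId = "ID-42"; // e.g. a value produced by the system under test
    }

    @Test(dependsOnMethods = "createId")
    public void useId() {
        // dependsOnMethods guarantees createId ran (and passed) first
        Assert.assertNotNull(generatedId);
    }
}
```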
I do some cleanup in external systems using the TestNG @AfterClass annotation. But when tests fail, I really need that data. Can I make TestNG perform some actions only if the tests passed?
There is an option to get information about all tests that have failed up to the current moment. You have to inject ITestContext into your @AfterClass method:
@AfterClass
public void after(ITestContext context) {
    context.getFailedTests().getAllResults();
}
Then iterate through all the results and filter by test class.
AFAIK there is nothing at the after-class/after-suite level. What you can do is a couple of things:
@AfterMethod takes an ITestResult as an argument, which gives you the result of the currently executed test. Based on that, you can clean up.
Or
ISuiteListener gives you an onFinish method with the suite result object, which you can iterate over and then do the cleanup.
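A sketch of the @AfterMethod approach (deleteTestData is a hypothetical cleanup helper, not a TestNG API):

```java
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;

public class CleanupAfterMethod {
    @AfterMethod(alwaysRun = true)
    public void cleanUp(ITestResult result) {
        if (shouldClean(result.getStatus())) {
            deleteTestData(result.getName()); // hypothetical cleanup helper
        }
    }

    // Clean up only when the test passed, so failure data is kept for debugging.
    static boolean shouldClean(int status) {
        return status == ITestResult.SUCCESS;
    }

    private void deleteTestData(String testName) {
        // your external-system cleanup here
    }
}
```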
Here is an example where you delete test data for the current test class only if all its tests passed:
@AfterClass
public void deleteCreatedData(ITestContext context) {
    if (hasClassFailedTests(context)) return;
    // do your cleanup for the current test class
}

protected boolean hasClassFailedTests(ITestContext context) {
    Class<?> clazz = this.getClass();
    return context.getFailedTests().getAllMethods().stream()
            .anyMatch(it -> it.getRealClass().equals(clazz));
}
I am learning Selenium Grid and TestNG.
I am trying to create a customized HTML result using TestNG interfaces. My understanding of how to create HTML results is fine for single-machine execution.
But when I factor in remote machines, I am not able to understand how results are consolidated when tests are executed in parallel on remote machines.
Are there any factors I should consider before implementing my own interfaces?
Any help is appreciated. Thanks in advance.
It works fine, from multi-threaded tests to the HTML report, because the ITestResult is always the same in the end, no matter what you do. You can simply create a CustomReport class that implements IReporter. Then, override the generateReport method and just let TestNG create and pass its arguments into it:
@Override
public void generateReport(List<XmlSuite> xml, List<ISuite> suites, String outdir) {
    for (ISuite thisSuite : suites) {
        thisSuite.getResults(); // ...
        // ...
    }
    // ...
}
Then, inside that method, do what you will to customize the report and generate HTML tables or whatever.
Also, one thing I do (to reduce confusion in the console output while multi-threaded tests run) is log the thread name in messages on the TestNG report, using something like:
public void logIt( String message ) {
Reporter.log( "Thread-" + Thread.currentThread().getId() + ": " + message, true );
}
TestNG is awesome, especially when you understand what I said above, as well as the fact that you can implicitly let TestNG pass an XmlTest, ITestContext, or ITestResult to some of the configuration methods. For example:
@BeforeClass
public void setUp(ITestContext context) {
    logger.info("BeforeClass setUp...");
    suiteParams = context.getSuite().getXmlSuite().getAllParameters();
    // ...
}