Tagged Cucumber Scenarios Functioning - Java

I have been experiencing something really weird. Maybe someone can explain to me where I am making a mistake.
I have the following scenario in a feature file:
@DeleteUserAfterTest
Scenario: Testing a functionality
Given admin exists
When a user is created
Then the user is verified
My @After method in my Hooks class looks like the following:
@After
public void tearDown(Scenario scenario) {
    if (scenario.isFailed()) {
        final byte[] screenshot = ((TakesScreenshot) driver)
                .getScreenshotAs(OutputType.BYTES);
        scenario.embed(screenshot, "image/png"); // stick it in the report
    }
    driver.quit();
}
I am using the following method in my step definitions to delete the created user, based on the tag on the scenario, as follows:
#After("#DeleteUserAfterTest")
public void deleteUser(){
//Do fucntionalities to delete user
}
My test runner looks something like this:
import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;
@CucumberOptions(
    plugin = {"pretty", "com.aventstack.extentreports.cucumber.adapter.ExtentCucumberAdapter:", "json:target/cucumber-report/TestResult.json"},
    monochrome = false,
    features = "src/test/resources/features/IntegrationScenarios.feature",
    tags = "@DeleteUserAfterTest",
    glue = "Steps")
public class IntegrationTest extends AbstractTestNGCucumberTests {
}
However, when I launch the test, sometimes my user is deleted in the @After("@DeleteUserAfterTest") hook, but sometimes the test does not recognise the tagged @After at all: it goes directly to the @After method in my Hooks class and quits the driver. Maybe someone has encountered this problem or knows a workaround!

Method order is not defined in Java, so you have to tell Cucumber in which order your hooks should be executed. Higher order numbers are run first for @After hooks (@Before hooks are the other way around).
@After(order = 500)
public void tearDown(Scenario scenario) {
}

@After(value = "@DeleteUserAfterTest", order = 1000)
public void deleteUser() {
}
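With these values, deleteUser (order = 1000) runs before tearDown (order = 500), so the user is cleaned up while the driver is still available.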

Related

TestNG tests failing when two test cases are executed using Selenide. I am new to Selenide

Can somebody help me, please? I'm working with the Selenide framework using TestNG and Java. My tests fail while running multiple @Test methods; for a single @Test it works. The Eclipse console shows Element not found {By.id: txtUsername}. Here is my code:
public class LoginTest {
    @BeforeTest
    public void beforeTest() {
        System.setProperty("webdriver.chrome.driver", ".//src//test//resources//Drivers//chromedriver.exe");
        Configuration.browser = "chrome";
        Configuration.timeout = 5000;
        open("https://opensource-demo.orangehrmlive.com/");
    }

    @Test
    public void Test1() {
        $(By.id("txtUsername")).setValue("Admin");
        $(By.id("txtPassword")).setValue("admin123");
        $(By.id("btnLogin")).click();
        $(By.id("welcome")).shouldHave(text("Welcome Admin"));
    }

    @Test
    public void Test2() {
        $(By.id("txtUsername")).setValue("Admin");
        $(By.id("txtPassword")).setValue("admin123");
        $(By.id("btnLogin")).click();
        $(By.id("welcome")).shouldHave(text("Welcome Admin"));
        $(By.id("welcome")).click();
        $(By.xpath("//a[@href='/index.php/auth/logout']")).click();
    }
}
From your console error you can see that the element was not found:
Element not found {By.id: txtUsername}
Please check whether the element exists and is visible.
First, you don't need
System.setProperty("webdriver.chrome.driver", ".//src//test//resources//Drivers//chromedriver.exe");
since Selenide 4.7 contains WebDriverManager, a library that automatically downloads the latest webdriver binary file. You don't need to care about downloading geckodriver.exe or chromedriver.exe and adding it to PATH, so you can just remove this line.
Also, you may add Configuration.startMaximized = true;
For parallel execution, I would suggest you create a new class, for example TestNGBase, annotated with @BeforeClass(alwaysRun = true), and put the actions needed before each test there. Each test class then inherits from this base class:
public class LoginTest extends TestNGBase
So Test1() and Test2() should be in different classes.
If you want a test to have more than one step, you can, but keep them in the class as testStep1(), testStep2(), for example.
If you want to keep the setup you already have, you may use @BeforeMethod instead of @BeforeTest. A minimal sketch of the base-class idea follows.
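The sketch below reuses the configuration from the question; the class name and method name are illustrative assumptions:

import com.codeborne.selenide.Configuration;
import org.testng.annotations.BeforeClass;
import static com.codeborne.selenide.Selenide.open;

public class TestNGBase {
    // Runs once per extending test class; alwaysRun = true keeps it active
    // even when tests are filtered by groups.
    @BeforeClass(alwaysRun = true)
    public void setUpBrowser() {
        Configuration.browser = "chrome";
        Configuration.timeout = 5000;
        Configuration.startMaximized = true;
        open("https://opensource-demo.orangehrmlive.com/");
    }
}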

Reusability of tests in TestNG

I would like to run some automated tests for testing a web application in which process workflows are handled.
For interfacing with the application itself, I've already written a Page Object Model which uses Selenium WebDriver to interact with the several components of the application.
Now I'm about to write a number of automated tests for that particular application, and as a test framework I would like to use TestNG.
Because the application under test is a workflow application, I discovered that I always need to work my way through a certain part of the process flow first in order to run a test afterwards.
Example test case 1: Add an activity to a certain task in a dossier
Login to application
Open dossier x
Open task y within dossier x
Add activity z to task y within dossier x
Example test case 2: Add a planning for a certain activity on a task in a dossier
Login to application
Open dossier x
Open task y within dossier x
Add activity z to task y within dossier x
Add the planning for activity z
So as you can see from the examples above, I always need to work through a number of similar steps before I can do the actual test.
As a starting point I wrote TestNG classes: one for test case 1 and a second one for test case 2. Within each test class I implemented a number of test methods which correspond to the test steps.
See the example code below for test case 1:
public class Test_Add_Activity_To_Task_In_Dossier extends BaseTestWeb {
    private Dossier d;
    private Task t;

    @Test
    public void login() {
        System.out.println("Test step: login");
    }

    @Test(dependsOnMethods = "login")
    public void open_dossier() {
        System.out.println("Test step: open dossier");
    }

    @Test(dependsOnMethods = "open_dossier")
    public void open_task() {
        System.out.println("Test step: open task");
    }

    @Test(dependsOnMethods = "open_task")
    public void add_activity() {
        System.out.println("Test step: add activity");
    }
}
And here is the example code for test case 2:
public class Test_Add_Planning_For_Activity_To_Task_In_Dossier extends BaseTestWeb {
    private Dossier d;
    private Task t;

    @Test
    public void login() {
        System.out.println("Test step: login");
    }

    @Test(dependsOnMethods = "login")
    public void open_dossier() {
        System.out.println("Test step: open dossier");
    }

    @Test(dependsOnMethods = "open_dossier")
    public void open_task() {
        System.out.println("Test step: open task");
    }

    @Test(dependsOnMethods = "open_task")
    public void add_activity() {
        System.out.println("Test step: add activity");
    }

    @Test(dependsOnMethods = "add_activity")
    public void add_planning() {
        System.out.println("Test step: add planning");
    }
}
So as you can notice, this way of structuring the tests is not maintainable as the number of test cases grows, because I am always repeating the same steps before I arrive at the actual test to be done...
Therefore I would like to ask the community how I could make everything more reusable and avoid writing the same repeated steps over and over in every single test case.
All ideas are more than welcome!!
As per my understanding, you want to remove the repeated steps that you perform before each test case, i.e.:
Login to application
Open dossier x
Open task y within dossier x
You can use @BeforeMethod so these prerequisites (the common steps needed before every test) run before each test case.
@BeforeMethod
public void setUp() {
    login();
    open_dossier();
    open_task();
}

Test case 1:
@Test
public void testAddActivity() {
    add_activity();
}

Test case 2:
@Test
public void testAddPlanning() {
    add_planning();
}
As you have mentioned, you are using the page object model, so I assume you have written object repositories and their operations for every page in separate classes.
So while writing the test, just call those methods from the POM classes. For example, in your case:
Test case 1:
public class Test_Add_Activity_To_Task_In_Dossier extends BaseTestWeb {
    @Test
    public void add_activity() {
        // To call the methods below, create objects of the classes they belong to.
        login();
        open_dossier();
        open_task();
        System.out.println("Test step: add activity");
    }
}
Test case 2:
public class Test_Add_Planning_For_Activity_To_Task_In_Dossier extends BaseTestWeb {
    @Test
    public void add_planning() {
        // To call the methods below, create objects of the classes they belong to.
        login();
        open_dossier();
        open_task();
        System.out.println("Test step: add planning");
    }
}
Hope it will be helpful.
I have had a similar environment where I needed to have accounts in the right state to start my tests. I made a package named accelerators and added some process-based classes to move the accounts from process to process to get them into the right state. My advice is not to put the @Test annotation above the methods of the accelerators, but to call those accelerator classes and methods inside your actual tests.
If you have more questions feel free to ask them.
I edited my answer because I couldn't post a long comment; I'm fairly new here.
@Hans Mens: What I did is create the classes and methods mirroring the processes, as accelerators. Inside the @BeforeTest method I invoked all the accelerator classes I use in my tests, then I extended my test classes from the class holding the @BeforeTest method. That way I can use all the objects created in the @BeforeTest without invoking them in the test classes, so my test scripts stay clean. A hypothetical sketch of this idea follows.
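A hypothetical sketch of the accelerator pattern described above; all class and method names here are invented for illustration, and BaseTestWeb is the base class from the question:

import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

// Plain helper class: deliberately no @Test annotations on its methods.
class DossierAccelerator {
    void loginToApplication() { /* drive the login flow */ }
    void openDossier(String dossierId) { /* navigate to the dossier */ }
    void openTask(String taskId) { /* open the task inside the dossier */ }
}

public class AddPlanningTest extends BaseTestWeb {
    private final DossierAccelerator accelerator = new DossierAccelerator();

    @BeforeTest
    public void reachRequiredState() {
        // Work the application into the state the actual test needs.
        accelerator.loginToApplication();
        accelerator.openDossier("dossier-x");
        accelerator.openTask("task-y");
    }

    @Test
    public void add_planning() {
        // Only the behaviour under test remains here.
    }
}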

Check that JUnit Extension throws specific Exception

Suppose I develop an extension which disallows test method names starting with an uppercase character.
public class DisallowUppercaseLetterAtBeginning implements BeforeEachCallback {
    @Override
    public void beforeEach(ExtensionContext context) {
        char c = context.getRequiredTestMethod().getName().charAt(0);
        if (Character.isUpperCase(c)) {
            throw new RuntimeException("test method names should start with lowercase.");
        }
    }
}
Now I want to test that my extension works as expected.
@ExtendWith(DisallowUppercaseLetterAtBeginning.class)
class MyTest {
    @Test
    void validTest() {
    }

    @Test
    void TestShouldNotBeCalled() {
        fail("test should have failed before");
    }
}
How can I write a test to verify that the attempt to execute the second method throws a RuntimeException with a specific message?
Another approach could be to use the facilities provided by the new JUnit 5 - Jupiter framework.
I put below the code which I tested with Java 1.8 on Eclipse Oxygen. The code suffers from a lack of elegance and conciseness but could hopefully serve as a basis to build a robust solution for your meta-testing use case.
Note that this is actually how JUnit 5 is tested; I refer you to the unit tests of the Jupiter engine on GitHub.
public final class DisallowUppercaseLetterAtBeginningTest {

    @Test
    void testIt() {
        // Warning here: I checked that the test container created below
        // executes on the same thread as used for this test. We should remain
        // careful though, as the map used here is not thread-safe.
        final Map<String, TestExecutionResult> events = new HashMap<>();
        EngineExecutionListener listener = new EngineExecutionListener() {
            @Override
            public void executionFinished(TestDescriptor descriptor, TestExecutionResult result) {
                if (descriptor.isTest()) {
                    events.put(descriptor.getDisplayName(), result);
                }
                // skip class and container reports
            }

            @Override
            public void reportingEntryPublished(TestDescriptor testDescriptor, ReportEntry entry) {}

            @Override
            public void executionStarted(TestDescriptor testDescriptor) {}

            @Override
            public void executionSkipped(TestDescriptor testDescriptor, String reason) {}

            @Override
            public void dynamicTestRegistered(TestDescriptor testDescriptor) {}
        };

        // Build our test container and use the Jupiter fluent API to launch our test.
        // The following static imports are assumed:
        //
        // import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;
        // import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;
        JupiterTestEngine engine = new JupiterTestEngine();
        LauncherDiscoveryRequest request = request().selectors(selectClass(MyTest.class)).build();
        TestDescriptor td = engine.discover(request, UniqueId.forEngine(engine.getId()));
        engine.execute(new ExecutionRequest(td, listener, request.getConfigurationParameters()));

        // Bunch of verbose assertions, should be refactored and simplified in real code.
        assertEquals(new HashSet<>(asList("validTest()", "TestShouldNotBeCalled()")), events.keySet());
        assertEquals(Status.SUCCESSFUL, events.get("validTest()").getStatus());
        assertEquals(Status.FAILED, events.get("TestShouldNotBeCalled()").getStatus());

        Throwable t = events.get("TestShouldNotBeCalled()").getThrowable().get();
        assertEquals(RuntimeException.class, t.getClass());
        assertEquals("test method names should start with lowercase.", t.getMessage());
    }
}
Though a little verbose, one advantage of this approach is that it doesn't require mocking and executes the tests in the same JUnit container as will be used later for real unit tests.
With a bit of clean-up, much more readable code is achievable. Again, the JUnit Jupiter sources can be a great source of inspiration.
If the extension throws an exception then there's not much a @Test method can do, since the test runner will never reach the @Test method. In this case, I think, you have to test the extension outside of its use in the normal test flow, i.e. let the extension be the SUT.
For the extension provided in your question, the test might be something like this:
@Test
public void willRejectATestMethodHavingANameStartingWithAnUpperCaseLetter() throws NoSuchMethodException {
    ExtensionContext extensionContext = Mockito.mock(ExtensionContext.class);
    Method method = Testable.class.getMethod("MethodNameStartingWithUpperCase");
    Mockito.when(extensionContext.getRequiredTestMethod()).thenReturn(method);

    DisallowUppercaseLetterAtBeginning sut = new DisallowUppercaseLetterAtBeginning();
    RuntimeException actual =
            assertThrows(RuntimeException.class, () -> sut.beforeEach(extensionContext));

    assertThat(actual.getMessage(), is("test method names should start with lowercase."));
}

@Test
public void willAllowTestMethodHavingANameStartingWithALowerCaseLetter() throws NoSuchMethodException {
    ExtensionContext extensionContext = Mockito.mock(ExtensionContext.class);
    Method method = Testable.class.getMethod("methodNameStartingWithLowerCase");
    Mockito.when(extensionContext.getRequiredTestMethod()).thenReturn(method);

    DisallowUppercaseLetterAtBeginning sut = new DisallowUppercaseLetterAtBeginning();
    sut.beforeEach(extensionContext);
    // no exception - good enough
}

public class Testable {
    public void MethodNameStartingWithUpperCase() {
    }

    public void methodNameStartingWithLowerCase() {
    }
}
However, your question suggests that the above extension is only an example. So, more generally: if your extension has a side effect (e.g. sets something in an addressable context, populates a system property, etc.) then your @Test method could assert that this side effect is present. For example:
public class SystemPropertyExtension implements BeforeEachCallback {
    @Override
    public void beforeEach(ExtensionContext context) {
        System.setProperty("foo", "bar");
    }
}

@ExtendWith(SystemPropertyExtension.class)
public class SystemPropertyExtensionTest {
    @Test
    public void willSetTheSystemProperty() {
        assertThat(System.getProperty("foo"), is("bar"));
    }
}
This approach has the benefit of sidestepping the potentially awkward setup steps of creating the ExtensionContext and populating it with the state required by your test, but it may come at the cost of limiting the test coverage, since you can really only test one outcome. And, of course, it is only feasible if the extension has a side effect which can be evaluated in a test case which uses the extension.
So, in practice, I suspect you might need a combination of these approaches: for some extensions the extension can be the SUT, and for others the extension can be tested by asserting against its side effect(s).
After trying the solutions in the answers and the question linked in the comments, I ended up with a solution using the JUnit Platform Launcher.
class DisallowUppercaseLetterAtBeginningTest {

    @Test
    void should_succeed_if_method_name_starts_with_lower_case() {
        TestExecutionSummary summary = runTestMethod(MyTest.class, "validTest");
        assertThat(summary.getTestsSucceededCount()).isEqualTo(1);
    }

    @Test
    void should_fail_if_method_name_starts_with_upper_case() {
        TestExecutionSummary summary = runTestMethod(MyTest.class, "InvalidTest");
        assertThat(summary.getTestsFailedCount()).isEqualTo(1);
        assertThat(summary.getFailures().get(0).getException())
                .isInstanceOf(RuntimeException.class)
                .hasMessage("test method names should start with lowercase.");
    }

    private TestExecutionSummary runTestMethod(Class<?> testClass, String methodName) {
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        LauncherDiscoveryRequest request = request().selectors(selectMethod(testClass, methodName)).build();
        LauncherFactory.create().execute(request, listener);
        return listener.getSummary();
    }

    @ExtendWith(DisallowUppercaseLetterAtBeginning.class)
    static class MyTest {
        @Test
        void validTest() {
        }

        @Test
        void InvalidTest() {
            fail("test should have failed before");
        }
    }
}
JUnit itself will not run MyTest because it is an inner class without @Nested. So there are no failing tests during the build process.
Update
JUnit itself will not run MyTest because it is an inner class without @Nested. So there are no failing tests during the build process.
This is not completely correct. JUnit itself would also run MyTest, e.g. if "Run All Tests" is started within the IDE or within a Gradle build.
The reason why MyTest was not executed is that I used Maven and ran the tests with mvn test. Maven uses the Maven Surefire Plugin to execute tests, and this plugin has a default configuration which excludes nested classes like MyTest.
See also this answer about "Run tests from inner classes via Maven" and the linked issues in the comments.
JUnit 5.4 introduced the JUnit Platform Test Kit which allows you to execute a test plan and inspect the results.
To take a dependency on it from Gradle, it might look something like this:
testImplementation("org.junit.platform:junit-platform-testkit:1.4.0")
And using your example, your extension test could look something like this:
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.extension.ExtendWith
import org.junit.jupiter.api.fail
import org.junit.platform.engine.discovery.DiscoverySelectors
import org.junit.platform.testkit.engine.EngineTestKit
import org.junit.platform.testkit.engine.EventConditions
import org.junit.platform.testkit.engine.TestExecutionResultConditions

internal class DisallowUpperCaseExtensionTest {
    @Test
    internal fun `succeed if starts with lower case`() {
        val results = EngineTestKit
            .engine("junit-jupiter")
            .selectors(
                DiscoverySelectors.selectMethod(ExampleTest::class.java, "validTest")
            )
            .execute()
        results.tests().assertStatistics { stats ->
            stats.finished(1)
        }
    }

    @Test
    internal fun `fail if starts with upper case`() {
        val results = EngineTestKit
            .engine("junit-jupiter")
            .selectors(
                DiscoverySelectors.selectMethod(ExampleTest::class.java, "TestShouldNotBeCalled")
            )
            .execute()
        results.tests().assertThatEvents()
            .haveExactly(
                1,
                EventConditions.finishedWithFailure(
                    TestExecutionResultConditions.instanceOf(java.lang.RuntimeException::class.java),
                    TestExecutionResultConditions.message("test method names should start with lowercase.")
                )
            )
    }

    @ExtendWith(DisallowUppercaseLetterAtBeginning::class)
    internal class ExampleTest {
        @Test
        fun validTest() {
        }

        @Test
        fun TestShouldNotBeCalled() {
            fail("test should have failed before")
        }
    }
}

Java Unit testing path matching

I am trying to understand unit testing and how the correct classes are fetched at test time.
I'm having a hard time understanding exactly what is going on behind the scenes and how safe/correct my usage of this is.
This is a very simple example of what I am trying to do. I wrote it here inline, so it probably contains some errors; please try to ignore them, as stupid as they may be.
A very simple project directory:
ba/src/main/java/utils/BaUtils.java
ba/test/main/java/utils/BaUtilsTest.java
notBa/src/main/java/im/BaObj.java
BaUtils.java code:
package com.ba.utils;

import notBa.im.BaObj;

public class BaUtils {
    public void doSomething(BaObj obj) {
        obj.doSomething();
    }
}
I would like to test BaUtils without actually calling doSomething, and I can't change anything in the BaObj class or the notBa package. I know that I can (by 'can' I mean it will work) add a new Java file to the ba project (ba/test/java/notBa/main/java/im/BaObj.java) with the same package as the original BaObj; at runtime the test will then import this one instead of the real one, so the BaUtils code is tested but the BaObj code is not executed.
That should look something like this:
package notBa.im;

public class BaObj {
    public void doSomething() {
        System.out.println("Did something");
    }
}
My questions are (and thank you for reading this far):
How does this work? (Reading references would be great.)
Is this kind of test setup considered 'good' or 'safe'?
Thanks!
The solution is to use a mocking framework (I myself like Mockito).
The test would look like this:
public class BaUtilsTest {
    @Rule
    public MockitoRule mockitoRule = MockitoJUnit.rule();

    @Mock
    BaObj baObj;

    @Test
    public void doSomething_WithMockedBaObj_callsDoSomethingOnBaObj() {
        // arrange
        BaUtils baUtils = new BaUtils();
        // act
        baUtils.doSomething(baObj);
        // assert
        Mockito.verify(baObj).doSomething();
    }
}
Find more information here: http://www.vogella.com/tutorials/Mockito/article.html#testing-with-mock-objects
Your BaUtilsTest class should look like this. I have used Mockito for mocking external dependencies. Also, I changed the method return type to String for easier understanding.
@RunWith(MockitoJUnitRunner.class)
public class BaUtilsTest {
    BaUtils util;

    @Mock
    BaObj mockBaObj;

    @Before
    public void setup() {
        util = new BaUtils();
    }

    @Test
    public void testDoSomething() {
        Mockito.when(mockBaObj.doSomething()).thenReturn("did the work using mock");
        String result = util.doSomething(mockBaObj);
        Assert.assertEquals("did the work using mock", result);
    }
}

How to disable TestNG test based on a condition

Is there currently a way to disable a TestNG test based on a condition?
I know you can currently disable a test like so in TestNG:
@Test(enabled = false, groups = {"blah"})
public void testCurrency() {
    ...
}
I would like to disable the same test based on a condition but don't know how. Something like this (pseudo-code):
@Test(enabled = isUk() ? false : true, groups = {"blah"})
public void testCurrency() {
    ...
}
Does anyone have a clue whether this is possible or not?
An easier option is to use the @BeforeMethod annotation on a method which checks your condition. If you want to skip the tests, then just throw a SkipException. Like this:
@BeforeMethod
protected void checkEnvironment() {
    // resourceAvailable is whatever condition flag your suite maintains
    if (!resourceAvailable) {
        throw new SkipException("Skipping tests because resource was not available.");
    }
}
You have two options:
Implement an annotation transformer.
Use BeanShell.
Your annotation transformer would test the condition and then override the @Test annotation to add the attribute "enabled=false" if the condition is not satisfied.
There are two ways that I know of that allow you to control "disabling" tests in TestNG.
The important differentiation is that SkipException will break out of all subsequent tests, while implementing IAnnotationTransformer uses reflection to disable individual tests, based on a condition that you specify. I will explain both SkipException and IAnnotationTransformer.
SkipException example
import java.lang.reflect.Method;

import org.testng.*;
import org.testng.annotations.*;

public class TestSuite {
    // You set this however you like.
    boolean myCondition;

    // Executed before each test is run.
    @BeforeMethod
    public void before(Method methodName) {
        // Check the condition; note that once your condition is met,
        // the rest of the tests will be skipped as well.
        if (myCondition)
            throw new SkipException("condition met, skipping remaining tests");
    }

    @Test(priority = 1)
    public void test1() {}

    @Test(priority = 2)
    public void test2() {}

    @Test(priority = 3)
    public void test3() {}
}
IAnnotationTransformer example
A bit more complicated, but the idea behind it is a concept known as reflection.
Wiki - http://en.wikipedia.org/wiki/Reflection_(computer_programming)
First, implement the IAnnotationTransformer interface and save it in a *.java file.
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class Transformer implements IAnnotationTransformer {

    // Do not worry about calling this method: TestNG calls it behind the scenes
    // before EVERY method (or test). It will disable single tests, not the
    // entire suite like SkipException.
    public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod) {
        // If we have chosen not to run this test, disable it.
        if (disableMe()) {
            annotation.setEnabled(false);
        }
    }

    // Logic YOU control.
    private boolean disableMe() {
        return false; // placeholder: replace with your own condition
    }
}
Then, in your test suite Java file, do the following in the @BeforeClass method:
import org.testng.*;
import org.testng.annotations.*;

/* Executed before the tests run. */
@BeforeClass
public void before() {
    TestNG testNG = new TestNG();
    testNG.setAnnotationTransformer(new Transformer());
}

@Test(priority = 1)
public void test1() {}

@Test(priority = 2)
public void test2() {}

@Test(priority = 3)
public void test3() {}
One last step is to ensure that you add a listener in your build.xml file.
Mine ended up looking like this (this is just a single entry from the build.xml):
<testng classpath="${test.classpath}:${build.dir}" outputdir="${report.dir}"
haltonfailure="false" useDefaultListeners="true"
listeners="org.uncommons.reportng.HTMLReporter,org.uncommons.reportng.JUnitXMLReporter,Transformer"
classpathref="reportnglibs"></testng>
I prefer this annotation-based way of disabling/skipping some tests based on environment settings. It is easy to maintain and does not require any special coding technique:
Use the IInvokedMethodListener interface
Create a custom annotation, e.g. @SkipInHeadlessMode (sketched below)
Throw a SkipException
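The custom annotation itself is not shown in the original answer; a minimal sketch (assuming runtime retention on test methods) could be:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation that the listener below looks up reflectively.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SkipInHeadlessMode {
}

The listener that performs the conditional skip: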
public class ConditionalSkipTestAnalyzer implements IInvokedMethodListener {
    protected static PropertiesHandler properties = new PropertiesHandler();

    @Override
    public void beforeInvocation(IInvokedMethod invokedMethod, ITestResult result) {
        Method method = result.getMethod().getConstructorOrMethod().getMethod();
        if (method == null) {
            return;
        }
        if (method.isAnnotationPresent(SkipInHeadlessMode.class)
                && properties.isHeadlessMode()) {
            throw new SkipException("These tests shouldn't be run in HEADLESS mode!");
        }
    }

    @Override
    public void afterInvocation(IInvokedMethod iInvokedMethod, ITestResult iTestResult) {
        // Auto generated
    }
}
Check for the details:
https://www.lenar.io/skip-testng-tests-based-condition-using-iinvokedmethodlistener/
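For the listener to take effect it must be registered with TestNG, for example via the @Listeners annotation; the test class below is a hypothetical usage sketch:

import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

@Listeners(ConditionalSkipTestAnalyzer.class)
public class BrowserDependentTests {

    // Skipped by the listener whenever properties.isHeadlessMode() is true.
    @SkipInHeadlessMode
    @Test
    public void requiresRealBrowser() {
    }
}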
A third option can be assumptions.
Assumptions for TestNG: when an assumption fails, TestNG will be instructed to ignore the test case and will thus not execute it. You can use it:
Using the @Assumption annotation
Using AssumptionListener and the Assumes.assumeThat(...) method
You can use this example: Conditionally Running Tests In TestNG
Throwing a SkipException in a method annotated with @BeforeMethod did not work for me, because it skipped all the remaining tests of my test suite regardless of whether a SkipException was thrown for those tests.
I did not investigate it thoroughly, but I found another way: using the dependsOnMethods attribute on the @Test annotation:
import org.testng.SkipException;
import org.testng.annotations.Test;

public class MyTest {
    private boolean conditionX = true;
    private boolean conditionY = false;

    @Test
    public void isConditionX() {
        if (!conditionX) {
            throw new SkipException("skipped because of X is false");
        }
    }

    @Test
    public void isConditionY() {
        if (!conditionY) {
            throw new SkipException("skipped because of Y is false");
        }
    }

    @Test(dependsOnMethods = "isConditionX")
    public void test1() {
    }

    @Test(dependsOnMethods = "isConditionY")
    public void test2() {
    }
}
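With this setup, isConditionX passes, so test1 runs; isConditionY throws a SkipException (conditionY is false), so it is marked skipped, and test2, which depends on it, is skipped as well. TestNG skips any test whose dependencies did not succeed, so only the tests tied to a failing condition are affected.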
SkipException: it's useful when we have only one @Test method in the class. For a data-driven framework, for example, I have only one test method which needs to be either executed or skipped on the basis of some condition, so I put the condition-checking logic inside the @Test method and get the desired result.
It helped me get the Extent report with the test case results as pass/fail, and the particular skips as well.
