Reusability of tests in TestNG - Java

I would like to run automated tests against a web application that handles process workflows.
For interfacing with the application itself I have already written a Page Object Model that uses Selenium WebDriver to interact with the various components of the application.
Now I'm about to write a number of tests for that particular application, and as a test framework I would like to use TestNG.
But because the application under test is a workflow application, I discovered that I always need to work my way through a certain part of the process flow first in order to perform a test afterwards.
Example testcase 1: Add an activity to a certain task in a dossier
Login to application
Open dossier x
Open task y within dossier x
Add activity z to task y within dossier x
Example testcase 2: Add a planning for a certain activity on a task in a dossier
Login to application
Open dossier x
Open task y within dossier x
Add activity z to task y within dossier x
Add the planning for activity z
So as you can see from the examples above, I always need to work through a number of similar steps before I can perform the actual test.
As a starting point I began writing TestNG classes: one for testcase 1 and a second for testcase 2. Within each test class I implemented a number of test methods that correspond to the test steps.
See example code below for testcase 1:
public class Test_Add_Activity_To_Task_In_Dossier extends BaseTestWeb {
    private Dossier d;
    private Task t;

    @Test
    public void login() {
        System.out.println("Test step: login");
    }

    @Test(dependsOnMethods = "login")
    public void open_dossier() {
        System.out.println("Test step: open dossier");
    }

    @Test(dependsOnMethods = "open_dossier")
    public void open_task() {
        System.out.println("Test step: open task");
    }

    @Test(dependsOnMethods = "open_task")
    public void add_activity() {
        System.out.println("Test step: add activity");
    }
}
And here the example code for testcase 2:
public class Test_Add_Planning_For_Activity_To_Task_In_Dossier extends BaseTestWeb {
    private Dossier d;
    private Task t;

    @Test
    public void login() {
        System.out.println("Test step: login");
    }

    @Test(dependsOnMethods = "login")
    public void open_dossier() {
        System.out.println("Test step: open dossier");
    }

    @Test(dependsOnMethods = "open_dossier")
    public void open_task() {
        System.out.println("Test step: open task");
    }

    @Test(dependsOnMethods = "open_task")
    public void add_activity() {
        System.out.println("Test step: add activity");
    }

    @Test(dependsOnMethods = "add_activity")
    public void add_planning() {
        System.out.println("Test step: add planning");
    }
}
As you can already notice, this way of structuring the tests is not maintainable as the number of testcases grows, because I keep repeating the same steps before I arrive at the actual test to be done.
Therefore I would like to ask the community how I could make everything more reusable and avoid writing the same repeated steps over and over in every single testcase.
All ideas are more than welcome!

As I understand it, you want to remove the repeated steps that you perform before each test case, such as:
Login to application
Open dossier x
Open task y within dossier x
You can use @BeforeMethod so that these prerequisites (the common steps needed before every test) run before each test case.
@BeforeMethod
public void setUp() {
    login();
    open_dossier();
    open_task();
}
Test case 1:
@Test
public void testAddActivity() {
    add_activity();
}
Test case 2:
@Test
public void testAddPlanning() {
    add_planning();
}

As you have mentioned, you are using the Page Object Model, so I assume you have written object repositories and their operations for every page in separate classes.
When writing the tests, just call those methods from the POM classes. For example, in your case:
test case 1:
public class Test_Add_Activity_To_Task_In_Dossier extends BaseTestWeb {
    @Test
    public void add_activity() {
        // To call the methods below, create objects of the classes they belong to.
        login();
        open_dossier();
        open_task();
        System.out.println("Test step: add activity");
    }
}
Test case 2:
public class Test_Add_Planning_For_Activity_To_Task_In_Dossier extends BaseTestWeb {
    @Test
    public void add_planning() {
        // To call the methods below, create objects of the classes they belong to.
        login();
        open_dossier();
        open_task();
        System.out.println("Test step: add planning");
    }
}
Hope it will be helpful.

I have had a similar environment where I needed accounts in the right state to start my tests. I made a package named accelerators and added some process-based classes to move the accounts from process to process to get them into the right state. My advice is not to put the @Test annotation on the methods of the accelerators, but to call those accelerator classes and methods inside your actual tests.
If you have more questions feel free to ask them.
I edited my answer because I couldn't post a long comment; I'm fairly new here.
@Hans Mens So what I did is create the classes and methods that mirror the processes as accelerators. Inside the @BeforeTest method I invoked all the accelerator classes I use in my tests. Then I extended my test classes from the class with the @BeforeTest method. That way I can use all the objects I invoked in the @BeforeTest without invoking them in the test classes, so my test scripts stay clean.
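A minimal sketch of the accelerator idea (plain Java; the class and method names are hypothetical): the process steps live in an ordinary helper class without @Test annotations, and tests call it to drive the workflow into the required state before the actual assertion.

```java
import java.util.ArrayList;
import java.util.List;

// Accelerator sketch: plain methods (no @Test) that move the
// application into the state a test needs. Each method returns
// 'this' so steps can be chained fluently.
public class DossierAccelerator {
    private final List<String> log = new ArrayList<>();

    public DossierAccelerator login()       { log.add("login");        return this; }
    public DossierAccelerator openDossier() { log.add("open dossier"); return this; }
    public DossierAccelerator openTask()    { log.add("open task");    return this; }

    public List<String> log() { return log; }

    public static void main(String[] args) {
        // Inside an actual test you would chain the accelerator first,
        // then perform the step under test and assert on the result.
        DossierAccelerator acc = new DossierAccelerator()
                .login().openDossier().openTask();
        System.out.println(acc.log());
    }
}
```

In real code these methods would delegate to your page objects instead of appending to a list.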


Selenium Java: With TestNG, Not executing 2nd test case

Please help me fix the following issue in Selenium WebDriver with Java.
I am validating an eCommerce web-based application. I have a couple of test cases:
1. Sign in to the application
2. Add an item to the cart. This test case is a continuation of the first one: after signing in, the user adds the item to the cart.
I have 2 class files, and I have a @Test condition in both class files.
In the CommonFunctions class file, I am launching the browser in the @BeforeSuite section and closing the browser in the @AfterSuite section.
I am using the Page Object Model and executing from a TestNG XML.
On running the suite, the first test case passes. When it comes to the second test case (the @Test in the 2nd class file), it enters the @Test section but fails immediately without any apparent reason.
I have tried implicit and explicit waits and even Thread.sleep, but no luck. Can someone please take a look and suggest a fix?
Appreciate your help!
I assume that it is failing with some exception and the stack trace is somehow not getting printed. Try printing it explicitly:
@Test
public void secondTestCase() {
    try {
        // your second test case logic
    } catch (Throwable e) {
        e.printStackTrace();
    }
}
Just don't reuse the driver.
Create it in @BeforeMethod, then quit it in @AfterMethod, and you should not run into any issues.
Update answering a comment question below:
In general it's not a good idea to have tests depending on each other.
Regardless of how special you think your case is, you should never do it if you want to keep your dignity.
This does not necessarily mean that you can't reuse 'logic' in tests; it just means that you should not expect any other test to have executed before the current one.
Let's put some thought about it:
Why would you want to do it at first place?
What is the effect that you're seeking?
My blind-guess answers are:
You want to have the login test executed first.
This will help you avoid repeating the same login code in all of the remaining tests, and you save time from starting/stopping the driver.
If my guesses are correct, then I can tell you that it's perfectly normal to want those things, and that everyone who has ever needed to write more than two tests has been there.
You may ask:
Am I not trying to achieve the same thing?
Where is my mistake then?
The following answers can be considered brief and quite insufficient compared to the truth that lies beneath, but I think they will serve as a good starting point for where you're heading.
Yes, you're trying to achieve the same thing.
Your approach can't be called a mistake so much as negligence: you're trying to write automated tests with Selenium while lacking basic coding skills. It's actually not so bad, because you didn't know what you were getting into, but after reading this you have the choice of either letting it slip or beginning the journey of personal improvement. Take some time to learn Java; a really good starter would be the 'Thinking in Java' book. Do the exercises after each chapter and the experience you gain will be priceless. The book will help you get familiar with the Java language features and give you a good idea of how code organization works.
Enough with the general notes; below is a simple guide for you to follow in implementing v0.1 of your newly-born 'automation project'.
(Yes, it is a 'project'; 'framework' has a completely different meaning, to be clear.)
First you need to decide how to approach the page objects.
(Based on my own experience)
the best way to do it is to keep all the business-logic inside the page objects and by business-logic I mean:
all the methods that perform some action,
and all the methods that retrieve some data.
Examples include doSomething / getSomething / setSomething / isSomething / canSomething / waitSomething and so on.
You should not be doing any assertions inside the page objects
You should never throw AssertionError from your page objects
If you need to throw something from page-object method/constructor just
throw IllegalArgumentException
or RuntimeException.
All assertions should happen exclusively in the test-method body
You should never write an 'assert...' statement outside a @Test method
You can have Util classes to do some data transformations
but do not ever write assertion logic inside them;
rather expose methods with boolean/other return types and assert on their results.
The following example is one of the best I've ever seen:
// Don't do this:
ColorVerifications.assertAreSame(pen.getColor(), ink.getColor());
// If you do, you'll eventually end up having to write this:
ColorVerifications.assertAreNotSame(pen.getColor(), ink.getColor());
// Do this instead:
Assert.isTrue(ColorUtil.areSame(pen.getColor(), ink.getColor()));
// Then you can actually 'reuse' the logic that's already in the Assert class:
Assert.isFalse(ColorUtil.areSame(pen.getColor(), ink.getColor()));
// NOTE: Regarding the term "code reuse"
// - it is NOT referring to "reusing your own code"
// - it IS referring to "reusing all the existing code"
The idea is to clearly communicate both the assert intention and how the check is done.
Always keep the logic of 'checking' and 'asserting' separate, because those words have different semantics.
It is really important to put a good amount of thinking when naming classes / methods / variables - really!
Don't be afraid to rename it when you come up with a better fit - go for it on the spot!
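The check/assert separation described above can be made concrete with an ordinary boolean utility. This is a sketch: ColorUtil here is a plain illustrative class (here comparing color names as strings), not a library API, and the assertion itself stays in the test body.

```java
// A utility exposes a pure boolean check; the assertion happens only
// inside the test method, never inside the utility or page object.
public final class ColorUtil {
    private ColorUtil() {}

    // Reusable in both positive (isTrue) and negative (isFalse) assertions.
    public static boolean areSame(String a, String b) {
        return a != null && a.equalsIgnoreCase(b);
    }

    public static void main(String[] args) {
        // In a @Test method you would write, e.g.:
        //   Assert.assertTrue(ColorUtil.areSame(penColor, inkColor));
        //   Assert.assertFalse(ColorUtil.areSame(penColor, paperColor));
        System.out.println(areSame("Blue", "blue"));
        System.out.println(areSame("Blue", "Red"));
    }
}
```

Because the check returns a boolean, you never need a second assertAreNotSame-style helper; the existing Assert class already covers both directions.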
Now back to our page objects and the login stuff. I won't dive much into it, but you'll get where I'm going.
Carefully organize the @Test methods in separate classes
Keep the ones that need common setup in the same class
Keep the number of test-class methods that are not annotated with @Test to a minimum
Do not inherit between test classes to reuse logic, other than for:
Outputs/Results folder preparation/collection/archiving
Logs/Reports initialization/disposal
Driver creation/cleanup
In general: prefer composition to inheritance
Put other logic in the page objects / util classes and reuse them
Have a base DriverWrapper class for the generic UI checks/interactions
Have a base PageObject class that hosts a DriverWrapper member
Don't use the @FindBy / PageFactory model (your life will be happier)
Use static final By locators instead
Log everything!
logging is not included in the examples,
but do assume every method's first line logs the method name and the passed arguments.
Always remember: your log is your best friend.
Reading what happened in the log takes considerably less time than manually debugging the code (which is practically re-running it at half speed).
You are writing automation code, not production code, so you can never be wrong by logging additional info.
Except for passwords and confidential data;
those you should never log.
Now that you have been taught some basic ideas, let's dive into code:
Page basics
Sample DriverWrapper:
class DriverWrapper {
    protected final WebDriver driver;

    public DriverWrapper(WebDriver driver) {
        this.driver = Objects.requireNonNull(driver, "WebDriver was <null>!");
    }

    // it's okay to have 'checked exceptions' declared by all wait* methods
    // but it is totally not okay to have 'checked exceptions' for the others
    public WebElement waitForVisibleElement(By locator, int timeoutMillis)
            throws TimeoutException { // <- the 'checked exception'
        return new WebDriverWait(driver, Duration.ofMillis(timeoutMillis))
                .pollingEvery(Duration.ofMillis(100))
                .until(ExpectedConditions.visibilityOfElementLocated(locator));
    }

    public boolean isVisible(By locator, int timeoutMillis) {
        try {
            return waitForVisibleElement(locator, timeoutMillis) != null;
        } catch (TimeoutException ignored) {
            return false;
        }
    }

    // .get(String url){...}
    // .click(By locator){... elementToBeClickable(locator) ...}
    // .typeInto(boolean shouldLog, By locator, CharSequence... keys){...}
    // .typeInto(By locator, CharSequence... keys){typeInto(true, locator, keys);}
    // you get the idea ;)
}
Sample PageObject:
class PageObject {
    protected final DriverWrapper driver;

    public PageObject(WebDriver driver) {
        this.driver = new DriverWrapper(driver);
    }
}
Sample LoginPage:
class LoginPage extends PageObject {
    // NOTE: keep the locators private
    private static final By USERNAME_INPUT = By.id("usernameInput");
    private static final By PASSWORD_INPUT = By.id("passwordInput");
    private static final By LOGIN_BUTTON = By.id("loginButton");
    private static final By ERROR_MESSAGE = By.id("errorMessage");
    // hypothetical application URL; adjust to your environment
    private static final String URL = "https://localhost/login";

    public LoginPage(WebDriver driver) {
        super(driver);
    }

    public LoginPage goTo() {
        driver.get(URL);
        return this;
    }

    public void loginAs(String user, String pass) {
        // NOTE:
        // Do not perform navigation (or other actions) under the hood!
        // Resist the urge to call goTo() here!
        // Page object methods should be transparent about what they do.
        // This results in a better level of control/transparency in the tests.
        driver.typeInto(USERNAME_INPUT, user);
        driver.typeInto(PASSWORD_INPUT, pass);
        driver.click(LOGIN_BUTTON);
    }

    public boolean isErrorMessageVisible(int timeoutMillis) {
        // NOTE: We delegate the call to the driver,
        // allowing the page object to later define its own isVisible method
        // without colliding with driver methods.
        return driver.isVisible(ERROR_MESSAGE, timeoutMillis);
    }
}
Infrastructure basics
Sample DriverManager class:
class DriverManager {
    private static WebDriver driver;

    public static WebDriver getDriver() {
        return driver;
    }

    public static void setDriver(WebDriver driver) {
        // NOTE: Don't do null checks here.
        DriverManager.driver = driver;
    }

    public static WebDriver createDriver(String name) {
        //...
        return new ChromeDriver();
    }
}
Sample TestBase class:
class TestBase {
    // NOTE: just define the methods, do not annotate them.
    public static void setUpDriver() {
        // In v0.1 we'll be sharing the driver between tests in the same class,
        // assuming the tests will not be running in parallel.
        // For v1.0 you can improve the model after reading about test listeners.
        WebDriver driver = DriverManager.getDriver();
        if (driver != null) {
            return;
        }
        driver = DriverManager.createDriver("chrome");
        DriverManager.setDriver(driver);
    }

    public static void tearDownDriver() {
        WebDriver driver = DriverManager.getDriver();
        if (driver != null) {
            driver.quit();
            DriverManager.setDriver(null);
        }
    }
}
Finally - a test class:
class LoginTests extends TestBase {
    private LoginPage loginPage;

    @BeforeClass
    public static void setUpClass() {
        setUpDriver();
    }

    @AfterClass
    public static void tearDownClass() {
        tearDownDriver();
    }

    @BeforeMethod
    public void setUp() {
        // actions that are common to all test cases in the class
        loginPage = new LoginPage(DriverManager.getDriver());
        loginPage.goTo();
    }

    @AfterMethod
    public void tearDown() {
        // dispose the page objects to ensure no local data leftovers
        loginPage = null;
    }

    @Test
    public void testGivenExistingCredentialsWhenLoginThenNoError() {
        loginPage.loginAs("TestUser", "plain-text password goes here");
        boolean errorHere = loginPage.isErrorMessageVisible(30 * 1000);
        Assert.assertFalse(errorHere, "Unexpected error during login!");
    }

    @Test
    public void testGivenBadCredentialsWhenLoginThenErrorShown() {
        loginPage.loginAs("bad", "guy");
        boolean errorHere = loginPage.isErrorMessageVisible(30 * 1000);
        Assert.assertTrue(errorHere, "Error message not shown!");
    }
}
That's all there is to it.
Hope you enjoyed the ride.

Tagged Cucumber Scenarios Functioning

I have been experiencing something really weird; maybe someone can explain where I am making a mistake.
I have the following scenario in a feature file:
@DeleteUserAfterTest
Scenario: Testing a functionality
Given admin exists
When a user is created
Then the user is verified
My @After method in my Hooks class looks like the following:
@After
public void tearDown(Scenario scenario) {
    if (scenario.isFailed()) {
        final byte[] screenshot = ((TakesScreenshot) driver)
                .getScreenshotAs(OutputType.BYTES);
        scenario.embed(screenshot, "image/png"); // stick it in the report
    }
    driver.quit();
}
I am using the following method in my step definitions to delete the created user, based on the tag attached to the scenario:
@After("@DeleteUserAfterTest")
public void deleteUser() {
    // functionality to delete the user
}
My test runner looks something like this:
import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

@CucumberOptions(
        plugin = {"pretty", "com.aventstack.extentreports.cucumber.adapter.ExtentCucumberAdapter:", "json:target/cucumber-report/TestResult.json"},
        monochrome = false,
        features = "src/test/resources/features/IntegrationScenarios.feature",
        tags = "@DeleteUserAfterTest",
        glue = "Steps")
public class IntegrationTest extends AbstractTestNGCucumberTests {
}
However, when I launch the test case, sometimes my user is deleted in the @After("@DeleteUserAfterTest") hook, but sometimes the run does not recognise the tagged @After at all: it goes directly to the @After method in my Hooks class and quits the driver. Maybe someone has encountered this problem or knows a workaround!
Method order is not defined in Java, so you have to tell Cucumber in which order your hooks should be executed. Higher numbers run first (before hooks work the other way around).
@After(order = 500)
public void tearDown(Scenario scenario) {
}

@After(value = "@DeleteUserAfterTest", order = 1000)
public void deleteUser() {
}

Espresso: What are the advantages/disadvantages of having multiple tests vs. one user journey?

As an example, I have an app with a MainActivity that has a button and a NextActivity that has a RecyclerView populated with the integers in a vertical list. I could write the following separate Espresso tests:
Test 1:
public class MainScreenTest {
    @Rule
    public IntentsTestRule<MainActivity> mainActivityRule =
            new IntentsTestRule<>(MainActivity.class);

    @Test
    public void shouldOpenNextActivityOnClick() {
        onView(withId(R.id.btn)).check(matches(withText("foo")));
        onView(withId(R.id.btn)).perform(click());
        intended(hasComponent("com.example.androidplayground.NextActivity"));
    }
}
Test 2:
public class NextScreenTest {
    @Rule
    public ActivityTestRule<NextActivity> nextActivityRule =
            new ActivityTestRule<>(NextActivity.class);

    @Test
    public void shouldScrollToItem() throws Exception {
        int position = 15;
        onView(withId(R.id.rv))
                .perform(RecyclerViewActions.scrollToPosition(position));
        onView(withText(String.valueOf(position))).check(matches(isDisplayed()));
    }
}
Alternatively, I could write one test that covers both:
public class UserJourneyTest {
    @Rule
    public ActivityTestRule<MainActivity> mainActivityRule =
            new ActivityTestRule<>(MainActivity.class);

    @Test
    public void userJourney() {
        onView(withId(R.id.btn)).check(matches(withText("foo")));
        onView(withId(R.id.btn)).perform(click());
        int position = 15;
        onView(withId(R.id.rv))
                .perform(RecyclerViewActions.scrollToPosition(position));
        onView(withText(String.valueOf(position))).check(matches(isDisplayed()));
    }
}
Is one way better than the other? Will I gain a significant increase in performance by having one user journey instead of multiple separate tests?
My opinion is that if you're navigating from MainActivity to NextActivity by clicking a button, you wouldn't want to write a test that launches NextActivity directly. Espresso certainly allows this, but if MainActivity passes some data to NextActivity, you won't have that data when your test launches NextActivity directly.
I'd say that first of all, by writing a UI automation test you want to simulate a user's behaviour, so I would go for the third option you posted above, UserJourneyTest.
In your case it's not a matter of performance; it's a matter of testing it right.

JUnit: Run one test with different configurations

I have 2 test methods, and I need to run them with different configurations:
myTest() {
.....
.....
}
@Test
myTest_c1() {
    setConf1();
    myTest();
}
@Test
myTest_c2() {
    setConf2();
    myTest();
}
//------------------
nextTest() {
.....
.....
}
@Test
nextTest_c1() {
    setConf1();
    nextTest();
}
@Test
nextTest_c2() {
    setConf2();
    nextTest();
}
I cannot run them both from one config (as in the code below) because I need separate methods for Tosca execution.
@Test
tests_c1() {
    setConf1();
    myTest();
    nextTest();
}
I don't want to write those 2 extra methods to run each test; how can I solve this?
First I thought of writing a custom annotation:
@Test
@RunWithBothConf
myTest() {
    ....
}
But maybe there are other solutions for this?
What about using Theories?
@RunWith(Theories.class)
public class MyTest {
    private enum Configs {
        C1, C2, C3;
    }

    @DataPoints
    public static Configs[] configValues = Configs.values();

    private void doConfig(Configs config) {
        switch (config) { ... }
    }

    @Theory
    public void test1(Configs config) {
        doConfig(config);
        // rest of test
    }

    @Theory
    public void test2(Configs config) {
        doConfig(config);
        // rest of test
    }
}
I have a similar issue in a bunch of test cases I have, where certain tests need to be run with different configurations. Now, 'configuration' in your case might be more like settings, in which case maybe this isn't the best option, but for me it's more like a deployment model, so it fits.
Create a base class containing the tests.
Extend the base class with one that represents the different configuration.
As you execute each of the derived classes, the tests in the base class will be run with the configuration setup in its own class.
To add new tests, you just need to add them to the base class.
Here is how I would approach it:
Create two test classes.
The first class configures conf1, using the @Before annotation to trigger the setup.
The second class extends the first but overrides the configure method.
In the example below I have a single member variable, conf. If no configuration is run, it stays at its default value 0. setConf1 is now setConf in the Conf1Test class, which sets this variable to 1; setConf2 is now setConf in the Conf2Test class.
Here is the main test class:
public class Conf1Test {
    protected int conf = 0;

    @Before
    public void setConf() {
        conf = 1;
    }

    @Test
    public void myTest() {
        System.out.println("starting myTest; conf=" + conf);
    }

    @Test
    public void nextTest() {
        System.out.println("starting nextTest; conf=" + conf);
    }
}
And the second test class
public class Conf2Test extends Conf1Test {
    // override setConf to perform the "setConf2" function
    @Override
    public void setConf() {
        conf = 2;
    }
}
When I configure my IDE to run all tests in the package I get the following output:
starting myTest; conf=1
starting nextTest; conf=1
starting myTest; conf=2
starting nextTest; conf=2
I think this gives you what you need. Each test only has to be written once, and each test gets run twice: once with conf1 and once with conf2.
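The inheritance trick works because JUnit invokes the inherited @Before method reflectively on the subclass instance, so dynamic dispatch runs the override. Stripped of JUnit, the mechanism looks like this (plain Java, mirroring the classes above):

```java
// Sketch of the override mechanism behind the Conf1Test/Conf2Test answer:
// the setup method is virtual, so the subclass decides the configuration.
public class ConfDemo {
    static class Conf1Test {
        protected int conf = 0;
        public void setConf() { conf = 1; }  // @Before in the real class
        public String myTest() { return "starting myTest; conf=" + conf; }
    }

    static class Conf2Test extends Conf1Test {
        @Override
        public void setConf() { conf = 2; }  // overrides the setup
    }

    // What the JUnit runner does before each @Test: call the setup,
    // then the test body, on the same instance.
    static String run(Conf1Test test) {
        test.setConf();
        return test.myTest();
    }

    public static void main(String[] args) {
        System.out.println(run(new Conf1Test()));  // conf=1
        System.out.println(run(new Conf2Test()));  // conf=2
    }
}
```

Running both classes therefore executes every test once per configuration, exactly as in the output shown above.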
The way you have it right now seems fine to me. You aren't duplicating any code, and each test is clear and easy to understand.

Does TestNG guarantee @BeforeSuite methods are executed before @BeforeTest methods?

BACKGROUND: My goal is to code a TestNG-Selenium system that runs self-contained (no strings to Maven or Ant plugins; just Java). It must allow test cases to accept parameters, including the browser and the domain URL. When the TestRunner instantiates these test cases, the browser and domain are used to get a Selenium object to perform the testing.
PROBLEM: Only one test class per suite succeeds in getting the domain parameter (in a @BeforeSuite method) before attempting to get a Selenium object (in a @BeforeTest). The test classes that do not receive the domain have a null Selenium object because it can't be instantiated.
CODE: The XmlClasses are each contained within their own XmlTest, and all three are in a single XmlSuite. The suite contains them in the order TestClass1, TestClass2, then TestClass3. The test classes themselves are subclasses of two layers of abstract base classes that include functionality to initialize injected variables and subsequently get an instance of Selenium. The purpose of this is to test one or more applications (on multiple domains) with as little repeated code as possible (i.e., Selenium instantiation is in the root base class because it's common to all tests). See the methods below for details.
// Top-most custom base class
abstract public class WebAppTestBase extends SeleneseTestBase {
    private static Logger logger = Logger.getLogger(WebAppTestBase.class);
    protected static Selenium selenium = null;
    protected String domain = null;
    protected String browser = null;

    @BeforeTest(alwaysRun = true)
    @Parameters({ "selenium.browser" })
    public void setupTest(String browser) {
        this.browser = browser;
        logger.debug(this.getClass().getName()
                + " acquiring Selenium instance ('" + this.browser + " : " + domain + "').");
        selenium = new DefaultSelenium("localhost", 4444, browser, domain);
        selenium.start();
    }
}

// Second-level base class
public abstract class App1TestBase extends WebAppTestBase {
    @BeforeSuite(alwaysRun = true)
    @Parameters({ "app1.domain" })
    public void setupSelenium(String domain) {
        // This should execute for each test case prior to instantiating
        // any Selenium objects in @BeforeTest
        logger.debug(this.getClass().getName() + " starting selenium on domain '" + domain + "'.");
        this.domain = domain;
    }
}

// Leaf-level test class
public class TestClass1 extends App1TestBase {
    @Test
    public void validateFunctionality() throws Exception {
        // Code for tests goes here...
    }
}
// Leaf level test class
public class TestClass2 extends App1TestBase {
    @Test
    public void validateFunctionality() throws Exception {
        selenium.isElementPresent( ...
        // Rest of code for tests goes here...
        // ....
    }
}

// Leaf-level test class
public class TestClass3 extends App1TestBase {
    @Test
    public void validateFunctionality() throws Exception {
        // Code for tests goes here...
    }
}
OUTPUT: TestClass3 runs correctly; TestClass1 and TestClass2 fail. A stack trace is generated...
10:08:23 [DEBUG RunTestCommand.java:63] - Running Tests.
10:08:23 [Parser] Running:
Command line suite
Command line suite
[DEBUG App1TestBase.java:49] - TestClass3 starting selenium on domain 'http://localhost:8080'.
10:08:24 [DEBUG WebAppTestBase.java:46] - TestClass2 acquiring Selenium instance ('*firefox : null').
10:08:24 [ERROR SeleniumCoreCommand.java:40] - Exception running 'isElementPresent 'command on session null
10:08:24 java.lang.NullPointerException: sessionId should not be null; has this session been started yet?
at org.openqa.selenium.server.FrameGroupCommandQueueSet.getQueueSet(FrameGroupCommandQueueSet.java:216)
at org.openqa.selenium.server.commands.SeleniumCoreCommand.execute(SeleniumCoreCommand.java:34)
at org.openqa.selenium.server.SeleniumDriverResourceHandler.doCommand(SeleniumDriverResourceHandler.java:562)
at org.openqa.selenium.server.SeleniumDriverResourceHandler.handleCommandRequest(SeleniumDriverResourceHandler.java:370)
at org.openqa.selenium.server.SeleniumDriverResourceHandler.handle(SeleniumDriverResourceHandler.java:129)
at org.openqa.jetty.http.HttpContext.handle(HttpContext.java:1530)
at org.openqa.jetty.http.HttpContext.handle(HttpContext.java:1482)
at org.openqa.jetty.http.HttpServer.service(HttpServer.java:909)
at org.openqa.jetty.http.HttpConnection.service(HttpConnection.java:820)
at org.openqa.jetty.http.HttpConnection.handleNext(HttpConnection.java:986)
at org.openqa.jetty.http.HttpConnection.handle(HttpConnection.java:837)
at org.openqa.jetty.http.SocketListener.handleConnection(SocketListener.java:245)
at org.openqa.jetty.util.ThreadedServer.handle(ThreadedServer.java:357)
at org.openqa.jetty.util.ThreadPool$PoolThread.run(ThreadPool.java:534)
I appreciate any information you may have on this issue.
I think the problem is that your @BeforeSuite method is assigning a value to a field, but you have three different instances, so the other two never get initialized.
Remember that @BeforeSuite is only run once, regardless of what class it belongs to. As such, @BeforeSuite/@AfterSuite methods are usually defined on classes that are outside the entire test environment. These methods should really be static, but I decided not to enforce this requirement because it's sometimes impractical.
I think a better way to approach your problem is to look at your domain field as an injected resource that each of your tests will receive from Guice or another dependency injection framework.
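The diagnosis can be demonstrated in isolation: a once-per-suite initializer runs on a single instance, so the fields of the sibling instances stay null. A plain-Java sketch (class names are hypothetical):

```java
// Sketch of the @BeforeSuite pitfall: the suite-level setup runs exactly
// once, on one test-class instance, so sibling instances never get the
// field set and later fail with a null domain.
public class BeforeSuitePitfall {
    static class TestInstance {
        String domain;  // populated by the suite-level hook

        void setupSelenium(String d) {  // analogous to the @BeforeSuite method
            domain = d;
        }
    }

    public static void main(String[] args) {
        TestInstance t1 = new TestInstance();
        TestInstance t2 = new TestInstance();
        TestInstance t3 = new TestInstance();

        // TestNG runs the @BeforeSuite method exactly once -- here on t3 only.
        t3.setupSelenium("http://localhost:8080");

        System.out.println("t1.domain = " + t1.domain); // null
        System.out.println("t2.domain = " + t2.domain); // null
        System.out.println("t3.domain = " + t3.domain);
    }
}
```

This matches the log in the question: only TestClass3 printed the domain message, and the other two classes built their Selenium instance with a null domain.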
