I have searched around but have been unsuccessful in determining whether the following approach is possible / good practice. Basically what I would like to do is the following:
Create JUnit tests which use DbUnit to initialize the data. Run multiple test methods, each starting from the same initial dataset. After each test method, roll back to the state right after the initial setUp. After all test methods have run, roll back the changes made in setUp as well. At that point the data in the database should be exactly the same as it was before the JUnit test class ran.
Ideally I would not have to reinitialize the data before each test case because I could just roll back to the state right after setUp.
I have been able to roll back individual test methods but have been unable to roll back changes made in setUp after all test methods have run.
NOTE: I am aware of the different DbUnit operations such as CLEAN_INSERT, DELETE, etc. I am using the Spring framework to inject my dataSource.
An example layout would look like:
public class TestClass {

    public void setUp() {
        // Calls a method in a different class which uses DbUnit to initialize the database
    }

    public void runTest1() {
        // Runs a test which may insert / delete data in the database.
        // After running the test the database is in the same state as it was
        // after running setUp.
    }

    public void runTest2() {
        // Runs a test which may insert / delete data in the database.
        // After running the test the database is in the same state as it was
        // after running setUp.
    }

    // After runTest1 and runTest2 have finished, the database will be rolled back
    // to the state before any of the methods above had run.
    // The data will be unchanged, as if this class had never even been run.
}
I would be running the tests against a development database, but I would prefer not to affect any data currently in it. I am fine with running CLEAN_INSERT at the start to initialize the data, but after all test methods have run I would like the data back to how it was before my JUnit test ran.
Thanks in advance
Just as with "setUp", JUnit offers a "tearDown" method executed after each test method, which you could use to roll back. Also, starting with JUnit 4 you have the following annotations:
@BeforeClass: run once before any of the tests in the test case
@Before: run before each test method
@After: run after each test method
@AfterClass: run once after all the tests in the current class have been executed
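Mapped onto the question, the lifecycle could be sketched as below. This is only an outline (the class and method names are made up): it assumes everything runs on one JDBC connection with auto-commit disabled, so that a savepoint taken right after the initial insert lets each test roll back to the post-setUp state, while @AfterClass rolls back the setUp data itself. Note that this only works if the DbUnit operation runs inside that same uncommitted transaction.

```java
import java.sql.Connection;
import java.sql.Savepoint;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class DatabaseRollbackTest {

    private static Connection connection;  // obtained from the injected dataSource
    private static Savepoint afterSetUp;   // marks the state right after the initial insert

    @BeforeClass
    public static void setUpData() throws Exception {
        connection.setAutoCommit(false);
        // ... run the DbUnit CLEAN_INSERT here, without committing ...
        afterSetUp = connection.setSavepoint();
    }

    @After
    public void rollbackTestChanges() throws Exception {
        // undo whatever the test method changed, back to the post-setUp state
        connection.rollback(afterSetUp);
    }

    @AfterClass
    public static void rollbackSetUpData() throws Exception {
        // undo the CLEAN_INSERT itself; the database looks untouched again
        connection.rollback();
        connection.close();
    }

    @Test
    public void runTest1() { /* insert / delete data, then assert */ }
}
```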
We solved a similar problem at the oVirt open source project. Please take a look at the code residing at engine\backend\manager\modules\dal\src\test\java\org\ovirt\engine\core\dao\BaseDAOTestCase.java.
In general, look at what we did there in the @BeforeClass and @AfterClass methods. You can use the same approach on a per-method basis. We used the Spring test framework for that.
Related
This is more of a question about test automation framework design. It is very hard indeed to summarize the whole question in one line :)
I am creating a test automation framework using Selenium. Mostly I am accessing the data (method names) from an Excel file.
In my main Runner class I get a list of test cases. Each test case has a set of methods (which can be the same or different) that I have defined in a Java class, and I execute each method using the Java reflection API. Everything is fine up to this point.
Now I want to incorporate TestNG and reporting/logging into my automation suite. The problem is that I can't use @Test for each method, as TestNG considers @Test = 1 test case - but my one test case might have more than one method. My methods are more like test steps of a test case; the reason is that I don't want to repeat code. I want to create a @Test that dynamically calls different sets of methods and executes them in Java, or that defines each test step for a @Test. I went through the TestNG documentation but could not locate any feature to handle this situation.
Any help is really appreciated, and if you have any other thoughts on how to handle this situation, I am here to listen.
Did you try the following?
@Test(priority = 1)
public void step1() {
    // code
}

@Test(priority = 2)
public void step2() {
    // code
}
You need to use "priority" for each method; otherwise TestNG will not run them in the intended order.
When writing code that interacts with external resources (such as a web service or other network operation), I often structure the classes so that they can also be "stubbed" using a file or some other input method. I then end up using the stubbed implementation to test other parts of the system, plus one or two tests that specifically test calling the web service.
The problem is I don't want to be calling these external services either from Jenkins or when I run all of the tests for my project (e.g. "gradle test"). Some of the services have side effects, or may not be accessible to all developers.
Right now I just uncomment and then re-comment the @Test annotation on these particular test methods to enable and disable them: enable one, run it manually to check it, then remember to comment it out again.
// Uncomment to test the external service manually
//@Test
public void testSomethingExternal() {
    ...
}
Is there a better way of doing this?
EDIT: For manual unit testing, I use Eclipse and am able to just right-click on the test method and do Run As -> JUnit test. But that doesn't work without the (uncommented) annotation.
I recommend using JUnit categories. See this blog for details: https://community.oracle.com/blogs/johnsmart/2010/04/25/grouping-tests-using-junit-categories-0.
Basically, you can annotate some tests as belonging to a special category and then set up two test suites: one that runs the tests in that category, and one that ignores them (but runs everything else).
@Category(IntegrationTests.class)
public class AccountIntegrationTest {

    @Test
    public void thisTestWillTakeSomeTime() {
        ...
    }

    @Test
    public void thisTestWillTakeEvenLonger() {
        ....
    }
}
You can even annotate individual tests:
public class AccountTest {

    @Test
    @Category(IntegrationTests.class)
    public void thisTestWillTakeSomeTime() {
        ...
    }
}
Anytime I see something manually getting turned on or off I cringe.
As far as I can see you use Gradle, and the JUnit API says that the @Ignore annotation disables a test. I would add a Gradle task which adds @Ignore to those tests.
If you're just wanting to disable tests for functionality that hasn't been written yet, or to otherwise manually disable some tests temporarily, you can use @Ignore; the tests will be skipped but still noted in the report.
If you are wanting something like Spring Profiles, where you can define rulesets for which tests get run when, you should either split up your tests into separate test cases or use a Filter.
You can use the @Ignore annotation to prevent them from running automatically during the test run. If required, you may still trigger such ignored tests manually.
@Test
public void wantedTest() {
    checkMyFunction(10);
}

@Ignore
@Test
public void unwantedTest() {
    checkMyFunction(11);
}
In the above example, unwantedTest will be excluded.
I want test execution to continue even when one or more assertions fail in TestNG.
I referred below links in order to implement soft assertion in my project.
http://beust.com/weblog/2012/07/29/reinventing-assertions/
http://seleniumexamples.com/blog/guide/using-soft-assertions-in-testng/
http://www.seleniumtests.com/2008/09/soft-assertion-is-check-which-doesnt.html
But I do not understand the flow of code execution, i.e. the function calls and overall flow.
Kindly help me understand how the soft assertions work.
Code:
import org.testng.asserts.Assertion;
import org.testng.asserts.IAssert;

// Implementation of the soft assertion
public class SoftAssertions extends Assertion {

    @Override
    public void executeAssert(IAssert a) {
        try {
            a.doAssert();
        } catch (AssertionError ex) {
            System.out.println(a.getMessage());
        }
    }
}

// Calling the soft assertion
SoftAssertions sa = new SoftAssertions();
sa.assertTrue(actualTitle.equals(expectedTitle),
    "Login Success, But Uname and Pwd are wrong");
Note:
Execution continues even though the above assertion fails.
Soft assertions work by storing the failure in local state (maybe logging them to stderr as they are encountered). When the test is finished it needs to check for any stored failures and, if any were encountered, fail the entire test at that point.
I believe what the maintainer of TestNG had in mind was a call to myAssertion.assertAll() at the end of the test which will run Assert.fail() and make the test fail if any previous soft-assertion checks failed.
You can make this happen yourself by adding a @BeforeMethod method that initializes your soft-assertion object, using that object in your tests, and adding an @AfterMethod method that runs assertAll() on it.
Be aware that this @BeforeMethod/@AfterMethod approach makes your test non-thread-safe, so each test must run in a new instance of your test class. Creating the soft-assertion object inside the test method itself and running the assertAll() check at the end of the method is preferable if your test needs to be thread-safe. One of the cool features of TestNG is its ability to run multi-threaded tests, so keep that in mind as you implement these soft asserts.
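The flow can be illustrated without TestNG at all: a soft assertion simply collects failures instead of throwing immediately, and a final assertAll() throws if anything was collected. A minimal plain-Java sketch of that mechanism (the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class SoftAssertSketch {

    // collects failure messages instead of failing immediately
    public static class SoftCollector {
        private final List<String> failures = new ArrayList<>();

        public void assertTrue(boolean condition, String message) {
            if (!condition) {
                failures.add(message); // stored, not thrown - execution continues
            }
        }

        // called once at the end of the test: fail now if anything was collected
        public void assertAll() {
            if (!failures.isEmpty()) {
                throw new AssertionError("Soft assertion failures: " + failures);
            }
        }
    }

    public static void main(String[] args) {
        SoftCollector sa = new SoftCollector();
        sa.assertTrue(1 + 1 == 2, "math is broken");   // passes
        sa.assertTrue("a".equals("b"), "a is not b");  // fails, but we keep going
        System.out.println("still running after a failed check");
        try {
            sa.assertAll(); // now the whole test fails
        } catch (AssertionError expected) {
            System.out.println("assertAll threw: " + expected.getMessage());
        }
    }
}
```

This is exactly why forgetting the final assertAll() call makes a test silently pass: the failures are only recorded, never thrown, until that last step.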
I am trying to use H2 or HSQLDB for my unit testing, but my application does not use Spring or Hibernate. It seems most of the references on in-memory HSQLDB/H2 for unit testing assume Spring and Hibernate.
Can someone point me to a reference where HSQLDB/H2 is used plainly with JUnit? I appreciate your time.
I usually do something like this:
In the @Before method I establish a connection to an in-memory database, something like this:
@Before
public void setup() throws SQLException
{
    this.dbConnection = DriverManager.getConnection("jdbc:hsqldb:mem:testcase;shutdown=true", "sa", null);
}
The connection is stored in an instance variable, so it's available for each test.
Then, if all tests share the same tables, I also create those inside the setup() method; otherwise each test creates its own tables:
@Test
public void foo() throws SQLException
{
    Statement stmt = this.dbConnection.createStatement();
    stmt.execute("create table foo (id integer)");
    this.dbConnection.commit();
    // ... now run the test
}
In the @After method I simply close the connection, which means the in-memory database gets wiped and the next test runs with a clean version:
@After
public void tearDown()
    throws Exception
{
    dbConnection.close();
}
Sometimes I do need to run unit tests against a real database server (you can't test Postgres- or Oracle-specific features using HSQLDB or H2). In that case I establish the connection only once per test class instead of once per test method, and I have methods that drop all objects in order to clean up the schema.
This can all be put into a little utility class to avoid some of the boilerplate code. Apache's DbUtils can also make life easier, as will DbUnit if you want to externalize the test data somehow.
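As an illustration, such a utility class could be as small as this sketch (the class name is made up; it assumes the HSQLDB driver is on the test classpath whenever open() is actually called):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// small helper that hides the in-memory database boilerplate from the tests
public final class MemoryDb {

    private MemoryDb() {}

    // "shutdown=true" wipes the database when the last connection is closed
    public static String url(String name) {
        return "jdbc:hsqldb:mem:" + name + ";shutdown=true";
    }

    public static Connection open(String name) throws SQLException {
        return DriverManager.getConnection(url(name), "sa", "");
    }

    // convenience method for per-test DDL: run the statements, then commit
    public static void execute(Connection con, String... statements) throws SQLException {
        try (Statement stmt = con.createStatement()) {
            for (String sql : statements) {
                stmt.execute(sql);
            }
        }
        con.commit();
    }
}
```

A test's setup() method then shrinks to `this.dbConnection = MemoryDb.open("testcase");`.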
I know I am a little late to the party :-)
I had the same issue a while back and created a JUnit integration that uses the @Rule mechanism to set up an in-memory database for JUnit tests. I found it to be a really easy and good way to test my database integration code. Feedback is more than welcome.
The source code and instructions for use can be found at https://github.com/zapodot/embedded-db-junit
I have a problem with some JUnit 4 tests that I run with a test suite.
If I run the tests individually they work with no problems, but when run in a suite most of them (around 90% of the test methods) fail with errors. What I noticed is that the first tests always work fine but the rest fail. Another thing is that in a few of the tests the methods are not executed in the declared order (reflection does not necessarily return methods in the order they were created). This usually happens if more than one test class has methods with the same name. I tried to debug some of the tests, and it seems that from one line to the next the value of some attributes becomes null.
Does anyone know what is the problem, or if the behavior is "normal"?
Thanks in advance.
P.S.:
OK, the tests do not depend on each other, none of them do, and they all have @BeforeClass, @Before, @After and @AfterClass methods, so everything is cleaned up between tests. The tests work with a database, but the database is cleared before each test in the @BeforeClass, so this should not be the problem.
Simplified example:
TEST SUITE:
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
// imports of the test classes...

@RunWith(Suite.class)
@Suite.SuiteClasses({ Test1.class, Test2.class })
public class TestSuiteX {

    @BeforeClass
    public static void setupSuite() { System.out.println("Tests started"); }

    @AfterClass
    public static void teardownSuite() { System.out.println("Tests finished"); }
}
TESTS:
The tests are testing functionality of a server application running on GlassFish.
The tests extend a base class that has a @BeforeClass method that clears the database and logs in, and an @AfterClass method that only logs off.
This is not the source of the problems, because the same thing happened before introducing this class.
The base class has some public static attributes that are not used in the other tests, and it implements the two control methods.
The rest of the classes, in this example the two below, extend the base class and do not override the inherited control methods.
Example of the test classes:
imports....

public class Test1 extends AbstractTestClass {

    protected static Log log = LogFactory.getLog(Test1.class.getName());

    @Test
    public void test1_A() throws CustomException1, CustomException2 {
        System.out.println("text");
        // creates some entities with the server API
        // deletes a couple of entities with the server API
        // tests if the entities exist in the database
        Assert.assertNull(serverapi.isEntity(..));
    }
}
and the second :
public class Test2 extends AbstractTestClass {

    protected static Log log = LogFactory.getLog(Test2.class.getName());
    private static String keyEntity;
    private static EntityDO entity;

    @Test
    public void test1_B() throws CustomException1, CustomException2 {
        System.out.println("text");
        // creates some entities with the server API, stores one entity's key and
        // one entity DO in the static attributes for use in the next method
        // deletes a couple of entities with the server API
        // tests if the entities exist in the database
        Assert.assertNull(serverapi.isEntity(..));
    }

    @Test
    public void test2_B() throws CustomException1, CustomException2 {
        System.out.println("text");
        // deletes the two entities: the one retrieved by the key and the one
        // associated with the static DO attribute
        // tests if the deleted entities exist in the database
        Assert.assertNull(serverapi.isEntity(..));
    }
}
This is a basic example; the actual tests are more complex, but I tried with simplified tests and it still does not work.
Thank you.
The situation you describe sounds like a side-effecting problem. You mention that tests work fine in isolation but are dependent on order of operations: that's usually a critical symptom.
Part of the challenge of setting up a whole suite of test cases is the problem of ensuring that each test starts from a clean state, performs its testing and then cleans up after itself, putting everything back in the clean state.
Keep in mind that there are situations where the standard cleanup routines (e.g., @Before and @After) aren't sufficient. One problem I had some time ago was in a set of database tests: I was adding records to the database as part of the test and needed to specifically remove the records that I'd just added.
So, there are times when you need to add specific cleanup code to get back to your original state.
It seems that you built your test suite on the assumption that the order of executing methods is fixed. This is wrong - JUnit does not guarantee the order of execution of test methods, so you should not count on it.
This is by design - unit tests should be totally independent of each other. To help guaranteeing this, JUnit creates a distinct, new instance of your test class for executing each test method. So whatever attributes you set in one method, will be lost in the next one.
If you have common test setup / teardown code, you should put it into separate methods annotated with @Before / @After. These are executed before and after each test method.
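The "new instance per test method" behavior can be demonstrated with plain reflection, which is roughly what JUnit does under the hood (the classes below are made up for illustration):

```java
import java.lang.reflect.Method;

public class FreshInstanceDemo {

    // stand-in for a test class with mutable state
    public static class StatefulTest {
        public int counter = 0;

        public void test1() { counter++; }
        public void test2() { counter++; }
    }

    public static void main(String[] args) throws Exception {
        // JUnit-style: a brand-new instance for every test method
        for (String name : new String[] { "test1", "test2" }) {
            StatefulTest fresh = StatefulTest.class.getDeclaredConstructor().newInstance();
            Method m = StatefulTest.class.getMethod(name);
            m.invoke(fresh);
            // counter is 1 both times: state set by the previous method is gone
            System.out.println(name + " -> counter = " + fresh.counter);
        }
    }
}
```

This is why the attributes in the question appear to "become null" between methods: each test method sees a fresh object, so only static fields survive from one method to the next.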
Update: you wrote
the database is cleared before each test in the @BeforeClass
If this is not a typo, it can be the source of your problems: the DB should be cleared in the @Before method - @BeforeClass is run only once per class.
Be careful in how you use @BeforeClass to set things up once for the whole class, and @Before to set things up before each individual test. And be careful about instance variables.
We may be able to help more specifically, if you can post a simplified example of what is going wrong.