I have written tests in the past using in-memory databases.
What I wanted to know is: is it possible to write tests in Spring, JUnit, and Java, using an in-memory DB, where the data is not rolled back after each test but kept in the DB?
Basically, tests that are dependent on each other?
Any ideas?
Whether or not to roll back DB changes is up to you.
But unit tests should be independent of each other.
Small extract from a recent DZone article on the subject:
Make each test independent of all the others
Do not make chains of unit test cases. A chain will prevent you from identifying the root cause of a test failure, and you will have to debug the code. It also creates dependencies, meaning that if you have to change one test case, then you need to make changes in multiple test cases unnecessarily.
Try to use @Before and @After methods to set up prerequisites, if any, for all your test cases. If you need multiple things in @Before or @After to support different test cases, then consider creating a new test class.
Your tests should be independent.
But if you want, I guess you can try the @Rollback annotation.
I have not tried it myself, but I have seen it in the docs for transactional tests.
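For what it's worth, here is a minimal sketch of how that could look with Spring's TestContext framework and JUnit 4 (assuming Spring 4.2+ for @Commit; older versions use @Rollback(false) instead). AppConfig, UserRepository, and User are hypothetical names:

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.annotation.Commit;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.transaction.annotation.Transactional;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = AppConfig.class) // hypothetical config class
    @Transactional
    public class KeepDataTest {

        @Autowired
        private UserRepository userRepository; // hypothetical repository

        @Test
        @Commit // keep the data instead of rolling the transaction back
        public void insertIsKeptForLaterTests() {
            userRepository.save(new User("alice"));
            // Because of @Commit, this row survives the test and is visible
            // to tests that run afterwards against the same in-memory DB.
        }
    }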
I am doing some JUnit testing, and I need to know how to run a test class only if a specific test from another class passes.
There is a 'Categories' feature in JUnit. (See: https://github.com/junit-team/junit/wiki/Categories)
This question has already been answered in this post.
In the @Before method you need to retrieve a runtime value from your system and check that it matches your requirement; everything will stop if it doesn't.
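A rough sketch of that idea: JUnit's Assume, used in @Before, skips (rather than fails) every test in the class when the runtime check does not hold. The system property name below is purely hypothetical:

    import static org.junit.Assume.assumeTrue;

    import org.junit.Before;
    import org.junit.Test;

    public class DependentTest {

        @Before
        public void checkPrecondition() {
            // If the assumption fails, JUnit marks the tests in this class
            // as skipped instead of running them.
            assumeTrue(Boolean.getBoolean("previous.suite.passed"));
        }

        @Test
        public void runsOnlyWhenPreconditionHolds() {
            // actual test body
        }
    }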
IMHO, this is bad practice, even in a well-designed integration test suite. I would encourage you to rethink your overall test class design.
If these tests are truly meant to be unit tests, they should be atomic and independent of each other. See this post for a good read.
Having said that, I have often used JUnit 4.x to build and run rather large suites of integration tests (backend functional tests that exercise a RESTful service's responses). If this is your use case, I recommend restructuring your tests so that you never have one test in TestClassA depend on a test in TestClassB. That is a bad idea: it will make your tests more fragile, and it will be difficult for other devs to understand the intention of your tests, taken together as a whole.
When I have found that I have dependencies across multiple test classes, I have factored out a "test superclass" shared by both test classes and done my setup work in that superclass. Alternatively, you can factor out a utility class containing static methods for creating the somewhat complex test conditions to start with.
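A minimal sketch of that superclass idea, with TestDatabase standing in as a hypothetical helper:

    import org.junit.Before;

    public abstract class AbstractIntegrationTest {

        protected TestDatabase db; // hypothetical helper class

        @Before
        public void baseSetUp() {
            // Common setup shared by all integration test classes.
            db = TestDatabase.connect();
            db.loadBaselineData();
        }
    }

    // In its own file: concrete tests only add what is specific to them.
    class OrderServiceIT extends AbstractIntegrationTest {
        // @Test methods here can rely on the baseline data from baseSetUp().
    }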
But even using JUnit as a vehicle to run these kinds of "integration" tests should be done with caution and careful intent.
We have a Java/Tomcat project using Spring and JPA, with a Maven build, JUnit for unit tests, and TestNG for integration tests.
Some integration tests require a database, so a new DB is created each time mvn verify is run. The problem now is populating it with test data.
Should I look into DbUnit, persist the objects myself using JPA, or take another approach?
How do I load test data into the DB each time the integration tests are run, so as to have a stable testing environment?
I'm using DbUnit with an in-memory database. It's helpful for loading specific test datasets, running the tests, verifying the database contents after each test, and cleaning up the database after the test is run.
The "pros" of DbUnit are that it allows you to control the state of the database before and after each test. The "cons" are that you work with test datasets in a custom XML format, not SQL. You can export from SQL to this custom XML format, but you will still occasionally need to edit the XML file by hand.
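To illustrate, here is roughly what a DBUnit flat XML dataset and the code that loads it look like (the file name and the users table are made up for the example):

    // The dataset file (src/test/resources/users.xml) would look like:
    //
    //   <dataset>
    //       <users id="1" name="alice"/>
    //       <users id="2" name="bob"/>
    //   </dataset>

    import java.io.File;
    import java.sql.Connection;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;

    public class DbUnitLoader {

        // Wipes the tables named in the dataset and inserts the dataset rows.
        public static void loadDataset(Connection jdbcConnection) throws Exception {
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(new File("src/test/resources/users.xml"));
            IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        }
    }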
I take a copy of the live database and make the tests transactional, so they are rolled back each time.
We use DbUnit.
We load test data within JUnit in a @BeforeClass method.
And we delete/clean the data in a @BeforeClass and an @AfterClass method.
The problem now is populating it with test data
As each integration test might need different test data, I think that should be done as part of the set-up phase of each of the integration tests.
There are two patterns to consider: Fresh Fixture and Shared Fixture. The first provides better test isolation, as it recreates the test data for each test case, assuring a clean state. The latter introduces the risk of coupling tests but is faster, as it reuses the same instances of test data across many tests. Both are described in detail in Meszaros: xUnit Test Patterns.
Regardless of the choice, it may be worth considering the random-data-driven approach designed on top of Test Arranger: How to organize tests with Test Arranger. To my knowledge, it's the cheapest approach with regard to maintenance costs and the required amount of code.
I'm currently testing an interface for Java that enables R calls from Java. Therefore I need a connection which also encapsulates a process.
Now I need to know how JUnit processes those unit tests. I'm using JUnit 4 with its @Before and @After annotations to init a new connection (internally it's a process) once per test.
With "how JUnit processes" I mean:
Is every test executed in its own thread? (which could probably cause problems)
Are those tests executed sequentially?
Do they have a specific order? (not that important, but would be nice to know)
My concern is that those tests could cause problems which wouldn't exist if used properly (as documented) in a real environment.
The tests are executed sequentially. You should not rely on this fact, though, because relying on it indicates you are not writing pure unit tests and have created an anti-pattern (in terms of testing). Each test must be its own separate piece of work with no external dependencies outside of the @After and @Before initializations. I believe each test is executed in its own thread; once again, this harkens back to your test suite not being pure unit tests.
My concern is that those tests could cause problems which wouldn't exist if used properly (as documented) in a real environment.
Unit tests only validate one small piece of a function, typically one possible logic branch. If you want to test the system integration, you will need to do what is called integration testing. Furthermore, if you are looking to do multi-threaded testing, I highly recommend: Multi-threaded TC
I believe that, unless otherwise specified, JUnit is free to use as many threads as it likes to run your unit tests; you can restrict this to a single thread. The order in which tests are run is arbitrary. In theory, your tests should be properly thread-safe to avoid nondeterminism issues if run from different threads.
Is every test executed in its own thread? (which could probably cause problems) - no, there is one thread for all tests.
Are those tests executed sequentially? - yes.
Do they have a specific order? (not that important, but would be nice to know) - no; it is in principle impossible to tell beforehand the order in which the @Test methods in a class will be processed. But using a @Rule, you can keep track of the position of the current test in the sequence.
Of course, we are talking about different tests in one class.
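For example, JUnit's built-in TestName rule exposes the currently running test method; a custom rule written the same way could also count tests as they run:

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TestName;

    public class NameAwareTest {

        @Rule
        public TestName testName = new TestName();

        @Test
        public void firstTest() {
            // Prints "Running: firstTest".
            System.out.println("Running: " + testName.getMethodName());
        }
    }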
For each test class:
    Call @BeforeClass- and/or @ClassRule-annotated members.
    For each test (@Test):
        Create a new instance of the class.
        Call @Before- and/or @Rule-annotated members.
        Call the test method.
        Call @After- and/or @Rule-annotated members.
    Call @AfterClass- and/or @ClassRule-annotated members.
Usually everything runs on the same thread; however, don't rely on that, as some rules (e.g. timeout) will fork a thread, and you can decide to run tests in parallel.
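A tiny class that makes the order above visible when run; the output also shows that each @Test method gets a fresh instance:

    import org.junit.After;
    import org.junit.AfterClass;
    import org.junit.Before;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class LifecycleTest {

        @BeforeClass
        public static void beforeClass() { System.out.println("@BeforeClass"); }

        @Before
        public void before() { System.out.println("@Before on instance " + this); }

        @Test
        public void testOne() { System.out.println("testOne"); }

        @Test
        public void testTwo() { System.out.println("testTwo"); }

        @After
        public void after() { System.out.println("@After"); }

        @AfterClass
        public static void afterClass() { System.out.println("@AfterClass"); }
    }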
I am trying to figure out the best way(s) to test Service and DAO layers. So, a few sub-questions...
When testing a service layer, is it best to test against a mock DAO layer or a "live" DAO layer pointed at a testing environment?
How should SQL in the DAO layer be tested when the only test database is in a shared environment (Oracle/DB2)?
How do you solve the paradox that DAO writes/updates need to be tested with DAO reads, which are themselves something that also has to be tested?
I am looking for any good documentation, articles, or references in this area, along with any tools to help automate the process. I already know about JUnit for unit testing and Hudson for CI.
Get Growing Object-Oriented Software, Guided by Tests. It has some great tips about how to test database access.
Personally, I usually split the DAO tests in two: a unit test with a mocked database to test the functionality of the DAO, and an integration test to test the queries against the DB. If your DAO only has database access code, you won't need a unit test.
One of the suggestions from the book that I took is that the (integration) test has to commit the changes to the DB. I learned to do this after using Hibernate and figuring out that the test was marked for rollback and the DB never got the insert statement. If you use triggers or any kind of validation (even FKs), I think this is a must.
Another thing: stay away from DbUnit. It's a great framework to start working with, but it becomes hellish when a project grows beyond tiny. My preference here is to have a set of Test Data Builder classes to create the data and insert it in the setup of the test, or in the test itself (a sketch follows).
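A minimal sketch of such a Test Data Builder, with User as a hypothetical domain type:

    public class UserBuilder {

        // Sensible defaults; each test overrides only what it cares about.
        private String name = "default-name";
        private boolean active = true;

        public UserBuilder named(String name) {
            this.name = name;
            return this;
        }

        public UserBuilder inactive() {
            this.active = false;
            return this;
        }

        public User build() {
            return new User(name, active); // hypothetical constructor
        }
    }

    // In a test setup:
    //   User user = new UserBuilder().named("alice").inactive().build();
    //   dao.insert(user); // plain code instead of a dbunit XML file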
And check out dbmigrate; it's not for testing, but it will help you manage scripts to upgrade and downgrade your DB schema.
In the scenario where the DB server is shared, I've created one schema/user per environment. Since each developer has their own "local" environment, they also own one schema.
Here are my answers:
Use mock DAOs to test your services. Much easier, much faster. Use EasyMock or Mockito or any other mock framework to test the service layer (see the sketch after this list).
Give each developer their own database schema to execute the tests in. Such schemas are typically empty: the unit tests populate the database with a small test data set before running a test, and empty it once the test is completed. Use DBUnit for this.
If the reads work against a well-defined, static test data set (which you should unit-test), then you can rely on them to unit-test the writes. But you can also use ad-hoc queries or even DBUnit to test that the writes work as expected. The fact that the tests are not necessarily run in this order doesn't matter: if everything passes, then everything is OK.
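To illustrate point 1, here is a minimal Mockito sketch in which the DAO is mocked so the service test never touches a database; UserDao and UserService are hypothetical:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class UserServiceTest {

        @Test
        public void returnsDisplayNameFromDao() {
            // The mock replaces the real DAO: no schema, no data set, no DB.
            UserDao dao = mock(UserDao.class);
            when(dao.findNameById(42L)).thenReturn("alice");

            UserService service = new UserService(dao);

            assertEquals("alice", service.displayName(42L));
        }
    }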
What do you use for writing data-driven tests in JUnit?
(My definition of) a data-driven test is a test that reads data from some external source (file, database, ...), executes one test per line/file/whatever, and displays the results in a test runner as if you had separate tests: the result of each run is displayed separately, not in one huge aggregate.
In JUnit 4 you can use the Parameterized test runner to do data-driven tests.
It's not terribly well documented, but the basic idea is to create a static method (annotated with @Parameters) that returns a Collection of Object arrays. Each of these arrays is used as the arguments for the test class constructor, and then the usual test methods can be run using fields set in the constructor.
You can write code to read and parse an external text file in the @Parameters method (or get data from another external source), and then you'd be able to add new tests by editing this file without recompiling the tests.
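A canonical example with the data inlined; the @Parameters method could just as well read and parse an external file:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class AdditionTest {

        @Parameters
        public static Collection<Object[]> data() {
            // Each array becomes one run of every @Test method: {a, b, expected}.
            return Arrays.asList(new Object[][] {
                    {1, 1, 2},
                    {2, 3, 5},
                    {10, -4, 6}
            });
        }

        private final int a;
        private final int b;
        private final int expected;

        public AdditionTest(int a, int b, int expected) {
            this.a = a;
            this.b = b;
            this.expected = expected;
        }

        @Test
        public void addsCorrectly() {
            assertEquals(expected, a + b);
        }
    }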
This is where TestNG, with its @DataProvider, shines. That's one reason why I prefer it to JUnit. The others are test dependencies and parallel threaded tests.
I use an in-memory database such as HSQLDB so that I can either pre-populate the database with a "production-style" set of data or start with an empty database and populate it with the rows I need for my testing. On top of that, I write my tests using JUnit and Mockito.
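A rough sketch of that setup; the mem: scheme in the JDBC URL keeps the whole HSQLDB database in memory, and the table and rows here are made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InMemoryDbExample {

        public static Connection openTestDb() throws Exception {
            // "mem:testdb" lives only as long as the JVM, so every test run
            // starts from a known, empty state. "SA" is HSQLDB's default user.
            Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
            try (Statement s = c.createStatement()) {
                s.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
                s.execute("INSERT INTO users VALUES (1, 'alice')"); // seed row
            }
            return c;
        }
    }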
I use a combination of DbUnit, jMock, and JUnit 4. Then you can either run it as a suite or separately.
You are better off extending TestCase with a DataDrivenTestCase that suits your needs.
Here is a working example:
http://mrlalonde.blogspot.ca/2012/08/data-driven-tests-with-junit.html
Unlike parameterized tests, it allows for nicely named test cases.
I'm with @DroidIn.net; that is exactly what I am doing. However, to answer your question literally ("and displays the results in a test runner as if you had separate tests"), you have to look at the JUnit 4 Parameterized runner. DBUnit doesn't do that. If you have to do a lot of this, honestly TestNG is more flexible, but you can absolutely get it done in JUnit.
You can also look at the JUnit Theories runner, but my recollection is that it isn't great for data-driven datasets, which kind of makes sense because JUnit isn't about working with large amounts of external data.
Even though this is quite an old topic, I still thought of contributing my share.
I feel JUnit's support for data-driven testing is too limited and too unfriendly. For example, in order to use Parameterized, we need to write our own constructor. With the Theories runner we do not have control over the set of test data that is passed to the test method.
There are more drawbacks, as identified in this blog post series: http://www.kumaranuj.com/2012/08/junits-parameterized-runner-and-data.html
There is now a comprehensive solution coming along pretty nicely in the form of EasyTest, a framework extended out of JUnit that is meant to give a lot of functionality to its users. Its primary focus is to perform data-driven testing using JUnit, although you are no longer required to actually depend on JUnit. Here is the GitHub project for reference: https://github.com/anujgandharv/easytest
If anyone is interested in contributing their thoughts/code/suggestions, then this is the time. You can simply go to the GitHub repository and create issues.
Typically, data-driven tests use a small testable component to handle the data (a file-reading object, or mock objects). For databases and resources outside of the application, mocks are used to simulate the other systems (web services, databases, etc.). Typically, what I see is that there are external data files that hold the data and the expected output. This way the data file can be added to the VCS.
We currently have a props file with our ID numbers in it. This is horribly brittle, but it is an easy way to get something going. Our plan is to initially make these ID numbers overridable by -D properties in our Ant builds.
Our environment uses a legacy DB with horribly intertwined data that is not loadable before a run (e.g. by dbUnit). Eventually we would like to get to where a unit test would query the DB to find an ID with the property under test, then use that ID in the unit test. It would be slow and is more properly called integration testing, not "unit testing", but we would be testing against real data to avoid the situation where our app runs perfectly against test data but fails with real data.
Some tests will lend themselves to being interface-driven.
If the database/file reads are retrieved through an interface call, then simply have your unit test implement the interface, and the unit test class can return whatever data you want.
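A small sketch of this approach; RowSource and RowCounter are invented for the example:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class InterfaceDrivenTest {

        // Production code reads its rows through this small interface.
        interface RowSource {
            List<String> readRows();
        }

        // Unit under test, simplified for the sketch.
        static class RowCounter {
            private final RowSource source;
            RowCounter(RowSource source) { this.source = source; }
            int count() { return source.readRows().size(); }
        }

        @Test
        public void countsWhateverTheSourceReturns() {
            // The test supplies the interface implementation, so it controls
            // exactly what "data" the unit under test sees.
            RowSource canned = () -> Arrays.asList("a", "b", "c");
            assertEquals(3, new RowCounter(canned).count());
        }
    }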