I'm hitting some strange behaviour related to unit testing in my Java application.
During unit testing I use an in-memory HSQLDB pre-filled with custom data (via an insert script triggered automatically) and Hibernate as the ORM to access it.
The problem is the following: if I run the tests of a single class (e.g. TestDummyClass.java), the DB is recreated (from the original insert script) after each test method.
If I launch the tests for the whole project (src/test), which contains multiple test classes, the DB is initialized once at the beginning of each test class and not before each test method.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:com/wizche/app-context-test.xml" })
public class TestDummyClass {
.....
}
This means, for example, that if a test method in TestDummyClass changes the DB (e.g. by creating a new object), the new object will still be there for the following test methods in the same class. Therefore the asserts may give different results depending on whether I run the whole project or the single class!
Can someone explain why this is happening? How can I decide when to restore the clean DB?
NB: I'm using SpringJUnit4ClassRunner with a custom context configuration for the whole test project (which contains no parameter related to unit testing).
NB2: I start JUnit directly in Spring Eclipse.
The reason is that SpringJUnit4ClassRunner caches the application context when multiple tests use the exact same context location. This cache spans the whole suite, so if you execute all the tests in your project in a single run and share the same context location across multiple test classes, you will likely get a cached context; and if you modify beans in that context, the changes will be visible to the other tests too.
A fix is to add the @DirtiesContext annotation to the test class or test method.
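For example, a sketch reusing the asker's class (classMode was introduced in Spring 3.0, so this assumes at least that version):

import org.junit.runner.RunWith;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.annotation.DirtiesContext.ClassMode;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// AFTER_EACH_TEST_METHOD closes the context (and thus the embedded DB)
// after every test method, so each method starts from a freshly
// initialized database, at the cost of slower tests.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:com/wizche/app-context-test.xml" })
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public class TestDummyClass {
    // ...
}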
Eclipse runs unit tests in a single JVM, so in theory even when running a single class you should see the same behaviour as when running from src/test. I suspect that when running all your tests you are simply more likely to see the problem. What makes you sure the DB is recreated after each method when you run a single class test? I just want to make sure this is accurate.
A similar question has been raised before; check this link out. It will be helpful.
I would suggest using DBUnit to assist with building & tearing down your database. It provides methods to do this on the fly and interacts with all the main database providers.
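A minimal sketch of that approach (the JDBC URL and the test-data.xml data set are hypothetical, and the DBUnit 2.4+ FlatXmlDataSetBuilder API is assumed):

import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;

public class DummyClassDbTest {

    private IDatabaseConnection connection;

    @Before
    public void setUpDatabase() throws Exception {
        Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        connection = new DatabaseConnection(jdbc);
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/test-data.xml"));
        // CLEAN_INSERT deletes existing rows and re-inserts the data set,
        // giving every test method a known starting state
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }

    @After
    public void tearDownDatabase() throws Exception {
        connection.close();
    }
}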
Related
I'm working on an application where we use integration tests heavily, since a core framework we are using operates on the database.
I have test classes using a configuration context class such as this:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ConfigA.class)
public class A_Test {
}
The majority of tests use the same context as above; we have over 200 such tests. But recently we needed some additional configuration for certain use cases as well, like this:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { ConfigA.class, ConfigB.class })
public class B_Test {
}
The problem is that when we execute all tests with Maven or the IDE runners, the cached context for ConfigA no longer works: Spring tries to recreate a context for ConfigA, which fails because the H2 DB is already configured and Spring attempts to create the schemas and tables again.
To work around this we started to use @DirtiesContext on all tests. The result is a build time of over one hour, which reduces developer productivity significantly.
Question: is it possible to clear the context for tests like B_Test only? @DirtiesContext(classMode = ClassMode.AFTER_CLASS) doesn't help because the order of the tests is not guaranteed (and we really don't want to rely on ordering); it fails when B_Test-style tests are the last to run. The same goes for @DirtiesContext(classMode = ClassMode.BEFORE_CLASS), vice versa.
Is it possible to simulate @DirtiesContext(classMode = ClassMode.AFTER_CLASS) and @DirtiesContext(classMode = ClassMode.BEFORE_CLASS) at the same time on a bunch of tests?
Or is there any other way to solve this problem in general?
What we tried so far:
JUnit suites: didn't help at all with the Spring context
Context hierarchies: didn't help with the case where B_Test-style tests also dirty the context
Test ordering: nobody is really happy about refactoring all the tests to make things work magically
How about using both @DirtiesContext(classMode = ClassMode.AFTER_CLASS) and @DirtiesContext(methodMode = MethodMode.BEFORE_METHOD)?
When you do that, Spring will reload the context built from ConfigA.class and ConfigB.class just before invoking the test methods annotated with @DirtiesContext(methodMode = MethodMode.BEFORE_METHOD).
And then, after all tests of B_Test have finished, Spring shuts down the context (and the next test class run with SpringJUnit4ClassRunner will load its own context).
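A sketch of the combination (methodMode requires Spring 4.2+; ConfigA and ConfigB are the configuration classes from the question):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.annotation.DirtiesContext.ClassMode;
import org.springframework.test.annotation.DirtiesContext.MethodMode;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// The class-level annotation closes the ConfigA+ConfigB context once
// B_Test is done; the method-level annotation rebuilds the context
// before the annotated method runs, so state left behind by earlier
// test classes cannot leak in.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { ConfigA.class, ConfigB.class })
@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
public class B_Test {

    @Test
    @DirtiesContext(methodMode = MethodMode.BEFORE_METHOD)
    public void firstTest() {
        // ...
    }
}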
This is essentially a duplicate of Make Spring Boot Recreate Test Databases.
In summary, you likely only need to ensure that you are using unique names for each embedded database that is created using your ConfigA class.
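For illustration, a sketch of what that could look like inside ConfigA (generateUniqueName(true) requires Spring 4.2+; the schema.sql script is hypothetical):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
public class ConfigA {

    @Bean
    public DataSource dataSource() {
        // Each newly created context gets its own uniquely named embedded
        // database, so a second context cannot collide with schemas and
        // tables already created by the first one
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .generateUniqueName(true)
                .addScript("classpath:schema.sql")
                .build();
    }
}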
Please read my comments here for details: https://stackoverflow.com/a/28867247/388980
Also, see the comments in SPR-8849 for further details.
Regards,
Sam (author of the Spring TestContext Framework)
I have a set of tests which need a Spring context.
For fast test execution I want to make sure that the Spring context is initialized just once, then all the tests should be run against this context, then it should shut down.
I already tried the following approaches:
Use @RunWith(SpringJUnit4ClassRunner.class) and @ContextConfiguration(classes = MyAnnotatedConfig.class) to initialize the Spring context
Use @RunWith(SpringJUnit4ClassRunner.class) and @TestExecutionListeners({ MyTestExecutionListener.class }) with a handwritten test execution listener that initializes the Spring context and injects it into the concrete test classes
Use a @BeforeClass method in a base class and a static field to store the Spring context, and an @AfterClass method for shutdown
With all three approaches, the Spring context seems to be initialized more than once, which takes a lot of time. It seems that JUnit unloads classes while running tests, so the content of static fields is sometimes lost.
Is there a way to make sure the Spring context is initialized only once?
For fast test execution I want to make sure that the Spring context is initialized just once, then all the tests should be run against this context, then it should shut down.
I hate to ask the obvious, but...
Have you read the Testing chapter of the Spring Reference Manual?
Specifically, these sections explain what's going on:
Context management and caching
Context caching
Soooo, the TestContext framework certainly supports caching across tests within a test suite, and I should know, because I wrote it. ;)
Now as for why caching is not working for you, I can only assume that you have configured your build framework to fork for each test (or that you are running tests individually and manually within your IDE). Here's an excerpt from the last link above that might help you out:
Test suites and forked processes
The Spring TestContext framework stores application contexts in a static cache. This means that the context is literally stored in a static variable. In other words, if tests execute in separate processes the static cache will be cleared between each test execution, and this will effectively disable the caching mechanism.
To benefit from the caching mechanism, all tests must run within the same process or test suite. This can be achieved by executing all tests as a group within an IDE. Similarly, when executing tests with a build framework such as Ant, Maven, or Gradle it is important to make sure that the build framework does not fork between tests. For example, if the forkMode for the Maven Surefire plug-in is set to always or pertest, the TestContext framework will not be able to cache application contexts between test classes and the build process will run significantly slower as a result.
If you still experience issues after taking the above into consideration, please consider submitting a project that demonstrates your problem.
Cheers,
Sam
We are using JUnit to execute integration tests, and also system integration tests which rely on external test systems (not necessarily maintained by our own company).
I wonder where to put the code that checks whether the system is available prior to running the test cases, so I can determine whether a failure is due to the network or some other issue rather than the test itself.
JUnit allows setting up some parts of a test in JUnit rules. Is it a good idea to set up the service that communicates with the external system within the rule and do some basic checks ("ping") against the external system there? Or to store the state in the rule and use a JUnit assume(rule.isAvailable()) within the test to avoid having the test executed?
Or would it be smarter to put this verification code in a custom JUnit Runner?
Or is there even another way to do this? (simply create some utils?)
The goal is to skip the tests if some conditions are not met, since it is obvious the tests would fail. I know this indicates bad exception handling, but there is a lot of legacy code I can't change all at once.
I tried to find some articles myself but it seems the search terms ("test", "external system" and so on) are a little thankless.
thanks!
Consider using org.junit.Assume.* as described here. When assumptions fail, your tests are ignored by default. You can write a custom runner to do something else when your assumptions fail.
The key thing though is that the tests don't fail when assumptions like the availability of your external services fail.
If the ping applies to every single test in the class, I would put the ping call in the @Before method. @Before gets executed before every single test method (i.e., every @Test-annotated method).
If the ping does not apply to all tests in the class, then you would have to call ping explicitly from those methods.
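Putting the two previous answers together, a minimal sketch (the host, port, and socket-based ping are hypothetical; adapt them to however you reach your external system):

import static org.junit.Assume.assumeTrue;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.junit.Before;
import org.junit.Test;

public class ExternalSystemIT {

    private static final String HOST = "integration.example.com";
    private static final int PORT = 8080;

    @Before
    public void checkExternalSystemIsReachable() {
        // If the assumption fails, JUnit marks the tests of this class
        // as ignored instead of failed
        assumeTrue(isReachable(HOST, PORT));
    }

    private boolean isReachable(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 1000);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    @Test
    public void someSystemIntegrationTest() {
        // ...
    }
}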
I have written tests in the past using in-memory databases.
What I wanted to know is: is it possible to write tests in Spring, JUnit, and Java using an in-memory DB where the data is not rolled back after each test but kept in the DB?
Basically, tests that are dependent on each other?
any ideas?
Rolling back DB changes or not is up to you.
But unit tests should be independent of each other.
Small extract from a recent DZone article on the subject:
Make each test independent of all the others
Do not build chains of unit test cases. A chain prevents you from identifying the root cause of a test failure, and you will have to debug the code. It also creates dependencies: if you have to change one test case, you may need to change multiple test cases unnecessarily.
Try to use @Before and @After methods to set up prerequisites, if any, for all your test cases. If @Before or @After needs to do multiple things to support different test cases, consider creating a new test class.
Your tests should be independent.
But if you want, I guess you can try the @Rollback annotation.
I have not tried it myself, but I have seen it in the docs while working with transactions.
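A sketch of how that might look (it assumes a transaction manager is configured in the context, and the test-context.xml location is hypothetical):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.annotation.Rollback;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:test-context.xml")
@Transactional
public class DependentDbTest {

    // @Rollback(false) commits the transaction instead of rolling it
    // back, so the inserted data survives for subsequent tests; note
    // that this makes the tests order-dependent
    @Test
    @Rollback(false)
    public void insertsDataUsedByLaterTests() {
        // ...
    }
}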
What do you use for writing data-driven tests in JUnit?
(My definition of) a data-driven test is a test that reads data from some external source (file, database, ...), executes one test per line/file/whatever, and displays the results in a test runner as if you had separate tests - the result of each run is displayed separately, not in one huge aggregate.
In JUnit 4 you can use the Parameterized test runner to do data-driven tests.
It's not terribly well documented, but the basic idea is to create a static method (annotated with @Parameters) that returns a Collection of Object arrays. Each of these arrays is used as the arguments for the test class constructor, and then the usual test methods can be run using fields set in the constructor.
You can write code to read and parse an external text file in the @Parameters method (or get data from another external source), and then you'd be able to add new tests by editing this file without recompiling the tests.
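A minimal sketch of the runner (in a real data-driven setup the @Parameters method would parse these rows from an external file instead of hard-coding them):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class AdditionTest {

    // One Object[] per test run; each array feeds the constructor
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { 1, 1, 2 },
                { 2, 3, 5 },
                { -1, 1, 0 },
        });
    }

    private final int a;
    private final int b;
    private final int expected;

    public AdditionTest(int a, int b, int expected) {
        this.a = a;
        this.b = b;
        this.expected = expected;
    }

    @Test
    public void addsCorrectly() {
        assertEquals(expected, a + b);
    }
}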
This is where TestNG, with its @DataProvider, shines. That's one reason why I prefer it to JUnit. The others are test dependencies and parallel, threaded tests.
I use an in-memory database such as HSQLDB so that I can either pre-populate the database with a "production-style" set of data, or start with an empty HSQLDB database and populate it with the rows I need for my testing. On top of that I write my tests using JUnit and Mockito.
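A minimal sketch of the empty-database variant, using plain JDBC against a named in-memory HSQLDB (the table and rows are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.After;
import org.junit.Before;

public class HsqldbBackedTest {

    private Connection connection;

    @Before
    public void populateDatabase() throws Exception {
        // "mem:testdb" lives only in memory and disappears when the JVM exits
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        try (Statement st = connection.createStatement()) {
            st.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO customer VALUES (1, 'Alice')");
        }
    }

    @After
    public void dropSchema() throws Exception {
        // The named in-memory DB survives across tests in the same JVM,
        // so drop the table to leave a clean slate
        try (Statement st = connection.createStatement()) {
            st.execute("DROP TABLE customer");
        }
        connection.close();
    }
}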
I use a combination of DbUnit, jMock and JUnit 4. Then you can either run the tests as a suite or separately.
You are better off extending TestCase with a DataDrivenTestCase that suits your needs.
Here is a working example:
http://mrlalonde.blogspot.ca/2012/08/data-driven-tests-with-junit.html
Unlike parameterized tests, it allows for nicely named test cases.
I'm with @DroidIn.net; that is exactly what I am doing. However, to answer your question literally ("and displays the results in a test runner as if you had separate tests"), you have to look at the JUnit 4 Parameterized runner. DBUnit doesn't do that. If you have to do a lot of this, honestly TestNG is more flexible, but you can absolutely get it done in JUnit.
You can also look at the JUnit Theories runner, but my recollection is that it isn't great for data-driven datasets, which kind of makes sense because JUnit isn't about working with large amounts of external data.
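For reference, a minimal sketch of the Theories runner; note that every @Theory method runs against every matching data point, so you cannot pair specific inputs with specific expected outputs the way Parameterized allows:

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class SquareTheoryTest {

    // Every value here is fed to every @Theory method
    @DataPoints
    public static int[] values() {
        return new int[] { 0, 1, 2, 3 };
    }

    @Theory
    public void squareIsNonNegative(int value) {
        assertTrue(value * value >= 0);
    }
}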
Even though this is quite an old topic, I still thought of contributing my share.
I feel JUnit's support for data-driven testing is too limited and too unfriendly. For example, in order to use Parameterized, we need to write our own constructor. With the Theories runner we do not have control over the set of test data that is passed to the test method.
There are more drawbacks as identified in this blog post series: http://www.kumaranuj.com/2012/08/junits-parameterized-runner-and-data.html
There is now a comprehensive solution coming along pretty nicely in the form of EasyTest, which is a framework extended out of JUnit and is meant to give a lot of functionality to its users. Its primary focus is to perform data-driven testing using JUnit, although you are no longer required to actually depend on JUnit. Here is the GitHub project for reference: https://github.com/anujgandharv/easytest
If anyone is interested in contributing their thoughts/code/suggestions, then this is the time. You can simply go to the GitHub repository and create issues.
Typically, data-driven tests use a small testable component to handle the data (a file-reading object, or mock objects). For databases and resources outside of the application, mocks are used to simulate other systems (web services, databases, etc.). Typically there are external data files that hold the input data and the expected output; this way the data files can be added to the VCS.
We currently have a props file with our ID numbers in it. This is horribly brittle, but it is easy to get something going. Our plan is to initially make these ID numbers overridable by -D properties in our Ant builds.
Our environment uses a legacy DB with horribly intertwined data that is not loadable before a run (e.g. by DbUnit). Eventually we would like to get to the point where a unit test would query the DB to find an ID with the property under test, then use that ID in the test. It would be slow and is more properly called integration testing, not "unit testing", but we would be testing against real data to avoid the situation where our app runs perfectly against test data but fails with real data.
Some tests will lend themselves to being interface-driven.
If the database/file reads go through an interface call, then simply have your unit test implement the interface; the unit test class can then return whatever data you want, as in the sketch below.
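A sketch of the idea (CustomerSource and CustomerReport are hypothetical names standing in for your data-access interface and the class under test):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class CustomerReportTest {

    // Hypothetical data-access interface the production code depends on
    interface CustomerSource {
        List<String> loadCustomerNames();
    }

    // Hypothetical class under test; it never knows whether the data
    // comes from a database, a file, or a test stub
    static class CustomerReport {
        private final CustomerSource source;

        CustomerReport(CustomerSource source) {
            this.source = source;
        }

        int customerCount() {
            return source.loadCustomerNames().size();
        }
    }

    @Test
    public void countsCustomersFromCannedData() {
        // The test supplies whatever data it wants via the interface
        CustomerSource canned = () -> Arrays.asList("Alice", "Bob");
        CustomerReport report = new CustomerReport(canned);
        assertEquals(2, report.customerCount());
    }
}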