I have a bunch of tests in a Hibernate/Spring application. Yesterday, I transitioned them from using the JUnit 3.8 base test class provided by Spring to the JUnit 4.4 one.
Everything works great, because now, my tests are wrapped in transactions, and data created/modified is automatically rolled back (instead of me writing code to delete newly-created entities).
The only problem is that I cannot peek into the database during test execution. If a test fails, I often add breakpoints near the end and peer into the MySQL database via SQLyog to see what's going on. But now, I just see empty tables. (I mean in integration tests that simulate production very closely and actually touch the database.)
I tried setting the global isolation level to read uncommitted, but it didn't change the fact that I can't see the data. How can I configure Spring/Hibernate to allow me to view the data from another process?
I had the same issue, and found that setting the session isolation level in SQLyog sometimes helped.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
This only uncovered another disturbing issue: while running tests, Hibernate didn't actually execute some of the operations unless I called HibernateTemplate.flush() after each one.
As this was very annoying, I finally configured Hibernate to always flush, like this:
HibernateTemplate hibernateTemplate;
...
// flush after every operation so pending changes hit the database immediately
hibernateTemplate.setFlushMode(HibernateTemplate.FLUSH_ALWAYS);
Related
I'm trying to test a service which updates multiple tables in the database, and I want to roll the database back to its previous state after each test case. All the solutions I have found use @Transactional and @Rollback from the Spring framework, but since my application is not a Spring web application, I would like to use javax @Transactional, which does not work for me.
Is this possible with javax at all, or with anything else other than Spring?
Rolling back a transaction isn't a good idea for integration tests, as constraints may not be validated until the commit.
You should:
have a DB used only for integration tests (an embedded db, a container db, or an in-RAM db)
execute an SQL script, for example in a class rule or a test rule, in order to bring the db into a known status (see the sketch after this list)
execute a test
if the test modifies the db, truncate the modified tables (again, in your class or test rule) and, before performing a new test, run the script from point 2 again
run integration tests less often than unit tests
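As a hedged sketch of points 2 and 4 (plain JDBC; the H2 URL and the reset.sql script name are made up, and the RUNSCRIPT command is H2-specific):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.rules.ExternalResource;

// Use as @Rule to run before every test, or @ClassRule to run once per class.
public class KnownDbStateRule extends ExternalResource {
    @Override
    protected void before() throws Exception {
        // bring the integration db into a known status
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:it;DB_CLOSE_DELAY=-1");
             Statement st = con.createStatement()) {
            st.execute("RUNSCRIPT FROM 'classpath:reset.sql'");
        }
    }
}

Declared as @Rule public KnownDbStateRule db = new KnownDbStateRule();, the script runs again before each test, which also covers point 4.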
A better idea is to use an in-memory database:
H2 https://www.h2database.com/
Recommendation for a Java in memory database
Not everything in a database can always be rolled back to its initial state (e.g. sequences).
There were some proposed solutions to the question "How to test SQL statements in an application":
Using RAM memory - I can't change the configuration of the staging environment where testing happens.
Using H2 - not very compatible, even in PostgreSQL mode.
Using the same database to run the tests.
Using in-memory mode - PostgreSQL doesn't have one.
The third one was viable, and I looked into Testcontainers, which is actually a beautiful solution, but a relatively new one; as a result, our company is sceptical of adopting it.
We use MyBatis to access PostgreSQL.
Another approach would be to recreate the entire schema and populate the required tables before the tests. Here is the problem: I would be creating and deleting a schema whose tables have the same names as the real ones. To avoid the name collision I'd have to change the schema's name, and as a result even the queries would have to be renamed, which is not at all preferred. Is there a way to do this without changing the queries, but pointing them at the dummy schema?
You should NOT change your queries. In tests, you should only change the connection URL your application uses. The problem is how to get that URL working.
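One hedged option for that (assuming the standard PostgreSQL JDBC driver; the host, database, and schema names below are placeholders): the driver can select the schema through the URL, so unqualified table names in your queries resolve against the dummy schema without any query changes:

jdbc:postgresql://localhost:5432/mydb?currentSchema=test_schema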
To have full test coverage you need the same db (as you noticed, H2 and other in-memory databases are not fully compatible). PostgreSQL doesn't have an in-memory mode, so you have to manage the lifecycle yourself. There are a few decisions you have to make, some of them:
where will you get the db from: require all the devs to provide PostgreSQL (installation / docker / vagrant), or automate the setup?
how to prepare the db for tests: manual schema setup and cleanup?
how to reset the db between tests: restart? always rollback? predefined and separately defined content? some kind of reverse operations?
if and how to make those tests fast?
There are some tools that can help you solve some of those problems:
Testcontainers - will help you provision the db (see the sketch at the end of this answer).
DbUnit - will help you prepare the data for your tests.
cons:
a lot of work is required to create and maintain the schema and data, especially when your project is in an intensive development stage
it's another abstraction layer, so if you suddenly want to use some db feature that is unsupported by this tool, it may be difficult to test
Testegration - intends to provide a full, ready-to-use and extensible lifecycle (disclosure: I'm the creator).
cons:
free only for small projects
very young project
You can also fill the gaps on your own. As always, it's a trade-off: time vs. money.
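For illustration, a minimal Testcontainers sketch (assuming JUnit 4 and the Testcontainers PostgreSQL module on the classpath; the image tag is only an example):

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

public class PostgresIT {

    // starts a throwaway PostgreSQL container once for the whole class
    @ClassRule
    public static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:13");

    @Test
    public void connects() throws Exception {
        // point MyBatis (or any JDBC client) at the container instead of the real db
        try (Connection con = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
            // run schema scripts and queries here
        }
    }
}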
You can define a separate database configuration for test purposes and connect to a test database when executing the tests; the test classes should use that test configuration.
For example, if you use Spring and Hibernate to connect to the database, you can define a test Hibernate configuration XML file which connects to the test database. Then, in your test classes, use this configuration file as follows:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration({"testHibernate.xml", "testSpring.xml", ...})
@TestExecutionListeners({...})
public class TestClass {
    ....
    @Test
    public void test1(){
        ...
    }
}
This way, you can access your test Hibernate session factory to execute your queries.
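A hedged sketch of that last step (it assumes the XML files expose a SessionFactory bean and a transactional test context; MyEntity is a made-up entity name):

@Autowired
private SessionFactory sessionFactory;

@Test
public void runsAgainstTestDb() {
    // getCurrentSession() joins the test-managed transaction and
    // queries the test database configured in testHibernate.xml
    Long count = (Long) sessionFactory.getCurrentSession()
            .createQuery("select count(*) from MyEntity").uniqueResult();
}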
I have a transaction-based application and it rolls the operation back on an error. However, in some cases the rollback doesn't happen (though it's getting called from the application). Ours is a very complex application, and there is a chance that some code is directly committing the data. Is there a way to debug commits to the database (either from Java or from the database)? From Java we are not able to do this, because java.sql.Connection is an interface and the implementation is provided by Sybase jConnect, for which we don't have the source.
I am not sure this will help, but p6spy can trace all the DML operations you perform on the database, including commit and rollback. When you use p6spy, it logs every database hit into a log file, from which you can easily figure out where your application is performing a commit.
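A hedged sketch of the wiring (it assumes the p6spy jar is on the classpath with a spy.properties file configured; the Sybase host and port are placeholders):

// register the p6spy driver and wrap the real jConnect URL with the p6spy: prefix
Class.forName("com.p6spy.engine.spy.P6SpyDriver");
Connection con = DriverManager.getConnection(
        "jdbc:p6spy:sybase:Tds:dbhost:5000", "user", "password");
// every statement, commit, and rollback on con now ends up in the p6spy log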
Apart from the above suggestion, I feel every database comes with some sort of monitoring tool with which you can watch which DML statements are fired within a given span of time.
I am writing a test which extends Spring's AbstractTransactionalJUnit4SpringContextTests.
In my application code I have a method which I call inside the test annotated by the following:
@Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
Problem
I run into a problem when using H2 as the underlying data source in in-memory mode. It gives me the error:
Caused by: org.h2.jdbc.JdbcSQLException: Timeout trying to lock table MY_TABLE [50200-131]
When I remove the propagation, it works, and when I use an alternative database such as Oracle or MySQL with Propagation.REQUIRES_NEW, everything works fine.
I am using Spring 3.0.2-RELEASE and H2 1.2.131.
How can I get H2 to work with Spring?
I don't know what the problem is, but try appending ;MVCC=TRUE to the database URL.
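For example (testdb is just a placeholder name for an in-memory database):

jdbc:h2:mem:testdb;MVCC=TRUE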
Had the same problem doing JUnit tests with play-framework Jobs (which run in separate threads), and the trick provided by Thomas works! (Appending ;MVCC=TRUE to the database URL.)
I guess this MVCC option enables 'Row Level Locking' instead of locking the whole table, so the lock issue goes away; see the "Row Level Locking" feature on:
http://www.h2database.com/html/features.html#in_memory_databases
'Row Level Locking' is listed there as supported in H2, with the footnote "*9 When using MVCC (multi version concurrency)".
I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese).
Imagine: after the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data, and only the "success" message was lost?
The best way is to copy the data into a single place. Use a scheme which allows you to abort the copy and resume it at any time (for example, ignore data which you already have, or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem, since you're only reading from the source: when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single-source transaction.
After you have copied the data, process it locally.
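A hedged sketch of such a resumable copy in plain JDBC (the events table and its monotonically increasing id column are made up):

import java.sql.*;

void copyNewRows(Connection source, Connection local) throws SQLException {
    // find how far the local copy already got
    long maxId;
    try (Statement st = local.createStatement();
         ResultSet rs = st.executeQuery("SELECT COALESCE(MAX(id), 0) FROM events")) {
        rs.next();
        maxId = rs.getLong(1);
    }
    local.setAutoCommit(false); // one local transaction protects the copy
    try (PreparedStatement sel = source.prepareStatement(
                 "SELECT id, payload FROM events WHERE id > ? ORDER BY id");
         PreparedStatement ins = local.prepareStatement(
                 "INSERT INTO events (id, payload) VALUES (?, ?)")) {
        sel.setLong(1, maxId);
        try (ResultSet rs = sel.executeQuery()) {
            while (rs.next()) {
                ins.setLong(1, rs.getLong(1));
                ins.setString(2, rs.getString(2));
                ins.addBatch();
            }
        }
        ins.executeBatch();
        local.commit(); // if anything fails before this line, the next run simply resumes
    } catch (SQLException e) {
        local.rollback();
        throw e;
    }
}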
Set up a transaction manager in your context. The Spring docs have examples, and it is very simple. Then when you want to execute a transaction:
try {
    TransactionTemplate tt = new TransactionTemplate(txManager);
    tt.execute(new TransactionCallbackWithoutResult() {
        protected void doInTransactionWithoutResult(
                TransactionStatus status) {
            updateDb1();
            updateDb2();
        }
    }); // closes the anonymous callback and execute(...)
} catch (TransactionException ex) {
    // handle
}
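As a hedged sketch of the setup step (Hibernate 3 APIs; sessionFactory stands for one of your two session factories):

// org.springframework.orm.hibernate3.HibernateTransactionManager
PlatformTransactionManager txManager = new HibernateTransactionManager(sessionFactory);

Note that a single HibernateTransactionManager binds to one SessionFactory, so this makes each update atomic only per database; making both updates atomic together needs JTA/XA, as other answers here discuss.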
For more examples and information, perhaps look at this:
XA transactions using Spring
When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?
If the former, and you want full transactionality, then you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor which manages transaction propagation between the different database systems. This is part of the Java EE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software (via Spring, if you so wish) talks to the coordinator.
If, however, you just mean two databases within the same DB server, then vanilla JDBC transactions should work just fine: just perform your operations against both databases within a single transaction.
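A hedged illustration (the schema and table names are made up; url, user, and password are placeholders):

// one connection, one transaction, two schemas on the same server
try (Connection con = DriverManager.getConnection(url, user, password)) {
    con.setAutoCommit(false);
    try (Statement st = con.createStatement()) {
        st.executeUpdate("UPDATE schema_a.accounts SET balance = balance - 10 WHERE id = 1");
        st.executeUpdate("INSERT INTO schema_b.audit_log (note) VALUES ('transfer')");
        con.commit(); // both schemas commit or roll back together
    } catch (SQLException e) {
        con.rollback();
        throw e;
    }
}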
In this case you would need a transaction monitor (a server supporting the XA protocol), and you must make sure your databases support XA as well. Most (all?) Java EE servers come with a transaction monitor built in. If your code is not running in a Java EE server, there are a bunch of standalone alternatives: Atomikos, Bitronix, etc.
You could try Spring's ChainedTransactionManager - http://docs.spring.io/spring-data/commons/docs/1.6.2.RELEASE/api/org/springframework/data/transaction/ChainedTransactionManager.html - which supports transactions spanning multiple databases. This could be a better alternative to XA.
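A hedged one-liner (tm1 and tm2 stand for the two databases' existing transaction managers):

// starts transactions in the order given and commits them in reverse order
PlatformTransactionManager chained = new ChainedTransactionManager(tm1, tm2);

Unlike XA there is no two-phase commit here, so it narrows the window for inconsistency between the two databases rather than eliminating it.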