I'm running a test within a subclass of AbstractTransactionalTestNGSpringContextTests, where I execute tests over a partial Spring context. Each test runs within a transaction, which is rolled back at the end to leave the database unchanged.
One test writes to the database through Hibernate, while another reads from the same database using the JdbcTemplate, with both sharing the same DataSource.
I'm finding that I can't see the Hibernate updates when querying through the JdbcTemplate. This does make some sense, as each is presumably getting its own connection from the connection pool and so is operating within its own transaction.
I've seen indications that it's possible to get the two to share a connection and transaction, but I'm not clear on the best way to set this up, especially with the connection factory involved. All these components are declared as Spring beans. Can anyone give me any pointers?
Edit:
Well, I've gone to the trouble of actually reading some documentation, and the Javadoc for HibernateTransactionManager states that this is definitely possible: "This transaction manager is appropriate for applications that use a single Hibernate SessionFactory for transactional data access, but it also supports direct DataSource access within a transaction (i.e. plain JDBC code working with the same DataSource)...".
The only requirement appears to be setting its dataSource property, which isn't otherwise required. Having done that, though, I'm still not seeing my changes shared before the transaction has been committed. I will update if I get it working.
The Spring bean that writes has to flush its changes in order for the bean that reads to see them.
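For reference, a working setup looks roughly like the Java-config sketch below. The bean names are illustrative and the org.springframework.orm.hibernate5 package is an assumption (older setups use the hibernate3/hibernate4 equivalents); the essential points are handing the HibernateTransactionManager the same DataSource that the JdbcTemplate is built on, and flushing before reading through JDBC:

```java
import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.orm.hibernate5.HibernateTransactionManager;

@Configuration
public class SharedTransactionConfig {

    // Setting the DataSource on the Hibernate transaction manager exposes the
    // transactional Connection to plain JDBC code using that same DataSource.
    @Bean
    public HibernateTransactionManager transactionManager(SessionFactory sessionFactory,
                                                           DataSource dataSource) {
        HibernateTransactionManager txManager = new HibernateTransactionManager(sessionFactory);
        txManager.setDataSource(dataSource);
        return txManager;
    }

    // Must be built on the very same DataSource bean as the SessionFactory.
    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        return new JdbcTemplate(dataSource);
    }
}
```

Even with this in place, the JdbcTemplate only sees what the Hibernate Session has already flushed within the current transaction.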
Related
I am just learning JTA and can't work out whether I should use it if I have only one database. Currently I use Hibernate 5 as my JPA provider, and if I need to use one transaction across several methods I just pass the EntityManager around as an argument.
However, I don't like this approach, as I need to remember whether a transaction is already open or not. I would like to find a library that will help me control transactions (but without Spring) in an SE environment. So, should I use JTA in my situation, or something different?
Normally when talking about JTA, it refers to distributed transactions across multiple systems (e.g. across two databases, or one database and one JMS-compliant message broker). If you only have one database, it is not necessary to use a JTA transaction, although it would also work. Instead, use a local transaction for the single database, which should in theory be faster than a JTA transaction.
On the other hand, if you are just talking about @Transactional as defined in JTA, which allows you to control the transaction boundary declaratively by annotating a method, without passing the EntityManager between methods, then you should look into which frameworks support it.
Under the covers, a proxied EntityManager is injected into the different classes, such that when it is invoked it looks up the actual EntityManager from a ThreadLocal. So every class running in the same thread gets the same EntityManager, which saves you from passing the EntityManager around between methods. A new EntityManager instance is bound to that ThreadLocal just before the @Transactional method executes, using some sort of AOP technique.
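Conceptually, a hand-rolled version of that mechanism might look something like the sketch below. This is purely illustrative (class and method names are made up); real frameworks also handle transaction begin/commit/rollback, exception translation and cleanup:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public final class ThreadLocalEntityManagerSupport {

    private static final ThreadLocal<EntityManager> CURRENT = new ThreadLocal<>();

    // Called by the framework's interceptor just before a @Transactional method runs.
    public static void bind(EntityManagerFactory emf) {
        CURRENT.set(emf.createEntityManager());
    }

    // Called after the method completes; commit/rollback would happen around this.
    public static void unbind() {
        EntityManager em = CURRENT.get();
        if (em != null) {
            em.close();
            CURRENT.remove();
        }
    }

    // The proxy that gets injected: every call is routed to whatever
    // EntityManager is currently bound to the calling thread.
    public static EntityManager createProxy() {
        InvocationHandler handler =
                (proxy, method, args) -> method.invoke(CURRENT.get(), args);
        return (EntityManager) Proxy.newProxyInstance(
                EntityManager.class.getClassLoader(),
                new Class<?>[] { EntityManager.class },
                handler);
    }
}
```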
Please note that @Transactional has nothing to do with whether the underlying transaction is a JTA or a local transaction. It should work for both transaction types.
I would use a framework that supports @Transactional, such as Spring, Quarkus or Micronaut, and configure Hibernate to use local transactions rather than JTA transactions for the single database.
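As an illustration with Spring (package, bean and entity names are made up; Quarkus and Micronaut offer analogous setups), a resource-local configuration plus a declaratively transactional service could look roughly like this:

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceContext;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;

@Configuration
@EnableTransactionManagement
class LocalJpaConfig {

    @Bean
    LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setPackagesToScan("com.example.domain"); // hypothetical entity package
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }

    // Plain resource-local JPA transactions for the single database; no JTA involved.
    @Bean
    PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}

@Service
class OrderService {

    @PersistenceContext
    private EntityManager entityManager; // thread-bound proxy injected by the framework

    // The transaction is opened before and committed/rolled back after this method,
    // so no EntityManager has to be passed between collaborating methods.
    @Transactional
    public void save(Object entity) { // 'entity' stands for any mapped entity
        entityManager.persist(entity);
    }
}
```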
I'm trying to understand what is going on when I use AbstractTransactionalJUnit4SpringContextTests in my integration tests to roll back changes made by legacy code.
The legacy code uses NamedParameterJdbcTemplate to communicate with the database. The transaction manager is of type DataSourceTransactionManager and my datasource is of type DriverManagerDataSource.
This is the overall structure of my test class.
@Before
//Make initial setup of database using the `JdbcTemplate` from `AbstractTransactionalJUnit4SpringContextTests`.
@Test
//Call legacy code that makes inserts to database.
My question is whether my assumption is wrong that, by extending AbstractTransactionalJUnit4SpringContextTests, I make all my tests transactional, with the expected effect that all changes made directly by my test method AND by legacy code called from the test method are transactional and implicitly rolled back at the end of the test?
Some observations I made are:
The @Before method works as expected when used with test methods that do not call legacy code that makes changes. In this case the changes made in @Before are rolled back.
If, however, I use @Before with @Test methods that call legacy code making changes, neither the changes made by @Before nor those made by @Test get rolled back! The log message printed does state that the transaction is initiated and that the rollback is successful, but I'm not getting the expected behavior.
The Spring TestContext Framework (TCF) manages what I call test-managed transactions. The TCF does not manage application-managed transactions.
If you have code that is managing its own transactions (e.g., via Spring's TransactionTemplate or some other programmatic means), then those interactions with the database will not be rolled back by the TCF.
For further details, consult the Test-managed Transactions section of the reference manual or slide #43 from my Testing with Spring: An Introduction presentation.
Furthermore, you naturally have to ensure that the DataSource used by the TCF is the exact same DataSource used by the Spring DataSourceTransactionManager and your legacy code. If all of your legacy code uses Spring's NamedParameterJdbcTemplate, that code should participate in the correct transaction. Otherwise, you'll need to use Spring's DataSourceUtils.getConnection(DataSource) to ensure that legacy code works with Spring-managed and test-managed transactions.
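For instance, legacy JDBC code that obtains its Connection along the following lines will participate in the surrounding Spring-managed (and therefore test-managed) transaction. This is only an illustrative sketch with a made-up table:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public class LegacyDao {

    private final DataSource dataSource;

    public LegacyDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void insertRow(long id, String name) throws SQLException {
        // Returns the Connection bound to the current transaction, if one is active.
        Connection con = DataSourceUtils.getConnection(dataSource);
        try (PreparedStatement ps =
                     con.prepareStatement("insert into example_table (id, name) values (?, ?)")) {
            ps.setLong(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        } finally {
            // Only closes the Connection if it is not the transactional one.
            DataSourceUtils.releaseConnection(con, dataSource);
        }
    }
}
```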
We have a somewhat huge application which started a decade ago and is still under active development. So some parts are still in J2EE 1.4 architecture, others using Java EE 5/6.
While testing some new code, I realized that I had data inconsistencies between information coming in through the old and new code parts, where the old one uses the Hibernate Session directly and the new one an injected EntityManager. This led to the problem that one part couldn't see new data from the other part and thus also created a database record, resulting in a primary key constraint violation.
It is planned to migrate the old code completely to get rid of J2EE, but in the meantime, what can I do to coordinate database access between the two parts? And shouldn't both paths come together at some point in the Hibernate layer inside the application server, regardless of whether the database is accessed via JPA or directly?
You can mix both the Hibernate Session and the JPA EntityManager in the same application without any problem. The EntityManagerImpl simply delegates calls to a private SessionImpl instance.
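For example (JPA 2.0 and later), you can reach the underlying Session from an injected EntityManager without any Hibernate-internal casts; a minimal illustration:

```java
import javax.persistence.EntityManager;
import org.hibernate.Session;

public final class SessionAccess {

    // Hands back the Hibernate Session that the given EntityManager delegates to.
    public static Session sessionOf(EntityManager entityManager) {
        return entityManager.unwrap(Session.class);
    }
}
```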
What you describe is a transaction configuration anomaly. Every database transaction runs in isolation (unless you use READ_UNCOMMITTED, which I guess is not the case), but once you commit it the changes are visible from any other transaction or connection. So once a transaction is committed you should see all changes in any other Hibernate Session, JDBC connection or even your database UI tool.
You said that there was a primary key conflict. This can't happen if you use the Hibernate identity or sequence generators. With the old hi/lo generator you can have problems if an external connection tries to insert records into the same table for which Hibernate uses the hi/lo identifier generator.
This problem can also occur if there is a master/master replication anomaly. If you have multiple nodes and replication is not strictly consistent, you can end up with primary key constraint violations.
Update
Solution 1:
When coordinating the new and the old code trying to insert the same entity, you could have select-then-insert logic running in a SERIALIZABLE transaction. The SERIALIZABLE transaction acquires the appropriate locks on your behalf, so you can keep the default READ_COMMITTED isolation level and mark only the problematic service methods as SERIALIZABLE.
So both the old code and the new code run a select to check whether a row satisfying the constraint already exists, and insert it only if nothing is found. The SERIALIZABLE isolation level prevents phantom reads, so I think it should prevent the constraint violations.
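A sketch of that select-then-insert logic as a Spring service method is shown below; LookupValue is a hypothetical mapped entity and the same idea applies to the legacy code path:

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LookupValueService {

    @PersistenceContext
    private EntityManager entityManager;

    // Only this method runs SERIALIZABLE; the rest of the application stays on READ_COMMITTED.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public LookupValue findOrCreate(String code) {
        List<LookupValue> existing = entityManager
                .createQuery("select v from LookupValue v where v.code = :code", LookupValue.class)
                .setParameter("code", code)
                .getResultList();
        if (!existing.isEmpty()) {
            return existing.get(0);
        }
        LookupValue value = new LookupValue(code); // hypothetical entity
        entityManager.persist(value);
        return value;
    }
}
```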
Solution 2:
If you are open to delegating this task to JDBC, you might also investigate the MERGE SQL statement, if your current database supports it. Basically, this is an upsert operation, issuing an update or an insert behind the scenes. This command is more attractive since you can run it even on READ_COMMITTED. The only drawbacks are that you can't use Hibernate for it and that only some databases support it.
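For example, issued through a JdbcTemplate; the MERGE syntax below is ANSI/H2-style with a made-up table, and Oracle, SQL Server, DB2 and others each have their own dialect:

```java
import org.springframework.jdbc.core.JdbcTemplate;

public class LookupUpsertDao {

    private final JdbcTemplate jdbcTemplate;

    public LookupUpsertDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Upsert in a single statement: update the row if it exists, insert it otherwise.
    public void upsert(long id, String name) {
        jdbcTemplate.update(
                "merge into lookup_value t "
                        + "using (values (?, ?)) as src (id, name) on t.id = src.id "
                        + "when matched then update set t.name = src.name "
                        + "when not matched then insert (id, name) values (src.id, src.name)",
                id, name);
    }
}
```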
If you separately instantiate a SessionFactory for the old code and an EntityManagerFactory for the new code, that can lead to different values in the first-level cache. If, during a single HTTP request, you change a value in the old code but do not immediately commit, the value will be changed in that session's cache but will not be visible to the new code until it is committed. Independently of any transaction or database locking that protects persistent values, mixing two different Hibernate sessions can do weird things to in-memory values.
I assume that the injected EntityManager still uses Hibernate. IMHO the most robust solution is to get the EntityManagerFactory for the persistence unit and cast it to Hibernate's EntityManagerFactoryImpl. Then you can directly access the underlying SessionFactory:
SessionFactory sessionFactory = ((EntityManagerFactoryImpl) entityManagerFactory).getSessionFactory();
You can then safely use this SessionFactory in your old code, because now it is unique in your application and shared between old and new code.
You still have to deal with the problem of session creation/closing and transaction management. I suppose that is already implemented in the old code. Without knowing more, I think you should port it to JPA, because I am pretty sure that if an EntityManager exists, sessionFactory.getCurrentSession() will give you its underlying Session, but I cannot say the same for the opposite direction.
I've run into a similar problem when I had a list of enumerated lookup values, where two pieces of code would check for the existence of a given value in the list, and if it didn't exist the code would create a new entry in the database. When both of them came across the same non-existent value, they'd both try to create a new one and one would have its transaction rolled back (throwing away a bunch of other work we'd done in the transaction).
Our solution was to create those lookup values in a separate transaction that committed immediately; if that transaction succeeded, then we knew we could use that object, and if it failed, then we knew we simply needed to perform a get to retrieve the one saved by another process. Once we had a lookup object that we knew was safe to use in our session, we could happily do the rest of the DB modifications without risking the transaction being rolled back.
It's hard to know from your description whether your data model would lend itself to a similar approach, where you'd at least commit the initial version of the entity right away, and then once you're sure you're working with a persistent object you could do the rest of the DB modifications that you knew you needed to do. But if you can find a way to make that work, it would avoid the need to share the Session between the different pieces of code (and would work even if the old and new code were running in separate JVMs).
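In Spring terms (the original code may well use a container-managed equivalent, since REQUIRES_NEW semantics exist there too), the immediately committed lookup creation could be sketched like this, with invented names:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LookupValueCreator {

    private final LookupValueRepository repository; // hypothetical repository/DAO

    public LookupValueCreator(LookupValueRepository repository) {
        this.repository = repository;
    }

    // Runs and commits in its own transaction, independent of whatever transaction the
    // caller already has open, so a later rollback in the caller does not undo it.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public LookupValue create(String code) {
        return repository.save(new LookupValue(code)); // LookupValue is a hypothetical entity
    }
}
```

If create(...) fails with a constraint violation because another process got there first, the caller simply re-reads the value that the other process committed and carries on.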
In our application we mainly use Spring @Transactional annotations together with JpaTemplate and Hibernate for database interaction. One part of the project, however, communicates with a different database, and hence we cannot use @Transactional there, as a different transaction manager is required.
Here, for database updates, we explicitly create transactions in code using PlatformTransactionManager, and either commit or roll them back after the call to JpaTemplate.execute(JpaCallback).
I have noticed a few places in this area of our code where we are not doing this for reads, however, and simply call JpaTemplate.execute(JpaCallback) without wrapping it in a transaction. I am wondering what the dangers of this are. I appreciate that a transaction must be created somewhere, as no database query can be run without one, but since nothing in our code attempts to commit, will something potentially hold on to resources?
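For context, the explicit pattern described above typically looks something like the following sketch; the Runnable stands in for the JpaTemplate.execute(JpaCallback) call and the names are illustrative:

```java
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;

public class ExplicitTransactionExample {

    private final PlatformTransactionManager transactionManager;

    public ExplicitTransactionExample(PlatformTransactionManager transactionManager) {
        this.transactionManager = transactionManager;
    }

    public void updateSomething(Runnable databaseWork) {
        TransactionStatus status =
                transactionManager.getTransaction(new DefaultTransactionDefinition());
        try {
            databaseWork.run(); // e.g. jpaTemplate.execute(callback)
            transactionManager.commit(status);
        } catch (RuntimeException e) {
            transactionManager.rollback(status);
            throw e;
        }
    }
}
```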
I'm having a difficult time understanding Spring transactions. First off, I am trying to use as little of Spring as possible. Basically, I want to use @Transactional and some of its injection capabilities with JPA/Hibernate. My transactional usage is in my service layer, and I try to keep transactions out of test cases. I use CTW and am spring-configured. I have component scanning on the root of my packages right now. I am also using Java configuration for my data source, JpaTransactionManager, and EntityManagerFactory. My configuration for the JpaTransactionManager uses:
AnnotationTransactionAspect.aspectOf().setTransactionManager(txnMgr);
I do not use @EnableTransactionManagement.
Unfortunately, I'm having a hard time understanding the rules for @Transactional and can't find a page that describes them simply, especially with regard to Session usage. For example, what if I want to use @Transactional on a class that does not have a default no-arg constructor?
The weirdest problem I'm having is that in some of the POJO service classes @Transactional works great, while in others I can see the transactions being created but operations ultimately fail saying that there is "no session or the session has been closed". I apologize for not having code to reproduce this; I can't get it down to a small set. This is why I am looking for the best resources so I can figure it out myself.
For example, I can have a method that gets a lazily fetched collection of children, iterates through it, puts it into a Set and returns that set. In one class it will work fine, while in another class also marked with @Transactional it will fail while trying to iterate through the PersistentSet, saying that there is no session (even though there IS a transaction).
Maybe this is because I create both of these service objects in a test case and the first one is somehow hijacking the session?
By the way, I have read through the Spring transaction documentation. I'm looking for a clear set of rules and tips for debugging issues like this. Thanks.
Are you sure you loaded your parent entity in the scope of the very transaction where you try to load the lazy children? If it was passed in as a parameter, for example (that is, loaded from outside your @Transactional method), then it might not be bound to a persistence context anymore...
Note that when no @Transactional context is given, any database-related action may get a short transaction that is created and then immediately closed, disabling subsequent lazy-loading calls. It depends on your persistence context and auto-commit configuration.
To put it simply, since the behaviour with no transactional context is sometimes unexpected, the ground rule is to always have one. If you access your database, then you give yourself a well-defined transaction, period. With spring-tx, that means all your @Repository and @Service beans are @Transactional. This way you should avoid most transaction-related issues.
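For a concrete illustration, keeping the load and the iteration of the lazy collection inside one @Transactional method avoids the closed-session problem; Parent and Child below are invented entities with a lazily fetched children collection:

```java
import java.util.HashSet;
import java.util.Set;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ParentService {

    @PersistenceContext
    private EntityManager entityManager;

    // Loading the parent and initializing its lazy collection happen in the same
    // transaction, so the Session is still open when the collection is iterated.
    @Transactional(readOnly = true)
    public Set<Child> childrenOf(long parentId) {
        Parent parent = entityManager.find(Parent.class, parentId); // hypothetical entity
        return new HashSet<>(parent.getChildren());
    }
}
```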