We are developing a Java SE application which communicates with many clients via persistent TCP connections. A client connects, performs some/many operations (which are written to a SQL database) and then closes the application / disconnects from the server. We're using Hibernate as our JPA provider and manage the EntityManager lifecycle on our own, using a ThreadLocal variable. Currently we create a new EntityManager instance for every client request, which has worked fine so far.

Recently we profiled a bit and found out that Hibernate performs a SELECT query against the DB before every UPDATE statement. That is because our entities are in the detached state and every new EntityManager first attaches the entity to the persistence context. This leads to massive SQL overhead when the server is under load (because we have a write-heavy application), and we are trying to eliminate that overhead.
First, we thought about the second-level cache. However, we discovered that Hibernate invalidates its query and collection caches whenever an item is added or removed.
On second thought, we are evaluating whether to keep an EntityManager open for as long as the client is logged in on the server. But I wonder if this is a "best practice", because there are some drawbacks: thread-safety, the overhead of managing the EntityManager instances, etc.
In short: we are looking for a way to get rid of those SELECT-statements before every UPDATE. Any ideas out there?
One possible way to get rid of the SELECT statements when reattaching detached entities is to use the Hibernate-specific update() operation instead of merge().
update() unconditionally runs an UPDATE SQL statement and makes the detached object persistent. If a persistent object with the same identifier already exists in the session, it throws an exception. Thus, it's a good choice when you are sure that:
Detached object contains modified state that should be saved in the database
Saving that state is the main goal of opening a session for that request (i.e. there were no other operations that loaded entity with the same id in that session)
In JPA 2.0 you can access Hibernate-specific operations as follows:
em.unwrap(Session.class).update(o);
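For illustration only, a minimal sketch of what a per-request write could look like with this unwrap/update combination (the Customer entity and the surrounding EntityManager handling are assumptions, not taken from the question):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.hibernate.Session;

public class CustomerWriteService {

    private final EntityManagerFactory emf;

    public CustomerWriteService(EntityManagerFactory emf) {
        this.emf = emf;
    }

    // Reattaches a detached Customer and writes it with a single UPDATE, no prior SELECT.
    public void saveChanges(Customer detachedCustomer) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // update() schedules an UPDATE without loading the current row first,
            // unlike merge(), which issues a SELECT to copy the state onto a managed instance.
            em.unwrap(Session.class).update(detachedCustomer);
            em.getTransaction().commit();
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        } finally {
            em.close();
        }
    }
}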
See also:
11.6. Modifying detached objects
One possible option would be to use StatelessSession for the update statements. I've successfully used it in my 'write-heavy' application.
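Not from the original answer, but roughly how a StatelessSession-based update might look (the entity passed in is whatever detached object you want written; note that a StatelessSession does no cascading, no dirty checking, and bypasses the first-level cache):

import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class StatelessUpdater {

    private final SessionFactory sessionFactory;

    public StatelessUpdater(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Writes the detached entity with a single UPDATE; no persistence context involved.
    public void update(Object detachedEntity) {
        StatelessSession session = sessionFactory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        try {
            session.update(detachedEntity);   // executes the UPDATE immediately
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}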
Related
We have a somewhat huge application which started a decade ago and is still under active development. So some parts are still in J2EE 1.4 architecture, others using Java EE 5/6.
While testing some new code, I realized that I had data inconsistency between information coming in through old and new code parts, where the old one uses the Hibernate Session directly and the new one an injected EntityManager. This led to the problem that one part couldn't see new data from the other part and therefore also created a database record, resulting in a primary key constraint violation.
It is planned to migrate the old code completely to get rid of J2EE, but in the meantime - what can I do to coordinate database access between the two parts? And shouldn't at some point within the application server both ways come together in the Hibernate layer, regardless if accessed via JPA or directly?
You can mix both the Hibernate Session and the EntityManager in the same application without any problem. The EntityManagerImpl simply delegates calls to a private SessionImpl instance.
What you describe is a transaction configuration anomaly. Every database transaction runs in isolation (unless you use READ_UNCOMMITTED, which I guess is not the case), but once you commit it, the changes are available to any other transaction or connection. So once a transaction is committed you should see all changes in any other Hibernate Session, JDBC connection or even your database UI manager tool.
You said that there was a primary key conflict. This can't happen if you use the Hibernate identity or sequence generators. With the old hi/lo identifier generator you can have problems if an external connection tries to insert records into the same table that Hibernate uses the hi/lo generator on.
This problem can also occur if there is a master/master replication anomaly. If you have multiple nodes and there is no strictly consistent replication, you can end up with primary key constraint violations.
Update
Solution 1:
When coordinating the new and the old code trying to insert the same entity, you could have select-then-insert logic running in a SERIALIZABLE transaction. The SERIALIZABLE transaction acquires the appropriate locks on your behalf, so you can still keep a default READ_COMMITTED isolation level while only the problematic service methods are marked as SERIALIZABLE.
So both the old code and the new code have this logic: run a select to check if there is already a row satisfying the constraint, and insert it only if nothing is found. The SERIALIZABLE isolation level prevents phantom reads, so I think it should prevent the constraint violations.
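A rough sketch of such a select-then-insert service method, here with Spring's @Transactional so that only this method runs SERIALIZABLE while the rest of the application stays on READ_COMMITTED (the LookupValue entity and its code property are invented for illustration):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LookupValueService {

    @PersistenceContext
    private EntityManager em;

    // Only this method is SERIALIZABLE; it selects first and inserts only when nothing was found.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public LookupValue findOrCreate(String code) {
        List<LookupValue> existing = em
                .createQuery("select v from LookupValue v where v.code = :code", LookupValue.class)
                .setParameter("code", code)
                .getResultList();
        if (!existing.isEmpty()) {
            return existing.get(0);   // a row already satisfies the constraint, nothing to insert
        }
        LookupValue created = new LookupValue(code);
        em.persist(created);          // SERIALIZABLE prevents a phantom insert sneaking in between
        return created;
    }
}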
Solution 2:
If you are open to delegating this task to JDBC, you might also investigate the MERGE SQL statement, if your current database supports it. Basically, this is an upsert operation issuing an update or an insert behind the scenes. This command is much more attractive since you can run it even on READ_COMMITTED. The only drawback is that you can't use Hibernate for it, and only some databases support it.
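Purely as an illustration of the upsert idea, a JDBC sketch with Oracle-style MERGE syntax (the lookup_value table and its columns are made up, and the exact MERGE syntax differs between databases):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LookupValueUpserter {

    // Oracle-flavoured MERGE; adjust for your database if it uses a different upsert syntax.
    private static final String MERGE_SQL =
            "MERGE INTO lookup_value lv "
            + "USING (SELECT ? AS code, ? AS label FROM dual) src "
            + "ON (lv.code = src.code) "
            + "WHEN MATCHED THEN UPDATE SET lv.label = src.label "
            + "WHEN NOT MATCHED THEN INSERT (code, label) VALUES (src.code, src.label)";

    private final DataSource dataSource;

    public LookupValueUpserter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Inserts the row if it is missing, updates it otherwise, in one atomic statement.
    public void upsert(String code, String label) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(MERGE_SQL)) {
            ps.setString(1, code);
            ps.setString(2, label);
            ps.executeUpdate();
        }
    }
}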
If you instantiate a SessionFactory for the old code and an EntityManagerFactory for the new code separately, that can lead to different values in the first-level caches. If, during a single HTTP request, you change a value in old code but do not commit immediately, the value will be changed in that session's cache but will not be available to the new code until it is committed. Independently of any transaction or database locking that would protect persistent values, this mix of two different Hibernate sessions can produce odd results for in-memory values.
I admit that the injected EntityManager still uses Hibernate underneath. IMHO the most robust solution is to get the EntityManagerFactory for the persistence unit and cast it to a Hibernate EntityManagerFactoryImpl. Then you can directly access the underlying SessionFactory:
SessionFactory sessionFactory = ((EntityManagerFactoryImpl) entityManagerFactory).getSessionFactory();
You can then safely use this SessionFactory in your old code, because now it is unique in your application and shared between old and new code.
You still have to deal with the problem of session creation/closing and transaction management. I suppose it is already implemented in the old code. Without knowing more, I think that you should port it to JPA, because I am pretty sure that if an EntityManager exists, sessionFactory.getCurrentSession() will give you its underlying Session, but I cannot say the same for the opposite direction.
I've run into a similar problem when I had a list of enumerated lookup values, where two pieces of code would check for the existence of a given value in the list, and if it didn't exist the code would create a new entry in the database. When both of them came across the same non-existent value, they'd both try to create a new one and one would have its transaction rolled back (throwing away a bunch of other work we'd done in the transaction).
Our solution was to create those lookup values in a separate transaction that committed immediately; if that transaction succeeded, then we knew we could use that object, and if it failed, then we knew we simply needed to perform a get to retrieve the one saved by another process. Once we had a lookup object that we knew was safe to use in our session, we could happily do the rest of the DB modifications without risking the transaction being rolled back.
It's hard to know from your description whether your data model would lend itself to a similar approach, where you'd at least commit the initial version of the entity right away, and then once you're sure you're working with a persistent object you could do the rest of the DB modifications that you knew you needed to do. But if you can find a way to make that work, it would avoid the need to share the Session between the different pieces of code (and would work even if the old and new code were running in separate JVMs).
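Not the original poster's code, but a sketch of that pattern with Spring: the lookup value is created in its own, immediately committed transaction, and if that fails we simply read the row saved by the other process (EnumValue and the class names are invented):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
class EnumValueCreator {

    @PersistenceContext
    private EntityManager em;

    // Runs in its own transaction that commits immediately, independent of the caller's work.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void insert(String code) {
        em.persist(new EnumValue(code));
    }
}

@Service
class EnumValueLookup {

    @PersistenceContext
    private EntityManager em;

    private final EnumValueCreator creator;

    EnumValueLookup(EnumValueCreator creator) {
        this.creator = creator;
    }

    // Called from the main transaction: try to create the value, fall back to reading it.
    @Transactional
    public EnumValue findOrCreate(String code) {
        try {
            creator.insert(code);   // committed right away if it succeeds
        } catch (DataIntegrityViolationException e) {
            // another process created it first; the existing row is safe to use
        }
        return em.createQuery("select v from EnumValue v where v.code = :code", EnumValue.class)
                .setParameter("code", code)
                .getSingleResult();
    }
}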
This seems like it would come up often, but I've Googled to no avail.
Suppose you have a Hibernate entity User. You have one User in your DB with id 1.
You have two threads running, A and B. They do the following:
A gets user 1 and closes its Session
B gets user 1 and deletes it
A changes a field on user 1
A gets a new Session and merges user 1
All my testing indicates that the merge attempts to find user 1 in the DB (it can't, obviously), so it inserts a new user with id 2.
My expectation, on the other hand, would be that Hibernate would see that the user being merged was not new (because it has an ID). It would try to find the user in the DB, which would fail, so it would not attempt an insert or an update. Ideally it would throw some kind of concurrency exception.
Note that I am using optimistic locking through @Version, and that does not help matters.
So, questions:
Is my observed Hibernate behaviour the intended behaviour?
If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
If the answer to 2. is yes, why is nobody complaining about it?
Please see the text from hibernate documentation below.
Copy the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of and return it as a newly persistent instance.
It clearly states that the state (data) of the given object is copied onto the persistent object with the same identifier, and that if the object is not in the database, a copy of that data is saved. When saving a copy, Hibernate always creates a record with a new identifier.
The Hibernate merge function works roughly as follows:
It checks the status of the entity (attached to or detached from the session) and finds it detached.
It then tries to load the entity by its identifier, but does not find it in the database.
As the entity is not found, it treats the entity as transient.
A transient entity always results in a new database record with a new identifier.
Locking is always applied to attached entities. If the entity is detached, Hibernate will first load it, and the version value gets updated then.
Locking is used to control concurrency problems; this is not a concurrency issue.
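To make this concrete, a sketch of the scenario from the question (the User entity with a name field is hypothetical; the comments describe the outcome the question reports):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class MergeAfterDeleteDemo {

    static void run(SessionFactory sf) {
        // A: load user 1, then close the session -- the object becomes detached
        Session a1 = sf.openSession();
        User user = (User) a1.get(User.class, 1L);
        a1.close();

        // B: delete user 1
        Session b = sf.openSession();
        b.beginTransaction();
        b.delete(b.get(User.class, 1L));
        b.getTransaction().commit();
        b.close();

        // A: change a field on the detached user and merge it in a new session
        user.setName("changed");
        Session a2 = sf.openSession();
        a2.beginTransaction();
        // merge() no longer finds row 1, treats the object as transient and,
        // as described above, inserts a new row with a new identifier
        User merged = (User) a2.merge(user);
        a2.getTransaction().commit();
        a2.close();
    }
}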
I've been looking at JSR-220, from which Session#merge claims to get its semantics. The JSR is sadly ambiguous, I have found.
It does say:
Optimistic locking is a technique that is used to insure that updates to the database data corresponding to the state of an entity are made only when no intervening transaction has updated that data since the entity state was read.
If you take "updates" to include general mutation of the database data, including deletes, and not just a SQL UPDATE, which I do, I think you can make an argument that the observed behaviour is not compliant with optimistic locking.
Many people agree, given the comments on my question and the subsequent discovery of this bug.
From a purely practical point of view, the behaviour, compliant or not, could lead to quite a few bugs, because it is contrary to many developers' expectations. There does not seem to be an easy fix for it. In fact, Spring Data JPA seems to ignore this issue completely by blindly using EM#merge. Maybe other JPA providers handle this differently, but with Hibernate this could cause issues.
I'm actually working around this by using Session#update currently. It's really ugly, and requires code to handle the case when you try to update an entity that is detached, and there's a managed copy of it already. But, it won't lead to spurious inserts either.
1. Is my observed Hibernate behaviour the intended behaviour?
The behavior is correct. You are simply trying to do operations that are not protected against concurrent data modification :) If you have to split the operation into two sessions, just find the object again before the update and check whether it is still there; throw an exception if it is not. If it is there, lock it using em.find(entityClass, primaryKey, lockModeType), or use @Version or @Entity(optimisticLock=OptimisticLockType.ALL/DIRTY/VERSION) to protect the object until the end of the transaction (see the sketch at the end of this answer).
2. If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
Probably: yes
3. If the answer to 2. is yes, why is nobody complaining about it?
Because if you protect your operations using pessimistic or optimistic locking the problem will disappear:)
The problem you are trying to solve is called: Non-repeatable read
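Purely as an illustration of the "find it again and lock it" suggestion from point 1 (the User entity is hypothetical, this assumes an active transaction, and PESSIMISTIC_WRITE is just one possible lock mode):

import javax.persistence.EntityManager;
import javax.persistence.EntityNotFoundException;
import javax.persistence.LockModeType;

public class UserUpdater {

    // Re-loads the row inside the current transaction and locks it before applying changes.
    public void applyChanges(EntityManager em, Long userId, String newName) {
        User current = em.find(User.class, userId, LockModeType.PESSIMISTIC_WRITE);
        if (current == null) {
            // the row was deleted by a concurrent transaction
            throw new EntityNotFoundException("User " + userId + " no longer exists");
        }
        current.setName(newName);   // managed entity: flushed as an UPDATE at commit
    }
}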
We are currently planning an application and are looking to use Hibernate. The database for the application will be an online one, but the application should be able to work in an offline mode. So, you can load an object from the database, close the connection, play around with the object and maybe later update it in the database.
The problem now is that (well, as far as I know) Hibernate performs an UPDATE on the database every time the object is modified, meaning that it'd throw an exception if the connection was closed in the meantime.
My question is now: can Hibernate be configured to perform updates at some manually specified time?
It looks like you don't completely understand the concept of Unit of Work used by Hibernate.
You can load the object in one session, then close the session and later merge that object (or another object with the same identity) into another session (so that modification of the object made in between will be flushed in that new session). In the meantime all sessions can be closed, and detached object can be used as a normal object (if you don't try to access its uninitialized lazy properties).
See also:
13.1. Session and transaction scopes
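Not part of the original answer, just a minimal sketch of that load/detach/merge flow, assuming a hypothetical Document entity:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public class OfflineEditFlow {

    static void run(EntityManagerFactory emf) {
        // Session 1: load the object, then close the session
        EntityManager em1 = emf.createEntityManager();
        Document doc = em1.find(Document.class, 42L);
        em1.close();                        // doc is now detached

        // "Offline": modify the detached object for as long as you like
        doc.setTitle("edited while disconnected");

        // Session 2, later: merge the detached object; the changes are flushed here
        EntityManager em2 = emf.createEntityManager();
        em2.getTransaction().begin();
        Document managed = em2.merge(doc);  // returns a managed copy holding the merged state
        em2.getTransaction().commit();
        em2.close();
    }
}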
Please describe a typical lifecycle of a Hibernate object (one that maps to a DB table) in a web app.
Suppose you create a new instance of an object and persist it in the DB. But during the app's lifetime you'll be working on a detached object, and finally you need to update it in the database, for example on exit.
What does that look like with Hibernate and Spring?
P.S. Can transactions and sessions live across servlet transitions, so that we open one session and use it in all servlets without needing to reopen it?
I'll try to give a descriptive example.
Suppose that when the app starts, a log record is created. This can be done at once:
Log log = new Log(...) and then something like save(log) -- log corresponds to a table LOG.
Then, as the application processes user input and keeps going, new data accumulates,
and after the second step we could add something to the log object, a collection for example:
// Now we have a record of what the user has chosen: Set thisUserChoice,
// so we can update the persistent object -- we have new data now!
// log.userChoices = thisUserChoice;
Here lies the nature of my question: how are we supposed to deal with this if we want to update the database whenever new data comes in from the user?
In a relational model we can work with a row id, so we could get this record and update some other data of the row.
In Hibernate we are also able to load an object by its id.
But is IT THE WAY TO GO? IS ANYTHING BETTER?
You could do everything in a single session. But that's like doing everything in a single class. It could make sense from a beginner's point of view, but nobody does it like that in practice.
In a web app, you can normally expect to have several threads running at once, each dealing with a different user. Each thread would typically have a separate session, and the session would only have managed instances of the objects that were actually needed by that user. It's not that you can completely ignore concurrency in your own code, but it's useful to have hibernate's help. If you were to do everything with one session, you would have to do all the concurrency management yourself.
Hibernate can also manage the concurrency if you have multiple application servers talking to a single database. The separate JVMs can't possibly share the same session in this case...
The lifecycle is described in the hibernate documentation (which I'm sure you've seen).
Whenever a request comes from the web client to the server, the first thing you should do is load the relevant objects (see section 10.3) so that you have persistent, not detached entities to deal with. Then, you do whatever operations are required. When the session closes (ie. when the server returns the response to the client), it will write any updates to the database. Or, if your operation involves creating new entities, you'll have to create transient ones (with new) and then call persist() or save() (see section 10.2). That will result in a managed entity -- you can make more changes to it, and hibernate will record those changes when the session closes.
I try to avoid using detached objects. But if I have to use them (perhaps they're stored in the user's session), then whenever they might need to be saved to the database, I use update() (see section 10.6). This converts the object into a managed one, and so the session will save any changes to the database when it's closed.
Spring makes it very easy to generate a new session for each request. You would normally tell Spring to create a sessionFactory, and then every request will be given its own session. Search for "spring hibernate tutorial" and you'll find several examples.
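As a rough illustration of one request's worth of work with a per-request session (the Log entity, its userChoices collection, and the manual transaction handling are placeholders; with Spring the transaction is usually handled declaratively):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class LogRequestHandler {

    private final SessionFactory sessionFactory;

    public LogRequestHandler(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // One client request: load the persistent Log, modify it, and let the flush write the UPDATE.
    public void recordUserChoice(Long logId, String choice) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            Log log = (Log) session.get(Log.class, logId);  // persistent (managed) instance
            log.getUserChoices().add(choice);               // just modify it; no explicit update() needed
            tx.commit();                                    // dirty checking flushes the change here
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}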
http://scbcd.blogspot.com/2007/01/hibernate-persistence-lifecycle.html This explains transient, persistent objects.
Also have a look at the Lifecycle interface to know what Hibernate does (and it provides hooks at all stages for the user to do something).
We are using Hibernate and Spring MVC with the OpenSessionInView filter.
Here is a problem we are running into (pseudo code)
transaction 1
load object foo
transaction 1 end
update foo's properties (not calling session.save or session.update but only foo's setters)
validate foo (using hibernate validator)
if validation fails ?
go back to edit screen
transaction 2 (read only)
load form backing objects from db
transaction 2 end
go to view
else
transaction 3
session.update(foo)
transaction 3 end
The problem we have is this: if the validation fails,
foo is marked "dirty" in the Hibernate session (since we use OpenSessionInView we only have one session throughout the HTTP request). When we load the form backing objects (like a list of some entities using an HQL query), Hibernate checks for dirty objects in the session before performing the query, sees that foo is dirty, and flushes it; when transaction 2 is committed, the updates are written to the database.
The problem is that even though it is a read-only transaction, and even though foo wasn't updated in transaction 2, Hibernate has no knowledge of which object was updated in which transaction and doesn't flush only the objects from that transaction.
Any suggestions? Has somebody run into a similar problem before?
Update: this post sheds some more light on the problem: http://brian.pontarelli.com/2007/04/03/hibernate-pitfalls-part-2/
You can run a get on foo to put it into the hibernate session, and then replace it with the object you created elsewhere. But for this to work, you have to know all the ids for your objects so that the ids will look correct to Hibernate.
There are a couple of options here. The first is that you don't actually need transaction 2: since the session is open, you could just load the backing objects from the DB, thus avoiding the dirty check on the session. The other option is to evict foo from the session after it is retrieved and later use session.merge() to reattach it when you want your changes to be stored.
With Hibernate it is important to understand what exactly is going on under the covers. At every commit boundary it will attempt to flush all changes to objects in the current session, regardless of whether the changes were made in the current transaction or in any transaction at all, for that matter. This is why you don't actually need to call session.update() for any object that is already in the session.
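A rough sketch of the evict/merge option mentioned above (Foo and the way the Session is obtained are placeholders for whatever the OpenSessionInView setup provides):

import org.hibernate.Session;

public class FooEditFlow {

    // Detach foo right after loading, so later queries in the same session don't flush it.
    static Foo loadDetached(Session session, Long fooId) {
        Foo foo = (Foo) session.get(Foo.class, fooId);
        session.evict(foo);          // foo is no longer tracked by dirty checking
        return foo;
    }

    // Called only when validation succeeded and the changes really should be persisted.
    static void saveChanges(Session session, Foo foo) {
        session.beginTransaction();
        session.merge(foo);          // reattach the detached foo and schedule the UPDATE
        session.getTransaction().commit();
    }
}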
Hope this helps
There is a design issue here. Do you think an ORM is a transparent abstraction of your datastore, or do you think it's a set of data manipulation libraries? I would say that Hibernate is the former. Its whole reason for existing is to remove the distinction between your in-memory object state and your database state. It does provide low-level mechanisms to allow you to pry the two apart and deal with them separately, but by doing so you're removing a lot of Hibernate's value.
So very simply - Hibernate = your database. If you don't want something persisted, don't change your persistent objects.
Validate your data before you update your domain objects. By all means validate domain objects as well, but that's a last line of defense. If you do get a validation error on a persistent object, don't swallow the exception. Unless you prevent it, Hibernate will do the right thing, which is to close the session there and then.
What about using Session.clear() and/or Session.evict()?
What about setting singleSession=false on the filter? That might put your operations into separate sessions so you don't have to deal with the 1st level cache issues. Otherwise you will probably want to detach/attach your objects manually as the user above suggests. You could also change the FlushMode on your Session if you don't want things being flushed automatically (FlushMode.MANUAL).
Implement a service layer, take a look at Spring's @Transactional annotation, and mark your methods as @Transactional(readOnly=true) where applicable.
Your flush mode is probably set to AUTO, which means you don't really have control over when Hibernate flushes to the DB.
You could also set your flush mode to MANUAL, and your services/repositories will only try to synchronize the DB with your app when you tell them to.
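For illustration only, a sketch of switching the current session to manual flushing (Foo is a placeholder entity; with FlushMode.MANUAL nothing is written until flush() is called explicitly):

import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ManualFlushExample {

    static void run(SessionFactory sessionFactory, Long fooId) {
        Session session = sessionFactory.getCurrentSession();
        session.setFlushMode(FlushMode.MANUAL);    // queries no longer trigger an automatic flush

        session.beginTransaction();
        Foo foo = (Foo) session.get(Foo.class, fooId);
        foo.setName("edited");                     // dirty in memory, but not written yet

        // read-only work can run queries without foo being flushed behind your back
        session.createQuery("from Foo").list();

        session.flush();                           // the change hits the database only here
        session.getTransaction().commit();
    }
}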