Consider this simple Hibernate scenario:
session = getHibernateSession();
tx = session.beginTransaction();
SomeObject o = (SomeObject) session.get(SomeObject.class, objectId);
tx.commit();
This code produces the following exception:
org.hibernate.TransactionException: Transaction not successfully started
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:100)
at com.bigco.package.Clazz.getSomeData(Clazz.java:1234)
What's going on?
Well, it looks like once we reach the tx.commit() line, the transaction has already been committed. My only guess is that Hibernate already commits the transaction when get()ing the object.
The fix for this is simple:
// commit only if tx still hasn't been committed yet (by hibernate)
if (!tx.wasCommitted())
    tx.commit();
This is a really old question and I figure you've already solved it (or given up on Hibernate) but the answer is tragically simple. I'm surprised no one else picked it up.
You haven't done a session.save(o), so there is nothing in the transaction to commit. The commit may still not work if you haven't changed anything in the object, but why would you want to save it if nothing has changed?
BTW: It is also perfectly acceptable to do the session.get(...) before the session.beginTransaction().
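If it helps, a minimal sketch of the shape this answer implies, reusing the names from the question (the setter and new value are made-up placeholders):
Session session = getHibernateSession();

// loading the object before the transaction starts is fine
SomeObject o = (SomeObject) session.get(SomeObject.class, objectId);

Transaction tx = session.beginTransaction();
o.setSomeField(newValue); // hypothetical change; with no change there is nothing to commit
session.save(o);          // or session.update(o); for an instance already managed by this session, dirty checking alone also works
tx.commit();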
I know this has already been solved, but I am posting my answer here anyway.
I couldn't find a wasCommitted() method on the Transaction interface.
But the following code worked for me:
// commit only if the transaction hasn't been completed yet by Hibernate
// (tx.getStatus() and TransactionStatus are the newer, Hibernate 5+ replacement for wasCommitted())
if (tx.getStatus().equals(TransactionStatus.ACTIVE)) {
    tx.commit();
}
One situation this can happen in is when the code is in an EJB/MDB using container-managed transactions (CMT), either intentionally or because it's the default. To use bean-managed transactions, add the following annotation:
@TransactionManagement(TransactionManagementType.BEAN)
There's more to it than that, but that's the beginning of the story.
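As a rough, unverified sketch of the shape that takes (the bean and method names here are invented), with bean-managed transactions you drive the transaction yourself through an injected UserTransaction:
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class SomeDataBean {

    @Resource
    private UserTransaction utx;

    public void getSomeData(long objectId) throws Exception {
        utx.begin();
        // ... Hibernate work happens here, with the Session participating in the JTA transaction ...
        utx.commit();
    }
}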
Remove session.close() from your program: some of the bigger transactions need more time, and the problem occurs when the connection is closed while they are still in progress. Use session.flush() only.
You should check whether you are calling session.getTransaction().commit() (or rollback()) manually somewhere, because newer stacks replace that hand-written transaction code with declarative transaction management. By annotating the method with
@Transactional(propagation = Propagation.SUPPORTS, readOnly = false, rollbackFor = Exception.class)
you can avoid this kind of transaction-related exception.
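A rough sketch of that declarative style (Spring is assumed here; the class and method names are made up):
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeObjectService {

    @Autowired
    private SessionFactory sessionFactory;

    // the container opens, commits and rolls back the transaction; no tx.commit() in the code
    @Transactional(propagation = Propagation.SUPPORTS, readOnly = false, rollbackFor = Exception.class)
    public SomeObject load(long objectId) {
        return (SomeObject) sessionFactory.getCurrentSession().get(SomeObject.class, objectId);
    }
}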
Above solutions were not helpful for me and that is why I want to share my solution.
In my case, I was not using the @Column annotation properly in one of my entities. I changed my code from
@Column(columnDefinition = "false")
private boolean isAvailable;
to
@Column(columnDefinition = "boolean default false")
private boolean isAvailable;
And it worked.
My create method in the DAO:
public int create(Item item) {
    Session session = sessionFactory.getCurrentSession();
    try {
        int savedId = (int) session.save(item);
        return savedId;
    } catch (Exception e) {
        e.printStackTrace();
        session.getTransaction().rollback();
        return 0; // ==> handle in a custom exception
    }
}
Related
I have the following code (simplified for the sake of the question)
EntityManager em = EMF.get().createEntityManager();
TypedQuery<T> query = em.createQuery...

for (T result : query.getResultList()) {
    try {
        em.getTransaction().begin();
        // do some stuff, update the T object
        em.getTransaction().commit();
    } catch (Exception e) {
        // something has gone wrong, roll back the current transaction
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
    }
}
em.close();
I am using JPA EclipseLink.
Basically, I want to update a set of tasks, take action and update their statuses. Sometimes the task action fails and I need to revert the change.
This works perfectly fine UNTIL something goes wrong with one of the transactions and the rollback is called. From that point on, any subsequent transaction commit IS NOT performed, i.e. the database is not updated.
I read that "on rollback all managed objects are detached". I guess this is where the problem is... if that's correct, how could I implement the desired behaviour?
Any help would be vastly appreciated!
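One hedged idea, not verified against EclipseLink specifically: since the rollback detaches everything the persistence context was managing, re-attaching each result with em.merge(...) at the top of the iteration should let later commits go through again. The entity, query, and setter below are invented placeholders:
EntityManager em = EMF.get().createEntityManager();
TypedQuery<Task> query = em.createQuery("SELECT t FROM Task t", Task.class); // hypothetical Task entity

for (Task task : query.getResultList()) {
    try {
        em.getTransaction().begin();
        Task managed = em.merge(task);  // re-attach in case an earlier rollback detached it
        managed.setStatus("DONE");      // hypothetical status update after performing the task action
        em.getTransaction().commit();
    } catch (Exception e) {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback(); // this detaches all managed instances...
        }
        // ...but the merge() above re-attaches the next one, so later commits still work
    }
}
em.close();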
I've been asked to write some coded tests for a hibernate-based data access object.
I figure that I'd start with a trivial test: when I save a model, it should be in the collection returned by dao.getTheList(). The problem is, no matter what, when I call dao.getTheList(), it is always an empty collection.
The application code is already working in production, so let's assume that the problem is just with my test code.
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    session.save(model);
    session.flush();

    final Collection<Model> actual = dao.getTheList();

    assertEquals(1, actual.size());
}
The test output is expected:<1> but was:<0>
So far, I've tried explicitly committing after the insert, and disabling the cache, but that hasn't worked.
I'm not looking to become a master of Hibernate, and I haven't been given enough time to read the entire documentation. Without really knowing where to start, this seemed like it might be a good question for the community.
What can I do to make sure that my Hibernate insert is flushed/committed/de-cached/or whatever it is, before the verification step of the test executes?
[edit] Some additional info on what I've tried. I tried manually committing the transaction between the insert and the call to dao.getTheList(), but I just get the error Could not roll back Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
@Test
@Transactional("myTransactionManager")
public void trivialTest() throws Exception {
    ...
    // create the model to insert
    ...
    final Transaction firstTransaction = session.beginTransaction();
    session.save(model);
    session.flush();
    firstTransaction.commit();

    final Transaction secondTransaction = session.beginTransaction();
    final Collection<SystemConfiguration> actual = dao.getTheList();
    secondTransaction.commit();

    assertEquals(1, actual.size());
}
I've also tried taking the @Transactional annotation off the test method and annotating each of two helper methods, one for each Hibernate job. With that, though, I get the error: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here.
[/edit]
I think the underlying DBMS might hide the change from other transactions as long as the changing transaction has not completed yet. Is getTheList running in a separate transaction? Are you using Oracle or Postgres?
I'm questioning my implementation of locking in various scenarios and I'd like some suggestions from users more expert than me.
I'm using two support classes, named HibernateUtil and StorageManager.
HibernateUtil
Simply returns a singleton instance of the session factory; it creates the session factory on the first call.
StorageManager
Encloses the operations common to the various entities. On its creation, it gets the session factory from HibernateUtil and stores it in a static variable.
This class implements the session-per-request (or maybe session-per-operation) pattern, and for this reason, for every kind of request it basically does these things in sequence:
open a new session (from the session factory previously stored)
begin a new transaction
execute the specific request (depending on which specific methods of StorageManager were invoked)
commit transaction
close session
Of course, comments on this style are really appreciated.
Then, there are basically 3 categories of operations that implement point 3:
Insert, Update or Delete entity
session.save(entity);
// OR session.update(entity) OR session.delete(entity)
session.buildLockRequest(LockOptions.UPGRADE).lock(entity);
Get entity
T entity = (T) session.byId(type).with(LockOptions.READ).load(id);
// There are other forms of get, but they are pretty similar
Get list
List<T> l = session.createCriteria(type).list();
// Same here, various but similar forms of get list
Again, I don't know if this is the right way to implement the various actions.
Also, and this is the real problem: whenever an error occurs, it becomes impossible to access the datastore in any way, even from the command line, until I manually stop the application that caused the problem. How can I solve this?
Thanks in advance.
EDIT
Some more code
This is the code for the parts listed above.
private void createTransaction() // Parts 1 and 2 of the above list
{
    session = sessionFactory.openSession();
    transaction = null;
    try
    {
        transaction = session.beginTransaction();
    }
    catch (HibernateException exception)
    {
        if (transaction != null) transaction.rollback();
        exception.printStackTrace();
    }
}

private void commitTransaction() // Part 4 of the above list
{
    try
    {
        transaction.commit();
    }
    catch (HibernateException exception)
    {
        if (transaction != null) transaction.rollback();
        exception.printStackTrace();
    }
}

private void closeSession() // Part 5 of the above list
{
    try
    {
        // if (session != null)
        {
            session.clear();
            session.close();
        }
    }
    catch (HibernateException exception)
    {
        exception.printStackTrace();
    }
}

public void update(T entity) // Example usage for part 3 of the above list
{
    try
    {
        this.createTransaction();
        session.update(entity);
        // session.buildLockRequest(LockOptions.UPGRADE).lock(entity);
        this.commitTransaction();
    }
    catch (HibernateException exception)
    {
        exception.printStackTrace();
    }
    finally
    {
        this.closeSession();
    }
}
Your error case (the real problem) indicates you are not following the typical transaction usage idiom (from the Session Javadoc):
Session sess = factory.openSession();
Transaction tx = null;
try {
    tx = sess.beginTransaction();
    // do some work, point 3
    tx.commit();
} catch (Exception e) {
    if (tx != null) tx.rollback();
    throw e;
} finally {
    sess.close();
}
Note the catch and finally blocks which ensure any database resources are released in case of an error. (*)
I'm not sure why you would want to lock a database record (LockOptions.UPGRADE) after you have changed it (insert, update or delete entity). You normally lock a database record before you (read and) update it so that you are sure you get the latest data and no other open transactions using the same database record can interfere with the (read and) update.
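As a rough illustration of that lock-before-update order (the entity, field, and variable names are invented; the byId/LockOptions calls are the same ones already used in the question):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    // lock the row first, then read and change it inside the same transaction
    MyEntity entity = (MyEntity) session.byId(MyEntity.class)
            .with(new LockOptions(LockMode.PESSIMISTIC_WRITE))
            .load(id);
    entity.setStatus(newStatus); // dirty checking flushes this change on commit
    tx.commit();                 // the row lock is released when the transaction ends
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}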
Locking makes little sense for insert operations since the default transaction isolation level is "read committed"(1), which means that when a transaction inserts a record, that record only becomes visible to other database transactions AFTER the transaction that inserts the record commits. I.e. before the transaction commits, other transactions cannot select and/or update the newly inserted, not-yet-committed record.
(1) Double check this to make sure. Search for "hibernate.connection.isolation" in the Hibernate configuration chapter. Value should be "2" as shown in the Java Connection constant field values.
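For example, as a hibernate.properties entry this would be:
hibernate.connection.isolation=2
where 2 is the value of java.sql.Connection.TRANSACTION_READ_COMMITTED.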
(*) There is a nasty corner case when a database connection is lost after a database record is locked. In this case, the client cannot commit or rollback (since the connection is broken) and the database server might keep the lock on the record forever. A good database server will unlock records locked by a database connection that is broken and discarded (and rollback any open transaction for that broken database connection), but there is no guarantee (e.g. how and when will a database server discover a broken database connection?). This is one of the reasons to use database record locks sparingly: try to use them only when the application(s) using the database records cannot prevent concurrent/simultaneous updates to the same database record.
Why don't you use Spring's Hibernate/JPA support? You can then have a singleton SessionFactory, and transaction boundaries are explicitly set by using @Transactional.
The session is automatically managed by the TransactionInterceptor, so no matter how many DAOs you call from a service, all of them will use the same thread-bound Session.
The actual Spring configuration is much easier than having to implement your current solution.
If you don't plan on using Spring, then you have to make sure you are actually implementing the session-per-request pattern. If you are using the session-per-operation anti-pattern, you won't be able to include two or more operations in a single unit of work.
The session-per-request pattern requires an external interceptor/AOP aspect to open/close and bind the current session to the current calling thread. You might want to configure this property also:
hibernate.current_session_context_class=thread
so that Hibernate can bind the current Session into the current thread local storage.
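Roughly, a thread-bound unit of work then looks like this sketch (the DAO names are invented; it assumes the property above is set):
// inside some request-handling method, all on one thread
Session session = sessionFactory.getCurrentSession(); // thread-bound; no openSession()/close() here
Transaction tx = session.beginTransaction();
try {
    // every DAO that also calls sessionFactory.getCurrentSession() on this thread shares this Session
    itemDao.save(item);  // hypothetical DAOs
    auditDao.record(item);
    tx.commit();         // with the "thread" context the Session is typically closed when the transaction completes
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
}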
In the following code sample:
Session session = getSessionFactory().openSession();

MyObject currentState = (MyObject) session.get(MyObject.class, id,
        new LockOptions(LockMode.???));

if (!statusToUpdateTo.equals(currentState.getStatus())) {
    tx = session.beginTransaction();
    currentState.setStatus(statusToUpdateTo);
    session.save(currentState);
    tx.commit();
}
session.close();
As you might hopefully interpret from the code, the idea is to check our store and find an object with a given id, then update it to a given status. If we find that the object is already in the status we want to update to, we abandon doing anything. Otherwise, we update the status and save it back to the store.
The worry I've got is that what if several of these requests come through at once, and all try to read and update the same object? Looking through the doco it would seem like Hibernate "usually obtains exactly the right lock level automatically." (LockMode.class) but I'm still keen to ensure that only one thing can read the object, make the decision that it needs to update it, and then update it - without any other threads doing the same to the same database entry.
From the LockMode class I think PESSIMISTIC_WRITE is what I'm after, but can't seem to find a documentation resource that confirms this. Is anyone able to confirm this for me, and that the above code will do what I'm after?
So I noticed in my original code that upon session close, a lock was still left on the database, as subsequent calls to delete the rows I'd inserted didn't complete when tx.commit() was called on them.
After adding the following else block, my tests pass, which I take to mean the lock has been released (as these rows are now being cleaned up).
Session session = getSessionFactory().openSession();

MyObject currentState = (MyObject) session.get(MyObject.class, id,
        new LockOptions(LockMode.PESSIMISTIC_WRITE));

if (!statusToUpdateTo.equals(currentState.getStatus())) {
    tx = session.beginTransaction();
    currentState.setStatus(statusToUpdateTo);
    session.save(currentState);
    tx.commit();
} else {
    // Seems to clear the lock
    tx = session.beginTransaction();
    tx.rollback();
}
session.close();
To me, this obviously reflects my lack of understanding about Hibernate's locking mechanisms, but it does seem slightly strange that one can get a lock using session.get(...) or session.load(...) and then not release a lock using session itself, but rather only through creating a transaction and committing/rolling back.
Of course, I could just be misunderstanding the observed behaviour :)
I am trying to use Hibernate in a multi-threaded application in which each thread retrieves an object and tries to insert it into a table. My code looks like the below.
I have a local Hibernate Session object per thread, and in each InsertData I call beginTransaction and commit.
The problem I am facing is that many times I get "org.hibernate.TransactionException: nested transactions not supported".
Since I am new to Hibernate, I don't know whether what I am doing is correct. Please let me know the correct way to use Hibernate in a multi-threaded app and how to avoid the above exception.
Thanks
public class Worker extends Thread {
    private Session session = null;

    Worker() {
        SessionFactory sf = HibernateUtil.getSessionFactory(); // Singleton
        session = sf.openSession();
        session.setFlushMode(FlushMode.ALWAYS);
    }

    public void run() {
        // Some loop which will run thousands of times
        for (....)
        {
            InsertData(b);
        }
        session.close();
    }

    // BlogPost table has (pk = id, auto-generated), dateTime, blogDescription etc.
    private void InsertData(BlogPost b) {
        session.beginTransaction();
        Long id = (Long) session.save(b);
        b.setId(id);
        session.getTransaction().commit();
    }
}
My hibernate config file has c3p0.min_size=10 and c3p0.max_size=20
With session-objects-per-thread, as long as you are not sharing session objects between multiple threads, you will be fine.
The error you are receiving is unrelated to your multithreaded usage or your session management. Your usage of session.save() as well as explicitly setting the ID is not quite right.
Without seeing your mapping for BlogPost it's hard to tell, but if you have told Hibernate to use the id field as the primary key, and you are using the native generator for primary keys, then all you need to do is this:
session.beginTransaction();
session.persist(b);
session.flush(); // only needed if flush mode is "manual"
session.getTransaction().commit();
Hibernate will fill in the ID for you; persist() will cause the insert to happen within the bounds of the transaction (save() does not care about transactions). If your flush mode is not set to manual then you don't need to call flush(), as Transaction.commit() will handle that for you.
Note that with persist(), the BlogPost's ID is not guaranteed to be set until the session is flushed, which is fine for your usage here.
To handle errors gracefully:
try {
    session.beginTransaction();
    try {
        session.persist(b);
        session.flush(); // only needed if flush mode is "manual"
        session.getTransaction().commit();
    } catch (Exception x) {
        session.getTransaction().rollback();
        // log the error
    }
} catch (Exception x) {
    // log the error
}
By the way, I suggest making BlogPost.setId() private, or package-visible. It is most likely an implementation error if another class sets the ID explicitly (again assuming the native generator, and id as the primary key).
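For reference, a sketch of the kind of mapping that assumes (annotation-based; the field names follow the table description above, so treat it as an illustration rather than the actual class):
import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class BlogPost {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO) // roughly the annotation analogue of the "native" generator
    private Long id;

    @Temporal(TemporalType.TIMESTAMP)
    private Date dateTime;

    private String blogDescription;

    public Long getId() { return id; }

    // keep the setter non-public so only Hibernate assigns the key
    private void setId(Long id) { this.id = id; }

    // dateTime / blogDescription getters and setters omitted
}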