A() {
    con.begin();
    .........
    .........
    B();
    ........
    ........ (exception here -> con.rollback();)
    con.commit();
}

B() {
    con.begin();
    .......
    .......
    con.commit();
}
In the above code, I begin a new DB transaction in A(). It executes some statements successfully. After that B() starts executing, and it also executes some statements successfully, and then control returns to A(). At this point an exception occurs and I do a rollback. I would like to know whether the work that succeeded in B() will be rolled back or not.
The short answer: no. The long answer is as follows.
Support for nested transactions in Java depends on several factors.
Support for Nested transactions in JTA
First and foremost, if you are using JTA, it is up to the Transaction Manager to support nested transactions. A Transaction Manager that does not support them will throw a NotSupportedException whenever you attempt to begin a new transaction on a thread that is already associated with a transaction (a sketch of this follows the quoted passage below).
From the Java Transaction API 1.1 specification:
3.2.1 Starting a Transaction
The TransactionManager.begin method starts a global transaction and associates the transaction context with the calling thread. If the Transaction Manager implementation does not support nested transactions, the TransactionManager.begin method throws the NotSupportedException when the calling thread is already associated with a transaction.
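As a hedged illustration of that rule (assuming a container-managed JTA environment; the JNDI lookup and surrounding method are only an example), an attempt to nest a second transaction on the same thread may surface like this:

import javax.naming.InitialContext;
import javax.transaction.NotSupportedException;
import javax.transaction.UserTransaction;

public class NestedTxProbe {
    public void run() throws Exception {
        // standard JNDI name for the UserTransaction in Java EE components
        UserTransaction utx = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
        utx.begin();                  // outer transaction, now associated with this thread
        try {
            utx.begin();              // attempt to start a nested transaction
        } catch (NotSupportedException e) {
            // thrown by transaction managers that do not support nested transactions
            System.out.println("Nested transactions not supported: " + e.getMessage());
        } finally {
            utx.rollback();           // end the outer transaction
        }
    }
}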
Support for Nested transactions in JDBC
JDBC 3.0 introduces the Savepoint class, which is more or less analogous to savepoints in the database. A savepoint is created with the Connection.setSavepoint() method, which returns a Savepoint instance; you can later roll back to it with the Connection.rollback(Savepoint svpt) method. All of this, of course, depends on whether you are using a JDBC 3.0 compliant driver that supports setting savepoints and rolling back to them.
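A minimal sketch of that API (assuming auto-commit is already disabled on the connection; the table and values are invented for illustration):

Statement stmt = conn.createStatement();
stmt.executeUpdate("INSERT INTO audit_log (msg) VALUES ('step 1')");
Savepoint sp = conn.setSavepoint();   // mark this point in the transaction
stmt.executeUpdate("INSERT INTO audit_log (msg) VALUES ('step 2')");
conn.rollback(sp);                    // undoes only 'step 2'
conn.commit();                        // 'step 1' is committed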
Impact of Auto-Commit
By default, connections are obtained in auto-commit mode, unless the JDBC driver clearly deviates on this front. If this feature is enabled, nested transactions are ruled out entirely, because every change made in the database via the connection is committed automatically as soon as it is executed.
If you disable auto-commit and choose to commit and roll back transactions explicitly, then committing a transaction always commits all changes performed by the connection up to that point in time. Note that the programmer cannot choose which changes are committed: all changes up to that instant are committed, whether they were performed in one method or another. The only way out is to define savepoints, or to hack your way past the JDBC driver; the driver usually commits all changes performed by a connection associated with a thread, so starting a new thread (this is bad) and obtaining a new connection in it often gives you a new transaction context.
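To make the "all changes until that instant" point concrete, here is a hedged sketch (the table, values and method names are invented): both methods write through the same connection, so the single commit (or rollback) in a() covers everything b() did as well.

void a(Connection con) throws SQLException {
    con.setAutoCommit(false);
    try (Statement st = con.createStatement()) {
        st.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
        b(con);           // b()'s work joins the very same transaction
        con.commit();     // commits a()'s AND b()'s changes together
    } catch (SQLException e) {
        con.rollback();   // rolls back a()'s AND b()'s changes together
        throw e;
    }
}

void b(Connection con) throws SQLException {
    try (Statement st = con.createStatement()) {
        st.executeUpdate("UPDATE accounts SET balance = balance + 10 WHERE id = 2");
        // no commit here: b() has no transaction of its own
    }
}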
You might also want to check how your framework offers support for nested transactions, especially if you're isolated from the JDBC API or from starting new JTA transactions on your own.
Based on the above description of how nested transaction support may be achieved in various scenarios, it appears that a rollback in your code will roll back all changes associated with the Connection object.
That looks like poor transaction management, I'm afraid. It would be better to handle the commits and rollbacks from the callers of A and B instead:
A()
{
    // business code A
    B();
    // more business code A
}

B()
{
    // business code B
}

DoA()
{
    try
    {
        con.begin();
        A();
        con.commit();
    }
    catch (Exception e)
    {
        con.rollback();
    }
}

DoB()
{
    try
    {
        con.begin();
        B();
        con.commit();
    }
    catch (Exception e)
    {
        con.rollback();
    }
}
As per your code, in A() you start a transaction. Then you jump to B(), where you start a transaction again, which in turn commits all the previous work. Then at the end of B() the transaction is explicitly committed. At this point, all of your work so far is committed. Now the code returns to A() and the remaining code runs. In case of an exception, only the part after the call to B() will be rolled back.
You can use java.sql's built-in Savepoint support with Postgres 8 and up.
Connection conn = null;
CallableStatement proc = null;
Savepoint save = null;
DatabaseManager mgr = DatabaseManager.getInstance();
try {
    conn = mgr.getConnection();
    // Savepoints need an explicit transaction, so turn auto-commit off
    conn.setAutoCommit(false);
    proc = conn.prepareCall("{ call writeStuff(?, ?) }");
    // Set DB parameters
    proc.setInt(1, stuffToSave);
    proc.setString(2, moreStuff);
    // Set savepoint here:
    save = conn.setSavepoint();
    // Try to execute the query
    proc.execute();
    // Release the savepoint, otherwise buffer memory will be eaten
    conn.releaseSavepoint(save);
    // Commit the work now that the savepoint is no longer needed
    conn.commit();
} catch (SQLException e) {
    // You may want to log the first exception only.
    // This block will attempt to roll back to the savepoint (if it was set)
    try {
        if (save != null) {
            // Roll back to the savepoint of the prior work:
            conn.rollback(save);
        }
    } catch (SQLException e1) {
        e1.printStackTrace();
    }
}
When an SQLException occurs, the current work is rolled back to the savepoint, and the remaining transactions can still proceed. Without the rollback, subsequent transactions would fail.
Related
In the updateUser method: if an exception occurs when calling one of the other micro services (such as updateUserContact or updateAccountContact), the updateUser operation must be rolled back.
How do I perform a transaction operation to create, update and delete manually in Java?
In the create method, when an exception occurs, I delete the related records.
But I do not know what to do for update and delete.
If the call to userContactStub.grpcUpdate throws an exception, I must roll back the userAcount change.
Does anyone have any suggestions about rolling back the transaction in the update?
I use JPA, gRPC (to connect the micro services), and Spring Boot.
Each micro service has its own datasource.
//updateUser
AdminUser adminUser = findById();
adminUser.setFirstName(adminUserModel.getFirstName());
adminUser.setLastName(adminUserModel.getLastName());
adminUser.setPassword(PasswordEncoderGenerator.generate(adminUserModel.getPassword()));
adminUser.setUsername(adminUserModel.getUsername());
adminUser.setDateOfBirth(CalendarTools.getDateFromCustomDate(adminUserModel.getDateOfBirth()));
adminUser.setGender(etcItemService.findByIdAndCheckEntity(adminUserModel.getGender_id(), GenderEnum.class,null,true));
adminUser = adminUserRepository.save(adminUser);
//update userAcount For Admin
//call grpcUpdate
this.userAcountStub.grpcUpdate(createRequestModel);
//update UserContact For Admin
//call grpcUpdate
this.userContactStub.grpcUpdate(createRequestModel);
adminUserModel.setId(adminUser.getId());
return adminUserModel;
What framework are you using? Are you using JPA?
Assuming you are using JPA, you don't have to worry much about it: JPA pretty much guarantees your data integrity if an exception occurs (usually by rolling the transaction back).
However, I am not sure how to roll back a database transaction if one of the micro services you called has thrown an exception.
If you are using JPA (Hibernate?), you can simply add the @Transactional annotation on top of your create, update, etc. methods to get rollback behaviour. It handles this job and solves the problem.
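For instance, a hedged sketch of what that could look like around the updateUser flow from the question (the service class name is invented; note that @Transactional only rolls back the local JPA datasource, not work already applied by the remote gRPC services, which needs explicit compensation):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AdminUserService {

    // rollbackFor = Exception.class also covers checked exceptions thrown by the gRPC stubs
    @Transactional(rollbackFor = Exception.class)
    public AdminUserModel updateUser(AdminUserModel adminUserModel) {
        // ... the adminUserRepository.save(...) and grpcUpdate(...) calls from the question ...
        // if any of them throws, the local JPA changes are rolled back automatically
        return adminUserModel;
    }
}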
// assumes the enclosing method declares throws SQLException
try (Connection conn = DriverManager.getConnection(dbURL, dbUser, dbPassword)) {
    conn.setAutoCommit(false);
    try {
        // perform operations such as insert, update, delete here
        // ..
        // if everything is OK, commit the transaction
        conn.commit();
    } catch (SQLException e) {
        // in case of exception, roll back the transaction
        // (conn is still in scope and still open here)
        conn.rollback();
    }
}
I have a Java Spring Boot application reading and writing data to a local Oracle 19c database.
I have the following CommandLineRunner:
@Override
public void run(String... args) {
final EntityManager em = entityManagerFactory.createEntityManager();
final EntityTransaction transaction = em.getTransaction();
transaction.begin();
em.persist(customer());
//COMMENT1 em.flush();
/*COMMENT2
Query q = em.createQuery("from " + Customer.class.getName() + " c");
@SuppressWarnings("unchecked")
final Iterator<Object> iterator = (Iterator<Object>) q.getResultList().iterator();
while (iterator.hasNext()) {
Object o = iterator.next();
final Customer c = (Customer) o;
log.info(c.getName());
}
*/
transaction.rollback();
}
When I run this code, using a packet sniffer to monitor TCP traffic between the application and database, I see what I expect: nothing particularly interesting in the conversation, as the em.persist(customer()) will not be flushed.
When I include the code in COMMENT1, then I'm surprised to find that the conversation looks the same - there is nothing interesting after the connection handshake.
When I include the code in COMMENT2, however, then I get a more complete TCP conversation. Now, the captured packets show that the write operation was indeed flushed to the database, and I can also see evidence of the read operation to list all entities following it.
Why is it that the TCP conversation does not reflect the explicit flush() when only COMMENT1 is removed? Why do I need to include COMMENT2 to see an insert into customer... statement captured in the TCP connection?
A call to flush() synchronizes the changes in your persistence context with the database, but it does not commit the transaction. Consider it an optimization to avoid unnecessary DB writes on each flush.
When you uncomment the second block, you see the flush actually being executed. This happens because the EntityManager ensures your select query sees the latest state from the DB, so it pushes the pending (flushed) changes to the database before running the query, along with any other pending changes, if any.
em.persist(customer());
persist does not directly insert the object into the database:
it just registers it as new in the persistence context (transaction).
em.flush();
It flushes the changes to the database but doesn't commit the transaction.
A SQL statement is only fired if there is a change to write to the database (insert/update/delete).
transaction.rollback() or transaction.commit() will actually roll back or commit the transaction.
All of these scenarios depend on the flush mode, and I think the behaviour is vendor dependent. Assuming Hibernate, FlushMode is most probably set to AUTO in your application, hence the result you see in the second scenario.
AUTO mode: The Session is sometimes flushed before query execution in order to ensure that queries never return stale state.
https://docs.jboss.org/hibernate/orm/3.5/javadocs/org/hibernate/FlushMode.html
In spring boot I think you can set it as
spring.jpa.properties.org.hibernate.flushMode=AUTO
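If you want to make the difference visible, a small hedged sketch using the standard JPA flush-mode API (entity and query taken from the question) would be:

EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
em.persist(customer());

// AUTO (the default): the pending INSERT is flushed before the query runs,
// so the statement shows up in the packet capture.
em.setFlushMode(FlushModeType.AUTO);
em.createQuery("from " + Customer.class.getName() + " c").getResultList();

// COMMIT: the provider may skip the pre-query flush, and the INSERT would
// only go out if the transaction were committed.
// em.setFlushMode(FlushModeType.COMMIT);

em.getTransaction().rollback();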
I'm running a process that inserts new elements into the DB from a for loop. I'm using JPA (EclipseLink), and sometimes there's a problem with the transaction status. This is the case:
One of the INSERTS doesn't work (Primary Key duplicated)
After that, all the inserts will fail (Exception Description: Transaction is currently active).
for (Element l:e){
try{
//Should I add: if(!em.getTransaction().isActive())
em.getTransaction().begin();
em.createNativeQuery("INSERT INTO...").executeUpdate();
em.getTransaction().commit();
}
catch(Exception ep)
{
//right now I don't do anything here
}
}
I gather that what is happening is this: since the commit in 1) didn't work, the transaction never finished, so the next em.getTransaction().begin() finds an already active transaction.
I have two ideas:
A) Before em.getTransaction().begin(), check if the transaction is active, and only if it is not, call begin(); otherwise, create the query and commit.
B) Do something within the catch block. And here's my doubt... Should I call clear()? flush()? close()?
Which one looks better?
Thanks!
An exception thrown by an EntityManager query does not roll back the active transaction. I see two options here:
Roll back the transaction yourself within the catch block using em.getTransaction().rollback().
Instead of inserting data with a native query, use the preferred approach based on EntityManager.persist, whose exceptions cause an automatic rollback (in your particular case this would lead to a javax.persistence.EntityExistsException).
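A hedged sketch of the first option, applied to the loop from the question:

for (Element l : e) {
    try {
        em.getTransaction().begin();
        em.createNativeQuery("INSERT INTO...").executeUpdate();
        em.getTransaction().commit();
    } catch (Exception ep) {
        // the failed statement leaves the transaction active, so end it explicitly
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
    }
}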
I'm questioning my implementation of locking in various scenarios and I'd like some suggestions from users more expert than me.
I'm using two support classes, named HibernateUtil and StorageManager.
HibernateUtil
Simply returns a singleton instance of session factory; obviously, it creates the session factory on the first call.
StorageManager
Encloses the common operations on the various entities. On its creation, it gets the session factory from HibernateUtil and stores it in a static variable.
This class implements the session-per-request (or maybe session-per-operation) pattern, and for this reason, for every kind of request it basically does these things in sequence:
open a new session (from the session factory previously stored)
begin a new transaction
execute the specific request (depending on the specific StorageManager method invoked)
commit transaction
close session
Of course, comments on this style are really appreciated.
Then, there are basically 3 categories of operations that implement point 3
Insert, Update or Delete entity
session.save(entity);
// OR session.update(entity) OR session.delete(entity)
session.buildLockRequest(LockOptions.UPGRADE).lock(entity);
Get entity
T entity = (T) session.byId(type).with(LockOptions.READ).load(id);
// There are other forms of get, but they are pretty similar
Get list
List<T> l = session.createCriteria(type).list();
// Same here, various but similar forms of get list
Again, I don't know if this is the right way to implement the various actions.
Also, and this is the real problem, whenever an error occurs it becomes impossible to access the datastore in any way, even from the command line, until I manually stop the application that caused the problem. How can I solve this?
Thanks in advance.
EDIT
Some more code
This is the code for the parts listed above.
private void createTransaction() // Parts 1 and 2 of the above list
{
session = sessionFactory.openSession();
transaction = null;
try
{
transaction = session.beginTransaction();
}
catch (HibernateException exception)
{
if (transaction != null) transaction.rollback();
exception.printStackTrace();
}
}
private void commitTransaction() // Part 4 of the above list
{
try
{
transaction.commit();
}
catch (HibernateException exception)
{
if (transaction != null) transaction.rollback();
exception.printStackTrace();
}
}
private void closeSession() // Part 5 of the above list
{
try
{
// if (session != null)
{
session.clear();
session.close();
}
}
catch (HibernateException exception)
{
exception.printStackTrace();
}
}
public void update(T entity) // Example usage for part 3 of the above list
{
try
{
this.createTransaction();
session.update(entity);
// session.buildLockRequest(LockOptions.UPGRADE).lock(entity);
this.commitTransaction();
}
catch (HibernateException exception)
{
exception.printStackTrace();
}
finally
{
this.closeSession();
}
}
Your error case (the real problem) indicates you are not following the typical transaction usage idiom (from the Session Javadoc):
Session sess = factory.openSession();
Transaction tx = null;
try {
    tx = sess.beginTransaction();
    // do some work, point 3
    tx.commit();
} catch (Exception e) {
    if (tx != null) tx.rollback();
    throw e;
} finally {
    sess.close();
}
Note the catch and finally blocks which ensure any database resources are released in case of an error. (*)
I'm not sure why you would want to lock a database record (LockOptions.UPGRADE) after you have changed it (insert, update or delete entity). You normally lock a database record before you (read and) update it so that you are sure you get the latest data and no other open transactions using the same database record can interfere with the (read and) update; a sketch of this follows the footnotes below.
Locking makes little sense for insert operations, since the default transaction isolation level is "read committed"(1), which means that when a transaction inserts a record, that record only becomes visible to other database transactions AFTER the inserting transaction commits. That is, before the transaction commits, other transactions cannot select and/or update the newly inserted, "not yet committed" record.
(1) Double check this to make sure. Search for "hibernate.connection.isolation" in the Hibernate configuration chapter. Value should be "2" as shown in the Java Connection constant field values.
(*) There is a nasty corner case when a database connection is lost after a database record is locked. In this case, the client cannot commit or roll back (since the connection is broken) and the database server might keep the lock on the record forever. A good database server will unlock records locked by a database connection that is broken and discarded (and roll back any open transaction for that broken database connection), but there is no guarantee (e.g. how and when will a database server discover a broken database connection?). This is one of the reasons to use database record locks sparingly: try to use them only when the application(s) using the database records cannot otherwise prevent concurrent/simultaneous updates to the same database record.
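As a hedged sketch of the "lock before you read and update" advice, using the same API style as the question (the entity type, field and id are placeholders):

Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    // acquire the row lock while loading, BEFORE modifying the entity
    Account account = (Account) session.byId(Account.class).with(LockOptions.UPGRADE).load(accountId);
    account.setBalance(account.getBalance() - 10);  // no other transaction can update this row now
    tx.commit();   // dirty checking flushes the change; the lock is released here
} catch (RuntimeException ex) {
    if (tx != null) tx.rollback();
    throw ex;
} finally {
    session.close();
}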
Why don't you use Spring's Hibernate/JPA support? You can then have a singleton SessionFactory, and transaction boundaries are set explicitly using @Transactional.
The session is automatically managed by the TransactionInterceptor, so no matter how many DAOs you call from a service, all of them will use the same thread-bound Session.
The actual Spring configuration is much easier than having to implement your current solution.
If you don't plan on using Spring, then you have to make sure you are actually implementing the session-per-request pattern. If you are using the session-per-operation anti-pattern, you won't be able to include two or more operations in a single unit of work.
The session-per-request pattern requires an external interceptor/AOP aspect to open/close and bind the current session to the current calling thread. You might want to configure this property also:
hibernate.current_session_context_class=thread
so that Hibernate can bind the current Session into the current thread local storage.
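With that property set, a hedged sketch of session-per-request without Spring could look like this (entityA and entityB are placeholders; with the "thread" context, the current Session is bound to the calling thread and closed automatically when the transaction ends):

Session session = sessionFactory.getCurrentSession(); // thread-bound session
Transaction tx = session.beginTransaction();
try {
    // every operation belonging to this request reuses the same thread-bound session
    session.save(entityA);
    session.update(entityB);
    tx.commit();   // also flushes and closes the current session
} catch (RuntimeException ex) {
    tx.rollback(); // also releases the session
    throw ex;
}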
Why do we always need to start a transaction in Hibernate to save, insert, delete or update?
Is the auto-commit feature false by default in Hibernate?
For example, this:
public static void main(String[] args) {
    System.out.println("creating emp objects");
    emp e1 = new emp("a", "x", 1234);
    emp e2 = new emp("b", "y", 324);
    emp e3 = new emp("c", "z", 23345);
    System.out.println("saving emp objects..");
    Session s = myfactory.getsession();
    s.save(e1);
    s.save(e2);
    s.save(e3);
    s.close();
    System.out.println("successfully saved");
}
This won't save anything, whereas if I add a Transaction then it is saved. Why is that?
public static void main(String[] args) {
    System.out.println("creating emp objects");
    emp e1 = new emp("a", "x", 1234);
    emp e2 = new emp("b", "y", 324);
    emp e3 = new emp("c", "z", 23345);
    System.out.println("saving emp objects..");
    Session s = myfactory.getsession();
    Transaction t = s.beginTransaction();
    s.save(e1);
    s.save(e2);
    s.save(e3);
    t.commit();
    s.close();
    System.out.println("successfully saved");
}
A well-defined answer to your question, from another SO page, is quoted here.
Look at the following code, which accesses the database without transaction boundaries:
Session session = sessionFactory.openSession();
session.get(Item.class, 123l);
session.close();
By default, in a Java SE environment with a JDBC configuration, this is what happens if you execute this snippet:
1. A new Session is opened. It doesn't obtain a database connection at this point.
2. The call to get() triggers an SQL SELECT. The Session now obtains a JDBC Connection from the connection pool. Hibernate, by default, immediately turns off the autocommit mode on this connection with setAutoCommit(false). This effectively starts a JDBC transaction!
3. The SELECT is executed inside this JDBC transaction. The Session is closed, and the connection is returned to the pool and released by Hibernate, which calls close() on the JDBC Connection. What happens to the uncommitted transaction?
The answer to that question is, "It depends!" The JDBC specification doesn't say anything about pending transactions when close() is called on a connection. What happens depends on how the vendors implement the specification. With Oracle JDBC drivers, for example, the call to close() commits the transaction! Most other JDBC vendors take the sane route and roll back any pending transaction when the JDBC Connection object is closed and the resource is returned to the pool.
Obviously, this won't be a problem for the SELECT you've executed, but look at this variation:
Session session = getSessionFactory().openSession();
Long generatedId = session.save(item);
session.close();
This code results in an INSERT statement, executed inside a transaction that is never committed or rolled back. On Oracle, this piece of code inserts data permanently; in other databases, it may not. (This situation is slightly more complicated: the INSERT is executed only if the identifier generator requires it. For example, an identifier value can be obtained from a sequence without an INSERT. The persistent entity is then queued until flush-time insertion, which never happens in this code. An identity strategy requires an immediate INSERT for the value to be generated.)
Bottom line: use explicit transaction demarcation.
The Hibernate Session is a Unit of Work, and the queued statements are only executed at flush time (which may occur prior to executing a query, or when the currently running Transaction is committed).
Auto-commit is only meaningful in SQL consoles and is undesirable in enterprise applications. When using an ORM tool, you are managing entity object state transitions rather than executing DML operations. It's only at flush time that state transitions are translated into DML operations.
So, while you can write JDBC statements in auto-commit, JPA doesn't allow you to do so.