How does a default @Transactional work at the low level? - java

Does a default @Transactional with Propagation.REQUIRED collect all queries and execute them together at the end of the method, or does it open a database transaction, issue BEGIN, execute every query as it encounters it, and issue COMMIT when the transaction finishes?
Is this what is referred to as logical vs. physical transactions?
I am wondering because I am using an @Transactional test that executes a GET endpoint + a DELETE endpoint + a GET endpoint with READ_UNCOMMITTED; the behavior works as expected, but I see no trace of the delete queries in the logs, only selects.
I would have expected to see all the queries issued and then a rollback, but I have the feeling that the transaction is just modifying the managed entities of the persistence context and only tries to save at the end of the test...
If I should be seeing all the delete queries as the repository.remove() calls are executed, then it might be that for some reason Hibernate is only logging queries outside of a readOnly=false transaction.

Maybe this answer helps you: JPA flush vs commit
If there is an active transaction, JPA/Hibernate will execute the flush method when the transaction is committed. In the meantime, all changes applied to entities are collected in the Unit of Work.
After flush() the changes to the data are reflected in the database, but they are still part of the open transaction. flush() MUST be enclosed in a transaction context, and you don't have to call it explicitly unless needed (in rare cases); EntityTransaction.commit() does that for you.
You can change this behavior by changing the flush strategy.
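For example, here is a minimal sketch (assuming a plain JPA EntityManager is available; the helper method name is made up for illustration) that defers flushing until commit by switching the flush mode:
import javax.persistence.EntityManager;
import javax.persistence.FlushModeType;

// hypothetical helper; the EntityManager would normally be injected
void deferWritesUntilCommit(EntityManager em) {
    // With COMMIT, the provider only synchronizes the persistence context
    // when the transaction commits, not before every query (as AUTO may do).
    em.setFlushMode(FlushModeType.COMMIT);
}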

Related

entityManager.flush does not insert to database immediately, why?

This is just an insert into the db at the end of a transaction. Is there any point in using entityManager.flush()?
@Transactional
public long saveNewWallet(String name) {
    Wallet emptyWallet = new Wallet();
    emptyWallet.setAmount(new BigDecimal(2.00));
    entityManager.persist(emptyWallet);
    entityManager.flush();
    return 5;
}
Since you are in an @Transactional scope, the changes are sent to the database but are not actually committed until Spring's transaction interceptor commits the local transaction. In that scenario you could remove it.
The following entry explains the uses of EntityManager.flush(): https://en.wikibooks.org/wiki/Java_Persistence/Persisting
Flush
The EntityManager.flush() operation can be used to write all changes
to the database before the transaction is committed. By default JPA
does not normally write changes to the database until the transaction
is committed. This is normally desirable as it avoids database access,
resources and locks until required. It also allows database writes to
be ordered, and batched for optimal database access, and to maintain
integrity constraints and avoid deadlocks. This means that when you
call persist, merge, or remove the database DML INSERT, UPDATE, DELETE
is not executed, until commit, or until a flush is triggered.
The flush() does not execute the actual commit: the commit still
happens when an explicit commit() is requested in case of resource
local transactions, or when a container managed (JTA) transaction
completes.
Flush has several usages:
Flush changes before a query execution to enable the query to return new objects and changes made in the persistence unit.
Insert persisted objects to ensure their Ids are assigned and accessible to the application if using IDENTITY sequencing.
Write all changes to the database to allow error handling of any database errors (useful when using JTA or SessionBeans).
To flush and clear a batch for batch processing in a single transaction.
Avoid constraint errors, or reincarnate an object.
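As an illustration of the first usage above, here is a minimal sketch (assuming an active transaction, an injected EntityManager, and the Wallet entity from the earlier example) where a flush before a JPQL query makes a just-persisted entity visible to that query:
import javax.persistence.EntityManager;

// sketch only: Wallet is the example entity used earlier on this page
void countIncludingNewWallet(EntityManager em) {
    Wallet wallet = new Wallet();
    em.persist(wallet);
    // With FlushModeType.AUTO this flush would happen implicitly before the query;
    // calling it explicitly just makes the intent obvious.
    em.flush();
    Long count = em.createQuery("select count(w) from Wallet w", Long.class)
                   .getSingleResult();
}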

JTA - how is transaction registered?

I am using the following piece of code I found online (Here) as an example of JTA Transaction processing:
// Get a UserTransaction
// (conn is assumed to be a JDBC Connection obtained earlier from a container-managed DataSource)
UserTransaction txn = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
try {
    System.out.println("Starting top-level transaction.");
    txn.begin();
    Statement stmtx = conn.createStatement(); // will be a tx-statement
    stmtx.executeUpdate("INSERT INTO test_table (a, b) VALUES (1,2)");
    stmtx.executeUpdate("INSERT INTO test_table2 (a, b) VALUES (3,4)");
    System.out.print("\nNow attempting to rollback changes.");
    txn.rollback();
} catch (Exception e) {
    e.printStackTrace();
}
I have a few questions, in general, about the JTA that are drawn from the example above:
I presume the whole point of calling txn.begin() and then rollback() is to be able to (apparently) roll back TWO SQL statements, correct?
Each of the update queries were TRANSACTIONS themselves, right? They must have succeeded so that we can get to the rollback call at the bottom. Well, if they succeeded, i.e. committed, how on earth can we roll them back all of a sudden?
The most important question: what happens when we say txn.begin()? I understand from the JTA API that it is supposed to register this transaction with a calling thread by TransactionManager instance. How is TM even linked to the UserTransaction? And finally, how is the txn aware of the fact that we modified the DB twice and is able to speak to DB to roll it back? We have not registered ANY ResourceManagers with it so it should not be aware of any resources being at play...
I am a bit lost here, so any info would be appreciated... Question 3 bothers me the most.
Yes, or even just one. It's also about the ability to commit the transaction at the end, and thus have other concurrent transactions only see the new state after the transaction has been committed, and not all the intermediate states between the beginning and the end of the transaction (i.e. the I in ACID).
No. An update is an update. It's executed as part of the transaction that you began previously. If one of them doesn't succeed, you'll get an exception, and can still choose to commit the transaction (i.e. have all the previous updates committed), or to roll back the transaction (i.e. have all the previous updates canceled).
The UserTransaction has a reference to its transaction manager, presumably. When you get a connection from a DataSource in a Java EE environment, the DataSource is linked to the transaction manager of the Java EE container, and rolling back the JTA transaction will use the XA protocol to roll back all the operations done on all the data sources during the transaction. That's the container's business, not yours.
There's a lot to learn about transactions, but maybe I can give you a head start:
Yes. But you will usually only want to roll back in case of a problem: some step of the transaction could not be completed because of a technical issue (syntax error, table not found, segment overrun, ...) or an application logic problem (the customer does not have enough funds for all order line items, for example).
Given that auto-commit mode is disabled, the inserts are not committed before you actually commit. They are temporarily applied to the database using a Write-Ahead Log (PostgreSQL, InnoDB engine, Oracle) with sophisticated Multi-Version Concurrency Control (MVCC), which determines which state of the database each transactional client can see. A very interesting topic :-). (A small plain-JDBC sketch follows at the end of this answer.)
A UserTransaction is registered with your current Thread. Resources (i.e. databases or messaging services) enlist themselves with the UserTransaction. This is usually only necessary when you are using distributed transactions (XA transactions, 2PC).
I suggest getting a good book on SQL programming (for example Head First SQL) and checking out the Java EE 6 tutorial.
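To make the second point concrete, here is a small plain-JDBC sketch (the JDBC URL and credentials are placeholders) where nothing becomes visible to other sessions because the transaction is rolled back:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// placeholder connection details, for illustration only
void insertBothOrNothing() throws SQLException {
    try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/test", "user", "pw")) {
        conn.setAutoCommit(false); // one transaction spans both inserts
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("INSERT INTO test_table (a, b) VALUES (1, 2)");
            st.executeUpdate("INSERT INTO test_table2 (a, b) VALUES (3, 4)");
            conn.rollback(); // both inserts are undone; other sessions never saw them
        }
    }
}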

Hibernate: flush() and commit()

Is it good practice to call org.hibernate.Session.flush() separately?
As said in org.hibernate.Session docs,
Must be called at the end of a unit of work, before committing the transaction and closing the session (depending on flush-mode, Transaction.commit() calls this method).
Could you explain the purpose of calling flush() explicitly if org.hibernate.Transaction.commit() will do it already?
In the Hibernate Manual you can see this example
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(...);
    session.save(customer);
    if (i % 20 == 0) { // 20, same as the JDBC batch size
        // flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
Without the calls to the flush and clear methods, your first-level cache would keep growing and eventually cause an OutOfMemoryError.
Also you can look at this post about flushing
flush() will synchronize the database with the current state of the objects held in memory, but it does not commit the transaction. So, if you get an exception after flush() is called, the transaction will still be rolled back.
You can synchronize your database with small chunks of data using flush() instead of committing a large amount of data at once with commit() and risking an OutOfMemoryError.
commit() will make the data stored in the database permanent. There is no way to roll back your transaction once commit() succeeds.
One common case for explicitly flushing is when you create a new persistent entity and you want it to have an artificial primary key generated and assigned to it, so that you can use it later on in the same transaction. In that case calling flush would result in your entity being given an id.
Another case is if there are a lot of things in the 1st-level cache and you'd like to clear it out periodically (in order to reduce the amount of memory used by the cache) but you still want to commit the whole thing together. This is the case that Aleksei's answer covers.
flush(): Flushing is the process of synchronizing the underlying persistent store with the persistable state held in memory. It will update or insert into your tables within the running transaction, but it may not commit those changes.
You need to flush in batch processing, otherwise it may give an OutOfMemoryError.
commit(): Commit makes the database changes permanent. When you have a persisted object and you change a value on it, it becomes dirty and Hibernate needs to flush these changes to your persistence layer. So you do need to commit, but committing also ends the unit of work (transaction.commit()).
It is usually not recommended to call flush explicitly unless it is necessary. Hibernate normally calls flush automatically at the end of the transaction and we should let it do its work. There are, however, some cases where you need to call flush explicitly: when a second task depends on the result of the first persistence task, both being inside the same transaction.
For example, you might need to persist a new entity and then use the id of that entity in some other task inside the same transaction; in that case it is required to explicitly flush the entity first.
@Transactional
void someServiceMethod(Entity entity) {
    em.persist(entity);
    em.flush(); // need to explicitly flush in order to use the id in the next statement
    doSomeThingElse(entity.getId());
}
Also note that explicitly flushing does not cause a database commit; a database commit is done only at the end of a transaction, so if any runtime error occurs after calling flush, the changes will still roll back.
By default the flush mode is AUTO, which means: "The Session is sometimes flushed before query execution in order to ensure that queries never return stale state", but most of the time the session is flushed when you commit your changes. Manually calling the flush method is useful when you use FlushMode.MANUAL or you want to do some kind of optimization. But I have never done this, so I can't give you practical advice.
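A minimal sketch of the MANUAL case mentioned above, assuming a Hibernate SessionFactory is configured elsewhere (the setFlushMode call shown here is the classic API; newer Hibernate versions name it setHibernateFlushMode):
import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

// sketch only; sessionFactory is assumed to be configured elsewhere
void manualFlushExample(SessionFactory sessionFactory) {
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    session.setFlushMode(FlushMode.MANUAL); // nothing is written until we flush explicitly
    // ... load and modify entities here ...
    session.flush(); // push the pending SQL when we decide to
    tx.commit();
    session.close();
}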
session.flush() is a synchronization method: it sends the pending SQL to the database sequentially. The data is not yet permanently stored in the database, it is still held within the open transaction, so if an exception arises in the middle we can still handle it.
commit(), on the other hand, stores the data permanently in the database. If we accumulate a large amount of data before flushing, there is a chance of getting an OutOfMemoryError, similar to the savepoint topic in a JDBC program.

OpenJPA Transactions - Single or Multiple Entity managers?

I have a DBManager singleton that ensures instantiation of a single EntityManagerFactory. I'm debating the use of a single or multiple EntityManagers though, because only a single transaction can be associated with an EntityManager at a time.
I need to use multiple transactions. JPA doesn't support nested transactions.
So my question is: in most of your normal applications that use transactions in a single-db environment, do you use a single EntityManager at all? So far I have been using multiple EntityManagers, but I would like to see if creating a single one could do the trick and also speed things up a bit.
So I found the below helpful; hope it helps someone else too.
http://en.wikibooks.org/wiki/Java_Persistence/Transactions#Nested_Transactions
Technically in JPA the EntityManager is in a transaction from the
point it is created. So begin is somewhat redundant. Until begin is
called, certain operations such as persist, merge, remove cannot be
called. Queries can still be performed, and objects that were queried can be changed, although what will happen to these changes is somewhat unspecified in the JPA spec; normally they will be committed, however it is best to call begin before making any changes to your objects. Normally it is best to create a new EntityManager for each transaction to avoid having stale objects remaining in the persistence context, and to allow previously managed objects to be garbage collected.
After a successful commit the EntityManager can continue to be used,
and all of the managed objects remain managed. However it is normally
best to close or clear the EntityManager to allow garbage collection
and avoid stale data. If the commit fails, then the managed objects
are considered detached, and the EntityManager is cleared. This means
that commit failures cannot be caught and retried, if a failure
occurs, the entire transaction must be performed again. The previously managed objects may also be left in an inconsistent state, meaning some of the objects' locking versions may have been incremented. Commit will also fail if the transaction has been marked for rollback. This can occur either explicitly by calling setRollbackOnly, or because rollback is required to be set if any query or find operation fails. This can be an issue, as some queries may fail without it being desirable for the entire transaction to be rolled back.
The rollback operation will rollback the database transaction only.
The managed objects in the persistence context will become detached
and the EntityManager is cleared. This means any object previously
read, should no longer be used, and is no longer part of the
persistence context. The changes made to the objects will be left as
is, the object changes will not be reverted.
EntityManagers by definition are not thread safe. So unless your application is single threaded, using a single EM is probably not the way to go.
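A minimal resource-local sketch of the "one EntityManager per transaction" pattern described above (the EntityManagerFactory is assumed to come from your DBManager singleton):
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;

// sketch only: emf would come from the DBManager singleton mentioned in the question
void runInNewEntityManager(EntityManagerFactory emf) {
    EntityManager em = emf.createEntityManager(); // cheap to create, but not thread safe
    EntityTransaction tx = em.getTransaction();
    try {
        tx.begin();
        // ... persist / merge / remove here ...
        tx.commit();
    } catch (RuntimeException e) {
        if (tx.isActive()) {
            tx.rollback();
        }
        throw e;
    } finally {
        em.close(); // discard the persistence context so stale objects can be garbage collected
    }
}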

How can I configure Hibernate to immediately apply all saves, updates, and deletes?

How can I configure Hibernate to apply all saves, updates, and deletes to the database server immediately after the session executes each operation? By default, Hibernate enqueues all save, update, and delete operations and submits them to the database server only after a flush() operation, committing the transaction, or the closing of the session in which these operations occur.
One benefit of immediately flushing database "write" operations is that a program can catch and handle any database exceptions (such as a ConstraintViolationException) in the code block in which they occur. With late or auto-flushing, these exceptions may occur long after the corresponding Hibernate operation that caused the SQL operation.
Update:
According to the Hibernate API documentation for interface Session, the benefit of catching and handling a database exception before the session ends may be of no benefit at all: "If the Session throws an exception, the transaction must be rolled back and the session discarded. The internal state of the Session might not be consistent with the database after the exception occurs."
Perhaps, then, the benefit of surrounding an "immediate" Hibernate session write operation with a try-catch block is to catch and log the exception as soon as it occurs. Does immediate flushing of these operations have any other benefits?
How can I configure Hibernate to apply all saves, updates, and deletes to the database server immediately after the session executes each operation?
To my knowledge, Hibernate doesn't offer any facility for that. However, it looks like Spring does, and you can have some data access operations use FLUSH_EAGER by setting their HibernateTemplate (respectively HibernateInterceptor) to that flush mode (source).
But I warmly suggest reading the javadoc carefully (I'll come back to this).
By default, Hibernate enqueues all save, update, and delete operations and submits them to the database server only after a flush() operation, committing the transaction, or the closing of the session in which these operations occur.
Closing the session doesn't flush.
One benefit of immediately flushing database "write" operations is that a program can catch and handle any database exceptions (such as a ConstraintViolationException) in the code block in which they occur. With late or auto-flushing, these exceptions may occur long after the corresponding Hibernate operation that caused the SQL operation
First, DBMSs vary as to whether a constraint violation comes back on the insert (or update) or on the subsequent commit (this is known as immediate vs. deferred constraints). So there is no guarantee, and your DBA might not even want immediate constraints (which should be the default behavior, though).
Second, I personally see more drawbacks with immediate flushing than benefits, as spelled out in black and white in the javadoc of FLUSH_EAGER:
Eager flushing leads to immediate
synchronization with the database,
even if in a transaction. This causes
inconsistencies to show up and throw a
respective exception immediately, and
JDBC access code that participates in
the same transaction will see the
changes as the database is already
aware of them then. But the drawbacks
are:
additional communication roundtrips with the database, instead of a single
batch at transaction commit;
the fact that an actual database rollback is needed if the Hibernate
transaction rolls back (due to already
submitted SQL statements).
And believe me, increasing the database roundtrips and losing the batching of statements can cause major performance degradation.
Also keep in mind that once you get an exception, there is not much you can do apart from throwing your session away.
To sum up, I'm very happy that Hibernate enqueues the various actions and I would certainly not use this FLUSH_EAGER flush mode as a general setting (but maybe only for the specific operations that actually require eager flushing, if any).
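For completeness, a sketch of how that setting could be applied, assuming the legacy Spring HibernateTemplate API discussed above is the one in use:
import org.hibernate.SessionFactory;
import org.springframework.orm.hibernate3.HibernateTemplate;

// sketch only: applies to the legacy org.springframework.orm.hibernate3 API
HibernateTemplate buildEagerFlushingTemplate(SessionFactory sessionFactory) {
    HibernateTemplate template = new HibernateTemplate(sessionFactory);
    template.setFlushMode(HibernateTemplate.FLUSH_EAGER); // every operation flushes immediately
    return template;
}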
Look into autocommit, though it is not recommended. If your work includes more than one update or insert SQL statement, you autocommit some of the work, and then a statement fails, you have the potentially arduous task of undoing the first part of the action. It gets really fun when the 'undo' operation fails.
Anyway, here's a link that shows how to do it.
