I have a service-layer method annotated with @Transactional(readOnly = true), and this method quite often throws a RuntimeException (let's say it is some NotFoundException).
I'm also using Hibernate as the ORM for DB interaction.
Is it a legal pattern to do this?
What is the default behaviour in this case in terms of rollback? Can it adversely affect the connection's state or lead to any other problems?
This isn't a "why not try it yourself?" question. I have a suspicion that the pattern could lead to a "Transaction rolled back because it has been marked as rollback-only" error in the same method after some number of exceptions. That could be a very specific PostgreSQL JDBC driver issue, which is why I'm asking about the design in general: is it legal or not?
So as far as I understand, you are worried about the rollback. A readOnly transaction typically executes only select statements, and there is usually nothing to roll back from a read. The only place where this is handy is when you read under a lock: when the transaction finishes, that lock is released.
AFAIK readOnly will set the flush mode to FlushMode.NEVER, and that is good and bad at the same time. Good, because there will be no dirty checking, as described here. Bad, because if you call a read/write transaction from within a readOnly transaction, the write will silently fail to commit because the session is not flushed. This is easily testable, by the way, and I hope things have not changed since I tried it.
Then there is the connection pool. I know that C3P0's default policy is to roll back any uncommitted work; the flag that controls this is autoCommitOnClose.
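For reference, a minimal C3P0 sketch of that flag (the JDBC URL is a placeholder, not from the question):

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Hypothetical pool setup. false (the default) rolls back any uncommitted
// work when a connection is returned to the pool; true would commit it.
ComboPooledDataSource pool = new ComboPooledDataSource();
pool.setJdbcUrl("jdbc:postgresql://localhost/mydb");
pool.setAutoCommitOnClose(false);
```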
Then there is this link about readOnly and Postgres, which I have not worked with, so I can't really give an opinion on it.
Now to your point about "Transaction rolled back because it has been marked as rollback-only". As I said, for a readOnly transaction there might be nothing to roll back, so this really depends on how you chain your @Transactional methods, IMO.
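To make the scenario concrete, here is a minimal sketch of the pattern under discussion (FooService, a FooRepository with a Spring-Data-style findById, and NotFoundException are hypothetical names):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FooService {

    static class NotFoundException extends RuntimeException {
        NotFoundException(String message) { super(message); }
    }

    private FooRepository fooRepository;

    @Transactional(readOnly = true)
    public Foo findFoo(long id) {
        // The transaction only ever issues SELECTs, so the rollback Spring
        // performs after the RuntimeException changes no data, but the
        // transaction is still marked rollback-only by the default rule.
        return fooRepository.findById(id)
                .orElseThrow(() -> new NotFoundException("No Foo with id " + id));
    }
}
```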
I have a problem with a Java optimistic locking exception. I have a service class that is instantiated (by Spring) for every new user session, and it contains a non-static method that performs DB operations. I wonder how I can avoid an optimistic locking exception on the entity that is read from and written to the DB.

I would like to achieve a result similar to what a synchronized method would give, but I guess using "synchronized" is out of the question, since the method is not static and would have no effect when each user has his own instance of the service? Can I somehow detect whether a new version of the entity has been saved to the DB, then retrieve that new version, and then edit and save that one? I want the transaction to hold until it succeeds, even if that means it has to wait for other transactions.

My first idea was to put the transaction code into a try-catch block and retry the transaction (read & write) if an optimistic locking exception is thrown. Is that solution "too easy", or?
Optimistic locking is used to improve performance while still avoiding messing up the data.
If there's an optimistic lock failure, the user whose update failed needs to decide whether he wants to perform his operation again. You can't automate that decision, since it depends entirely on what was changed and how.
So no, your idea of retrying the transaction with a try/catch is not a "too easy" solution. It's not a solution at all; it would be a serious (and dumb) bug.
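To illustrate, a minimal sketch of surfacing the conflict to the user instead of retrying blindly (the boundary method, EditResult, and customerService are hypothetical; with Spring the Hibernate exception typically arrives wrapped in an ObjectOptimisticLockingFailureException):

```java
import org.hibernate.StaleObjectStateException;

// Hypothetical handling at the boundary: report the conflict and let the
// user decide, instead of silently retrying and overwriting other changes.
public EditResult saveCustomer(CustomerForm form) {
    try {
        customerService.update(form);   // @Transactional service method
        return EditResult.ok();
    } catch (StaleObjectStateException e) {
        // Someone modified the entity after this user loaded it: reload
        // the fresh state and ask the user to re-apply the edits.
        return EditResult.conflict(customerService.reload(form.getId()));
    }
}
```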
I was going through the ACID properties of transactions and encountered the statement below across different sites:
ACID is the acronym for the four properties guaranteed by transactions: atomicity, consistency, isolation, and durability.
My question is specifically about the phrase **guaranteed by transactions**. In my experience these properties are not taken care of by the transaction automatically; as Java developers we need to ensure that these criteria are met.
Let's go through each property:
Atomicity: Assume that when we create a customer, an account must be created too, as it is compulsory. Now suppose that during the transaction the customer gets created, but during account creation some exception occurs. The developer can now go two ways: either he rolls back the complete transaction (atomicity is met in this case), or he commits the transaction, so the customer is created but not the account (which violates atomicity). So the responsibility lies with the developer?
Consistency: The same reasoning holds for consistency too.
Isolation: As per the definition, isolation makes a transaction execute without interference from other processes or transactions. But this is only achieved when we set the isolation level to Serializable. Otherwise, at levels like read committed or read uncommitted, changes are visible to other transactions. So the responsibility lies with the developer to make it really isolated, with Serializable?
Durability: If we commit the transaction, then even if the application crashes, the changes should still be there when the application restarts. Not sure whether this needs to be taken care of by the developer or by the database vendor/transaction?
So as per my understanding, these ACID properties are not guaranteed automatically; rather, we as developers have to achieve them. Please let me know whether my understanding of each point is correct. I would appreciate it if you could reply to each point (a yes/no will also do).
As per my understanding, read committed should be the most logical isolation level in most applications, though it depends on the requirements too.
Transactions guarantee ACID, more or less:
1) Atomicity. The transaction guarantees that all changes are made or none of them are. But you need to manually set the start and end of the transaction and manually perform the commit or rollback. Depending on the technology you use (EJB...), transactions can be container-managed, with the start and end set around the whole method you are writing; you can then control by configuration whether an invoked method requires a new transaction, an existing one, no transaction at all... (see the sketch after this list).
2) Consistency. Guaranteed by atomicity.
3) Isolation. You must define the isolation level your application needs. The default value depends on the database, the container... The commonest one is READ COMMITTED. Be careful with locks, as they can cause deadlocks depending on your logic and isolation level.
4) Durability. Managed entirely by the database. If your commit executes without error, nearly all databases guarantee the durability of the changes, but some scenarios can defeat that guarantee (writes to disk being cached in memory and flushed later...).
In general, you should be aware of transactions and either configure them in the container or declare the start and end (commit, rollback) in code.
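For instance, here is a minimal Spring sketch of the declarative style (the service, DAOs, and entities are hypothetical names, not from the question):

```java
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

public class CustomerService {

    private CustomerDao customerDao;
    private AccountDao accountDao;

    // The container opens the transaction before the method runs and commits
    // at the end; any RuntimeException marks it for rollback (atomicity).
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void createCustomerWithAccount(Customer customer, Account account) {
        customerDao.save(customer);
        accountDao.save(account);   // if this throws, the customer insert is rolled back too
    }
}
```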
Database transactions are atomic: They either happen in their entirety or not at all. By itself, this says nothing about the atomicity of business transactions. There are various strategies to map business transactions to database transactions. In the simplest case, a business transaction is implemented by one database transaction (where a business transaction is aborted by rolling back the database one). Then, atomicity of database transactions implies atomicity of business transactions. However, things get tricky once business transactions span several database transactions ...
See above.
Your statement is correct. Often, the weaker guarantees are sufficient to prove correctness.
Database transactions are durable (unless there is a hardware failure): if the transaction has committed, its effects will persist until other transactions change the data. However, the calling code might not learn whether a transaction has committed if the database, or the network between the database and the calling code, fails. Therefore
If we commit the transaction, then even if the application crashes, it should be committed on restart of the application.
is wrong. If the transaction has committed, there is nothing left to do.
To summarize, the database does give strong guarantees - about the behaviour of the database. Obviously, it cannot give guarantees about the behaviour of the entire application.
We use Spring and Hibernate in our project and have a layered architecture: Controller -> Service -> Manager -> DAO. Transactions start in the Manager layer. A method in the Service layer which updates an object in the DB is called by many threads, and this causes a stale object exception to be thrown. So I made this method synchronized, but I still see the stale object exception thrown. What am I doing wrong here? Is there any better way to handle this case?
Thanks for the help in advance.
The stale object exception is thrown when an entity has been modified between the time it was read and the time it's updated. This can happen inside a single transaction, but may also happen when you read an object in a transaction, modify it (in the controller layer, for example), then start another transaction and merge/update it (in this case, minutes or hours can separate the read and the update).
The exception is thrown to help you avoid conflicts between users.
If you don't care about conflicts (i.e. the last update always wins and replaces what the previous ones have written), then don't use optimistic locking. If you're concerned about conflicts, then StaleObjectExceptions will happen, and you should show a meaningful message to the end user, asking him to reload the data and try his modification again. There's no way to avoid them; you just have to be optimistic and hope that they won't happen often.
Note that your synchronized trick will work only if
the exception happens only when reading and writing in the same transaction
updates to the entity are only made by this service
your application is not clustered.
It might also reduce the throughput dramatically, because you forbid any concurrent updates, regardless of which entities are updated by the concurrent transactions. It's as if you locked the whole table for the duration of the whole transaction.
My guess is that you would need to configure optimistic locking on the Hibernate side.
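For reference, optimistic locking on the Hibernate side usually means adding a version attribute to the entity; a minimal sketch (the Account entity is a hypothetical example):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private long balance;

    // Hibernate increments this on every update and appends
    // "and version = ?" to the UPDATE's WHERE clause; if another
    // transaction won the race, zero rows match and a
    // StaleObjectStateException is thrown.
    @Version
    private int version;
}
```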
How can I configure Hibernate to apply all saves, updates, and deletes to the database server immediately after the session executes each operation? By default, Hibernate enqueues all save, update, and delete operations and submits them to the database server only after a flush() operation, a transaction commit, or the closing of the session in which these operations occur.
One benefit of immediately flushing database "write" operations is that a program can catch and handle any database exceptions (such as a ConstraintViolationException) in the code block in which they occur. With late or auto-flushing, these exceptions may occur long after the corresponding Hibernate operation that caused the SQL operation.
Update:
According to the Hibernate API documentation for interface Session, the benefit of catching and handling a database exception before the session ends may be of no benefit at all: "If the Session throws an exception, the transaction must be rolled back and the session discarded. The internal state of the Session might not be consistent with the database after the exception occurs."
Perhaps, then, the benefit of surrounding an "immediate" Hibernate session write operation with a try-catch block is to catch and log the exception as soon as it occurs. Does immediate flushing of these operations have any other benefits?
How can I configure Hibernate to apply all saves, updates, and deletes to the database server immediately after the session executes each operation?
To my knowledge, Hibernate doesn't offer any facility for that. However, it looks like Spring does: you can have some data access operations use FLUSH_EAGER by setting their HibernateTemplate (respectively HibernateInterceptor) to that flush mode (source).
But I warmly suggest reading the javadoc carefully (I'll come back to this).
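As a rough sketch of what that configuration might look like (HibernateTemplate-era Spring API; the sessionFactory, DAO wiring, and entity are assumptions):

```java
import org.springframework.orm.hibernate3.HibernateTemplate;

// Hypothetical DAO wiring: this template flushes after every operation
// instead of queueing the SQL until commit.
HibernateTemplate template = new HibernateTemplate(sessionFactory);
template.setFlushMode(HibernateTemplate.FLUSH_EAGER);

template.save(newCustomer);   // the INSERT hits the database immediately
```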
By default, Hibernate enqueues all save, update, and delete operations and submits them to the database server only after a flush() operation, a transaction commit, or the closing of the session in which these operations occur.
Closing the session doesn't flush.
One benefit of immediately flushing database "write" operations is that a program can catch and handle any database exceptions (such as a ConstraintViolationException) in the code block in which they occur. With late or auto-flushing, these exceptions may occur long after the corresponding Hibernate operation that caused the SQL operation
First, DBMSs vary as to whether a constraint violation comes back on the insert (or update) or on the subsequent commit (these are known as immediate vs. deferred constraints). So there is no guarantee, and your DBA might not even want immediate constraints (although that should be the default behavior).
Second, I personally see more drawbacks with immediate flushing than benefits, as explained in black and white in the javadoc of FLUSH_EAGER:
Eager flushing leads to immediate synchronization with the database, even if in a transaction. This causes inconsistencies to show up and throw a respective exception immediately, and JDBC access code that participates in the same transaction will see the changes as the database is already aware of them then. But the drawbacks are:

- additional communication roundtrips with the database, instead of a single batch at transaction commit;
- the fact that an actual database rollback is needed if the Hibernate transaction rolls back (due to already submitted SQL statements).
And believe me, increasing the database roundtrips and losing the batching of statements can cause major performance degradation.
Also keep in mind that once you get an exception, there is not much you can do apart from throwing your session away.
To sum up, I'm very happy that Hibernate enqueues the various actions, and I would certainly not use this FLUSH_EAGER flush mode as a general setting (but maybe only for the specific operations that actually require eager flushing, if any).
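For such specific operations, an explicit flush inside the transaction gives the same early failure without a global eager setting; a sketch (the session, entity, and logger are assumptions):

```java
import org.hibernate.exception.ConstraintViolationException;

// Push the INSERT to the database now, so a constraint violation surfaces
// in this block rather than at commit time.
try {
    session.save(customer);
    session.flush();
} catch (ConstraintViolationException e) {
    // Per the Session javadoc quoted in the question: after an exception
    // the session must be discarded and the transaction rolled back.
    log.warn("insert violated a constraint", e);
    throw e;
}
```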
Look into autocommit, though it is not recommended: if your work includes more than one update or insert statement, you autocommit some of the work, and then a statement fails, you have the potentially arduous task of undoing the first part of the action. It gets really fun when the 'undo' operation fails.
Anyway, here's a link that shows how to do it.
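For completeness, in plain JDBC autocommit is a one-line switch; a minimal sketch (assuming an existing Connection named connection; the table is a placeholder):

```java
import java.sql.Statement;

// With auto-commit enabled, every statement is its own transaction and is
// committed as soon as it executes.
connection.setAutoCommit(true);
try (Statement st = connection.createStatement()) {
    st.executeUpdate("INSERT INTO customer (name) VALUES ('Alice')");   // committed immediately
}
```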
Is there any way to "replay" a transaction?
I mean, sometimes I get a RollbackException and roll back the transaction. Can I then "clone" the transaction and try again, or is the transaction lost once rollback has been called?
I really need the changes, and I really don't want to trace every change for rerunning later...
thanks,
udi
Why do you get the exception in the first place? This seems to me to be the crux of the matter.
Are you relying on optimistic writes? If so, you'll have to wrap your database writes in some form of loop, incorporating (perhaps) a backoff and a limited number of retries. You can't do this automatically, unfortunately (unless you investigate some form of AOP solution wrapping your database writes with a retry strategy?).
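A minimal sketch of such a retry loop (the retry count, the backoff, and the doWrite callback are illustrative assumptions, not a prescription):

```java
import javax.persistence.OptimisticLockException;

// Hypothetical wrapper: re-run the write a few times with a crude linear
// backoff before giving up and surfacing the conflict.
void writeWithRetry(Runnable doWrite) throws InterruptedException {
    final int maxRetries = 3;
    for (int attempt = 1; ; attempt++) {
        try {
            doWrite.run();   // must re-read the entity on each attempt
            return;          // success
        } catch (OptimisticLockException e) {
            if (attempt == maxRetries) {
                throw e;     // retries exhausted
            }
            Thread.sleep(100L * attempt);   // crude backoff
        }
    }
}
```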
That depends on where that transaction comes from. In Java/JDBC, a transaction is tied to a connection. You start one by setting setAutoCommit() to false (otherwise, every statement becomes its own little transaction).
There is nothing preventing you from reusing the connection after a transaction has failed (i.e. after you called rollback).
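A short JDBC sketch of that (the connection and the unit-of-work method are assumed):

```java
import java.sql.SQLException;

// The failed transaction is gone, but the connection itself is fine and
// can immediately start a new transaction (auto-commit is still false).
connection.setAutoCommit(false);
try {
    doUnitOfWork(connection);   // hypothetical batch of statements
    connection.commit();
} catch (SQLException e) {
    connection.rollback();      // discard the failed transaction...
    doUnitOfWork(connection);   // ...and retry on the same connection
    connection.commit();
}
```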
Things get trickier when you use Spring. Spring wraps your methods in a transaction handler, and this handler tries to guess what it should do with the current transaction based on the exceptions that get thrown in the method. The next question is: which wrapper created the current transaction? I just had a case where I would call a method foo() which would in turn call bar(), both @Transactional.
I wanted to catch errors from bar() in foo() and save them into the DB. That didn't work, because the transaction had been created for foo() (so I was still inside a transaction that Spring considered broken by the exception in bar()), and it wouldn't let me save the error.
The solution was to create baz(), annotate it with @Transactional(propagation = Propagation.REQUIRES_NEW), and call it from foo(). baz() would get a new, fresh transaction and would be able to write to the DB even though it was called from foo(), which already had a (broken) transaction.
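A sketch of that arrangement (the error-logging service and DAO are hypothetical; note that baz() must live in another Spring bean, because self-invocation would bypass the transactional proxy):

```java
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Transactional
public void foo() {
    try {
        bar();   // also @Transactional; joins foo()'s transaction
    } catch (RuntimeException e) {
        // foo()'s transaction is already marked rollback-only here,
        // so the error has to be saved in a separate transaction.
        errorWriter.baz(e.getMessage());
    }
}

// In a different Spring bean (self-invocation would bypass the proxy):
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void baz(String message) {
    errorDao.save(new ErrorLog(message));   // commits independently of foo()
}
```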
Another alternative is to use JDBC savepoints to roll back only part of the transaction.
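A minimal savepoint sketch (assuming an existing Connection with auto-commit off; riskyUpdate is a hypothetical unit of work):

```java
import java.sql.Savepoint;
import java.sql.SQLException;

Savepoint before = connection.setSavepoint();
try {
    riskyUpdate(connection);         // statement that may violate a constraint
} catch (SQLException e) {
    connection.rollback(before);     // undo only the work since the savepoint
}
connection.commit();                 // work done before the savepoint survives
```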