Context is:
producer (JTA transaction PT) is both sending message to JMS queue and making DB update;
consumer (JTA transaction CT) listens on same queue and reads DB when message is received;
application server - WebLogic, DB - Oracle.
I've observed that sometimes CT is not (yet?) able to see the DB changes of PT, even if the corresponding JMS message has already been received (so PT is committed?).
It seems that JTA can't guarantee consistency of this kind (this was also confirmed in Juergen Hoeller's presentation "Transaction Choices for Performance").
What is the best way to avoid such a problem (except the obvious one - not using JTA)?
Thanks.
So it seems there is no simple, elegant and fail-proof solution for this. In our case it was decided to rely on a simple redelivery mechanism (throwing an exception and letting the JMS message be redelivered after a certain amount of time).
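A rough sketch of that redelivery approach follows; the MDB, entity name and message property are made up for illustration, and the redelivery limit and delay are configured on the queue, not in code:

```java
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical consumer MDB: if the producer's DB changes are not visible yet,
// throw an unchecked exception so the container rolls back the receive and the
// broker redelivers the message after the configured redelivery delay.
@MessageDriven
public class PaymentStatusListener implements MessageListener {

    @PersistenceContext
    private EntityManager em;

    @Override
    public void onMessage(Message message) {
        try {
            long paymentId = message.getLongProperty("paymentId");
            Long count = em.createQuery(
                    "SELECT COUNT(p) FROM PaymentRecord p WHERE p.id = :id", Long.class)
                    .setParameter("id", paymentId)
                    .getSingleResult();
            if (count == 0) {
                // Producer's commit is not visible yet: force rollback and redelivery.
                throw new IllegalStateException("Payment " + paymentId + " not visible yet");
            }
            // ... normal processing of the message ...
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```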
Also considered:
Marking the DB datasource so that the Last Resource Commit Optimization (LRCO) kicks in (thus partially controlling the order of commits inside the XA transaction). Rejected due to the dependency on internals of the application server (WL).
Setting a DeliveryDelay on the JMS message, so it can be consumed only after some time, when (supposedly) the DB sync is over. Rejected due to the lack of a guarantee and the need to fine-tune it for different environments.
The blog post mentioned in the other answer indeed covers all of these and several other options (but no definitive one).
Some options are outlined here:
http://jbossts.blogspot.co.uk/2011/04/messagingdatabase-race-conditions.html
Concerning the Answer:
"So it seems there is no simple, elegant and fail-proof solution for
that. In our case it was decided to rely on simple redelivery
mechanism (throwing exception and letting JMS message to be
redelivered after certain amount of time)."
This is only fail-proof if the second transaction, which starts after Transaction 1 logically ends, has a way of detecting that the Transaction 1 changes are not yet visible and of failing itself with a technical exception.
When Transaction 2 is a different process than Transaction 1, such a check is likely possible. Most likely the output of Transaction 1 is necessary for Transaction 2 to go forward. You can only make french fries if you have potatoes... If you have no potatoes you can blow up and try again next time.
However, if the process that is breaking because the DB appears stale is the exact same process that ran Transaction 1 itself, you are just adding potatoes into a bowl (e.g. a DB table), failing to detect that your bowl is overflowing, and continuing to run transactions to pump it up. Such a check may be out of your hands.
Something of the sort happens to be my case.
A theoretical solution for this might very well be to try to induce a dirty-read failure on the DB by creating an artificial entity equivalent to the @Version field of JPA, forcing each process that needs to run serially to hammer an update on a common entity. If both Transaction 2 and Transaction 1 update a common field on a common entity, the process will have to break: either you get a JPA optimistic lock exception on the second transaction, or you get a dirty-read update exception from the DB.
I have not tested this approach yet, but it is likely going to be the needed workaround, sadly enough.
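A minimal sketch of that idea, assuming a dedicated JPA entity (all names here are made up) that both transactions update so the @Version check forces one of them to fail:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// Hypothetical "synchronization token" entity: every process that must run
// serially updates the same row, so concurrent transactions collide on @Version
// and one of them fails with an OptimisticLockException at commit time.
@Entity
public class SerialToken {

    @Id
    private Long id;

    @Version
    private long version;

    private long lastTouched;

    public void touch() {
        // Any write is enough; it forces a version bump on commit.
        this.lastTouched = System.currentTimeMillis();
    }
}
```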
Related
I am working on building a microservice which uses a transaction manager implemented based on the Java Transaction API (JTA).
My question is: does the transaction manager have the ability to handle concurrency issues in distributed database scenarios?
Scenario:
Assume there are multiple instances of a service running and we get two requests to update the balance of an account by $10. Initially the account has $100; the first instance reads that and increments it by $10 but has not committed yet.
At the same time the second instance also retrieves the account, which is still $100, increments it by $10 and then commits, updating the balance to $110; then service one updates the account again to $110.
By this time you must have figured out that the balance was supposed to be incremented by $20 and not $10. Do I have to write some kind of optimistic-lock-exception mechanism to prevent the above scenario, or will a transaction manager based on the JTA specification already ensure such a thing will not happen?
does the transaction manager have the ability to handle concurrency issues in distributed database scenarios?
Transactions and concurrency are two independent concepts, and though transactions become most significant in contexts where we also see concurrency, transactions can be important without concurrency.
To answer your question: no, a transaction manager generally does not concern itself with handling issues that arise from concurrent updates. It takes a very naive and simple (and often most meaningful) approach: if, after the start of a transaction, it detects that the state has become inconsistent (because of concurrent updates), it simply raises an exception and rolls back the transaction. Only if it can establish that all the ACID properties of the transaction still hold will it commit the transaction.
For such requests you can handle this through optimistic concurrency, where you have a column in the database (a timestamp) as a reference to the version number.
Each time a change is committed it modifies the timestamp value.
If two requests try to commit a change at the same time, only one of them will succeed, as the version (timestamp) column will have changed by then, preventing the other request from committing its changes.
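A minimal JDBC sketch of that timestamp check (table and column names are assumptions); the update only succeeds if the row still carries the timestamp that was read earlier, otherwise the caller should re-read and retry:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class AccountDao {

    // Returns true if this request won the race; false means another request
    // committed first and the caller must re-read the row and retry.
    public boolean updateBalance(Connection con, long accountId, long newBalance,
                                 Timestamp versionReadEarlier) throws SQLException {
        String sql = "UPDATE account SET balance = ?, last_modified = CURRENT_TIMESTAMP "
                   + "WHERE id = ? AND last_modified = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, newBalance);
            ps.setLong(2, accountId);
            ps.setTimestamp(3, versionReadEarlier);
            return ps.executeUpdate() == 1;  // 0 rows means the version moved on
        }
    }
}
```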
The transaction manager (as an implementation of the JTA specification) makes work over multiple resources transparent. It ensures all the operations happen as a single unit of work. "Work over multiple resources" means that the application can insert data into a database and meanwhile send a message to a JMS broker. The transaction manager guarantees that ACID properties hold for these two operations. In simplified form, when the transaction finishes successfully the application developer can be sure both operations were processed. When some trouble happens, it is up to the transaction manager to handle it - possibly throw an exception and roll back the data changes. Thus neither operation is processed.
It makes this transparent for the application developer, who does not need to care about updating the database first and then JMS, and then checking whether all data changes were really processed or a failure happened.
In general the JTA specification was not written with microservice architecture in mind. Now it really depends on your system design(!), but if I consider that you have two microservices where each one has its own transaction manager attached, then the transaction manager can't help you sort out your concurrency issue. Transaction managers do not (usually) synchronize with each other. You are not working with multiple resources from one microservice (which is the use case for the transaction manager) but with one resource from multiple microservices.
As there is one resource, it is the synchronization point for all your updates, and it depends on that resource how it manages concurrency. Considering it's a SQL database, it depends on the isolation level it uses (ACID - I = isolation, see https://en.wikipedia.org/wiki/ACID_(computer_science)). Your particular example describes the lost update phenomenon (https://vladmihalcea.com/a-beginners-guide-to-database-locking-and-the-lost-update-phenomena/), as both microservices try to update one record. One solution for avoiding the issue is optimistic/pessimistic locking (you can implement it on your own, e.g. with timestamps as stated above), another is to use the serializable isolation level in your database, or you can design your application not to read and then update data based on what was read first, but change the SQL query so that the update is atomic (or there are possibly other strategies for working with your data model to achieve the desired outcome).
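As a small sketch of the atomic-update variant (table and column names are assumptions), the increment is pushed into the database so there is no read-modify-write cycle in the application:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AtomicBalanceUpdate {

    // The database applies the increment atomically, so two concurrent +10
    // requests always end up as +20 regardless of interleaving.
    public void addToBalance(Connection con, long accountId, long amount) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE account SET balance = balance + ? WHERE id = ?")) {
            ps.setLong(1, amount);
            ps.setLong(2, accountId);
            ps.executeUpdate();
        }
    }
}
```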
In summary - it depends on how your transaction manager is implemented; it can help you in a way, but that is not its purpose. Your goal should be to check how the isolation level is set up on the shared storage and consider whether your application needs to handle the lost update phenomenon at the application level or whether your storage can manage it for you.
My program needs to add data to two lists in Redis as a transaction. Data should be consistent in both lists. If there is an exception or system failure and the program has only added data to one list, the system should be able to recover and roll back. But based on the Redis docs, it doesn't support rollback. How can I implement this? The language I use is Java.
If you need transaction rollback, I recommend using something other than Redis. Redis transactions are not the same as in other datastores. Even MULTI/EXEC doesn't work for what you want - first because there is no rollback. If you want rollback you will have to pull down both lists so you can restore them - and hope that between your error condition and the "rollback" no other client also modified either of the lists. Doing this in a sane and reliable way is neither trivial nor simple. It would also probably not be a good question for SO, as it would be very broad and not Redis-specific.
Now, as to why EXEC doesn't do what one might think: in your proposed scenario MULTI/EXEC only handles these cases:
You set up WATCHes to ensure no other changes happened
Your client dies before issuing EXEC
Redis is out of memory
It is entirely possible to get errors as a result of issuing the EXEC command. When you issue EXEC, Redis will execute all commands in the queue and return a list of errors. It does not handle the case of add-to-list-1 working and add-to-list-2 failing; you would still have your two lists out of sync. When you issue, say, an LPUSH after issuing MULTI, you will always get back an OK unless you:
a) previously added a watch and something in that list changed or
b) Redis returns an OOM condition in response to a queued push command
DISCARD does not work the way some might think. DISCARD is used instead of EXEC, not as a rollback mechanism. Once you issue EXEC your transaction is completed. Redis does not have any rollback mechanism at all - that isn't what Redis' transactions are about.
The key to understanding what Redis calls transactions is to realize they are essentially a command queue at the client connection level. They are not a database state machine.
Redis transactions are different: they guarantee two things.
All or none of the commands are executed
Sequential and uninterrupted execution of the commands
Having said that, if you have control over your code and know when the system failure would happen (some sort of catching the exception), you can achieve your requirement this way:
MULTI -> Start transaction
LPUSH queue1 1 -> pushing in queue 1
LPUSH queue2 1 -> pushing in queue 2
EXEC/DISCARD
In the 4th step, do EXEC if there is no error; if you encounter an error or exception and you want to roll back, do DISCARD.
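A small sketch of those four steps, assuming the Jedis client (queue names are placeholders). Note that DISCARD only drops commands that are still queued; once EXEC has run there is nothing to undo:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TwoListWriter {

    public void pushToBoth(Jedis jedis, String value) {
        Transaction tx = jedis.multi();     // MULTI: start queueing commands
        try {
            tx.lpush("queue1", value);      // queued, not executed yet
            tx.lpush("queue2", value);      // queued, not executed yet
            tx.exec();                      // EXEC: run both commands back to back
        } catch (RuntimeException e) {
            tx.discard();                   // DISCARD: drop the queued commands
            throw e;
        }
    }
}
```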
Hope it makes sense.
I have to do a save in the MySQL DB, and then send an event over RabbitMQ to my clients.
Now when pushing to RMQ, if it fails for whatever reason, I need to roll back the save. It should be all or nothing; there cannot be data in the DB for which events have not gone out.
So essentially,
Begin Transaction
Save to db
Push to queue
If exception rollback else commit
End transaction
Another approach
Now I could do only the save operation in the transaction, and then afterwards have some way of retrying the queuing if it fails, but that becomes overly complex.
Are there any best practices around this? Any suggestions regarding which approach to follow?
PS: the events over RMQ contain IDs for which some data has changed. The clients are expected to do an HTTP GET for the changed IDs to perform their actions.
Well, you are pretty much doing everything right. The RabbitMQ guys published a good article about semaphore queues: https://www.rabbitmq.com/blog/2014/02/19/distributed-semaphores-with-rabbitmq/
I would consider this a "best practice", since it is written by people who are involved in developing RabbitMQ.
If I understood correctly, your main concern is whether you published the message to the exchange successfully. You can use publisher confirms in RabbitMQ to be sure that your message was accepted by the broker.
Depending on how you designed the updates in your DB, and how much time difference is tolerated from the DB update to your clients getting the ID, a few things come to mind:
You can add a new field in the DB that you use as a flag to check whether the message for a specific update has been sent to RabbitMQ. When you do the update you set the flag to 0, try to publish the message to the exchange, and if that is successful you update the flag to 1. This can get complicated if you need to use the old values until the message to the clients has been sent, and it requires you to go through the database at some interval and try to send the message to RabbitMQ again for all rows that don't have the flag set to 1.
Publish the message to RabbitMQ, and if you get the confirm, you then commit the DB update. If the commit fails for any reason, your clients will do the update but get the old values, which usually won't be problematic, but it really depends on your application. This is probably the better/easier way, especially since there is no reason to expect anything to fail constantly. You should also consider adding some kind of short delay on the client side between receiving the ID and actually doing the update (to give some time for the DB update).
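A rough sketch of that second option with the RabbitMQ Java client's publisher confirms; the exchange, routing key and the commented-out DAO calls are placeholders, and the point is only the ordering: publish, wait for the confirm, then commit:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class UpdatePublisher {

    public void saveAndNotify(long changedId) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        try {
            channel.confirmSelect();                 // enable publisher confirms

            // begin DB transaction and perform the update, but do not commit yet:
            // dao.beginTransaction();
            // dao.update(changedId);

            channel.basicPublish("updates-exchange", "updates", null,
                    Long.toString(changedId).getBytes(StandardCharsets.UTF_8));
            channel.waitForConfirmsOrDie(5_000);     // throws if the broker did not confirm

            // dao.commit();  // commit only after the broker confirmed the message
        } finally {
            channel.close();
            conn.close();
        }
    }
}
```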
In such a heterogeneous environment you can use two-phase commits: you add a "dirty" record to the DB, publish the message, then edit the record to mark it "clean" and ready.
Also, when you deal with messaging middleware you have to be prepared for some messages to be lost, or for their consumer to fail to process them in full, so maybe some feedback from the consumer is required: say you publish the message, and when the consumer receives it, it adds the record to the DB.
I am using JBoss 5.1.x, EJB 3.0.
I have a transaction which goes like this:
An MDB listens on a JMS queue.
The MDB takes the msg from JMS and writes to the database.
In some of the catch clauses I throw "new EJBException(...)", in order to get rollbacks when specific exceptions occur.
Besides that, I have configured a retry mechanism; after 3 attempts the msg goes to the error queue.
What I want to achieve is:
When I have a rollback, I want to increase the current retry number, so if someone is observing the database, he/she can see the current retry number online.
The problem is: when I roll back, even the "insert_number_of_retry" query is rolled back itself, which prevents me from adding the current retry number to the database.
How can I solve this?
Thanks,
ray.
You can try to execute your logging method inside a separate transaction by annotating it with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW).
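A small sketch of that approach (the bean and entity names are made up). The important part is that the method runs in its own transaction, so its update survives the rollback of the caller; it must also be invoked through another bean's reference, not via a local this call, for the annotation to take effect:

```java
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class RetryCounterBean {

    @PersistenceContext
    private EntityManager em;

    // Runs in its own transaction, so the counter update is committed even when
    // the caller's transaction (the MDB's) is rolled back afterwards.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void incrementRetryCount(String messageId) {
        em.createQuery("UPDATE MessageRetry m SET m.retryCount = m.retryCount + 1 "
                     + "WHERE m.messageId = :id")
          .setParameter("id", messageId)
          .executeUpdate();
    }
}
```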
You need a separate transaction in a separate thread (you may use a dedicated thread/pool for it, or spawn one if need be). You have the option to wait for the forked transaction to end, or to forfeit it (and just continue with the rollback and exit fast); that depends on the extra logic and so on.
We have a JMS queue of job statuses, and two identical processes pulling from the queue to persist the statuses via JDBC. When a job status is pulled from the queue, the database is checked to see if there is already a row for the job. If so, the existing row is updated with new status. If not, a row is created for this initial status.
What we are seeing is that a small percentage of new jobs are being added to the database twice. We are pretty sure this is because the job's initial status is quickly followed by a status update - one process gets one, another process gets the other. Both processes check to see if the job is new, and since it has not been recorded yet, both create a record for it.
So, my question is, how would you go about preventing this in a vendor-neutral way? Can it be done without locking the entire table?
EDIT: For those saying the "architecture" is unsound - I agree, but am not at liberty to change it.
Create a unique constraint on JOB_ID, and retry to persist the status in the event of a constraint violation exception.
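A minimal JDBC sketch of that pattern (table and column names are assumptions): try the INSERT, and when the unique constraint on JOB_ID fires, fall back to an UPDATE:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JobStatusDao {

    public void upsertStatus(Connection con, String jobId, String status) throws SQLException {
        try (PreparedStatement insert = con.prepareStatement(
                "INSERT INTO job_status (job_id, status) VALUES (?, ?)")) {
            insert.setString(1, jobId);
            insert.setString(2, status);
            insert.executeUpdate();
        } catch (SQLException e) {
            // SQLState class 23xxx = integrity constraint violation (vendor-neutral).
            if (e.getSQLState() == null || !e.getSQLState().startsWith("23")) {
                throw e;
            }
            // Another consumer inserted the row first; retry as an update.
            try (PreparedStatement update = con.prepareStatement(
                    "UPDATE job_status SET status = ? WHERE job_id = ?")) {
                update.setString(1, status);
                update.setString(2, jobId);
                update.executeUpdate();
            }
        }
    }
}
```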
That being said, I think your architecture is unsound: if two processes are pulling messages from the queue, it is not guaranteed that they will write them to the database in queue order: one consumer might be a bit slower, a packet might be dropped, ..., causing the other consumer to persist the later message first and have it overwritten with the earlier state.
One way to guard against that is to include sequence numbers in the messages, update the row only if the sequence number is as expected, and delay the update otherwise (this is vulnerable to lost messages, though ...).
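For the sequence-number idea, the guard can live in the WHERE clause; a sketch, again with assumed table and column names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SequencedStatusDao {

    // Applies the status only when it carries a higher sequence number than the
    // stored row; returns false for stale or duplicate messages.
    public boolean applyIfNewer(Connection con, String jobId,
                                String status, long seq) throws SQLException {
        String sql = "UPDATE job_status SET status = ?, seq = ? "
                   + "WHERE job_id = ? AND seq < ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, status);
            ps.setLong(2, seq);
            ps.setString(3, jobId);
            ps.setLong(4, seq);
            return ps.executeUpdate() == 1;
        }
    }
}
```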
Of course, the easiest way would be to have only one consumer ...
JDBC connections are not thread safe, so there's nothing to be done about that.
"...two identical processes pulling from the queue to persist the statuses via JDBC..."
I don't understand this at all. Why two identical processes? Wouldn't it be better to have a pool of message queue listeners, each of which would handle messages landing on the queue? Each listener would have its own thread; each one would be its own transaction. A Java EE app server allows you to configure the size of the message listener pool to match the load.
I think a design that duplicates a process like this is asking for trouble.
You could also change the isolation level on the JDBC connection. If you make it SERIALIZABLE you'll ensure ACID at the price of slower performance.
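A tiny sketch of that JDBC setting (obtaining the connection is elided):

```java
import java.sql.Connection;
import java.sql.SQLException;

public final class IsolationSetup {

    // SERIALIZABLE gives the strongest isolation but costs throughput, and
    // serialization failures still have to be retried by the caller.
    public static void useSerializable(Connection con) throws SQLException {
        con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    }
}
```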
Since it's an asynchronous process, performance will only be an issue if you find that the listeners can't keep up with the messages landing on the queue. If that's the case, you can try increasing the size of the listener pool until you have adequate capacity to process the incoming messages.