MySQL transactions and RabbitMQ - Java

I have to do a save in the MySQL DB, and then send an event over RabbitMQ to my clients.
Now when pushing to RabbitMQ, if it fails for whatever reason, I need to roll back the save. It should be all or nothing; there cannot be data in the DB for which events have not gone out.
So essentially:
Begin transaction
Save to DB
Push to queue
If exception, roll back; else commit
End transaction
Another approach: I can do only the save operation in the transaction, and then have some way of retrying the enqueue if it fails, but that becomes overly complex.
Are there any best practices around this? Any suggestions regarding which approach to follow?
PS: the events over RabbitMQ contain IDs for which some data has changed. The clients are expected to do an HTTP GET for the changed IDs to perform their actions.

Well, you're pretty much doing everything right. The RabbitMQ team published a good article about semaphore queues: https://www.rabbitmq.com/blog/2014/02/19/distributed-semaphores-with-rabbitmq/
I would consider this a "best practice", since it is written by people who are involved in developing RabbitMQ.

If I understood correctly, your main concern is whether you published the message to the exchange successfully. You can use Publisher Confirms in RabbitMQ to be sure that your message was accepted by the broker.
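A minimal sketch with the RabbitMQ Java client (the connection, the "updates" exchange, the routing key, and the payload are assumptions for illustration):

    import java.nio.charset.StandardCharsets;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;

    // Publish with confirms enabled; waitForConfirmsOrDie throws if the
    // broker nacks the message or the timeout elapses, so you only commit
    // the DB transaction once this returns normally.
    try (Channel channel = connection.createChannel()) {
        channel.confirmSelect();                              // enable publisher confirms
        channel.basicPublish("updates", "entity.changed", null,
                changedId.getBytes(StandardCharsets.UTF_8));
        channel.waitForConfirmsOrDie(5_000);                  // wait up to 5 s for the ack
    }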
Depending on how you designed the updates in your DB, and how much time difference is tolerable between the DB update and your clients getting the ID, a few things come to mind:
You can add a new field in the DB to use as a flag indicating whether the message for a specific update has been sent to RabbitMQ. When you do the update you set the flag to 0, try to publish the message to the exchange, and if that succeeds you update the flag to 1. This can be complicated if you need to use the old values until the message to the clients has been sent, and it requires you to sweep the database at some interval and try to resend to RabbitMQ for all rows whose flag is not 1.
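A hedged sketch of that flag approach with plain JDBC (the table, columns, and publish() helper are assumptions; auto-commit is assumed off):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Save the row with sent_flag = 0, commit, then flip the flag once the
    // broker has confirmed the message. A periodic job re-publishes all
    // rows still at sent_flag = 0.
    void saveAndNotify(Connection db, long id, String data) throws SQLException {
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO entity (id, data, sent_flag) VALUES (?, ?, 0)")) {
            ps.setLong(1, id);
            ps.setString(2, data);
            ps.executeUpdate();
        }
        db.commit();                                   // data is durable before we publish

        if (publish(id)) {                             // true once RabbitMQ confirms
            try (PreparedStatement ps = db.prepareStatement(
                    "UPDATE entity SET sent_flag = 1 WHERE id = ?")) {
                ps.setLong(1, id);
                ps.executeUpdate();
            }
            db.commit();
        }
    }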
Alternatively, publish the message to RabbitMQ first, and only commit the DB update once you get the confirm. If the commit fails for any reason, your clients will do the GET but fetch the old values, which usually isn't a problem, though it really depends on your application. This is probably the better/easier way, especially since there is no reason to expect anything to fail constantly. You should also consider adding a short delay on the client side between receiving the ID and actually doing the GET (to give the DB update some time to land).

In such a heterogeneous environment you can use a two-phase commit: you add a "dirty" record to the DB, publish the message, then edit the record to mark it "clean" and ready.
Also, when you deal with messaging middleware you have to be prepared for some messages to be lost, or for their consumer to fail to process them fully, so some feedback from the consumer may be required: for example, you publish the message, and when the consumer receives it, it adds a record to the DB.

Related

Should I ensure that Kafka messages are sent successfully before deleting the data?

I need to read data from the database, send it to Kafka, and then delete the data (that was successfully sent) from the database. I would think to do it straightforwardly:
    public void syncData() {
        List<T> data = repository.findAll();
        data.forEach(value -> kafkaTemplate.send(topicName, value));
        repository.deleteAll(data);
    }
But I have never worked with Kafka before, and I am confused by the kafkaTemplate.send operation. As the method returns a ListenableFuture, the iteration data.forEach might finish before all the messages are really sent to the broker. Thus, I might delete the data before it is really sent. What if, for some reason, some messages are not sent? Say I have 10 records, and starting from the 7th the broker goes down.
Will Kafka throw an exception if a message is not sent?
Should I introduce an extra logic to ensure that all messages are sent before going to the next stage of deleting the data?
P.S. I use Kafka with Spring Boot.
You should implement a callback that will trigger when the producer either succeeds or fails to write the data to Kafka before deleting it from the DB.
https://docs.spring.io/spring-kafka/docs/1.0.0.M2/reference/html/_reference.html
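A hedged sketch of such a callback with spring-kafka's ListenableFuture API (the key/value types, topicName, and repository are assumptions carried over from the question):

    import org.springframework.kafka.support.SendResult;
    import org.springframework.util.concurrent.ListenableFuture;
    import org.springframework.util.concurrent.ListenableFutureCallback;

    // Delete the record only after the broker has acknowledged the send.
    ListenableFuture<SendResult<String, T>> future = kafkaTemplate.send(topicName, value);
    future.addCallback(new ListenableFutureCallback<SendResult<String, T>>() {
        @Override
        public void onSuccess(SendResult<String, T> result) {
            repository.delete(value);      // safe to remove from the DB now
        }

        @Override
        public void onFailure(Throwable ex) {
            // Keep the record; a later sync pass can retry it.
        }
    });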
On top of this, you can set required acks to all so that every broker acknowledges the message before it's considered sent.
Also, a little tidbit worth knowing in this context: acks=all does not mean all assigned replicas; it means all in-sync replicas for the partition need to acknowledge the write. So it's important to have a sensible min.insync.replicas setting for this as well. If you have min.insync.replicas=1, then in a strict sense acks=all still only guarantees that one broker saw the write. If you then lose that one broker, you lose the write. That's obviously not going to be a common situation, but it's one you should be aware of.
Using the outbox pattern is the safe way of doing this.
Some other directions that might be helpful: investigate how the replication factor of a topic relates to the number of brokers, get acquainted with the min.insync.replicas broker setting, then read up on the acks setting for the client (producer) and what it means in terms of communication with the broker. For restarting at the correct data position when something bad happens to your application or database connection, you can get some inspiration from the kafka-connect library (and maybe use it as a separately deployed DB-polling service).
One of the strategies would be to keep the Future objects that are returned and monitor them (possibly in a separate thread). Once all of the tasks complete, you can either delete the records that were successfully sent, or write the IDs that need to be deleted to the DB and have a scheduled task (once per hour or day or whatever period fits you) that deletes all the IDs marked for deletion.
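A minimal sketch of that strategy, reusing the repository and kafkaTemplate from the question (the 30-second timeout is an arbitrary assumption):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.TimeUnit;
    import org.springframework.kafka.support.SendResult;
    import org.springframework.util.concurrent.ListenableFuture;

    public void syncData() {
        List<T> data = repository.findAll();
        List<ListenableFuture<SendResult<String, T>>> futures = new ArrayList<>();
        for (T value : data) {
            futures.add(kafkaTemplate.send(topicName, value));
        }
        List<T> sent = new ArrayList<>();
        for (int i = 0; i < futures.size(); i++) {
            try {
                futures.get(i).get(30, TimeUnit.SECONDS);  // block until the broker acks
                sent.add(data.get(i));
            } catch (Exception e) {
                // Not acknowledged: keep the record so the next run retries it.
            }
        }
        repository.deleteAll(sent);                        // delete only what was confirmed
    }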

Transactions with multiple resources (database and JMS broker)

I have an application where we insert into the database and publish an event to ActiveMQ.
I am facing problems with the transaction. I will explain the issue with the code below:
    @Transactional(rollbackFor = Exception.class)
    public class ProcessInvoice {

        public boolean insertInvoice(Object obj) {
            /* Some processing logic here */

            /* DB insert */
            insert(obj);

            /* Some processing logic here again */

            /* Send event to Queue 1 */
            sendEvent(obj);
            /* Send event to Queue 2 */
            sendEvent(obj);

            return true;
        }
    }
The class is annotated with @Transactional; in the insertInvoice method I do some processing, insert into the DB, and send events to two queues.
With the above code I am facing two problems:
If the queue is slow, I face a performance issue, as the process takes time in the sendEvent method.
If for some reason ActiveMQ is down, or the consumer is not able to process the message, how do I roll back the transaction?
How do I deal with these issues?
If you need to send your message transactionally (i.e. you need to be sure the broker actually got your message when you send it) and the broker is performing slowly, which impacts your application, then you only have two choices:
Accept the performance loss in your application.
Improve the broker's performance so that your application performance improves as well. Improving broker performance is a whole other subject.
In JMS (and most other messaging architectures) producers and consumers are unaware of each other by design. Therefore, you will not know if the consumer of the message you send is unable to process the message for any reason, at least not through any automatic JMS mechanism.
When the broker is down the sendEvent method should fail outright. However, I'm not terribly familiar with how Spring handles transactions so I can't say what should happen in that regard.
I have some questions regarding your issue:
If the sendEvent(Object o) method is that expensive in terms of performance (according to what you say), why call it twice, apparently to process the same object?
Apparently the result of those two calls would be the same, with the difference that they are sent to two different queues. I believe you could send to both queues with just one call, so as not to execute the same code twice.
When thinking in transactions, the first things that come to mind are synchronous operations. Do you want to perform those operations asynchronously or synchronously? For example, do you want to wait until the invoice is inserted in the DB before sending the message to Queue 1 and Queue 2?
Maybe you should do it asynchronously. If you don't, or cannot, maybe you could opt for an "optimistic" strategy, where you first send the message to Queue 1 and Queue 2, and afterwards, while those messages are being processed on the broker side, you perform the insertion of the invoice into the DB. If the database has high availability, the insertion will succeed in most cases, so you won't have to wait until it is persisted before sending the messages to Queues 1 and 2. In the (very unlikely) case that the insertion does not succeed, you could send a second message to undo those changes on the broker side. If, due to your business logic, this "undo" process is not trivial, this alternative might not suit you.
You mention rolling back if ActiveMQ is down. In that case maybe you need some monitoring of the queues to find out whether the message reached its destination or not. I would advise you to take a look at Advisory messages; they may help you monitor that and act accordingly.
Maybe what you need could also be re-thought and solved with durable subscribers: that way, once the subscribers were available again, they would receive the message that was enqueued. But this performs slightly worse, since the broker needs to persist the messages to files in order to recover them if it goes down.
Hope these suggestions help, but in my opinion you should describe more of the result you want (the flow), since it does not seem very clear (at least to me).

Queueing a message in JMS, for delayed processing

I have a piece of middleware that sits between two JMS queues. From one it reads, processes some data into the database, and writes to the other.
Here is a small diagram to depict the design:
With that in mind, I have some interesting logic that I would like to integrate into the service.
Scenario 1: Say the middleware service receives a message from Queue 1, and hits the database to store portions of that message. If all goes well, it constructs a new message with some data, and writes it to Queue 2.
Scenario 2: Say that the database complains about something when the service attempts to perform some logic after getting a message from Queue 1. In this case, instead of writing a message to Queue 2, I would retry the database operation with incremental timeouts, i.e. try again in 5 sec., then 30 sec., then 1 min. if still down. The catch, of course, is to be able to read other messages independently of this retry, i.e. retry processing this one request while listening for other requests.
With that in mind, what is the correct and most modern way to construct a future-proof solution?
After reading some posts on the net, it seems that I have several options.
One, I could spin off a new thread once a new message is received, so that I can both perform the retry functionality and listen for new requests.
Two, I could send the message back to the queue with a delay, i.e. if the process failed to execute against the DB, write the message back to the JMS queue with some amount of delay on it.
I am more fond of the first solution; however, I wanted to get the opinion of the community on whether there is a newer/better way to solve this in Java 7. Is there something built into JMS to support this sort of "send message back for reprocessing at a specific time"?
The JMS 2.0 specification describes the concept of delayed delivery of messages; see the "What's new" section of https://java.net/projects/jms-spec/pages/JMS20FinalRelease. Many JMS providers have implemented the delayed delivery feature.
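A minimal sketch of JMS 2.0 delayed delivery (connectionFactory, queue, message, and the 30-second delay are assumptions):

    import javax.jms.JMSContext;
    import javax.jms.JMSProducer;

    // Ask the provider to withhold delivery of this message for 30 seconds,
    // e.g. when re-queueing a request after a failed database write.
    try (JMSContext context = connectionFactory.createContext()) {
        JMSProducer producer = context.createProducer();
        producer.setDeliveryDelay(30_000L);   // milliseconds
        producer.send(queue, message);
    }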
But I wonder how delayed delivery will help your scenario: since the database writes have issues, processing subsequent messages and attempting to write to the database might end up in the same situation. It might be better to sort out the issues with database updates first and then pick up messages from the queue.

JMS Message Ordering and Transaction Rollback

We're building a system that will send messages from one application to another via JMS (using WebSphere MQ, if that matters). These messages are of the form "Create x" or "Delete x". (The end result is that a third-party system needs to be informed of the Creates and Deletes, so the read end of the JMS queue is going to talk to the third-party system, whilst the write end of the JMS queue is just broadcasting messages out to be handled.)
The problem we're worried about here is what happens if one of the messages fails. The initial thought was simply to roll the failures back onto the JMS queue and let the normal retry mechanism handle them. That works until you get a Delete followed by a Create for the same identifier, e.g.:
Delete 123 - Fails, gets rolled back on to the queue
Create 123 - Succeeds
Delete 123 - Retry from earlier failure
The end result of this is that the third party was told to Create 123 and then immediately to Delete 123, instead of the other way around.
Whilst not ideal, from what I've been reading it seems that message affinity would help here, so that we can guarantee the messages are processed in the correct order. However, I'm not sure how message affinity will work when messages are processed and then failed back onto the queue. (Message affinity is generally considered a bad idea, but the load here isn't going to be great, and the risk of poison messages is very low. It's simply the risk of the third party we're interacting with having a brief outage that we're concerned with.)
Failing that, are there any better thoughts on how to do this?
Edit - Further complications. The system we're building to integrate with the third-party is to replace a system they used from a different supplier until recently. As such, there's a bunch of data that is already in the third-party, but it turns out to be very difficult to actually get this out. (The third-party doesn't even send success/failure messages back, merely an acknowledgement of receipt!), so we don't actually know the initial state of the system.
The definitive way to address this is to include a sequence number in the message, such that an earlier message can't overwrite a later one.
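For illustration, a hypothetical sketch of enforcing that with a per-entity sequence number in plain JDBC (the entity_state table, its columns, and the Event accessors are assumptions):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Apply an event only if it is newer than what is already stored, so a
    // redelivered "Delete 123" can no longer clobber a later "Create 123".
    void apply(Connection db, Event event) throws SQLException {
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE entity_state SET state = ?, last_seq = ? " +
                "WHERE entity_id = ? AND last_seq < ?")) {
            ps.setString(1, event.getState());
            ps.setLong(2, event.getSeq());
            ps.setString(3, event.getEntityId());
            ps.setLong(4, event.getSeq());
            if (ps.executeUpdate() == 0) {
                // Either the row does not exist yet (insert it), or a newer
                // event already arrived (drop this one as stale).
            }
        }
    }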
Once upon a time, your transactions at the bank were processed in the order they arrived. However, it was this exact problem that caused that to change. People became aware that their account balance could be positively or negatively affected depending on the order in which the transactions were processed. When it was left to chance it was occasionally a problem for people but in general no malice was perceived.
Later, banks started to memo-post transactions during the day, then sort them into the order most favorable to the bank prior to processing them. For example, if the largest checks cleared first and the account ran out of money, several smaller checks might bounce, generating multiple bounce fees for the bank. Once this was discovered to have become widespread practice, it was changed to always apply the transactions in the order most favorable to the account holder. (At least here in the US.)
This is a common problem, and it's been solved many times, the first being decades ago. There really is no excuse anymore for an enterprise-grade application to both:
Use asynchronous messaging in which delivery order by design cannot be guaranteed, and
Contain an inherent dependency on delivery order to assure the integrity of the transactions and data.
The mention of message affinities hints at the solution when this is approached as a transport problem. Guaranteeing message order delivery requires the following:
One and only one sender of messages
One and only one node from which messages are sent.
One and only one path between sender and receiver of messages.
One and only one queue at which messages are received.
All messages processed under syncpoint.
The ability to pend processing of any messages whilst an orphaned transaction exists (connection exception handling).
No backout queue on the input queue.
No DLQ if the messages traverse a channel.
No use of Priority delivery on any queue along the path.
One and only one receiver of messages.
No ability to scale the application other than by adding CPU and memory to the node(s) hosting the QMgr(s).
Or the problem could be addressed in the application design using memo posting, compensating transactions, or any of the other techniques commonly used to eliminate sequence dependencies.
(Side note: If the application in question is an off-the-shelf vendor package this would seem to me to create a credibility issue. No application can claim to be robust if something so commonplace as a backed out message can mess with data integrity.)
One way to avoid the scenario you described above is to have different classifications of message failures, keeping in mind that your messages should be processed in order (message affinity).
Consider this scenario: if the application receives a "Delete x" for which it hasn't received a "Create x" before, classify this error as a "Business Error", since it occurred because the producer sent the wrong message, or sent messages in the wrong order.
Once you classify an error as a "Business Error", you should not roll back; instead, insert the message into the database identified as a "Business Error".
In this way you have committed (consumed) the message from the queue, reduced the risk of rollback, and further reduced the risk of inconsistent behaviour in your application.
Now consider another scenario: if your application itself has a problem (the database or web server goes down, or any such technical error), then use the rollback mechanism of the JMS queue and treat this as a "Technical Error".
If a "Technical Error" occurs, the JMS queue will retry the message until your application is able to accept and process it.
Once your application is up again after a "Technical Error" and tries processing messages in sequential order, the same rule applies: if a "Business Error" occurs, that message is not retried. (A sketch of this commit/rollback split follows the list below.)
Note: The "Business Error" classification should be agreed by all parties, i.e. if you are marking any message as a "Business Error" it means this message is no longer useful and your Producer should sent a new "Delete x" for any Valid 'Create x".
Some of the "Business Error" you can take into accounts are--
Received "Delete x" before "Create x"
Received "Create x" after "Create x"
Received "Delete x" after valid "Delete x"

Multithread-safe JDBC Save or Update

We have a JMS queue of job statuses, and two identical processes pulling from the queue to persist the statuses via JDBC. When a job status is pulled from the queue, the database is checked to see if there is already a row for the job. If so, the existing row is updated with new status. If not, a row is created for this initial status.
What we are seeing is that a small percentage of new jobs are being added to the database twice. We are pretty sure this is because the job's initial status is quickly followed by a status update: one process gets one, the other process gets the other. Both processes check whether the job is new, and since it has not been recorded yet, both create a record for it.
So, my question is, how would you go about preventing this in a vendor-neutral way? Can it be done without locking the entire table?
EDIT: For those saying the "architecture" is unsound - I agree, but am not at liberty to change it.
Create a unique constraint on JOB_ID, and retry persisting the status in the event of a constraint-violation exception.
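A hedged sketch of that insert-then-retry logic in plain JDBC (the job_status table is an assumption; note that some drivers signal the violation as a plain SQLException with a vendor error code rather than this subclass):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.SQLIntegrityConstraintViolationException;

    // Try the insert first; if another consumer won the race and the unique
    // constraint on job_id fires, fall back to an update.
    void saveStatus(Connection db, String jobId, String status) throws SQLException {
        try (PreparedStatement insert = db.prepareStatement(
                "INSERT INTO job_status (job_id, status) VALUES (?, ?)")) {
            insert.setString(1, jobId);
            insert.setString(2, status);
            insert.executeUpdate();
        } catch (SQLIntegrityConstraintViolationException e) {
            try (PreparedStatement update = db.prepareStatement(
                    "UPDATE job_status SET status = ? WHERE job_id = ?")) {
                update.setString(1, status);
                update.setString(2, jobId);
                update.executeUpdate();
            }
        }
    }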
That being said, I think your architecture is unsound: if two processes are pulling messages from the queue, there is no guarantee they will write them to the database in queue order: one consumer might be a bit slower, a packet might be dropped, etc., causing the other consumer to persist the later message first, after which it is overwritten with the earlier state.
One way to guard against that is to include sequence numbers in the messages, update the row only if the sequence number is as expected, and delay the update otherwise (this is vulnerable to lost messages, though ...).
Of course, the easiest way would be to have only one consumer ...
JDBC connections are not thread safe, so there's nothing to be done about that.
"...two identical processes pulling from the queue to persist the statuses via JDBC..."
I don't understand this at all. Why two identical processes? Wouldn't it be better to have a pool of message queue listeners, each of which would handle messages landing on the queue? Each listener would have its own thread; each one would be its own transaction. A Java EE app server allows you to configure the size of the message listener pool to match the load.
I think a design that duplicates a process like this is asking for trouble.
You could also change the isolation level on the JDBC connection. If you make it SERIALIZABLE you'll get the strictest isolation guarantees, at the price of slower performance.
Since it's an asynchronous process, performance will only be an issue if you find that the listeners can't keep up with the messages landing on the queue. If that's the case, you can try increasing the size of the listener pool until you have adequate capacity to process the incoming messages.
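For reference, a one-line sketch of setting that level on a plain JDBC connection:

    import java.sql.Connection;

    // Request the strictest isolation level for this connection.
    connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);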
