Given the following scenario:
I have a system that creates, updates and deletes records. For each of these actions I need to do something (let's say write the events to a log, as a silly example), but I need to process these events for each record in order: I can't log the delete before I have logged the create, or any of the previous updates, and I can't log an update before I have logged the create.
I am investigating queues in order to preserve sequence. However, I don't really want RecordID_2 to be held up behind RecordID_14. The records don't need to be processed in sequence so much as the actions on each record do, so I don't think I can/should use one queue.
As I don't have hundreds of RecordID_XX active at the same time, I was thinking of having a queue for each RecordID_XX. If several updates came in for one RecordID, each event for that record would be added to that same queue and processed in order (i.e. Create first, Update_1 after Create is complete, Update_2 after Update_1 is complete, etc.), while events for a different record would be added to their own queue. If a queue is empty for a period of time it simply gets deleted. I realize that this may result in a queue getting one message and then being deleted because no updates arrived before the idle timeout expired, which does not seem at all efficient.
Based on Andres T Finnell's excellent answer to this question, I was thinking of doing the following:
Producer (Web Service) -> Queue_Original <- Dispatcher -> RecordID_14
                                                       -> RecordID_2
                                                       -> RecordID_8
                                                       -> RecordID_15
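A rough sketch of how that dispatcher could look with plain JMS; the "recordId" message property and the queue names are assumptions for illustration, not anything an existing broker API mandates:

import javax.jms.*;

// Hypothetical dispatcher: consumes from Queue_Original and forwards each
// event to a per-record queue, so ordering is preserved per RecordID only.
public class Dispatcher implements MessageListener {
    private final Session session;
    private final MessageProducer producer;

    public Dispatcher(Connection connection) throws JMSException {
        this.session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.producer = session.createProducer(null); // destination chosen per message
        session.createConsumer(session.createQueue("Queue_Original")).setMessageListener(this);
    }

    @Override
    public void onMessage(Message message) {
        try {
            // "recordId" is an assumed property the producer sets on each event
            String recordId = message.getStringProperty("recordId");
            producer.send(session.createQueue("RecordID_" + recordId), message);
        } catch (JMSException e) {
            e.printStackTrace(); // in a real system: DLQ or retry
        }
    }
}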
Some of the "logging" may take a long time, so I want to be able to have a few consumers listening on these queues.
Let's say I have Consumer_1 and Consumer_2 (I may want to add a Consumer_3 later to assist with growing load).
What I would like is for Consumer_1 to do a getDestinations() call,
where the broker returns [RecordID_14, RecordID_2, RecordID_8, RecordID_15].
Questions:
Is it possible for Consumer_1 to iterate through the list of queues returned by the broker, looking for the first available queue that does not have a Consumer_X connected to it, and begin processing the first message on this queue?
And then for each subsequent consumer to do the same until it finds the next queue without a consumer connected to it?
Would Advisory-Messages be the thing to use here?
Am I going down the wrong path completely? Is there a better approach to handling this scenario?
Related
Good day, guys!
We have a pretty straightforward application adapter: once every 30 seconds it reads records from a database (which we can't write to) belonging to one system, converts each of these records into an internal format, performs filtering, enrichment, ..., and finally transforms the resulting, let's say, entities into an XML format and sends them via JMS to another system. Nothing new.
Let's add some spice here: records in the database are sequential (meaning their identifiers are generated by a sequence), and when it is time to read a new bunch of records, we fetch a last-processed-sequence-number (which is stored in our internal database and updated each time the next record is processed, i.e. sent to JMS) and start reading from that record (+1).
The problem is that our customers gave us an NFR: processing a bunch of read records must not take longer than 30 seconds. Since there are a lot of steps in the workflow (some of them pretty long-running), it is possible to get a pretty big batch of records, and as we process them one by one, it can take more than 30 seconds.
Because of all the above I want to ask 2 questions:
1) Is there an approach to parallel processing of sequential data, maybe with one or several intermediate storages, or the Disruptor pattern, or something CQRS-like, or notification-based, or ..., that would work in such a system?
2) A general one. I need to store a last-processed-number and send an entity to JMS. If I save the number to the database and then some problem arises with JMS, then on an application restart my adapter will think that it successfully sent the entity, which is not true, and the entity will never be received. If I send the entity first and then try to save the number to the database and get an exception, then on restart a reprocessing will be performed, which will lead to duplicates in JMS. I'm not sure whether XA transactions will help here, or some kind of last-resource gambit...
Could somebody please share their experience or ideas?
Thanks in advance!
1) 30 seconds is a long time and you can do a lot in that time, especially with more than one CPU. Without specifics I can only say it is likely you can make it faster if you profile it and use more CPUs.
2) You can update the database before you send, and listen to the JMS queue yourself to see that the message was received by the broker.
Dimitry - I don't know the details of your problem, so I'm just going to make a set of assumptions. I hope it will trigger an idea that leads to a solution, at least.
Here goes:
Grab your list of items to process.
Store the last id (and maybe the starting id).
Process each item on a different thread (suggest using Tasks).
Record any failed item in a local failed queue.
When you grab the next bunch, ensure you process the failed queue first.
Have a way of determining a max number of retries and a way of moving/marking an item as permanently failed.
Not sure if that was what you were after. NServiceBus has a retry process where the gap between each retry gets longer up to a point, after which the message is marked as failed.
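A minimal Java sketch of the batch-with-retries idea above; Item, process() and markPermanentlyFailed() are placeholders for your own types and logic:

import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class BatchProcessor {
    record Item(long id, String payload) {}
    record FailedItem(Item item, int attempts) {}

    private static final int MAX_RETRIES = 3;
    private final ConcurrentLinkedQueue<FailedItem> failedQueue = new ConcurrentLinkedQueue<>();

    void runBatch(List<Item> batch) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // retry previously failed items before the new bunch
        FailedItem f;
        while ((f = failedQueue.poll()) != null) {
            submit(pool, f.item(), f.attempts());
        }
        for (Item item : batch) {
            submit(pool, item, 0);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    private void submit(ExecutorService pool, Item item, int attempts) {
        pool.submit(() -> {
            try {
                process(item);
            } catch (Exception e) {
                if (attempts + 1 < MAX_RETRIES) {
                    failedQueue.add(new FailedItem(item, attempts + 1));
                } else {
                    markPermanentlyFailed(item);
                }
            }
        });
    }

    private void process(Item item) { /* your long-running work */ }
    private void markPermanentlyFailed(Item item) { /* e.g. flag it in the database */ }
}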
Folks, we finally ended up with the following solution. We implemented a kind of Actor Model. The idea is the following.
There are two main (internal) database tables in our application; let's call them READ_DATA_INFO, which contains the last-read-record-number of the 'source' external system, and DUMPED_DATA, which stores metadata about each record read from the source system.

This is how it all works: every n (a configurable property) seconds, a service bus reads the last processed identifier of the source system and sends a request to the source system to get new records. If there are several new records, they are wrapped in a DumpRecordBunchMessage message and sent to a DumpActor class. This class begins a transaction which comprises two operations: update the last-read-record-number (the READ_DATA_INFO table) and save metadata about each record (the DUMPED_DATA table). Each dumped record gets the 'NEW' status; when a record is successfully processed it gets the 'COMPLETED' status, otherwise the 'FAILED' status. If the transaction commits successfully, each of those records is wrapped in a RecordMessage message class and sent to the next processing actor; otherwise the records are just skipped and will be reread after the next n seconds.
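To make the transactional dump concrete, here is a hedged JDBC sketch; the column names are guesses based on the description above, not the actual schema:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

class DumpDao {
    record RecordMeta(long id, String metadata) {}

    void dumpBunch(Connection conn, long newLastReadNumber, List<RecordMeta> records) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement movePointer = conn.prepareStatement(
                 "UPDATE READ_DATA_INFO SET LAST_READ_RECORD_NUMBER = ?");
             PreparedStatement insertMeta = conn.prepareStatement(
                 "INSERT INTO DUMPED_DATA (RECORD_ID, METADATA, STATUS) VALUES (?, ?, 'NEW')")) {
            movePointer.setLong(1, newLastReadNumber);
            movePointer.executeUpdate();
            for (RecordMeta r : records) {
                insertMeta.setLong(1, r.id());
                insertMeta.setString(2, r.metadata());
                insertMeta.addBatch();
            }
            insertMeta.executeBatch();
            conn.commit(); // only after this are RecordMessages sent onwards
        } catch (SQLException e) {
            conn.rollback(); // the bunch is skipped and reread on the next tick
            throw e;
        }
    }
}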
There are three interesting points:
application disaster recovery. What if our application is stopped somehow in the middle of processing? No problem: at application startup (in a @PostConstruct-annotated method) we find all records with the 'NEW' status in the DUMPED_DATA table and, with the help of the stored metadata, restore them from the source system.
parallel processing. Once all records are successfully dumped, they become independent, which means they can be processed in parallel. We introduced several mechanisms for parallelism and load balancing. The simplest one is a round-robin approach: each processing actor consists of a parent actor (the load balancer) and a configurable set of child actors (the workers). When a new message arrives in the parent actor's queue, it dispatches it to the next worker.
duplicate record prevention. This is the most interesting one. Let's assume that we read data every 5 seconds. If there is an actor with a long-running operation, there may be several attempts to read from the source system's database starting from the same last-read-record-number, which would potentially dump and process a lot of duplicate records. To prevent this we added a CAS-like check on the DumpActor's messages: if the last-read-record number in a message equals the one in the DUMPED_DATA table, the message is processed (no messages were processed before it); otherwise the message is rejected. Rather simple, but powerful.
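A compressed sketch of that guard; all the types and method names here are invented to match the description above:

import java.util.List;

class DumpActor {
    record Record(long id) {}
    record DumpRecordBunchMessage(long lastReadRecordNumber, List<Record> records) {}

    void onMessage(DumpRecordBunchMessage msg) {
        long lastDumped = lastRecordNumberInDumpedData(); // query the DUMPED_DATA table
        if (msg.lastReadRecordNumber() != lastDumped) {
            return; // a bunch starting from the same number was already dumped: reject
        }
        dumpAndForward(msg.records()); // the transactional dump shown earlier
    }

    private long lastRecordNumberInDumpedData() { /* select max(...) from DUMPED_DATA */ return 0L; }
    private void dumpAndForward(List<Record> records) { /* dump + send RecordMessages */ }
}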
I hope this overview will help somebody. Have a good time!
I am currently developing a system that uses a lot of async processing. The transfer of information is done using queues: one process will put info in the queue (and terminate) and another will pick it up and process it. My implementation leaves me facing a number of challenges, and I am interested in everyone's approach to these problems (in terms of architecture as well as libraries).
Let me paint the picture. Let's say you have three processes:
Process A -----> Process B
                     |
Process C <----------|
So Process A puts a message on a queue and ends; Process B picks up the message, processes it, and puts it on a "return" queue; Process C picks up the message and processes it.
How does one handle Process B not listening to or processing messages off the queue? Is there some JMS-type method that prevents a producer from submitting a message when the consumer is not active, so that Process A's submit would throw an exception?
Let's say Process C has to get a reply within X minutes, but Process B has stopped (for any reason). Is there some mechanism that enforces a timeout on a queue, i.e. a guaranteed reply within X minutes which would kick off Process C?
Can all of these matters be handled using a dead-letter queue of some sort, or should I be doing this all manually with timers and checks? I have mentioned JMS, but I am open to anything; in fact I am using Hazelcast for the queues.
Please note this is more of an architectural question, in terms of available Java technologies and methods, and I do feel this is a proper question.
Any suggestions will be greatly appreciated.
Thanks
IMHO, the simplest solution is to use an ExecutorService, or a solution based on one. This supports a queue of work and scheduled tasks (for timeouts).
It can also work in a single process. (I believe Hazelcast supports a distributed ExecutorService.)
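For example, something along these lines, using Future.get with a timeout as a single-process stand-in for the "reply within X minutes" requirement (all names are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService workerB = Executors.newSingleThreadExecutor(); // plays "Process B"
        Future<String> reply = workerB.submit(() -> doWork("payload"));
        try {
            String result = reply.get(5, TimeUnit.MINUTES); // C's reply deadline
            System.out.println("Process C got: " + result);
        } catch (TimeoutException e) {
            reply.cancel(true); // B took too long; run the timeout path instead
        } finally {
            workerB.shutdown();
        }
    }

    private static String doWork(String payload) {
        return payload.toUpperCase(); // stand-in for B's processing
    }
}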
It seems to me that the type of questions you're asking are "smells" that queues and async processing may not be the best tools for your situation.
1) That defeats the purpose of a queue. Sounds like you need a synchronous request-response process.
2) Generally speaking, Process C is not getting a reply; it's getting a message from a queue. If there is a message in the queue and Process C is ready, then it will get it. Process C could decide that the message is stale once it gets it, for example.
I think your first question has already been answered adequately by the other posters.
On your second question, what you are trying to do may be possible depending on the messaging engine used by your application. I know this works with IBM MQ. I have seen it done using the WebSphere MQ Classes for Java, but not JMS. The way it works is that when Process A puts a message on a queue, it specifies the time it will wait for a response message. If Process A fails to receive a response within the specified time, the system throws an appropriate exception.
I do not think there is a standard way in JMS to handle request/response timeouts the way you want, so you may have to use platform-specific classes like the WebSphere MQ Classes for Java.
Well, kind of the point of queues is to keep things pretty isolated.
If you're not stuck on any particular tech, you could use a database for your queues.
But first, a simple mechanism to ensure two processes are coordinated is to use a socket. If practical, simply have Process B create an open socket listener on some well-known port, and have Process A connect to that socket and monitor it. If Process B ever goes away, Process A can tell, because its socket gets shut down, and it can use that as an alert of problems with Process B.
For the B -> C problem, have a db table:
create table queue (
    id integer,
    payload varchar(100), -- or whatever you can use to indicate a payload
    status varchar(1),
    updated timestamp
)
Then, Process A puts its entry on the queue, with the current time and a status of "B". B listens on the queue:
select * from queue where status = 'B' order by updated
When B is done, it updates the queue to set the status to "C".
Meanwhile, "C" is polling the DB with:
select * from queue
where status = 'C'
   or (status = 'B' and updated < (now - threshold))
order by updated
(with the threshold being however long you want things to rot on the queue).
Finally, C updates the queue row to 'D' for done, or deletes it, or whatever you like.
The dark side is that there is a bit of a race condition here, where C might try to grab an entry while B is just starting up. You can probably get through that with a strict isolation level and some locking. Something as simple as:
select * from queue
where status = 'C'
   or (status = 'B' and updated < (now - threshold))
order by updated
FOR UPDATE
Also use FOR UPDATE for B's select. This way, whoever wins the select race will get an exclusive lock on the row.
This will get you pretty far down the road in terms of actual functionality.
You are expecting the semantics of synchronous processing from an async (messaging) setup, which is not possible. I have worked on WebSphere MQ, and normally when the consumer dies, the messages are kept in the queue forever (unless you set an expiry). Once the queue reaches its maximum depth, subsequent messages are moved to the dead-letter queue.
I've used a similar approach to create a queuing and processing system for video transcoding jobs. Basically the way it worked was:
Process A posts a "schedule" message to Arbiter Q, which adds the job into its "waiting" queue.
Process B requests the next job from Arbiter Q, which removes the next item in its "waiting" queue (subject to some custom scheduling logic to ensure that a single user couldn't flood transcode requests and prevent other users from being able to transcode videos) and inserts it into its "processing" set before returning the job back to Process B. The job is timestamped when it goes into the "processing" set.
Process B completes the job and posts a "complete" message to Arbiter Q, which removes the job from the "processing" set and then modifies some state so that Process C knows the job completed.
Arbiter Q periodically inspects the jobs in its "processing" set, and times out any that have been running for an unusually long amount of time. Process A is then free to attempt to queue up the same job again, if it wants.
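A compressed sketch of the arbiter's bookkeeping described above; all the types and names are invented for illustration:

import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

class ArbiterQ {
    record Job(String id) {}

    private final Queue<Job> waiting = new ArrayDeque<>();
    private final Map<Job, Instant> processing = new HashMap<>();
    private final Duration timeout = Duration.ofMinutes(30);

    synchronized void schedule(Job job) {            // "schedule" message from Process A
        waiting.add(job);
    }

    synchronized Job nextJob() {                     // requested by Process B
        Job job = waiting.poll();
        if (job != null) processing.put(job, Instant.now());
        return job;
    }

    synchronized void complete(Job job) {            // "complete" message from Process B
        processing.remove(job);
        // ...flip the state Process C watches, e.g. record the result URL
    }

    synchronized void sweepTimeouts() {              // run periodically
        Instant cutoff = Instant.now().minus(timeout);
        processing.entrySet().removeIf(e -> e.getValue().isBefore(cutoff));
        // timed-out jobs are dropped; Process A may schedule them again
    }
}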
This was implemented using JMX (JMS would have been much more appropriate, but I digress). Process A was simply the servlet thread which responded to a user-initiated transcode request. Arbiter Q was an MBean singleton (persisted/replicated across all the nodes in a cluster of servers) that received "schedule" and "complete" messages. Its internally managed "queues" were simply List instances, and when a job completed it modified a value in the application's database to refer to the URL of the transcoded video file. Process B was the transcoding thread. Its job was simply to request a job, transcode it, and then report back when it finished. Over and over again until the end of time. Process C was another user/servlet thread. It would see that the URL was available, and present the download link to the user.
In such a case, if Process B were to die then the jobs would sit in the "waiting" queue forever. In practice, however, that never happened. If your Process B is not running/doing what it is supposed to do then I think that suggests a problem in your deployment/configuration/implementation of Process B more than it does a problem in your overall approach.
I have a requirement and I am currently not sure whether it is possible at all. I would like to temporarily disable the delivery of a JMS message if the message contains a specified property. Currently I am using HornetQ as the message provider.
Let's make an example:
The queue contains the following three entries:
{1, "foo", "A_CATEGORY"}
{2, "bar", "B_CATEGORY"}
{9, "bof", "A_CATEGORY"}
At a certain point the app must be able to tell the HornetQ message server that messages belonging to B_CATEGORY shouldn't be delivered at the moment (e.g. because the underlying database for B_CATEGORY objects is being updated). So the message with id 2 wouldn't be delivered at the moment, while 1 and 9 would be delivered, as they have a different value for the category.
It must be possible to do this from Java code, without restarting the application at all. Is this possible?
Thanks for your help!
Just thought about an alternative design approach for this problem. Let's assume that the first queue contains messages of all categories (by the way, it isn't possible to create a queue per category, as there could be a lot of them). This 'normal' queue is configured normally (e.g. with no expiry, but with a DLQ).
Now if a listener consumes such a message and sees that it can't process messages belonging to a certain category, it puts the message into a second queue. This queue is configured with a redelivery delay and also an expiry time. If one sets the expiry time high enough (of course not so high that the queue overflows) and the redelivery delay not too short, then this should work out if there is no solution to the above question.
Of course one must calculate how many such queue entries could be created during the time a category can't be processed, and also how long such an unavailability for a category could last, so that the redelivery delay can be adjusted accordingly.
As far as I can tell, it is not possible with message-driven beans.
Similar functionality is achievable with a standard JMS consumer:
// bCategoryCanBeProcessed / bCategoryLocked are flags you maintain yourself
MessageConsumer c = session.createConsumer(destination);
while (bCategoryCanBeProcessed) {
    Message m = c.receive();
    // process messages until the B category has to be paused
}
c.close();

// now create a different consumer with a message selector ignoring "B_CATEGORY"
MessageConsumer c1 = session.createConsumer(destination, "Category <> 'B_CATEGORY'");
while (bCategoryLocked) {
    Message m = c1.receive();
    // process non-B messages while the B category is locked
}
c1.close();
// go back to the start
This example assumes you're able to tell when to process B's again based on the messages received. If not, you could resume the normal routine after a certain time. The example also shows only a single thread of execution.
Exploring this path further, you could take a look at Spring's DefaultMessageListenerContainer, Spring's take on a message-driven bean. It can do exactly what I described, but in a far more advanced way. It can be fed a message selector, and it's live: you can change it any time you want. It also handles messages in multiple threads if you set concurrentConsumers higher than 1.
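For illustration, a sketch of such a container; the destination name and selector are examples, and the assumption (per the container's documented behaviour) is that a changed selector takes effect as the container recreates its consumers:

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

class CategoryPausingContainer {
    DefaultMessageListenerContainer build(ConnectionFactory cf) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("exampleQueue"); // illustrative name
        container.setConcurrentConsumers(4);          // multiple threads
        container.setMessageListener((MessageListener) (Message m) -> {
            // process the message
        });
        container.afterPropertiesSet();
        container.start();
        return container;
    }

    void pauseBCategory(DefaultMessageListenerContainer container) {
        // change the selector on the live container to skip B_CATEGORY
        container.setMessageSelector("Category <> 'B_CATEGORY'");
    }

    void resumeBCategory(DefaultMessageListenerContainer container) {
        container.setMessageSelector(null); // back to consuming everything
    }
}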
As for your solution with redirecting messages to another queue while they cannot be processed, please notice that it generates extra traffic; you do want all your messages to be processed in the end, right? Why not leave them where they are and just fetch them at the appropriate time? You won't have to estimate the redelivery delay ahead of time, which might be hard.
You could create a core queue (or a subscription) with a filter and stop the queue using the management API. Or, if you are running embedded, you could just pause the server's Queue object.
As this would be a very custom feature, you would probably use it embedded, or make special adjustments on your own branch.
I have the following situation:
1. Read data from the database
2. Do the work ("calculation")
3. Write the result to the database
I have a thread that reads from the database and puts the generated objects into a BlockingQueue. These objects are extremely heavyweight, hence the queue, to limit the number of objects in memory.
Multiple threads take objects from the queue, perform the work, and put the results in a second queue.
The final thread takes results from the second queue and saves them to the database.
The problem is how to prevent deadlocks, e.g. the "calculation threads" need to know when no more objects will be put into the queue.
Currently I achieve this by passing references to the threads (callables) to each other and checking thread.isDone() before a poll or offer, and then whether the element is null. I also check the size of the queue; as long as there are elements in it, they must be consumed. Using take or put leads to deadlocks.
Is there a simpler way to achieve this?
One way to accomplish this is to put a "dummy" or "poison" message on the queue as the last message, once you are sure that no more tasks are going to arrive, for example after putting the message for the last row of the DB query. The producer puts the dummy message on the queue; a consumer receiving this dummy message knows that no more meaningful work is expected in this batch.
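A minimal sketch of the poison-pill pattern, with one sentinel per consumer since several calculation threads read the same queue (all names are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PoisonPillDemo {
    static final String POISON = "__END_OF_WORK__"; // sentinel, never a real payload
    static final int CONSUMERS = 3;

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        for (int i = 0; i < CONSUMERS; i++) {
            new Thread(() -> {
                try {
                    while (true) {
                        String item = queue.take(); // safe to block now
                        if (POISON.equals(item)) return; // no more work is coming
                        // ... do the calculation for item ...
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        // producer: rows from the database, then one pill per consumer
        for (int row = 0; row < 100; row++) queue.put("row-" + row);
        for (int i = 0; i < CONSUMERS; i++) queue.put(POISON);
    }
}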
Maybe you should take a look at CompletionService.
It is designed to combine executor and queue functionality in one.
Tasks which have completed execution will be available from the completion service via
completionServiceInstance.take()
You can then use another executor for step 3 (filling the DB with the results), feeding it with the results taken from the completionServiceInstance.
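A sketch of how the pieces could fit together; the calculation and the DB write are stubbed out:

import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PipelineDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        CompletionService<String> done = new ExecutorCompletionService<>(workers);

        int submitted = 0;
        for (int row = 0; row < 100; row++) {          // stand-in for the DB reader
            final int r = row;
            done.submit(() -> "result-" + r);          // the heavy calculation
            submitted++;
        }

        for (int i = 0; i < submitted; i++) {
            Future<String> f = done.take();            // blocks until one task finishes
            saveToDatabase(f.get());                   // step 3: write the result
        }
        workers.shutdown();
    }

    private static void saveToDatabase(String result) {
        // the JDBC insert would go here
    }
}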
We have a JMS queue of job statuses and two identical processes pulling from the queue to persist the statuses via JDBC. When a job status is pulled from the queue, the database is checked to see if there is already a row for the job. If so, the existing row is updated with the new status. If not, a row is created for this initial status.
What we are seeing is that a small percentage of new jobs are being added to the database twice. We are pretty sure this is because the job's initial status is quickly followed by a status update: one process gets one message, the other process gets the other. Both processes check to see if the job is new, and since it has not been recorded yet, both create a record for it.
So, my question is, how would you go about preventing this in a vendor-neutral way? Can it be done without locking the entire table?
EDIT: For those saying the "architecture" is unsound - I agree, but am not at liberty to change it.
Create a unique constraint on JOB_ID, and retry persisting the status in the event of a constraint violation exception.
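A sketch of that insert-then-update fallback over JDBC; the table and column names are made up, and note that not every driver maps duplicate-key errors to SQLIntegrityConstraintViolationException, so you may need to inspect the vendor error code instead:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

class JobStatusDao {
    void persistStatus(Connection conn, long jobId, String status) throws SQLException {
        try (PreparedStatement insert = conn.prepareStatement(
                "INSERT INTO JOB_STATUS (JOB_ID, STATUS) VALUES (?, ?)")) {
            insert.setLong(1, jobId);
            insert.setString(2, status);
            insert.executeUpdate();
        } catch (SQLIntegrityConstraintViolationException e) {
            // another consumer created the row first: retry as an update
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE JOB_STATUS SET STATUS = ? WHERE JOB_ID = ?")) {
                update.setString(1, status);
                update.setLong(2, jobId);
                update.executeUpdate();
            }
        }
    }
}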
That being said, I think your architecture is unsound: if two processes are pulling messages from the queue, there is no guarantee they will write them to the database in queue order. One consumer might be a bit slower, a packet might be dropped, ..., causing the other consumer to persist the later message first, which is then overwritten with the earlier state.
One way to guard against that is to include sequence numbers in the messages, update the row only if the sequence number is the expected one, and delay the update otherwise (this is vulnerable to lost messages, though...).
Of course, the easiest way would be to have only one consumer ...
JDBC connections are not thread safe, so there's nothing to be done about that.
"...two identical processes pulling from the queue to persist the statuses via JDBC..."
I don't understand this at all. Why two identical processes? Wouldn't it be better to have a pool of message queue listeners, each of which would handle messages landing on the queue? Each listener would have its own thread; each one would be its own transaction. A Java EE app server allows you to configure the size of the message listener pool to match the load.
I think a design that duplicates a process like this is asking for trouble.
You could also change the isolation level on the JDBC connection. If you make it SERIALIZABLE you'll ensure ACID at the price of slower performance.
Since it's an asynchronous process, performance will only be an issue if you find that the listeners can't keep up with the messages landing on the queue. If that's the case, you can try increasing the size of the listener pool until you have adequate capacity to process the incoming messages.