I need to lock one of the instances to control scheduling in a multi-instance web application.
Right now we have two running instances of the application, each with its own work scheduler. I need to avoid running the same process twice, because both instances send a message for the same processing.
Don't try to do distributed locking; it's a really hard problem to solve.
Instead, just set up your two applications to consume from the same queue and have RabbitMQ round-robin messages between them, and then neither will conflict with the work the other one is doing.
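A minimal sketch of that competing-consumers setup with the plain RabbitMQ Java client, run unchanged in both instances (the queue name and the process() step are assumptions):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class Worker {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumption: broker location

            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Both instances declare and consume from the same durable queue;
            // RabbitMQ round-robins deliveries between the consumers.
            channel.queueDeclare("work", true, false, false, null);
            channel.basicQos(1); // fair dispatch: one unacked message per consumer

            DeliverCallback onDeliver = (tag, delivery) -> {
                process(new String(delivery.getBody(), "UTF-8")); // hypothetical step
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("work", false, onDeliver, tag -> { });
        }

        private static void process(String body) {
            System.out.println("processing: " + body);
        }
    }

With basicQos(1), each instance holds at most one unacknowledged message at a time, so a busy instance simply stops receiving until it acks and the idle one picks up the slack.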
I am running a Spring Cloud Stream application, consuming messages from RabbitMQ. I want to implement a behaviour where a given queue with three consumer instances delivers exactly one message to any of them, and does not deliver the next until the current one is acked (some sort of synchronization between all consumers).
I think this can be done using consumer prefetch (https://www.rabbitmq.com/consumer-prefetch.html) with the global option enabled, but I can't find a way of achieving this using Spring Cloud Stream. Any help will be appreciated.
Spring uses a separate channel for each consumer, so a global channel prefetch won't have any effect.
Bear in mind that, even if it was supported, it would only work if the consumers were all in the same instance.
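If it helps: the Rabbit binder does expose a per-consumer prefetch property, but it applies to each consumer's own channel, not across consumers. A minimal sketch with the plain Java client of what the global flag actually scopes (connection details left at the defaults):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class PrefetchScope {
        public static void main(String[] args) throws Exception {
            Connection connection = new ConnectionFactory().newConnection();
            Channel channel = connection.createChannel();

            // Per-consumer: each consumer on this channel may hold at most
            // one unacked message.
            channel.basicQos(1, false);

            // "Global" merely shares that limit across the consumers of THIS
            // channel. It never spans other channels or other JVM instances,
            // so it cannot synchronize three separate consumer instances.
            channel.basicQos(1, true);
        }
    }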
I have a service which is deployed as a microservice, and a MongoDB collection whose documents have a few states, for example: READY, RUNNING, COMPLETED. I need to pick the documents with state READY and then process them. But with multiple instances running, there is a high possibility of processing duplicates. I have seen the thread below, but it is only concerned with having a single instance pick up tasks.
Spring boot Webservice / Microservices and scheduling
The above talks about a solution using Hazelcast and MongoDB. But what I am looking for is that all instances wait for the lock, get their own (non-duplicate) documents and process them. I have checked various documents and unfortunately I am not able to find any solution.
One of the options I thought of is to introduce Kafka, where we can assign specific tasks to specific consumers. But before opting for that I would like to see if there are any solutions which can be implemented using simple methods such as database locks. Any pointers towards this are highly appreciated.
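For the duplicate-pickup part, a simple database-level approach along the lines mentioned can work without Kafka: MongoDB's findOneAndUpdate is atomic, so each instance claims a document by flipping its state in a single server-side operation. A sketch, where the database, collection, and field names are assumptions:

    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.FindOneAndUpdateOptions;
    import com.mongodb.client.model.ReturnDocument;
    import org.bson.Document;

    import static com.mongodb.client.model.Filters.eq;
    import static com.mongodb.client.model.Updates.set;

    public class TaskClaimer {
        public static void main(String[] args) {
            MongoCollection<Document> tasks = MongoClients.create("mongodb://localhost")
                    .getDatabase("app")        // assumed database name
                    .getCollection("tasks");   // assumed collection name

            // Atomically flip READY -> RUNNING and get the claimed document back.
            // Two instances can never claim the same document, because the find
            // and the update happen as one operation on the server.
            Document claimed = tasks.findOneAndUpdate(
                    eq("state", "READY"),
                    set("state", "RUNNING"),
                    new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));

            if (claimed != null) {
                // ... process the document, then mark it done
                tasks.updateOne(eq("_id", claimed.get("_id")), set("state", "COMPLETED"));
            }
        }
    }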
I have a server application A that produces records as requests arrive. I want these records to be persisted in a database. However, I don't want application A threads to spend time persisting the records by communicating directly with the database. Therefore, I thought about using a simple producers-consumers architecture where application A threads produce records and application B threads are the consumers that persist them to the database.
I'm looking for the "best" way to share these records between applications A and B. An important requirement is that application A threads must always be able to send records to the IPC system (e.g. a queue, but it may be some other solution). Therefore, I think the records must be stored locally, so that application A threads can send records even if the network is down.
The initial idea that came to my mind was to use a local message queue (e.g. ActiveMQ). Do you think a local message queue is appropriate? If yes, do you recommend a specific message queue implementation? Note that both applications are written in Java.
Thanks, Mickael
For this type of need a queueing solution seems to be the best fit, as the producer and consumer of the events can work in isolation. There are many solutions out there, and I have personally worked with RabbitMQ and ActiveMQ. Both are equally good. I don't wish to compare their performance characteristics here, but RabbitMQ is written in Erlang, a language tailor-made for building real-time applications.
Since you're already on the Java platform, ActiveMQ might be a better option, and it is capable of producing high throughput. With a solution like this, the consumer does not have to be online all the time. Depending on how critical your event data is, you may also want persistent queues and messages, so that in the event of a message broker failure you can still recover the important "event" messages application A produced.
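To make the persistence point concrete, here is a rough sketch of application A enqueuing persistent messages on an ActiveMQ broker embedded in its own JVM via the vm:// transport, so records can be enqueued even when the network is down (the queue name and payload are placeholders):

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class RecordProducer {
        public static void main(String[] args) throws Exception {
            // vm:// starts a broker inside application A's JVM; broker.persistent
            // keeps enqueued records across restarts.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");

            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("records"); // assumed queue name

            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker failure

            producer.send(session.createTextMessage("record payload"));
            connection.close();
        }
    }

Application B then consumes from the same queue and performs the database writes at its own pace.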
If there are many applications producing events, and you later wish to scale out (horizontally scale) the broker service because it has become a bottleneck, both of the above solutions provide clustering.
Last but not least, if you want to share these events between different platforms, you may wish to exchange messages in AMQP format, a platform-independent wire-level protocol for sharing messages between heterogeneous systems; I'm not sure if this is a requirement for you. RabbitMQ and ActiveMQ both support AMQP. Both also support MQTT, a lightweight messaging protocol, but it seems that you don't wish to use MQTT.
There are other products such as HornetQ and Apache Qpid which are also production ready solutions but I have not used them personally.
I think a queueing solution is the best approach in terms of maintainability, the loose coupling of the participating applications, and performance.
We have a component called Workflow which exposes a SOAP web service. We are trying to introduce asynchronous processing in Workflow by allowing it to consume messages from WebSphere MQ. We also want to utilize multiple instances of Workflow, so there can be 4 instances of Workflow listening to the same queue. The problem here is how to make sure all Workflow instances are utilized evenly and no single instance is overloaded.
Workflow is completely written in Java. We use Spring and Hibernate extensively. The processes which will be submitting message to Workflow are written in Java. For message processing and MQ, we use Spring Integration.
The best way to ensure that no Workflow instance is overloaded is to have each individual Workflow instance refrain from consuming a message from the queue when doing so would overload it. In that case, you may not care whether the work is distributed evenly, as long as all the work gets done promptly.
If you really want to make sure all Workflow instances are used evenly even when your load is so light that you don't need all of the instances, you may need to check whether there's a way of reconfiguring WebSphere MQ to distribute messages on a FIFO basis rather than a LIFO basis, or if WebSphere MQ can't be configured that way, to switch to a different message queue. However, I don't recommend this: the system as a whole can work perfectly fine even if, at low loads, only some of the Workflow instances are utilized, with all being utilized only at high loads.
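A hedged sketch of the first suggestion, letting each instance cap its own intake with Spring's DefaultMessageListenerContainer (which Spring Integration's JMS adapters build on); the destination name and concurrency numbers are assumptions:

    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class WorkflowListenerFactory {
        public static DefaultMessageListenerContainer create(ConnectionFactory cf) {
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(cf);
            container.setDestinationName("WORKFLOW.REQUESTS"); // assumed queue name

            // Each instance consumes at most this many messages concurrently;
            // a saturated instance simply stops taking messages and the queue
            // manager delivers to whichever instance has a free consumer.
            container.setConcurrentConsumers(4);
            container.setMaxConcurrentConsumers(8);

            container.setMessageListener((MessageListener) WorkflowListenerFactory::handle);
            return container;
        }

        private static void handle(Message message) {
            // hypothetical Workflow processing step
        }
    }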
I have a web application I am rewriting that currently performs a large number of audit-data SQL writes. Every step of user interaction results in a method being executed that writes some information to a database.
This has the potential to impact users by causing the interaction to stop due to database problems.
Ideally I want to move to a message-based approach where, if data needs to be written, it is fired off to a queue and a consumer picks it up and writes it to the database. It is not essential data, and loss is acceptable if the server goes down.
I'm just a little confused whether I should use an embedded JMS queue and broker, or a plain Java queue. Or something I'm not familiar with (suggestions?)
What is the best approach?
More info:
The app uses Spring and is running on WebSphere 6. All message communication is local; it will not talk to another server.
I think logging with JMS is overkill, especially if logging is the only reason for using JMS.
Have a look at DBAppender; you can log directly to the database. If performance is your concern, you can log asynchronously using Logback.
If you still want to go the JMS way, Logback has JMS queue and topic appenders.
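If you do go the DBAppender route, here is a rough programmatic sketch of putting it behind Logback's AsyncAppender so callers never block on the database (the driver, URL, and the presence of Logback's standard logging tables are assumptions; the same wiring is more commonly done in logback.xml):

    import ch.qos.logback.classic.AsyncAppender;
    import ch.qos.logback.classic.LoggerContext;
    import ch.qos.logback.classic.db.DBAppender;
    import ch.qos.logback.core.db.DriverManagerConnectionSource;
    import org.slf4j.LoggerFactory;

    public class AuditLogging {
        public static void configure() {
            LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

            // Placeholder connection details; DBAppender expects Logback's
            // standard logging tables to already exist in this database.
            DriverManagerConnectionSource source = new DriverManagerConnectionSource();
            source.setDriverClass("org.h2.Driver");
            source.setUrl("jdbc:h2:mem:audit");
            source.setContext(ctx);
            source.start();

            DBAppender db = new DBAppender();
            db.setContext(ctx);
            db.setConnectionSource(source);
            db.start();

            // AsyncAppender queues events and writes on a background thread,
            // so the user-facing request never waits on the insert.
            AsyncAppender async = new AsyncAppender();
            async.setContext(ctx);
            async.addAppender(db);
            async.start();

            ctx.getLogger("ROOT").addAppender(async);
        }
    }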
A plain queue will suffice based on your description of the problem. You can have a fixed-size queue and discard messages if it fills too quickly, since you say they are not critical.
Things to consider:
Is this functionality required by other apps too, now or in the future?
Is the rate of producing messages so high that it could start consuming a lot of heap memory when a large number of users are logged in? This matters if messages should not be lost.
I'm not sure if that is best practice inside a Java EE container however.
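A small sketch of that fixed-size, discard-on-overflow queue with a single writer thread draining it (the capacity and the persistence call are placeholders):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class AuditQueue {
        // Fixed capacity bounds heap usage no matter how many users are active.
        private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

        public AuditQueue() {
            Thread writer = new Thread(() -> {
                try {
                    while (true) {
                        writeToDatabase(queue.take()); // blocks until a record arrives
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            writer.setDaemon(true);
            writer.start();
        }

        public void log(String record) {
            // offer() never blocks: if the queue is full the record is dropped,
            // which is acceptable since the audit data is non-essential.
            queue.offer(record);
        }

        private void writeToDatabase(String record) {
            // hypothetical JDBC insert
        }
    }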
Since you already run on WebSphere, you do have a JMS broker going (SIBus). The easiest way to kick off asynchronous work is to send JMS messages and have an MDB read them off and do the database insertions. You might have issues spawning your own threads in WebSphere, whereas an MDB can still utilise the initial context for JNDI resources.
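An EJB 3-style sketch of such an MDB (the JNDI destination name is an assumption; older WebSphere versions would configure the same MDB via deployment descriptors instead of annotations):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // The container (SIBus-backed JMS on WebSphere) handles threading and
    // transactions, so no hand-rolled threads are needed.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/auditQueue") // assumed name
    })
    public class AuditMdb implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                insertAuditRecord(((TextMessage) message).getText());
            } catch (Exception e) {
                // audit data is non-essential: log and drop instead of redelivering
            }
        }

        private void insertAuditRecord(String payload) {
            // JDBC/JPA insert goes here
        }
    }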
In a non-Java-EE case, I would have used something like a plain LinkedBlockingQueue, or any blocking queue, and just had a thread polling that queue for new messages to insert into the database.
I would use a JMS queue only if there were different servers involved. So in your case I would do it in plain pure Java with a simple in-memory queue.