I'm working on an application with a microservices architecture, using RabbitMQ as the messaging system.
Calls between microservices are asynchronous HTTP requests, and each service is subscribed to specific queues.
My question is this: given that the calls are stateless, how can I guarantee that message consumption is parallelised not by the routing key of the RabbitMQ queue but by the HTTP call itself? That is to say, for n calls every service must be able to listen only to the messages it needs.
Sorry for the ambiguity; I'll try to explain further:
The scenario is that we are in a microservice architecture and, because of the huge data in the response, the calling service receives the answer on the RabbitMQ queue its listener is subscribed to.
So let's imagine that two calls are made simultaneously and both queries start loading data into the same queue. The calling service is waiting for messages and accumulates what it receives, but it cannot differentiate between the data for caller 1 and the data for caller 2.
Is there a better implementation for the listener?
Not sure I understood the question completely, but here is what I can suggest based on the description:
If each service is hooked up to a particular listener and you don't want to associate a routing key with the queue + listener integration, then you can try using header arguments. (You can use the QueueBuilder.withArguments API to set the specific header values the queue is supposed to listen for.)
There needs to be a mechanism through which an exchange binds to a particular queue and, consequently, to a listener service.
Publisher -> Exchange ---> (with headers) binds to Queue -> Listener
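For illustration, a minimal Spring AMQP sketch of such a header-based binding could look like the following (the exchange, queue, and header names here are made up for the example):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.HeadersExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HeaderRoutingConfig {

    // Hypothetical names; replace with your own exchange/queue/header values.
    @Bean
    public HeadersExchange responseExchange() {
        return new HeadersExchange("responses.headers");
    }

    @Bean
    public Queue callerOneQueue() {
        // QueueBuilder.withArguments can also set broker-side queue arguments if needed.
        return QueueBuilder.durable("responses.caller-1").build();
    }

    @Bean
    public Binding callerOneBinding() {
        // Only messages published with header caller-id=caller-1 reach this queue.
        return BindingBuilder.bind(callerOneQueue())
                .to(responseExchange())
                .where("caller-id").matches("caller-1");
    }
}

With a setup along these lines, each caller listens on its own queue and only ever sees the responses published with its header value.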
Related
I have two microservices. Service #1 puts a certain object into a queue (a table in the database) that needs to be processed. After that, in service #2, a scheduler takes new records from the queue every few seconds and processes them, then saves the result as JSON to the database. The question is: how do I notify service #1 about the result of the processing?
You can achieve this using any message broker, such as ActiveMQ or RabbitMQ. Please refer to: https://spring.io/guides/gs/messaging-rabbitmq/
Make use of a RabbitMQ receiver to fetch the notification and trigger the appropriate processing in service #1.
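As a rough sketch of such a receiver in service #1 (the queue name and payload type are assumptions, not from your setup):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class ProcessingResultListener {

    // Placeholder queue name; service #2 would publish the processing result here.
    @RabbitListener(queues = "processing.results")
    public void onResult(String resultJson) {
        // React in service #1, e.g. update the status of the original record.
        System.out.println("Received processing result: " + resultJson);
    }
}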
You can set up Kafka for your microservice communication. To notify service #1, you can send an event from service #2 through Kafka.
Suppose service #2 sends an event named "PROCESSING_DONE"; service #1 listens for this event and can then do the further processing.
This all takes place in real time.
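A minimal sketch of such a listener with Spring for Apache Kafka might look like this (topic and group names are just placeholders):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class ProcessingDoneListener {

    // Topic and group id are assumptions for this sketch.
    @KafkaListener(topics = "PROCESSING_DONE", groupId = "service-1")
    public void handleProcessingDone(String payload) {
        // Service #1 continues its work once service #2 signals completion.
        System.out.println("Processing finished: " + payload);
    }
}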
I have a queue and a consumer written in Java for this queue. After consuming a message, we execute an HTTP call to a downstream partner; this is a one-way asynchronous call. After this request, the downstream partner sends an HTTP request back to our system with the response to the initial asynchronous call. This response is needed by the same thread that executed the initial asynchronous call, which means we need to expose an endpoint within that thread so the downstream system can call it and send the response back. I would like to know how I can implement a requirement like this.
PS: We could also receive the same response on a different web service and update a database row with it. But I'm not sure how to pause the main thread and watch that database row until the response is needed.
I hope you understand what I want with this requirement.
My response is based on some assumptions. (I didn't wait for you to respond to my comment, since I found the problem had some other interesting features anyhow.)
the downstream partner will send an HTTP request back to our system
This necessitates that you have a listening port (i.e., a server) running on this side. This server could be in the same JVM or a different one. But...
This response is needed for the same thread
This is a little confusing, because at a high level we are usually interested in reusing an object (no matter in which thread), not in reusing the thread itself programmatically. To reuse threads, you may consider using an ExecutorService. What you could try to do, I have tried to depict in this diagram.
Here are the steps:
"Queue Item Consumer" consumes item from the queue and sends the request to the downstream system.
This instance of the "Queue Item Consumer" is cached for handling the request from the downstream system.
There is a listener running at some port within the same JVM to which the downstream system sends its request.
The listener forwards this request to the "right" cached instance of "Queue Item Consumer" (you have to figure out a way for this based on your caching mechanism). May be some header has to be present in the request from the downstream system to identify the right handler on this side.
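One possible sketch of that correlation idea, assuming the downstream system echoes back some correlation id (all names here are hypothetical):

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: correlates the downstream callback with the waiting consumer.
public class PendingResponses {

    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called by the queue item consumer right before it fires the async HTTP request.
    public CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Called by the HTTP listener when the downstream system posts its response.
    public void complete(String correlationId, String responseBody) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(responseBody);
        }
    }
}

The consumer thread can then block on the returned future (with a timeout) instead of exposing its own endpoint.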
Hope this works for you.
The system consists of 1 or more clients and a single server. Each client is added manually and provided with an identifier (client.id). Each client can send messages to the server. The server can send messages to each client. All messages can be divided into two groups: with an answer and without.
For example some signatures:
CompletableFuture<Response> call(Request requestToServer)
void execute(Data dataToSend)
where Response, Request, and Data are my POJO.
So, I need some sort of RMI for implementing message communication between the server and clients.
Requirements:
The server must be able to identify a client by its id (client.id) when processing a message, but the client should not have to fill in this identifier directly before sending the message;
Messages should be POJO;
Ability to answer a message with an exception;
Event-driven handlers (like @RabbitListener) - several handlers - a Spring bean per incoming message type, with or without a return type. A handler should be resolved automatically, based on the incoming message type;
Backed by RabbitMQ or ArtemisMQ;
Single service for sending messages from the server to clients: client id should be provided when sending a message. Example: void sendToClient(int clientId, Data dataToClient).
What I've tried in order to set up this method of communication:
Spring Integration
My own gateway with a CompletableFuture works great. It can also enrich message headers with client.id, which is great. But I didn't find an appropriate way to handle an incoming message while being able to answer it. I tried publishing an ApplicationEvent, but all event handlers have a void return type. So my idea here is to get the correlationId and send a message back providing that correlationId, which doesn't look like a clean solution.
RabbitListener/RabbitTemplate
Cons:
A lot of code to set up the RabbitTemplate for sending and receiving messages;
The need to manually set up request and reply queues and bindings;
Problems resolving client.id inside a @RabbitHandler.
AmqpProxyFactoryBean
The closest result to my needs, but there are several problems that I cannot solve:
Resolving client.id in the message handler;
A single handler per service interface method.
So, I need a lightweight solution to build up communication between services, backed by a message queue. It should be easy to add an additional message type: declare the class, add a handler to the consumer, then create an object of that class and send it from the producer.
But maybe I'm completely wrong about service communication? Maybe I should not use a message queue for this purpose?
Well, using a message queue or message broker like RabbitMQ is a perfectly valid solution, but you have to think about whether it is what you actually need.
A message broker allows you to decouple the producers and consumers of messages. It also allows you to introduce a level of fault tolerance: if a consumer is temporarily unavailable, the message is not lost. However, it also means that if a reply is expected, the producer of the message might incorrectly think the other end has processed its request and keep waiting for a reply. Message brokers also often offer some kind of delivery guarantee, like once-and-only-once or at-least-once, as well as policies for handling undeliverable messages (like dead letter queues), time-to-live, and various routing capabilities (like consistent-hash-based routing). In your case, you will probably have to route by some header value carrying your client.id if your server-originated messages are to reach only one client.
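As an illustration only (the exchange and header names are assumptions), stamping the client.id header on outgoing messages with Spring AMQP could look roughly like this; each client's queue would then be bound to that exchange on its own client.id value:

import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ServerToClientSender {

    private final RabbitTemplate rabbitTemplate;

    public ServerToClientSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // "clients.headers" and "client.id" are placeholders for this sketch.
    public void sendToClient(int clientId, Object dataToClient) {
        rabbitTemplate.convertAndSend("clients.headers", "", dataToClient, message -> {
            message.getMessageProperties().setHeader("client.id", clientId);
            return message;
        });
    }
}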
If you don't care about any of these features and just want a communications fabric, something a bit different might make more sense. For example, Akka offers actor-based distributed communication.
If what is bothering you is just the cleanliness of the solution, or the amount of boilerplate, having a look at other implementations might help. You might want to look at the Reactive Streams API implementation for RabbitMQ, which should be a bit more abstract.
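To give a feel for that style, here is a rough sketch based on the reactor-rabbitmq library (queue and exchange names, and the overall setup, are simplified assumptions):

import reactor.core.publisher.Flux;
import reactor.rabbitmq.OutboundMessage;
import reactor.rabbitmq.RabbitFlux;
import reactor.rabbitmq.Receiver;
import reactor.rabbitmq.Sender;

public class ReactiveRabbitSketch {

    public static void main(String[] args) {
        Sender sender = RabbitFlux.createSender();
        Receiver receiver = RabbitFlux.createReceiver();

        // Publish one message to the default exchange, routed to the "requests" queue.
        sender.send(Flux.just(new OutboundMessage("", "requests", "hello".getBytes())))
              .subscribe();

        // Consume messages from the queue as a reactive stream.
        receiver.consumeAutoAck("requests")
                .subscribe(delivery -> System.out.println(new String(delivery.getBody())));
    }
}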
I am creating a logging service. The architecture is as follows: a Node.js API receives requests from a website (the requests can be GET or POST) and sends each request to an SQS queue. A Java worker listens to SQS for messages: if a message is a POST, I write the data to a Cassandra database; if it is a GET, I read the necessary data from Cassandra, do some computations, and return it to the Node.js API, which in turn returns it to the client.
The read part is a little blurry to me. Is it possible to return the data as a message in SQS? (I read that a single message can contain only 256 KB of data, and the data being read can be larger than that.) I will be running multiple instances of the Node API, so is there a way to know which instance I need to return the data to? Should I create a Java API that receives read requests from the Node API (bypassing SQS)? What is the best way to do this?
Should I use a message queue to retrieve analytics data, or should I just connect to the service that prepares the data and receive the data from there?
You don't really return data to Node.js from SQS; Node.js has to be polling the queue and needs to act on messages if and when they arrive.
So you could use two queues: one for incoming messages (your POSTs) and a second queue for your outgoing messages (your GETs). But in all cases, the process that is going to consume those messages needs to poll for them: you can't 'push' messages to a listener, the listener needs to 'pull' them.
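To make the polling idea concrete, here is a rough sketch of the Java worker using the AWS SDK for Java (queue URLs and the processing logic are placeholders, not your actual setup):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class SqsWorker {

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String requestQueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/requests";
        String responseQueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/responses";

        while (true) {
            // Long polling: wait up to 20 seconds for messages instead of hammering the API.
            ReceiveMessageRequest request = new ReceiveMessageRequest(requestQueueUrl)
                    .withWaitTimeSeconds(20)
                    .withMaxNumberOfMessages(10);

            for (Message message : sqs.receiveMessage(request).getMessages()) {
                String result = process(message.getBody());  // e.g. read/write Cassandra
                sqs.sendMessage(responseQueueUrl, result);    // publish the reply for pollers
                sqs.deleteMessage(requestQueueUrl, message.getReceiptHandle());
            }
        }
    }

    private static String process(String body) {
        return "processed: " + body;
    }
}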
I want to make my app resilient to the connection issues that can happen when sending messages to RabbitMQ. I want to get hold of all unsent messages, store them, and send them later, when RabbitMQ becomes available again.
Looking at the official documentation, I didn't manage to figure out the difference between the return callback and the retry callback. I understand that the retry callback is invoked when the retry template exhausts the configured policy, but I don't find this very useful, since the context doesn't contain the message itself.
Based on the "replyCode" passed to the "returnedMessage" method of the ReturnCallback interface one could easily determine further behavior, but I haven't figured out when this callback is invoked.
One way to go is the ConfirmCallback, but then there is the issue of additional logic to keep CorrelationData in sync with the messages and their statuses.
So, apart from the ConfirmCallback interface, is there any easier way to keep track of messages that are not successfully sent through RabbitMQ using Spring AMQP?
Returns are when the broker returns a message because it's undeliverable (no matching bindings on the exchange to which the message was published, and the mandatory bit is set).
Confirms are when the broker sends an ack back to the publisher, indicating that a message was successfully routed.
Retry is outside the broker realm so it is likely what you need for your use case.
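For reference, a rough Spring AMQP sketch wiring up both callbacks might look like this (names are placeholders, and the exact setters vary a bit between Spring AMQP versions):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.CorrelationData;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PublisherSetup {

    public static RabbitTemplate rabbitTemplate() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        // Enable correlated publisher confirms and returned (unroutable) messages.
        connectionFactory.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
        connectionFactory.setPublisherReturns(true);

        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setMandatory(true); // required so unroutable messages are returned

        // Confirm: the broker accepted (ack) or refused (nack) the message.
        template.setConfirmCallback((correlationData, ack, cause) -> {
            if (!ack) {
                // Persist or requeue the message identified by correlationData for a later retry.
            }
        });

        // Return: the message reached the broker but could not be routed to any queue.
        template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) ->
                System.out.println("Returned: " + replyText + " (" + replyCode + ")"));

        return template;
    }

    public static void main(String[] args) {
        rabbitTemplate().convertAndSend("some.exchange", "some.key", "payload",
                new CorrelationData("msg-1"));
    }
}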