Handling JMS-acknowledgements in Camel - java

We are implementing a distributed system based, among other things, on JMS and REST calls. Currently we are looking at two components, A and B. Component A reads from an ActiveMQ queue (via Camel's from), processes the message and sends it on to B via REST (this is done via Camel's .to/.inOnly). B processes the message further.
In A this looks roughly like this:
from(activeMqInQueue)
    .process(/* someBean */)
    .inOnly(/* URI of B */)
    .end();
Some time later, B will make an async call (decoupled by a SEDA queue) back to A. From Camel's perspective, the two calls have nothing to do with each other, but from our point of view it is important to acknowledge the original message once we get an answer from B. We do have some form of handler that can relate the outgoing and incoming calls; what we are lacking is a way to explicitly acknowledge the original message.
How is this done or what pattern better suits our needs?

Related

When returning a Mono from a ReactiveMongo repository does the database call start in the background?

I am new to Spring WebFlux and would like to know what happens when a Mono is returned from a ReactiveMongo repository.
Does the database call begin immediately once the Mono is returned?
For example:
public void serviceMethod() {
var mono = reactiveMongoRepo.findItemById("123");
}
When the Mono is returned, does the database call start instantly, or do I have to subscribe to the Mono to start the database call?
What actually happens under the hood?
When you declare your reactive code, it is just that: a declaration. You are basically describing what you want your flow to look like when someone subscribes. This is called assembly time in Reactor terminology.
When someone actually subscribes (for instance an external client), we enter what is called subscription time. The subscriber triggers a signal that is propagated up the stream with the purpose of finding a publisher.
The subscriber basically asks in reverse up the chain (upstream), "are you a publisher?" The chain might answer "no", and the signal will be propagated again up the chain (upstream), asking "are you a publisher?" until it finds one: for instance a database call, a Flux, a Mono, etc.
Once it has found a publisher, we are in execution time from there on. The data will start flowing through the pipeline (in the more natural top-to-bottom order, from upstream to downstream).
You can read more about it in the blog series Flight of the flux written by the reactor developer Simon Baslé which is an excellent read to get to know a little bit more about the inner workings of the reactor library.
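The three phases described above (assembly, subscription, execution) can be sketched without Reactor at all, using the JDK's built-in java.util.concurrent.Flow API. This is a dependency-free illustration, not Reactor's implementation; lazyMono is a made-up name. The key point is that the supplied work does not run when the publisher is assembled, only when a subscriber requests data:

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

class LazyMonoDemo {

    // Assembly time: we only describe the work; source.get() is not called yet.
    static <T> Flow.Publisher<T> lazyMono(Supplier<T> source) {
        return subscriber -> subscriber.onSubscribe(new Flow.Subscription() {
            private final AtomicBoolean done = new AtomicBoolean();

            @Override
            public void request(long n) {
                // Execution time: the request signal from the subscriber
                // is what actually triggers the work.
                if (n > 0 && done.compareAndSet(false, true)) {
                    subscriber.onNext(source.get());
                    subscriber.onComplete();
                }
            }

            @Override
            public void cancel() {
                done.set(true);
            }
        });
    }
}
```

Calling lazyMono(...) produces no side effect at all; only subscribing and requesting makes the supplier run, which mirrors the statement that nothing happens until someone subscribes.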
No. Nothing happens until you subscribe to the chain, or, more likely when using Spring WebFlux, until the framework does.
https://projectreactor.io/docs/core/release/reference/#reactive.subscribe
In Reactor, when you write a Publisher chain, data does not start pumping into it by default. Instead, you create an abstract description of your asynchronous process (which can help with reusability and composition).
By the act of subscribing, you tie the Publisher to a Subscriber, which triggers the flow of data in the whole chain. This is achieved internally by a single request signal from the Subscriber that is propagated upstream, all the way back to the source Publisher.

RabbitMQ Microservices - Parallel processing

I'm working on an application with a microservices architecture, using RabbitMQ as the messaging system.
Calls between microservices are asynchronous HTTP requests, and each service is subscribed to specific queues.
My question: given that the calls are stateless, how can message consumption be parallelised not by routing key on the RabbitMQ queue but by the HTTP call itself? That is to say, for n calls, every service must be able to listen only to the messages it needs.
Sorry for the ambiguity, I'm trying to explain further:
The scenario is that we are in a microservice architecture and, due to huge data responses, the calling service receives its answers on a listener RabbitMQ queue.
So let's imagine that two calls are made simultaneously and both queries start loading data into the same queue: the calling service is waiting for messages and collects the received messages, but cannot differentiate between the data of caller 1 and caller 2.
Is there a better implementation for the listener?
Not sure I understood the question completely, but here is what I can suggest based on the description:
If each service is hooked to a particular listener and you don't want to associate a routing key for the queue + listener integration, then you can try header arguments instead. (You can use the QueueBuilder.withArguments API to set the specific header values the queue is supposed to listen to.)
There needs to be a mechanism through which an exchange will bind to a particular queue and consequently to a Listener service.
Publisher -> Exchange ---> (with headers) binds to Queue -> Listener
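A sketch of that binding with Spring AMQP might look like the following. This is an assumption-laden illustration, not code from the question: the queue name, exchange name, and the "callerId" header are all made up, and note that with a headers exchange the header matching is declared on the binding rather than on the queue itself:

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.HeadersExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.Map;

@Configuration
class HeaderBindingConfig {

    @Bean
    Queue caller1Queue() {
        return QueueBuilder.durable("caller1-queue").build();
    }

    @Bean
    HeadersExchange responsesExchange() {
        return new HeadersExchange("responses");
    }

    // Only messages published with header callerId=caller-1 reach this queue,
    // so each caller can listen to its own replies.
    @Bean
    Binding caller1Binding() {
        return BindingBuilder.bind(caller1Queue())
                .to(responsesExchange())
                .whereAll(Map.of("callerId", "caller-1"))
                .match();
    }
}
```

With one such queue and binding per caller, two simultaneous calls no longer share a queue, which addresses the "cannot differentiate caller 1 from caller 2" problem described above.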

Can I make a synchronous/blocking call via RabbitMQ using Spring Integration?

Imagine I have two Java apps, A and B, using Spring Integration's RabbitMQ support for communication.
Can I make a synchronous/blocking call from A to B? If so, how (roughly)?
Ideally, A has a Spring Integration Gateway which it invokes via e.g. a method called
Object doSomething(Object param)
Then it blocks while the Gateway sends on the message via RabbitMQ to a ServiceActivator on B, and B returns the return value, which eventually becomes the result of the doSomething() method.
It seems this may be possible, but the docs and other Stack Overflow questions don't seem to address this directly.
Many thanks!
Yes, that's true: the Gateway pattern implements exactly that requirement.
Your client blocks waiting for the result from the gateway's method, but the underlying integration flow can be async, parallelised, etc.
Spring Integration AMQP provides the <int-amqp:outbound-gateway> for the blocking request/reply scenarios with RabbitMQ.
Of course, the other, receiving side must take care of correlation and send the reply to the appropriate replyToAddress from the request message. The simplest way is to use an <int-amqp:inbound-gateway> there.
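Wired together, the two sides might look roughly like this. A hedged sketch only: channel names, the exchange name, and the service bean are illustrative, not from the question:

```xml
<!-- Application A: the client calls RemoteService.doSomething(...) and blocks. -->
<int:gateway id="remoteGateway"
             service-interface="com.example.RemoteService"
             default-request-channel="toRabbit"/>

<int-amqp:outbound-gateway request-channel="toRabbit"
                           exchange-name="rpc"
                           routing-key="doSomething"
                           amqp-template="amqpTemplate"/>

<!-- Application B: receives the request, invokes the service, and the
     inbound gateway sends the return value back as the AMQP reply. -->
<int-amqp:inbound-gateway request-channel="fromRabbit"
                          queue-names="doSomething"
                          connection-factory="connectionFactory"/>

<int:service-activator input-channel="fromRabbit"
                       ref="myService"
                       method="handle"/>
```

The gateways handle the replyTo address and correlation headers for you, so the Object doSomething(Object param) call on A simply returns B's result.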

Handling Failed calls on the Consumer end (in a Producer/Consumer Model)

Let me try explaining the situation:
There is a messaging system that we are going to incorporate which could either be a Queue or Topic (JMS terms).
1) Producer/Publisher: There is a service A. A produces messages and writes them to a Queue/Topic.
2) Consumer/Subscriber: There is a service B. B asynchronously reads messages from the Queue/Topic. B then calls a web service and passes the message to it. The web service takes a significant amount of time to process the message. (This action need not be processed in real time.)
The Message Broker is Tibco
My intention is: do not miss processing any message from A, and re-process a message at a later point in time in case the processing failed the first time (perhaps as a batch).
Question:
I was thinking of writing the message to a DB before making a webservice call. If the call succeeds, I would mark the message processed. Otherwise failed. Later, in a cron job, I would process all the requests that had initially failed.
Is writing to a DB a typical way of doing this?
Since you have a failure callback, you can just requeue your Message and have your Consumer/Subscriber pick it up and try again. If it failed because of some problem in the web service and you want to wait X time before trying again, then you can either schedule the web service call for a later date for that specific Message (look into ScheduledExecutorService) or do as you described and use a cron job with some database entries.
If you only want it to try again once per message, then keep an internal counter either with the Message or within a Map<Message, Integer> as a counter for each Message.
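The Map-based counter idea can be sketched in plain Java. A minimal sketch under assumptions: the message is reduced to a generic key, the handler is a simple predicate, and maxAttempts is an invented parameter:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of the Map<Message, Integer> retry-counter idea: count failed
// attempts per message and stop retrying after a configured maximum.
class RetryTracker<M> {
    private final Map<M, Integer> attempts = new HashMap<>();
    private final int maxAttempts;

    RetryTracker(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    // Returns true on success. On failure, records the attempt.
    boolean process(M message, Predicate<M> handler) {
        if (handler.test(message)) {
            attempts.remove(message);              // success: drop the counter
            return true;
        }
        attempts.merge(message, 1, Integer::sum);  // failure: count the attempt
        return false;
    }

    boolean shouldRetry(M message) {
        return attempts.getOrDefault(message, 0) < maxAttempts;
    }
}
```

A scheduler or cron job would then call process(...) only while shouldRetry(...) is still true, and route exhausted messages elsewhere (e.g. a dead-letter store).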
Crudely put, that is the technique, although there could be out-of-the-box solutions available which you can use. Typical ESB solutions support reliable messaging. Have a look at Mule ESB or Apache ActiveMQ as well.
It might be interesting to take advantage of the EMS platform you already have (example 1) instead of building a custom solution (example 2).
But it all depends on the implementation language:
Example 1 - EMS is the "keeper" : If I were to solve such problem with TIBCO BusinessWorks, I would use the "JMS transaction" feature of BW. By encompassing the EMS read and the WS call within the same "group", you ask for them to be both applied, or not at all. If the call failed for some reason, the message would be returned to EMS.
Two problems with this solution : You might not have BW, and the first failed operation would block all the rest of the batch process (that may be the desired behavior).
FYI, I understand it is possible to use such feature in "pure java", but I never tried it : http://www.javaworld.com/javaworld/jw-02-2002/jw-0315-jms.html
Example 2 - A DB is the "keeper" : If you go with your "DB" method, your queue/topic customer continuously drops insert data in a DB, and all records represent a task to be executed. This feels an awful lot like the simple "mapping engine" problem every integration middleware aims to make easier. You could solve this with anything from a custom java code and multiples threads (DB inserter, WS job handlers, etc.) to an EAI middleware (like BW) or even a BPM engine (TIBCO has many solutions for that)
Of course, there are also other vendors... EMS is a JMS standard implementation, as you know.
I would recommend using the built-in EMS (and JMS) features, as "guaranteed delivery" is what it's built for ;) - no DB needed at all...
You need to be aware that the first decisions will be:
do you need to deliver in order? (then only one JMS session and client-acknowledge mode should be used)
how often and at what recurring intervals do you want to retry? (so as not to create an infinite loop for a message that couldn't be processed by that web service)
This is independent whatever kind of client you use (TIBCO BW or e.g. Java onMessage() in a MDB).
For "in order" delivery: make sure only one JMS session processes the messages and that it uses client-acknowledge mode. After you process a message successfully, you need to acknowledge it, either by calling the JMS API's acknowledge() method or, in TIBCO BW, by executing the "commit" activity.
In case of an error you don't execute the acknowledge, so the message will be put back on the queue for redelivery (you can see how many times it was redelivered in the JMS header).
EMS's explicit-client-acknowledge mode also lets you do the same if order is not important and you need a few client threads to process the messages.
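In plain JMS, client-acknowledge consumption looks roughly like the following. A sketch only, not runnable as-is: it needs a JMS provider (EMS, ActiveMQ, ...) on the classpath, the queue name is invented, callWebService is a hypothetical placeholder, and error handling is reduced to the essentials:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

class ClientAckConsumer {

    void consume(ConnectionFactory connectionFactory) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        // CLIENT_ACKNOWLEDGE: the message stays on the queue until we ack it.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("in.queue"));

        consumer.setMessageListener(message -> {
            try {
                callWebService(message);   // hypothetical processing step
                message.acknowledge();     // success: remove from the queue
            } catch (Exception e) {
                // No acknowledge: the broker redelivers the message
                // (the JMSXDeliveryCount header tracks how often).
            }
        });
        connection.start();
    }

    private void callWebService(Message message) { /* ... */ }
}
```

The redelivery behaviour on failure is then governed by the broker-side settings discussed below (maximum redeliveries, redelivery delay).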
For controlling how often the message gets processed, use:
the max-redelivery property of the EMS queue (e.g. you could put the message in the dead-letter queue after x redeliveries so it does not hold up other messages)
the redelivery delay, to put a "pause" in between redeliveries. This is useful in case the web service needs to recover after a crash and should not get stormed by the same message again and again at high frequency through redelivery.
Hope that helps
Cheers
Seb

Apache Camel: What marches messages along?

On an ESB like Apache Camel, what mechanism is actually "marching" (pulling/pushing) messages along the routes from endpoint to endpoint?
Does the Camel RouteBuilder just compose a graph of Endpoints and Routes and know which destination/next Endpoint to pass a message to after it visits a certain Endpoint, or do the Endpoints themselves know the next destination for the message they have processed?
Either way, I'm confused:
If it is the RouteBuilder that knows the "flow" of messages through the system, then this RouteBuilder would need to know the business logic of when Endpoint A should pass the message next to Endpoint B versus Endpoint C, but in all the Camel examples I see, this business logic doesn't exist; and
It seems to be that putting that kind of "flow" business logic in the Endpoints themselves couples them together and defeats some of the basic principles of SOA/ESB/EIP, etc.
Under the hood I believe camel is constructing a pure graph where each node is a Camel endpoint/processor, and each edge is a route between two endpoints (a source and a destination). This graph is precisely what RouteBuilder is building when you invoke its API. When you go to start() a Camel route, the graph is most likely validated and translated into a series of Runnables that need to be executed, and probably uses some kind of custom Executor or thread management to handle these Runnables.
Thus, the execution of the Runnables (processors processing messages as they arrive) are handled by this custom Executor. This is the mechanism that "marches messages along", although the order in which the tasks are queued up is driven by the overarching structure of the graph composed by RouteBuilder.
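A toy model of that idea in plain Java (explicitly not Camel's real internals): the route, not the endpoints, holds the ordering, and a driver walks each exchange through the processor chain on an executor. All names here are invented for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

// Each node in the graph is a processor; the exchange is just mutable state
// carried along the route.
interface Processor {
    void process(Map<String, Object> exchange);
}

// The route knows the order of the steps; the individual processors do not
// know their neighbours, which keeps them decoupled.
class ToyRoute {
    private final List<Processor> steps;

    ToyRoute(List<Processor> steps) {
        this.steps = steps;
    }

    // The "marching" mechanism: a driver pushes the exchange through
    // every step in order, scheduled on the supplied executor.
    CompletableFuture<Map<String, Object>> send(Map<String, Object> exchange,
                                                ExecutorService executor) {
        return CompletableFuture.supplyAsync(() -> {
            for (Processor step : steps) {
                step.process(exchange);
            }
            return exchange;
        }, executor);
    }
}
```

This is the shape of the answer to the question: flow knowledge lives in the route definition (the RouteBuilder analogue), while each processor only sees the exchange handed to it.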
I suggest reading this Q&A first:
What exactly is Apache Camel?
... and the links it refers to, on some more background about Apache Camel.
The business logic can be any kind of logic, such as a Java bean (POJO). And Camel allows you to access your business logic in a loosely coupled fashion. See for example these links:
http://camel.apache.org/service-activator.html
http://camel.apache.org/bean-integration.html
http://camel.apache.org/bean.html
http://camel.apache.org/bean-binding.html
http://camel.apache.org/hiding-middleware.html
http://camel.apache.org/spring-remoting.html