I am a bit confused about Camel routes and two of their endpoints: Direct and SEDA. Let's say I have a route like this:
public void configure()
{
    from("direct:services")
        .process(/* some processing here */)
        .to("http://ThirdPartyServers");
}
On top of this I have a REST web service which receives several requests, does some processing and then hands the message over to this route to get a response from some third-party servers. I have instantiated the Camel context through the Spring framework like this:
<camelContext id="appCamelContext" xmlns="http://camel.apache.org/schema/spring"
              trace="true" streamCache="true">
    <propertyPlaceholder id="properties" location="classpath:camel.properties" />
    <camel:routeBuilder ref="oneRouteBuilder" />
    <camel:routeBuilder ref="photosRouteBuilder" />
</camelContext>
Now the question: in an instant I send multiple different messages to this route. The Camel documentation says that the direct component is invoked in a single thread and is synchronous. So will all the messages be processed concurrently, or will they be processed one by one?
Also, if I change the direct component to seda, will it make any difference?
TIA
Update [after Petter's answer]:
Although Petter's answer has clarified things, I have a new doubt regarding the same Direct and SEDA components. Let's say my route is now like this:
public void configure() {
    from("direct:services")
        .choice()
            .when(predicate1) // some predicate here - Predicate1
                .to("seda:predicate1")
            .otherwise()
                .to("seda:fallback")
        .end();

    from("seda:predicate1")
        .process(exchange -> { /* some processing */ })
        .to("http://ThirdPartyServers");

    from("seda:fallback")
        .process(exchange -> { /* some processing */ })
        .to("jms:fallbackqueue");
}
Now if I send 5 messages to the direct component from different threads, these messages will be processed concurrently. As you can see in the above route, the direct component sends the messages on to a seda component. So will there be only one seda consumer thread processing all 5 of the different messages, meaning that in the end the messages are processed one by one?
The direct component runs in the thread of the caller. Simplified, it's like a regular Java method invocation. Multiple messages can run through the route concurrently as long as multiple threads are calling the direct endpoint.
When invoking the SEDA (or VM) endpoint, your message is put on an in-memory queue. Another thread (in the route) picks messages off the queue one by one and processes them. You can configure how many threads the seda consumer should have by setting the concurrentConsumers option. By default there is one consuming thread, which makes sure only one message at a time is processed no matter how many threads are producing to this route.
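For example, a minimal sketch of the update's seda routes with the consumer side widened to several threads (this assumes it sits inside the same RouteBuilder.configure(); the value 5 is arbitrary):

// With concurrentConsumers=5, up to five threads drain the in-memory seda queue,
// so the messages handed over by the direct route can be processed in parallel.
from("seda:predicate1?concurrentConsumers=5")
    .process(exchange -> { /* some processing */ })
    .to("http://ThirdPartyServers");

// Without the option (the default is 1), a single consumer thread drains the
// queue and the messages are processed one by one.
from("seda:fallback")
    .process(exchange -> { /* some processing */ })
    .to("jms:fallbackqueue");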
It boils down to your statement:
"in an instant I send multiple different messages to this route"
If this means from different threads, the messages run through the direct route concurrently; if they are sent from one single thread in a sequence, they are processed one after the other.
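As a rough illustration of the "different threads" case (a sketch only; it assumes a reference to the CamelContext is available and uses a hypothetical thread pool):

// Each sendBody() call runs the "direct:services" route on the calling thread,
// so these five messages are processed concurrently by five pool threads.
ProducerTemplate template = camelContext.createProducerTemplate();
ExecutorService pool = Executors.newFixedThreadPool(5);
for (int i = 0; i < 5; i++) {
    final int n = i;
    pool.submit(() -> template.sendBody("direct:services", "message-" + n));
}
pool.shutdown();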
We have a larger multi-service Java Spring app that declares about 100 exchanges and queues in RabbitMQ on startup. Some are declared explicitly via beans, but most of them are declared implicitly via @RabbitListener annotations.
@Component
@RabbitListener(
    bindings = @QueueBinding(key = {"example.routingkey"},
        exchange = @Exchange(value = "example.exchange", type = ExchangeTypes.TOPIC),
        value = @Queue(name = "example_queue", autoDelete = "true", exclusive = "true")))
public class ExampleListener {

    @RabbitHandler
    public void handleRequest(final ExampleRequest request) {
        System.out.println("got request!");
    }
}
There are quite a lot of these listeners in the whole application.
The services of the application sometimes talk to each other via RabbitMQ, so take as an example a publisher that publishes a message to the example exchange that the above ExampleListener is bound to.
If that publish happens too early in the application lifecycle (but AFTER all the Spring lifecycle events are through, so after ApplicationReadyEvent and ContextStartedEvent), the binding of the example queue to the example exchange has not yet happened, and the very first publish-and-reply chain will fail. In other words, the above ExampleListener would not print "got request!".
We "fixed" this problem by simply waiting 3 seconds before we start sending any RabbitMQ messages, to give it time to declare all queues, exchanges and bindings, but this seems like a very suboptimal solution.
Does anyone else have advice on how to fix this problem? It is quite hard to recreate, as I would guess that it only occurs with a large number of queues/exchanges/bindings that RabbitMQ cannot create fast enough. Forcing Spring to synchronize this creation process and wait for confirmation from RabbitMQ would probably fix it, but as I see it there is no built-in way to do this.
Are you using multiple connection factories?
Or are you setting usePublisherConnection on the RabbitTemplate? (which is recommended, especially for a complex application like yours).
Normally, a single connection is used and all users of it will block until the admin has declared all the elements (it is run as a connection listener).
If the template is using a different connection factory, it will not block because a different connection is used.
If that is the case, and you are using the CachingConnectionFactory, you can call createConnection().close() on the consumer connection factory during initialization, before sending any messages. That call will block until all the declarations are done.
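For example, a minimal sketch of that kind of warm-up step (the class name is made up, and it assumes Spring Boot-style events and that the consumer side uses a CachingConnectionFactory):

@Component
public class RabbitDeclarationWarmUp {

    private final CachingConnectionFactory consumerConnectionFactory;

    public RabbitDeclarationWarmUp(CachingConnectionFactory consumerConnectionFactory) {
        this.consumerConnectionFactory = consumerConnectionFactory;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void declareBeforePublishing() {
        // Opening a connection runs the RabbitAdmin declarations (they are registered
        // as a connection listener); the call blocks until all queues, exchanges and
        // bindings are declared, so publishing afterwards is safe.
        consumerConnectionFactory.createConnection().close();
    }
}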
I have the following integration flow:
[Integration Flow diagram]
If an exception is thrown during the second split, inside the parser, I would like to route that message to an error channel. Is that possible somehow?
One way is to make the output channel of the splitter an ExecutorChannel or a QueueChannel. That way every split item is processed in a separate thread, and you can then apply any of the available error handling options for those async channels (a short sketch follows the docs link below).
See docs for more info: https://docs.spring.io/spring-integration/docs/5.2.0.RELEASE/reference/html/error-handling.html#error-handling
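A minimal Java DSL sketch of that first option; the flow, channel, bean and method names here are made up for illustration:

@Bean
public IntegrationFlow parsingFlow(TaskExecutor taskExecutor) {
    return IntegrationFlows.from("inputChannel")
            .split()
            // every split item is dispatched to its own thread via an ExecutorChannel;
            // exceptions thrown downstream end up on the global errorChannel by default
            .channel(c -> c.executor(taskExecutor))
            .handle("parser", "parse")
            .get();
}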
Another one is to use a .gateway() with its errorChannel option downstream after the second splitter, so every item is going to be processed in isolation again.
Also an ExpressionEvaluatingRequestHandlerAdvice (maybe together with a RequestHandlerRetryAdvice) can be used downstream on the specific endpoint to deal with its own exceptions: https://docs.spring.io/spring-integration/docs/5.2.0.RELEASE/reference/html/messaging-endpoints.html#message-handler-advice-chain
What's the difference between SimpleMessageListenerContainer and DirectMessageListenerContainer in Spring AMQP? I checked both of their documentation pages; SimpleMessageListenerContainer has almost no explanation of its inner workings, and DirectMessageListenerContainer has the following explanation:
The SimpleMessageListenerContainer is not so simple. Recent changes to the rabbitmq java client has facilitated a much simpler listener container that invokes the listener directly on the rabbit client consumer thread. There is no txSize property - each message is acked (or nacked) individually.
I don't really understand what these mean. It says listener container that invokes the listener directly on the rabbit client consumer thread. If so, then how does SimpleMessageListenerContainer do the invocation?
I wrote a small application using DirectMessageListenerContainer and, just to see the difference, switched to SimpleMessageListenerContainer, but as far as I can see there was no difference on the RabbitMQ side. On the Java side the difference was in the methods (SimpleMessageListenerContainer provides more) and in the logs (DirectMessageListenerContainer logged more).
I would like to know the scenarios to use each one of those.
The SMLC has a dedicated thread for each consumer (concurrency) which polls an internal queue. When a new message arrives for a consumer on the client thread, it is put in the internal queue and the consumer thread picks it up and invokes the listener. This was required with early versions of the client to provide multi-threading. With the newer client that is not a problem so we can invoke the listener directly (hence the name).
There are a few other differences aside from txSize.
See Choosing a Container.
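As a rough side-by-side illustration of configuring the two containers (the bean names and queue name are made up; this is only a sketch, not a recommended configuration):

@Bean
public SimpleMessageListenerContainer simpleContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("example_queue");
    container.setConcurrentConsumers(3); // three dedicated consumer threads, each polling an internal queue
    container.setMessageListener((MessageListener) message -> System.out.println("SMLC got " + message));
    return container;
}

@Bean
public DirectMessageListenerContainer directContainer(ConnectionFactory connectionFactory) {
    DirectMessageListenerContainer container = new DirectMessageListenerContainer(connectionFactory);
    container.setQueueNames("example_queue");
    container.setConsumersPerQueue(3); // listener is invoked directly on the rabbit client threads
    container.setMessageListener((MessageListener) message -> System.out.println("DMLC got " + message));
    return container;
}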
In the DirectMessageListenerContainer some of the logic is moved into the AMQP client itself, as opposed to living in the listener container as it does in the SimpleMessageListenerContainer.
This is what the Javadoc in SimpleMessageListenerContainer says for setTxSize():
/**
 * Tells the container how many messages to process in a single transaction (if the channel is transactional). For
 * best results it should be less than or equal to {@link #setPrefetchCount(int) the prefetch count}. Also affects
 * how often acks are sent when using {@link AcknowledgeMode#AUTO} - one ack per txSize. Default is 1.
 * @param txSize the transaction size
 */
An ack is sent every time txSize messages have been processed. This is controlled in this method:
private boolean doReceiveAndExecute(BlockingQueueConsumer consumer) throws Throwable { //NOSONAR

    Channel channel = consumer.getChannel();

    for (int i = 0; i < this.txSize; i++) {
        logger.trace("Waiting for message from consumer.");
        Message message = consumer.nextMessage(this.receiveTimeout);
        // ...
In the newer implementation, each message is acked directly on the consumer thread, and based on the transactional model (single acks or publisher confirms) the consumer sends acknowledgments to RabbitMQ.
I'm trying to implement the following JMS message flow using Camel routes:
There is a topic published on an external message broker, and my program listens for messages on this topic. Each incoming message triggers a specific route to be executed ONE TIME ONLY (some kind of ad-hoc, disposable route). This route is supposed to move messages between queues within my internal message broker based on some selector (get all messages from queue A matching a given selector and move them to queue B). I'm only starting with Camel, and so far I have figured out just the first part, listening for messages on the topic:
<bean id="somebroker" class="org.apache.camel.component.jms.JmsComponent"
p:connectionFactory-ref="rmAdvisoriesConnectionFactory"/>
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
<endpoint id="jms" uri="somebroker:topic:sometopic"/>
<route id="routeAdvisories">
<from ref="jms"/>
<to>???</to>
</route>
</camelContext>
Can you suggest a destination for these advisory messages? I need to read some of their JMS properties and use those values to construct the JMS selector that will be used for the "move messages" operation. But I have no idea how to declare and trigger this ad-hoc route. It would be ideal if I could define it within the same camelContext using only the Spring DSL. Alternatively, I could route the advisories to some Java method, which would create and execute these ad-hoc routes. If this isn't possible, I'll be grateful for any suggestion.
Thanks.
As far as I understand, it would be useful to use the 'selector' option in your JMS consumer route, for example:
from("activemq:queue:test?selector=key='value1'").to("mock:a");
from("activemq:queue:test?selector=key='value2'").to("mock:b");
Maybe another option is to implement some routes based on the Content Based Router pattern through the "choice" option. You can find more info here: http://camel.apache.org/content-based-router.html
I hope it helps.
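As a rough sketch of the "route the advisories to some Java method" idea from the question: the advisory route could call a bean that pulls matching messages with a ConsumerTemplate and pushes them with a ProducerTemplate. The bean, header and queue names below are assumptions, and the selector expression would be built from the advisory's JMS properties:

public class MessageMover {

    private final ConsumerTemplate consumer;
    private final ProducerTemplate producer;

    public MessageMover(CamelContext camelContext) {
        this.consumer = camelContext.createConsumerTemplate();
        this.producer = camelContext.createProducerTemplate();
    }

    // Invoked from the advisory route, e.g. <to uri="bean:messageMover"/>
    public void move(@Header("someJmsProperty") String value) {
        String selector = "key='" + value + "'";
        Exchange exchange;
        // Drain every message matching the selector from queue A and send it to queue B.
        while ((exchange = consumer.receive("somebroker:queue:A?selector=" + selector, 1000)) != null) {
            producer.send("somebroker:queue:B", exchange);
        }
    }
}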
I couldn't get it working the way I intended, so I had to abandon my original approach. Instead of using Camel routes to move messages between queues (now I'm not sure Camel routes are even intended to be used this way), I ended up using ManagedRegionBroker - the way the JMX operation "moveMatchingMessagesTo" is implemented - to move messages matching a given selector.
I have a Java class which consumes messages from a queue and sends HTTP calls to some URLs. I have searched on Google and also on Stack Overflow (and I'm really sorry if I have missed any sources mentioning the problem) but couldn't find anything detailed about the setRollbackOnly call.
My question is: in case I roll back, will the message consumed from the queue block the rest of the queue and loop until it is processed successfully, or will it be requeued at the end of the current queue?
The code I use for consuming from the queue and sending the HTTP calls is below; the whole application runs on a GlassFish server:
public class RequestSenderBean implements MessageListener
{
    @Resource
    private MessageDrivenContext mdbContext;

    public RequestSenderBean(){}

    public void onMessage(final Message message)
    {
        try
        {
            if(message instanceof ObjectMessage)
            {
                String responseOfCall = sendHttpPost(URL, PARAMS_FROM_MESSAGE);
                if(responseOfCall.startsWith("Success"))
                {
                    //Everything is OK, do some stuff
                }
                else if(responseOfCall.startsWith("Failure"))
                {
                    //Failure, do some other stuff
                }
            }
        }
        catch(final Exception e)
        {
            e.printStackTrace();
            mdbContext.setRollbackOnly();
        }
    }
}
This is fundamental JMS/messaging knowledge.
Queues implement "load balancing" scenarios, whereby a message hits a queue and is dequeued to be processed by one consumer. Increasing the number of consumers increases the potential throughput of that queue's processing. Each message on a queue will be processed by one and only one consumer.
Topics provide publish-subscribe semantics: all consumers of a topic will receive the message that is pushed to the topic.
With that in mind, once a message is dequeued and handed (transactionally) to a consumer, it by no means blocks the rest of the queue if consumption is asynchronous (as is the case with MDBs).
As the Java EE Tutorial states:
Message Consumption
Messaging products are inherently asynchronous: There is no fundamental timing dependency between the production and the consumption of a message. However, the JMS specification uses this term in a more precise sense. Messages can be consumed in either of two ways:
Synchronously: A subscriber or a receiver explicitly fetches the message from the destination by calling the receive method. The receive method can block until a message arrives or can time out if a message does not arrive within a specified time limit.
Asynchronously: A client can register a message listener with a consumer. A message listener is similar to an event listener. Whenever a message arrives at the destination, the JMS provider delivers the message by calling the listener’s onMessage method, which acts on the contents of the message.
Because you use a MessageListener which is by definition asynchronous, you are not blocking the queue or its subsequent processing.
Also from the tutorial is the following:
Using Session Beans to Produce and to Synchronously Receive Messages
An application that produces messages or synchronously receives them can use a session bean to perform these operations. The example in An Application That Uses the JMS API with a Session Bean uses a stateless session bean to publish messages to a topic.
Because a blocking synchronous receive ties up server resources, it is not a good programming practice to use such a receive call in an enterprise bean. Instead, use a timed synchronous receive, or use a message-driven bean to receive messages asynchronously. For details about blocking and timed synchronous receives, see Writing the Clients for the Synchronous Receive Example.
As for message failure, it depends on how your queue is configured. You can set up error queues (in the case of containers like GlassFish or WebLogic) that failed messages are pushed to for later inspection. In your case, you're using setRollbackOnly, which is handled thus:
7.1.2 Coding the Message-Driven Bean: MessageBean.java
The message-driven bean class, MessageBean.java, implements the methods setMessageDrivenContext, ejbCreate, onMessage, and ejbRemove. The onMessage method, almost identical to that of TextListener.java, casts the incoming message to a TextMessage and displays the text. The only significant difference is that it calls the MessageDrivenContext.setRollbackOnly method in case of an exception. This method rolls back the transaction so that the message will be redelivered.
I recommend you read the Java EE Tutorial as well as the Enterprise Integration Patterns book, both of which cover messaging concepts in good detail and in a product/technology-agnostic way.