"Fire then Return" route in Apache Camel - java

We use Apache Camel to trigger some processes within our applications, e.g:
from("quartz2://sometThing/someQueue?cron=0+0+4+?+*+MON-SUN").setBody(constant(""))
.routeId(this.getClass().getSimpleName())
.to("jms:some-trigger-queue");
We then have a bunch of processors off the trigger queue to run each job, e.g:
from("jms:some-trigger-queue")
.processRef("someProcessor");
Some of these processors will in turn write messages to JMS queues. The problem I'm trying to fix is that the processors won't commit the JMS messages to the broker until the entire process is complete. I suspect this is because there is a message in flight on the trigger queue ("jms:some-trigger-queue") and because the processors are using the same context they won't commit until the in flight message is cleared (FYI I have tried forcing new transactions to be created within the processors but had no luck).
So my question is: if I only had one processor (or didn't care about the processors running at the same time), how could I configure Camel to trigger the processor and immediately move on (i.e. to remove the trigger message from being in flight)?

If you want to call the processors and then immediately move on, you can use the Wire Tap EIP (https://camel.apache.org/manual/latest/wireTap-eip.html).
For example:
from("jms:some-trigger-queue")
.wireTap("direct:callProcessor");
from("direct:callProcessor")
.processRef("someProcessor");
This way the direct:callProcessor route will be executed on a separate thread and jms:some-trigger-queue will continue routing without waiting for a response from direct:callProcessor.
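If you also want to bound how many of these fire-and-forget executions run at once, the wire tap can be given its own thread pool. A minimal sketch inside a RouteBuilder configure() method; the pool size here is just an illustrative assumption:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative pool size: at most two tapped jobs run concurrently.
ExecutorService wireTapPool = Executors.newFixedThreadPool(2);

from("jms:some-trigger-queue")
    .wireTap("direct:callProcessor")
        .executorService(wireTapPool)
    .end();

from("direct:callProcessor")
    .processRef("someProcessor");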

Related

Locking Mechanism if pod crashes while processing mongodb record

We have a Java/Spring application which runs on EKS pods, and we have records stored in a MongoDB collection.
STATUS: READY, STARTED, COMPLETED
The application needs to pick the records that are in READY status and update their status to STARTED. Once the processing of a record is completed, the status is updated to COMPLETED.
Once a record is STARTED, it may take a few hours to complete; until then, other pods (other instances of the same app) should not pick this record. If an exception occurs, the app changes the status back to READY so that other pods (or the same pod) can pick the READY record for processing.
Requirement: if the pod crashes while the record is being processed (STARTED), i.e. before the status is changed to READY/COMPLETED, another pod should be able to pick this record and start processing it again.
We have some solutions in mind but are trying to find the best one. Please suggest some good approaches.
You can use a shutdown hook from spring:
@Component
public class Bean1 {

    @PreDestroy
    public void destroy() {
        // handle database change
        System.out.println("Status changed to ready");
    }
}
Beyond that, this kind of job could work better in a messaging architecture, using SQS for example. Instead of using the status field in the database to handle and orchestrate the task, you can publish the messages that need to be consumed (the ones that would be in READY state) to an SQS queue and have a pool of workers consuming from it. If something crashes, or the pod running one of these workers is reclaimed, the message goes back to SQS and can be consumed by another pod.
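As a rough illustration of that flow, here is a minimal worker loop using the AWS SDK for Java v2 (the queue URL and processing logic are placeholders, not taken from the original setup). The important part is that the message is only deleted after processing succeeds, so a crash mid-processing lets it reappear once the visibility timeout expires; for multi-hour work you would also extend the visibility timeout periodically while processing:

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class RecordWorker {

    // Placeholder queue URL
    private static final String QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/records";

    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        while (true) {
            ReceiveMessageRequest receive = ReceiveMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .maxNumberOfMessages(1)
                    .waitTimeSeconds(20)      // long polling
                    .visibilityTimeout(300)   // hidden from other pods while we work
                    .build();
            for (Message message : sqs.receiveMessage(receive).messages()) {
                process(message.body());
                // Delete only after successful processing; if the pod crashes before
                // this call, the message becomes visible again and another pod picks it up.
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(QUEUE_URL)
                        .receiptHandle(message.receiptHandle())
                        .build());
            }
        }
    }

    private static void process(String body) {
        // Placeholder for the long-running record processing
    }
}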

Apache camel Hazelcast queue polling for concurrency

I have a requirement to poll a Hazelcast (client mode) queue with a retry option (10 attempts) on exception. I was expecting Camel's polling and processing to be multi-threaded, but it wasn't. While retrying on exception, any new messages on the queue pile up and are only picked up for processing once the first one completes. Is there any option for parallel processing (concurrent consumption)? I have added concurrentConsumers and poolSize as query parameters, but they didn't really play well.
What I have tried is:
fromF("hazelcast-queue://FOO?concurrentConsumers=5&hazelcastInstance=#hazelcastInstance&poolSize=10&queueConsumerMode=Poll").to("direct:testPoll");
from("direct:testPoll")
.log(LoggingLevel.DEBUG,":::>:Camel[${routeId}] consumes")
.onException(Exception.class)
.maximumRedeliveries(maxAttempt)
.delayPattern(delayPattern)
.maximumRedeliveryDelay(maxDelay)
.handled(true)
.logExhausted(false)
.end()
.bean("processTestPoll").log(INFO,"${body}").end();
Error:
There are 1 parameters that couldn't be set on the endpoint. Check the uri if the parameters are spelt correctly and that they are properties of the endpoint. Unknown parameters=[{concurrentConsumers=10}]
Your help will be really appreciated. Thanks in advance.
What you are trying to achieve can be done with SEDA in two different ways:
Generic Way
You can send your messages to a SEDA endpoint and consume them concurrently, as follows:
fromF("hazelcast-%sFOO?hazelcastInstance=#hazelcastInstance&queueConsumerMode=Poll",
HazelcastConstants.QUEUE_PREFIX)
.to("seda:process");
from("seda:process?concurrentConsumers=5")
.log("Processing: ${threadName} ${body}");
In the previous example, the Hazelcast queue FOO is polled by one thread, which puts the messages onto the seda:process endpoint, and that endpoint is consumed concurrently by 5 threads.
More details about concurrent consumers with the SEDA component
Specific Way
As you proposed in your deleted answer, you can also implement it directly using the specific SEDA endpoint for Hazelcast, as follows:
fromF("hazelcast-%sFOO?hazelcastInstance=#hazelcastInstance&concurrentConsumers=5",
HazelcastConstants.SEDA_PREFIX)
.log("Processing: ${threadName} ${body}");
In the previous example, the Hazelcast Queue FOO is consumed concurrently by 5 threads.
More details about the Hazelcast SEDA endpoint.

spring rabbitmq - consume multiple messages at the same time

I'm using RabbitMQ in my spring boot application in this way:
Sender:
rabbitTemplate.convertAndSend("exchange", "routingKey", "Message Text");
Listener:
@RabbitListener(queues = "queueName")
public void receive(String message) {
    System.out.println("start");
    // send an HTTP request that takes, for example, 4 seconds
    System.out.println("end");
}
With the above code, when the application executes the sender part, the receive method is invoked. My problem is that while receive is processing a message, if the sender puts another message into the queue, the method does not process the new message, so the second "start" won't be printed until the "end" of the previous message. In other words, I want to know how a message listener can process multiple messages at a time. I don't know what the problem is.
From the problem you are describing, it looks like your listener is configured for a single thread. Refer to the container listener configuration docs here and here, especially the concurrency settings. The concurrency settings control how many threads process messages on the queue at the same time.
If you are using spring boot, just add this configuration to the application properties:
# Minimum number of listener invoker threads
spring.rabbitmq.listener.simple.concurrency=5
Your listener will then start accepting messages in parallel (on multiple threads). There are other configurations you can check too, such as the maximum number of listener invoker threads (see the Spring Boot docs for more info).
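If you prefer configuring this in Java rather than in application properties, a container factory bean along these lines should work (the concurrency numbers below are just examples, not taken from the question):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrentConsumers(5);      // minimum number of consumer threads
        factory.setMaxConcurrentConsumers(10);  // scale up to this many under load
        return factory;
    }
}

With this in place, the @RabbitListener above will be invoked on up to 10 threads, so several messages can print their start/end pairs concurrently.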

ActiveMQ - cannot rollback non-transacted session INDIVIDUAL_ACK

Is it possible to roll back an async-processed message in ActiveMQ? I'm consuming the next message while the first one is still processing, so while trying to roll back the first message on another (non-ActiveMQ-pool) thread, I'm getting the above error. Should I eventually send the message to the DLQ manually?
Message error handling can work a couple of ways:
Option #1: Broker-side 'redelivery policy'. The client rolls the message back n times (the default is usually 6 retries) and the broker then automatically moves the message to a Dead Letter Queue (DLQ).
Option #2: Client-side. The application consumes the message and then produces it to the DLQ itself.
Option #1 is good for planned/unplanned outages (database down, etc.) where you want automatic retry. The redelivery policy can also be configured when the client connects to the broker.
Option #2 is good for 'bad data' scenarios where you know the message will never be able to be processed. This is ideal because you can move the message on the first consumption and not have to reject it n times.
When you combine the retry from #1 with #2 in your application flow, you get a robust process of automatic retry that also moves bad data out of the way quickly. Best of breed =)
ActiveMQ Redelivery policy
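For reference, the broker-side option (#1) is usually configured on the client's connection factory; a minimal sketch with illustrative values:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryPolicyExample {
    public static void main(String[] args) throws Exception {
        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setMaximumRedeliveries(6);        // after 6 failed deliveries the message goes to the DLQ
        policy.setInitialRedeliveryDelay(1000);  // 1 second before the first redelivery
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // example broker URL
        factory.setRedeliveryPolicy(policy);

        Connection connection = factory.createConnection();
        connection.start();
        // ... create a session and consumer; rolled-back messages are redelivered according to the policy
        connection.close();
    }
}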

Kafka with Spring Integration - consumer-timeout vs read timeout?

Using the spring-integration-kafka extension and the following configuration:
<int-kafka:zookeeper-connect id="zookeeperConnect"
zk-connect="#{kafkaConfig['zooKeeperUrl']}" zk-connection-timeout="10000"
zk-session-timeout="10000" zk-sync-time="2000" />
<int-kafka:consumer-context id="consumerContext" consumer-timeout="5000" zookeeper-connect="zookeeperConnect">
Is the timeout the time spent waiting for a message, or the time spent waiting for a message and reading it? Is this value different from a read timeout?
From the Kafka configuration, consumer.timeout.ms (default -1):
"Throw a timeout exception to the consumer if no message is available for consumption after the specified interval."
From the spring-integration-kafka GitHub repository:
"In the above consumer context, you can also specify a consumer-timeout value which would be used to timeout the consumer in case of no messages to consume. This timeout would be applicable to all the streams (threads) in the consumer. The default value for this in Kafka is -1 which would make it wait indefinitely. However, Sping Integration overrides it to be 5 seconds by default in order to make sure that no threads are blocking indefinitely in the lifecycle of the application and thereby giving them a chance to free up any resources or locks that they hold. It is recommended to override this value so as to meet any specific use case requirements. By providing a reasonable consumer-timeout on the context and a fixed-delay value on the poller, this inbound adapter is capable of simulating a message driven behaviour."
