camel-quarkus-amqp: Messages that fail processing are not requeued / route not transactional - java

Problem:
When the downstream processing of a message fails, the message is not requeued but simply lost for good. Everything else works fine.
Environment:
Quarkus application on AKS with camel-quarkus-amqp, queue = Azure Service Bus.
We have similar applications running on a JBoss server with the same configuration (Service Bus queue with the same properties, similar Camel route setup) that do not show this behaviour.
Code:
The application was originally built on Quarkus 1.11.3.Final, but even after an update to 2.7.6.Final the behaviour is the same.
From pom.xml (standard, as generated by the project scaffolding):
<properties>
...
<quarkus.platform.version>2.7.6.Final</quarkus.platform.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.camel.quarkus</groupId>
<artifactId>camel-quarkus-amqp</artifactId>
</dependency>
<dependency>
<groupId>org.apache.camel.quarkus</groupId>
<artifactId>camel-quarkus-bean</artifactId>
</dependency>
</dependencies>
application.properties:
quarkus.qpid-jms.url=failover:(amqps://xyz.servicebus.windows.net)
quarkus.qpid-jms.username=xxx
quarkus.qpid-jms.password=xxx
Implementation - Consumer:
public class MessageConsumer extends RouteBuilder {

    @Inject
    private UpdateService updateService;

    @Override
    public void configure() {
        from("amqp:queue:" + "queueName")
            .routeId(MessageConsumer.class.getName() + ".consumeQueue")
            .process(exchange -> exchange.getIn().setBody(UUID.fromString(exchange.getIn().getBody(String.class))))
            .bean(updateService);
    }
}
Implementation - Producer:
import javax.inject.Inject;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import java.util.UUID;
public class MessageRepository {

    @Inject
    ConnectionFactory connectionFactory;

    public void sendMessage(UUID uuid) {
        try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
            context.createProducer().send(context.createQueue("queueName"), uuid.toString());
        } catch (Exception e) {
            throw new ApplicationException("Invalid message");
        }
    }
}
Question:
What do I need to change or add so that messages are requeued (not acknowledged) when an exception is thrown inside the bean processing of updateService in the Camel route?

The likely issue is that you need to enable transactions on the route, so that a message whose processing fails triggers a rollback of the transaction and is not acknowledged.
There is considerable documentation about this and about how transactions are managed in the Camel docs here.
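For reference, a minimal sketch of what a transacted variant of the consumer route could look like, assuming the underlying AMQP/JMS component supports local transacted sessions via the transacted endpoint option; whether that is actually available on a given camel-quarkus version is exactly the caveat described in the answer below:
import java.util.UUID;
import javax.inject.Inject;
import org.apache.camel.builder.RouteBuilder;

public class TransactedMessageConsumer extends RouteBuilder {

    @Inject
    UpdateService updateService;

    @Override
    public void configure() {
        // transacted=true asks the JMS-based consumer to use a local transacted session:
        // an exception thrown in the processor or in the bean rolls the message back to
        // the broker, which then redelivers it instead of dropping it.
        from("amqp:queue:queueName?transacted=true")
            .routeId(TransactedMessageConsumer.class.getName() + ".consumeQueue")
            .process(exchange -> exchange.getIn()
                .setBody(UUID.fromString(exchange.getIn().getBody(String.class))))
            .bean(updateService);
    }
}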

Since camel-quarkus versions below 3.4.0 lack transactional support for AMQP, I had to implement this in plain JMS without Camel. Here is my implementation, as a reference for others who may face the same or a similar problem:
public class MessageConsumer implements Runnable {

    private static final long WAIT_UNTIL_RECONNECT = 30000; // 30 seconds

    @Inject
    private UpdateService updateService;

    @Inject
    ConnectionFactory connectionFactory;

    private final ExecutorService scheduler = Executors.newSingleThreadExecutor();

    void onStart(@Observes StartupEvent ev) { scheduler.submit(this); }

    void onStop(@Observes ShutdownEvent ev) { scheduler.shutdown(); }

    @SneakyThrows
    @Override
    public void run() {
        // loop here to make sure that it will reconnect to the queue in case of a disconnect
        while (true) {
            try {
                openConnectionAndConsume();
            } catch (Exception e) {
                log.error("Exception in connection: ", e);
                Thread.sleep(WAIT_UNTIL_RECONNECT);
            }
        }
    }

    private void openConnectionAndConsume() {
        // messagingProperties (not shown here) simply supplies the queue name
        try (JMSContext context = connectionFactory.createContext(JMSContext.CLIENT_ACKNOWLEDGE);
             JMSConsumer consumer = context.createConsumer(context.createQueue(messagingProperties.getQueue()))) {
            while (true) {
                Message message = consumer.receive();
                handleMessage(message);
            }
        }
    }

    private void handleMessage(Message message) {
        if (message == null) return;
        try {
            updateService.update(UUID.fromString(message.getBody(String.class)));
            message.acknowledge();
        } catch (Exception e) {
            // not acknowledging the message lets the broker redeliver it
            log.error("Error in route: ", e);
        }
    }
}

Related

Set maximum consumer count jms solace

I am trying to set the maximum consumer count of a topic endpoint with JMS, using Solace as the broker, so that for increasing load multiple instances of the app can be started in Cloud Foundry and multiple subscribers can consume messages from the same topic.
I have tried multiple combinations of the settings below (setConcurrency(), setConcurrentConsumers(), setMaxConcurrentConsumers(), with 20 as an arbitrarily high number). Judging from the documentation, I definitely need to use setMaxConcurrentConsumers() and set it to an appropriately high value.
When I deploy the app, the topic endpoint gets created, but in the Solace management interface the maximum consumer count is always 1 (Queues -> Topic Endpoints -> select endpoint -> Configured Limit) even though it should be 20, so a second consumer is not able to connect. I don't want to set this manually every time I deploy the app.
import javax.jms.*;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
@Configuration
public class ProducerConfiguration {

    private static final Log logger = LogFactory.getLog(SolaceController.class);

    @Value("${durable_subscription}")
    private String subscriptionName;

    @Value("${topic_name}")
    private String topic_name;

    @Autowired
    private ConnectionFactory connectionFactory;

    @Bean
    public JmsTemplate jmsTemplate() {
        CachingConnectionFactory ccf = new CachingConnectionFactory(connectionFactory);
        JmsTemplate jmst = new JmsTemplate(ccf);
        jmst.setPubSubDomain(true);
        return jmst;
    }

    @Bean
    public Session configureSession(ConnectionFactory connectionFactory) throws JMSException {
        return connectionFactory.createConnection().createSession(false, Session.AUTO_ACKNOWLEDGE);
    }

    private TextMessage lastReceivedMessage;

    public class SimpleMessageListener implements MessageListener {

        @Override
        public void onMessage(Message message) {
            if (message instanceof TextMessage) {
                lastReceivedMessage = (TextMessage) message;
                try {
                    logger.info("Received message : " + lastReceivedMessage.getText());
                } catch (JMSException e) {
                    logger.error("Error getting text of the received TextMessage: " + e);
                }
            } else {
                logger.error("Received message that was not a TextMessage: " + message);
            }
        }
    }

    @Bean
    public DefaultMessageListenerContainer orderMessageListenerContainer() {
        DefaultMessageListenerContainer lc = new DefaultMessageListenerContainer();
        lc.setConnectionFactory(connectionFactory);
        lc.setDestinationName(topic_name);
        lc.setMessageListener(new SimpleMessageListener());
        lc.setDurableSubscriptionName(subscriptionName);
        lc.setPubSubDomain(true);
        // tried multiple combinations here, also setting only setMaxConcurrentConsumers
        lc.setConcurrency("2-20");
        lc.setConcurrentConsumers(20);
        lc.setMaxConcurrentConsumers(20);
        lc.setSubscriptionDurable(true);
        lc.initialize();
        lc.start();
        return lc;
    }
}
I think for your use case your consumer is stuck with queues. See https://solace.com/blog/topic-subscription-queues/
"... while multiple consumers can bind to Queues. Durable endpoints are limited to a single topic subscription. Queues allow multiple topic subscriptions as well as topic wildcards."
If you don't want to change your publisher, you can try "Topic Subscription on Queues": a queue can be configured to subscribe to a topic, and your consumers then get their messages from that queue, as sketched below.
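A minimal sketch of the consumer side of that approach, assuming a queue (here called Q/orders, a purely hypothetical name) has already been provisioned on the broker with a subscription to the topic; the listener container then binds to the queue in point-to-point mode, so the one-consumer limit of a durable topic endpoint no longer applies:
@Bean
public DefaultMessageListenerContainer queueListenerContainer() {
    DefaultMessageListenerContainer lc = new DefaultMessageListenerContainer();
    lc.setConnectionFactory(connectionFactory);
    lc.setDestinationName("Q/orders");   // hypothetical queue that carries the topic subscription
    lc.setPubSubDomain(false);           // consume from the queue, not from the topic
    lc.setMessageListener(new SimpleMessageListener());
    lc.setConcurrentConsumers(2);
    lc.setMaxConcurrentConsumers(20);    // only useful if the queue is non-exclusive
    return lc;
}
The next answer shows how to make the queue itself non-exclusive, which is what actually allows several consumers to bind to it at the same time.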
You need to create a non-exclusive queue/endpoint.
By default, the queues you create are exclusive queues/endpoints, which means only one subscriber can bind to them at any time.
The easiest way to create such a queue/endpoint is through the Solace CLI.
To create a non-exclusive queue in your JMS program, you have to drop into the Solace-specific JMS implementation like this:
if (queueName != null) {
    EndpointProperties props = new EndpointProperties();
    props.setAccessType(EndpointProperties.ACCESSTYPE_NONEXCLUSIVE);
    try {
        ((SolConnection) connection).getProperties().getJCSMPSession()
                .provision(JCSMPFactory.onlyInstance().createQueue(queueName), props, 0L);
    } catch (Exception e) {
        e.printStackTrace();
    }
    queue = session.createQueue(queueName);
}

Object does not close/return connections to the pool when working inside a Proxy EJB container

I have an integration solution based on the WebLogic application server (Oracle Retail Integration Bus). Each adapter in the solution is an EJB component.
[Screenshot: WebLogic Console application view]
There are two types of adapters: the first is for XA transactions, and the second is for non-XA transactions (non-XA is needed when the PL/SQL API function uses external proc procedures).
XA - OracleObjectSubscriberComponentImpl
public class OracleObjectSubscriberComponentImpl
    extends DefaultAdaptorComponentImpl
    implements SubscriberComponent
{
    ....

    protected void performSubscribe(RibContext ribContext, RibMessage inRibMessage, RibMessages outRibMsg)
        throws RibAPIException
    {
        boolean success = false;
        Connection connection = null;
        try
        {
            setRibContext(ribContext);
            setRibMessagesOut(outRibMsg);
            connection = getNewConnection();
            initCallableStatement(connection, null);
            registerOutParams();
            if (setInParams(inRibMessage))
            {
                execute();
                processResult();
                success = true;
            }
        }
        finally
        {
            end(success);
            cleanup();
            super.closeConnection(connection);
        }
    }

    public void performSubscribe(RibContext ribContext, RibMessage inRibMessage)
        throws RibAPIException
    {
        performSubscribe(ribContext, inRibMessage, null);
    }

    public void subscribe(RibContext ribContext, RibMessage inRibMessage)
        throws RibAPIException
    {
        performSubscribe(ribContext, inRibMessage);
    }

    ...
}
NonXA - OracleObjectSubscriberComponentNonXAImpl extends OracleObjectSubscriberComponentImpl.
public class OracleObjectSubscriberComponentNonXAImpl
    extends OracleObjectSubscriberComponentImpl
{
    public void initCallableStatement(Connection c, String msgType)
        throws RibAPIException
    {
        try
        {
            c.setAutoCommit(false);
        }
        catch (SQLException e)
        {
            this.LOG.error("Could not turn off AutoCommit", e);
            throw createRibAPIException("initStmt", e);
        }
        super.initCallableStatement(c, msgType);
    }

    public void end(boolean successful)
    {
        super.end(successful);
        try
        {
            if (successful) {
                this.conn.commit();
            } else {
                this.conn.rollback();
            }
            this.conn.setAutoCommit(true);
        }
        catch (SQLException sqle)
        {
            String errorString = "Error occurred during commit/rollback";
            this.LOG.error(errorString, sqle);
            throw new RuntimeException(errorString, sqle);
        }
    }

    public void subscribe(RibContext ribContext, RibMessage inRibMessage)
        throws RibAPIException
    {
        NonTransactionalSubscriberCoreService nonTransactionalSubscriberCoreService =
            (NonTransactionalSubscriberCoreService) RetailServiceFactory.getService(NonTransactionalSubscriberCoreService.class);
        nonTransactionalSubscriberCoreService.subscribe(this, ribContext, inRibMessage);
    }
}
The differences between them are:
The non-XA variant sets autocommit = false and calls this.conn.commit() and this.conn.rollback() explicitly, whereas the XA variant keeps autocommit = true.
The non-XA variant overrides the subscribe method: a new proxy service is created, the non-XA object is handed to that proxy, and the proxy then executes performSubscribe.
The non-XA class uses the NonTransactionalSubscriberCoreServiceEjb class, which is the proxy service:
public class NonTransactionalSubscriberCoreServiceEjb
    implements NonTransactionalSubscriberCoreService, SessionBean
{
    ...

    public void subscribe(OracleObjectSubscriberComponentNonXAImpl subscriber, RibContext ribContext, RibMessage inRibMessage)
        throws RibAPIException
    {
        RibContextFactory.setCurrentRibContext(ribContext);
        subscriber.performSubscribe(ribContext, inRibMessage);
        RibContextFactory.clearCurrentRibContext();
    }

    ...
}
None of the adapters run in parallel; each adapter gets its messages one by one from a JMS topic. The XA component works fine: it takes a connection from the WebLogic datasource and returns it when the work is finished. The non-XA component does not work as expected: it takes a connection from the datasource and does not release it, so the connection is held until the timeout expires.
If I change the subscribe method of the non-XA class to this:
public void subscribe(RibContext ribContext, RibMessage inRibMessage)
    throws RibAPIException
{
    this.performSubscribe(ribContext, inRibMessage);
}
then the connections are released after the work is finished, but I can't use external procs in the API because an ORA-xxx error ("this feature is not supported in XA") is raised. I need to keep the non-XA functionality and still release the connections.
Solution:
If the datasource is managed by the WebLogic server, then in the WebLogic console for the datasource:
uncheck "Keep Connection After Local Transaction"
check "Remove Infected Connections Enabled"
If the datasource is managed by the application, make the same changes in the datasource XML file:
<jdbc-data-source>
    <name>rib-rms-managed-datasource</name>
    <jdbc-driver-params>
        <url>jdbc:oracle:thin:@monrmsdb.apm.local:1521:retekdb</url>
        <driver-name>oracle.jdbc.xa.client.OracleXADataSource</driver-name>
        <properties>
            <property>
                <name>user</name>
                <value>RMS13DEV</value>
            </property>
        </properties>
    </jdbc-driver-params>
    <jdbc-connection-pool-params>
        <initial-capacity>0</initial-capacity>
        <max-capacity>350</max-capacity>
        <capacity-increment>10</capacity-increment>
        <connection-creation-retry-frequency-seconds>1</connection-creation-retry-frequency-seconds>
        <test-connections-on-reserve>false</test-connections-on-reserve>
        <profile-harvest-frequency-seconds>60000</profile-harvest-frequency-seconds>
        <inactive-connection-timeout-seconds>5</inactive-connection-timeout-seconds>
        <statement-cache-size>0</statement-cache-size>
        <!-- Add this --><remove-infected-connections>true</remove-infected-connections>
        <pinned-to-thread>false</pinned-to-thread>
    </jdbc-connection-pool-params>
    <jdbc-data-source-params>
        <jndi-name>jdbc/OracleRibDs</jndi-name>
        <global-transactions-protocol>TwoPhaseCommit</global-transactions-protocol>
        <!-- Add this --><keep-conn-after-local-tx>false</keep-conn-after-local-tx>
    </jdbc-data-source-params>
</jdbc-data-source>

Apache Camel sends ActiveMQ messages to ActiveMQ.DLQ

There is a middleware sitting between two other applications. In the middleware I route Apache ActiveMQ messages with Apache Camel.
The first application uses the middleware to send a message to the third application, and the third one replies to the first (again through the middleware).
1stSoftware <<=>> Middleware <<=>> 3rdSoftware
Problem:
When I send a message from the first application to the middleware, the middleware sends that message directly to ActiveMQ.DLQ and the third application cannot consume it. (Interesting point: when I copy that message to the main queue via the ActiveMQ admin panel, the application can consume it properly.)
What is the problem? It was working until I changed the Linux date!
The middleware looks like this:
@SuppressWarnings("deprecation")
public class MiddlewareDaemon {

    private Main main;

    public static void main(String[] args) throws Exception {
        MiddlewareDaemon middlewareDaemon = new MiddlewareDaemon();
        middlewareDaemon.boot();
    }

    public void boot() throws Exception {
        main = new Main();
        main.enableHangupSupport();
        //?wireFormat.maxInactivityDuration=0
        main.bind("activemq", activeMQComponent("tcp://localhost:61616")); //ToGet
        main.bind("activemq2", activeMQComponent("tcp://192.168.10.103:61616")); //ToInOut
        main.addRouteBuilder(new MyRouteBuilder());
        System.out.println("Starting Camel(MiddlewareDaemon). Use ctrl + c to terminate the JVM.\n");
        main.run();
    }

    private static class MyRouteBuilder extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            intercept().to("log:Midlleware?level=INFO&showHeaders=true&showException=true&showCaughtException=true&showStackTrace=true");

            from("activemq:queue:Q.Midlleware")
                .process(new Processor() {
                    public void process(Exchange exchange) {
                        Map<String, Object> header = null;
                        try {
                            Message in = exchange.getIn();
                            header = in.getHeaders();
                        } catch (Exception e) {
                            log.error("Exception:", e);
                            header.put("Midlleware_Exception", e.getMessage() + " - " + e);
                        }
                    }
                })
                .inOut("activemq2:queue:Q.Comp2");
        }
    }
}
And the third application (the replier) is a daemon like the one above; I only copied the RouteBuilder part:
private static class MyRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        intercept().to("log:Comp2?level=INFO&showHeaders=true&showException=true&showCaughtException=true&showStackTrace=true");

        from("activemq:queue:Q.Comp2")
            .process(new Processor() {
                public void process(Exchange exchange) {
                    Message in = exchange.getIn();
                    Map<String, Object> headers = null;
                    try {
                        headers = in.getHeaders();
                        in.setBody(ZipUtil.compress(/*somResults*/));
                    } catch (Exception e) {
                        log.error("Exception", e);
                        in.setBody(ZipUtil.compress("[]"));
                        in.getHeaders().put("Comp2_Exception", e.getMessage() + " - " + e);
                    }
                }
            });
    }
}
If the only thing you changed is the time on one of the servers, then this might well be the problem.
For communications over MQ to work properly, it is essential that all involved host systems have their clocks in sync. In the case of ActiveMQ there is a default message time-to-live (30 seconds, I think) for response queues. If the responding system is more than 30 seconds in the future relative to the host running ActiveMQ, then ActiveMQ will immediately expire the message and move it to the DLQ.
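The first thing to do is therefore to re-synchronize the clocks. If that is not immediately possible, the Camel JMS/ActiveMQ component also offers a disableTimeToLive option that keeps Camel from stamping a time-to-live on the request message it sends for request/reply; a hedged sketch against the middleware route above (the requestTimeout value is only illustrative):
from("activemq:queue:Q.Midlleware")
    // disableTimeToLive=true: do not set JMSExpiration on the request message, so a
    // broker whose clock is ahead will not expire it straight into ActiveMQ.DLQ.
    // requestTimeout still bounds how long Camel itself waits for the reply.
    .inOut("activemq2:queue:Q.Comp2?requestTimeout=20000&disableTimeToLive=true");
Note that this only stops the request message from expiring on a broker with a skewed clock; Camel still gives up waiting for the reply after requestTimeout.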

How to use one Apache Camel context object across Java RESTful services

I need to perform all operations, such as creating and deleting the Quartz2 scheduler, on a single Apache Camel context via a RESTful service. With the following code, each request creates a new context object, and I don't know how to fix that or where I should initialize the Camel context.
This is my code.
This is my Java RESTful service, which calls the Quartz scheduler:
@Path("/remainder")
public class RemainderResource {

    private static org.apache.log4j.Logger log = Logger.getLogger(RemainderResource.class);

    RemainderScheduler remainderScheduler = new RemainderScheduler();
    CamelContext context = new DefaultCamelContext();

    @POST
    @Path("/beforeday/{day}")
    public void create(@PathParam("day") int day, final String userdata) {
        log.debug("the starting process of the creating the Remainder");
        JSONObject data = (JSONObject) JSONSerializer.toJSON(userdata);
        String cronExp = data.getString("cronExp");
        remainderScheduler.create(cronExp, day, context);
    }
}
This is my Java class, which schedules the job:
public class RemainderScheduler {

    private static org.apache.log4j.Logger log = Logger.getLogger(RemainderScheduler.class);

    public void sendRemainder(int day) {
        log.debug("the starting of the sending the Remainder to user");
    }

    public RouteBuilder createMyRoutes(final String cronExp, final int day) {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                log.debug("Before set schedulling");
                from("quartz2://RemainderGroup/Remainder?cron=" + cronExp + "&deleteJob=true&job.name='RemainderServices'")
                    .bean(new RemainderScheduler(), "sendRemainder('" + day + "')")
                    .routeId("Remainder")
                    .process(new Processor() {
                        @Override
                        public void process(Exchange exchange) throws Exception {
                        }
                    });
                log.debug("after set schedulling");
            }
        };
    }

    public void stopService(CamelContext context) {
        log.debug("this is going to be stop the route");
        try {
            context.stopRoute("Remainder");
            context.removeRoute("Remainder");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void create(final String cronExp, final int day, CamelContext context) {
        try {
            // if the route already exists, stop it first
            if (context.getRoute("Remainder") != null) {
                stopService(context);
            }
            log.debug("the starting of the process for creating the Remaider Services");
            context.addRoutes(createMyRoutes(cronExp, day));
            context.start();
            log.debug("the status for removing the services is" + context.removeRoute("Remainder"));
        } catch (Exception e) {
            System.out.println(e.toString());
            e.printStackTrace();
        }
    }
}
If I execute the above code, each RESTful request creates a new context object and starts the job scheduling on that new Camel context. If I then send a request to stop the route, it again creates a new context object, so I am not able to reset or stop the Quartz2 scheduler.
It is not good practice to create a Camel context per request.
I suggest you use camel-restlet or camel-cxfrs to delegate the create and delete scheduler requests to another Camel context.
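Whichever component handles the REST layer, the underlying point is that the CamelContext must be created once and shared rather than instantiated per request. A minimal sketch of that idea (CamelContextHolder is an illustrative name, not part of the original code or of camel-restlet/camel-cxfrs):
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

// Illustrative holder: one CamelContext for the whole application, started once.
public final class CamelContextHolder {

    private static final CamelContext CONTEXT = new DefaultCamelContext();

    static {
        try {
            CONTEXT.start();
        } catch (Exception e) {
            throw new IllegalStateException("Unable to start the shared CamelContext", e);
        }
    }

    private CamelContextHolder() {
    }

    public static CamelContext getContext() {
        return CONTEXT;
    }
}
RemainderResource would then call remainderScheduler.create(cronExp, day, CamelContextHolder.getContext()) instead of keeping its own new DefaultCamelContext(), so the create and stop requests operate on the same context and therefore on the same "Remainder" route.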

Camel: stop the route when the jdbc connection loss is detected

I have an application with the following route:
from("netty:tcp://localhost:5150?sync=false&keepAlive=true")
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
This route receives a new message every 59 milliseconds. I want to stop the route when the connection to the database is lost, before a second message arrives. Above all, I never want to lose a message.
I proceeded as follows.
I added an errorHandler:
errorHandler(deadLetterChannel("direct:backup")
    .redeliveryDelay(5L)
    .maximumRedeliveries(1)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .logExhausted(false));
My errorHandler tries to redeliver the message and if it fails again, it redirects the message to a deadLetterChannel.
The following deadLetterChannel will stop the tcp.input route and try to redeliver the message to the database:
RoutePolicy policy = new StopRoutePolicy();

from("direct:backup")
    .routePolicy(policy)
    .errorHandler(
        defaultErrorHandler()
            .redeliveryDelay(1000L)
            .maximumRedeliveries(-1)
            .retryAttemptedLogLevel(LoggingLevel.ERROR)
    )
    .to("jdbc:mydb");
Here is the code of the routePolicy:
public class StopRoutePolicy extends RoutePolicySupport {

    private static final Logger LOG = LoggerFactory.getLogger(String.class);

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
My problems with this method are:
In my "direct:backup" route, if I set the maximumRedeliveries to -1 the route tcp.input will never stop
I'm loosing messages during the stop
This method for detecting the connection loss and for stopping the route is too long
Please, does anybody have an idea for make this faster or for make this differently in order to not lose message?
I have finally found a way to resolve my problems. In order to make the application faster, I added asynchronous processing and multithreading with seda.
from("netty:tcp://localhost:5150?sync=false&keepAlive=true").to("seda:break");
from("seda:break").threads(5)
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
I did the same with the backup route.
from("seda:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
).threads(2).to("jdbc:mydb");
And I modified the routePolicy like this:
public class StopRoutePolicy extends RoutePolicySupport {

    private static final Logger LOG = LoggerFactory.getLogger(String.class);

    @Override
    public void onExchangeBegin(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStopped()) {
            try {
                LOG.info("RESTART ROUTE: {}", stop);
                context.startRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
With these updates, the TCP route is stopped before the backup route is processed, and it is restarted when the JDBC connection is back.
Now, thanks to Camel, the application is able to handle a database failure without losing messages and without manual intervention.
I hope this could help you.
