Camel: stop the route when a JDBC connection loss is detected - java

I have an application with the following route:
from("netty:tcp://localhost:5150?sync=false&keepAlive=true")
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
This route receives a new message every 59 milliseconds. I want to stop the route when the connection to the database is lost, before a second message arrives. And above all, I never want to lose a message.
I proceeded as follows:
I added an errorHandler:
errorHandler(deadLetterChannel("direct:backup")
    .redeliveryDelay(5L)
    .maximumRedeliveries(1)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .logExhausted(false));
My errorHandler tries to redeliver the message once and, if that fails again, redirects it to a dead letter channel.
The following dead letter channel route stops the tcp.input route and keeps trying to redeliver the message to the database:
RoutePolicy policy = new StopRoutePolicy();
from("direct:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
)
.to("jdbc:mydb");
Here is the code of the routePolicy:
public class StopRoutePolicy extends RoutePolicySupport {

    private static final Logger LOG = LoggerFactory.getLogger(StopRoutePolicy.class);

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
My problems with this method are:
In my "direct:backup" route, if I set maximumRedeliveries to -1, the tcp.input route never stops.
I'm losing messages during the stop.
Detecting the connection loss and stopping the route this way takes too long.
Does anybody have an idea how to make this faster, or how to do this differently, so that no messages are lost?

I finally found a way to resolve my problems. To make the application faster, I added asynchronous processing and multithreading with seda.
from("netty:tcp://localhost:5150?sync=false&keepAlive=true").to("seda:break");
from("seda:break").threads(5)
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
I did the same with the backup route.
from("seda:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
).threads(2).to("jdbc:mydb");
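For the dead letter channel to feed this seda route, the errorHandler from the first attempt presumably points at the seda endpoint now; a minimal sketch, assuming the other options are unchanged:

errorHandler(deadLetterChannel("seda:backup")
    .redeliveryDelay(5L)
    .maximumRedeliveries(1)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .logExhausted(false));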
And I modified the routePolicy like this:
public class StopRoutePolicy extends RoutePolicySupport {

    private static final Logger LOG = LoggerFactory.getLogger(StopRoutePolicy.class);

    @Override
    public void onExchangeBegin(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStopped()) {
            try {
                LOG.info("RESTART ROUTE: {}", stop);
                context.startRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
With these updates, the TCP route is stopped before the backup route is processed, and the route is restarted when the JDBC connection is back.
Now, thanks to Camel, the application is able to handle a database failure without losing messages and without manual intervention.
I hope this helps you.

Related

Reply timeout when using AsyncRabbitTemplate::sendAndReceive - RabbitMQ

I recently changed from using a standard RabbitTemplate in my Spring Boot application to using an AsyncRabbitTemplate. In the process, I switched from the standard send method to the sendAndReceive method.
Making this change does not seem to affect the publishing of messages to RabbitMQ; however, I now see stack traces like the following when sending messages:
org.springframework.amqp.core.AmqpReplyTimeoutException: Reply timed out
    at org.springframework.amqp.rabbit.AsyncRabbitTemplate$RabbitFuture$TimeoutTask.run(AsyncRabbitTemplate.java:762) [spring-rabbit-2.3.10.jar!/:2.3.10]
    at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) [spring-context-5.3.9.jar!/:5.3.9]
I have tried modifying various settings, including the reply and receive timeouts, but all that changes is the time it takes to receive the above error. I have also tried setting useDirectReplyToContainer to true, as well as setting useChannelForCorrelation to true.
I have managed to recreate the issue in a main method, included below, using a RabbitMQ broker running in Docker.
public static void main(String[] args) {
    com.rabbitmq.client.ConnectionFactory cf = new com.rabbitmq.client.ConnectionFactory();
    cf.setHost("localhost");
    cf.setPort(5672);
    cf.setUsername("<my-username>");
    cf.setPassword("<my-password>");
    cf.setVirtualHost("<my-vhost>");
    ConnectionFactory connectionFactory = new CachingConnectionFactory(cf);

    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setExchange("primary");
    rabbitTemplate.setUseDirectReplyToContainer(true);
    rabbitTemplate.setReceiveTimeout(10000);
    rabbitTemplate.setReplyTimeout(10000);
    rabbitTemplate.setUseChannelForCorrelation(true);

    AsyncRabbitTemplate asyncRabbitTemplate = new AsyncRabbitTemplate(rabbitTemplate);
    asyncRabbitTemplate.start();
    System.out.printf("Async Rabbit Template Running? %b\n", asyncRabbitTemplate.isRunning());

    MessageBuilderSupport<MessageProperties> props = MessagePropertiesBuilder.newInstance()
            .setContentType(MessageProperties.CONTENT_TYPE_TEXT_PLAIN)
            .setMessageId(UUID.randomUUID().toString())
            .setHeader(PUBLISH_TIME_HEADER, Instant.now(Clock.systemUTC()).toEpochMilli())
            .setDeliveryMode(MessageDeliveryMode.NON_PERSISTENT);

    asyncRabbitTemplate.sendAndReceive(
            "1.1.1.csv-routing-key",
            new Message(
                    "a,test,csv".getBytes(StandardCharsets.UTF_8),
                    props.build()
            )
    ).addCallback(new ListenableFutureCallback<>() {
        @Override
        public void onFailure(Throwable ex) {
            System.out.printf("Error sending message:\n%s\n", ex.getLocalizedMessage());
        }

        @Override
        public void onSuccess(Message result) {
            System.out.println("Message successfully sent");
        }
    });
}
I am sure that I am just missing a configuration option, but any help would be appreciated.
Thanks. :)
asyncRabbitTemplate.sendAndReceive(..) will always expect a response from the consumer of the message, hence the timeout you are receiving.
To fire and forget, use the standard RabbitTemplate.send(...) and catch any exceptions in a try/catch block:
try {
    rabbitTemplate.send("1.1.1.csv-routing-key",
            new Message(
                    "a,test,csv".getBytes(StandardCharsets.UTF_8),
                    props.build()));
} catch (AmqpException ex) {
    log.error("failed to send rabbit message, routing key = {}", routingKey, ex);
}
Set the reply timeout to a bigger number and see the effect.
rabbitTemplate.setReplyTimeout(60000);
https://docs.spring.io/spring-amqp/reference/html/#reply-timeout
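For completeness, if the request/reply flow is actually wanted, the consumer side must return a reply for sendAndReceive to complete. A minimal sketch using Spring AMQP's @RabbitListener; the queue name here is an assumption for illustration:

// Sketch: the listener's return value is sent to the replyTo address,
// which is what AsyncRabbitTemplate.sendAndReceive(..) is waiting for.
// The queue name "csv-queue" is hypothetical.
@RabbitListener(queues = "csv-queue")
public String handleCsv(String body) {
    return "processed: " + body;
}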

Netty 4 - The pool returns a channel which is not yet ready to send the actual message

I have created an inbound handler of type SimpleChannelInboundHandler and added it to the pipeline. My intention is that every time a connection is established, an application message called the session open message is sent, making the connection ready to carry the actual messages. To achieve this, the inbound handler overrides channelActive(), where the session open message is sent; in response to that I get a session open confirmation message. Only after that should I be able to send any number of actual business messages. I am using a FixedChannelPool, initialised as follows. This works well some of the time on startup. But if the remote host closes the connection and a message is then sent via sendMessage() below, the message goes out before the session open message from channelActive() and its response have been exchanged, so the server ignores the message because the session is not yet open when the business message is sent.
What I am looking for is that the pool should return only channels whose channelActive() event has already sent the session open message and received its session open confirmation from the server. How do I deal with this situation?
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        if (ctx.channel().isWritable()) {
            ctx.channel().writeAndFlush("open session message".getBytes());
        }
    }
}
// At the time of loading the application
public void init() {
    final Bootstrap bootStrap = new Bootstrap();
    bootStrap.group(group).channel(NioSocketChannel.class).remoteAddress(hostname, port);
    fixedPool = new FixedChannelPool(bootStrap, getChannelHandler(), 5);
    // This is done to initialise the connections; channelActive() from the above
    // handler is invoked to keep the sessions open on startup
    for (int i = 0; i < config.getMaxConnections(); i++) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (future.isSuccess()) {
                } else {
                    LOGGER.error(" Channel initialization failed...>>", future.cause());
                }
            }
        });
    }
}
// To actually send a message, the application invokes the following method.
public void sendMessage(final String businessMessage) {
    fixedPool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
                Channel channel = future.get();
                if (channel.isOpen() && channel.isActive() && channel.isWritable()) {
                    channel.writeAndFlush(businessMessage).addListener(new GenericFutureListener<ChannelFuture>() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // success msg
                            } else {
                                // failure msg
                            }
                        }
                    });
                    fixedPool.release(channel);
                }
            } else {
                // Failure
            }
        }
    });
}
If there is no specific reason that you need to use a FixedChannelPool, then you can use another data structure (List/Map) to store the Channels. You can add a channel to the data structure after sending the open session message and remove it in the channelInactive method.
If you need to perform bulk operations on channels, you can use a ChannelGroup for the purpose.
If you still want to use the FixedChannelPool, you may set an attribute on the channel recording whether the open message was sent:
ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
You can get the attribute as follows in your sendMessage function:
boolean sent = ctx.channel().attr(OPEN_MESSAGE_SENT).get();
and in channelInactive you may set it back to false or remove it.
Note that OPEN_MESSAGE_SENT is an AttributeKey:
public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT = AttributeKey.valueOf("OPEN_MESSAGE_SENT");
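Putting these pieces together, a minimal sketch of a guarded sendMessage using that attribute (the retry/queueing policy when the session is not yet open is left as an assumption):

// Sketch: only write on channels that completed the session handshake.
// Assumes the session handler sets OPEN_MESSAGE_SENT to true once the
// session open confirmation has been received.
fixedPool.acquire().addListener((FutureListener<Channel>) future -> {
    if (future.isSuccess()) {
        Channel channel = future.getNow();
        if (Boolean.TRUE.equals(channel.attr(OPEN_MESSAGE_SENT).get())
                && channel.isActive() && channel.isWritable()) {
            channel.writeAndFlush(businessMessage);
        } else {
            // session not open yet: retry later or queue the message
        }
        fixedPool.release(channel);
    }
});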
I know this is a rather old question, but I stumbled across a similar issue. Not quite the same, but my issue was that the ChannelInitializer in Bootstrap.handler was never called.
The solution was to add the pipeline handlers in the pool handler's channelCreated method.
Here is my pool definition code that works now:
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(HTTP_CODEC, new HttpClientCodec());
        pipeline.addLast(HTTP_HANDLER, new NettyHttpClientHandler());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);
So in the getChannelHandler() method I assume you're creating a ChannelPoolHandler. In its channelCreated method you could send your session message (ch.writeAndFlush("open session message".getBytes());), assuming you only need to send the session message once when a connection is created; if you need to send it every time, you could add it to the channelAcquired method instead. A sketch follows below.
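A minimal sketch of such a handler (this would be what getChannelHandler() returns; SessionHandler is the handler from the question):

// Sketch: a pool handler that wires the pipeline and starts the session
// handshake as soon as the pool creates a new channel.
ChannelPoolHandler poolHandler = new AbstractChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) {
        ch.pipeline().addLast(new SessionHandler());
        ch.writeAndFlush("open session message".getBytes());
    }
};
fixedPool = new FixedChannelPool(bootStrap, poolHandler, 5);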

Akka ActorSystem never terminates in Java

I'm using Akka 2.5.6 with Java 8 and I want to know the right way to shut down the ActorSystem. Part of the functionality of my code is to process some XML files and validate them; to achieve this I have created 3 actors:
Controller, Processor and Validator.
The Controller is responsible for initiating the process and sending the files, one by one, together with other information, to the Processor. The Processor then creates a digital signature of the file and sends the response to the Validator, which validates the status and sends an OK message to the Controller. The Controller counts the validated files and compares the count with the total number of files; once the two are equal, I shut down the ActorSystem with the terminate() method.
The shutdown method is as follows:
private void endActors() {
    ActorSystem actorSystem = getContext().system();
    Future<Terminated> terminated = actorSystem.terminate();
    do {
        log.info("Waiting to finish ...");
        try {
            Thread.sleep(30000L);
        } catch (InterruptedException ex) {
            log.error("Error in Thread.");
        }
    } while (!terminated.isCompleted());
    log.info("Actors finished processing.");
}
The loop never ends because the future never completes. I don't know if this is the right way; I hope you have understood me and can help me or give me some advice.
Try the following (the key here is the onComplete). I wrote a class along these lines to use in a setup and teardown for JUnit, to avoid issues from the actor system not fully terminating in the teardown of one test before a new one is created in another test (that caused port-already-in-use issues).
private static ActorSystem system = null;
private static Future<Terminated> terminatedFuture;

public static ActorSystem getFreshActorSystem() {
    tearDownActorSystem();
    while (system != null) {
        try {
            Thread.sleep(500L);
        } catch (InterruptedException e) {
        }
    }
    system = ActorSystem.create();
    return system;
}

public static void tearDownActorSystem() {
    if (system != null && !isInMiddleOfTerminating()) {
        terminatedFuture = system.terminate();
        terminatedFuture.onComplete(new OnComplete<Terminated>() {
            @Override
            public void onComplete(Throwable failure, Terminated success) throws Throwable {
                system = null;
                terminatedFuture = null;
            }
        }, system.dispatcher());
    }
}

private static boolean isInMiddleOfTerminating() {
    return terminatedFuture != null;
}
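If a blocking wait is preferred (closer to what the question's loop attempts), scala.concurrent.Await can be used on the termination future; a minimal sketch:

import java.util.concurrent.TimeUnit;
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;

// Blocks the calling thread until the ActorSystem has fully terminated,
// instead of polling isCompleted() with Thread.sleep.
Await.result(actorSystem.terminate(), Duration.create(30, TimeUnit.SECONDS));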

Stopping a Camel route from outside the route

We have a data processing application that runs on Karaf 2.4.3 with Camel 2.15.3.
In this application, we have a bunch of routes that import data. We have a management view that lists these routes and where each route can be started. Those routes do not directly import data, but call other routes (some of them in other bundles, called via direct-vm), sometimes directly and sometimes in a splitter.
Is there a way to also completely stop a route, and thereby stop the entire exchange from being processed further?
When simply using the stopRoute function like this:
route.getRouteContext().getCamelContext().stopRoute(route.getId());
I eventually get a success message, Graceful shutdown of 1 routes completed in 10 seconds, but the exchange is still being processed...
So I tried to mimic the behaviour of the StopProcessor by setting the stop property, but that also didn't help:
public void stopRoute(Route route) {
    try {
        Collection<InflightExchange> browse = route.getRouteContext().getCamelContext().getInflightRepository()
                .browse();
        for (InflightExchange inflightExchange : browse) {
            String exchangeRouteId = inflightExchange.getRouteId();
            if ((exchangeRouteId != null) && exchangeRouteId.equals(route.getId())) {
                this.stopExchange(inflightExchange.getExchange());
            }
        }
    } catch (Exception e) {
        Notification.show("Error while trying to stop route", Type.ERROR_MESSAGE);
        LOGGER.error(e, e);
    }
}

public void stopExchange(Exchange exchange) throws Exception {
    AsyncProcessorHelper.process(new AsyncProcessor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            AsyncProcessorHelper.process(this, exchange);
        }

        @Override
        public boolean process(Exchange exchange, AsyncCallback callback) {
            exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
            callback.done(true);
            return true;
        }
    }, exchange);
}
Is there any way to completely stop an exchange from being processed from outside the route?
Can you get hold of the exchange?
I use exchange.setProperty(Exchange.ROUTE_STOP, true);
The route stops the flow and the exchange doesn't go to the next step.
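For example, the property can be set from a processor step; a minimal sketch with hypothetical route URIs:

// Sketch: setting ROUTE_STOP marks the exchange so Camel stops routing it,
// so the endpoint below is never reached.
from("direct:example")
    .process(exchange -> exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE))
    .to("log:never-reached");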

netty 3.5.7 Channel.close() throws an exception, causing the CloseFuture to never be notified

I'm digging into a bug in my netty program: I use a heartbeat handler between the server and client. When the client system reboots, the heartbeat handler on the server side notices the timeout and closes the Channel, but sometimes the listener registered on the Channel's CloseFuture is never notified, which is weird.
After digging into the netty 3.5.7 source code, I figured out that the only way a Channel's CloseFuture gets notified is through AbstractChannel.setClosed(). Maybe this method is not executed when the Channel is closed; see below:
NioServerSocketPipelineSink:
private static void close(NioServerSocketChannel channel, ChannelFuture future) {
    boolean bound = channel.isBound();
    try {
        if (channel.socket.isOpen()) {
            channel.socket.close();
            Selector selector = channel.selector;
            if (selector != null) {
                selector.wakeup();
            }
        }
        // Make sure the boss thread is not running so that the future
        // is notified after a new connection cannot be accepted anymore.
        // See NETTY-256 for more information.
        channel.shutdownLock.lock();
        try {
            if (channel.setClosed()) {
                future.setSuccess();
                if (bound) {
                    fireChannelUnbound(channel);
                }
                fireChannelClosed(channel);
            } else {
                future.setSuccess();
            }
        } finally {
            channel.shutdownLock.unlock();
        }
    } catch (Throwable t) {
        future.setFailure(t);
        fireExceptionCaught(channel, t);
    }
}
On some platforms channel.socket.close() may throw an IOException, which means channel.setClosed() may never be executed, so the listener registered on the CloseFuture may never be notified.
Here is my question: have you ever encountered this problem? Is the analysis right?
I figured out that my heartbeat handler caused the problem: it never timed out, so it never closed the channel. The following runs in a timer:
if ((now - lastReadTime > heartbeatTimeout)
        && (now - lastWriteTime > heartbeatTimeout)) {
    getChannel().close();
    stopHeartbeatTimer();
}
where lastReadTime and lastWriteTime are updated as follows:
@Override
public void writeComplete(ChannelHandlerContext ctx, WriteCompletionEvent e)
        throws Exception {
    lastWriteTime = System.currentTimeMillis();
    super.writeComplete(ctx, e);
}

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    lastReadTime = System.currentTimeMillis();
    super.messageReceived(ctx, e);
}
The remote client is Windows XP, the current server is Linux, both on JDK 1.6.
I think writeComplete is still invoked internally after the remote client's system reboots, although messageReceived is not invoked, and no IOException is thrown during this period.
I will redesign the heartbeat handler: attach a timestamp and a HEART_BEAT flag to the heartbeat packet; when the peer side receives the packet, it sends the packet back with the same timestamp and an ACK_HEART_BEAT flag; when the current side receives this ack packet, it uses the timestamp to update lastWriteTime.
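A minimal sketch of that redesign, where HeartbeatPacket and its helpers are hypothetical message types carrying the flag and timestamp:

// Timer task: send a HEART_BEAT packet carrying the current timestamp.
private void sendHeartbeat() {
    getChannel().write(HeartbeatPacket.heartbeat(System.currentTimeMillis()));
}

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    Object msg = e.getMessage();
    if (msg instanceof HeartbeatPacket) {
        HeartbeatPacket packet = (HeartbeatPacket) msg;
        if (packet.isHeartbeat()) {
            // Echo the packet back with the same timestamp and the ACK flag set.
            ctx.getChannel().write(packet.toAck());
        } else if (packet.isAck()) {
            // Only a completed round trip refreshes write-side liveness, so
            // writeComplete alone can no longer mask a dead peer.
            lastWriteTime = packet.getTimestamp();
        }
        return;
    }
    lastReadTime = System.currentTimeMillis();
    super.messageReceived(ctx, e);
}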
