Stopping a Camel route from outside the route - java

We have a data processing application that runs on Karaf 2.4.3 with Camel 2.15.3.
In this application, we have a bunch of routes that import data. We have a management view that lists these routes and lets each route be started. Those routes do not import data directly, but call other routes (some of them in other bundles, invoked via direct-vm), sometimes directly and sometimes in a splitter.
Is there a way to also completely stop a route and thereby stop the in-flight exchange from being processed any further?
When simply using the stopRoute function like this:
route.getRouteContext().getCamelContext().stopRoute(route.getId());
I eventually get a success message with Graceful shutdown of 1 routes completed in 10 seconds - the exchange is still being processed though...
So I tried to mimic the behaviour of the StopProcessor by setting the stop property, but that also didn't help:
public void stopRoute(Route route) {
    try {
        Collection<InflightExchange> browse = route.getRouteContext().getCamelContext()
                .getInflightRepository().browse();
        for (InflightExchange inflightExchange : browse) {
            String exchangeRouteId = inflightExchange.getRouteId();
            if ((exchangeRouteId != null) && exchangeRouteId.equals(route.getId())) {
                this.stopExchange(inflightExchange.getExchange());
            }
        }
    } catch (Exception e) {
        Notification.show("Error while trying to stop route", Type.ERROR_MESSAGE);
        LOGGER.error(e, e);
    }
}
public void stopExchange(Exchange exchange) throws Exception {
    AsyncProcessorHelper.process(new AsyncProcessor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            AsyncProcessorHelper.process(this, exchange);
        }

        @Override
        public boolean process(Exchange exchange, AsyncCallback callback) {
            exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
            callback.done(true);
            return true;
        }
    }, exchange);
}
Is there any way to completely stop an exchange from being processed from outside the route?

Can you get hold of the exchange? I use
exchange.setProperty(Exchange.ROUTE_STOP, true);
The exchange then stops flowing through the route and does not go to the next step.
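For reference, here is a minimal sketch of that approach inside a route (the route id, the endpoint URIs and the stopRequested flag are placeholders, not taken from the question):
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class StoppableImportRoute extends RouteBuilder {
    // hypothetical flag toggled by the management view
    private final AtomicBoolean stopRequested = new AtomicBoolean(false);

    @Override
    public void configure() {
        from("direct-vm:import").routeId("importRoute")
            .process(exchange -> {
                // When the flag is set, Camel stops routing this exchange
                // and does not pass it to any further processor or endpoint.
                if (stopRequested.get()) {
                    exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
                }
            })
            .to("direct-vm:actualImport");
    }
}
Note that this only stops the individual exchange that hits the processor; stopping the route itself still requires stopRoute() as in the question.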

Related

How to handle backpressure when using apache camel and Kafka?

I am trying to write an application that will integrate with Kafka using Camel. (Version - 3.4.2)
I have an approach borrowed from the answer to this question.
I have a route that listens for messages from a Kafka topic. The processing of each message is decoupled from consumption by using a simple executor: each processing step is submitted as a task to this executor. The ordering of the messages is not important and the only concern is how quickly and efficiently a message can be processed. I have disabled auto-commit and manually commit the messages once the tasks are submitted to the executor. Losing the messages that are currently being processed (due to a crash/shutdown) is okay, but the ones in Kafka that have never been submitted for processing should not be lost (due to committing of the offset). Now to the questions:
How can I efficiently handle the load? For example, there are 1000 messages but I can only process 100 in parallel at a time.
Right now my solution is to block the consumer polling thread and keep trying to submit the job. Suspending polling would be a much better approach, but I cannot find any way to achieve that in Camel.
Is there a better way (Camel way) to decouple processing from consumption and handle backpressure?
public static void main(String[] args) throws Exception {
    String consumerId = System.getProperty("consumerId", "1");
    ExecutorService executor = new ThreadPoolExecutor(100, 100, 0L, TimeUnit.MILLISECONDS,
            new SynchronousQueue<>());
    LOGGER.info("Consumer {} starting....", consumerId);
    Main main = new Main();
    main.init();
    CamelContext context = main.getCamelContext();
    ComponentsBuilderFactory.kafka().brokers("localhost:9092").metadataMaxAgeMs(120000).groupId("consumer")
            .autoOffsetReset("earliest").autoCommitEnable(false).allowManualCommit(true).maxPollRecords(100)
            .register(context, "kafka");
    ConsumerBean bean = new ConsumerBean();
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            from("kafka:test").process(exchange -> {
                LOGGER.info("Consumer {} - Exchange is {}", consumerId, exchange.getIn().getHeaders());
                processTask(exchange);
                commitOffset(exchange);
            });
        }

        private void processTask(Exchange exchange) throws InterruptedException {
            try {
                executor.submit(() -> bean.execute(exchange.getIn().getBody(String.class)));
            } catch (Exception e) {
                LOGGER.error("Exception occurred {}", e.getMessage());
                Thread.sleep(1000);
                processTask(exchange);
            }
        }

        private void commitOffset(Exchange exchange) {
            boolean lastOne = exchange.getIn().getHeader(KafkaConstants.LAST_RECORD_BEFORE_COMMIT, Boolean.class);
            if (lastOne) {
                KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT,
                        KafkaManualCommit.class);
                if (manual != null) {
                    LOGGER.info("manually committing the offset for batch");
                    manual.commitSync();
                }
            } else {
                LOGGER.info("NOT time to commit the offset yet");
            }
        }
    });
    main.run();
}
You can use the Throttle EIP for this purpose.
from("your uri here")
    .throttle(maxRequestCount)
    .timePeriodMillis(inTimePeriodMs)
    .to(yourProcessorUri)
    .end()
Please take a look at the Throttle EIP documentation.
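Applied to the Kafka route from the question, a sketch could look like this (the limit of 100 exchanges per second is an illustrative value to tune, and the processTask/commitOffset helpers are the ones from the question's RouteBuilder):
from("kafka:test")
    .throttle(100)              // at most 100 exchanges ...
    .timePeriodMillis(1000)     // ... per 1000 ms
    .process(exchange -> {
        processTask(exchange);
        commitOffset(exchange);
    });
By default the throttle blocks the calling thread, which in practice slows down how fast the consumer keeps polling.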

Netty 4 - The pool returns a channel which is not yet ready to send the actual message

I have created an inbound handler of type SimpleChannelInboundHandler and added it to the pipeline. My intention is that every time a connection is established, an application message called the session open message is sent, and only then is the connection ready to send actual messages. To achieve this, the inbound handler overrides channelActive(), where the session open message is sent; in response I get a session open confirmation message. Only after that should I be able to send any number of actual business messages. I am using FixedChannelPool, initialised as shown below. This works fine for a while after startup. But if the remote host closes the connection and a message is then sent via the sendMessage() method below, that message is sent even before the session open message from channelActive() has gone out and its response has been received. The server therefore ignores the message, because the session is not yet open when the business message arrives.
What I am looking for is that the pool should only return channels whose channelActive() has already sent the session open message and which have received the session open confirmation from the server. How do I deal with this situation?
public class SessionHandler extends SimpleChannelInboundHandler<byte[]> {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);
        if (ctx.channel().isWritable()) {
            ctx.channel().writeAndFlush("open session message".getBytes());
        }
    }
}
// At the time of loading the application
public void init() {
    final Bootstrap bootStrap = new Bootstrap();
    bootStrap.group(group).channel(NioSocketChannel.class).remoteAddress(hostname, port);
    fixedPool = new FixedChannelPool(bootStrap, getChannelHandler(), 5);
    // This is done to initialise the connections; channelActive() from the above handler
    // is invoked to open the session on startup
    for (int i = 0; i < config.getMaxConnections(); i++) {
        fixedPool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) throws Exception {
                if (future.isSuccess()) {
                } else {
                    LOGGER.error("Channel initialization failed...>>", future.cause());
                }
            }
        });
    }
}
// To actually send a message, the application invokes the following method.
public void sendMessage(final String businessMessage) {
    fixedPool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) throws Exception {
            if (future.isSuccess()) {
                Channel channel = future.get();
                if (channel.isOpen() && channel.isActive() && channel.isWritable()) {
                    channel.writeAndFlush(businessMessage).addListener(new GenericFutureListener<ChannelFuture>() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // success msg
                            } else {
                                // failure msg
                            }
                        }
                    });
                    fixedPool.release(channel);
                }
            } else {
                // Failure
            }
        }
    });
}
If there is no specific reason that you need to use a FixedChannelPool, you can use another data structure (List/Map) to store the Channels. You can add a channel to the data structure after sending the open session message and remove it in the channelInactive method.
If you need to perform bulk operations on channels you can use a ChannelGroup for the purpose.
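A minimal sketch of that idea with a ChannelGroup (readyChannels is an assumed field; the point at which you add the channel is wherever you receive the session open confirmation):
// field holding only channels whose session-open handshake has completed
private final ChannelGroup readyChannels = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

// in the handler, once the session open confirmation message arrives:
readyChannels.add(ctx.channel());

// in channelInactive(), drop the channel again:
readyChannels.remove(ctx.channel());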
If you still want to use the FixedChannelPool, you may set an attribute on the channel indicating whether the open message was sent:
ctx.channel().attr(OPEN_MESSAGE_SENT).set(true);
you can get the attribute as follows in your sendMessage function:
boolean sent = ctx.channel().attr(OPEN_MESSAGE_SENT).get();
and in the channelInactive you may set the same to false or remove it.
Note OPEN_MESSAGE_SENT is an AttributeKey:
public static final AttributeKey<Boolean> OPEN_MESSAGE_SENT = AttributeKey.valueOf("OPEN_MESSAGE_SENT");
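Putting those pieces together, sendMessage() could skip channels whose session is not open yet, roughly like this (only a sketch; the retry strategy for a not-yet-ready channel is up to you):
fixedPool.acquire().addListener((FutureListener<Channel>) future -> {
    if (future.isSuccess()) {
        Channel channel = future.getNow();
        Boolean sessionOpen = channel.attr(OPEN_MESSAGE_SENT).get();
        if (Boolean.TRUE.equals(sessionOpen) && channel.isActive() && channel.isWritable()) {
            channel.writeAndFlush(businessMessage);
        } else {
            // session not open (or already closed): do not send, retry or queue the message instead
        }
        fixedPool.release(channel);
    }
});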
I know this is a rather old question, but I stumbled across a similar issue; not quite the same, but in my case the ChannelInitializer in Bootstrap.handler() was never called.
The solution was to add the pipeline handlers to the pool handler's channelCreated method.
Here is my pool definition code that works now:
pool = new FixedChannelPool(httpBootstrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(HTTP_CODEC, new HttpClientCodec());
        pipeline.addLast(HTTP_HANDLER, new NettyHttpClientHandler());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 10);
So in the getChannelHandler() method I assume you are creating a ChannelPoolHandler; in its channelCreated method you could send your session message (ch.writeAndFlush("open session message".getBytes());), assuming you only need to send the session message once when a connection is created. If you need to send the session message every time a channel is acquired, you could add it to the channelAcquired method instead.
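For example, a sketch reusing the bootstrap, SessionHandler and pool size from the question:
fixedPool = new FixedChannelPool(bootStrap, new ChannelPoolHandler() {
    @Override
    public void channelCreated(Channel ch) throws Exception {
        ch.pipeline().addLast(new SessionHandler());
        // send the session open message once, right after the connection is created
        ch.writeAndFlush("open session message".getBytes());
    }

    @Override
    public void channelAcquired(Channel ch) {
        // NOOP, or re-send the session message here if it must happen on every acquire
    }

    @Override
    public void channelReleased(Channel ch) {
        // NOOP
    }
}, 5);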

Akka ActorSystem never terminates in Java

I'm using Akka 2.5.6 with Java 8 and I want to know the right way to shut down the ActorSystem. Part of the functionality of my code is to process some XML files and validate them; to achieve this I have created 3 actors:
Controller, Processor and Validator.
The Controller is responsible for initiating the process and sending file by file, along with other information, to the Processor. The Processor creates a digital signature of the file and sends the response to the Validator, which finally validates the status and sends an OK message to the Controller. The Controller counts the number of validated files and compares it with the total number of files. Once the two are equal, I try to finish the ActorSystem with the terminate() method.
The method to finish is as follows:
private void endActors() {
    ActorSystem actorSystem = getContext().system();
    Future<Terminated> terminated = actorSystem.terminate();
    do {
        log.info("Waiting to finish ...");
        try {
            Thread.sleep(30000L);
        } catch (InterruptedException ex) {
            log.error("Error in Thread.");
        }
    } while (!terminated.isCompleted());
    log.info("Actors finished processing.");
}
The loop never ends because the future never completes. I don't know if this is the right way; I hope you have understood me and can help me or give me some advice.
Try the following (the key here is the onComplete). I wrote a class along these lines to use in setup and teardown for JUnit, to avoid issues from the actor system not fully terminating in the teardown of one test before being created in another test (that caused "port already in use" issues).
private static ActorSystem system = null;
private static Future<Terminated> terminatedFuture;

public static ActorSystem getFreshActorSystem() {
    tearDownActorSystem();
    while (system != null) {
        try {
            Thread.sleep(500L);
        } catch (InterruptedException e) {
        }
    }
    system = ActorSystem.create();
    return system;
}

public static void tearDownActorSystem() {
    if (system != null && !isInMiddleOfTerminating()) {
        terminatedFuture = system.terminate();
        terminatedFuture.onComplete(new OnComplete() {
            @Override
            public void onComplete(Throwable failure, Object success) throws Throwable {
                system = null;
                terminatedFuture = null;
            }
        }, system.dispatcher());
    }
}

private static boolean isInMiddleOfTerminating() {
    return terminatedFuture != null;
}
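A usage sketch in a JUnit 4 test, assuming the static methods above live in a helper class (ActorSystemTestHelper is a made-up name):
public class MyActorTest {
    private ActorSystem system;

    @Before
    public void setUp() {
        system = ActorSystemTestHelper.getFreshActorSystem();
    }

    @After
    public void tearDown() {
        ActorSystemTestHelper.tearDownActorSystem();
    }

    @Test
    public void doesSomething() {
        // create actors with system.actorOf(...) and assert on their behaviour
    }
}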

How to use one Apache Camel context object across Java RESTful services

I need to perform all operations, like creating and deleting the Quartz2 scheduler, on only one Apache Camel context from a RESTful service. When I try the following code, each request creates a new context object. I do not know how to fix this or where I need to initialise the Apache Camel context object.
This is my Java RESTful service, which calls the Quartz scheduler:
#Path("/remainder")
public class RemainderResource {
private static org.apache.log4j.Logger log = Logger.getLogger(RemainderResource.class);
RemainderScheduler remainderScheduler=new RemainderScheduler();
CamelContext context = new DefaultCamelContext();
#POST
#Path("/beforeday/{day}")
public void create(#PathParam("day") int day,final String userdata)
{
log.debug("the starting process of the creating the Remainder");
JSONObject data=(JSONObject) JSONSerializer.toJSON(userdata);
String cronExp=data.getString("cronExp");
remainderScheduler.create(cronExp,day,context);
}
}
This is my Java class which schedules the job:
public class RemainderScheduler {
    private static org.apache.log4j.Logger log = Logger.getLogger(RemainderScheduler.class);

    public void sendRemainder(int day) {
        log.debug("the starting of the sending the Remainder to user");
    }

    public RouteBuilder createMyRoutes(final String cronExp, final int day) {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                log.debug("Before set schedulling");
                from("quartz2://RemainderGroup/Remainder?cron=" + cronExp + "&deleteJob=true&job.name='RemainderServices'")
                    .bean(new RemainderScheduler(), "sendRemainder('" + day + "')")
                    .routeId("Remainder")
                    .process(new Processor() {
                        @Override
                        public void process(Exchange exchange) throws Exception {
                        }
                    });
                log.debug("after set schedulling");
            }
        };
    }

    public void stopService(CamelContext context) {
        log.debug("this is going to be stop the route");
        try {
            context.stopRoute("Remainder");
            context.removeRoute("Remainder");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void create(final String cronExp, final int day, CamelContext context) {
        try {
            // if the route already exists, stop it first
            if (context.getRoute("Remainder") != null)
                stopService(context);
            log.debug("the starting of the process for creating the Remaider Services");
            context.addRoutes(createMyRoutes(cronExp, day));
            context.start();
            log.debug("the status for removing the services is" + context.removeRoute("Remainder"));
        } catch (Exception e) {
            System.out.println(e.toString());
            e.printStackTrace();
        }
    }
}
If I execute the above code, each RESTful request creates a new context object and starts the job scheduling on a new Apache Camel context. If I then send a request to stop the route, it also creates a new Camel context, so I am not able to reset or stop the Quartz2 scheduler.
It is not good practice to create a Camel context per request.
I suggest you use camel-restlet or camel-cxfrs to delegate the create and delete scheduler requests to a single, separate Camel context.
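If you do not want to introduce another Camel route for that, an alternative (not part of the suggestion above, just a sketch) is to create the CamelContext once and share it, for example via a simple singleton holder, instead of instantiating it as a field of the resource class:
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public final class SharedCamelContext {
    private static final CamelContext CONTEXT = new DefaultCamelContext();

    private SharedCamelContext() {
    }

    public static synchronized CamelContext get() throws Exception {
        // start lazily on first use; subsequent calls return the same instance
        if (!CONTEXT.getStatus().isStarted()) {
            CONTEXT.start();
        }
        return CONTEXT;
    }
}
The RemainderResource would then call SharedCamelContext.get() instead of new DefaultCamelContext(), so every request operates on the same context and stopService() can actually find and stop the "Remainder" route.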

Camel: stop the route when the jdbc connection loss is detected

I have an application with the following route:
from("netty:tcp://localhost:5150?sync=false&keepAlive=true")
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
This route receives a new message every 59 milliseconds. I want to stop the route when the connection to the database is lost, before a second message arrives. And above all, I never want to lose a message.
I proceeded this way:
I added an errorHandler:
errorHandler(deadLetterChannel("direct:backup")
    .redeliveryDelay(5L)
    .maximumRedeliveries(1)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .logExhausted(false));
My errorHandler tries to redeliver the message and if it fails again, it redirects the message to a deadLetterChannel.
The following deadLetterChannel will stop the tcp.input route and try to redeliver the message to the database:
RoutePolicy policy = new StopRoutePolicy();

from("direct:backup")
    .routePolicy(policy)
    .errorHandler(
        defaultErrorHandler()
            .redeliveryDelay(1000L)
            .maximumRedeliveries(-1)
            .retryAttemptedLogLevel(LoggingLevel.ERROR))
    .to("jdbc:mydb");
Here is the code of the routePolicy:
public class StopRoutePolicy extends RoutePolicySupport {
    private static final Logger LOG = LoggerFactory.getLogger(String.class);

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
My problems with this method are:
In my "direct:backup" route, if I set the maximumRedeliveries to -1 the route tcp.input will never stop
I'm losing messages during the stop
This method of detecting the connection loss and stopping the route takes too long
Please, does anybody have an idea how to make this faster, or how to do this differently so that no messages are lost?
I have finally found a way to resolve my problems. In order to make the application faster, I added asynchronous processes and multithreading with seda.
from("netty:tcp://localhost:5150?sync=false&keepAlive=true").to("seda:break");
from("seda:break").threads(5)
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
I did the same with the backup route.
from("seda:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
).threads(2).to("jdbc:mydb");
And I modified the routePolicy like that:
public class StopRoutePolicy extends RoutePolicySupport {
    private static final Logger LOG = LoggerFactory.getLogger(String.class);

    @Override
    public void onExchangeBegin(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
            try {
                exchange.getContext().getInflightRepository().remove(exchange);
                LOG.info("STOP ROUTE: {}", stop);
                context.stopRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        String stop = "tcp.input";
        CamelContext context = exchange.getContext();
        if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStopped()) {
            try {
                LOG.info("RESTART ROUTE: {}", stop);
                context.startRoute(stop);
            } catch (Exception e) {
                getExceptionHandler().handleException(e);
            }
        }
    }
}
With these updates, the TCP route is stopped before the backup route is processed. And the route is restarted when the jdbc connection is back.
Now, thanks to Camel, the application is able to handle a database failure without losing messages and without manual intervention.
I hope this could help you.
