Apache Camel timeout while finding data through a CustomProcessor - java

I am getting timeout errors after 20 seconds. I have a custom processor that implements Processor. I inject a DAO into it, and when the custom processor looks up the data, it takes much longer on the Apache Camel side and times out. If I run the same code without Apache Camel it runs instantly. Doing a SELECT inside the CustomProcessor is what takes so long.
The memory reference for the DAO is the same, so in the test the data is fetched immediately, but the CustomProcessor hangs for 20 seconds before the data is received and then throws an Exception.
I am unable to figure out what is the cause of the problem.
I have put the code on GitHub: https://github.com/rajivj2/example2
The problem is on line 27 of StatusHibernateDAO. I use an in-memory database with only one table, populated with test data.
When using the CustomProcessor without Apache Camel it works perfectly.
I hope you can help.

Since you are using annotation-based configuration, the embedded ActiveMQ broker might be stopped unexpectedly; it is better to use an external broker (e.g. tcp://host:61616).
It seems the QoS settings on the JMS endpoint are not taking effect. You can set the header values from the ProducerTemplate.
Since you use producerTemplate.requestBody, the ActiveMQ endpoint assumes a Request-Reply exchange (InOut), and because there is no response from the route, the timeout occurs. If you want to implement InOut, follow the instructions at http://camel.apache.org/jms.html. Since your RouteBuilder is InOnly, you need to send the disableReplyTo header from the ProducerTemplate.
Replace your test method testMessageSendToConsumerQueueRemoteId with:
@Transactional
@Test
public void testMessageSendToConsumerQueueRemoteId() throws Exception {
    Status status = new Status();
    status.setUserId(10);
    statusDAO.save(status);
    Endpoint mockEndpoint = this.context.getEndpoint(properties.getProperty("activemq.destination"));
    PollingConsumer consumer = mockEndpoint.createPollingConsumer();
    consumer.start(); // a PollingConsumer must be started before receive()
    producerTemplate.sendBodyAndHeader(source,
            "<?xml version='1.0' encoding='UTF-8' standalone='yes'?><example><remoteid>10</remoteid></example>",
            "disableReplyTo", "true");
    Status savedStatus = consumer.receive(100).getIn().getBody(Status.class);
    assertNotNull(savedStatus);
    logger.info("savedStatus " + savedStatus.getID() + " " + savedStatus.getUserId());
}
Replace your ContentEnricherProcessor's process method with:
public void process(Exchange exchange) throws Exception {
    Message message = exchange.getIn();
    Entity entity = (Entity) message.getBody();
    Status status = process(entity);
    message.setBody(status);
    exchange.getOut().setBody(status);
}
And your camel.property file should be
source=activemq:queue:deliverynotification
activemq.location=tcp://localhost:61616
activemq.destination=activemq:queue:responsenotification
If you want to receive back the generated response, you need to change your RouteBuilder.
from(properties.getProperty("source"))
    .process(new ResponseProcessor())
    .inOnly("direct:z")
    .end();

from("direct:z")
    .unmarshal().jaxb("com.example.entities.xml")
    .convertBodyTo(Entity.class)
    .multicast()
    .to("direct:x")
    .end();

from("direct:x")
    .transacted()
    .process((ContentEnricherProcessor) applicationContext.getBean("contentEnricherProcessor"))
    .to(properties.getProperty("activemq.destination"));
Then change the producer call to:
Status response = producerTemplate.requestBody(source, "<?xml version='1.0' encoding='UTF-8' standalone='yes'?><example><remoteid>11</remoteid></example>",Status.class);
Here is the ResponseProcessor process method
public void process(Exchange exchange) throws Exception {
    Message outMsg = exchange.getIn().copy();
    exchange.setOut(outMsg);
}
Camel ROCKS... you can implement almost any use case or pattern of Enterprise Integration :)

Related

Camel: detect Kafka wrong IP

I am configuring Kafka as a source in my RouteBuilder. My goal is to handle Kafka disconnection issues. My RouteBuilder is as follows:
new RouteBuilder() {
    public void configure() {
        onException(Exception.class).process(exchange -> {
            final Exception exception = exchange.getException();
            logger.error(exception.getMessage());
            // will do more processing here
        });
        from(String.format("kafka:%s?brokers=%s:%s", topicName, host, port))
            .bean(getMyService(), "myMethod")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    // some more processing
                }
            });
    }
};
I provided a wrong host and port and expected to see an exception. However, no exception appears in the log, and the onException processing never gets called.
Any idea what I am doing wrong?
A similar problem can be reproduced by running https://github.com/apache/camel/blob/master/examples/camel-example-kafka/src/main/java/org/apache/camel/example/kafka/MessageConsumerClient.java locally without any Kafka server running. Doing so results in a constant flow of messages:
Connection to node -1 could not be established. Broker may not be available.
Is there a way to have an exception thrown?
Any help would be appreciated.
onException in the RouteBuilder is triggered only when there is a message to route, but since you are unable to connect to the Kafka cluster, no message ever arrives. That's why you don't see the exception handled.
It's a good example of how tricky Apache Camel is. I'm working on a project that uses Apache Camel with Kafka, and I see how badly this is designed. Every Kafka parameter has a corresponding Camel URL query parameter. What if Kafka introduces a new configuration parameter and Apache Camel is not updated with a matching query parameter? Then there is no way to use that Kafka parameter at all! It's insane.
An example of such a Kafka configuration parameter is client.dns.lookup (I need to set it to 'use_all_dns_ips'), introduced in Kafka 2.1. There is no Apache Camel URL query parameter to set this!
SOLUTION: Replace Apache Camel Kafka with Spring Kafka.
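To illustrate the point: when you build the consumer configuration map yourself (the plain Kafka client and Spring Kafka's DefaultKafkaConsumerFactory both accept such a map), any broker-supported key can be set directly, including ones Camel's URI syntax has no query parameter for. This is a sketch with placeholder host and group id, not the project's actual configuration:

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaConsumerConfigSketch {
    // Builds a consumer config map. With Spring Kafka this map would be
    // passed to a DefaultKafkaConsumerFactory; with the plain client, to
    // new KafkaConsumer<>(props). Any Kafka setting can be supplied here,
    // including client.dns.lookup, which has no Camel URI parameter.
    static Map<String, Object> consumerConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        props.put("client.dns.lookup", "use_all_dns_ips"); // Kafka 2.1+ option
        return props;
    }
}
```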

What messaging pattern should I use to process and return responses from a rest request?

I have a hub and spoke architecture similar to this:
where a GET request comes into the hub, which routes it to one of the spokes for processing. On the hub I also put the request in a map with a UUID so that I can return the proper response when I get the data back from processing. The spokes are identical and are used to balance the load. I then need to pass the information back from the spoke to the hub and return the proper response.
I would like to do the messaging using JMS.
What is the best combination of integration patterns to accomplish this?
You already have Request/Reply within Vert.x, so you can achieve this behavior with about 20 lines of code:
public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);
    router.get("/").handler((request) -> {
        // When the hub receives a request, it dispatches it to one of the Spokes
        String requestUUID = UUID.randomUUID().toString();
        vertx.eventBus().send("processMessage", requestUUID, (spokeResponse) -> {
            if (spokeResponse.succeeded()) {
                request.response().end("Request " + requestUUID + ":" + spokeResponse.result().body().toString());
            }
            // Handle errors
        });
    });
    // We create two Spokes
    vertx.deployVerticle(new SpokeVerticle());
    vertx.deployVerticle(new SpokeVerticle());
    // This is your Hub
    vertx.createHttpServer().requestHandler(router::accept).listen(8888);
}
And here's what Spoke looks like:
/**
 * Static only for the sake of example
 */
static class SpokeVerticle extends AbstractVerticle {
    private String id;

    @Override
    public void start() {
        this.id = UUID.randomUUID().toString();
        vertx.eventBus().consumer("processMessage", (request) -> {
            // Do something smart
            // Reply
            request.reply("I'm Spoke " + id + " and my reply is 42");
        });
    }
}
Try accessing it in your browser on http://localhost:8888/
You should see that a new request ID is generated every time, while only one of the two Spokes answers each request.
Well, if I understand your design correctly, this seems to be a request/reply scenario, since the spoke actually returns a response. If it didn't, it would be publish/subscribe.
You can use ActiveMQ for jms and request/reply. See here:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
As for the details, it all depends on your requirements: will the response be sent fairly immediately, or is it a long-running process?
If it is a long-running process, you can avoid request/reply and use a fire-and-forget scenario.
Basically, the hub fires a message onto a queue that one of the spoke components is listening on. Once the backend processing is done, the spoke returns the response to a queue monitored by the hub. You correlate the request and response via a correlationId. During the request part, you save the correlationId in a cache so you can match it against the response. In a request/reply scenario this is done for you by the infrastructure, but don't use request/reply for long-running processes.
To summarise:
Use ActiveMQ for your message processing with JMS.
Use Camel for the REST bits.
Use request/reply if you are sure you expect a response fairly rapidly.
Use fire and forget if you expect the response to take a long time, but then you have to match messages by correlationId yourself.
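The correlation cache described above can be sketched in a few lines; the class and method names here are illustrative, not from any existing library:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class CorrelationCache {
    // Pending requests keyed by correlation id: the hub registers an entry
    // before firing the JMS message and completes it when the response
    // arrives on the reply queue.
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Request path: returns the correlation id to put in the JMSCorrelationID
    // header, plus a future the HTTP layer can wait on.
    public String register(CompletableFuture<String> future) {
        String correlationId = UUID.randomUUID().toString();
        pending.put(correlationId, future);
        return correlationId;
    }

    // Response path: called by the listener on the hub's response queue.
    public void complete(String correlationId, String body) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(body);
        }
    }
}
```

The HTTP handler would block (or attach a callback) on the future, with a timeout so abandoned entries can be evicted.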
If you wish to use Camel with JMS, then you should use the Request-Reply EIP, and as far as examples go, Camel's official examples include a pretty good one; it may be a bit old, but it's still very much valid.
While you can ignore the example's Spring-based Camel configuration, its route definitions provide sufficient information:
public class SpokeRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("jms:queue:spoke")
            .process(e -> {
                Object result = ...; // do some processing
                e.getIn().setBody(result); // publish the result
                // Camel will automatically reply if it finds ReplyTo and CorrelationId headers
            });
    }
}
Then all the hub needs to do is invoke:
ProducerTemplate camelTemplate = camelContext.createProducerTemplate();
Object response = camelTemplate.sendBody("jms:queue:spoke", ExchangePattern.InOut, input);

Invoke route from Processor

I'm using Camel to integrate two systems. I have defined different routes, and one of them consumes from a specific RabbitMQ queue and sends the messages to a REST service. Nothing fancy here; the route looks like this:
public class WebSurfingRabbitToRestRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("rabbitmq://rabbit_host:port/Rabbit_Exchange")
            .setHeader("CamelHttpMethod", constant("POST"))
            .setHeader("Content-Type", constant("application/json"))
            .bean(TransformResponse.class, "transform")
            .to("http4://rest_service_host:port/MyRestService");
    }
}
As you can see, I process every message before sending it to the REST service, since I need to adjust some things. The problem is that sometimes (I don't know how or when), the system that publishes to RabbitMQ sends two messages concatenated into one.
What i expect to get is a simple json like this:
[{field1:value1, field2:value2}]
What i sometimes get is:
[{field1:value1, field2:value2},{field1:value3, field2:value4}]
So when I face this scenario, the REST service I'm routing the message to fails (obviously).
To solve this, I would like to know if there is a way to invoke a route from inside a processor. From the previous snippet you can see that I'm calling the transform method, so the idea would be to do something like the following pseudo-code. Since the route has already fired, I can't split the events and send them both within the same route "instance", so I thought about invoking a different route, callable from here, that sends message2 to the very same REST service.
public class TransformRabbitmqResponse {
    public String transform(String body) throws Exception {
        // In here I do stuff with the message
        // Check if I got 2 messages concatenated
        // if body.contains("},{") {
        //     split_messages
        //     InvokeDifferentRoute(message2)
        // }
    }
}
Do you guys think this is possible?
One option (though I am not sure this is the best option) would be to split this up into two different routes using a direct endpoint.
public class WebSurfingRabbitToRestRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("rabbitmq://rabbit_host:port/Rabbit_Exchange")
            .setHeader("CamelHttpMethod", constant("POST"))
            .setHeader("Content-Type", constant("application/json"))
            .bean(TransformResponse.class, "transform");

        from("direct:transformedResponses")
            .to("http4://rest_service_host:port/MyRestService");
    }
}
And then in your transform bean, you can use Camel's ProducerTemplate to publish the transformed payload(s) to the new direct endpoint (assuming you are using JSON?):
producerTemplate.sendBody("direct:transformedResponses", jsonString);
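The split inside the transform bean can be sketched with plain string handling. This is a naive approach that assumes the flat payloads shown in the question (no nested objects), and the class name is made up:

```java
import java.util.ArrayList;
import java.util.List;

public class PayloadSplitter {
    // Splits a concatenated payload like "[{..},{..}]" into individual
    // "[{..}]" messages. Naive: assumes no nested braces, matching the
    // question's examples.
    static List<String> split(String body) {
        String inner = body.substring(1, body.length() - 1); // strip outer [ ]
        List<String> out = new ArrayList<>();
        for (String part : inner.split("\\},\\{")) {
            // re-add the braces consumed by the split
            if (!part.startsWith("{")) part = "{" + part;
            if (!part.endsWith("}")) part = part + "}";
            out.add("[" + part + "]");
        }
        return out;
    }
}
```

Each element returned would then be sent individually, e.g. producerTemplate.sendBody("direct:transformedResponses", part).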

Apache Camel: overriding ExchangePattern.InOut behaviour to return the body - is this possible to do?

I had an issue with Apache Camel timing out after 20 seconds.
I was getting timeout errors after 20 seconds. I have a custom processor that implements Processor. I injected a DAO, and when finding the data within the custom processor it took much longer on the Apache Camel side and timed out. If I run the same code without Apache Camel it runs instantly. Doing a SELECT inside the CustomProcessor was what took so long.
The memory reference for the DAO is the same, so in the test the data is fetched immediately, but the CustomProcessor hangs for 20 seconds before the data is received and then throws an Exception.
I figured out that I need to add ?disableReplyTo=true as below:
source=activemq:queue:notification?disableReplyTo=true
This fixed the timeout issue, but nothing was being added to the queue. When I request the body of the message, it returns the input XML that was sent. By default, adding ?disableReplyTo=true sets the ExchangePattern to InOnly. Is there any way of overriding this to ExchangePattern.OutOnly? I want the message to be sent to the queue, and I also want to get the body of the message that is set through producerTemplate.requestBody().
What I noticed is that, going from one processor to the next, the next processor sees what exchange.setOut() set as its exchange.getIn().
I have put the code on GitHub: https://github.com/rajivj2/example2
I tried something like this:
public class ContentEnricherProcessor implements Processor {

    @Resource
    private StatusDAO statusDAO;

    private Logger logger;

    public ContentEnricherProcessor(StatusDAO statusDAO) {
        this.statusDAO = statusDAO;
        logger = LoggerFactory.getLogger(ContentEnricherProcessor.class);
    }

    public void process(Exchange exchange) throws Exception {
        exchange.setPattern(ExchangePattern.OutOnly); // with producerTemplate.requestBody() this does not work
        Message message = exchange.getIn();
        Entity entity = (Entity) message.getBody();
        Status status = process(entity);
        message.setBody(status);
        exchange.setOut(message);
    }
}
I hope you can help.

NotSerializableException escaping Spring Controller and causing problems with Google App Engine Queue

I have a Spring Controller that is being invoked via an HTTP POST from the GAE Queue Scheduler.
@Controller
@RequestMapping(value = RSSPoller.RSS_POLLER_URL)
public class RSSPoller implements Serializable {

    private static final long serialVersionUID = -4925178778477404709L;
    public static final String RSS_POLLER_URL = "/rsspoller";

    @RequestMapping(method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.OK)
    public void pollAndProcessRssFeed() throws ServiceException {
        try {
            // do some stuff
        }
        catch (Exception e) {
            throw new ServiceException("Can't process RSS feed because", e);
        }
    }
}
However, when it gets invoked, the response code is 500 with a Critical log message of
Uncaught exception from servlet
java.lang.RuntimeException: java.io.NotSerializableException: <some spring class that changes all the time, but does not implement java.io.Serializable>
The same log message shows up in the logs with a Warning level as well.
I get similar warning messages (but not critical ones) in my logs when I invoke other Spring Controllers that either render a web page (GET) or return some XML data (essentially RPC invocations over HTTP POST). When I do an HTTP GET/POST to those URLs, the response code is 200 and the output is correct (and I ignore the warning message in the logs).
That leads me to two questions:
Why do I get the Critical error message/HTTP 500 for the POST from the queue, but not for the GET/POST to the other Spring Controllers in my app?
How can I trap the exception and essentially discard it, since for my purposes the task is complete?
I can post the full exception log if it's of use; for brevity I've omitted it.
You should make your <some spring class that changes all the time, but does not implement java.io.Serializable> itself Serializable (not only the Controller). At least that is what helped in my case.
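The underlying behaviour can be demonstrated with plain Java serialization: an object graph can only be written if every reachable field is Serializable. The class names below are made up for the demonstration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // A field whose type does not implement Serializable aborts
    // serialization of the enclosing object with NotSerializableException.
    static class NotSerializableDep { }

    static class SerializableDep implements Serializable { }

    static class Holder implements Serializable {
        Object dep;
        Holder(Object dep) { this.dep = dep; }
    }

    // Returns true if the object graph can be written with ObjectOutputStream.
    static boolean canSerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false; // some reachable object lacks Serializable
        } catch (IOException e) {
            return false;
        }
    }
}
```

This mirrors the GAE case: the task-queue machinery serializes the handler, so every class it references must implement Serializable.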
