I am configuring Kafka as a source in my RouteBuilder. My goal is to handle Kafka disconnection issues. My RouteBuilder is as follows:
new RouteBuilder() {
    public void configure() {
        onException(Exception.class).process(exchange -> {
            final Exception exception = exchange.getException();
            logger.error(exception.getMessage());
            // will do more processing here
        });

        from(String.format("kafka:%s?brokers=%s:%s", topicName, host, port))
            .bean(getMyService(), "myMethod")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    // some more processing
                }
            });
    }
};
I provided a wrong host and port and expected to see an exception. However, no exception appears in the log, and the onException processing is never called.
Any idea what I am doing wrong?
A similar problem can be reproduced by running https://github.com/apache/camel/blob/master/examples/camel-example-kafka/src/main/java/org/apache/camel/example/kafka/MessageConsumerClient.java locally without any Kafka server running. Doing so results in a constant flow of messages:
Connection to node -1 could not be established. Broker may not be available.
Is there a way to have an exception thrown?
Any help would be appreciated.
onException in the RouteBuilder is only triggered when there is a message to route; since you cannot connect to the Kafka cluster, no message ever arrives. That's why you don't see the exception handled.
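If you need consumer-side failures to reach your route's error handling at all, one thing you could experiment with (a sketch only; bridgeErrorHandler is a generic Camel consumer option, and whether it surfaces low-level Kafka client connection errors depends on your Camel version) is bridging the consumer's error handler into the route:

from(String.format("kafka:%s?brokers=%s:%s&bridgeErrorHandler=true", topicName, host, port))
    .bean(getMyService(), "myMethod"); // exceptions thrown by the consumer are now passed to onException / the error handler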
This is a good example of how tricky Apache Camel can be. I'm working on a project that uses Apache Camel Kafka, and I can see how badly this is designed. Every Kafka parameter has a corresponding Camel URI query parameter. What if Kafka introduces a new configuration parameter and Apache Camel is not updated with a matching query parameter? Then there is no way to use that Kafka parameter at all! It's insane.
An example of such a Kafka configuration parameter is client.dns.lookup (which I need to set to 'use_all_dns_ips'), introduced in Kafka 2.1. There is no Apache Camel URI query parameter to set it!
SOLUTION: Replace Apache Camel Kafka with Spring Kafka.
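For illustration, a minimal Spring Kafka consumer-factory sketch (the broker address and group id are placeholders) showing that any raw Kafka client property, including client.dns.lookup, can be passed straight through:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put("client.dns.lookup", "use_all_dns_ips"); // any raw Kafka property can be set directly
    return new DefaultKafkaConsumerFactory<>(props);
}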
I have started working with Spring Integration to send messages to an external system using the Spring Integration Google Pub/Sub model.
I am sending the payload received by the service activator as below:
@ServiceActivator(inputChannel = "inputChannel")
public void messageReceiver(final String payloadMessage) throws IOException {
    adapter.sendData(payloadMessage); // send payloadMessage data to external system, add exception handlers
}
What I want is to implement exception handling for adapter.sendData(payloadMessage) that covers various scenarios such as:
The external system being down
Network issues connecting from my system to the external system
I have been following the Google Cloud documentation below and other online documentation, but have not found a sufficient use case that handles the above scenarios:
https://cloud.google.com/pubsub/docs/spring#using-spring-integration-channel-adapters
Considering the above scenarios, I would like to implement exception handling in such a way that data is not lost when exceptions occur, and the external system still receives the data after some period of time.
I have configured the error channel below. Now, when there is any error in the sendData() method, I see the same failure messages repeating in the Eclipse console. Is there any need to add the parameter spring.cloud.gcp.pubsub.subscriber.max-ack-extension-period in the YAML?
@Bean
public PubSubInboundChannelAdapter messageChannelAdapter(
        final @Qualifier("myInputChannel") MessageChannel inputChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter = new PubSubInboundChannelAdapter(pubSubTemplate, pubSubSubscriptionName);
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.AUTO_ACK);
    adapter.setErrorChannelName("pubsubErrors");
    return adapter;
}

@ServiceActivator(inputChannel = "pubsubErrors")
public void pubsubErrorHandler(Message<MessagingException> exceptionMessage) {
    BasicAcknowledgeablePubsubMessage originalMessage = (BasicAcknowledgeablePubsubMessage) exceptionMessage
            .getPayload().getFailedMessage().getHeaders().get(GcpPubSubHeaders.ORIGINAL_MESSAGE);
    originalMessage.nack();
}
Sounds like you need some retry and backoff logic around your exceptions.
See more info in docs: https://docs.spring.io/spring-integration/reference/html/messaging-endpoints.html#message-handler-advice-chain.
The @ServiceActivator annotation has an adviceChain attribute for your consideration.
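A minimal sketch of that approach, assuming spring-retry is on the classpath; the bean name retryAdvice and the retry/backoff values are illustrative only:

import java.io.IOException;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));      // up to 5 attempts
    ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
    backOff.setInitialInterval(2_000);                           // start at 2 seconds
    backOff.setMultiplier(2.0);                                  // double the wait on each retry
    backOff.setMaxInterval(60_000);                              // cap at 60 seconds
    retryTemplate.setBackOffPolicy(backOff);
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

@ServiceActivator(inputChannel = "inputChannel", adviceChain = "retryAdvice")
public void messageReceiver(final String payloadMessage) throws IOException {
    adapter.sendData(payloadMessage); // now retried with backoff when it throws
}

If the retries are exhausted, the failure still ends up on your pubsubErrors channel, where the nack() in your handler lets Pub/Sub redeliver the message later.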
I'm using Camel in a Spring application. What I need is to properly shut down my application after Camel has sent all its data. Basically, Camel has to read a file, split it into rows, and send each row as a Kafka message.
How can I shut down my application after Camel has finished sending all the messages?
Here is my route:
public class Router extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Kafka Producer
        from("file:{{file.dir}}?fileName={{file.name}}&noop=true")
            .split(body().tokenize("\r\n|\n")).streaming()
            .to("kafka:{{kafka.topic}}?brokers={{spring.kafka.producer.bootstrap-servers}}");
    }
}
Basically, you can call System.exit:
from("file:{{file.dir}}?fileName={{file.name}}&noop=true").routeId("fileconsumer")
.split(body().tokenize("\r\n|\n")).streaming()
.to("kafka:{{kafka.topic}}?brokers={{spring.kafka.producer.bootstrap-servers}}")
.process(exchange -> System.exit(0));
Or you can stop the Camel route:
CamelContext context = exchange.getContext();
context.stopRoute("fileconsumer");
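A sketch of that second option, assuming the Camel 2.x stopRoute API (in Camel 3.x it is context.getRouteController().stopRoute(...)); Camel recommends stopping a route from a separate thread rather than from inside the exchange the route is still processing:

.process(exchange -> {
    CamelContext camelContext = exchange.getContext();
    // stop asynchronously so the current exchange can complete first
    new Thread(() -> {
        try {
            camelContext.stopRoute("fileconsumer");
        } catch (Exception e) {
            // log and decide how to react if the shutdown fails
        }
    }).start();
});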
I'm trying to write a basic ActiveMQ client to listen to a topic. I'm using Spring Boot ActiveMQ. I have an implementation built off of various tutorials that uses DefaultJmsListenerContainerFactory, but I am having some issues getting it working properly.
@Configuration
@EnableJms
public class JmsConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsContainerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConcurrency("3-10");
        factory.setConnectionFactory(connectionFactory);
        configurer.configure(factory, connectionFactory);
        factory.setSubscriptionDurable(true);
        factory.setClientId("someUniqueClientId");
        return factory;
    }
}
@JmsListener(destination="someTopic", containerFactory="jmsContainerFactory", subscription="someUniqueSubscription")
public void onMessage(String msg) {
...
}
Everything works fine, until I try to get a durable subscription going. When I do that, I'm finding that with the client id set on the container factory, I get an error about how the client id cannot be set on a shared connection.
Cause: setClientID call not supported on proxy for shared Connection. Set the 'clientId' property on the SingleConnectionFactory instead.
When I change the code to set the client id on the connection factory instead (it's a CachingConnectionFactory wrapping an ActiveMQConnectionFactory), the service starts up successfully, reads a couple messages and then starts consistently outputting this error:
Setup of JMS message listener invoker failed for destination 'someTopic' - trying to recover. Cause: Durable consumer is in use for client: someUniqueClientId and subscriptionName: someUniqueSubscription
I continue to receive messages, but this error is also intermingled in the logs. It seems like this is probably a problem, but I'm really not clear on how to fix it.
I do have a naive implementation of this going without any spring code, using ActiveMQConnectionFactory directly and it seems happy to use a durable consumer (but it has its own different issues). In any case, I don't think it's a lack of support for durable connections on the other side.
I'm hoping someone with more experience in this area can help me figure out if this error is something I can ignore, or alternatively what I need to do to address it.
Thanks!
JMS 1.1 (which is what you're using since you're using ActiveMQ 5.x) doesn't support shared durable subscriptions. Therefore, when you use setConcurrency("3-10") and Spring tries to create > 1 subscription you receive an error. I see two main ways to solve this problem:
Use setConcurrency("1") which will limit the number of subscribers/consumers to 1. Depending on your requirements this could have a severe negative performance impact.
Switch to ActiveMQ Artemis which does support JMS 2.0 and invoke setSubscriptionShared(true).
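A sketch of option 1, reusing the factory from the question; the only change is pinning the concurrency to a single consumer so the durable subscription is never shared:

@Bean
public DefaultJmsListenerContainerFactory jmsContainerFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setConcurrency("1");               // a single consumer, so the durable subscription is not shared
    factory.setSubscriptionDurable(true);
    factory.setClientId("someUniqueClientId"); // safe to keep on the factory with only one consumer
    return factory;
}

If you switch to ActiveMQ Artemis (JMS 2.0), calling factory.setSubscriptionShared(true) instead lets the original 3-10 concurrency work against a shared durable subscription.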
I have gone through multiple posts, but most of them are about handling bad messages, not about exception handling while processing them.
I want to know how to handle messages received by the streams application when there is an exception while processing them. The exception could be due to multiple reasons such as a network failure, a RuntimeException, etc.
Could someone suggest the right way to do this? Should I use setUncaughtExceptionHandler, or is there a better way?
How should retries be handled?
It depends on what you want to do with exceptions on the producer side.
If an exception is thrown on the producer (e.g. due to a network failure or the Kafka broker having died), the stream will die by default. With kafka-streams version 1.1.0 you can override the default behavior by implementing ProductionExceptionHandler like the following:
public class CustomProductionExceptionHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        log.error("Kafka message marked as processed although it failed. Message: [{}], destination topic: [{}]",
                  new String(record.value()), record.topic(), exception);
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
    }
}
From the handle method you can return either CONTINUE if you don't want the stream to die on the exception, or FAIL if you want the stream to stop (FAIL is the default).
And you need to specify this class in the streams config:
default.production.exception.handler=com.example.CustomProductionExceptionHandler
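Or, if you build the configuration programmatically, a sketch assuming a Properties-based setup:

Properties props = new Properties();
// ... other streams settings ...
props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
        CustomProductionExceptionHandler.class);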
Also pay attention that ProductionExceptionHandler handles only exceptions on the producer; it will not handle exceptions thrown during message processing with stream methods such as mapValues(..), filter(..), branch(..), etc. For those you need to wrap the method logic with try/catch blocks (put all your method logic into the try block to guarantee that you handle all exceptional cases):
.filter((key, value) -> { try {..} catch (Exception e) {..} })
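An expanded sketch of the same idea for mapValues; transform(..) stands in for whatever processing you actually do:

stream.mapValues(value -> {
    try {
        return transform(value);          // hypothetical processing step
    } catch (Exception e) {
        log.error("Processing failed for value [{}]", value, e);
        return null;                      // or a sentinel value; filter the nulls out downstream
    }
});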
As far as I know, we don't need to handle exceptions on the consumer side explicitly, as Kafka Streams will automatically retry consuming later (the offset is not changed until the messages are consumed and processed). For example, if the Kafka broker is not reachable for some time, you will get exceptions from Kafka Streams, and when the broker is back up, Kafka Streams will consume all the messages. So in this case we just have a delay, and nothing is corrupted or lost.
With setUncaughtExceptionHandler you are not able to change the default behavior as you can with ProductionExceptionHandler; with it you can only log the error or send the message to a failure topic.
Update since kafka-streams 2.8.0
Since kafka-streams 2.8.0, you have the ability to automatically replace a failed stream thread (caused by an uncaught exception) using the KafkaStreams method void setUncaughtExceptionHandler(StreamsUncaughtExceptionHandler eh); with StreamThreadExceptionResponse.REPLACE_THREAD. For more details please take a look at Kafka Streams Specific Uncaught Exception Handler.
kafkaStreams.setUncaughtExceptionHandler(ex -> {
    log.error("Kafka-Streams uncaught exception occurred. Stream will be replaced with new thread", ex);
    return StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD;
});
For handling exceptions on the consumer side,
1) You can add a default deserialization exception handler with the following property.
"default.deserialization.exception.handler" = "org.apache.kafka.streams.errors.LogAndContinueExceptionHandler";
Basically, Apache provides three handler classes:
1) LogAndContinueExceptionHandler, which you can set as
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndContinueExceptionHandler.class);
2) LogAndFailExceptionHandler
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
LogAndFailExceptionHandler.class);
3) LogAndSkipOnInvalidTimestamp (note: this one is a timestamp extractor rather than a deserialization exception handler, so it is configured via the timestamp extractor property)
props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        LogAndSkipOnInvalidTimestamp.class);
For custom exception handling,
1) You can implement the DeserializationExceptionHandler interface and override the handle() method.
2) Or you can extend the above-mentioned classes.
setUncaughtExceptionHandler doesn't help to handle the exception; it runs after the stream has already terminated due to some exception that was not caught.
Kafka provides a few ways to handle exceptions. A simple try/catch would help catch exceptions in the processor code, but Kafka deserialization exceptions (which can be due to data issues) and production exceptions (which occur during communication with the broker) require a DeserializationExceptionHandler and a ProductionExceptionHandler respectively. By default a Kafka application will fail if it encounters either of these.
You can find an example below.
In Spring Cloud Stream you configure your custom deserialization handler using the following:
spring.cloud.stream.kafka.streams.binder.configuration.default.deserialization.exception.handler=your-package-name.CustomLogAndContinueExceptionHandler
CustomLogAndContinueExceptionHandler extends LogAndContinueExceptionHandler or implements DeserializationExceptionHandler.
CustomLogAndContinueExceptionHandler returns DeserializationHandlerResponse.CONTINUE or FAIL depending on your use case.
@Slf4j
public class CustomLogAndContinueExceptionHandler extends LogAndContinueExceptionHandler {

    @Override
    public DeserializationHandlerResponse handle(ProcessorContext context, ConsumerRecord<byte[], byte[]> record,
                                                 Exception exception) {
        // .... some business logic here ....
        log.error("Message failed: taskId: {}, topic: {}, partition: {}, offset: {}, detail error: {}",
                  context.taskId(), record.topic(), record.partition(), record.offset(), exception.getMessage());
        return DeserializationHandlerResponse.CONTINUE;
    }
}
I seem to get timeout errors after 20 seconds. I have a custom processor that implements Processor. I inject a DAO, and when fetching the data within the custom processor it takes longer to find the data on the Apache Camel side, and it times out. If I run the same code without Apache Camel it runs instantly. Doing a SELECT inside the CustomProcessor takes longer to find the data.
The memory reference for the DAO is the same, so in the test the data is fetched immediately, while the CustomProcessor hangs for 20 seconds before the data is received and an exception is thrown.
I am unable to figure out the cause of the problem.
I have put the code on GitHub: https://github.com/rajivj2/example2
The problem is on line 27 of StatusHibernateDAO. I use an in-memory database with only one table, populated with data.
When using the CustomProcessor without Apache Camel it works perfectly.
I hope you can help.
Since you are using annotation-based configuration, the embedded ActiveMQ broker might be stopped unexpectedly; it's better to use an external broker (e.g. tcp://host:61616).
It seems the QoS settings on the JMS endpoint are not working. You can set the header values from the ProducerTemplate.
Since you are using producerTemplate.requestBody, the ActiveMQ endpoint assumes it is a Request/Reply exchange (InOut), and since there is no response from the route, a timeout occurs. If you want to implement (InOut), follow the instructions at http://camel.apache.org/jms.html. Since your RouteBuilder is InOnly, you need to send the disableReplyTo header from the ProducerTemplate.
Replace your test method testMessageSendToConsumerQueueRemoteId with:
@Transactional
@Test
public void testMessageSendToConsumerQueueRemoteId() throws Exception {
    Status status = new Status();
    status.setUserId(10);
    statusDAO.save(status);

    Endpoint mockEndpoint = this.context.getEndpoint(properties.getProperty("activemq.destination"));
    PollingConsumer consumer = mockEndpoint.createPollingConsumer();

    producerTemplate.sendBodyAndHeader(source,
            "<?xml version='1.0' encoding='UTF-8' standalone='yes'?><example><remoteid>10</remoteid></example>",
            "disableReplyTo", "true");

    Status savedStatus = consumer.receive(100).getIn().getBody(Status.class);
    logger.info("savedStatus " + savedStatus.getID() + " " + savedStatus.getUserId());
    assertNotNull(savedStatus);
}
Replace your ContentEnricherProcessor's process method with:
public void process(Exchange exchange) throws Exception {
    Message message = exchange.getIn();
    Entity entity = (Entity) message.getBody();
    Status status = process(entity);
    message.setBody(status);
    exchange.getOut().setBody(status);
}
And your camel.property file should be
source=activemq:queue:deliverynotification
activemq.location=tcp://localhost:61616
activemq.destination=activemq:queue:responsenotification
If you want to receive back the generated response, you need to change your RouteBuilder.
from(properties.getProperty("source"))
    .process(new ResponseProcessor())
    .inOnly("direct:z")
    .end();

from("direct:z")
    .unmarshal()
    .jaxb("com.example.entities.xml").convertBodyTo(Entity.class)
    .multicast()
    .to("direct:x")
    .end();

from("direct:x").transacted()
    .process((ContentEnricherProcessor) applicationContext.getBean("contentEnricherProcessor"))
    .to(properties.getProperty("activemq.destination"));
Then change the producer patterns like
Status response = producerTemplate.requestBody(source, "<?xml version='1.0' encoding='UTF-8' standalone='yes'?><example><remoteid>11</remoteid></example>",Status.class);
Here is the ResponseProcessor process method
public void process(Exchange exchange) throws Exception {
    Message outMsg = exchange.getIn().copy();
    exchange.setOut(outMsg);
}
Camel ROCKS... you can implement almost any use case or pattern of Enterprise Integration :)