Spring Cloud Stream Multi-Topic Transaction Management (Java)

I'm trying to create a PoC application in Java to figure out how to do transaction management in Spring Cloud Stream when using Kafka for message publishing. The use case I'm trying to simulate is a processor that receives a message, does some processing, and generates two new messages destined for two separate topics. I want to handle publishing both messages as a single transaction, so if publishing the second message fails I want to roll back (not commit) the first message. Does Spring Cloud Stream support such a use case?
I've set the @Transactional annotation and I can see a global transaction starting before the message is delivered to the consumer. However, when I try to publish a message via the MessageChannel.send() method, I can see that a new local transaction is started and completed in the KafkaProducerMessageHandler class's handleRequestMessage() method, which means that sending the message does not participate in the global transaction. So, if an exception is thrown after the publishing of the first message, that message will not be rolled back. The global transaction gets rolled back, but that doesn't really do anything since the first message was already committed.
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          transaction:
            transaction-id-prefix: txn.
            producer: # these apply to all producers that participate in the transaction
              partition-key-extractor-name: partitionKeyExtractorStrategy
              partition-selector-name: partitionSelectorStrategy
              partition-count: 3
              configuration:
                acks: all
                enable:
                  idempotence: true
                retries: 10
        bindings:
          input-customer-data-change-topic:
            consumer:
              configuration:
                isolation:
                  level: read_committed
              enable-dlq: true
      bindings:
        input-customer-data-change-topic:
          content-type: application/json
          destination: com.fis.customer
          group: com.fis.ec
          consumer:
            partitioned: true
            max-attempts: 1
        output-name-change-topic:
          content-type: application/json
          destination: com.fis.customer.name
        output-email-change-topic:
          content-type: application/json
          destination: com.fis.customer.email
@SpringBootApplication
@EnableBinding(CustomerDataChangeStreams.class)
public class KafkaCloudStreamCustomerDemoApplication
{
   public static void main(final String[] args)
   {
      SpringApplication.run(KafkaCloudStreamCustomerDemoApplication.class, args);
   }
}
public interface CustomerDataChangeStreams
{
   @Input("input-customer-data-change-topic")
   SubscribableChannel inputCustomerDataChange();

   @Output("output-email-change-topic")
   MessageChannel outputEmailDataChange();

   @Output("output-name-change-topic")
   MessageChannel outputNameDataChange();
}
@Component
public class CustomerDataChangeListener
{
   @Autowired
   private CustomerDataChangeProcessor mService;

   @StreamListener("input-customer-data-change-topic")
   public void handleCustomerDataChangeMessages(
      @Payload final ImmutableCustomerDetails customerDetails)
   {
      mService.processMessage(customerDetails);
   }
}
@Component
public class CustomerDataChangeProcessor
{
   private static final Logger LOGGER = LoggerFactory.getLogger(CustomerDataChangeProcessor.class);

   private final CustomerDataChangeStreams mStreams;

   @Value("${spring.cloud.stream.bindings.output-email-change-topic.destination}")
   private String mEmailChangeTopic;

   @Value("${spring.cloud.stream.bindings.output-name-change-topic.destination}")
   private String mNameChangeTopic;

   public CustomerDataChangeProcessor(final CustomerDataChangeStreams streams)
   {
      mStreams = streams;
   }

   public void processMessage(final CustomerDetails customerDetails)
   {
      try
      {
         sendNameMessage(customerDetails);
         sendEmailMessage(customerDetails);
      }
      catch (final JSONException ex)
      {
         LOGGER.error("Failed to send messages.", ex);
      }
   }

   public void sendNameMessage(final CustomerDetails customerDetails)
      throws JSONException
   {
      final JSONObject nameChangeDetails = new JSONObject();
      nameChangeDetails.put(KafkaConst.BANK_ID_KEY, customerDetails.bankId());
      nameChangeDetails.put(KafkaConst.CUSTOMER_ID_KEY, customerDetails.customerId());
      nameChangeDetails.put(KafkaConst.FIRST_NAME_KEY, customerDetails.firstName());
      nameChangeDetails.put(KafkaConst.LAST_NAME_KEY, customerDetails.lastName());
      final String action = customerDetails.action();
      nameChangeDetails.put(KafkaConst.ACTION_KEY, action);
      final MessageChannel nameChangeMessageChannel = mStreams.outputNameDataChange();
      nameChangeMessageChannel.send(MessageBuilder.withPayload(nameChangeDetails.toString())
         .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
         .setHeader(KafkaHeaders.TOPIC, mNameChangeTopic).build());
      if ("fail_name_illegal".equalsIgnoreCase(action))
      {
         throw new IllegalArgumentException("Customer name failure!");
      }
   }

   public void sendEmailMessage(final CustomerDetails customerDetails) throws JSONException
   {
      final JSONObject emailChangeDetails = new JSONObject();
      emailChangeDetails.put(KafkaConst.BANK_ID_KEY, customerDetails.bankId());
      emailChangeDetails.put(KafkaConst.CUSTOMER_ID_KEY, customerDetails.customerId());
      emailChangeDetails.put(KafkaConst.EMAIL_ADDRESS_KEY, customerDetails.email());
      final String action = customerDetails.action();
      emailChangeDetails.put(KafkaConst.ACTION_KEY, action);
      final MessageChannel emailChangeMessageChannel = mStreams.outputEmailDataChange();
      emailChangeMessageChannel.send(MessageBuilder.withPayload(emailChangeDetails.toString())
         .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
         .setHeader(KafkaHeaders.TOPIC, mEmailChangeTopic).build());
      if ("fail_email_illegal".equalsIgnoreCase(action))
      {
         throw new IllegalArgumentException("E-mail address failure!");
      }
   }
}
EDIT
We are getting closer. The local transaction does not get created anymore. However, the global transaction still gets committed even if there was an exception. From what I can tell, the exception does not propagate to the TransactionTemplate.execute() method, therefore the transaction gets committed. It seems that the MessageProducerSupport class's sendMessage() method "swallows" the exception in its catch clause: if an error channel is defined, a message is published to it and the exception is not rethrown. I tried turning the error channel off (spring.cloud.stream.kafka.binder.transaction.producer.error-channel-enabled = false), but that doesn't turn it off. So, just for a test, I set the error channel to null in the debugger to force the exception to be rethrown. That seems to do it. However, the original message keeps getting redelivered to the initial consumer even though I have max-attempts set to 1 for that consumer.

See the documentation.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix
Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.
Default: null (no transactions)
spring.cloud.stream.kafka.binder.transaction.producer.*
Global producer properties for producers in a transactional binder. See spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix and Kafka Producer Properties and the general producer properties supported by all binders.
Default: See individual producer properties.
You must configure the shared global producer.
Don't add @Transactional - the container will start the transaction and send the offsets to the transaction before committing it.
If the listener throws an exception, the transaction is rolled back and the DefaultAfterRollbackPostProcessor will re-seek the topics/partitions so that the record will be redelivered.
EDIT
There is a bug in the configuration of the binder's transaction manager that causes a new local transaction to be started by the output binding.
To work around it, reconfigure the TM with the following container customizer bean...
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer() {
    return (container, dest, group) -> {
        KafkaTransactionManager<?, ?> tm = (KafkaTransactionManager<?, ?>) container.getContainerProperties()
                .getTransactionManager();
        tm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    };
}
EDIT2
You can't use the binder's DLQ support because, from the container's perspective, the delivery was successful. We need to propagate the exception to the container to force a rollback. So, you need to move the dead-lettering to the AfterRollbackProcessor instead. Here is my complete test class:
@SpringBootApplication
@EnableBinding(Processor.class)
public class So57379575Application {

    public static void main(String[] args) {
        SpringApplication.run(So57379575Application.class, args);
    }

    @Autowired
    private MessageChannel output;

    @StreamListener(Processor.INPUT)
    public void listen(String in) {
        System.out.println("in:" + in);
        this.output.send(new GenericMessage<>(in.toUpperCase()));
        if (in.equals("two")) {
            throw new RuntimeException("fail");
        }
    }

    @KafkaListener(id = "so57379575", topics = "so57379575out")
    public void listen2(String in) {
        System.out.println("out:" + in);
    }

    @KafkaListener(id = "so57379575DLT", topics = "so57379575dlt")
    public void listen3(String in) {
        System.out.println("dlt:" + in);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
        return args -> {
            template.send("so57379575in", "one".getBytes());
            template.send("so57379575in", "two".getBytes());
        };
    }

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(
            KafkaTemplate<Object, Object> template) {

        return (container, dest, group) -> {
            // enable transaction synchronization
            KafkaTransactionManager<?, ?> tm = (KafkaTransactionManager<?, ?>) container.getContainerProperties()
                    .getTransactionManager();
            tm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
            // container dead-lettering
            DefaultAfterRollbackProcessor<? super byte[], ? super byte[]> afterRollbackProcessor =
                    new DefaultAfterRollbackProcessor<>(new DeadLetterPublishingRecoverer(template,
                            (ex, tp) -> new TopicPartition("so57379575dlt", -1)), 0);
            container.setAfterRollbackProcessor(afterRollbackProcessor);
        };
    }

}
and
spring:
  kafka:
    bootstrap-servers:
    - 10.0.0.8:9092
    - 10.0.0.8:9093
    - 10.0.0.8:9094
    consumer:
      auto-offset-reset: earliest
      enable-auto-commit: false
      properties:
        isolation.level: read_committed
  cloud:
    stream:
      bindings:
        input:
          destination: so57379575in
          group: so57379575in
          consumer:
            max-attempts: 1
        output:
          destination: so57379575out
      kafka:
        binder:
          transaction:
            transaction-id-prefix: so57379575tx.
            producer:
              configuration:
                acks: all
                retries: 10

#logging:
#  level:
#    org.springframework.kafka: trace
#    org.springframework.transaction: trace
and
in:two
2019-08-07 12:43:33.457 ERROR 36532 --- [container-0-C-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Exception thrown while
...
Caused by: java.lang.RuntimeException: fail
...
in:one
dlt:two
out:ONE

Related

Not able to stop an IBM MQ JMS consumer in Spring Boot

I am NOT able to stop a JMS consumer dynamically using a Spring Boot REST endpoint.
The number of consumers stays as is. No exceptions either.
IBM MQ Version: 9.2.0.5
pom.xml
<dependency>
    <groupId>com.ibm.mq</groupId>
    <artifactId>mq-jms-spring-boot-starter</artifactId>
    <version>2.0.8</version>
</dependency>
JmsConfig.java
@Configuration
@EnableJms
@Log4j2
public class JmsConfig {

    @Bean
    public MQQueueConnectionFactory mqQueueConnectionFactory() {
        MQQueueConnectionFactory mqQueueConnectionFactory = new MQQueueConnectionFactory();
        mqQueueConnectionFactory.setHostName("my-ibm-mq-host.com");
        try {
            mqQueueConnectionFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            mqQueueConnectionFactory.setCCSID(1208);
            mqQueueConnectionFactory.setChannel("my-channel");
            mqQueueConnectionFactory.setPort(1234);
            mqQueueConnectionFactory.setQueueManager("my-QM");
        } catch (Exception e) {
            log.error("Exception while creating JMS connection...", e);
        }
        return mqQueueConnectionFactory;
    }
}
JmsListenerConfig.java
#Configuration
#Log4j2
public class JmsListenerConfig implements JmsListenerConfigurer {
#Autowired
private JmsConfig jmsConfig;
private Map<String, String> queueMap = new HashMap<>();
#Bean
public DefaultJmsListenerContainerFactory mqJmsListenerContainerFactory() throws JMSException {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(jmsConfig.mqQueueConnectionFactory());
factory.setDestinationResolver(new DynamicDestinationResolver());
factory.setSessionTransacted(true);
factory.setConcurrency("5");
return factory;
}
#Override
public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
queueMap.put("my-queue-101", "101");
log.info("queueMap: " + queueMap);
queueMap.entrySet().forEach(e -> {
SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
endpoint.setDestination(e.getKey());
endpoint.setId(e.getValue());
try {
log.info("Reading message....");
endpoint.setMessageListener(message -> {
try {
log.info("Receieved ID: {} Destination {}", message.getJMSMessageID(), message.getJMSDestination());
} catch (JMSException ex) {
log.error("Exception while reading message - " + ex.getMessage());
}
});
registrar.setContainerFactory(mqJmsListenerContainerFactory());
} catch (JMSException ex) {
log.error("Exception while reading message - " + ex.getMessage());
}
registrar.registerEndpoint(endpoint);
});
}
}
JmsController.java
@RestController
@RequestMapping("/jms")
@Log4j2
public class JmsController {

    @Autowired
    ApplicationContext context;

    @RequestMapping(value = "/stop", method = RequestMethod.GET)
    public @ResponseBody
    String haltJmsListener() {
        JmsListenerEndpointRegistry listenerEndpointRegistry = context.getBean(JmsListenerEndpointRegistry.class);
        Set<String> containerIds = listenerEndpointRegistry.getListenerContainerIds();
        log.info("containerIds: " + containerIds);
        // stops all consumers
        listenerEndpointRegistry.stop(); // DOESN'T WORK :(
        // stops a consumer by id; used when there are multiple consumers and you want to stop them individually
        // listenerEndpointRegistry.getListenerContainer("101").stop(); // DOESN'T WORK EITHER :(
        return "Jms Listener stopped";
    }
}
Here is the result that I noticed.
Initial # of consumers: 0 (as expected)
After server startup and queue connection, total # of consumers: 1 (as expected)
After hitting http://localhost:8080/jms/stop endpoint, total # of consumers: 1 (NOT as expected, should go back to 0)
Am I missing any configuration?
You need to also call shutDown on the container; see my comment on this answer DefaultMessageListenerContainer's "isActive" vs "isRunning"
start()/stop() set/reset running; initialize()/shutDown() set/reset active. It depends on what your requirements are. stop() just stops the consumers from getting new messages, but the consumers still exist. shutDown() closes the consumers. Most people call stop + shutdown and then initialize + start to restart. But if you just want to stop consuming for a short time, stop/start is all you need.
You will need to iterate over the containers and cast them to call shutDown().
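As a rough sketch of that approach (hedged: the /shutdown path is made up, and it assumes the containers created by the default factory are DefaultMessageListenerContainer instances, whose shutdown() method closes the consumers):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/jms")
public class JmsShutdownController {

    @Autowired
    private JmsListenerEndpointRegistry registry;

    @RequestMapping(value = "/shutdown", method = RequestMethod.GET)
    public String shutdownListeners() {
        registry.getListenerContainers().forEach(container -> {
            container.stop(); // stop consuming new messages
            if (container instanceof DefaultMessageListenerContainer) {
                // shutdown() closes the consumers; stop() alone leaves them open
                ((DefaultMessageListenerContainer) container).shutdown();
            }
        });
        return "JMS listeners stopped and shut down";
    }
}
To restart later, call initialize() and then start() on the containers, as described above.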

Auto commit in kafka with spring cloud stream

I have an application where I would like to perform (n)ack on the Kafka messages manually. According to the Spring Cloud Stream documentation, this should be done with autoCommitOffset.
However, in my application, even defining such property the header KafkaHeaders.ACKNOWLEDGMENT is still coming as null.
Here is what my configuration looks like
spring.cloud.stream.kafka.binder.brokers=${KAFKA_BROKER_LIST}
spring.cloud.stream.default.contentType=application/json
spring.cloud.stream.bindings.mytopic.destination=MyInputTopic
spring.cloud.stream.bindings.mytopic.group=myConsumerGroup
spring.cloud.stream.kafka.bindings.mytopic.consumer.autoCommitOffset=false
And my consumer:
@StreamListener("myTopic")
public void consume(@NotNull @Valid Message<MyTopic> message) {
    MyTopic payload = message.getPayload();
    Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class); // always null
}
I am using java 13 with spring boot 2.2.5.RELEASE and spring cloud Hoxton.SR1
Any help is appreciated.
I just copied your properties and it works fine for me...
GenericMessage [payload=foo, headers={kafka_offset=0, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#55d4844d, deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=MyInputTopic, kafka_receivedTimestamp=1589488691039, kafka_acknowledgment=Acknowledgment for ConsumerRecord(topic = MyInputTopic, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1589488691039, serialized key size = -1, serialized value size = 3, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = [B#572887c3), contentType=application/json, kafka_groupId=myConsumerGroup}]
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(Message<String> in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
        return args -> {
            template.send("MyInputTopic", "foo".getBytes());
        };
    }
}
spring.cloud.stream.default.contentType=application/json
spring.cloud.stream.bindings.input.destination=MyInputTopic
spring.cloud.stream.bindings.input.group=myConsumerGroup
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
I found out why my consumer was not working as expected:
In my configuration I have something like spring.cloud.stream.bindings.mytopic.destination=MyInputTopic; however, the stream binding was done like this:
@StreamListener("Mytopic")
Apparently, the configuration keys prefixed with spring.cloud.stream.bindings are not case sensitive (all of those settings worked as expected), but the ones prefixed with spring.cloud.stream.kafka.bindings are case sensitive, which led to my issue.
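For illustration, a hedged sketch of the corrected listener (assuming the binding name mytopic and the MyTopic payload type from the question); with autoCommitOffset=false the binder adds the kafka_acknowledgment header, which can then be used to commit manually:
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class MyTopicConsumer {

    // "mytopic" matches the key in spring.cloud.stream.kafka.bindings.mytopic.consumer.autoCommitOffset=false
    @StreamListener("mytopic")
    public void consume(Message<MyTopic> message) {
        Acknowledgment acknowledgment =
                message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            acknowledgment.acknowledge(); // commit the offset once processing succeeds
        }
    }
}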

Spring Kafka, Spring Cloud Stream, and Avro compatibility: Unknown magic byte

I have a problem deserializing messages from Kafka topics. The messages have been serialized using spring-cloud-stream and Apache Avro. I am reading them using Spring Kafka and trying to deserialise them. If I use spring-cloud to both produce and consume the messages, then I can deserialize the messages fine. The problem is when I consume them with Spring Kafka and then try to deserialize.
I am using a Schema Registry (the spring-boot Schema Registry for development, and a Confluent Schema Registry in production), but the deserialization problems seem to occur before even calling the Schema Registry.
It's hard to post all the relevant code in this question, so I have posted it in a GitHub repo: https://github.com/robjwilkins/avro-example
The object I am sending over the topic is just a simple pojo:
@Data
public class Request {
    private String message;
}
The code which produces messages on Kafka looks like this:
@EnableBinding(MessageChannels.class)
@Slf4j
@RequiredArgsConstructor
@RestController
public class ProducerController {

    private final MessageChannels messageChannels;

    @GetMapping("/produce")
    public void produceMessage() {
        Request request = new Request();
        request.setMessage("hello world");
        Message<Request> requestMessage = MessageBuilder.withPayload(request).build();
        log.debug("sending message");
        messageChannels.testRequest().send(requestMessage);
    }
}
and application.yaml:
spring:
  application.name: avro-producer
  kafka:
    bootstrap-servers: localhost:9092
    consumer.group-id: avro-producer
  cloud:
    stream:
      schema-registry-client.endpoint: http://localhost:8071
      schema.avro.dynamic-schema-generation-enabled: true
      kafka:
        binder:
          brokers: ${spring.kafka.bootstrap-servers}
      bindings:
        test-request:
          destination: test-request
          contentType: application/*+avro
Then I have a consumer:
@Slf4j
@Component
public class TopicListener {

    @KafkaListener(topics = {"test-request"})
    public void listenForMessage(ConsumerRecord<String, Request> consumerRecord) {
        log.info("listenForMessage. got a message: {}", consumerRecord);
        consumerRecord.headers().forEach(header -> log.info("header. key: {}, value: {}", header.key(), asString(header.value())));
    }

    private String asString(byte[] byteArray) {
        return new String(byteArray, Charset.defaultCharset());
    }
}
And the project which consumes has application.yaml config:
spring:
  application.name: avro-consumer
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: avro-consumer
      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
#      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        schema.registry.url: http://localhost:8071
When the consumer gets a message it results in an exception:
2019-01-30 20:01:39.900 ERROR 30876 --- [ntainer#0-0-C-1] o.s.kafka.listener.LoggingErrorHandler : Error while processing: null
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition test-request-0 at offset 43. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
I have stepped through the deserialization code to the point where this exception is thrown
public abstract class AbstractKafkaAvroDeserializer extends AbstractKafkaAvroSerDe {
....
    private ByteBuffer getByteBuffer(byte[] payload) {
        ByteBuffer buffer = ByteBuffer.wrap(payload);
        if (buffer.get() != 0) {
            throw new SerializationException("Unknown magic byte!");
        } else {
            return buffer;
        }
    }
It is happening because the deserializer checks the first byte of the serialized payload (byte array) and expects it to be 0, but it is not. Hence I question whether the spring-cloud-stream MessageConverter that serialized the object is compatible with the io.confluent deserializer I am using to deserialize it. And if they are not compatible, what do I do?
Thanks for any help.
The crux of this problem is that the producer is using spring-cloud-stream to post messages to Kafka, but the consumer uses spring-kafka. The reasons for this are:
The existing system is already well established and uses spring-cloud-stream.
A new consumer is required to listen to multiple topics using the same method, binding only on a CSV list of topic names.
There is a requirement to consume a collection of messages at once, rather than individually, so their contents can be written in bulk to a database.
Spring-cloud-stream doesn't currently allow the consumer to bind a listener to multiple topics, and there is no way to consume a collection of messages at once (unless I'm mistaken).
I have found a solution which doesn't require any changes to the producer code which uses spring-cloud-stream to publish messages to Kafka. Spring-cloud-stream uses a MessageConverter to manage serialisation and deserialisation. In the AbstractAvroMessageConverter there are methods: convertFromInternal and convertToInternal which handle the transformation to/from a byte array. My solution was to extend this code (creating a class which extends AvroSchemaRegistryClientMessageConverter), so I could reuse much of the spring-cloud-stream functionality, but with an interface that can be accessed from my spring-kafka KafkaListener. I then amended my TopicListener to use this class to do the conversion:
The converter:
@Component
@Slf4j
public class AvroKafkaMessageConverter extends AvroSchemaRegistryClientMessageConverter {

    public AvroKafkaMessageConverter(SchemaRegistryClient schemaRegistryClient) {
        super(schemaRegistryClient, new NoOpCacheManager());
    }

    public <T> T convertFromInternal(ConsumerRecord<?, ?> consumerRecord, Class<T> targetClass,
            Object conversionHint) {
        T result;
        try {
            byte[] payload = (byte[]) consumerRecord.value();
            Map<String, String> headers = new HashMap<>();
            consumerRecord.headers().forEach(header -> headers.put(header.key(), asString(header.value())));
            MimeType mimeType = messageMimeType(conversionHint, headers);
            if (mimeType == null) {
                return null;
            }
            Schema writerSchema = resolveWriterSchemaForDeserialization(mimeType);
            Schema readerSchema = resolveReaderSchemaForDeserialization(targetClass);
            @SuppressWarnings("unchecked")
            DatumReader<Object> reader = getDatumReader((Class<Object>) targetClass, readerSchema, writerSchema);
            Decoder decoder = DecoderFactory.get().binaryDecoder(payload, null);
            result = (T) reader.read(null, decoder);
        }
        catch (IOException e) {
            throw new RuntimeException("Failed to read payload", e);
        }
        return result;
    }

    private MimeType messageMimeType(Object conversionHint, Map<String, String> headers) {
        MimeType mimeType;
        try {
            String contentType = headers.get(MessageHeaders.CONTENT_TYPE);
            log.debug("contentType: {}", contentType);
            mimeType = MimeType.valueOf(contentType);
        } catch (InvalidMimeTypeException e) {
            log.error("Exception getting object MimeType from contentType header", e);
            if (conversionHint instanceof MimeType) {
                mimeType = (MimeType) conversionHint;
            }
            else {
                return null;
            }
        }
        return mimeType;
    }

    private String asString(byte[] byteArray) {
        String theString = new String(byteArray, Charset.defaultCharset());
        return theString.replace("\"", "");
    }
}
The amended TopicListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class TopicListener {

    private final AvroKafkaMessageConverter messageConverter;

    @KafkaListener(topics = {"test-request"})
    public void listenForMessage(ConsumerRecord<?, ?> consumerRecord) {
        log.info("listenForMessage. got a message: {}", consumerRecord);
        Request request = messageConverter.convertFromInternal(
                consumerRecord, Request.class, MimeType.valueOf("application/vnd.*+avr"));
        log.info("request message: {}", request.getMessage());
    }
}
This solution only consumes one message at a time but can be easily modified to consume batches of messages.
The full solution is here: https://github.com/robjwilkins/avro-example/tree/develop
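As a hedged illustration of the batch variant mentioned above (it assumes a hypothetical batchContainerFactory bean, i.e. a ConcurrentKafkaListenerContainerFactory with setBatchListener(true), and reuses the converter from this answer):
import java.util.List;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.util.MimeType;

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;

@Slf4j
@Component
@RequiredArgsConstructor
public class BatchTopicListener {

    private final AvroKafkaMessageConverter messageConverter;

    @KafkaListener(topics = {"test-request"}, containerFactory = "batchContainerFactory")
    public void listenForMessages(List<ConsumerRecord<?, ?>> consumerRecords) {
        // convert each record with the custom converter, then persist the whole batch
        List<Request> requests = consumerRecords.stream()
                .map(consumerRecord -> messageConverter.<Request>convertFromInternal(
                        consumerRecord, Request.class, MimeType.valueOf("application/vnd.*+avr")))
                .collect(Collectors.toList());
        log.info("received {} requests in one batch", requests.size());
        // bulk-write 'requests' to the database here
    }
}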
You should define the deserializer explicitly by creating a DefaultKafkaConsumerFactory and your TopicListener bean in a configuration class, something like this:
@Configuration
@EnableKafka
public class TopicListenerConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${spring.kafka.consumer.group-id}")
    private String groupId;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.wilkins.avro.consumer");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    @Bean
    public TopicListener topicListener() {
        return new TopicListener();
    }
}
You can configure the binding to use a Kafka Serializer natively instead.
Set the producer property useNativeEncoding to true and configure the serializer using the ...producer.configuration Kafka properties.
EDIT
Example:
spring:
  cloud:
    stream:
      # Generic binding properties
      bindings:
        input:
          consumer:
            use-native-decoding: true
          destination: so54448732
          group: so54448732
        output:
          destination: so54448732
          producer:
            use-native-encoding: true
      # Kafka-specific binding properties
      kafka:
        bindings:
          input:
            consumer:
              configuration:
                value.deserializer: com.example.FooDeserializer
          output:
            producer:
              configuration:
                value.serializer: com.example.FooSerializer
Thanks, this saved my day using native encoding.

How to handle JmsChannelFactoryBean errors, is there a possibility of custom error channel usage?

I have the following configuration for the creation of two channels (using the JmsChannelFactoryBean):
@Bean
public JmsChannelFactoryBean jmsChannel(ActiveMQConnectionFactory activeMQConnectionFactory) {
    JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
    fb.setConnectionFactory(activeMQConnectionFactory);
    fb.setDestinationName("something.queue");
    fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
    return fb;
}

@Bean
public JmsChannelFactoryBean jmsChannelDLQ(ActiveMQConnectionFactory activeMQConnectionFactory) {
    JmsChannelFactoryBean fb = new JmsChannelFactoryBean(true);
    fb.setConnectionFactory(activeMQConnectionFactory);
    fb.setDestinationName("something.queue.DLQ");
    fb.setErrorHandler(t -> log.error("something went wrong on jms channel", t));
    return fb;
}
The something.queue is configured to put dead letters on something.queue.DLQ. I'm mostly using the Java DSL to configure the app and, if possible, would like to keep it that way.
The case is: a message is taken from jmsChannel and put to an SFTP outbound gateway; if there is a problem sending the file, the message is put back into jmsChannel as not delivered. After some retries it is designated as poisonous and put on something.queue.DLQ.
Is it possible to get the info on an error channel when that happens?
What is the best practice for handling errors when using JMS-backed message channels?
EDIT 2
The integration flow is defined as:
IntegrationFlows.from(filesToProcessChannel).handle(outboundGateway)
Where filesToProcessChannel is the JMS backed channel and outbound gateway is defined as:
@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate,
            AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
    ArrayList<Advice> adviceChain = new ArrayList<>();
    adviceChain.add(errorHandlingAdvice());
    gateway.setAdviceChain(adviceChain);
    return gateway;
}
I'm trying to grab the exception using an advice:
@Bean
public Advice errorHandlingAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(1);
    retryTemplate.setRetryPolicy(retryPolicy);
    advice.setRetryTemplate(retryTemplate);
    advice.setRecoveryCallback(new ErrorMessageSendingRecoverer(filesToProcessErrorChannel));
    return advice;
}
Is this the right way?
EDIT 3
There is certainly something wrong with the SftpOutboundGateway and advices (or with me :/):
I used the following advice from the Spring Integration reference:
@Bean
public Advice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload + ' was successful'");
    advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString(
            "payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow success() {
    return f -> f.handle(System.out::println);
}

@Bean
public IntegrationFlow failure() {
    return f -> f.handle(System.out::println);
}
And when I use:
return IntegrationFlows.from(filesToProcessChannel)
        .handle((GenericHandler<File>) (payload, headers) -> {
            if (payload.equals("x")) {
                return null;
            }
            else {
                throw new RuntimeException("some failure");
            }
        }, spec -> spec.advice(expressionAdvice()))
It gets called, and I get the error message printed out (which is expected), but when I try to use:
return IntegrationFlows.from(filesToProcessChannel)
        .handle(outboundGateway, spec -> spec.advice(expressionAdvice()))
The advice is not called, and the error message is put back to JMS.
The app is using Spring Boot v2.0.0.RELEASE, Spring v5.0.4.RELEASE.
EDIT 4
I managed to resolve the advice issue using the following configuration; I still don't understand why the handler spec will not work:
@Bean
IntegrationFlow files(SftpOutboundGateway outboundGateway,
                      ...
) {
    return IntegrationFlows.from(filesToProcessChannel)
            .handle(outboundGateway)
            ...
            .log(LoggingHandler.Level.INFO)
            .get();
}

@Bean
public SftpOutboundGateway outboundGateway(SftpRemoteFileTemplate sftpRemoteFileTemplate) {
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpRemoteFileTemplate,
            AbstractRemoteFileOutboundGateway.Command.PUT.getCommand(), EXPRESSION_PAYLOAD);
    ArrayList<Advice> adviceChain = new ArrayList<>();
    adviceChain.add(expressionAdvice());
    gateway.setAdviceChain(adviceChain);
    return gateway;
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload + ' was successful'");
    advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString(
            "payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow success() {
    return f -> f.handle(System.out::println);
}

@Bean
public IntegrationFlow failure() {
    return f -> f.handle(System.out::println);
}
Since the movement to the DLQ is performed by the broker, the application has no mechanism to log the situation - it is not even aware that it happened.
You would have to catch the exceptions yourself and publish the message to the DLQ yourself, after some number of attempts (JMSXDeliveryCount header), instead of using the broker policy.
EDIT
Add an Advice to the .handle() step.
.handle(outboundGateway, e -> e.advice(myAdvice))
Where myAdvice implements MethodInterceptor.
In the invoke method, after a failure, you can check the delivery count header and, if it exceeds your threshold, publish the message to the DLQ (e.g. send it to another channel that has a JMS outbound adapter subscribed) and log the error; if the threshold has not been exceeded, simply return the result of invocation.proceed() (or rethrow the exception).
That way, you control publishing to the DLQ rather than having the broker do it. You can also add more information, such as the exception, to headers.
EDIT2
You need something like this
public class MyAdvice implements MethodInterceptor {

    @Autowired
    private MessageChannel toJms;

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            return invocation.proceed();
        }
        catch (Exception e) {
            Message<?> message = (Message<?>) invocation.getArguments()[0];
            Integer deliveryCount = message.getHeaders().get("JMSXDeliveryCount", Integer.class);
            if (deliveryCount != null && deliveryCount > 3) {
                this.toJms.send(message); // maybe rebuild with additional headers about the error
                return null; // dead-lettered; stop further processing
            }
            else {
                throw e;
            }
        }
    }

}
(it should be close, but I haven't tested it). It assumes your broker populates that header.

Set Expiration per message with SpringJMS

I use MQ in my project via Spring JMS, with ActiveMQ as the broker.
I need to set the expiration per message, so I tried to use message.setJMSExpiration, but without success. All messages arriving in ActiveMQ have expiration=0.
Has anyone had success setting the expiration per message using Spring?
For configuring the JmsTemplate I used the default value explicitQosEnabled = false, so I expected the expiration from my message properties to be kept. But as I can see in ActiveMQSession.class, these message properties get overridden:
long expiration = 0L;
if (!producer.getDisableMessageTimestamp()) {
    long timeStamp = System.currentTimeMillis();
    message.setJMSTimestamp(timeStamp);
    if (timeToLive > 0) {
        expiration = timeToLive + timeStamp;
    }
}
message.setJMSExpiration(expiration);
// me: timeToLive comes from the default values of the Producer/JmsTemplate...
What am I doing wrong? Or is it just impossible with these tools?
I don't know why Spring decided to exclude this, but you can extend JmsTemplate and overload some methods, passing a timeToLive argument.
public class MyJmsTemplate extends JmsTemplate {

    public void send(final Destination destination,
            final MessageCreator messageCreator, final long timeToLive)
            throws JmsException {
        execute(new SessionCallback<Object>() {
            public Object doInJms(Session session) throws JMSException {
                doSend(session, destination, messageCreator, timeToLive);
                return null;
            }
        }, false);
    }

    protected void doSend(Session session, Destination destination,
            MessageCreator messageCreator, long timeToLive) throws JMSException {
        Assert.notNull(messageCreator, "MessageCreator must not be null");
        MessageProducer producer = createProducer(session, destination);
        try {
            Message message = messageCreator.createMessage(session);
            if (logger.isDebugEnabled()) {
                logger.debug("Sending created message: " + message);
            }
            doSend(producer, message, timeToLive);
            // Check commit - avoid commit call within a JTA transaction.
            if (session.getTransacted() && isSessionLocallyTransacted(session)) {
                // Transacted session created by this template -> commit.
                JmsUtils.commitIfNecessary(session);
            }
        } finally {
            JmsUtils.closeMessageProducer(producer);
        }
    }

    protected void doSend(MessageProducer producer, Message message,
            long timeToLive) throws JMSException {
        if (isExplicitQosEnabled() && timeToLive > 0) {
            producer.send(message, getDeliveryMode(), getPriority(), timeToLive);
        } else {
            producer.send(message);
        }
    }
}
JMSExpiration is not the way to set an expiration. See the javadocs for Message...
JMS providers set this field when a message is sent. This method can be used to change the value for a message that has been received.
In other words, it's ignored on a send - the time to live is set on the producer.send() method.
To expire messages, set explicitQosEnabled to true and call setTimeToLive(...).
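For illustration, a minimal sketch of that template-level approach (the queue name and TTL value are made-up examples; for a true per-message TTL you would still need something like the extended JmsTemplate shown above):
import javax.jms.ConnectionFactory;

import org.springframework.jms.core.JmsTemplate;

public class ExpiringSender {

    private final JmsTemplate jmsTemplate;

    public ExpiringSender(ConnectionFactory connectionFactory) {
        this.jmsTemplate = new JmsTemplate(connectionFactory);
        // QoS settings (delivery mode, priority, time to live) are only applied
        // by JmsTemplate when explicitQosEnabled is true
        this.jmsTemplate.setExplicitQosEnabled(true);
        this.jmsTemplate.setTimeToLive(60_000); // example: expire after 60 seconds
    }

    public void send(String payload) {
        this.jmsTemplate.convertAndSend("my.example.queue", payload);
    }
}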
