I have an application where I would like to perform (n)ack on Kafka messages manually. According to the Spring Cloud Stream documentation, this should be done with the autoCommitOffset property.
However, in my application, even with that property defined, the KafkaHeaders.ACKNOWLEDGMENT header still comes back as null.
Here is what my configuration looks like:
spring.cloud.stream.kafka.binder.brokers=${KAFKA_BROKER_LIST}
spring.cloud.stream.default.contentType=application/json
spring.cloud.stream.bindings.mytopic.destination=MyInputTopic
spring.cloud.stream.bindings.mytopic.group=myConsumerGroup
spring.cloud.stream.kafka.bindings.mytopic.consumer.autoCommitOffset=false
And my consumer:
@StreamListener("myTopic")
public void consume(@NotNull @Valid Message<MyTopic> message) {
MyTopic payload = message.getPayload();
Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class); // always null
}
I am using Java 13 with Spring Boot 2.2.5.RELEASE and Spring Cloud Hoxton.SR1.
Any help is appreciated.
I just copied your properties and it works fine for me...
GenericMessage [payload=foo, headers={kafka_offset=0, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#55d4844d, deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=MyInputTopic, kafka_receivedTimestamp=1589488691039, kafka_acknowledgment=Acknowledgment for ConsumerRecord(topic = MyInputTopic, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1589488691039, serialized key size = -1, serialized value size = 3, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = [B#572887c3), contentType=application/json, kafka_groupId=myConsumerGroup}]
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
@StreamListener(Sink.INPUT)
public void listen(Message<String> in) {
System.out.println(in);
}
@Bean
public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
return args -> {
template.send("MyInputTopic", "foo".getBytes());
};
}
}
spring.cloud.stream.default.contentType=application/json
spring.cloud.stream.bindings.input.destination=MyInputTopic
spring.cloud.stream.bindings.input.group=myConsumerGroup
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
I found out why my consumer was not working as expected:
In my configuration I have something like spring.cloud.stream.bindings.mytopic.destination=MyInputTopic; however, the stream binding was declared like this:
@StreamListener("Mytopic")
Apparently, the configurations prefixed with spring.cloud.stream.bindings are not case sensitive (all of them worked as expected), but the ones prefixed with spring.cloud.stream.kafka.bindings are case sensitive, which led to my issue.
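For reference, once the binding name matches the configured binding and autoCommitOffset=false takes effect, the consumer can acknowledge manually. A minimal sketch, using the names from the question:
@StreamListener("mytopic") // must match the case used under spring.cloud.stream.kafka.bindings
public void consume(@NotNull @Valid Message<MyTopic> message) {
    MyTopic payload = message.getPayload();
    Acknowledgment acknowledgment =
            message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    if (acknowledgment != null) {
        acknowledgment.acknowledge(); // commit the offset manually once processing succeeds
    }
}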
I'm trying to figure out how I can test my Spring Cloud Stream Kafka Streams application.
The application looks like this:
Stream 1: Topic1 > Topic2
Stream 2: Topic2 + Topic3 joined > Topic4
Stream 3: Topic4 > Topic5
I tried different approaches like the TestChannelBinder, but that approach only works with simple functions, not with Kafka Streams and Avro.
I decided to use EmbeddedKafka with a MockSchemaRegistryClient. I can produce to a topic and consume from that same topic again (topic1), but I'm not able to consume from topic2.
In my test application.yaml I put the following configuration (I'm only testing the first stream for now; I want to extend it once this works):
spring.application.name: processingapp
spring.cloud:
  function.definition: stream1 # not now ;stream2;stream3
  stream:
    bindings:
      stream1-in-0:
        destination: topic1
      stream1-out-0:
        destination: topic2
    kafka:
      binder:
        min-partition-count: 1
        replication-factor: 1
        auto-create-topics: true
        auto-add-partitions: true
      bindings:
        default:
          consumer:
            autoRebalanceEnabled: true
            resetOffsets: true
            startOffset: earliest
        stream1-in-0:
          consumer:
            keySerde: io.confluent.kafka.streams.serdes.avro.PrimitiveAvroSerde
            valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
        stream1-out-0:
          producer:
            keySerde: io.confluent.kafka.streams.serdes.avro.PrimitiveAvroSerde
            valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
      streams:
        binder:
          configuration:
            schema.registry.url: mock://localtest
            specivic.avro.reader: true
My test looks like the following:
@RunWith(SpringRunner.class)
@SpringBootTest
public class Test {
private static final String INPUT_TOPIC = "topic1";
private static final String OUTPUT_TOPIC = "topic2";
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 1, INPUT_TOPIC, OUTPUT_TOPIC);
@BeforeClass
public static void setup() {
System.setProperty("spring.cloud.stream.kafka.binder.brokers", embeddedKafka.getEmbeddedKafka().getBrokersAsString());
}
@org.junit.Test
public void testSendReceive() throws IOException {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
senderProps.put("key.serializer", LongSerializer.class);
senderProps.put("value.serializer", SpecificAvroSerializer.class);
senderProps.put("schema.registry.url", "mock://localtest");
AvroFileParser fileParser = new AvroFileParser();
DefaultKafkaProducerFactory<Long, Test1> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<Long, Test1> template = new KafkaTemplate<>(pf, true);
Test1 test1 = fileParser.parseTest1("src/test/resources/mocks/test1.json");
template.send(INPUT_TOPIC, 123456L, test1);
System.out.println("produced");
Map<String, Object> consumer1Props = KafkaTestUtils.consumerProps("testConsumer1", "false", embeddedKafka.getEmbeddedKafka());
consumer1Props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumer1Props.put("key.deserializer", LongDeserializer.class);
consumer1Props.put("value.deserializer", SpecificAvroDeserializer.class);
consumer1Props.put("schema.registry.url", "mock://localtest");
DefaultKafkaConsumerFactory<Long, Test1> cf = new DefaultKafkaConsumerFactory<>(consumer1Props);
Consumer<Long, Test1> consumer1 = cf.createConsumer();
consumer1.subscribe(Collections.singleton(INPUT_TOPIC));
ConsumerRecords<Long, Test1> records = consumer1.poll(Duration.ofSeconds(10));
consumer1.commitSync();
System.out.println("records count?");
System.out.println("" + records.count());
Test1 fetchedTest1;
fetchedTest1 = records.iterator().next().value();
assertThat(records.count()).isEqualTo(1);
System.out.println("found record");
System.out.println(fetchedTest1.toString());
Map<String, Object> consumer2Props = KafkaTestUtils.consumerProps("testConsumer2", "false", embeddedKafka.getEmbeddedKafka());
consumer2Props.put("key.deserializer", StringDeserializer.class);
consumer2Props.put("value.deserializer", TestAvroDeserializer.class);
consumer2Props.put("schema.registry.url", "mock://localtest");
DefaultKafkaConsumerFactory<String, Test2> consumer2Factory = new DefaultKafkaConsumerFactory<>(consumer2Props);
Consumer<String, Test2> consumer2 = consumer2Factory.createConsumer();
consumer2.subscribe(Collections.singleton(OUTPUT_TOPIC));
ConsumerRecords<String, Test2> records2 = consumer2.poll(Duration.ofSeconds(30));
consumer2.commitSync();
if (records2.iterator().hasNext()) {
System.out.println("has next");
} else {
System.out.println("has no next");
}
}
}
I receive the following exception when trying to consume and deserialize from topic2:
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro unknown schema for id 0
Caused by: java.io.IOException: Cannot get schema from schema registry!
at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getSchemaBySubjectAndIdFromRegistry(MockSchemaRegistryClient.java:193) ~[kafka-schema-registry-client-6.2.0.jar:na]
at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getSchemaBySubjectAndId(MockSchemaRegistryClient.java:249) ~[kafka-schema-registry-client-6.2.0.jar:na]
at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getSchemaById(MockSchemaRegistryClient.java:232) ~[kafka-schema-registry-client-6.2.0.jar:na]
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer$DeserializationContext.schemaFromRegistry(AbstractKafkaAvroDeserializer.java:307) ~[kafka-avro-serializer-6.2.0.jar:na]
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:107) ~[kafka-avro-serializer-6.2.0.jar:na]
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:86) ~[kafka-avro-serializer-6.2.0.jar:na]
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55) ~[kafka-avro-serializer-6.2.0.jar:na]
at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:60) ~[kafka-clients-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.SourceNode.deserializeKey(SourceNode.java:54) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:65) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.RecordQueue.updateHead(RecordQueue.java:176) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:112) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:185) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:895) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.TaskManager.addRecordsToTasks(TaskManager.java:1008) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.pollPhase(StreamThread.java:812) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:625) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:564) ~[kafka-streams-2.7.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:523) ~[kafka-streams-2.7.1.jar:na]
No message gets consumed.
So I tried to override the SpecificAvroSerde, register the schemas directly, and use this deserializer:
public class TestAvroDeserializer<T extends org.apache.avro.specific.SpecificRecord>
extends SpecificAvroDeserializer<T> implements Deserializer<T> {
private final KafkaAvroDeserializer inner;
public TestAvroDeserializer() throws IOException, RestClientException {
MockSchemaRegistryClient mockedClient = new MockSchemaRegistryClient();
Schema.Parser parser = new Schema.Parser();
Schema test2Schema = parser.parse(new File("./src/main/resources/avro/test2.avsc"));
mockedClient.register("test2-value", test2Schema , 1, 0);
inner = new KafkaAvroDeserializer(mockedClient);
}
/**
* For testing purposes only.
*/
TestAvroDeserializer(final SchemaRegistryClient client) throws IOException, RestClientException {
MockSchemaRegistryClient mockedClient = new MockSchemaRegistryClient();
Schema.Parser parser = new Schema.Parser();
Schema test2Schema = parser.parse(new File("./src/main/resources/avro/test2.avsc"));
mockedClient.register("test2-value", test2Schema , 1, 0);
inner = new KafkaAvroDeserializer(mockedClient);
}
}
It doesn't work with this deserializer either. Does anyone have experience with how to do these tests with EmbeddedKafka and MockSchemaRegistry? Or is there another approach I should use?
I'd be very glad if someone can help. Thank you in advance.
I found an appropriate way of integration testing my topology.
I use the TopologyTestDriver from the kafka-streams-test-utils package.
Include this dependency in Maven:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams-test-utils</artifactId>
<scope>test</scope>
</dependency>
For the application described in the question, setting up the TopologyTestDriver would look like the following. The code is written sequentially here just to show how it works.
@Test
void test() {
keySerde.configure(Map.of(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "mock://schemas"), true);
valueSerdeTopic1.configure(Map.of(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "mock://schemas"), false);
valueSerdeTopic2.configure(Map.of(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "mock://schemas"), false);
final StreamsBuilder builder = new StreamsBuilder();
Configuration config = new Configuration(); // class where you declare your spring cloud stream functions
KStream<String, Topic1> input = builder.stream("topic1", Consumed.with(keySerde, valueSerdeTopic1));
KStream<String, Topic2> output = config.stream1().apply(input);
output.to("topic2");
Topology topology = builder.build();
Properties streamsConfig = new Properties();
streamsConfig.putAll(Map.of(
org.apache.kafka.streams.StreamsConfig.APPLICATION_ID_CONFIG, "toplogy-test-driver",
org.apache.kafka.streams.StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "ignored",
KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "mock://schemas",
org.apache.kafka.streams.StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, PrimitiveAvroSerde.class.getName(),
org.apache.kafka.streams.StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class.getName()
));
TopologyTestDriver testDriver = new TopologyTestDriver(topology, streamsConfig);
TestInputTopic<String, Topic1> inputTopic = testDriver.createInputTopic("topic1", keySerde.serializer(), valueSerdeTopic1.serializer());
TestOutputTopic<String, Topic2> outputTopic = testDriver.createOutputTopic("topic2", keySerde.deserializer(), valueSerdeTopic2.deserializer());
inputTopic.pipeInput("key", topic1AvroModel); // Write to the input topic which applies the topology processor of your spring-cloud-stream app
KeyValue<String, Topic2> outputRecord = outputTopic.readKeyValue(); // Read from the output topic
}
If you write more tests, I recommend abstracting the setup code so you don't repeat yourself for each test.
I highly suggest this example from the spring-cloud-stream-samples repository; it led me to the solution of using the TopologyTestDriver.
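For completeness, the serdes referenced in the test above (keySerde, valueSerdeTopic1, valueSerdeTopic2) are not shown. A sketch of how they could be declared, assuming Topic1 and Topic2 are Avro-generated classes and the keys are strings (these declarations are illustrative, not from the original code):
// Sketch only: the serdes the test configures with the mock://schemas URL
private final PrimitiveAvroSerde<String> keySerde = new PrimitiveAvroSerde<>();
private final SpecificAvroSerde<Topic1> valueSerdeTopic1 = new SpecificAvroSerde<>();
private final SpecificAvroSerde<Topic2> valueSerdeTopic2 = new SpecificAvroSerde<>();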
My aim is to read a CSV file, convert it to Java objects (POJOs) and send the Java objects one by one to an ActiveMQ queue. Below is the code:
public void configure() throws Exception {
from("file:src/main/resources?fileName=data.csv")
.unmarshal(bindy)
.split(body())
.to("file:src/main/resources/?fileName=equityfeeds.txt")
.split().tokenize(",").streaming().to("jms:queue:javaobjects.upstream.queue");
}
Issues:
1. When I execute the code, no file (equityfeeds.txt) gets created and no objects go to the queue. What's wrong? I don't need to do any processing right now; I just need to unmarshal the CSV to POJOs and send the Java objects one by one to the ActiveMQ queue.
EquityFeeds (POJO)
@CsvRecord(separator = ",", skipFirstLine = true)
public class EquityFeeds {
@DataField(pos = 1)
private String externalTransactionId;
@DataField(pos = 2)
private String clientId;
@DataField(pos = 3)
private String securityId;
@DataField(pos = 4)
private String transactionType;
@DataField(pos = 5, pattern = "dd/MM/YY")
private Date transactionDate;
@DataField(pos = 6)
private float marketValue;
@DataField(pos = 7)
private String priorityFlag;
Please kindly help. Please tell me where I am going wrong.
@pvpkiran: Below is my Camel code for the producer:
public void configure() throws Exception {
from("file:src/main/resources?fileName=data.csv")
.unmarshal(bindy)
.split(body())
.streaming().to("jms:queue:javaobjects.upstream.queue");
}
Below is my Consumer Code (Using JMS API):
@JmsListener(destination = "javaobjects.upstream.queue")
public void javaObjectsListener(final Message objectMessage) throws JMSException {
Object messageData = null;
if(objectMessage instanceof ObjectMessage) {
ObjectMessage objMessage = (ObjectMessage) objectMessage;
messageData = objMessage.getObject();
}
System.out.println("Object: "+messageData.toString());
}
I am not using Camel to consume the JMS message; in the consumer I am using the JMS API directly (as above). Also, I am not testing the code: the messages have arrived in ActiveMQ and I am consuming them with the JMS API. In the terminal I am getting a NullPointerException, and 2 messages have gone into ActiveMQ.DLQ with the below error message:
java.lang.Throwable: Delivery[7] exceeds redelivery policy limit:RedeliveryPolicy {destination = null, collisionAvoidanceFactor = 0.15, maximumRedeliveries = 6, maximumRedeliveryDelay = -1, initialRedeliveryDelay = 1000, useCollisionAvoidance = false, useExponentialBackOff = false, backOffMultiplier = 5.0, redeliveryDelay = 1000, preDispatchCheck = true}, cause:null
Try this. This should work
from("file:src/main/resources?fileName=equityfeeds.csv")
.unmarshal(new BindyCsvDataFormat(EquityFeeds.class))
.split(body())
.streaming().to("jms:queue:javaobjects.upstream.queue");
// This route is for Testing
from("jms:queue:javaobjects.upstream.queue").to("bean:camelBeanComponent?method=processRoute");
And write a consumer component bean
@Component
public class CamelBeanComponent {
public void processRoute(Exchange exchange) {
System.out.println(exchange.getIn().getBody());
}
}
This printed (you need to add a toString() if you want output like this):
EquityFeeds(externalTransactionId=SAPEXTXN1, clientId=GS, securityId=ICICI, transactionType=BUY, transactionDate=Sun Dec 30 00:00:00 CET 2012, marketValue=101.9, priorityFlag=Y)
EquityFeeds(externalTransactionId=SAPEXTXN2, clientId=AS, securityId=REL, transactionType=SELL, transactionDate=Sun Dec 30 00:00:00 CET 2012, marketValue=121.9, priorityFlag=N)
If you use .split().tokenize(","), then each field of each line (not the complete line) is converted to an EquityFeeds object (with the other fields null) and sent as a separate message to the queue.
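Regarding the NullPointerException in the JMS consumer: the listener only extracts a body when the incoming message is an ObjectMessage; otherwise messageData stays null and the toString() call throws. One way to make the producer side emit JMS ObjectMessages is Camel's jmsMessageType option. A sketch, assuming EquityFeeds implements Serializable (and note that ActiveMQ must be configured to trust the package for object messages):
from("file:src/main/resources?fileName=data.csv")
    .unmarshal(new BindyCsvDataFormat(EquityFeeds.class))
    .split(body())
    .streaming()
    // ask the JMS producer to create ObjectMessages instead of text/bytes messages
    .to("jms:queue:javaobjects.upstream.queue?jmsMessageType=Object");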
I'm trying to create a PoC application in Java to figure out how to do transaction management in Spring Cloud Stream when using Kafka for message publishing. The use case I'm trying to simulate is a processor that receives a message, does some processing, and generates two new messages destined for two separate topics. I want to be able to handle publishing both messages as a single transaction. So, if publishing the second message fails, I want to roll back (not commit) the first message. Does Spring Cloud Stream support such a use case?
I've set the @Transactional annotation and I can see a global transaction starting before the message is delivered to the consumer. However, when I try to publish a message via the MessageChannel.send() method, I can see that a new local transaction is started and completed in the KafkaProducerMessageHandler class' handleRequestMessage() method, which means that the sending of the message does not participate in the global transaction. So, if there's an exception thrown after the publishing of the first message, that message will not be rolled back. The global transaction gets rolled back, but that doesn't really do anything, since the first message was already committed.
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          transaction:
            transaction-id-prefix: txn.
            producer: # these apply to all producers that participate in the transaction
              partition-key-extractor-name: partitionKeyExtractorStrategy
              partition-selector-name: partitionSelectorStrategy
              partition-count: 3
              configuration:
                acks: all
                enable:
                  idempotence: true
                retries: 10
        bindings:
          input-customer-data-change-topic:
            consumer:
              configuration:
                isolation:
                  level: read_committed
              enable-dlq: true
      bindings:
        input-customer-data-change-topic:
          content-type: application/json
          destination: com.fis.customer
          group: com.fis.ec
          consumer:
            partitioned: true
            max-attempts: 1
        output-name-change-topic:
          content-type: application/json
          destination: com.fis.customer.name
        output-email-change-topic:
          content-type: application/json
          destination: com.fis.customer.email
@SpringBootApplication
@EnableBinding(CustomerDataChangeStreams.class)
public class KafkaCloudStreamCustomerDemoApplication
{
public static void main(final String[] args)
{
SpringApplication.run(KafkaCloudStreamCustomerDemoApplication.class, args);
}
}
public interface CustomerDataChangeStreams
{
@Input("input-customer-data-change-topic")
SubscribableChannel inputCustomerDataChange();
@Output("output-email-change-topic")
MessageChannel outputEmailDataChange();
@Output("output-name-change-topic")
MessageChannel outputNameDataChange();
}
@Component
public class CustomerDataChangeListener
{
@Autowired
private CustomerDataChangeProcessor mService;
@StreamListener("input-customer-data-change-topic")
public void handleCustomerDataChangeMessages(
@Payload final ImmutableCustomerDetails customerDetails)
{
mService.processMessage(customerDetails);
}
}
@Component
public class CustomerDataChangeProcessor
{
private final CustomerDataChangeStreams mStreams;
@Value("${spring.cloud.stream.bindings.output-email-change-topic.destination}")
private String mEmailChangeTopic;
@Value("${spring.cloud.stream.bindings.output-name-change-topic.destination}")
private String mNameChangeTopic;
public CustomerDataChangeProcessor(final CustomerDataChangeStreams streams)
{
mStreams = streams;
}
public void processMessage(final CustomerDetails customerDetails)
{
try
{
sendNameMessage(customerDetails);
sendEmailMessage(customerDetails);
}
catch (final JSONException ex)
{
LOGGER.error("Failed to send messages.", ex);
}
}
public void sendNameMessage(final CustomerDetails customerDetails)
throws JSONException
{
final JSONObject nameChangeDetails = new JSONObject();
nameChangeDetails.put(KafkaConst.BANK_ID_KEY, customerDetails.bankId());
nameChangeDetails.put(KafkaConst.CUSTOMER_ID_KEY, customerDetails.customerId());
nameChangeDetails.put(KafkaConst.FIRST_NAME_KEY, customerDetails.firstName());
nameChangeDetails.put(KafkaConst.LAST_NAME_KEY, customerDetails.lastName());
final String action = customerDetails.action();
nameChangeDetails.put(KafkaConst.ACTION_KEY, action);
final MessageChannel nameChangeMessageChannel = mStreams.outputNameDataChange();
nameChangeMessageChannel.send(MessageBuilder.withPayload(nameChangeDetails.toString())
.setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
.setHeader(KafkaHeaders.TOPIC, mNameChangeTopic).build());
if ("fail_name_illegal".equalsIgnoreCase(action))
{
throw new IllegalArgumentException("Customer name failure!");
}
}
public void sendEmailMessage(final CustomerDetails customerDetails) throws JSONException
{
final JSONObject emailChangeDetails = new JSONObject();
emailChangeDetails.put(KafkaConst.BANK_ID_KEY, customerDetails.bankId());
emailChangeDetails.put(KafkaConst.CUSTOMER_ID_KEY, customerDetails.customerId());
emailChangeDetails.put(KafkaConst.EMAIL_ADDRESS_KEY, customerDetails.email());
final String action = customerDetails.action();
emailChangeDetails.put(KafkaConst.ACTION_KEY, action);
final MessageChannel emailChangeMessageChannel = mStreams.outputEmailDataChange();
emailChangeMessageChannel.send(MessageBuilder.withPayload(emailChangeDetails.toString())
.setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
.setHeader(KafkaHeaders.TOPIC, mEmailChangeTopic).build());
if ("fail_email_illegal".equalsIgnoreCase(action))
{
throw new IllegalArgumentException("E-mail address failure!");
}
}
}
EDIT
We are getting closer. The local transaction does not get created anymore. However, the global transaction still gets committed even if there was an exception. From what I can tell, the exception does not propagate to the TransactionTemplate.execute() method, and therefore the transaction gets committed. It seems that the MessageProducerSupport class "swallows" the exception in the catch clause of its sendMessage() method: if an error channel is defined, a message is published to it and the exception is not rethrown. I tried turning the error channel off (spring.cloud.stream.kafka.binder.transaction.producer.error-channel-enabled = false) but that doesn't turn it off. So, just for a test, I simply set the error channel to null in the debugger to force the exception to be rethrown. That seems to do it. However, the original message keeps getting redelivered to the initial consumer even though I have max-attempts set to 1 for that consumer.
See the documentation.
spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix
Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.
Default: null (no transactions)
spring.cloud.stream.kafka.binder.transaction.producer.*
Global producer properties for producers in a transactional binder. See spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix and Kafka Producer Properties and the general producer properties supported by all binders.
Default: See individual producer properties.
You must configure the shared global producer.
Don't add @Transactional - the container will start the transaction and send the offset to the transaction before committing the transaction.
If the listener throws an exception, the transaction is rolled back and the DefaultAfterRollbackProcessor will re-seek the topics/partitions so that the record will be redelivered.
EDIT
There is a bug in the configuration of the binder's transaction manager that causes a new local transaction to be started by the output binding.
To work around it, reconfigure the TM with the following container customizer bean...
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer() {
return (container, dest, group) -> {
KafkaTransactionManager<?, ?> tm = (KafkaTransactionManager<?, ?>) container.getContainerProperties()
.getTransactionManager();
tm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
};
}
EDIT2
You can't use the binder's DLQ support because, from the container's perspective, the delivery was successful. We need to propagate the exception to the container to force a rollback. So, you need to move the dead-lettering to the AfterRollbackProcessor instead. Here is my complete test class:
@SpringBootApplication
@EnableBinding(Processor.class)
public class So57379575Application {
public static void main(String[] args) {
SpringApplication.run(So57379575Application.class, args);
}
@Autowired
private MessageChannel output;
@StreamListener(Processor.INPUT)
public void listen(String in) {
System.out.println("in:" + in);
this.output.send(new GenericMessage<>(in.toUpperCase()));
if (in.equals("two")) {
throw new RuntimeException("fail");
}
}
@KafkaListener(id = "so57379575", topics = "so57379575out")
public void listen2(String in) {
System.out.println("out:" + in);
}
@KafkaListener(id = "so57379575DLT", topics = "so57379575dlt")
public void listen3(String in) {
System.out.println("dlt:" + in);
}
@Bean
public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
return args -> {
template.send("so57379575in", "one".getBytes());
template.send("so57379575in", "two".getBytes());
};
}
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> customizer(
KafkaTemplate<Object, Object> template) {
return (container, dest, group) -> {
// enable transaction synchronization
KafkaTransactionManager<?, ?> tm = (KafkaTransactionManager<?, ?>) container.getContainerProperties()
.getTransactionManager();
tm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
// container dead-lettering
DefaultAfterRollbackProcessor<? super byte[], ? super byte[]> afterRollbackProcessor =
new DefaultAfterRollbackProcessor<>(new DeadLetterPublishingRecoverer(template,
(ex, tp) -> new TopicPartition("so57379575dlt", -1)), 0);
container.setAfterRollbackProcessor(afterRollbackProcessor);
};
}
}
and
spring:
  kafka:
    bootstrap-servers:
    - 10.0.0.8:9092
    - 10.0.0.8:9093
    - 10.0.0.8:9094
    consumer:
      auto-offset-reset: earliest
      enable-auto-commit: false
      properties:
        isolation.level: read_committed
  cloud:
    stream:
      bindings:
        input:
          destination: so57379575in
          group: so57379575in
          consumer:
            max-attempts: 1
        output:
          destination: so57379575out
      kafka:
        binder:
          transaction:
            transaction-id-prefix: so57379575tx.
            producer:
              configuration:
                acks: all
                retries: 10
#logging:
#  level:
#    org.springframework.kafka: trace
#    org.springframework.transaction: trace
and
in:two
2019-08-07 12:43:33.457 ERROR 36532 --- [container-0-C-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Exception thrown while
...
Caused by: java.lang.RuntimeException: fail
...
in:one
dlt:two
out:ONE
I've been using Spring WebFlux to create a text stream; here is the code.
@SpringBootApplication
@RestController
public class ReactiveServer {
private static final String FILE_PATH = "c:/test/";
@GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE, value = "/events")
Flux<String> events() {
Flux<String> eventFlux = Flux.fromStream(Stream.generate(() -> FileReader.readFile()));
Flux<Long> durationFlux = Flux.interval(Duration.ofMillis(500));
return Flux.zip(eventFlux, durationFlux).map(Tuple2::getT1);
}
public static void main(String[] args) {
SpringApplication.run(ReactiveServer.class, args);
}
}
When I access the /events URL in the browser I get this, which is almost what I want:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379993662,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":0,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994203,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994706,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379995213,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":3,"rollingCountBadRequests":0}
What I need to do is to insert a "ping:" in between iterations to get:
ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379993662,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":0,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994203,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994706,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379995213,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":3,"rollingCountBadRequests":0}
But, the best I could get was:
data: ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379993662,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":0,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994203,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data: ping:
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379994706,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":2,"rollingCountBadRequests":0}
data:{"type":"HystrixCommand","name":"GetConsumerCommand","group":"ConsumerRemoteGroup","currentTime":1542379995213,"isCircuitBreakerOpen":false,"errorPercentage":0,"errorCount":0,"requestCount":3,"rollingCountBadRequests":0}
Does anyone know of a way to do what I need?
You could try returning a Flux<ServerSentEvent> and specifying the type of event you're trying to send, like this:
@RestController
public class TestController {
@GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE, path = "/events")
Flux<ServerSentEvent> events() {
Flux<String> events = Flux.interval(Duration.ofMillis(200)).map(String::valueOf);
Flux<ServerSentEvent<String>> sseData = events.map(event -> ServerSentEvent.builder(event).build());
Flux<ServerSentEvent<String>> ping = Flux.interval(Duration.ofMillis(500))
.map(l -> ServerSentEvent.builder("").event("ping").build());
return Flux.merge(sseData, ping);
}
}
With that code snippet, I'm getting the following output:
$ http --stream :8080/events
HTTP/1.1 200 OK
Content-Type: text/event-stream;charset=UTF-8
transfer-encoding: chunked
data:0
data:1
event:ping
data:
data:2
data:3
data:4
event:ping
data:
This is consistent with Server-Sent Events. Is the ping: prefix something specific to Hystrix? If it is, I don't think it's consistent with the SSE spec, and I don't think it's something supported by Spring Framework.
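If the goal is just a keep-alive rather than the literal ping: prefix, the SSE comment syntax (a line starting with a colon) may be enough; Spring's ServerSentEvent builder exposes it via comment(). A minimal sketch, reusing the 500 ms interval and the sseData flux from the answer above:
Flux<ServerSentEvent<String>> keepAlive = Flux.interval(Duration.ofMillis(500))
        .map(i -> ServerSentEvent.<String>builder()
                .comment("ping") // rendered as ":ping" on the wire
                .build());
return Flux.merge(sseData, keepAlive);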
I'm using BatchingRabbitTemplate to send messages in a batch to an AMQP endpoint. On the receiving end I can use @RabbitListener to receive messages, but my problem is that the messages are automatically de-batched, so I cannot use @RabbitHandler public void receive(List<SomeObject> so). Is there any simpler way of not de-batching the messages than doing this:
@RabbitListener(..., containerFactory = "nonDeBatchingContainerFactory")
@Bean
public RabbitListenerContainerFactory nonDeBatchingContainerFactory(){
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setDeBatchingEnabled(false);
factory.setMessageConverter(jackson2JsonMessageConverter());
factory.setAfterReceivePostProcessors(new NonDeBatchingMessagePostProcessor(jackson2JsonMessageConverter()));
return factory;
}
and then implementing this post-processor (which is more or less a copy of the existing de-batching code):
public class NonDeBatchingMessagePostProcessor implements MessagePostProcessor {
private MessageConverter payloadConverter;
public NonDeBatchingMessagePostProcessor(MessageConverter payloadConverter) {
this.payloadConverter = payloadConverter;
}
@Override
public Message postProcessMessage(Message message) throws AmqpException {
Object batchFormat = message.getMessageProperties().getHeaders().get(MessageProperties.SPRING_BATCH_FORMAT);
if (MessageProperties.BATCH_FORMAT_LENGTH_HEADER4.equals(batchFormat)) {
List<? super Object> aggregatedObjects = new ArrayList<>();
ByteBuffer byteBuffer = ByteBuffer.wrap(message.getBody());
MessageProperties messageProperties = message.getMessageProperties();
String singleObjectTypeId = messageProperties.getHeaders().get(DEFAULT_CLASSID_FIELD_NAME).toString();
messageProperties.getHeaders().remove(MessageProperties.SPRING_BATCH_FORMAT);
while (byteBuffer.hasRemaining()) {
int length = byteBuffer.getInt();
if (length < 0 || length > byteBuffer.remaining()) {
throw new ListenerExecutionFailedException("Bad batched message received",
new MessageConversionException("Insufficient batch data at offset " + byteBuffer.position()),
message);
}
byte[] body = new byte[length];
byteBuffer.get(body);
messageProperties.setContentLength(length);
// Caveat - shared MessageProperties.
Message fragment = new Message(body, messageProperties);
Object singleObject = this.payloadConverter.fromMessage(fragment);
aggregatedObjects.add(singleObject);
}
Message aggregatedMessages = this.payloadConverter.toMessage(aggregatedObjects, messageProperties);
aggregatedMessages.getMessageProperties().getHeaders().put(DEFAULT_CONTENT_CLASSID_FIELD_NAME, singleObjectTypeId);
return aggregatedMessages;
}
return null;
}
}
I need this use case in order to receive all the messages in a batch from RabbitMQ and then do bulk indexing into Elasticsearch. Thanks.
It might be a bit easier to do the batching at the producing application level (send a List<SomeObject>) rather than using the batching template. Then you won't need anything at all on the consumer side.
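A sketch of that producer-side approach, assuming a Jackson JSON message converter is configured on both sides so the listener's parameter type can be inferred (the exchange, routing key and queue names here are illustrative):
// Producer side: send the whole collection as a single JSON message
public void sendBatch(RabbitTemplate rabbitTemplate, List<SomeObject> batch) {
    rabbitTemplate.convertAndSend("someExchange", "someRoutingKey", batch);
}

// Consumer side: the listener receives the list directly, ready for a single bulk index request
@RabbitListener(queues = "someQueue")
public void receive(List<SomeObject> batch) {
    // bulk-index the whole batch into Elasticsearch here
}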