Flow hangs when IdempotentReceiverInterceptor discards the message (after 4th message) - Java

I have the following flow:
return flow -> flow.channel(inputChannel())
        ...
        .gateway(childFlow, addMyInterceptor(str)); // by name
}

Consumer<GatewayEndpointSpec> addMyInterceptor(String objectIdHeader) {
    return endpointSpec -> endpointSpec.advice(addMyInterceptorInternal(objectIdHeader))
            .errorChannel(errorChannel());
}

default IdempotentReceiverInterceptor addMyInterceptorInternal(String header) {
    MessageProcessor<String> headerSelector = message -> headerExpression(header).apply(message);
    var interceptor = new IdempotentReceiverInterceptor(new MetadataStoreSelector(headerSelector, idempotencyStore()));
    interceptor.setDiscardChannel(idempotentDiscardChannel());
    return interceptor;
}
When the IdempotentReceiverInterceptor detects that a message is a duplicate, I see that the application hangs after the 4th duplicated message. I understand this is because the gateway expects a response (like here: PubSubInboundChannelAdapter stops to receive messages after 4th message), but I have no idea how to return a result from the interceptor.
Could you please explain this to me?

As long as all channels are direct (the default), i.e. there are no async handoffs in the flow via queue or executor channels, set the gateway's replyTimeout to 0 when the flow might not return a reply.
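A hedged sketch of that suggestion, reusing the addMyInterceptor customizer from the question; the reply timeout is set on the GatewayEndpointSpec alongside the advice and error channel:

Consumer<GatewayEndpointSpec> addMyInterceptor(String objectIdHeader) {
    return endpointSpec -> endpointSpec
            .advice(addMyInterceptorInternal(objectIdHeader))
            // do not block waiting for a reply that never comes when the
            // interceptor discards a duplicate
            .replyTimeout(0L)
            .errorChannel(errorChannel());
}

With a zero reply timeout the calling thread returns immediately when the child flow produces no reply instead of hanging, so discarded duplicates no longer tie up the upstream threads.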

Related

Azure ServiceBusSessionReceiverAsyncClient - Mono instead of Flux

I have a Spring Boot app where I receive a single message from an Azure Service Bus queue session.
The code is:
@Autowired
ServiceBusSessionReceiverAsyncClient apiMessageQueueIntegrator;
.
.
.
Mono<ServiceBusReceiverAsyncClient> receiverMono = apiMessageQueueIntegrator.acceptSession(sessionid);
Disposable subscription = Flux.usingWhen(receiverMono,
        receiver -> receiver.receiveMessages(),
        receiver -> Mono.fromRunnable(() -> receiver.close()))
        .subscribe(message -> {
            // Process message.
            logger.info(String.format("Message received from queue. Session id: %s. Contents: %s%n",
                    message.getSessionId(), message.getBody()));
            receivedMessage.setReceivedMessage(message);
            timeoutCheck.countDown();
        }, error -> {
            logger.info("Queue error occurred: " + error);
        });
As I am receiving only one message from the session, I use a CountDownLatch(1) to dispose of the subscription when I have received the message.
The documentation of the library says that it is possible to use Mono.usingWhen instead of Flux.usingWhen if I only expect one message, but I cannot find any examples of this anywhere, and I have not been able to figure out how to rewrite this code to do this.
How would the pasted code look if I were to use Mono.usingWhen instead?
Thank you conniey. Posting your suggestion as an answer to help other community members.
By default receiveMessages() is a Flux because we imagine the messages from a session to be "infinitely long". In your case, you only want the first message in the stream, so we use the next() operator.
The usage of the countdown latch is probably not the best approach. In the sample, we had one there so that the program didn't end before the messages were received. .subscribe is not a blocking call; it sets up the handlers and moves on to the next line of code.
Mono<ServiceBusReceiverAsyncClient> receiverMono = sessionReceiver.acceptSession("greetings-id");
Mono<ServiceBusReceivedMessage> singleMessageMono = Mono.usingWhen(receiverMono,
        receiver -> {
            // Anything you wish to do with the receiver.
            // In this case we only want to take the first message, so we use the "next" operator.
            // This returns a Mono.
            return receiver.receiveMessages().next();
        },
        receiver -> Mono.fromRunnable(() -> receiver.close()));

try {
    // Turns this into a blocking call. .block() waits indefinitely, so we have a timeout.
    ServiceBusReceivedMessage message = singleMessageMono.block(Duration.ofSeconds(30));
    if (message != null) {
        // Process message.
    }
} catch (Exception error) {
    System.err.println("Error occurred: " + error);
}
You can refer to the GitHub issue: ServiceBusSessionReceiverAsyncClient - Mono instead of Flux

How to get rid of "Closing the Kafka producer with timeoutMillis = ..."

I am sending Apache Avro formatted messages to a Kafka broker instance via the following code:
ProducerRecord<String, byte[]> producerRecord = new ProducerRecord<>(kafkaTopic.getTopicName(), null, null,
        avroConverter.getSchemaId().toString(), convertRecordToByteArray(kafkaRecordToSend));

String avroSchemaName = null;
// some of my AVRO schemas are unions, some are simple:
if (_avroSchema.getTypes().size() == 1) {
    avroSchemaName = _avroSchema.getTypes().get(0).getName();
} else if (_avroSchema.getTypes().size() == 2) {
    avroSchemaName = _avroSchema.getTypes().get(1).getName();
}

// some custom header items...
producerRecord.headers().add(MessageHeaders.MESSAGE_ID.getText(), messageID.getBytes());
producerRecord.headers().add(MessageHeaders.AVRO_SCHEMA_REGISTRY_SUBJECT.getText(),
        avroSchemaName.getBytes());
producerRecord.headers().add(MessageHeaders.AVRO_SCHEMA_REGISTRY_SCHEMA_ID.getText(),
        avroConverter.getSchemaId().toString().getBytes());
if (multiline) {
    producerRecord.headers().add(MessageHeaders.AVRO_SCHEMA_MULTILINE_RECORD_NAME.getText(),
            MULTILINE_RECORD_NAME.getBytes());
}

try {
    Future<RecordMetadata> result = kafkaProducer.send(producerRecord);
    RecordMetadata sendResult = result.get();
    MessageLogger.logResourceBundleMessage(_messages, "JAPCTOAVROKAFKAPRODUCER:DEBUG0002",
            sendResult.offset());
} catch (Exception e) {
    MessageLogger.logError(e);
    throw e;
}
The code works fine; the messages end up in Kafka and are eventually processed into InfluxDB. The problem is that every send operation produces a lot of INFO messages (the client ID number is just an example):
[Producer clientId=producer-27902] Closing the Kafka producer with timeoutMillis = 10000 ms.
[Producer clientId=producer-27902] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
Kafka startTimeMs: ...
Kafka commitId: ...
[Producer clientId=producer-27902] Cluster ID:
which spam our Graylog.
I use similar code to send String-formatted messages. That code runs without producing these INFO messages:
ProducerRecord<String, String> recordToSend = new ProducerRecord<>(queueName, messageText);
recordToSend.headers().add("messageID", messageID.getBytes());
Future<RecordMetadata> result = _producerConnection.send(recordToSend);
I know that the INFO messages are logged by the class org.apache.kafka.clients.producer.KafkaProducer. I need to get rid of these messages, but I do not have access to the logging.xml that defines the logger properties for Graylog.
Is there a way to get rid of these messages via POM entries or programmatically?
The reason for this behavior was a design flaw:
The code in the post above was placed in a method that is called to send each message to Kafka, and the KafkaProducer was instantiated inside that method on every call. Surprisingly, KafkaProducer issues "Closing the KafkaProducer with timeoutMillis = ..." not only on an explicit close() by the calling code, but also when the strong reference to the instance is lost (in my case, when the code leaves the method). In the latter case, timeoutMillis is set to 9223372036854775807 (the largest long value).
To get rid of the many messages, I moved the KafkaProducer instantiation out of the method, made the instance a class attribute, and no longer call an explicit close() after send(...). Furthermore, I made the instance of my class that instantiates the KafkaProducer a strongly referenced class member.
By doing so, I get some messages from the KafkaProducer at instantiation, and then there is silence.
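A rough sketch of that refactoring, with hypothetical names (MySender, buildProducerProps()) standing in for the real classes; the point is that the producer lives as a field and is created exactly once:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class MySender {

    // created once and reused for every send; no per-call instantiation,
    // so no per-call "Closing the Kafka producer ..." log lines
    private final KafkaProducer<String, byte[]> kafkaProducer = new KafkaProducer<>(buildProducerProps());

    private static Properties buildProducerProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        return props;
    }

    public RecordMetadata send(String topic, byte[] payload) throws Exception {
        return kafkaProducer.send(new ProducerRecord<>(topic, payload)).get();
    }

    // close the producer exactly once, e.g. on application shutdown
    public void shutdown() {
        kafkaProducer.close();
    }
}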

Redis Streams one message per consumer with Java

I'm trying to implement a Java application with Redis Streams where every consumer consumes exactly one message. Like a pipeline/queue where every consumer takes exactly one message, processes it, and after finishing takes the next message that has not been processed so far in the stream.
What works is that every message is consumed by exactly one consumer (with xreadgroup).
I started with this tutorial from redislabs
The code:
RedisClient redisClient = RedisClient.create("redis://pw@host:port");
StatefulRedisConnection<String, String> connection = redisClient.connect();
RedisCommands<String, String> syncCommands = connection.sync();

try {
    syncCommands.xgroupCreate(XReadArgs.StreamOffset.from(STREAM_KEY, "0-0"), ID_READ_GROUP);
} catch (RedisBusyException redisBusyException) {
    System.out.println(String.format("\t Group '%s' already exists", ID_READ_GROUP));
}

System.out.println("Waiting for new messages ");
while (true) {
    List<StreamMessage<String, String>> messages = syncCommands.xreadgroup(
            Consumer.from(ID_READ_GROUP, ID_WORKER), XReadArgs.StreamOffset.lastConsumed(STREAM_KEY));
    if (!messages.isEmpty()) {
        System.out.println(messages.size());
        for (StreamMessage<String, String> message : messages) {
            System.out.println(message.getId());
            Thread.sleep(5000);
            syncCommands.xack(STREAM_KEY, ID_READ_GROUP, message.getId());
        }
    }
}
My current problem is that a consumer takes more than one message from the queue, and in some situations the other consumers are waiting while one consumer is processing 10 messages at once.
Thanks in advance!
Notice that XREADGROUP accepts a COUNT argument.
See the JavaDoc for how to do it with Lettuce's xreadgroup by passing XReadArgs.
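A small sketch under the question's own setup (STREAM_KEY, ID_READ_GROUP, ID_WORKER are the asker's constants); passing XReadArgs.Builder.count(1) limits each XREADGROUP call to at most one message:

List<StreamMessage<String, String>> messages = syncCommands.xreadgroup(
        Consumer.from(ID_READ_GROUP, ID_WORKER),
        XReadArgs.Builder.count(1),          // at most one message per XREADGROUP call
        XReadArgs.StreamOffset.lastConsumed(STREAM_KEY));

Each worker then acknowledges its single message before asking for the next one, which gives the one-message-per-consumer behavior described in the question.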

How to ensure messages reach the Kafka broker?

I have a message producer on my local machine and a broker on a remote host (AWS). After sending a message from the producer, I wait and call the console consumer on the remote host and see excessive logs, but without the value from the producer.
The producer flushes the data after calling the send method. Everything is configured correctly.
How can I check that the broker received the message from the producer, and whether the producer received an answer?
The send method asynchronously sends the message to the topic and returns a Future of RecordMetadata:
java.util.concurrent.Future<RecordMetadata> send(ProducerRecord<K,V> record)
Asynchronously sends a record to a topic
After the flush call, check that the Future has completed by calling the isDone method (for example, Future.isDone() == true).
Invoking this method makes all buffered records immediately available to send (even if linger.ms is greater than 0) and blocks on the completion of the requests associated with these records. The post-condition of flush() is that any previously sent record will have completed (e.g. Future.isDone() == true). A request is considered completed when it is successfully acknowledged according to the acks configuration you have specified or else it results in an error.
The RecordMetadata contains the offset and the partition:
public int partition()
The partition the record was sent to
public long offset()
The offset of the record, or -1 if {hasOffset()} returns false.
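A short sketch putting those pieces together (producer, the topic name, key and value are placeholders):

Future<RecordMetadata> future = producer.send(new ProducerRecord<>("my-topic", "key", "value"));
producer.flush();                      // blocks until previously sent records have completed
if (future.isDone()) {                 // true after flush() per its post-condition
    RecordMetadata metadata = future.get();
    System.out.println("partition=" + metadata.partition() + ", offset=" + metadata.offset());
}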
Or you can use a Callback to check whether the message was sent to the topic or not.
Fully non-blocking usage can make use of the Callback parameter to provide a callback that will be invoked when the request is complete.
Here is a clear example from the docs:
ProducerRecord<byte[], byte[]> record = new ProducerRecord<byte[], byte[]>("the-topic", key, value);
producer.send(record,
        new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    e.printStackTrace();
                } else {
                    System.out.println("The offset of the record we just sent is: " + metadata.offset());
                }
            }
        });
You can also try the get() API of send(), which returns the Future of RecordMetadata:
ProducerRecord<String, String> record =
        new ProducerRecord<>("SampleTopic", "SampleKey", "SampleValue");
try {
    producer.send(record).get();
} catch (Exception e) {
    e.printStackTrace();
}
Use exactly-once delivery and you won't need to worry about whether your message reached the broker or not: https://www.baeldung.com/kafka-exactly-once, https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
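If you go that route, a minimal producer-side sketch looks like the following (the property names are the standard ProducerConfig constants; full exactly-once across a read-process-write pipeline additionally needs transactions, as the linked articles explain):

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");     // placeholder address
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);             // broker de-duplicates retried batches
props.put(ProducerConfig.ACKS_CONFIG, "all");                          // wait for all in-sync replicas
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-tx-id");         // only needed for transactional sends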

ServiceActivator 'randomly' unsubscribing from PublishSubscribeChannel

I have a method annotated with @ServiceActivator("CH1"), where "CH1" is defined as:
@Bean(name = "CH1")
MessageChannel ch1() {
    return new PublishSubscribeChannel();
}
and other PollableChannels publish to this channel via
@BridgeTo(value = "CH1", poller = @Poller("myPoller"))
Things seem to work fine most of the time; however, seemingly randomly the message handler unsubscribes from "CH1" and I see in the logs:
[DEBUG] (pool-2-thread-1) org.springframework.integration.dispatcher.BroadcastingDispatcher: No subscribers, default behavior is ignore
Now I know I can change the minSubscribers, but I don't get why things seem to randomly unsubscribe. After this error it will go back to handling some messages fine. Does a message handler unsubscribe while handling messages, or when the executor being used is full? I see no errors associated with this in the log, nor an unsubscribe or an update to the subscriber count for "CH1" in the logs.
That does not make sense. Please share some test case to reproduce it from the Framework perspective.
The source code on the matter looks like:
if (dispatched == 0 && this.minSubscribers == 0 && logger.isDebugEnabled()) {
    if (sequenceSize > 0) {
        logger.debug("No subscribers received message, default behavior is ignore");
    }
    else {
        logger.debug("No subscribers, default behavior is ignore");
    }
}
where we can get to sequenceSize == 0 only in case of:
Collection<MessageHandler> handlers = this.getHandlers();
if (this.requireSubscribers && handlers.size() == 0) {
    throw new MessageDispatchingException(message, "Dispatcher has no subscribers");
}
int sequenceSize = handlers.size();
The only clue is that your subscriber unsubscribes somehow...
I see that you have DEBUG enabled for your CH1, so would you mind sharing DEBUG logs for the entire org.springframework.integration category when you see that error?
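For completeness, and only as a sketch of the minSubscribers option the question mentions (it makes the silence loud rather than fixing the unsubscription), the channel bean could be configured like this:

@Bean(name = "CH1")
MessageChannel ch1() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel();
    // dispatch fails with an exception instead of logging
    // "No subscribers, default behavior is ignore"
    channel.setMinSubscribers(1);
    return channel;
}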
EDIT
Also note that whenever a subscriber is added/removed (e.g. when a consuming endpoint is started/stopped), you will see this log message...
if (logger.isInfoEnabled()) {
    logger.info("Channel '" + this.getFullChannelName() + "' has " + counter + " subscriber(s).");
}
(when logging at INFO level or above).
