I want to implement a delayed consumer using the high-level consumer API.
The main idea:
produce messages by key (each message contains a creation timestamp); this makes sure that each partition has its messages ordered by produce time
set auto.commit.enable=false (the offset will be committed explicitly after each message is processed)
consume a message
check the message timestamp and see whether enough time has passed
process the message (this operation will never fail)
commit 1 offset
while (it.hasNext()) {
  val msg = it.next().message()
  // checks timestamp in msg to see if the delay period has been exceeded
  while (!delayedPeriodPassed(msg)) {
    waitSomeTime() // Thread.sleep or something...
  }
  // certain that the msg was delayed and can now be handled
  Try { process(msg) } // processing the msg will never fail the consumer
  consumer.commitOffsets // commit each msg
}
Some concerns about this implementation:
committing each offset might slow ZK down
can consumer.commitOffsets throw an exception? If yes, I will consume the same message twice (this can be solved with idempotent messages)
the problem of waiting a long time without committing the offset: for example, if the delay period is 24 hours, the consumer will get the next message from the iterator, sleep for 24 hours, process, and commit (ZK session timeout?)
how can the ZK session be kept alive without committing new offsets? (setting a high zookeeper.session.timeout.ms can result in a dead consumer going unrecognized)
any other problems I'm missing?
Thanks!
One way to go about this would be to use a different topic where you push all messages that are to be delayed. If all delayed messages should be processed after the same time delay, this will be fairly straightforward:
while (it.hasNext()) {
  val message = it.next().message()
  if (shouldBeDelayed(message)) {
    val delay = 24 hours
    val delayTo = getCurrentTime() + delay
    putMessageOnDelayedQueue(message, delay, delayTo)
  } else {
    process(message)
  }
  consumer.commitOffset()
}
All regular messages will now be processed as soon as possible, while those that need a delay get put on another topic.
The nice thing is that we know that the message at the head of the delayed topic is the one that should be processed first since its delayTo value will be the smallest. Therefore we can set up another consumer that reads the head message, checks if the timestamp is in the past and if so processes the message and commits the offset. If not it does not commit the offset and instead just sleeps until that time:
while (it.hasNext()) {
  val delayedMessage = it.peek().message()
  if (delayedMessage.delayTo < getCurrentTime()) {
    val readMessage = it.next().message()
    process(readMessage.originalMessage)
    consumer.commitOffset()
  } else {
    delayProcessingUntil(delayedMessage.delayTo)
  }
}
In case there are different delay times you could partition the topic on the delay (e.g. 24 hours, 12 hours, 6 hours). If the delay time is more dynamic than that it becomes a bit more complex. You could solve it by introducing two delay topics. Read all messages off delay topic A and process all the messages whose delayTo value is in the past. Among the others, find the one with the closest delayTo, then put them all on topic B. Sleep until the closest one should be processed and do it all in reverse, i.e. process messages from topic B and put the ones that shouldn't yet be processed back on topic A.
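A rough Java-style sketch of that two-topic ping-pong, purely to illustrate the idea: DelayedMessage, readAll, process and send are hypothetical helpers of your own (none of them are Kafka APIs), and delayTo is an absolute epoch-millis timestamp.

// Sketch only: DelayedMessage, readAll, process and send are made-up helpers, not Kafka APIs.
void drainDelayTopic(String from, String to) throws InterruptedException {
    long nextWakeUp = Long.MAX_VALUE;
    for (DelayedMessage msg : readAll(from)) {            // consume everything currently on `from`
        if (msg.delayTo <= System.currentTimeMillis()) {
            process(msg.originalMessage);                 // due now: handle it
        } else {
            send(to, msg);                                // not due yet: park it on the other topic
            nextWakeUp = Math.min(nextWakeUp, msg.delayTo);
        }
    }
    if (nextWakeUp != Long.MAX_VALUE) {
        Thread.sleep(Math.max(0, nextWakeUp - System.currentTimeMillis()));
    }
}

// Usage: alternate the direction forever.
// while (true) { drainDelayTopic("delay-A", "delay-B"); drainDelayTopic("delay-B", "delay-A"); }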
To answer your specific questions (some have been addressed in the comments to your question)
Commit each offset might slow ZK down
You could consider switching to storing the offsets in Kafka (a feature available from 0.8.2; check out the offsets.storage property in the consumer config).
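For illustration only (assuming the old high-level consumer and java.util.Properties; the group name is made up), that would be a consumer config along these lines:

// Illustrative sketch of an old (pre-0.9) high-level consumer configuration.
Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181");
props.put("group.id", "delayed-consumer-group"); // hypothetical group name
props.put("auto.commit.enable", "false");        // commit explicitly after processing
props.put("offsets.storage", "kafka");           // store offsets in Kafka instead of ZooKeeper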
Can consumer.commitOffsets throw an exception? if yes, I will consume the same message twice (can solve with idempotent messages)
I believe it can, if it is not able to communicate with the offset storage for instance. Using idempotent messages solves this problem though, as you say.
Problem waiting long time without committing the offset, for example delay period is 24 hours, will get next from iterator, sleep for 24 hours, process and commit (ZK session timeout?)
This won't be a problem with the above outlined solution unless the processing of the message itself takes more than the session timeout.
How can the ZK session be kept alive without committing new offsets? (setting a high zookeeper.session.timeout.ms can result in a dead consumer going unrecognized)
Again with the above you shouldn't need to set a long session timeout.
Any other problems I'm missing?
There always are ;)
Use TIBCO EMS or other JMS queues. They have retry delay built in. Kafka may not be the right design choice for what you are doing.
I would suggest another route in your case.
It doesn't make sense to address the waiting time in the main thread of the consumer. This would be an anti-pattern in how the queues are used. Conceptually, you need to process the messages as fast as possible and keep the queue at a low load factor.
Instead, I would use a scheduler that schedules a job for each message you need to delay. This way you can process the queue and create asynchronous jobs that will be triggered at predefined points in time.
The downside of this technique is that it is sensitive to the state of the JVM that holds the scheduled jobs in memory. If that JVM fails, you lose the scheduled jobs and you don't know whether the task was executed or not.
There are scheduler implementations, though, that can be configured to run in a clustered environment, thus keeping you safe from JVM crashes.
Take a look at this java scheduling framework: http://www.quartz-scheduler.org/
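As a minimal sketch of that idea (assuming Quartz 2.x; DelayedMessageJob, the "payload" data key and the 24-hour delay are illustrative, and the println stands in for real processing):

import java.util.Date;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class DelayedMessageJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        String payload = context.getMergedJobDataMap().getString("payload");
        System.out.println("Processing delayed message: " + payload); // real processing goes here
    }

    // Called from the consumer loop for each message that must be delayed by 24 hours.
    public static void scheduleIn24Hours(Scheduler scheduler, String messageBody) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(DelayedMessageJob.class)
                .usingJobData("payload", messageBody)
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .startAt(new Date(System.currentTimeMillis() + 24L * 60 * 60 * 1000))
                .build();
        scheduler.scheduleJob(job, trigger);
    }
}

To survive JVM crashes, Quartz can also be configured with its JDBC job store and clustering, which is the kind of setup referred to above.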
We had the same issue during one of our tasks. Although it was eventually solved without using delayed queues, when exploring the solution the best approach we found was to use the pause and resume functionality provided by the KafkaConsumer API. This approach and its motivation are described well here: https://medium.com/naukri-engineering/retry-mechanism-and-delay-queues-in-apache-kafka-528a6524f722
A keyed list on a schedule, or its Redis alternative, may be the best approach.
TL;DR: how to mimic RabbitMQ's scheduling functionality while keeping the consumer:
stateless
free from managing scheduled messages
free from useless retries of scheduled messages between receiving the message and finally consuming it at the correct scheduled time
I have a single SQS queue with default properties on creation. The average time a consumer takes to process a message is 1~2s. But a few messages need to be processed twice, within a 4h window. These messages are called B, and the others are called A.
Suppose I have my queue with the following messages: A1, A2, B1, A3, B2 (5 messages, max 10s to consume them all) at the start of this table:
time      | what should happen
----------|-------------------
now       | consumer connected to the queue
now+10s   | all As were consumed successfully and deleted from the queue;
          | Bs had their unsuccessful first try and are now waiting for their retry in 4h
between   | nothing happens, since no new messages arrive and the old ones are waiting
now+4h4s  | Bs successfully consumed during the second try and, therefore, deleted from the queue
I have a Spring application where I can throw exceptions when I find a type B message. For simplicity and scalability, I want to have a single thread consuming messages, taking 1~2s on each message.
This means I cannot hang message processing as this answer suggested. I also can't use SQS's delivery delay, since it only postpones messages arriving at the queue, not retries. If at all possible, I would like to keep using a long-polling @JmsListener and avoid keeping any state in my application's memory.
I would write a small AWS Lambda function that gets invoked every ~minute. That function would get a message (off the hopefully FIFO-type SQS queue) and check the time it was added. If it was added >= 4 hours ago, it would delete it from the incoming queue and add it to the 'delayed by 4 hours' queue, which your application could listen to. If it moved a message, it keeps doing so until the next message isn't yet 4 hours old. Increase/decrease the frequency of the Lambda to control how 'tight' to 4 hours you are, at the added expense of running the Lambda more often.
Here is a quick link to an example of an AWS Lambda function using SQS: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-example.html
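A hedged sketch of such a mover function (AWS SDK for Java v1; the queue URLs, the 4-hour threshold and the class/method names are illustrative, and a real Lambda would use a handler signature as in the link above):

import java.util.List;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class MoveOldMessagesFunction {
    private static final long FOUR_HOURS_MS = 4L * 60 * 60 * 1000;
    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    // Invoked roughly every minute, e.g. by a CloudWatch schedule.
    public void moveDueMessages(String incomingQueueUrl, String delayedQueueUrl) {
        while (true) {
            List<Message> messages = sqs.receiveMessage(
                    new ReceiveMessageRequest(incomingQueueUrl)
                            .withAttributeNames("SentTimestamp") // when the message hit the queue
                            .withMaxNumberOfMessages(1))
                    .getMessages();
            if (messages.isEmpty()) {
                break;
            }
            Message head = messages.get(0);
            long sentAt = Long.parseLong(head.getAttributes().get("SentTimestamp"));
            if (System.currentTimeMillis() - sentAt < FOUR_HOURS_MS) {
                break; // head message is not yet 4 hours old; try again on the next invocation
            }
            sqs.sendMessage(delayedQueueUrl, head.getBody());
            sqs.deleteMessage(incomingQueueUrl, head.getReceiptHandle());
        }
    }
}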
You could send message B to a Step Functions state machine and put a wait state in to wait for 4 hours before sending it to the queue. The state machine would keep the state for you, and you can send messages directly to SQS from Step Functions so you don't need to write any code.
Since I was using JmsListener with setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE), I decided to run this at the end of the consumer for re-processable messages:
myAmazonSqsInstance.sendMessage(
    new SendMessageRequest()
        .withQueueUrl("queueName")
        .withMessageBody(myMessageWithText)
        .withDelaySeconds(900) // 900s = 15min
);
This way the message will be consumed successfully, but a new message with the same body will be produced on the queue. That message will be consumed in 15 min and, due to my business logic, fail again. There will be 16 failures (16 * 15 min = 4 h) until it is finally consumed without producing a new message.
Although this is not what I asked for, and it's similar to the other answers (only the tech stack is different), I decided to write it down here to make a Java solution available.
I have a typical Kafka consumer/producer app that is polling all the time for data. Sometimes there might be no data for hours, but sometimes there could be thousands of messages per second. Because of this, the application is built so it's always polling, with a 500ms poll timeout.
However, I've noticed that sometimes, if the Kafka cluster goes down, the consumer client, once started, won't throw an exception; it will simply time out at 500ms and continue returning empty ConsumerRecords<K,V>. So, as far as the application is concerned, there is no data to consume, when in reality the whole Kafka cluster could be unreachable, but the app itself has no idea.
I checked the docs, and I couldn't find a way to validate consumer health, other than maybe closing the connection and subscribing to the topic every single time, but I really don't want to do that on a long-running application.
What's the best way to validate that the consumer is active and healthy while polling, ideally from the same thread/client object, so that the app can distinguish between a no-data situation and an unreachable Kafka cluster?
I am sure this is not the best way to achieve what you are looking for.
But one simple way I implemented in my application is to maintain a static counter, emptyRecordSetReceived, which I increment whenever a poll operation returns an empty record set.
This counter is emitted to Graphite at a periodic interval (say every minute) with the help of the metric registry in the application.
Now let's say you know the maximum time frame for which messages may legitimately be unavailable to this application, for example 6 hours. Given that you are polling every 500 milliseconds, you know that if no message is received for 6 hours, the counter will increase by
2 polls per second * 60 seconds * 60 minutes * 6 hours = 43,200.
We placed an alerting check on this counter value as reported to Graphite. This metric gave me a decent idea of whether it was a genuine problem in the application or whether something was down on the broker or producer side.
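A minimal sketch of that counting loop (assuming a newer Kafka client where poll takes a Duration; the Graphite reporting and the process() call are left out / hypothetical):

import java.time.Duration;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollHealthCounter {
    // Emitted to Graphite periodically by a separate metrics reporter (not shown).
    static final AtomicLong emptyRecordSetReceived = new AtomicLong();

    static void pollLoop(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            if (records.isEmpty()) {
                emptyRecordSetReceived.incrementAndGet(); // no data, or the broker is unreachable
            } else {
                records.forEach(record -> {
                    // process(record); // application-specific processing, hypothetical
                });
            }
        }
    }
}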
This is just the naive way I solved this use case to some extent. I would love to hear how it is actually done without maintaining these counters.
I have a scenario where enable.auto.commit is set to false. For every poll(), the records obtained are offloaded to a ThreadPoolExecutor, and the commitSync() happens outside that context. But I doubt this is the right way to handle it, as my thread pool may still be processing a few messages while I commit them.
while (true) {
    ConsumerRecords<String, NormalizedSyslogMessage> records = consumer.poll(100);
    Date startTime = Calendar.getInstance().getTime();
    for (ConsumerRecord<String, NormalizedSyslogMessage> record : records) {
        NormalizedSyslogMessage normalizedMessage = record.value();
        normalizedSyslogMessageList.add(normalizedMessage);
    }
    Date endTime = Calendar.getInstance().getTime();
    long durationInMilliSec = endTime.getTime() - startTime.getTime();
    // execute process thread on message size equal to 5000 or timeout > 4000
    if (normalizedSyslogMessageList.size() == 5000) {
        CorrelationProcessThread correlationProcessThread = applicationContext
                .getBean(CorrelationProcessThread.class);
        List<NormalizedSyslogMessage> clonedNormalizedSyslogMessages = deepCopy(normalizedSyslogMessageList);
        correlationProcessThread.setNormalizedMessage(clonedNormalizedSyslogMessages);
        taskExecutor.execute(correlationProcessThread);
        normalizedSyslogMessageList.clear();
    }
    consumer.commitSync();
}
I suppose there are a couple of things to address here.
First, offsets being out of sync - this is probably caused by one of the following:
If the number of messages fetched by poll() does not take the size of the normalizedSyslogMessageList to 5000, the commitSync() will still run regardless of whether the current batch of messages has been processed or not.
If, however, the size does reach 5000: because the processing is done in a separate thread, the main consumer thread will never know whether that processing has completed, but commitSync() will run anyway, committing the offsets.
The second part (which I believe is your actual concern/question) - whether or not this is the best way to handle this. I would say no, because of point 2 above, i.e. the correlationProcessThread is invoked in a fire-and-forget manner, so you wouldn't know when the processing of those messages has completed in order to be able to safely commit the offsets.
Here's a statement from "Kafka: The Definitive Guide":
It is important to remember that commitSync() will commit the latest
offset returned by poll(), so make sure you call commitSync() after
you are done processing all the records in the collection, or you risk
missing messages.
Point number 2 especially will be hard to fix because:
Supplying the consumer reference to the threads in the pool would basically mean multiple threads trying to access one consumer instance (this post mentions that approach and its issues - mainly, the Kafka consumer not being thread-safe).
Even if you try to get the status of the processing thread before committing offsets by using the submit() method instead of execute() on the ExecutorService, you would then need to make a blocking get() call on the returned Future. So you may not get much benefit from processing in multiple threads.
Options for fixing this
As I'm not aware of your context and exact requirements, I can only suggest conceptual ideas, but it might be worth considering:
splitting the consumer instances according to the processing they need to do and carrying out the processing in the same thread, or
exploring the possibility of maintaining the offsets of the messages in a map (as and when they are processed) and then committing those specific offsets (this method); a minimal sketch of this idea follows below
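For the second option, a hedged sketch of committing specific offsets from the consumer thread. The processedOffsets map (partition -> last fully processed offset) is an assumed structure that your worker threads would populate; only the consumer thread touches the consumer, since KafkaConsumer is not thread-safe.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Called from the consumer thread; processedOffsets is filled in by the worker threads.
static void commitProcessed(KafkaConsumer<String, ?> consumer,
                            Map<TopicPartition, Long> processedOffsets) {
    Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
    for (Map.Entry<TopicPartition, Long> entry : processedOffsets.entrySet()) {
        // Commit the offset of the *next* record to read, i.e. last processed + 1.
        toCommit.put(entry.getKey(), new OffsetAndMetadata(entry.getValue() + 1));
    }
    if (!toCommit.isEmpty()) {
        consumer.commitSync(toCommit);
    }
}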
I hope this helps.
Totally agree with what Lalit mentioned. I'm currently going through the exact same situation, where my processing happens in separate threads and the consumer and producer run in different threads. I've used a ConcurrentHashMap shared between the producer and consumer threads that records whether a particular offset has been processed or not.
ConcurrentHashMap<OffsetAndMetadata, Boolean>
On the consumer side, a local LinkedHashMap can be used to maintain the order in which records are consumed from each topic/partition, and the manual commit is done in the consumer thread itself.
LinkedHashMap<OffsetAndMetadata, TopicPartition>
You can refer to the following link if your processing thread needs to maintain the consumed record order.
Transactions in Kafka
One point to mention about my approach: there is a chance that data will be duplicated in case of failures.
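A rough sketch of how those two maps could work together in the consumer thread (all names are illustrative; the worker threads flip the flag in the ConcurrentHashMap once a record is fully processed, and the stored OffsetAndMetadata is assumed to already be the offset to commit, i.e. processed offset + 1):

import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Consumer-thread helper: walk the records in consumption order and commit only the
// contiguous prefix that the workers have already marked as processed.
static void commitContiguous(KafkaConsumer<String, ?> consumer,
                             LinkedHashMap<OffsetAndMetadata, TopicPartition> consumedInOrder,
                             ConcurrentHashMap<OffsetAndMetadata, Boolean> processed) {
    Iterator<Map.Entry<OffsetAndMetadata, TopicPartition>> it = consumedInOrder.entrySet().iterator();
    while (it.hasNext()) {
        Map.Entry<OffsetAndMetadata, TopicPartition> entry = it.next();
        if (!Boolean.TRUE.equals(processed.get(entry.getKey()))) {
            break; // an earlier record is still in flight; don't commit past it
        }
        consumer.commitSync(Collections.singletonMap(entry.getValue(), entry.getKey()));
        processed.remove(entry.getKey());
        it.remove();
    }
}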
I have a huge number of messages coming from CSV files that then get sent to a rate-limited API. I'm using a queue channel backed by a database channel message store to keep the messages durable while processing. I want to get as close to the rate limit as possible, so I need to be sending messages to the API across multiple threads.
What I had in my head of how it should work is that something reads the DB, sees what messages are available, and then delegates each message to one of the threads to be processed in a transaction.
But I haven't been able to do that. What I've had to do is have a transactional poller with a thread pool of N threads, a fixed rate of say 5 seconds, and a max-messages-per-poll of 10 (somewhat more than could be processed in 5 seconds). This works OK, but has problems when there are not many messages waiting (e.g. if there were 10 messages, they would all be processed by a single thread). That isn't going to be a problem in practice, because we will have thousands of messages, but it seems conceptually more complex than how I thought it should work.
I might not have explained this very well, but it seems like it might be a common problem when messages come in fast but go out more slowly.
Your solution is essentially correct, but be careful not to shift messages into an Executor, because that way you jump out of the transaction boundaries.
The fact that you have 10 messages processed in the same thread is an implementation detail, and it looks like this:
AbstractPollingEndpoint.this.taskExecutor.execute(() -> {
    int count = 0;
    while (AbstractPollingEndpoint.this.initialized
            && (AbstractPollingEndpoint.this.maxMessagesPerPoll <= 0
                || count < AbstractPollingEndpoint.this.maxMessagesPerPoll)) {
        try {
            if (!Poller.this.pollingTask.call()) {
                break;
            }
            count++;
        }
So, we poll messages up to maxMessagesPerPoll in the same thread.
To make it more parallel, while still keeping the transaction and not losing messages, you need to consider using fixedRate:
/**
* Specify whether the periodic interval should be measured between the
* scheduled start times rather than between actual completion times.
* The latter, "fixed delay" behavior, is the default.
*/
public void setFixedRate(boolean fixedRate)
And increase the number of threads used by the TaskScheduler for the polling.
You can do that by declaring a ThreadPoolTaskScheduler bean named IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME to override the default one with a pool size of, say, 10. Or use the global properties to just override the pool size of that default TaskScheduler: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/configuration.html#global-properties
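A minimal sketch of the bean override (assuming Java config; the pool size of 10 is just an example):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.context.IntegrationContextUtils;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class SchedulerConfig {

    @Bean(name = IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME)
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(10); // more threads so several polls can run concurrently
        return scheduler;
    }
}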
I am trying to monitor consumer offsets of a given group with the Java API. I create one additional consumer which does not subscribe to any topic, but just calls consumer.committed(topic) to get the offset information. This kind of works, but:
For testing I use only one real consumer (i.e. one which does subscribe to the topic). When I shut it down using close() and later restart one, it takes 27 seconds between subscribe and the first consumption of messages, despite the fact that I use poll(1000).
I am guessing this has to do with the rebalancing possibly being confused by the non-subscribing consumer. Could that be possible? Is there a better way to monitor offsets with the Java API (I know about the command line tool, but need to use the API).
There are different ways to inspect offsets from topics, depending on what you want them for. Besides committed(), which you described above, here are two more options:
1) If you want to know the offset from which the consumer will start fetching data from the broker the next time the thread(s) start(s), use "position":
long offsetPosition;
TopicPartition tPartition = new TopicPartition(topic, partitionToReview);
offsetPosition = kafkaConsumer.position(tPartition);
System.out.println("offset of the next record to fetch is : " + offsetPosition);
2) Calling the "offset()" method on each ConsumerRecord object after performing a poll on the kafkaConsumer:
Iterator<ConsumerRecord<byte[], byte[]>> it = kafkaConsumer.poll(1000).iterator();
while (it.hasNext()) {
    ConsumerRecord<byte[], byte[]> record = it.next();
    System.out.println("offset : " + record.offset());
}
Found it: the monitoring consumer added to the confusion, but was not the culprit. In the end it is easy to understand, though slightly unexpected (for me at least):
The default for session.timeout.ms is 30 seconds. When a consumer disappears, it takes up to 30 seconds before it is declared dead and the work is rebalanced. For testing, I had stopped the single consumer I had, waited three seconds, and started a new one. That one then took 27 seconds before it started consuming, filling out the 30-second timeout.
I would have expected that a single, lone consumer starting up would not wait for the timeout to expire, but would start to "rebalance", i.e. grab the work, immediately. It seems, though, that the timeout has to expire before work is rebalanced, even if there is only one consumer.
To get through testing faster, I changed the configuration to use a lower session.timeout.ms on the consumer as well as a lower group.min.session.timeout.ms on the broker.
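For illustration, the kind of values involved (these numbers are examples, not recommendations):

// Consumer side: a lower session timeout so a dead consumer is detected faster in tests.
Properties consumerProps = new Properties();
consumerProps.put("session.timeout.ms", "6000");

// Broker side (server.properties) must allow that lower value:
// group.min.session.timeout.ms=6000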
To conclude: using a consumer that does not subscribe to any topic for monitoring the offsets works just fine and does not seem to interfere with the rebalancing process.