Kafka enable.auto.commit false in combination with commitSync() - java

I have a scenario where enable.auto.commit is set to false. For every poll(), the records obtained are offloaded to a ThreadPoolExecutor, and the commitSync() happens outside of that context. But I doubt this is the right way to handle it, as my thread pool may still be processing a few messages while I commit the offsets.
while (true) {
    ConsumerRecords<String, NormalizedSyslogMessage> records = consumer.poll(100);
    Date startTime = Calendar.getInstance().getTime();
    for (ConsumerRecord<String, NormalizedSyslogMessage> record : records) {
        NormalizedSyslogMessage normalizedMessage = record.value();
        normalizedSyslogMessageList.add(normalizedMessage);
    }
    Date endTime = Calendar.getInstance().getTime();
    long durationInMilliSec = endTime.getTime() - startTime.getTime();
    // execute process thread on message size equal to 5000 or timeout > 4000
    if (normalizedSyslogMessageList.size() == 5000) {
        CorrelationProcessThread correlationProcessThread = applicationContext
                .getBean(CorrelationProcessThread.class);
        List<NormalizedSyslogMessage> clonedNormalizedSyslogMessages = deepCopy(normalizedSyslogMessageList);
        correlationProcessThread.setNormalizedMessage(clonedNormalizedSyslogMessages);
        taskExecutor.execute(correlationProcessThread);
        normalizedSyslogMessageList.clear();
    }
    consumer.commitSync();
}

I suppose there are a couple of things to address here.
First, offsets being out of sync. This is probably caused by one of the following:
If the number of messages fetched by poll() does not take the size of normalizedSyslogMessageList to 5000, commitSync() will still run, regardless of whether the current batch of messages has been processed or not.
If the size does reach 5000, the processing happens in a separate thread, so the main consumer thread never knows whether it has completed, but commitSync() runs anyway, committing the offsets.
The second part (which I believe is your actual concern/question): whether or not this is the best way to handle it. I would say no, because of point number 2 above, i.e. correlationProcessThread is invoked in a fire-and-forget manner here, so you wouldn't know when the processing of those messages has completed in order to safely commit the offsets.
Here's a statement from "Kafka: The Definitive Guide" -
It is important to remember that commitSync() will commit the latest
offset returned by poll(), so make sure you call commitSync() after
you are done processing all the records in the collection, or you risk
missing messages.
Point number 2 especially will be hard to fix because:
Supplying the consumer reference to the threads in the pool basically means multiple threads trying to access one consumer instance (this post mentions that approach and its issues - mainly, the Kafka consumer NOT being thread-safe).
Even if you try to get the status of the processing thread before committing offsets, by using the submit() method instead of execute() on the ExecutorService, you would then need a blocking get() call on the returned Future before committing (see the sketch below). So you may not get much benefit from processing in multiple threads.
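For illustration, a minimal sketch of that blocking wait, reusing the variable names from the question (the Future-based wiring is an assumption):

// submit() instead of execute(), then a blocking wait before committing
Future<?> result = taskExecutor.submit(correlationProcessThread);
result.get();          // blocks until the processing thread finishes
consumer.commitSync(); // offsets are now safe to commit, but parallelism is mostly lost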
Options for fixing this
As I'm not aware of your context and the exact requirements, I can only suggest conceptual ideas, but it might be worth considering:
breaking up the consumer instances as per the processing they need to do and carrying out the processing in the same thread, or
exploring the possibility of maintaining the offsets of the messages in a map (as and when they are processed) and then committing those specific offsets (this method) - see the sketch below
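A minimal sketch of that second idea, assuming a hypothetical processedRecords collection that your workers report completed records into:

// commit only the offsets of records that are known to be processed
Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>();
for (ConsumerRecord<String, NormalizedSyslogMessage> record : processedRecords) {
    offsetsToCommit.put(new TopicPartition(record.topic(), record.partition()),
            new OffsetAndMetadata(record.offset() + 1)); // commit the next offset to read
}
consumer.commitSync(offsetsToCommit);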
I hope this helps.

Totally agree with what Lalit has mentioned. I'm currently going through the exact same situation, where my processing happens in separate threads and the consumer and producer run in different threads. I've used a ConcurrentHashMap, shared between the producer and consumer threads, which records whether a particular offset has been processed or not.
ConcurrentHashMap<OffsetAndMetadata, Boolean>
On the consumer side, a local LinkedHashMap can be used to maintain the order in which the records are consumed from the topic/partition and to do the manual commit in the consumer thread itself.
LinkedHashMap<OffsetAndMetadata, TopicPartition>
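A rough sketch of that bookkeeping, assuming a single consumer thread handing records to a worker pool (executor and process() are illustrative names, not from my actual code):

// shared between the consumer thread and the workers: offset -> processed flag
ConcurrentHashMap<OffsetAndMetadata, Boolean> processed = new ConcurrentHashMap<>();
// consumer-local: preserves the order in which records were consumed
LinkedHashMap<OffsetAndMetadata, TopicPartition> pending = new LinkedHashMap<>();

for (ConsumerRecord<String, String> record : consumer.poll(100)) {
    OffsetAndMetadata om = new OffsetAndMetadata(record.offset() + 1);
    pending.put(om, new TopicPartition(record.topic(), record.partition()));
    processed.put(om, Boolean.FALSE);
    executor.submit(() -> {
        process(record);                 // hypothetical business logic
        processed.put(om, Boolean.TRUE); // mark done for the consumer thread
    });
}

// back on the consumer thread: commit the contiguous processed prefix, in order
Iterator<Map.Entry<OffsetAndMetadata, TopicPartition>> it = pending.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<OffsetAndMetadata, TopicPartition> entry = it.next();
    if (!processed.get(entry.getKey())) {
        break; // stop at the first record that is still in flight
    }
    consumer.commitSync(Collections.singletonMap(entry.getValue(), entry.getKey()));
    processed.remove(entry.getKey());
    it.remove();
}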
You can refer to the following link if your processing thread needs to maintain the order of the consumed records.
Transactions in Kafka
A point to mention about my approach: there is a chance that data will be duplicated in case of failures.

Spring Integration send messages to Executor in transaction

I have a huge number of messages coming from CSV files, which then get sent to a rate-limited API. I'm using a queue channel backed by a database channel message store to make the messages durable while processing. I want to get as close to the rate limit as possible, so I need to send messages to the API across multiple threads.
What I had in my head of how it should work is: something reads the DB, sees what messages are available, and then delegates each message to one of the threads to be processed in a transaction.
But I haven't been able to do that. What I've had to do instead is have a transactional poller with a thread pool of N threads, a fixed rate of, say, 5 seconds, and a max of 10 messages per poll (more than could be processed in 5 seconds). This works OK, but has problems when there are not many messages waiting (e.g. if there were only 10 messages, they would all be processed by a single thread). That isn't going to be a problem in practice, because we will have thousands of messages, but it seems conceptually more complex than how I thought it should work.
I might not have explained this very well, but it seems like what might be a common problem when messages come in fast but go out slower.
Your solution is really correct, but you should not shift messages into an Executor, since that way you jump out of the transaction boundaries.
The fact that you have 10 messages processed in the same thread is exactly an implementation detail, and it looks like this:
AbstractPollingEndpoint.this.taskExecutor.execute(() -> {
    int count = 0;
    while (AbstractPollingEndpoint.this.initialized
            && (AbstractPollingEndpoint.this.maxMessagesPerPoll <= 0
                || count < AbstractPollingEndpoint.this.maxMessagesPerPoll)) {
        try {
            if (!Poller.this.pollingTask.call()) {
                break;
            }
            count++;
        }
        // ...
So, we poll messages up to maxMessagesPerPoll in the same thread.
To make it really more parallel, while still keeping transactions and not losing messages, you need to consider using fixedRate:
/**
* Specify whether the periodic interval should be measured between the
* scheduled start times rather than between actual completion times.
* The latter, "fixed delay" behavior, is the default.
*/
public void setFixedRate(boolean fixedRate)
And increase the number of threads used by the TaskScheduler for the polling.
You can do that by declaring a ThreadPoolTaskScheduler bean with the name IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME to override the default one, which has a pool size of 10. Or use global properties to just override the pool size of that default TaskScheduler: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/configuration.html#global-properties
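A minimal sketch of such an override, assuming Spring Java config (the pool size of 20 is an arbitrary example):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.context.IntegrationContextUtils;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class SchedulerConfig {

    @Bean(name = IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME)
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(20); // larger than the default of 10, so more pollers can run in parallel
        return scheduler;
    }
}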

Keeping consumer alive using Kafka

I'm looking for a "low-cost" method to keep a consumer alive when I'm not actively polling. I.e., still processing records from the last poll, and I don't want the consumer connection to time out.
Some functions that look promising:
wakeup
commitAsync
In each case this would be non-standard usage of the API, so I'm not sure it would be a reasonable / rational approach.
RE: setting the connection timeout higher - I want the consumer to time out if it gets wedged. My question pertains to the case where I've fetched a block of records and separate threads are working through them.
The documentation seems to suggest you should call pause() and then keep actively polling. If you call poll() while paused, nothing will be returned.
For use cases where message processing time varies unpredictably,
neither of these options may be sufficient. The recommended way to
handle these cases is to move message processing to another thread,
which allows the consumer to continue calling poll while the processor
is still working. Some care must be taken to ensure that committed
offsets do not get ahead of the actual position. Typically, you must
disable automatic commits and manually commit processed offsets for
records only after the thread has finished handling them (depending on
the delivery semantics you need). Note also that you will need to
pause the partition so that no new records are received from poll
until after thread has finished handling those previously returned.
The documentation for pause() confirms this:
Suspend fetching from the requested partitions. Future calls to
poll(long) will not return any records from these partitions until
they have been resumed using resume(Collection). Note that this method
does not affect partition subscription. In particular, it does not
cause a group rebalance when automatic assignment is used.
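Putting the two quotes together, a minimal sketch of the pattern might look like this (executor and worker are hypothetical names; error handling is omitted):

Set<TopicPartition> assignment = consumer.assignment();
ConsumerRecords<String, String> records = consumer.poll(100);
if (!records.isEmpty()) {
    consumer.pause(assignment);             // stop fetching new records
    Future<?> done = executor.submit(() -> worker.process(records));
    while (!done.isDone()) {
        consumer.poll(100);                 // keeps the consumer alive; returns nothing while paused
    }
    consumer.resume(assignment);
    consumer.commitSync();                  // commit only after processing has finished
}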
Since Kafka 0.10.1, the consumer no longer heartbeats during poll() calls. It runs the heartbeat in a separate background thread. So if that's your version, there is nothing else to do. See KIP-62.

Kafka KStreams - processing timeouts

I am attempting to use <KStream>.process() with a TimeWindows.of("name", 30000) to batch up some KTable values and send them on. It seems that 30 seconds exceeds the consumer timeout interval, after which Kafka considers the consumer to be defunct and releases the partition.
I've tried increasing the poll and commit frequency to avoid this:
config.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "5000");
config.put(StreamsConfig.POLL_MS_CONFIG, "5000");
Unfortunately these errors are still occurring:
(lots of these)
ERROR o.a.k.s.p.internals.RecordCollector - Error sending record to topic kafka_test1-write_aggregate2-changelog
org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for kafka_test1-write_aggregate2-changelog-0
Followed by these:
INFO o.a.k.c.c.i.AbstractCoordinator - Marking the coordinator 12.34.56.7:9092 (id: 2147483547 rack: null) dead for group kafka_test1
WARN o.a.k.s.p.internals.StreamThread - Failed to commit StreamTask #0_0 in thread [StreamThread-1]:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:578)
Clearly I need to be sending heartbeats back to the server more often. How?
My topology is:
KStreamBuilder kStreamBuilder = new KStreamBuilder();
KStream<String, String> lines = kStreamBuilder.stream(TOPIC);
KTable<Windowed<String>, String> kt = lines.aggregateByKey(
new DBAggregateInit(),
new DBAggregate(),
TimeWindows.of("write_aggregate2", 30000));
DBProcessorSupplier dbProcessorSupplier = new DBProcessorSupplier();
kt.toStream().process(dbProcessorSupplier);
KafkaStreams kafkaStreams = new KafkaStreams(kStreamBuilder, streamsConfig);
kafkaStreams.start();
The KTable is grouping values by key every 30 seconds. In Processor.init() I call context.schedule(30000).
DBProcessorSupplier provides an instance of DBProcessor. This is an implementation of AbstractProcessor where all the overrides have been provided. All they do is LOG so I know when each is being hit.
It's a pretty simple topology but it's clear I'm missing a step somewhere.
Edit:
I get that I can adjust this on the server side, but I'm hoping there is a client-side solution. I like the notion of partitions being made available pretty quickly when a client exits/dies.
Edit:
In an attempt to simplify the problem, I removed the aggregation step from the graph. It's now just consumer->processor(). (If I send the consumer directly to .print() it works very quickly, so I know it's OK.) (Similarly, if I output the aggregation (KTable) via .print() it seems OK too.)
What I found was that .process() - which should be calling .punctuate() every 30 seconds - is actually blocking for variable lengths of time and outputting somewhat randomly (if at all).
Main program
Debug output
Processor Supplier
Processor
Further:
I set the debug level to 'debug' and reran. I'm seeing lots of messages:
DEBUG o.a.k.s.p.internals.StreamTask - Start processing one record [ConsumerRecord <info>
but a breakpoint in the .punctuate() function isn't getting hit. So it's doing lots of work but not giving me a chance to use it.
A few clarifications:
StreamsConfig.COMMIT_INTERVAL_MS_CONFIG is a lower bound on the commit interval, i.e., after a commit, the next commit happens no earlier than this much time later. Basically, Kafka Streams tries to commit as soon as possible after this time has passed, but there is no guarantee whatsoever of how long it will actually take until the next commit.
StreamsConfig.POLL_MS_CONFIG is used for the internal KafkaConsumer#poll() call, to specify the maximum blocking time of the poll() call.
Thus, neither value helps you heartbeat more often.
Kafka Streams follows a "depth-first" strategy when processing records. This means that after a poll(), all operators of the topology are executed for each record. Let's assume you have three consecutive maps: all three maps will be called for the first record before the next/second record gets processed.
Thus, the next poll() call is made only after all records of the first poll() have been fully processed. If you want to heartbeat more often, you need to make sure that a single poll() call fetches fewer records, so that processing all of them takes less time and the next poll() is triggered earlier.
You can use configuration parameters of KafkaConsumer, which you can specify via StreamsConfig, to get this done (see https://kafka.apache.org/documentation.html#consumerconfigs and the example below):
streamConfig.put(ConsumerConfig.XXX, VALUE);
max.poll.records: if you decrease this value, fewer records will be polled
session.timeout.ms: if you increase this value, there is more time for processing data (adding this for completeness, because it is actually a client setting and not a server/broker-side configuration -- even if you are aware of this solution and do not like it :))
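For illustration, a concrete version of the two settings above might look like this (the values are arbitrary examples, not recommendations):

Properties streamsConfig = new Properties();
streamsConfig.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");     // poll fewer records at a time
streamsConfig.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "60000"); // allow more time for processing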
EDIT
As of Kafka 0.10.1 it is possible (and recommended) to prefix consumer and producer configs within the streams config. This avoids parameter conflicts, as some parameter names are used for both consumer and producer and cannot be distinguished otherwise (and would be applied to consumer and producer at the same time).
To prefix a parameter you can use StreamsConfig#consumerPrefix() or StreamsConfig#producerPrefix(), respectively. For example:
streamsConfig.put(StreamsConfig.consumerPrefix(ConsumerConfig.PARAMETER), VALUE);
One more thing to add: the scenario described in this question is a known issue, and there is already KIP-62, which introduces a background thread for KafkaConsumer that sends heartbeats, thus decoupling heartbeats from poll() calls. Kafka Streams will leverage this new feature in upcoming releases.

Kafka 0.9 new consumer api --- how to just watch consumer offsets

I am trying to monitor the consumer offsets of a given group with the Java API. I create one additional consumer which does not subscribe to any topic, but just calls consumer.committed(topicPartition) to get the offset information. This kind of works, but:
For testing I use only one real consumer (i.e. one which does subscribe to the topic). When I shut it down using close() and later restart it, it takes 27 seconds between subscribe() and the first consumption of messages, despite the fact that I use poll(1000).
I am guessing this has to do with the rebalancing possibly being confused by the non-subscribing consumer. Could that be possible? Is there a better way to monitor offsets with the Java API (I know about the command line tool, but need to use the API).
There are different ways to inspect offsets from topics, depending on what you need them for. Besides committed(), which you described above, here are two more options:
1) If you want to know the offset from which the consumer will start fetching data from the broker the next time the thread(s) start, use position():
TopicPartition tPartition = new TopicPartition(topic, partitionToReview);
long offsetPosition = kafkaConsumer.position(tPartition);
System.out.println("offset of the next record to fetch is: " + offsetPosition);
2) Calling the offset() method on each ConsumerRecord, after performing a poll on the KafkaConsumer:
Iterator<ConsumerRecord<byte[], byte[]>> it = kafkaConsumer.poll(1000).iterator();
while (it.hasNext()) {
    ConsumerRecord<byte[], byte[]> record = it.next();
    System.out.println("offset: " + record.offset());
}
Found it: the monitoring consumer added to the confusion but was not the culprit. In the end it is easy to understand, though slightly unexpected (for me at least):
The default for session.timeout.ms is 30 seconds. When a consumer disappears, it takes up to 30 seconds before it is declared dead and its work is rebalanced. For testing, I had stopped the single consumer I had, waited three seconds, and started a new one. That one then took 27 seconds before it started consuming, filling up the 30-second timeout.
I would have expected that a single, lone consumer starting up would not wait for the timeout to expire, but would start to "rebalance", i.e. grab the work, immediately. It seems, though, that the timeout has to expire before work is rebalanced, even if there is only one consumer.
For the testing to get through faster, I changed the configuration to use a lower session.timeout.ms for the consumer as well as group.min.session.timeout.ms for the broker.
To conclude: using a consumer that does not subscribe to any topic for monitoring the offsets works just fine and does not seem to interfere with the rebalancing process.
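For reference, a minimal sketch of such a non-subscribing monitoring consumer (topic, partition, and group names here are hypothetical):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "group-to-monitor"); // the group whose offsets we want to watch
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
KafkaConsumer<byte[], byte[]> monitor = new KafkaConsumer<>(props);
OffsetAndMetadata committed = monitor.committed(new TopicPartition("my-topic", 0));
System.out.println("committed offset: " + (committed == null ? "none" : committed.offset()));
monitor.close();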

Kafka - Delayed Queue implementation using high level consumer

I want to implement a delayed consumer using the high-level consumer API.
main idea:
produce messages by key (each msg contains a creation timestamp); this makes sure that each partition has messages ordered by produced time.
auto.commit.enable=false (will explicitly commit after each message is processed)
consume a message
check message timestamp and check if enough time has passed
process message (this operation will never fail)
commit 1 offset
while (it.hasNext()) {
    val msg = it.next().message()
    // checks timestamp in msg to see if the delay period is exceeded
    while (!delayedPeriodPassed(msg)) {
        waitSomeTime() // Thread.sleep or something....
    }
    // certain that the msg was delayed and can now be handled
    Try { process(msg) } // the msg processing will never fail the consumer
    consumer.commitOffsets // commit each msg
}
some concerns about this implementation:
committing each offset might slow ZK down
can consumer.commitOffsets throw an exception? If yes, I will consume the same message twice (can be solved with idempotent messages)
waiting a long time without committing the offset is a problem; for example, if the delay period is 24 hours, the consumer will get the next message from the iterator, sleep for 24 hours, process, and commit (ZK session timeout?)
how can the ZK session be kept alive without committing new offsets? (setting a high zookeeper.session.timeout.ms may result in a dead consumer without it being recognized)
any other problems I'm missing?
Thanks!
One way to go about this would be to use a different topic where you push all messages that are to be delayed. If all delayed messages should be processed after the same time delay, this will be fairly straightforward:
while (it.hasNext()) {
    val message = it.next().message()
    if (shouldBeDelayed(message)) {
        val delay = 24 hours
        val delayTo = getCurrentTime() + delay
        putMessageOnDelayedQueue(message, delay, delayTo)
    } else {
        process(message)
    }
    consumer.commitOffset()
}
All regular messages will now be processed as soon as possible, while those that need a delay get put on another topic.
The nice thing is that we know that the message at the head of the delayed topic is the one that should be processed first, since its delayTo value will be the smallest. Therefore we can set up another consumer that reads the head message, checks if the timestamp is in the past and, if so, processes the message and commits the offset. If not, it does not commit the offset and instead just sleeps until that time:
while (it.hasNext()) {
    val delayedMessage = it.peek().message()
    if (delayedMessage.delayTo < getCurrentTime()) {
        val readMessage = it.next().message
        process(readMessage.originalMessage)
        consumer.commitOffset()
    } else {
        delayProcessingUntil(delayedMessage.delayTo)
    }
}
In case there are different delay times, you could partition the topic on the delay (e.g. 24 hours, 12 hours, 6 hours). If the delay time is more dynamic than that, it becomes a bit more complex. You could solve it by introducing two delay topics. Read all messages off delay topic A and process all those whose delayTo value is in the past. Among the others, find the one with the closest delayTo, put them all on topic B, and sleep until that closest one should be processed. Then do it all in reverse, i.e. process messages from topic B and put the ones that shouldn't yet be processed back on topic A.
To answer your specific questions (some have been addressed in the comments to your question)
Commit each offset might slow ZK down
You could consider switching to storing the offsets in Kafka (a feature available from 0.8.2; check out the offsets.storage property in the consumer config)
Can consumer.commitOffsets throw an exception? if yes, I will consume the same message twice (can solve with idempotent messages)
I believe it can, if it is not able to communicate with the offset storage, for instance. Using idempotent messages solves this problem though, as you say.
Waiting a long time without committing the offset, for example when the delay period is 24 hours: get the next message from the iterator, sleep for 24 hours, process, and commit (ZK session timeout?)
This won't be a problem with the solution outlined above, unless the processing of the message itself takes longer than the session timeout.
How can the ZK session be kept alive without committing new offsets? (Setting a high zookeeper.session.timeout.ms may result in a dead consumer without it being recognized.)
Again with the above you shouldn't need to set a long session timeout.
Any other problems I'm missing?
There always are ;)
Use Tibco EMS or another JMS queue. They have retry delays built in. Kafka may not be the right design choice for what you are doing.
I would suggest another route in your case.
It doesn't make sense to address the waiting time in the consumer's main thread. This is an anti-pattern in how queues are used. Conceptually, you need to process the messages as fast as possible and keep the queue at a low loading factor.
Instead, I would use a scheduler that schedules a job for each message you need to delay. This way you can process the queue and create asynchronous jobs that will be triggered at predefined points in time.
The downside of this technique is that it is sensitive to the state of the JVM that holds the scheduled jobs in memory. If that JVM fails, you lose the scheduled jobs, and you don't know whether the task was executed or not.
There are scheduler implementations, though, that can be configured to run in a cluster environment, thus keeping you safe from JVM crashes.
Take a look at this Java scheduling framework: http://www.quartz-scheduler.org/
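As an illustration only, a minimal sketch of scheduling one Quartz job per delayed message (MessageJob is a hypothetical org.quartz.Job implementation; messagePayload, creationTimestamp, and DELAY_MS are assumed to come from the consumed record):

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();
JobDetail job = JobBuilder.newJob(MessageJob.class)
        .usingJobData("payload", messagePayload)         // hand the message content to the job
        .build();
Trigger trigger = TriggerBuilder.newTrigger()
        .startAt(new Date(creationTimestamp + DELAY_MS)) // fire once, after the delay has elapsed
        .build();
scheduler.scheduleJob(job, trigger);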
We had the same issue during one of our tasks. Although it was eventually solved without using delayed queues, when exploring the solution the best approach we found was to use the pause and resume functionality provided by the KafkaConsumer API. This approach and its motivation are perfectly described here: https://medium.com/naukri-engineering/retry-mechanism-and-delay-queues-in-apache-kafka-528a6524f722
A keyed list on a schedule, or its Redis alternative, may be the best approach.
