What is the impact of delaying a Kafka manual offset commit? - Java

We want to manually commit Kafka offsets to control data-loss events. But we might delay the manual commit, as we want to do it only after persisting to the datasource.
I would like to learn how slowing down an offset commit impacts Kafka's topics/parallelism/partitions, if at all.

When you consume from one topic, if that consumer belongs to one consumer group, Kafka will make sure each partition is consumed by only one consumer in the group. So if you commit manually it will not affect other consumers, because they are consuming from other partitions.
But if you compare consumers of the same partition with enable.auto.commit=false and enable.auto.commit=true, the throughput of the auto-commit-enabled consumer is relatively high. And if you don't need confirmation of your commits, use commitAsync; it gives better throughput than commitSync.
Generally, you call the commit API when you are finished processing all the messages in a batch, and don't poll for new messages until the last offset in the batch is committed. This approach can affect throughput and latency, as can the number of messages returned when polling, so you can set up your application to commit less frequently.
But if you commit manually, there can be duplicate consumed messages when the consumer restarts or rebalances. You consume a message and write it to your DB, and only after that do you commit the offset to Kafka. If the consumer rebalances or restarts before the commit, that message will not be committed and will be re-consumed by another consumer in the same group.
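As a rough sketch of that commit-after-persist pattern (assuming enable.auto.commit=false; the topic, group id and saveToDb() persistence call are placeholders):

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "db-writer");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("events"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
            saveToDb(record.value());        // persist first (hypothetical call)
        }
        if (!records.isEmpty()) {
            consumer.commitSync();           // commit only after persisting; a rebalance
                                             // before this line means re-delivery (duplicates)
        }
    }
}

If the broker confirmation is not needed, commitAsync() can be used in place of commitSync() for better throughput, as noted above.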
For more information, please refer to
consumer-tuning
how-to-commit-offsets

Related

How to control the number of messages emitted by Apache Kafka per specific time period?

I am new to Apache Kafka, and I am trying to configure Apache Kafka so that it receives messages from the producer as fast as possible, but only sends a configured number of messages to the consumer per specific time period.
In other words, how do I configure Apache Kafka to send only, for example, 50 messages per 30 seconds
to the consumer, regardless of the number of messages, and in the next 30 seconds take another 50 messages from the cached messages, and so on?
If you have control over the consumer
You could use the max.poll.records property to limit the maximum number of records returned per poll() call. Then you only need to ensure that poll() is called once every 30 seconds.
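As a hedged sketch (the group id and bootstrap address are made up; the rest are standard consumer properties):

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "throttled-group");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");   // at most 50 records per poll()
// plus the usual key/value deserializer settings

The 30-second spacing between poll() calls is then handled by application code, as in the loop shown further below.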
In general you can take a look at all available configuration properties here.
If you cannot control the consumer
Then the only option for you is to produce messages at the rate you need: write at most 50 messages per 30 seconds. There are no configuration options for this; only your application logic can achieve it.
Update - how to ensure poll() is called every 30 seconds
The simplest way is to:
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    // .. do your stuff with the records
    Thread.sleep(30000);   // wait 30 seconds before polling again
}
You can make things more sophisticated by measuring the processing time (i.e. from right after the poll() call up to the Thread.sleep()) so that the total cycle never exceeds 30 seconds.
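For example, a small variation on the loop above:

long start = System.currentTimeMillis();
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
// .. do your stuff with the records
long elapsed = System.currentTimeMillis() - start;
Thread.sleep(Math.max(0, 30_000 - elapsed));   // top the cycle up to 30 seconds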
The problem is that the producer doesn't really send messages to the consumer. There is a persistent Kafka topic in between, where the producer places its messages, and it really doesn't care whether there is any consumer on the other side. The same holds from the consumer perspective: it just subscribes to data from the topic and doesn't care whether there is some producer on the other side. So thinking about back-pressure from the consumer down to the producer, when there is messaging middleware in between, is the wrong direction.
On the other hand, it is not clear how those consumed messages may impact your third-party service. The point is that the Kafka consumer is single-threaded per partition. So all the messages from one partition are going to be (must be) processed one by one in the same thread. This way you cannot send more than one message to your service at a time: the next one can be sent only after the previous one has been answered. So think about it: how is it even possible for your consumer application to exceed the rate limit?
However, if you have enough partitions and high concurrency on the consumer side, you really may end up with several requests to your service in parallel from different threads. For this purpose I would suggest taking a look at the Rate Limiter pattern. This library provides a good implementation: https://resilience4j.readme.io/docs/ratelimiter. It is much better to keep messages in the topic than to try to limit the producer somehow.
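A minimal sketch of that pattern with classes from io.github.resilience4j.ratelimiter (the limiter name and callThirdPartyService() are placeholders):

RateLimiterConfig config = RateLimiterConfig.custom()
        .limitForPeriod(50)                          // at most 50 calls ...
        .limitRefreshPeriod(Duration.ofSeconds(30))  // ... per 30-second window
        .timeoutDuration(Duration.ofMinutes(1))      // how long a thread may wait for a permit
        .build();
RateLimiter rateLimiter = RateLimiter.of("third-party-service", config);

// called from the record-processing code, possibly from several consumer threads
void forward(ConsumerRecord<String, String> record) {
    rateLimiter.executeRunnable(() -> callThirdPartyService(record));   // waits for a permit, or fails after the timeout
}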
To conclude: even if the consumer side is not your project, it is better to discuss with that team how to improve their consumer. You did your part well: the producer sends messages to the Kafka topic. What else can you do over here?
Interesting use case, and I'm not sure why you need it, but there are two possible solutions:
1. To protect the cluster, you could use quotas, not for the number of messages but for bandwidth throughput: https://kafka.apache.org/documentation/#design_quotas
2. If you need an exact number of messages per time frame, you could put a buffering service (rate limiter) in between that consumes, pauses, and republishes the messages to the topic your end consumer reads. The rate limiter could consume the next 50 messages and then pause until the minute passes. This will increase the space used on your cluster because of the duplicated messages. You also need to be careful about how you pause the consumer: heartbeats need to be sent, or else your consumer will rebalance continuously, i.e. you can't just sleep until the next minute (see the sketch below). This is obviously only relevant if you can't control the end consumer.
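To illustrate the "pause without sleeping" point, one rough sketch is to pause the assigned partitions and keep calling poll() so the consumer is not kicked out of the group (the one-minute window is just an example):

consumer.pause(consumer.assignment());         // stop receiving records ...
long resumeAt = System.currentTimeMillis() + 60_000;
while (System.currentTimeMillis() < resumeAt) {
    consumer.poll(Duration.ofMillis(500));     // ... but keep polling so no rebalance is triggered
}
consumer.resume(consumer.assignment());        // start receiving records again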

Delayed ACK in Spring Kafka

I'm using Spring and Spring Kafka for a batching service that collects data from Kafka until certain conditions are met, and then dumps the data.
I want to acknowledge the commits when the data leaves my service, but it could potentially sit in memory for 5-10 minutes.
Given that the Acknowledgment implementations in Spring Kafka hold on to the original record(s), it seems unreasonable to keep them until I dump my data, given what that would do to memory utilization.
Is there any other way to acknowledge / commit offsets from Spring Kafka given just the partition/offset information?
You could use AckMode.TIME or AckMode.COUNT with an incredibly large ackTime or ackCount so the container will never do the ack.
Then, pass the Consumer<?, ?> into your listener method and do the offset commit yourself.
Note, however, that the consumer is not thread-safe so you must perform the commit on the listener thread.
Also, bear in mind that records are not individually ack'd, just the offset. You can't ack "out of order".
Also, you would likely need to increase the max.poll.interval.ms beyond its default (5 minutes) to avoid a rebalance of the partitions.
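A rough sketch of that approach, assuming a container configured with AckMode.TIME and a huge ackTime; the listener keeps only topic/partition/offset (not the records) and commits on the listener thread when it dumps its batch (buffer(), readyToDump() and dump() are hypothetical):

private final Map<TopicPartition, OffsetAndMetadata> pending = new HashMap<>();

@KafkaListener(topics = "events")
public void listen(ConsumerRecord<String, String> record, Consumer<?, ?> consumer) {
    buffer(record.value());                                    // keep the payload, not the ConsumerRecord
    pending.put(new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1));   // offset to commit = next one to read

    if (readyToDump()) {
        dump();                                                // the data leaves the service here
        consumer.commitSync(pending);                          // on the listener thread; the consumer is not thread-safe
        pending.clear();
    }
}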

When consumer gets message from channel in rabbitmq,where does pre-fetch messages reside

I have below configuration for rabbitmq
prefetchCount:1
ack-mode:auto.
I have one exchange, one queue attached to that exchange, and one consumer attached to that queue. As per my understanding, the steps below happen when the queue has multiple messages.
1. The queue writes data to a channel.
2. As ack-mode is auto, as soon as the queue writes a message to the channel, the message is removed from the queue.
3. The message reaches the consumer, and the consumer starts working on that data.
4. As the queue has already received the acknowledgement for the previous message, it writes the next message to the channel.
Now, my doubt is: suppose the consumer is not finished with the previous data yet. What will happen to the next message the queue has written to the channel?
Also, suppose prefetchCount is 10 and I have just one consumer attached to the queue: where will these 10 messages reside?
The scenario you have described is one that is mentioned in the documentation for RabbitMQ, and elaborated in this blog post. Specifically, if you set a sufficiently large prefetch count, and have a relatively small publish rate, your RabbitMQ server turns into a fancy network switch. When acknowledgement mode is set to automatic, prefetch limiting is effectively disabled, as there are never unacknowledged messages. With automatic acknowledgement, the message is acknowledged as soon as it is delivered. This is the same as having an arbitrarily large prefetch count.
With prefetch >1, the messages are stored within a buffer in the client library. The exact data structure will depend upon the client library used, but to my knowledge, all implementations store the messages in RAM. Further, with automatic acknowledgements, you have no way of knowing when a specific consumer actually read and processed a message.
So, there are a few takeaways here:
Prefetch limit is irrelevant with automatic acknowledgements, as there are never any unacknowledged messages, thus
Automatic acknowledgements don't make much sense when using a consumer
Sufficiently large prefetch when auto-ack is off, or any use of auto-ack on, will result in the message broker not doing any queuing and instead doing routing only.
Now, here's a little bit of expert opinion. I find the whole notion of a message broker that "pushes" messages out to be a little backwards, and for this very reason: it's difficult to configure properly, and it is unclear what the benefit is. A queue system is a natural fit for a pull-based system. The processor can ask the broker for the next message when it is done processing the current message. This approach ensures that load is balanced naturally and that messages don't get lost when processors disconnect or get knocked out.
Therefore, my recommendation is to drop the use of consumers altogether and switch over to using basic.get.
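A minimal sketch of that pull-based style with the RabbitMQ Java client (the queue name, polling interval and process() call are placeholders):

try (Connection connection = new ConnectionFactory().newConnection();
     Channel channel = connection.createChannel()) {
    while (true) {
        GetResponse response = channel.basicGet("work-queue", false);   // false = no auto-ack
        if (response == null) {            // nothing queued right now
            Thread.sleep(1000);
            continue;
        }
        process(response.getBody());       // hypothetical processing of the message body
        channel.basicAck(response.getEnvelope().getDeliveryTag(), false);   // ack only after processing
    }
}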

Getting duplicates in kafka consumer

I am writing a Java client for a Kafka consumer. I commit every message asynchronously before processing it. Still, I am receiving many duplicate messages during rebalances.
Can anyone explain the reason and how to avoid this?
Kafka Consumer does not provide exactly-once processing guarantees, even if you commit all messages synchronously.
The problem is that when you have finished processing a message successfully and want to commit it, the rebalance can happen right before the commit. Thus, your commit is not done, and the already-processed message will be reprocessed.
Because you use asynchronous commits, the number of duplicates increases, as committing does not happen immediately for each single message. Hence, you can have many messages "in flight" that have finished processing but are not committed yet. On a rebalance, all "in-flight" messages will be reprocessed.
So committing synchronously will reduce the number of duplicates. However, duplicates cannot be avoided completely, because there is no exactly-once delivery guarantee in Kafka.
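As a sketch, a per-record synchronous commit keeps at most one message "in flight" (at a clear cost in throughput); the committed offset is the next one to read, and process() is a placeholder:

for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
    process(record);                                       // hypothetical processing step
    consumer.commitSync(Collections.singletonMap(
            new TopicPartition(record.topic(), record.partition()),
            new OffsetAndMetadata(record.offset() + 1)));  // blocks until the broker confirms
}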
Exactly-once delivery is on the roadmap for a future release of Kafka, though: https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging

Reading messages offset in Apache Kafka

I am very new to Kafka, and we are using Kafka 0.8.1.
What I need to do is consume messages from a topic. For that, I will have to write a consumer in Java which will consume a message from the topic and then save that message to a database. After a message is saved, an acknowledgement will be sent to the Java consumer. If the acknowledgement is true, the next message should be consumed from the topic. If the acknowledgement is false (which means that, due to some error, the message read from the topic couldn't be saved into the database), then that message should be read again.
I think I need to use the SimpleConsumer, to have control over the message offset, and have gone through the SimpleConsumer example given in this link: https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
In this example, the offset is evaluated in the run method as 'readOffset'. Do I need to play with that? For example, I could use LatestTime() instead of EarliestTime(), and in the failure case reset the offset to the previous one using offset - 1.
Is this how I should proceed?
I think you can get along with the high-level consumer (http://kafka.apache.org/documentation.html#highlevelconsumerapi), which should be easier to use than the SimpleConsumer. I don't think the consumer needs to reread messages from Kafka on database failure, as the consumer already has those messages and can resend them to the DB or do anything else it sees fit.
High-level consumers store the last offset read from a specific partition in ZooKeeper (based on the consumer group name), so that when a consumer process dies and is later restarted (potentially on another host), it can continue processing messages where it left off. It's possible to autosave this offset to ZooKeeper periodically (see the consumer properties auto.commit.enable and auto.commit.interval.ms), or to have it saved by application logic by calling ConsumerConnector.commitOffsets. See also https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example .
I suggest you turn auto-commit off and commit your offsets yourself once you have received the DB acknowledgement. That way, you can make sure unprocessed messages are reread from Kafka in case of consumer failure, and all messages committed to Kafka will eventually reach the DB at least once (but not 'exactly once').
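A hedged sketch of that idea against the old 0.8 high-level consumer API (the topic, group and saveToDatabase() call are placeholders, and error handling is omitted):

Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181");
props.put("group.id", "db-writer");
props.put("auto.commit.enable", "false");    // commit only after the DB acknowledges

ConsumerConnector connector =
        kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
KafkaStream<byte[], byte[]> stream = connector
        .createMessageStreams(Collections.singletonMap("my-topic", 1))
        .get("my-topic").get(0);

for (MessageAndMetadata<byte[], byte[]> message : stream) {
    while (!saveToDatabase(message.message())) {
        // false acknowledgement: retry the same message before moving on
    }
    connector.commitOffsets();               // checkpoint the offset in ZooKeeper after a successful save
}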
