Kafka Consumer Reading Previous Records - Java

I am facing a problem with my Kafka consumer. I have two Kafka brokers running with replication factor 2 for the topic. Every time a broker restarts and I then restart my consumer service, the consumer starts to read records it has already read. E.g. before I restarted the consumer this was the state,
and the consumer was sitting idle, not receiving any messages, as it had read all of them.
I restarted my consumer, and all of a sudden it started receiving messages it had already processed; here is the offset situation now.
Also, what are LOG-END-OFFSET and LAG? It looks like these are something to consider here.
Note that this only happens when one of the brokers gets restarted due to Kubernetes shifting it to another node.
This is the topic configuration:

Based on the info you posted, a couple of things immediately come to mind:
The first screenshot shows a lag of 182, which means the consumer either was not running, or had some odd configuration that made it stop consuming. Is it possible one of the brokers was down when the consumer stopped consuming?
On restart, the consumer finally consumed all the remaining messages, since it now shows a lag of 0. This is correct, expected Kafka behavior. (LOG-END-OFFSET is the offset of the next record to be written to a partition; LAG is the difference between that and the group's committed CURRENT-OFFSET, i.e. how many records the group still has to read.)
Make sure the consumer group name is not changing between restarts. Some clients default to "randomized" consumer group names, which only works as long as the consumer is never restarted.
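For reference, a minimal sketch of a consumer with a stable, explicit group.id (the broker addresses, group name, and topic name below are placeholders, not taken from the question); with a fixed group name and committed offsets, a restart resumes from the last committed position instead of re-reading old records:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FixedGroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092,broker-2:9092"); // placeholder brokers
        // stable, explicit group id: committed offsets for this group survive restarts
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-service");
        // only used when the group has no committed offset yet
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                consumer.poll(Duration.ofMillis(500)).forEach(r ->
                        System.out.printf("partition=%d offset=%d%n", r.partition(), r.offset()));
                consumer.commitSync(); // commit after processing so the group resumes here next time
            }
        }
    }
}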

Related

Kafka consumer group not rebalancing when increasing partitions

I have a situation where in my dev environment, my Kafka consumer groups will rebalance and distribute partitions to consumer instances just fine after increasing the partition count of a subscribed topic.
However, when we deploy our product into its Kubernetes environment, we aren't seeing the consumer groups rebalance after increasing the partition count of the topic. Kafka recognizes the increase, which can be seen in the server logs or by describing the topic from the command line. However, the consumer groups won't rebalance and recognize the new partitions. From my local testing, Kafka respects metadata.max.age.ms (default 5 mins). But in Kubernetes the group never rebalances.
I don't know if it affects anything but we're using static membership.
The consumers are written in Java and use the standard Kafka Java library. No messages are flowing through Kafka, and adding messages doesn't help. I don't see anything special in the server or consumer configurations that differs from my dev environment. Is anyone aware of any configuration that may affect this behavior?
** Update **
The issue was only occurring for a new topic. At first, the consumer application was starting before the producer application (which is intended to create the topic), so the consumer was auto-creating the topic. In this scenario, the topic defaulted to 1 partition. When the producer application started, it updated the partition count per its configuration. After that update, we never saw a rebalance.
Next we tried disabling consumer auto topic creation to address this. This prevented the consumer application from auto-creating the topic on subscription. Yet even after the topic was created by the producer app, the consumer group never rebalanced, so the consumer application would sit idle.
According to the documentation I've found, and testing in my dev environment, both of these situations should trigger a rebalance. For whatever reason we don't see that happen in our deployments. My temporary workaround is just to ensure that the topic is already created before allowing my consumers to subscribe. I don't like it, but it works for now. I suspect that the different behavior I'm seeing is due to my dev environment running a single Kafka broker vs. the Kubernetes deployments running a cluster, but that's just a guess.
By default, Kafka consumers refresh topic metadata only every 5 minutes (metadata.max.age.ms), so they will not detect partition changes immediately, as you've noticed. The deployment method of your app shouldn't matter, as long as network requests are properly reaching the brokers.
Also, check your partition assignment strategy to see whether it's using sticky assignment. This will depend on what version of the client you're using, as the defaults changed around 2.7, I think.
No messages are flowing through Kafka
If there's no data on the new partitions, there's no real need to rebalance to consume from them
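If you want new partitions to be picked up faster and want to rule out client-side topic auto-creation, the settings below are the relevant knobs. The property names are standard Kafka consumer configs, but the values, and whether they resolve this particular Kubernetes case, are only a hedged suggestion:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class RebalanceFriendlyConsumerConfig {
    // Builds consumer properties that surface new partitions sooner.
    public static Properties build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");            // placeholder group
        // refresh topic metadata every 30s instead of the 5-minute default,
        // so newly added partitions are noticed sooner
        props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "30000");
        // make the assignment strategy explicit (CooperativeStickyAssignor needs a 2.4+ client)
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                "org.apache.kafka.clients.consumer.CooperativeStickyAssignor");
        // stop the consumer from auto-creating a 1-partition topic on subscribe (2.3+ client)
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");
        return props;
    }
}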

Kafka Spring Boot: receiving messages only from consumer application launch time and ignoring unprocessed messages

Currently, when starting the consumer application, it receives old messages that have not been processed by the KafkaListener. I only want to receive messages produced after the consumer application starts and ignore those old messages. How can I do that?
This is a pretty brief description of your issue - it would be handy to show the versions of the libraries you are working with, your configuration, etc.
Nevertheless, if you do not want to receive old messages that have not been acknowledged before, you need to move the offset for your consumer group.
The offset is basically a pointer to the last successfully read record, so when the consumer is stopped, it stays where it was until the consumer starts reading again - that is the reason why "old" messages are read. There are some answers in this thread, but it is difficult to answer completely without further information.
Set the consumer settings auto.offset.reset=latest and enable.auto.commit=false; then your app will always start reading from the very end of the Kafka topic and ignore whatever was written while the app was stopped (between restarts, for example).
You could also add a random UUID to the group.id to ensure no other consumer can easily join that consumer group and "take away" events from your app.
The Kafka Consumer API also has a seekToEnd method that you can try.
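A minimal plain-Java sketch combining those suggestions (the broker, group prefix, and topic name are made up for illustration); with Spring Boot the equivalent properties can be set under spring.kafka.consumer.* instead:

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LatestOnlyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // random group id: the group has no committed offsets, so auto.offset.reset applies
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ui-feed-" + UUID.randomUUID());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // belt and braces: explicitly jump to the end of each assigned partition
                    consumer.seekToEnd(partitions);
                }
            });
            while (true) {
                consumer.poll(Duration.ofMillis(500)).forEach(r -> System.out.println(r.value()));
            }
        }
    }
}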

Kafka stops consuming messages after multiple failures

We have a simple application. One microservice sends messages that are consumed by two other microservices. Of the two services, one is able to successfully process the messages and we don't see any lag there. The other service consistently fails, and after the 25th message no further messages are consumed. Is there a reason for this?
The Kafka topic is created with one partition and a replication factor of one.
Service 1 - working fine (screenshot).
Service 2 - lag increasing (screenshot).
Is there a configuration in Kafka that makes the consumer stop consuming messages after a particular number of failures, or what can we do to avoid this behavior?
You say you have a topic with one partition and two different services - I'm assuming different consumer group names - consuming from the same topic partition; one service is consuming fine and the other stops at the 25th message. In this case you can check that consumer's logs to see why it is not consuming; it could be a malformed message that the consumer is unable to process.
Here is how I'd debug:
Restart or redeploy the same consumer and see whether it always stops at the 25th message.
If step 1 is true, check the 24th and 25th messages and compare them, or reset the offset to the 26th message and see what happens: if the consumer moves forward after the offset reset, then the 25th message has some issue that the consumer cannot handle (see the sketch below).
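If it does turn out to be a single poison record, one way to move past it is to commit the consumer group's position forward. The topic, group, and offset below are placeholders based on the scenario above, so treat this as a sketch rather than a drop-in fix; stop the failing consumer first (the commit will be rejected while the group still has active members). The same reset can also be done from the command line with the kafka-consumer-groups.sh --reset-offsets tool.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SkipPoisonRecord {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "failing-service");   // placeholder group name
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // single-partition topic from the question; offset 25 is assumed to be the bad record
        TopicPartition tp = new TopicPartition("orders", 0);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            // commit position 26 for the group so the real consumer resumes after the bad record
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(26L)));
        }
    }
}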

ActiveMQ stops receiving messages after servers left idle for a few hours

I've been browsing the forums for the last few days and tried almost everything I could find, but without any luck.
The situation is: inside our Java web application we have ActiveMQ 5.7 (I know it's very old; eventually we will upgrade to a newer version, but for various reasons that's not possible right now). We have only one broker and multiple consumers.
When I start the servers (I have tried this with 2, 3, 4 and more servers) everything is OK. The servers are communicating with each other, and QUEUE messages are consumed instantly. But when I leave the servers idle (for example to finally catch some sleep ;) ) that is no longer the case. Messages are stuck in the database and are not being consumed. The only way to get them delivered is to restart the server.
Part of my configuration (we keep it in properties file, it's the actual state, however I have tried many different combinations):
BrokerServiceURI=broker:(tcp://0.0.0.0:{0})/{1}?persistent=true&useJmx=false&populateJMSXUserID=false&useShutdownHook=false&deleteAllMessagesOnStartup=false&enableStatistics=true
ConnectionFactoryURI=failover://({0})?initialReconnectDelay=100&timeout=6000
ConnectionFactoryServerURI=tcp://{0}:{1}?keepAlive=true&soTimeout=100&wireFormat.cacheEnabled=false&wireFormat.tightEncodingEnabled=false&wireFormat.maxInactivityDuration=0
BrokerService.startAsync=true
BrokerService.networkConnectorStartAsync=true
BrokerService.keepDurableSubsActive=false
Do you have a clue?
I cannot tell you the reason from the description above, but I can list a few checks that are fresh in my mind. Please confirm whether the following are valid for you or not.
Can you check the consumer connections?
Are the consumer sessions still active?
If all the consumer connections are up, then check the thread dump to see whether the active consumer threads (I'm assuming you created consumer threads, correct me if I'm wrong) are in RUNNING or WAITING state. This happened to me once: all the consumers were active, but another thread in the server was holding a lock on the Logger while posting a message to Slack, so the consumers were stuck in WAITING state.
Check the dispatch queue size for each consumer, check the prefetch of each consumer, and then compare the dispatch queue size with the prefetch limit (see the prefetch example after this list).
Is there a JMSXGroupID you are allotting to each message?
Can you tell a little more about your consumer/producer/broker configurations?
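On the prefetch point, a hedged illustration: the broker will not dispatch more than the prefetch limit of unacknowledged messages to a consumer, so a consumer stuck mid-message can look "idle" while holding a full prefetch buffer. The broker URL and queue name below are placeholders; lowering jms.prefetchPolicy.queuePrefetch on the consumer's connection URI limits how much a stuck consumer can hold back.

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LowPrefetchConsumer {
    public static void main(String[] args) throws Exception {
        // jms.prefetchPolicy.queuePrefetch caps how many messages the broker pushes
        // to this consumer before acknowledgements come back
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=10");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("APP.QUEUE"); // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> System.out.println("Received: " + message));
        // keep the JVM alive so the listener keeps receiving (placeholder for a real app lifecycle)
        Thread.sleep(Long.MAX_VALUE);
    }
}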

Kafka old consumer rebalance issue

In our system we're using an older version of kafka (0.9.0.1) and the old scala consumer API in a tomcat application.
Everything works fine most of the time, but sometimes, when the servers where the consumers run are heavily utilised by other tasks in the app, the consumers become unresponsive. That triggers, as expected, a rebalance, and the consumer is removed from its partitions and other consumers take over.
My question is if there is an easy way for the consumer to re-register itself when it comes back up?
I know that the old consumers store the partition consumer details in ZooKeeper, and I was thinking we could have a task that periodically checks whether our consumer is registered there and restarts the consumer if not, but I'm not sure what exactly we should check. Can anyone point me to some documentation about the data stored in ZooKeeper by Kafka (I haven't found anything in the official documentation, sadly :( )?
Basically, what you want is fixed assignments, so that the consumer group never rebalances. If there were a way to disable automatic consumer rebalancing in the old Scala client, or even to increase the rebalance timeout to a much higher value, that could also work, but I couldn't find how to do that with the old Scala consumer.
However, it is possible to assign fixed topics/partitions when using the newer Java consumer, which is also available in that same 0.9 Kafka version. Look for "Subscribing To Specific Partitions" in the Javadocs:
https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
Subscribing To Specific Partitions
In the previous examples we subscribed to the topics we were interested in and let Kafka give our particular process a fair share of the partitions for those topics. This provides a simple load balancing mechanism so multiple instances of our program can divide up the work of processing records.
In this mode the consumer will just get the partitions it subscribes to, and if the consumer instance fails no attempt will be made to rebalance partitions to other instances.
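With the 0.9+ Java client that boils down to calling assign() instead of subscribe(). A rough sketch, with placeholder broker, group, topic, and partition numbers; there is no group coordination here, so an unresponsive instance is never kicked out of its partitions, but it also means you have to manage failover between instances yourself:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class FixedAssignmentConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "tomcat-app");   // still used to store committed offsets
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // assign() pins this instance to specific partitions; no rebalancing ever happens
        consumer.assign(Arrays.asList(
                new TopicPartition("my-topic", 0),
                new TopicPartition("my-topic", 1)));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000); // 0.9 API: timeout in ms
            records.forEach(r -> System.out.println(r.offset() + ": " + r.value()));
            consumer.commitSync();
        }
    }
}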
