We have a system that receives data from users and pushes it to Kafka, and only when we are sure the data has been pushed do we send the user an "OK" response.
Since the new Kafka producer uses the asynchronous send(ProducerRecord, Callback), I wanted to know whether this send is crash-resistant (fault-tolerant).
My guess is that it most probably is not, so how can I use it in synchronous mode? Or should I make the user wait until the callback is invoked?
According to Kafka's design documentation:
Asynchronous send
Batching is one of the big drivers of efficiency, and to enable batching the Kafka producer has an asynchronous mode that accumulates data in memory and sends out larger batches in a single request. The batching can be configured to accumulate no more than a fixed number of messages and to wait no longer than some fixed latency bound (say 100 messages or 5 seconds). This allows the accumulation of more bytes to send, and few larger I/O operations on the servers. Since this buffering happens in the client it obviously reduces the durability as any data buffered in memory and not yet sent will be lost in the event of a producer crash.
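Given that, if the requirement is to reply "OK" only after the broker has accepted the record, one option is to block on the Future returned by send(). A minimal sketch, assuming String keys and values; the broker address and topic name are illustrative, not from the question:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncSendExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");   // broker acknowledges only after the record is replicated

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("user-data", "user-1", "payload");   // assumed topic/key/value

            // Blocking on get() turns the asynchronous send into a synchronous one:
            // the call returns only after the broker has acknowledged the record,
            // or throws if the send ultimately fails.
            RecordMetadata metadata = producer.send(record).get();
            System.out.printf("Acknowledged at %s-%d@%d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
            // Only now would the application return "OK" to the user.
        }
    }
}

Alternatively, you can keep send() fully asynchronous and complete the user's response from the Callback once it fires; blocking on get() is simpler, but it serializes sends and gives up most of the batching benefit described above.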
I am opening a Kafka producer with config properties:
KafkaProducer<String, MyValue> producer = new KafkaProducer<String, MyValue>(kafkaProperties);
Then I am sending records synchronously using the following (so as to avoid batching and also maintain the original message order):
// create myValue instance (omitted for simplicity)
// create myRecord instance using the topic name and myValue
producer.send(myRecord).get();
producer.flush(); // send the message as soon as the record is available to the producer
Now my issue is: I have several records to send, and between sends I might have to wait for long periods, from a few minutes to hours (for whatever reason, at least to explore and understand Kafka better).
I want to know how long the producer's connection with the cluster/bootstrap server will stay alive. Is there any way I can configure it using the producer configurations?
(In-depth explanations will be greatly appreciated, even if they have to go down to the TCP connection level.)
(Kafka consumers have a heartbeat concept. Do producers have a similar concept? A Google search for "kafka producer heartbeat.interval.ms" returned results only for the consumer.)
The KafkaProducer.send method is asynchronous; by default it adds records to a memory buffer and sends them in batches, so according to the docs the producer establishes the connection while sending the batch to the cluster:
The send() method is asynchronous. When called it adds the record to a buffer of pending record sends and immediately returns. This allows the producer to batch together individual records for efficiency.
The producer maintains buffers of unsent records for each partition. These buffers are of a size specified by the batch.size config. Making this larger can result in more batching, but requires more memory (since we will generally have one of these buffers for each active partition).
By default a buffer is available to send immediately even if there is additional unused space in the buffer. However if you want to reduce the number of requests you can set linger.ms to something greater than 0.
This will instruct the producer to wait up to that number of milliseconds before sending a request in hope that more records will arrive to fill up the same batch. This is analogous to Nagle's algorithm in TCP.
For example, in the code snippet above, likely all 100 records would be sent in a single request since we set our linger time to 1 millisecond. However this setting would add 1 millisecond of latency to our request waiting for more records to arrive if we didn't fill up the buffer.
Note that records that arrive close together in time will generally batch together even with linger.ms=0 so under heavy load batching will occur regardless of the linger configuration; however setting this to something larger than 0 can lead to fewer, more efficient requests when not under maximal load at the cost of a small amount of latency.
From the KafkaProducer.flush documentation: invoking flush doesn't mean the producer sends each record to the cluster individually; invoking flush makes all buffered records immediately available to send:
Invoking this method makes all buffered records immediately available to send (even if linger.ms is greater than 0) and blocks on the completion of the requests associated with these records. The post-condition of flush() is that any previously sent record will have completed (e.g. Future.isDone() == true). A request is considered completed when it is successfully acknowledged according to the acks configuration you have specified or else it results in an error.
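To make the interaction of batch.size, linger.ms and flush() concrete, here is a minimal sketch; the broker address, topic name, and configuration values are illustrative assumptions, not taken from the question:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchingConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", 16384);   // per-partition buffer size in bytes
        props.put("linger.ms", 5);        // wait up to 5 ms for more records to fill a batch

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // send() only appends the record to the in-memory buffer and returns immediately
                producer.send(new ProducerRecord<>("my-topic", Integer.toString(i), "value-" + i));
            }
            // flush() makes all buffered records available to send and blocks until
            // each of them is acknowledged (per the acks setting) or fails.
            producer.flush();
        }
    }
}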
We have a requirement where we are using Kafka Streams to read from a Kafka topic and then send the data over the network through a pool of sessions. However, the network calls are sometimes a bit slow and we need to frequently pause the stream to make sure we are not overloading the network. Currently, we capture the data in the stream, hand it to an executor service, and then send it over the network through the session pool.
If the backlog in the executor service grows too large, we need to pause the stream for some time and then resume it once the backlog is cleared. To achieve this pause mechanism, we are currently closing the stream and starting it again once the backlog is cleared.
Is there any way we can pause the Kafka stream?
If I understand you correctly, there is nothing special you need to do. You are talking about "back pressure" and Kafka Streams can handle it out of the box.
What you can do is put the data into a queue with some maximum size and have the executor service consume from that queue. Whenever the queue reaches its threshold, there are two options (see the sketch after this list):
If your call to put data into the queue blocks with no time-out, there is nothing more you need to do. Just wait until the system is back online; your call returns, and processing resumes.
If your call to put data into the queue blocks with a time-out, just check the size of the queue and retry. Repeat this until the system is back online and your call succeeds.
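A minimal sketch of that hand-off with a bounded queue (the queue capacity, element type, and time-out are assumptions for illustration):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedHandoff {
    // Bounded queue: when it is full, the Streams processing thread blocks in put()
    // (or keeps retrying offer()), which naturally pauses consumption upstream.
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1_000);

    /** Called from the Kafka Streams topology (e.g. a foreach) for every record. */
    public void handoff(String record) throws InterruptedException {
        // Variant 1: block with no time-out until the executor has drained some backlog.
        queue.put(record);
    }

    /** Variant 2: block with a time-out so the caller can log or check health in between. */
    public void handoffWithTimeout(String record) throws InterruptedException {
        while (!queue.offer(record, 5, TimeUnit.SECONDS)) {
            // still full: the downstream system is slow; loop until space frees up
        }
    }

    /** The executor / sender side pulls from the queue and sends over the network. */
    public String nextToSend() throws InterruptedException {
        return queue.take();
    }
}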
The only caveat is that as long as your Streams application blocks, the internally used Kafka consumer client will not send any heartbeats to Kafka and might time out. Thus, you need to set the time-out configuration parameter higher than the expected maximum downtime of your external system.
Another approach is to use the Processor API available in Kafka Streams, though this is not usually the recommended pattern.
Let me know if it helps!!
I have the below configuration for RabbitMQ:
prefetchCount: 1
ack-mode: auto
I have one exchange, one queue attached to that exchange, and one consumer attached to that queue. As per my understanding, the steps below happen when the queue has multiple messages:
The queue writes data on a channel.
As ack-mode is auto, as soon as the queue writes a message on the channel, the message is removed from the queue.
The message reaches the consumer, and the consumer starts working on that data.
As the queue has already received the acknowledgement for the previous message, it writes the next data on the channel.
Now, my doubt is: suppose the consumer has not finished with the previous data yet. What happens to the next data that the queue has written on the channel?
Also, suppose prefetchCount is 10 and I have just one consumer attached to the queue; where will these 10 messages reside?
The scenario you have described is one that is mentioned in the documentation for RabbitMQ, and elaborated in this blog post. Specifically, if you set a sufficiently large prefetch count, and have a relatively small publish rate, your RabbitMQ server turns into a fancy network switch. When acknowledgement mode is set to automatic, prefetch limiting is effectively disabled, as there are never unacknowledged messages. With automatic acknowledgement, the message is acknowledged as soon as it is delivered. This is the same as having an arbitrarily large prefetch count.
With prefetch >1, the messages are stored within a buffer in the client library. The exact data structure will depend upon the client library used, but to my knowledge, all implementations store the messages in RAM. Further, with automatic acknowledgements, you have no way of knowing when a specific consumer actually read and processed a message.
So, there are a few takeaways here:
Prefetch limit is irrelevant with automatic acknowledgements, as there are never any unacknowledged messages; thus
automatic acknowledgements don't make much sense when using a consumer.
A sufficiently large prefetch when auto-ack is off, or any use of auto-ack = on, will result in the message broker not doing any queuing, and instead doing routing only.
Now, here's a little bit of expert opinion. I find the whole notion of a message broker that "pushes" messages out to be a little backwards, and for this very reason: it's difficult to configure properly, and it is unclear what the benefit is. A queue system is a natural fit for a pull-based system. The processor can ask the broker for the next message when it is done processing the current one. This approach ensures that load is balanced naturally and that messages don't get lost when processors disconnect or get knocked out.
Therefore, my recommendation is to drop the use of consumers altogether and switch over to using basic.get.
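As a rough illustration of that pull-based style with the RabbitMQ Java client (the queue name, host, and polling interval are assumptions):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class PullConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // assumed broker host

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        try {
            while (true) {
                // basic.get asks the broker for the next message only when we are ready;
                // autoAck=false so the message stays unacknowledged until processing is done.
                GetResponse response = channel.basicGet("my-queue", false);
                if (response == null) {
                    Thread.sleep(100);                       // queue empty, poll again shortly
                    continue;
                }
                process(response.getBody());
                // Acknowledge only after successful processing, so a crash re-queues the message.
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            }
        } finally {
            channel.close();
            connection.close();
        }
    }

    private static void process(byte[] body) {
        System.out.println(new String(body));
    }
}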
I have a producer which sends persistent messages in batches to a queue, leveraging JMS transactions.
I have tested and found that Producer Flow Control is applied when using a batch size of 1. I could see my producer being throttled as per the memory limit I have configured for the queue. Here's my Producer Flow Control configuration:
<policyEntry queue="foo" optimizedDispatch="true"
producerFlowControl="true" memoryLimit="1mb">
</policyEntry>
The number of pending messages in the queue stays under control, which I see as evidence of Producer Flow Control in action.
However, when the batch size is increased to 2, I found that this memory limit is not respected and the producer is NOT THROTTLED at all, the evidence being that the number of pending messages in the queue keeps increasing until it hits the configured storeUsage limit.
I understand this might be because the messages are sent in an asynchronous fashion when the batch size is more than 1, even though I haven't explicitly set useAsyncSend to true.
ActiveMQ's Producer Flow Control documentation mentions that to throttle asynchronous publishers, we need to configure Producer Window Size in the producer, which will force the producer to wait for an acknowledgement once the window limit is reached.
However, when I configured Producer Window Size in my producer and attempted to send messages in batches, an exception was thrown and no messages were sent.
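For reference, the producer window size is normally set on the ActiveMQ connection factory, roughly along these lines (a sketch; the broker URL and window size are illustrative, not the exact configuration from my setup):

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class WindowSizeExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");   // assumed broker URL
        // Limit how much unacknowledged data an asynchronous producer may have in flight.
        factory.setProducerWindowSize(1024 * 1024);   // 1 MB, illustrative value

        Connection connection = factory.createConnection();
        connection.start();
        try {
            // Transacted session, since the messages are sent in batches.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            // ... create a producer, send the batch, then session.commit()
        } finally {
            connection.close();
        }
    }
}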
This makes me think and ask this question, "Is it possible to configure Producer Window Size while sending persistent messages in batches?".
If not, then what is the correct way to throttle the producers who send persistent messages in batches?
There is not really a way to throttle "max msgs per second" or similar. What you would do is enable producer flow control and the VM cursor, then set the memory limit on that queue (or possibly on all queues if you wish) to some reasonable level.
You can decide in the configuration if the producer should hang or throw an exception if the queue memory limit has been reached.
<policyEntry queue="MY.BATCH.QUEUE" memoryLimit="100mb" producerFlowControl="true">
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
</policyEntry>
I hit this problem in v5.8.0 but found it to be resolved in v5.9.0 and above.
From v5.9.0 onwards I found that PFC is applied out of the box, even for producers that send messages asynchronously.
Since batch send (where batch size > 1) is essentially an asynchronous operation, this applies there as well.
But the PFC wiki was confusing, as it mentions that one should configure ProducerWindowSize for async producers if PFC is to be applied. However, I tested and verified that this is not needed.
I basically configured a per-destination limit of 1mb and sent messages in batches (with batch size of 100).
My producer was throttled out of the box without any additional configuration. The number of pending messages in the queue didn't increase and was under control.
With a simple Camel consumer consuming the messages (and appending them to a file), I found that with v5.8.0 (where I faced the problem), I could send 100k messages with the payload being 2k in 36 seconds. But most of them ended up as Pending messages.
But with v5.9.0, it took 176 seconds to send the same set of messages, testifying to the role played by PFC. And the number of pending messages never increased beyond 1000 in my case.
I also tested with v5.10.0 and v5.12.0 (the latest version at the time of writing) which worked as expected.
So if you are facing this problem, chances are that you are running ActiveMQ v5.8.0 or earlier. Simply upgrading to the latest version should solve this problem.
I thank the immensely helpful ActiveMQ mailing list folks for all their suggestions and help.
Thanks @Petter for your answer too. Sorry I didn't mention the version I was using in my question, otherwise I believe you could have spotted the problem straight away.
I want 100 messages to be delivered together to a consumer through ActiveMQ, but at the same time the producer will be producing messages one at a time.
The reason I want this is that I don't want to handle the overhead of processing each message individually on delivery; instead, we want to do bulk processing on delivery.
Is it possible to achieve this through ActiveMQ, or should I write my own modifications to achieve it?
ActiveMQ is a JMS 1.1 client/broker implementation, so there is no API to deliver messages in bulk; the async listener dispatches them one at a time. The client does prefetch more than one message, though, so the overhead of processing them using async listeners is quite low.
You could achieve your goal by placing every message into a buffer and only doing your processing when the buffer contains N messages. To make it work, you'd want to use an acknowledgement mode such as CLIENT_ACKNOWLEDGE that allows you to not acknowledge the messages that are sitting in the buffer until they are processed; that way if your client crashed with some messages in its memory, they would be re-delivered when the client comes back up.
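A rough sketch of that buffering pattern with CLIENT_ACKNOWLEDGE (the queue name, batch size, and processing step are assumptions):

import java.util.ArrayList;
import java.util.List;
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BulkConsumer {
    private static final int BATCH_SIZE = 100;

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");   // assumed broker URL
        Connection connection = factory.createConnection();
        connection.start();
        try {
            // CLIENT_ACKNOWLEDGE: nothing is acknowledged until we explicitly say so.
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("my.queue"));

            List<Message> buffer = new ArrayList<>(BATCH_SIZE);
            while (true) {
                Message message = consumer.receive();          // one message at a time
                buffer.add(message);
                if (buffer.size() >= BATCH_SIZE) {
                    processInBulk(buffer);                     // your bulk processing step
                    // acknowledge() on the last message acknowledges all messages
                    // consumed so far in this session, i.e. the whole buffered batch.
                    message.acknowledge();
                    buffer.clear();
                }
            }
        } finally {
            connection.close();
        }
    }

    private static void processInBulk(List<Message> batch) throws Exception {
        for (Message m : batch) {
            if (m instanceof TextMessage) {
                System.out.println(((TextMessage) m).getText());
            }
        }
    }
}

If the client crashes while messages are sitting unacknowledged in the buffer, the broker re-delivers them when the client reconnects, which is the durability property the answer describes.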