Are Kafka acks received in the same order as produced messages? - Java

I'm working on a process that collects data from IBM MQ and forwards it to a Kafka topic.
To make sure I don't lose any message, I need to commit my JMS message only after confirming that the message has been sent to and received by the Kafka broker.
I don't want to use a synchronous Kafka producer (waiting on future.get()) because of the performance impact it may have; instead, I want to commit my JMS message inside the callback I provide to my Kafka producer.
For this to work correctly, I need the guarantee that acks are received in the same order as my produced messages (the first ack corresponds to the first message sent, and so on).
Is my assumption correct?

These are the producer configs you want for ordered Kafka producer messages
enable.idempotence=true
acks=all
max.in.flight.requests.per.connection=5
retries=2147483647
The last two are defaults, so you don't explicitly need to set them.
Details - https://developer.confluent.io/tutorials/message-ordering/kafka.html
However, producer callbacks are only guaranteed to be invoked in order for records sent to the same partition; across partitions there is no ordering guarantee of callbacks unless you use a synchronous producer
collect data from IBM MQ and process it to a kafka topic
You can use Kafka Connect for this rather than writing your own producer
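As a concrete sketch, the configs listed above could be assembled like this (the bootstrap address and serializer classes are assumptions for illustration; the resulting Properties object would be passed to a KafkaProducer constructor):

```java
import java.util.Properties;

// Hypothetical helper collecting the ordered-producer settings from the answer above.
public class OrderedProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // The settings that matter for ordering:
        props.put("enable.idempotence", "true");
        props.put("acks", "all");
        props.put("max.in.flight.requests.per.connection", "5"); // default, shown for clarity
        props.put("retries", String.valueOf(Integer.MAX_VALUE));  // default, shown for clarity
        return props;
    }
}
```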

We need to talk a little about message-processing guarantees here.
If we want exactly-once processing semantics and message ordering, then consumers must be configured with isolation.level="read_committed", and producers have to be configured with retries=Integer.MAX_VALUE, enable.idempotence=true, and max.in.flight.requests.per.connection=1.
Setting max.in.flight.requests.per.connection=1 guarantees that messages are written to the broker in the order in which they were sent, even when retries occur.
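As a sketch, the settings listed above can be collected into plain Properties fragments (all keys are standard Kafka client config names; these would be merged into full producer/consumer configurations):

```java
import java.util.Properties;

// Hypothetical helpers for the exactly-once settings described above.
public class ExactlyOnceConfig {
    // Producer side: idempotent, unlimited retries, one in-flight request per connection.
    public static Properties producer() {
        Properties p = new Properties();
        p.put("enable.idempotence", "true");
        p.put("retries", String.valueOf(Integer.MAX_VALUE));
        p.put("max.in.flight.requests.per.connection", "1"); // strict ordering, even on retry
        p.put("acks", "all");
        return p;
    }

    // Consumer side: only read records from committed transactions.
    public static Properties consumer() {
        Properties p = new Properties();
        p.put("isolation.level", "read_committed");
        return p;
    }
}
```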

Related

Client Healthcheck: Check for consumer/producer if broker is down

I have a requirement to implement a healthcheck, and as part of that I have to find out whether the producer will be able to publish messages and the consumer will be able to consume them. For this I have to check that the connection to the cluster is working, which can be done using the "connection_count" metric, but that doesn't give the true picture, especially for a consumer, which is tied to the particular brokers that host its partitions.
The situation with the producer is even trickier, as the producer might be publishing messages to any broker that holds a partition of the topic it publishes to.
In a nutshell, how do I find the health of the relevant brokers on the producer/consumer side?
Ultimately, I divide the question into a few checks.
1. Can you reach the broker? AdminClient.describeCluster works for this.
2. Can you describe the topic(s) you are using? AdminClient.describeTopics can do that.
3. Is the ISR count for those topics at least min.insync.replicas? Extrapolate the data from (2).
On the producer side, if you set at least acks=1 and the send callback never fires, something is wrong. You could also expose JMX data around the buffer size: if the producer's buffer isn't periodically flushed, it is not healthy.
For the consumer, look at the conditions under which a rebalance will happen (such as long processing times between polls); then you can quickly identify what it means for one to be "unhealthy". Attaching partition assignment and rebalance listeners can help here.
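The ISR health rule above reduces to a simple comparison. A minimal sketch, assuming the per-partition ISR sizes have already been extracted from the AdminClient topic descriptions:

```java
import java.util.List;

// Sketch of the ISR health rule: a topic is healthy when every partition's
// in-sync replica count is at least min.insync.replicas. In real code the
// ISR sizes would come from AdminClient.describeTopics(...) results.
public class IsrHealthCheck {
    public static boolean isHealthy(List<Integer> isrSizes, int minInsyncReplicas) {
        for (int isr : isrSizes) {
            if (isr < minInsyncReplicas) {
                return false; // a write with acks=all to this partition would fail
            }
        }
        return true;
    }
}
```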
I've implemented some of these concepts in:
dropwizard-kafka (also has Producer and Consumer checks)
remora
I would like to think Spring has something similar

Accessing messages in the dead letter queue of ActiveMQ and redelivering to a web service or socket after consumption

I am writing an application in Java using ActiveMQ, with a producer and an asynchronous consumer, where messages sent by the producer are not consumed due to network failures. Hence these messages are sent to ActiveMQ's dead letter queue.
My question is how to access the messages in the dead letter queue and retry them, by consuming them in the consumer and sending them to a web service, socket, etc. Any code example would be great.
The DLQ is like any other topic or queue: you can subscribe to it and consume the messages accumulated there. Here is the list: http://activemq.apache.org/advisory-message.html
The name of the DLQ to subscribe to is ActiveMQ.DLQ if you do not have an individualDeadLetterStrategy; subscribe to it and do your business in the onMessage method.
http://activemq.apache.org/message-redelivery-and-dlq-handling.html
Hassen is totally right regarding the possibility of setting up an MDB consuming DLQ entries.
However, the right place to set up a redelivery policy is on the queue or topic itself (see http://activemq.apache.org/redelivery-policy.html), not on the dead letter queue. Indeed, you have only one DLQ instance for the MOM, which will contain messages from all the different queues/topics, so setting up a failover mechanism there would imply managing the different message structures.

How to deliver multiple messages together to the Listener in ActiveMQ?

I want 100 messages to be delivered together to a consumer through ActiveMQ, while the producer produces messages one at a time.
The reason I want this is that I don't want the overhead of processing each message individually on delivery; instead, we want to do bulk processing on delivery.
Is it possible to achieve this through ActiveMQ, or should I write my own modifications to achieve it?
ActiveMQ is a JMS 1.1 client/broker implementation, so there is no API to deliver messages in bulk; the async listener dispatches them one at a time. The client does prefetch more than one message, though, so the overhead of processing them using async listeners is quite low.
You could achieve your goal by placing every message into a buffer and only doing your processing when the buffer contains N messages. To make it work, you'd want to use an acknowledgement mode such as CLIENT_ACKNOWLEDGE that allows you to not acknowledge the messages that are sitting in the buffer until they are processed; that way if your client crashed with some messages in its memory, they would be re-delivered when the client comes back up.
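A minimal sketch of that buffer, independent of any JMS types: in a real listener, onMessage would call add(message), and the batch processor would call acknowledge() on the last message of the batch once processing succeeds (assuming CLIENT_ACKNOWLEDGE mode, so unacknowledged messages are re-delivered after a crash).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Collects messages until the buffer holds batchSize of them, then hands the
// whole batch to a bulk processor and clears the buffer.
public class BatchBuffer<T> {
    private final int batchSize;
    private final Consumer<List<T>> batchProcessor;
    private final List<T> buffer = new ArrayList<>();

    public BatchBuffer(int batchSize, Consumer<List<T>> batchProcessor) {
        this.batchSize = batchSize;
        this.batchProcessor = batchProcessor;
    }

    // Called once per delivered message; flushes when the buffer is full.
    public synchronized void add(T message) {
        buffer.add(message);
        if (buffer.size() >= batchSize) {
            batchProcessor.accept(new ArrayList<>(buffer)); // hand over a copy
            buffer.clear();
        }
    }
}
```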

JMS: call to message.acknowledge() after exiting onMessage()

I want to concurrently consume JMS messages from multiple queues. All the messages should go to the DB after long-running processing, and I have no right to lose them.
Question: Is it possible to save messages for future acknowledgement and call oldMessage.acknowledge() while another message is being processed?
My first guess is that this is impossible, since that lives deep in the JMS processing machinery and I have to handle the message and its acknowledgement within the onMessage(...) method.
My second guess is to run onMessage() concurrently and allow long-running processing of many messages at once. But this is not a good option, since I have to ensure that all messages are handled in order!
Second question: Is there any way to ensure the incoming order while processing concurrently?
1: JMS has a flag on Session, *CLIENT_ACKNOWLEDGE*; you can see it here. I have never used it, but it seems to do what you want.
2:
2.1: If you have N consumers for the same queue, you can explore the Exclusive Consumer support that some implementations offer (for ActiveMQ: here).
2.2 You have 1 consumer per queue but you want to order all messages from all queues.
You can use the concept of an ordered SlackBuffer.
You can explore other possibilities, such as redirecting all messages to an output queue that maintains message order and consuming only from that single output queue. The ordering and the redirection are handled by the MQ server. This is only a valid idea if you control the MQ server.
I hope this can help
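A minimal sketch of the ordered-buffer idea from 2.2: completions may arrive out of order, but the acknowledged watermark only advances over a contiguous prefix of sequence numbers. (This fits JMS CLIENT_ACKNOWLEDGE mode well, since acknowledging a message there acknowledges all messages the session has delivered so far.)

```java
import java.util.BitSet;

// Tracks out-of-order completions and exposes the highest contiguous
// acknowledged position: only when every earlier message has completed
// does the watermark (and thus the real acknowledge() call) advance.
public class OrderedAckTracker {
    private final BitSet completed = new BitSet();
    private int watermark = 0; // all sequence numbers below this are acknowledged

    // Mark sequence number `seq` as processed; returns the new watermark.
    public synchronized int complete(int seq) {
        completed.set(seq);
        while (completed.get(watermark)) {
            watermark++;
        }
        return watermark;
    }
}
```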

Do message queues provide transactional support?

Say I load messages into a queue from multiple nodes.
Then, one or many nodes pull messages from the queue.
Is it possible (or is this normal usage?) for the queue to guarantee not to hand out a message to more than one server/node?
And does that server/node have to tell the queue that it has completed the operation, so the queue can delete the message?
A message queuing system that did not guarantee to hand out a given message to just one recipient would not be worth using. Some message queue systems have transactional controls. In that case, if a message is collected by one receiver as part of a transaction, but the receiver does not then commit the transaction (and the message queue can identify that the original recipient is no longer available), then the message is reissued. However, it is never made available to two processes concurrently.
What messaging/queuing technology are you using? AMQP can certainly guarantee this behaviour (amongst many others, including pub/sub models).
If you want this in Java, then a JMS-compliant messaging system will do what you want, and most messaging systems have a JMS client. You can use Spring's JmsTemplate for real ease of use too.
With JMS, a message from a queue will be consumed by one and only one client, and once it is consumed (acknowledged), it is removed from the messaging system. Also, when you publish a persistent message using JMS, it is sent synchronously, and the send() method won't return until the message is stored on the broker's disk. This is important if you don't want to run the risk of losing messages in the event of failure.
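The one-and-only-one-consumer behaviour can be illustrated with a toy in-memory queue. This is not JMS, just a demonstration of point-to-point semantics: two competing consumers drain the same queue and each message ends up with exactly one of them.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueSemanticsDemo {
    public static Map<String, List<Integer>> run() {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 100; i++) queue.add(i); // "producer" loads 100 messages

        Map<String, List<Integer>> received = new HashMap<>();
        received.put("c1", Collections.synchronizedList(new ArrayList<>()));
        received.put("c2", Collections.synchronizedList(new ArrayList<>()));

        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (String name : received.keySet()) {
            List<Integer> sink = received.get(name);
            pool.submit(() -> {
                Integer msg;
                // poll() atomically removes the head, so no message is seen twice
                while ((msg = queue.poll()) != null) {
                    sink.add(msg);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }
}
```

A real broker adds persistence and acknowledgements on top of this: the message is only deleted once the consumer acknowledges it.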
