Apache Camel - Kafka component - single producer multiple consumer - java

I am creating two Apache Camel (Blueprint XML) Kafka projects: one is a kafka-producer which accepts requests and stores them in the Kafka server, and the other is a kafka-consumer which picks up messages from the Kafka server and processes them.
This setup works fine for a single topic with a single consumer. However, how do I create separate consumer groups within the same Kafka topic? And how do I route consumer-specific messages within the same topic to different consumer groups? Any help is appreciated. Thank you.

Your question is quite general and it's not very clear what problem you are trying to solve, so it's hard to tell whether there's a better way to implement the solution.
Anyway, let's start by saying that, as far as I can understand, you are looking for a Selective Consumer (EIP), which is something not supported out of the box by Kafka and its Consumer API. A Selective Consumer chooses which messages to pick from the queue or topic based on specific selector values that are set in advance by a producer. This feature must be implemented in the message broker as well, but Kafka has no such capability.
Kafka implements a hybrid solution between pure pub/sub and a queue. That being said, what you can do is subscribe to the topic with one or more consumer groups (more on that later) and filter out all messages you're not interested in by inspecting the messages themselves. In the messaging and EIP world, this pattern is known as an Array of Filters. As you can imagine, this happens after the message has been broadcast to all subscribers; therefore, if that solution does not fit your requirements or context, you can instead implement a Content-Based Router, which dispatches each message to a subset of consumers only, under your centralized control (this would imply intermediate consumer-specific channels, which could be other Kafka topics or seda/VM endpoints, of course).
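To make this concrete, here is a minimal sketch of both approaches in Camel's Java DSL, to be placed inside a RouteBuilder's configure() method; the "recipient" header name and the seda endpoint names are hypothetical:

// Array of Filters: each group subscribes to everything and drops what it doesn't need
from("kafka:myTopic?brokers={{kafkaBootstrapServers}}&groupId=consumerA")
    .filter(header("recipient").isEqualTo("consumerA"))
    .log("consumerA processing ${body}");

// Content-Based Router: one route dispatches to consumer-specific channels
from("kafka:myTopic?brokers={{kafkaBootstrapServers}}&groupId=router")
    .choice()
        .when(header("recipient").isEqualTo("consumerA")).to("seda:consumerA")
        .when(header("recipient").isEqualTo("consumerB")).to("seda:consumerB")
        .otherwise().log("no recipient matched: ${body}");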
Moving on to the second question, here is the official Camel Kafka component documentation: https://camel.apache.org/components/latest/kafka-component.html.
In order to create different consumer groups, you just have to define multiple routes, each of them having a dedicated groupId. By adding the groupId property, you inform the consumer group coordinators (which reside in the Kafka brokers) of the existence of multiple separate groups of consumers, and the brokers will use that to discriminate and treat them separately, sending each group a copy of every log message stored in the topic.
Here is an example:
public void configure() throws Exception {
    // First consumer group: receives its own copy of every message on myTopic
    from("kafka:myTopic?brokers={{kafkaBootstrapServers}}"
            + "&groupId=myFirstConsumerGroup")
        .log("Message received by myFirstConsumerGroup : ${body}");

    // Second consumer group: independently receives every message as well
    from("kafka:myTopic?brokers={{kafkaBootstrapServers}}"
            + "&groupId=mySecondConsumerGroup")
        .log("Message received by mySecondConsumerGroup : ${body}");
}
As you can see, I created two routes in the same RouteBuilder, which also means the same Java process. That's a bad design decision in most of the use cases I can think of, because the routes no longer have a single responsibility, concerns are not segregated, and they cannot scale independently. But again, it depends on your requirements and context.
For completeness, please consider taking a look at all the other Kafka component properties, as there may be many other configurations of interest to you, such as the number of consumer threads per group.
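For instance, if I remember the option name correctly, consumersCount lets a single route start several consumers (threads) within its group; a hedged sketch:

from("kafka:myTopic?brokers={{kafkaBootstrapServers}}"
        + "&groupId=myFirstConsumerGroup&consumersCount=3")
    .log("Message received by one of three consumers : ${body}");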
I tried to stay high level in order to initiate the discussion... I'll edit my answer in case of new updates from you. Hope this helps!

Related

Is Kafka the right solution for messages with dependencies?

We have messages which are dependent. For example, say we have 4 messages: M1, M2, M1_update1 (should be processed only after M1 is processed), and M3 (should be processed only after M1 and M2 are processed).
In this example, only M1 and M2 can be processed in parallel; the others have to be sequential. I know messages in one partition of a Kafka topic are processed sequentially. But how do I know that M1 and M2 have been processed, and that it is now time to push the M1_update1 and M3 messages to the topic? Is Kafka the right choice for this kind of use case? Any insight is appreciated!!
Kafka is used as a pub-sub messaging system which is highly scalable and fault tolerant.
I believe that using Kafka alone when your messages are interdependent could be a bad choice. The processing you require is condition-based; you probably need a routing or rules engine such as Camel or Drools to achieve the end result.
You're basically describing a message queue that guarantees ordering. Kafka, by design, does not guarantee ordering, except in the case you mention, where the topic has a single partition. In that case, though, you're not taking full advantage of Kafka's ability to maximize throughput by parallelizing data across partitions.
As far as messages being dependent on each other, that would require a logic layer that core Kafka itself doesn't provide. If I understand it correctly, and the processing happens after the message is consumed from Kafka, you would need some sort of notification on the consumer end, which would receive and process M1 and M2 and somehow notify the producer on the other side that it's now OK to send M1_update1 and M3. This is definitely outside the scope of what core Kafka provides. You could still use Kafka to build something like this, but there are probably other solutions that would work better for you.
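One thing worth noting: if per-entity ordering (rather than true cross-message dependencies) turns out to be enough, producing related messages with the same key keeps them in the same partition and therefore in order. A minimal sketch with the plain Kafka Java client (topic, broker and key names are made up):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedOrderingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key => same partition => M1 is stored before M1_update1
            producer.send(new ProducerRecord<>("updates", "entity-1", "M1"));
            producer.send(new ProducerRecord<>("updates", "entity-1", "M1_update1"));
        }
    }
}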

Apache Kafka Message broadcasting

I am studying Apache Kafka and have some confusion. Please help me understand the following scenario.
I have a topic with 5 partitions and 5 brokers in a Kafka cluster. I am maintaining my message order in partition 1 (say P1). I want to broadcast the messages of P1 to 10 consumers.
So my question is: how do these 10 consumers interact with topic partition P1?
This is probably not how you want to use Kafka.
Unless you're being explicit with how you set your keys, you can't really control which partition your messages end up in when producing to a topic. Partitions in Kafka are designed to be more like low-level plumbing, something that exists, but you don't usually have to interact with. On the consumer side, you will be assigned partitions based on how many consumers are active for a particular consumer group at any one time.
One way to get around this is to define a topic to have only a single partition, in which case, of course, all messages will go to that partition. This is not ideal, since Kafka won't be able to parallelize data ingestion or serving, but it is possible.
So, having said that, let's assume that you did manage to put all your messages in partition 1 of a specific topic. When you fire up a consumer of that topic with consumer group id of consumer1, it will be assigned all the partitions for that topic, since that consumer is the only active one for that particular group id. If there is only one partition for that topic, like explained above, then that consumer will get all the data. If you then fire up a second consumer with the same group id, Kafka will notice there's a second consumer for that specific group id, but since there's only one partition, it can't assign any partitions to it, so that consumer will never get any data.
On the other hand, if you fire up a third consumer with a different consumer group id, say consumer2, that consumer will now get all the data, and it won't interfere at all with consumer1 message consumption, since Kafka keeps track of their consuming offsets separately. Kafka keeps track of which offset each particular ConsumerGroupId is at on each partition, so it won't get confused if one of them starts consuming slowly or stops for a while and restarts consuming later that day.
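To make that concrete, here is a minimal sketch using the plain Kafka Java consumer; the topic name and broker address are placeholders. Run it once with group id consumer1 and once with consumer2, and each group will independently receive all the data:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Run once with "consumer1" and once with "consumer2":
        // each group independently receives every message
        props.put("group.id", args.length > 0 ? args[0] : "consumer1");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("myTopic"));
            while (true) {
                // Offsets are tracked per group id, so the groups never interfere
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}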
Much more detailed information on how Kafka works is available here: https://kafka.apache.org/documentation/#gettingStarted
And more information on how to use the Kafka consumer is at this link:
https://kafka.apache.org/20/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
@mjuarez's answer is absolutely correct - just for brevity I would reduce it to the following:
Don't try and read only from a single partition because it's a low level construct and it somewhat undermines the parallelism of Kafka. You're much better off just creating more topics if you need finer separation of data.
I would also add that most of the time a consumer needn't know which partition a message came from, in the same way that I don't eat a sandwich differently depending on which store it came from.
@mjuarez is actually not correct, and I am not sure why his answer is being falsely confirmed by the OP. You can absolutely tell Kafka explicitly which partition a producer record goes to, using the following constructor:
ProducerRecord(java.lang.String topic,
               java.lang.Integer partition,   // <--------- !!!
               java.lang.Long timestamp,
               K key,
               V value)
https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html#ProducerRecord-java.lang.String-java.lang.Integer-java.lang.Long-K-V-
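For example, assuming an existing KafkaProducer<String, String> named producer, this pins a record to partition 1 (passing a null timestamp lets the broker assign one):

// Explicitly target partition 1 of myTopic
producer.send(new ProducerRecord<>("myTopic", 1, null, "someKey", "someValue"));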
So most of what was said after that becomes irrelevant.
Now to address the OP question directly: you want to accomplish a broadcast. To have a message sent once and read more than once you would have to have a different consumer group for each reader.
And that use case is an absolutely valid Kafka usage paradigm.
You can accomplish that using RabbitMQ too:
https://www.rabbitmq.com/tutorials/tutorial-three-java.html
... but the way it is done is not ideal because multiple out-of-process queues are involved.

How to balance publishers' requests with RabbitMQ?

Suppose you have multiple producers and one consumer which wants to receive persistent messages from all available publishers.
Producers work at different speeds. Let's say that system A produces 10 requests/sec and system B 1 request/sec. So if you use a single queue you will process 10 messages from A, then 1 message from B.
But what if you want to balance the load and process one message from A, then one message from B, and so on? Consuming from multiple queues is not a good option because we can't use wildcard binding in this case.
Update:
A queue per producer seems like the best approach. Producers don't know their speed, which changes constantly. Having one queue per consumer, I can subscribe to one topic and receive messages from all available publishers. But having a queue per producer, I need to code the logic myself:
1. Get all available queues through the management plugin (AMQP doesn't allow listing queues).
2. Filter by queue name.
3. Implement a round-robin strategy.
4. Implement a notification mechanism to subscribe to new publishers that can appear at any moment.
5. Remove the now-unnecessary queue when a publisher has disappeared and the client has read all its messages.
Well, it seems pretty easy, but I thought the broker could provide all of this functionality without any coding. In the one-queue case I just create one persistent queue, bind it to a topic exchange, and then start any number of publishers that send messages to the topic. That option works almost out of the box.
I know I'm late to the party, but still.
In Azure Service Bus terms this is called "partitioning", and it's based on the partition key. The best part is that in Azure SB the receiving client is not aware of the partitioning; it simply subscribes to a single queue.
In RabbitMQ there is a consistent-hashing plugin ("rabbitmq_consistent_hash_exchange"), but unfortunately it's not that convenient. The consumers must be explicitly configured to consume from specific queues. If you have ten queues, then you need to set up your consumers so that all ten are covered.
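For reference, a minimal sketch of that plugin's setup with the RabbitMQ Java client (the exchange and queue names are made up). With this exchange type the binding key is interpreted as a numeric weight, and each message's routing key is hashed to pick a bound queue:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConsistentHashSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = new ConnectionFactory().newConnection();
             Channel channel = conn.createChannel()) {
            // Requires the rabbitmq_consistent_hash_exchange plugin to be enabled
            channel.exchangeDeclare("balanced", "x-consistent-hash");
            channel.queueDeclare("shard-1", true, false, false, null);
            channel.queueDeclare("shard-2", true, false, false, null);
            // The binding key is a numeric weight, not a routing pattern
            channel.queueBind("shard-1", "balanced", "1");
            channel.queueBind("shard-2", "balanced", "1");
            // The routing key is hashed to assign this message to one shard
            channel.basicPublish("balanced", "message-id-123", null, "payload".getBytes());
        }
    }
}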
Another two options:
Random Exchange Type
Sharding Plugin
Bear in mind that with the Sharding Plugin even though it creates "one logical queue to consume" you'll have to have as many subscribers as there are virtual queues, otherwise some of the queues will be left unconsumed.
You can use the Priority Queue Support and assign a priority according to the producer speed. With the caveat that the priorities must be set with caution (for example, if the consumer's speed is below system B's production rate, the consumer will only ever consume messages from B) and that producers must be aware of their producing speed.
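A minimal sketch of RabbitMQ's priority queue support with the Java client; the queue name and the chosen priority levels are made up:

import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PriorityQueueSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = new ConnectionFactory().newConnection();
             Channel channel = conn.createChannel()) {
            // Declare a queue that supports priorities 0..10
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-max-priority", 10);
            channel.queueDeclare("balanced-queue", true, false, false, queueArgs);

            // Slow producer (system B) publishes with higher priority than fast system A
            AMQP.BasicProperties highPriority =
                    new AMQP.BasicProperties.Builder().priority(5).build();
            channel.basicPublish("", "balanced-queue", highPriority, "from B".getBytes());
        }
    }
}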
Another option to consider is creating 3 queues according to the producing speed: HIGH, MEDIUM, LOW. The three queues are bound to the exchange with the binding key set according to the producing speed.
The consumer will then consume messages from these 3 queues using a round-robin strategy. With the caveat that producers must be aware of their producing speed.
But the best option may be a queue per producer, especially if producer speeds are not stable and cannot be categorized. That way, producers do not need to know their producing speed.

Demultiplexing messages from a queue to process in parallel streams using amqp?

I am trying to figure out if I can switch from a blocking scenario to a more reactive pattern.
I have incoming update commands arriving in a queue, and I need to handle them in order, but only those regarding the same entity. In essence, I can create as many parallel streams of update events as I wish, as long as no two streams contain events regarding the same entity.
I was thinking that the consumer of the primary queue could possibly leverage AMQP's routing mechanisms and temporary queues, by creating a temporary queue for each entity id and hooking a consumer to it. Once the subscriber is finished and no other events regarding the entity in question are currently in the queue, the queue could be disposed of.
Is this scenario something that is used regularly? Is there a better way to achieve this? In our current system we use a named lock based on the id to prevent concurrent updates.
There are at least two options:
1. A single queue for each entity, with n consumers on each entity queue.
2. One queue with the messages of all entities, where each message contains data identifying which entity it is for. You could then split this up into several queues (one AMQP queue per entity type) or use a BlockingQueue implementation in-process.
Benefits of splitting the entities into AMQP queues:
- You could create an HA setup with RabbitMQ
- You could route messages
- You could have more than one consumer of an entity queue if it is necessary someday (scalability)
- Messages could be persistent and therefore recoverable after an application crash
Benefits of using an internal BlockingQueue implementation:
- It is faster (no network I/O, obviously)
- Everything happens in one JVM
Anyway, it does depend on what you want, since both ways have their benefits.
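To illustrate the in-JVM option, here is one common sketch (the names are hypothetical, and queue disposal/shutdown is omitted): a single-threaded executor per entity id, each internally backed by a blocking queue, so updates for one entity run strictly in order while different entities proceed in parallel:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EntityDemultiplexer {
    // One single-threaded executor (backed by an internal blocking queue) per entity id
    private final Map<String, ExecutorService> perEntity = new ConcurrentHashMap<>();

    public void dispatch(String entityId, Runnable updateTask) {
        // Tasks for the same entity run sequentially;
        // tasks for different entities run in parallel
        perEntity.computeIfAbsent(entityId, id -> Executors.newSingleThreadExecutor())
                 .submit(updateTask);
    }
}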
UPDATE:
I am not sure if I fully understood you, but let me give you some resources to try things out.
There are special RabbitMQ extensions; maybe some of them can give you an idea. Take a look at alternate exchanges and exchange-to-exchange bindings.
Also, for basic testing, I am not sure if it covers all RabbitMQ features, or all AMQP features at all, but this can sometimes be useful. Keep in mind that the routing key in this visualization is the producer name; you can also find some examples there. Import and export your configuration.

JMS - How do message selectors work with multiple queue and topic consumers?

Say you have a JMS queue and multiple consumers watching the queue for messages. You want one of the consumers to get all of a particular type of message, so you decide to employ message selectors.
For example, you define a property to go in your JMS message header named targetConsumer. Your message selector, which you apply to the consumer known as A, is something like WHERE targetConsumer = 'CONSUMER_A'.
It's clear that consumer A will now just grab messages with the property set as in the example. Will the other consumers have awareness of that, though? IOW, will another consumer, unconstrained by a message selector, grab the CONSUMER_A messages if it looks at the queue before consumer A? Do I need to apply message selectors like WHERE targetConsumer <> 'CONSUMER_A' to the others?
I am RTFMing and gathering empirical data now, but was hoping someone might know off the top of their head.
When multiple consumers use the same queue, message selectors need to be configured correctly across these consumers so that there is no conflict in determining the intended consumer.
In the case of message-driven beans (a consumer of JMS messages), the selector can be specified in the ejb-jar.xml file, thereby allowing the configuration to be done at deployment time (instead of specifying the message selector during development).
Edit: In real life, this would make sense when different consumers are responsible for processing messages containing the same headers (often generated by the same producer) written onto the same queue. For instance, message selectors could be used in a trading application, to differentiate between buy and sell orders, when the producer is incapable of writing the JMS messages onto two separate buy and sell queues.
Yes, another consumer which is not using any message selector will get messages intended for consumer A (or, for that matter, any message at the top of the queue). Hence, when sharing a queue, consumer applications must be disciplined and pick only those messages intended for them.
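To illustrate with the names from the question, here is a minimal sketch against the javax.jms API; the Session and Queue are assumed to exist and connection setup is omitted:

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class SelectorSketch {
    static void demo(Session session, Queue queue) throws JMSException {
        // Producer side: stamp each message so selectors can match it
        MessageProducer producer = session.createProducer(queue);
        TextMessage msg = session.createTextMessage("payload");
        msg.setStringProperty("targetConsumer", "CONSUMER_A");
        producer.send(msg);

        // Consumer A only ever sees its own messages
        MessageConsumer consumerA =
                session.createConsumer(queue, "targetConsumer = 'CONSUMER_A'");

        // Unconstrained consumers compete for everything,
        // so they must exclude A's messages explicitly
        MessageConsumer others =
                session.createConsumer(queue, "targetConsumer <> 'CONSUMER_A'");
    }
}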
The 'first' JMS consumer on a queue will pick up a message if its selector matches. What 'first' means is an implementation detail (it could be round-robin, based on priority, or on network closeness). So when using selectors on queues, you need to make sure that the selectors are non-overlapping.
More formally: no message must exist that matches two selectors on the same queue.
This is yet another disadvantage of queues versus topics - in practice you should always consider using topics first. With a topic, each matching consumer receives the message.
