I have a situation in which Producer A writes to topics A, B and C, but the listener for topic C throws an exception. All writes are part of a transaction. I want to know if there is a way that all writes can be rolled back automatically, as if there had been no commits in the first place?
I don't think this can be achieved in Kafka out of the box. I would suggest re-thinking the design, since Kafka (or any messaging system) is not the best match for this requirement. Kafka consumers are meant to be independent pieces of business logic, like micro-services; even if one fails, it should not affect the others. If it's so critical, you may consider a single topic/web service with all required info in that one topic/request and make the client transactional. Otherwise, if it's non-critical (the failure of one topic's client does not affect the functionality of another topic's client), introduce some audit/alerting mechanism on top of the clients to make sure they come back online.
In my Spring Boot app, customers can submit files. Each customer's files are merged together by a scheduled task that runs every minute. The fact that the merging is performed by a scheduler has a number of drawbacks, e.g. it's difficult to write end-to-end tests, because in the test you have to wait for the scheduler to run before retrieving the result of the merge.
Because of this, I would like to use an event-based approach instead, i.e.
Customer submits a file
An event is published that contains this customer's ID
The merging service listens for these events and performs a merge operation for the customer in the event object
This would have the advantage of triggering the merge operation immediately after there is a file available to merge.
However, there are a number of problems with this approach which I would like some help with:
Concurrency
The merging is a reasonably expensive operation. It can take up to 20 seconds, depending on how many files are involved. Therefore the merging has to happen asynchronously, i.e. not on the same thread that publishes the merge event. Also, I don't want to perform multiple merge operations for the same customer concurrently, in order to avoid the following scenario:
Customer1 saves file2 triggering a merge operation2 for file1 and file2
A very short time later, customer1 saves file3 triggering merge operation3 for file1, file2, and file3
Merge operation3 completes saving merge-file3
Merge operation2 completes overwriting merge-file3 with merge-file2
To avoid this, I plan to process merge operations for the same customer in sequence using locks in the event listener, e.g.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

@Component
public class MergeEventListener implements ApplicationListener<MergeEvent> {

    private final ConcurrentMap<String, Lock> customerLocks = new ConcurrentHashMap<>();

    @Override
    public void onApplicationEvent(MergeEvent event) {
        var customerId = event.getCustomerId();
        var customerLock = customerLocks.computeIfAbsent(customerId, key -> new ReentrantLock());
        customerLock.lock();
        try {
            mergeFileForCustomer(customerId);
        } finally {
            // always release the lock, even if the merge throws
            customerLock.unlock();
        }
    }

    private void mergeFileForCustomer(String customerId) {
        // implementation omitted
    }
}
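A possible sketch of the asynchronous, per-customer serialization is to give each customer its own single-threaded executor, so merges for the same customer run in order while different customers are processed in parallel (the class name and the executor-per-customer strategy below are assumptions, not part of the original design):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

@Component
public class AsyncMergeEventListener implements ApplicationListener<MergeEvent> {

    // one single-threaded executor per customer: merges for the same customer
    // are serialized, merges for different customers run concurrently
    private final ConcurrentMap<String, ExecutorService> executors = new ConcurrentHashMap<>();

    @Override
    public void onApplicationEvent(MergeEvent event) {
        var customerId = event.getCustomerId();
        executors
            .computeIfAbsent(customerId, id -> Executors.newSingleThreadExecutor())
            .submit(() -> mergeFileForCustomer(customerId));
    }

    private void mergeFileForCustomer(String customerId) {
        // implementation omitted
    }
}

This keeps the publishing thread free, but the executor map grows with the number of customers and would need to be cleaned up for inactive ones.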
Fault-Tolerance
How do I recover if for example the application shuts down in the middle of a merge operation or an error occurs during a merge operation?
One of the advantages of the scheduled approach is that it contains an implicit retry mechanism, because every time it runs it looks for customers with unmerged files.
Summary
I suspect my proposed solution may be re-implementing (badly) an existing technology for this type of problem, e.g. JMS. Is my proposed solution advisable, or should I use something like JMS instead? The application is hosted on Azure, so I can use any services it offers.
If my solution is advisable, how should I deal with fault-tolerance?
Regarding the concurrency part, I think the approach with locks would work fine if the number of files submitted per customer (in a given timeframe) is small enough.
You can monitor the number of threads waiting for the lock over time to see if there is a lot of contention. If there is, maybe you can accumulate a number of merge events (over a specific timeframe) and then run a parallel merge operation, which in fact leads to a solution similar to the one with the scheduler.
In terms of fault-tolerance, an approach based on a message queue would work (I haven't worked with JMS, but I understand it is a message-queue API).
I would go with a cloud-based message queue (SQS, for example) simply for reliability reasons. The approach would be:
Push merge events into the queue
The merging service scans one event at a time and it starts the merge job
When the merge job is finished, the message is removed from the queue
That way, if something goes wrong during the merge process, the message stays in the queue and it will be read again when the app is restarted.
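A minimal sketch of that consume-then-delete loop with the AWS SDK for Java v2 (the queue URL and the mergeFilesForCustomer call are assumptions, not part of this answer); because the message is deleted only after the merge succeeds, a crash mid-merge leaves it on the queue and it becomes visible again after the visibility timeout:

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class MergeQueueConsumer {

    private final SqsClient sqs = SqsClient.create();
    // hypothetical queue URL
    private final String queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/merge-events";

    public void pollOnce() {
        ReceiveMessageRequest receive = ReceiveMessageRequest.builder()
                .queueUrl(queueUrl)
                .maxNumberOfMessages(1)   // one merge event at a time
                .waitTimeSeconds(20)      // long polling
                .build();

        for (Message message : sqs.receiveMessage(receive).messages()) {
            String customerId = message.body();
            mergeFilesForCustomer(customerId); // if this throws, the message stays on the queue

            // delete only after a successful merge
            sqs.deleteMessage(DeleteMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .receiptHandle(message.receiptHandle())
                    .build());
        }
    }

    private void mergeFilesForCustomer(String customerId) {
        // implementation omitted
    }
}

Since any queue with at-least-once semantics may deliver a message twice, the merge operation should be idempotent.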
My thoughts on this matter after some consideration.
I restricted the possible solutions to what's available as Azure managed services, per the OP's specifications.
Azure Blob Storage Function Trigger
Because this issue is about storing files, let's start by exploring Blob Storage with a trigger function that fires on file creation. According to the docs, Azure Functions can run for up to 230 seconds and have a default retry count of 5.
But this solution requires that files from a single customer arrive in a manner that does not cause concurrency issues, so let's set it aside for now.
Azure Queue Storage
Does not guarantee first-in-first-out (FIFO) ordered delivery, hence it does not meet the requirements.
Storage queues and Service Bus queues - compared and contrasted: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted
Azure Service Bus
Azure Service Bus is a FIFO queue, and seems to meet the requirements.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted#compare-storage-queues-and-service-bus-queues
From the doc above, we see that large files are not suited as message payloads. To solve this, files can be stored in Azure Blob Storage, and the message would only contain the information on where to find the file.
With Azure Service Bus and Azure Blob Storage selected, let's discuss implementation caveats.
Queue Producer
On AWS, the solution for the producer side would have been like this:
Dedicated end-point provides pre-signed URL to customer app
Customer app uploads file to S3
Lambda triggered by S3 object creation inserts message to queue
Unfortunately, Azure doesn't have a pre-signed URL equivalent yet (it has Shared Access Signatures, which are not equal), hence file uploads must be done through an endpoint which in turn stores the file in Azure Blob Storage. Since a file upload endpoint is required anyway, it seems appropriate to make that endpoint also responsible for inserting messages into the queue.
Queue Consumer
Because file merging takes a significant amount of time (~20 seconds), it should be possible to scale out the consumer side. With multiple consumers, we have to make sure that a single customer is processed by no more than one consumer instance.
This can be solved by using message sessions: https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sessions
In order to achieve fault tolerance, the consumer should use peek-lock (as opposed to receive-and-delete) during the file merge and mark the message as completed when the merge is finished. When the message is marked as completed, the consumer may also be responsible for removing the superfluous files from Blob Storage.
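A rough sketch of such a session-aware, peek-lock consumer with the azure-messaging-servicebus Java SDK (queue name, the session-per-customer convention and the merge call are assumptions):

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceivedMessageContext;

public class MergeQueueProcessor {

    public static ServiceBusProcessorClient buildProcessor(String connectionString) {
        return new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .sessionProcessor()                 // sessions: one customer handled by one consumer at a time
                .queueName("merge-events")          // hypothetical queue name
                .maxConcurrentSessions(4)
                .processMessage(MergeQueueProcessor::handleMessage)
                .processError(context -> System.err.println("Error: " + context.getException()))
                .buildProcessorClient();
    }

    private static void handleMessage(ServiceBusReceivedMessageContext context) {
        ServiceBusReceivedMessage message = context.getMessage();
        String customerId = message.getBody().toString();   // assuming customer id as payload / session id
        try {
            mergeFilesForCustomer(customerId);
            context.complete();                              // settle only after a successful merge (peek-lock)
        } catch (Exception e) {
            context.abandon();                               // message becomes available again
        }
    }

    private static void mergeFilesForCustomer(String customerId) {
        // implementation omitted
    }
}

The processor is started with buildProcessor(connectionString).start(); because a session is locked to one receiver at a time, two instances of the merging service cannot process the same customer concurrently.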
Possible problems with both existing solution and future solution
If customer A starts uploading a huge file #1 and immediately afterwards starts uploading a small file #2, the upload of file #2 may complete before file #1 and cause an out-of-order situation.
I assume this is an issue that the existing solution handles with some kind of locking mechanism or file-naming convention.
Spring Boot with Kafka can solve your fault-tolerance problem.
Kafka supports the producer-consumer model. Let the customer events be posted to a Kafka producer.
Configure Kafka with replication so that no events are lost.
Use consumers that invoke the merging service for each event.
Once the consumer has read the event for a customerId and the merge is done, commit the offset.
If a failure occurs in the middle of merging, the offset is not committed, so the event can be read again when the application starts up again.
If the merging service can detect a duplicate event from the given data, reprocessing the same message should not cause any issue (Kafka's default guarantee is at-least-once delivery, so duplicates are possible). Duplicate-event detection is a safety check for an event that was fully processed but whose offset could not be committed to Kafka.
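A minimal sketch of such a listener with Spring for Apache Kafka (topic, group and merge call are assumptions; the listener container must be configured for manual acknowledgment, e.g. spring.kafka.listener.ack-mode=manual in Spring Boot, for the Acknowledgment parameter to be injected):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class MergeEventConsumer {

    @KafkaListener(topics = "merge-events", groupId = "merge-service") // hypothetical topic/group
    public void onMergeEvent(String customerId, Acknowledgment ack) {
        // if this throws, the offset is not committed and the event is redelivered
        mergeFilesForCustomer(customerId);

        // commit the offset only after a successful merge
        ack.acknowledge();
    }

    private void mergeFilesForCustomer(String customerId) {
        // implementation omitted
    }
}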
First, an event-based approach is correct for this scenario. You should use an external broker for pub-sub event messages.
Note that, by default, Spring publishes application events synchronously.
Suppose that, you have 3 services:
App Service
Merge Service
CDC Service (change data capture)
Broker Service (Kafka, RabbitMQ,...)
Main flow, based on the Outbox Pattern:
App Service saves the event message to an Outbox message table
CDC Service watches the outbox table and publishes event messages from the Outbox table to the Broker Service
Merge Service subscribes to the Broker Service and receives the event messages (messages are ordered)
Merge Service performs the merge action
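A minimal sketch of the App Service side (the repository and entity types are hypothetical, assumed to exist); the point is that the business data and the outbox row are written in the same database transaction, so an event is recorded if and only if the file submission is committed:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FileSubmissionService {

    private final FileRepository fileRepository;              // hypothetical JPA repositories
    private final OutboxMessageRepository outboxRepository;

    public FileSubmissionService(FileRepository fileRepository,
                                 OutboxMessageRepository outboxRepository) {
        this.fileRepository = fileRepository;
        this.outboxRepository = outboxRepository;
    }

    @Transactional
    public void submitFile(String customerId, StoredFile file) {
        // both inserts commit or roll back together
        fileRepository.save(file);
        outboxRepository.save(new OutboxMessage("MergeRequested", customerId));
    }
}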
You can use the Eventuate library for this flow.
Furthermore, you can apply DDD to your architecture: use the Axon framework for the CQRS pattern, publish domain events and process them.
Refer to:
Outbox pattern: https://microservices.io/patterns/data/transactional-outbox.html
It really sounds like you could do with a stream-processing or ETL tool for this job. When you are developing an app and you have some prioritisation/queuing/batching requirement, it is easy to see how you can build a solution with a cron job + SQL database, with maybe a queue to decouple doing the work from producing the work.
This may very well be the easiest thing to build, as you have a lot of granularity and control with this approach. If you believe you can in fact meet your requirements this way fairly quickly and with low risk, you can do so.
There are software components which are more tailored to these tasks, but they do have learning curves and depend on what PaaS or cloud you may be using. You'll get monitoring, scalability, availability and resiliency out of the box. An open-source or cloud service will take the burden of management off your hands.
What to use will also depend on your priorities and requirements. If you want to go with the ETL approach, which is great at batching up jobs, you might want to use something like AWS Glue. If you want prioritisation functionality you may want to use multiple queues; it really depends. Regardless of the approach, you'll also want a monitoring dashboard to see what wait time to expect for your merges.
I am creating two Apache Camel (Blueprint XML) Kafka projects: one is kafka-producer, which accepts requests and stores them in the Kafka server, and the other is kafka-consumer, which picks up messages from the Kafka server and processes them.
This setup is working fine for a single topic and a single consumer. However, how do I create separate consumer groups within the same Kafka topic? And how do I route consumer-specific messages within the same topic to different consumer groups? Any help is appreciated. Thank you.
Your question is quite general and it's not very clear what problem you are trying to solve, so it's hard to tell whether there is a better way to implement the solution.
Anyway, let's start by saying that, as far as I can understand, you are looking for a Selective Consumer (EIP), which is something that's not supported out of the box by Kafka and the Consumer API. A Selective Consumer can choose which messages to pick from the queue or topic based on specific selector values set in advance by the producer. This feature must be implemented in the message broker as well, but Kafka has no such capability.
Kafka implements a hybrid between pure pub/sub and a queue. That being said, what you can do is subscribe to the topic with one or more consumer groups (more on that later) and filter out all messages you're not interested in by inspecting the messages themselves. In the messaging and EIP world, this pattern is known as an Array of Filters. As you can imagine, this happens after the message has been broadcast to all subscribers; therefore, if that solution does not fit your requirements or context, you can think about implementing a Content-Based Router, which dispatches a message only to a subset of consumers under your centralized control (this would imply intermediate consumer-specific channels, which could be other Kafka topics or seda/VM queues, of course).
Moving to the second question, here is the official Kafka Component website: https://camel.apache.org/components/latest/kafka-component.html.
In order to create different consumer groups, you just have to define multiple routes, each of them having a dedicated groupId. By adding the groupId property, you inform the Consumer Group coordinators (which reside in the Kafka brokers) of the existence of multiple separate groups of consumers, and the brokers will use that in order to discriminate and treat them separately (by sending each group a copy of every message stored in the topic)...
Here is an example:
public void configure() throws Exception {
    from("kafka:myTopic?brokers={{kafkaBootstrapServers}}" +
            "&groupId=myFirstConsumerGroup")
        .log("Message received by myFirstConsumerGroup : ${body}");

    from("kafka:myTopic?brokers={{kafkaBootstrapServers}}" +
            "&groupId=mySecondConsumerGroup")
        .log("Message received by mySecondConsumerGroup : ${body}");
}
As you can see, I created two routes in the same RouteBuilder, not to say in the same Java process. That's a very bad design decision in most of the use cases I can think of, because there is no single responsibility, concerns are not segregated, and the two routes will not scale independently. But again, it depends on your requirements/context.
For the sake of completeness, please consider taking a look at all the other Kafka component properties, as there may be many other configurations of interest to you, such as the number of consumer threads per group.
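For example, if I remember the option name correctly, the consumersCount endpoint parameter controls how many consumer threads a route starts for its group (please verify the exact option against the component docs for your Camel version):

from("kafka:myTopic?brokers={{kafkaBootstrapServers}}" +
        "&groupId=myFirstConsumerGroup" +
        "&consumersCount=3")   // three consumer threads in this group
    .log("Message received by myFirstConsumerGroup : ${body}");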
I tried to stay high level, in order to initiate the discussion... I'll edit my answer in case of new updates from you. Hope I helped!
I wish to avoid sending duplicate messages to a Kafka topic.
What is the ideal way to achieve it ?
Using the Java client for Apache Kafka, is there any way to verify whether a message already exists before invoking KafkaProducer.send?
I am referring to this doc
Currently (Kafka 0.10.1), there is no way to have exactly-once delivery on write with Kafka. No matter what workaround you choose, there will always be a gap and you can end up with either lost messages or duplicates.
However, Kafka will add an idempotent producer (planned for 0.10.2) that will allow you to avoid duplicate writes. The target date for 0.10.2 release is beginning 2017.
It is impractical to check whether the same message has already been delivered every time you send a new one. Think of it another way: you can invoke the KafkaProducer.send method with a callback that notifies you of success or failure.
That's pretty much out of scope for Kafka. You need to do that using a different storage that provides proper indexing for random access.
Depending on your needs, that can be (distributed) cache, a key-value store or whatever.
You'll probably want to do that on the consumer side rather than the producer side, as different consumers may use different de-duplication strategies (and some consumers may simply tolerate duplicates).
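As a rough illustration of consumer-side de-duplication (the message-id extraction and the bounded in-memory set are assumptions; a real deployment would more likely use a shared cache or key-value store as suggested above, and this sketch is not thread-safe):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class DeduplicatingHandler {

    // bounded, insertion-ordered set of recently seen message ids (in-memory only)
    private final Set<String> seenIds = Collections.newSetFromMap(
            new LinkedHashMap<String, Boolean>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > 100_000; // evict the oldest ids beyond this bound
                }
            });

    public void handle(String messageId, String payload) {
        // add() returns false if the id was already present, i.e. a duplicate
        if (!seenIds.add(messageId)) {
            return; // duplicate, skip processing
        }
        process(payload);
    }

    private void process(String payload) {
        // business logic omitted
    }
}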
Suppose you have multiple producers and one consumer which wants to receive persistent messages from all publishers available.
Producers work at different speeds. Let's say that system A produces 10 requests/sec and system B 1 request/sec. So if you use a single queue you will process 10 messages from A and then 1 message from B.
But what if you want to balance the load and process one message from A, then one message from B, and so on? Consuming from multiple queues is not a good option because we can't use wildcard binding in this case.
Update:
A queue per producer seems to be the best approach. Producers don't know their speed, which changes constantly. Having one queue per consumer, I can subscribe to one topic and receive messages from all publishers available. But having a queue per producer, I need to code the logic myself:
Get all available queues through the management plugin (AMQP doesn't allow listing queues).
Filter by queue name.
Implement a round-robin strategy.
Implement a notification mechanism to subscribe to new publishers that can appear at any moment.
Remove the queue when a publisher has disappeared and the client has read all its messages.
Well, it seems pretty easy, but I thought the broker could provide all of this functionality without any coding. With a single queue I just create one persistent queue, bind it to a topic exchange and then start any number of publishers that send messages to the topic. That option works almost out of the box.
I know I'm late to the party, but still.
In Azure Service Bus terms it's called "partitioning" and it's based on the partition key. The best part is in Azure SB the receiving client is not aware of the partitioning, it simply subscribes to the single queue.
In RabbitMQ there is a consistent-hash exchange plugin ("rabbitmq_consistent_hash_exchange"), but unfortunately it's not that convenient. The consumers must be explicitly configured to consume from specific queues. If you have ten queues, you need to set up your consumers so that all ten are covered.
Another two options:
Random Exchange Type
Sharding Plugin
Bear in mind that with the Sharding Plugin even though it creates "one logical queue to consume" you'll have to have as many subscribers as there are virtual queues, otherwise some of the queues will be left unconsumed.
You can use the Priority Queue Support and assign a priority according to the producer speed. The caveat is that priorities must be set with care (for example, if the consumer is slower than system B, it will only ever consume messages from B) and producers must be aware of their producing speed.
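A minimal sketch of the priority-queue option with the RabbitMQ Java client (queue name, priority range and the producer-to-priority mapping are assumptions):

import java.util.Map;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PriorityPublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // declare a priority queue supporting priorities 0..10
            channel.queueDeclare("merge-requests", true, false, false,
                    Map.<String, Object>of("x-max-priority", 10));

            // the slow producer (system B) publishes with a higher priority than the fast one (system A)
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .priority(8)
                    .build();
            channel.basicPublish("", "merge-requests", props, "message from B".getBytes());
        }
    }
}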
Another option to consider is creating three queues according to producing speed: HIGH, MEDIUM and LOW. The three queues are bound to the exchange with binding keys set according to the producing speed.
The consumer will consume messages from these three queues using a round-robin strategy, with the caveat that producers must be aware of their producing speed.
But the best option may be a queue per producer, especially if producer speed is not stable and cannot be categorized. That way, producers do not need to know their producing speed.
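To make the queue-per-producer idea concrete, here is a rough sketch of round-robin consumption by polling each known queue in turn with the RabbitMQ Java client (the queue-discovery step via the management API is omitted and the queue list is assumed to be provided):

import java.util.List;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.GetResponse;

public class RoundRobinConsumer {

    private final Channel channel;
    private final List<String> producerQueues; // e.g. discovered via the management plugin

    public RoundRobinConsumer(Channel channel, List<String> producerQueues) {
        this.channel = channel;
        this.producerQueues = producerQueues;
    }

    // take at most one message from each producer's queue per round
    public void consumeOneRound() throws Exception {
        for (String queue : producerQueues) {
            GetResponse response = channel.basicGet(queue, false); // manual ack
            if (response != null) {
                process(new String(response.getBody()));
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            }
        }
    }

    private void process(String message) {
        // business logic omitted
    }
}

Polling with basicGet is less efficient than push-based consumers, but it makes the one-message-per-producer-per-round behaviour easy to reason about.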