Spring Integration - ReleaseStrategy that looks at messages that weren't yet added - java

I have messages that are fetched into the system in groups (let's say 50). They need to be grouped by AGGREGATION_ID into lists of messages and sent further into the flow.
I can use a correlation strategy to aggregate by that id, but I need to know when to release the aggregated message. In a ReleaseStrategy I can only look at the messages already added to the group, whereas I need to know when there are no more messages with the same AGGREGATION_ID left in the fetched group of 50, so I know when to send the group. How can I do that?

A ReleaseStrategy can be any bean with full access to the whole application context. If you have some place where you store those messages before aggregation, you can certainly look into that place from a custom ReleaseStrategy implementation.
On the other hand, I would suggest taking a look at the groupTimeout option of the aggregator: https://docs.spring.io/spring-integration/docs/5.3.0.M4/reference/html/message-routing.html#agg-and-group-to. With the normal behavior your groups are gathered according to the expected size of 50, but when there are no new messages for a group during some period, the group is released with whatever is there so far. You can also configure that groupTimeout as a SpEL expression, so there is access to the application context there, too.
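For illustration, here is a minimal sketch of such an aggregator with the Java DSL; the channel names, the AGGREGATION_ID header, the group size of 50 and the 5-second timeout are assumptions taken from the question, not fixed requirements:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class AggregationConfig {

    @Bean
    public IntegrationFlow aggregationFlow() {
        return IntegrationFlows.from("inputChannel")
                .aggregate(a -> a
                        // correlate by the custom header carrying the aggregation id
                        .correlationExpression("headers['AGGREGATION_ID']")
                        // release as soon as the assumed batch size of 50 is reached...
                        .releaseStrategy(group -> group.size() == 50)
                        // ...or release whatever is there after 5 seconds without new messages
                        .groupTimeout(5000)
                        .sendPartialResultOnExpiry(true))
                .channel("outputChannel")
                .get();
    }
}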

Related

Which component runs earlier in the flow: RecordFilterStrategy or ConsumerInterceptor?

I have a Kafka consumer in my spring-boot application, and I need to filter consumed messages. On the filtered-out messages I want to perform some actions without handing them to the consumer; the remaining messages should continue through my application flow. Can I combine the two approaches, RecordFilterStrategy and ConsumerInterceptor, for that purpose, or can I only use one of them? And what is the best way to filter messages and perform actions on them with Kafka?
You could use neither and put conditional statements directly in the listener method.
But an interceptor would run before Spring gets hold of the record.
You could also use Kafka Streams filter/branch operators to optionally process or drop records.
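A minimal sketch of the first suggestion, assuming String payloads; the topic name and the shouldProcess()/handleFiltered()/process() helpers are placeholders for your own logic:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class FilteringListener {

    @KafkaListener(topics = "incoming-topic") // assumed topic name
    public void listen(ConsumerRecord<String, String> record) {
        if (!shouldProcess(record.value())) {
            handleFiltered(record);   // side action for filtered-out records
            return;                   // stop here; the record never reaches the normal flow
        }
        process(record.value());      // continue the normal application flow
    }

    // placeholder helpers standing in for your own logic
    private boolean shouldProcess(String value) { return !value.startsWith("IGNORE"); }
    private void handleFiltered(ConsumerRecord<String, String> record) { /* e.g. audit it */ }
    private void process(String value) { /* business processing */ }
}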

Best Practice for Kafka rollback scenario in microservices [duplicate]

We have a micro-services architecture, with Kafka used as the communication mechanism between the services. Some of the services have their own databases. Say the user makes a call to Service A, which should result in a record (or set of records) being created in that service’s database. Additionally, this event should be reported to other services, as an item on a Kafka topic. What is the best way of ensuring that the database record(s) are only written if the Kafka topic is successfully updated (essentially creating a distributed transaction around the database update and the Kafka update)?
We are thinking of using spring-kafka (in a Spring Boot WebFlux service), and I can see that it has a KafkaTransactionManager, but from what I understand this is more about Kafka transactions themselves (ensuring consistency across the Kafka producers and consumers), rather than synchronising transactions across two systems (see here: “Kafka doesn't support XA and you have to deal with the possibility that the DB tx might commit while the Kafka tx rolls back.”). Additionally, I think this class relies on Spring’s transaction framework which, at least as far as I currently understand, is thread-bound, and won’t work if using a reactive approach (e.g. WebFlux) where different parts of an operation may execute on different threads. (We are using reactive-pg-client, so are manually handling transactions, rather than using Spring’s framework.)
Some options I can think of:
1. Don’t write the data to the database: only write it to Kafka. Then use a consumer (in Service A) to update the database. This seems like it might not be the most efficient, and will have problems in that the service which the user called cannot immediately see the database changes it should have just created.
2. Don’t write directly to Kafka: write to the database only, and use something like Debezium to report the change to Kafka. The problem here is that the changes are based on individual database records, whereas the business significant event to store in Kafka might involve a combination of data from multiple tables.
3. Write to the database first (if that fails, do nothing and just throw the exception). Then, when writing to Kafka, assume that the write might fail. Use the built-in auto-retry functionality to get it to keep trying for a while. If that eventually completely fails, try to write to a dead letter queue and create some sort of manual mechanism for admins to sort it out. And if writing to the DLQ fails (i.e. Kafka is completely down), just log it some other way (e.g. to the database), and again create some sort of manual mechanism for admins to sort it out.
Anyone got any thoughts or advice on the above, or able to correct any mistakes in my assumptions above?
Thanks in advance!
I'd suggest using a slightly altered variant of approach 2.
Write into your database only, but in addition to the actual table writes, also write "events" into a special table within that same database; these event records would contain the aggregations you need. In the simplest case, you'd just persist another entity, e.g. mapped by JPA, which contains a JSON property with the aggregate payload. Of course this could be automated by some means of transaction listener / framework component.
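A minimal sketch of such an event entity, assuming JPA with a plain String column for the JSON payload (the table and column names are illustrative); you would persist it in the same transaction that writes the actual order tables:

import java.time.Instant;
import java.util.UUID;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;

@Entity
@Table(name = "outbox_events") // the table Debezium would later capture
public class OutboxEvent {

    @Id
    private UUID id = UUID.randomUUID();

    @Column(name = "aggregate_type")
    private String aggregateType;   // e.g. "Order"

    @Column(name = "aggregate_id")
    private String aggregateId;     // id of the order this event belongs to

    @Column(name = "event_type")
    private String eventType;       // e.g. "OrderCreated"

    @Lob
    @Column(name = "payload")
    private String payload;         // the aggregated business event as JSON

    @Column(name = "created_at")
    private Instant createdAt = Instant.now();

    protected OutboxEvent() {       // required by JPA
    }

    public OutboxEvent(String aggregateType, String aggregateId, String eventType, String payload) {
        this.aggregateType = aggregateType;
        this.aggregateId = aggregateId;
        this.eventType = eventType;
        this.payload = payload;
    }
}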
Then use Debezium to capture the changes just from that table and stream them into Kafka. That way you have both: eventually consistent state in Kafka (the events in Kafka may trail behind or you might see a few events a second time after a restart, but eventually they'll reflect the database state) without the need for distributed transactions, and the business level event semantics you're after.
(Disclaimer: I'm the lead of Debezium; funnily enough I'm just in the process of writing a blog post discussing this approach in more detail)
Here are the posts
https://debezium.io/blog/2018/09/20/materializing-aggregate-views-with-hibernate-and-debezium/
https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/
First of all, I have to say that I'm neither a Kafka nor a Spring expert, but I think this is more of a conceptual challenge when writing to independent resources, and the solution should be adaptable to your technology stack. Furthermore, I should say that this solution tries to solve the problem without an external component like Debezium, because in my opinion each additional component brings challenges in testing, maintaining and running an application, which is often underestimated when choosing such an option. Also, not every database can be used as a Debezium source.
To make sure that we are talking about the same goals, let's clarify the situation with a simplified airline example where customers can buy tickets. After a successful order the customer will receive a message (mail, push notification, ...) that is sent by an external messaging system (the system we have to talk to).
In a traditional JMS world with an XA transaction between our database (where we store orders) and the JMS provider, it would look like this: the client sends the order to our app, where we start a transaction. The app stores the order in its database. Then the message is sent to JMS and we can commit the transaction. Both operations participate in the transaction even though they are talking to their own resources. As the XA transaction guarantees ACID, we're fine.
Let's bring Kafka (or any other resource that is not able to participate in the XA transaction) into the game. As there is no coordinator that syncs both transactions anymore, the main idea of the following is to split processing into two parts with a persistent state.
When you store the order in your database, you can also store the message (with aggregated data) that you want to send to Kafka afterwards in the same database (e.g. as JSON in a CLOB column). Same resource, so ACID is guaranteed and everything is fine so far. Now you need a mechanism that polls your “KafkaTasks” table for new tasks that should be sent to a Kafka topic (e.g. with a timer service; in Spring the @Scheduled annotation can be used). After the message has been successfully sent to Kafka, you can delete the task entry. This ensures that the message to Kafka is only sent when the order is also successfully stored in the application database. Did we achieve the same guarantees as with an XA transaction? Unfortunately no, as there is still the chance that writing to Kafka works but the deletion of the task fails. In that case the retry mechanism (you would need one, as mentioned in your question) would reprocess the task and send the message twice. If your business case is happy with this “at-least-once” guarantee, you're done here with an imho semi-complex solution that could easily be implemented as framework functionality, so not everyone has to bother with the details.
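A rough sketch of such a poller, assuming Spring's @Scheduled, a JdbcTemplate, a kafka_tasks table with id/topic/payload columns and a KafkaTemplate; all of these names are illustrative:

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class KafkaTaskPoller {

    private final JdbcTemplate jdbc;
    private final KafkaTemplate<String, String> kafka;

    public KafkaTaskPoller(JdbcTemplate jdbc, KafkaTemplate<String, String> kafka) {
        this.jdbc = jdbc;
        this.kafka = kafka;
    }

    @Scheduled(fixedDelay = 1000) // assumed polling interval of one second
    public void publishPendingTasks() {
        List<Map<String, Object>> tasks =
                jdbc.queryForList("SELECT id, topic, payload FROM kafka_tasks ORDER BY id");
        for (Map<String, Object> task : tasks) {
            try {
                // block until the broker acknowledges, so a failure keeps the task for the next run
                kafka.send((String) task.get("topic"), (String) task.get("payload")).get();
                jdbc.update("DELETE FROM kafka_tasks WHERE id = ?", task.get("id"));
            } catch (Exception e) {
                // leave the task in the table; it will be retried on the next poll (at-least-once)
                return;
            }
        }
    }
}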
If you need “exactly-once” then you cannot store your state in the application database (in this case the “deletion of a task” is the state) but instead must store it in Kafka (assuming that you have ACID guarantees between two Kafka topics). An example: let's say you have 100 tasks in the table (IDs 1 to 100) and the task job processes the first 10. You write your Kafka messages to their topic and another message with the ID 10 to “your topic”, all in the same Kafka transaction. In the next cycle you consume your topic (the value is 10) and take this value to fetch the next 10 tasks (and delete the already processed ones).
If there are easier (in-application) solutions with the same guarantees, I'm looking forward to hearing about them!
Sorry for the long answer but I hope it helps.
All the approaches described above are valid ways to tackle the problem and are well-defined patterns. You can explore them via the links below.
Pattern: Transactional outbox
Publish an event or message as part of a database transaction by saving it in an OUTBOX in the database.
http://microservices.io/patterns/data/transactional-outbox.html
Pattern: Polling publisher
Publish messages by polling the outbox in the database.
http://microservices.io/patterns/data/polling-publisher.html
Pattern: Transaction log tailing
Publish changes made to the database by tailing the transaction log.
http://microservices.io/patterns/data/transaction-log-tailing.html
Debezium is a valid answer but (as I've experienced) it can require the extra overhead of running an extra pod and making sure that pod doesn't fall over. This could just be me griping about a few back-to-back incidents where pods OOM-errored and didn't come back up, networking rule rollouts dropped some messages, and WAL access to an AWS Aurora DB started behaving oddly... It seems that everything that could have gone wrong, did. I'm not saying Debezium is bad; it's fantastically stable, but for devs, running it often becomes a networking skill rather than a coding skill.
As a KISS solution using normal coding techniques that will work 99.99% of the time (and inform you of the 0.01%), you could do the following (a sketch follows the steps):
1. Start a transaction.
2. Synchronously save to the DB; if that fails, bail out.
3. Asynchronously send the message to Kafka.
4. Block until the topic reports that it has received the message.
5. If it times out or fails, abort the transaction; if it succeeds, commit the transaction.
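A rough sketch of those steps with Spring's @Transactional, a JdbcTemplate and a KafkaTemplate; the table, columns, topic name and 5-second timeout are assumptions:

import java.util.concurrent.TimeUnit;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final JdbcTemplate jdbc;
    private final KafkaTemplate<String, String> kafka;

    public OrderService(JdbcTemplate jdbc, KafkaTemplate<String, String> kafka) {
        this.jdbc = jdbc;
        this.kafka = kafka;
    }

    @Transactional(rollbackFor = Exception.class)
    public void createOrder(String orderId, String eventJson) throws Exception {
        // 1-2: insert inside the DB transaction; a failure here simply propagates
        jdbc.update("INSERT INTO orders (id, payload) VALUES (?, ?)", orderId, eventJson);
        // 3-4: send asynchronously, then block until the broker acknowledges (or the timeout hits)
        kafka.send("order-events", eventJson).get(5, TimeUnit.SECONDS);
        // 5: returning normally commits the DB transaction; a timeout or send failure throws
        //    and rolls the insert back
    }
}

Note that this is still not a true distributed transaction: the Kafka send can succeed and the subsequent DB commit can still fail, so you trade a lost message for a possible duplicate or orphaned event.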
I'd suggest using a newer approach, the 2-phase message. With this approach much less code is needed, and you don't need Debezium any more.
https://betterprogramming.pub/an-alternative-to-outbox-pattern-7564562843ae
For this new approach, what you need to do is:
1. When writing to your database, write an event record to an auxiliary table.
2. Submit a 2-phase message to DTM.
3. Write a service that queries whether an event is saved in the auxiliary table.
With the help of the DTM SDK, you can accomplish the above 3 steps with about 8 lines of Go, much less code than other solutions.
// create the 2-phase message and register the branch that will be delivered once the local transaction commits
msg := dtmcli.NewMsg(DtmServer, gid).
    Add(busi.Busi+"/TransIn", &TransReq{Amount: 30})
// run the local database transaction and submit the message to DTM atomically
err := msg.DoAndSubmitDB(busi.Busi+"/QueryPrepared", db, func(tx *sql.Tx) error {
    return AdjustBalance(tx, busi.TransOutUID, -req.Amount)
})
// back-check endpoint that DTM calls to find out whether the local transaction committed
app.GET(BusiAPI+"/QueryPrepared", dtmutil.WrapHandler2(func(c *gin.Context) interface{} {
    return MustBarrierFromGin(c).QueryPrepared(db)
}))
Each of your original options has its disadvantages:
1. The user cannot immediately see the database changes they have just created.
2. Debezium captures the log of the database, which may be much larger than the events you want. Deployment and maintenance of Debezium is also not an easy job.
3. The "built-in auto-retry functionality" is not cheap; it may require a lot of code or maintenance effort.

Is it possible to replay events via RabbitMQ with Axon 3?

I have an application built with the Axon 3 framework.
There are 2 instances (JVMs).
The first one handles commands and notifies the second one via RabbitMQ to construct a read-model database.
There is an event store for this application (MongoDB).
Now I want to build a third instance. Is it possible to replay all historic events of the first instance via RabbitMQ to construct the initial state of the third instance, and how do I configure it?
I searched the Axon docs for an answer; it seems that I should use a TrackingEventProcessor instead of the default SubscribingEventProcessor, but it cannot be used with the SpringAMQPMessageSource (mentioned in the docs).
Axon has two modes: tracking and subscribing. Depending on the source of your events, you can choose either one, or sometimes both styles.
AMQP is a specification for a message broker. Once a message is delivered, it is removed from the queue it was placed on. Therefore, conceptually, it is impossible to replay those events, since they don't exist in the broker anymore.
If replays are important, make sure you use a messaging mechanism that stores the messages. In Axon, the EventStore does exactly that. For now, Axon only has the EmbeddedEventStore, but you could have an event store in the receiving node point to the same database as the sending node.
At the moment, at AxonIQ, we are working on an Event Store Server that deals with this in a cleaner way (no need to share data sources between instances).
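As a rough sketch for the third instance (Axon 3 with Spring Boot; the processing group name is illustrative, and it assumes this instance's EventStore is configured against the shared MongoDB event store), the read-model handlers can be registered as a tracking processor so they read from the store instead of from RabbitMQ:

import org.axonframework.config.EventHandlingConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TrackingConfig {

    @Autowired
    public void configure(EventHandlingConfiguration config) {
        // switch this processing group to tracking mode, so it replays from the event store
        // rather than consuming (non-replayable) messages from the AMQP queue
        config.registerTrackingProcessor("readModelProjections");
    }
}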

JMS Queue Split. Enterprise Integration. Apache Camel

I have a third-party application that puts messages onto a JMS queue.
I also have an application that reads the messages from this queue. Depending on the type of the message, it either saves the message to the DB or sends it to a third-party service. We also shouldn't exceed some fixed calls-per-second limit, so as not to overload the third party.
Currently, two solutions come to mind for this use case.
The first one is to ask the third party to send some custom headers, so that the JMS consumer can filter messages using JMS selectors. In this case we can create two consumers: the first one reads messages and saves them to the DB, and the second one uses some throttling/polling mechanism to send messages to the third party at a particular rate.
However, this approach won't work for me because it will take ages for the third party to add those custom headers. Something like this in Camel:
from("jms:queue?selector=firstSelector")
.bean(dbSaver);
from("jms:queue?selector=secondSelector")
.throttle(10)
.bean(httpClient);
The second one is to create two more JMS queues and a processor that splits the messages between them; then the same logic as in the first solution follows. However, this means that 2 extra JMS queues have to be added. In Camel:
from("jms:parentQueue")
.choice()
.when(body().contains(...))
.to("jms:fistChildQueue")
.otherwise()
.to("jms:secondChildQueue")
.end()
from("jms:fistChildQueue")
.bean(dbSaver);
from("jms:secondChildQueue")
.throttle(10)
.bean(httpClient);
Also, I've been thinking of using two in-memory queues instead of JMS queues. However, in that case, if there are a lot of messages in the JMS queue, we could easily run into memory trouble.
Could anyone suggest an architectural design for this use case? It would be great to see it in Camel route style.
1. Do you really need a queue for the flow to the DB? You could have bean(dbSaver) in the first route, or abstract it into a "direct" route instead of a JMS-consuming route. This way, you have two queues instead of three (see the sketch after this list).
2. A second approach: if you have control over the DB, you could write the second type of message to a different table. Then an SQL consumer can poll for records, deleting them as it consumes them and passing them onward to the HTTP service. However, the table would be acting like a "roll your own queue". Probably more work for little payback, so maybe a second queue is better.
3. Finally, I wonder if you could reuse the same queue. I see an option that would allow you to write back to the same queue. You could add a header and write back certain messages. This could look confusing, and a bug could create an infinite loop.
If you already use JPA, the second approach could be made a bit easier by using the camel-jpa component. As a consumer, it reads and deletes records (default behavior). I don't think the SQL/JDBC components have anything like that out of the box.
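A minimal sketch of the first suggestion in the Camel Java DSL (inside a RouteBuilder's configure() method), assuming the routing decision can be made from the message body; the DB branch is handled inline, so only the throttled branch needs its own queue:

from("jms:parentQueue")
    .choice()
        .when(body().contains("TYPE_A"))   // assumed predicate for DB-bound messages
            .bean(dbSaver)                 // save directly, no extra queue needed
        .otherwise()
            .to("jms:thirdPartyQueue");    // buffer the rest for the throttled consumer

from("jms:thirdPartyQueue")
    .throttle(10)                          // at most 10 messages per second to the third party
    .bean(httpClient);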

Dynamic dispatching messages design consultation

I have a design question and would like some suggestions.
I am using WebLogic 11g and EJB 3.0.
I have a system whose purpose is to retrieve messages and dispatch them to a couple of resources (databases).
Each message contains information and a target database key.
So here is an example flow:
I receive a message and a target key from a web service. I parse the message and dispatch it to the right database via the target key.
A message may target more than one database (for example, the key 'all' means that I should dispatch to all databases).
If an insert fails in one of the resources, a rollback occurs and a retry re-executes the whole operation.
So now this is my issue:
Should I make the number of queues equal to the number of my resources (and dedicate each queue to a specific resource)?
(In this case I parse each message and just send it to the right queue, while an MDB listens and does the insertion into the right database.)
If I do it that way I won't be able to make it dynamic, meaning that each time a resource needs to be added or removed in the future, I will have to open the code and make the right changes.
What do you designers think? How could I implement it more dynamically?
Thanks for your help,
ray.
I think using a queue for each destination resource (database) is fine. And you should not need to change code to add and remove these endpoints, as long as they are driven from a configuration file. Your dispatcher should have a config that lists the destination queue for each "target database key". The processes/components that monitor the endpoint queues similarly need configuration such as a DB connection string or whatever.
Now, adding a new resource type would require you to rev your code, but that is to be expected.
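A minimal sketch of such a config-driven dispatcher, assuming a properties file that maps each target database key to one or more queue names (all names are illustrative, and the JMS send itself is stubbed out):

// dispatch.properties (illustrative):
//   db1=jms/queue.db1
//   db2=jms/queue.db2
//   all=jms/queue.db1,jms/queue.db2

import java.io.InputStream;
import java.util.Properties;

public class QueueDispatcher {

    private final Properties targetToQueues = new Properties();

    public QueueDispatcher() throws Exception {
        try (InputStream in = getClass().getResourceAsStream("/dispatch.properties")) {
            targetToQueues.load(in);   // adding or removing a resource only touches this file
        }
    }

    public void dispatch(String targetKey, String message) {
        String queues = targetToQueues.getProperty(targetKey);
        if (queues == null) {
            throw new IllegalArgumentException("Unknown target database key: " + targetKey);
        }
        for (String queueName : queues.split(",")) {
            sendToQueue(queueName.trim(), message);
        }
    }

    private void sendToQueue(String queueJndiName, String message) {
        // look up the queue via JNDI and send with a JMS producer;
        // an MDB listening on each queue then does the insert into its database
    }
}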
