We are currently using Axon Framework with HikariCP as the data source pooling system. We are facing pool exhaustion issues from time to time, and we have a theory:
To update our read models, we use the command bus to send an UpdateEntityViewCommand.
As the command bus starts a transaction using the primary transaction manager (the write one), it acquires a connection from the write pool.
In the handler, we open an inner transaction using a connection from the read pool, thus blocking the outer one.
This seems to exhaust the pools under some conditions. The question is: should we stop using the command bus to update our read models? Is it appropriate to have two buses (one for write and one for read)?
Thanks in advance
Axon's intent with the separation of Commands, Events, and Queries, is in general to support CQRS within your system. This means you would introduce dedicated Command Models and Query Models, which respectively only receive Command and Query messages.
The Query Models are typically also referred to as Projections or Read Models.
It is this combination of ideas that Axon is normally used for, which makes your question a bit unclear; or, perchance, you are using it differently than normally intended.
That Axon supports CQRS through this approach does not, by the way, necessitate that you do CQRS within your setup as well.
At any rate, it would be good to (as also stated in the comments):
Include code snippets of the message handlers where these prove useful
Give some deeper insight into your configuration
State whether you aim to use the CQRS pattern in your application
Completely separate from this, I can give some guidance on how Axon deals with transactions. Every message inside an Axon application will start a so-called "Unit of Work". It is the UnitOfWork (UoW for short) that will start a transaction. This means that no matter if you use commands, events or queries, Axon will have started that transaction for you already.
Taking this a step further, this also means that whatever you do inside a message handling function (thus the @CommandHandler, @EventHandler and @QueryHandler annotated methods) will always already have an active transaction running. That means, for example, that you do not have to include your own transaction management inside a message handling function, e.g. with the @Transactional annotation.
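To make this concrete, here is a minimal sketch of an event-sourced Aggregate (IssueCardCommand and CardIssuedEvent are hypothetical classes, not taken from your code). Note the absence of @Transactional: the Unit of Work already runs these handlers inside a transaction.

    import org.axonframework.commandhandling.CommandHandler;
    import org.axonframework.eventsourcing.EventSourcingHandler;
    import org.axonframework.modelling.command.AggregateIdentifier;
    import org.axonframework.spring.stereotype.Aggregate;

    import static org.axonframework.modelling.command.AggregateLifecycle.apply;

    @Aggregate
    public class GiftCard {

        @AggregateIdentifier
        private String cardId;

        protected GiftCard() {
            // required by Axon for event-sourced reconstruction
        }

        @CommandHandler
        public GiftCard(IssueCardCommand command) { // hypothetical command class
            // no @Transactional needed; the Unit of Work already opened one
            apply(new CardIssuedEvent(command.getCardId(), command.getAmount()));
        }

        @EventSourcingHandler
        public void on(CardIssuedEvent event) { // hypothetical event class
            this.cardId = event.getCardId();
        }
    }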
Concluding, I am guessing you might be mixing some concepts. This can obviously happen, so no worries there; that's what SO is for. Next to this, though, you do not have to start your own transaction inside any message handling function, as you already have an active transaction at that point in time.
Related
We have a micro-services architecture, with Kafka used as the communication mechanism between the services. Some of the services have their own databases. Say the user makes a call to Service A, which should result in a record (or set of records) being created in that service’s database. Additionally, this event should be reported to other services, as an item on a Kafka topic. What is the best way of ensuring that the database record(s) are only written if the Kafka topic is successfully updated (essentially creating a distributed transaction around the database update and the Kafka update)?
We are thinking of using spring-kafka (in a Spring Boot WebFlux service), and I can see that it has a KafkaTransactionManager, but from what I understand this is more about Kafka transactions themselves (ensuring consistency across the Kafka producers and consumers), rather than synchronising transactions across two systems (see here: “Kafka doesn't support XA and you have to deal with the possibility that the DB tx might commit while the Kafka tx rolls back.”). Additionally, I think this class relies on Spring’s transaction framework which, at least as far as I currently understand, is thread-bound, and won’t work if using a reactive approach (e.g. WebFlux) where different parts of an operation may execute on different threads. (We are using reactive-pg-client, so are manually handling transactions, rather than using Spring’s framework.)
Some options I can think of:
Don’t write the data to the database: only write it to Kafka. Then use a consumer (in Service A) to update the database. This seems like it might not be the most efficient, and will have problems in that the service which the user called cannot immediately see the database changes it should have just created.
Don’t write directly to Kafka: write to the database only, and use something like Debezium to report the change to Kafka. The problem here is that the changes are based on individual database records, whereas the business-significant event to store in Kafka might involve a combination of data from multiple tables.
Write to the database first (if that fails, do nothing and just throw the exception). Then, when writing to Kafka, assume that the write might fail. Use the built-in auto-retry functionality to get it to keep trying for a while. If that eventually completely fails, try to write to a dead letter queue and create some sort of manual mechanism for admins to sort it out. And if writing to the DLQ fails (i.e. Kafka is completely down), just log it some other way (e.g. to the database), and again create some sort of manual mechanism for admins to sort it out.
Anyone got any thoughts or advice on the above, or able to correct any mistakes in my assumptions above?
Thanks in advance!
I'd suggest using a slightly altered variant of approach 2.
Write into your database only, but in addition to the actual table writes, also write "events" into a special table within that same database; these event records would contain the aggregations you need. In the simplest case, you'd insert another entity, e.g. mapped by JPA, which contains a JSON property with the aggregate payload. Of course this could be automated by some means of a transaction listener / framework component.
Then use Debezium to capture the changes just from that table and stream them into Kafka. That way you have both: eventually consistent state in Kafka (the events in Kafka may trail behind, or you might see a few events a second time after a restart, but eventually they'll reflect the database state) without the need for distributed transactions, and the business-level event semantics you're after.
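As a minimal sketch of such an "event" entity, assuming JPA (the entity and column names are illustrative, not prescribed by Debezium):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import java.time.Instant;
    import java.util.UUID;

    @Entity
    @Table(name = "outbox_event")
    public class OutboxEvent {

        @Id
        private UUID id = UUID.randomUUID();

        @Column(name = "aggregate_type")
        private String aggregateType; // e.g. "order"

        @Column(name = "aggregate_id")
        private String aggregateId;   // later usable as the Kafka message key

        @Column(name = "event_type")
        private String eventType;     // e.g. "OrderCreated"

        @Column(name = "payload")
        private String payload;       // the aggregated event as JSON

        @Column(name = "created_at")
        private Instant createdAt = Instant.now();

        protected OutboxEvent() {
            // for JPA
        }

        public OutboxEvent(String aggregateType, String aggregateId,
                           String eventType, String payload) {
            this.aggregateType = aggregateType;
            this.aggregateId = aggregateId;
            this.eventType = eventType;
            this.payload = payload;
        }
    }

The important point is that this entity is persisted in the same transaction as the actual business entities.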
(Disclaimer: I'm the lead of Debezium; funnily enough I'm just in the process of writing a blog post discussing this approach in more detail)
Here are the posts:
https://debezium.io/blog/2018/09/20/materializing-aggregate-views-with-hibernate-and-debezium/
https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/
First of all, I have to say that I'm neither a Kafka nor a Spring expert, but I think this is more a conceptual challenge of writing to independent resources, and the solution should be adaptable to your technology stack. Furthermore, I should say that this solution tries to solve the problem without an external component like Debezium, because in my opinion each additional component brings challenges in testing, maintaining and running an application, which is often underestimated when choosing such an option. Also, not every database can be used as a Debezium source.
To make sure that we are talking about the same goals, let's clarify the situation with a simplified airline example, where customers can buy tickets. After a successful order, the customer receives a message (mail, push notification, …) sent by an external messaging system (the system we have to talk to).
In a traditional JMS world with an XA transaction between our database (where we store orders) and the JMS provider, it would look like the following: the client sends the order to our app, where we start a transaction. The app stores the order in its database. Then the message is sent to JMS and you can commit the transaction. Both operations participate in the transaction even though they're talking to their own resources. As the XA transaction guarantees ACID, we're fine.
Let's bring Kafka (or any other resource that is not able to participate in the XA transaction) into the game. As there is no coordinator that syncs both transactions anymore, the main idea of the following is to split processing into two parts with a persistent state.
When you store the order in your database, you can also store the message (with aggregated data) that you want to send to Kafka afterwards in the same database (e.g. as JSON in a CLOB column). Same resource – ACID guaranteed, everything fine so far. Now you need a mechanism that polls your "KafkaTasks" table for new tasks that should be sent to a Kafka topic (e.g. with a timer service; maybe the @Scheduled annotation can be used in Spring). After the message has been successfully sent to Kafka, you can delete the task entry. This ensures that the message to Kafka is only sent when the order is also successfully stored in the application database. Did we achieve the same guarantees as we have when using an XA transaction? Unfortunately, no, as there is still the chance that writing to Kafka works but the deletion of the task fails. In this case the retry mechanism (you would need one, as mentioned in your question) would reprocess the task and send the message twice. If your business case is happy with this "at-least-once" guarantee, you're done here with an (imho) semi-complex solution that could easily be implemented as framework functionality, so not everyone has to bother with the details.
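A rough sketch of that polling mechanism, assuming Spring's @Scheduled and spring-kafka's KafkaTemplate (KafkaTask and KafkaTaskRepository are hypothetical names for the task table and its repository):

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class KafkaTaskPoller {

        private final KafkaTaskRepository tasks;           // hypothetical repository
        private final KafkaTemplate<String, String> kafka;

        public KafkaTaskPoller(KafkaTaskRepository tasks,
                               KafkaTemplate<String, String> kafka) {
            this.tasks = tasks;
            this.kafka = kafka;
        }

        @Scheduled(fixedDelay = 5000)
        public void publishPendingTasks() {
            for (KafkaTask task : tasks.findAll()) {
                try {
                    // block until the broker acknowledges the record
                    kafka.send("orders", task.getKey(), task.getPayload()).get();
                    // delete only after a successful send; if this deletion
                    // fails, the task is sent again next cycle (at-least-once)
                    tasks.delete(task);
                } catch (Exception e) {
                    // leave the task in place; it will be retried next cycle
                    break;
                }
            }
        }
    }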
If you need "exactly-once", then you cannot store your state in the application database (in this case, "deletion of a task" is the "state"); instead, you must store it in Kafka (assuming that you have ACID guarantees between two Kafka topics). An example: let's say you have 100 tasks in the table (IDs 1 to 100) and the task job processes the first 10. You write your Kafka messages to their topic and another message with the ID 10 to "your topic", all in the same Kafka transaction. In the next cycle you consume your topic (value is 10) and take this value to get the next 10 tasks (and delete the already processed tasks).
If there are easier (in-application) solutions with the same guarantees, I'm looking forward to hearing from you!
Sorry for the long answer but I hope it helps.
All the approaches described above are valid, well-defined patterns for this problem. You can explore them in the links provided below.
Pattern: Transactional outbox
Publish an event or message as part of a database transaction by saving it in an OUTBOX in the database.
http://microservices.io/patterns/data/transactional-outbox.html
Pattern: Polling publisher
Publish messages by polling the outbox in the database.
http://microservices.io/patterns/data/polling-publisher.html
Pattern: Transaction log tailing
Publish changes made to the database by tailing the transaction log.
http://microservices.io/patterns/data/transaction-log-tailing.html
Debezium is a valid answer, but (as I've experienced) it can require the extra overhead of running an extra pod and making sure that pod doesn't fall over. This could just be me griping about a few back-to-back instances where pods OOM-errored and didn't come back up, networking rule rollouts dropped some messages, WAL access to an AWS Aurora DB started behaving oddly... It seems that everything that could have gone wrong, did. Not saying Debezium is bad; it's fantastically stable. But often for devs, running it becomes a networking skill rather than a coding skill.
As a KISS solution using normal coding techniques that will work 99.99% of the time (and inform you of the 0.01%), you could do the following:
Start a transaction.
Synchronously save to the DB.
-> If that fails, bail out.
Asynchronously send the message to Kafka, then block until the topic reports that it has received the message.
-> If it times out or fails, abort the transaction.
-> If it succeeds, commit the transaction.
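A sketch of those steps, assuming Spring's @Transactional and spring-kafka (Order, OrderRepository, the topic name and the timeout are all illustrative). Note the remaining 0.01%: the DB commit itself can still fail after Kafka has already acknowledged the message.

    import java.util.concurrent.TimeUnit;

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class OrderService {

        private final OrderRepository orders; // hypothetical repository
        private final KafkaTemplate<String, String> kafka;

        public OrderService(OrderRepository orders,
                            KafkaTemplate<String, String> kafka) {
            this.orders = orders;
            this.kafka = kafka;
        }

        // rollbackFor is needed because the blocking get() throws checked
        // exceptions, which do not trigger a rollback by default in Spring
        @Transactional(rollbackFor = Exception.class)
        public void createOrder(Order order) throws Exception {
            orders.save(order); // 1. sync save to DB; a failure bails out here

            // 2. send to Kafka and block until the broker acknowledges;
            //    a timeout or failure throws and aborts the transaction
            kafka.send("orders", order.getId(), order.toJson())
                 .get(10, TimeUnit.SECONDS);
            // 3. returning normally commits the transaction
        }
    }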
I'd suggest using a newer approach: the 2-phase message. With this approach much less code is needed, and you don't need Debezium any more.
https://betterprogramming.pub/an-alternative-to-outbox-pattern-7564562843ae
For this new approach, what you need to do is:
When writing to your database, write an event record to an auxiliary table.
Submit a 2-phase message to DTM
Write a service to query whether an event is saved in the auxiliary table.
With the help of the DTM SDK, you can accomplish the above 3 steps with 8 lines in Go, much less code than in other solutions:
    // compose the 2-phase message with the branch call to deliver
    msg := dtmcli.NewMsg(DtmServer, gid).
        Add(busi.Busi+"/TransIn", &TransReq{Amount: 30})
    // execute the local business update and record the message in one
    // local transaction; DTM delivers TransIn once the commit is confirmed
    err := msg.DoAndSubmitDB(busi.Busi+"/QueryPrepared", db, func(tx *sql.Tx) error {
        return AdjustBalance(tx, busi.TransOutUID, -req.Amount)
    })

    // callback endpoint DTM queries to check whether the local
    // transaction (and thus the event record) was committed
    app.GET(BusiAPI+"/QueryPrepared", dtmutil.WrapHandler2(func(c *gin.Context) interface{} {
        return MustBarrierFromGin(c).QueryPrepared(db)
    }))
Each of your original options has its disadvantages:
The user cannot immediately see the database changes they have just created.
Debezium will capture the database log, which may be much larger than the events you want. Also, deployment and maintenance of Debezium is not an easy job.
The "built-in auto-retry functionality" is not cheap; it may require a lot of code or maintenance effort.
I am working on my first Axon application and I can't figure out the use of aggregates. I understand that every time a command handler is called, the aggregate is recreated from all its events, but I don't understand what other use this recreation of the aggregate could have.
Like when should I manually recreate an aggregate?
What is the benefit of the aggregate being recreated every time I call a command?
The way I set up my application, I use an aggregate view to persist the data I need into the database. So now I feel like the events are just stored in the event store and are only used to recreate the aggregate after I call a command. Is there nothing else I should do with the stored events and the recreation of the aggregate? Shouldn't I, for example, recreate the entire aggregate instead of fetching the aggregate view out of my database by ID to update it?
The idea behind Event Sourcing your Aggregate is that these events are the source for any model within your system.
Thus, if you create a dedicated Command Model handling the commands like you describe, then this model (which from Axon's perspective is the @Aggregate(Root) annotated class) will be sourced from the events it has published.
Additionally, you can introduce any type of Query Model you want: an RDBMS view, a text-based search solution (e.g. Elastic), a time series database, you name it. Any of these Query Models are, however, still part of the same root application your Aggregate resides in. As you have the events as the means to notify others of decisions being made, it comes naturally to (re)use those to update all your Query Models as well.
Now, it is perfectly true that you are not obliged to use Event Sourcing for your Aggregates in Axon; that is what Axon calls a State-Stored Aggregate. If you do that, however, you'll be back to having distinct models in distinct storage mechanisms, without a single source of truth.
So, to circle back to your question with this added knowledge, I'd state the following:
Like when should I manually recreate an aggregate?
You are never required to recreate the Aggregate as the Command Model yourself; the framework does this for you. If you have a mirrored Query Model Aggregate, then you would recreate this whenever you have added/removed/changed fields within the model, or if you have introduced entirely new models.
What is the benefit of the aggregate being recreated every time I call a command?
The benefit of recreating it every time is the assurance that you will always be using the latest state, even if between releases of your application you have added/changed/removed fields. The @EventSourcingHandler annotated methods will simply fill them in, without the need for you to, for example, write a database script to adjust things directly at the database level.
Concluding, the reason for this approach lies entirely within the architectural concepts supported through Axon. You can read up on them on AxonIQ's Architectural Concepts page if you want; I am sure it will clarify things even further.
Hope this helps you out, @Gisrou8! If not, please come back with more questions; I'd gladly explain things further.
Update: Further Command Model explanation
In the comment Gisrou8 placed under my response it becomes apparent that "the unease" with this approach mainly resides in the state of the Aggregate.
As shared in my earlier response, the Aggregate as it can be modeled with Axon Framework should, in an Event Sourced setup, be regarded as the Command Model in a CQRS system.
One of the main pillars of the Command Model is that the only state it contains is the state required for decision-making logic. To be more specific: the only state stored in your Aggregate is the state used to decide whether a Command Handler should accept the incoming command and publish an event as a result.
Thus, the sole fields you would introduce in your Aggregate alongside the Aggregate Identifier are the fields you need to drive these decisions.
This is what the Command Model is intended for, so do not worry about this point.
To answer any queries within your application, you'd introduce a dedicated Query Model which is updated as a result of the events published by the Command Handlers within the Aggregate. It's this exact segregation which is the strong suit of this model as it allows for better scaling, performance improvements or required team separations, among other non-functional requirements.
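A minimal sketch of such a dedicated Query Model, assuming a Spring bean with an injected repository (CardIssuedEvent, CardSummary, FetchCardSummaryQuery and CardSummaryRepository are hypothetical names):

    import org.axonframework.eventhandling.EventHandler;
    import org.axonframework.queryhandling.QueryHandler;
    import org.springframework.stereotype.Component;

    @Component
    public class CardSummaryProjection {

        private final CardSummaryRepository repository; // hypothetical repository

        public CardSummaryProjection(CardSummaryRepository repository) {
            this.repository = repository;
        }

        @EventHandler
        public void on(CardIssuedEvent event) {
            // update the read model as a result of the Command Model's event
            repository.save(new CardSummary(event.getCardId(), event.getAmount()));
        }

        @QueryHandler
        public CardSummary handle(FetchCardSummaryQuery query) {
            return repository.findById(query.getCardId()).orElse(null);
        }
    }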
I have code in my business layer that updates data in a database and also in a REST service.
The requirement is: if nothing fails, the data must be saved in both places; on the other hand, if something fails, it must be rolled back in the database and another request must be sent to the REST API.
So, what I'm looking for is a way to use EJB transaction management to also orchestrate calls to the API: at commit time, send a set request to the API, and at rollback time, send a delete request to the API.
In fact, I need to maintain consistency and keep both places synchronized.
I have read about UserTransactions and managed beans, but I don't have a clue about the best way to do this.
You can use regular distributed transactions, depending on your infrastructure and participants. This might be possible, for example, if all participants are EJBs and the data stores are capable of handling distributed transactions.
This won't work with loosely coupled components, though, and your setup looks like that.
I do not recommend creating your own distributed transaction protocol. Given all the edge and corner cases, you will probably not end up with consistent data.
I would suggest thinking about using event sourcing and eventual consistency for things like that. For example, you could emit an event (command) for writing data. If your "rollback" is needed, you can emit an event (command) to delete the data written before. After all events are processed, the data is consistent.
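A loose sketch of that idea; EventPublisher, WriteDataEvent and DeleteDataEvent are entirely hypothetical types, not a concrete framework API:

    public class DataWriter {

        private final EventPublisher publisher; // hypothetical port to your bus

        public DataWriter(EventPublisher publisher) {
            this.publisher = publisher;
        }

        public void write(String payload) {
            publisher.publish(new WriteDataEvent(payload)); // hypothetical event
        }

        public void compensate(String payload) {
            // the "rollback" is a new, compensating event rather than a
            // transactional rollback spanning both systems
            publisher.publish(new DeleteDataEvent(payload)); // hypothetical event
        }
    }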
Some interesting links might be:
Martin Fowler - Event Sourcing
Martin Fowler - CQRS
Apache Kafka
We are developing a Java program which has lots of independent components (let's say A, B, C, D). We use the Hibernate framework for database mapping. Some of the components run in the same JVM process and some in different JVM processes. All of them have REST endpoints and their own entity map files. The components talk to each other over TCP. Some business logic requires a few of these components to work together, and that is where the problem starts. Let's say component A is called. A starts a new transaction, changes some data in the database, then calls component B. When component B accepts the request, it opens a new transaction, changes some information in the database, and commits. After finishing its work, it returns the result to component A. Component A continues doing some operations, but fails, rolls back all the changes it has made, and returns the response. But the changes component B made are still there. This is a small illustration of our program. What are the best approaches to handle this?
Somehow share the transaction between all the components, so that when a component receives a request, it first checks whether there is an active transaction for that request.
The second option is using one component for all entities, which all the others use. When a request comes in, we open a transaction, all components do their work, and when we return the response we commit if there was no exception; otherwise we roll back.
Is there any other way? What are the best practices for using transactions in a multi-component architecture? What are the pros and cons of using a single component to keep all entity information in one place? Any posts, blogs, documentation, tutorials or books are welcome. Or, if you like, share your own experiences.
You are looking to implement 'Distributed Transactions'.
I recommend Spring if you have a Java application, as it supports distributed transactions:
https://www.javaworld.com/article/2077963/open-source-tools/distributed-transactions-in-spring--with-and-without-xa.html
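As a sketch of the XA variant from that article, assuming a configured JtaTransactionManager and two XA-capable data sources (the service and repository names are illustrative):

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class TransferService {

        private final AccountRepository accountsDbA; // backed by XA DataSource A
        private final AuditRepository auditDbB;      // backed by XA DataSource B

        public TransferService(AccountRepository accountsDbA,
                               AuditRepository auditDbB) {
            this.accountsDbA = accountsDbA;
            this.auditDbB = auditDbB;
        }

        // with a JtaTransactionManager in place, both XA resources enlist in
        // the same global transaction: either both commit or both roll back
        @Transactional
        public void debitAndAudit(String accountId, long amount) {
            accountsDbA.debit(accountId, amount);
            auditDbB.record(accountId, amount);
        }
    }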
Consider Spring MVC java web-application, which provides some REST API.
Let's say it has many methods, one of them being DELETE /api/foo/{id}, which obviously deletes the foo entity with the given id from the DB.
The problem is that, due to the large amount of data in the DB, this operation is not immediate, so if a client tries to perform multiple simultaneous delete operations on the same entity, say
DELETE /api/foo/123 x N times (by mistake in the client software, of course),
it causes some unpleasant side effects in the DB (you know, trying to delete the same entity in several transactions is generally not nice).
My question is: what is the best practice in Spring MVC to prevent such situations?
I could certainly introduce synchronisation on the Foo id in each such update method (PUT/DELETE). I would need to do it for all entities and all PUT/DELETE API methods though, which I really don't want to do. I suppose there should be some elegant and nice solution for performing this kind of synchronisation at the interceptor/servlet level, i.e. not at the service or controller level.
I could also create a specific interceptor and wait there for duplicated requests (requests with the same URL and parameters). But again, that doesn't sound like an elegant solution (unless I can be assured that there is no nicer way to configure this in Spring MVC).
That is a concurrency problem that should be handled by using the appropriate transaction isolation and locking level. Unfortunately, there is no one-size-fits-all answer here; depending on your actual requirements, you may have to implement optimistic or pessimistic locking, as well as one of the possible transaction isolation levels (from no transaction at all to serializable).
In general, handling such questions at the web level is a bad idea, because you will end up with questions like: what should happen when one request wants to delete data that another one is displaying at the same time? In Spring MVC, the common way is to use transactional methods in the service layer. Additionally, you should declare an optimistic or pessimistic locking system in the persistence layer.
Optimistic locking normally gives higher throughput, at the cost of some transactions ending in exceptions. In that case, current best practice is to report the problem to the user, asking them to send the request again.
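A minimal sketch of the optimistic side of this, assuming JPA: the @Version field makes a concurrent update or delete based on a stale version fail with an OptimisticLockException, which the service layer can translate into a "please retry" response for the user.

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Foo {

        @Id
        private Long id;

        @Version
        private Long version; // incremented by JPA on every update; a write
                              // based on a stale version then fails with an
                              // OptimisticLockException

        // other fields, getters and setters omitted
    }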