I will usually have 5-6 events per aggregate and would like not to store projections in the database. What would be the easiest way to always build the view projection at query time?
The short answer is that there is no easy or quick way to do this.
However, it most certainly is doable to implement a 'replay given events at request time' set-up.
What I would suggest consists of the following steps:
1. Create the query model you would like to return, which can handle events (use @EventHandler annotated methods on the model).
2. Create a component which can handle the query and return the query model from step 1 (use a @QueryHandler annotated method for this).
3. The query-handling component should be able to retrieve a stream of events from the EventStore. If this is based on an aggregate identifier, use the EventStore#readEvents(String) method. If you need the entire event stream, you need to use the StreamableMessageSource#openStream(TrackingToken) method (note: the EventStore interface implements StreamableMessageSource).
4. Upon query handling, create an AnnotationEventHandlerAdapter, giving it a fresh instance of your Query Model.
5. For every event in the event stream you've retrieved in step 3, call the AnnotationEventHandlerAdapter#handle(EventMessage) method. This method will call the @EventHandler annotated methods on your Query Model object.
6. Once the stream is depleted, you are ensured all necessary events for your Query Model have been dealt with. Thus, you can now return the Query Model (see the sketch below).
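Put together, these steps could look roughly like the sketch below. The gift card names (CardSummary, FetchCardSummaryQuery and the two events) are hypothetical and written with Java records for brevity; EventStore#readEvents, AnnotationEventHandlerAdapter and the @EventHandler/@QueryHandler annotations are the Axon pieces referred to above.

import org.axonframework.eventhandling.AnnotationEventHandlerAdapter;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventsourcing.eventstore.DomainEventStream;
import org.axonframework.eventsourcing.eventstore.EventStore;
import org.axonframework.queryhandling.QueryHandler;

// Hypothetical query and event payloads, kept minimal for the sketch
record FetchCardSummaryQuery(String cardId) {}
record CardIssuedEvent(String cardId, int initialValue) {}
record CardRedeemedEvent(String cardId, int amount) {}

// Step 1: the query model, with @EventHandler annotated methods
class CardSummary {

    private String cardId;
    private int remainingValue;

    @EventHandler
    public void on(CardIssuedEvent event) {
        this.cardId = event.cardId();
        this.remainingValue = event.initialValue();
    }

    @EventHandler
    public void on(CardRedeemedEvent event) {
        this.remainingValue -= event.amount();
    }
}

// Step 2: the query-handling component, replaying the aggregate's events per request
class CardSummaryQueryHandler {

    private final EventStore eventStore;

    CardSummaryQueryHandler(EventStore eventStore) {
        this.eventStore = eventStore;
    }

    @QueryHandler
    public CardSummary handle(FetchCardSummaryQuery query) throws Exception {
        // Step 4: a fresh query model instance, wrapped in an adapter
        CardSummary model = new CardSummary();
        AnnotationEventHandlerAdapter adapter = new AnnotationEventHandlerAdapter(model);

        // Step 3: the aggregate's own event stream, based on its identifier
        DomainEventStream events = eventStore.readEvents(query.cardId());

        // Step 5: replay every event onto the model's @EventHandler methods
        while (events.hasNext()) {
            adapter.handle(events.next());
        }

        // Step 6: the stream is depleted, so the model is up to date and can be returned
        return model;
    }
}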
So, again, I don't think this is overly trivial, easy or quick to set up.
Additionally, step 3 comes with quite a caveat. Retrieving the stream of a given Aggregate based on the Aggregate Identifier is pretty fast and cheap, as an Aggregate in general doesn't have a lot of events.
However, retrieving the Event Stream based on a TrackingToken, which you'd need if your Query Model spans several Aggregates, can mean you pull in the entire event store just to instantiate your models on the fly. Granted, since you're dealing with a TrackingToken you can fine-tune the point in time from which the Event Stream should return events, but the chances are pretty high that the result will be incomplete and the query relatively slow.
However, you stated you want to retrieve events for a given Aggregate Identifier.
I'd thus think this should be a workable solution in your scenario.
Hope this helps!
I have an interface method in my repository:

@Query("from Alert a")
Stream<Alert> streamAll();
Since I may have a lot of alerts, but only need to process them one record at a time and don't need the associated data afterwards, I write code as follows:

streamAll()
    .forEach(alert -> {
        doProcessing(alert);
        entityManager.detach(alert);
    });

where I explicitly detach each alert. I was wondering whether there is a nicer way of doing this, so that I can simply write

streamAllWithDetach()
    .forEach(this::doProcessing);

without a lot of custom code or AOP wrappers.
Just keep it simple: create a method that wraps

doProcessing(alert);
entityManager.detach(alert);
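In other words, something along these lines in the class that already has doProcessing and the injected EntityManager (just a sketch):

private void processAndDetach(Alert alert) {
    doProcessing(alert);
    entityManager.detach(alert); // evict each record from the persistence context once processed
}

The calling code then stays as simple as you wanted: streamAll().forEach(this::processAndDetach);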
I am working on my first Axon application and I can't figure out the use of the aggregates. I understand that every time a command handler is called, the aggregate is recreated from all its events, but I don't understand what other use this recreation of the aggregate could have.
Like when should I manually recreate an aggregate?
What is the benefit of the aggregate being recreated every time I call a command?
The way I set up my application, I use an aggregate view to persist the data I need into the database. So now I feel like the events are just stored in the event store and are only used to recreate the aggregate after I call a command. Is there nothing else I should do with the events being stored and the recreation of the aggregate? Shouldn't I, for example, recreate the entire aggregate instead of fetching the aggregate view out of my database by ID to update it?
The idea behind Event Sourcing your Aggregate is that these events are the source for any model within your system.
Thus, if you create a dedicated Command Model handling the commands like you describe, then this model (which from Axon's perspective is the @Aggregate(Root) annotated class) will be sourced from the events it has published.
Additionally, you can introduce any type of Query Model you want: an RDBMS view, a text-based search solution (e.g. Elastic), a time series database, you name it. Each of these Query Models is, however, still part of the same root application your Aggregate resides in. As you have the events as the means to notify others of decisions being made, it comes naturally to (re)use those to update all your Query Models as well.
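As a small sketch of that idea (the gift card events and the in-memory map are hypothetical; only the @EventHandler mechanism is Axon's, and a JPA- or Elastic-backed view would be updated the same way):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.axonframework.eventhandling.EventHandler;

// Hypothetical events published by the Aggregate
record CardIssuedEvent(String cardId, int initialValue) {}
record CardRedeemedEvent(String cardId, int amount) {}

class CardSummaryProjection {

    // view storage: card id -> remaining value (an RDBMS table or search index works the same way)
    private final Map<String, Integer> remainingValues = new ConcurrentHashMap<>();

    @EventHandler
    public void on(CardIssuedEvent event) {
        remainingValues.put(event.cardId(), event.initialValue());
    }

    @EventHandler
    public void on(CardRedeemedEvent event) {
        remainingValues.computeIfPresent(event.cardId(), (id, value) -> value - event.amount());
    }
}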
Now, it is perfectly true that you are not obliged to use Event Sourcing for your Aggregates in Axon; the alternative is what the framework calls a State-Stored Aggregate. If you do this, however, you'll be back at having distinct models in distinct storage mechanisms, without a single source of truth.
So, to circle back to your question with this added knowledge, I'd state the following:
Like when should I manually recreate an aggregate?
You never need to manually recreate the Aggregate acting as the Command Model, as the framework does this for you. If you have a mirrored Query Model of the Aggregate, then you would recreate it whenever you have added, removed or changed fields within the model, or when you have introduced entirely new models.
What is the benefit of the aggregate being recreated every time I call a command?
The benefit of recreating it every time is the assurance that you will always be using the latest state, even if, between releases of your application, you have added, changed or removed fields. The @EventSourcingHandler annotated methods would simply fill them in, without the need for you to, for example, write a database script to adjust the data directly at the database level.
Concluding, the reason for this approach lies entirely within the architectural concepts supported through Axon. You can read up on them on AxonIQ's Architectural Concepts page if you want; I am sure it will clarify things even further.
Hope this helps you out @Gisrou8! If not, please come back with more questions; I'd gladly explain things further.
Update: Further Command Model explanation
In the comment Gisrou8 placed under my response it becomes apparent that "the unease" with this approach mainly resides in the state of the Aggregate.
As shared in my earlier response, the Aggregate as can be modeled with Axon Framework should be, in an Event Sourced set up, regarded as the Command Model in a CQRS system.
One of the main pillars of the Command Model is that the only state it contains is the state required for decision-making logic. To be more specific, the only state stored in your Aggregate is the state used to decide whether a Command Handler should accept the incoming command and publish an event as a result.
Thus, the sole fields you would introduce in your Aggregate, alongside the Aggregate Identifier, are the fields you need to drive these decisions.
This is what the Command Model is intended for, so do not worry about this point.
To answer any queries within your application, you'd introduce a dedicated Query Model which is updated as a result of the events published by the Command Handlers within the Aggregate. It is exactly this segregation that is the strong suit of this approach, as it allows for better scaling, performance improvements, or team separation, among other non-functional requirements.
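To make that concrete, here is a sketch of such a Command Model, assuming a hypothetical gift card domain and a recent Axon 4 / Java 17 set-up (the commands are made up, the events are the same hypothetical CardIssuedEvent/CardRedeemedEvent as in the projection sketch above, and @Aggregate is the Spring stereotype). Note that the only field besides the identifier is the one needed to validate redeems:

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

// Hypothetical commands targeting the aggregate
record IssueCardCommand(@TargetAggregateIdentifier String cardId, int initialValue) {}
record RedeemCardCommand(@TargetAggregateIdentifier String cardId, int amount) {}

@Aggregate
class GiftCard {

    @AggregateIdentifier
    private String cardId;
    private int remainingValue; // decision-making state only

    protected GiftCard() {
        // required by Axon to event-source the aggregate
    }

    @CommandHandler
    GiftCard(IssueCardCommand command) {
        // decision: issuing is always allowed, so publish the event
        apply(new CardIssuedEvent(command.cardId(), command.initialValue()));
    }

    @CommandHandler
    void handle(RedeemCardCommand command) {
        // decision: only redeem when the remaining value suffices
        if (command.amount() > remainingValue) {
            throw new IllegalStateException("Insufficient card balance");
        }
        apply(new CardRedeemedEvent(cardId, command.amount()));
    }

    @EventSourcingHandler
    void on(CardIssuedEvent event) {
        this.cardId = event.cardId();
        this.remainingValue = event.initialValue();
    }

    @EventSourcingHandler
    void on(CardRedeemedEvent event) {
        this.remainingValue -= event.amount();
    }
}

Every @EventSourcingHandler only updates the fields needed for those decisions; answering queries is left entirely to the Query Models.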
So, I'm working on using jOOQ to create a caching layer over Postgres. I've been using the MockConnection/MockDataProvider objects to intercept every query, and this is working, but I'm having a few issues.
First, how do I determine between reads and writes? That is, how do I tell whether a query is an insert/update/etc or a select, given only the MockExecuteContext that's passed into the execute method in MockDataProvider?
And I'm a bit confused on how I can do invalidations. The basic scheme I'm implementing right now is that whenever a "write" query is made to a table, I invalidate all cached queries that involve that table. This goes back to my first question, on telling different types of queries from each other, but also brings up another issue: how would I identify the tables used in a query given only the sql string and the bindings (both are attributes of MockExecuteContext)?
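For reference, this is roughly the shape of what I'm attempting; the naive SQL-prefix check is just a placeholder for the read/write classification I'm asking about, and the cache handling is only described in comments:

import java.sql.SQLException;
import org.jooq.tools.jdbc.MockDataProvider;
import org.jooq.tools.jdbc.MockExecuteContext;
import org.jooq.tools.jdbc.MockResult;

class CachingDataProvider implements MockDataProvider {

    @Override
    public MockResult[] execute(MockExecuteContext ctx) throws SQLException {
        String sql = ctx.sql().trim().toUpperCase();

        if (sql.startsWith("SELECT")) {
            // read: look up the (sql, bindings) pair in the cache, otherwise delegate to Postgres
        } else {
            // write: invalidate every cached entry that involves the affected tables
        }

        // placeholder: a real implementation would return cached or freshly fetched results here
        return new MockResult[0];
    }
}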
Also, is this a correct approach to caching? My first thought was to override the fetch() method, but that method is final, and I'd rather not change something already embedded in jOOQ itself. This is the only other way I could think of to intercept all requests made, so I could create a separate, persistent caching layer.
I have seen this (https://groups.google.com/forum/#!topic/jooq-user/xSjrvnmcDHw) question, but I'm still not clear on how Lukas recommended identifying tables from the query object. I can try to implement a Postgres NOTIFY, but I wanted something native to jOOQ first. I've seen this issue (https://github.com/jOOQ/jOOQ/issues/2665) pop up a lot too, but I'm not sure how it applies.
Keep in mind that I'm new to jOOQ, so it's quite possible that I'm missing something obvious.
Thanks!
I currently have a default Spring architecture: Repository, Service, Controller (Spring WebMVC), with a Jackson JSON mapper as the "view". All my Repository/Service/Controller methods look like:
public Collection<Pet> findPetsWithName(String name) {}
So basically each layer retrieves data, does some calculations and returns it to the next layer.
With increasing data size I was playing with Spring JdbcTemplate, fetch size settings and RowCallbackHandler in order to "stream" database results rather than fetching them all at once.
My question is now: can I apply the "callback" approach to all layers, not only the Repository layer, so that all results are put into a callback function instead of being returned as a Collection? Does it work with Spring MVC views? I think I'd end up with a chained callback of:

RowCallbackHandler(ServiceCallbackHandler(ControllerCallbackHandler(SpringViewHandler(HttpServletResponse))))

public void findPetsWithName(String name, Callback<Pet> callback) {}

Has anyone experience with this approach? Are there existing patterns or templates for it? I think it only pays off for large data sizes, because it is more difficult to design.
The only time I had a use for streaming data from the row mapper to the response was when we were storing large encrypted binary data in the database and wanted to stream it as-is, to be decrypted by our thick client.
Assuming this is the kind of situation you are thinking about, you should use a ResultSetExtractor.
You can get the stream from the result set in the callback (assuming your data type is a BLOB equivalent) and pipe it to the response output stream, which you accept as a parameter in your repository method.
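A rough sketch of that with JdbcTemplate (the encrypted_payload table and column names are made up; ResultSetExtractor and the query(..) overload are standard Spring JDBC):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.SQLException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;

public class PayloadRepository {

    private final JdbcTemplate jdbcTemplate;

    public PayloadRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Streams the stored binary payload straight into the caller's output stream
    public void streamPayload(long id, OutputStream out) {
        jdbcTemplate.query(
                "SELECT payload FROM encrypted_payload WHERE id = ?",
                (ResultSetExtractor<Void>) rs -> {
                    if (rs.next()) {
                        try (InputStream in = rs.getBinaryStream("payload")) {
                            in.transferTo(out); // no intermediate byte[] holding the whole blob
                        } catch (IOException e) {
                            throw new SQLException("Failed to stream payload", e);
                        }
                    }
                    return null;
                },
                id);
    }
}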
Let me know if you are looking to implement a design where each row should be mapped to an object and the callback mechanism should pass the objects back to the higher layers one by one.
To respect requirements, I've temporarily used a hack to swap a numerical id for a String representing the corresponding username in the view.
To do this I've called a DAO directly from a TableModel. Obviously, this isn't very elegant and is probably inappropriate from a design point of view. What would be the proper approach to achieve this?
A TableModel is queried from the EDT and so should never block, so calling a DAO from it is probably a bad idea. You can either:
1. Retrieve the information you need from the DAO and provide it to the TableModel prior to showing your table.
2. Dynamically load the information in the background and add it to the TableModel when the data access has completed.
Option 1 is probably the easiest to implement; a sketch of option 2 follows below.
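A minimal sketch of option 2 with a SwingWorker; the UserDao interface and the onLoaded callback (for example a setUsernames method on your TableModel) are hypothetical:

import java.util.Map;
import java.util.function.Consumer;
import javax.swing.SwingWorker;

class UsernameLoader extends SwingWorker<Map<Long, String>, Void> {

    // Hypothetical DAO resolving numerical ids to usernames
    interface UserDao {
        Map<Long, String> findUsernamesById();
    }

    private final UserDao userDao;
    private final Consumer<Map<Long, String>> onLoaded; // e.g. tableModel::setUsernames

    UsernameLoader(UserDao userDao, Consumer<Map<Long, String>> onLoaded) {
        this.userDao = userDao;
        this.onLoaded = onLoaded;
    }

    @Override
    protected Map<Long, String> doInBackground() {
        // runs on a worker thread, so the blocking DAO call never touches the EDT
        return userDao.findUsernamesById();
    }

    @Override
    protected void done() {
        try {
            // runs on the EDT once the background call has completed
            onLoaded.accept(get());
        } catch (Exception e) {
            throw new IllegalStateException("Could not load usernames", e);
        }
    }
}

You would start it with new UsernameLoader(dao, tableModel::setUsernames).execute(); right before or after showing the table.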