Identify and Rerun particular events in Axon - java

We were looking into rerunning handpicked events from axon. A use case would be along the lines of the following.
If the user's registration event failed even though the command was successful, we want to rerun that particular event for that specific event handler(s).
We looked into using the tracking event processor, but that seems to be for replaying a set of events FROM a specific point in time onwards. In our case, however, if there were 100 events yesterday, we would only want to rerun one particular event in the middle.
At the moment we are in the process of migrating to Axon, and as such have decided to go predominantly with SubscriptionEventProcessors, as they are more synchronous (i.e. errors are propagated up to the command handler). And we do understand that subscription processors are stateless, in that they only process what they receive via the event bus.
So I am assuming that we cannot use the tracking processor, and that instead we need to load the particular event and re-push it to the event bus?
How could we achieve this? (With or without the above suggestion)
Also, with regards to identifying exceptions, we are thinking of using aspects and logging, and then reading the particular log line for exceptions. However, we did notice a tracing module for Axon and Spring Boot: https://github.com/AxonFramework/extension-tracing. It is mentioned to be in beta, though, and we could not find many reference documents yet either. Is there a better, more Axon-based solution to this as well?

To be quick about it, I would give a similar response to the one I posted on this question.
To rehash my response over there quickly, it can be summarized by:
Create a Query Model containing the required @EventHandler annotated methods to "re-handle", which you'd provide as input to Axon's AnnotationEventHandlerAdapter constructor. You would then call that AnnotationEventHandlerAdapter with a filtered event stream based on the requirements you have. As a result, the Query Model will be updated to the format you need.
Thus, instead of performing the operation as a form of query because the user requires that information, you would perform it upon an exceptional case. Regardless, you will still build the Query Model anew, just based on a specific set of events.
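To make the mechanics concrete, here is a framework-free sketch of the idea. The UserRegisteredEvent, RegistrationModel, and the local @EventHandler annotation below are made-up stand-ins; in a real project you would use Axon's own @EventHandler annotation and AnnotationEventHandlerAdapter rather than this hand-rolled reflection:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.List;

public class ReplaySketch {

    // Stand-in for Axon's @EventHandler annotation, purely for illustration.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface EventHandler {}

    // Hypothetical event type for the failed-registration scenario.
    public record UserRegisteredEvent(String userId) {}

    // The Query Model whose @EventHandler methods we want to "re-handle".
    public static class RegistrationModel {
        public String lastRegisteredUser;

        @EventHandler
        public void on(UserRegisteredEvent event) {
            lastRegisteredUser = event.userId();
        }
    }

    // Crude stand-in for AnnotationEventHandlerAdapter#handle: invoke the
    // matching annotated method for every event in the filtered stream.
    public static void rehandle(Object model, List<?> filteredEvents) {
        for (Object event : filteredEvents) {
            for (Method method : model.getClass().getMethods()) {
                if (method.isAnnotationPresent(EventHandler.class)
                        && method.getParameterCount() == 1
                        && method.getParameterTypes()[0].isInstance(event)) {
                    try {
                        method.invoke(model, event);
                    } catch (ReflectiveOperationException e) {
                        throw new IllegalStateException(e);
                    }
                }
            }
        }
    }
}
```

The point is simply that a filtered list of historic events (e.g. just the failed registration event, read back from the event store) can be re-applied to a fresh model instance.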
By the way, when it comes to the choice of Event Processor, I'd still go for the TrackingEventProcessor. Yes, it means an event handling failure would not issue a rollback, but in essence the event states that something has occurred. Rolling back because something else fails while handling that event would be incorrect; the command still succeeded, so that should stay as such too.
Lastly, you are looking for logging logic. The Tracing Extension you've shared has just recently been pulled from its beta status and thus can be used safely. Next, basic logging can already be achieved by configuring the LoggingInterceptor as a MessageHandlerInterceptor and MessageDispatchInterceptor (documentation on this can be found here). And if you are looking to introduce Aspect logic: Axon has a similar mechanism to tie into each message handler out there. Have a look at the HandlerEnhancer/HandlerEnhancerDefinition on this page.
Hope all this helps you out, @MilindaD!

Axon Framework, is there a way to lock an aggregate using the command bus instead?

I have two aggregates with the same identifier. I know it might sound weird, but the reason I want to do this is that BaseAggregate has too many features, and I want to separate the codebase so it remains maintainable and able to scale. That way, developers on FooAggregate can focus on their own features while sharing some state that the developers on BarAggregate may need.
Here's the problem: when dispatching commands to FooAggregate and BarAggregate at the same time, they conflict on the aggregateSequenceNumber in the event store. So I did a lot of research and found that an aggregate is locked whenever a command is being executed, but these two aggregates are deployed on different JVMs, so they would not prevent each other from executing a command.
I want this to be done on the command bus instead. Let's say FooCommand and BarCommand (with the same identifier) were dispatched at the same time; I want the command bus to wait until either FooCommand or BarCommand has succeeded, then execute the next command. Is there a way to configure the command bus to behave like this, and would it affect performance?
abstract class BaseAggregate {
    @AggregateIdentifier
    public String identifier;
    // I use an event-sourced approach, so this state can be built up from events (with the same identifier)
    // and shared among other services
    public List<String> sharedState;
}

class FooAggregate extends BaseAggregate {
    public void handle(FooCommand command) {
        // apply event
    }
}

class BarAggregate extends BaseAggregate {
    public void handle(BarCommand command) {
        // apply event
    }
}
Before moving to your exact question, let me focus on the following:
So developers on FooAggregate can focus on their own features while shared some state that may need from the developers on BarAggregate.
You should not read the state from one aggregate instance into another. The aggregate instances are isolated units that perform validation on their own state. If the consistency boundary should be broadened to include more entities within your system, that means you have a larger aggregate scope.
Having said that, I am guessing you might have worded your question a bit off. I am basing this on the fact that you're using a polymorphic aggregate, which is a fine way to model your domain. What's not clear from either your issue description or the code is whether you're configuring this correctly.
If you're using Spring, you can place Axon's @Aggregate annotation on the parent class. In your case, that's the BaseAggregate. That'll correctly tell Axon Framework it's dealing with a polymorphic aggregate.
If you're not using Spring, you will have to use the AggregateConfigurer like so:
AggregateConfigurer<BaseAggregate> configurer =
        AggregateConfigurer.defaultConfiguration(BaseAggregate.class)
                           .withSubtype(FooAggregate.class)
                           .withSubtype(BarAggregate.class);
With that said, we can move on to what you're actually asking: whether the CommandBus can lock the aggregate for you.
Axon Framework's locking scheme works on the Repository.
More specifically, there's a LockingRepository class that's implemented by any Repository within Axon Framework.
This component ensures that anytime an Aggregate is loaded, it'll be locked, ensuring duplicate access cannot occur.
However, the lock is intentionally not a distributed lock, as that generates a whole set of other problems of its own. Thus, for simplicity, that's where it resides.
Knowing this, it's not the job of the Command Bus to lock, but rather to ensure that commands for the same aggregate are routed consistently. That's actually what Axon Framework's distributed Command Buses do!
If you'd use Axon Server, that behavior would be seamless.
Note that this works with both the Standard (free) and Enterprise editions of Axon Server. You can even go for AxonIQ Cloud if you will.
If you're not using Axon Server, you will have to set up a distributed command bus yourself. This can be done with either Axon's Spring Cloud or JGroups Extension.

Event handling between Spring Boot and ReactJS

This might seem like something with an easy solution to be found on the internet, but believe me, I have looked through a lot of examples and couldn't figure out which approach to choose.
Requirement:
I have a subscriber at the application service (Spring Boot/Java) end, subscribed to blockchain events (Corda). I want to push this event to the UI (ReactJS) whenever there is a change in state.
I could subscribe to the blockchain events successfully, but I am stuck with multiple incomplete or tangled ideas of how to push it to the UI and how the UI would receive my events (kindly don't suggest paid services, APIs, libraries, etc.).
I have come across and tried out all of the approaches below; since I'm new to working with events, I need some ray of light on how to get to a complete solution.
Publisher-subscriber pattern
Observable pattern
Sse emitter
Flux & Mono
Firebase ( a clear NO )
Bogglers:
Should event handling between the service and the UI be via API/endpoint calls, or can events just be emitted "into the air" (I'm not clear on this) and subscribed to in the UI based on the event name?
Should I have two APIs dedicated to this? One to trigger the subscription, and another that actually runs the emitter?
If the endpoint is always being listened on, doesn't it need a dedicated resource?
I basically need a CLEAR approach to handle this.
Code can be provided based on demand
I see you mention you are able to capture events in Spring Boot, so you are left with sending the event information to the front-end. I can think of three ways to do this:
WebSockets: Might be overkill, as I suppose you won't need bi-directional communication.
SSE: Perhaps a better choice than WebSockets.
Or simply polling: Not a bad choice either, if you are not looking for realtime notifications.
Yes, long polling.
The solution is pretty simple: make the connection once and let it wait for as long as possible, so that if any new data arrives at the server in the meanwhile, the server can directly send the response back. This way we can definitely reduce the number of request/response cycles involved.
You will find multiple implementation examples of how long polling is done as part of a Spring Boot project on the internet.
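Stripped of the HTTP layer, the long-polling mechanics described above reduce to a queue that each poll request waits on. Below is a minimal sketch with made-up names; in Spring Boot you would typically wrap this in a controller returning a DeferredResult, or use an SseEmitter for the SSE variant:

```java
import java.util.Optional;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Core of a long-polling channel: the subscriber side publishes events,
// and each poll request parks on the queue until an event arrives or the
// timeout elapses.
public class LongPollChannel {

    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    // Called by the blockchain subscriber whenever a state change is seen.
    public void publish(String event) {
        events.offer(event);
    }

    // Called per poll request; blocks for up to timeoutMillis, then returns
    // either the next event or empty so the client can immediately re-poll.
    public Optional<String> poll(long timeoutMillis) {
        try {
            return Optional.ofNullable(events.poll(timeoutMillis, TimeUnit.MILLISECONDS));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return Optional.empty();
        }
    }
}
```

This also answers the "dedicated resource" boggler: the held request occupies a server thread only for the duration of the poll timeout, which is why servlet-async types like DeferredResult exist.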

Implementing retractions in google dataflow

I read the "The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing" paper. Alas, the SDK does not yet expose the accumulating & retracting triggering mode (section 2.3).
I was wondering if there was a workaround for getting similar semantics?
I have been reading the source and have figured out that StateTag or StateNamespace may be the way I can store the "last emitted value of the window", which can then be used to calculate the retraction message down the pipeline. Is this the correct path, or are there other classes/ways I can/should look at?
The upcoming state API is indeed your best bet for emulating retractions. Those classes you mentioned are part of the state API, but everything in the com.google.cloud.dataflow.sdk.util package is for internal use only; we technically make no guarantees that the APIs won't change drastically, or even that they will be released at all. That said, releasing that API is on our roadmap, and I'm hopeful we'll get it released relatively soon.
One thing to keep in mind: all the code downstream of your custom retractions will need to be able to differentiate them from normal records. This is something we'll do automatically for you once bona fide retraction support is ready, but in the meantime, you'll just need to make sure all the code you write that might receive a retraction knows how to recognize and handle it as such.
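As a sketch of that workaround, the stored "last emitted value" per key can be turned into an explicit retraction record ahead of each new pane. All names below are hypothetical, and windows/triggers are ignored for brevity; the flag on each record is the "differentiate them from normal records" part:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RetractionSketch {

    // Downstream code must be able to tell retractions from normal records,
    // hence the explicit flag on every emitted element.
    public record Output(String key, long value, boolean isRetraction) {}

    // Per-key "last emitted value", i.e. what the state API would hold.
    private final Map<String, Long> lastEmitted = new HashMap<>();

    // On each (re-)firing for a key, first retract the previously emitted
    // value, then emit the new accumulated one.
    public List<Output> emit(String key, long newValue) {
        List<Output> out = new ArrayList<>();
        Long previous = lastEmitted.put(key, newValue);
        if (previous != null) {
            out.add(new Output(key, previous, true));
        }
        out.add(new Output(key, newValue, false));
        return out;
    }
}
```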

Consistency Between MongoDB and RabbitMQ

I'm writing a system that will leverage Mongo for persistence and RabbitMQ for a message bus/event queueing, and I'm trying to figure out the best way to be resilient to failures on the publication side.
There are three scenarios I can think of:
Everything works - consistent
Everything fails - consistent
Part works, whichever happens later is out of date - inconsistent
The last case is the one I'm interested in, and I'm curious to know how others have solved the issue, given that XA isn't an option (and I wouldn't want the performance overhead anyway).
There are a couple of solutions I can think of:
Add a "lastEvent" (or similar) field to the Mongo document. On a periodic interval, scan for documents where lastEvent < lastUpdated, and fire an event (this requires an extra update for every change, and loses the context of the "old" document in the case of an update)
Fire the event in Rabbit before persisting in Mongo, and allow safe handling of events that may not have actually happened (really dislike this approach)
Could anyone else shed some light on how to provide some sort of consistency across a persistence layer and message bus?
Option 1 is never a good idea; the notion of "last X time" falls over as soon as you introduce multi-threaded or multi-process systems and consider when that "time" is generated (if some requests take longer to process than others, then a "later" time might be written to the persistent store before an "earlier" one).
Option 2 is basically idempotence, and it's a pattern that works very well for designing fault-tolerant systems if done properly.
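A minimal sketch of that second approach, with hypothetical names: since the event may be published before the Mongo write is confirmed, consumers may see duplicates or premature deliveries, so each handler keys its work on a unique message id and drops repeats. In production the processed-id set would live in durable storage (e.g. Mongo itself), not in memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class IdempotentHandler {

    // Ids of messages already handled; durable storage in a real system.
    private final Set<String> processed = ConcurrentHashMap.newKeySet();
    private final Consumer<String> delegate;

    public IdempotentHandler(Consumer<String> delegate) {
        this.delegate = delegate;
    }

    // Returns true if the payload was actually processed, false for a duplicate.
    public boolean handle(String messageId, String payload) {
        if (!processed.add(messageId)) {
            return false; // duplicate delivery, safely ignored
        }
        delegate.accept(payload);
        return true;
    }
}
```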

How to approach JMX Client polling

Recently I dove into the world of JMX, trying to instrument our applications and expose some operations through a custom JMXClient. The work of figuring out how to instrument the classes without having to change much about our existing code is already done. I accomplished this using a DynamicMBean implementation. Specifically, I created a set of annotations, which we decorate our classes with. Then, when objects are created (or initialized, if they are used as static classes), we register them with our MBeanServer through a static class that builds a DynamicMBean for the class and registers it. This has worked out beautifully when we just use JConsole or VisualVM: we can execute operations and view the state of fields as we should be able to. My question is more geared toward creating a semi-realtime JMXClient like JConsole.
The biggest problem I'm facing here is how to make the JMXClient report the state of fields in as close to realtime as I can reasonably get, without having to modify the instrumented libraries to push notifications (e.g. in a setter method of some class, set the field, then fire off a JMX notification). We want the classes to be all but entirely unaware that they are being instrumented. If you check out JConsole while inspecting an attribute, there is a refresh button at the bottom of the screen that refreshes the attribute values. The value displayed is the value retrieved when that attribute was loaded into the view, and it won't ever change without using the refresh button. I want this to happen on its own.
I have written a small UI which shows some data about connection states, and a few fields on some instrumented classes. In order to make those values reflect the current state, I have a thread which spins in the background; every second or so it attempts to get the current values of the fields I'm interested in, and then the UI gets updated as a result. I don't really like this solution very much, as it's tricky to write the logic that updates the underlying models, and even trickier to update the UI in a way that doesn't cause strange bugs (using Swing).
I could also write an additional section of the JMXAgent on our application side, with a single thread that runs through the list of DynamicMBeans that have been registered, determines whether the values of their attributes have changed, and then pushes notifications. This would move the notification logic out of the instrumented libraries, but still puts more load on the applications :(.
I'm just wondering if any of you have been in this position with JMX, or something else, and can guide me in the right direction for a design methodology for the JMXClient or really any other advice that could make this solution more elegant than the one I have.
Any suggestions you guys have would be appreciated.
If you don't want to change the entities, then something is going to have to poll them: either your JMXAgent or the JMX client will have to request the beans every so often. There is no way to get around this performance hit, although since you are just calling a bunch of getters, I don't think it will be very expensive. Certainly polling from your JMXAgent would be better than having the JMX client poll all of the time. But if the client is polling all of the beans anyway, then the cost may be exactly the same.
You would not need to do the polling if the objects could call the agent to say that they have been changed or if they supported some sort of isDirty() method.
In our systems, we have a metrics system that the various components use. Each of the classes increments its own metric, and it is the metrics that are wired into a persister. You can request the metric values using JMX, or persist them to disk or the wire. By using a Metric type, there is separation between the entity doing the counting and the entities that need access to all of the metric values.
By going to a registered Metric object model, your GUI could then query the MetricRegistrar for all of the metrics and display them via JMX, HTML, or whatever. Your entities would just call metric.increment() or metric.set(...), and the GUI would query the metric whenever it needed the value.
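A tiny sketch of that metric-registry idea (all names here are made up; libraries such as Dropwizard Metrics or Micrometer provide production-grade versions of the same pattern):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Entities only touch their own Metric; readers (JMX, HTML, a GUI poller)
// query the registrar, so counting and reporting stay decoupled.
public class MetricRegistrar {

    public static class Metric {
        private final AtomicLong value = new AtomicLong();
        public void increment() { value.incrementAndGet(); }
        public void set(long v) { value.set(v); }
        public long get() { return value.get(); }
    }

    private final Map<String, Metric> metrics = new ConcurrentHashMap<>();

    // Look up or lazily create the named metric.
    public Metric metric(String name) {
        return metrics.computeIfAbsent(name, n -> new Metric());
    }

    // Everything a GUI or exporter needs to display current values.
    public Map<String, Metric> all() { return metrics; }
}
```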
Hope something here helps.
Being efficient here means staying inside the mbean server that contains the beans you're looking at. What you want is a way to convert the mbeans that don't know how to issue notifications into mbeans that do.
For watching numeric and string attributes, you can use the standard mbeans in the monitor package. Instantiate those in the mbean server that contains the beans you actually want to watch, and then set the properties appropriately. You can do this without adding code to the target because the monitor package is standard in the JVM. The monitor beans will watch the objects you select for changes and will emit change notifications only when actual changes are observed. Use setGranularityPeriod to tell the monitor beans how often to look at the target.
Once the monitor beans are in place, just register for the MonitorNotifications that will be created upon change.
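For example, a GaugeMonitor from the standard javax.management.monitor package can watch a numeric attribute of an existing MBean without any change to the target. The QueueStats bean, the attribute name "Depth", the ObjectName, and the thresholds below are made up for illustration:

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.monitor.GaugeMonitor;
import javax.management.monitor.MonitorNotification;

public class AttributeWatcher {

    // A target MBean we want to watch without modifying it.
    public interface QueueStatsMBean {
        int getDepth();
    }

    public static class QueueStats implements QueueStatsMBean {
        public volatile int depth;
        @Override
        public int getDepth() {
            return depth;
        }
    }

    // Installs a GaugeMonitor next to the target bean. The monitor polls the
    // "Depth" attribute itself, so the target needs no notification code.
    public static GaugeMonitor watch(MBeanServer server, ObjectName target) throws Exception {
        GaugeMonitor monitor = new GaugeMonitor();
        server.registerMBean(monitor, new ObjectName("demo:type=Monitor,name=DepthWatcher"));
        monitor.addObservedObject(target);
        monitor.setObservedAttribute("Depth");
        monitor.setGranularityPeriod(500); // look at the target every 500 ms
        monitor.setNotifyHigh(true);
        monitor.setNotifyLow(true);
        monitor.setThresholds(100, 10);    // high and low watermarks
        monitor.addNotificationListener((notification, handback) -> {
            if (notification instanceof MonitorNotification mn) {
                System.out.println("Threshold crossed: " + mn.getType());
            }
        }, null, null);
        monitor.start();
        return monitor;
    }
}
```

Once started, the monitor does the polling inside the MBean server, and your client only receives traffic when a watched value actually crosses a threshold, rather than on every poll.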
Not a solution per se, but you can simplify your polling-to-event translator JMXAgent implementation using Spring Integration. It has something called a JMX Attribute Polling Channel, which seems to fulfill your need. Example here.