This might seem like an easy thing to find an answer to on the internet, but believe me, I have been through a lot of examples and couldn't figure out which approach to choose.
Requirement:
I have a subscriber on the application-service side (Spring Boot/Java), subscribed to blockchain (Corda) events. I want to push these events to the UI (ReactJS) whenever there is a change in state.
I can subscribe to the blockchain events successfully, but I'm stuck with multiple incomplete or tangled ideas about how to push the events to the UI and how the UI would receive them (kindly don't suggest paid services, APIs, libraries, etc.).
I have come across and tried out the following approaches; since I'm new to working with events, I need some ray of light as to how to arrive at a complete solution.
Publisher-subscriber pattern
Observable pattern
SSE emitter
Flux & Mono
Firebase ( a clear NO )
What puzzles me:
Should event handling between the service and the UI go via API/endpoint calls, or can an event just be emitted "into the air" (I'm not clear on this) and subscribed to in the UI by event name?
Should I have two APIs dedicated to this: one to trigger the subscription and another that actually runs the emitter?
If the endpoint is always listening, doesn't it need a dedicated resource?
I basically need a CLEAR approach to handle this.
Code can be provided on demand.
I see you mention you are able to capture the events in Spring Boot, so you are left with sending the event information to the front end. I can think of three ways to do this.
WebSockets: Might be overkill, as I suppose you won't need bi-directional communication.
SSE (Server-Sent Events): Perhaps a better choice than WebSockets, since you only need server-to-client pushes.
Or simply polling: Not a bad choice either, if you are not looking for real-time notifications.
Yes, long polling.
The idea is pretty simple: make the connection once and let the request wait for as long as possible, so that if any new data arrives at the server in the meantime, the server can send the response back immediately. This way you definitely reduce the number of request/response cycles involved.
You will find multiple examples of how long polling is implemented in a Spring Boot project on the internet.
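The core "hold the request until data arrives" mechanic can be sketched in plain Java with a BlockingQueue (the class and method names here are illustrative, not from Spring; in a real Spring Boot app you would typically wrap this in a DeferredResult or SseEmitter):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the long-polling idea: the "endpoint" blocks
// until an event arrives or a timeout expires, instead of returning
// immediately with "no data".
public class LongPollDemo {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    // Called by the blockchain subscriber when a new state change arrives.
    public void publish(String event) {
        events.offer(event);
    }

    // Called per client request; holds the call open up to timeoutMillis.
    // Returns null if nothing arrived in time (the client then re-polls).
    public String poll(long timeoutMillis) throws InterruptedException {
        return events.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        LongPollDemo demo = new LongPollDemo();
        demo.publish("state-changed: 123abc");
        System.out.println(demo.poll(1000)); // returns the queued event immediately
        System.out.println(demo.poll(100));  // nothing arrives, returns null
    }
}
```

The React side would simply re-issue the request whenever the previous one completes, whether with data or with an empty/timeout response.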
We were looking into rerunning handpicked events from axon. A use case would be along the lines of the following.
If the user's registration event failed even though the command was successful, we want to rerun that particular event for that specific event handler(s).
We looked into using the tracking event processor, but that seems to be for replaying a set of events FROM a specific point in time. In our case, however, if there were 100 events yesterday, we would only want to rerun one particular event in the middle.
At the moment we are in the process of migrating to Axon, and as such have decided to go predominantly with SubscriptionEventProcessors, as they are more synchronous (i.e. errors are propagated up to the command handler). We do understand that subscription processors are stateless, in that they only process what they receive via the event bus.
So I am assuming that we cannot use the tracking processor, and instead need to load the particular event and re-push it to the event bus?
How could we achieve this? (With or without the above suggestion)
Also, with regards to identifying exceptions, we are thinking of using aspects, logging, and reading the particular log line for exceptions. However, we did notice a tracing module for Axon and Spring Boot: https://github.com/AxonFramework/extension-tracing. It is mentioned to be in beta, and we could not find much reference documentation for it yet either. Is there a better, more Axon-based solution to this as well?
To be quick about it, I would use a similar response as I have posted on this question.
To rehash my response over there quickly, it can be summarized as follows:
Create a Query Model containing the required #EventHandler annotated methods to "re-handle", which you'd provide as input to the constructor of Axon's AnnotationEventHandlerAdapter. You would then call the resulting AnnotationEventHandlerAdapter with an event stream filtered according to your requirements. As a result, the Query Model will be updated to the format you need.
Thus, instead of performing this as a query because a user requires that information, you would perform the operation upon an exceptional case. Either way, you still build the Query Model anew, just based on a specific set of events.
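Conceptually, the "re-handle only a filtered stream" step looks like the sketch below. This is plain Java standing in for Axon's AnnotationEventHandlerAdapter; the event, model, and method names are all illustrative, not Axon API:

```java
import java.util.List;
import java.util.function.Predicate;

// Plain-Java stand-in for the idea behind re-handling with
// AnnotationEventHandlerAdapter: take the stored event stream, keep only
// the events you want to re-handle, and feed them to the query model.
public class ReplaySketch {

    // Stand-in for a stored domain event.
    record UserRegistered(String userId) {}

    // Stand-in for a query model with an @EventHandler-style method.
    static class RegistrationQueryModel {
        int handled = 0;
        void on(UserRegistered event) { handled++; }
    }

    static void rehandle(List<UserRegistered> eventStore,
                         Predicate<UserRegistered> filter,
                         RegistrationQueryModel model) {
        // Only the filtered events reach the handler, not everything
        // "FROM a point in time" as a tracking-processor replay would.
        eventStore.stream().filter(filter).forEach(model::on);
    }

    public static void main(String[] args) {
        var store = List.of(new UserRegistered("a"), new UserRegistered("b"),
                            new UserRegistered("c"));
        var model = new RegistrationQueryModel();
        // Re-handle only the single failed event in the middle of the stream.
        rehandle(store, e -> e.userId().equals("b"), model);
        System.out.println(model.handled); // 1
    }
}
```

In the real thing, the event store read and the handler invocation are done by Axon; the part you own is the filter and the choice of model.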
By the way, when it comes to the choice of Event Processor, I'd still go for the TrackingEventProcessor. Yes, it means a failure while handling an event will not issue a rollback, but in essence an event states something that has occurred. Rolling back because something else fails while handling that event would be incorrect; the command still succeeded, so that should remain the case.
Lastly, you are looking for logging logic. The Tracing Extension you've shared has just recently been pulled from its beta status and can thus be used safely. Next, basic logging can already be achieved by configuring the LoggingInterceptor as a MessageHandlerInterceptor and MessageDispatchInterceptor (documentation on this can be found here). And if you are looking to introduce Aspect-style logic: Axon has a similar mechanism to tie into every message handler out there. Have a look at the HandlerEnhancer/HandlerEnhancerDefinition on this page.
Hope all this helps you out, @MilindaD!
I am working with Spring Batch Admin API to have Admin Screens working for my batch Jobs.
I am using this client
In the above code of BatchJobInstancesController, the UI is very slow for the endpoint instancesForJob(...). It is slow because too much unnecessary data, not needed by the UI, is being added.
So I am trying to write a new service or endpoint that replaces/overrides only that endpoint or service, while disturbing the Angular client as little as possible.
How to approach it?
The service method SimpleJobService.getJobExecutionsForJobInstance needs to be overridden to change the logic.
How can I disable only that endpoint and plug in new code?
Is it possible for the new code to serve at the same URL?
I mean, this seems like a common scenario where you are using somebody else's N services but want to tweak only a few of them.
EDIT: No answer for a long time; I will try along the lines mentioned here.
Within Java you can create an Observer/Observable pair of classes in which the Observable calls the Observer. You can also explicitly keep a reference to an owning class instance inside a child instance of another class, and call the owning instance's public methods.
Which is the better approach to take? Which is more beneficial in different scenarios, one example being Multi-Threading?
The Observer Pattern should be used whenever you don't know or don't care who is observing you. This is the key concept in event-driven programming. You don't have any control over who is observing or what they do when you broadcast your events. Like you already mentioned in your comments, this is great for decoupling classes.
An example of a usage could be in a plugin-architecture:
You write a basic mail-server that broadcasts whenever a mail is received. You could then have a spam-plugin that validates the incoming mail, an auto-reply service that sends a reply, a forward service that redirects the mail and so on. Your plain mail server (the observable) doesn't know anything about spam, replies or forwarding. It just shouts out "Hey, a new mail is here" not knowing if anyone is listening. Then each of the plugins (the observers) does their own special thing, not knowing anything about each other. This system is very flexible and could easily be extended.
But the flexibility provided by the Observer Pattern is a two-edged sword. In the mail-server example, each plugin handles the incoming mail in total isolation from the others. This makes it impossible to set up rules like "don't reply to or forward spam", because the observers don't know about each other - and even if they did, they wouldn't know in what order they are executed or which have completed. So for the basic mail server to solve this problem, it'll need to hold references to the instances that perform the spam/reply/forward actions.
So the Observer Pattern provides flexibility. You could easily add a new anti-virus plugin later, without having to modify your plain mail server code. The cost of this flexibility is loss of control of the flow of actions.
The reference approach gives you total control of the flow of actions. However you would need to modify your plain mail server code if you ever need to add support for an anti-virus plugin.
I hope this example gives you some ideas of the pros and cons of each approach.
With regard to multi-threading, neither approach is inherently favorable over the other.
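The plugin example above can be sketched in a few lines of plain Java, using Consumer as the observer interface rather than the long-deprecated java.util.Observer (all class names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of the mail-server example: the observable broadcasts
// without knowing who is listening; each plugin registers itself.
public class MailServer {
    private final List<Consumer<String>> observers = new ArrayList<>();

    // Plugins (observers) register themselves here.
    public void subscribe(Consumer<String> observer) {
        observers.add(observer);
    }

    // "Hey, a new mail is here" - broadcast to whoever is listening,
    // in registration order, with no knowledge of what they do with it.
    public void receiveMail(String mail) {
        observers.forEach(o -> o.accept(mail));
    }

    public static void main(String[] args) {
        MailServer server = new MailServer();
        server.subscribe(mail -> System.out.println("spam-check: " + mail));
        server.subscribe(mail -> System.out.println("auto-reply: " + mail));
        server.receiveMail("hello");
    }
}
```

Note how the flexibility/control trade-off shows up directly in the code: adding an anti-virus plugin is one more subscribe call and no change to MailServer, but nothing here lets the auto-reply observer know what the spam-check observer decided.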
I have several similar systems which are authoritative for different parts of my data, but there's no way I can tell just from my "keys" which system owns which entities.
I'm working to build this system on top of AMQP (RabbitMQ), and it seems like the best way to handle this would be:
Create a Fanout exchange, called thingInfo, and have all of my other systems bind their own anonymous queues to that exchange.
Send a message out to the exchange: {"thingId": "123abc"}, and set a reply_to queue.
Wait for a single one of the remote hosts to reply to my message, or for some timeout to occur.
Is this the best way to go about solving this sort of problem? Or is there a better way to structure what I'm looking for? This feels mostly like the RPC example from the RabbitMQ docs, except I feel like using a broadcast exchange complicates things.
I think I'm basically trying to emulate the model described for MCollective's Message Flow. But while MCollective generally expects more than one response, in this case I would expect/require precisely one response or, preferably, a clear "nope, don't have it, go fish" response from "everyone" (if it's even possible to know that in this sort of architecture).
Perhaps another model that mostly fits is "Scatter-Gather"? It seems there's support for this in Spring Integration.
It's a reasonable architecture (have the uninterested consumers simply ignore the message).
If there's some way to extract the pertinent data that the consumers use to decide interest into headers, then you can gain some efficiency by using a topic exchange instead of a fanout.
In either case, it gets tricky if more than one consumer might reply.
As you say, you can use a timeout for the case where zero consumers reply. But if you think that might happen frequently, you may be better off using arbitrary two-way messaging and doing the reply correlation in your own code, rather than using request/reply and tying up a thread waiting (and eventually timing out) for a reply that will never come.
This could also deal with the multi-reply case.
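The "do the reply correlation in your own code" idea can be sketched with a correlation-id map and futures, so no thread blocks while waiting. This is plain Java with illustrative names; in practice onReply would be invoked from your AMQP consumer callback, keyed by the message's correlation_id property:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Sketch of reply correlation: each outbound query registers a pending
// future under its correlation id; whichever system owns the entity
// completes it. Multiple replies and zero replies are both easy to handle.
public class ReplyCorrelator {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when we publish the query for a thingId; returns a future reply.
    public CompletableFuture<String> expectReply(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Called from the consumer callback when a reply message arrives.
    // remove() means a second reply for the same id is silently ignored.
    public void onReply(String correlationId, String body) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(body);
        }
    }

    public static void main(String[] args) throws Exception {
        ReplyCorrelator correlator = new ReplyCorrelator();
        CompletableFuture<String> reply = correlator.expectReply("123abc");
        correlator.onReply("123abc", "{\"owner\":\"system-B\"}");
        // No thread was parked while waiting; a timeout is just get(...):
        System.out.println(reply.get(1, TimeUnit.SECONDS));
    }
}
```

The zero-reply case then becomes a timeout (or orTimeout) on the future instead of a thread stuck in a blocking receive.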
I'm working on a Twitter app right now using twitter4j. Basically, I am allowing users to create accounts, add their personal keywords, and start their own stream.
I'm trying to play as nicely as possible with the Twitter API: avoid rate limits, don't connect the same account over and over, etc. So what I think I need is some object that holds all the active TwitterStream objects, but I don't know how to approach this. This is the controller that starts the stream:
public static Result startStream() {
    ObjectNode result = Json.newObject();
    // openStreams is a Map<Long, TwitterStream> in the TwitterListener class
    if (TwitterListener.openStreams.containsKey(Long.parseLong(session().get("id")))) {
        result.put("status", "running");
        return ok(result);
    }
    Cache.set("twitterStream", TwitterListener.listener(
            Person.find.byId(Long.parseLong(session().get("id")))));
    result.put("status", "OK");
    return ok(result);
}
As you can see I am putting them in Cache right now but I'd like to keep streams open for long periods of time, so cache won't suffice.
What is the most appropriate way to structure my application for this purpose?
Should I be using Akka?
How could I implement Play's Global object to do this?
As soon as you start to think about introducing global state in your application, you have to ask yourself: is there any possibility that I might want to scale to multiple nodes, or have multiple nodes for the purposes of redundancy? If there's even the slightest chance the answer is yes, then you should use Akka. Akka will allow you to easily adapt your code to a multi-node environment with Akka clustering, simply by introducing a consistent hashing router. If you don't use Akka, you'll practically have to redesign your application when the requirement for multiple nodes comes in.
So I'm going to assume that you want to future proof your application, and explain how to use Akka (Akka is a nice way of managing global state anyway even if you don't need multiple nodes).
So in Akka, what you want is a stream manager actor. It will be responsible for creating stream actors as its children if they don't already exist. The stream actors will then be responsible for handling the stream, sending it to subscribers, tracking how many connections are subscribed to them, and shutting down when there are no longer any subscribers.
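Independent of Akka, the core of the stream manager is "create the stream for this user only if it isn't already running, and tear it down when the last subscriber leaves". A plain-Java sketch of that bookkeeping (all names illustrative; an Akka stream manager actor would keep the same state as its child actors instead of a map):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a stream manager: at most one stream per user id, created on
// demand, removed when the last subscriber unsubscribes. This avoids
// re-connecting the same Twitter account over and over.
public class StreamManager {

    // Stand-in for a wrapped TwitterStream plus its subscriber count.
    static class Stream {
        final long userId;
        final AtomicInteger subscribers = new AtomicInteger();
        Stream(long userId) { this.userId = userId; }
    }

    private final Map<Long, Stream> openStreams = new ConcurrentHashMap<>();

    // Creates the stream only if it is not already running for this user.
    public Stream subscribe(long userId) {
        Stream stream = openStreams.computeIfAbsent(userId, Stream::new);
        stream.subscribers.incrementAndGet();
        return stream;
    }

    public void unsubscribe(long userId) {
        openStreams.computeIfPresent(userId, (id, stream) ->
            // Returning null removes the entry, i.e. "shuts down" the
            // stream when the last subscriber leaves.
            stream.subscribers.decrementAndGet() == 0 ? null : stream);
    }

    public boolean isRunning(long userId) {
        return openStreams.containsKey(userId);
    }
}
```

With Akka, subscribe/unsubscribe become messages to the manager actor and the map becomes its children, which is what makes the consistent-hashing-router scaling step straightforward later.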