I am working on a service in which I have to perform some events, log them, and return the results. I don't want the user to wait for logging to complete: the user should get the results immediately while logging continues in the background. Any suggestions?
a()
b()
...
g()        // all these functions compute the results the user wants
logging()  // this takes time
return results
If logging is an overhead for you, and you want that to be an asynchronous process, then there are definitely ways to achieve this:
You can create your own handler, i.e. create a FIFO queue to which you submit all your log strings, while a separate thread or process reads and writes these messages. In your original flow you then only add the log message to the queue and move ahead. This involves reinventing the wheel, of course, but you have the freedom to do exactly what your project needs; a minimal sketch follows below.
You may want to look at this answer
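For illustration, here is a minimal sketch of that queue-based handler, using a java.util.concurrent.BlockingQueue and a daemon consumer thread; the class and method names are made up for this example:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncLogger {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public AsyncLogger() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    // blocks until a message is available, then writes it
                    System.out.println(queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true); // don't keep the JVM alive just for logging
        consumer.start();
    }

    // called from the request flow; returns immediately
    public void log(String message) {
        queue.offer(message);
    }
}

With this in place, a() through g() compute the results, log(...) only enqueues the message, and the method can return to the user right away.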
You can leverage an existing framework, like Log4j, which provides several options for asynchronous logging via specific async appenders.
You can find details about it here
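As a rough illustration of the Log4j 1.x route, an AsyncAppender can wrap any regular appender so the calling thread only enqueues the event; the appender choice and layout here are arbitrary:

import org.apache.log4j.AsyncAppender;
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class AsyncLoggingSetup {
    public static void main(String[] args) {
        // the AsyncAppender buffers events; a background thread
        // forwards them to the wrapped appender(s)
        AsyncAppender async = new AsyncAppender();
        async.addAppender(new ConsoleAppender(new PatternLayout("%d %p %m%n")));
        Logger.getRootLogger().addAppender(async);

        Logger.getRootLogger().info("returns to the caller almost immediately");
    }
}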
I have a question about Axon Sagas. I have a project with three microservices, each with its own database, but the two "Slave" microservices have to share their data with the "Master" microservice; for that I want to use the Axon Saga. I already asked a question about compensation when something goes wrong, and having to deal with the compensation myself is OK, but not ideal. Currently I am using the DistributedCommandBus to communicate between the microservices; is it a good fit for that? I am using the Choreography Saga model, so here is what it looks like now:
Master -> Send command -> Slave1 -> Handles event
Slave1 -> Send back command -> Master -> Handles event
Master -> Send command -> Slave2 -> Handles event
Slave2 -> Send back command -> Master -> Handles event
If something goes wrong, compensating commands/events are sent backwards along the same chain.
My question is: has anybody done something like this with Axon, with compensation? What are the best practices for that? How can I retry the Saga process, with the RetryScheduler? Please add a GitHub repo if you can.
Thanks, Máté
First and foremost, let me answer your main question:
My question is: has anybody done something like this with Axon?
In short: yes, as this is one of the main use cases for Sagas.
As a rule of thumb, I'd like to state a Saga can be used to coordinate a complex business transaction between:
Several distinct Aggregate Instances
Several Bounded Contexts
At face value, it seems you've landed on option two: delegating a complex business transaction.
It is important to note that when you are using Sagas, you should very consciously deal with any exceptions and/or command dispatching results.
Thus, if you dispatch a command from the "Master" to "Slave 1" and the latter fails the operation, this result will come back into the Saga.
This gives you the first option to retry an operation, which I would suggest doing with a compensating action.
To be clear, by a compensating action I mean dispatching a command that triggers it.
If you cannot rely on the direct response from dispatching the command, retrying/rescheduling a message within the Saga would be a reasonable second option.
To that end, Axon has the EventScheduler and DeadlineManager.
Note that the former of the two publishes an event for everyone to see.
The latter schedules a DeadlineMessage within the context of that single Saga instance, thus limiting the scope of who can see a retry is occurring.
Typically, the DeadlineManager would be my preferred mode of operation for this, unless you require this 'rescheduling action' to be seen by everybody.
FYI, check this page for EventScheduler information and this page for DeadlineManager info.
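To give a feel for the DeadlineManager route, here is a rough sketch assuming Axon 4's API; the deadline name and MyActionCommand are placeholders for this example:

import java.time.Duration;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.deadline.DeadlineManager;
import org.axonframework.deadline.annotation.DeadlineHandler;

class RetryingSaga {

    private transient CommandGateway commandGateway;
    private transient DeadlineManager deadlineManager;

    // call this after dispatching a command you may need to retry
    void scheduleRetry() {
        deadlineManager.schedule(Duration.ofMinutes(5), "slave1-retry");
    }

    @DeadlineHandler(deadlineName = "slave1-retry")
    public void onRetryDeadline() {
        // fires only within this saga instance, invisible to other components
        commandGateway.send(new MyActionCommand(/* ... */));
    }
}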
Sample Update
Here's a bit of pseudo-code to give a feel for what a compensating action in a Saga event handler would look like:
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.saga.SagaEventHandler;

class SomeSaga {

    private transient CommandGateway commandGateway;

    @SagaEventHandler(associationProperty = "some-key")
    public void on(SomeEvent event) {
        // perform some checks, validation and state setting, if necessary
        commandGateway.send(new MyActionCommand(/* ... */))
                      .exceptionally(throwable -> {
                          // the action failed, so dispatch a compensating command
                          commandGateway.send(new CompensatingAction(/* ... */));
                          return null;
                      });
    }
}
I don't know your exact use case, but from this and your previous question I get the impression you want to roll back, or in this case undo, the event if one of the event handlers cannot process it.
In general, there are some things you can do. You can check whether the aggregate that applied the event in the first place has (or can have) the information to determine whether the 'slave' microservice will be able to handle the event, before you apply it. If this isn't practical, the slave microservice can also publish a 'failure' event directly on the event bus, to inform the rest of the system that a failure state has occurred and needs to be handled:
https://docs.axoniq.io/reference-guide/implementing-domain-logic/event-handling/dispatching-events#dispatching-events-from-a-non-aggregate
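As a small sketch of that second option, assuming Axon's EventGateway; ProcessingFailedEvent is a hypothetical failure event, and SomeEvent is the event type from the sample above:

import org.axonframework.eventhandling.gateway.EventGateway;

// hypothetical failure event carrying enough context to compensate
class ProcessingFailedEvent {
    final String reason;
    ProcessingFailedEvent(String reason) { this.reason = reason; }
}

class SlaveEventHandler {

    private final EventGateway eventGateway;

    SlaveEventHandler(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    public void on(SomeEvent event) {
        try {
            handle(event); // the actual processing
        } catch (Exception e) {
            // tell the rest of the system a failure state occurred,
            // so compensating actions can be triggered
            eventGateway.publish(new ProcessingFailedEvent(e.getMessage()));
        }
    }

    private void handle(SomeEvent event) { /* ... */ }
}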
I have a bunch of independent pieces of work that I need processes to perform. These pieces of work can be performed in any order, and they last long enough that processes sometimes fail when work is being performed.
I need to coordinate the assignment of these pieces of work, and Curator's DistributedQueue seems like it is almost what I want. I don't need the ordering it provides, though, so I am curious what level of overhead I am paying for it, assuming I decline to have a single consumer (i.e. each process just consumes from the queue).
My main concern is how the lockPath() function on the queue builder actually works. I need the functionality it provides, because it is possible for processes to fail and I need to not be dropping the jobs they were supposed to be doing. But what I don't want is for only one process to be able to do any work at a time. If I use lockPath(), will the queue block for other processes while a process is consuming a message?
Also, if the queue seems like an unreasonable approach, is there another tool available to achieve what I want, or would I have to roll my own? I want to stay within the Curator / ZK environment but am open to alternatives within that.
(Note: I'm the main author of Apache Curator)
The documentation needs to be improved. The lock is used to make the queue entry retry-able, i.e. the entry in the queue is not removed until the consumer finishes, and the lock assures that only one process is acting on the entry. If you don't care about dropping queue entries on failure, you don't need to use the lock. With or without the lock, though, each consumer that you run processes queue entries. So, if you want concurrent processing of the queue, you'd run multiple consumers (in the same JVM or in separate JVMs - it doesn't matter).
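For reference, a minimal sketch of wiring this up; the ZooKeeper paths and the String payload type are arbitrary:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.queue.DistributedQueue;
import org.apache.curator.framework.recipes.queue.QueueBuilder;
import org.apache.curator.framework.recipes.queue.QueueConsumer;
import org.apache.curator.framework.recipes.queue.QueueSerializer;

// One consumer per process; run several processes (or several queues in one
// JVM) to get concurrent processing.
DistributedQueue<String> buildQueue(CuratorFramework client,
                                    QueueConsumer<String> consumer,
                                    QueueSerializer<String> serializer) throws Exception {
    DistributedQueue<String> queue = QueueBuilder
            .builder(client, consumer, serializer, "/example/queue")
            .lockPath("/example/queue-locks") // makes entries retry-able on consumer failure
            .buildQueue();
    queue.start();
    return queue;
}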
Here's a workflow engine I wrote that uses the Curator queue to do distributed work. Feel free to use it as it is open source: http://nirmataoss.github.io/workflow/
I have to coordinate 5 separate microservices, e.g. A, B, C, D, E.
I need to create a coordinator which might monitor a queue for new jobs for A. If A completes OK, then a REST request should be sent to B; if everything is OK (happy path), then C is called, and so on all the way down to E.
However, B, C, etc. might fail for one reason or another, e.g. an endpoint is down or credentials are insufficient, causing the flow to fail at a particular stage. I'd like to be able to create something that can check the status of a failed job and rerun it, e.g. let's try B again; OK, now it works, and the flow continues.
Any tips or advice on patterns/frameworks to do this? I'd like something fairly simple, not overly complex.
I've already looked briefly at Netflix Conductor / Camunda but ideally I'd like something a bit less complex.
Thanks
W
Any tips or advice on patterns/frameworks to do this? I'd like something fairly simple, not overly complex.
What you describe is the good ol' domain of A, B, C, D and E. Because the dependencies and engagement rules between the letters are complex enough, it's good to create a dedicated service for this domain. It could be as simple as this overarching service just being triggered by queue events.
The only other alternative is to do more on the client side and organize the service calls from there. But that isn't feasible in every domain for security reasons or other issues.
And since it sounds like you've already got an event queue going, I won't recommend one (e.g. Kafka).
One way, apart from Camunda or Conductor, is to send an event from service A to a messaging queue (e.g. Kafka) that provides at-least-once delivery semantics.
Then write a consumer which receives the event and does the orchestration part (talking to services B, C, D and E).
All of these operations need to be idempotent, so before starting the orchestration, create a RequestAgg. for the event from A and keep updating its state to represent how far you have got in the orchestration journey.
Now, even if the other services are down or your node goes down, the flow should either reach the end, or you should write rollback functions as well.
And to check the states and debug, you can look at the read model of the RequestAgg.
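A bare-bones sketch of that consumer-side orchestration; the StateStore and ServiceClient abstractions are made up here, standing in for the RequestAgg. persistence and the REST clients for B through E:

import java.util.List;

interface StateStore {
    int lastCompletedStep(String requestId);  // -1 if nothing done yet
    void markCompleted(String requestId, int step);
}

interface ServiceClient {
    boolean call(String requestId);           // true on success
}

class RequestOrchestrator {

    private final StateStore stateStore;          // persists the RequestAgg. state
    private final List<ServiceClient> clients;    // clients for B, C, D, E in order

    RequestOrchestrator(StateStore stateStore, List<ServiceClient> clients) {
        this.stateStore = stateStore;
        this.clients = clients;
    }

    // invoked for every queue event from A; safe to re-run after a crash,
    // because completed steps are recorded and skipped on the next attempt
    void onJobEvent(String requestId) {
        for (int i = stateStore.lastCompletedStep(requestId) + 1; i < clients.size(); i++) {
            if (!clients.get(i).call(requestId)) {
                return; // stop here; a later retry resumes from step i
            }
            stateStore.markCompleted(requestId, i);
        }
    }
}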
I am using SwingWorker to query a server process for a large number of "result" objects on a background thread. As individual results arrive I want to publish them and display them on the GUI.
My question is: given that I will be receiving potentially thousands of results, is it more efficient to call publish(V... chunks) for every N results, or should I just call publish for each event received?
I see that the documentation states that multiple calls to publish will be coalesced into a single call to process but wasn't sure if it was still better to retain some form of control in my own code by throttling when I call publish. What do people recommend?
I say do the simplest thing that works - leave it to the Swing API to perform the throttling and if you run into problems later on it'll be an easy fix to add additional throttling yourself at that time (plus you'll have the justification for doing so).
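For illustration, a minimal sketch of that "simplest thing": publish each result as it arrives and let Swing coalesce the calls. The Result type and the fetch method are placeholders for the real server query:

import java.util.List;
import javax.swing.DefaultListModel;
import javax.swing.SwingWorker;

class Result { }

class ResultWorker extends SwingWorker<Void, Result> {

    private final DefaultListModel<Result> model;

    ResultWorker(DefaultListModel<Result> model) {
        this.model = model;
    }

    @Override
    protected Void doInBackground() throws Exception {
        Result result;
        while ((result = fetchNextResult()) != null) {
            publish(result); // one call per result; Swing batches these
        }
        return null;
    }

    @Override
    protected void process(List<Result> chunks) {
        // runs on the EDT with however many results Swing coalesced
        chunks.forEach(model::addElement);
    }

    private Result fetchNextResult() {
        return null; // placeholder for the real server query
    }
}

Starting it is just new ResultWorker(model).execute() from the EDT; process(...) then keeps the GUI updated as chunks arrive.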
I would like to build an Appender (or something similar) that inspects events and, under certain conditions, logs new events.
An example would be an escalating Appender that checks whether a certain number of identical events get logged and, if so, logs the event with a higher log level. So you could define something like: if you get more than 10 identical warnings on this logger, make it an error.
So my questions are:
Does something like this already exist?
Is an Appender the right class to implement this behavior?
Are there any traps you could think of I should look out for?
Clarification:
I am fine with the algorithm of gathering and analysing the events; I'll do that with a collection inside the appender. Persistence is not necessary for my purpose. My question #2 is: is an appender the right place to do this? After all, it is not normal behaviour for an appender to create logging entries.
You can create your own appender by implementing the Appender interface provided by log4j.
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Appender.html
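A minimal sketch of that approach, extending log4j 1.2's AppenderSkeleton (which implements the Appender interface). The threshold and the in-memory counting are illustrative, and as the answers below note, you may prefer to hand the escalation off to another thread and to persist the counts:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

public class EscalatingAppender extends AppenderSkeleton {

    private static final int THRESHOLD = 10;
    private final Map<String, Integer> warningCounts = new ConcurrentHashMap<>();

    @Override
    protected void append(LoggingEvent event) {
        if (!Level.WARN.equals(event.getLevel())) {
            return;
        }
        String key = event.getLoggerName() + "|" + event.getRenderedMessage();
        int count = warningCounts.merge(key, 1, Integer::sum);
        if (count == THRESHOLD) {
            warningCounts.remove(key);
            // emits a *new* event; the WARN guard above keeps the ERROR
            // we log here from re-entering this branch
            Logger.getLogger(event.getLoggerName())
                  .error("Escalated after " + THRESHOLD + " identical warnings: "
                          + event.getRenderedMessage());
        }
    }

    @Override
    public void close() { warningCounts.clear(); }

    @Override
    public boolean requiresLayout() { return false; }
}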
That would be one approach. Another would be to use an existing appender and then write some code that monitors the log. For example, you could log to the database and then write a process that monitors the log entries in the database and creates meta-events based on what it sees.
It depends most on what you're comfortable with. One question you'll have to deal with is how to look back in the log to create your meta-events. Either you'll have to accumulate events in your appender or persist them somewhere that you can query to construct your meta-events. The problem with accumulating them is that if you stop and start your process, you'll either have to dump them somewhere so they get picked back up or start over whenever the process restarts.
For example, let's say that I want to create a log entry every 10th time a NullPointerException is thrown. If I have the log entries in a database of some kind, every time an NPE is thrown I run a query to see how many NPEs have been thrown since the last time I created a log entry for them. If I just count them in memory instead, and the application restarts after 5 are thrown without persisting that number, I'll lose count.
Logback (log4j's successor) will allow you to enable logging for any event via TurboFilters. For example, assuming the same event occurs N or more times in a given timeframe, you could force the event to be accepted (regardless of its level). See also DuplicateMessageFilter which does the inverse (denying re-occurring events).
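As a rough sketch of such a TurboFilter; the keying by message format and the threshold are illustrative, and a real implementation would also need a time window and eviction:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.slf4j.Marker;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;

public class EscalationTurboFilter extends TurboFilter {

    private int threshold = 10;
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    @Override
    public FilterReply decide(Marker marker, Logger logger, Level level,
                              String format, Object[] params, Throwable t) {
        if (format == null) {
            return FilterReply.NEUTRAL;
        }
        int seen = counts.merge(format, 1, Integer::sum);
        // force the event through regardless of its level once the threshold is hit
        return seen >= threshold ? FilterReply.ACCEPT : FilterReply.NEUTRAL;
    }

    public void setThreshold(int threshold) { this.threshold = threshold; }
}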
However, even logback will not allow the level of the logging event to be incremented, and neither will log4j. Neither framework is designed for this, and I would discourage you from attempting to increment the level on the fly within the same thread. On the other hand, incrementing the level during post-processing is a different matter altogether. An additional possibility is to have your turbo-filter signal another thread to generate a new logging event with a higher level.
It was not clear from your question why you wished the level to be incremented. Was incrementing the level a goal in itself, or was it a means to an end, namely having the event logged regardless of its level? If the latter, then logback's TurboFilters are the way to go.
HTH,
As Rafe already pointed out, the greatest challenge would be persisting the actual events in the Appender, so that you'll know the time has come to trigger your event (e.g. escalate log level).
Therefore, I propose the following strategy:
Use a custom JDBCAppender. Unlike the one bundled with Log4j, this one can log exceptions.
Set up an embedded database, like HSQLDB, with one table for event logging. This solves the persistence problem, as you can use SQL to find the types of events that occurred.
Run a separate thread that monitors the database and detects the desired event patterns.
Use a LogManager to access the desired Loggers and set their level manually.