Spring Events and service restart - java

I want to use Spring Events in a Spring Boot service. I've done all the development: the event is published at some point in my code and is received in the listener. So far, so good.
When I showed this to a colleague, he made a great point: what happens if we restart the service? Could some events be lost?
To give you more context, we deploy our services to AWS using Kubernetes. Personally, I don't expect much load on this service, so the chance of an event waiting to be consumed while the service is shutting down is quite low. However, it could happen. And I'd also like to know the answer in order to decide whether to use Spring Events in other scenarios.

Spring Events are not meant for "serious" communication. By serious I mean there are no guarantees about reliability; it's just simple, in-process message passing between different parts of the application. It's hard to say how likely it is that events would actually be lost, but there are no guarantees that they won't be.
If you have important messages that can't get lost, you need a proper message queue with transactional support. Message queues make guarantees about delivery, but of course that means an additional component altogether, and it might be overkill for simple things.
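For reference, here is a minimal sketch of the pattern under discussion, assuming Spring Boot on Java 17+; the event and class names are invented for illustration. Everything lives in memory inside the running JVM, which is why a restart drops anything not yet handled.

    import org.springframework.context.ApplicationEventPublisher;
    import org.springframework.context.event.EventListener;
    import org.springframework.stereotype.Component;
    import org.springframework.stereotype.Service;

    // Hypothetical event type; plain objects work as events since Spring 4.2.
    record OrderCreatedEvent(String orderId) {}

    @Service
    class OrderService {
        private final ApplicationEventPublisher publisher;

        OrderService(ApplicationEventPublisher publisher) {
            this.publisher = publisher;
        }

        void createOrder(String orderId) {
            // ... persist the order ...
            // The event exists only in memory; nothing is written to disk or a broker.
            publisher.publishEvent(new OrderCreatedEvent(orderId));
        }
    }

    @Component
    class OrderCreatedListener {
        @EventListener
        void on(OrderCreatedEvent event) {
            // If the JVM shuts down before (or while) this runs, the event is simply gone.
            System.out.println("Handling " + event.orderId());
        }
    }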

Related

Best Practices for Micro-service interaction with Event-Sourcing (CQRS)

I have an API call that will need to work with more than one aggregate. I have the two ideas below in my head as to how it should interact with the aggregates, but I'm open to other ideas.
Is it good practice to send commands from one microservice to another one? Or is it better to have an event handler on microservice B that reacts to events from service A and generates the command all within the microservice B?
An important thing to recognize in a service architecture: we want the services to be autonomous. So A should continue working while B is down for maintenance, and vice versa.
This implies that we need to support asynchronous messaging from A to B.
Current "best practice" is that you are dealing with messages being delivered asynchronously, then the semantics should be past tense: SomethingHappened at A, and B will react to it, or not, in its own time at its own discretion.
Does it matter? Hard to say -- handle(Event) is a command, CommandReceived is an event.
Note: this is really just services and messaging -- Event-Sourcing/CQRS really don't enter into it.
Martin Fowler described Domain Events in 2005.
Each Domain Event captures information from the external stimulus.
If you think of A as being external to B (which makes sense, if there are service boundaries between them), then the semantics of the Domain Event pattern may be a very good fit.
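To make the naming concrete, a past-tense domain event could be as small as the following hypothetical class (Java record syntax, names invented): it states a fact that has already happened at A, which B may react to, or not, at its own discretion.

    import java.time.Instant;

    // Hypothetical domain event published by service A: a statement of fact, in the past tense,
    // carrying the information captured from the external stimulus.
    record OrderRegisteredEvent(String orderId, String customerId, Instant occurredAt) {}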
Why not both?
I approach these "micro-services" slightly differently. I usually have a messaging endpoint for each bounded context. I guess this fits in nicely with the micro-service idea and that endpoint only responds to commands sent to it that apply to that BC. It would then also publish the relevant events.
What I then may also have is an orchestration endpoint that responds to process managers that "belong" to the relevant BC. This endpoint only deals with the state of the process managers and issues commands to whichever BC messaging endpoint it needs to talk to. For instance, after an OrderRegisteredEvent has been received, a SendEMailCommand may be issued. OK, that is more of a technical endpoint/BC, but nonetheless.
On the BC-only messaging endpoint there is absolutely no interaction between the different BCs. It is only there to service its own BC.
I hope that makes sense.

Asynchronous Message-Passing and Microservices

I am planning the development of a microservice-based application and I decided to use Kafka for the internal communication. While reading the book Microservice Architecture by Ronnie Mitra, Matt McLarty, Mike Amundsen and Irakli Nadareishvili, I came across this passage:
letting microservices directly interact with message brokers (such as RabbitMQ, etc.) is rarely a good idea. If two microservices are directly communicating via a message-queue channel, they are sharing a data space (the channel) and we have already talked, at length, about the evils of two microservices sharing a data space. Instead, what we can do is encapsulate message-passing behind an independent microservice that can provide message-passing capability, in a loosely coupled way, to all interested microservices.
I am using Netflix Eureka for Service registration and discovery, Zuul as edge server and Hystrix.
That said, in practice, how can I implement that kind of microservice? How can I make my microservices independent from the communication channel (in this case Kafka)?
Right now I'm interacting with the channel directly, so I don't have an extra layer between my publishers/subscribers and Kafka.
UPDATE 06/02/2018
To be more precise, we have a couple of microservices: one publishes news to a topic (ActiveMQ, Kafka...) and the other is subscribed to that topic and does some operations on the messages that come through. So these services are coupled to the message broker (to the channel): we have the message broker's APIs embedded in our code, and if, for example, we want to change the message broker, we have to change every microservice that uses its API. So they are suggesting a microservice (in the picture I assume it is the Events Hub) that acts as the "dispatcher" of the various messages. That way it is the only component that interacts with the channel.
A general foreword: don't do it if you don't need it. Introducing a queue system can be a big improvement if you are dealing with a high number of events, events backing up, and so on. But if you don't face any of those issues, you are probably better off with the lower complexity of direct service communication.
Back to your question: it sounds like you want to abstract your communication with the queue because you are worried about the effort of replacing the queue with a different system. Is that correct?
In this case you can either do what you proposed and develop a new service in the middle. This comes with all the baggage of a physical service (including deployment, scaling, etc.).
Or the second alternative is to write a client library that abstracts the queue the way you want and lets you reuse it in all services that need to participate in the queue. This way you don't have to deploy another physical service, but you are still in full control of what your interface to the queue looks like, and you have a single piece of code in which to incorporate changes (at least on the queue-facing side). This works as long as you are sure the application-facing side of the library can stay stable enough.
But, again, don't do any of those in the first iteration when you are not sure you need all the complexity. (Over-engineering is a dangerous thing)
You could create an interface, let's say Queue, that provides all the functionality you want from Kafka or RabbitMQ, then create different implementations of it, such as KafkaQueue and RabbitMQQueue, and inject whichever implementation you want to use in your system.
That way, if a new queue system is adopted, your existing code will not need to change.
Creating another microservice is an extra overhead in this case.
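As a rough sketch of that interface approach (the names are illustrative; the interface is called MessageQueue here to avoid clashing with java.util.Queue): the rest of the application codes against the interface, and only the broker-specific implementation knows the Kafka API.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Broker-agnostic abstraction the rest of the application depends on.
    interface MessageQueue {
        void publish(String topic, String payload);
    }

    // One implementation per broker; only this class touches the Kafka API.
    class KafkaMessageQueue implements MessageQueue {
        private final KafkaProducer<String, String> producer;

        KafkaMessageQueue(KafkaProducer<String, String> producer) {
            this.producer = producer;
        }

        @Override
        public void publish(String topic, String payload) {
            producer.send(new ProducerRecord<>(topic, payload));
        }
    }

    // A RabbitMQMessageQueue (or ActiveMQMessageQueue) would implement the same interface;
    // switching brokers then means switching the injected implementation, not the callers.

With Spring, the implementation to inject can be selected with a profile or a configuration property, so the callers never see which broker is behind it.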
In a service architecture, the proper way to make your code independent of the constraints of the communication channel is to have properly modelled, self-sufficient messages. Historic examples would be WSDL in document mode, EDIFACT, HATEOAS, etc. From this point of view, microservices with Spring Boot and Kafka are just a different implementation of the same old thing that has been done since mainframes ruled the world.
Essentially, if you view your app as a black-box asynchronous server, everything the app does is receive events and produce new ones. It should not matter how events are raised within the app. HTTP requests, XML within JMS messages, JSON in Kafka, whatever: all of those are just ways to pass events, and the business layer of the application should respond only to the content of the events.
So the business layer is usually structured around some custom model/domain delivered as the payload. The business layer is invoked/triggered by a listener/producer layer which talks to the communication channel (Kafka listener, HTTP listener, etc.). Aside from logging and enforcing security, you should not have communication-channel logic in the app. I have seen unfortunate examples of business logic driven by the originating JMS connection or by parsing the URL of a request. If you ever have this in your code, you have failed to structure it properly.
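A sketch of that layering with invented names (assuming spring-kafka and Jackson on the classpath): the Kafka-specific code stops at a thin adapter, and the business layer only ever sees the deserialized payload.

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;
    import org.springframework.stereotype.Service;

    // Domain payload: no Kafka, JMS or HTTP types in here.
    class NewsPublished {
        public String headline;
        public String body;
    }

    // Business layer: reacts to the content of the event, not to how it arrived.
    @Service
    class NewsService {
        void onNewsPublished(NewsPublished event) {
            // ... business rules only ...
        }
    }

    // Thin adapter: the only place that knows the event arrived via Kafka as JSON.
    @Component
    class NewsKafkaAdapter {
        private final NewsService newsService;
        private final ObjectMapper mapper = new ObjectMapper();

        NewsKafkaAdapter(NewsService newsService) {
            this.newsService = newsService;
        }

        @KafkaListener(topics = "news")
        void listen(String json) throws Exception {
            newsService.onNewsPublished(mapper.readValue(json, NewsPublished.class));
        }
    }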
However that is easier to say than to implement. Some people are good at this level of modeling, and some never learn.
And there is no other way to learn but to try and fail.

Should I consider to use JMS in my case?

I'm not very familiar with JMS, so I can't tell whether I should consider using it in my case.
I have 3 servers (running on Tomcat) which are going to send notifications to another server (call it PrincipalServer) when some event occurs on them. The PrincipalServer runs on Tomcat too. When a notification from one of those 3 servers reaches the PrincipalServer, it needs to handle it in some way depending on the message (for instance, persist some data in a database). Approximately, the rate of notifications would be 500k-1M a day.
So, should I consider some JMS implementation like ActiveMQ?
It depends on a number of factors, but it may provide a benefit in your case. The main benefit provided by JMS is the ability to reliably queue work that can be done later. There are three key reasons in my mind for using JMS over a web service, REST or EJB call. These are:
The client should return prior to this work being processed. If this work has to be done before returning to the client, then don't use JMS; trying to build a synchronous invocation model over JMS, while possible, is choosing a hammer when you have a screw.
The clients may produce bursts of work that the back end can't keep up with. In this case JMS will store the messages until the back end can process the work. Note that the number of messages on the queue still needs to average out to zero; you can't add messages forever.
The back end may go down independently of the front end. In this case the JMS provider will store the messages until the back end comes back up to process the work.
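To give a feel for the moving parts, here is a hedged sketch using the plain JMS 1.1 API with ActiveMQ (since the question mentions it); the broker URL, queue name and class name are placeholders. One of the three servers sends a persistent message; the PrincipalServer consumes and persists at its own pace.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class NotificationExample {

        // Runs on one of the three notifying servers.
        static void send(String payload) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("notifications");
                MessageProducer producer = session.createProducer(queue);
                // PERSISTENT delivery: the broker keeps the message until it is consumed,
                // so the PrincipalServer may be down or busy at this moment.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage(payload));
            } finally {
                connection.close();
            }
        }

        // Runs on the PrincipalServer: consume at its own pace and persist.
        static void receive() throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("notifications"));
            consumer.setMessageListener((Message m) -> {
                // ... extract the payload and persist some data in the database ...
            });
            connection.start();
        }
    }

For scale, 500k-1M notifications a day averages out to roughly 6-12 messages per second, which is a very light load for any mainstream JMS broker.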

Best way to utilize multiple instances of a service

We have a component called Workflow which exposes a SOAP web service. We are trying to introduce asynchronous processing in Workflow by allowing it to consume messages from WebSphere MQ. We also want to utilize multiple instances of Workflow, so there can be 4 instances of Workflow listening to the same queue. The problem is how to make sure all Workflow instances are utilized evenly and no single instance is overloaded.
Workflow is completely written in Java. We use Spring and Hibernate extensively. The processes which will be submitting message to Workflow are written in Java. For message processing and MQ, we use Spring Integration.
The best way to ensure that no Workflow instance is overloaded is to have each individual Workflow instance not consume a message from the message queue that will overload it. In this case, you may not care whether the work is distributed evenly, as long as all the work gets done promptly.
If you really want to make sure all Workflow instances are used evenly even when your load is so light that you don't need all of the instances, you may need to check whether there's a way of reconfiguring WebSphere MQ to distribute messages on a FIFO basis rather than a LIFO basis, or if WebSphere MQ can't be configured that way, to switch to a different message queue. However, I don't recommend this: the system as a whole can work perfectly fine even if, at low loads, only some of the Workflow instances are utilized, with all being utilized only at high loads.
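As a rough sketch of the consumer side (using Spring's JMS listener container, which is also what Spring Integration's message-driven adapter builds on; the queue name and class names are placeholders): each Workflow instance runs its own container with a bounded number of consumer threads, so an instance only takes another message when it has capacity for it, and the broker spreads the load accordingly.

    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;
    import org.springframework.stereotype.Component;

    @Component
    class WorkflowMessageListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            // ... hand the message over to the Workflow processing logic ...
        }
    }

    @Configuration
    class WorkflowListenerConfig {

        // One container per Workflow instance. With at most 5 consumer threads,
        // a busy instance simply stops pulling new messages until a thread frees up,
        // so no single instance gets flooded.
        @Bean
        DefaultMessageListenerContainer workflowListenerContainer(ConnectionFactory connectionFactory,
                                                                  WorkflowMessageListener listener) {
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory); // e.g. a WebSphere MQ connection factory
            container.setDestinationName("WORKFLOW.REQUEST.QUEUE"); // placeholder queue name
            container.setMessageListener(listener);
            container.setConcurrentConsumers(1);
            container.setMaxConcurrentConsumers(5);
            return container;
        }
    }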

How to implement asynchronous processing with J2EE application

I have an enterprise application with around 2k concurrent users every day. These users handle customer calls so application speed is of vital importance.
When a user is wrapping up a call they commit all the information they captured. This commit can take anywhere from 10-45 seconds.
I am looking into ways to take the delay away from the user.
We have a web front end running in IE; the back end is heavy Java running on a single EJB.
I want to make this commit process asynchronous, so that once the user submits the request they don't have to wait for the commit to finish before going on to the next customer. Currently they do have to wait.
Originally I was thinking of just spawning another thread to handle the commit, but that's a no-no with EJBs.
Other options I can think of would be using JMS or SIB.
What would the best solution be? Is there another alternative I am missing?
There are actually two alternatives for cases like this.
The first is to use JMS. It has the advantage that the server provides all the required infrastructure, and you don't have to implement much yourself.
The other is to register the request in a database and have a scheduled job process all of them.
Which you select depends on your requirements. If you need to serve the requests as soon as they arrive, then go with JMS. In both cases you need to persist the outcome of the request in a database and design a web service on top of it. The front end can use this (through polling) to present the result to the user.
Would have liked to leave a comment, but don't have the ability.
Another possibility:
Wrap the heavy EJBs in a queue mechanism, and expose a different bean with the same API so your client-facing communications talk to the new bean and stay quick. It accepts the request, adds the job to the queue and returns to the client immediately. You don't need to change the heavy EJBs or the client communications; just put a mediator in the way and add a layer of wrapping.
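A hedged sketch of that mediator idea (assuming a Java EE 7+ container with JMS 2.0; on older containers the same shape works with the JMS 1.1 API; every name here, including the queue and the HeavyCommit interface, is invented for illustration): the facade accepts the commit, enqueues it and returns at once, and a message-driven bean drives the existing heavy EJB in the background.

    import javax.annotation.Resource;
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.EJB;
    import javax.ejb.MessageDriven;
    import javax.ejb.Stateless;
    import javax.inject.Inject;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Queue;

    // Hypothetical local business interface of the existing heavy EJB (left unchanged).
    interface HeavyCommit {
        void commitCall(String callDataJson);
    }

    // CommitFacadeBean.java - what the web front end now calls; it returns immediately.
    @Stateless
    public class CommitFacadeBean {

        @Inject
        private JMSContext jms;                    // JMS 2.0 simplified API

        @Resource(lookup = "jms/CommitQueue")      // placeholder JNDI name
        private Queue commitQueue;

        public void submitCommit(String callDataJson) {
            // Enqueue and return; the user can move on to the next customer.
            jms.createProducer().send(commitQueue, callDataJson);
        }
    }

    // CommitWorkerBean.java - consumes queued requests and runs the 10-45 second commit.
    @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/CommitQueue"),
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
    })
    public class CommitWorkerBean implements MessageListener {

        @EJB
        private HeavyCommit heavyCommit;           // the existing slow commit logic, unchanged

        @Override
        public void onMessage(Message message) {
            // ... extract the payload and invoke heavyCommit.commitCall(...) here ...
        }
    }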
