My goal is to build an MQTT-like publish/subscribe service using only Elasticsearch.
The case study scenario I would like to implement is this:
User A creates a message (document) in the Elasticsearch index.
User B is notified about the new message in the index.
I'm using plain Java clients, since on Android I can't use the High Level Elasticsearch client.
I have everything I need to send and read documents from the ES index, but I would like to find the best way to implement a subscription service for User B without forcing him to poll for updates every few seconds.
I don't know where to start here: I didn't find any trigger/WebSocket facility in ES. Please help with some ideas or documentation.
You can use Elasticsearch Watcher to trigger an action when a specific condition is met. An example use case: if you ingest live data from Meetup.com into your Elasticsearch instance, you can receive email notifications about events you might be interested in.
For your specific use case, you could create a watch that triggers when User A adds a document to the index. The watch action could be to send an email to User B or to call your own API in charge of notifying all users that have subscribed to User A (see the Webhook action).
I haven't tried it myself, but this seems like a good place to start.
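Here is a minimal sketch of registering such a watch with a plain Java HTTP client, since the high-level client is out of reach on Android. The index name (messages), timestamp field (created_at), webhook target, and the watch API path are all assumptions (the path differs across ES versions), and note that a watch still evaluates on a server-side schedule rather than reacting instantly:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class RegisterWatch {
        public static void main(String[] args) throws Exception {
            // Every 10s, search for documents newer than the last interval and,
            // if any were found, call a (hypothetical) notification webhook.
            String watch = "{"
                + "\"trigger\": { \"schedule\": { \"interval\": \"10s\" } },"
                + "\"input\": { \"search\": { \"request\": {"
                + "  \"indices\": [\"messages\"],"
                + "  \"body\": { \"query\": { \"range\": {"
                + "    \"created_at\": { \"gte\": \"now-10s\" } } } } } } },"
                + "\"condition\": { \"compare\": {"
                + "  \"ctx.payload.hits.total\": { \"gt\": 0 } } },"
                + "\"actions\": { \"notify_user_b\": { \"webhook\": {"
                + "  \"method\": \"POST\", \"host\": \"my-notify-host\", \"port\": 8080,"
                + "  \"path\": \"/notify/userB\","
                + "  \"body\": \"{{ctx.payload.hits.total}} new message(s)\" } } }"
                + "}";

            // The watch API path differs across ES versions
            // (e.g. /_watcher/... vs /_xpack/watcher/...).
            URL url = new URL("http://localhost:9200/_watcher/watch/new_message_watch");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(watch.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }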
I'm playing around with setting up a microservices/CQRS architecture for a personal project, and there's one point I don't understand in the "standard" setup.
By standard setup, I mean
https://www.ibm.com/developerworks/cloud/library/cl-build-app-using-microservices-and-cqrs-trs/index.html
Say I have an orders service and a pickup points service, and I have a command like "send order summary email".
How should the orders service get the data about the pickup point (e.g. opening hours) that it needs to send the email? I see four possibilities, but there are surely others:
1. The command goes directly to the orders service, and then the orders service queries the pickup points service to get the data.
2. The command goes to the pickup points service, and then the pickup points service publishes a new event for the orders service with the needed information attached.
3. The command goes directly to the orders service, and the orders service then queries the read-only client-facing database.
4. Merge the two services; given that they have no other shared context, this would be a pity...
Thanks!
What you are describing here is somewhat akin to UI Composition: you are creating a view (the email) that pulls data from two different sources.
Key point #1: the data you are composing is stale. By the time the email reaches its destination, the truth understood by the services may have changed anyway. Therefore, the requirements have some inherent flexibility about time.
Key point #2: In sending the email, you aren't changing the state of either service at all. You are just making a copy of some part of it. Reads are a safe operation.
Key point #3: Actually sending the email changes the "real world", not the services; it's an activity that can be performed concurrently with the service work.
So what this would normally look like is that one of your read models (probably the order service's) supports a query that lists orders for which emails should be sent. Some process, running outside of the service, periodically runs that query for pending emails, queries the required read models to compose each message, sends it, and finally posts a message to the order service's input queue reporting that the email was successfully sent. The order service sees that, and the read model is updated to indicate that the message has already been sent.
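To make the shape of that process concrete, here is a compact sketch; every type and method name in it is a hypothetical stand-in for your own infrastructure, not part of any framework:

    import java.util.List;

    public class OrderEmailProcess {

        // Hypothetical stand-ins for your own infrastructure:
        interface OrdersReadModel { List<PendingEmail> ordersAwaitingSummaryEmail(); }
        interface PickupPointsReadModel { PickupPoint findById(String pickupPointId); }
        interface EmailSender { void send(String to, String subject, String body); }
        interface OrdersInputQueue { void notifyEmailSent(String orderId); }

        static class PendingEmail {
            final String orderId, pickupPointId, customerEmail;
            PendingEmail(String orderId, String pickupPointId, String customerEmail) {
                this.orderId = orderId;
                this.pickupPointId = pickupPointId;
                this.customerEmail = customerEmail;
            }
        }

        static class PickupPoint {
            final String address, openingHours;
            PickupPoint(String address, String openingHours) {
                this.address = address;
                this.openingHours = openingHours;
            }
        }

        private final OrdersReadModel orders;
        private final PickupPointsReadModel pickups;
        private final EmailSender emailSender;
        private final OrdersInputQueue ordersQueue;

        OrderEmailProcess(OrdersReadModel orders, PickupPointsReadModel pickups,
                          EmailSender emailSender, OrdersInputQueue ordersQueue) {
            this.orders = orders;
            this.pickups = pickups;
            this.emailSender = emailSender;
            this.ordersQueue = ordersQueue;
        }

        // Run this periodically from a scheduler that lives outside the services.
        void runOnce() {
            for (PendingEmail pending : orders.ordersAwaitingSummaryEmail()) {
                // Compose from both read models; slightly stale data is acceptable.
                PickupPoint point = pickups.findById(pending.pickupPointId);
                String body = "Order " + pending.orderId + " can be picked up at "
                    + point.address + " (" + point.openingHours + ")";
                emailSender.send(pending.customerEmail, "Your order summary", body);
                // Report back so the read model stops listing this order as pending.
                ordersQueue.notifyEmailSent(pending.orderId);
            }
        }
    }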
You are describing a process of sending an order summary email to the customer after the order is completed.
In CQRS this is implemented with a Saga/Process manager.
The idea is that an OrderSummaryEmailSaga subscribes to the OrderWasCompleted event; when that event is fired, the saga queries the pickup points service for the information it needs (most probably from a read model) and then:
1. builds and sends a complete SendOrderSummaryEmail command to the relevant aggregate in the orders service, or
2. calls an infrastructure service that, having all the data, builds the email and sends it to the customer,
3. or uses a combination of the previous points, depending on how you want to manage this process.
The details are specific to your case, such as which domain services (building and formatting the email) or infrastructure services (actually sending the email using sendmail, Postfix, or whatever) you need to build.
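Here is a minimal sketch of such a saga using Axon-style annotations (Axon comes up in the next question). The event, command, and read-model types are hypothetical stand-ins defined inline so the sketch compiles; only the annotations and CommandGateway are real Axon 3 API:

    import org.axonframework.commandhandling.gateway.CommandGateway;
    import org.axonframework.eventhandling.saga.EndSaga;
    import org.axonframework.eventhandling.saga.SagaEventHandler;
    import org.axonframework.eventhandling.saga.StartSaga;
    import org.springframework.beans.factory.annotation.Autowired;

    public class OrderSummaryEmailSaga {

        // Hypothetical stand-ins for your own domain types:
        public static class OrderWasCompleted {
            public String orderId, pickupPointId, customerEmail;
            public String getOrderId() { return orderId; } // used for association
        }
        public static class PickupPointDetails { public String address, openingHours; }
        public interface PickupPointsReadModel { PickupPointDetails findById(String id); }
        public static class SendOrderSummaryEmail {
            public final String orderId, to, address, openingHours;
            public SendOrderSummaryEmail(String orderId, String to,
                                         String address, String openingHours) {
                this.orderId = orderId;
                this.to = to;
                this.address = address;
                this.openingHours = openingHours;
            }
        }

        @Autowired
        private transient CommandGateway commandGateway;
        @Autowired
        private transient PickupPointsReadModel pickupPoints;

        @StartSaga
        @EndSaga // a one-shot process: start, act, end
        @SagaEventHandler(associationProperty = "orderId")
        public void on(OrderWasCompleted event) {
            // Query the pickup points read model for the data the email needs...
            PickupPointDetails details = pickupPoints.findById(event.pickupPointId);
            // ...then dispatch a complete command (option 1 above).
            commandGateway.send(new SendOrderSummaryEmail(
                event.orderId, event.customerEmail, details.address, details.openingHours));
        }
    }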
I have an application built with the Axon 3 framework.
There are two instances (JVMs).
The first one handles commands and notifies the second one via RabbitMQ so it can build a read model database.
There is an event store for this application (MongoDB).
Now I want to build a third instance. Is it possible to replay all historic events of the first instance via RabbitMQ to construct the initial state of the third instance? And how do I configure it?
I searched the Axon docs for an answer; it seems that I should use a TrackingEventProcessor instead of the default SubscribingEventProcessor, but it cannot be used with SpringAMQPMessageSource (mentioned in the docs).
Axon has two modes: Tracking and Subscribing. Depending on the source of your events, you can choose either one, or sometimes both styles.
AMQP is a specification for a message broker. Once a message is delivered, it is removed from the queue it was placed on. Therefore, conceptually, it is impossible to replay those events, since they no longer exist in the broker.
If replays are important, make sure you use a messaging mechanism that stores the messages. In Axon, the EventStore does exactly that. For now, Axon only has the EmbeddedEventStore, but you could have an Event Store in the receiving node point to the same database as the sending node.
At the moment, at AxonIQ, we are working on an Event Store Server that deals with this in a cleaner way (no need to share data sources between instances).
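Following the suggestion above, a minimal Spring configuration sketch for the third node might look like this. It assumes Axon 3's Mongo module; exact package names and constructors vary between Axon 3 minor versions, so treat this as a starting point rather than a definitive setup:

    import com.mongodb.MongoClient;
    import org.axonframework.config.EventHandlingConfiguration;
    import org.axonframework.eventsourcing.eventstore.EmbeddedEventStore;
    import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
    import org.axonframework.eventsourcing.eventstore.EventStore;
    import org.axonframework.mongo.DefaultMongoTemplate;
    import org.axonframework.mongo.eventsourcing.eventstore.MongoEventStorageEngine;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ReplayConfig {

        // Point this node's event store at the same MongoDB the
        // command-handling node writes to, so historic events can be streamed.
        @Bean
        public EventStorageEngine eventStorageEngine(MongoClient mongoClient) {
            return new MongoEventStorageEngine(new DefaultMongoTemplate(mongoClient));
        }

        @Bean
        public EventStore eventStore(EventStorageEngine storageEngine) {
            return new EmbeddedEventStore(storageEngine);
        }

        // A tracking processor reads from the event store (starting at the
        // beginning of the stream) instead of from the AMQP queue.
        @Autowired
        public void configure(EventHandlingConfiguration config) {
            config.registerTrackingProcessor("readModelProjections");
        }
    }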
We are developing a document management web application, and right now we are thinking about how to handle actions on multiple documents. For example, let's say a user multi-selects 100 documents and wants to delete all of them. Until now (when we did not support multiple selection), the deleteDoc action makes an AJAX request to a deleteDocument service with the docId. The service in turn calls the corresponding utility function, which does the required permission checking and proceeds to delete the document from the database. When it comes to multiple deletion, we are not sure how best to proceed. We have come up with several solutions but do not know which one is best practice, so I'm looking for advice. Mind you, we are keen on keeping the back end code as intact as possible:
1. Create a new multipleDeleteDocument service which calls the single-document delete utility function once per document to be deleted (ugly in my opinion, and counter-intuitive with modern practices).
2. Keep the back end code as is and instead make one AJAX request to the service per document.
3. Somehow (I have no idea if this is even possible) batch the requests into one, but still have the server execute the deleteDocument service X times.
4. Use WebSockets for the multi-delete action, essentially cutting down on the communication overhead and time. Our application generally runs over LAN networks with low latency, which is optimal for WebSockets (when latency is introduced, WebSockets tend to match HTTP request speeds).
Something we haven't thought of?
Sending N AJAX calls or N WebSocket messages when all the data could be combined into a single call or message is never the most optimal solution, so options 2 and 4 are certainly not ideal. I see no particular reason to use a WebSocket over an AJAX call here. If you already have a WebSocket connection, then you can certainly just send a single delete message with a list of document IDs over it, but an AJAX call would work just as well, so I wouldn't create a WebSocket connection just for this purpose.
Options 1 and 3 both require a new service endpoint that lets you make a single call to delete multiple documents. This would be recommended.
If I were designing an API like this, I'd design a single delete endpoint that takes one or more document IDs. That way the same API call can be used whether deleting a single document or multiple documents.
Then, from the client anytime you have multiple documents to delete, always collect them together and make one API call to delete all of them at once.
Internal to the server, how you implement that API depends upon your data store. If your data store also permits sending multiple documents to delete, then you would likewise call the data store that way. If it only supports single deletes, then you would just loop and delete each one individually.
Option 3 would be the most elegant solution for me.
Assuming you send requests like POST /deleteDocument with docId as a parameter, you could instead pass an array of document IDs to remove.
Then in the back end you would only have to iterate through the list of IDs and perform each deletion. You should be able to keep the deletion code relatively intact.
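As a rough illustration of what both answers suggest, here is a sketch of such an endpoint as a Spring MVC controller. The URL, the DocumentService interface, and its deleteDocument method are hypothetical stand-ins for your existing single-delete utility:

    import java.util.List;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class DocumentController {

        // Stand-in for your existing utility that checks permissions and deletes.
        public interface DocumentService {
            void deleteDocument(long docId);
        }

        private final DocumentService documentService;

        public DocumentController(DocumentService documentService) {
            this.documentService = documentService;
        }

        // One request carries all the IDs; the existing per-document logic
        // (permission check + delete) is reused unchanged in a loop.
        @PostMapping("/deleteDocuments")
        public void deleteDocuments(@RequestBody List<Long> docIds) {
            for (long docId : docIds) {
                documentService.deleteDocument(docId);
            }
        }
    }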
I have searched a lot of Stack Overflow questions, but I couldn't find a solution that fulfills my requirement; if anyone knows of a reference that exactly matches my requirement, please comment, otherwise please answer.
I am developing an enterprise application with my team using the Spring Framework, and I have successfully integrated JMS using ActiveMQ into the application.
In this application, User A sends a message to User B only if User B is online.
My question is: how do I check whether User B's session is live before User A's message is sent to User B?
Thanks in advance,
Yasir Shabbir
It depends on how you communicate with User B.
You can use WebSockets to make push notifications: verify the status and push messages to User B.
Alternatively, the client can regularly poll the server for data arriving from any user (in this case User A). The polling can only happen while User B is online, so effectively User B gets a message from User A only when he's online.
You need something that monitors the users' HTTP sessions.
The simple approach is to have a Map or Set. Put the user's ID (or whatever you use to identify them) in that Map/Set whenever they log in, and also put that ID in the session. Then use an HttpSessionListener to get notified whenever a session gets destroyed (that is the closest thing to a session timeout you will get). When the session gets destroyed, remove the user ID from the Map/Set.
(BTW, there is a new Spring project, Spring Session. I have not had a look at it yet; maybe it contains support for this problem.)
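A minimal sketch of that approach with the plain Servlet API; the "userId" session attribute name and the login hook are assumptions about your application:

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.annotation.WebListener;
    import javax.servlet.http.HttpSessionEvent;
    import javax.servlet.http.HttpSessionListener;

    @WebListener
    public class OnlineUserTracker implements HttpSessionListener {

        // Shared registry of the IDs of currently logged-in users.
        private static final Set<String> ONLINE_USERS = ConcurrentHashMap.newKeySet();

        // Call this from your login code, after also storing the ID in the
        // session: session.setAttribute("userId", userId);
        public static void userLoggedIn(String userId) {
            ONLINE_USERS.add(userId);
        }

        // User A's sender can ask this before delivering a message to User B.
        public static boolean isOnline(String userId) {
            return ONLINE_USERS.contains(userId);
        }

        @Override
        public void sessionCreated(HttpSessionEvent se) {
            // Nothing to do; the user ID is only known after login.
        }

        @Override
        public void sessionDestroyed(HttpSessionEvent se) {
            // Fired on logout and on session timeout.
            Object userId = se.getSession().getAttribute("userId");
            if (userId != null) {
                ONLINE_USERS.remove(userId);
            }
        }
    }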
An easy way, if you have a small number of clients, is to use request-reply over messaging in conjunction with topics. Have all user sessions listen to a topic (pings or similar). When a client wants to see who is online, they send a message (the contents of the body don't matter) and set the JMSReplyTo and JMSCorrelationID headers to identify a temporary queue that they are listening to for replies. The listening parties will pick up this message and all send back a message containing their IDs. That way you have a living cache on the sender as to who is currently "online"; the cache should expire every couple of seconds.
Take a look at http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
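A sketch of the sending side with plain JMS and ActiveMQ; the topic name, broker URL, and the userId property that repliers are assumed to set are all illustrative:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TemporaryQueue;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PresencePing {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // All online user sessions subscribe to this topic and answer pings.
            Topic pings = session.createTopic("pings");

            // Temporary queue where the "I am online" replies arrive.
            TemporaryQueue replyQueue = session.createTemporaryQueue();
            MessageConsumer replies = session.createConsumer(replyQueue);

            // Broadcast the ping; the body doesn't matter.
            Message ping = session.createMessage();
            ping.setJMSReplyTo(replyQueue);
            ping.setJMSCorrelationID("presence-check-1");
            MessageProducer producer = session.createProducer(pings);
            producer.send(ping);

            // Collect replies for ~2 seconds to build the "who is online" cache.
            long deadline = System.currentTimeMillis() + 2000;
            while (System.currentTimeMillis() < deadline) {
                Message reply = replies.receive(Math.max(1, deadline - System.currentTimeMillis()));
                if (reply != null) {
                    System.out.println("online: " + reply.getStringProperty("userId"));
                }
            }
            connection.close();
        }
    }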
I'm developing a Java desktop application with a Swing GUI. The app contains services that query the database every second to keep the interface in sync with the database. We all know that with this approach, performance is the enemy.
What I want to achieve is that for every change made to the database, altered through psql (the Postgres command line) for example, my app is notified so it can update the UI. That way, performance can be optimized.
Thanks!
As @a_horse_with_no_name points out, PostgreSQL supports asynchronous notification channels for just this purpose.
You should create a trigger, probably in PL/pgSQL, on the table(s) you wish to monitor. This trigger fires a NOTIFY when the table changes, optionally including the changed data itself.
The application LISTENs on the notification channel(s) and processes any asynchronous notifications it receives.
Note that it's valid to send an empty query, and you should do that instead of SELECT 1. e.g.:
stmt.execute("");
IIRC, even that's optional if you're not using SSL; there's a way to poll purely client-side. I don't remember what it is, though, so that's not really helpful.
You can determine exactly what changed by using a trigger-maintained change list table, or by using payloads on your notifications. Or you can simply re-read the whole table if it's small.
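Putting the pieces above together, here is a sketch of the Java side with the PostgreSQL JDBC driver. The channel name doc_changes, connection details, and polling interval are assumptions; the trigger that calls pg_notify must be created separately in the database:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import org.postgresql.PGConnection;
    import org.postgresql.PGNotification;

    public class ChangeListener {
        public static void main(String[] args) throws Exception {
            Connection conn =
                DriverManager.getConnection("jdbc:postgresql://localhost/mydb", "user", "pass");
            Statement stmt = conn.createStatement();

            // Subscribe to the channel that the table trigger NOTIFYs on,
            // e.g. a PL/pgSQL trigger doing: PERFORM pg_notify('doc_changes', ...);
            stmt.execute("LISTEN doc_changes");

            PGConnection pgConn = conn.unwrap(PGConnection.class);
            while (true) {
                // The empty query forces a round trip so that buffered
                // notifications are picked up (see the note above).
                stmt.execute("");
                PGNotification[] notifications = pgConn.getNotifications();
                if (notifications != null) {
                    for (PGNotification n : notifications) {
                        // In a real Swing app, update the UI on the EDT here.
                        System.out.println("channel=" + n.getName()
                            + " payload=" + n.getParameter());
                    }
                }
                Thread.sleep(500); // small pause between checks
            }
        }
    }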