I am not able to find a way to send/broadcast a message to all application instances in Pivotal Cloud Foundry. How can we notify all app instances of an event? If we use an HTTP request, the PCF router will dispatch it to a single instance of the app. How can we solve this problem?
What @Florian said is probably the safer option, but if you want something quick and easy, you can send HTTP requests directly to an app instance by using the X-CF-APP-INSTANCE header. The format for the header is YOUR-APP-GUID:YOUR-INSTANCE-INDEX.
https://docs.cloudfoundry.org/concepts/http-routing.html#app-instance-routing
So given an app GUID, you could iterate over the instance indexes (0 up to the instance count minus one) and send an HTTP request to each one. Make sure to check the response to confirm that each request succeeded.
This also requires that you know the app guid for your app (i.e. cf app <name> --guid) and the number of instances of your app.
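The fan-out described above can be sketched like this. The route, app GUID, and instance count are placeholders you would look up with `cf app <name> --guid` and `cf app <name>`; only the header name and its GUID:INDEX format come from the Cloud Foundry docs.

```java
import java.net.HttpURLConnection;
import java.net.URL;

/** Sketch: send one HTTP request to every instance of a CF app.
 *  The route/GUID/count values are hypothetical inputs. */
class InstanceBroadcast {

    /** Header value format documented by Cloud Foundry: GUID:INDEX. */
    static String instanceHeader(String appGuid, int index) {
        return appGuid + ":" + index;
    }

    static void broadcast(String route, String appGuid, int instanceCount) throws Exception {
        for (int i = 0; i < instanceCount; i++) {
            HttpURLConnection conn = (HttpURLConnection) new URL(route).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("X-CF-APP-INSTANCE", instanceHeader(appGuid, i));
            // Check the status so a missed instance does not go unnoticed.
            if (conn.getResponseCode() != 200) {
                System.err.println("Instance " + i + " returned " + conn.getResponseCode());
            }
            conn.disconnect();
        }
    }
}
```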
CF, out of the box, does not provide any event queue mechanism that apps can subscribe to.
What I would do (assuming you've two app instances A and B):
Provide an event endpoint in your application code, e.g. POST /api/event (alternatively, if the event originates from another app, e.g. another microservice, that app could publish messages onto the queue directly)
All app instances are listening on an internal event queue for new events
Instance A receives the call from the CF router and handles it by publishing an event onto the internal event queue; it does not react to the event yet
When A publishes the event, both A and B receive it and process it accordingly
Now, which internal event queue you can use depends highly on your deployment. On AWS you could use SQS or SNS or something similar. PCF's marketplace, as far as I know, also offers messaging services such as RabbitMQ, which would suit here as well. You could also use features of other services that let you subscribe to events, such as Redis (its pub/sub commands) or similar.
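The flow in the steps above can be shown with a minimal in-process stand-in for the event queue. In a real deployment the bus would be SQS, RabbitMQ, or Redis pub/sub; here both "instances" are just subscribers inside one JVM, purely to illustrate the fan-out.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

/** In-memory stand-in for the internal event queue: every subscriber
 *  (i.e. every app instance) sees every published event. */
class EventBus {
    private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(Consumer<String> handler) {
        subscribers.add(handler);
    }

    /** Called by whichever instance received the POST /api/event request. */
    void publish(String event) {
        for (Consumer<String> s : subscribers) {
            s.accept(event);
        }
    }
}
```

Instance A's POST /api/event handler would call publish; the subscription callbacks registered by A and B are where both instances react.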
If you provide more concrete information about what you want to achieve, a more detailed answer would be possible, though.
Related
I have a requirement where multiple apps want to send notifications out in the form of email. I want to have a central app that users send notifications to, and that central app converts them to email and sends them. Should I use REST APIs between my central app and the other apps to get notifications, or use an MQ that the other apps can write to and my app listens to? Which is the better approach, and why?
REST API <-> REST API are tightly coupled at runtime, and when you have multiple layers of services it can lead to cascading failures and complicated error handling scenarios. Additionally, one service can overrun another, causing a denial of service.
Whereas app -> queue -> app connectivity is loosely coupled at runtime. When one system is down, the queue just fills up. Messaging systems also act as a natural buffer in the one-system-produces-too-fast scenario, because the second system consumes data at the rate it can handle rather than the rate at which the producer sends it.
That being said, sending query requests over messaging adds needless latency: an API 'query' over a queue is effectively a polling consume, and for queries the produces-too-fast and cascading-failure problems are generally already mitigated.
TLDR: You'll want both. APIs for queries and messaging for commands. CQRS can be applied here:
C - command.. (create, save, delete, do some action, etc) send over messaging
Q - query.. (search, list, get, etc) send over API
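The command/query split can be made concrete with a toy dispatcher. The class and method names are illustrative, not a real framework; an in-memory queue stands in for JMS or RabbitMQ, and the query is stubbed.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Toy CQRS-style split: commands go onto a queue (fire-and-forget),
 *  queries are answered synchronously. Both sides are stubs. */
class CqrsDispatcher {
    private final Queue<String> commandQueue = new ArrayDeque<>();

    /** Command (create, save, delete, ...): enqueue and return. */
    void send(String command) {
        commandQueue.add(command);
    }

    /** Query (search, list, get, ...): synchronous request/response. */
    String query(String q) {
        return "result-of-" + q;   // a real impl would call the API here
    }

    Queue<String> pendingCommands() {
        return commandQueue;
    }
}
```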
I have a question regarding ChildEventListener. I am working on a backend API that is running on the JVM. I plan on the possibility that multiple instances of this server will be running simultaneously. If I were to subscribe to Child Events of say the "users" sub-document, would all instances of the server running the same code receive the event? If so, is there a way in which only one can consume the event?
Thanks for your help!
Firebase Database broadcasts changes out to all listening clients. There is no way for one client to prevent a specific change from being sent to the other clients.
It sounds, though, like you're trying to create a producer/consumer queue with multiple consumers/workers. There is a library to support that sort of scenario called `firebase-queue`. It uses Firebase Database transactions to ensure only one client can claim the work.
It's sort of the opposite approach from what you are trying: if multiple workers write to claim a task, it allows only one of those writes to occur. As far as I can see this would accomplish your requirement.
I have a client in AngularJS where I consume multiple SSE (Server-Sent Events) streams produced in Java on the server side (there are multiple endpoints on different web servers in the backend).
Note: I have to use SSE.
I currently register a listener to each type of event coming from each SSE connection, such as:
source.addEventListener('alpha', function(e) {
doSomething();
}, false);
The purpose is to show a notification based on these events, and with this I have a few questions:
How can the client know if the information has changed in the backend?
How do I organise and filter these events? For example, when receiving multiple events simultaneously from multiple connections, how can I manage them in order to show the client a specific notification for a specific event?
Note: I'm not only talking about organising events per type; I also need to keep in mind that one event may be more important than another.
So far I only think of receiving all the events, and save them in a list that I could order and filter. Is there a problem if two SSE events are fired at the same time? Do you know of an example of this?
Is it a good idea to make the logic for organising the events on the client side?
Should I create a database for these events?
Thank you,
With SSE you do not need to poll: the server pushes updates over the open connection, so the client learns about a change as soon as the server sends an event. On the client side, remember that EventSource callbacks run outside AngularJS's digest cycle, so wrap your handling in $scope.$apply (or use $timeout) for the view to update; $http and $q are only needed for ordinary request/response calls.
Plain in-page JavaScript cannot fully replicate the way Android handles push notifications, although service-worker-based Web Push comes close for some use cases.
You can also look at the WebSocket option if it matches your requirements (e.g. if you need bidirectional communication).
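As for ordering events by importance before showing notifications: the idea from the question (collect events, then order and filter them) maps naturally onto a priority queue. The sketch below is language-agnostic in spirit (shown in Java here); the Event shape and its numeric priority are assumptions, not part of the SSE spec.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/** Buffer incoming SSE events and hand them back most-important-first.
 *  Lower priority number = more important (an arbitrary convention). */
class NotificationQueue {
    record Event(String type, int priority, String payload) {}

    private final PriorityQueue<Event> queue =
        new PriorityQueue<>(Comparator.comparingInt(Event::priority));

    void offer(Event e) {
        queue.offer(e);
    }

    /** Most important pending event, or null if none. */
    Event next() {
        return queue.poll();
    }
}
```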
I am implementing sending of browser push notifications via Google Cloud Messaging and Firefox Push Notification System. For this, we have to make HTTP Post requests to GCM and FPNS.
To make HTTP request to GCM/FPNS we should have user registration IDs. Using JavaScript we are collecting registration IDs and storing it in Cassandra. Each record contains user registration information (Registration ID and browser type).
When we make an HTTP request to GCM/FPNS we should send registration IDs along with the request to GCM/FPNS based on browser type (if user registration ID belongs to Chrome we will make GCM request otherwise FPNS request). For example, if we have 10,000 records we should make around 10,000 requests to FPNS/GCM.
Once GCM/FPNS receives the user registration IDs, it will send a push notification to the browser. In browser, we have JavaScript code (Service Worker) to handle the notification event.
For the above requirement, a synchronous servlet architecture is not good enough, because processing 10,000 records may take, say, 10 to 15 minutes even with multithreading. It may cause Tomcat memory pressure and an OutOfMemoryError.
When I was searching online, people were suggesting an asynchronous servlet architecture: once we take the request from the client to send the notification, we respond immediately (something like 200 OK, "added to queue") and also add the request to a message queue (JMS). From JMS we use multithreading to make asynchronous HTTP requests.
I am not finding the correct way of doing this. Can you suggest a way of implementing this functionality (Architecture Design and control flow)?
Short of changing to something like PubNub, I would create a worker queue. This could be done with JMS or just a shared Queue (search for producer/consumer). JMS would be, in my opinion, the easiest though it gets harder to distribute in a cluster.
Basically you could continue to have a synchronous servlet - it would take the message, put it on the queue, and return the 200. Placing a message on the queue involves very minimal blocking - a few milliseconds at most.
As you indicated, on the queue consumer side you would then have to handle many requests. Depending on the latency requirements of your system you may need to thread or off load that. It really depends on how fast you need to send the messages.
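The servlet -> queue -> worker flow can be sketched with a plain BlockingQueue (the "shared Queue" option mentioned above; JMS would replace it in a cluster). The sendPush method is a placeholder for the slow HTTP call to GCM/FPNS, and the pool size is an arbitrary example.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

/** Servlet thread only enqueues (cheap) and returns 200 immediately;
 *  a worker pool drains the queue and makes the slow GCM/FPNS calls. */
class PushDispatcher {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    /** Called from the servlet: returns in a few milliseconds at most. */
    void enqueue(String registrationId) {
        queue.add(registrationId);
    }

    int pending() {
        return queue.size();
    }

    void start() {
        for (int i = 0; i < 4; i++) {
            workers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String id = queue.take();   // blocks until work arrives
                        sendPush(id);               // slow HTTP call happens off the servlet thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    void sendPush(String registrationId) {
        // placeholder: HTTP POST to GCM/FPNS with this registration ID
    }
}
```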
For a totally different architecture, you could consider a "queue in the cloud". I've used Amazon SQS for things like this. You wouldn't even have a servlet - the message would go straight to SQS and then something else would pull it off and process it.
For reference I don't work for Amazon or PubNub.
I have deployed a Java web application in Heroku.
Now, I want to change the back-end so that it can notify connected users regarding specific events. I thought I could use server-sent events to do that and the way I thought it would work is the following:
When user opens up the front-end, it would establish a connection for the server-sent events.
When the back-end receives such a request, it would create such a connection (basically an EventOutput) and store it somewhere along with the user's ID (let's say in a Map in memory).
When a new event comes along, the back-end will find the user that needs to be notified, retrieve his connection according to his ID and send him the notification.
This works just fine when you have only one machine handling the requests.
My problem starts when I want to scale up my app and introduce more machines. Then, I cannot really store these connections in memory in one machine anymore, I need to use some centralized location. But the centralized location will need to serialize/deserialize the connection, which means that it's not the same connection anymore!
How do you usually do something like that?
One solution is to use session affinity (a.k.a. sticky sessions), which will ensure that a single session's requests are "always" routed to the same process (I say "always" because there are some caveats). You can turn this feature on by running this command:
$ heroku labs:enable http-session-affinity
In this way, you can keep things in memory and will not have to serialize the session.
Here is an article describing this feature in more detail: https://blog.heroku.com/archives/2015/4/28/introducing_session_affinity
You could use a pub/sub solution (e.g. Redis pub/sub) that is accessible to each of your dynos.
On starting, your app subscribes to the appropriate channels. When an event happens, it is published to a channel. This means all instances of your app (spread across multiple dynos) receive that event, and any of them that have SSE connections open can respond to the event.
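The dyno-local piece of this design can be sketched as follows: each instance keeps only its own open SSE connections in memory, keyed by user ID, and the pub/sub message (Redis in practice; a plain method call here) is delivered to every instance, which forwards it only if it holds that user's connection. The Connection interface is a stand-in for whatever your SSE library gives you (e.g. Jersey's EventOutput).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Per-instance registry of open SSE connections plus the pub/sub
 *  callback that relays events to locally-connected users. */
class SseRelay {
    /** Stand-in for an SSE connection: anything we can write an event to. */
    interface Connection {
        void send(String event);
    }

    private final Map<String, Connection> local = new ConcurrentHashMap<>();

    /** Called when this instance accepts a user's SSE request. */
    void register(String userId, Connection c) {
        local.put(userId, c);
    }

    /** Invoked by the pub/sub subscriber on EVERY instance. */
    boolean onMessage(String userId, String event) {
        Connection c = local.get(userId);
        if (c == null) {
            return false;   // this instance doesn't hold the user's connection
        }
        c.send(event);
        return true;
    }
}
```

Because every instance receives every published event, no serialization of the connection object is needed; the connection never leaves the instance that owns it.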