Architecture of Java Servlet browser push notifications

I am implementing browser push notifications via Google Cloud Messaging (GCM) and the Firefox Push Notification System (FPNS). For this, we have to make HTTP POST requests to GCM and FPNS.
To make an HTTP request to GCM/FPNS we need the user registration IDs. Using JavaScript we collect the registration IDs and store them in Cassandra. Each record contains the user registration information (registration ID and browser type).
When we make an HTTP request we send the registration ID along with it, choosing GCM or FPNS based on browser type (if the registration ID belongs to Chrome we make a GCM request, otherwise an FPNS request). For example, if we have 10,000 records we have to make around 10,000 requests to GCM/FPNS.
Once GCM/FPNS receives the registration IDs, it sends a push notification to the browser. In the browser, we have JavaScript code (a service worker) to handle the notification event.
For the above requirement, a synchronous servlet architecture is not good enough, because processing 10,000 records may take roughly 10 to 15 minutes even with multithreading. It may also cause Tomcat memory leaks and an OutOfMemoryError.
When I searched online, people suggested an asynchronous servlet architecture: once we take the client's request to send notifications, we respond immediately (something like "200 OK, added to queue") and add the request to a message queue (JMS). From JMS we use multithreading to make asynchronous HTTP requests.
I cannot find the correct way of doing this. Can you suggest a way of implementing this functionality (architecture design and control flow)?

Short of changing to something like PubNub, I would create a worker queue. This could be done with JMS or just a shared queue (search for producer/consumer). JMS would be, in my opinion, the easiest, though it gets harder to distribute in a cluster.
Basically you could continue to have a synchronous servlet: it would take the message, put it on the queue, and return the 200. Placing a message on the queue involves very minimal blocking, a couple of milliseconds at most.
As you indicated, on the queue consumer side you would then have to handle many requests. Depending on the latency requirements of your system you may need to thread or offload that. It really depends on how fast you need to send the messages.
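As an illustration, here is a minimal sketch of that flow using the JMS 2.0 API; the JNDI names, servlet path, and request parameter are assumptions, and you need a JMS provider available (plain Tomcat does not ship with one):

```java
import javax.annotation.Resource;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/notify")
public class NotificationServlet extends HttpServlet {

    @Resource(lookup = "jms/ConnectionFactory")   // assumed JNDI name
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/NotificationQueue")   // assumed JNDI name
    private Queue notificationQueue;

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        String payload = req.getParameter("message");            // hypothetical parameter
        try (JMSContext context = connectionFactory.createContext()) {
            // Enqueueing blocks for only a few milliseconds.
            context.createProducer().send(notificationQueue, payload);
        }
        resp.setStatus(HttpServletResponse.SC_ACCEPTED);          // "accepted, added to queue"
    }
}
```

A corresponding consumer-side sketch drains the queue and fans the GCM/FPNS HTTP calls out onto a thread pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class NotificationConsumer {

    public void start(ConnectionFactory connectionFactory, Queue notificationQueue) {
        ExecutorService pool = Executors.newFixedThreadPool(20);
        JMSContext context = connectionFactory.createContext();   // keep open while consuming
        context.createConsumer(notificationQueue).setMessageListener(message ->
                pool.submit(() -> {
                    // Look up the registration IDs for this message and POST
                    // to GCM or FPNS here (one HTTP call per registration ID).
                }));
    }
}
```

The queue absorbs the burst, so the 10,000 GCM/FPNS calls happen at whatever rate the pool can sustain without tying up servlet threads.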
For a totally different architecture, you could consider a "queue in the cloud". I've used Amazon SQS for things like this. You wouldn't even have a servlet - the message would go straight to SQS and then something else would pull it off and process it.
For reference I don't work for Amazon or PubNub.


How to wait in a continuous loop in a servlet?

I am developing a servlet-based application. One situation is that a client requests some data from a database, which is sent back in the form of HTML. The client will modify this data and then send it back to the server. Now the twist: there is not a single client, so multiple clients can request the same data. What I am doing is that when the first client makes a request, this request is stored somewhere so that when the next user makes the same request he is denied the data.
Now suppose the first user gets the data and the second is denied. While the first user is on the HTML page that allows him to modify the data, I want to send continuous JavaScript async POST requests at a fixed interval to inform the server that the client is active.
On the server side I need a thread or something that keeps waiting in a loop for the JavaScript async requests; if a request is not received within the fixed time, the thread removes the saved request so that future requests for the data will be accepted.
I have searched the entire day and looked at things like async servlets, ServletContextListener and ScheduledExecutorService. I don't want to use ScheduledExecutorService as it is invoked at app startup, which I don't want to do since this specific situation is a minor part of the code and I don't want something running all the time to handle it. I need some background service that keeps running even after the server has returned the requested data.
Servlets won't fulfill your requirements; therefore you should use WebSockets.
As per my understanding, you are trying to push data from the server, therefore you need a push architecture instead of a pull architecture (servlets are based on a pull architecture).
Java has native support for WebSockets.
You can find several tutorials on how to use WebSockets in a Java Web Application.
Here is a link to a basic WebSockets Tutorial.
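As a rough sketch, a JSR 356 server endpoint that tracks whether the editing client is still alive could look like the following; the endpoint path and the lock bookkeeping are assumptions:

```java
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/edit-lock")   // hypothetical path
public class EditLockEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // Client connected: mark the record as locked by this session (not shown).
    }

    @OnMessage
    public void onMessage(String heartbeat, Session session) {
        // Each heartbeat message from the client confirms it is still active.
    }

    @OnClose
    public void onClose(Session session) {
        // Connection closed or timed out: release the saved lock so other
        // clients can request the data again (not shown).
    }
}
```

With this approach you do not need a polling thread at all: the container tells you when the client goes away by firing the close event.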
Hope this helps

How to send events to all instances of the application in PCF

I am not able to find a way to send/broadcast a message to all application instances in Pivotal Cloud Foundry. How can we notify all app instances of an event? If we use an HTTP request, the PCF router will dispatch it to a single instance of the app. How can we solve this problem?
What @Florian said is probably the safer option, but if you want something quick and easy, you can send HTTP requests directly to an app instance by using the X-CF-APP-INSTANCE header. The format for the header is YOUR-APP-GUID:YOUR-INSTANCE-INDEX.
https://docs.cloudfoundry.org/concepts/http-routing.html#app-instance-routing
So given an app GUID, you could iterate over the instance indexes, say 0 through 4 for five instances, and send an HTTP request to each one. Make sure to check the response to confirm that each one succeeded.
This also requires that you know the app GUID for your app (i.e. cf app <name> --guid) and the number of instances of your app.
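As a minimal sketch (the route, payload, and instance count are assumptions), iterating over the instance indexes with Java 11's HttpClient could look like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BroadcastToInstances {
    public static void main(String[] args) throws Exception {
        String appGuid = "YOUR-APP-GUID";   // from: cf app <name> --guid
        int instanceCount = 5;              // known number of instances
        HttpClient client = HttpClient.newHttpClient();

        for (int i = 0; i < instanceCount; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://your-app.example.com/api/event")) // hypothetical route
                    .header("X-CF-APP-INSTANCE", appGuid + ":" + i)            // target one instance
                    .POST(HttpRequest.BodyPublishers.ofString("{\"event\":\"refresh\"}"))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() >= 400) {
                System.err.println("Instance " + i + " failed: " + response.statusCode());
            }
        }
    }
}
```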
CF, out of the box, does not provide any event queue mechanism that apps can subscribe to.
What I would do (assuming you have two app instances, A and B):
Provide an event endpoint in your application code, e.g. POST /api/event (alternatively, if the event should arise from another app (e.g. another microservice), this one could directly send messages onto the queue)
All app instances are listening on an internal event queue for new events
Instance A receives the call from the CF router and processes it by issuing an event on the internal event queue; the instance does not react to the event yet.
When A publishes the event, both A and B receive it and process it accordingly.
Now, which internal event queue you can use depends highly on your deployment. On AWS you can probably use SQS or SNS or something similar. PCF, as far as I know, also provides a messaging service that would suit here as well (RabbitMQ). You could also use features of other services that allow you to subscribe to events, such as Redis (pub/sub commands) or similar.
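If you go the Redis pub/sub route, a minimal sketch with the Jedis client might look like the following; the host, channel name, and payload are assumptions:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class AppEventBus {

    // Each app instance subscribes on startup. subscribe() blocks,
    // so run it on a background thread.
    public static void startListening() {
        new Thread(() -> {
            try (Jedis subscriber = new Jedis("redis-host", 6379)) {   // hypothetical host
                subscriber.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        // React to the broadcast event here.
                    }
                }, "app-events");
            }
        }).start();
    }

    // The instance that received the HTTP call publishes the event once;
    // every subscribed instance (including itself) receives it.
    public static void publishEvent(String event) {
        try (Jedis publisher = new Jedis("redis-host", 6379)) {
            publisher.publish("app-events", event);
        }
    }
}
```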
If you provide more concrete information about what you want to achieve, a more detailed answer would be possible, though.

How to acknowledge push webservices

I have a web service on my server that pushes XML data to clients that communicate with it over the internet.
In these cases we have a challenge receiving acknowledgements from the client.
A specific case: the client has received the data, but the communication channel goes down before it can send the acknowledgement.
Example:
In the case of software updates pushed to clients over the internet, how does the server make sure everything was processed correctly?
If you want to go on the "push" path, and you absolutely must know if the update was successful, then you have to build your service and clients in such a way that you do know.
Basically what you need to do is build a small protocol so that information is transmitted no matter the failures of the communication channel. This means two things:
Your service does re-transmissions;
Your clients can deal with duplicate messages;
For example:
service pushes a message, client acknowledges => all good;
service pushes a message, the connection goes down, the message is lost. The client does not acknowledge since it never got the message => service pushes that same message once again at some later time. Now hopefully you get to case 1.
service pushes a message, client acknowledges but the connection fails and the service does not receive the acknowledgement => similar to 2, so the service pushes that same message once again at some later time and now the client receives the same message twice. It must ignore the second message but still needs to send an acknowledgement so the service does not send it a third, fourth, ... nth time;
And so on and so forth...
This is a high level description of what TCP does, for example. TCP is a reliable protocol over an unreliable network. It handles dropped packets, duplicated packets, etc.
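As a rough illustration of that retransmit-and-deduplicate idea (not a complete protocol; the transport and scheduler calls are left as comments, and all names are made up for the sketch):

```java
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Server side: every message gets an ID and stays "pending" until acknowledged;
// a periodic job re-sends anything still pending.
class ReliablePusher {
    private final Map<String, String> pending = new ConcurrentHashMap<>();

    String push(String payload) {
        String messageId = UUID.randomUUID().toString();
        pending.put(messageId, payload);
        // transmit(messageId, payload);           // actual transport not shown
        return messageId;
    }

    void onAck(String messageId) {
        pending.remove(messageId);                 // stop re-sending once acknowledged
    }

    void retransmitPending() {                     // call this from a scheduler
        pending.forEach((id, payload) -> { /* transmit(id, payload); */ });
    }
}

// Client side: remember processed IDs so duplicates are acknowledged but not reprocessed.
class DeduplicatingClient {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    void onMessage(String messageId, String payload) {
        if (processed.add(messageId)) {
            // process(payload);                   // first delivery: handle it
        }
        // sendAck(messageId);                     // always acknowledge, even duplicates
    }
}
```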
Now, that would be pushing. A simpler alternative would be to use "pull" instead. The clients periodically pull the updates from the server. This is simpler to implement (the download is successful if it worked, otherwise you try again later) but it's not without its gotchas, for example:
controlling when clients start to pull data from the service. You can't just have them all update at the same time or you might overload the server. Clients should first ask the server if it's OK to update now or come back later when the service is not so busy;
are you downloading upgrades in the background, on user devices? Data charges might apply, so maybe it's better to ask the user whether they want the update now or later instead of doing it behind the scenes;
updating in the background, even if there is no problem with data charges, might still consume bandwidth when the client needs that bandwidth for something else;
And so on and so forth...
The thing is, this is a large topic, with general solutions that might not apply in particular situations. But it is not a new topic; others have had these issues before. Consider for example Windows updates, how each PC's OS updates itself. Something similar happened a while ago when thick clients needed updates. The world moved to thin clients but now thick clients are making a comeback. Have a look at how these issues are solved; you will find useful information online.
I do not think there is a way to do that. I believe you are asking for one of the following reasons:
1) If you are sending a lot of data and your clients deny receiving it, perhaps you can paginate it. That way you will know when the last page was accessed. You can even go one step further and put very little data on the last page, so that you are sure the last page is requested.
2) If you are genuinely concerned about ensuring that they receive the entire data, suggest that they access a second web service which returns a checksum for the data, and that they compare it.
Assuming that your web service is RESTful, your server should be stateless. The client should make sure it receives the data properly.
You could define a service to get the hash value of the data, followed by the request to receive the data itself. The client can check after the download whether the hash value of the downloaded data corresponds to the value received by the first call.
Amongst others, you could use MD5, SHA-1, and SHA-256 in standard Java, as described in the Oracle documentation. This calculates the hash value of the data on the server side.
Assuming you use JavaScript on the client side, there are many libraries for calculating the hash with the same algorithms (jsSHA, for example).
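On the server side, a minimal sketch of computing such a hash with java.security.MessageDigest could look like this:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PayloadHasher {

    // Returns the SHA-256 hash of the payload as a hex string, so the client
    // can compare it against the hash of the data it actually downloaded.
    public static String sha256Hex(String payload) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(payload.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```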
I hope it helps.

Monitor database with GWT

Maybe I'm overthinking this, but I'd like some advice. Customers can place an order inside my GWT application, and on a secondary computer I want to monitor those submissions inside the GWT application and flash an alarm every time an order is submitted, provided the user has OK'd this. I can't figure out the best way to do this. Orders are submitted to a MySQL database, if that makes any difference. Does anyone have a suggestion on what to do or try?
There are two options: 1) polling or 2) pushing, which would allow your server (in the servlet handling the GWT request) to notify you after the order is successfully placed.
In 1) polling, the client (meaning the browser you are using to monitor the app) periodically calls the server to see if there is data waiting. It may be more resource intensive as many calls are made for infrequent data. It may also be slower due to the delay between calls. If only your monitoring client is calling, though, it wouldn't be so resource intensive.
In 2) pushing, the client makes a request and the request is held open until there is data. It is less resource intensive and can be faster. Once data is returned, the client sends another request (this is long polling). Alternatively, streaming is an option where the server doesn't send a complete response and just keeps sending data. This streaming option requires a client-/browser-specific implementation, though. If it's just you monitoring, you should know the client and could set it up specifically for that.
See the demo project in GWT Event Service
Here is the documentation (user manual) for it.
Also see GWT Server Push FAQ
There are other ways of doing it besides GWT Event Service, of course. Just google "GWT server push" and you'll find Comet, DWR, etc., and, if you are using Google's App Engine, the Channel API.
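If you end up going with simple polling (option 1 above), a minimal client-side sketch using a GWT Timer and a standard GWT-RPC async callback might look like the following; orderServiceAsync, countOrdersSince() and flashAlarm() are hypothetical names for your own service and UI code:

```java
import com.google.gwt.user.client.Timer;
import com.google.gwt.user.client.rpc.AsyncCallback;

// Inside your monitoring view or entry point. lastSeenOrderId tracks the
// newest order already shown to the monitoring user.
Timer pollTimer = new Timer() {
    @Override
    public void run() {
        orderServiceAsync.countOrdersSince(lastSeenOrderId, new AsyncCallback<Integer>() {
            @Override
            public void onSuccess(Integer newOrders) {
                if (newOrders != null && newOrders > 0) {
                    flashAlarm();          // hypothetical alert method
                }
            }

            @Override
            public void onFailure(Throwable caught) {
                // Log and keep polling on the next tick.
            }
        });
    }
};
pollTimer.scheduleRepeating(10000);        // poll every 10 seconds
```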

Java EWS API - Read email from exchange - Comparable options

I am using the Java EWS API to connect my application to MS Exchange and read user email requests. These requests are then processed through the system workflow. The number of emails per day is limited to about 50, so the overall volume is low. However, I am looking for an efficient and reliable mechanism to read from the Exchange server using the EWS API. Also note that once an email is processed we move it to a subfolder, so the Inbox only contains unprocessed requests.
As I currently understand it, the following schemes are used to connect to the Exchange server and perform various operations on the mailbox.
Polling - Connect to Exchange using the standard ExchangeService interface, find all new emails, and process them in sequence. The client has better control over failures and over synchronizing the reads with moving items to processed folders. On the downside, the experience isn't real time and connections are made to Exchange even if there isn't any activity.
Pull Notifications - This method is almost identical to the previous one: subscribe to pull notifications with an interval and read emails from the Inbox whenever the timer event occurs. Pros and cons are similar to approach 1.
Push Notifications - Here the client subscribes to the Exchange server for push notifications by registering for particular events and defining a callback mechanism (a client web service) to receive notifications. On the upside, the notifications are near real time and connections are made only when there are events. On the downside, I see that subscriptions and the watermark need to be managed on the client side so that events aren't lost. I'm not sure whether this is a reliable approach: what happens to messages that are already in the Inbox before the subscription is established? Will those events be replayed when the server starts? It's not clear.
Streaming Subscription - Clients establish a streaming connection with the server and keep it open for a maximum of 30 minutes, during which Exchange notifies them of any registered events. Once the connection dies it can be restored so that the subscription stays alive. It seemed like the best approach until I started hearing that an additional step, syncing folder items and maintaining sync state, is required at regular intervals so that events are not missed across connects/disconnects.
Looking at my needs (reading emails from the Exchange server reliably) and my analysis of the various options, I feel that approach 1 is simpler and more reliable, as it gives better control over the entire process. At the same time I wanted to check with others who are familiar with the API to correct me if my understanding of the framework's pros and cons is wrong.
I am open to any suggestions from the group to make this better, as the intent is not to miss any email.
I'd go for the code simplicity of option 1. If you connect once a minute the load is very low (just a FindItem call returning nothing) and the users experience it as almost instantaneous.
You're only handling 50 a day max, so the wish to be 'instantaneous' is a bit contradictory (if the user only does that many updates, he surely can wait a minute).
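For option 1, a minimal polling sketch with the ews-java-api could look like this; the credentials, URL, and Exchange version are placeholders, and the import paths follow the 2.x packaging so they may differ slightly between releases:

```java
import java.net.URI;

import microsoft.exchange.webservices.data.core.ExchangeService;
import microsoft.exchange.webservices.data.core.enumeration.misc.ExchangeVersion;
import microsoft.exchange.webservices.data.core.enumeration.property.WellKnownFolderName;
import microsoft.exchange.webservices.data.core.service.item.Item;
import microsoft.exchange.webservices.data.credential.WebCredentials;
import microsoft.exchange.webservices.data.search.FindItemsResults;
import microsoft.exchange.webservices.data.search.ItemView;

public class InboxPoller {
    public static void main(String[] args) throws Exception {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP2);
        service.setCredentials(new WebCredentials("user@example.com", "password"));
        service.setUrl(new URI("https://mail.example.com/EWS/Exchange.asmx"));

        // The Inbox only holds unprocessed requests, so a small page size is enough.
        FindItemsResults<Item> results =
                service.findItems(WellKnownFolderName.Inbox, new ItemView(50));

        for (Item item : results.getItems()) {
            item.load();   // fetch the body and properties
            // Process the request, then move the item to the "processed" subfolder.
        }
    }
}
```

Run this from a scheduler once a minute; an empty Inbox just means the FindItem call returns nothing and costs almost nothing.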
