My Google App Engine JSP needs to perform some lengthy processing, so it adds a task to the Task Queue and then refreshes every 30 seconds waiting for the task to complete. How can the task let the JSP know about its status? I tried using the session, but it seems session objects are not shared between the JSP and tasks. I also tried throwing exceptions from the task in case it fails, hoping to trigger the error page (I did configure error-page in web.xml), but that didn't work either.
The answer for this kind of problem is to implement some kind of "web hook": when the task is finished, it calls back the JSP.
Another option is to implement AJAX, where an asynchronous call checks the status of the task and then updates the UI as necessary.
If you can give more context on the question, it would be better.
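For the web-hook/status idea, a minimal sketch, assuming the task handler records its outcome in App Engine memcache under a key the JSP already knows (the key prefix and parameter names here are illustrative):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

// Task handler mapped as the Task Queue target URL. When the work finishes (or fails),
// it records the outcome in memcache under a key derived from a task id the JSP also knows.
public class LengthyTaskServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
        String taskId = req.getParameter("taskId");   // passed as a parameter when the task was enqueued
        try {
            doLengthyProcessing();                    // your existing long-running work
            cache.put("task-status-" + taskId, "DONE");
        } catch (Exception e) {
            cache.put("task-status-" + taskId, "FAILED: " + e.getMessage());
            throw new IOException(e);                 // non-2xx response lets Task Queue retry if configured
        }
    }

    private void doLengthyProcessing() { /* ... */ }
}
```

The JSP (or a small status servlet it polls via AJAX) then reads "task-status-" + taskId from the same memcache and shows progress, success, or the error page accordingly.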
I am implementing browser push notifications via Google Cloud Messaging (GCM) and the Firefox Push Notification System (FPNS). For this, we have to make HTTP POST requests to GCM and FPNS.
To make HTTP requests to GCM/FPNS we need the user registration IDs. Using JavaScript we collect registration IDs and store them in Cassandra. Each record contains user registration information (registration ID and browser type).
When we make an HTTP request to GCM/FPNS, we should send the registration IDs along with the request, based on browser type (if a user registration ID belongs to Chrome we make a GCM request, otherwise an FPNS request). For example, if we have 10,000 records we should make around 10,000 requests to GCM/FPNS.
Once GCM/FPNS receives the user registration IDs, it will send a push notification to the browser. In browser, we have JavaScript code (Service Worker) to handle the notification event.
For the above requirement, a synchronous servlet architecture is not good enough, because processing 10,000 records may take, say, 10 to 15 minutes even if we are using multithreading. It may cause Tomcat memory leaks and an out-of-memory exception.
When I was searching online, people were suggesting an asynchronous servlet architecture. Once we take the request from the client to send the notification, we should respond immediately (something like 200 OK, "added to queue"), and the request should also be added to a message queue (JMS). From JMS we use multithreading to make the asynchronous HTTP requests.
I am not finding the correct way of doing this. Can you suggest a way of implementing this functionality (Architecture Design and control flow)?
Short of changing to something like PubNub, I would create a worker queue. This could be done with JMS or just a shared Queue (search for producer/consumer). JMS would be, in my opinion, the easiest though it gets harder to distribute in a cluster.
Basically you could continue to have a synchronous servlet - it would take the message, put it on the queue, and return the 200. Placing a message on the queue would involve very minimal blocking - a couple of milliseconds at most.
As you indicated, on the queue consumer side you would then have to handle many requests. Depending on the latency requirements of your system you may need to use threads or offload that work. It really depends on how fast you need to send the messages.
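As a sketch of that flow, assuming a plain JMS queue is available to the web app (the JNDI names and servlet mapping below are illustrative and depend on how JMS is wired into your container):

```java
import java.io.IOException;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class NotificationRequestServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            InitialContext ctx = new InitialContext();
            // JNDI names are illustrative; they depend on how JMS is wired into your container
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/notificationQueue");

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // Enqueue the notification request; this only blocks for a few milliseconds
                producer.send(session.createTextMessage(req.getParameter("notificationPayload")));
            } finally {
                conn.close();
            }
            resp.setStatus(HttpServletResponse.SC_OK);
            resp.getWriter().write("Added to queue");
        } catch (Exception e) {
            throw new ServletException("Could not enqueue notification request", e);
        }
    }
}
```

A consumer on the other side of the queue (a MessageListener backed by a thread pool) would then read these messages and make the GCM/FPNS calls at its own pace, completely decoupled from the servlet's response time.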
For a totally different architecture, you could consider a "queue in the cloud". I've used Amazon SQS for things like this. You wouldn't even have a servlet - the message would go straight to SQS and then something else would pull it off and process it.
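If you went the SQS route, the producer side could be as small as this (AWS SDK for Java v1; the queue URL and message format are placeholders):

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class SqsEnqueueExample {
    public static void main(String[] args) {
        // Region and credentials come from the default provider chain
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/notification-queue"; // placeholder
        // One message per registration ID; a separate worker polls the queue and calls GCM/FPNS
        sqs.sendMessage(queueUrl, "{\"registrationId\":\"abc123\",\"browser\":\"chrome\"}");
    }
}
```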
For reference I don't work for Amazon or PubNub.
Maybe I'm overthinking this but I'd like some advice. Customers can place an order inside my GWT application, and on a secondary computer I want to monitor those submittals inside the GWT application and flash an alarm every time an order is submitted, provided the user has OK'd this. I can't figure out the best way to do this. Orders are submitted to a MySQL database, if that makes any difference. Does anyone have a suggestion on what to do or try?
There are two options: 1) polling or 2) pushing which would allow your server (in the servlet handling the GWT request) to notify you (after the order is successfully placed).
In 1) polling, the client (meaning the browser you are using to monitor the app) will periodically call the server to see if there is data waiting. It may be more resource intensive as many calls are made for infrequent data. It may also be slower due to the delay between calls. If only your monitoring client is calling though it wouldn't be so resource intensive.
In 2) pushing, the client will make a request and the request will be held open until there is data. It is less resource intensive and can be faster. Once data is returned, the client sends another request (this is long polling). Alternatively, streaming is an option where the server doesn't send a complete response and just keeps sending data. This streaming option requires a client-/browser-specific implementation, though. If it's just you monitoring though, you should know the client and could set it up specifically for that.
See the demo project in GWT Event Service
Here is the documentation (user manual) for it.
Also see GWT Server Push FAQ
There are other ways of doing it other than GWT Event Service, of course. Just google "GWT server push" and you'll find Comet, DWR, etc., and, if you are using Google's App Engine, the Channel API.
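If you'd rather not pull in a framework, here is a rough long-polling sketch using the Servlet 3.0 async API; the orderSubmitted hook is hypothetical and would be driven by whatever detects new rows in your MySQL table:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/orders/watch", asyncSupported = true)
public class OrderWatchServlet extends HttpServlet {
    // Parked monitoring requests; fine for a single node and a single monitoring client
    private static final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);   // the monitoring page simply re-issues the request on timeout
        waiting.add(ctx);
    }

    /** Call this from whatever code detects a new order (e.g. right after the MySQL insert). */
    public static void orderSubmitted(String orderId) {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            try {
                ctx.getResponse().getWriter().write(orderId);
                ctx.complete();
            } catch (Exception e) {
                // the monitoring client timed out or went away; skip it
            }
        }
    }
}
```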
I am working on a project in which we have an authentication mechanism. We follow the steps below in the authentication mechanism.
The user opens a browser, enters his/her email in a text box and clicks the login button.
The request goes to the server. We generate a random string (for example, 123456), send a notification to the user's Android/iPhone, and make the current thread wait with the help of the wait() method.
The user enters a password on his/her phone and clicks the submit button on his/her phone.
Once the user clicks the submit button, we make a web service call to the server, passing the previously generated string (for example, 123456) and the password.
If the password is correct for the previously entered email, we call the notify() method for the previously waiting thread, send success as the response, and the user is logged into our system.
If the password is incorrect for the previously entered email, we call the notify() method for the previously waiting thread, send failed as the response, and display an invalid-credentials message to the user.
Everything was working fine, but recently we moved to a clustered environment. We found that some threads are never notified even after the user has replied, and they wait indefinitely.
For the server we are using Tomcat 5.5, and we are following The Apache Tomcat 5.5 Servlet/JSP Container documentation for setting up the Tomcat cluster environment.
Answer :: Possible problem and solution
The possible problem is the multiple JVMs in the clustered environment. Now we are also sending the clustered Tomcat node's URL to the user's Android application along with the generated string.
And when the user clicks the reply button, we send the generated string back to that clustered Tomcat URL, so both requests go to the same JVM and it works fine.
But I am wondering if there is a single solution for the above issue.
There is a problem with this solution, though: what happens if that clustered Tomcat node crashes? The load balancer will send the request to a second clustered Tomcat node, and the same problem will arise again.
The underlying reason for your problems is that Java EE was designed to work in a different way - attempting to block/wait on a service thread is one of the important no-no's. I'll give the reason for this first, and how to solve the issue after that.
Java EE (both the web and EJB tier) is designed to be able to scale to very large size (hundreds of computers in a cluster). However, in order to do that, the designers had to make the following assumptions, which are specific limitations on how to code:
Transactions are:
Short lived (eg don't block or wait for periods greater than a second or so)
Independent of each other (eg no communication between threads)
For EJBs, managed by the container
All user state is maintained in specific data storage containers, including:
A data store accessed through, eg, JDBC. You can use a traditional SQL database or a NoSQL backend
Stateful session beans, if you use EJBs. Think of these as Java Bean that persists its fields to a database. Stateful session beans are managed by the container
Web session. This is a key-value store (kinda like a NoSQL database but without the scale or search capabilities) that persists data for a specific user over their session. It's managed by the Java EE container and has the following properties:
It will automatically relocate if the node crashes in a cluster
Users can have more than one current web session (i.e. on two different browsers)
Web sessions end when the user ends their session by logging out, or when the session is inactive for longer than the configurable timeout.
All values that are stored must be serializable for them to be persisted or transferred between nodes in a cluster.
If we follow those rules, the Java EE container can successfully manage a cluster, including shutting down nodes, starting new ones and migrating user sessions, without any specific developer code. Developers write the graphical interface and the business logic - all the 'plumbing' is managed by configurable container features.
Also, at run time, the Java EE container can be monitored and managed by some pretty sophisticated software that can trace application performance and behavioural issues on a live system.
< snark >Well, that was the theory. Practice suggests there are pretty important limitations that were missed, which lead to AOSP and code injection techniques, but that's another story < /snark >
[There are many discussions around the 'net on this. One which focuses on EJBs is here: Why is spawning threads in Java EE container discouraged? Exactly the same is true for web containers such as Tomcat]
Sorry for the essay - but this is important to your problem. Because of the limitations on threads, you should not block on the web request waiting for another, later request.
Another problem with the current design is what should happen if the user becomes disconnected from the network, runs out of power, or simply decides to give up? Presumably you will time out, but after how long? Just too soon for some customers, perhaps, which will cause satisfaction problems. If the timeout is too long, you could end up blocking all worker threads in Tomcat and the server will freeze. This opens your organisation up for a denial of service attack.
EDIT : Improved suggestions after a more detailed description of the algorithm was published.
Notwithstanding the discussion above on the bad practice of blocking a web worker thread and also the possible denial of service, it's clear that the user is presented with a small time window in which to react to the notification on the Android phone, and this can be kept reasonably small to enhance security. This time window can also be kept below Tomcat's timeout for responses. So the thread-blocking approach could be used.
There are two ways this problem can be resolved:
Change the focus of the solution to the client end - polling the server using Javascript on the browser
Communication between nodes in the cluster, allowing the node that receives the authorization response from the Android app to unblock the node that is blocking on the servlet's response.
For approach 1, the browser polls the server via Javascript with an AJAX call to a web service on Tomcat; the AJAX call returns True if the Android app authenticated. Advantage: client side, minimal implementation on the server, no thread blocking on the server. Disadvantages: During the waiting period, you have to make frequent calls (maybe one a second - the user will not notice this latency) which amounts to a lot of calls and some additional load on the server.
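The server side of approach 1 can be a tiny status endpoint that any node in the cluster can answer, as long as the authorization result is written to a store shared by the cluster. In this sketch, AuthStore is a hypothetical wrapper around such a store (a database table, memcached, a Hazelcast map, ...):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AuthStatusServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String token = req.getParameter("token");   // the random string generated at login time
        // AuthStore is a hypothetical wrapper around the shared store the phone's reply was written to
        boolean authenticated = AuthStore.isAuthenticated(token);
        resp.setContentType("application/json");
        resp.getWriter().write("{\"authenticated\":" + authenticated + "}");
    }
}
```

The browser's JavaScript calls this once a second and redirects into the application when it sees true.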
For approach 2, there is again a choice:
Block the thread with Object.wait(), optionally storing the node ID, IP or other identifier in a shared data store. If so, the node receiving the Android app authorization needs to:
Either find the node that is currently blocking or broadcast to all nodes in the cluster
For each node in 1. above, send a message that identifies the user session to unblock. The message could be sent via:
Have an internal-only servlet on each node - this is called by the servlet performing the Android app authorization. The internal servlet will call Object.notify on the correct thread
Use a JMS pub-sub message queue to broadcast to all members of the cluster. Each node is a subscriber that, on receipt of a notification, will call Object.notify() on the correct thread (see the sketch after this list).
Poll a data store until the thread is authorized to continue: In this case, all the Android app needs to do is save the state in a SQL DB
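As a sketch of the JMS pub-sub variant: each node keeps a local map of token to lock object for the threads it has blocked, every node subscribes to a shared topic, and only the node that owns the waiting thread actually calls notify(). The topic name and registry are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

// Register an instance of this class as a MessageListener on the "authReplies" topic
// on every node at application startup.
public class AuthUnblockBroadcaster implements MessageListener {
    // token -> monitor object that the blocked servlet thread is wait()ing on (local to this node)
    private static final Map<String, Object> localWaiters = new ConcurrentHashMap<>();

    public static Object registerWaiter(String token) {
        Object lock = new Object();
        localWaiters.put(token, lock);
        return lock;
    }

    /** Called on the node that received the Android app's reply: broadcast the token to every node. */
    public static void broadcast(ConnectionFactory cf, String token) throws JMSException {
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("authReplies");   // illustrative topic name
            session.createProducer(topic).send(session.createTextMessage(token));
        } finally {
            conn.close();
        }
    }

    /** Every node receives the broadcast; only the node holding the waiter actually notifies. */
    @Override
    public void onMessage(Message message) {
        try {
            String token = ((TextMessage) message).getText();
            Object lock = localWaiters.remove(token);
            if (lock != null) {
                synchronized (lock) {
                    lock.notify();   // wakes the servlet thread blocked in lock.wait()
                }
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```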
Using wait/notify can be tricky. Remember that any thread can be suspended at any time, so it's possible for notify to be called before wait, in which case wait will then block forever.
I wouldn't expect this in your case, as you have user interaction involved. But for the type of synchronisation you are doing, try using a Semaphore. Create a Semaphore with zero permits. The waiting thread calls acquire() and it will block until another thread calls release().
Using a Semaphore in this way is much more robust than wait/notify for the task you described.
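A minimal sketch of that Semaphore approach on a single JVM (the token-to-semaphore registry is illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class AuthGate {
    // token -> semaphore with zero permits; illustrative in-memory registry (single JVM)
    private static final Map<String, Semaphore> gates = new ConcurrentHashMap<>();

    /** Login request thread: block until the phone replies or the time window expires. */
    public static boolean awaitReply(String token, long timeoutSeconds) throws InterruptedException {
        Semaphore gate = gates.computeIfAbsent(token, t -> new Semaphore(0));
        try {
            return gate.tryAcquire(timeoutSeconds, TimeUnit.SECONDS);
        } finally {
            gates.remove(token);
        }
    }

    /** Web service hit from the phone: wake the waiting login thread. */
    public static void replyArrived(String token) {
        // Safe even if release() happens before tryAcquire(): the permit is simply stored
        gates.computeIfAbsent(token, t -> new Semaphore(0)).release();
    }
}
```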
Consider using an in-memory data grid so that the instances in the cluster can share state. We used Hazelcast to share data between instances, so if a response reaches a different instance, that instance can still handle it.
E.g. you could use a distributed countdown latch with a value of 1 to make the thread wait after sending the message; when the response from the client arrives at a separate instance, that instance can decrease the latch to 0, letting the first thread run.
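A rough sketch, assuming the Hazelcast 3.x API where the latch is obtained directly from the HazelcastInstance (the latch naming is illustrative):

```java
import java.util.concurrent.TimeUnit;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ICountDownLatch;

public class DistributedAuthLatch {
    // In practice you would share one HazelcastInstance per Tomcat node
    private final HazelcastInstance hz = Hazelcast.newHazelcastInstance();

    /** Login thread: create a cluster-wide latch for this token and block on it. */
    public boolean awaitReply(String token, long timeoutSeconds) throws InterruptedException {
        ICountDownLatch latch = hz.getCountDownLatch("auth-" + token);
        latch.trySetCount(1);                                   // one reply is enough
        return latch.await(timeoutSeconds, TimeUnit.SECONDS);   // true if counted down in time
    }

    /** Whichever node receives the phone's reply counts the same latch down to 0. */
    public void replyArrived(String token) {
        hz.getCountDownLatch("auth-" + token).countDown();
    }
}
```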
Your clustered deployment means that any node in the cluster could receive any response.
Using wait/notify on threads in a web app risks accumulating a lot of threads that may never be notified, which could leak memory or leave a lot of blocked threads. This could eventually affect the reliability of your server.
A more robust solution would be to send the request to the Android app, store the current state of the user's request for later processing, and complete the HTTP request. To store the state you could consider:
A database that all Tomcat nodes connect to
A Java cache solution that works across Tomcat nodes, like Hazelcast
This state would be visible to all nodes in your Tomcat cluster.
When the reply from the Android app arrives on a different node, restore the state of what your thread was doing and continue processing on that node.
If the UI of the application is waiting on a response from the server, you might consider using an AJAX request to poll for the response state from the server. The node processing the Android app response does not need to be the same one handling UI requests.
Using Thread.wait in a web service environment is a colossal mistake. Instead, maintain a database of user/token pairs and expire them at intervals.
If you want a cluster, then use a database that is clusterable. I would recommend something like memcached since it's in-memory (and fast) and low on overhead (key/value pairs are dead simple, so you don't need RDBMS, etc.). memcached handles expiration of tokens for you already, so it seems like a perfect fit.
I think the username -> token -> password strategy is unnecessary, especially because you have two different components sharing the same 2-factor authentication responsibility. I think you can further reduce your complexity, reduce confusion for your users, and save yourself some money in SMS-send fees.
The interaction with your web service is simple:
User logs into your website using username + password
If primary authentication (username/password) is successful, generate a token and insert userid=token into memcached
Send the token to the user's phone
Present "enter token" page to the user
User receives token via phone and enters it into the form
Fetch the token value from memcached based upon the user's id. If it matches, expire the token in memcached and consider the second-factor successful
Tokens will auto-expire after whatever amount of time you want to set in memcached
There are no threading problems with the above solution and it will scale across as many JVMs as you need to support your own software.
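As a sketch of steps 2 and 6 with the spymemcached client (the key prefix and the 5-minute expiry are arbitrary choices):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class TokenStore {
    private static final int TOKEN_TTL_SECONDS = 300;   // memcached expires the token for us
    private final MemcachedClient client;

    public TokenStore(String host, int port) throws IOException {
        this.client = new MemcachedClient(new InetSocketAddress(host, port));
    }

    /** Step 2: after primary auth succeeds, store userid -> token with an expiry. */
    public void storeToken(String userId, String token) {
        client.set("token:" + userId, TOKEN_TTL_SECONDS, token);
    }

    /** Step 6: check the submitted token; expire it immediately on a match (one-time use). */
    public boolean verifyToken(String userId, String submitted) {
        Object stored = client.get("token:" + userId);
        if (stored != null && stored.equals(submitted)) {
            client.delete("token:" + userId);
            return true;
        }
        return false;
    }
}
```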
After analysing your question, I came to the conclusion that the exact problem is the multiple JVMs in a clustered environment.
The exact problem is the cluster environment: both requests are not going to the same JVM. But we know that a normal/simple notify() only works when the previously waiting thread is in the same JVM.
You should try to route both requests (the first request, and the second request sent when the user replies from the Android application) to the same JVM.
I'm afraid threads cannot migrate across classic Java EE clusters.
You have to rethink your architecture to implement the wait/notify differently (connection-less).
Or you may give terracotta.org a try. It looks like it allows you to cluster an entire JVM process over multiple machines. Maybe that's your only solution.
Read a quick introduction in Introduction to OpenTerracotta.
I guess the problem is that your first thread sends a notification to the user's Android application from JVM 1, and when the user replies back, the control goes to JVM 2. That's the main problem.
Somehow, both requests need to reach the same JVM for the wait and notify logic to work.
Solution:
Create a single point of contact for all waiting threads. In a clustered environment, all the threads will wait on a third JVM (the single point of contact), so all requests (from any clustered Tomcat) will contact the same JVM for the wait and notify logic, and hence no thread will wait for an unlimited time. When the reply arrives, the thread that waited on the same object will be notified.
I have a JSP/servlet-based web app.
I have a "Clean Up" button which calls a servlet, and the request goes down to a DAO class. The DAO class performs different DB activities, like moving data from the master table to a backup table and then deleting data from the master table.
As of now this activity is synchronous, and the user needs to wait until a response is sent.
I want to implement the same scenario as an asynchronous task, with the user just getting a message like
" Clean Up Activities Triggered"
What would be the best/easiest way to perform this task? I cannot use a scheduler.
My container is Tomcat.
The simplest (though different) solution for this could be to use some AJAX behavior on the client side. There are a lot of simple, powerful frameworks (JS files) to help you achieve AJAX in your page. Using AJAX, you just submit the request asynchronously and display the client-side message "Clean Up Activities Triggered" while the request is being processed on the server side. If the user waits, the server process will return and display a "success" message; otherwise the user is free to navigate to other pages or perform other actions.
An ExecutorService is the most robust solution; creating a simple thread would be enough as well. However, the bigger problem is synchronization: use a Semaphore to make sure two users aren't cleaning up simultaneously.
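A hedged sketch of that combination: a single shared executor runs the clean-up in the background, and a one-permit Semaphore rejects overlapping requests (the DAO class and method names are illustrative):

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CleanUpServlet extends HttpServlet {
    // One background worker and one permit: only a single clean-up can run at a time
    private static final ExecutorService worker = Executors.newSingleThreadExecutor();
    private static final Semaphore cleanUpPermit = new Semaphore(1);

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        if (!cleanUpPermit.tryAcquire()) {
            resp.getWriter().write("Clean up already in progress");
            return;
        }
        worker.submit(() -> {
            try {
                new CleanUpDao().moveAndDeleteMasterData();   // illustrative name for your existing DAO call
            } finally {
                cleanUpPermit.release();
            }
        });
        resp.getWriter().write("Clean Up Activities Triggered");
    }
}
```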
We did this for our project once and it worked pretty well.
We sent the 200 OK to the user as long as there were no issues accepting the request, and we used the Java ExecutorService to do the cleanup.
In case something went wrong, we notified the user separately.