I'm using GWT (Java to JavaScript) as the front-end, and the RPC mechanism (AJAX) to make server requests (servlets on the back-end).
Everything is going smoothly so far.
Now a test case has been generated like this:
1) Make a request to the server.
2) In between, disconnect the client's (user's) internet connection.
3) We handle the resulting InvocationException by showing a message:
@Override
public void onFailure(Throwable caught) {
    NTMaskAlert.unMask();
    if (caught instanceof InvocationException) {
        NTFailureMessage.showFailureException(caught, "Network disconnected");
    }
    onNTFailure(caught);
}
4) Now the client reconnects and the user makes another request.
Here is the interesting point: as soon as the internet reconnects, the browser starts processing the previous request; I observed this in Firebug. If I disconnect and reconnect twice, the request automatically goes out twice and the data is duplicated.
The reason for this is simply that this behaviour is typically what users want.
That is, if they are temporarily off the network, for example because the wireless router is down, then most of the time they expect that the browser, mail client, etc. will attempt to reconnect when the network is back; they don't expect to have to go to every window and "refresh" to get it working again.
Related
A client sends a request and catches a timeout exception. However, the server is still processing the request and saving it to the database. Before that happens, the client has already sent a second request, which duplicates the record in the database. How do I prevent that from happening? I'm using Java servlets and JavaScript.
A few suggestions:-
1) Increase the client timeout.
2) Make the server more efficient so it can respond faster.
3) Get the server to respond with an intermediate "I'm working on it" response before returning with the main response.
4) Does the server need to do all the work before it responds to the client, or can some of it be offloaded to a separate process to run later? (See the sketch below.)
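A minimal sketch of option 4, assuming a plain servlet and a shared ExecutorService; the servlet name, parameter name, and worker method are illustrative placeholders, not from the original post:

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that accepts the request quickly and defers the heavy work.
public class CreateRecordServlet extends HttpServlet {

    // Shared worker pool; in a real app you would size and shut this down via a ServletContextListener.
    private static final ExecutorService WORKER = Executors.newFixedThreadPool(4);

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        final String payload = req.getParameter("payload"); // placeholder parameter name

        // Hand the slow part off to a background thread.
        WORKER.submit(() -> saveToDatabase(payload));

        // Respond immediately so the client does not hit its timeout.
        resp.setStatus(HttpServletResponse.SC_ACCEPTED);
        resp.getWriter().println("Accepted, processing in the background");
    }

    private void saveToDatabase(String payload) {
        // ... the long-running work goes here ...
    }
}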
A client sends a request and catches a timeout exception. However the server is still processing the request
Make the servlet generate some output (can be just blank spaces) and flush the stream every so often (every 15 seconds for example).
If the connection has been closed on the client side, the write will fail with a socket exception.
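A rough sketch of that approach, assuming a plain HttpServlet; the work/abort methods below are placeholders, not anything from the original answer. Note that PrintWriter does not throw the socket exception itself, so the sketch uses checkError() instead:

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Long-running servlet that keeps the response alive and notices a dropped client.
public class LongTaskServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        PrintWriter out = resp.getWriter();
        while (!workIsDone()) {          // placeholder: your own progress check
            doNextChunkOfWork();         // placeholder: one slice of the long task
            out.print(' ');              // keep-alive output, e.g. every so often
            out.flush();
            if (out.checkError()) {      // PrintWriter swallows the socket exception, so check the error flag
                abortWork();             // placeholder: stop processing, the client gave up
                return;
            }
        }
        out.print(buildResult());        // placeholder: the real payload
        out.flush();
    }

    private boolean workIsDone() { return true; }
    private void doNextChunkOfWork() { }
    private void abortWork() { }
    private String buildResult() { return "done"; }
}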
Before that happening, the client already sent a second request which doubles the record on the database
Use the atomicity of the database, for example, a unique key. Start the process by creating a unique record (maybe in some "unfinished" status), it will fail if the record already exists.
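For example, a sketch under the assumption that each client request carries an id and the table has a unique constraint on it; the table and column names are made up for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

public final class RequestClaim {

    // Returns true if this call created the marker row (we own the request),
    // false if a duplicate request already inserted it.
    // Table/column names and the unique constraint on request_id are assumptions.
    static boolean claimRequest(Connection connection, String requestId) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO account_request (request_id, status) VALUES (?, 'UNFINISHED')")) {
            ps.setString(1, requestId);
            ps.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException e) {
            // Most JDBC drivers map unique-key violations to this subclass.
            return false;
        }
    }
}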
I'm using Jetty 9.3.5 and I would like to know what is the proper way to handle unreliable connections when sending websocket messages, specifically: I noticed cases when a websocket connection does not close normally so, even though the client side is down, it takes a lot of time until onClose() is triggered on the server (for ex. a user closes the laptop lid and puts it in standby - it can take 1-2 hours until the close event is received on the server side).
Thus, because the client is still registered, the server keeps sending messages that begin to build up. This becomes an issue when sending a large number of messages.
I've tested sending byte messages with:
Session.getRemote().sendBytes(ByteBuffer, WriteCallback)
Session.getRemote().sendBytesByFuture(ByteBuffer);
To simulate the connection going down on one side (i.e. the user puts the laptop in standby), on Linux I assigned an IP address to the eth0 interface, started sending the messages, and then brought the interface down:
ifconfig eth0 192.168.1.1
ifconfig eth0 up
--- start sending messages (simple incremented numbers) and connect using Chrome browser and print them ---
ifconfig eth0 down
This way the messages were still being sent by Jetty, the Chrome client did not receive them, and neither onClose nor onError was triggered on the server side.
My questions regarding Jetty are:
Is there a way to clear queued messages that were not delivered?
I've tried, but with no luck:
Session.getRemote().flush();
Can a max number of queued messages be set?
I've tried:
WebSocketServletFactory.getPolicy().setMaxBinaryMessageBufferSize(1)
Can I detect if the client does not receive the message? (or if the connection is in abnormal state let's say)
I've tried:
session.getRemote().sendBytes(bb, new WriteCallback() {
    @Override
    public void writeSuccess() {
        // print success
    }

    @Override
    public void writeFailed(Throwable arg0) {
        // print fail
    }
});
But this prints success even though the messages are not received.
I also tried the following, but couldn't find a solution with them:
factory.getPolicy().setIdleTimeout(...);
factory.getPolicy().setAsyncWriteTimeout(3000);
sendPing()
Thanks in advance!
Unfortunately, the WebSocket protocol, being a message-passing protocol, isn't really designed for this level of nuance between messages.
The first message MUST complete before you can even think of sending the next message. So if you have a message in process, then there is no way to safely cancel that message.
At best, an API could exist to truncate that message with a CONTINUATION / empty payload / fin=true.
But even then the remote endpoint wouldn't know that you canceled the message, it would just see a partial message.
Detecting connectivity issues is best handled with either OS-level events (like Android's Connectivity intents), or via periodic websocket PING (which inserts itself at the front of the line for outgoing websocket frames).
However, even with PING, if your outgoing websocket frame is in-progress, even the PING cannot be sent until that websocket frame is done sending.
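For example, a server-side periodic ping with the Jetty 9 API could look roughly like this; the interval, payload, and cleanup policy are assumptions for illustration, not part of the original answer:

import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.websocket.api.Session;

public class PingScheduler {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Sends a ping every 30 seconds; a dead link eventually surfaces as an idle timeout or a write error.
    public void start(final Session session) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                if (session.isOpen()) {
                    session.getRemote().sendPing(ByteBuffer.wrap(new byte[]{1}));
                }
            } catch (Exception e) {
                // A failed ping is a strong hint the connection is gone; close and clean up.
                session.close();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}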
RemoteEndpoint.flush() will attempt to flush any pending messages (and frames), not clear out pending messages (or frames).
As for detecting whether the client got the message, you'll need to implement some sort of message ACK in your own layer to verify that; the protocol has no such concept. (Some libs/APIs built on top of WebSocket have implemented message ACK at that layer; the cometd message ack extension comes to mind as a real-world example.)
What sort of situation are you attempting to solve for?
Perhaps using RemoteEndpoint.sendPartialString(String, boolean) or RemoteEndpoint.sendPartialBytes(ByteBuffer, boolean) to send smaller frames of the whole message could be useful to you. However, the other side might not have an API that can read those partial frames (e.g. JavaScript in a browser).
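If partial frames do fit your case, slicing one payload into several frames could look roughly like this; the frame size is an arbitrary choice here:

import java.io.IOException;
import java.nio.ByteBuffer;

import org.eclipse.jetty.websocket.api.RemoteEndpoint;

public final class PartialSender {

    // Sends the payload as a sequence of smaller websocket frames; only the last frame has fin=true.
    static void sendInFrames(RemoteEndpoint remote, byte[] payload, int frameSize) throws IOException {
        int offset = 0;
        while (offset < payload.length) {
            int length = Math.min(frameSize, payload.length - offset);
            boolean isLast = (offset + length) >= payload.length;
            remote.sendPartialBytes(ByteBuffer.wrap(payload, offset, length), isLast);
            offset += length;
        }
    }
}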
I have a form that creates an account and a servlet that handles the request.
However, creating this account is a long process, and I want to show something like a status bar or a progress bar. Here's the POST:
$.post("createAccount.jsp", function(data) { $("#status").text(data);
});
And the servlet would continuously print data like "creating x..." then "creating y" as the servlet runs. Is there a way to accomplish this or maybe another way to tackle this issue?
Thanks
HTTP works on a request-response model. You send a request, and the server responds. After that, the server doesn't know who you are!
It's like the server is a post office that doesn't know your address: you go to it and collect your letters; it doesn't come to your home to deliver them.
If you want constant notifications from the server, you can either use WebSockets (Stack Overflow also uses WebSockets) or use an "AJAX polling" mechanism, which sends an AJAX request to the server and waits for the server to respond. On receiving a response, it issues another AJAX request, and it keeps doing so until the server stops generating new data.
Read this for an explanation of AJAX Polling techniques
You could have your account creation servlet update a database or context attribute as it creates the account.
You could have a separate AJAX request to a different servlet that sends the most recent development found in the database or context attribute back to the web page. You would then poll your server with that AJAX request every so many fractions of a second (or whatever interval makes sense for how long it takes to create an account) to get all the updates.
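A minimal sketch of such a progress servlet, assuming the account-creation code writes its latest status string into a context attribute; the attribute name and URL mapping below are made up for illustration:

import java.io.IOException;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Polled by the page every so often; simply reports the latest status written by the worker.
@WebServlet("/accountProgress")
public class AccountProgressServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Object status = getServletContext().getAttribute("accountCreationStatus"); // assumed attribute name
        resp.setContentType("text/plain");
        resp.getWriter().print(status != null ? status.toString() : "starting...");
    }
}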
We have some long-running servlet requests. We want to stop these requests on the server if the client gives up. Is it possible to detect via the Servlet API whether the client has closed the HTTP connection in the meantime?
Write a byte (space character?) to the response and flush. If it throws IOException, then you know enough.
By the way, a real background job (e.g. with an @Asynchronous EJB), in combination with a kind of email notification with a specific link on finish, is likely a more user-friendly approach.
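A rough sketch of that background-job idea with an @Asynchronous EJB; the bean, method, and email step are illustrative placeholders:

import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class ReportJob {

    // Runs on the container's async thread pool; the calling servlet returns immediately.
    @Asynchronous
    public void runAndNotify(String userEmail) {
        // ... do the long-running work here ...
        // ... then send an email to userEmail with a link to the finished result ...
    }
}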
I'm having trouble establishing AsyncContexts for users and using them to push notifications to them. On page load I have some jQuery code to send the request:
$.post("TestServlet",{
action: "registerAsynchronousContext"
},function(data, textStatus, jqXHR){
alert("Server received async request"); //Placed here for debugging
}, "json");
And in "TestServlet" I have this code in the doPost method:
HttpSession userSession = request.getSession();
String userIDString = userSession.getAttribute("id").toString();
String paramAction = request.getParameter("action");

if (paramAction.equals("registerAsynchronousContext")) {
    AsyncContext userAsyncContext = request.startAsync();

    HashMap<String, AsyncContext> userAsynchronousContextHashMap =
            (HashMap<String, AsyncContext>) getServletContext().getAttribute("userAsynchronousContextHashMap");

    userAsynchronousContextHashMap.put(userIDString, userAsyncContext);
    getServletContext().setAttribute("userAsynchronousContextHashMap", userAsynchronousContextHashMap);

    System.out.println("Put asynchronous request in global map");
}
//userAsynchronousContextHashMap is created by a ContextListener on the start of the web-app
However, according to Opera Dragonfly (a debugging tool like Firebug), it appears that the server sends an HTTP 500 response about 30000ms after the request is sent.
Any responses created with userAsyncContext.getResponse().getWriter().print(SOME_JSON) and sent before the HTTP 500 response are not received by the browser, and I don't know why. A response sent using the regular response object (response.print(SOME_JSON)) is received by the browser ONLY if all the code in the "if" statement dealing with AsyncContext is not present.
Can someone help me out? I have a feeling this is due to my misunderstanding of how the asynchronous API works. I thought that I would be able to store these AsyncContexts in a global map, then retrieve them and use their response objects to push things to the clients. However, it doesn't seem as if the AsyncContexts can write back to the clients.
Any help would be appreciated.
I solved the issue. It seems as though there were several problems wrong with my approach:
In Glassfish, AsyncContext objects all have a default timeout period of 30,000 milliseconds (half a minute). Once this period expires, the entire response is committed back to the client, meaning you won't be able to use it again.
If you're implementing long polling this might not be much of an issue (since you'll end up sending another request after the response anyway), but if you wish to implement streaming (sending data back to the client without committing the response) you'll want to either increase the timeout or get rid of it altogether.
This can be accomplished with an AsyncContext's .setTimeout() method. Do note that while the spec states: "A timeout value of zero or less indicates no timeout.", Glassfish (at this time) seems to interpret 0 as being "immediate response required", and any negative number as "no timeout".
If you're implementing streaming, you must use the PrintWriter's .flush() method to push the data to the client after you're done using its .print(), .println(), or .write() methods to write the data (see the sketch after these notes).
On the client side, if you've streamed the data, it will trigger a readyState of 3 ("interactive", which means that the browser is in the process of receiving a response). If you are using jQuery, there is no easy way to handle readyStates of 3, so you're going to have to revert to regular Javascript in order to both send the request and handle the response if you're implementing streaming.
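Putting the timeout and flush points together, pushing data through a stored AsyncContext might look roughly like this; the map name mirrors the question's code, while the user id and JSON payload are placeholders:

import java.io.IOException;
import java.io.PrintWriter;
import java.util.HashMap;

import javax.servlet.AsyncContext;
import javax.servlet.ServletContext;

public final class AsyncPusher {

    // Looks up the stored AsyncContext for a user and streams a chunk of JSON to it.
    @SuppressWarnings("unchecked")
    public static void pushToUser(ServletContext servletContext, String userIDString, String someJson)
            throws IOException {
        HashMap<String, AsyncContext> contexts =
                (HashMap<String, AsyncContext>) servletContext.getAttribute("userAsynchronousContextHashMap");
        AsyncContext ctx = (contexts != null) ? contexts.get(userIDString) : null;
        if (ctx == null) {
            return; // nothing registered for this user
        }
        // Assumes ctx.setTimeout(...) was adjusted when the context was stored, so the
        // container has not already committed the response after its default 30 s timeout.
        PrintWriter out = ctx.getResponse().getWriter();
        out.print(someJson);
        out.flush(); // without the flush, streamed data never reaches the client
    }
}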
I have noticed that in Glassfish, if you use AsyncContext and call .setTimeout() with a negative number, the connection is broken anyway. To fix this I had to go to my Glassfish admin configuration: asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http. and set the timeout to -1. All this to stop Glassfish from closing the connections after 30 seconds.