A client sends a request and catches a timeout exception. However, the server is still processing the request and saving it to the database. Before that happens, the client has already sent a second request, which duplicates the record in the database. How do I prevent that from happening? I'm using Java servlets and JavaScript.
A few suggestions:
1) Increase the client timeout.
2) Make the server more efficient so it can respond faster.
3) Get the server to respond with an intermediate "I'm working on it" response before returning with the main response.
4) Does the server need to do all the work before it responds to the client, or can some of it be offloaded to a separate process to run later?
A client sends a request and catches a timeout exception. However, the server is still processing the request
Make the servlet generate some output (can be just blank spaces) and flush the stream every so often (every 15 seconds for example).
If the connection has been closed on the client side, the write will fail with a socket exception.
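For illustration, a minimal sketch of that heartbeat idea (the class name, the interval, and the simulated work are mine, not from the answer):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HeartbeatServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        for (int i = 0; i < 20; i++) {       // stands in for the long-running job
            try {
                Thread.sleep(15000);         // one 15-second slice of "work"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            out.print(' ');                  // the padding byte
            out.flush();                     // push it onto the socket now
            if (out.checkError()) {          // PrintWriter never throws; it sets an error flag
                return;                      // client closed the connection: stop working
            }
        }
        out.print("done");                   // the real payload
    }
}

Note that a PrintWriter swallows IOExceptions and reports them through checkError(); if you write to the raw ServletOutputStream instead, you get the socket exception directly.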
Before that happens, the client has already sent a second request, which duplicates the record in the database
Use the atomicity of the database, for example a unique key. Start the process by creating a unique record (perhaps in some "unfinished" status); the insert will fail if the record already exists.
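A rough sketch of that insert-first pattern over JDBC (the table, column, and SQLState handling are illustrative; it assumes a UNIQUE constraint on orders.client_request_id and that the client sends the same request ID on every retry):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class IdempotentInsert {

    /** Returns true if this request now owns the record, false if a duplicate got there first. */
    public static boolean claimRequest(Connection con, String clientRequestId)
            throws SQLException {
        String sql = "INSERT INTO orders (client_request_id, status) VALUES (?, 'UNFINISHED')";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, clientRequestId);
            ps.executeUpdate();
            return true;                     // we created the record, safe to proceed
        } catch (SQLException e) {
            // SQLState class 23 is an integrity-constraint violation on most databases:
            // the first request already created the record, so this one is a duplicate.
            if (e.getSQLState() != null && e.getSQLState().startsWith("23")) {
                return false;
            }
            throw e;                         // anything else is a real error
        }
    }
}

The retried request can then either report "already in progress" or simply wait for the first one to finish.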
Related
I am building an application that connects with other applications. It needs to accept a request over a WebSocket connection. Once I receive the request, I need to forward it to a second application for processing, and there are a few cases:
If the second application returns "accepted", wait for a response from a third application (the third application sends its response to the second, and the second initiates a push).
If the second application returns anything other than "accepted", return false to the request.
My confusion is: should I handle the request synchronously or asynchronously?
In case #1 I have to wait some time to receive a response from the other application, whereas in case #2 I can respond to the request immediately.
Sequence diagram for clarity of flow
We have some long-running servlet requests, and we want to stop them on the server if the client gives up. Is it possible to detect via the Servlet API whether the client has closed the HTTP connection in the meantime?
Write a byte (a space character?) to the response and flush. If it throws an IOException, then you know enough.
By the way, a real background job (e.g. with an @Asynchronous EJB), combined with an email notification containing a link once it finishes, is likely a more user-friendly approach.
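A minimal sketch of that background-job shape, assuming EJB is available (the bean and method names are mine):

import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class ReportJob {

    @Asynchronous                            // the container runs this on its own thread
    public void runAndNotify(String userEmail) {
        // ... do the long-running work ...
        // ... then email userEmail a link to the finished result ...
    }
}

The servlet injects the bean with @EJB, calls runAndNotify(...), and returns at once; the HTTP request never has to outlive the work.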
Assuming no keep-alives, when a servlet container is acting as a standalone server, I assume that the servlet's thread is not released until the entire response is sent to the client (say, a web browser). Is this a correct assumption?
But what happens if the servlet is behind a reverse proxy like Nginx? Is the thread released once the response is delivered to Nginx, or is it held until the response is sent to its final client (say a browser)?
Update: Let me try to make this a bit clearer.
It takes mere milliseconds (say 2 ms) for a response to travel from the servlet to a proxy like Nginx, but it can then take an additional 80 ms or so for the final response to go from Nginx to the browser. Does the servlet release the thread/stream once the response is sent to Nginx, or does it hold onto them until the response reaches the browser (that is, the entire 80 ms)?
Question: I assume that the servlet's thread is not released until the entire response is sent to the client (say a web browser). Is this a correct assumption?
Ans: No, it is wrong. The servlet container just writes the content to the socket and returns. A return from write() does not guarantee that the response has reached the client.
Question: Is the thread released once the response is delivered to Nginx, or is it held until the response is sent to its final client (say a browser)?
Ans: When Nginx sits in between, the client from the servlet container's point of view is Nginx; the container is not aware of the actual remote client. So the thread is released once the response is written to Nginx.
If the servlet container cannot send a response to the client, an exception is triggered and handled by the container. You can enclose the writing to the output stream or writer in a try/catch/finally (with close()), but you don't need to; the container will manage it, including returning the thread to the pool.
A servlet does not see the network. According to the specification, it is handed two objects: a request and a response to be filled in (in the case of HTTP, an HttpServletRequest and an HttpServletResponse). It processes the data in the request object and writes to the buffer in the response object. Once that content is committed by the servlet, the container may do some post-processing (using filters) and will transmit it back to the client.
The servlet thread returns naturally to the pool once the call to the request-handling method finishes (which may happen after the payload is sent back to the client, if the method has further work to do).
Note that because the servlet doesn't see the network and is only concerned with a single request, the state of the HTTP connection (keep-alive or close) is independent of the servlet's lifetime; several servlets may handle the different requests pipelined on a single connection. See this question for a related issue.
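A tiny illustration of that last point, with hypothetical method names: the response can be committed well before the handling method returns, and the thread only goes back to the pool afterwards.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CommitEarlyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().print("done");
        resp.flushBuffer();        // response committed; the container may already be
                                   // handing bytes to the client (or to the proxy)
        updateStatistics();        // hypothetical post-response bookkeeping; the
                                   // thread stays out of the pool until we return
    }

    private void updateStatistics() { /* e.g. record timings */ }
}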
I'm having trouble establishing AsyncContexts for users and using them to push notifications to those users. On page load I have some jQuery code that sends the request:
$.post("TestServlet",{
action: "registerAsynchronousContext"
},function(data, textStatus, jqXHR){
alert("Server received async request"); //Placed here for debugging
}, "json");
And in "TestServlet" I have this code in the doPost method:
HttpSession userSession = request.getSession();
String userIDString = userSession.getAttribute("id").toString();
String paramAction = request.getParameter("action");

if (paramAction.equals("registerAsynchronousContext")) {
    AsyncContext userAsyncContext = request.startAsync();
    HashMap<String, AsyncContext> userAsynchronousContextHashMap =
            (HashMap<String, AsyncContext>) getServletContext().getAttribute("userAsynchronousContextHashMap");
    userAsynchronousContextHashMap.put(userIDString, userAsyncContext);
    getServletContext().setAttribute("userAsynchronousContextHashMap", userAsynchronousContextHashMap);
    System.out.println("Put asynchronous request in global map");
}
// userAsynchronousContextHashMap is created by a ContextListener on the start of the web-app
However, according to Opera Dragonfly (a debugging tool like Firebug), the server sends an HTTP 500 response about 30,000 ms after the request is sent.
Any response created with userAsyncContext.getResponse().getWriter().print(SOME_JSON) and sent before the HTTP 500 response is not received by the browser, and I don't know why. A response sent via the regular response object (response.getWriter().print(SOME_JSON)) is received by the browser ONLY if all the code in the if statement dealing with AsyncContext is absent.
Can someone help me out? I have a feeling this is due to my misunderstanding of how the asynchronous API works. I thought that I would be able to store these AsyncContexts in a global map, then retrieve them and use their response objects to push things to the clients. However, it doesn't seem as if the AsyncContexts can write back to the clients.
Any help would be appreciated.
I solved the issue. It seems there were several problems with my approach:
In Glassfish, AsyncContext objects have a default timeout of 30,000 milliseconds (30 seconds). Once this period expires, the entire response is committed back to the client, meaning you won't be able to use it again.
If you're implementing long polling this might not be much of an issue (since you'll send another request after the response anyway), but if you wish to implement streaming (sending data back to the client without committing the response) you'll want to either increase the timeout or get rid of it altogether.
This can be accomplished with AsyncContext's setTimeout() method. Do note that while the spec states "A timeout value of zero or less indicates no timeout.", Glassfish (at this time) seems to interpret 0 as "immediate response required" and any negative number as "no timeout".
If you're implementing streaming, you must call the PrintWriter's flush() method to push the data to the client after you're done writing it with print(), println(), or write(); see the sketch after this list.
On the client side, streamed data triggers a readyState of 3 ("interactive", meaning the browser is in the process of receiving a response). jQuery has no easy way of handling a readyState of 3, so you'll have to fall back to plain JavaScript to both send the request and handle the response if you're implementing streaming.
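A compressed sketch combining the two server-side fixes above (the names are illustrative, and the push would normally happen later from other code rather than inline):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StreamingServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(-1);                      // negative = no timeout on Glassfish

        // Later, from whatever code pushes events to this user:
        PrintWriter out = ctx.getResponse().getWriter();
        out.print("{\"event\":\"hello\"}");
        out.flush();                             // without this the data sits in the buffer
        // call ctx.complete() only when the stream should end
    }
}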
I have noticed that in Glassfish, if you use AsyncContext and pass setTimeout() a negative number, the connection is broken anyway. To fix this I had to go to the Glassfish admin configuration (asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.) and set the timeout to -1. All this to stop Glassfish from closing the connections after 30 seconds.
Salesforce can send up to 100 requests inside one SOAP message. While sending this type of bulk outbound message request, my PHP script finishes executing, but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the outbound message log (monitoring), I see all the messages in a pending state with the delivery failure reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error?
I have tried these methods to increase the execution time on my server, as I have no access to the Salesforce side:
set_time_limit(0); // in the script
max_execution_time = 360 ; Maximum execution time of each script, in seconds
max_input_time = 360 ; Maximum amount of time each script may spend parsing request data
memory_limit = 32M ; Maximum amount of memory a script may consume
I used the high settings just for testing.
Any thoughts as to why this is failing the ACK delivery back to Salesforce?
Here is some of the code:
This is how I accept the incoming SOAP request and send the ACK:
$data = 'php://input';
$content = file_get_contents($data);

if ($content) {
    respond('true');
} else {
    respond('false');
}
The respond function:
function respond($tf) {
    $ACK = <<<ACK
<?xml version="1.0" encoding="utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <soapenv:Body>
        <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
            <Ack>$tf</Ack>
        </notifications>
    </soapenv:Body>
</soapenv:Envelope>
ACK;
    print trim($ACK);
}
These live in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that arrive in one SOAP message), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds.
Could it be Apache? PHP?
I have also tested just accepting the 100-request SOAP message and immediately sending the ACK; the queue clears out, so I know the problem is on my side of things.
There are no errors in the Apache log; the script runs fine.
I did find some info on the Salesforce site but still no luck. Here is the link.
Also I'm using the PHP Toolkit 11 (From Salesforce).
Other forum with good SF help
UPDATE:
If I receive the incoming message and print the response, should the response be sent first, regardless of what I do afterwards? Or does it wait for my processing to finish and then print the response?
UPDATE #2:
Okay, I think I have found the problem:
PHP uses a single-threaded processing approach and will not send back the ACK until the thread has completed its processing. Is there a way to make this a multi-threaded process?
Thread #1 - accept the incoming SOAP request and send back the ACK
Thread #2 - Process the SOAP request
I know I could break it up into like a DB table or flat file, but is there a way to accomplish this without doing that?
I'm going to try closing the socket after the ACK submission and then continuing the processing; fingers crossed it will work.
Sounds like the outbound message is hitting the timeout. Other users have reported timeouts as low as 10 seconds (see the forum link below). The sandbox instance that I use (cs1) times out after about one minute, from my testing. It's possible that the timeout is an organization- or instance-level setting that Salesforce controls.
Two things you could try:
1) Open a support ticket with Salesforce to see if they can increase the timeout value for outbound messages. From my experience, there are a lot of settings that they can modify at the organization level; this might be one of them.
2) Offload processing of your data, so that the ACK is sent back to Salesforce immediately and the actual processing of your data takes place asynchronously, e.g. via a message queue or a separate thread (see the sketch after this list).
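For a Java endpoint, the second suggestion might look like this rough sketch (a PHP listener would hand off to a queue table or a background script instead; all names here are illustrative):

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OutboundMessageListener extends HttpServlet {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final String payload = new String(req.getInputStream().readAllBytes());
        workers.execute(() -> process(payload));   // heavy work happens off this thread
        resp.setContentType("text/xml");
        resp.getWriter().print(buildAck(true));    // the ACK goes back immediately
    }

    private void process(String soapBody) { /* parse and store the notifications */ }

    private String buildAck(boolean ok) {
        return "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soapenv:Body><notifications xmlns=\"http://soap.sforce.com/2005/09/outbound\">"
             + "<Ack>" + ok + "</Ack></notifications></soapenv:Body></soapenv:Envelope>";
    }
}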
Some other resources that might be helpful:
related Salesforce forum discussion
Outbound messaging documentation
I think they time out while waiting for your script to end.
There is a way you could try to fix this:
Output the envelope with the ACK message at the beginning, and then flush it so that their server gets it before you finish processing. No threading, just a rethinking of priorities :)
Read this for the best info on flushing content.
Are you 100% sure that Salesforce will wait the amount of time your scripts need to run? 80 seconds seems like a long time to me.
If all requests failed I would guess that Salesforce expects you to set the Content-Type header appropriately, but this does not seem to be the case.
I don't know about Salesforce, but if you want to do some multithreading in PHP you should take a look at this code example, and more precisely at pcntl_fork().
N.B.: pcntl is not enabled by default and won't work on Windows platforms.
So what I've done is:
Accept all incoming OBMs and parse them into a DB
When this is done, kick off a process that runs in the background (actually, I send it to the background so the script can end)
Send the ACK back
Just accepting the raw data, parsing it into fields, and inserting it into a DB is fairly quick. Then I issue a Linux command-line call that sends the processing script to run in the background, send the ACK to SF, and the script ends within the allotted time. It is cumbersome to split the script's processing into two separate stages, but it works.