I have a use case where processing can take up to 2 hours. Once the user submits the request from the browser, my understanding was that the browser would keep waiting for the response to arrive. Instead, I get an error page after some 15-20 minutes.
I understand that a web request should not be this time-consuming, but I am stuck with the existing architecture and design.
Can someone suggest a solution for this problem?
I am using IE 9 as the browser and Tomcat as the server.
What you could do for issues like this is create a separate thread on the server, immediately return a response telling the user the job has started, and then either
display the result of that job on a specific page (this seems like an acceptable solution; the user will probably not stay in front of the screen for such a long task)
or poll via AJAX for the status of the job you just triggered.
Most probably the server timeout is about 15 minutes, which is why you get the error after that long. One option is to increase the server timeout, but increasing it to 2 hours would be far too long. A better option is to poll the server from the browser to find out the status of the task; an AJAX call works well for this.
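Both answers describe the same pattern: run the work in a background thread, return a job id at once, and let the browser poll. A minimal plain-Java sketch of the server-side bookkeeping (class and method names like `JobTracker` are illustrative, not an existing API):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

// Tracks long-running jobs so a servlet can hand back a job id immediately
// and answer later status polls, instead of holding the HTTP connection open.
public class JobTracker {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called from the submit request: start the work, return a token at once.
    public String submit(Callable<String> longTask) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, pool.submit(longTask));
        return id;
    }

    // Called from each polling request: report status without blocking.
    public String status(String id) throws Exception {
        Future<String> f = jobs.get(id);
        if (f == null) return "UNKNOWN";
        return f.isDone() ? "DONE: " + f.get() : "RUNNING";
    }

    public void shutdown() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        JobTracker tracker = new JobTracker();
        String id = tracker.submit(() -> { Thread.sleep(200); return "result"; });
        System.out.println(tracker.status(id));   // RUNNING (task still sleeping)
        Thread.sleep(500);
        System.out.println(tracker.status(id));   // DONE: result
        tracker.shutdown();
    }
}
```

The servlet behind the AJAX poll would simply call `status(id)` and write the string back; the 2-hour task never ties up a request thread for more than a moment.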
Related
I have a servlet/JSP web app. One of the requests in the login action takes up to 120 seconds to complete on Firefox and IE, yet the same request completes normally in Chrome (below 1 second). On debugging the code I can conclude that my web app filter returns the response quickly, but the browser shows it took 120 seconds to receive it. The Firefox developer tools show the waiting time as 360 ms and the receiving time as approximately 120 seconds. The same behavior can be seen on IE. Any clue what might be causing this?
EDIT 1: This issue is observed only for requests that return a 302 response code.
EDIT 2: I tried using an intercepting tool to inspect the requests; for this I had to route the traffic through a proxy at 127.0.0.1 (localhost). One observation is that the application is significantly faster when going through the proxy. A possible explanation is that the proxy returns the 302 responses to the browser with a status code of 200. So now the question is: why are 302 responses slow on Firefox and IE?
I was finally able to resolve this issue in my app itself. In one place in the code, flush was being called on the output stream of the response:
super.getOutputStream().flush();
This was only being done when the response was not a 302, which seemed to make sense because 302s don't carry any data in the response body. Chrome is smart enough to see that and doesn't wait for any data. However, Firefox and IE keep waiting for data, so the connection for that particular call is not closed. This causes the delay seen in the screenshot attached to the question (the browser tries to download content that never arrives).
I have now changed the code to do the flush operation for 302 responses as well. This has resolved the entire problem.
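To see why a missing flush stalls the browser, note that bytes written to a buffered stream sit in the buffer until `flush()` (or `close()`) pushes them out; until then, the peer waiting on the wire sees nothing arrive. A small, self-contained demonstration using `ByteArrayOutputStream` as a stand-in for the socket:

```java
import java.io.*;

// Shows that data written to a buffered stream is invisible to the
// receiving end until flush() is called - the root cause of the
// "browser waits 120 s for a response that was already produced" symptom.
public class FlushDemo {
    public static int[] bytesVisible() throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream(); // stand-in for the socket
        BufferedOutputStream out = new BufferedOutputStream(wire, 8192);
        out.write("HTTP body".getBytes("UTF-8"));
        int before = wire.size();   // 0: the bytes are stuck in the buffer
        out.flush();
        int after = wire.size();    // 9: only now does the peer receive them
        return new int[] { before, after };
    }

    public static void main(String[] args) throws IOException {
        int[] r = bytesVisible();
        System.out.println("before flush: " + r[0] + ", after flush: " + r[1]);
    }
}
```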
Each browser has its own settings for handling connections, and there are limits.
An answer-aggregation is already there:
how-to-solve-chromes-6-connection-limit
EDIT: Since the question was edited, my answer no longer matches the topic.
If a request in a web app acts differently on different browsers, then the issue should be on the client side (in the scope of a single request/response interaction).
I am working on a project using GWT and Google App Engine. I spent a whole day trying to understand task queues but did not manage to. Could you please tell me what the purpose of a Task Queue is? I am using automatic scaling, so the server-side request processing limit is only 1 minute. Using a Task Queue, can I process a request for more than 1 minute on the server side? Any help?
Thanks in advance
In short, task queues are for "work outside of a user request, initiated by a user request," making them good for background work. Instead of the 1-minute limit for normal requests, tasks can run for up to 10 minutes. That does mean you might not have the results of the task before a typical user request times out (or throws the DeadlineExceededError).
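Conceptually, a push task queue just means "enqueue a description of the work now, execute it later outside the request." App Engine provides this through its task queue API; purely to illustrate the idea (this is a plain-Java analogue, not App Engine code), a minimal sketch looks like:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: mimics what a push task queue does for you.
// A request handler enqueues work and returns immediately;
// a worker drains the queue outside of any request.
public class MiniTaskQueue {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread worker;

    public MiniTaskQueue() {
        worker = new Thread(() -> {
            try {
                while (true) queue.take().run();  // run tasks outside any request
            } catch (InterruptedException e) {
                // shutdown requested
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // What a request handler would call: returns at once, well under 1 minute.
    public void enqueue(Runnable task) { queue.add(task); }
}
```

On App Engine you never manage the worker thread yourself; the platform delivers each queued task to a handler URL and gives it the longer (10-minute) deadline.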
I am developing a web application that allows users to run AI algorithms on a server remotely to decrease wait time for solutions. Here is an outline.
Browser -> jQuery AJAX -> Apache2 proxy -> tomcat7 -> RESTful java -> Runtime.getRuntime().exec() -> command-line C algorithm
The RESTful service returns the desired information via the AJAX response, and some handling happens in the browser.
This works fine for most of the algorithms I have tested. Some of the algorithms, however, time out after running for a long time. I have a live status update implemented, so the algorithm is still running and constantly generating output in the browser, but after 5 minutes or so I get a 500 Internal Server Error.
Killing the algorithm process from the command line also results in a 500 Internal Server Error.
Running the algorithm directly from the command-line results in proper execution.
Sending an AJAX call to a restful method that sleeps for an hour results in a 503 (Service Temporarily Unavailable) error.
What else could be the cause of this? I've been troubleshooting for a hot minute and am all out of ideas at the moment.
Thanks for your help!
--EDIT--
I have developed a workaround that solves the problem, but I am still interested in what anyone else thinks. Eliminating the long-lived AJAX call and replacing it with a repeating call that checks the algorithm's status through a separate status file works for long-running processes.
But why did I previously have a problem with an AJAX call that was explicitly configured never to time out?
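One plausible explanation for the original 500 after roughly five minutes, given the Apache2 proxy in the pipeline: Apache's proxy timeout defaults to the global `Timeout` directive, which is 300 seconds, and when it fires the proxy returns an error to the browser regardless of what timeout the jQuery call specifies. If raising it is acceptable, something like this in the Apache configuration would test the theory (the values here are illustrative):

```apache
# Allow proxied requests to Tomcat to run longer before Apache gives up.
# 1800 s = 30 min; pick a value that covers your longest-running algorithm.
ProxyTimeout 1800
Timeout 1800
```

That said, the polling workaround is the more robust design, since no intermediary then has to hold a connection open for the life of the algorithm.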
I am trying to create a JSP page that will show the status of a group of local servers.
Currently I have a scheduled class that constantly polls the servers at a 30-second interval, with a 5-second timeout while waiting for each server's reply, and provides the JSP page with the information. However, I find this inaccurate, as it takes some time before the scheduled class's information is updated. Is there a better way to check the status of several servers within a local network?
-- Update --
Thanks @Romain Hippeau and @dbyrne for their answers.
Currently I am trying to move more of the code to the server end, doing a constant asynchronous check on the status of the group of servers so as to make the page more responsive.
However, I forgot to add that the client has the ability to control the server status. This causes a problem: for example, when the client changes a server's status and then refreshes the page, the page retrieves information from the not-yet-updated scheduled class and shows the server's previous status instead.
You can use Tomcat Comet; here is an article: http://www.ibm.com/developerworks/web/library/wa-cometjava/index.html
This technology (which is part of the Servlet 3.0 spec) allows you to push notifications to the clients. There are issues with running it behind a firewall, but if you are within an intranet this should not be too big of an issue.
Make sure you poll the servers asynchronously. You don't want to wait for a response from one server before polling the next. This will dramatically cut down the amount of time it takes to poll all the servers. It was unclear to me from your question whether or not you are already doing this.
Asynchronous http requests in Java
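To make the asynchronous-polling advice concrete: submit one probe per server to a thread pool and then collect the futures, so the total sweep time is roughly that of the slowest single server rather than the sum of all of them. A sketch with a pluggable probe in place of a real HTTP or socket check (the `Probe` interface and class name are illustrative):

```java
import java.util.*;
import java.util.concurrent.*;

// Polls a group of servers in parallel instead of one after another.
public class ParallelStatusPoller {

    // Stand-in for a real probe (e.g. opening a socket with a short timeout).
    interface Probe { String check(String server) throws Exception; }

    public static Map<String, String> pollAll(List<String> servers, Probe probe)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(servers.size());
        Map<String, Future<String>> pending = new LinkedHashMap<>();
        for (String s : servers) pending.put(s, pool.submit(() -> probe.check(s)));

        Map<String, String> results = new LinkedHashMap<>();
        for (Map.Entry<String, Future<String>> e : pending.entrySet()) {
            try {
                // Per-server cap, so one dead host can't stall the whole sweep.
                results.put(e.getKey(), e.getValue().get(5, TimeUnit.SECONDS));
            } catch (ExecutionException | TimeoutException ex) {
                results.put(e.getKey(), "DOWN");
            }
        }
        pool.shutdown();
        return results;
    }
}
```

With the original numbers (5-second reply timeout per server), a sequential sweep of 10 servers could take 50 seconds worst case; the parallel version caps the whole sweep near 5 seconds.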
Let's say I click a button on a web page to initiate a submit request. Then I suddenly realize that some data I have provided is wrong and that if it gets submitted, then I will face unwanted consequences (something like a shopping request where I may be forced to pay up for this incorrect request).
So I frantically click the Stop button not just once but many times (just in case).
What happens in such a scenario? Does the browser just cancel the request without informing the server? If in case it does inform the server, does the server just kill the process or does it also do some rolling back of all actions done as part of this request?
I code in Java. Does Java have any special feature we can use to detect Stop requests and roll back whatever we did as part of this transaction?
A web page load from a browser is usually a 4-step process (not considering redirections):
Browser sends HTTP Request, when the Server is available
Server executes code (for dynamic pages)
Server sends the HTTP Response (usually HTML)
Browser renders HTML, and asks for other files (images, css, ...)
The browser reaction to "Stop" depends on the step your request is at that time:
If your server is slow or overloaded, and you hit "Stop" during step 1, nothing happens. The browser doesn't send the request.
Most of the times, however, "Stop" will be hit on steps 2, 3 and 4, and in those steps your code is already executed, the browser simply stops waiting for the response (2), or receiving the response (3), or rendering the response (4).
The HTTP call itself is always a two-step action (request/response), and there is no automatic way to roll back the execution from the client.
Since this question may attract attention from people not using Java, I thought I would mention PHP's behavior in this regard, since it is very surprising.
PHP internally maintains a status of the connection to the client. The possible values are NORMAL, ABORTED and TIMEOUT. While the connection status is NORMAL, life is good and the script will continue to execute as expected.
If the user clicks the Stop button in their browser, the connection is typically closed by the client and the status changes to ABORTED. A change of status to ABORTED immediately ends execution of the running script. As an aside, the same thing happens when the status changes to TIMEOUT (PHP's setting for the allowed run time of scripts is exceeded).
This behavior may be useful in certain circumstances, but there are others where it could be problematic. It seems that it should be safe to abort at any time during a proper GET request; however, aborting in the middle of a request that makes a change on the server could lead to only partially completed changes.
Check out the PHP manual's entry on Connection Handling to see how to avoid complications resulting from this behavior:
http://www.php.net/manual/en/features.connection-handling.php
Generally speaking, the server will not know that you've hit stop, and the server-side process will complete. At the point that the server tries to send the response data back to the client, you may see an error because the connection was closed, but equally you may not. What you won't get is the server thread being suddenly interrupted.
You can use various elaborate mechanisms to mitigate this, like having the client send frequent AJAX calls to the server that say "still waiting," and having the server perform its processing in a new thread that checks for those calls, but that doesn't solve the problem completely.
The client will immediately stop transmitting data and close its connection. 9 out of 10 times your request will already have got through (perhaps due to OS buffers being "flushed" to the server). No extra information is sent to the server informing it that you stopped your request.
In important scenarios, your submit should have two stages: verify and submit. If the final submit goes through, you commit any transactions. I can't think of any other way to avoid that situation, other than allowing the user to undo their actions after a commit. In the order example, after the order is placed, allow your customers to change their mind and cancel the order if it has not been shipped yet. Of course, that is extra code you need to write to support it.
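The two-stage idea can be sketched as a pending-order map: the verify step parks the draft under a token, and nothing is committed until the user explicitly confirms with that token, so a first request that the user abandons (or Stop-s) costs nothing. The names here are illustrative, not a real framework API:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a verify-then-commit submit: the first request only parks
// the order; the irreversible work happens only on explicit confirmation.
public class TwoPhaseSubmit {
    private final Map<String, String> pending = new ConcurrentHashMap<>();
    private final Map<String, String> committed = new ConcurrentHashMap<>();

    // Step 1: validate and park the order; show the user a confirmation page.
    public String verify(String orderDetails) {
        String token = UUID.randomUUID().toString();
        pending.put(token, orderDetails);
        return token;
    }

    // Step 2: only now is anything irreversible done.
    public boolean confirm(String token) {
        String order = pending.remove(token);
        if (order == null) return false;   // unknown or already-used token
        committed.put(token, order);       // commit the real transaction here
        return true;
    }

    public boolean isCommitted(String token) { return committed.containsKey(token); }
}
```

A useful side effect of the token is idempotence: a double-click or a browser retry of the confirm step finds the token already consumed and cannot commit the order twice.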