I have a Java web application that generates a report, with the ability to export that report to an Excel file. The problem is that whenever I generate the Excel file, Firefox displays a "Connection Timed Out" page.
I have no idea why this is happening; I see no problems in my code. Could it be a server issue, or the amount of data I'm generating? No errors appear in the logs either.
Any advice or suggestions would be of great help, thanks.
It sounds like the request is taking too long and being timed out: report generation is simply too slow. The timeout could be enforced by the client, the app server, or the web server (if you have a separate web server). You have a few options:
Find out where the timeout settings are in the Application Server and increase them
Speed up your report writing code so it doesn't take as long
Make the report writer an asynchronous job (e.g. by kicking off the report generation in a new thread), and have the client poll the server until it's finished, then request the file.
Update based on OP comment:
Regarding the last suggestion:
If the report is generated by another thread, the current request returns before the report is ready, so the browser doesn't have to wait at all. However, this is quite a lot of work, because you need a way for the client-side code to find out when the report is finished. Also, you are not supposed to launch your own threads from a servlet; a sketch of the pattern follows.
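A minimal sketch of that pattern, assuming a hypothetical ReportServlet whose generateReport() stands in for your real report code (in production, prefer a container-managed thread pool over creating your own, for the reason above):

    import java.io.IOException;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.*;

    import javax.servlet.http.*;

    // Sketch: POST starts report generation in the background and returns a
    // job id; the client polls GET with that id until the file is ready.
    public class ReportServlet extends HttpServlet {
        // Created ad hoc here for brevity; a container-managed executor
        // avoids the "don't start your own threads" problem.
        private final ExecutorService executor = Executors.newFixedThreadPool(2);
        private final Map<String, Future<byte[]>> jobs = new ConcurrentHashMap<>();

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String jobId = UUID.randomUUID().toString();
            jobs.put(jobId, executor.submit(this::generateReport));
            resp.getWriter().print(jobId); // client polls doGet with this id
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            Future<byte[]> job = jobs.get(req.getParameter("jobId"));
            if (job == null || !job.isDone()) {
                resp.setStatus(HttpServletResponse.SC_ACCEPTED); // still running
                return;
            }
            try {
                resp.setContentType("application/vnd.ms-excel");
                resp.getOutputStream().write(job.get());
            } catch (InterruptedException | ExecutionException e) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            }
        }

        private byte[] generateReport() {
            // ... the long-running report/Excel generation goes here ...
            return new byte[0];
        }
    }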
Maybe you can make the original request via Ajax, or in an iframe? That way the restrictive timeout threshold may not be in effect.
Related
I have a servlet that accepts large (up to 4GB) binary file uploads. The submitted file is transmitted as the body of an HTTP POST.
The servlet has to perform some time-consuming processing as it receives the file, and it has to finish doing that before sending the response. As a result, to a fast client the server can appear to have hung, because the client may be waiting a minute or two after sending the last few bytes before getting the response.
Is there a way either within Tomcat or within the servlet API to throttle back the speed at which the server accepts the file? I would like it to appear to the client that the server is accepting the file at (for example) 10MB/second rather than it accepting the file at 50MB/second and then taking a few minutes after receiving the body to return a response.
Thanks.
I'm expanding on Mark Thomas's comment here because I feel it's worth being an answer (or the answer) rather than a comment. Mark, let me know if you want to convert the comment yourself and I'll happily delete mine.
John, you're trying to solve your problem in a way that imposes severe limitations: what bandwidth do you want to throttle to? What happens when the server is upgraded to a beefier CPU and can process faster? What if multiple uploads happen at the same time?
You probably want a 4GB upload to complete in as short a time as possible - imagine the connection going down in the middle; in a web application this typically means restarting the upload from the beginning. So you should decouple your processing from the upload procedure as much as possible.
You also don't mention the file format that gets uploaded: if it happens to be a zip file, note that the server can't do anything with the file until it's fully transmitted, as zip files have their directory of contents at the end. (This might be old knowledge, but at least the old spec had it that way. Someone correct me if this has changed.)
So, the proper way: accept the file, signal that you received it and are processing it. If you like, implement Ajax updates for when you're done. In the simplest case: "click here to see if processing has finished", or reload the page periodically. Anything works, and everything is better than throttling throughput at this layer. A sketch of that decoupling follows.
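For illustration, a minimal sketch of the idea, with hypothetical names; process() stands in for the time-consuming work, and the single-threaded executor is just a stand-in for whatever job queue you actually use:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.*;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import javax.servlet.http.*;

    // Sketch only: accept the upload at full speed, then hand the file off
    // to a background worker so the response can be sent immediately.
    public class UploadServlet extends HttpServlet {
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Stream the body straight to disk; no processing happens yet.
            Path tempFile = Files.createTempFile("upload-", ".bin");
            try (InputStream in = req.getInputStream()) {
                Files.copy(in, tempFile, StandardCopyOption.REPLACE_EXISTING);
            }
            // Process asynchronously; the client gets its response right away.
            worker.submit(() -> process(tempFile));
            resp.setStatus(HttpServletResponse.SC_ACCEPTED); // 202: received, processing
        }

        private void process(Path file) {
            // ... the time-consuming work described in the question ...
        }
    }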
I have a regular JSP/Servlet/Java web application that is used for uploading pictures from a mobile device. I am using Apache Commons library for the upload. Application is hosted on WebSphere Application Server 7.0.
Everything works fine, and the user can upload several images totaling 8MB or more if he has a really strong signal/connection or is on good WiFi.
The problem arises when the user is at a location with a poor 3G/4G signal/connection. He gets errors like "IllegalStateException" or a time-out error, and in some cases the mobile browser just stays on the submit page with the progress bar no longer moving.
Any suggestions on how to handle this gracefully? For example, is there a way to intervene after a set amount of time and give the user the option to submit the form without the file attachment (i.e. just submit the form's text fields)? Any other suggestions are welcome too.
UPDATE: The setTimeout solution below worked for me. The other missing piece was that I have to issue a "browser stop" command to stop the original submission still in progress before I can issue a re-submit; otherwise my re-submit command is just ignored by the browser.
The use case here is simple: if the upload didn't finish in N minutes, remove/clear the field using JavaScript and resend the form.
You don't need to control the upload in the basic implementation; you can safely assume that the timed resend won't happen if the first attempt was successful, because the page will have reloaded.
jQuery pseudocode:
setTimeout(function () {
    window.stop(); // per the OP's update: abort the stalled submission first
    $imageFieldNode.remove(); // drop the file input so only text fields remain
    $form.trigger('submit');  // resubmit the form without the attachment
}, 30000); // after 30 seconds
The more advanced way is to use a ready-made solution for controlled uploads. They work like this:
the upload starts
JavaScript queries the server at intervals with a GET request for the amount of content received so far
every time it gets that information, it reports progress
You can do a lot with these libs. A server-side sketch of such a progress endpoint is below.
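For example, the Commons FileUpload library you are already using ships a ProgressListener that can feed such a GET endpoint. A rough sketch, using the session as the simplest possible progress store:

    import java.io.IOException;

    import javax.servlet.http.*;

    import org.apache.commons.fileupload.ProgressListener;
    import org.apache.commons.fileupload.servlet.ServletFileUpload;

    // Sketch: record upload progress server-side and expose it to the
    // polling JavaScript through a plain-text GET endpoint.
    public class ProgressAwareUploadServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            final HttpSession session = req.getSession();
            ServletFileUpload upload = new ServletFileUpload();
            // Store bytes-read in the session on every buffer read.
            upload.setProgressListener(new ProgressListener() {
                public void update(long bytesRead, long contentLength, int item) {
                    session.setAttribute("uploadProgress",
                            new long[] { bytesRead, contentLength });
                }
            });
            // ... parse the request with upload.getItemIterator(req) as usual ...
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // The polling GET: report "bytesRead/contentLength" as plain text.
            long[] p = (long[]) req.getSession().getAttribute("uploadProgress");
            resp.setContentType("text/plain");
            resp.getWriter().print(p == null ? "0/0" : p[0] + "/" + p[1]);
        }
    }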
You can think about the approach used in popular webmail clients (when attaching files to a message):
The files are uploaded independently of (i.e. before) the form submit, using JavaScript. Each file is stored in a temporary directory, and after the upload succeeds the user can proceed with the action.
The upload status is displayed to the user, and if an upload fails, the main action (form fill/submit) is not interrupted.
I don't know if the question title fits, but here is my problem:
I have a regular web hosting service at HostMonster, with a website built in PHP.
I have a PHP script running as a cron job that monitors an XML file for changes, and every time a new entry appears in that XML file, the script stores it in a database.
On the other hand, there is a Java desktop client that needs to be notified ASAP when a new entry is created. For this, the client connects to a second PHP file every second, and this second file says whether there have been changes or not.
The thing is, every 260 connections or so my IP gets banned from the server :( and the client crashes - and the client will be used by several users.
I contacted support about how to handle this, but they told me to use a single connection. I tried reusing the URLConnection, but after the first request it just gives null; then I tried sockets, with no luck. I know there are libraries that manage this, but I don't know what they are called. Can someone give me advice?
Thank you, guys.
Use long polling: hold the connection open until a response arrives. That way you only need to ask for an update once.
PHP may not be the best tool for this job, though.
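A rough sketch of what the long-poll loop could look like on the Java client side; the URL and the one-line response format are assumptions for illustration:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    // Sketch: one request is held open by the server until there is news
    // (or the read times out), then is immediately reissued.
    public class LongPollClient {
        public static void main(String[] args) throws Exception {
            while (true) {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://example.com/poll.php").openConnection();
                conn.setReadTimeout(60000); // wait up to a minute for the server
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line = in.readLine(); // server answers only on a change
                    if (line != null && !line.isEmpty()) {
                        System.out.println("New entry: " + line);
                    }
                } catch (SocketTimeoutException e) {
                    // No news within the window; just reconnect.
                } finally {
                    conn.disconnect();
                }
            }
        }
    }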
I am trying to create a JSP page that shows the status of a group of local servers.
Currently I have a scheduler class that constantly polls the servers at a 30-second interval, with a 5-second timeout waiting for each server's reply, and provides the JSP page with the information. However, I find this approach inaccurate, as it takes some time before the scheduler class's information is updated. Do you have a better way to check the status of several servers within a local network?
-- Update --
Thanks to @Romain Hippeau and @dbyrne for their answers.
Currently I am trying to move more of the code to the server end, doing a constant asynchronous check on the status of the group of servers so as to make the page more responsive.
However, I forgot to add that the client has the ability to control the server status. This causes a problem when, for example, the client changes a server's status and then refreshes the page: the page retrieves the information from the not-yet-updated scheduler class and shows the server's previous status instead.
You can use Tomcat Comet; here is an article: http://www.ibm.com/developerworks/web/library/wa-cometjava/index.html
This technology (which is part of the Servlet 3.0 spec) allows you to push notifications to the clients. There are issues with running it behind a firewall, but if you are within an intranet this should not be too big an issue.
Make sure you poll the servers asynchronously: you don't want to wait for a response from one server before polling the next. This will dramatically cut down the time it takes to poll all the servers. It was unclear from your question whether you are already doing this; a sketch follows the link below.
Asynchronous http requests in Java
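As an illustration, a sketch of polling a group of servers in parallel, with a 5-second cap on the whole sweep; checkStatus() is a placeholder for your real status probe:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Sketch: probe every server in parallel so the whole sweep takes about
    // one 5-second timeout instead of one timeout per server.
    public class StatusPoller {
        private final ExecutorService pool = Executors.newCachedThreadPool();

        public List<String> pollAll(List<String> hosts) throws InterruptedException {
            List<Callable<String>> probes = new ArrayList<>();
            for (String host : hosts) {
                probes.add(() -> checkStatus(host));
            }
            // invokeAll waits at most 5 seconds, then cancels the stragglers.
            List<Future<String>> results = pool.invokeAll(probes, 5, TimeUnit.SECONDS);
            List<String> statuses = new ArrayList<>();
            for (Future<String> f : results) {
                try {
                    statuses.add(f.isCancelled() ? "TIMEOUT" : f.get());
                } catch (ExecutionException e) {
                    statuses.add("DOWN"); // the probe itself failed
                }
            }
            return statuses;
        }

        private String checkStatus(String host) {
            // ... open a connection to the host and read its status ...
            return "UP";
        }
    }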
Let's say I click a button on a web page to initiate a submit request. Then I suddenly realize that some data I have provided is wrong and that if it gets submitted, then I will face unwanted consequences (something like a shopping request where I may be forced to pay up for this incorrect request).
So I frantically click the Stop button not just once but many times (just in case).
What happens in such a scenario? Does the browser just cancel the request without informing the server? If in case it does inform the server, does the server just kill the process or does it also do some rolling back of all actions done as part of this request?
I code in Java. Does Java have any special feature that we can use to detect Stop requests and roll back whatever we did as part of the transaction?
Loading a web page in a browser is usually a four-step process (not considering redirects):
Browser sends the HTTP request (once the server is reachable)
Server executes code (for dynamic pages)
Server sends the HTTP response (usually HTML)
Browser renders the HTML and requests other files (images, CSS, ...)
The browser's reaction to "Stop" depends on which step your request is at when you hit it:
If your server is slow or overloaded and you hit "Stop" during step 1, nothing happens: the browser doesn't send the request.
Most of the time, however, "Stop" will be hit during steps 2, 3, or 4. At those points your code has already executed, and the browser simply stops waiting for the response (2), receiving the response (3), or rendering the response (4).
The HTTP call itself is always a two-step action (request/response), and there is no automatic way for the client to roll back the execution.
Since this question may attract attention from people not using Java, I thought I would mention PHP's behavior in regard to this question, since it is very surprising.
PHP internally maintains a status for the connection to the client. The possible values are NORMAL, ABORTED, and TIMEOUT. While the connection status is NORMAL, life is good and the script continues to execute as expected.
If the user clicks the Stop button in their browser, the connection is typically closed by the client and the status changes to ABORTED. A change of status to ABORTED immediately ends execution of the running script. As an aside, the same thing happens when the status changes to TIMEOUT (PHP's limit on the allowed run time of scripts is exceeded).
This behavior may be useful in certain circumstances, but there are others where it could be problematic. Aborting at any time during a proper GET request should be safe; however, aborting in the middle of a request that makes a change on the server could leave that change only partially completed.
Check out the PHP manual's entry on Connection Handling to see how to avoid complications resulting from this behavior:
http://www.php.net/manual/en/features.connection-handling.php
Generally speaking, the server will not know that you've hit stop, and the server-side process will complete. At the point that the server tries to send the response data back to the client, you may see an error because the connection was closed, but equally you may not. What you won't get is the server thread being suddenly interrupted.
You can use various elaborate mechanisms to mitigate this, like having the client send frequent Ajax calls to the server that say "still waiting", and having the server perform its processing in a new thread that checks for those calls, but that doesn't solve the problem completely.
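To illustrate the point above, a hedged sketch (hypothetical names): the servlet runs to completion regardless of Stop, and the closed connection surfaces, at best, as an IOException when the response is written:

    import java.io.IOException;

    import javax.servlet.http.*;

    // Sketch: server-side work is unaffected by the Stop button; the
    // disconnect only shows up (if at all) when the response is flushed.
    public class SlowServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String result = doExpensiveWork(); // runs to completion regardless
            try {
                resp.getWriter().write(result);
                resp.flushBuffer(); // a closed connection may fail here, or not at all
            } catch (IOException clientGone) {
                // The only hint that the client aborted; any compensating
                // rollback would have to happen here, and it is not
                // guaranteed that this branch ever fires.
            }
        }

        private String doExpensiveWork() { return "done"; }
    }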
The client will immediately stop transmitting data and close its connection. Nine times out of ten the request will already have gotten through (perhaps because OS buffers were "flushed" to the server). No extra information is sent to the server informing it that you stopped your request.
In important scenarios your submit should have two stages: verify and submit. Only if the final submit goes through do you commit any transactions. I can't think of any other way to avoid that situation, other than allowing your user to undo his actions after a commit; in the order example, after the order is placed, allow your customers to change their mind and cancel the order if it has not been shipped yet. Of course, that is extra code you need to write to support it. A sketch of the two-stage pattern is below.
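A rough sketch of that two-stage pattern, with hypothetical names; readOrder() and commitOrder() stand in for your real form binding and persistence code:

    import java.io.IOException;
    import java.util.UUID;

    import javax.servlet.http.*;

    // Sketch: the first POST only parks the order as "pending" and shows a
    // review page; nothing is committed until the user confirms with the
    // matching one-time token.
    public class OrderServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            HttpSession session = req.getSession();
            String token = req.getParameter("confirmToken");
            if (token == null) {
                // Stage 1: verify. Park the order and issue a one-time token.
                String fresh = UUID.randomUUID().toString();
                session.setAttribute("pendingOrder", readOrder(req));
                session.setAttribute("orderToken", fresh);
                resp.sendRedirect("review.jsp?token=" + fresh);
            } else if (token.equals(session.getAttribute("orderToken"))) {
                // Stage 2: submit. Only now is the transaction committed.
                commitOrder(session.getAttribute("pendingOrder"));
                session.removeAttribute("pendingOrder");
                session.removeAttribute("orderToken");
                resp.sendRedirect("done.jsp");
            } else {
                // Stale or duplicate submit: ignore it rather than double-commit.
                resp.sendRedirect("review.jsp");
            }
        }

        private Object readOrder(HttpServletRequest req) { /* bind form fields */ return null; }
        private void commitOrder(Object order) { /* commit the transaction */ }
    }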