I am developing a web application that allows users to run AI algorithms on a server remotely to decrease wait time for solutions. Here is an outline.
Browser -> jQuery AJAX -> Apache2 proxy -> tomcat7 -> RESTful java -> Runtime.getRuntime().exec() -> command-line C algorithm
The RESTful service returns the desired information in the AJAX response, and some handling happens in the browser.
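For context, the exec step in this pipeline might look roughly like the sketch below, using ProcessBuilder (the modern equivalent of Runtime.getRuntime().exec()). Names are illustrative; note that the process output is drained as it is produced, which also matters for the live status updates mentioned below.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Illustrative sketch of running the command-line C algorithm from the
// RESTful layer and streaming its stdout back as it is produced.
public class AlgorithmRunner {

    public static int run(String[] command, Appendable output)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true); // merge stderr so the pipe cannot fill up
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                output.append(line).append('\n'); // feed the live status update
            }
        }
        return p.waitFor(); // exit code of the C program
    }
}
```

Draining stdout continuously (rather than waiting for the process to exit) is what makes constant status output possible while the algorithm runs.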
This works fine for most of the algorithms I have tested. Some of the algorithms, however, will time out after running for a very long time. I have a live status update implemented, so I can see the algorithm is still running and constantly generating output in the browser, but after about 5 minutes I get a 500 Internal Server Error.
Killing the algorithm process from the command line also results in a 500 Internal Server Error.
Running the algorithm directly from the command-line results in proper execution.
Sending an AJAX call to a RESTful method that sleeps for an hour results in a 503 (Service Temporarily Unavailable) error.
What else could be the cause of this? I've been troubleshooting for a hot minute and am all out of ideas at the moment.
Thanks for your help!
--EDIT--
I have developed a workaround that effectively solves the problem; however, I am still interested to know what anyone else thinks. Eliminating the long-lived AJAX call and replacing it with a repeating call that checks the algorithm's status through a separate status file works with long-running processes.
But why would I previously have had a problem with an AJAX call that was explicitly set never to time out?
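For illustration, the server-side half of such a status-file workaround might look like the sketch below. The file location and the "DONE" line convention are assumptions, not from the original setup; the browser would then issue short repeating AJAX calls to an endpoint backed by latestStatus().

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical status reader: the long-running algorithm appends progress
// lines to a status file, and a lightweight endpoint returns the latest line
// to the polling browser instead of holding one long-lived AJAX request open.
public class AlgorithmStatus {

    private final Path statusFile;

    public AlgorithmStatus(Path statusFile) {
        this.statusFile = statusFile;
    }

    // Returns the last non-empty line of the status file, or "PENDING" if
    // the algorithm has not written anything yet.
    public String latestStatus() throws IOException {
        String last = "PENDING";
        if (Files.exists(statusFile)) {
            for (String line : Files.readAllLines(statusFile)) {
                if (!line.isEmpty()) {
                    last = line;
                }
            }
        }
        return last;
    }

    // Assumed convention: the algorithm writes a final line starting with
    // "DONE" when it finishes.
    public boolean isFinished(String status) {
        return status.startsWith("DONE");
    }
}
```

Each poll is a fresh, short-lived request, so no intermediary (proxy, container, or browser) ever sees a connection old enough to kill.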
Related
I am experiencing a weird problem in a GWT application. I have multiple RPC calls which work correctly; however, if one of them takes longer than 30 seconds on the server side (for example, because a database or web service call took a long time), the server-side code gets repeated (literally: the code gets executed again).
I do not experience this in my development environment; however, when I deploy the application (the container is Jetty 6.1.24 on Ubuntu 12.04 LTS) and connect to it from a browser (regardless of the type), the problem appears.
Since I do not think this is designed GWT behaviour (but I might be wrong, of course), I am basically looking for ways to debug this and find out the reason for it. Thanks in advance!
Some more information would be great to understand what is going on, but I would start the investigation by first narrowing down whether the erroneous GWT-RPC call is triggered on the client or server.
To see if the extra GWT-RPC request originates from the browser, in Google Chrome go to View -> Developer -> Developer Tools, then click on the Network Tab.
As you reproduce your steps, the Network Tab will show you every request sent to the server.
If you see the erroneous GWT-RPC request logged in this Network Tab view, then the request is fired off from the GWT-compiled JavaScript in the application. With SuperDevMode, you can then set debug breakpoints in the browser and see what is triggering the request.
If the erroneous GWT-RPC is not shown in the Network Tab View, then the server-side method is somehow triggered by your server code/configuration. Set some debug breakpoints on your server code, and drill down the call stack to see what is calling the method.
Hope that helps to get started on the investigation.
I have a Mule app that takes in HTTP requests (an HTTP inbound endpoint). When I'm starting my server, I need to make sure the Mule app is ready to take in requests before I start another program, let's call it Program B, which is the client sending the requests to the Mule app.
While Mule starts at about the same time as Program B in Ubuntu, Program B is up and kicking much faster than the Mule app is. Program B only gets "Connection Refused" (errno 111) until the Mule app is ready, which, while not a critical issue (thanks to retries), is annoying to see at every startup. Therefore, I need to let Program B idle for a given amount of time until the app is ready to take in requests.
So far I can think of two ways to do this. The first is to use a hard-coded integer in my shell script (Program B), for instance:
sleep 180
with the hope it is long enough for the Mule app to be ready. It does work quite reliably because Mule and the app are identical every time the server is started, and so they tend to take the same amount of time given the same hardware/OS.
The second solution I'm thinking of is to watch Mule's output, or the new lines appended to its log file, and start the program when the Mule app reports that it is ready. When the app is ready, you usually have a line like this in the main mule.log as well as on stdout:
+ Started app 'myapp' +
I could then sleep for a few seconds to be sure and then start to make requests.
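The log-watching idea can be sketched as follows; the log path, app name, and poll interval are assumptions. Re-scanning the file from the start on every poll is wasteful but simple, which is fine for a one-shot startup gate.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: block Program B's startup until mule.log contains the
// "Started app" marker line.
public class MuleLogWatcher {

    // True when a log line is the startup marker for the given app.
    static boolean isStartedLine(String line, String appName) {
        return line.contains("Started app '" + appName + "'");
    }

    // Re-scan the log every pollMillis until the marker appears.
    public static void waitForApp(Path log, String appName, long pollMillis)
            throws IOException, InterruptedException {
        while (true) {
            if (Files.exists(log)) {
                try (BufferedReader r = Files.newBufferedReader(log)) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        if (isStartedLine(line, appName)) {
                            return;
                        }
                    }
                }
            }
            Thread.sleep(pollMillis);
        }
    }
}
```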
However, I'm wondering if there is not a more refined way to do this. For instance, Program B could be the one expecting to be notified by Mule when it is ready. Or there may be a way to query Mule in a cleaner fashion to tell whether an app is ready or not.
Thanks for your suggestions!
Mule ESB has a JMX interface which provides the endpoint status.
<management:jmx-server>
<management:connector-server url="service:jmx:rmi:///jndi/rmi://localhost:1099/server" rebind="false" />
</management:jmx-server>
JMX MBean
Evaluate the attribute 'Connected' (expecting true) with a JMX client before starting the other process.
Mule.$YOUR_SERVICE_NAME > Endpoint > $FLOW_NAME > $CONNECTOR_NAME
+ Attribute: 'Connected' (true/false)
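A JMX client-side check along these lines might look like the following sketch. The service URL matches the connector-server configuration above; the MBean name is a placeholder to be read off the actual MBean tree.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch of a JMX client that reads the 'Connected' attribute of an
// endpoint MBean before letting the dependent process start.
public class MuleEndpointCheck {

    // True only when the attribute value is exactly Boolean.TRUE.
    static boolean isConnectedValue(Object attributeValue) {
        return Boolean.TRUE.equals(attributeValue);
    }

    // Connect to the JMX server and read the 'Connected' attribute.
    public static boolean isConnected(String serviceUrl, String mbeanName)
            throws Exception {
        JMXConnector connector =
                JMXConnectorFactory.connect(new JMXServiceURL(serviceUrl));
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            Object value = mbsc.getAttribute(new ObjectName(mbeanName), "Connected");
            return isConnectedValue(value);
        } finally {
            connector.close();
        }
    }
}
```

Program B's launcher would then poll isConnected("service:jmx:rmi:///jndi/rmi://localhost:1099/server", name) until it returns true, where name is the ObjectName of your endpoint MBean as shown in the tree above.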
As a solution, I am sending a test request (a simple GET as opposed to a POST) to Mule every 10 seconds; if I get a connection-refused error, the loop continues. When the request goes through, Program B can start safely. In the main flow, I use a choice router to separate GET requests from POST requests, so the POST requests do what they have always done, while the GET request is only used to check whether the app is up. There may be a simpler way to get this information from Mule, but it seems far better than waiting a fixed number of seconds or grepping the log.
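The 10-second GET probe described above might be sketched like this; the endpoint URL is a placeholder.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the readiness probe: a connection-refused (or timed-out)
// request means Mule is not up yet; any HTTP response means it is.
public class ReadinessProbe {

    static boolean isUp(String endpoint) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("GET");
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            int code = conn.getResponseCode(); // throws IOException if refused
            conn.disconnect();
            return code < 500;
        } catch (IOException refusedOrTimedOut) {
            return false;
        }
    }

    // Block until the app answers, probing every 10 seconds.
    public static void waitUntilUp(String endpoint) throws InterruptedException {
        while (!isUp(endpoint)) {
            Thread.sleep(10_000);
        }
    }
}
```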
I have a use case where processing can take up to 2 hours. Once the user submits the request from the browser, my understanding was that the browser would keep waiting for the response to arrive. But I get an error page after some 15-20 minutes.
I understand that a web request should not be so time-consuming, but I am stuck with the existing architecture and design.
Can someone suggest a solution for this problem?
I am using IE 9 as the browser and Tomcat as the server.
What you could do for similar issues is create a separate thread on the server and return a response to the user saying that the job has been started, and then either
display the result of that job on a specific page (this seems like an acceptable solution; the user will probably not stay in front of the screen for such a long task)
or do some polling via AJAX to get the status of the job you just triggered.
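A minimal sketch of that fire-and-poll pattern on the server side might look like this; class and method names are illustrative, not from the original application.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Submit the multi-hour task to a worker thread, hand the browser a job id
// immediately, and let it poll for status instead of holding the request open.
public class JobRunner {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called from the request handler: returns at once with a job id.
    public String submit(Callable<String> longTask) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, pool.submit(longTask));
        return id;
    }

    // Called from the polling endpoint with the job id.
    public String status(String id) throws Exception {
        Future<String> f = jobs.get(id);
        if (f == null) {
            return "UNKNOWN";
        }
        return f.isDone() ? "DONE: " + f.get() : "RUNNING";
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

The browser's repeated AJAX calls each complete in milliseconds, so no request ever approaches the 15-minute timeout.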
Most probably the server timeout is about 15 minutes, which is why you get the error after 15 minutes. One solution is to increase the server timeout, but increasing it to 2 hours would be too long. Another option is to poll the server from the browser to find out the status of the task; you can use an AJAX call for that purpose.
I am calling a web service from my Android application. I have many different calls to this service all through the application and every one of them is returning data in less than a second, except for one. One of my calls can take up to a minute to return the data even though the actual web service call is near instantaneous. The problem occurs with the line:
transport.call(SOAP_ACTION, soapEnvelope);
That is called and the value is returned from the web service almost instantaneously. But it can take up to a minute to reach the next line:
SoapObject result = (SoapObject) soapEnvelope.bodyIn;
What is happening between the web service returning data and the app hitting the next line (above)? Is there a way to reduce this delay? Is there anything simple to check?
Is there a way to reduce this delay? Is there anything simple to check?
The only way to know is to measure the difference in time in various areas. For a SOAP web service call these are the times to measure.
Client-side time
Client application code -> Request handlers -> Request serialization -> Request dispatch -> HTTP transport -> Server side
Server-side time
Receive HTTP request -> De-serialization -> Application code -> Response handlers -> Serialization -> Dispatch -> HTTP transport
The blockage is usually in the application code, handlers and the network. Measure those and you can find where the time is spent.
To measure the CPU time taken by your application code and handlers, use a profiler. I'd recommend JProfiler.
To measure network time, ping the target server and also use a web debugging proxy like Charles. It can tell you the time spent by the request on the network.
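To bracket those client-side phases in code, a small timing helper can log how long each step takes. This is an illustrative helper, not part of ksoap2.

```java
import java.util.concurrent.TimeUnit;

// Logs elapsed milliseconds between successive marks, so each phase of the
// call can be timed independently.
public class PhaseTimer {

    private long last = System.nanoTime();

    // Returns elapsed ms since construction or the previous mark() call.
    public long mark(String phase) {
        long now = System.nanoTime();
        long ms = TimeUnit.NANOSECONDS.toMillis(now - last);
        System.out.println(phase + ": " + ms + " ms");
        last = now;
        return ms;
    }
}
```

In the question's code, one would call mark("transport.call") right after transport.call(SOAP_ACTION, soapEnvelope) and mark("bodyIn") after the bodyIn cast, to see which step actually consumes the minute.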
Turns out the delay occurred only while debugging the app. When running the app without the debugger attached, it returns near instantaneously.
I've got a mad problem with an application I test with Selenium RC.
At the end of the page, several calls are made by a JavaScript script to an analytics web service which literally takes minutes to respond.
Selenium waits for these calls to end before going to the new page, though their response is unrelated to the correct execution of the process.
Eventually, Selenium throws a TimeoutException (the timeout is set to 4 minutes).
I'm using Selenium RC 1.0.1 along with Firefox 3.5.16.
First, what I can't do:
- change the application (I have no control over it)
- change my Firefox version (several production machines are involved, and I need this version)
- use WebDriver/Selenium 2 (for the reason above)
I think that blocking the JavaScript calls would be the thing to do, but I can't figure out how to do that.
- I'm trying, with selenium.runScript and selenium.getEval, to set the JavaScript variables to null, but it's too late by the time they're set
- I'm using Gecko's Object.watch method to see when the values are changed, but with no success
I would like to know if there is a way to filter content via Selenium before the DOM is created. I think it would be possible via a Firefox extension, but that would be the last thing I want to do.
Or perhaps it's possible to recognize all active XHRs in the page and abort them.
I'm open to a bunch of new ideas
Thanks for reading
Grooveek
Sorry to hear that changing the application isn't an option. When I ran into a similar situation (an external analytics service called through AJAX), I wrote a JavaScript mock for the service and had the version of my application that I run unit tests against use the mock. (In that case it wasn't the speed of page load we were worried about; it was junking up the analytics data with automated test runs.) That allowed me to avoid hitting the external site, yet still verify in my Selenium tests that I was making the right calls into the analytics site's JavaScript library at the appropriate times.
What I would suggest for your case is that you write a small HTTP proxy (you may find this question's answers useful, though if I were doing it I'd do it in Perl or Python, because those are pretty fast to write) that takes requests headed out to the external site and responds immediately with an empty document or whatever's appropriate in your situation (while handling all requests not aimed at the analytics site normally).
In other words, don't try to prevent the javascript from executing directly or by filtering the DOM, but just intercept the slow external requests and respond quickly. I suggest this because intercepting an entire request is significantly easier than filtering content.
Then, when you start the Selenium RC server, point it at your HTTP proxy as the upstream proxy. Browsers started by the Selenium server will use the RC server as their proxy, and it will then filter everything through your proxy.
With this approach, you basically get to pretend that the external site is whatever you want.
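Although the answer suggests Perl or Python, the stubbing half of such a proxy can be sketched in Java as well. The analytics host name is a placeholder, and the upstream-relay part of a real proxy is only indicated by a comment; this shows the short-circuit path only.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal sketch: any request whose request line mentions the analytics host
// gets an immediate empty 200, so the page's slow XHRs return instantly.
public class AnalyticsStub {

    static final String ANALYTICS_HOST = "analytics.example.com"; // placeholder

    // Decide from the HTTP request line whether to stub the response.
    static boolean shouldStub(String requestLine) {
        return requestLine != null && requestLine.contains(ANALYTICS_HOST);
    }

    static byte[] emptyOkResponse() {
        return "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
                .getBytes(StandardCharsets.US_ASCII);
    }

    // Accept connections forever; stub analytics requests, reject the rest.
    public static void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket client = server.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(client.getInputStream()));
                    String requestLine = in.readLine();
                    OutputStream out = client.getOutputStream();
                    if (shouldStub(requestLine)) {
                        out.write(emptyOkResponse());
                    } else {
                        // A real proxy would connect to the origin here and
                        // relay the request and response bytes unchanged.
                        out.write("HTTP/1.1 502 Bad Gateway\r\nContent-Length: 0\r\n\r\n"
                                .getBytes(StandardCharsets.US_ASCII));
                    }
                    out.flush();
                }
            }
        }
    }
}
```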