I am experiencing a weird problem in a GWT application. I have multiple RPC calls which work correctly, however if one of them takes longer than 30 seconds on the server side (for example, because a database or web service call took a long time), the server-side code gets repeated (literally: the code gets executed again).
I do not experience this in my development environment, but when I deploy the application (the container is Jetty 6.1.24 on Ubuntu 12.04 LTS) and connect to it from a browser (regardless of the type), the problem appears.
Since I do not think this is intended GWT behaviour (but I might be wrong, of course), I am basically looking for ways to debug this and find out the reason for it. Thanks in advance!
Some more information would be great to understand what is going on, but I would start the investigation by first narrowing down whether the erroneous GWT-RPC call is triggered on the client or server.
To see whether the extra GWT-RPC request originates from the browser, open Google Chrome and go to View -> Developer -> Developer Tools, then click on the Network tab.
As you reproduce your steps, the Network tab will show you every request sent to the server.
If you see the erroneous GWT-RPC request logged in the Network tab, then the request is fired off from the GWT-compiled JavaScript in the application. With SuperDevMode, you can then set debug breakpoints in the browser and see what is triggering the request.
If the erroneous GWT-RPC request is not shown in the Network tab, then the server-side method is somehow triggered by your server code or configuration. Set some debug breakpoints in your server code, and drill down the call stack to see what is calling the method.
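If breakpoints on the deployed Jetty instance are impractical, a small servlet filter can also settle the question. This is only a sketch (the class name and log format are my own invention): mapped in front of the GWT-RPC servlet, it logs every incoming request, so a duplicate execution preceded by two logged requests points at something on the wire, while a duplicate execution with a single logged request points at the server code itself.

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;

    // Sketch: log each incoming request with a timestamp and thread name
    // so duplicate deliveries of the same RPC are visible in the log.
    public class RpcLoggingFilter implements Filter {

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            long start = System.currentTimeMillis();
            System.out.println(start + " [" + Thread.currentThread().getName()
                    + "] -> " + request.getRequestURI());
            try {
                chain.doFilter(req, res);
            } finally {
                System.out.println(System.currentTimeMillis() + " ["
                        + Thread.currentThread().getName() + "] <- "
                        + request.getRequestURI() + " ("
                        + (System.currentTimeMillis() - start) + " ms)");
            }
        }

        public void destroy() { }
    }

Mapping it to the RPC URL pattern in web.xml is enough; no application code changes are needed. If two "->" lines appear roughly 30 seconds apart for the same URI, an intermediary (a reverse proxy or the browser itself) is re-sending the request after a timeout, which could also explain why the development environment is unaffected.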
Hope that helps to get started on the investigation.
Related
I have a servlet/JSP web app. One of the requests in the login action takes up to 120 seconds to complete in the Firefox and IE browsers. However, this same request completes normally in Chrome (below 1 second). On debugging the code, I can conclude that my web app's filter returns the response quickly, but the browser shows it took 120 seconds to receive it. As the Firefox developer tools show (screenshot below), the waiting time is about 360 ms and the receiving time approximately 120 s. The same behavior can be seen in IE. Any clue what might be causing this?
EDIT 1: This issue is being observed only for requests that return a 302 response code.
EDIT 2: I tried using an intercepting tool to check the requests; for this I had to route the traffic through a proxy at 127.0.0.1 (localhost). One observation is that, while doing this, the application is significantly faster. A possible explanation I observed is that the proxy application returns the 302 responses to the browser with a status code of 200. So now the question is: why are 302 responses slow in the Firefox and IE browsers?
I was finally able to resolve this issue from within my app itself. In one place in the code, a flush operation was being called on the output stream of the response:
super.getOutputStream().flush();
This was only being done when the response was not a 302, which seemed to make sense because 302s don't carry any data in the response body. Chrome is smart enough to see that and doesn't wait for any data. However, Firefox and IE keep waiting for the data, and so the connection for that particular call is not closed. This causes the delay that can be seen in the image I attached in the question (the browser tries to download content which never arrives).
I have now changed the code to perform the flush operation for 302 responses as well. This has resolved the entire problem.
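For illustration, the shape of the fix is roughly this (the surrounding names are assumptions, and getStatus() needs a Servlet 3.0 container; on older APIs the wrapper has to record the status itself when setStatus() or sendRedirect() is called):

    // Inside a hypothetical HttpServletResponseWrapper subclass.
    // Before: the flush was skipped for redirects, leaving Firefox/IE
    // waiting on an open connection for a body that never arrives.
    if (getStatus() != HttpServletResponse.SC_MOVED_TEMPORARILY) {
        super.getOutputStream().flush();
    }

    // After: flush unconditionally, so every browser sees the response
    // as complete and the connection can be closed promptly.
    super.getOutputStream().flush();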
Each browser has its own settings for handling connections, and there are limits.
An aggregation of answers already exists:
how-to-solve-chromes-6-connection-limit
EDIT: Since the question was edited, my answer no longer matches the topic.
If a request from a web app acts differently in different browsers, the issue should be on the client side (in the scope of a single request-response interaction).
Application - Struts 1.2, Tomcat 6
An Action class calls a service which, through a DAO, executes a query and returns the results.
The query is not long-running and returns results in seconds when run directly against the database (through an SQL client such as SQL Developer). But when the user goes through the application front end and the same query is run in the background by the application, the system hangs and the response either times out or takes a very long time.
The issue is specific to one particular screen, which implies that connectivity between the app server and the database server is fine.
Is there a way to enable debug logging for Tomcat/Struts without any code change, to identify which of the two scenarios below (or any other scenario) applies?
- The query is taking the time.
- The response is not being sent back to the browser.
P.S. - Debugging or code changes to add logging are not an immediate option.
Something to look at is a Java profiler. The one that I've used and liked is YourKit.
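Since code changes are ruled out, Tomcat's standard access log valve is also worth enabling first: it records, with no application changes, how long each request takes to process, which separates a slow query from a response that never leaves the server. A sketch for conf/server.xml (the exact pattern is just one reasonable choice; %D logs the processing time in milliseconds):

    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="access_log." suffix=".txt"
           pattern='%h %t "%r" %s %b %D' />

If the access log shows a large %D for the problem screen, the time is being spent inside the application (likely the query or the connection pool); if %D is small but the browser still waits, the response is stalling on its way back.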
I have an Android application which sends information to a server; in particular, it sends some stats to track the normal use that users make of the app. For example, if they click a button to see a specific page, a stat is sent to the server specifying a series of details like the model of the phone, the page requested, the version of Android, etc. This information is later visible using Splunk (http://www.splunk.com/). Now my problem is: for each event sent by my app, I should be able to say whether the event was correctly sent to the server or not. My idea is to develop a proxy that can intercept the requests made by my app to the server and listen to the response, like Charles Proxy does. The problem is I don't know where to start. Can anyone suggest how I could accomplish this task? Thanks
Well, I think you could add code to your application so that, when you send your data, you get the HTTP reply and check whether the code is 200 (which stands for an OK response from the server).
If so, you know that your request went fine; if not (a different reply code), you treat it as an error.
You will also have to check that your server responds properly, with the right codes for an error and for an OK message.
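A minimal sketch of that check with HttpURLConnection (the endpoint URL is a placeholder, and on Android this must run off the main thread):

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public final class StatsSender {

        // Returns true only if the server acknowledged the event with 200 OK.
        public static boolean sendEvent(String payload) {
            HttpURLConnection conn = null;
            try {
                URL url = new URL("https://stats.example.com/event"); // placeholder
                conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.getOutputStream().write(payload.getBytes("UTF-8"));

                // Any code other than 200 is treated as a failed delivery.
                return conn.getResponseCode() == HttpURLConnection.HTTP_OK;
            } catch (IOException e) {
                return false; // network error: the event did not arrive
            } finally {
                if (conn != null) {
                    conn.disconnect();
                }
            }
        }
    }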
See also HTTP STATUS CODES
A good idea would be to use the Proxy Design Pattern for your extra code.
I have a Java crawler application which is supposed to connect to some HTTP servers, download the HTML content of their pages, and then move on to other HTTP servers. For this task, I've used the Apache HttpClient library.
For the first few hours of the run, things seem to work rather smoothly (there are some connection-related exceptions thrown from time to time, but that's to be expected).
Yet after a while, it seems like I keep getting SocketTimeoutException on every request I send out. The exception does not occur in the HttpClient class's execute method, but rather when I try to get the content of the entity (which I retrieve from the HttpResponse object), or when I try to write that content to a file.
Then, if I stop the application and start it again, things seem to go back to working fine, even though it picks up from where it stopped, meaning it's interacting with the same servers that it received the SocketTimeoutException from before.
I tried looking for all kinds of possible clean-ups that I might be missing and that might be essential when using this library, but I couldn't find anything.
Any help would be greatly appreciated.
Thanks.
This sounds like the kind of thing that could be caused by connection pooling where you're not closing things when you're done with them, with the timeout occurring while the client library waits to retrieve a pooled connection. Are you sure you're closing everything properly (in finally blocks)?
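For reference, a minimal cleanup pattern with Apache HttpClient, assuming version 4.3 or later (older 4.x versions manage the pool through slightly different method names). Fully consuming the entity and closing the response is what returns the underlying connection to the pool; skipping either step leaks connections until every new request blocks waiting for one, which matches the symptom of timeouts that disappear on restart.

    import java.io.IOException;
    import org.apache.http.HttpEntity;
    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;

    public class PageFetcher {

        private final CloseableHttpClient client = HttpClients.createDefault();

        public String fetch(String url) throws IOException {
            CloseableHttpResponse response = client.execute(new HttpGet(url));
            try {
                HttpEntity entity = response.getEntity();
                // EntityUtils.toString() reads the stream to its end,
                // which is what frees the pooled connection for reuse.
                return entity != null ? EntityUtils.toString(entity) : null;
            } finally {
                response.close(); // always release, even when an exception is thrown
            }
        }
    }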
If you run Wireshark to monitor your traffic, what network traffic occurs while it's "broken"?
Make sure that you're not running a lot of HTTP requests at the same time. For example, send 5 HTTP requests and wait for the first response before sending another, and so on. It looks like your HTTP requests are opening too many sockets.
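One simple way to enforce such a cap, sketched with a fixed-size thread pool (the limit of 5 and the names are arbitrary):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThrottledCrawler {

        // At most 5 downloads can ever be in flight at the same time.
        private final ExecutorService pool = Executors.newFixedThreadPool(5);

        public void crawl(Iterable<String> urls) {
            for (final String url : urls) {
                pool.submit(new Runnable() {
                    public void run() {
                        // fetch and process the page here; the pool size,
                        // not the URL count, bounds the number of open sockets
                    }
                });
            }
            pool.shutdown(); // stop accepting work once all URLs are queued
        }
    }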
I've got a mad problem with an application I test with Selenium RC.
At the end of the page, several calls are made by a JavaScript script to an analytics web service which takes literally minutes to respond.
Selenium waits for these calls to end before going to the new page, even though their response is unrelated to the correct execution of the process.
Eventually, Selenium throws a TimeoutException (the timeout is set to 4 minutes).
I'm using selenium-RC 1.0.1 along with Firefox 3.5.16
First, what I can't do:
- change the application (I have no control over it)
- change my Firefox version (several production machines are involved, and I need this version)
- use WebDriver/Selenium 2 (for the reason above)
I think that blocking the JavaScript calls would be the thing to do, but I can't figure out how to do that.
- I tried, with selenium.runScript and selenium.getEval, to set the JavaScript variables to null, but by the time I can do so they have already been set
- I tried using Gecko's Object.watch method to see when the values are changed, but with no success
I would like to know if there is a way to filter content via Selenium before the DOM is created. I think it would be possible via a Firefox extension, but that would be the last thing I want to do.
Or perhaps it's possible to recognize all active XHRs in the page and abort them.
I'm open to a bunch of new ideas
Thanks for reading
Grooveek
Sorry to hear that changing the application isn't an option. When I ran into a similar situation (an external analytics service called through Ajax), I wrote a JavaScript mock for the service and had the version of my application that I run unit tests against use the mock. (In that case it wasn't the speed of page load we were worried about; it was junking up the analytics data with automated test runs.) That allowed me to avoid hitting the external site, yet still verify in my Selenium tests that I was making the right calls into the analytics site's JavaScript library at the appropriate times.
What I would suggest for your case is that you write a small HTTP proxy (you may find this question's answers useful, though if I were doing it I'd use Perl or Python, because those are pretty fast to write) that takes requests headed out to the external site and responds immediately with an empty document, or whatever is appropriate in your situation, while handling all requests not aimed at the analytics site normally.
In other words, don't try to prevent the javascript from executing directly or by filtering the DOM, but just intercept the slow external requests and respond quickly. I suggest this because intercepting an entire request is significantly easier than filtering content.
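To make the idea concrete, here is a deliberately tiny Java sketch of the short-circuit half (plain HTTP only, one request per connection; the analytics host name and the port are placeholders, and the forwarding of normal requests is left out since a real proxy would stream those upstream):

    import java.io.*;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ShortCircuitProxy {

        private static final String BLOCKED_HOST = "analytics.example.com"; // placeholder

        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(8888);
            while (true) {
                final Socket client = server.accept();
                new Thread(new Runnable() {
                    public void run() { handle(client); }
                }).start();
            }
        }

        private static void handle(Socket client) {
            try {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                // e.g. "GET http://analytics.example.com/track HTTP/1.1"
                String requestLine = in.readLine();
                if (requestLine != null && requestLine.contains(BLOCKED_HOST)) {
                    // Answer the analytics call instantly with an empty document.
                    OutputStream out = client.getOutputStream();
                    out.write(("HTTP/1.1 200 OK\r\n"
                            + "Content-Length: 0\r\n"
                            + "Connection: close\r\n\r\n").getBytes("US-ASCII"));
                    out.flush();
                }
                // All other requests would be forwarded to their real
                // destination here (omitted for brevity).
            } catch (IOException ignored) {
                // best effort: drop broken connections
            } finally {
                try { client.close(); } catch (IOException ignored) { }
            }
        }
    }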
Then, when you start the Selenium RC server, point it at your HTTP proxy as the upstream proxy. Browsers started by the Selenium server will use the RC server as their proxy, and it will in turn route everything through yours.
With this approach, you basically get to pretend that the external site is whatever you want.