In my project, I have some JavaScript responsible for tracking user actions in order to optimize the page's layout. These calls are executed when the user clicks something, including the links leading to further pages.
I have the whole flow covered by automated tests written in Java, based on Selenium WebDriver. I'm using a BrowserMob proxy to capture the requests and verify that the correct data is being passed to the user-tracking service.
In certain situations, the requests hitting the service do not get recorded by the proxy. This happens because the browser navigates to the next page before receiving the response from the tracking service. The request actually reaches its destination, which I can verify from the state of the database. Because the browser does not wait for the responses, the proxy sometimes misses them, despite the default 5-second wait, which simply seems to be ignored in this case. This only happens once in a while, causing false negatives in my test reports.
I can't force the browser to actually wait for those requests, because I don't want the tracking to impede the user journey. What I'd like is to somehow configure the proxy to tell the difference between requests that were never sent and ones that were cancelled midway. That way I could attach this information to my reports.
Is this possible to achieve with the BrowserMob proxy, or would some other tool do a better job?
Try the PhantomJS WebDriver implementation: you don't need to start the Jetty proxy server, and you can capture all requests, even those that never got a response.
Maybe I'm overthinking this, but I'd like some advice. Customers can place an order inside my GWT application, and on a secondary computer I want to monitor those submissions inside the GWT application and flash an alarm every time an order is submitted, provided the user has OK'd this. I can't figure out the best way to do this. Orders are submitted to a MySQL database, if that makes any difference. Does anyone have a suggestion on what to do or try?
There are two options: 1) polling, or 2) pushing, which would allow your server (in the servlet handling the GWT request) to notify you after the order is successfully placed.
In 1) polling, the client (meaning the browser you are using to monitor the app) periodically calls the server to see if there is data waiting. It can be resource-intensive, since many calls are made for infrequently arriving data, and it can also be slower because of the delay between calls. If only your monitoring client is polling, though, it wouldn't be so resource-intensive.
In 2) pushing, the client makes a request and the request is held open until there is data, which is less resource-intensive and can be faster. Once data is returned, the client sends another request (this is long polling). Alternatively, streaming is an option, where the server never sends a complete response and just keeps sending data; this requires a client-/browser-specific implementation, though. If it's just you monitoring, you know the client and can set it up specifically for that.
See the demo project in GWT Event Service
Here is the documentation (user manual) for it.
Also see GWT Server Push FAQ
There are other ways of doing it besides GWT Event Service, of course. Just google "GWT server push" and you'll find Comet, DWR, etc., and, if you are using Google's App Engine, the Channel API.
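The long-polling variant of 2) can be sketched in plain Java with a blocking queue. This is only an illustration of the held-open request: `OrderNotifier`, the method names, and the timeout are all hypothetical, and in a real app the two methods would be called from the order-submission servlet and the monitoring client's servlet, respectively.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical notifier backing a long-poll servlet: the monitoring
// request's thread blocks in awaitOrder() until an order arrives or
// the timeout expires, then the response goes back to the client.
class OrderNotifier {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Called from the servlet that handles an order submission.
    public void orderPlaced(String orderId) {
        pending.offer(orderId);
    }

    // Called from the monitoring client's long-poll request.
    // Returns the order id, or null if nothing arrived before the timeout
    // (the client then simply issues the next long-poll request).
    public String awaitOrder(long timeoutMillis) throws InterruptedException {
        return pending.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

The timeout matters in practice: returning an empty response every so often keeps intermediate proxies from killing the idle connection.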
Here's the problem:
I'd like to expose a page within the site that is able to report some log lines.
The site is a Java Spring 3.0 web application.
Theoretically there are two ways to get the job done:
1- the server pushes the lines to be logged whenever they are ready.
2- the client does a polling for new lines.
I'd prefer the first way but I really don't know if it is feasible or not.
I imagine the scenario as follows:
the client REQUESTs the "consolle page"
the server RESPONSEs such page
END TRANSACTION
the server REQUESTs (or what?) the updates...
the client... ?
And finally, what technology best suits my requirements? I suppose JSPs are not enough; maybe some JavaScript?
I've implemented similar things in the past using timed polling with AJAX.
Your console page would run some JavaScript/jQuery that polls the server every so often via an AJAX request and, if it receives new data, appends it (or prepends, whichever you like) to your console box, or div, or whatever it is you're using.
Last I checked (which was quite a while back), this is how Facebook chat worked (though it's probably changed since then).
There are push implementations you could use (check out HTML5 WebSockets, which might help), but AJAX polling is probably the simplest solution for something like this.
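The server side of that polling loop can be sketched in Java; the names here are hypothetical, and the real endpoint would be a Spring controller or servlet serializing the result to JSON. The idea is that the page passes the index of the last line it has seen and gets back only the lines added since:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical buffer behind the polled endpoint. The console page
// would call the endpoint with ?since=<lastSeenIndex> and append
// whatever comes back.
class LogBuffer {
    private final List<String> lines = new ArrayList<>();

    // Called by whatever produces log lines on the server.
    public synchronized void append(String line) {
        lines.add(line);
    }

    // What the AJAX handler returns: only the lines the client
    // hasn't seen yet (empty list if nothing new).
    public synchronized List<String> linesSince(int lastSeenIndex) {
        if (lastSeenIndex >= lines.size()) {
            return new ArrayList<>();
        }
        return new ArrayList<>(lines.subList(lastSeenIndex, lines.size()));
    }
}
```

Keeping the "since" cursor on the client keeps the server stateless per client, which is what makes plain polling so simple.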
I've got a mad problem with an application I test with Selenium RC.
At the end of the page, several calls are made by a JavaScript snippet to an analytics web service which takes literally minutes to respond.
Selenium waits for these calls to end before going to the new page, though their responses are unrelated to the correct execution of the process.
Eventually, Selenium throws a TimeoutException (the timeout is set to 4 minutes).
I'm using selenium-RC 1.0.1 along with Firefox 3.5.16
First, what I can't do:
- change the application (I have no control over it)
- change my Firefox version (several production machines are involved, and I need this version)
- use WebDriver/Selenium 2 (for the reason above)
I think that blocking the JavaScript calls would be the thing to do, but I can't figure out how to do that.
- I'm trying, with selenium.runScript and selenium.getEval, to set the JavaScript variables to null, but by the time they're set it's too late
- I'm using Gecko's Object.watch method to see when the values are changed, but with no success
I would like to know if there is a way to filter content via Selenium before the DOM is created. I think it would be possible via a Firefox extension, but that would be the last thing I'd want to do.
Or perhaps it's possible to detect all active XHRs in the page and abort them.
I'm open to a bunch of new ideas
Thanks for reading
Grooveek
Sorry to hear that changing the application isn't an option. When I ran into a similar situation (an external analytics service called through AJAX), I wrote a JavaScript mock for the service and had the version of my application that I run unit tests against use the mock. (In that case it wasn't the speed of the page load we were worried about; it was junking up the analytics data with automated test runs.) That allowed me to avoid hitting the external site, yet still verify in my Selenium tests that I was making the right calls into the analytics site's JavaScript library at the appropriate times.
What I would suggest for your case is that you write a small HTTP proxy (you may find this question's answers useful, though if I were doing it I'd do it in Perl or Python, because that's pretty fast to write) that takes requests headed out to the external site and responds immediately with an empty document, or whatever's appropriate in your situation (while handling all requests not aimed at the analytics site normally).
In other words, don't try to prevent the JavaScript from executing, directly or by filtering the DOM; just intercept the slow external requests and respond quickly. I suggest this because intercepting an entire request is significantly easier than filtering content.
Then, when you start the Selenium RC server, point it at your HTTP proxy as the upstream proxy. Browsers started by the Selenium server will use the RC server as their proxy, and it will then filter everything through yours.
With this approach, you basically get to pretend that the external site is whatever you want.
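The heart of such a proxy is simply the decision of which requests to stub out. A minimal sketch of that decision in Java (the host name is a placeholder for the real analytics host, and a real proxy would additionally have to parse the incoming HTTP request and forward the non-analytics traffic upstream):

```java
// Sketch of the filtering decision inside a stubbing proxy:
// analytics requests get a canned empty response, everything
// else is signalled (via null) to be forwarded normally.
class AnalyticsStubFilter {
    private final String blockedHost;

    public AnalyticsStubFilter(String blockedHost) {
        this.blockedHost = blockedHost;
    }

    // Returns the stub response for requests to the blocked host,
    // or null when the request should pass through untouched.
    public String interceptedResponse(String requestHost) {
        if (blockedHost.equalsIgnoreCase(requestHost)) {
            return "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n";
        }
        return null;
    }
}
```

Responding with a well-formed empty 200 (rather than dropping the connection) matters: it lets the page's analytics callback complete instantly instead of hanging.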
I am trying to create a JSP page that will show the status of a group of local servers.
Currently I have a scheduled class that constantly polls the servers at a 30-second interval, with a 5-second delay to wait for each server's reply, and provides the JSP page with the information. However, I find this approach inaccurate, since it takes some time before the information in the scheduled class is updated. Do you have a better way to check the status of several servers within a local network?
-- Update --
Thanks @Romain Hippeau and @dbyrne for their answers.
Currently I am trying to move more of the code to the server end, that is, to constantly and asynchronously check the status of the group of servers so as to make the page more responsive.
However, I forgot to add that the client has the ability to control the server status. This causes a problem when, for example, the client changes a server's status and then refreshes the page: the page retrieves its information from the not-yet-updated scheduled class and shows the server's previous status instead.
You can use Tomcat Comet; here is an article: http://www.ibm.com/developerworks/web/library/wa-cometjava/index.html
This technology (which is part of the Servlet 3.0 spec) allows you to push notifications to the clients. There are issues with running it behind a firewall, but if you are within an intranet this should not be too big of an issue.
Make sure you poll the servers asynchronously. You don't want to wait for a response from one server before polling the next. This will dramatically cut down the amount of time it takes to poll all the servers. It was unclear to me from your question whether or not you are already doing this.
Asynchronous http requests in Java
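That asynchronous fan-out can be sketched in plain Java with an `ExecutorService`. Everything here is illustrative: the `Predicate` stands in for whatever per-server check you actually run (an HTTP ping, a socket connect, ...), and the class and parameter names are made up.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Poll every server in parallel instead of serially: the total time is
// roughly one per-server timeout, not the sum of all of them.
class StatusPoller {
    public static Map<String, Boolean> pollAll(List<String> servers,
                                               java.util.function.Predicate<String> isUp,
                                               long perServerTimeoutMs)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(servers.size());
        Map<String, Future<Boolean>> futures = new LinkedHashMap<>();
        for (String s : servers) {
            futures.put(s, pool.submit(() -> isUp.test(s)));  // one check per thread
        }
        Map<String, Boolean> statuses = new LinkedHashMap<>();
        for (Map.Entry<String, Future<Boolean>> e : futures.entrySet()) {
            try {
                statuses.put(e.getKey(),
                        e.getValue().get(perServerTimeoutMs, TimeUnit.MILLISECONDS));
            } catch (ExecutionException | TimeoutException ex) {
                statuses.put(e.getKey(), false);  // no reply in time counts as down
            }
        }
        pool.shutdownNow();
        return statuses;
    }
}
```

With 30 servers and a 5-second timeout, serial polling can take minutes in the worst case, while this version stays near 5 seconds.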
Let's say I click a button on a web page to initiate a submit request. Then I suddenly realize that some data I have provided is wrong and that if it gets submitted, then I will face unwanted consequences (something like a shopping request where I may be forced to pay up for this incorrect request).
So I frantically click the Stop button not just once but many times (just in case).
What happens in such a scenario? Does the browser just cancel the request without informing the server? If in case it does inform the server, does the server just kill the process or does it also do some rolling back of all actions done as part of this request?
I code in Java. Does Java have any special feature we can use to detect stopped requests and roll back whatever we did as part of the transaction?
A web page load in a browser is usually a four-step process (not considering redirects):
Browser sends HTTP Request, when the Server is available
Server executes code (for dynamic pages)
Server sends the HTTP Response (usually HTML)
Browser renders HTML, and asks for other files (images, css, ...)
The browser's reaction to "Stop" depends on which step your request is in at the time:
If your server is slow or overloaded, and you hit "Stop" during step 1, nothing happens. The browser doesn't send the request.
Most of the time, however, "Stop" will be hit during steps 2, 3, or 4. In those steps your code has already been executed, and the browser simply stops waiting for the response (2), receiving the response (3), or rendering the response (4).
The HTTP call itself is always a two-step action (request/response), and there is no automatic way to roll back the execution from the client.
Since this question may attract attention from people not using Java, I thought I would mention PHP's behavior in this regard, since it is very surprising.
PHP internally maintains the status of the connection to the client. The possible values are NORMAL, ABORTED, and TIMEOUT. While the connection status is NORMAL, life is good and the script will continue to execute as expected.
If the user clicks the Stop button in their browser, the connection is typically closed by the client and the status changes to ABORTED. A change of status to ABORTED immediately ends execution of the running script. As an aside, the same thing happens when the status changes to TIMEOUT (when PHP's limit on the allowed run time of scripts is exceeded).
This behavior may be useful in certain circumstances, but there are others where it could be problematic. It should be safe to abort at any time during a proper GET request; however, aborting in the middle of a request that makes a change on the server could leave the change only partially completed.
Check out the PHP manual's entry on Connection Handling to see how to avoid complications resulting from this behavior:
http://www.php.net/manual/en/features.connection-handling.php
Generally speaking, the server will not know that you've hit stop, and the server-side process will complete. At the point that the server tries to send the response data back to the client, you may see an error because the connection was closed, but equally you may not. What you won't get is the server thread being suddenly interrupted.
You can use various elaborate mechanisms to mitigate this, like having the client send frequent AJAX calls to the server that say "still waiting", and having the server perform its processing in a new thread which checks for these calls, but that doesn't solve the problem completely.
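The server-side bookkeeping for that "still waiting" idea can be sketched like this (names and timeout are hypothetical): the AJAX handler calls `heartbeat()` on each ping, and the worker thread consults `clientStillWaiting()` between processing steps, rolling back if the heartbeats have stopped.

```java
// Tracks whether the client is still pinging. One instance per
// in-flight request (e.g. stored in the HTTP session).
class HeartbeatMonitor {
    private volatile long lastHeartbeatMillis;
    private final long timeoutMillis;

    public HeartbeatMonitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
        this.lastHeartbeatMillis = System.currentTimeMillis();
    }

    // Called by the AJAX handler each time a "still waiting" ping arrives.
    public void heartbeat() {
        lastHeartbeatMillis = System.currentTimeMillis();
    }

    // Called by the processing thread to decide whether to keep going.
    public boolean clientStillWaiting() {
        return System.currentTimeMillis() - lastHeartbeatMillis < timeoutMillis;
    }
}
```

This only detects abandonment at the granularity of the checks, which is exactly why the answer says it doesn't solve the problem completely.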
The client will immediately stop transmitting data and close its connection. Nine times out of ten your request will already have gotten through (perhaps because OS buffers were flushed to the server). No extra information is sent to the server informing it that you stopped your request.
In important scenarios your submit should have two stages: verify, then submit. Only if the final submit goes through do you commit any transactions. I can't think of any other way to avoid that situation, other than allowing your user to undo his actions after a commit. In the order example, after the order is placed, allow your customers to change their mind and cancel the order if it has not been shipped yet. Of course, that's extra code you need to write to support it.
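That verify-then-submit flow can be sketched with a confirmation token; all names here are hypothetical, and a real implementation would persist the pending orders and expire stale tokens. A stopped request at either stage leaves at worst an unconfirmed entry behind, never a committed order:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Two-stage submit: stage one parks the order under a token,
// stage two commits only when the token comes back.
class TwoStageSubmit {
    private final Map<String, String> pendingOrders = new ConcurrentHashMap<>();

    // Stage 1: validate the order and hand the client a token to confirm with.
    public String verify(String orderDetails) {
        String token = UUID.randomUUID().toString();
        pendingOrders.put(token, orderDetails);
        return token;
    }

    // Stage 2: commit only if the token matches a pending order.
    // Returns the committed order, or null if there is nothing to commit
    // (unknown token, or one that was already confirmed).
    public String confirm(String token) {
        return pendingOrders.remove(token);
    }
}
```

The `remove` in `confirm` also makes the commit idempotent: frantically clicking submit (or Stop and then submit) can confirm a given token at most once.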