Functional Testing in JMeter - Java

I am new to JMeter and I wish to do a functional test on a website. I added an HTTP Request Defaults element and a recorder, and successfully recorded my use case. When I run the script it shows as executing in the View Results in Table listener. There are a few errors along the way, but the script does execute.
After the script finishes, I logged into the website and found that the use case had not actually been executed. I am confused as to what is going on.

As per the 4 Things You Should Never Do with Your JMeter Script article:
2. Don't run the script exactly as you recorded it
After recording your script, there is still some work to do before you run it. It's necessary to correlate variables, parameterize, and add elements to faithfully simulate users.
First of all, add an HTTP Cookie Manager to your Test Plan: most web applications use cookies for authentication, and cookie support is a vital part of a JMeter web test plan.
Record your test one more time and compare the two recorded scripts. If there are differences, you will need to correlate the corresponding values, i.e. extract them from the previous response using one of JMeter's Post-Processors and substitute the recorded "hard-coded" values with the appropriate JMeter Variables.
Add a View Results Tree listener to your Test Plan and re-run your test scenario, carefully inspecting request and response details. The fact that a request is not failing doesn't necessarily mean your test is doing what it is supposed to do: you might be hitting the login page every time, and JMeter simply treats HTTP status code 200 as success.
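To make the correlation step concrete, here is roughly what a Regular Expression Extractor Post-Processor looks like inside a .jmx test plan. The token name, regex, and variable name (csrfToken) are hypothetical; adapt them to whatever dynamic value differs between your two recordings:

```xml
<RegexExtractor guiclass="RegexExtractorGui" testclass="RegexExtractor"
                testname="Extract CSRF token" enabled="true">
  <stringProp name="RegexExtractor.useHeaders">false</stringProp>
  <stringProp name="RegexExtractor.refname">csrfToken</stringProp>
  <stringProp name="RegexExtractor.regex">name=&quot;csrf&quot; value=&quot;([^&quot;]+)&quot;</stringProp>
  <stringProp name="RegexExtractor.template">$1$</stringProp>
  <stringProp name="RegexExtractor.default">NOT_FOUND</stringProp>
  <stringProp name="RegexExtractor.match_number">1</stringProp>
</RegexExtractor>
```

Subsequent requests would then reference ${csrfToken} in place of the recorded hard-coded value.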

Related

Investigating root cause for long response from application

Application - Struts 1.2, Tomcat 6
Action class calls a service which through DAO executes a query and returns results.
The query is not long-running and returns results in seconds when run directly against the database (through an SQL client, say SQL Developer). But when the user goes through the application front end and the same query runs in the background, the system hangs and the response either times out or takes a very long time.
The issue is specific to one particular screen, implying that app-server-to-db-server connectivity is fine.
Is there a way to enable debug logging in Tomcat/Struts without any code change, to identify which of the two scenarios below (or any other scenario) is occurring?
The query is taking the time.
The response is not being sent back to the browser.
P.S. - Debugging or code change to add logging is not an immediate option.
Something to look at is a "Java profiler". The one that I've used and have liked is YourKit.
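Before attaching a full profiler, a couple of thread dumps taken while the screen hangs will usually distinguish the two scenarios: a thread stuck in a JDBC/socket read means the query is the problem. You can take dumps with no code change at all using `jstack <pid>` (or `kill -3` on the Tomcat process). As a sketch, the programmatic equivalent using the standard ThreadMXBean API looks like this:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    /**
     * Returns a textual dump of every live thread's stack,
     * similar to what `jstack <pid>` prints.
     */
    public static String dumpAllThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder out = new StringBuilder();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            out.append('"').append(info.getThreadName()).append("\" ")
               .append(info.getThreadState()).append('\n');
            for (StackTraceElement frame : info.getStackTrace()) {
                out.append("\tat ").append(frame).append('\n');
            }
        }
        return out.toString();
    }
}
```

Comparing two or three dumps taken a few seconds apart shows whether a thread is genuinely stuck in the database call or has already returned.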

Connection Timed out Page

I have a Java web application that generates a report, with the ability to export that report to an Excel file. The problem is that whenever I generate it as an Excel file, a "Connection Timed Out" page is displayed in the Firefox browser.
Basically I have no idea why this is happening; I see no problems in my code. Could it be a server issue, or the amount of data I'm generating? Also, no error logs are being produced.
Any advice or suggestions would be of great help. Thanks.
It sounds like the request is taking too long, and being timed out. Basically it's taking too long to generate the report. This could be too long for the client, the app server or the webserver (if you have a separate webserver). You have a few options:
Find out where the timeout settings are in the Application Server and increase them
Speed up your report writing code so it doesn't take as long
Make the report writer an asynchronous job (e.g. by kicking off the report generation in a new thread), and have the client poll the server until it's finished, then request the file.
Update based on OP comment:
Regarding the last suggestion:
If the report's generated by another thread, the current request will return before the report is generated, so the browser won't have to wait at all. However, this is quite a large amount of work because you have to have a way for the client-side code to find out when the report is finished. Also, you are not supposed to launch your own threads from a Servlet.
Maybe you can make the original request via AJAX, or in an iFrame? This way the restrictive timeout threshold may not be in effect.
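A minimal sketch of the asynchronous option, assuming the report can be produced by a Callable (the class and method names here are illustrative, not from the original application): the first request starts the job and returns at once; later polling requests call fetchIfReady() until the bytes are available.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReportService {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    private volatile Future<byte[]> pending;

    /** Called from the original request: starts generation and returns immediately. */
    public void startReport(Callable<byte[]> generator) {
        pending = pool.submit(generator);
    }

    /** Called from the client's polling requests: null until the file is ready. */
    public byte[] fetchIfReady() throws Exception {
        if (pending == null || !pending.isDone()) {
            return null;          // still generating: tell the client to poll again
        }
        return pending.get();     // finished: hand back the report bytes
    }
}
```

In a real deployment the ExecutorService would live outside the servlet (or be container-managed) to respect the restriction on launching your own threads.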

Is there a way to capture cancelled requests using a har proxy?

In my project, I have some JavaScript responsible for tracking user actions in order to optimize the page's layout. These calls are executed when the user clicks something, including the links leading to further pages.
I have the whole flow covered by automated tests written in Java and based on Selenium Webdriver. I'm using a Browsermob proxy to capture the requests and verify that the correct data is being passed to the user tracking service.
In certain situations, the requests hitting the service do not get recorded by the proxy. This happens because the browser navigates to the next page before receiving the response from the tracking service. The request actually reaches its destination, which I can see from the state of the database. But because the browser does not wait for the responses, the proxy misses them, despite the default 5-second wait, which simply seems to be ignored in this case. This only happens once in a while, causing false negatives in my test reports.
I can't force the browser to actually wait for those requests because I don't want the tracking to impede the user journey. What I'd like to do is to somehow configure the proxy to tell the difference between requests that have not been sent and the ones cancelled midway. This way I could attach this information to my reports.
Is this possible to achieve using the Browsermob proxy? Perhaps some other tool would do a better job.
Try the PhantomJS WebDriver implementation: with it you don't need to start a Jetty-based proxy server, and you can capture all requests, even those that never received a response.

Filtering javascript XHR calls in selenium RC

I've got a mad problem with an application I test with Selenium RC.
At the end of the page, a JavaScript script makes several calls to an analytics web service, which takes literally minutes to respond.
Selenium waits for these calls to end before going to the next page, even though their responses are unrelated to the correct execution of the process.
Eventually, Selenium throws a timeout exception (the timeout is set to 4 minutes).
I'm using selenium-RC 1.0.1 along with Firefox 3.5.16
First, what I can't do:
- change the application (i have no control over it)
- change my Firefox version (several production machines are involved, and I need this version)
- using WebDriver/Selenium 2 (for the reason above)
I think that blocking the JavaScript calls would be the thing to do, but I can't figure out how to do that.
- I'm trying, with a selenium.runScript and a selenium.getEval, to set the JavaScript variables to null, but by the time they're set it's too late
- I'm using Gecko's Object.watch method to see when the values are changed, but with no success
I would like to know if there is a way to filter content via Selenium before the DOM is created. I think it would be possible via a Firefox extension, but that would be the last thing I want to do.
Or perhaps it's possible to recognize all active XHRs in the page and abort them.
I'm open to a bunch of new ideas
Thanks for reading
Grooveek
Sorry to hear that changing the application isn't an option - when I ran into a similar situation (external analytics service called through ajax), I wrote a mock in JavaScript for the service and had the version of my application that I run unit tests against use the mock. (In that case it wasn't speed of page load we were worried about, it was junking up the analytics data with automated test runs) That allowed me to avoid hitting the external site, yet still verify in my selenium tests that I was calling the right calls in the analytics site's javascript library at the appropriate times.
What I would suggest for your case is that you write a small HTTP proxy (you may find this question's answers useful, though if I were doing it I'd do it in Perl or Python, because that's pretty fast to write) that takes requests headed out to the external site, and responds immediately with an empty document or whatever's appropriate in your situation. (but handling all requests not aimed at the analytics site normally)
In other words, don't try to prevent the javascript from executing directly or by filtering the DOM, but just intercept the slow external requests and respond quickly. I suggest this because intercepting an entire request is significantly easier than filtering content.
Then, when you start the selenium RC server, point it at your http proxy as the upstream proxy. Browsers started by the selenium server will use the RC server as their proxy, and it'll then filter everything through your proxy.
With this approach, you basically get to pretend that the external site is whatever you want.
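For what it's worth, the little proxy doesn't have to be Perl or Python; a bare-bones version is possible in Java with the JDK's built-in com.sun.net.httpserver. This sketch short-circuits any request whose host matches a (hypothetical) analytics domain with an instant empty 200 and naively relays everything else; a production version would also need to forward headers, request bodies, and CONNECT tunnels:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URI;

public class BlackholeProxy {
    static final String ANALYTICS_HOST = "analytics.example.com"; // hypothetical

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            URI uri = exchange.getRequestURI();
            if (ANALYTICS_HOST.equals(uri.getHost())) {
                // Short-circuit: reply immediately with an empty document
                exchange.sendResponseHeaders(200, -1);
                exchange.close();
            } else {
                relay(exchange, uri); // pass everything else through
            }
        });
        server.start();
        return server;
    }

    /** Naive pass-through for non-analytics requests. */
    static void relay(HttpExchange exchange, URI uri) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) uri.toURL().openConnection();
        conn.setRequestMethod(exchange.getRequestMethod());
        int code = conn.getResponseCode();
        InputStream in = code >= 400 ? conn.getErrorStream() : conn.getInputStream();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        if (in != null) {
            byte[] chunk = new byte[8192];
            for (int n; (n = in.read(chunk)) != -1; ) buf.write(chunk, 0, n);
            in.close();
        }
        exchange.sendResponseHeaders(code, buf.size() == 0 ? -1 : buf.size());
        if (buf.size() > 0) exchange.getResponseBody().write(buf.toByteArray());
        exchange.close();
    }
}
```

You would then start the Selenium RC server with this proxy's port as the upstream proxy.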

Best method of triggering a shell script from Java

I have a shell script which I'd like to trigger from a J2EE web app.
The script does lots of things - processing, FTPing, etc - it's a legacy thing.
It takes a long time to run.
I'm wondering what is the best approach to this. I want a user to be able to click on a link, trigger the script, and display a message to the user saying that the script has started. I'd like the HTTP request/response cycle to be instantaneous, irrespective of the fact that my script takes a long time to run.
I can think of three options:
Spawn a new thread during the processing of the user's click. However, I don't think this is compliant with the J2EE spec.
Send some output down the HTTP response stream and commit it before triggering the script. This gives the illusion that the HTTP request/response cycle has finished, but actually the thread processing the request is still sitting there waiting for the shell script to finish. So I've basically hijacked the container's HTTP processing thread for my own purpose.
Create a wrapper script which starts my main script in the background. This would let the request/response cycle finish normally in the container.
All the above would be using a servlet and Runtime.getRuntime().exec().
This is running on Solaris using Oracle's OC4J app server, on Java 1.4.2.
Please does anyone have any opinions on which is the least hacky solution and why?
Or does anyone have a better approach? We've got Quartz available, but we don't want to have to reimplement the shell script as a Java process.
Thanks.
You mentioned Quartz so let's go for an option #4 (which is IMO the best of course):
Use Quartz Scheduler and a org.quartz.jobs.NativeJob
PS: The biggest problem may be to find documentation and this is the best source I've been able to find: How to use NativeJob?
I'd go with option 3, especially if you don't actually need to know when the script finishes (or have some other way of finding out other than waiting for the process to end).
Option 1 wastes a thread that's just going to be sitting around waiting for the script to finish. Option 2 seems like a bad idea. I wouldn't hijack servlet container threads.
Is it necessary for your application to evaluate output from the script you are starting, or is this a simple fire-and-forget job? If it's not required, you can 'abuse' the fact that Runtime.getRuntime().exec() will return immediately with the process continuing to run in the background. If you actually wanted to wait for the script/process to finish, you would have to invoke waitFor() on the Process object returned by exec().
If the process you are starting writes anything to stdout or stderr, be sure to redirect these to either log files or /dev/null; otherwise the process will block after a while, since stdout and stderr are exposed as InputStreams with limited buffering capabilities through the Process object.
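As a sketch of the fire-and-forget approach using the later ProcessBuilder API (Java 7+, so not applicable as-is on 1.4.2, where you would instead spawn "gobbler" threads to drain the streams): it launches the command without waiting and redirects all output to a log file, so the child can never block on a full pipe buffer. Paths and names are illustrative:

```java
import java.io.File;
import java.io.IOException;

public class FireAndForget {
    /**
     * Launches the command without waiting for it, appending merged
     * stdout/stderr to a log file so the child process can never
     * block on a full pipe buffer.
     */
    public static Process launch(String[] command, File logFile) throws IOException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);  // merge stderr into stdout
        pb.redirectOutput(ProcessBuilder.Redirect.appendTo(logFile));
        return pb.start();             // returns immediately; do not call waitFor()
    }
}
```

The caller simply discards the returned Process if it never needs the exit status.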
My approach to this would probably be something like the following:
Set up an ExecutorService within the servlet to perform the actual execution.
Create an implementation of Callable with an appropriate return type, that wraps the actual script execution (using Runtime.exec()) to translate Java input variables to shell script arguments, and the script output to an appropriate Java object.
When a request comes in, create an appropriate Callable object, submit it to the executor service and put the resulting Future somewhere persistent (e.g. user's session, or UID-keyed map returning the key to the user for later lookups, depending on requirements). Then immediately send an HTTP response to the user implying that the script was started OK (including the lookup key if required).
Add some mechanism for the user to poll the progress of their task, returning either a "still running" response, a "failed" response or a "succeeded + result" response depending on the state of the Future that you just looked up.
It's a bit handwavy but depending on how your webapp is structured you can probably fit these general components in somewhere.
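The handwavy outline above might look roughly like this; the class names, the UID-keyed map, and the use of the script's exit code as the "result" are all illustrative:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ScriptRunner {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);
    private final Map<String, Future<Integer>> jobs =
            new ConcurrentHashMap<String, Future<Integer>>();

    /** Submits the command and returns the key the client polls with. */
    public String submit(final String[] command) {
        String key = UUID.randomUUID().toString();
        jobs.put(key, pool.submit(new Callable<Integer>() {
            public Integer call() throws Exception {
                Process p = Runtime.getRuntime().exec(command);
                return p.waitFor();   // exit code stands in for the "result"
            }
        }));
        return key;
    }

    /** null while still running; otherwise the script's exit code. */
    public Integer poll(String key) throws Exception {
        Future<Integer> f = jobs.get(key);
        return (f != null && f.isDone()) ? f.get() : null;
    }
}
```

The servlet would store the returned key (in the session or in the response to the user) and serve the poll requests by calling poll(key).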
If your HTTP response / the user does not need to see the output of the script, or be aware of when the script completes, then your best option is to launch the script via some sort of wrapper script, as you mention, so that it runs outside of the servlet container environment as a whole. This means you absolve yourself from needing to manage threads within the container, hijacking a thread as you mention, etc.
Only if the user needs to be informed of when the script completes and/or monitor the script's output would I consider options 1 or 2.
For the second option, you can use a servlet, and after you've responded to the HTTP request you can use Runtime.getRuntime().exec() to execute your script. I'd also recommend that you look here: http://www.javaworld.com/javaworld/jw-12-2000/jw-1229-traps.html
... for some of the problems and pitfalls of using it.
The most robust solution for asynchronous backend processes is using a message queue, IMO. Recently I implemented this using a Spring-embedded ActiveMQ broker, and rigging up a producing and a consuming bean. When a job needs to be started, my code calls the producer, which puts a message on the queue. The consumer is subscribed to the queue and gets kicked into action by the message in a separate thread. This approach neatly separates the UI from the queueing mechanism (via the producer) and from the asynchronous process (handled by the consumer).
Note this was a Java 5, Spring-configured environment running on a Tomcat server on developer machines, and deployed to Weblogic on the test/production machines.
Your problem stems from the fact that you are trying to go against the 'single response per request' model in J2EE, and have the end-user's page dynamically update as the backend task executes.
Unless you want to go down the route of introducing an Ajax-based solution, you will have to force the rendered page in the user's browser to 'poll' the server for information periodically, until the back-end task completes.
This can be achieved by:
When the J2EE container receives the request, spawn a thread which takes a reference to the session object (which will be used to store the output of your script).
Initialize the response servlet to write an HTML page containing a JavaScript function that reloads the page from the server at regular intervals (every 10 seconds or so).
On each request, poll the session object to display the output stored by the thread spawned in step 1.
[clean-up logic can be added to delete the stored content from the session once the thread completes if needed, also you can set any additional flags in the session for mark state transitions of the execution of your script]
This is one way to achieve what you want. It isn't the most elegant of approaches, but it essentially comes down to needing to update your page content asynchronously from the server within a request/response model.
There are other ways to achieve this, but it really depends on how inflexible your constraints are. I have heard of Direct Web Remoting (although I haven't played with it yet); it might be worth taking a look at Developing Applications using Reverse-Ajax.
