Executing multiple Selenium WebDriver test cases in parallel on one machine - java

I am able to execute a maximum of 5 test cases in parallel on one machine.
How can I increase the number of test cases running in parallel?

You'll typically be limited by the power of your computer (number of CPU cores, memory speed and memory amount). That limits the number of browsers you can open, and therefore the number of tests you can run at once. To run more tests, faster, use a Selenium Grid, such as SauceLabs (which is free for a trial period).
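For illustration, here is a minimal Java sketch of pointing a test at a Grid hub instead of a local browser; the hub URL, the Firefox capabilities and the class name are assumptions for the example, not details from the question:

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical hub URL -- point this at your own Grid or cloud provider.
        URL hub = new URL("http://localhost:4444/wd/hub");

        // Each parallel test opens its own remote session; the hub assigns it
        // to whichever node currently has a free browser slot.
        WebDriver driver = new RemoteWebDriver(hub, DesiredCapabilities.firefox());
        try {
            driver.get("https://example.com/");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}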
I talk about this in the book I'm working on -- Selenium WebDriver In Practice.
Alex

Related

Running JUnit tests in parallel to maximize performance through Jenkins/IntelliJ

I have a project which contains around 50 JUnit tests. Each test takes about 5-6 minutes to run, because it asserts on data that takes about that long to become available in Redshift (the test looks up the data in Redshift). I am trying to run them all in parallel and expect the whole suite to finish in about 15-20 minutes at most. I tried the maxParallelForks option on the test task, but the run still takes more than an hour. These tests are all independent. Is there an efficient way to parallelize them?
N.B. These are tests against neighboring systems, not plain unit tests, so I don't have the option to mock the results; the results have to come from the actual interactions between our neighboring systems.
Thanks
Here is what I am using:
tasks.withType(Test) {
    // fork up to 50 test JVMs in parallel
    maxParallelForks = 50
}
Our cut-off is to have them running in 20-25 minutes at most, but it's taking more than an hour.
Create a file named junit-platform.properties in your src/test/resources folder with the following data:
junit.jupiter.execution.parallel.enabled=true
junit.jupiter.execution.parallel.config.strategy=fixed
junit.jupiter.execution.parallel.config.fixed.parallelism=4
I have 4 cores (without hyper-threading). You can also try the dynamic strategy.
Annotate your class with @Execution(ExecutionMode.CONCURRENT).
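For example, a minimal JUnit 5 sketch of that annotation in use (the class and method names are made up for illustration):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

// With parallel execution enabled in junit-platform.properties,
// the test methods of this class may run concurrently.
@Execution(ExecutionMode.CONCURRENT)
class RedshiftAssertionsTest {

    @Test
    void recordAIsAvailable() {
        assertTrue(true); // placeholder for the slow Redshift lookup
    }

    @Test
    void recordBIsAvailable() {
        assertTrue(true); // placeholder for the slow Redshift lookup
    }
}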
Here is a great article on the topic. Also one here.
I have also experimented with parallel test execution and reached the following conclusions:
If the tests are short (not the case for you, though), it's not worth it. With short tests, the effort to set up the whole multi-threaded testing environment, plus the context switches later on, didn't result in any performance gain. (I had roughly 150+ unit tests.)
When dealing with Spring, the tests need to be configured correctly. As you start containers in parallel, they must not try to use the same port, etc. You also have to be aware of context caching: even if test B didn't require component XYZ, I asked Spring to initialize it anyway, and Spring then realized that test A already had the same configuration, so the context was reused, which gave a far better performance gain than starting two different containers on multiple threads. (See the sketch after this list.)
If your tests contain code (as in your case) where you "await" an event in order to complete the test, using multiple threads is the way to go.
Finally: "Sounds good, doesn't work." On my local machine the parallel execution gave some degree of performance gain (IntelliJ, Maven), but our CI server is a single-virtual-core machine, and there asking for a fixed 4 threads caused a massive performance drop.
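To make the Spring point above concrete, here is a minimal sketch of a parallel-friendly test class. It assumes a Spring Boot application; the class name and the endpoint are placeholders:

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;

// RANDOM_PORT stops two containers started in parallel from fighting over the same port.
// Keeping the configuration identical across test classes lets Spring reuse the cached context.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ParallelSafeIntegrationTest {

    @Autowired
    private TestRestTemplate rest;

    @Test
    void healthEndpointResponds() {
        // Placeholder endpoint -- replace with one your application actually exposes.
        assertNotNull(rest.getForObject("/actuator/health", String.class));
    }
}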

Selenium Grid - Specified number of threads not getting fully occupied

I have specified the number of threads as 20 for 300 test cases. When the test run starts it occupies all 20 threads and completes 270+ test cases; after that the thread occupancy drops to very few threads, and by the end it is running on a single thread.
This is the same regardless of the number of threads or test cases: the last 10% of the tests run on a minimal number of threads even though there are more test cases left to run than number_of_threads.
Test environment:
Selenium Grid v2.53.1
Ruby, Cucumber, RemoteWebDriver with the http::persistent client
I have searched for similar issues and found nothing I can relate to. Please let me know if there is an existing Selenium issue for this, or whether there are any tweaks to resolve it.
This has got nothing to do with Selenium, especially the Selenium Grid. The Grid is merely an execution environment, which facilitates running your browser based UI tests in a remote environment.
It does not manage the test concurrency. You would need to check at the ruby level to figure out what is happening here.

Selenium Grid 2 parallel tests on multiple browsers fail occasionally for no obvious reason

I run a Grid with 2 nodes on the same Linux VM. Sometimes (about 50% of the time) a test fails with NoSuchElementException for no real reason; the element is there and I can see it at runtime.
However, I can never catch this failure while debugging.
I'm pretty sure it is related to parallel testing.
I use Ubuntu 12.04, with Firefox 18 & google-chrome 23.0.
My webdriver instances are initiated in the test method itself.
My testng.xml specifies parallel="methods", and I can see all browsers open at the same time and the tests running together.
Is this a known issue? I intended to run at even higher parallelism, but the more nodes I add to the VM and the more tests I run in parallel, the higher the failure rate.
Is there a fix for that?
My guess is that you are running all your nodes on the same VM display. It is therefore very likely that your tests are interfering with one another when running in parallel: two actions (in different tests) can be triggered simultaneously, and only one event will actually be executed (such as a click).
This also probably consumes a lot of compute resources on your node hardware.
I recommend (from experience) running one node per browser/platform machine when running in parallel, to prevent false negatives. (The hub can still be on the same machine as a node.)
Alternatively, on Linux only, you can run the different node sessions on different X displays (DISPLAY values); this will still consume compute resources and will probably slow down the tests if you use too many.
You can try reading this; maybe it will give you some ideas:
effective ui testing lab

How to speed up Selenium/JUnit test execution by avoiding the registry settings backup

We have 500+ test cases for our application. The test run takes 4 to 6 hours depending on CPU and RAM overhead.
For each test case, Selenium starts and stops IE, and IE backs up the registry each time. I am seeing the following statements for each test case:
14:43:38,312 INFO [org.openqa.selenium.server.browserlaunchers.WindowsProxyManager] Backing up registry settings...
14:43:40,234 INFO [org.openqa.selenium.server.browserlaunchers.WindowsProxyManager] Modifying registry settings...
There is a 2-minute gap between the two statements above.
Can we bypass the registry backup when running the test cases? That way my test cases would finish in less than 20 minutes, compared to the current build.
This thread shows how to reuse a Firefox session. I haven't tried it, but I imagine there is an equivalent for Internet Explorer. You could also use Selenium Grid to reduce the overall duration.
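If you can move these tests to WebDriver, another way to cut the per-test launch overhead (and the registry backup that comes with each launch) is to share one browser per test class rather than one per test. This is a different approach from session reuse, shown here only as a hedged JUnit 4 sketch with placeholder URLs:

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class SharedBrowserTest {

    private static WebDriver driver;

    // Start IE once for the whole class instead of once per test case,
    // so the launch overhead is paid only once.
    @BeforeClass
    public static void startBrowser() {
        driver = new InternetExplorerDriver();
    }

    @AfterClass
    public static void stopBrowser() {
        driver.quit();
    }

    @Test
    public void homePageLoads() {
        driver.get("http://intranet.example.com/"); // placeholder URL
    }

    @Test
    public void loginPageLoads() {
        driver.get("http://intranet.example.com/login"); // placeholder URL
    }
}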

How do I improve jetty response times?

I'm trying to speed-test Jetty (to compare it with Apache) for serving dynamic content.
I'm testing this with three client threads, each requesting again as soon as a response comes back.
These are running on a local box (OS X 10.5.8, MacBook Pro). Apache is pretty much straight out of the box (XAMPP distribution), and I've tested Jetty 7.0.2 and 7.1.6.
Apache is giving me spiky times: response times up to 2000 ms, but an average of 50 ms, and if you remove the spikes (about 2%) the average is 10 ms per call. (This was against a PHP hello-world page.)
Jetty is giving me no spikes, but response times of about 200 ms.
This was against the localhost:8080/hello/ page that is distributed with Jetty, starting Jetty with java -jar start.jar.
This seems slow to me, and I'm wondering if it's just me doing something wrong.
Any suggestions on how to get better numbers out of Jetty would be appreciated.
Thanks
Well, since I am successfully running a site with some traffic on Jetty, I was pretty surprised by your observation.
So I just tried your test. With the same result.
So I decompiled the Hello servlet which comes with Jetty. And I had to laugh - it really includes the following line:
Thread.sleep(200L);
You can see for yourself.
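If you want to re-run the latency test against something that does no artificial sleeping, a minimal servlet along these lines (class name made up, javax.servlet API assumed) can be deployed to Jetty and measured instead:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A hello-world servlet with no Thread.sleep(), so response times reflect
// Jetty itself rather than the artificial 200 ms delay in the bundled demo.
public class PlainHelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        response.setStatus(HttpServletResponse.SC_OK);
        response.getWriter().println("<h1>Hello</h1>");
    }
}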
My own experience with Jetty performance: I ran multi-threaded load tests against my real-world app and saw a throughput of about 1000 requests per second on my dev workstation.
Note also that your speed test is really just a latency test, which is fine so long as you know what you are measuring. But Jetty does trade off latency for throughput, so there are often servers with lower latency, but also lower throughput.
Realistic traffic for a webserver is not 3 very busy connections - 1 browser will open 6 connections, so that represents half a user. More realistic traffic is many hundreds or thousands of connections, each of them mostly idle.
Have a read of my blogs on this subject:
https://webtide.com/truth-in-benchmarking/
and
https://webtide.com/lies-damned-lies-and-benchmarks-2/
You should definitely check it with a profiler. Here are instructions on how to set up remote profiling with Jetty:
http://sujitpal.sys-con.com/node/508048/mobile
Speeding up or performance-tuning any application or server is genuinely hard in my experience. You'll need to benchmark several times with different workload models to determine your peak load. Once you have defined the peak load for the configuration/environment you need to tune, you may have to run 5+ iterations of your benchmark. Check the configuration of both Apache and Jetty in terms of the number of worker threads processing requests, and get them to match if possible. Here are some recommendations:
Consider the differences between the two environments (GC in Jetty: consider setting your minimum and maximum heap sizes to the same value, and only then run your test).
The load should come from another box. If you don't have a second box/PC/server, take your CPU/core count into account and pin the load-test process to a specific CPU; do the same for Jetty/Apache.
This only applies if you can't get another machine to act as the load generator.
Run several workload models.
When modeling the test, use the following two stages:
Stage 1: one thread per configuration for 30 minutes; then start with 1 thread and ramp up to 5, increasing the count every 10 minutes.
Stage 2: based on the metrics from stage 1, define a number of threads for the test and run that many threads concurrently for 1 hour.
Correlate the metrics (response times) from your testing app with the server hosting the application resources (use sar, top and other Unix commands to track CPU and memory); some other process might be impacting your app. (Memory is mainly relevant for Apache; Jetty is constrained by the JVM memory configuration, so its memory usage should not change once the server is up and running.)
Be aware of the HotSpot compiler: methods have to be called many times (around 1,000 times?) before they are compiled into native code, so warm the server up before measuring (a rough warm-up sketch follows below).
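As a sketch of such a warm-up, assuming the bundled /hello/ endpoint and plain HttpURLConnection (the request count is arbitrary):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WarmUp {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/hello/"); // endpoint under test
        // Issue a few thousand unmeasured requests so HotSpot can JIT-compile
        // the request-handling code paths before the real measurement starts.
        for (int i = 0; i < 2000; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) {
                    // drain the response body
                }
            }
            conn.disconnect();
        }
        System.out.println("Warm-up complete; start measuring now.");
    }
}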
