Selenium, Java, Grid: How to create an unconditional pause?

I need to test time tracking on a page, so I need to be able to pause and do nothing except let the clock run and log time. Most of what I've seen from Googling is just Thread.sleep(300), but there are times when I actually need a test to wait five minutes or more. I don't want to raise the node's timeout beyond 5 minutes, simply because, if there's a client failure, I want the node to release the browser so another test can start.

One thing I have tried is waiting a specific amount of time for an element that I know isn't there, so that the wait periodically sends instructions to the node and keeps it from releasing the browser, but for some reason it only works when I'm debugging; otherwise it waits forever. I could make a method that uses .sleep() and periodically sends some trivial instruction to the node, like getting the current URL, to keep it from dropping the browser.

What is the best way to pause for 5+ minutes without increasing the timeout parameter for the node?
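A minimal sketch of that last idea, assuming an occasional trivial WebDriver command (here getCurrentUrl()) is enough to keep the node from releasing the session; the keepAlivePause name and the 30-second ping interval are just illustrative choices:

import org.openqa.selenium.WebDriver;

public final class PauseUtil {
    // Sleeps for roughly totalMillis while pinging the remote node every
    // 30 seconds so the Grid does not drop the browser session.
    public static void keepAlivePause(WebDriver driver, long totalMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + totalMillis;
        while (System.currentTimeMillis() < deadline) {
            long remaining = deadline - System.currentTimeMillis();
            Thread.sleep(Math.min(remaining, 30_000));
            driver.getCurrentUrl(); // trivial command that keeps the session active
        }
    }
}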

Related

Can I change an object on a sleeping thread?

Scenario
There is a factory that receives orders. Once an order is received, every order item goes through a multi-step production process. Each step is done by a separate machine, and each machine can only handle one item at a time. So the order comes in, the first item goes to machine1; when it's done, it goes to machine2 and the next item goes to machine1, and so on.
Technical part
Every machine is implemented as a thread and has a queue of all the items that need this step of the process next. The machine's run method checks in an endless while loop whether there is anything in the queue; if there is, it handles that item, sleeps for a certain amount of time, and then pushes the item onto the queue of the next machine.
Questions
In my head, this all sounds pretty simple. But I constantly run into NullPointerExceptions and other weird exceptions. I honestly don't fully understand what's wrong, but I suspect it's a problem with multi-threading vs. sleep. At this point I have two questions:
What happens if I call a method of a sleeping thread (machine)? (Example: I call machine.addItemToQueue() while that machine is working on another item).
Following on from Q1: let's say I really can't call that method while the machine 'sleeps'. How else would I handle this? Should I take the queue outside the machine? Is this an async problem?
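One way to restructure this, sketched with hypothetical Item and Machine classes: give each machine a BlockingQueue, so that addItemToQueue() can safely be called from another thread even while this machine's own thread is sleeping or blocked waiting for work:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class Item { }

class Machine implements Runnable {
    private final BlockingQueue<Item> queue = new LinkedBlockingQueue<>();
    private final Machine next;           // next production step, or null for the last machine
    private final long processingMillis;  // simulated work time for this step

    Machine(Machine next, long processingMillis) {
        this.next = next;
        this.processingMillis = processingMillis;
    }

    // Safe to call from any thread at any time: LinkedBlockingQueue
    // does the necessary locking internally.
    void addItemToQueue(Item item) {
        queue.add(item);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Item item = queue.take();        // blocks until an item arrives
                Thread.sleep(processingMillis);  // "work" on the item
                if (next != null) {
                    next.addItemToQueue(item);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // restore interrupt status and exit
        }
    }
}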

Problems maintaining trivial RPS numbers with JMeter for API testing

I am doing API load testing with JMeter. I have a Macbook Air (client) connected with ethernet to a machine being tested with the load (server).
I wanted to do a simple test: hit the server with 5 requests per second (RPS). I create a Concurrency Thread Group with 60 threads and a Throughput Shaping Timer set to 5 RPS for one minute, add my HTTP request, hit the play button, and run the test.
I expect my Hits per Second listener to show a flat line at 5 hits per second. Instead I see a variable rate: it starts at 5, then drops to 2, then later to 4... Sometimes it even exceeds the specified 5 RPS (e.g. 6 RPS). The point is that it's not a constant 5 - the rate is all over the place. And I don't get any errors.
My server takes between 500 ms and 3 s to return an answer, depending on how much load is present - this is what I am testing. What I want this test to show is whether the server can, as far as possible, respond within 500 ms under load, and I am not getting that. I have to start wondering if it's JMeter's fault in some way, but that's a topic for another day.
When I replace my HTTP sample request with a dummy sampler, I get the RPS I desire.
I thought I had a problem with JMeter resources, so I changed the heap size to 1 GB, added the -XX:+DisableExplicitGC and -d64 flags, and ran in CLI mode. I never got any errors, neither before setting the flags nor after. Besides, I believe that 5 RPS is a small number, so I don't expect resources to be a problem.
Something worth noting is that sometimes the threads start executing towards the end of the test rather than at the start, which I find very odd behaviour.
What's next? Time to move to a new tool?

RESTful: What is the difference between ClientProperties.CONNECT_TIMEOUT and ClientProperties.READ_TIMEOUT in Jersey?

To set up timeouts when making REST calls we should specify both of these parameters, but I'm not sure why both are needed or exactly what different purposes they serve. Also, what happens if we set only one of them, or set both to different values?
CONNECT_TIMEOUT is the amount of time the client will wait to establish the connection to the host. Once connected, READ_TIMEOUT is the amount of time allowed for the server to respond with all of the content of a given request.
How you set either one will depend on your requirements, but they can be different values. CONNECT_TIMEOUT should not require a large value, because it is only the time required to set up a socket connection with the server. 30 seconds should be ample time - frankly, if it is not complete within 10 seconds it is too long, and the server is likely hosed, or at least overloaded.
READ_TIMEOUT - this could be longer, especially if you know that the action/resource you requested takes a long time to process. You might set this as high as 60 seconds, or even several minutes. Again, this depends on how critical it is that you wait for confirmation that the process completed, and you'll weigh this against how quickly your system needs to respond on its end. If your client times out while waiting for the process to complete, that doesn't necessarily mean that the process stopped, it may keep on running until it is finished on the server (or at least, until it reaches the server's timeout).
If these calls are directly driving an interface, then you may want much lower times, as your users may not have the patience for such a delay. If it is called in a background or batch process, then longer times may be acceptable. This is up to you.
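A minimal sketch of setting both properties on a Jersey 2 client; the 10-second and 60-second values and the target URL are placeholders for whatever your requirements dictate:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.client.ClientProperties;

public class TimeoutExample {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        // Both values are in milliseconds.
        client.property(ClientProperties.CONNECT_TIMEOUT, 10_000); // time to establish the socket
        client.property(ClientProperties.READ_TIMEOUT, 60_000);    // time to wait for the full response

        String response = client.target("http://example.com/api/resource") // placeholder URL
                                .request()
                                .get(String.class);
        System.out.println(response);
        client.close();
    }
}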

REST (Jersey) Call Taking Forever Certain Number of Requests

Basic Info:
REST Request, Using Jersey (Java)
I'm working on a project where there's a list of numbers that refer to an individual item.
The user can click on an item number and the corresponding item/data is loaded and presented.
We're having this odd issue where, after about the 14th click or so (direction is irrelevant), a single REST call takes forever. We're talking another 500 ms to 1 s for each additional click after that 14th (or so) click. I've been patient enough to drive it up to 15 seconds. Chrome displays under 2 seconds for the "waiting" portion of the event and 2+ seconds in the receiving state for 360 bytes. Any ideas on what could possibly cause this?
I wrote a test page that just hammered the server with dozens and dozens of requests. As expected, the browser prevented more than 6 at a time being loaded.
The individual set of 6 requests behaved normally.
I've also tried making the same REST request sequentially: waiting until one was done, then waiting 500 ms, then calling it again to simulate the user clicking on an additional item.
Behaved as expected.
There are only two differences between my test page and the actual deployed version.
1) We make 3 AJAX calls (2 to the same REST service, one to a different one) that always complete on time. These 3 are finished before the 4th (the trouble one) even begins.
2) We have an "auto-save" feature that does the above on a 30-second timer. This never has issues and always completes on time as expected.
Thanks SO community. Been banging my head against this for a couple days now and I'm at my wits end. :P
Do you have access to the server side?
Have you seen anything unusual in the logs?
Did you try measuring the execution time of each method in the service tier?
You might want to take a look at:
http://codemate.wordpress.com/2009/05/08/cpu-profiling-explained/
Maybe also do memory profiling, but that's not strictly necessary since you don't have an out-of-memory exception.
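If the service tier isn't instrumented yet, one quick way to act on that suggestion is a small timing wrapper; this is a hypothetical helper (not part of Jersey or any profiler), just a sketch for logging how long a suspect call takes:

import java.util.function.Supplier;

public final class Timed {
    // Runs the given action, prints how long it took, and returns its result.
    public static <T> T call(String label, Supplier<T> action) {
        long start = System.nanoTime();
        try {
            return action.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%s took %d ms%n", label, elapsedMs);
        }
    }

    public static void main(String[] args) {
        // Example usage with a stand-in for the real service-tier call.
        String result = call("loadItem", () -> "item-data");
        System.out.println(result);
    }
}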

Scheduling tasks, making sure only one task is being executed at a time

I have an application that checks a resource on the internet for new mails. If there are new mails, it does some processing on them. This means that, depending on the number of mails, it might take anywhere from a few seconds to hours of processing.
The object/program that does the processing is already a singleton, so right now I have already taken care of there really only being one instance handling the checking and processing.
However, I only have it running once at the moment, and I'd like to have it running continuously, checking for new mails roughly every 10 minutes or so in order to handle them in a timely manner.
I understand I can take care of this with Timer/TimerTask, or even better, I found a resource here: http://www.ibm.com/developerworks/java/library/j-schedule/index.html that uses Scheduler/SchedulerTask. But what I am afraid of is this: if I set it to run every 10 minutes and a previous session is still processing data, it will put the new task in a queue, waiting to be executed once the previous one is done. So, for instance, if the first run takes 5 hours, then because the scheduler was busy the whole time, it will launch 5*6-1=29 runs immediately after each other, checking for mails and/or doing some processing without giving the server a break.
Does anyone know how I can solve this?
P.S. The way I have my application set up right now: I'm using a Java servlet on my Tomcat server that's launched upon server start; it creates a singleton instance of my main program and then calls a method to do the fetching/processing. What I want is to repeat that fetching/processing every "x" amount of time (10 minutes or so), making sure that only one instance is ever doing this and that after each run there really is a rest of 10 minutes or so.
Actually, Timer + TimerTask can deal with this pretty cleanly. If you schedule something with Timer.scheduleAtFixedRate(), you will notice that the docs say it will attempt to "make up" late executions to maintain the long-term period of execution. However, this can be overcome by using TimerTask.scheduledExecutionTime(). The example in its documentation lets you figure out whether the task is too tardy to run, and you can just return instead of doing anything. This will, in effect, "clear the queue" of late TimerTask executions.
Of note: a Timer uses a single background thread to execute its tasks, so it won't spawn two copies of your task side by side.
On the side-note part, you don't have to process all 10k emails in the queue in a single run. I would suggest processing for a fixed amount of time, using TimerTask.scheduledExecutionTime() to figure out how long you have, then returning. That keeps your process more limber, cleans up the stack between runs, and, if you are doing aggregates, ensures that you don't have to rebuild too much data if, for example, the server is restarted in the middle of the task. But this recommendation is based on generalities, since I don't know what you're doing in the task :)
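A compact sketch of that skip-if-tardy pattern, modeled on the example in the TimerTask.scheduledExecutionTime() documentation; the one-minute MAX_TARDINESS_MS and the 10-minute period are illustrative values:

import java.util.Timer;
import java.util.TimerTask;

public class MailCheckTask extends TimerTask {
    // Runs that start more than a minute after their scheduled time are skipped
    // instead of being "made up" back to back.
    private static final long MAX_TARDINESS_MS = 60_000;

    @Override
    public void run() {
        if (System.currentTimeMillis() - scheduledExecutionTime() >= MAX_TARDINESS_MS) {
            return; // too tardy; let the next scheduled run handle the work
        }
        // fetch and process mails here
    }

    public static void main(String[] args) {
        new Timer("mail-checker").scheduleAtFixedRate(new MailCheckTask(), 0, 10 * 60 * 1000);
    }
}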
