I am testing a certain "functionality" that happens after login.
The test case is 500 users exercising that functionality within 5 minutes.
I can add a Synchronizing Timer after the login to ensure all 500 threads have logged in, but then all 500 "functionality" tasks will fire at once rather than over 5 minutes, which will crash the app (it thinks there's a DDoS attack and shuts down).
Right now I am handling this by adding some think time after login, slowing the login rate down to a stable figure I can predict, and then starting "functionality" at each thread's turn, as scheduled by the main scheduler + the login response time + the think time...
But that's a bit fuzzy.
Is there a way to "ramp up" tasks once already running?
I can think of two options.
The first one is to use random times. You would use the range from 0 to 300 seconds, that is [0, 300), or in milliseconds [0, 300000). Then sleep each thread based on this random time.
This approach can be a little more realistic because, for instance, in one second of the interval no threads start, while in another second 2-3 do. It should still be well balanced overall, since you won't fire all the requests at the start.
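A minimal sketch of this first option in plain Java (the names and the functionality runnable are illustrative, not from any particular framework):

    import java.util.concurrent.ThreadLocalRandom;

    public class RandomRampUp {
        // Each of the 500 worker threads calls this right after logging in:
        // it sleeps a random time in [0, 300000) ms, so the "functionality"
        // requests spread roughly uniformly over the 5-minute window.
        static void runWithRandomDelay(Runnable functionality) throws InterruptedException {
            long delayMillis = ThreadLocalRandom.current().nextLong(300_000L);
            Thread.sleep(delayMillis);
            functionality.run();
        }
    }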
The second one is to start the threads uniformly. During your setup time (login, before firing the threads) you can use something like an AtomicInteger, initializing it with new AtomicInteger(0) and calling getAndIncrement() to assign each thread a position in the range [0, 500); then, when you fire the threads, sleep 300000.0 * id / 500.0 milliseconds (one start every 600 ms) before executing the task/request.
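And a similar sketch of this second option (again with illustrative names):

    import java.util.concurrent.atomic.AtomicInteger;

    public class UniformRampUp {
        // Shared counter assigns each thread a slot in [0, 500) during setup.
        private static final AtomicInteger NEXT_SLOT = new AtomicInteger(0);

        // Slot i waits i/500 of the 300000 ms window, i.e. one start every 600 ms.
        static void runStaggered(Runnable functionality) throws InterruptedException {
            int id = NEXT_SLOT.getAndIncrement();              // position in [0, 500)
            long delayMillis = (long) (300_000.0 * id / 500.0);
            Thread.sleep(delayMillis);
            functionality.run();
        }
    }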
By default JMeter executes requests as fast as it can; you can throttle execution down to the desired throughput rate (in requests per minute) using the Constant Throughput Timer.
Example Test Plan would look like:
Thread Group
    Login
        Synchronizing Timer
    Functionality
        Constant Throughput Timer
The Constant Throughput Timer follows JMeter Scoping Rules, so you can apply it either to a single sampler or to a group of samplers.
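For the 500-users-in-5-minutes scenario above, the timer's target throughput would work out to 500 / 5 = 100 samples per minute.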
I am doing calculations in milliseconds, and I really do not want my thread to spend more time doing time calculations than the job it is assigned to do. However, I want to implement something that:
1- It should not generate more than n requests per second
2- If it has generated fewer, it should start at zero for the next second (obviously :D)
I am trying to do some performance benchmarking where my goal is to give all the CPU to processing only, and not to time computations after every request. Roughly, I am processing:
08:36 - 171299
08:37 - 170970
08:38 - 163763
I want to make sure I do not make more than 160000 requests per minute here. How to achieve that is the problem.
Thanks in advance!
You can combine a ScheduledExecutorService to run some code every second with this answer to set a timeout on that code. In the end, your runnable with the 1-second timeout should generate up to n requests, and if it times out, it will start the next second with a fresh context.
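A minimal sketch of that combination, assuming a hypothetical sendRequest() stands in for the real work:

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class PerSecondLimiter {
        static final int MAX_PER_SECOND = 160_000 / 60; // per-second share of the per-minute budget

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            ExecutorService worker = Executors.newSingleThreadExecutor();

            scheduler.scheduleAtFixedRate(() -> {
                // A fresh batch each second, so the counter implicitly "starts at zero".
                Future<?> batch = worker.submit(() -> {
                    for (int i = 0; i < MAX_PER_SECOND && !Thread.currentThread().isInterrupted(); i++) {
                        sendRequest();
                    }
                });
                try {
                    batch.get(1, TimeUnit.SECONDS);   // 1-second timeout on the batch
                } catch (TimeoutException e) {
                    batch.cancel(true);               // abandon the rest; next second starts fresh
                } catch (InterruptedException | ExecutionException e) {
                    Thread.currentThread().interrupt();
                }
            }, 0, 1, TimeUnit.SECONDS);
        }

        static void sendRequest() { /* placeholder for the real request */ }
    }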
My Google App Engine application is adding a large number of deferred tasks to a task queue. The tasks are scheduled to run every x seconds. If I understand the bucket-size property b correctly, a high value would prevent the deferred tasks from running until b tasks have been added. However, there is a close-to-realtime requirement that the tasks run as scheduled. I do not want the tasks to be blocked until the bucket size is reached; instead they should run as close to their scheduled time as possible.
To support this use case, should I use a bucket-size of 1 and a rate of 500 (which is the current maximum rate)? Which other approaches exist to support this? Thanks!
The bucket size does not prevent tasks from running individually. It plays a different role.
Suppose you have an empty queue with rate of 500 tasks per second, and several hours where no tasks are added or started. Then suddenly a large number of tasks are added at once. How many of these tasks would you like started immediately? Set this number as your bucket size. For example, with a bucket size of 1000, 1000 tasks will be started immediately (then 500 per second going forward).
How does this work? The bucket is topped up with 500 tokens every second (the queue's rate), up to a maximum of the bucket size. When tasks are available to start, they will only be started while the bucket is not empty, and one token is removed from the bucket as each task starts.
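As a rough model of those mechanics (an illustration, not App Engine's actual implementation):

    // Token bucket: topped up at `rate` tokens per second, capped at
    // `bucketSize`; a task may only start if a token is available.
    class TokenBucket {
        private final int bucketSize;        // max burst, e.g. 1000
        private final int rate;              // refill per second, e.g. 500
        private double tokens;
        private long lastRefillMillis = System.currentTimeMillis();

        TokenBucket(int bucketSize, int rate) {
            this.bucketSize = bucketSize;
            this.rate = rate;
            this.tokens = bucketSize;        // a full bucket allows the initial burst
        }

        // Called when a task is available to start; returns false if it must wait.
        synchronized boolean tryStartTask() {
            long now = System.currentTimeMillis();
            tokens = Math.min(bucketSize, tokens + rate * (now - lastRefillMillis) / 1000.0);
            lastRefillMillis = now;
            if (tokens >= 1.0) {
                tokens -= 1.0;               // one token removed per task started
                return true;
            }
            return false;
        }
    }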
You should NOT use task queues (TQs) for deferred tasks that must run close to real time on the assumption that the bucket/rate settings will assure high throughput. There have been several discussion threads in Google Groups about infrequent delays in task start times that last minutes or more. Bucket size and rate will not have an effect on this -- your TQ tasks will simply sit there while your high-throughput TQ is idle. To date I have not seen any explanation from Google as to why this occurs. Again, if you use TQs for close-to-real-time tasks, you MUST handle as an exception the infrequent times when your tasks will be delayed for minutes before starting. (I in fact do this, and have not yet been negatively affected, but you have to have code in place to handle a result = delayed task.) My great hope is that with the new server/application testing underway, Google will find a way to kill this incredibly big issue with TQs (fingers crossed).
I'm working on a job dispatcher for App Engine, and the default scheduler always winds up firing up 3-4 instances that do all the work, plus some overflow instances that might take thousands of tasks, or only a couple, and then sit there burning CPU doing nothing.
My task involves processing jobs for many differently sized domains; sometimes there's huge throughput, and other times it's one user with 10,000 models to update. If I turn the normal App Engine task scheduler loose, it fails in two ways: 1) backends never shut down, and when memory hits the cap, Java GC makes an instance thrash and act like it's almost a zombie, yet it never shuts down {and still takes/holds jobs}; and 2) many domains have a single user that takes far longer than all the others to process, and this keeps a backend alive long after the rest of the domain has finished.
These tasks must run throughout the day, and it takes multiple backends to handle the fanout, so I can't just dump them all on a B8 and call it a day; we need a dispatcher to manage how tasks get allocated to backends.
Now, I don't want to pay datastore ops on every task just to save a few minutes of CPU time, so my plan of attack {please critique} is to use a static ConcurrentHashMap in RAM. Start each run() in a try, and have every deferred task put its [hashcode, startTime] in at startup and remove(hashcode) in a finally. There will be one such map per backend instance that's running jobs, wrapped in a method, BackendCounter.addToLiveMap(this); its .size() serves as a running total of how many jobs are alive on that backend {with a timestamp to detect zombie jobs that run >10 minutes}. The job dispatcher can fire off a worker thread per instance to monitor how many jobs, excluding itself, are running in that instance, and keep a ranked list in memcache of which instances have how many tasks alive. If one instance drops below a threshold of X live tasks, pick an overflow instance to defer to, then have the method BackendCounter.addToLiveMap(this) throw an exception I can catch to tell jobs to just reschedule themselves to a new instance {ChangeInstanceException#getNewTarget()}. This way I can prevent barely-used instances from getting new jobs so they have a chance to shut down, paying only for some memcache ops, while fanout only pays a write and a delete to a static map.
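A rough sketch of that BackendCounter idea (the class, method, and exception names come from the question; the drain flag and overflow target are illustrative placeholders):

    import java.util.concurrent.ConcurrentHashMap;

    // Thrown to tell a deferred task to reschedule itself on another backend.
    class ChangeInstanceException extends Exception {
        private final String newTarget;
        ChangeInstanceException(String newTarget) { this.newTarget = newTarget; }
        String getNewTarget() { return newTarget; }
    }

    // One static map per backend instance: task hashcode -> start time.
    class BackendCounter {
        private static final ConcurrentHashMap<Integer, Long> LIVE = new ConcurrentHashMap<>();

        // Set by the dispatcher's monitor thread when this instance should drain.
        static volatile boolean drainMode = false;

        static void addToLiveMap(Object task) throws ChangeInstanceException {
            if (drainMode) {
                throw new ChangeInstanceException(pickOverflowInstance());
            }
            LIVE.put(task.hashCode(), System.currentTimeMillis());
        }

        static void removeFromLiveMap(Object task) {
            LIVE.remove(task.hashCode());
        }

        // Running total of live jobs on this backend; the start times let a
        // monitor spot zombie jobs that have been running for >10 minutes.
        static int liveCount() { return LIVE.size(); }

        static String pickOverflowInstance() { return "overflow-0"; } // placeholder
    }

    // Inside each deferred task's run():
    //   try {
    //       BackendCounter.addToLiveMap(this);
    //       // ... do the work ...
    //   } catch (ChangeInstanceException e) {
    //       // reschedule this task onto e.getNewTarget()
    //   } finally {
    //       BackendCounter.removeFromLiveMap(this);
    //   }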
That takes care of problem two, which is the instance-hour killer. As for problem one, which is how to prevent one instance {usually instance 0 or 1} from hitting peak memory and starting to turn toward the dark side, I am torn between two options.
On the one hand, I can use the expected call to BackendCounter.addToLiveMap(this), which throws ChangeInstanceException, and simply check memory:
if (((float) Runtime.getRuntime().freeMemory() / Runtime.getRuntime().totalMemory()) < 0.1) // i.e. less than 10% of the heap free
    throw new ChangeInstanceException(getOverflowInstance());
This naive approach will simply tell any instance approaching its memory limit to send all new work elsewhere.
On the other hand, I could keep instances 0 and 1 for handling overflow {and toggle which of the two gets new jobs, to give them chances to shut down}, then send the fanout to instances 2+, which will only run until they drop to, say, 10 or 15 jobs in parallel. The fanout is pretty consistent and only takes a couple of minutes, so instances 2, 3 and, at most, 4 will need to turn on, and be given time to turn off while a different instance gets hit with more load.
The only thing I'm afraid of is jobs starting to bounce from one instance to another, which can probably be overcome with a redirect-count limit that skips throwing ChangeInstanceException.
Any thoughts or advice are greatly appreciated.
I have an application that checks a resource on the internet for new mails. If there are new mails, it does some processing on them. This means that, depending on the amount of mail, processing might take anywhere from a few seconds to hours.
Now the object/program that does the processing is already a singleton, so right now I have already taken care of there really only being one instance that handles the checking and processing.
However, it only runs once at the moment, and I'd like to have it running continuously, checking for new mails every 10 minutes or so, to handle them in a timely manner.
I understand I can take care of this with Timer/TimerTask, or even better, I found a resource here: http://www.ibm.com/developerworks/java/library/j-schedule/index.html that uses Scheduler/SchedulerTask. But what I am afraid of is this: if I set it to run every 10 minutes and a previous session is still processing data, it will put the new task in a queue, waiting to be executed once the previous one is done. So, for instance, if the first run takes 5 hours, then, because it was busy the whole time, it will launch 5*6-1=29 runs immediately after each other, checking for mails and/or doing some processing without giving the server a break.
Does anyone know how I can solve this?
P.S. The way I have my application set up right now is that I'm using a Java servlet on my Tomcat server, launched upon server start, which creates a Singleton instance of my main program and then calls some method to do the fetching/processing. What I want is to repeat that fetching/processing every "x" amount of time (10 minutes or so), making sure that only one instance is doing this and that after each run there really is a rest of 10 minutes or so.
Actually, Timer + TimerTask can deal with this pretty cleanly. If you schedule something with Timer.scheduleAtFixedRate(), you will notice that the docs say it will attempt to "make up" late executions to maintain the long-term period. However, this can be overcome by using TimerTask.scheduledExecutionTime(). The example therein lets you figure out whether the task is too tardy to run, and you can just return instead of doing anything. This will, in effect, "clear the queue" of TimerTasks.
Of note: Timer uses a single thread to execute its tasks, so it won't spawn two copies of your task side by side.
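A minimal sketch of that tardiness check (the one-minute threshold and the 10-minute period are illustrative):

    import java.util.Timer;
    import java.util.TimerTask;

    public class MailCheckTask extends TimerTask {
        private static final long MAX_TARDINESS_MS = 60_000; // skip runs over a minute late

        @Override
        public void run() {
            // Executions queued up behind a long run arrive tardy; skipping
            // them effectively clears the backlog instead of replaying it.
            if (System.currentTimeMillis() - scheduledExecutionTime() > MAX_TARDINESS_MS) {
                return;
            }
            // ... fetch and process mails ...
        }

        public static void main(String[] args) {
            new Timer().scheduleAtFixedRate(new MailCheckTask(), 0, 10 * 60 * 1000L);
        }
    }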
As a side note, you don't have to process all 10k emails in the queue in a single run. I would suggest processing for a fixed amount of time, using TimerTask.scheduledExecutionTime() to figure out how long you have, then returning. That keeps your process more limber, cleans up the stack between runs, and, if you are doing aggregates, ensures that you don't have to rebuild too much data if, for example, the server is restarted in the middle of the task. But this recommendation is based on generalities, since I don't know what you're doing in the task :)
Been experimenting with Jmeter, and I'd like to know the best way to accomplish:
20 users logging onto an application over 20 minutes, performing some actions for another 20 minutes, and then logging off over a period of 20 minutes. I.e. have the 20 users logging on, and then, once ALL of them are logged on, begin a 20-minute timer. Once the 20 minutes are up, start logging off the ones who logged on earliest.
I realise this MAY or MAY NOT BE a realistic testing scenario, but I'd like to see if it's possible.
At the moment I have a test plan whereby a user logs on, performs some actions, and then logs off. I can't see how to ramp up and ramp down.
There's an option in the Test Plan, "Run Thread Groups consecutively". Check it.
Then add 3 thread groups to your test plan. I'd suggest a plain Thread Group for the first (20 threads, loop count 1, ramp-up time 1), an Ultimate Thread Group (20 threads starting immediately and holding the load for 20 min) for the second, and a plain Thread Group again for the third (20 threads, loop count 1, ramp-up time 1).
Place the appropriate samplers inside each thread group - the first just logs in, the second does the actions, the third logs off.
That's it. If you have any troubles - let me know.
You'll need several thread groups in JMeter, starting and running at different intervals; that way you can ensure that the users who start first end first.
Also see a related question on this.
You can use number of users = 20 and ramp-up time = 1200 sec (1 per minute), with a difference of 20 min between each thread's start and end time, to achieve that.
I think I had a similar problem in the past.
Here's what I did:
First, set your thread group to 20 threads with a ramp-up period of 1200 seconds (one new thread every 60 seconds).
After the login, put a "Test Action" sampler (in the sampler menu):
target = current thread, action = pause, duration = 20 minutes (1,200,000 ms), or more if you want to be safe.
After this Test Action, put all your navigation requests.
Once your navigation is done, put another "Test Action" with the same settings as the previous one
(target = current thread, action = pause, 20 minutes (1,200,000 ms)).
Put the logout request after that sampler.
This should cover your case.
Note that the Test Action sampler just pauses the thread, so the first thread to start should be the first thread to end.
If you want to scale it to 200 users, just keep the ramp-up period at 1200 seconds (one new thread every 6 seconds).
Hope it helps.