I have a Java application (based on Dropwizard) that accepts HTTP requests, performs some in-memory computations, and returns HTTP responses.
I have another Java application (JMeter) which load-tests the first one by sending requests with random parameters.
When I run the test (both the test tool and the application on the same laptop) I get poor performance: throughput is low (25 requests per second (rps)) and latency is high (150 ms). What's interesting is that the application's CPU usage is low too (10%). It seems that the application spends most of its time waiting on network calls.
But when some other network activity is going on (not all of it; playing a YouTube video doesn't help), for instance a Skype call or a local auto-refreshing Grafana dashboard, the situation changes: the application's CPU usage increases to 50%, throughput increases to 70-90 rps, and latency decreases to 50 ms.
It seems that unrelated network activity somehow speeds up network calls between the application and the test. Can anybody explain this behavior?
Win 10, Java 8.
Are you on a computer with power saving settings? (Such as a laptop on battery power?)
Try turning off all kinds of power saving, on the CPU as well as other devices (for example, network adapter power management in Device Manager).
I have an EC2 instance running a long job. The job should take about a week, but after a week it is only 31% done. I believe this is because of the low average CPU usage (less than 1%) and because the instance very rarely receives a GET request (just me checking the status).
Reason for the low CPU:
This Java service performs many GET requests, then processes a batch of pages once it has a few hundred (not arbitrary; there is a reason they are all required first). But to avoid HTTP 429 (Too Many Requests) I must space my GET requests apart using Thread.sleep(x) and synchronization. This results in very low CPU usage that spikes every so often.
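For illustration, a minimal sketch of that pacing pattern (PacedFetcher and fetchPage are hypothetical names, and the one-second spacing is an assumption):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PacedFetcher {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Issue one GET per second to stay under the HTTP 429 limit;
        // the thread sleeps between requests, so average CPU stays near zero.
        scheduler.scheduleAtFixedRate(PacedFetcher::fetchPage, 0, 1, TimeUnit.SECONDS);
    }

    private static void fetchPage() {
        // hypothetical: perform one GET request and buffer the page
    }
}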
I think Amazon's preemptive systems conclude that my service is idly waiting, when in fact it needs to wake up at a specific moment. I also notice that if I check the status more often, it goes quicker.
How do I stop Amazon's preemptive system from thinking my service isn't doing anything?
I have thought of two solutions, but neither seems ideal:
Have another process running to keep the CPU at ~25%, which would really only consist of:
// Sleep 300 ms, then busy-wait for 100 ms, keeping average CPU near 25%.
// (Note: java.time.LocalDateTime has no plusMillis method; Instant does.)
while (true) {
    Thread.sleep(300); // throws InterruptedException; declare or catch it
    Instant until = Instant.now().plusMillis(100);
    while (Instant.now().isBefore(until)) {
        // busy-wait
    }
}
However, this just seems like an unnecessary use of resources.
Have a process on my laptop perform a GET request to the AWS service every 10 minutes. But one of the reasons I put the job on AWS was to free up my laptop, although this would use orders of magnitude fewer of my laptop's resources than running the service locally.
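For the second option, a minimal sketch of such a pinger (the URL is a placeholder, and HttpURLConnection is just one way to issue the GET):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StatusPinger {
    public static void main(String[] args) {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try {
                // placeholder URL; point this at the real status endpoint
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://example.com/status").openConnection();
                System.out.println("status: " + conn.getResponseCode());
                conn.disconnect();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 10, TimeUnit.MINUTES);
    }
}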
Is one of these solutions more desirable than the other? Is there another solution which would be more appropriate?
Many Thanks,
Edit: note, I use the free-tier services only.
We are using Groovy, Grails, WebSocket, and a REST API in our application. Our production server shows high CPU as soon as 10+ users access the app simultaneously. On checking the server health I can see some logging activity showing high CPU (screenshot attached). I wanted to know whether excessive logging contributes to high CPU utilization. There could certainly be other reasons for the high CPU as well.
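As one data point, unguarded log calls can burn CPU on message construction even when the log level is disabled. A minimal sketch of the usual mitigation, assuming SLF4J-style logging (your actual framework may differ):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {
    private static final Logger log = LoggerFactory.getLogger(LoggingExample.class);

    void handleRequest(Object payload) {
        // Bad: builds the string (and may call payload.toString()) on every
        // request, even when DEBUG is disabled.
        log.debug("handling payload " + payload);

        // Better: parameterized logging defers formatting until the level check passes.
        log.debug("handling payload {}", payload);
    }
}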
I'm reading about virtual memory swapping, and it says that pages of memory can be swapped out when an application becomes idle. I've tried to google what that means but haven't found much detailed information, except for this Stack Overflow answer:
Your WinForms app is driven by a message loop that pulls messages out of a queue. When that queue is emptied, the message loop enters a quiet state, sleeping efficiently until the next message appears in the message queue. This helps conserve CPU processing resources (cycles wasted spinning in a loop take CPU time away from other processes running on the machine, so everything feels slower) and also helps reduce power consumption / extend laptop battery life.
So does the application become idle when there are no messages in the message queue?
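To make the quoted behaviour concrete, here is a minimal Java analogue of a message loop: BlockingQueue.take() blocks until the next message arrives, so the thread sleeps and uses no CPU while the queue is empty.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessageLoop {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        while (true) {
            // Blocks efficiently while the queue is empty: the thread is
            // descheduled by the OS instead of spinning.
            String message = queue.take();
            System.out.println("processing " + message);
        }
    }
}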
The operating system decides what "idle" means. In general, it means that the application isn't actively using system resources (processor cycles, I/O operations, etc.).
However, that doesn't mean an application's pages will not be swapped out if the application isn't idle. There may be many active applications contending for the same limited physical memory, and the OS may be forced to swap out some pages belonging to one active application to make room for another.
I've got a Java app running on Ubuntu. The app listens on a socket for incoming connections and creates a new thread to process each connection. The app receives incoming data on each connection, processes it, and sends the processed data back to the client. Simple enough.
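For context, a minimal sketch of that architecture (the port and the process step are placeholders for the real values and computation):

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) { // port is an assumption
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start(); // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(process(line)); // send processed data back to the client
            }
        } catch (IOException ignored) {
        }
    }

    private static String process(String data) {
        return data.toUpperCase(); // placeholder for the real processing
    }
}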
With only one instance of the application running and up to 70 simultaneous threads, the app drives CPU usage to over 150% and has trouble keeping up with the incoming data. This is running on a Dell 24-core system.
Now if I create 3 instances of my application and split the incoming data across them, the maximum overall CPU on the same machine may only reach 25%.
The question is: why would one instance of the application use 6 times the CPU of 3 instances on the same machine, each processing one third of the data?
I'm not a Linux guy, but can anyone recommend a tool to monitor system resources and figure out where the bottleneck is occurring? Any clues as to why 3 instances processing the same total amount of data as 1 instance would use so much less overall CPU?
In general, this should not be the case. Maybe you are reading the CPU usage wrong. Try the top, htop, ps, and vmstat commands to see what's going on.
I can imagine one reason for such behaviour: resource contention. If you have some sort of lock or busy loop that manifests itself only in the single-instance setup (max connections, or max threads), then your system may not parallelize processing optimally and will spend time waiting on resources. I suggest connecting something like jconsole to your Java processes to see what's happening. For instance, a shared lock like the one sketched below would effectively serialize all 70 threads.
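A purely hypothetical illustration (not the asker's code):

public class SharedState {
    private static final Object LOCK = new Object();

    static String process(String data) {
        // All 70 threads in one JVM queue up here one at a time; with three
        // JVMs, only a third of the threads contend for each process's lock.
        synchronized (LOCK) {
            return expensiveTransform(data);
        }
    }

    private static String expensiveTransform(String data) {
        return new StringBuilder(data).reverse().toString(); // stand-in work
    }
}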
As a general recommendation, check how many threads are available per JVM and whether you are using them correctly. Maybe you haven't allocated enough memory to the JVM, so it is garbage collecting too often. If you use database operations, check for bottlenecks there too. Profile, find the place where the app spends most of its time, and compare 1 vs. 3 instances in terms of the percentage of time spent in that function.
I'm trying to speed-test Jetty (to compare it with Apache) for serving dynamic content.
I'm testing this using three client threads requesting again as soon as a response comes back.
These are running on a local box (OS X 10.5.8, MacBook Pro). Apache is pretty much straight out of the box (XAMPP distribution), and I've tested Jetty 7.0.2 and 7.1.6.
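For reference, a minimal sketch of one such closed-loop client thread (the target URL is whichever page is being tested):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ClosedLoopClient implements Runnable {
    public void run() {
        try {
            while (true) {
                long start = System.nanoTime();
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://localhost:8080/hello/").openConnection();
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) { /* drain the response */ }
                }
                // request again as soon as the response comes back
                System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) new Thread(new ClosedLoopClient()).start();
    }
}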
Apache is giving me spiky times: response times up to 2000 ms, but an average of 50 ms, and if you remove the spikes (about 2%) the average is 10 ms per call. (This was to a PHP hello-world page.)
Jetty is giving me no spikes, but response times of about 200 ms.
This was calling the localhost:8080/hello/ page that is distributed with Jetty, starting Jetty with java -jar start.jar.
This seems slow to me, and I'm wondering if it's just me doing something wrong.
Any suggestions on how to get better numbers out of Jetty would be appreciated.
Thanks
Well, since I am successfully running a site with some traffic on Jetty, I was pretty surprised by your observation.
So I just tried your test. With the same result.
So I decompiled the Hello servlet which comes with Jetty. And I had to laugh: it really includes the following line:
Thread.sleep(200L);
You can see for yourself.
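Roughly, the shape of that demo servlet (a sketch, not the exact decompiled source): the artificial delay dominates any latency measurement against it.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            Thread.sleep(200L); // baked-in delay: every response takes >= 200 ms
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello");
    }
}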
My own experience with Jetty performance: I ran multi-threaded load tests against my real-world app and saw a throughput of about 1000 requests per second on my dev workstation...
Note also that your speed test is really just a latency test, which is fine so long as you know what you are measuring. But Jetty does trade off latency for throughput, so there are often servers with lower latency but also lower throughput.
Realistic traffic for a webserver is not 3 very busy connections - 1 browser will open 6 connections, so that represents half a user. More realistic traffic is many hundreds or thousands of connections, each of them mostly idle.
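As a sanity check via Little's law, a closed-loop test's throughput is bounded by concurrency divided by latency: 3 connections at ~200 ms per response can never exceed 3 / 0.2 s = 15 requests per second, no matter how fast the server is.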
Have a read of my blogs on this subject:
https://webtide.com/truth-in-benchmarking/
and
https://webtide.com/lies-damned-lies-and-benchmarks-2/
You should definitely check it with a profiler. Here are instructions on how to set up remote profiling with Jetty:
http://sujitpal.sys-con.com/node/508048/mobile
Speeding up or performance-tuning any application or server is really hard to get right, in my experience. You'll need to benchmark several times with different workload models to define what your peak load is. Once you define the peak load for the configuration/environment mixture you need to tune and benchmark, you might have to run 5+ iterations of your benchmark. Check the configuration of both Apache and Jetty in terms of the number of worker threads available to process requests, and get them to match if possible. Here are some recommendations:
Consider the differences between the two environments (GC in Jetty: consider tuning the min and max memory thresholds to the same size, then run your test).
The load should come from another box. If you don't have a second box/PC/server, take your CPU/core count into account and pin the test to specific CPUs, doing the same for Jetty/Apache. This applies only if you can't get another machine to act as the stress agent.
Run several workload models.
For modeling the test, use the following two stages:
Stage 1: run one thread per configuration for 30 minutes, then start with 1 thread and ramp up to 5, with a 10-minute interval between each increase.
Stage 2: based on the metrics from Stage 1, define a number of threads for the test and run that many threads concurrently for 1 hour. (A sketch of such a ramp follows.)
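A minimal sketch of the Stage 1 ramp (issueRequestAndRecord is a hypothetical stub; the sleep stands in for issuing a real request and logging its response time):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RampLoad {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int threads = 1; threads <= 5; threads++) {
            pool.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    issueRequestAndRecord();
                }
            });
            TimeUnit.MINUTES.sleep(10); // interval before adding the next thread
        }
        pool.shutdownNow();
    }

    private static void issueRequestAndRecord() {
        try {
            Thread.sleep(50); // stand-in for one HTTP request + response-time logging
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}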
Correlate the metrics (response times) from your testing app with the resources of the server hosting the application (use sar, top, and other Unix commands to track CPU and memory); some other process might be impacting your app. (Memory matters mainly for Apache; Jetty is constrained by the JVM memory configuration, so its memory usage should not change once the server is up and running.)
Be aware of the HotSpot compiler: methods have to be called many times (on the order of 1,000-10,000 invocations, depending on JVM settings) before they are compiled into native code.
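A minimal illustration of why warm-up matters when timing (the iteration counts are arbitrary, and work() is a stand-in for the code path being measured):

public class WarmupDemo {
    static int work(int x) {
        return x * 31 + 7; // stand-in for the real code path
    }

    public static void main(String[] args) {
        int sink = 0;
        // Warm-up: give HotSpot enough invocations to JIT-compile work().
        for (int i = 0; i < 20_000; i++) sink += work(i);

        // Only now take the measurement, so compiled code is being timed.
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) sink += work(i);
        System.out.println("sink=" + sink + ", elapsed ns=" + (System.nanoTime() - start));
    }
}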