java actuator cpu utilization

I want to know the JVM CPU usage rate.
So we used the Java actuator's process.cpu.usage.
However, this value was different from the CPU utilization of the container.
So I wonder if "process.cpu.usage" is the usage rate.
And if it is, what limit value is the utilization rate calculated against?
I can't find any official documentation or source code related to it.
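As far as I can tell (I could not confirm this in the official docs either), the actuator metric process.cpu.usage comes from Micrometer and is backed by com.sun.management.OperatingSystemMXBean#getProcessCpuLoad(), which reports the process's CPU time as a fraction (0.0 to 1.0) of the host's total CPU capacity, not of any container CPU limit. A minimal probe to compare against the metric, assuming a HotSpot JVM:

import java.lang.management.ManagementFactory;

public class ProcessCpuProbe {
    public static void main(String[] args) throws InterruptedException {
        // Assumption: this is the same MXBean the metric reads from (HotSpot only).
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        for (int i = 0; i < 5; i++) {
            // Fraction of the host's total CPU capacity; the first reading may be
            // negative because the JVM has not collected enough samples yet.
            System.out.printf("process cpu load: %.4f%n", os.getProcessCpuLoad());
            Thread.sleep(5000);
        }
    }
}

If the container has a CPU quota lower than the host's full capacity, a value computed this way will not match the container's reported utilization.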

Related

What is the difference between CPU Usage and CPU Utilization?

To evaluate my Java code I want to use VisualVM. I noticed that it shows CPU Usage, and when I search for that term I am also shown CPU Utilization. Is there a definitive difference between the two? I want to talk about CPU usage in the context of real-time systems, but nothing specific comes up and the two terms seem to be used almost interchangeably.
Utilisation is of the physical CPU and usage is of the logical CPU;
this is based on CPU hyper-threading.
I found this in the VMware community:
You are in a shared environment when using a VM, and other instances besides your own are competing for time on the same core. For example, you might be using only part of the core's time while the rest is used by other instances. In this case, CPU usage will take into account both your usage and the usage of the other instances when reporting. In other words, CPU usage is calculated as the percentage your usage represents of the total.
CPU utilization takes into account only your metrics - the amount of time your CPU is busy or idle.

Increasing heap memory utilization on Java Tomcat application

I have a Java application running on a Tomcat7 instance. I am using Java 8.
Now within my application there is a webservice of the format : http://zzz.xxx.zzz.xxx/someWebservice?somepar1=rrr&somePar2=yyy. This returns a String value of max 10 characters.
I have now started load testing this service using JMeter. I am putting a load of 100 concurrent connections and getting a throughput of roughly 150 requests/second. The server is a 4 core-6GB machine and is only running Tomcat (application instance). The database instance is running on a separate machine. The JVM is running with min 2GB and max 4GB memory allocation. Max Perm Size is 512 MB. Tomcat has enough threads to cater to my load (max connections/threads/executor values have been set up correctly).
I am now trying to optimize this service, and in order to do so I am trying to analyze memory consumption. I am using JConsole for this. My CPU usage is not a concern, but when I look at the memory (heap) usage, I feel something is not right. What I observe is a sawtooth-shaped graph, which I know is correct as regular GC clears heap memory.
My concern is that this sawtooth-shaped graph has an upward trend. I mean that the troughs in the sawtooth seem to be increasing over time. With this trend my server eventually reaches max heap memory in an hour or so and then stabilizes at 4GB. I believed that if I am putting a CONSTANT load, then the heap memory utilization should have a constant trend, i.e. a sawtooth graph with its peaks and troughs aligned. If there is an upward trend, I suspect a memory leak of objects which accumulate over time and, since GC isn't able to clear them, cause an increase over a period of time. I am attaching a screenshot.
[Screenshot: heap usage graph]
Questions:
1). Is this normal behavior? If yes, then why does the heap continuously increase despite no change in load? I don't believe that a load of 100 threads should saturate 4GB heap in roughly 30 minutes.
2). What could be the potential reasons here? Do I need to look at memory leaks? Any JVM analyzer apart from JConsole which can help me pinpoint the variables which the GC is unable to clear?
The see-saw pattern most likely stems from minor collections; the dip around 14:30 is a major collection, which you did not take into account in your reasoning.
Your load may simply be so low that it needs a long time to reach a stable state.
With this trend eventually my server reaches max heap memory in an hour or so and then stabilizes at 4GB.
That supports this conclusion, provided you're not seeing any OOMEs.
But there's only so much one can deduce from such charts. If you want to know more, you should enable GC logging and inspect the log output instead.
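For Java 8, which is what the question uses, GC logging can be enabled with flags along these lines (the log file name is only an example):
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:gc.log
The resulting log distinguishes minor from full collections and shows heap occupancy before and after each one, which makes an upward trend in the troughs much easier to confirm or rule out.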

Limit total memory consumption of Java process (in Cloud Foundry)

Related to these two questions:
How to set the maximum memory usage for JVM?
What would cause a java process to greatly exceed the Xmx or Xss limit?
I run a Java application on Cloud Foundry and need to make sure that the allocated memory is not exceeded. Otherwise, and this is the current issue, the process is killed by Cloud Foundry monitoring mechanisms (Linux CGROUP).
The Java Buildpack automatically sets sane values for -Xmx and -Xss. By tuning the arguments and configuring the (maximum) number of expected threads, I'm pretty sure that the memory consumed by the Java process should be less than the upper limit which I assigned to my Cloud Foundry application.
However, I still experience Cloud Foundry "out of memory" errors (NOT the Java OOM error!):
index: 3, reason: CRASHED, exit_description: out of memory, exit_status: 255
I experimented with the MALLOC_ARENA_MAX setting. Setting the value to 1 or 2 leads to slow startups. With MALLOC_ARENA_MAX=4 I still saw an error as described above, so this is no solution for my problem.
Currently I test with very tight memory settings so that the problem is easier to reproduce. However, even with this, I have to wait about 20-25 minutes for the error to occur.
Which arguments and/or environment variables do I have to specify to ensure that my Java process never exceeds a certain memory limit? Crashing with a Java OOM Error is acceptable if the application actually needs more memory.
Further information regarding MALLOC_ARENA_MAX:
https://github.com/cloudfoundry/java-buildpack/pull/160
https://www.infobright.com/index.php/malloc_arena_max/#.VmgdprgrJaQ
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
EDIT: A possible explanation is this: http://www.evanjones.ca/java-bytebuffer-leak.html. As I currently see the OOM issue when doing lots of outgoing HTTP/REST requests, these buffers might be to blame.
Unfortunately, there is no way to definitively enforce a memory limit on the JVM. Most of the memory regions are configurable (-Xmx, -Xss, -XX:MaxPermSize, -XX:MaxMetaspaceSize, etc.) but the one you can't control is native memory. Native memory contains a whole host of things, from memory-mapped files to native libraries to JNI code. The best you can do is profile your application, find out where the memory growth is occurring, and either solve the growth or give yourself enough breathing room to survive.
Certainly unsatisfying, but in the end not much different from other languages and runtimes that have no control over their memory footprint.
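As a rough sketch only (the values are illustrative, not recommendations), the configurable regions mentioned above can be pinned down with JVM arguments like these, with MALLOC_ARENA_MAX set separately as an environment variable on the application:
-Xmx400m -Xss256k -XX:MaxMetaspaceSize=64m -XX:ReservedCodeCacheSize=64m -XX:MaxDirectMemorySize=10m
MALLOC_ARENA_MAX=2
The sum of these regions (with -Xss multiplied by the expected thread count) still has to leave headroom below the container limit for the native allocations the answer mentions.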

Extracting CPU usage, memory usage and network utilization using Java on Windows/Linux

I'm trying to develop a small testing application that runs a few commands and, every X seconds, measures CPU usage, memory usage and network utilization as seen in Windows Task Manager.
The application will be written in Java and is supposed to run on both Windows and Linux.
I have found that many people use the Sigar API in order to extract system information easily.
I found out how to use it to extract memory usage using
Mem mem = sigar.getMem();
mem.getUsed();
I'm still not sure what the difference is between memory used and actual memory used; can someone elaborate on this?
Also, I'm still not sure how to extract CPU usage and network utilization.
For CPU I have tried:
CpuPerc cpu = sigar.getCpuPerc();
double combined = cpu.getCombined();
but the numbers seem very different from what I'm seeing in the Task Manager.
Which API should I use to get the desired results?
For network utilization I have no clue.
Memory used refers to the allocated memory size; actual memory used is the portion of that allocation which is actually in use - it subtracts kernel and other reserved areas from the total.
For CPU, I also found the values to be different and saw a blog post that suggested multiplying by 100. So I did, and now the values are quite similar...
http://code.google.com/p/starfire/source/browse/trunk/starfire/src/main/java/es/upm/dit/gsi/starfire/capability/systemMonitor/CpuMonitor.java?spec=svn279&r=279
For network, I am still searching.
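Building on the multiply-by-100 suggestion, here is a rough sketch of a sampler, assuming the standard org.hyperic.sigar API; network "utilization" would still have to be derived by sampling the byte counters twice and dividing the delta by the interval (and the link speed):

import org.hyperic.sigar.CpuPerc;
import org.hyperic.sigar.NetInterfaceStat;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;

public class UsageSampler {
    public static void main(String[] args) throws SigarException {
        Sigar sigar = new Sigar();

        // getCombined() is a fraction between 0.0 and 1.0; scale it to get a
        // Task-Manager-style percentage.
        CpuPerc cpu = sigar.getCpuPerc();
        System.out.printf("cpu: %.1f%%%n", cpu.getCombined() * 100);

        // Network counters are cumulative bytes since the interface came up.
        for (String name : sigar.getNetInterfaceList()) {
            NetInterfaceStat stat = sigar.getNetInterfaceStat(name);
            System.out.printf("%s rx=%d tx=%d%n", name, stat.getRxBytes(), stat.getTxBytes());
        }
    }
}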

Explain observed JVM Garbage Collection on JBoss Server

With VisualVM I am observing the following heap usage on a JBoss server:
The server is started with the following (relevant) JVM options:
-Xrs -Xms3072m -Xmx3072m -XX:MaxPermSize=512m -XX:+UseParallelOldGC -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
And we currently also have enabled GC logging:
-XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:log\gc.log
Basically I am happy with the observed pattern, since it looks like we don't have any memory leaks (the pattern repeats itself over days).
However I am wondering if there is room for optimization?
First of all, I don't understand why garbage collection already kicks in when the heap usage reaches about 2GB. It looks to me like it could kick in later, since the heap has 3GB available.
Furthermore, I would be interested in tips regarding the observed heap usage pattern and the JVM options used:
Does the observed pattern allow me to draw conclusions about the GC strategy in use (UseParallelOldGC)? Is this strategy the right one, or should I try another one given the observed heap usage?
Can I optimize the GC process, so that the full heap size (3GB) is used?
Right now it looks like the full 3GB are never used, should I reduce the Xms/Xmx to 2.5GB?
Are there any obvious GC optimizations that I am missing? Like tuning -XX:NewSize or -XX:NewRatio?
Any other tips that come to mind?
Thanks!
I'd say the GC behaviour in your screen-shot looks 'normal'.
You'd usually want major collections to trigger before the heap space gets too full, or it would be very easy to encounter OutOfMemoryErrors in a number of scenarios.
Also, are you aware that Java's heap space is divided into distinct areas for new (eden), current (survivor) and old (tenured) objects?
This answer provides some excellent information on the subject, so I won't repeat it here:
How is the java memory pool divided?
Very basically, each area of the heap triggers its own collections. The eden space is normally collected often and 'quickly'; the survivor and tenured spaces are usually larger and take longer to collect.
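For reference, if you do experiment with the young-generation sizing mentioned in the question (-XX:NewSize / -XX:NewRatio), the relevant flags look roughly like this; the values are purely illustrative and would need to be validated against your GC logs:
-Xms3072m -Xmx3072m -XX:NewSize=768m -XX:MaxNewSize=768m -XX:SurvivorRatio=8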
Could you reduce your heap size based on the above graph?
Yes. However, your current configuration allows your application some breathing room, if it's ever likely to encounter busier periods or spikes in load.
Can you optimize GC?
Yes, but there are no magic settings. The first question is: do you really need to? If your application is just a non-interactive 'processor', I really wouldn't bother. If you have a genuine need for a low-pause application, then there are some tweaks available. The trade-off is generally that you'll need more resources to achieve the same result.
My experience is that low-pause JVM configurations have a very noticeable fall-off point when load increases. If your application is usually fairly idle, but you expect a 'quick' response when it is called, low pause may be appropriate. On a busier system, with peaks in traffic / load, you may prefer a more traditional approach.
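For completeness, a low-pause setup on a Java 6-era HotSpot JVM would typically mean replacing -XX:+UseParallelOldGC with the concurrent collector, along these lines (values illustrative only):
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly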
Summary
In any case, don't be tempted to make arbitrary changes to 'improve' your configuration. Be scientific and professional about your approach.
If you don't have production metrics available, consider using tools like Apache JMeter to build load test scenarios to simulate the typical live load on your application, increased load (by say, 10%, 20% or 50% etc.) and intermittent peak load.
Use metrics for both the GC and the application, measuring at least:
Average throughput.
Peak throughput.
Average load (CPU and memory).
Peak load.
Application pause times (total and individual pauses).
Time spent performing collections.
Reliability (OOME's etc.).
Once you're happy that you've recorded an accurate benchmark of the performance of your application with its current configuration, only then should you start making any changes.
Obviously, record your configuration and its metrics. Document any changes and then perform the same benchmark tests. Then you'll be able to see any performance gain (or loss) and any trade-off that may be applicable.
Here's some further reading from Oracle on the subject to get you started:
Java SE 6 Virtual Machine Garbage Collection Tuning
