I want to write a simple Java program to monitor a Cassandra database with JMX. Right now I'm stuck on retrieving the CPU usage of the database. As far as I can tell, a possible MBean would be java.lang:type=OperatingSystem with the attribute ProcessCpuLoad.
However, it seems in this case the result would be the CPU usage of all processes running in the JVM, not only the Cassandra threads. Is this assumption correct?
I also wonder what data is shown as CPU usage when connecting to the database with JConsole. Is it possible to get direct access to these values (I mean without JConsole)? Or is there another MBean which gives exactly the desired values?
Thanks,
Nico
ProcessCpuLoad in the OperatingSystem MBean is correct. It's not all JVMs, just the one JVM that's reporting it. You don't have multiple processes running within a single JVM; the JVM runs as a single process per Java application.
You can use java.lang:type=Threading to monitor CPU time spent on individual threads, but there are a ton of threads in Cassandra and the total will probably never be exactly right (it misses things like GC time).
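A minimal sketch of reading ProcessCpuLoad yourself over a remote JMX connection (this should match what JConsole's CPU chart is based on), assuming Cassandra exposes JMX on its default port 7199 with authentication disabled; adjust the URL and credentials to your setup:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CassandraCpuMonitor {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
            // ProcessCpuLoad: recent CPU usage of the Cassandra JVM process,
            // as a fraction 0.0..1.0 (negative until the first sample is ready).
            double process = (Double) mbsc.getAttribute(os, "ProcessCpuLoad");
            // SystemCpuLoad: recent CPU usage of the whole machine, for contrast.
            double system = (Double) mbsc.getAttribute(os, "SystemCpuLoad");
            System.out.printf("process: %.1f%%, system: %.1f%%%n",
                    process * 100, system * 100);
        }
    }
}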
If you don't want to use JConsole, you can check:
ps -p <whatever-your-cassandra-pid-is> -o %cpu
# or depending on OS/installer
ps -p `cat /var/run/cassandra.pid` -o %cpu
How to troubleshoot/optimize CPU usage in a Spring Boot application? Are the allocated resources sufficient for an application with a total user base of around 300k? The application isn't heavy at all; it just calls third-party APIs, does the necessary checks, and returns the response.
How do I identify the exact code that could be using more resources than normally required? I found out somewhere that one way is to track the thread ids from the top command, take a thread dump, and look up the corresponding hexadecimal value of the thread id in the dump. This wasn't easily achievable, as some of the suggested commands didn't work. I would appreciate any help or suggestions.
Thanks in advance.
[Screenshot: htop output during high CPU usage]
[Screenshot: htop when it's normal]
The process of collecting a thread stack is no different for a Spring Boot app. Before a Boot app is containerized it is still a jar. If you suspect that it is your application that is actually contributing to the high CPU, run your jar, attach a profiler to it, and trace the code contributing to the high CPU under load. If you cannot do that, take a thread dump of the running jar/Java process and use any free or open-source tool to analyze the trace. The same logic applies to the containerized application as well.
Follow these steps to take a thread dump of a Java/Boot app running inside a Docker container:
docker exec <containerName> jstack <java-pid> > someFile.txt
Take multiple snapshots for better visibility and comparison.
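If you would rather find the hot threads from inside the process, here is a small sketch using ThreadMXBean that lists each live thread's accumulated CPU time (assuming thread CPU time measurement is supported on your JVM); scan the output for outliers, then look those threads up in the dump:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class HotThreads {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (threads.isThreadCpuTimeSupported()) {
            threads.setThreadCpuTimeEnabled(true);
        }
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            long cpuNanos = threads.getThreadCpuTime(id); // -1 if unavailable
            if (info != null && cpuNanos > 0) {
                System.out.printf("%10d ms  %s%n",
                        cpuNanos / 1_000_000, info.getThreadName());
            }
        }
    }
}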
If you have not added the JMX options to the JVM command line, do that first:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=10000
-Dcom.sun.management.jmxremote.rmi.port=10000
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
Then, on your local machine, start "jmc" from your JDK bin folder and connect to your Spring Boot server.
You will then be able to see all the threads and enable both CPU-load and thread-lock monitoring on all active threads.
Be aware, though, that the options above open up the JVM to unauthenticated access, so keep the port safe.
Next, if your JVM misbehaves, send it a "kill -3" (SIGQUIT, not SIGHUP), which tells the JVM to print a thread dump to stdout; for memory analysis, take a heap dump instead, which can then be read via the Eclipse MAT plugin in order to analyze the JVM's inner workings.
Another way is to install Jolokia into your server, which gives you other (HTTP-based) ways to retrieve the same info.
We are running some heavy deployments on a WebLogic setup, and they take around an hour. During that time, we want to take memory snapshots/heap dumps to see how much headroom we have with respect to memory, to avoid a crash. Is there an optional JVM arg that we can provide while starting the server that will do the job? I checked the link below, but nothing fits the requirement:
http://docs.oracle.com/cd/E15289_01/doc.40/e15062/optionxx.htm
If it is acceptable to drive the snapshots from the outside, then you can use jrcmd to send commands to your JVM.
To get the PID use
jrcmd -P
and then you can use
jrcmd PID hprofdump dumpfile.bin
See http://docs.oracle.com/cd/E15289_01/doc.40/e15062/diagnostic.htm#BABIACCC for hprofdump and http://docs.oracle.com/cd/E15289_01/doc.40/e15061/ctrlbreakhndlr.htm#i1001760 for jrcmd.
I currently work on a WebLogic Java EE project where, from time to time, the application executes a Perl script to do some batch jobs. In the application the script is invoked as
Process p = Runtime.getRuntime().exec(cmdString);
Though it is a dangerous way to run a script, it was working properly until we had a requirement to execute the script synchronously inside a for loop. After a couple of runs we get
java.io.IOException: Not enough space, probably because the OS is running out of virtual memory while exec-ing inside the for loop. As a result we are not able to run the script on the server at all.
I am desperately looking for a safer and better way to run the Perl script, one where we don't need to fork the parent process, or at least don't eat up all the swap space!
The spec is as follows:
Appserver - Weblogic 9.52
JDK - 1.5
OS - SunOS 5.10
Sun-Fire-T200
I've had something similar on a couple of occasions. Since the child process is a fork of the (very large) parent, it shares all of the parent's memory (using copy-on-write). What I discovered was that the kernel needs to be able to ensure that it could copy all of the memory pages before forking the child; on a 32-bit OS you run out of virtual headroom really fast.
Possible solutions:
Use a 64-bit OS and JVM; this pushes the issue so far down the road that it doesn't matter.
Host your script in another process (like httpd) and poke it with an HTTP request to invoke it.
Create a Perl server that reads Perl scripts over the network and executes them one by one (a minimal client sketch follows).
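A rough sketch of the client side of that last idea, written for a modern JDK (the poster's environment is JDK 1.5, so treat it as the shape of the solution rather than drop-in code). The one-script-path-per-line protocol, port 9000, and the script path are all assumptions; the server side would still need to be written:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class PerlServerClient {
    public static String runScript(String host, int port, String scriptPath)
            throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(scriptPath); // hypothetical protocol: one path per line
            StringBuilder output = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                output.append(line).append('\n');
            }
            return output.toString(); // whatever the server streamed back
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.print(runScript("localhost", 9000, "/opt/scripts/batch.pl"));
    }
}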
If you want to keep your code unchanged and have enough free disk space, you can just add a sufficiently large swap area to your OS.
Assuming you need 10 GB, here is how you do it with UFS:
mkfile 10g /export/home/10g-swap
swap -a /export/home/10g-swap
echo "/export/home/10g-swap - - swap - no -" >> /etc/vfstab
If you use ZFS, that would be:
zfs create -V 10gb rpool/swap1
swap -a /dev/zvol/dsk/rpool/swap1
Don't worry about such a large swap; it won't have any performance impact, as the swap will only be used for virtual memory reservation, not for paging.
Otherwise, as already suggested in previous replies, one way to avoid the virtual memory issue you are experiencing would be to use a helper program, i.e. a small service that you contact through a network socket (or a higher-level protocol like SSH) and that executes the Perl script "remotely".
Note that the issue has nothing to do with a 32-bit or 64-bit JVM; it is just that Solaris doesn't overcommit memory, and this is by design.
I know that this is not "best practice", but I would like to know if I can auto-restart Tomcat if my deployed app throws an OutOfMemoryError.
You can try to use the OnOutOfMemoryError JVM option
-XX:OnOutOfMemoryError="/yourscripts/tomcat-restart"
It is also possible to generate the heap dump for later analysis:
-XX:+HeapDumpOnOutOfMemoryError
Be careful with combining these two options. If you force killing the process in "tomcat-restart" the heap dump might not be complete.
I know this isn't what you asked, but have you tried looking through a heap dump to see where you may be leaking memory?
Some very useful tools for tracking down memory leaks:
jdk/bin/jmap -histo:live pid
This will give you a histogram of all live objects currently in the JVM. Look for any odd object counts. You'll have to know your application pretty well to be able to determine what object counts are odd.
jdk/bin/jmap -dump:live,file=heap.hprof pid
This will dump the entire heap of the JVM identified by pid. You can then use the excellent Eclipse Memory Analyzer to inspect it and find out who is holding on to references to your objects. Your two biggest friends in Eclipse Memory Analyzer are the histogram and right-click -> references -> exclude weak/soft references, to see what is referencing your object.
JConsole is of course another good tool.
Not easily, and definitely not from within the JVM that just suffered the OutOfMemoryError. Your best bet would be some combination of a Tomcat status monitor coupled with cron or related scheduled system-administration scripts: something to check the status of the server and automatically stop and restart the service if it has failed.
Unfortunately, when you kill the Java process your script will keep a reference to the Tomcat ports 8080, 8005, and 8009, and you will not be able to start Tomcat again from the same script. The only way that works for me is:
-XX:OnOutOfMemoryError="kill -9 %p" and then another cron or monit job or something similar to ensure Tomcat is running again.
%p is the JVM pid; it is something the JVM provides for you.
Generally, no. The VM is in a bad state and cannot be completely trusted.
Typically, one can use a configurable wrapper process that starts and stops the "real" server VM you want. An example I've worked with is the Java Service Wrapper from Tanuki Software: http://wrapper.tanukisoftware.com/doc/english/download.jsp
I know there are others.
To guard against OOMs in the first place, there are ways to instrument modern VMs via the platform MXBeans to query the status of the heap and other memory structures. These can be used to, say, log a warning or send an email if some app-specific operations are pushing established limits.
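A minimal sketch of that idea using the standard java.lang.management API; the 80% threshold and the log destination are example choices:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

public class HeapWatcher {
    public static void install() {
        // Ask each heap pool to notify us when usage crosses 80% of its max.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax(); // -1 when the pool has no max
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported() && max > 0) {
                pool.setUsageThreshold((long) (max * 0.8));
            }
        }
        // The platform MemoryMXBean emits the threshold notifications.
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                    .equals(notification.getType())) {
                // Warn in a log, send an email, trigger a heap dump, etc.
                System.err.println("Heap threshold exceeded: "
                        + notification.getMessage());
            }
        }, null, null);
    }
}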
I use
-XX:OnOutOfMemoryError='pkill java;/usr/local/tomcat/bin/start.sh'
What about something like this? -XX:OnOutOfMemoryError="exec \`ps --no-heading -p $$ -o cmd\`"
I am coding an application that creates JVMs and needs to control the memory usage of the processes spawned by the JVM.
You can connect to a JVM process using JMX to get information about memory status/allocations and also to provoke garbage collection, but you first need to enable JMX monitoring of your JVM: http://java.sun.com/j2se/1.5.0/docs/guide/management/agent.html
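For example, here is a sketch that reads a spawned JVM's heap usage and provokes a GC over JMX, assuming the child was started with the jmxremote flags on port 9999 (the port is an arbitrary example):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ChildMemoryProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Typed proxy for the child's java.lang:type=Memory MBean.
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used: %d MB of %d MB max%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20);
            memory.gc(); // provoke a garbage collection in the child JVM
        }
    }
}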
I assume that you are talking about non-Java "processes" spawned using Runtime.exec(...) etc.
The answer is that this is OS-specific and not something that the standard Java libraries support. But if you were going to do this on Linux (or UNIX), I can think of three approaches:
Have Java spawn the command via a shell wrapper script that uses the ulimit builtin to reduce the memory limits and then execs the actual command (see the sketch after this list); see man 1 ulimit.
Write a little C command that does the same as the shell wrapper. This will have less overhead than the wrapper script approach.
Try to do the same with JNI and a native code library. Not recommended because you'd probably need to replicate the behavior of Process and ProcessBuilder, and that could be very difficult.
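A sketch of the first approach, inlining the wrapper into an sh -c command line instead of a separate script (Linux/UNIX only; the script path and the 512 MB cap are placeholder values):

import java.io.IOException;

public class LimitedExec {
    // Run `command` under an address-space cap of limitKb kilobytes.
    public static Process exec(String command, long limitKb) throws IOException {
        ProcessBuilder pb = new ProcessBuilder(
                "sh", "-c", "ulimit -v " + limitKb + "; exec " + command);
        pb.inheritIO(); // let the child write straight to our stdout/stderr
        return pb.start();
    }

    public static void main(String[] args) throws Exception {
        Process p = exec("perl /opt/scripts/batch.pl", 512 * 1024); // 512 MB
        System.out.println("exit code: " + p.waitFor());
    }
}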
If by 'control' you mean 'limit to a known upper bound', then you can simply pass -Xms<lower_bound> and -Xmx<upper_bound> in the VM's args when you spawn the process. See the appropriate settings here.
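For instance, when spawning the child JVM (the heap sizes and main class are placeholder values):

import java.io.IOException;

public class SpawnWithHeapBounds {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Child heap is bounded: starts at 64 MB and can never exceed 256 MB.
        Process child = new ProcessBuilder(
                "java", "-Xms64m", "-Xmx256m", "com.example.ChildMain")
                .inheritIO()
                .start();
        child.waitFor();
    }
}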