Explain the thread dump information of the Java HotSpot virtual machine - java

While learning about and getting hands-on with Java thread dumps, I created a thread dump of my running IntelliJ IDEA process.
The problem is that I don't understand what it means.
Can you explain what this information means and how to read it?
No need to list every single detail; just the main fields, what each means, and what it indicates.
Here is a small snippet
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode):
"JobScheduler FJ pool 0/4" #165 daemon prio=6 os_prio=0 tid=0x00007f9cb0001800 nid=0x1d2a waiting on condition [0x00007f9ca7e1e000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000e74c3c90> (a jsr166e.ForkJoinPool)
at jsr166e.ForkJoinPool.awaitWork(ForkJoinPool.java:1756)
at jsr166e.ForkJoinPool.scan(ForkJoinPool.java:1694)
at jsr166e.ForkJoinPool.runWorker(ForkJoinPool.java:1642)
at jsr166e.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:109)
and here is the whole file

<name> <nature> <vm priority> <os priority> <vm id> <os id> <status> [address]
<thread state>
<current execution point>
- detail: wait until timeout or until notified on object <0x00000000e74c3c90> (the jsr166e.ForkJoinPool)
<stack>
As far as I understand, you have a job scheduler worker thread which has been released back to its thread pool (and is waiting for a new job).
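For a rough feel of where these fields come from, here is a small, hedged sketch (not related to the IDEA dump above) that prints similar information for every live thread via Thread.getAllStackTraces(): the thread name, daemon flag, priority, id and state, followed by the stack frames.

import java.util.Map;

public class MiniThreadDump {

    public static void main(String[] args) {
        // each entry maps a live Thread to a snapshot of its stack trace
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            System.out.printf("\"%s\"%s prio=%d id=%d %s%n",
                    t.getName(),
                    t.isDaemon() ? " daemon" : "",
                    t.getPriority(),
                    t.getId(),
                    t.getState());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}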
See also How to analyze a java thread dump?

Related

All Quartz Scheduler threads go to TIMED_WAITING?

I have Spring with Quartz jobs (clustered) running at a periodic interval (1 minute). When the server starts everything seems fine, but the jobs stop getting triggered after some time. Restarting the server makes the jobs run again, but the issue re-occurs after a while.
I suspected a thread exhaustion issue, and from a thread dump I noticed that all my Quartz threads (10) are in TIMED_WAITING.
Config:
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
Thread dump:
quartzScheduler_Worker-10 - priority:10 - threadId:0x00007f8ae534d800 - nativeId:0x13c78 - state:TIMED_WAITING stackTrace:
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000066cd73220> (a java.lang.Object)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
- locked <0x000000066cd73220> (a java.lang.Object)
Using Quartz 2.2.1 (I doubt it is a version-specific issue).
I verified from the logs that there are no DB connectivity issues.
Kindly help in diagnosing the problem. Is there a possibility that I have maxed out system resources (number of threads)? But my jobs are synchronous and exit only when all their child threads have completed their tasks, and I also have the @DisallowConcurrentExecution annotation.
The root cause was that we had too many misfires in our Quartz job. Quartz kicks in every minute and the job doesn't really complete within that minute, so executions pile up as misfires and Quartz tries to run them first.
During this process there is an update of the misfires which takes a lot of time, and that leads Quartz to get stuck. This is evident from the thread dump, where all our Quartz threads are in the TIMED_WAITING state, as below:
quartzScheduler_Worker-10 - priority:10 - threadId:0x00007f8ae534d800 - nativeId:0x13c78 - state:TIMED_WAITING
stackTrace:
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000066cd73220> (a java.lang.Object)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
- locked <0x000000066cd73220> (a java.lang.Object)
Refer : https://jira.terracotta.org/jira/si/jira.issueviews:issue-html/QTZ-357/QTZ-357.html
For our use case misfires can be ignored and can be picked up by the next run, so I changed the misfire instruction to ignore them, as below:
<property name="misfireInstructionName" value="MISFIRE_INSTRUCTION_IGNORE_MISFIRE_POLICY" />
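If you configure the trigger with the Quartz 2.x builder API instead of Spring XML, the equivalent policy looks roughly like this (a hedged sketch assuming a plain simple trigger firing every minute; the identity name is illustrative):

import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class IgnoreMisfiresTriggerFactory {

    // builds a trigger that fires every minute and simply ignores piled-up misfires,
    // letting them be picked up by the next regular run
    public static Trigger everyMinuteIgnoringMisfires() {
        return TriggerBuilder.newTrigger()
                .withIdentity("everyMinuteTrigger")   // illustrative name
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(1)
                        .repeatForever()
                        .withMisfireHandlingInstructionIgnoreMisfires())
                .build();
    }
}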

Apache Tomcat threads in WAITING state while thread pool increases; class name is ImageLoadWorker

I got this kind of thread dump on Tomcat. All threads are in the WAITING state, so the application slows down.
Please suggest a solution for this.
I am using Tomcat 7 and Java 7.
"ImageLoadWorker(653)" prio=5 tid=0x2089 nid=0x829 in Object.wait() - stats: cpu=0 blk=-1 wait=-1
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Native Method)
- waiting on org.xhtmlrenderer.swing.ImageLoadQueue@4651e7d2
at java.lang.Object.wait(Object.java:503)
at org.xhtmlrenderer.swing.ImageLoadQueue.getTask(ImageLoadQueue.java:83)
at org.xhtmlrenderer.swing.ImageLoadWorker.run(ImageLoadWorker.java:53)
Locked synchronizers: count = 0
There is a thread leak in this class of the Flying Saucer jar. I found the answer to this problem at the URL below.
https://technotailor.wordpress.com/2017/04/17/thread-leak-with-imageloadworker-in-flying-saucer-jar/

Blocking threads using javassist in web app

Could somebody help me and explain what exactly is going on in the following thread dump? It is a web application running on Tomcat 7, and we are seeing that some requests do not get answered:
"ajp-bio-8012-exec-161" daemon prio=10 tid=0x00007fe170603000 nid=0x344f runnable [0x00007fe174fae000]
java.lang.Thread.State: RUNNABLE
at java.security.AccessController.doPrivileged(Native Method)
at java.io.FilePermission.init(FilePermission.java:209)
at java.io.FilePermission.<init>(FilePermission.java:285)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at java.io.File.exists(File.java:808)
at sun.misc.URLClassPath$FileLoader.getResource(URLClassPath.java:1080)
at sun.misc.URLClassPath$FileLoader.findResource(URLClassPath.java:1047)
at sun.misc.URLClassPath.findResource(URLClassPath.java:176)
at java.net.URLClassLoader$2.run(URLClassLoader.java:551)
at java.net.URLClassLoader$2.run(URLClassLoader.java:549)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findResource(URLClassLoader.java:548)
at java.lang.ClassLoader.getResource(ClassLoader.java:1147)
at java.lang.ClassLoader.getResource(ClassLoader.java:1142)
at org.apache.catalina.loader.WebappClassLoader.getResource(WebappClassLoader.java:1445)
at java.lang.Class.getResource(Class.java:2142)
at javassist.ClassClassPath.find(ClassClassPath.java:84)
at javassist.ClassPoolTail.find(ClassPoolTail.java:317)
at javassist.ClassPool.find(ClassPool.java:495)
at javassist.ClassPool.createCtClass(ClassPool.java:479)
at javassist.ClassPool.get0(ClassPool.java:445)
- locked <0x00000000db6cdb48> (a javassist.ClassPool)
at javassist.ClassPool.get(ClassPool.java:414)
at javassist.compiler.MemberResolver.lookupClass0(MemberResolver.java:425)
at javassist.compiler.MemberResolver.lookupClass(MemberResolver.java:389)
at javassist.compiler.MemberResolver.lookupClassByJvmName(MemberResolver.java:310)
at javassist.compiler.MemberResolver.lookupClass(MemberResolver.java:327)
at javassist.compiler.MemberResolver.lookupClass(MemberResolver.java:314)
at javassist.compiler.Javac.compileField(Javac.java:122)
at javassist.compiler.Javac.compile(Javac.java:91)
at javassist.CtField.make(CtField.java:163)
at com.mycompany.validation.util.BeanGenerator.addProperty(BeanGenerator.java:157)
...
at java.lang.Thread.run(Thread.java:745)
"ajp-bio-8012-exec-12" daemon prio=10 tid=0x0000000000c49800 nid=0x3d99 waiting for monitor entry [0x00007fe1756b7000]
java.lang.Thread.State: BLOCKED (on object monitor)
at javassist.ClassPool.get0(ClassPool.java:432)
- waiting to lock <0x00000000db6cdb48> (a javassist.ClassPool)
at javassist.ClassPool.get(ClassPool.java:414)
at javassist.compiler.MemberResolver.lookupClass0(MemberResolver.java:425)
at javassist.compiler.MemberResolver.lookupClass(MemberResolver.java:389)
at javassist.compiler.MemberResolver.lookupClassByJvmName(MemberResolver.java:310)
at javassist.compiler.MemberResolver.lookupClass(MemberResolver.java:327)
at javassist.compiler.MemberResolver.lookupClass(MemberResolver.java:314)
at javassist.compiler.Javac.compileField(Javac.java:122)
at javassist.compiler.Javac.compile(Javac.java:91)
at javassist.CtField.make(CtField.java:163)
at com.mycompany.validation.util.BeanGenerator.addProperty(BeanGenerator.java:157)
...
at java.lang.Thread.run(Thread.java:745)
"ajp-bio-8012-exec-5" daemon prio=10 tid=0x0000000001603800 nid=0x7c77 waiting for monitor entry [0x00007fe174aaa000]
java.lang.Thread.State: BLOCKED (on object monitor)
at javassist.ClassPool.makeClass(ClassPool.java:621)
- waiting to lock <0x00000000db6cdb48> (a javassist.ClassPool)
at javassist.ClassPool.makeClass(ClassPool.java:606)
at com.mycompany.validation.util.BeanGenerator.init(BeanGenerator.java:334)
...
at java.lang.Thread.run(Thread.java:745)
I am not a specialist in thread dumps and just would like to know how to get rid of the problem.
Has the first thread locked an object (the ClassPool) that the two other request threads are waiting on? Why is it locked? Do I have to synchronize, and how?
Thanks for any hint!
You have a single ClassPool in your application which has the monitor id 0x00000000db6cdb48.
If you see the method signature of ClassPool.get0:
protected synchronized CtClass get0(String classname, boolean useCache)
So it is synchronized. I suppose you simply used ClassPool.getDefault(). You have to know that this method returns a single instance, so all of your calls will be synchronized on it.
Create a new pool for each thread (for example with new ClassPool(ClassPool.getDefault()), as sketched below) and it will be fine then.
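A rough sketch of that idea (the class and method names here are illustrative, not from your application): keep a per-thread child pool in a ThreadLocal and use it wherever you currently use the default pool.

import javassist.ClassPool;

public class PerThreadClassPool {

    // each thread gets its own child pool of the default pool, so compilation
    // no longer serializes on the single shared ClassPool monitor
    private static final ThreadLocal<ClassPool> POOL = new ThreadLocal<ClassPool>() {
        @Override
        protected ClassPool initialValue() {
            return new ClassPool(ClassPool.getDefault());
        }
    };

    public static ClassPool current() {
        return POOL.get();
    }
}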
Apart from that, you might check your security manager to see why it takes so long.

Strange thread dump on deadlock

We've been experiencing a strange deadlock during the startup of our Java application. When I run jstack on the application to investigate, I see that the AWT-EventQueue is in Object.wait(), but the thread is still marked as RUNNABLE. I've included the relevant parts of the thread dump, and I'm hoping that someone can shed some light on this issue.
"AWT-EventQueue-0" prio=6 tid=0x5f0a2400 nid=0x19e4 in Object.wait() [0x6007e000]
java.lang.Thread.State: RUNNABLE
at com.ge.med.platinum.work.isu.ExamTransaction.getEAOTableLite(ExamTransaction.java:1514)
...
- locked <0x1fc87448> (a java.awt.Component$AWTTreeLock)
...
"Thread-63-Pool-9" prio=6 tid=0x5f1a2800 nid=0x1f54 waiting for monitor entry [0x61a9f000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.awt.Component.setFont(Component.java:1777)
- waiting to lock <0x1fc87448> (a java.awt.Component$AWTTreeLock)
...
"Thread-289-Pool-3" prio=6 tid=0x60afe800 nid=0x12b8 waiting for monitor entry [0x623fe000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.awt.Component.setFont(Component.java:1777)
- waiting to lock <0x1fc87448> (a java.awt.Component$AWTTreeLock)
...
In addition, I've noticed this thread, which mentions that accessing a static variable may be involved. This is also the case in our application. The line in getEAOTableLite in question references a static method.
I'm not sure how I missed it, but if I had read down the stack trace a little better I would have found the issue: the static initialization of the EAOAlertManager class would eventually make a call to the Component.setFont() method, which was blocked by the AWT-EventQueue (it is illegal to call setFont() outside of the EventQueue). The EventQueue, in turn, had ended up back in ExamTransaction.getEAOTableLite, which meant that it referenced the EAOAlertManager class again, causing it to wait for that class to finish loading. But the EAOAlertManager class was waiting on the EventQueue. That, my friends, is a deadlock. (A stripped-down sketch of the same pattern follows the stack trace below.)
"Thread-289-Pool-3" prio=6 tid=0x60afe800 nid=0x12b8 waiting for monitor entry [0x623fe000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.awt.Component.setFont(Component.java:1777)
- waiting to lock <0x1fc87448> (a java.awt.Component$AWTTreeLock)
at java.awt.Container.setFont(Container.java:1554)
at javax.swing.JComponent.setFont(JComponent.java:2723)
at javax.swing.LookAndFeel.installColorsAndFont(LookAndFeel.java:191)
at javax.swing.plaf.basic.BasicPanelUI.installDefaults(BasicPanelUI.java:49)
at javax.swing.plaf.basic.BasicPanelUI.installUI(BasicPanelUI.java:39)
at com.ge.med.ptk.laf.CuiPanelUI.installUI(CuiPanelUI.java:53)
at javax.swing.JComponent.setUI(JComponent.java:662)
at javax.swing.JPanel.setUI(JPanel.java:136)
at javax.swing.JPanel.updateUI(JPanel.java:109)
at javax.swing.JPanel.<init>(JPanel.java:69)
at javax.swing.JPanel.<init>(JPanel.java:92)
at javax.swing.JPanel.<init>(JPanel.java:100)
at javax.swing.JRootPane.createGlassPane(JRootPane.java:528)
at javax.swing.JRootPane.<init>(JRootPane.java:348)
at javax.swing.JDialog.createRootPane(JDialog.java:611)
at javax.swing.JDialog.dialogInit(JDialog.java:593)
at com.ge.med.plaf.wrapper.WJDialog.dialogInit(WJDialog.java:42)
at javax.swing.JDialog.<init>(JDialog.java:545)
at javax.swing.JDialog.<init>(JDialog.java:515)
at com.ge.med.plaf.wrapper.WJDialog.<init>(WJDialog.java:424)
at com.ge.med.platinum.gui.util.PlatinumDialog.<init>(PlatinumDialog.java:138)
at com.ge.med.platinum.gui.util.EAOAlertManager$EAOAlertDialog.<init>(EAOAlertManager.java:450)
at com.ge.med.platinum.gui.util.EAOAlertManager.<clinit>(EAOAlertManager.java:77)
at com.ge.med.platinum.work.isu.ExamTransaction.getEAOTableLite(ExamTransaction.java:1514)
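For anyone trying to picture the pattern, here is a stripped-down, hypothetical sketch of the same shape of deadlock (all names are made up, and a plain Object stands in for the AWT tree lock): a static initializer needs a lock held by another thread, while that thread waits for the class initialization to finish.

public class InitDeadlockSketch {

    static final Object GUI_LOCK = new Object(); // stands in for the AWT tree lock

    static class AlertManager {
        static {
            // like EAOAlertManager calling setFont(), the static initializer
            // needs a lock that another thread is currently holding
            synchronized (GUI_LOCK) {
                System.out.println("AlertManager initialized");
            }
        }
        static void touch() {
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread background = new Thread(new Runnable() {
            public void run() {
                AlertManager.touch(); // starts class initialization on a background thread
            }
        });

        synchronized (GUI_LOCK) {      // the "EDT" holds the GUI lock...
            background.start();
            Thread.sleep(200);         // give the background thread time to begin class init
            AlertManager.touch();      // ...then also waits for class init to finish: deadlock
        }
    }
}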
This article seems to get directly to the point:
http://www.javaworld.com/javaworld/jw-04-1999/jw-04-toolbox.html
There are 5 pages. I don't know what it all means to your application, but it should help you.

Which Java thread is hogging the CPU?

Let's say your Java program is taking 100% CPU. It has 50 threads. You need to find which thread is guilty. I have not found a tool that can help. Currently I use the following very time-consuming routine:
Run jstack <pid>, where pid is the process id of a Java process. The easy way to find it is to run another utility included in the JDK - jps. It is better to redirect jstack's output to a file.
Search for "runnable" threads. Skip those that wait on a socket (for some reason they are still marked runnable).
Repeat steps 1 and 2 a couple of times and see if you can locate a pattern.
Alternatively, you could attach to a Java process in Eclipse and try to suspend threads one by one, until you hit the one that hogs CPU. On a one-CPU machine, you might need to first reduce the Java process's priority to be able to move around. Even then, Eclipse often isn't able to attach to a running process due to a timeout.
I would have expected Sun's visualvm tool to do this.
Does anybody know of a better way?
Identifying which Java Thread is consuming most CPU in production server.
Most (if not all) production systems doing anything important will use more than one Java thread. And when something goes crazy and your CPU usage is at 100%, it is hard to identify which thread(s) is/are causing this. Or so I thought, until someone smarter than me showed me how it can be done. And here I will show you how to do it, and you too can amaze your family and friends with your geek skills.
A Test Application
In order to test this, we need a test application. So I will give you one. It consists of 3 classes:
A HeavyThread class that does something CPU intensive (computing MD5 hashes)
A LightThread class that does something not-so-cpu-intensive (counting and sleeping).
A StartThreads class to start 1 cpu intensive and several light threads.
Here is code for these classes:
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.UUID;

/**
 * thread that does some heavy lifting
 *
 * @author srasul
 */
public class HeavyThread implements Runnable {

    private long length;

    public HeavyThread(long length) {
        this.length = length;
        new Thread(this).start();
    }

    @Override
    public void run() {
        while (true) {
            String data = "";
            // make some stuff up
            for (int i = 0; i < length; i++) {
                data += UUID.randomUUID().toString();
            }
            MessageDigest digest;
            try {
                digest = MessageDigest.getInstance("MD5");
            } catch (NoSuchAlgorithmException e) {
                throw new RuntimeException(e);
            }
            // hash the data
            digest.update(data.getBytes());
        }
    }
}
import java.util.Random;

/**
 * thread that does little work. just count & sleep
 *
 * @author srasul
 */
public class LightThread implements Runnable {

    public LightThread() {
        new Thread(this).start();
    }

    @Override
    public void run() {
        Long l = 0l;
        while (true) {
            l++;
            try {
                Thread.sleep(new Random().nextInt(10));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (l == Long.MAX_VALUE) {
                l = 0l;
            }
        }
    }
}
/**
 * start it all
 *
 * @author srasul
 */
public class StartThreads {

    public static void main(String[] args) {
        // lets start 1 heavy ...
        new HeavyThread(1000);
        // ... and 3 light threads
        new LightThread();
        new LightThread();
        new LightThread();
    }
}
Assume that you have never seen this code, and all you have is the PID of a runaway Java process that is running these classes and consuming 100% CPU.
First let's start the StartThreads class.
$ ls
HeavyThread.java LightThread.java StartThreads.java
$ javac *
$ java StartThreads &
At this stage, the Java process should be running and taking up 100% CPU, which is exactly what I see in my top.
In top press Shift-H which turns on Threads. The man page for top says:
-H : Threads toggle
Starts top with the last remembered 'H' state reversed. When
this toggle is On, all individual threads will be displayed.
Otherwise, top displays a summation of all threads in a
process.
And now, with the Threads display turned on, my top shows the individual threads, and I have a Java process with PID 28924. Let's get the stack dump of this process using jstack:
$ jstack 28924
2010-11-18 13:05:41
Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode):
"Attach Listener" daemon prio=10 tid=0x0000000040ecb000 nid=0x7150 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"DestroyJavaVM" prio=10 tid=0x00007f9a98027800 nid=0x70fd waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Thread-3" prio=10 tid=0x00007f9a98025800 nid=0x710d waiting on condition [0x00007f9a9d543000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at LightThread.run(LightThread.java:21)
at java.lang.Thread.run(Thread.java:619)
"Thread-2" prio=10 tid=0x00007f9a98023800 nid=0x710c waiting on condition [0x00007f9a9d644000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at LightThread.run(LightThread.java:21)
at java.lang.Thread.run(Thread.java:619)
"Thread-1" prio=10 tid=0x00007f9a98021800 nid=0x710b waiting on condition [0x00007f9a9d745000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at LightThread.run(LightThread.java:21)
at java.lang.Thread.run(Thread.java:619)
"Thread-0" prio=10 tid=0x00007f9a98020000 nid=0x710a runnable [0x00007f9a9d846000]
java.lang.Thread.State: RUNNABLE
at sun.security.provider.DigestBase.engineReset(DigestBase.java:139)
at sun.security.provider.DigestBase.engineUpdate(DigestBase.java:104)
at java.security.MessageDigest$Delegate.engineUpdate(MessageDigest.java:538)
at java.security.MessageDigest.update(MessageDigest.java:293)
at sun.security.provider.SecureRandom.engineNextBytes(SecureRandom.java:197)
- locked <0x00007f9aa457e400> (a sun.security.provider.SecureRandom)
at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:257)
- locked <0x00007f9aa457e708> (a java.lang.Object)
at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
- locked <0x00007f9aa4582fc8> (a java.security.SecureRandom)
at java.util.UUID.randomUUID(UUID.java:162)
at HeavyThread.run(HeavyThread.java:27)
at java.lang.Thread.run(Thread.java:619)
"Low Memory Detector" daemon prio=10 tid=0x00007f9a98006800 nid=0x7108 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"CompilerThread1" daemon prio=10 tid=0x00007f9a98004000 nid=0x7107 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"CompilerThread0" daemon prio=10 tid=0x00007f9a98001000 nid=0x7106 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Signal Dispatcher" daemon prio=10 tid=0x0000000040de4000 nid=0x7105 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Finalizer" daemon prio=10 tid=0x0000000040dc4800 nid=0x7104 in Object.wait() [0x00007f9a97ffe000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00007f9aa45506b0> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
- locked <0x00007f9aa45506b0> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
"Reference Handler" daemon prio=10 tid=0x0000000040dbd000 nid=0x7103 in Object.wait() [0x00007f9a9de92000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00007f9aa4550318> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:485)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
- locked <0x00007f9aa4550318> (a java.lang.ref.Reference$Lock)
"VM Thread" prio=10 tid=0x0000000040db8800 nid=0x7102 runnable
"GC task thread#0 (ParallelGC)" prio=10 tid=0x0000000040d6e800 nid=0x70fe runnable
"GC task thread#1 (ParallelGC)" prio=10 tid=0x0000000040d70800 nid=0x70ff runnable
"GC task thread#2 (ParallelGC)" prio=10 tid=0x0000000040d72000 nid=0x7100 runnable
"GC task thread#3 (ParallelGC)" prio=10 tid=0x0000000040d74000 nid=0x7101 runnable
"VM Periodic Task Thread" prio=10 tid=0x00007f9a98011800 nid=0x7109 waiting on condition
JNI global references: 910
From my top I see that the PID of the top thread is 28938, and 28938 in hex is 0x710A. Notice that in the stack dump each thread has an nid, which is displayed in hex. And it just so happens that 0x710A is the id of this thread:
"Thread-0" prio=10 tid=0x00007f9a98020000 nid=0x710a runnable [0x00007f9a9d846000]
java.lang.Thread.State: RUNNABLE
at sun.security.provider.DigestBase.engineReset(DigestBase.java:139)
at sun.security.provider.DigestBase.engineUpdate(DigestBase.java:104)
at java.security.MessageDigest$Delegate.engineUpdate(MessageDigest.java:538)
at java.security.MessageDigest.update(MessageDigest.java:293)
at sun.security.provider.SecureRandom.engineNextBytes(SecureRandom.java:197)
- locked <0x00007f9aa457e400> (a sun.security.provider.SecureRandom)
at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:257)
- locked <0x00007f9aa457e708> (a java.lang.Object)
at sun.security.provider.NativePRNG$RandomIO.access$200(NativePRNG.java:108)
at sun.security.provider.NativePRNG.engineNextBytes(NativePRNG.java:97)
at java.security.SecureRandom.nextBytes(SecureRandom.java:433)
- locked <0x00007f9aa4582fc8> (a java.security.SecureRandom)
at java.util.UUID.randomUUID(UUID.java:162)
at HeavyThread.run(HeavyThread.java:27)
at java.lang.Thread.run(Thread.java:619)
And so you can confirm that the thread which is running the HeavyThread class is consuming most CPU.
In real-world situations, it will probably be a bunch of threads that each consume some portion of CPU, and together these threads lead to the Java process using 100% CPU.
Summary
Run top
Press Shift-H to enable Threads View
Get PID of the thread with highest CPU
Convert PID to HEX
Get stack dump of java process
Look for thread with the matching HEX PID.
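If you would rather not do the decimal-to-hex conversion in your head, a throwaway helper (nothing more than Long.toHexString) covers step 4:

public class NidOfTid {

    // usage: java NidOfTid 28938   -> prints nid=0x710a
    public static void main(String[] args) {
        long tid = Long.parseLong(args[0]); // thread id (LWP) as shown by top
        System.out.println("nid=0x" + Long.toHexString(tid));
    }
}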
jvmtop can show you the top consuming threads:
TID NAME STATE CPU TOTALCPU
25 http-8080-Processor13 RUNNABLE 4.55% 1.60%
128022 RMI TCP Connection(18)-10.101. RUNNABLE 1.82% 0.02%
36578 http-8080-Processor164 RUNNABLE 0.91% 2.35%
128026 JMX server connection timeout TIMED_WAITING 0.00% 0.00%
Try looking at the Hot Thread Detector plugin for visual VM -- it uses the ThreadMXBean API to take multiple CPU consumption samples to find the most active threads. It's based on a command-line equivalent from Bruce Chapman which might also be useful.
Just run up JVisualVM, connect to your app and use the thread view. The one which remains continually active is your most likely culprit.
Have a look at the Top Threads plugin for JConsole.
I would recommend taking a look at Arthas tool open sourced by Alibaba.
It contains a bunch of useful commands that can help you debug your production code:
Dashboard: Overview of Your Java Process
SC: Search Class Loaded by Your JVM
Jad: Decompile Class Into Source Code
Watch: View Method Invocation Input and Results
Trace: Find the Bottleneck of Your Method Invocation
Monitor: View Method Invocation Statistics
Stack: View Call Stack of the Method
Tt: Time Tunnel of Method Invocations
Example of the dashboard:
If you're running under Windows, try Process Explorer. Bring up the properties dialog for your process, then select the Threads tab.
Take a thread dump. Wait for 10 seconds. Take another thread dump. Repeat one more time.
Inspect the thread dumps and see which threads are stuck at the same place, or processing the same request.
This is a manual way of doing it, but often useful.
Use ps -eL or top -H -p <pid>, or if you need to see and monitor in real time, run top (then press Shift-H), to get the lightweight processes (LWPs, aka threads) associated with the Java process.
root@xxx:/# ps -eL
PID LWP TTY TIME CMD
1 1 ? 00:00:00 java
1 7 ? 00:00:01 java
1 8 ? 00:07:52 java
1 9 ? 00:07:52 java
1 10 ? 00:07:51 java
1 11 ? 00:07:52 java
1 12 ? 00:07:52 java
1 13 ? 00:07:51 java
1 14 ? 00:07:51 java
1 15 ? 00:07:53 java
…
1 164 ? 00:00:02 java
1 166 ? 00:00:02 java
1 169 ? 00:00:02 java
Note: LWP = lightweight process. In Linux, a thread is associated with a process so that it can be managed in the kernel; an LWP shares files and other resources with the parent process.
Now let us see the threads that are taking the most time:
1 8 ? 00:07:52 java
1 9 ? 00:07:52 java
1 10 ? 00:07:51 java
1 11 ? 00:07:52 java
1 12 ? 00:07:52 java
1 13 ? 00:07:51 java
1 14 ? 00:07:51 java
1 15 ? 00:07:53 java
jstack is a JDK utility that prints the Java stack; it prints threads in the form shown further below.
Familiarize yourself with the other cool JDK tools as well (jcmd, jstat, jhat, jmap, jstack, etc.; see https://docs.oracle.com/javase/8/docs/technotes/tools/unix/)
jstack -l <process id>
The nid (native thread id) in the stack trace is the one that corresponds to the LWP in Linux (https://gist.github.com/rednaxelafx/843622):
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x00007fc21801f000 nid=0x8 runnable
The nid is given in hex, so we convert the thread ids taking the most time:
8, 9, 10, 11, 12, 13, 14, 15 in decimal = 8, 9, A, B, C, D, E, F in hex.
(Note that this particular stack was taken from Java in a Docker container, with a convenient process id of 1.)
Let us look at the threads with these ids:
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x00007fc21801f000 nid=0x8 runnable
"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x00007fc218020800 nid=0x9 runnable
"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x00007fc218022800 nid=0xa runnable
"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x00007fc218024000 nid=0xb runnable
"GC task thread#4 (ParallelGC)" os_prio=0 tid=0x00007fc218026000 nid=0xc runnable
"GC task thread#5 (ParallelGC)" os_prio=0 tid=0x00007fc218027800 nid=0xd runnable
"GC task thread#6 (ParallelGC)" os_prio=0 tid=0x00007fc218029800 nid=0xe runnable
"GC task thread#7 (ParallelGC)" os_prio=0 tid=0x00007fc21802b000 nid=0xf runnable
All GC-related threads; no wonder it was taking a lot of CPU time. But is GC really the problem here?
Use the jstat (not jstack!) utility for a quick check on GC:
jstat -gcutil <pid>
This is a kind of hacky way, but it seems like you could fire the application up in a debugger, and then suspend all the threads, and go through the code and find out which one isn't blocking on a lock or an I/O call in some kind of loop. Or is this like what you've already tried?
An option you could consider is querying your threads for the answer from within the application. Via the ThreadMXBean you can query the CPU usage of threads from within your Java application and get the stack traces of the offending thread(s).
The ThreadMXBean option allows you to build this kind of monitoring into your live application. It has negligible impact and has the distinct advantage that you can make it do exactly what you want.
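As a rough sketch of that approach (assuming thread CPU time measurement is supported and enabled on your JVM; class and variable names here are illustrative), the standard java.lang.management API can find the busiest thread from inside the process:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class TopCpuThread {

    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long worstId = -1;
        long worstCpu = -1;
        // find the thread with the highest accumulated CPU time so far
        for (long id : threads.getAllThreadIds()) {
            long cpu = threads.getThreadCpuTime(id); // nanoseconds, or -1 if unavailable
            if (cpu > worstCpu) {
                worstCpu = cpu;
                worstId = id;
            }
        }
        ThreadInfo info = threads.getThreadInfo(worstId, 20); // name, state and up to 20 frames
        if (info != null) {
            System.out.println(info.getThreadName() + " used " + (worstCpu / 1000000) + " ms of CPU");
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}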
VisualVM is a good tool; try it, because it does this. Finding the offending thread(s) only points you in the general direction of why the process is consuming so much CPU.
However, if it's that obvious, I would go straight to using a profiler to find out why you are consuming so much CPU.
