Found a deadlock situation when using e.printStackTrace() and logback in different threads. The thread dumps are given below.
It seems to me that logback (used in thread AsyncAppender-Worker-Thread-1) is trying to acquire the lock on the PrintStream, which is already owned by the main thread's java.lang.Throwable$WrappedPrintStream.println. If that's the case, why does printStackTrace keep holding the lock on the PrintStream (it should release it once the printing is done)?
Thread dump For the main thread.
"main#1" prio=5 tid=0x1 nid=NA waiting
java.lang.Thread.State: WAITING
blocks AsyncAppender-Worker-Thread-1#831
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353)
at ch.qos.logback.core.AsyncAppenderBase.put(AsyncAppenderBase.java:139)
at ch.qos.logback.core.AsyncAppenderBase.append(AsyncAppenderBase.java:130)
at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)
at ch.qos.logback.classic.Logger.error(Logger.java:543)
at com.side.stdlib.logging.StdOutErrLog$2.print(StdOutErrLog.java:43)
at java.io.PrintStream.println(PrintStream.java:823)
- locked <0x1183> (a com.side.stdlib.logging.StdOutErrLog$2)
at java.lang.Throwable$WrappedPrintStream.println(Throwable.java:749)
at java.lang.Throwable.printEnclosedStackTrace(Throwable.java:698)
at java.lang.Throwable.printStackTrace(Throwable.java:668)
at java.lang.Throwable.printStackTrace(Throwable.java:644)
at java.lang.Throwable.printStackTrace(Throwable.java:635)
at com.side.SidekApi.sideAPIExecution(SidekApi.java:175)
Thread dump for the thread AsyncAppender-Worker-Thread-1
"AsyncAppender-Worker-Thread-1#831" daemon prio=5 tid=0xe nid=NA waiting for monitor entry
java.lang.Thread.State: BLOCKED
waiting for main#1 to release lock on <0x1183> (a com.side.stdlib.logging.StdOutErrLog$2)
at java.io.PrintStream.write(PrintStream.java:478)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at ch.qos.logback.core.joran.spi.ConsoleTarget$2.write(ConsoleTarget.java:55)
at ch.qos.logback.core.encoder.LayoutWrappingEncoder.doEncode(LayoutWrappingEncoder.java:135)
at ch.qos.logback.core.OutputStreamAppender.writeOut(OutputStreamAppender.java:194)
at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:219)
at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103)
at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
at ch.qos.logback.core.AsyncAppenderBase$Worker.run(AsyncAppenderBase.java:226)
The situation seems a bit similar to https://bugs.openjdk.java.net/browse/JDK-6719464, but there is no answer there.
The main thread can't finish because the async appender's blocking queue is full: it is waiting to deposit its log entry, and since that thread is WAITING we know it has released the queue's internal lock, but it still holds the monitor on the PrintStream (taken inside printStackTrace). The logback worker thread, which drains the queue and writes to the console, is BLOCKED trying to acquire that same PrintStream monitor. The queue can never drain and the PrintStream lock is never released, so the two threads are deadlocked.
A minimal fix that avoids code changes could be swapping out the console appender for one that doesn't need to acquire a lock on the PrintStream.
In any case, needing to take the lock on the PrintStream probably reduces the benefit of logging asynchronously. The long-term fix will involve replacing the printlns with calls to a logger (such as SLF4J).
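For illustration, a minimal sketch of that long-term fix (class and method names are invented, and this is only one way to do it): pass the throwable straight to the logger instead of calling e.printStackTrace(), so the stack trace is formatted by the appender and the logging path never re-enters the redirected System.err PrintStream.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class SideApiCaller {
        private static final Logger log = LoggerFactory.getLogger(SideApiCaller.class);

        void callSideApi() {
            try {
                // ... the call that currently ends in e.printStackTrace() ...
            } catch (Exception e) {
                // The appender formats the stack trace itself; no System.err monitor is taken here.
                log.error("side API execution failed", e);
            }
        }
    }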
Related
I have an application with 1 writer thread and 8 reader threads accessing a shared resource, which is behind a ReentrantReadWriteLock. It froze for about an hour, producing no log output and not responding to requests. This is on Java 8.
Before killing it someone took thread dumps, which look like this:
Writer thread:
"writer-0" #83 prio=5 os_prio=0 tid=0x00007f899c166800 nid=0x2b1f waiting on condition [0x00007f898d3ba000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000002b8dd4ea8> (a java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
Reader:
"reader-1" #249 daemon prio=5 os_prio=0 tid=0x00007f895000c000 nid=0x33d6 waiting on condition [0x00007f898edcf000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000002b8dd4ea8> (a java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
This looks like a deadlock, however there are a couple of things that make me doubt that:
I can't find another thread that could possibly be holding the same lock
Taking a thread dump 4 seconds later yields the same result, but all threads now report parking to wait for <0x00000002a7daa878>, which is different from the 0x00000002b8dd4ea8 in the first dump.
Is this a deadlock? I see that there is some change in the threads' state, but it could only be internal to the lock implementation. What else could be causing this behaviour?
It turned out it was a deadlock. The thread holding the lock was not reported as holding any locks in the thread dump, which made it difficult to diagnose.
The only way to understand that was to inspect a heap dump of the application. For those interested in how, here's the process step-by-step:
A heap dump was taken at roughly the same time as the thread dumps.
I opened it using Java VisualVM, which comes with JDK.
In the "Classes" view I filtered by the class name of the class that contains the lock as a field.
I double-clicked on the class to be taken to the "Instances" view
Thankfully, there were only a few instances of that class, so I was able to find the one that was causing problems.
I inspected the ReentrantReadWriteLock object kept in a field in the class. In particular the sync field of that lock keeps its state - in this case it was ReentrantReadWriteLock$FairSync.
Its state property was 65536. This encodes both the shared and exclusive hold counts of the lock. The shared hold count is stored in the upper 16 bits of the state and is retrieved as state >>> 16. The exclusive hold count is in the lower 16 bits and is retrieved as state & ((1 << 16) - 1). From this we can see that there is 1 shared hold and 0 exclusive holds on the lock.
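As a quick check, the same arithmetic can be done in a couple of lines (this just reproduces the calculation for the value seen in the heap dump; it is not code from the application):

    public class RwLockStateDecode {
        public static void main(String[] args) {
            int state = 65536;                             // value of sync.state seen in the heap dump
            int sharedCount = state >>> 16;                // read (shared) holds   -> 1
            int exclusiveCount = state & ((1 << 16) - 1);  // write (exclusive) holds -> 0
            System.out.println(sharedCount + " shared hold(s), " + exclusiveCount + " exclusive hold(s)");
        }
    }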
You can see the threads waiting for the lock in the head field of the sync object. It is a queue of nodes; each node's thread field contains a waiting thread, and next points to the next node in the queue. Going through it I found the writer-0 thread and 7 of the 8 reader-n threads, confirming what we know from the thread dump.
The firstReader field of the sync object contains the thread that has acquired the read lock. From the comment in the code: "firstReader is the first thread to have acquired the read lock. firstReaderHoldCount is firstReader's hold count. More precisely, firstReader is the unique thread that last changed the shared count from 0 to 1, and has not released the read lock since then; null if there is no such thread."
In this case the thread holding the lock was one of the reader threads. It was blocked on something entirely different, which would have required one of the other reader threads to make progress. Ultimately it was caused by a bug in which a reader thread would not properly release the lock and kept it forever. I found that by analyzing the code and by adding tracking and logging of when the lock was acquired and released.
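To illustrate the kind of bug described (a hypothetical reconstruction, not the application's code): if anything between lock() and unlock() throws or returns early, the read lock leaks and its hold count never drops back to 0, which is exactly the state seen in the heap dump.

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class SharedResource {
        private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock(true);

        Object readBuggy() {
            rw.readLock().lock();
            Object result = doRead();      // if this throws, the read lock is never released
            rw.readLock().unlock();
            return result;
        }

        Object readFixed() {
            rw.readLock().lock();
            try {
                return doRead();
            } finally {
                rw.readLock().unlock();    // always released, even on exception or early return
            }
        }

        private Object doRead() { return new Object(); }
    }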
Is this a deadlock?
I don't think this is evidence of a deadlock. At least, not in the classic sense of the term.
The stack dump shows two threads waiting on the same ReentrantReadWriteLock. One thread is waiting to acquire the read lock. The other is waiting to acquire the write lock.
Now if no thread currently held any locks, then one of these threads would be able to proceed.
If some other thread currently held the write lock, that would be sufficient to block both of these threads. But that isn't a deadlock. It would only be a deadlock if that third thread was itself waiting on a different lock ... and there was a circularity in the blocking.
So what about the possibility of these two threads blocking each other? I don't think that is possible. The reentrancy rules in the javadocs allow a thread that holds the write lock to acquire the read lock without blocking. Likewise, it can acquire the write lock again if it already holds it.
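A small demo of those reentrancy rules (standard ReentrantReadWriteLock behaviour, not code from the question):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class ReentrancyDemo {
        public static void main(String[] args) {
            ReentrantReadWriteLock rw = new ReentrantReadWriteLock(true); // fair, like the FairSync in the dumps
            rw.writeLock().lock();       // acquire the write lock
            rw.writeLock().lock();       // reentrant write acquisition: does not block
            rw.readLock().lock();        // downgrade: read lock while holding the write lock, does not block
            rw.writeLock().unlock();
            rw.writeLock().unlock();
            System.out.println("read holds still held: " + rw.getReadHoldCount()); // 1
            rw.readLock().unlock();
        }
    }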
The other piece of evidence is that things have changed in the thread dump you took a bit later. If there was a genuine deadlock, there would be no change.
If it is not a deadlock between (just) these two threads, what else could it be?
One possibility is that a third thread is holding the write lock (for a long time) and that is gumming things up. Just too much contention on this readwrite lock.
If the (assumed) third thread is using tryLock, it is possible that you have a livelock ... which could explain the "change" evidence. But on the flip-side, that thread should have been parked too ... which you say that you don't see.
Another possibility is that you have too many active threads ... and the OS is struggling to schedule them to cores.
But this is all speculation.
I have extracted a jstack dump of my container process and got the threads running there, with the following distribution grouped by Thread.State:
count thread state
67 RUNNABLE
1 TIMED_WAITING (on object monitor)
8 TIMED_WAITING (parking)
4 TIMED_WAITING (sleeping)
3 WAITING (on object monitor)
17 WAITING (parking)
For the runnable threads I have the following description:
"http-bio-8080-exec-55" daemon prio=10 tid=0x000000002cbab300 nid=0x642b in Object.wait() [0x00002ab37ad11000]
java.lang.Thread.State: RUNNABLE
at com.mysema.query.jpa.impl.JPAQuery.<init>(JPAQuery.java:44)
at net.mbppcb.cube.repository.TransactionDaoImpl.findByBusinessId(TransactionDaoImpl.java:73)
at sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:155)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
...
The number of threads in RUNNABLE state shown above rises over time, and the application seems to be hanging. If they are supposed to be blocked, shouldn't they be in state BLOCKED? Or should they be in WAITING state? It's strange to have RUNNABLE threads that are in Object.wait(), isn't it?
Update 1
I can see in the documentation:
A thread in the runnable state is executing in the Java virtual
machine but it may be waiting for other resources from the operating
system such as processor.
How can I figure out what the thread is waiting for?
This seems like a class initialization deadlock.
JPAQuery constructor is waiting for the initialization of a dependent class, probably JPAProvider:
public JPAQuery(EntityManager em) {
super(em, JPAProvider.getTemplates(em), new DefaultQueryMetadata());
}
Such deadlocks may be caused by a typical bug where a subclass is referenced from a static initializer. If you share the details of the other thread stacks, we'll likely find out which thread holds the class lock.
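A hypothetical minimal reproduction of that pattern (invented class names, and whether it actually deadlocks depends on timing): the superclass's static initializer references the subclass, so a thread initializing Parent needs Child's class-initialization lock while a thread initializing Child needs Parent's.

    class Parent {
        static final Object VALUE = Child.make();   // superclass init references the subclass
    }

    class Child extends Parent {
        static Object make() { return new Object(); }
    }

    public class InitDeadlockDemo {
        public static void main(String[] args) {
            new Thread(() -> { Object v = Parent.VALUE; }, "t1").start();  // triggers Parent initialization
            new Thread(() -> Child.make(), "t2").start();                  // triggers Child initialization
            // With unlucky timing, t1 holds Parent's init lock and waits for Child's,
            // while t2 holds Child's init lock and waits for Parent's. Both hang,
            // and (as noted above) may still be reported as RUNNABLE.
        }
    }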
Why is the thread in RUNNABLE state then?
Well, it's a quirk inside the HotSpot JVM. The class initialization procedure is implemented in the VM runtime, not in Java land, and the class lock is grabbed natively. That seems to be the reason why the thread state is not changed, though arguably this behavior should be fixed in the JVM as well.
The Oracle Thread.State documentation specifies that a thread in the blocked state is waiting for a monitor lock to enter a synchronized block/method, or to reenter a synchronized block/method after calling Object.wait.
It looks like none of the threads is in the blocked state.
If all the RUNNABLE threads are apparently blocked in database operations, I would suggest using database monitoring/diagnostic tools to explore the reason. After that, investigate your database code for problems such as uncommitted transactions or mishandled exceptions that leave resources unclosed.
Java thread dumps have probably given you all the information they can at this point - a pointer for where to start looking next.
"The number of threads in RUNNABLE state as shown above raises with
the time and it seems to be hanging."
The number of RUNNABLE threads will rise depending on how many threads were executing the method com.mysema.query.jpa.impl.JPAQuery.<init> at the time the thread dump was taken. This method is actually the constructor of the JPAQuery class, denoted by <init>. You would probably need to investigate the code in the constructor and the subsequent calls to the JPA implementation.
The Object.wait() merely calls Object.wait(0) (zero for no timeout).
The implementation of Object.wait(long) is:
public final native void wait(long timeout) throws InterruptedException;
All threads that are in a native frame are RUNNABLE, as the JVM does not know (does not "manage", hence it is a "native" frame) the state of the call.
In Java, a thread can be in one of several states:
NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED
However, when a thread is blocked on IO, its state is "RUNNABLE". How can I tell whether it is blocked on IO?
NEW: The thread is created but has not been processed yet.
RUNNABLE: The thread is occupying the CPU and processing a task. (It may not actually be running at a given moment, depending on the OS's resource distribution.)
BLOCKED: The thread is waiting for a different thread to release its lock in order to get the monitor lock. Java VisualVM shows this as "Monitor".
WAITING: The thread is waiting by using a wait, join or park method.
TIMED_WAITING: The thread is waiting by using a sleep, wait, join or park method. (The difference from WAITING is that a maximum waiting time is specified by the method parameter, so the waiting can be ended by the passage of time as well as by external changes.)
TERMINATED: A thread that has exited is in this state.
see also http://architects.dzone.com/articles/how-analyze-java-thread-dumps
Thread Dump
If you dump the Java thread stacks, you can find something like this:
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(Native Method)
or
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
and you can tell that Java is waiting for a response.
I suggest this tool, Java Thread Dump Analyser, or this plug-in, TDA.
ThreadMXBean
You can obtain more information using the ThreadMXBean:
http://docs.oracle.com/javase/7/docs/api/java/lang/management/ThreadMXBean.html
You can check the stack traces of the threads and see whether the last stack frame is in a specific method associated with blocking I/O (e.g. java.net.SocketInputStream.socketRead0).
This is not a clever way but it works.
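As a sketch of that approach (the class name is mine, and the set of "blocking IO" packages to check is only an example), you can dump the stack traces programmatically with ThreadMXBean and flag RUNNABLE threads whose top frame is a native socket/file read:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class IoBlockedThreadFinder {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                StackTraceElement[] stack = info.getStackTrace();
                if (stack.length == 0) continue;
                StackTraceElement top = stack[0];
                boolean looksIoBlocked = top.isNativeMethod()
                        && (top.getClassName().startsWith("java.net.")
                            || top.getClassName().startsWith("java.io."));
                if (info.getThreadState() == Thread.State.RUNNABLE && looksIoBlocked) {
                    System.out.println(info.getThreadName() + " appears blocked on IO at " + top);
                }
            }
        }
    }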
JProfiler supports the feature you need, details show at: WHAT'S NEW IN JPROFILER 3.1
I have a Java EE based application running on Tomcat, and I am seeing that all of a sudden the application hangs after running for a couple of hours.
I collected a thread dump from the application just before it hung and put it in TDA for analysis:
TDA (Thread Dump Analyzer) gives the following message for the above monitor:
A lot of threads are waiting for this monitor to become available again.
This might indicate a congestion. You also should analyze other locks
blocked by threads waiting for this monitor as there might be much more
threads waiting for it.
And here is the stacktrace of the thread highlighted above:
"MY_THREAD" prio=10 tid=0x00007f97f1918800 nid=0x776a
waiting for monitor entry [0x00007f9819560000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.util.Hashtable.get(Hashtable.java:356)
- locked <0x0000000680038b68> (a java.util.Properties)
at java.util.Properties.getProperty(Properties.java:951)
at java.lang.System.getProperty(System.java:709)
at com.MyClass.myMethod(MyClass.java:344)
I want to know what the "waiting for monitor entry" state means. I would also appreciate any pointers to help me debug this issue.
One of your threads has acquired a monitor (an exclusive lock on an object). That means the thread is executing synchronized code and, for whatever reason, is stuck there, possibly waiting for other threads. The other threads cannot continue their execution because they encountered a synchronized block and asked for the lock (monitor), but they cannot get it until it is released by the thread holding it. So... probably a deadlock.
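A tiny demo of that situation (hypothetical code, not from your application): while the "holder" thread sits inside the synchronized block, a jstack dump will show the "waiter" thread as BLOCKED (on object monitor) / waiting for monitor entry.

    public class MonitorContentionDemo {
        private static final Object LOCK = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (LOCK) {
                    try { Thread.sleep(Long.MAX_VALUE); } catch (InterruptedException ignored) {}
                }
            }, "holder").start();

            new Thread(() -> {
                synchronized (LOCK) {          // reported as "waiting for monitor entry" in a dump
                    System.out.println("never reached while holder sleeps");
                }
            }, "waiter").start();
        }
    }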
Please look for this string in the whole thread dump:
- locked <0x00007f9819560000>
If you can find it, that thread is deadlocked with the thread "tid=0x00007f97f1918800".
Monitor = synchronized. You have lots of threads trying to get the lock on the same object.
Maybe you should switch from using a Hashtable to a HashMap.
This means that your thread is trying to take a lock (on the Hashtable), but some other thread is already accessing it and holds the lock. So your thread is waiting for the lock to be released. Check what your other threads are doing, especially the thread with tid="0x00007f9819560000".
In a Java threaddump I found the following:
"TP-Processor184" daemon prio=10 tid=0x00007f2a7c056800 nid=0x47e7 waiting for monitor entry [0x00007f2a21278000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.jackrabbit.core.state.SharedItemStateManager.getNonVirtualItemState(SharedItemStateManager.java:1725)
- locked <0x0000000682f99d98> (a org.apache.jackrabbit.core.state.SharedItemStateManager)
at org.apache.jackrabbit.core.state.SharedItemStateManager.getItemState(SharedItemStateManager.java:257)
"TP-Processor137" daemon prio=10 tid=0x00007f2a7c00f800 nid=0x4131 waiting for monitor entry [0x00007f2a1ace7000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.jackrabbit.core.state.SharedItemStateManager.getNonVirtualItemState(SharedItemStateManager.java:1725)
- locked <0x0000000682f99d98> (a org.apache.jackrabbit.core.state.SharedItemStateManager)
at org.apache.jackrabbit.core.state.SharedItemStateManager.getItemState(SharedItemStateManager.java:257)
The point here being that both threads have locked monitor <0x0000000682f99d98> (regardless of them now waiting for two different other monitors).
When looking at Thread Dump Analyzer, with that monitor being selected, it really says "Threads locking monitor: 2" at the bottom, and "2 Thread(s) locking". Please see https://lh4.googleusercontent.com/-fCmlnohVqE0/T1D5lcPerZI/AAAAAAAAD2c/vAHcDiGOoMo/s971/locked_by_two_threads_3.png for the screenshot, I'm not allowed to paste images here.
Does this mean thread dumps aren't atomic with respect to monitor lock information? I can't imagine this really being a locking bug in the JVM (1.6.0_26-b03).
A similar question has already been asked in Can several threads hold a lock on the same monitor in Java?, but to me the answer there missed the real point of multiple threads locking the same monitor, even though each of them may be waiting for some other one.
Update May 13th 2014:
The newer question Multiple threads hold the same lock? has code to reproduce the behaviour, and @rsxg has filed a corresponding bug report, https://bugs.openjdk.java.net/browse/JDK-8036823, along the lines of his answer here.
I don't think that your thread dump is saying that your two threads are "waiting for two different other monitors". I think it is saying that they are both waiting on the same monitor but at two different code points. The bracketed value may be a stack location or an object instance location or something. This is a great document about analyzing stack dumps.
Can several threads hold a lock on the same monitor in Java?
No. Your stack dump is showing two threads locked on the same monitor at the same code location but in different stack frames -- or whatever that value is; it seems OS-dependent.
Edit:
I'm not sure why the thread dump seems to be saying that both threads have a line locked, since that seems to only be allowed if they are in a wait() method. I noticed that you are linking to version 1.6.5. Is that really the version you are using? In version 2.3.6 (which may be the latest), line 1725 actually is a wait():
1722 synchronized (this) {
1723 while (currentlyLoading.contains(id)) {
1724 try {
1725 wait();
1726 } catch (InterruptedException e) {
You could also see this sort of stack trace even if it was an exclusive synchronized lock. For example, the following stack dump under Linux, produced by my stupid little test program, is for two threads locked on the same object from the same code line but in two different instances of the Runnable.run() method. Notice that the monitor entry numbers are different, even though it is the same lock and the same code line number.
"Thread-1" prio=10 tid=0x00002aab34055c00 nid=0x4874
waiting for monitor entry [0x0000000041017000..0x0000000041017d90]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00002aab072a1318> (a java.lang.Object)
at com.mprew.be.service.auto.freecause.Foo$OurRunnable.run(Foo.java:38)
- locked <0x00002aab072a1318> (a java.lang.Object)
at java.lang.Thread.run(Thread.java:619)
"Thread-0" prio=10 tid=0x00002aab34054c00 nid=0x4873
waiting for monitor entry [0x0000000040f16000..0x0000000040f16d10]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00002aab072a1318> (a java.lang.Object)
at com.mprew.be.service.auto.freecause.Foo$OurRunnable.run(Foo.java:38)
- locked <0x00002aab072a1318> (a java.lang.Object)
at java.lang.Thread.run(Thread.java:619)
On my Mac, the format is different but again the number after the "monitor entry" is not the same for the same line number.
"Thread-2" prio=5 tid=7f8b9c00d000 nid=0x109622000
waiting for monitor entry [109621000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <7f3192fb0> (a java.lang.Object)
at com.mprew.be.service.auto.freecause.Foo$OurRunnable.run(Foo.java:38)
- locked <7f3192fb0> (a java.lang.Object)
"Thread-1" prio=5 tid=7f8b9f80d800 nid=0x10951f000
waiting for monitor entry [10951e000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <7f3192fb0> (a java.lang.Object)
at com.mprew.be.service.auto.freecause.Foo$OurRunnable.run(Foo.java:38)
- locked <7f3192fb0> (a java.lang.Object)
This Oracle document describes that value as the following:
Address range, which gives an estimate of the valid stack region for the thread
You are probably running into a cosmetic bug in the stack trace routines in the JVM when analyzing heavily contended locks - it may or may not be the same as this bug.
The fact is that neither of your two threads has actually managed to acquire the lock on the SharedItemStateManager, as you can see from the fact that they are reporting waiting for monitor entry. The bug is that, further up the stack trace, in both cases they should report waiting to lock instead of locked.
The workaround when analyzing strange stack traces like this is to always check that a thread claiming to have locked an object is not also waiting to acquire a lock on the same object.
(Unfortunately this analysis requires cross-referencing the line numbers in the stack trace with the source code, since there is no relationship between the figures in the waiting for monitor entry header and the locked line in the stack trace. As per this Oracle document, the number 0x00007f2a21278000 in the line TP-Processor184" daemon prio=10 tid=0x00007f2a7c056800 nid=0x47e7 waiting for monitor entry [0x00007f2a21278000] refers to an estimate of the valid stack region for the thread. So it looks like a monitor ID but it isn't - and you can see that the two threads you gave are at different addresses in the stack.)
When a thread locks an object but then calls wait() on it, another thread can lock the same object. You should be able to see a number of threads "holding" the same lock, all waiting.
AFAIK, the only other occasion is when multiple threads have locked and waited and are ready to re-acquire the lock, e.g. on a notifyAll(). They are not waiting any more but cannot continue until they have obtained the lock again (only one thread at a time can do this).
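A minimal sketch of the first case (invented names; take a jstack dump while both threads are parked in wait()): each thread's dump will show it "waiting on" the monitor while an outer frame still reports "- locked" on the very same object, so two threads appear to hold the same lock at once.

    public class WaitersHoldingSameMonitor {
        public static void main(String[] args) {
            final Object shared = new Object();
            Runnable waiter = () -> {
                synchronized (shared) {
                    try {
                        shared.wait();            // releases the monitor while waiting
                    } catch (InterruptedException ignored) {
                    }
                }
            };
            new Thread(waiter, "Thread-0").start();
            new Thread(waiter, "Thread-1").start();
            // jstack <pid> now shows both threads waiting on, and "locked", the same object.
        }
    }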
"http-0.0.0.0-8080-96" daemon prio=10 tid=0x00002abc000a8800 nid=0x3bc4 waiting for monitor entry [0x0000000050823000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:195)
- locked <0x00002aadae12c048> (a java.util.WeakHashMap)
"http-0.0.0.0-8080-289" daemon prio=10 tid=0x00002abc00376800 nid=0x2688 waiting for monitor entry [0x000000005c8e3000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:195)
- locked <0x00002aadae12c048> (a java.util.WeakHashMap)
"http-0.0.0.0-8080-295" daemon prio=10 tid=0x00002abc00382800 nid=0x268e runnable [0x000000005cee9000]
java.lang.Thread.State: RUNNABLE
at org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:195)
- locked <0x00002aadae12c048> (a java.util.WeakHashMap)
In our thread dump, we have several threads locking the same monitor, but only one thread is runnable. It is probably because of lock contention; we have 284 other threads waiting for the lock. Multiple threads hold the same lock? says this only shows up in the thread dump because taking a thread dump is not an atomic operation.