How to find problematic thread in Eclipse remote debugger? - java

I have a web application running in a JBoss application server (but it is not JBoss-specific, so we could also assume it is Tomcat or any other server). Now I have the problem that one thread seems to be in a deadlock situation: it uses 100% CPU all the time. I have started the server with the debug port enabled and I can connect Eclipse to it. But the problem is: there are a lot of threads running. How can I find the right thread? I know the process id (from the Linux "top" command), but I think this will not help. Do I really have to open each thread separately and check what it is currently doing? Or is there a way to filter the threads for "most active" or something like that in Eclipse?

You can try to generate a thread dump (Ctrl+Break on Windows, or kill -3 <pid> on Linux, as shown in this thread).
Or you could attach a JConsole to the remote session (leaving Eclipse aside for now), monitor the threads, and generate a thread dump.
(Screenshot of a thread dump in JConsole: http://www.jroller.com/dumpster/resource/tdajconsole.png)

It seems you need to narrow things down: first identify which thread is eating the CPU, then see which code that thread is executing, and at that point you can remote debug.
I would suggest using something like JProfiler, jvisualvm, jconsole or a similar tool. Any of these will give you some insight into what the threads are doing and let you sort them by CPU cycles used, so you can find the offending thread quickly.
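If you can run a small snippet inside the target JVM (for example via a scratch servlet), the standard ThreadMXBean API can do the same sorting programmatically. A minimal sketch, assuming per-thread CPU timing is supported on your JVM; the class name is illustrative:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.Arrays;
    import java.util.Comparator;

    public class CpuHogFinder {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            if (!mx.isThreadCpuTimeSupported()) {
                System.err.println("This JVM does not support per-thread CPU timing");
                return;
            }
            // Sort all live threads by accumulated CPU time, highest first,
            // and print the top five with their current state.
            Arrays.stream(mx.getAllThreadIds())
                  .boxed()
                  .sorted(Comparator.comparingLong(id -> -mx.getThreadCpuTime(id)))
                  .limit(5)
                  .forEach(id -> {
                      ThreadInfo info = mx.getThreadInfo(id);
                      if (info != null) { // the thread may have died in the meantime
                          System.out.printf("%-40s cpu=%dms state=%s%n",
                                  info.getThreadName(),
                                  mx.getThreadCpuTime(id) / 1_000_000,
                                  info.getThreadState());
                      }
                  });
        }
    }

Also worth knowing: on Linux, top -H -p <pid> shows per-thread CPU usage with native thread IDs; convert the hot thread's ID to hex and you can match it against the nid=0x... field in a jstack thread dump to get the Java thread's name.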

Thread starts and fails to stop with Tomcat. What's happening?

I have a multi-threaded Java program running on a Tomcat server. While the threads are still running, some executing tasks, some still waiting for something to return, assume I stop the server all of a sudden. When I do, I get a warning on the Tomcat terminal saying a thread named X is still running and the server is being stopped, so this might lead to a memory leak. What is the OS actually trying to tell me here? Can someone help me understand this? I have run this program on my system several times and stopped the server abruptly three times, and I see this message whenever I do that. Have I ruined my server (I mean my system)? Did I do something very dangerous?
Please help.
Thanks in advance!
When I do, I get a warning on the Tomcat terminal saying a thread named X is still running and the server is being stopped, so this might lead to a memory leak. What is the OS actually trying to tell me here?
Tomcat (not the OS) is surmising from this extra thread that some part of your code forked a thread that may not be cleaning itself up properly. Its worry is that if such a thread is forked more than once and your process runs for a long time, it could fill up usable memory, which would cause the JVM to lock up or at least get very slow.
Have I ruined my server (I mean my system)? Did I do something very dangerous?
No, no. This is about the Tomcat process itself. It is worried that this memory leak may stop its ability to do its job as software, nothing more. Unless you see more than one such thread, or until you see memory problems with your server (use jconsole for this), I would take it only as a warning and a caution.
It sounds like your web server is forking processes which are not terminated when you stop the server. Those could lead to a memory leak because they represent processes that will never die unless you reboot or manually terminate them with the kill command.
I doubt that you will permanently damage your system, unless those orphaned processes are doing something bad, but that would be unrelated to you stopping the server. You should probably do something like ps aux | grep tomcat to find the leftover processes, and then:
1. Kill them so they don't take up more system resources.
2. Figure out why they persist when the server is stopped. This sounds like a misbehaving server.
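The usual fix on the application side is to make sure any threads the webapp starts are shut down when the context is destroyed. A minimal sketch, assuming the standard Servlet API; the class name and pool size are illustrative:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Registers a worker pool on startup and shuts it down when the
    // webapp stops, so no threads outlive the context.
    public class WorkerPoolListener implements ServletContextListener {
        private ExecutorService pool;

        public void contextInitialized(ServletContextEvent sce) {
            pool = Executors.newFixedThreadPool(4);
            sce.getServletContext().setAttribute("workerPool", pool);
        }

        public void contextDestroyed(ServletContextEvent sce) {
            pool.shutdown(); // stop accepting new tasks
            try {
                if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
                    pool.shutdownNow(); // interrupt stragglers
                }
            } catch (InterruptedException e) {
                pool.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
    }

Register the listener in web.xml, and Tomcat's "thread is still running" warning should go away for threads you own.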

Standalone Java App dies after a few days

We have a Java App that connects via RMI to another Java app.
There are multiple instances of this app running at the same time, and after a few days an instance just stops processing... the CPU is at 0, and I have an extra thread listening on a specific port that helps to shut down the app.
I can connect to that specific port, but the app doesn't do anything.
We're using Log4j to keep a log, and nothing is written to it, so there aren't any exceptions being thrown.
We also use c3p0 for the DB connections.
Anyone have ideas?
Thanks,
I would suggest starting with a thread dump of the affected application.
You need to see what is going on on a thread-by-thread basis. It could be that you have a long-running thread, or some other process that is blocking other work from being done.
Since you are running Linux, you can get your thread dump with the following command:
kill -3 <pid>
If you need help reading the output, please post it in your original question.
If nothing is shown from the thread dump, other alternatives can be looked at.
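Since the app already has a thread listening on a port, one further alternative (a sketch, not part of the original poster's setup) is to trigger a jstack-like dump from inside the process with the standard ThreadMXBean API:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Prints the state, held locks and stack trace of every live thread.
    // Note that ThreadInfo.toString() truncates very deep stacks.
    public class InProcessDump {
        public static void dump() {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.print(info);
            }
        }

        public static void main(String[] args) {
            dump();
        }
    }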
Hum... I would suggest using JMeter to stress the application and taking note of anything weird that might happen (such as memory leaks, deadlocks and the like). Also review the code for any exceptions that might interrupt the program (or System.exit() calls). Finally, if other people have access to the computer, it makes sense to check whether the process was killed manually somehow.

Use visualvm to find portal bottleneck

We have a liferay portal running on our intranet.
Everything works fine except the login. Very slow.
I'm thinking of using VisualVM to monitor the Tomcat threads to see what happens in my web server (like which hook it's calling, or whether it makes requests to our Active Directory...).
Can I do that with VisualVM? If not, is there another way?
I would look to see if you can increase the logging levels while you run the test, and check whether the logs show anything more specific. If the threads are simply waiting on a response from the Active Directory server, I doubt that VisualVM will show you much; one thing it might show you is that the thread is waiting.
I'd think about a network traffic monitor, like Fiddler.

Java process is hanging for no apparent reason

I am running a Java process with -Xmx2000m; the host OS is Linux (CentOS), JDK 1.6 update 22. Lately I have been experiencing weird behavior in the process: it becomes totally unresponsive for no apparent reason, with no logs, no errors, nothing. I am using jconsole to monitor it; heap and perm memory are not full, and threads and loaded classes are not leaking.
Explanation anyone?
I doubt anyone can give you an explanation since there are lots of possible reasons and not nearly enough information. However, I suggest that you jstack the process once it's hung to figure out what the threads are doing, and take it from there. It sounds like a deadlock or thrashing of some sort.
Do a thread dump. If you have access to the foreground process on Linux, use ctrl-\. Or use jstack to dump stack remotely. Or you can actually poke it through JMX via jconsole at MBeans/java.lang/Threading/Operations/dumpAllThreads.
Without knowing more about your app, it's hard to speculate about the cause. Presumably your threads are either a) blocked or b) exited. If they are blocked, they could be waiting for I/O on a database or other operation OR they could be waiting on a lock or monitor (deadlocked). If a deadlock exists, the thread dump will tell you which threads are deadlocked, which lock, and (in Java 6) annotate the stack with where locks have been taken. You can also search for deadlocks with the JMX method, available through jconsole at MBeans/java.lang/Threading/Operations/find[Monitor]DeadlockedThreads().
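The JMX deadlock check mentioned above can also be scripted with the standard java.lang.management API; a minimal in-process sketch (the class name is illustrative):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            long[] deadlocked = mx.findDeadlockedThreads(); // null when there are none
            if (deadlocked == null) {
                System.out.println("No deadlocked threads found");
                return;
            }
            for (ThreadInfo info : mx.getThreadInfo(deadlocked, true, true)) {
                if (info != null) {
                    System.out.printf("%s is blocked on %s held by %s%n",
                            info.getThreadName(),
                            info.getLockName(),
                            info.getLockOwnerName());
                }
            }
        }
    }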
Or your threads may have received unhandled exceptions and exited. Check out Thread's uncaughtExceptionHandlers or (better) use Executors in java.util.concurrent.
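To illustrate the first suggestion, here is a sketch of a default handler that makes otherwise-silent thread deaths visible (a real app would route this through its Log4j logger rather than stderr):

    public class ExceptionLogging {
        public static void main(String[] args) throws InterruptedException {
            // Any thread that dies from an unhandled exception now leaves a trace.
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(Thread t, Throwable e) {
                    System.err.println("Thread " + t.getName() + " died unexpectedly:");
                    e.printStackTrace();
                }
            });

            Thread worker = new Thread(new Runnable() {
                public void run() {
                    throw new IllegalStateException("boom"); // caught by the handler above
                }
            }, "worker-1");
            worker.start();
            worker.join();
        }
    }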
And finally, the other classic source of pauses in Java is GC. Run with -verbose:gc and other GC flags to see if it's doing a full GC collection. You can also turn this on dynamically in jconsole by flipping the flag at MBeans/java.lang/Memory/Attributes/Verbose.
Agree with aix, but would like to add a couple of recommendations.
1. Check your system. Run top to see whether the system itself is healthy, the CPU is not at 100% and memory is available. If not, fix this.
2. The application may freeze as a result of a deadlock. Check this.
OK, here are some updates I wanted to share:
There is an incompatibility between NPTL (Linux's Native POSIX Thread Library) and the Java 1.6+ JVM. A random bug causes the JVM to hang and eat up 100% CPU.
To work around it, set LD_ASSUME_KERNEL=2.4.1 before running the JVM (export LD_ASSUME_KERNEL=2.4.1). This disables NPTL: problem solved!
But for compatibility reasons, I'm still looking for a solution that uses NPTL.
Threads can be traced using jvisualvm and jconsole, and deadlocks can be ruled out too. Note that there are several network services, each with separate thread pools, and they all become unreachable.
Check the jvisualvm view of the process right before the crash.
(Screenshot: http://www.jadyounan.com/wp-content/uploads/2010/12/process.png)
Could you elaborate more on what you are doing? 2000m for the heap is rather a lot.

Zombie threads eating my brainz (J2EE, Tomcat, Hibernate, Quartz)

It is Hallowe'en after all.
Here's the problem: I'm maintaining some old-ish J2EE code, using Quartz, in which I'm running out of threads. jconsole tells me that there are just short of 60K threads when it goes pear-shaped, of which about 100 (!!) are actually running. Intuition and some googling (see also here) suggest that what's happening is something (I'm betting Quartz) is creating unmanaged threads that never get cleaned up.
Several subquestions:
Is there a tool that I can use easily to trace thread creation, so I can be certain the issue is really Quartz?
Most everything I've found about similar problems references WebLogic; is this a false lead for Tomcat?
Does anyone have a known solution?
It's been years since I did J2EE, so I wouldn't be too surprised if this is something that can be solved simply.
Update: It's clearly creating threads without bound; see this plot from jconsole.
Try to increase the logging level of org.quartz.simpl.SimpleThreadPool to debug to get more information.
If that does not work, try a logging listener: Quartz has a JobListener interface, which is described in its tutorial. A listener can help you trace job execution; maybe jobs just don't finish and get deadlocked.
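For example, a tracing listener makes jobs that never finish stand out in the log. A sketch assuming the Quartz 1.x API (JobDetail.getFullName(); in Quartz 2.x you would use getKey() and register via getListenerManager() instead of addGlobalJobListener):

    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.quartz.JobListener;

    // Logs the start and end of every job, so a job that starts but
    // never ends (stuck or deadlocked) is easy to spot.
    public class TracingJobListener implements JobListener {
        public String getName() {
            return "tracingJobListener";
        }

        public void jobToBeExecuted(JobExecutionContext context) {
            System.out.println("START " + context.getJobDetail().getFullName());
        }

        public void jobExecutionVetoed(JobExecutionContext context) {
            System.out.println("VETO  " + context.getJobDetail().getFullName());
        }

        public void jobWasExecuted(JobExecutionContext context, JobExecutionException e) {
            System.out.println("END   " + context.getJobDetail().getFullName()
                    + (e != null ? " with exception: " + e.getMessage() : ""));
        }
    }

Register it with scheduler.addGlobalJobListener(new TracingJobListener()) so it sees every job.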
Configure org.quartz.threadPool.threadCount to stop running out of threads.
update:
Also, you might want to take a thread dump and look at the thread stats. VisualVM has a plugin called TDA, or you can use the Thread Dump Analyzer directly.
Just in case, check the Quartz version to see whether there is a known bug in it.
Have you had a look with jvisualvm? It gives some more information.
Also, get stack traces to see what the threads are actually waiting on. You might have an aha moment right there.
