We have a Java App that connects via RMI to another Java app.
There are multiple instances of this app running at the same time, and after a few days an instance just stops processing. CPU usage drops to 0, and I have an extra thread listening on a specific port that helps shut down the app.
I can connect to the specific port but the app doesn't do anything.
We're using Log4j for logging, and nothing is written, so no exceptions appear to be thrown.
We also use c3p0 for the DB connections.
Does anyone have any ideas?
Thanks.
I would suggest starting with a thread dump of the affected application.
You need to see what is going on on a thread-by-thread basis. It could be that you have a long-running thread, or some other process, that is blocking other work from being done.
Since you are running Linux, you can get your thread dump with the following command:
kill -3 <pid>
If you need help reading the output, please post it in your original question.
If nothing is shown from the thread dump, other alternatives can be looked at.
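If sending signals to the process is awkward in your environment, the same information can be collected from inside the JVM with standard APIs. Here is a minimal sketch, assuming you just want the stacks on standard output (the class name is only illustrative):

import java.util.Map;

// Hypothetical helper: prints a stack trace for every live thread,
// roughly what "kill -3" produces, using only standard JDK APIs.
public class ThreadDumper {

    public static void dumpAllThreads() {
        Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
            Thread t = entry.getKey();
            System.out.println("Thread \"" + t.getName() + "\" state=" + t.getState()
                    + " daemon=" + t.isDaemon());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }

    public static void main(String[] args) {
        dumpAllThreads();
    }
}

You could also call dumpAllThreads() from the extra thread that listens on your shutdown port, so you can trigger a dump over that connection whenever the app appears stuck.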
Hmm... I would suggest using JMeter to stress the application and take note of anything weird that might be happening (such as memory leaks, deadlocks, and the like). Also review the code for any exceptions that might interrupt the program (or System.exit() calls). Finally, if other people have access to the machine, it makes sense to check whether the process was killed manually somehow.
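If you suspect an uncaught exception or a stray System.exit() call, it can help to install a default uncaught-exception handler and a shutdown hook at startup, so a silent exit at least leaves a trace in the Log4j output. A rough sketch, assuming Log4j 1.x as mentioned in the question (the class name and messages are illustrative):

import org.apache.log4j.Logger;

// Illustrative startup code: log uncaught exceptions and JVM shutdown
// so that a silent exit leaves at least one trace in the log.
public class ExitDiagnostics {

    private static final Logger LOG = Logger.getLogger(ExitDiagnostics.class);

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                LOG.error("Uncaught exception in thread " + t.getName(), e);
            }
        });
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                LOG.warn("JVM is shutting down (System.exit or external signal)");
            }
        }));
    }
}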
I have a Java .jar file that I launch on an AWS instance in detached mode, so it keeps running when I exit the SSH session.
The app does some network work and is expected to run for days until it finishes its task.
I have added logging all over the app, including a log statement at the end of the main method. I also wrapped everything in a global try/catch and added logging to the catch block.
Still, after some days I log in over SSH and see that the app has just stopped running. No exceptions, and the main method did not complete, because the log statement at the end never triggered. It seems the process was simply killed in the middle of its work. Sometimes it runs for 5 hours, sometimes for 3-4 days without stopping.
I have no idea what could be the cause of this. I expect the Java process to run until it finishes, or until it crashes. Am I missing something?
Update:
It is an AWS t2.micro, I think, the free-tier one. It runs Ubuntu 18.04.3 LTS.
You need to monitor the server and the application. The first thing to look at is your instance's CloudWatch statistics for any CPU or memory spikes. If you find one, you know what you need to fix if you want to keep running your application on a micro instance. For further reading:
Monitoring Your Instances Using CloudWatch
Alternatively, you can collect and dump the Java process statistics regularly while the application is running. This can give insight into heap, stack, and CPU usage. Check this SO post for further details:
How do I monitor the computer's CPU, memory, and disk usage in Java?
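For the in-process option, the standard java.lang.management MXBeans are enough to log heap and thread statistics periodically; if the numbers climb steadily before the process disappears, memory pressure is a likely suspect on a t2.micro. A rough sketch (the one-minute interval and the use of standard output are arbitrary choices):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Rough sketch: periodically log heap usage and the live thread count
// using the standard management MXBeans.
public class ResourceMonitor {

    public static void start() {
        final MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                System.out.println("heap used=" + heap.getUsed() / (1024 * 1024) + " MB"
                        + ", committed=" + heap.getCommitted() / (1024 * 1024) + " MB"
                        + ", threads=" + ManagementFactory.getThreadMXBean().getThreadCount());
            }
        }, 0, 1, TimeUnit.MINUTES);
    }
}

Calling ResourceMonitor.start() early in main would leave a trail of measurements in the logs right up to the moment the process dies.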
I have a multi-threaded Java program running on a Tomcat server. While the threads are still running (some executing tasks, some still waiting for something to return, and so on), assume I stop the server all of a sudden. When I do, I get a warning on the Tomcat terminal saying that a thread named X is still running and the server is being stopped, so this might lead to a memory leak. What is the OS actually trying to tell me here? Can someone help me understand this? I have run this program on my system several times, I have stopped the server abruptly 3 times, and I see this message whenever I do that. Have I ruined my server (I mean my system)? Did I do something very dangerous?
Please help.
Thanks in advance!
When I do, I get a warning on the Tomcat terminal saying a thread named X is still running and the server is being stopped, so this might lead to a memory leak. What is the OS actually trying to tell me here?
Tomcat (not the OS) is surmising from this extra thread that some part of your code forked a thread that may not be cleaning itself up properly. It is thinking that maybe this thread gets forked more than once and that, if your process runs for a long time, it could fill up usable memory, which would cause the JVM to lock up or at least get very slow.
Have I ruined my server (I mean my system)? Did I do something very dangerous?
No, no. This is about the Tomcat process itself. It is worried that this memory leak may stop its ability to do its job as software, nothing more. Unless you see more than one leftover thread, or you see memory problems with your server (use JConsole for this), I would take it only as a warning and a caution.
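If your webapp does start its own threads, the usual fix is to stop them when the context is destroyed, for example from a ServletContextListener, so Tomcat has nothing left to warn about. A minimal sketch, assuming a single worker thread (the worker body and the 5-second timeout are placeholders):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Minimal sketch: start a worker thread with the webapp and interrupt it
// when the context is destroyed, so no application thread outlives a shutdown.
public class WorkerLifecycleListener implements ServletContextListener {

    private Thread worker;

    public void contextInitialized(ServletContextEvent event) {
        worker = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        // ... do one unit of background work here ...
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        break; // interrupted during sleep: exit the loop
                    }
                }
            }
        }, "background-worker");
        worker.start();
    }

    public void contextDestroyed(ServletContextEvent event) {
        worker.interrupt();    // ask the worker to stop
        try {
            worker.join(5000); // give it up to 5 seconds to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The listener still needs to be registered in web.xml (or annotated with @WebListener on Servlet 3.0+) for the container to call it.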
It sounds like your web server is forking processes which are not terminated when you stop the server. Those could lead to a memory leak because they represent processes that will never die unless you reboot or manually terminate them with the kill command.
I doubt that you will permanently damage your system, unless those orphaned processes are doing something bad, but that would be unrelated to you stopping the server. You should probably do something like ps aux | grep tomcat to find the leftover processes, and then:
1. Kill them so they don't take up more system resources.
2. Figure out why they are persisting when the server is stopped. This sounds like a misbehaving server.
We have a problem where our Java process hangs forever unless a kill -9 is issued against it.
The same process runs successfully in the other Solaris environments.
The Java process consists of a single thread that starts and ends after doing some processing on the data. From the logs and the data we can see that the code executed completely and all the data was processed, but if we run jps we always see that the process is still there.
We are using Ehcache with Spring for caching, and UCP for the connection pool.
On the DB side we have an Oracle RAC setup.
We took several jstack dumps and never see the process stuck in our own code, though from the thread dumps we can see there are a lot of UCP threads hanging there.
We also add a shutdown hook and remove it at the end, but for some reason the shutdown hook never seems to be called.
Due to project restrictions, we can't paste the code.
Can anyone please help?
My customer is facing the same problem with our installer hanging on Solaris. When the installer ran in debug mode, we realized that the Java process embedded in the installer is hanging. Please post if any of you have found an answer for it.
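One detail worth checking in this situation: the JVM only exits, and only runs shutdown hooks, once every non-daemon thread has finished, so if the UCP pool threads in the dump are non-daemon, returning from main is not enough. Below is a small diagnostic you could call at the very end of your processing (standard APIs only) to see what is still keeping the JVM alive; explicitly closing or destroying the connection pool before returning is then the usual cure.

// Diagnostic sketch: list the non-daemon threads that prevent the JVM from
// exiting after main() returns (shutdown hooks run only after they finish).
public class LiveThreadReport {

    public static void printNonDaemonThreads() {
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.isAlive() && !t.isDaemon() && t != Thread.currentThread()) {
                System.out.println("Non-daemon thread still alive: " + t.getName()
                        + " (state=" + t.getState() + ")");
            }
        }
    }
}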
I have a J2SE application running on a 1.5 Java VM on RHEL. One of the tasks of the application is to create 3 infinitely running user threads during startup. Their purpose is to check for requests of a particular type in a backend DB table and perform the corresponding operations.
As we observed, the long-running threads suddenly stop running, but the application is still alive and the JVM process can be seen in ps -ef | grep java.
Can someone shed light on why threads that are created to run in an infinite loop stop suddenly? Any ideas on how to detect this issue, and a possible resolution, would be of great help.
With Regards,
Krishna
I would suggest sending a Ctrl+Break to your app, dumping the threads and analysing the output. Perhaps your threads are waiting for some input (IO). Perhaps they're deadlocked. Perhaps they've exited with an uncaught exception. The thread dump will tell you what's going on (and it helps if you name your threads in advance so you can identify them in the dump).
Perhaps you have unhandled exceptions.
First of all, you should log all your thread activity (you can use log4j to achieve this).
You can also override the uncaughtException method of the ThreadGroup class and create alerts for when a thread dies due to an exception that has not been caught.
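A minimal sketch of that approach, assuming Log4j is available as suggested above (the class and thread names are illustrative):

import org.apache.log4j.Logger;

// Sketch: a ThreadGroup that logs any exception escaping one of its threads,
// so a worker dying silently at least leaves an ERROR entry in the log.
public class LoggingThreadGroup extends ThreadGroup {

    private static final Logger LOG = Logger.getLogger(LoggingThreadGroup.class);

    public LoggingThreadGroup(String name) {
        super(name);
    }

    public void uncaughtException(Thread t, Throwable e) {
        LOG.error("Worker thread " + t.getName() + " died with an uncaught exception", e);
        super.uncaughtException(t, e);
    }
}

You would then create the three polling threads in this group, for example (pollingTask being whatever Runnable does the DB polling):
new Thread(new LoggingThreadGroup("db-pollers"), pollingTask, "db-poller-1").start();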
I have a web application running in a JBoss application server (but it is not JBoss-specific, so we could also assume it is Tomcat or any other server). Now I have the problem that one thread seems to be in a deadlock situation; it uses 100% CPU all the time. I have started the server with the debug port enabled and I can connect Eclipse to it. But the problem is: there are a lot of threads running. How can I find the right thread? I know the process ID (from the Linux top command), but I think this will not help. Do I really have to open each thread separately and check what it is currently doing? Or is there a way to filter the threads for "most active" or something like that in Eclipse?
You can try and generate a thread dump (CTRL+Break as shown in this thread).
Or you could attach JConsole to the remote session (leaving Eclipse aside for now), monitor the threads, and generate a thread dump there.
(Screenshot of generating a thread dump in JConsole: http://www.jroller.com/dumpster/resource/tdajconsole.png)
It seems you need to narrow things down to the code that has the bug by first identifying which thread is eating the CPU, then which code is being executed by that thread; at that point you can remote debug.
I would suggest using something like JProfiler, jvisualvm, jconsole, or a similar tool. Any of these will give you some insight into what each thread is doing and should allow you to sort the threads by CPU cycles used, so you can find the offending thread quickly.
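If you would rather locate the hot thread from inside the JVM (ThreadMXBean only sees the JVM it runs in, so this would have to run inside the server, for example from a diagnostic servlet), per-thread CPU time can be compared to find the busiest thread and print its stack. A rough sketch (the output format is arbitrary, and getThreadCpuTime returns -1 where CPU-time measurement is disabled):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Rough sketch: report accumulated CPU time per live thread so the busiest one
// (the likely 100%-CPU culprit) stands out, then print its stack trace.
public class HotThreadFinder {

    public static void report() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long hottestId = -1;
        long hottestCpuNanos = -1;

        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            if (info == null) {
                continue; // thread finished in the meantime
            }
            long cpuNanos = threads.getThreadCpuTime(id);
            System.out.println(info.getThreadName() + " cpu=" + (cpuNanos / 1000000L) + " ms");
            if (cpuNanos > hottestCpuNanos) {
                hottestCpuNanos = cpuNanos;
                hottestId = id;
            }
        }

        if (hottestId != -1) {
            ThreadInfo hottest = threads.getThreadInfo(hottestId, Integer.MAX_VALUE);
            if (hottest != null) {
                System.out.println("Busiest thread: " + hottest.getThreadName());
                for (StackTraceElement frame : hottest.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }
}

Once you know the thread's name, you can find it in the Eclipse Debug view (or in a jstack dump) and set breakpoints in the code it is executing.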