Bash has a useful built-in command, trap. It intercepts signals such as SIGHUP, SIGTERM, etc. sent to a process (SIGKILL cannot be trapped).
So... we have a problem: Tomcat sometimes dies for no visible reason, and without any helpful information in the log files.
My idea is to do the Java analog of trap: collect a jstack dump before the JVM running Tomcat dies.
How can I do this in Java? Please note that I'm not a Java programmer.
Thanks for any tips.
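For context, here is roughly what I have in mind as a shell wrapper (paths and file names are just placeholders; a shell trap only fires for signals delivered to the wrapper itself, it cannot catch SIGKILL, and it will not fire if the JVM dies on its own):

#!/bin/bash
# Hypothetical wrapper: start Tomcat and dump its stack if the wrapper
# receives a termination signal, then pass the shutdown along to the JVM.
CATALINA_HOME=${CATALINA_HOME:-/opt/tomcat}   # assumed install location

"$CATALINA_HOME/bin/catalina.sh" run &        # run Tomcat in the foreground mode, backgrounded here
JVM_PID=$!                                    # assumes catalina.sh run ends up as the JVM process;
                                              # otherwise find the real PID with pgrep -f Bootstrap

dump_stack() {
    echo "Signal received, dumping stack of PID $JVM_PID" >&2
    jstack "$JVM_PID" > "/tmp/jstack-$JVM_PID-$(date +%s).txt" 2>&1
    kill "$JVM_PID"                           # forward the shutdown to the JVM
}
trap dump_stack HUP INT TERM

wait "$JVM_PID"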
The JVM (Oracle's at least) already installs signal handlers and translates signals into exceptions and other useful things. See http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-VM/html/signals.html.
Usually when Tomcat dies without logs, it's a symptom of running out of stack or heap space (Tomcat runs in the same JVM as the web apps, and a misbehaved app can crash the server before the logs are flushed).
What version of Tomcat are you using? If you are using Tomcat 6+, you can disable log buffering completely so that the final messages are flushed as they are written. See http://tomcat.apache.org/tomcat-6.0-doc/logging.html.
For JULI, a negative bufferSize will force flushes after each write.
I have a multi-threaded Java program running on a Tomcat server. While the threads are still running, some executing tasks and some waiting for something to return, suppose I stop the server all of a sudden. When I do, I get a warning on the Tomcat terminal saying that a thread named X is still running and the server is being stopped, so this might lead to a memory leak. What is the OS actually trying to tell me here? Can someone help me understand this? I have run this program on my system several times and stopped the server abruptly three times, and I have seen this message every time. Have I ruined my server (I mean my system)? Did I do something very dangerous?
Please help.
Thanks in advance!
When I do, I get a warning on the Tomcat terminal saying that a thread named X is still running and the server is being stopped, so this might lead to a memory leak. What is the OS actually trying to tell me here?
Tomcat (not the OS) is surmising from this extra thread that some part of your code started a thread that may not be cleaning itself up properly. Its worry is that if this thread is created more than once and your process runs for a long time, it could fill up usable memory, which would cause the JVM to lock up or at least get very slow.
Have I ruined my server (I mean my system)? Did I do something very dangerous?
No, no. This is about the Tomcat process itself. It is worried that this memory leak may stop its ability to do its job as software, nothing more. Unless you see more than one such thread, or until you see memory problems on your server (use jconsole for this), I would take it only as a warning and a caution.
It sounds like your web server is forking processes which are not terminated when you stop the server. Those could lead to a memory leak because they represent processes that will never die unless you reboot or manually terminate them with the kill command.
I doubt that you will permanently damage your system, unless those orphaned processes are doing something bad, but that would be unrelated to you stopping the server. You should probably do something like ps aux | grep tomcat to find the leftover processes and then do the following (a rough command sketch follows this list):
Kill them so they don't take up more system resources.
Figure out why they are persisting when the server is stopped. This sounds like a misbehaving server.
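Roughly, the cleanup could look like this (the grep/pkill patterns are assumptions about how Tomcat shows up in the process list):

ps aux | grep '[t]omcat'                            # list leftover Tomcat JVMs; [t] keeps grep itself out of the output
pkill -f org.apache.catalina.startup.Bootstrap      # ask them to terminate (SIGTERM)
sleep 10
pkill -9 -f org.apache.catalina.startup.Bootstrap   # force-kill anything that ignored the polite request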
I have a Java 1.6 application that accesses a third-party native module through a JNI class provided as the interface. Recently we noticed that a SEGFAULT is occurring in the native module and is causing our application to crash. Is it possible to catch and handle this event, at least to log it properly before dying?
I tried both Java techniques in the article from kjp's answer. Neither worked. Attempting to install a signal handler on 'SEGV' results in the exception:
Signal already used by VM: SEGV
The shutdown hook I installed simply failed to fire, presumably because of what the IBM article states:
Shutdown hooks will not be run if:
The Runtime.halt() method is called to terminate the JVM. Runtime.halt() is provided to allow a quick shutdown of the JVM.
The -Xrs JVM option is specified.
The JVM exits abnormally, such as an exception condition or forced abort generated by the JVM software.
If all you want to do is log and notify, you can write a script that runs your application. When the application dies, the script can detect whether it terminated normally, pick up the hs_err_pidXXXX.log file (which has all the crash/SEGV information), mail it to someone, and restart the application if you want.
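A minimal sketch of such a watchdog script, assuming the app is started with java -jar and that a mail command is available (names and paths are placeholders):

#!/bin/bash
# Run the app in a loop; on abnormal exit, mail the newest hs_err file and restart.
APP_DIR=/opt/myapp
ADMIN=ops@example.com
cd "$APP_DIR" || exit 1

while true; do
    java -jar myapp.jar
    STATUS=$?
    if [ "$STATUS" -ne 0 ]; then
        REPORT=$(ls -t hs_err_pid*.log 2>/dev/null | head -n 1)
        {
            echo "Application exited with status $STATUS"
            [ -n "$REPORT" ] && cat "$REPORT"
        } | mail -s "JVM crash on $(hostname)" "$ADMIN"
    fi
    sleep 5    # brief pause before restarting
done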
What you need to do is run the faulty JNI code in another JVM and communicate with that JVM using RMI, JMS, or sockets. That way, when the library dies it won't bring down your main application, and you can restart it.
Based on several weeks of research at the time, as well as conversations with JVM engineers at a conference, this is not possible. The JVM will not let you install a SignalHandler on the SEGV signal.
I have a long-running Java program on a remote Ubuntu server, where I have root access. After some time, the usage of some CPU cores goes up to 100%.
How can I debug the JVM so that I can find out the cause of this, while it's still running?
One option is VisualVM, which is included in the JDK starting with Java 1.6. I have found it useful in some situations in the past.
You may connect to local applications or remote applications.
To connect to a remote app, run jstatd on your remote server, and then run VisualVM locally and enter your server's IP address. You should be provided with a list of running Java applications including the one you wish to explore. If you have any trouble listing your application, good documentation is available at the VisualVM website.
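For example, on the server you might start jstatd with the commonly used permissive policy (tighten this or firewall the port in production; the file path is an assumption):

cat > /tmp/jstatd.policy <<'EOF'
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
EOF
jstatd -J-Djava.security.policy=/tmp/jstatd.policy &

Then run jvisualvm locally and add the server's address as a remote host.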
Connect to the process with jvisualvm
This tool will allow you to connect to the running process and view all of the threads and their state. This could show you which thread is the culprit merely by looking at which one is awake all the time. You can do a thread dump to see the stack trace for each thread and see what each thread is doing.
It's a very powerful tool for just this kind of debugging. It is distributed with the JDK only, so you will need more than just the JVM runtime installed to have access. Be sure to install the same JDK version as the JVM you are inspecting.
You will need to have your X display forwarded for this to work.
If you want to see a stack trace on Linux, just issue kill -SIGQUIT <java-program-pid>. That is one way to see where the code is executing.
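To go a step further and tie the 100% CPU usage to a specific thread, one common approach looks like this (the PID and TID values are placeholders):

PID=12345                                # the Java process to inspect
top -b -n 1 -H -p "$PID" | head -n 20    # per-thread CPU usage; note the busiest thread's TID
TID=12399                                # example TID taken from the top output
NID=$(printf '0x%x' "$TID")              # thread dumps show native thread IDs in hex as nid=0x...
jstack "$PID" | grep -A 20 "nid=$NID"    # find that thread's stack in the dump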
We are having trouble with Tomcat 5.5, which stops at night on our production servers (Linux CentOS 4.8), and we have no idea why it stops...
There is nothing from Tomcat in catalina.out and nothing in any application log.
We tried different things to find out why the server stops:
configure Tomcat to be able to generate a core dump
instrument the System.exit() method with Javassist to find out whether it was called
add a shutdown hook to the JVM (with Runtime.getRuntime().addShutdownHook())
None of them worked: we have no core dump, and neither System.exit() nor the shutdown hook was called.
My conclusion is:
The VM is not being terminated properly but crashes without any log.
Any ideas, or logs to read, to find out why Tomcat stops?
1) Make sure you know where stderr is redirected and check if anything got printed there.
2) Check the memory limits on Tomcat and how much free memory the system has. Review the Linux system logs under /var/log to see whether anything suspicious happened around that time; for example, the kernel can kill a process (almost) without a trace if the system is running low on memory. A few quick checks are sketched below.
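Commands for those checks (log locations vary by distribution; /var/log/messages is the usual place on CentOS):

free -m                                                    # current memory and swap usage
dmesg | grep -iE 'out of memory|killed process'            # kernel messages about the OOM killer
grep -iE 'out of memory|killed process' /var/log/messages  # same thing in the syslog, if readable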
We've run 5.5 in production for years and never had any unexplained shutdowns, FWIW.
This worked for me.
As suggested in other answers here, I checked the system logs in /var/log/messages, but I got permission denied. So I used the dmesg command instead and found this in the output:
"Out of memory: Kill process 14606 (java) score 106 or sacrifice child".
In the output I also noticed that free swap was 0 K, and I ran the top command to confirm it. So somehow there was high memory usage, which caused the OS to kill my Tomcat process.
After spending hours on it, I finally found the reason.
ps -ef | grep tomcat showed that there were several Tomcat processes running for the same application. It seems that earlier Tomcat shutdowns had not completed successfully; for some reason the processes were not killed even after shutdown, and they were causing the high memory usage.
So I killed all the running Tomcat processes using kill, and the swap memory was freed.
I started Tomcat again and it worked fine. :)
Tomcat 7 has an option inside Catalina to prevent calls to System.exit() and the like (the security manager): http://ci.apache.org/projects/tomcat/tomcat7/docs/security-manager-howto.html .
Maybe there's a similar option for version 5.5; try the documentation. A sketch of the start command is below.
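For reference, on Unix the security manager is enabled by starting Catalina with the -security switch (shown as documented for Tomcat 7; whether 5.5 accepts the same switch is something to confirm in its docs):

$CATALINA_HOME/bin/catalina.sh start -security   # enforces conf/catalina.policy, which can block System.exit()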
There are options to redirect the output to the console you use to start Tomcat. On Unix-based systems this information is redirected to the logs; on Windows it stays in the console if it is not redirected.
Most probably there is a stack overflow error; this is typical Tomcat behavior when that happens. For example, you're trying to serialize beans with cyclic dependencies to JSON or XML (without handling the cycles).
Every time I've had this issue (several times), it has always been this one. All other stops are usually logged properly (OutOfMemoryError, etc.).
This type of stop leaves no trace anywhere.
I'm having problems with Jetty crashing intermittently; I'm using Jetty 6.1.24.
I'm running a Neo4j Spring MVC webapp. Jetty will stay running for approximately 1 hour, and then I have to restart it. It is running on a small Amazon EC2 instance, Debian with 1.7 GB of RAM.
I start Jetty using java -Xmx900m -server -jar start.jar
I connect to the server using PuTTY; when Jetty crashes, the PuTTY session disconnects and I cannot see what error caused the crash.
I would like to be able to see whether it is an error generated by Spring, but I'm not sure how to log the output from the Spring app with Jetty. If it is Jetty or a memory issue, what would be the best way to monitor Jetty? I cannot recreate this on my local machine running Windows. What do you think would be the best way to approach this? Thanks.
This isn't really a programmer question; perhaps it'll be moved over to ServerFault.
You didn't specifically state which operating system you're using, but I'm hazarding a guess at some Linux distribution. You have two options for figuring out what's wrong:
Start your session in screen. Screen will live for as long as the actual machine is powered on, until you reboot the operating system (or you exit screen).
You start screen like this:
screen
You get a new prompt where you can start your program (cd foo, start Jetty, etc.). When you're happy and you need to go somewhere else, you can detach from the screen by hitting CTRL+A and then CTRL+D. You'll drop back to where you were before you invoked screen.
To get back to the screen, type screen -R, which resumes an existing screen. You should see Jetty again.
The nice thing is that if you lose the connection (or close PuTTY by accident, or whatever), you can use screen -list to get a list of running screens, then forcibly detach one with -D and reattach it to the current PuTTY session with -R. No harm done!
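Condensed, the screen workflow looks roughly like this (the session name is arbitrary; the detach step happens interactively at the keyboard):

screen -S jetty                          # start a named screen session
java -Xmx900m -server -jar start.jar     # run Jetty inside the session
# detach with CTRL+A then D; the session (and Jetty) keeps running
screen -list                             # later: list existing sessions
screen -D -R jetty                       # forcibly detach it elsewhere and reattach it here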
Use nohup. Nohup more or less detaches the process you're running from the console, so none of its output comes to the terminal. You start your program in the normal fashion, but you add the word nohup to your command.
For example:
nohup ls -l &
After ls -l is complete, your output is stored in nohup.out.
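Applied to the Jetty command from the question, it would look something like this (the output file name is arbitrary):

nohup java -Xmx900m -server -jar start.jar > jetty.out 2>&1 &   # keeps running after you log out; output goes to jetty.out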
When you say crash, do you mean the JVM segfaults and disappears? If that's the case, I'd check and make sure you aren't exhausting the machine's available memory. Java on Linux will crash when system memory gets so low that the JVM cannot allocate up to its maximum heap. For example, you've set the maximum JVM heap to 500 MB, of which it's using 250 MB at the moment, but the Linux OS only has 128 MB available. This produces unstable results and the JVM will segfault.
On Windows the JVM is better behaved in this scenario and throws an OutOfMemoryError when the system is running low on memory.
Validate how much system memory is available around the time of your crashes.
Verify if other processes on your box are eating up a lot of memory. Turn off anything that could be competing with the JVM.
Run jconsole and connect it to your JVM. That will tell you how memory is being used in your JVM process and give you a history to look back through when it does crash.
Eliminate any native code you might be loading into the JVM when doing this type of testing.
I believe Jetty has some native code to do high volume request processing. Make sure that's not being used. You want to isolate the crashes to Java and NOT some strange native lib. If you take out the native stuff and find it works then you have your answer as to what's causing it. If it continues to crash then it very well could be what I'm describing.
You can force the JVM to allocate all of its heap at startup with -Xms900m; that makes sure the JVM doesn't have to fight other processes for memory later. Once it has the full -Xmx amount allocated, it won't crash for that reason. It's not a solution, but you can easily test the theory this way.
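As a sketch of that test, using the start command from the question plus a quick look at what the rest of the system has left:

free -m                                        # how much RAM and swap the OS still has available
java -Xms900m -Xmx900m -server -jar start.jar  # -Xms equal to -Xmx: claim the full heap up front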
When you start java, redirect both outputs (stdout and stderr) to a file:
Using Bash:
java -Xmx900m -server -jar start.jar > stdout.txt 2> stderr.txt
After the crash, inspect those files.
If the crash is due to a signal (like SEGV, a segmentation fault), the JVM should dump a file in the directory from which you started java. For the Sun VM (HotSpot), it's something like hs_err_pid12121.log (where 12121 is the process ID).
PuTTY disconnecting STRONGLY hints that the server is running out of memory and has started shutting down processes left and right. It is probably your Jetty instance growing too big.
The easiest thing to do now is to add 1-2 GB more swap space and try again. Also note that you can use jvisualvm to attach to the Jetty instance and get runtime information directly.
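A rough sketch of adding 2 GB of swap on a Debian box (the file path and size are assumptions; add an /etc/fstab entry if you want it to survive reboots):

sudo dd if=/dev/zero of=/swapfile bs=1M count=2048   # create a 2 GB file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -m                                              # confirm the new swap shows up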