I need a way to trigger a full GC from a Linux console script on Ubuntu.
I know this is extremely bad practice, but without going into too much detail, this keeps my server running. It is only meant for 1 or 2 days while I fix the actual problem, so I don't have to wake up in the night and perform manual GC through jconsole or jvisualvm.
The alternative is a mouse script that clicks the button every 3-4 hours or so, which is even worse.
Please help.
If you can have your application start a JMX server (which I believe is implied from your use of jconsole/jvisualvm), then you can invoke the Memory MBean's gc operation via command-line utilities.
Firstly you'll need some kind of command-line JMX client. I've used this one in the past for simple command-line invocations and it worked fine. (Edit: In fact I used it just now to test out the following command, and it invoked GC successfully on a local Tomcat process)
Then you'll need to work out the command to trigger garbage collection. I think this should work (you'll of course need to change hosts/ports/credentials as appropriate):
java -jar cmdline-jmxclient-X.X.jar - localhost:8081 java.lang:type=Memory gc
Finally, you can schedule invocation of this command via cron or equivalent.
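For example, a crontab entry along these lines would run it every 3 hours (the jar location and the full path to java are placeholders for your setup; keep the real version number in place of X.X):
0 */3 * * * /usr/bin/java -jar /opt/jmx/cmdline-jmxclient-X.X.jar - localhost:8081 java.lang:type=Memory gc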
Voila!
If you have the Oracle JVM 1.7, you can use jcmd to list the JVM PIDs, and then jcmd <pid> GC.run to trigger a garbage collection.
jcmd <pid> help will show you what other commands are available.
jcmd <pid> GC.run
Example:
jcmd 20350 GC.run
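If you want to drive this from cron, a rough sketch that looks up the PID by main class and then triggers a collection might be (the grep pattern your.main.Class is a placeholder for your application):
PID=$(jcmd -l | grep 'your.main.Class' | awk '{print $1}')
jcmd "$PID" GC.run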
It's not bad practice; it is impossible, even for the Java application being executed by the JVM. There is a gc() call available, but even that is only a hint to the JVM to run garbage collection. From the console, there is normally no way to influence the JVM while it is running.
Someone has asked this question for the Windows platform; see the question How to request JVM garbage collection (not from code) when run from Windows command-line
You might check out the JVM arguments for stack/heap sizes (both min and max). There are lots of tweaks you can do in that area but they are mostly specific to the JVM you are using.
JVM performance tuning for large applications
Related
I need to monitor when the GC (especially full GC) runs. It can be for a specific application or the whole JVM.
Note that it must be a terminal command; jvisualvm and similar apps don't suit my case.
I was thinking about using JMX, but I also did not find anything that suits me.
Note that one use of this command will be to monitor when Cassandra's GC runs (automatically, of course).
I know that this is not "best practice" but I would like to know if I can auto-restart Tomcat if my deployed app throws an OutOfMemoryError.
You can try to use the OnOutOfMemoryError JVM option
-XX:OnOutOfMemoryError="/yourscripts/tomcat-restart"
It is also possible to generate the heap dump for later analysis:
-XX:+HeapDumpOnOutOfMemoryError
Be careful when combining these two options. If you forcibly kill the process in "tomcat-restart", the heap dump might not be complete.
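Putting it together, a hypothetical launch might look like this (paths and the jar name are placeholders; for Tomcat you would typically put these flags into CATALINA_OPTS or JAVA_OPTS rather than a raw java command):
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/dumps \
     -XX:OnOutOfMemoryError="/yourscripts/tomcat-restart" \
     -jar yourapp.jar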
I know this isn't what you asked, but have you tried looking through a heap dump to see where you may be leaking memory?
Some very useful tools for tracking down memory leaks:
jdk/bin/jmap -histo:live pid
This will give you a histogram of all live objects currently in the JVM. Look for any odd object counts. You'll have to know your application pretty well to be able to determine what object counts are odd.
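One rough way to spot growth over time is to take two histograms a few minutes apart and diff them (the PID and the interval are placeholders, and jmap is assumed to be on your PATH):
jmap -histo:live 12345 | head -40 > histo-before.txt
sleep 300
jmap -histo:live 12345 | head -40 > histo-after.txt
diff histo-before.txt histo-after.txt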
jdk/bin/jmap -dump:live,file=heap.hprof pid
This will dump the entire heap of the JVM identified by pid. You can then use the great Eclipse Memory Analyzer to inspect it and find out who is holding on to references to your objects. Your two biggest friends in Eclipse Memory Analyzer are the histogram and right click -> references -> exclude weak/soft references to see what is referencing your object.
jconsole is of course another good tool.
Not easily, and definitely not through the JVM that just suffered the OutOfMemoryError. Your best bet would be some combination of a Tomcat status monitor coupled with cron scripts or similar scheduled system administration scripts: something to check the status of the server and automatically stop and restart the service if it has failed.
Unfortunately, when you kill the Java process, your script will keep a reference to the Tomcat ports (8080, 8005, 8009) and you will not be able to start it again from the same script. The only way it works for me is:
-XX:OnOutOfMemoryError="kill -9 %p" and then another cron job, monit, or something similar to ensure you have Tomcat running again (as sketched below).
%p is the JVM PID, something the JVM substitutes for you.
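A minimal watchdog for that second part might look like this (run it from cron every minute; the Tomcat path, script location, and the pgrep pattern are all assumptions about your setup):
#!/bin/sh
# tomcat-watchdog.sh: restart Tomcat if its JVM is no longer running
if ! pgrep -f 'org.apache.catalina.startup.Bootstrap' > /dev/null; then
    /usr/local/tomcat/bin/startup.sh
fi
and a crontab line such as:
* * * * * /usr/local/bin/tomcat-watchdog.sh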
Generally, no. The VM is in a bad state and cannot be completely trusted.
Typically, one can use a configurable wrapper process that starts and stops the "real" server VM you want. An example I've worked with is "Java Service Wrapper" from Tanuki Software http://wrapper.tanukisoftware.com/doc/english/download.jsp
I know there are others.
To guard against OOMs in the first place, there are ways to instrument modern VMs via interface beans to query the status of the heap and other memory structures. These can be used to, say, warn in a log or an email if some app specific operations are pushing some established limits.
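The MBean route is the cleaner one, but as a quick external approximation from the console you could also poll jstat and log when old-generation usage crosses a threshold. A sketch, with the PID, threshold, and log path all being assumptions:
#!/bin/sh
# Log a warning when old generation usage exceeds 90%
PID=12345
OLD=$(jstat -gcutil "$PID" | awk 'NR==2 {print int($4)}')   # column 4 (O) = old gen % used
if [ "$OLD" -ge 90 ]; then
    echo "$(date): old gen at ${OLD}% for pid $PID" >> /var/log/heap-watch.log
fi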
I use
-XX:OnOutOfMemoryError='pkill java;/usr/local/tomcat/bin/start.sh'
What about something like this? -XX:OnOutOfMemoryError="exec \`ps --no-heading -p $$ -o cmd\`"
I'm having problems with Jetty crashing intermittently; I'm using Jetty 6.1.24.
I'm running a neo4j Spring MVC webapp. Jetty will stay running for approx 1 hour and then I have to restart it. It is running on a small Amazon EC2 instance, Debian, with 1.7 GB of RAM.
I start Jetty using java -Xmx900m -server -jar start.jar
I am connecting to the server using PuTTY. When Jetty crashes the PuTTY session disconnects, so I cannot see what error caused the crash.
I would like to be able to see if it is an error generated by Spring, but I'm not sure how to log the output from the Spring app with Jetty. Or is it Jetty itself, or a memory issue? What would be the best way to monitor Jetty? I cannot recreate this on my local machine running Windows. What do you think would be the best way to approach this? Thanks
This isn't really a programmer question; perhaps it'll be moved over to ServerFault.
You didn't specifically state which operating system you're using, but I'm hazarding a guess at some Linux distribution. You have two options for figuring out what's wrong:
Start your session in screen. Screen will live for as long as the actual machine is powered on, until you reboot the operating system (or you exit screen).
You start screen like this:
screen
and you get a new prompt where you can start your program (cd foo, start jetty, etc.). When you're happy and you just need to go somewhere, you can detach from the screen by hitting CTRL+A and then CTRL+D. You'll drop back to the place you were before you invoked screen.
To get back to seeing the screen, you type screen -R, which means resume an existing screen. You should see Jetty again.
The nice thing is that if you lose the connection (or you close PuTTY by accident or whatever), you can use screen -list to get a list of running screens, and then forcibly detach them with -D and reattach them to the current PuTTY session with -R; no harm done!
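For reference, the screen commands mentioned above boil down to:
screen              # start a new session, then launch jetty inside it
# detach without stopping anything: press CTRL+A, then D
screen -list        # show running sessions
screen -R           # reattach to an existing session
screen -D -R        # forcibly detach it elsewhere and reattach here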
Use nohup. Nohup more or less detaches the process you're running from the console, so none of its output comes to the terminal. You start your program in the normal fashion, but you add the word nohup to your command.
For example:
nohup ls -l &
The output of ls -l is stored in nohup.out.
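Applied to the Jetty command from the question, that would be something like (the log file name is up to you):
nohup java -Xmx900m -server -jar start.jar > jetty.log 2>&1 &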
When you say crash, do you mean the JVM segfaults and disappears? If that's the case, I'd check and make sure you aren't exhausting the machine's available memory. Java on Linux will crash when the system memory gets so low that the JVM cannot allocate up to its maximum memory. For example, you've set the max JVM memory to 500 MB, of which it's using 250 MB at the moment, but the Linux OS only has 128 MB available. This produces unstable results and the JVM will segfault.
On Windows the JVM is better behaved in this scenario and throws an OutOfMemoryError when the system is running low on memory.
Validate how much system memory is available around the time of your crashes (see the sketch after this list).
Verify if other processes on your box are eating up a lot of memory. Turn off anything that could be competing with the JVM.
Run jconsole and connect it to your JVM. That will tell you how memory is being used in your JVM process and give you a history to look back through when it does crash.
Eliminate any native code you might be loading into the JVM when doing this type of testing.
I believe Jetty has some native code to do high volume request processing. Make sure that's not being used. You want to isolate the crashes to Java and NOT some strange native lib. If you take out the native stuff and find it works then you have your answer as to what's causing it. If it continues to crash then it very well could be what I'm describing.
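For the memory check mentioned in the first point, the standard Linux tools are enough:
free -m                       # current memory and swap usage, in MB
vmstat 5                      # rolling snapshot every 5 seconds; watch free memory and si/so (swap in/out)
ps aux --sort=-rss | head     # the biggest memory consumers right now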
You can force the JVM to allocate all the memory at startup with -Xms900m that can make sure the JVM doesn't fight with other processes for memory. Once it has the full Xmx amount allocated it won't crash. Not a solution, but you can easily test it this way.
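For the command line from the question, that test would simply be:
java -Xms900m -Xmx900m -server -jar start.jar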
When you start java, redirect both outputs (stdout and stderr) to a file:
Using Bash:
java -Xmx900m -server -jar start.jar > stdout.txt 2> stderr.txt
After the crash, inspect those files.
If the crash is due to a signal (like SEGV=segmentation fault), there should be a file dump by the JVM at the location you've started java. For Sun VM (hotspot), it's something like hs_err_pid12121.log (here 12121 is the process ID).
PuTTY disconnecting STRONGLY hints that the server is running out of memory and starts shutting down processes left and right. It is probably your Jetty instance growing too big.
The easiest thing to do now is to add 1-2 GB more swap space and try again. Also note that you can use jvisualvm to attach to the Jetty instance and get runtime information directly.
I am coding an application that creates JVMs and needs to control the memory usage of the processes spawned by the JVM.
You can connect to a JVM process using JMX to get information about memory status/allocations and also provoke garbage collection. But you first need to enable JMX monitoring of your JVM: http://java.sun.com/j2se/1.5.0/docs/guide/management/agent.html.
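For a quick test on a trusted network, remote JMX can be switched on with system properties along these lines (the port is arbitrary, yourapp.jar stands in for however you start your JVM, and disabling authentication/SSL like this is only acceptable for local experiments):
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=9010 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar yourapp.jar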
I assume that you are talking about non-Java "processes" spawned using Runtime.exec(...) etc.
The answer is that this is OS specific and not something that the standard Java libraries support. But if you were going to do this in Linux (or UNIX) I can think of three approaches:
Have Java spawn the command via a shell wrapper script that uses the ulimit builtin to reduce the memory limits, then execs the actual command; see man 1 ulimit. (A sketch follows this list.)
Write a little C command that does the same as the shell wrapper. This will have less overhead than the wrapper script approach.
Try to do the same with JNI and a native code library. Not recommended because you'd probably need to replicate the behavior of Process and ProcessBuilder, and that could be very difficult.
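A bare-bones version of the wrapper script from the first option could look like this (the script name is arbitrary and the limit value is just an example; ulimit -v takes kilobytes):
#!/bin/sh
# limit-mem.sh: cap the child's address space, then exec the real command
ulimit -v 524288    # roughly 512 MB
exec "$@"
Your Runtime.exec call would then invoke limit-mem.sh followed by the real command and its arguments, instead of the command directly.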
If by 'control' you mean 'limit to a known upper bound', then you can simply pass
-Xms`lower_bound`
and
-Xmx`upper_bound`
to the VM's args when you spawn the process. See the appropriate setting here
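For instance, to keep a spawned JVM between 64 MB and 256 MB of heap (illustrative values, hypothetical jar name):
java -Xms64m -Xmx256m -jar child-app.jar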
I've been running Tomcat 5.5 with Java 1.4 for a while now with a huge webapp. Most of the time it runs fine, but sometimes it will just hang, with no exception generated and no apparent way of getting it to run again other than restarting Tomcat. The Tomcat instance is allowed a gigabyte of memory on the heap, but rarely exceeds 300 MB. Has anyone else run into this issue, and is there a solution for it?
For clarification: I determined how much memory it is using via Task Manager and via Eclipse (I've also tried running it outside of Eclipse, but get the same problem eventually, though it takes a little longer). With Eclipse, I look at the memory allocated via its little (optional) memory pane and the amount allocated to javaw.exe via the task manager. I use the sysdeo? tomcat plugin for Eclipse.
For any JVM process, force a thread dump. On Windows, this can be done with CTRL-BREAK, I believe, in the console window.
In *nix, it is almost always "kill -3 jvm-pid".
This may show if you have threads waiting on db connection pool/thread pool, etc.
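On Linux that might look like the following, with the thread dump going to the JVM's stdout (for Tomcat, usually logs/catalina.out); the pgrep pattern is an assumption about how the process was started:
kill -3 $(pgrep -f org.apache.catalina.startup.Bootstrap)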
Another thing to check out is how many connections you currently have to the JVM -- either use NETSTAT or a SysInternals utility such as tcpconn/tcpview (google it).
Also, try running with the -verbose:gc JVM flag. For Sun's JVM, run like "java -verbose:gc". This will show your garbage collections. If it is collecting a lot (FULL COLLECTIONS, especially) then you probably have a memory leak. The full collections are costly, especially on large heaps like that.
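If Tomcat is started through the standard scripts, the flag can be passed via CATALINA_OPTS (JAVA_OPTS also works); adjust for however you actually launch it from Eclipse. From the Tomcat directory:
export CATALINA_OPTS="-verbose:gc"
./bin/catalina.sh run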
How are you determining that only 300mb are being used?
It sounds like you're hitting a deadlock.
If you can reproduce it in a dev environment then try attaching a debugger once it's happened. Take a look at your threads and see if you have any deadlocks.
If you can't get a debugger to attach you should be able to generate a thread dump, as Dustin pointed out.
Try increasing the logging sensitivity for the Tomcat application server.
http://tomcat.apache.org/tomcat-5.5-doc/logging.html
You can increase the sensitivity to FINEST or ALL for most of them for a few days and see if that helps you catch anything.
I agree with creating multiple thread dumps and viewing them through this: Thread Dump Analyzer