I am using Java 17 in a container environment.
I set -XX:+ExitOnOutOfMemoryError because I want the JVM to exit when an OutOfMemoryError occurs.
I also set dumponexit=true so the JFR recording is written on JVM exit.
But when the JVM exits because of an OutOfMemoryError, no JFR recording is written.
Is it possible to make it dump the recording?
Below is an example of the startup parameters.
java -XX:+ExitOnOutOfMemoryError -XX:StartFlightRecording:dumponexit=true -jar app.jar
JFR is dumped in a Java shutdown hook, which doesn't get a chance to run if the JVM exits due to -XX:+ExitOnOutOfMemoryError.
JFR also does something called an emergency dump: a best-effort attempt to write the contents held in memory/on disk to a file located next to the hs_err_pid file if the JVM crashes (without executing Java code), for example due to an OOM. It may, or may not, succeed.
If -XX:+ExitOnOutOfMemoryError is specified, that logic is also prevented from running.
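To see the difference in behavior, here is a minimal sketch; the class name JfrExitDemo and the heap/recording settings below are illustrative assumptions, not something from the original question. With a normal exit the JFR shutdown hook runs and dumponexit=true writes the recording; with an OutOfMemoryError plus -XX:+ExitOnOutOfMemoryError the process terminates before the hook (or the emergency dump) can run.

import java.util.ArrayList;
import java.util.List;

// Hypothetical test class to compare a normal exit with an OOM-triggered exit.
public class JfrExitDemo {
    public static void main(String[] args) {
        if (args.length > 0 && "oom".equals(args[0])) {
            // Fill the heap until an OutOfMemoryError is thrown; with
            // -XX:+ExitOnOutOfMemoryError the JVM exits immediately and
            // the dumponexit shutdown hook never gets to run.
            List<byte[]> sink = new ArrayList<>();
            while (true) {
                sink.add(new byte[1024 * 1024]);
            }
        }
        // Normal termination: the JFR shutdown hook runs and the
        // recording is written to the configured file.
        System.out.println("exiting normally");
    }
}

After compiling with javac, run it with something like
java -Xmx64m -XX:+ExitOnOutOfMemoryError -XX:StartFlightRecording:dumponexit=true,filename=recording.jfr JfrExitDemo oom
and compare with the same command without the oom argument; in the OOM case no recording.jfr appears.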
I'm developing a JavaFX application that has a lot of UI screens, and while opening many windows the JVM starts consuming a lot of memory (going up to 350 MB).
When it reaches about 360 MB the program starts lagging and ends up crashing (nothing works, the screen freezes...), and the console shows an OutOfMemoryError: Java heap space.
I have 6 GB of memory in my computer and tried to start the .jar file using the -Xmx parameter, but the JVM still doesn't get to use more memory.
Is there anything else I should specify so that the JVM can get as much memory as it needs?
You might want to ensure that you're using:
java -Xmx1024m -jar YourApplication.jar
and not:
java -jar YourApplication.jar -Xmx1024m
Anything after the .jar is considered an argument passed to your executable JAR.
On startup, a JVM finds a user specified class and runs the method contained therein with the signature "public static void main(String[])".
The thread executing the main method can obviously terminate while the JVM continues to run other threads that the main method had spawned. Therefore, extracting a Java stack trace (e.g. "jstack" output) is not sufficient to find out the initial class from which the JVM was started. I'm also not aware of other commands typically included in a JDK that will extract that information from a running JVM or core file.
I'm working on some automation for analysis of core files, and it would be helpful to understand the class from which a JVM was started, even when no threads are running code under that class at the time the core file was created.
Question: Do JVMs in general (and both Oracle and OpenJDK specifically) keep track of the class from which the main method was called?
The jinfo utility (included in OpenJDK and Oracle JDK) can tell you the main class. It works both for live JVMs and for core dumps.
For example, here is how to find the Java command line from a core dump:
jinfo /path/to/java core.1234 | grep sun.java.command
Starting with JDK 9, jinfo works only for live processes, while jhsdb jinfo works for core dumps.
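On JDK 9+ the same lookup against a core dump would look roughly like this (the paths and core file name are placeholders):
jhsdb jinfo --exe /path/to/java --core core.1234 | grep sun.java.command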
I am running JMeter from the command line on a Mac. Today it threw an out of memory, Java heap space error:
newbie$ sh jmeter.sh
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
at java.awt.image.DataBufferInt.<init>(DataBufferInt.java:41)
at java.awt.image.Raster.createPackedRaster(Raster.java:455)
I know I need to increase the memory allocated to it, but I'm not sure how. I looked at the post Unable to increase heap size for JMeter on Mac OSX, found the jmeter script file in the bin folder it mentions, and made the updates below:
HEAP="-Xms1024m -Xmx2048m"
NEW="-XX:NewSize=512m -XX:MaxNewSize=1024m"
But I am still getting the out of memory error. Do I just need to give it more or am I changing in the wrong place? Could it be I need to restart my whole machine?
As far as I understand:
You made changes to the jmeter script.
You're launching the jmeter.sh script.
You want to know why the changes are not applied?
If you changed the jmeter script, why don't you just launch it as ./jmeter?
If you need to start JMeter via jmeter.sh for any reason, run it as follows:
JVM_ARGS="-Xms1024m -Xmx2048m -XX:NewSize=512m -XX:MaxNewSize=1024m" && export JVM_ARGS && ./jmeter.sh
See Running JMeter User Manual chapter in particular and The Ultimate JMeter Resource List in general for the relevant documentation.
If you have trouble confirming the settings in the logs, then
you can use
ps -ef | grep jmeter
which may give you the details (not a Mac user, but I think ps -ef should work there too).
The other option is to use jvisualvm, which already ships with the JDK, so no extra tool is required. Run VisualVM and JMeter, find the JMeter entry in the left pane of VisualVM, and click on it; all the JVM details will be shown there.
After this you can confirm whether JMeter really got the 2 GB maximum heap, and increase it if needed.
There can be several possible reasons for an OutOfMemoryError. If you have successfully changed the allocated memory/heap size and are still getting the issue, then you can look into the following factors:
Listeners: Do not use the 'View Results Tree' and 'View Results in Table' listeners in the actual load test, as they consume a lot of memory. Best practice is to save results to a .jtl file, which can later be used to generate different reports.
Non-GUI mode: Do not use GUI mode while performing the actual load test. Run the test from the command line, as in the example below.
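A minimal non-GUI run looks something like this (the test plan and results file names are placeholders):
./jmeter.sh -n -t test_plan.jmx -l results.jtl
Here -n selects non-GUI mode, -t points to the test plan, and -l writes the results to a .jtl file that you can open in a listener afterwards.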
For more, visit the following blog as it has some really nice tips to solve OutOfMemory issues in JMeter.
http://www.testingdiaries.com/jmeter-out-of-memory-error/
I have a Java application running under JBoss AS 7 that is used to call a fairly complicated bash script using Runtime.getRuntime().exec(command). The bash script is failing because cvs is reporting that it is running out of memory (the error was "E342: Out of memory!" to be exact).
So should I be increasing the amount of memory available to JBoss AS (with JAVA_OPTS="-Xms256m -Xmx2048m" or something similar), or does this indicate that the OS itself has run out of memory?
The operating system is running out of memory. Increasing JBoss's heap size can only make things worse.
You should be looking at things like:
Adding more RAM.
Increasing the amount of swap disk space.
Cutting down on the other applications running.
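To confirm that it really is the operating system (and not the JVM heap) that is short on memory, check free memory and swap usage on the box while the script runs; on Linux, for example:
free -m
If the available memory and swap are close to zero when cvs fails, the limit is the machine, not the JBoss heap.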
It turns out it was vim. I was running the script that called cvs under the "empty" command, which may have caused the issue. In the end I just created a script to edit the message file with a generic message:
#!/bin/sh
# Add a generic message
echo "Some Generic Message" > $1
# Update the time stamp. If you don't, you'll get a
# "Log message unchanged or not specified" error.
# The cvs timestamp comparison routine has a resolution
# of one second, so sleep to ensure that the timestamps
# are detected as being different.
sleep 1
touch $1
exit 0
Then set the EDITOR or CVSEDITOR environment variable to point to the script.
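For example, assuming the script above was saved as /usr/local/bin/cvs-commit-message.sh (an arbitrary path chosen here for illustration):
chmod +x /usr/local/bin/cvs-commit-message.sh
export CVSEDITOR=/usr/local/bin/cvs-commit-message.sh
cvs will then run that script instead of vim whenever it needs a commit message.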
I used to generate thread dumps by running kill -QUIT, and they would show up in the log file where my server logs were written. When that file grew too large I removed it with rm and created a new file with the same name.
Now when I use kill -QUIT to take a thread dump, nothing gets written to the log file; it stays empty.
Can anyone help?
The default JBoss startup scripts on Unix usually look something like:
nohup $JBOSS_HOME/bin/run.sh $JBOSS_OPTS >/dev/null 2>&1 &
This is unfortunate because it sends stdout and stderr to /dev/null. Usually this is not a problem, because once log4j initializes, most application output goes to boot.log or server.log. However, thread dumps and other low-level errors get lost.
Your best bet is to change the startup script to redirect stdout and stderr to a file. Additionally, one thing that's overlooked in the default setup is redirecting stdin. For daemon processes it's a best practice to redirect stdin from /dev/null. For example:
nohup $JBOSS_HOME/bin/run.sh $JBOSS_OPTS >> console-$(date +%Y%m%d).out 2>&1 < /dev/null &
Lastly, if you have a running process, you can use jstack, which is included with the JDK, to get a thread dump. This will output to the console from which it's invoked. I prefer the output from kill -3, but jstack also allows you to view native stack frames.
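For example, to capture a dump to a file (the PID and file name are placeholders):
jstack -l 12345 > threaddump-12345.txt
The -l option includes additional information about locks and synchronizers.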
If this is on *nix, when you delete a file, everyone who has that file still open will continue to write to the old (now missing) file. The file will only be really deleted when all file handles to it are closed.
You would have to cause the JVM to close and re-open the log file. Not sure if this can be done without a restart.
If you go into JMX and find jboss.system:service=Logging,type=Log4jService, you can invoke its reconfigure method, which should cause log4j to reopen any of its log files. Then the kill -QUIT should work.
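If you prefer the command line over the JMX console, older JBoss AS releases ship a twiddle.sh utility in the bin directory; something along these lines should trigger the same operation (this is an assumption, so verify the exact syntax against your JBoss version):
./twiddle.sh invoke "jboss.system:service=Logging,type=Log4jService" reconfigure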