I am running JMeter from the command line on a Mac. Today it threw an out of memory (Java heap space) error:
newbie$ sh jmeter.sh
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
at java.awt.image.DataBufferInt.<init>(DataBufferInt.java:41)
at java.awt.image.Raster.createPackedRaster(Raster.java:455)
I know I need to increase the memory allocated to it, but I am not sure how. I looked at this post, Unable to increase heap size for JMeter on Mac OSX, found the jmeter script file in the bin folder it mentions, and made the updates below:
HEAP="-Xms1024m -Xmx2048m"
NEW="-XX:NewSize=512m -XX:MaxNewSize=1024m"
But I am still getting the out of memory error. Do I just need to give it more memory, or am I changing it in the wrong place? Could it be that I need to restart my whole machine?
As far as I understand:
You made changes to jmeter script
You're launching jmeter.sh script
You want to know why the changes are not applied?
If you changed the jmeter script, why don't you just launch it as ./jmeter?
If you need to start JMeter via jmeter.sh for any reason, run it as follows:
JVM_ARGS="-Xms1024m -Xmx2048m -XX:NewSize=512m -XX:MaxNewSize=1024m" && export JVM_ARGS && ./jmeter.sh
See Running JMeter User Manual chapter in particular and The Ultimate JMeter Resource List in general for the relevant documentation.
If you have trouble finding the applied heap settings in the logs, you can use
ps -ef | grep jmeter
which may give you the details (I am not a Mac user, but I think ps -ef should work).
The other option is to use jvisualvm, which already ships with the JDK, so no extra tool is required. Run VisualVM and then JMeter; you can spot the JMeter entry in the left pane of VisualVM. Click on it, and all the JVM details will be available.
After this you can confirm whether JMeter really got the 2GB maximum heap, and increase it if needed.
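Another quick check, if you have a JDK on the PATH: the jcmd tool that ships with the JDK can print the effective heap flags of a running process. A minimal sketch, where <pid> is the process id taken from the ps output above:
jcmd <pid> VM.flags
Look for MaxHeapSize in the output to confirm the 2GB setting was actually applied.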
There could be different possible reasons for an OutOfMemoryError. If you have successfully increased the allocated memory/heap size and are still getting the issue, then look into the following factors:
Listeners: Do not use the 'View Results Tree' and 'View Results in Table' listeners in an actual load test, as they consume a lot of memory. Best practice is to save results to a .jtl file, which can later be used to generate different reports.
Non-GUI mode: Do not use GUI mode while performing an actual load test; run the test from the command line, as in the example below.
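For example, a typical non-GUI run that writes results to a .jtl file might look like this (the file names are illustrative):
jmeter -n -t test_plan.jmx -l results.jtl
Here -n selects non-GUI mode, -t points to the test plan, and -l sets the results file.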
For more, visit the following blog; it has some really nice tips for solving OutOfMemory issues in JMeter.
http://www.testingdiaries.com/jmeter-out-of-memory-error/
I've been able to create dashboards for small amounts of log data (3MB) with JMeter. However, when trying to create dashboards from large amounts of data (35MB), JMeter throws a java.lang.OutOfMemoryError: Java heap space.
So far I've tried creating an environment variable JVM_ARGS=-Xms1024m -Xmx10240m, but I still do not have enough space.
Is there anything else I can try to create these dashboards? Or is there a way to reduce the number of entries that get written to the log file?
Thank you!
There are two possibilities:
Option 1: your JVM options are not taken into account. Show the first lines or the full content of jmeter.log.
Option 2: you have added some dynamic parameter to your HTTP requests that created a lot of differently named SampleResults.
Edit, 8 October 2018: the root cause was Option 2.
Make sure you've really created the environment variable and that it has the anticipated value; double-check this by running the following command in the terminal window you will be launching JMeter from:
echo %JVM_ARGS% for Windows
echo $JVM_ARGS for Linux/Unix/MacOS
You should see your increased JVM heap settings
Make sure to use the relevant wrapper script: jmeter.bat for Windows or jmeter.sh for other operating systems
Make sure to use a 64-bit JRE, as a 32-bit JVM will not be able to allocate more than a ~3G heap
Make sure you can execute the java command with your 10G heap:
java -Xms1024m -Xmx10240m -version
you should see your Java version
Try running ApacheJMeter.jar executable directly:
java -Xms1024m -Xmx10240m -jar ApacheJMeter.jar -g result.jtl -o destination_folder
If nothing helps, be aware that you can generate tables/charts using the JMeterPluginsCMD Command Line Tool (it is not part of the standard JMeter installation; it can be installed using the JMeter Plugins Manager).
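As a rough sketch of what such a call can look like (the plugin type and file names here are just examples; check the plugin documentation for your case):
JMeterPluginsCMD --generate-png response-times.png --input-jtl result.jtl --plugin-type ResponseTimesOverTime --width 800 --height 600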
I am trying to update IntelliJ IDEA from build 141.177 to 141.178.
When the updater has downloaded all the files needed and starts the update, I get this error:
Temp. directory: /tmp
java.lang.OutOfMemoryError: Java heap space
at ie.wombat.jbdiff.JBPatch.bspatch(JBPatch.java:91)
at com.intellij.updater.BaseUpdateAction.applyDiff(BaseUpdateAction.java:112)
at com.intellij.updater.UpdateAction.doApply(UpdateAction.java:44)
at com.intellij.updater.PatchAction.apply(PatchAction.java:184)
at com.intellij.updater.Patch$3.forEach(Patch.java:308)
at com.intellij.updater.Patch.forEach(Patch.java:360)
at com.intellij.updater.Patch.apply(Patch.java:303)
at com.intellij.updater.PatchFileCreator.apply(PatchFileCreator.java:84)
at com.intellij.updater.PatchFileCreator.apply(PatchFileCreator.java:75)
at com.intellij.updater.Runner.doInstall(Runner.java:295)
at com.intellij.updater.Runner.access$000(Runner.java:18)
at com.intellij.updater.Runner$2.execute(Runner.java:261)
at com.intellij.updater.SwingUpdaterUI$5.run(SwingUpdaterUI.java:191)
at java.lang.Thread.run(Thread.java:745)
The /tmp folder should be on my root partition, which is 20GiB in size and currently still has about 8GiB left, so I don't really understand what the problem could be here. Also, I am not sure about the RAM part; my system is using 40% of my RAM when I do the update.
I hit this same problem. The issue is that idea.vmoptions changes the memory for the main IntelliJ process but not for the update process. In my case the update process only had 500m allocated to it.
I got past the problem by leaving the update window open after it hit the error. I then ran ps -Af | grep java (I'm running Linux), which showed me the command line of the update process. I copied it out and changed -Xmx500m to -Xmx1024m, then ran the modified command line in another console. Once it was done, I exited the original update window and all was good.
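For illustration only, the flow looks roughly like this; the actual classpath and arguments will differ, so copy them from your own ps output and only change the -Xmx value (the path below is a placeholder, and com.intellij.updater.Runner is the main class visible in the stack trace above):
ps -Af | grep java
java -Xmx1024m -cp /path/from/ps/output com.intellij.updater.Runner <arguments copied from the ps output>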
Read the JetBrains documentation & this answer about how to increase the maximum heap size for IntelliJ.
Set -Xmx2048m in idea.vmoptions (32-bit edition) or idea64.vmoptions (64-bit edition), and copy it to the appropriate location according to the documentation referred to above.
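A .vmoptions file is just one JVM flag per line, so the relevant part might look roughly like this (the -Xms value is illustrative; keep any other flags already in your file):
-Xms512m
-Xmx2048m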
I would also do File > Invalidate Caches / Restart > Invalidate and Restart, just to be sure that the changes took (probably not necessary, but just in case).
I received the exact same stack trace today while attempting to upgrade from 14.1.1 to 14.1.2 via the automatic updater. On OS X, I solved it by renaming ~/Library/Preferences/IdeaIC14/idea.vmoptions to idea64.vmoptions. I already had -Xmx2048m set in that file, but apparently it wasn't being read until I renamed it correctly.
Also see YouTrack issue IDEA-139036 (thanks to @Meo).
I'm currently having a problem where I am executing a query that loads a large number of records. The first execution succeeds, but when I execute it again, I get a Java heap space out of memory error.
I know I can increase the Java heap size using the command line, but that requires a compiled jar file.
I am currently in the development process, so how can I increase the Java heap size in that case?
I'm using Eclipse as my IDE.
Thanks for any response.
It doesn't require a compiled jar file. Choose Run > Run Configurations..., select your run configuration, and open the Arguments tab. Then enter the appropriate command line argument in the VM arguments text box, for example -Xmx1024m.
You can modify the eclipse.ini file located inside your Eclipse directory. There you will find the -Xms40m and -Xmx256m parameters; you can increase them to -Xms256m and -Xmx1024m. Then check whether the OutOfMemoryError is still there; if it is, try tuning these parameters further, increasing them slightly each time.
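For reference, the relevant part of eclipse.ini looks roughly like this; note that the heap flags must appear after the -vmargs line (the values are illustrative):
-vmargs
-Xms256m
-Xmx1024m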
Apparently that was not enough for an answer... well, let me just copy the text from those answers instead :S
You can use the environment variable _JAVA_OPTIONS to set the default heap size. This will change the heap size for all Java programs. Like this:
export _JAVA_OPTIONS="-Xmx1g"
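The JVM announces when it picks this variable up, so you can verify it with any Java command, for example:
java -version
which should print a line like "Picked up _JAVA_OPTIONS: -Xmx1g" before the version output.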
I am doing some Java development on Windows 7 x64 running inside VMWare Fusion 3.x (OSX). I have installed JDK6 (update 26), set JAVA_HOME to the path (no trailing slash), and restarted my command prompt.
I can successfully launch the program. During startup it runs the calibration and then fails with this error:
"Could not create directory\VMWare-host\Shared Folders\ .nbprofiler" (no space after that slash but the markup was hiding the period)
I can click to continue, but once in the program I cannot do CPU or memory profiling. It throws up a similar error box:
"Error retrieving saved calibration data for target JVM: Could not create...(same as earlier)"
Once upon a time I had this working by passing the --userdir flag and -J-Dnbprofiler.home during startup, but that trick isn't working anymore.
(The complete command was:
jvisualvm --userdir c:\Users\myname -J-Dnbprofiler.home=c:\Users\myname
)
How can I force jvisualvm to save its calibration data on a "real" drive instead of the vmware network drive and get this working?
The .nbprofiler directory is derived from the user.home system property. I am not sure what you did to your Windows installation, but your user.home points to \\VMWare-host\Shared Folders. So one solution is to fix the Windows installation so that Java recognizes c:\Users\myname as your user home directory. If that fails for some reason, you can use the nbprofiler.home property to override it, as you correctly wrote. However, you should point it to a nonexistent directory, so you should start VisualVM with the following command line:
jvisualvm -J-Dnbprofiler.home=c:\Users\myname\nbprofiler --userdir c:\Users\myname\visualvm_userdir
One last note: even if the profiler part is not working, you should be able to use sampling in the 'Sampler' tab.
Try disabling Sharing for the VM.
That works for me with Windows 7 x86 in Fusion with Sharing disabled (and Sharing is the mechanism providing the folder you cannot write to).
I found the following command works for me.
visualvm -J-Duser.home=%HOME%
Also, I needed to add -Duser.home=%HOME% to my app startup command.
I had defined nbprofiler.home and userdir, but I was still getting an error when the Profiler was running against my app: Profiler Agent Error: Could not create directory \\vmware-host\Shared Folders\.nbprofiler.
I discovered that the Profiler was using the user.home defined by my app rather than the one set for VisualVM. Both seem to be needed.
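In other words, something like this (the application jar is a placeholder; the point is that both processes get the same user.home):
visualvm -J-Duser.home=%HOME%
java -Duser.home=%HOME% -jar myapp.jar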
I'm using ASANT to run an XML file which points to a NARS.jar file.
I'm getting "java.lang.OutOfMemoryError: Java heap space" and I'm researching ways around it.
I have found that I need to set -XX:+HeapDumpOnOutOfMemoryError to create a dump file to analyze.
I edited ASANT.bat and added the "-XX:+HeapDumpOnOutOfMemoryError" to ANT_OPTS:
set ANT_OPTS= "-XX:+HeapDumpOnOutOfMemoryError" "-Dos.name=Windows_NT" "-Djava.library.path=%AS_INSTALL%\lib;%AS_ICU_LIB%;%AS_NSS%" "-Dcom.sun.aas.installRoot=%AS_INSTALL%" "-Dcom.sun.aas.instanceRoot=%AS_INSTALL%" "-Dcom.sun.aas.instanceName=server" "-Dcom.sun.aas.configRoot=%AS_CONFIG%" "-Dcom.sun.aas.processLauncher=SE" "-Dderby.root=%AS_DERBY_INSTALL%"
But I can't seem to find any dump file.
I will use the Eclipse Memory Analyzer to analyze the dump once I find it.
I also tried setting the option -XX:HeapDumpPath=c:\memdump\bds.hprof, but no dump was created there.
Anyone got an idea of what I'm doing wrong?
Thanks in advance
It looks like your application is running on Windows. The backslashes in a Windows file path need to be escaped as \\. As per your example, -XX:HeapDumpPath should look like:
-XX:HeapDumpPath=c:\\memdump\\bds.hprof
Besides -XX:+HeapDumpOnOutOfMemoryError, there are several other options to capture heap dumps as well.
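For example, jmap from the JDK can capture a heap dump of a running process on demand (the pid and file name are placeholders):
jmap -dump:format=b,file=c:\memdump\bds.hprof <pid>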
I found that I could use VisualVM from Sun to get a heap dump and view it live.
Easy solution
It's in the working directory of the application (i.e. where you've started it). I'm not sure what happens if the process does not have the necessary privileges to write there; probably, writing the dump would fail silently.
Are you sure that ANT is the process with the OOME? It may be a process started by ANT.
Add "-debug" to the ANT_OPTS for debugging information.
Are you seeing the targets being printed out during the execution?
You can also fork the various processes started by ant (this will slow things down, but may help isolate the culprit).
Lastly, maybe you just need more memory than the default. Add:
-Xms256m -Xmx512m -XX:PermSize=64m -XX:MaxPermSize=256m
to the ANT_OPTS
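On Windows that could look roughly like this, appended to whatever ANT_OPTS already contains (the values are just a starting point; the PermSize flags only apply to pre-Java-8 JVMs):
set ANT_OPTS=%ANT_OPTS% -Xms256m -Xmx512m -XX:PermSize=64m -XX:MaxPermSize=256m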
Umm... how about wherever java.io.tmpdir is pointing?
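To see where that points on your machine, one quick sketch (requires a reasonably recent JDK; findstr is used since this looks like a Windows setup, use grep elsewhere):
java -XshowSettings:properties -version 2>&1 | findstr java.io.tmpdir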