JMeter is out of heap memory - java

I'm trying to run a performance test against multiple databases, reading the input data from a CSV file, but after a while JMeter fails the test cases because it runs out of memory.
I tried increasing the heap like this: "java -XX:MaxPermSize=1024m -Xms3072m -Xmx3072m -jar Apache-JMeter.jar", but I get the same result.
JMeter also creates a MySQL connection to 5 different databases.

I'll assume you have a 64 bit operating system and JVM and lots of RAM available.
You can't guess about these things; it's better to get some data using a profiler. Use something like dynaTrace, JProfiler or, if you have an Oracle JVM, the VisualVM tool that ships with the JDK.
You can't figure out what the problem is until you have some data. Maybe you need to write your tests differently so you don't have so much data in memory all at once.

First things first: are you modifying those command lines in the JMeter startup script (jmeter.bat) correctly? Based on your dump we should be able to see how much RAM you are ACTUALLY using.
e.g.
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid999999.hprof ...
Heap dump file created [591747609 bytes in 1321.244 secs]
In this case the dump is 591747609 bytes (roughly 560 MB), which tells me that instance was running with an Xmx of about half a gigabyte rather than several gigabytes.
From this we can tell whether your settings are even being applied.
If that looks fine, post some profiler info anyway and we'll see.
But consider these points:
1) JMeter in GUI mode is a real bottleneck if you want to see the real performance of your Java application; consider switching to non-GUI/remote testing.
2) Check the best practices on JMeter configuration; there are some sub-optimal settings in JMeter which hog WAY too much memory, and you might want to turn them off.
Give us an update and we'll see what advice can be given.

Make sure that you
Run JMeter in non-GUI mode
Disable all listeners, especially View Results Tree and View Results in Table ones
Use CSV as results output format
Avoid queries producing large result sets
Use Post Processors and Assertions only when and where required
Use the latest version of JMeter on a 64-bit OS with the latest 64-bit JDK
If you still experience OOM errors you will need to either switch to a machine with more RAM or consider distributed testing.
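As an illustration only (the test plan name, result file and heap sizes are placeholders, not taken from the question), a non-GUI run with a larger heap might look like this; recent JMeter versions let you override the HEAP variable from the environment, while on older versions you edit the same variable inside the jmeter/jmeter.bat startup script:
HEAP="-Xms3g -Xmx3g" ./jmeter -n -t db_test.jmx -l results.csv
To keep the result file lean, CSV output and trimmed sample attributes can be set in user.properties, e.g. jmeter.save.saveservice.output_format=csv and jmeter.save.saveservice.response_data=false.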


How do I analyze a Java heap dump when local memory is less than the size of the dumped heap?

I have a HotSpot JVM heap dump that I would like to analyze. The VM ran with -Xmx31g, and the heap dump file is 48 GB large.
I won't even try jhat, as it requires about five times the heap memory (that would be 240 GB in my case) and is awfully slow.
Eclipse MAT crashes with an ArrayIndexOutOfBoundsException after analyzing the heap dump for several hours.
What other tools are available for that task? A suite of command line tools would be best, consisting of one program that transforms the heap dump into efficient data structures for analysis, combined with several other tools that work on the pre-structured data.
Normally, what I use is ParseHeapDump.sh, included within Eclipse Memory Analyzer and described here, and I do that on one of our more beefed-up servers (download and copy over the Linux .zip distro, unzip it there). The shell script needs fewer resources than parsing the heap from the GUI, plus you can run it on your beefy server with more resources (you can allocate more resources by adding something like -vmargs -Xmx40g -XX:-UseGCOverheadLimit to the end of the last line of the script).
For instance, the last line of that file might look like this after modification
./MemoryAnalyzer -consolelog -application org.eclipse.mat.api.parse "$@" -vmargs -Xmx40g -XX:-UseGCOverheadLimit
Run it like ./path/to/ParseHeapDump.sh ../today_heap_dump/jvm.hprof
After that succeeds, it creates a number of "index" files next to the .hprof file.
After creating the indices, I try to generate reports from them, scp those reports to my local machine, and see if I can find the culprit just by that (just the reports, not the indices). Here's a tutorial on creating the reports.
Example report:
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects
Other report options:
org.eclipse.mat.api:overview and org.eclipse.mat.api:top_components
If those reports are not enough and I need some more digging (e.g. via OQL), I scp the indices as well as the hprof file to my local machine, and then open the heap dump (with the indices in the same directory as the heap dump) with my Eclipse MAT GUI. From there, it does not need too much memory to run.
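To tie those steps together, a typical headless session might look like the sketch below (the paths and the workstation host name are illustrative, not from the answer):
# parse once to build the index files next to the .hprof
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof
# generate the canned reports; each is typically written as a .zip next to the dump
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:overview
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:top_components
# copy just the report zips back to the workstation for a first look
scp ../today_heap_dump/*.zip me@workstation:reports/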
EDIT:
I'd just like to add two notes:
As far as I know, only the generation of the indices is the memory-intensive part of Eclipse MAT. Once you have the indices, most of your processing in Eclipse MAT does not need that much memory.
Doing this in a shell script means I can do it on a headless server (and I normally do it on a headless server, because they're normally the most powerful ones). And if you have a server that can generate a heap dump of that size, chances are you have another server out there that can process that much of a heap dump as well.
First step: increase the amount of RAM you are allocating to MAT. By default it's not very much and it can't open large files.
If you are using MAT on Mac (OS X), the MemoryAnalyzer.ini file lives in MemoryAnalyzer.app/Contents/MacOS. Making adjustments to that file did not "take" for me. Instead, you can create a modified startup command/shell script based on the contents of this file and run it from that directory. In my case I wanted a 20 GB heap:
./MemoryAnalyzer -vmargs -Xmx20g -XX:-UseGCOverheadLimit ... other params desired
Just run this command/script from the Contents/MacOS directory via the terminal to start the GUI with more RAM available.
I suggest trying YourKit. It usually needs a little less memory than the heap dump size (it indexes it and uses that information to retrieve what you want)
The accepted answer to this related question should provide a good start for you (it uses live jmap histograms instead of heap dumps, so if you have access to the running process it's very fast):
Method for finding memory leak in large Java heap dumps
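For reference, such a histogram can be taken from a running process with something along these lines (the pid is a placeholder):
# print the top of a live-object histogram; "live" forces a full GC first
jmap -histo:live <pid> | head -n 30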
Most other heap analysers (I use IBM http://www.alphaworks.ibm.com/tech/heapanalyzer) require at least a percentage of RAM more than the heap if you're expecting a nice GUI tool.
Other than that, many developers use alternative approaches, like live stack analysis to get an idea of what's going on.
Although I must question why your heaps are so large. The effect on allocation and garbage collection must be massive. I'd bet a large percentage of what's in your heap should actually be stored in a database or a persistent cache.
This person http://blog.ragozin.info/2015/02/programatic-heapdump-analysis.html
wrote a custom "heap analyzer" that just exposes a "query style" interface through the heap dump file, instead of actually loading the file into memory.
https://github.com/aragozin/heaplib
Though I don't know if "query language" is better than the eclipse OQL mentioned in the accepted answer here.
The latest snapshot build of Eclipse Memory Analyzer has a facility to randomly discard a certain percentage of objects to reduce memory consumption and allow the remaining objects to be analyzed. See Bug 563960 and the nightly snapshot build to test this facility before it is included in the next release of MAT. Update: it is now included in released version 1.11.0.
A not so well known tool, http://dr-brenschede.de/bheapsampler/, works well for large heaps. It works by sampling, so it doesn't have to read the entire thing, though it is a bit finicky.
This is not a command line solution, however I like the tools:
Copy the heap dump to a server large enough to host it. It may well be possible to use the original server.
Enter the server via ssh -X to run the graphical tool remotely and use jvisualvm from the Java binary directory to load the .hprof file of the heap dump.
The tool does not load the complete heap dump into memory at once, but loads parts when they are required. Of course, if you look around enough in the file the required memory will finally reach the size of the heap dump.
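A minimal sketch of that workflow, with an illustrative user and host name:
ssh -X analyst@dump-host             # X11 forwarding so the GUI renders locally
$JAVA_HOME/bin/jvisualvm             # then File > Load... and select the .hprof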
I came across an interesting tool called JXray. It provides a limited evaluation trial license. I found it very useful for finding memory leaks. You may give it a shot.
Try using JProfiler; it works well for analyzing large .hprof files. I have tried it with a file of around 22 GB.
https://www.ej-technologies.com/products/jprofiler/overview.html
It costs $499 per developer license, but it has a free 10-day evaluation.
When the problem can be "easily" reproduced, one unmentioned alternative is to take heap dumps before memory grows that big (e.g., jmap -dump:format=b,file=heap.bin <pid>).
In many cases you will already get an idea of what's going on without waiting for an OOM.
In addition, MAT provides a feature to compare different snapshots, which can come in handy (see https://stackoverflow.com/a/55926302/898154 for instructions and a description).
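A sketch of that approach, with a placeholder pid and an arbitrary gap between snapshots:
# first snapshot; "live" dumps only reachable objects (and triggers a full GC)
jmap -dump:live,format=b,file=heap-early.hprof <pid>
sleep 300
# second snapshot once memory has grown; compare the two dumps in MAT
jmap -dump:live,format=b,file=heap-later.hprof <pid>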

Troubleshooting Java memory usage exception

I'm trying to troubleshoot a Java program that requires increasingly more memory until it cannot allocate any more and then it crashes.
EDIT More information about the program. The program is an indexer going through thousands of documents and indexing them for search. The documents are read from MongoDB and written to MongoDB as well after some processing is performed. During the processing I'm using RocksDB (rocksdb-jni version 5.13.4 from Maven). There is some mentioning in this GitHub issue of RocksDB memory usage growing uncontrollably, but I'm not sure it could be related.
Monitoring the process with visualvm produces one picture (screenshot omitted), but running htop on the machine shows totally different stats (screenshot omitted). There is a difference of several GBs of memory that I'm unable to trace the source of.
The program is launched with the following VM arguments:
jvm_args: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=<port> -Djava.rmi.server.hostname=<hostname> -Xmx12G -XX:+UseStringDeduplication
The system has 32GB of RAM and no swap. Out of this 32 GB, ~10GB are always taken by a tmpfs partition, ~8GB by MongoDB and the remaining 12GB are assigned to the program. EDIT The visualvm screenshot above shows 20GB of heap size, because it was from a previous run where I passed -Xmx20G; the behaviour, however, is the same whether I assign 12GB or 20GB to the heap. The behaviour also does not change if I remove the tmpfs partition, freeing 10 more GB of memory: it just takes longer but eventually it will get out of memory.
I have no idea where this memory usage that is not shown in visualvm but appears in htop is coming from. What tools should I use to understand what is going on? The application is running on a remote server, so I would like a tool that only works in the console or can be configured to work remotely, like visualvm.
I always use JProfiler, but I hear JetBrains has a good one as well; both can connect remotely.
If possible I would try to create a (local) setup where you can freely test this.
Several possible solutions are mentioned in the RocksDB issue; do they work?
RocksDB seems to need some configuration; how did you configure it?
I am not familiar with RocksDB, but I see there is some indexing and caching going on. How much data are you processing, and what indexing/caching configuration do you have? Are you sure this should fit in memory?
As far as I know, the memory mismatch is because JNI usage is not shown by default by most tools. There are JVM flags to improve this, such as NativeMemoryTracking; I can't recall whether this will add those numbers to your visualvm overviews as well.
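If you want to try that, a hedged sketch (the heap size and jar name are only placeholders): start the JVM with native memory tracking switched on, then query it with jcmd:
# enable Native Memory Tracking at startup (adds a little overhead)
java -XX:NativeMemoryTracking=summary -Xmx12G -jar indexer.jar
# later, ask the running JVM for a breakdown of its native allocations
jcmd <pid> VM.native_memory summary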
@karbos-538 I just noticed this is quite an old question; I hope the application is already working by now. What part of the question is still relevant to you?

Jenkins java.lang.OutOfMemoryError: GC overhead limit exceeded

I am currently working on creating a performance framework using Jenkins and executing the performance tests from Jenkins. I am using the https://github.com/jmeter-maven-plugin/jmeter-maven-plugin plugin. The sanity test with a single user in this performance framework worked well, so I went ahead with an actual performance test of 200 users, and within 2 minutes received the error
java.lang.OutOfMemoryError: GC overhead limit exceeded
I tried the following in jenkins.xml
<arguments>-Xrs -Xmx2048m -XX:MaxPermSize=512m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --prefix=/jenkins --webroot="%BASE%\war"</arguments>
but it didn't work. I also noticed that whenever I increased the memory, the Jenkins service stopped, and I had to reduce the memory to 1 GB for the service to restart.
I increased the memory for JMeter and Java as well, but it didn't help.
In the .jmx file, View Results Tree and every other listener are disabled, but the issue still persists.
Since I am doing a POC, Jenkins is hosted on my laptop; the high-level specs are as follows:
System Model: Latitude E7270
Processor: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz (4 CPUs), ~2.5GHz
Memory: 8192MB RAM
Any help please?
The error about GC overhead implies that Jenkins is thrashing in Garbage Collection. This means it's probably spending more time doing Garbage Collection than doing useful work.
This situation normally comes about when the heap is too small for the application. With modern multi generational heap layouts it's difficult to say what exactly needs changing.
I would suggest you enable Verbose GC with the following options "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
Then follow the advice here: http://www.oracle.com/technetwork/articles/javase/gcportal-136937.html
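In the jenkins.xml from the question, those options could be added along these lines (a sketch only; the gc.log location is a guess):
<arguments>-Xrs -Xmx2048m -XX:MaxPermSize=512m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:"%BASE%\gc.log" -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --prefix=/jenkins --webroot="%BASE%\war"</arguments>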
A few points to note:
You are using the integrated Maven goal to run your JMeter tests. This uses Jenkins as the container to launch the JMeter tests, thereby impacting not only your job but also other users of Jenkins.
It is better to defer the execution to a different client machine, such as a dedicated JMeter machine, which launches your tests with its own JVM parameters (or the ones that you provide).
In summary:
1. Move the test execution out of Jenkins.
2. Provide the output report as an input to your performance plug-in [this can also crash, since it will need more JVM memory when you process endurance test results, like an 8-hour result file].
This way, your tests will have a better chance of scaling. Also, you haven't mentioned what type of scripting engine you are using. As per the JMeter documentation, JSR223 with Groovy has a memory leak. Please refer to
http://jmeter.apache.org/usermanual/component_reference.html#JSR223_Sampler
Try adding -Dgroovy.use.classvalue=true to see if that helps (provided you are using Groovy). If you are using Java 8, there is a high chance that it is creating a unique class for each of your scripts in JMeter, which grows the metaspace that lives outside your JVM heap. In that case, restrict the metaspace and use class unloading and a 64-bit JVM, like
-d64 -XX:+CMSClassUnloadingEnabled.
Also, what is your new generation size (-XX:NewSize=1024m -XX:MaxNewSize=1024m)? Please note that JMeter loads all the files permanently, and they go directly to the old generation, thereby shrinking any space available for the new generation.
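Putting those suggestions together, an illustrative (not prescriptive) set of JVM options for the JMeter process on Java 8 might look like the following; the sizes are guesses to tune, not recommendations, and class unloading here is paired with the CMS collector it applies to:
# -n runs JMeter in non-GUI mode; the .jmx and .csv names are placeholders
java -d64 -Xms2g -Xmx2g \
     -XX:NewSize=1024m -XX:MaxNewSize=1024m \
     -XX:MaxMetaspaceSize=512m \
     -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled \
     -Dgroovy.use.classvalue=true \
     -jar ApacheJMeter.jar -n -t test.jmx -l results.csv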

Tool to monitor & log system metrics during automated Java performance tests

We have an application using Spring Integration at its core, and have created performance tests to see what the processing speed (msgs/sec) is for different generated input types.
This process is automated, so whenever such a test is run, a separate instance is created in the cloud and disposed of once done, and the output artefacts are copied.
What I want to do is to have those performance tests monitored during the run for basic system metrics -- CPU, memory, I/O, GC runs/time. Obviously, the result of this should be some CSV files with metrics readings (e.g., once or twice a second).
So my question is: Are there any good and configurable tools for these purposes?
I'm in the middle of investigation, but profiling tools I reviewed so far mainly require human interaction and are UI oriented.
An option I'm considering is writing a separate tool to access MXBean & use it to log such data during the performance tests. Just wondering if anything good is around.
Please note that this application is running in Tomcat, however for the performance tests we are only using Spring Integration's File endpoints.
Please also note that a 'switchable' component within the application is also a possible solution. However, I'm currently looking for an application-agnostic, external-tools-first solution.
Command-line tools come to help for this kind of a scenario:
On a Linux/Solaris based environment:
Before you launch the JVM for your Spring-based application, you can run tools like vmstat and sar in the background with their output redirected to a flat file, which helps capture CPU, memory and other such statistics. Use top with options, or mpstat, to get thread-level statistics for bottleneck analysis if you seem to be hitting a performance problem.
Now run the JVM with arguments like -verbose:gc and -Xloggc to write verbose GC output to a flat file. Look under the Debugging options section for JVM arguments in Java HotSpot VM Options for more options you need.
Tip: create a shell script combining both commands above to run at the same time and achieve your requirement.
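A rough version of such a wrapper script (the tool options, log file names and the application jar are placeholders):
#!/bin/sh
# start the OS samplers in the background, remembering their PIDs
vmstat 1 > vmstat.log 2>&1 &
VMSTAT_PID=$!
sar -u -r 1 > sar.log 2>&1 &
SAR_PID=$!
# run the application with verbose GC output going to its own file
java -verbose:gc -Xloggc:gc.log -jar spring-integration-perf-test.jar
# stop the samplers once the test run has finished
kill $VMSTAT_PID $SAR_PID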
On a Windows environment:
For OS statistics gathering on the command line, you could use typeperf or tracerpt (CSV is supported).
As on Linux, run the JVM with arguments like -verbose:gc and -Xloggc to write verbose GC output to a flat file; see the Debugging options section of Java HotSpot VM Options for more.
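For instance, a typeperf invocation sampling CPU and free memory once a second into a CSV file might look like this (the counter names are the standard English ones and can differ on localized systems):
typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 1 -o os-metrics.csv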
JMeter is a tool for developing performance and scalability tests (defining HTTP requests and loading the server with them), but it also has a plugin that allows monitoring a target system for system metrics such as CPU utilization, memory utilization, etc., as well as JMX-type statistics:
Available JMX metric types:
gc-time - time spent in garbage collection, milliseconds (used method)
memory-usage - heap memory used by VM, bytes (method used)
memory-committed - heap memory committed by VM, bytes (method used)
memorypool-usage - heap memory pool usage, bytes (method used)
memorypool-committed - heap memory pool committed size, bytes (method used)
class-count - loaded class count in VM (used method)
compile-time - time spent in compilation, milliseconds (used method)
Check http://jmeter-plugins.org/wiki/PerfMonMetrics/ for more details of this plugin.

Java Visual VM skewing CPU

I am trying to analyze the CPU usage of a Java UI application running on Windows. I connected it to VisualVM, but it looks like the highest percentage of CPU usage is taken by
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run()
I believe this is being used to supply information to VisualVM, and hence VisualVM is skewing the results that I'm trying to investigate. Does anyone have a way to get a better indication of what is occurring, or a better method to determine what in a running Java application is taking up so much CPU?
Try using the sampler first.
For detailed information use the profiler and set root methods. See Profiling With VisualVM, Part 1 and Profiling With VisualVM, Part 2 for more information about CPU and Memory profiling.
That sounds awfully suspicious. Try cross-referencing the data with results from hprof. You won't need any external applications running, and the data will simply be dumped to a text file from your own process. Are you connecting to your process remotely?
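If you want to try that, the old HPROF agent (bundled with JDKs up to Java 8) is started with something like the line below; the sampling interval and stack depth are arbitrary, the jar name is a placeholder, and the results land in java.hprof.txt in the working directory by default:
java -agentlib:hprof=cpu=samples,interval=10,depth=8 -jar your-ui-app.jar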
