Jenkins java.lang.OutOfMemoryError: GC overhead limit exceeded - java

I am currently working on creating a performance framework using Jenkins, with the performance tests executed from Jenkins. I am using the https://github.com/jmeter-maven-plugin/jmeter-maven-plugin plugin. The sanity test with a single user in this performance framework worked well, so I went ahead with an actual performance test of 200 users, and within 2 minutes received the error
java.lang.OutOfMemoryError: GC overhead limit exceeded
I tried the following in jenkins.xml
<arguments>-Xrs -Xmx2048m -XX:MaxPermSize=512m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --prefix=/jenkins --webroot="%BASE%\war"</arguments>
but it didn't work. I also noted that whenever I increased the memory the Jenkins service stopped, and I had to reduce the memory to 1 GB before the service would restart.
I had increased the memory for JMeter and Java as well, but it didn't help.
In the .jmx file, View Results Tree and every other listener are disabled, but the issue still persists.
Since I am doing a POC, Jenkins is hosted on my laptop; the high-level specs are as follows:
System Model: Latitude E7270; Processor: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz (4 CPUs), ~2.5GHz; Memory: 8192MB RAM
Any help please ?

The error about GC overhead implies that Jenkins is thrashing in Garbage Collection. This means it's probably spending more time doing Garbage Collection than doing useful work.
This situation normally comes about when the heap is too small for the application. With modern multi-generational heap layouts it's difficult to say exactly what needs changing.
I would suggest you enable Verbose GC with the following options "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
Then follow the advice here: http://www.oracle.com/technetwork/articles/javase/gcportal-136937.html
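As a rough sketch only (assuming the Windows service setup from the question and that the service account can write a gc.log file under %BASE%), the arguments line in jenkins.xml could become:
<arguments>-Xrs -Xmx2048m -XX:MaxPermSize=512m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:"%BASE%\gc.log" -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --prefix=/jenkins --webroot="%BASE%\war"</arguments>
The GC log then shows how often collections run and how much memory each one reclaims, which tells you whether the heap is genuinely too small or whether something is holding on to memory.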

A few points to note:
You are using the integrated Maven goal to run your JMeter tests. This uses Jenkins as the container to launch the JMeter tests, thereby impacting not only your job but also other users of Jenkins.
It is better to defer the execution to a different client machine, such as a dedicated JMeter machine, which launches your tests in its own JVM with its own parameters (or with the ones that you provide).
In summary,
1. Move the test execution out of Jenkins (a rough sketch follows this list)
2. Provide the output report as an input to your performance plug-in [this can also crash, since it will need more JVM memory when you process endurance test results such as an 8-hour result file]
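As a sketch of point 1 (the host name, paths and the 2 GB heap are placeholders only), the Jenkins job could invoke JMeter in non-GUI mode on a dedicated load-generator box, e.g. over SSH:
ssh perf-box 'JVM_ARGS="-Xms2g -Xmx2g" /opt/jmeter/bin/jmeter -n -t /opt/tests/200users.jmx -l /opt/results/200users.jtl'
Jenkins then only copies back the .jtl file and feeds it to the reporting/performance plug-in, so the Jenkins JVM never hosts the load generation itself.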
Done this way, your tests will have a better chance of scaling. Also, you haven't mentioned what type of scripting engine you are using. As per the JMeter documentation, JSR223 with Groovy has a memory leak. Please refer to
http://jmeter.apache.org/usermanual/component_reference.html#JSR223_Sampler
Try adding -Dgroovy.use.classvalue=true to see if that helps (provided you are using Groovy). If you are using Java 8, there is a high chance that it is creating a unique class for each of your scripts in JMeter, which grows the Metaspace, which lives outside the heap. In that case, restrict the Metaspace, enable class unloading, and use a 64-bit JVM, e.g.
-d64 -XX:+CMSClassUnloadingEnabled.
Also, what is your new generation size (-XX:NewSize=1024m -XX:MaxNewSize=1024m)? Please note that JMeter loads all files permanently, and they go straight to the old generation, thereby shrinking the space available for the new generation.
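Putting these suggestions together, a hedged example of how the JMeter launch might look (the sizes are illustrative only and assume a 64-bit Java 8 JVM with enough free RAM; CMSClassUnloadingEnabled only has an effect together with the CMS collector):
JVM_ARGS="-Xms2g -Xmx2g -XX:NewSize=1024m -XX:MaxNewSize=1024m -XX:MaxMetaspaceSize=512m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -Dgroovy.use.classvalue=true" jmeter -n -t test.jmx -l results.jtl
The JMeter startup script picks up JVM_ARGS and appends it to its own defaults, so this avoids editing the script itself.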

Related

Troubleshooting Java memory usage exception

I'm trying to troubleshoot a Java program that requires increasingly more memory until it cannot allocate any more and then it crashes.
EDIT More information about the program. The program is an indexer going through thousands of documents and indexing them for search. The documents are read from MongoDB and, after some processing, written back to MongoDB. During the processing I'm using RocksDB (rocksdb-jni version 5.13.4 from Maven). There is some mention in this GitHub issue of RocksDB memory usage growing uncontrollably, but I'm not sure it is related.
Monitoring the process with visualvm results in the following plot:
but running htop on the machine shows totally different stats:
There is a difference of several GBs of memory that I'm unable to trace the source of.
The program is launched with the following VM arguments:
jvm_args: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=<port> -Djava.rmi.server.hostname=<hostname> -Xmx12G -XX:+UseStringDeduplication
The system has 32GB of RAM and no swap. Out of this 32 GB, ~10GB are always taken by a tmpfs partition, ~8GB by MongoDB and the remaining 12GB are assigned to the program. EDIT The visualvm screenshot above shows 20GB of heap size, because it was from a previous run where I passed -Xmx20G; the behaviour, however, is the same whether I assign 12GB or 20GB to the heap. The behaviour also does not change if I remove the tmpfs partition, freeing 10 more GB of memory: it just takes longer but eventually it will get out of memory.
I have no idea where this memory usage that is not shown in visualvm but appears in htop is coming from. What tools should I use to understand what is going on? The application is running on a remote server, so I would like a tool that only works in the console or can be configured to work remotely, like visualvm.
I always use JProfiler, but I hear JetBrains has a good one as well; both can connect remotely.
If possible I would try to create a (local) setup where you can freely test it.
In the RocksDB issue several possible solutions are mentioned; do any of them work?
RocksDB seems to need some configuration; how did you configure it?
I am not familiar with RocksDB, but I see there is some indexing and caching going on. How much data are you processing, and what index/caching configuration do you have? Are you sure this should fit in memory?
As far as I know, the memory mismatch is because JNI usage is not shown by default by most tools. There are some flags to improve this, such as NativeMemoryTracking; I can't recall whether that will add the figures to your VisualVM overviews as well.
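If you want to try Native Memory Tracking, a minimal sketch (the flag and the jcmd command are standard HotSpot features; the PID and main class are of course your own):
java -XX:NativeMemoryTracking=summary -Xmx12G ... <your main class>
jcmd <pid> VM.native_memory summary
Note that NMT covers memory the JVM itself allocates natively (metaspace, thread stacks, GC structures); memory malloc'ed directly by a native library such as RocksDB does not show up there, so a gap that remains after accounting for NMT points at the native library itself.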
karbos-538, I just noticed this is quite an old issue; I hope this application is already working by now. What part of the question is still relevant to you?

WebSphere 7 - How can I determine which objects are using up heap memory at runtime?

Issue: I have intermittent out-of-memory issues, but WebSphere is recovering. I am trying to determine how I can find out what is using up most of the memory. I have AppDynamics, but it does not work for WebSphere.
Is the only way to determine what is using up most of the memory to take a heap dump from the out-of-memory crash?
Server: WebSphere 7.5
JAVA Version: IBM 1.6
The IBM JVM has dump triggers, which allow you to trigger dumps quite flexibly. For example, you can configure the JVM to dump when a given method is entered:
-Xtrace:trigger=method{java/lang/String.substring,coredump}
You can specify counts, too, so to produce a dump when a method is entered 1000 times and 1001 times:
-Xtrace:trigger=method{java/lang/String.getBytes,coredump,,1000,2}
Once you have the dump, using Eclipse Memory Analyser with the IBM extensions (http://www.ibm.com/developerworks/java/jdk/tools/memoryanalyzer/) is a good option for doing the analysis. The IBM extensions know how to parse the IBM dumps (as you'd expect), and also have intelligence about what patterns of memory usage indicate a potential problem.
You can generate a Heap Dump (Snapshot of the heap at a point of time) and a Thread Dump/Javacore (List of threads in the JVM at a point of time) while WebSphere Application Server is running.
To obtain the dumps, you will need to use the wsadmin tool. Start the wsadmin tool and execute the following commands.
JACL Version:
set jvm [$AdminControl queryNames WebSphere:type=JVM,process=<servername>,node=<nodename>,*]
$AdminControl invoke $jvm generateHeapDump
$AdminControl invoke $jvm dumpThreads
Jython Version (Untested):
jvm = AdminControl.queryNames ('WebSphere:type=JVM,process=<servername>,node=<nodename>,*')
AdminControl.invoke(jvm, 'generateHeapDump')
AdminControl.invoke(jvm, 'dumpThreads')
Replace servername & nodename with your values. Be sure to take multiple dumps, before the error and after the recovery.
Once the command is completed, the filenames will be returned. Move these files to a different workstation (because analysis is a resource-intensive process) and analyze them using any tool of your choice.
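For reference, wsadmin is started from the profile's bin directory; a typical invocation (the profile path below is just an example) looks like:
cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
./wsadmin.sh -lang jython
The Jython commands above are then entered at the wsadmin> prompt (or passed in a script file with -f).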
You need a java monitoring tool. Dynatrace is my favorite. It's not free (and not affordable for an individual), but it'll tell you exactly how your memory is being managed. And I've used it with Websphere.
Do you think you have a memory leak, or a load problem?

Tool to monitor & log system metrics during automated Java performance tests

We have an application using Spring Integration in its core, and have created performance tests to see what the processing speed (msgs/sec) is for different generated input types.
This process is automated, so whenever such a test is run, a separate instance is created in the cloud and disposed of once done, after the output artefacts have been copied.
What I want to do is to have those performance tests monitored during the run for basic system metrics -- CPU, memory, I/O, GC runs/time. Obviously, the result of this should be some CSV files with metrics readings (e.g., once or twice a second).
So my question is: Are there any good and configurable tools for these purposes?
I'm in the middle of investigating, but the profiling tools I have reviewed so far mainly require human interaction and are UI-oriented.
An option I'm considering is writing a separate tool that accesses the MXBeans and uses them to log such data during the performance tests. I'm just wondering if anything good is already around.
Please note that this application is running in Tomcat; however, for the performance tests we are only using Spring Integration's File endpoints.
Please also note that a 'switchable' component within the application is also a possible solution. However, I'm currently looking for an application-agnostic, external-tools-first solution.
Command-line tools can help in this kind of scenario:
On a Linux/Solaris based environment:
Before you start the JVM for your Spring-based application, you can run tools like vmstat or sar in the background with their output redirected to a flat file, which helps capture CPU, memory and other such statistics. Use top with suitable options, or mpstat, to get thread-level statistics for bottleneck analysis if you seem to be hitting a performance problem.
Now run the JVM with arguments like -XX:+PrintGCDetails and -Xloggc:<file> to write verbose GC output to a flat file. Look under the Debugging options section of the Java HotSpot VM Options page for more options you may need.
Tip: create a shell script combining both commands above to run at the same time and achieve your requirement.
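A minimal sketch of such a script (the file names, the one-second interval, and app.jar are arbitrary placeholders):
#!/bin/sh
# capture OS-level statistics every second in the background
vmstat 1 > vmstat.log 2>&1 &
VMSTAT_PID=$!
# run the application with verbose GC logging to a flat file
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar app.jar
# stop the background collector once the test run is over
kill $VMSTAT_PID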
On a Windows environment:
For OS statistics gathering on the command line, you could use typeperf or tracerpt (CSV output is supported).
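For example, a typeperf invocation sampling CPU and free memory once a second into a CSV file might look like this (the counter names are the English ones and can differ with the Windows locale):
typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 1 -o metrics.csv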
Then, as on Linux, run the JVM with -XX:+PrintGCDetails / -Xloggc:<file> to write verbose GC output to a flat file; again, see the Debugging options section of the Java HotSpot VM Options page for further options.
JMeter is a tool for developing performance and scalability tests (defining HTTP requests and loading the server with them), but it also has a plugin that allows monitoring a target system for system metrics such as CPU utilization and memory utilization, as well as JMX-type statistics:
Available JMX metric types:
gc-time - time spent in garbage collection, milliseconds
memory-usage - heap memory used by the VM, bytes
memory-committed - heap memory committed by the VM, bytes
memorypool-usage - heap memory pool usage, bytes
memorypool-committed - heap memory pool committed size, bytes
class-count - loaded class count in the VM
compile-time - time spent in compilation, milliseconds
Check http://jmeter-plugins.org/wiki/PerfMonMetrics/ for more details of this plugin.
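Note that for the OS-level metrics the plugin relies on a small ServerAgent process running on the machine under test; a typical setup (the port is the agent's usual default and only an example) is to unzip the agent on the target box and start it before the test:
./startAgent.sh --tcp-port 4444
The PerfMon Metrics Collector listener in the test plan then connects to that host and port and writes the samples to a CSV results file.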

Jmeter is out of heap memory

I'm trying to run a performance test against multiple databases, reading the info from a CSV file, but after a while JMeter fails the test cases because it runs out of memory.
I tried increasing the memory this way: "java -XX:MaxPermSize=1024m -Xms3072m -Xmx3072m -jar Apache-JMeter.jar", but I am getting the same result.
Also, JMeter is creating MySQL connections to 5 different databases.
I'll assume you have a 64 bit operating system and JVM and lots of RAM available.
You can't guess about these things. Better to get some data using a profiler. Use something like dynaTrace, JProfiler or, if you have an Oracle JVM, the Visual VM tool that ships with the JVM.
You can't figure out what the problem is until you have some data. Maybe you need to write your tests differently so you don't have so much data in memory all at once.
First things first: are you modifying those command lines in jmeter.bat correctly? Because based on your dump we should be able to see how much RAM you are ACTUALLY using.
e.g.
java.lang.OutOfMemoryError: Java heap space Dumping heap to java_pid999999.hprof ... Heap dump file created [591747609 bytes in 1321.244 secs]
In this case this means I was running a JMeter configured with an Xmx of around 591747609 bytes, i.e. roughly 564 MB of RAM.
By this we can understand whether your settings are even being applied.
If that looks fine, post some profiler info anyway and we'll see.
but consider these points:
1) JMeter in GUI mode is a real bottleneck if you want to see what the performance of your Java application actually is... consider switching to remote/non-GUI testing.
2) Check the best practices on JMeter configuration; there are some sub-optimal settings in JMeter which hog WAY too much memory, and you might want to turn them off...
Give us an update and we'll see what advice can be given.
Make sure that you
Run JMeter in non-GUI mode
Disable all listeners, especially View Results Tree and View Results in Table ones
Use CSV as results output format
Avoid queries producing large result sets
Use Post Processors and Assertions only when and where required
Use the latest version of JMeter on a 64-bit OS with the latest 64-bit JDK
If you still experience OOM errors you will need to either switch to a machine with more RAM or consider distributed testing.
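As a concrete sketch of the first three points (the 3 GB heap is only an example and assumes a 64-bit JVM with that much RAM free), set the heap via the JMeter startup script rather than on your own java command line, then run the plan in non-GUI mode with CSV output:
HEAP="-Xms3g -Xmx3g" ./jmeter -n -t dbtest.jmx -l results.csv
Depending on the JMeter version, the HEAP variable may be hard-coded in bin/jmeter (or bin/jmeter.bat on Windows), in which case edit it there instead of overriding it from the environment.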

SGE h_vmem vs java -Xmx -Xms

We have a couple of SGE clusters running various versions of RHEL at my work, and we're testing a new one with a newer Red Hat release. On the old cluster ("CentOS release 5.4"), I'm able to submit a job like the following one and it runs fine:
echo "java -Xms8G -Xmx8G -jar blah.jar ..." |qsub ... -l h_vmem=10G,virtual_free=10G ...
On the new cluster "CentOS release 6.2 (Final)", a job with those parameters fails due to running out of memory, and I have to change the h_vmem to h_vmem=17G in order for it to succeed. The new nodes have about 3x the RAM of the old node and in testing I'm only putting in a couple of jobs at a time.
On the old cluster, if I set -Xms/-Xmx to N, I could use N+1 or so for h_vmem. On the new cluster, I seem to be crashing unless I set h_vmem to 2N+1.
I wrote a tiny Perl script; all it does is progressively consume more memory and periodically print out the memory used until it crashes or reaches a limit. The h_vmem parameter makes it crash at the expected memory usage.
I've tried multiple versions of the JVM (1.6 and 1.7). If I omit the h_vmem, it works, but then things are riskier to run.
I have googled where others have seen similar issues, but no resolutions found.
The problem here appears to be an issue with the combination of the following factors:
The old cluster was RHEL5, and the new RHEL6
RHEL6 includes an update to glibc that changes the way malloc allocates memory for multi-threaded programs (per-thread arenas), which greatly inflates their virtual memory footprint.
The JVM uses a multi-threaded garbage collector by default.
To fix the problem I've used a combination of the following:
Export the MALLOC_ARENA_MAX environment variable set to a small number (1-10), e.g. in the job script. I.e. include something like: export MALLOC_ARENA_MAX=1
Moderately increase the SGE memory request, by 10% or so
Explicitly set the number of Java GC threads to a low number, using java -XX:ParallelGCThreads=1 ...
Increase the SGE thread request, e.g. qsub -pe pthreads 2
Note that it's unclear whether setting MALLOC_ARENA_MAX all the way down to 1 is the right number, but low numbers seem to work well in my testing.
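A rough sketch of a job script combining these workarounds (the 8 GB heap and the 10G request mirror the question; blah.jar is the question's own placeholder):
#!/bin/sh
#$ -pe pthreads 2
#$ -l h_vmem=10G,virtual_free=10G
# limit glibc malloc arenas so virtual memory stays closer to the real heap size
export MALLOC_ARENA_MAX=1
# keep GC single-threaded so it does not multiply per-thread arena usage
java -XX:ParallelGCThreads=1 -Xms8G -Xmx8G -jar blah.jar ...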
Here are the links that lead me to these conclusions:
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
What would cause a java process to greatly exceed the Xmx or Xss limit?
http://siddhesh.in/journal/2012/10/24/malloc-per-thread-arenas-in-glibc/
