Jenkins server cannot compile because no heap space could be reserved - java

I have a big problem with my Jenkins server: I can't build a Maven project because the Java VM cannot start:
Checkout:workspace / /var/lib/jenkins/jobs/SchwarzGoldTool/workspace - hudson.remoting.LocalChannel#b7193fc
Using strategy: Default
Last Built Revision: Revision c2d18fd7a5d7f112163e9440a8e7256a44e32f46 (origin/HEAD, origin/master)
Checkout:workspace / /var/lib/jenkins/jobs/SchwarzGoldTool/workspace - hudson.remoting.LocalChannel#b7193fc
Fetching changes from 1 remote Git repository
Fetching upstream changes from git://.../tsc.git
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision 2b4654302e8222509db5808c9071ec95daf0b495 (origin/HEAD, origin/master)
Checking out Revision 2b4654302e8222509db5808c9071ec95daf0b495 (origin/HEAD, origin/master)
Warning : There are multiple branch changesets here
Parsing POMs
Modules changed, recalculating dependency graph
[SchwarzGoldTool] $ java -Xmx512M -Xms512M -cp /var/lib/jenkins/plugins/maven-plugin/WEB-INF/lib/maven3-agent-1.2.jar:/var/lib/jenkins/tools/Maven_3.0.3/boot/plexus-classworlds-2.4.jar org.jvnet.hudson.maven3.agent.Maven3Main /var/lib/jenkins/tools/Maven_3.0.3 /var/run/jenkins/war/WEB-INF/lib/remoting-2.11.jar /var/lib/jenkins/plugins/maven-plugin/WEB-INF/lib/maven3-interceptor-1.2.jar 58359
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
ERROR: Failed to launch Maven. Exit code = 1
Finished: FAILURE
I tried adding -Xmx and -Xms to the VM (as you can see), but that doesn't work either... does anyone have an idea what's going on there?

The problem is that Jenkins failed to reserve enough heap space to kick off a Maven build. From what you said, it seems there are a few things sharing your VM's memory limit (I've included an estimate of the memory required to run each process):
The VM's OS (~200-300 MB)
Jenkins (min. ~256 MB)
Web containers (~256-512 MB)
etc. (~100 MB)
My memory estimate is rather conservative, but it still easily adds up to over 1 GB, which leaves the heap available for Jenkins to reserve at less than the Xms (512m); hence the build fails to kick off.
Ideally you should increase the soft memory limit on your VM to a higher value. If that is not feasible, my advice is to reduce the build's memory footprint by changing the job configuration in Jenkins to something like -Xmx512m -Xms128m, so that Jenkins can kick off a build with only 128m of free heap. This setting may cause an out-of-memory error at a later stage, though, if the build requires a heap that is below Xmx but above the available memory.
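For example (a sketch; the exact field name varies by Jenkins version), you would put those flags in the Maven job's JVM Options / MAVEN_OPTS setting:
-Xmx512m -Xms128m
so that the agent command line in the log above starts with java -Xmx512m -Xms128m ... instead of java -Xmx512M -Xms512M ...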
Alternatively, you can trim the memory use of the other processes I mentioned above, or configure swap/virtual memory on your VM.
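If you go the virtual-memory route, a minimal sketch for a typical Linux VM looks like this (the 2 GB size and /swapfile path are illustrative assumptions, not recommendations):
# create and enable a swap file so the OS can page out idle processes
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile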

Related

Set heap memory for neo4j-admin import

I'm trying to load a graph of several hundred million nodes using the neo4j-admin import tool to load the data from CSV. The import runs for about two hours but then crashes with the following error:
Exception in thread "Thread-0" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.String.substring(String.java:1969)
at java.util.Formatter.parse(Formatter.java:2557)
at java.util.Formatter.format(Formatter.java:2501)
at java.util.Formatter.format(Formatter.java:2455)
at java.lang.String.format(String.java:2940)
at org.neo4j.unsafe.impl.batchimport.input.BadCollector$RelationshipsProblemReporter.getReportMessage(BadCollector.java:209)
at org.neo4j.unsafe.impl.batchimport.input.BadCollector$RelationshipsProblemReporter.message(BadCollector.java:195)
at org.neo4j.unsafe.impl.batchimport.input.BadCollector.processEvent(BadCollector.java:93)
at org.neo4j.unsafe.impl.batchimport.input.BadCollector$$Lambda$110/603650290.accept(Unknown Source)
at org.neo4j.concurrent.AsyncEvents.process(AsyncEvents.java:137)
at org.neo4j.concurrent.AsyncEvents.run(AsyncEvents.java:111)
at java.lang.Thread.run(Thread.java:748)
I've been trying to adjust my max and initial heap size settings in a few different ways. First I tried simply creating a HEAP_SIZE= variable before running the command to load the data, as described here, and I also tried setting the heap size on the JVM like this:
export JAVA_OPTS=%JAVA_OPTS% -Xms100g -Xmx100g
but whatever setting I use, when the import starts I get the same report:
Available resources:
Total machine memory: 1.48 TB
Free machine memory: 95.00 GB
Max heap memory : 26.67 GB
Processors: 48
Configured max memory: 1.30 TB
High-IO: true
As you can see, I'm building this on a large server that should have plenty of resources available. I'm assuming I'm not setting the JVM parameters correctly for Neo4j but I can't find anything online showing me the correct way to do this.
What might be causing my GC memory error and how can I resolve it? Is this something I can resolve by throwing more resources at the JVM and if so, how do I do that so the neo4j-admin import tool can use it?
RHEL 7, Neo4j CE 3.4.11, Java 1.8.0_131
The issue was resolved by increasing the maximum heap memory; the problem was that I wasn't setting the heap memory allocation correctly.
It turns out there was a simple solution; it was just a matter of where I tried to set the heap memory. Initially, I had run the command export JAVA_OPTS='-server -Xms300g -Xmx300g' at the command line and then run my bash script to call neo4j-admin import. This was not working; neo4j-admin import continued to use the same heap space configuration regardless.
The solution was simply to include the command to set the heap memory in the shell script that called neo4j-admin import. My shell script ended up looking like this:
#!/bin/bash
export JAVA_OPTS='-server -Xms300g -Xmx300g'
/usr/local/neo4j-community-3.4.11/bin/neo4j-admin import \
--ignore-missing-nodes=true \
--database=mag_cs2.graphdb \
--multiline-fields=true \
--high-io=true \
This seems super obvious but it took me almost a week to realize what I needed to change. Hopefully, this saves someone else the same headache.
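If you want to confirm the options actually reached the import JVM, one quick check (assuming a JDK is on the PATH; jps ships with it) is:
jps -lvm | grep -i neo4j
which lists running Java processes together with the -Xms/-Xmx flags they were started with; the "Max heap memory" line in the import's "Available resources" report should change accordingly.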

Increase memory for jMeter on command line

I am running JMeter from the command line on a Mac. Today it threw an out-of-memory (Java heap space) error:
newbie$ sh jmeter.sh
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
at java.awt.image.DataBufferInt.<init>(DataBufferInt.java:41)
at java.awt.image.Raster.createPackedRaster(Raster.java:455)
I know I need to increase the memory allocated to it, but I'm not sure how. I looked at this post, Unable to increase heap size for JMeter on Mac OSX, found the JMeter script file in the bin folder it mentions, and made the updates below:
HEAP="-Xms1024m -Xmx2048m"
NEW="-XX:NewSize=512m -XX:MaxNewSize=1024m"
But I am still getting the out-of-memory error. Do I just need to give it more, or am I changing it in the wrong place? Could it be that I need to restart my whole machine?
As far as I understand:
You made changes to the jmeter script
You're launching the jmeter.sh script
You want to know why the changes are not applied?
If you changed the jmeter script, why don't you just launch it as ./jmeter?
If you need to start JMeter via jmeter.sh for any reason, run it as follows:
JVM_ARGS="-Xms1024m -Xmx2048m -XX:NewSize=512m -XX:MaxNewSize=1024m" && export JVM_ARGS && ./jmeter.sh
See Running JMeter User Manual chapter in particular and The Ultimate JMeter Resource List in general for the relevant documentation.
If you have trouble finding it in the logs, you can use
ps -ef | grep jmeter
which may give you the details (not a Mac user, but I think ps -ef would work).
The other option is to use jvisualvm, which already ships with the JDK, so no extra tool is required. Run VisualVM and JMeter, find the JMeter entry on the left pane of VisualVM, click on it, and all the JVM details will be available.
After this you can confirm whether JMeter was actually given the 2 GB max heap, and increase it if needed.
There could be different possible reasons for an OutOfMemory error. If you have successfully changed the allocated memory/heap size and are still getting the issue, look into the following factors:
Listeners: Do not use the 'TreeView' and 'TableView' listeners in an actual load test, as they consume a lot of memory. Best practice is to save results to a .jtl file, which can later be used to generate different reports.
Non-GUI mode: Do not use GUI mode when performing an actual load test. Run the test from the command line, as in the sketch below.
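A minimal non-GUI invocation with a larger heap might look like this (test.jmx and results.jtl are placeholder names):
JVM_ARGS="-Xms1024m -Xmx2048m" ./jmeter.sh -n -t test.jmx -l results.jtl
Here -n selects non-GUI mode, -t names the test plan, and -l names the results (.jtl) file.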
For more, visit the following blog, which has some really nice tips for solving OutOfMemory issues in JMeter.
http://www.testingdiaries.com/jmeter-out-of-memory-error/

How do Maven and Fortify determine how much memory to use?

I am using the Fortify SCA plugin with Maven 3.2.1 to scan a pretty large Java webapp.
I have a custom .bat file that sets up all my environment variables and makes a call to mvn.bat to start the scan.
Then mvn.bat reads my pom.xml, finds the custom profiles for clean, translate, and scan, and calls sourceanalyzer.
The trouble is, sourceanalyzer never seems to use the full amount of memory I grant it in either the custom .bat file or the POM. This machine has 16 GB of RAM, yet when the scan finishes 18-20 hours later it prints "memory used: 317 MB" and the report has a bunch of out-of-memory warnings. The machine is doing nothing besides this scan, and while it's running the Task Manager shows that something is using a lot of memory.
The error message is "Scan progress is slowing due to JVM garbage collection."
My MAVEN_OPTS:
-Xmx4096m
-XX:MaxPermSize=1024m
-Dfortify.sca.64bit=true
-Dfortify.sca.Xmx=8000m
-DskipTests=true
-Dfortify.sca.verbose=true
I need to figure out how to both speed this scan up and remove the memory warnings.
Thanks
You can try using the SCA memory variable option. Set:
SCA_VM_OPTS=-Xmx8000M
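For instance (a minimal sketch, assuming the .bat setup described in the question), the variable can be set in the custom .bat file before it calls mvn.bat:
rem give the Fortify analyzer an 8 GB heap for the scan
set SCA_VM_OPTS=-Xmx8000M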
You can also try posting Fortify issues to their online forum at https://protect724.hp.com. The support group monitors those forums.

sbt won't assemble Spark

I am having issues assembling Spark using sbt on my machine.
Attempting the assembly without allocating extra memory either runs out of memory or times out in the garbage collector; the exact issue has differed across attempts. However, any attempt to modify the allocated memory, either through Xmx or Xms, whether granting more or restricting it, fails because sbt doesn't recognize -Xmx or -Xms as a command.
Here is a sample of the kind of command I have been trying (from the source directory of Spark):
sbt -Xmx2g assembly
Here is the error I have been receiving:
java.util.concurrent.ExecutionException:java.lang.OutOfMemoryError: GC overhead limit exceeded
Use 'last' for the full log.
Not a valid command: Xmx2g
Not a valid project: Xmx2g
Expected ':' (if selecting a configuration)
Not a valid key: Xmx2g
Xmx2g
^
I am running 64-bit Java, version 1.8.0_20.
Try creating a new environment variable SBT_OPTS, with the value "-XX:MaxPermSize=1024m". That should give sbt the memory it needs without producing your error.
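A minimal sketch of that in a shell, before re-running the build:
export SBT_OPTS="-XX:MaxPermSize=1024m"
sbt assembly
SBT_OPTS is picked up by the sbt launcher script, so the flags reach the JVM rather than being parsed as sbt commands (which is what caused the "Not a valid command: Xmx2g" error above).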
Check the docs: Building Spark with Maven.
Setting up Maven’s Memory Usage
You’ll need to configure Maven to use more memory than usual by setting MAVEN_OPTS. We recommend the following settings:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
Then you can build it with:
mvn -DskipTests package

Maven compilation dies with "Killed"

I am running a Maven 2 compile of a large Java project on a Linux virtual machine.
Compilation is failing with the following output (compiled with -X for debugging):
[DEBUG] Source roots:
[DEBUG] /home/{...}/src/main/java
[DEBUG] /home/{...}/target/generated-sources/meta
[INFO] Compiling 1377 source files to /home/{...}
Killed
(and I'm dropped back to the bash prompt immediately)
I figure this could be:
A Linux thing (I checked that my ulimit -Hn is okay: 10000)
A VM thing (this is on an Amazon EC2 Ubuntu instance)
A Maven/Java thing (I've never seen this kind of death, usually just out-of-memory errors and the like)
Any thoughts to narrow down the culprit?
My first guess would be that you're running out of memory, and the kernel is killing the compile process.
I would start by looking to see if there are other resource limits; e.g. run ulimit -a.
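If the kernel's OOM killer is indeed the culprit, it normally logs the kill; a quick check right after the failure (on Linux; exact wording varies by kernel) is:
dmesg | grep -i 'killed process'
If the Java/Maven process shows up there, the cure is more memory or swap on the instance, or a smaller compiler heap.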
