Memory issue while building Spark - Java

I have installed Scala, sbt, and Hadoop 1.0.3 on Ubuntu 12.04 (client OS). Following the guide at http://docs.sigmoidanalytics.com/index.php/How_to_Install_Spark_on_Ubuntu-12.04, I tried building Spark and got an error about reserving space for the object heap.
Here is what I am trying to run:
hduser@vignesh-desktop:/usr/local/spark-1.1.0$ SPARK_HADOOP_VERSION=1.1.0 sbt/sbt assembly
It fails with the following error:
Using /usr/lib/jvm/java-6-openjdk-i386/ as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

I got this solved by passing the -mem option to the sbt command, as given below (on a 4 GB RAM system):
SPARK_HADOOP_VERSION=1.1.0 sbt/sbt assembly -mem 1024
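The -mem flag caps the heap of the JVM that the sbt launcher forks, so the reservation can actually succeed at startup. A hedged alternative, assuming a standard sbt launcher script that honors the SBT_OPTS environment variable (the value is illustrative):
# illustrative: cap sbt's JVM heap at 1 GB via the environment
export SBT_OPTS="-Xmx1024m"
SPARK_HADOOP_VERSION=1.1.0 sbt/sbt assembly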

Related

Chef Java malloc issues

I am having a strange issue installing a piece of software that uses an Ant-based installation script executed from a JAR with the Oracle JVM (1.8, 64-bit).
The issue only appears when the JAR is executed via a Chef run. Note that I am using chef-zero to converge.
The issue appears to be that the JVM cannot allocate enough memory to execute. It ONLY appears when Chef executes the JAR; run via the OS console (CMD) itself, everything is fine.
We have been installing this software for years and had never seen this issue before we started using Chef.
There are different errors depending on what I set the heap values to (note that we have never had to set any heap options to install before). One such error is:
Unable to allocate 65536KB bitmaps for parallel garbage collection for the requested 2097152KB heap.
I have also seen other similar errors, depending on the values I provide for -Xms and -Xmx.
This has only been seen on Windows systems, and only with certain VirtualBox images. Some images downloaded from the net (Vagrant) work fine; some fail with the above error. Any image I build locally fails.
Chef scripts I have tried (note that I have tried many combinations of Java options; this is the latest attempt):
execute "Install Xstore: #{build_file}" do
  command "#{java_home}\\bin\\java -jar #{build_file}"
  cwd install_workdir
  environment(
    'PATH' => "c:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\130\\Tools\\Binn\\;#{ENV['PATH']}",
    'JAVA_TOOL_OPTIONS' => '-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=c:/xstore/log/dump.bin'
  )
  not_if { ::Dir.exist?(xstore_install_dir) }
end

ruby_block "Install Xstore: #{build_file}" do
  block do
    require 'open3' # Open3 is not loaded automatically in a Chef run
    stdout, status = Open3.capture2("#{java_home}\\bin\\java -jar #{build_file}", chdir: install_workdir)
    puts stdout
  end
end
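Since the installer runs fine from CMD but not under Chef, one way to narrow this down is to give the JVM explicit heap bounds through JAVA_TOOL_OPTIONS, which any HotSpot JVM picks up. A hedged sketch for testing by hand outside Chef (the heap values, the %JAVA_HOME% expansion, and the build.jar name are all illustrative):
rem illustrative: force modest heap bounds, then run the installer manually
set JAVA_TOOL_OPTIONS=-Xms256m -Xmx1024m
"%JAVA_HOME%\bin\java" -jar build.jar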
Some specs:
Chef-client: 14.14.29
Guest VM: Windows 10, Windows 10 LTSC, 64-bit
Host: Pop!_OS Linux 19.10
VirtualBox version: 6.0.14
Java (guest): 1.8.0_241
Host RAM: 32 GB; VM RAM up to 16 GB, same issues (RAM is always 90% free in the guest's Task Manager; paging file set to system-managed, tried a fixed size as well)

While running a build inside Jenkins, I am getting "java/lang/OutOfMemoryError"

2020-02-25 10:11:24.986+0000 [id=79] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Started maven-repo-cleanup
2020-02-25 10:11:25.004+0000 [id=79] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Finished maven-repo-cleanup. 14 ms
JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2020/02/25 16:31:47 - please wait.
JVMDUMP032I JVM requested System dump using 'C:\Users\KumariRupam\Documents\jenkins\core.20200225.163147.3284.0001.dmp' in response to an event
JVMDUMP010I System dump written to C:\Users\KumariRupam\Documents\jenkins\core.20200225.163147.3284.0001.dmp
JVMDUMP032I JVM requested Heap dump using 'C:\Users\KumariRupam\Documents\jenkins\heapdump.20200225.163147.3284.0002.phd' in response to an event
JVMDUMP010I Heap dump written to C:\Users\KumariRupam\Documents\jenkins\heapdump.20200225.163147.3284.0002.phd
JVMDUMP032I JVM requested Java dump using 'C:\Users\KumariRupam\Documents\jenkins\javacore.20200225.163147.3284.0003.txt' in response to an event
JVMDUMP010I Java dump written to C:\Users\KumariRupam\Documents\jenkins\javacore.20200225.163147.3284.0003.txt
JVMDUMP032I JVM requested Snap dump using 'C:\Users\KumariRupam\Documents\jenkins\Snap.20200225.163147.3284.0004.trc' in response to an event
JVMDUMP010I Snap dump written to C:\Users\KumariRupam\Documents\jenkins\Snap.20200225.163147.3284.0004.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
Please help me resolve this.
I don't have the full context of your problem, but maybe increasing the allocated memory when you run the Maven build could help:
mvn clean install -DargLine="-Xmx1536m"
Here are some other examples of how to increase the allocated memory:
Strange Maven out of memory error
Specifying Maven memory parameter without setting MAVEN_OPTS environment variable
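Note that -DargLine typically sizes the forked test JVMs, while MAVEN_OPTS sizes the JVM running Maven itself. A hedged sketch of the latter (the value is illustrative):
# illustrative: give Maven's own JVM a 1.5 GB heap cap for every run
export MAVEN_OPTS="-Xmx1536m"
mvn clean install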
The Jenkins Windows setup comes with a 32-bit Java runtime by default. Swapping that out for a 64-bit version and increasing the available memory through the -Xmx parameter did the trick for me. The Jenkins machine has been running stably for some months now.
How to replace the default 32-bit Java runtime with a 64-bit one:
Download OpenJDK 8 JRE for Windows x64
⚠️ Note: Newer runtime may cause issues with some plugins. See Java requirements.
Extract the ZIP file to some folder, typically C:\Program Files\Java\JRE8
Edit "jenkins.xml" to point to JRE8 (typically located in C:\Program Files (x86)\Jenkins):
<executable>C:\Program Files\Java\JRE8\bin\java</executable>
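To confirm the swapped-in runtime really is 64-bit, the version banner can be checked (path as in the step above):
"C:\Program Files\Java\JRE8\bin\java" -version
The output should mention "64-Bit Server VM".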
How to increase available memory for Jenkins:
Edit "jenkins.xml" to increase argument of parameter -Xmx (typically located in C:\Program Files (x86)\Jenkins):
<arguments>-Xrs -Xmx1536m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --webroot="%BASE%\war"</arguments>
This is just a sample of my configuration. In my experience, 1.5 GiB of memory works quite well. You may set -Xmx to a higher value if you still get crashes.

Issue in "Attempting to fetch sbt"

I am a student trying to install Apache Spark on Ubuntu. Whenever I try to build Spark through sbt with the sbt/sbt assembly command, I get the error below. I have already tried increasing the heap size but still have not found a solution; kindly help me through this issue.
The error I get:
Attempting to fetch sbt
Launching sbt from build/sbt-launch-0.13.7.jar
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Check out this article:
http://viralpatel.net/blogs/jvm-java-increase-heap-size-setting-heap-size-jvm-heap/
Sometimes when you attempt a build without enough heap allocated, it will bomb out; hence, in the article's example, the heap size is increased on the command line before running HelloWorld:
java -Xms64m -Xmx256m HelloWorld
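To see what heap the JVM would pick by default on a given machine, the final flag values can be printed (-XX:+PrintFlagsFinal is a standard HotSpot option; the grep filter is just for convenience):
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize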

How to resolve error with Android Studio

I am using Windows 7 32-bit and Android Studio v0.8.6, and after installation I got an error with Gradle:
Error:Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at
http://gradle.org/docs/1.12/userguide/gradle_daemon.html
Please read below process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Reducing Android Studio memory usage: https://code.google.com/p/android/issues/detail?id=82894#c3
Edit studio.exe.vmoptions or studio64.exe.vmoptions in your Android Studio installation directory, and adjust the -Xms, -Xmx, and -XX:MaxPermSize options.
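An illustrative vmoptions fragment with reduced values (one JVM option per line; these numbers are examples to tune, not recommendations):
-Xms128m
-Xmx256m
-XX:MaxPermSize=128m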
Reducing Gradle memory usage: https://code.google.com/p/android/issues/detail?id=82894#c1
Set the JVM options for Gradle in the gradle.properties file, as shown below. I set -Xmx to 256m and -XX:MaxPermSize to 128m.
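A minimal gradle.properties sketch matching those values (placed in the project root, or in ~/.gradle to apply globally):
org.gradle.jvmargs=-Xmx256m -XX:MaxPermSize=128m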
However, this will probably slow down Android Studio and/or the Gradle build process. If you can, upgrade your memory to 8 GB AND upgrade your OS to 64-bit.

Eclipse Gradle STS Extension: Could not reserve enough space for object heap

Once in a while, I get the following error when the Gradle STS extension tries to execute my project's Gradle build script after launching Eclipse 3.7 (Indigo) with the extension installed:
Unable to start the daemon process. The exit value was: 1.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at http://gradle.org/docs/current/userguide/gradle_daemon.html
Please read below process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Could not fetch model of type 'EclipseProject' using Gradle distribution 'http://services.gradle.org/distributions/gradle-1.0-bin.zip'.
And here are my system specs:
Windows 7 32-bit
Eclipse 3.7 32-bit
Java jdk1.7.0_07 32-bit
Is this a known bug with this plugin? Any idea on how to fix it?
In Eclipse, go to Window > Preferences > Gradle > Arguments and add the Gradle JVM arguments -Xms128m -Xmx512m in the dialog.
Sounds like once in a while, your system can't reserve enough memory to start the Gradle daemon. Does the project have a gradle.properties containing memory settings (org.gradle.jvmargs)? Or, do you have a gradle.properties in ~/.gradle?
I had the same problem with Gradle project imports (Windows 7 64-bit, STS 3.2.0.RELEASE 32-bit, Java jdk1.7.0_13 32-bit).
Solved it by creating a gradle.properties file in the project directory (= STS workspace) with the content shown below.
Note that -Xmx512m is the maximum JVM heap size I can use on my system; bigger -Xmx values lead to the described error.
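The whole file is just this one line (the comment is illustrative):
# gradle.properties - heap bounds for the Gradle daemon
org.gradle.jvmargs=-Xms128m -Xmx512m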
