So, I'm trying to build Android. I select a device with lunch, run make clean, and then run make updatepackage with various -j switches.
However, the build fails with the following error:
FAILED: /bin/bash out/target/common/obj/JAVA_LIBRARIES/core-all_intermediates/with-local/classes.dex.rsp
Out of memory error (version 1.2-a24 'Carnac' (283001 7e39a352cafc1eb3b4ae95846a101b93ccbc9cf0)).
Java heap space.
Try increasing heap size with java option '-Xmx<size>'.
Warning: This may have produced partial or corrupted output.
[ 42% 11683/27285] build out/target/common/obj/JAVA_LIBRARIES/sdk_v21_intermediates/classes.jack
ninja: build stopped: subcommand failed.
build/core/ninja.mk:144: recipe for target 'ninja_wrapper' failed
make: *** [ninja_wrapper] Error 1
The OS I'm using is Ubuntu 15.10 on a 4-core VM. I've tried adding more swap (currently 8GB of RAM and 24GB of swap), selecting various -j values (4 to 10), and, as suggested in "GC overhead limit exceeded when building android source", changing the -Xmx value.
As for the last one, the only reference to -Xmx I can find is this one:
APICHECK_COMMAND := $(APICHECK) -JXmx1024m -J"classpath $(APICHECK_CLASSPATH)"
However, changing it from 1024m to anything larger changes nothing.
So, what could I do to make it build?
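The 'Carnac' version string in the error comes from the Jack compiler, whose server runs in its own JVM, so the apicheck -Xmx flag never reaches it. A commonly suggested workaround, assuming a standard AOSP checkout (the 4g value is illustrative), is to raise the Jack server heap and restart the server:
export JACK_SERVER_VM_ARGUMENTS="-Dfile.encoding=UTF-8 -XX:+TieredCompilation -Xmx4g"
./prebuilts/sdk/tools/jack-admin kill-server
./prebuilts/sdk/tools/jack-admin start-server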
I am trying to convert a Linux core dump of a Java process to a heap dump file suitable for analysis with Eclipse MAT. Following this blog post, adapted for the newer OpenJDK 12, I create a core dump and then run jhsdb jmap to convert it to the HPROF format:
> sudo gcore -o dump 24934
[New LWP 24971]
...
[New LWP 17921]
warning: Could not load shared library symbols for /tmp/jffi4106753050390578111.so.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f94c7e9e98d in pthread_join (threadid=140276994615040, thread_return=0x7ffc716d47a8) at pthread_join.c:90
90 pthread_join.c: No such file or directory.
warning: target file /proc/24934/cmdline contained unexpected null characters
warning: Memory read failed for corefile section, 1048576 bytes at 0x7f93756a6000.
warning: Memory read failed for corefile section, 1048576 bytes at 0x7f9379bec000.
...
warning: Memory read failed for corefile section, 1048576 bytes at 0x7f94c82dd000.
Saved corefile dump.24934
> ls -sh dump.24934
22G dump.24934
> /usr/lib/jvm/zulu-12-amd64/bin/jhsdb jmap --exe /usr/lib/jvm/zulu-12-amd64/bin/java --core dump.24934 --binaryheap --dumpfile jmap-dump.24934
Attaching to core dump.24934 from executable /usr/lib/jvm/zulu-12-amd64/bin/java, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 12.0.1+12
null
> ls -sh jmap-dump.24934
3.3M jmap-dump.24934
The core dump file is 22 GB, while the heap dump file is just 3.3 MB, so it is likely that the jhsdb jmap command fails to process the whole core dump. Eclipse MAT also fails to open the heap dump file, with the following message: The HPROF parser encountered a violation of the HPROF specification that it could not safely handle. This could be due to file truncation or a bug in the JVM.
Alex,
There are two possibilities for this.
First, gcore is a convenience script shipped with gdb. The warnings show that it had trouble loading a shared library's symbols, so gdb may have generated a broken core file in the first place. You can try loading the core file in gdb and see whether it can parse it.
Second, jhsdb parses the core file on its own. You can set the environment variable LIBSAPROC_DEBUG=1 to get its traces, which will help you see what goes wrong during parsing.
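For example, reusing the exact command from the question:
LIBSAPROC_DEBUG=1 /usr/lib/jvm/zulu-12-amd64/bin/jhsdb jmap --exe /usr/lib/jvm/zulu-12-amd64/bin/java --core dump.24934 --binaryheap --dumpfile jmap-dump.24934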
Also, why not dump the Java heap with jmap -dump directly? That skips the core dump file entirely.
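A minimal sketch, assuming the process (PID 24934 above) is still running under the same JDK:
/usr/lib/jvm/zulu-12-amd64/bin/jmap -dump:format=b,file=heap.hprof 24934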
I used to build my project with Gradle 3.5 and Gradle 4.0, and that worked.
But when I downloaded Gradle 5.0, pointed my Gradle path at it, and built the same project with the same build command, I got an OOM error.
gradle build -x test -x checkstyleMain -x checkstyleTest
Task :cok-common:compileJava FAILED
The system is out of resources.
Consult the following stack trace for details.
java.lang.OutOfMemoryError: Java heap space
at com.sun.tools.javac.util.Position$LineMapImpl.build(Position.java:153)
at com.sun.tools.javac.util.Position.makeLineMap(Position.java:77)
at com.sun.tools.javac.parser.JavaTokenizer.getLineMap(JavaTokenizer.java:763)
at com.sun.tools.javac.parser.Scanner.getLineMap(Scanner.java:127)
at com.sun.tools.javac.parser.JavacParser.parseCompilationUnit(JavacParser.java:3173)
at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:628)
at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:665)
at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:950)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:857)
at com.sun.tools.javac.main.Main.compile(Main.java:523)
at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129)
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138)
When setting the Java environment (in the build settings and the Gradle setup), there are two variants of this error:
+ java.lang.OutOfMemoryError: Java heap space
+ java.lang.OutOfMemoryError: PermGen space
First: you say it was OK in the old version of Gradle, so the heap space settings seem okay.
If not, you can try -Xmx512M (or 2048M if you think it is needed). Be careful with the three heap size options (initial size, maximum heap size, minimum size).
As for PermGen, it defaults to 64M and is used to store class metadata and the string pool.
Maybe Gradle 5 uses more metadata, so this can be the problem (which is hard to detect).
export JVM_ARGS="-XX:PermSize=64M -XX:MaxPermSize=256m"
You can try changing the PermGen size.
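Note that on Java 8 and later, PermGen was replaced by Metaspace (-XX:MaxMetaspaceSize), and for Gradle the daemon's JVM arguments are normally set via org.gradle.jvmargs in gradle.properties. A minimal sketch, with illustrative values:
echo 'org.gradle.jvmargs=-Xmx2g -XX:MaxMetaspaceSize=512m' >> gradle.properties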
I'm getting an error while running Elasticsearch on Kubernetes. I don't believe this is a memory allocation issue, but I don't know. I'm trying to set it up with discovery, not as a single node.
Here is my Kubernetes config for Elasticsearch: https://hastebin.com/ohiyivinit.bash
Here is the error on startup, from kubectl logs:
Exception in thread "main" java.lang.RuntimeException: starting java failed with [137]
output:
error:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:111)
at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:79)
at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:57)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:89)
EDIT: My problem was that the requested memory was the same as the max memory in the limits.
The error code strongly suggests this is a memory issue: exit code 137 is 128 + 9, meaning the process was killed with SIGKILL, which is what the kernel's OOM killer sends.
Since you're using GKE, if Stackdriver is enabled, you can use the following advanced filter to confirm whether this is an OOM kill:
resource.type="container"
resource.labels.cluster_name="YOUR_CLUSTER"
resource.labels.namespace_id="NAMESPACE"
resource.labels.project_id="PROJECT_ID"
resource.labels.zone:"ZONE"
resource.labels.container_name="CONTAINER_NAME" # Container name, not pod name
resource.labels.pod_id:"POD_NAME-" # Notice that is not the full pod ID
"OOM"
If you find that it is an OOM issue, you can set requests and limits in your deployment to ensure the resources are enough to run your application.
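As a sketch, assuming the deployment is named elasticsearch (the name and values are illustrative, not taken from the question's config):
kubectl set resources deployment elasticsearch --requests=memory=2Gi --limits=memory=3Gi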
Try running the command prompt as administrator.
After deploying a new version of our Java/Spring Boot software to the Swisscom Developer Cloud, running on Cloud Foundry, the startup suddenly failed with the following error: OutOfMemoryError: Compressed class space. So we decided to deploy a previous version of the software, the version that had actually been running just before: the same error occurred. We did not switch from Java 7 to Java 8, nor did we change any configuration. This leads to the question: is this really an error on our side, or rather on the server's side?
We then tried to increase the MaxMetaspaceSize by setting the variable JBP_CONFIG_OPEN_JDK_JRE to one of the following lines:
[jre: {version: 1.8.0_+}, memory_calculator: {memory_sizes: {metaspace: 128m}}]
{memory_calculator: {memory_sizes: {metaspace: 128m}}}
{memory_sizes: {metaspace: 128m}}
The application always warned that the value of memory_sizes was invalid. What is the correct format of this YAML variable?
[ConfigurationUtils] WARN User config value for 'memory_sizes' is not valid, existing property not present
We then deleted the Java app and the database service in the Swisscom Developer Console and recreated them. It had no effect; the same error occurred.
And finally, do you know why this error suddenly occurs, even with a version that was running fine just a few minutes ago?
EDIT:
This is the manifest ([database-service-name] and [application-name] were replaced):
---
path: .
instances: 1
buildpack: https://github.com/cloudfoundry/java-buildpack
services:
- [database-service-name]
applications:
- name: [application-name]
domain: scapp.io
host: [application-name]
memory: 1024M
disk_quota: 1024M
env:
SPRING_PROFILES_ACTIVE: stage, cloudfoundry
The Java buildpack version is (according to the logs):
2017-03-03 11:47:02 [STG/0] OUT -----> Java Buildpack Version: b08a692 | https://github.com/cloudfoundry/java-buildpack#b08a692
This command seems to be executed (in the logs after the crash):
2017-03-03 11:46:25 [APP/PROC/WEB/0] OUT vcap 8 0 99 10:46 ? 00:01:09 /home/vcap/app/.java-buildpack/open_jdk_jre/bin/java -Djava.io.tmpdir=/home/vcap/tmp -XX:OnOutOfMemoryError=/home/vcap/app/.java-buildpack/open_jdk_jre/bin/killjava.sh -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=68540K -XX:ReservedCodeCacheSize=240M -XX:CompressedClassSpaceSize=8731K -Xmx408104K -Djavax.net.ssl.trustStore=/home/vcap/app/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password -cp /home/vcap/app/. org.springframework.boot.loader.WarLauncher
The OutOfMemory error occurred because the Java buildpack was changed to use version 3.x of the memory calculator. Similar problems arising from this change are under discussion in GitHub issue 390. Please refer to this issue for details.
In general, v3.x of the memory calculator chooses values for various JVM memory settings based on the number of class files in the application and some default values which depend on the version of Java. It then sets the maximum heap size to the remaining amount of memory.
The previous version of the memory calculator was configured by setting JBP_CONFIG_OPEN_JDK_JRE. However, v3.x can be configured simply by setting the corresponding Java memory settings in JAVA_OPTS. For example, you could set the maximum metaspace size to 100 MB as follows:
cf set-env app-name JAVA_OPTS '-XX:MaxMetaspaceSize=100m'
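Such an environment change only takes effect after the app is restaged, so follow it with:
cf restage app-name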
If you simply want a workaround, you can use the version of the Java buildpack released before the memory calculator change:
cf push -b https://github.com/cloudfoundry/java-buildpack.git\#v3.14 ...
A comment from a Swisscom Java developer:
They will surely revert the memory heuristics to what they were in 3.13, or at least refine the calculations. The current recommendation to all customers is either
to use 3.13, or
to use explicit options with the environment variable JAVA_OPTS.
Really stuck on this one... all I'm doing is running 'sbt' to get into interactive mode so I can compile my Scala program, and I run into this:
java.io.IOException: No space left on device
at java.io.FileOutputStream.close0(Native Method)
at java.io.FileOutputStream.close(FileOutputStream.java:362)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at scala.tools.nsc.backend.jvm.BytecodeWriters$ClassBytecodeWriter$class.writeClass(BytecodeWriters.scala:93)
at scala.tools.nsc.backend.jvm.GenASM$AsmPhase$$anon$4.writeClass(GenASM.scala:67)
at scala.tools.nsc.backend.jvm.GenASM$JBuilder.writeIfNotTooBig(GenASM.scala:459)
at scala.tools.nsc.backend.jvm.GenASM$JPlainBuilder.genClass(GenASM.scala:1413)
at scala.tools.nsc.backend.jvm.GenASM$AsmPhase.run(GenASM.scala:120)
at sbt.compiler.Eval$$anonfun$compile$1$1.apply$mcV$sp(Eval.scala:177)
at sbt.compiler.Eval$$anonfun$compile$1$1.apply(Eval.scala:177)
at sbt.compiler.Eval$$anonfun$compile$1$1.apply(Eval.scala:177)
at scala.reflect.internal.SymbolTable.atPhase(SymbolTable.scala:207)
at sbt.compiler.Eval.compile$1(Eval.scala:177)
at sbt.compiler.Eval.compileAndLoad(Eval.scala:182)
at sbt.compiler.Eval.evalCommon(Eval.scala:152)
at sbt.compiler.Eval.eval(Eval.scala:96)
at sbt.EvaluateConfigurations$.evaluateDslEntry(EvaluateConfigurations.scala:177)
at sbt.EvaluateConfigurations$$anonfun$9.apply(EvaluateConfigurations.scala:117)
at sbt.EvaluateConfigurations$$anonfun$9.apply(EvaluateConfigurations.scala:115)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at sbt.EvaluateConfigurations$.evaluateSbtFile(EvaluateConfigurations.scala:115)
at sbt.Load$.sbt$Load$$loadSettingsFile$1(Load.scala:710)
at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:715)
at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:714)
at scala.Option.getOrElse(Option.scala:120)
at sbt.Load$.sbt$Load$$memoLoadSettingsFile$1(Load.scala:714)
...
Followed by:
[error] java.io.IOException: No space left on device
[error] Use 'last' for the full log.
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? q
I have never run into this before, and Google searches have not helped.
Is it the JVM that is running out of space? What device is it referring to?
I tried deleting all Scala-related target folders (basically a manual sbt clean), but that did not help.
Any help would be greatly appreciated!
The df results are:
1K-blocks Used Available Use% Mounted on
10157368 1414320 8218864 15% /var
2097152 11284 2085868 1% /tmp
So it's something else.
You get the "No space left on device" error when the JVM tries to write a file and the hard drive partition it is writing to is out of free space. Since you have already tried deleting folders without success, check whether, for example, /tmp or /var is full. (I don't know where the Scala tools write these compiled classes, unfortunately, but it seems reasonable that they would use /tmp for this.)
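A couple of quick checks, using the mount points from the df output above (note that "No space left on device" can also mean the filesystem has run out of inodes even when free blocks remain):
df -h /tmp /var .   # free space on the likely write targets and the current directory
df -i               # inode usage; an exhausted inode table also raises ENOSPC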