Sometimes IntelliJ IDEA crashes for no obvious reason.
First it becomes quite slow: Ctrl+N (class search) takes longer than usual to respond as you type, and jumping between files takes more time. Then it crashes...
What is the usual route to diagnose an IntelliJ crash? I was monitoring memory on the status bar when it crashed, and it had about 100MB (out of 512MB) left at that time. Are there any useful logs that would point in the direction of the problem?
[UPDATE] 3 crashes in total.
1 instance:
A fatal error has been detected by the Java Runtime Environment:
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6d93acab, pid=3120, tid=5588
JRE version: 6.0_24-b07
Java VM: Java HotSpot(TM) Client VM (19.1-b02 mixed mode windows-x86 )
Problematic frame:
V [jvm.dll+0x9acab]
2 instances:
A fatal error has been detected by the Java Runtime Environment:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
Internal Error (allocation.cpp:166), pid=2484, tid=5568
Error: ChunkPool::allocate
Memory configuration:
-Xss2m
-Xms32m
-Xmx512m
I increased the memory to -Xmx768m, which will hopefully delay the out-of-memory error a bit. Increasing it to -Xmx1024m caused weird address-mapping problems after running IDEA for a while (integer overflow?). The machine has 3GB of RAM.
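For reference, the edited idea.exe.vmoptions now looks like this (a sketch; only -Xmx changed from the configuration above):
-Xss2m
-Xms32m
-Xmx768m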
IntelliJ wouldn't start up, i.e. the splash screen flashes and then it crashes.
Fix: set -Xmx back to the default 512MB in the idea.exe.vmoptions file located in C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 14.1.4\bin.
How I landed on this issue: I had -Xmx set to 2048MB earlier and it crashed after hitting the GC limit. From then on, IntelliJ wouldn't start.
Please define "crashes". If the window just disappears, it usually means a JVM bug, and there will be hs_err_pidXXX.log files in the IDEA working directory (usually IDEA_HOME/bin). In some cases updating the JDK to a newer version or changing the garbage collector strategy (via the vmoptions file) can work around such issues.
If the IDE stops responding completely, you need to provide thread dumps.
If it behaves weirdly, then you need to check idea.log for exceptions. In some cases this can be caused by OutOfMemory issues; increasing the heap size in idea.vmoptions should help. Check the FAQ for IDEA file locations.
If IDEA is becoming very slow on certain operations, you need to provide a CPU snapshot.
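For the thread-dump case, a minimal sketch using the standard JDK tools (this assumes a JDK bin directory on the PATH; replace <pid> with the process id that jps reports for IDEA):
jps -l
jstack -l <pid> > idea-threads.txt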
In addition to the above answer, I added
-Dswing.noxp=true to the .vmoptions file located at IDE_HOME\bin\[bits][.exe].vmoptions
This fixed the problem for me.
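For reference, a minimal sketch of the resulting .vmoptions file (the memory lines are whatever your file already contains; only the last line is the addition):
-Xms32m
-Xmx512m
-Dswing.noxp=true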
Related
I have a Spring app running in Tomcat 9.0.6 on 64-bit Linux. Because it needs a lot of memory, I would like to try the OpenJ9 JVM, which is supposedly more efficient in that regard (current heap limit with HotSpot: -Xmx128G).
I installed the 64-bit adoptopenjdk-8-jdk-openj9:
/usr/lib/jvm/adoptopenjdk-8-jdk-openj9/bin/java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
Eclipse OpenJ9 VM (build openj9-0.14.2, JRE 1.8.0 Linux amd64-64-Bit Compressed References 20190521_315 (JIT enabled, AOT enabled)
OpenJ9 - 4b1df46fe
OMR - b56045d2
JCL - a8c217d402 based on jdk8u212-b04)
Starting Tomcat causes the following error:
This JVM package only includes the '-Xcompressedrefs' configuration. Please run the VM without specifying the '-Xnocompressedrefs' option or by specifying the '-Xcompressedrefs' option.
After I set this option I get the following error:
JVMJ9GC028E Option too large: '-Xmx'
JVMJ9VM015W Initialization error for library j9gc29(2): Failed to initialize
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
The documentation isn't that clear, but I found this:
https://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/mm_gc_compressed_refs.html
Compressed references are used by default on a 64-bit IBM SDK when the value of -Xmx, which sets the maximum Java heap size, is in the correct range. On AIX®, Linux and Windows systems, the default range is 0 - 57 GB. For larger heap sizes, you can try to use compressed references by explicitly setting -Xcompressedrefs. However, larger heap sizes might result in an out of memory condition at run time because the VM requires some memory at low addresses. You might be able to resolve an out of memory condition in low addresses by using the -Xmcrs option.
So basically, at least this build of the JDK only supports compressed refs, and in order to use them I must set -Xcompressedrefs manually, since my -Xmx is above the range where they are enabled by default. That fails because my OS has already allocated too much of the memory below 4GB, some of which is needed for compressed refs. Since I can never guarantee that this won't be the case, is there any way I can use OpenJ9 without compressed refs? And would that even yield the benefits in terms of memory consumption? Or is there any way I can use compressed refs with very high -Xmx settings?
I also tried setting this option, but it didn't help: https://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/openj9/xmcrs/index.html?view=embed
How do I find the correct size for it? 1G and 64m failed. Even if I find the correct setting, how would this value guarantee that the OS hasn't already allocated all the lower memory addresses?
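For concreteness, the invocations I have been trying look roughly like this (the -Xmcrs values of 1G and 64m mentioned above both failed):
/usr/lib/jvm/adoptopenjdk-8-jdk-openj9/bin/java -Xcompressedrefs -Xmx128g -Xmcrs1g -version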
The limit for using the compressed-refs JVM is 57GB, and you can't run it if the -Xnocompressedrefs option is specified.
The 57G division is documented here: https://www.eclipse.org/openj9/docs/xcompressedrefs/
The -Xnocompressedrefs problem is mentioned in the release notes: https://github.com/eclipse/openj9/blob/master/doc/release-notes/0.15/0.15.md
With a reference to: https://github.com/eclipse/openj9/issues/479
Creating a single JVM that supports both is covered by: https://github.com/eclipse/openj9/issues/643
https://github.com/eclipse/openj9/pull/7505
(Thanks to the Eclipse OpenJ9 Slack community for the help, especially Peter Shipton.)
I found this build, which allows non-compressed refs and thus solves my issues: https://adoptopenjdk.net/releases.html?variant=openjdk8&jvmVariant=openj9#linuxxl
When I run my project in IntelliJ in debug mode I get the following error.
Does anybody know what the cause is?
I already increased my heap size in idea.vmoptions:
-ea
-server
-Xms1g
-Xmx3G
-Xss16m
-Xverify:none
-XX:PermSize=512m
-XX:MaxPermSize=1024m
I also increased the heap size for the compiler to 1024, as below:
Try the Run menu -> Edit Configurations... -> find your project in the tree of projects on the left, look for "VM options:" in the panel on the right, and enter something there, according to the information found here: What are the -Xms and -Xmx parameters when starting JVM?
That having been said, I should also add: if you are running out of memory without knowingly doing extremely memory-hungry work, then what you have on your hands is a bug that is causing your program to do runaway memory allocation, which will keep producing out-of-memory errors no matter how much you increase your heap size. In that case, you will need to look at your code, not at your project options.
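If you suspect that case, a low-effort first step (a sketch using standard HotSpot flags, nothing IntelliJ-specific) is to let the JVM write a heap dump when the error hits, then open the dump in a profiler to see what dominates it:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp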
It is very strange and I still don't understand why, but I resolved it by decreasing the VM options value to -Xmx820m.
Maybe it's because I use a 32-bit JRE while my IntelliJ IDE runs on 64-bit.
I am developing an application with GWT and GAE. When I try to rebuild it or create an artifact, I get a lot of errors, shown in the picture below.
I searched Google and Stack Overflow and found some answers, but none for my particular problem.
From what I understand, I get the error because the garbage collector is spending too much time while reclaiming too little memory.
Here is the main error: Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded.
OK, I fixed the problem: just increase the memory the virtual machine gets to compile the project. Previously it was 128 and I changed it to 512; as my project grew, it needed more memory to compile the project's classes. Here is how to do that in IntelliJ IDEA: right-click on the project
module -> Open Module Settings -> Modules -> GWT -> Compiler maximum heap size (Mb) -> change to 512.
NOTE: In IntelliJ IDEA 12+ the project settings are under File -> Project Structure, or Ctrl+Alt+Shift+S.
I had encountered the same problem:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
When I tried to fix this error, it showed the same error again. So don't panic; just increase the size a little more by setting this option in Run -> Run Configurations -> click on Arguments -> inside VM arguments type
-Xms1024M -Xmx2048M
Xms - sets the minimum heap limit
Xmx - sets the maximum heap limit
Adio's answer is correct, except that I needed to change it to 1000MB when we added the "gwt-mobile" library - 512MB was still giving me the "GC overhead limit" error. I think 128MB is a pretty poor default - it didn't work for us even when we first began writing our app.
Changing the config through the project properties in netbeans didn't work.
My solution was to edit the nbproject/gwt.properties with:
# Additional JVM arguments for the GWT compiler
gwt.compiler.jvmargs=-Xmx1024M
I tried all the suggestions in a number of posts on the net, and none of them worked.
After much experimenting, I found in the end that using the G1GC garbage collector on OS X made a big difference for me. If you are using Ant, you must make sure the build file launches the compiler with the G1GC garbage collector.
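A hedged sketch of what that can look like in the Ant target that invokes the GWT compiler (the classpath reference and module name below are placeholders for your own build):
<java classname="com.google.gwt.dev.Compiler" fork="true" failonerror="true">
    <!-- placeholder classpath reference; point it at your GWT SDK and sources -->
    <classpath refid="gwt.compile.classpath"/>
    <jvmarg value="-Xmx1024m"/>
    <!-- the collector that made the difference on OS X -->
    <jvmarg value="-XX:+UseG1GC"/>
    <!-- placeholder GWT module name -->
    <arg value="com.example.MyModule"/>
</java>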
In NetBeans 8.2 do the following
Right click on Project Name -> Properties -> Google Web Toolkit
Modify JVM Arguments to -Xmx512M
Click Ok
Run again
This worked for me on Windows 10, NetBeans 8.2, GWT 2.8.2, JDK 1.8.
First of all, I have a box with 8GB of RAM, so I doubt total memory is the issue.
This application runs fine on machines with 6GB or less.
I am trying to reserve 3GB of space using -Xmx3G under "VM Arguments" in Run Configurations in Eclipse.
Every time I try to reserve more than 1500MB, I get this error:
"Error occurred during initialization of VM; Could not reserve enough space for object heap" using -Xmx3G
What is going on here?
Could it be that you're using a 32-bit JVM on that machine?
Here is how to fix it:
Go to Start->Control Panel->System->Advanced(tab)->Environment Variables->System
Variables->New:
Variable name: _JAVA_OPTIONS
Variable value: -Xmx512M
Variable name: Path
Variable value: ;C:\Program Files\Java\jre6\bin;F:\JDK\bin;
Change this to your appropriate path.
This is actually not an Eclipse-specific issue; it's a general Java-on-Windows issue. It's because of how the JVM allocates memory on Windows: it insists on allocating a contiguous chunk of memory, which often Windows can't provide, even if there are enough separate chunks to satisfy the allocation request.
There are utilities that will try to help Windows "defrag" its memory, which would, in theory, help this situation; but I've not really tried them in earnest, so I can't speak to their effectiveness.
One thing I've heard that sometimes helps is to reboot Windows and, before starting any other apps, launch the Java app that needs the big chunk of memory. If you're lucky, Windows won't have fragmented its memory space yet and Java will get the contiguous block it asks for.
Somewhere out on the interwebs there are more technical explanations and analyses of this issue, but I don't have any references handy. I did find this, though, which looks helpful: https://stackoverflow.com/a/497757/639520
First, a 32-bit JRE can't use more than ~1.5GB of RAM, so if you want more, use a 64-bit JRE.
Second, when a new JVM starts, it must be able to reserve its own -Xmx on top of what all the already-running JVMs have reserved; if there is not enough memory left on the system, the error occurs.
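A quick way to check both points from code (a minimal sketch; sun.arch.data.model is a HotSpot-specific property and may be absent on other VMs):
public class JvmInfo {
    public static void main(String[] args) {
        // "32" or "64" on HotSpot; null on VMs that don't define the property
        System.out.println("Bitness: " + System.getProperty("sun.arch.data.model"));
        // The effective maximum heap (-Xmx), reported in megabytes
        System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}
Run it with the same JVM Eclipse is launching to see which bitness and heap limit you actually get.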
I was using Liferay with a Tomcat server from the Eclipse IDE, and I was stuck with this same error on server start-up.
Double-click the server in Eclipse; it opens the Server Overview page. I updated the memory arguments from -Xmx1024m -XX:MaxPermSize=256m to -Xmx512m -XX:MaxPermSize=256m.
Then it worked for me.
Make sure that Eclipse is actually running the same JVM you think it's running. If you ever use Java in your web browser, you likely have a 32-bit version floating around too that might be taking precedence if it was installed or updated lately.
To be absolutely sure, I recommend adding these two lines to your eclipse.ini file at the top:
-vm
C:/Java/jdk1.6.0_27/bin
...where C:/Java/jdk1.6.0_27/bin is, on my machine, the location of the JVM I know is 64-bit. Be sure to include the bin folder.
(As a bonus, on Windows 7, this also allows you to actually "pin the tab" which is why I had to do this for my own usage)
This is a heap-size issue. Edit your .bat (batch) file; it might be specifying a heap size of 1024. Change it to 512, and then it should work.
Just put a # symbol in front of org.gradle.jvmargs=-Xmx1536m in gradle.properties:
# org.gradle.jvmargs=-Xmx1536m
I also had the same problem: my Eclipse was 32-bit while the JVM it used was 64-bit.
When I pointed Eclipse to a 32-bit JVM, it worked.
I know that I am a bit late, but here comes my answer:
I just installed the online Java version from Oracle (not the offline 64-bit one).
After adding the JAVA_HOME environment variable, it just worked!
Hope I could help :)
You are probably passing the wrong options anyway.
I got a similar error with supporting error log:
Java HotSpot(TM) Client VM warning: ignoring option PermSize=32M; support was removed in 8.0
Java HotSpot(TM) Client VM warning: ignoring option MaxPermSize=128M; support was removed in 8.0
In my case, the software did not support Java 8 yet (the script was using old JVM arguments), but I had Java 8 by default.
One of the reasons for this issue is that there is no memory available for Tomcat to start. Try closing unwanted running software in Windows, then restart Eclipse and Tomcat.
The solution is simple; no need to go deep into this issue. If you are running on a 64-bit machine, follow these steps:
Uninstall 32-bit Java first (check C:\Program Files (x86) for its existence)
Install the newer 64-bit JDK (it includes the JRE)
Set the environment path (to avoid conflict errors if you have two different 64-bit JREs)
Check in a command prompt by typing the javac command
Restart / Done
You can have two different Javas installed, but don't forget to set the path.
Please set JAVA_OPTS=-Xms256m -Xmx512m in the environment variables; it should solve the issue. It worked for me.
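On Windows you can set it for the current console session like this (use the System Properties dialog to make it permanent; the values are the ones from this answer):
set JAVA_OPTS=-Xms256m -Xmx512m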
Find out whether you are using a 32-bit or a 64-bit version of Java. To do that, use the command
java -version
The third line of the output should tell you whether it is 32-bit or 64-bit.
If it is 32-bit, uninstall it and install a 64-bit version.
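For example, a 64-bit HotSpot build announces itself on that third line (the version and build numbers here are illustrative):
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)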
EDIT: This reproducible SIGSEGV happens on a Linux machine with more than one processor and more than 2GB of memory, so Java defaults to -server mode. Interestingly enough, if I force -client there's no crash anymore... (I'm still not too sure what to do with my reproducible SIGSEGV, but it's interesting nonetheless.)
First, note that this is somewhat related to, but not identical to, the following question, because in our case only a SIGSEGV happens, and we can reliably trigger it:
JVM OutOfMemory error "death spiral" (not memory leak)
It's related because it happens when we feed our app a "deluge of data": the data comes from text files and is then number-crunched (yes, financial number crunching in Java).
I can reliably trigger a JVM to SIGSEGV using only valid Java code.
NOTE: I can invariably crash both JVM 1.6.0_17 and JVM 1.6.0_18, and this question is not about how to work around this issue (for example, playing with VM parameters may fix the issue, but I'm not after that; I want to know what to do with this always-reproducible SIGSEGV).
I've got a workaround, which simply consists of using Java 1.5 when launching our app (while still using Java 1.6 to run IntelliJ IDEA, etc., on the same machine simultaneously), but my question is whether this should be reported and, if it should, how to report it, given that the log itself contains proprietary information (the full hs_err_..._log).
Hardware error can be ruled out because:
this is happening on a workstation that regularly reaches months of uptime (I only reboot it when critical security patches affecting my trimmed-down and hardened Debian Linux are issued, which really doesn't happen often) and on which applications never crash (making it very unlikely that it's a hardware issue on that machine [more below])
the same application works perfectly on that same machine under a JVM 1.5 under the same load (this is how I'm testing the app: I simply launch it under a 1.5 VM)
the same application works perfectly fine on more than one hundred client machines under the same (gigantic) load (it never crashed once on Windows + JVM 1.5 or 1.6, and never crashed once on OS X + JVM 1.5 or 1.6 [a crash would mean an instant phone call from the client])
other applications on that same machine and the same 1.6.0_17 or 1.6.0_18 JVM never crash (for example, I've got two instances of IntelliJ IDEA running as two different users on that same machine, and they don't crash)
the machine is tested with memtest "regularly" (before installing a new OS, which last happened when I installed Debian Lenny, not that long ago)
Here's the reproducible-on-demand SIGSEGV:
... $uname -a
Linux saturn 2.6.26-2-686 #1 SMP Wed Nov 4 20:45:37 UTC 2009 i686 GNU/Linux
... $ export PATH=/home/wizard/jdk1.6.0_17/bin:$PATH
... $ java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) Server VM (build 14.3-b01, mixed mode)
Launch the app, feed it a "deluge of data", wait a few seconds...
Then, invariably, for 1.6.0_17:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xb76d0080, pid=30793, tid=2514328464
#
# JRE version: 6.0_17-b04
# Java VM: Java HotSpot(TM) Server VM (14.3-b01 mixed mode linux-x86 )
# Problematic frame:
# V [libjvm.so+0x4bc080]
#
# An error report file with more information is saved as:
# /home/wizard/hs_err_pid30793.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
(note that the line '[libjvm.so+0x4bc080]' is consistent for 1.6.0_17 at every SIGSEGV)
or for 1.6.0_18:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xb77468f0, pid=722, tid=2514516880
#
# JRE version: 6.0_18-b07
# Java VM: Java HotSpot(TM) Server VM (16.0-b13 mixed mode linux-x86 )
# Problematic frame:
# V [libjvm.so+0x4d88f0]
#
# An error report file with more information is saved as:
# /home/wizard/hs_err_pid722.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
Aborted
(note that the line "[libjvm.so+0x4d88f0]" is consistent for 1.6.0_18 at every SIGSEGV)
The problem is that the log file contains proprietary information that cannot be shared.
Reproducing a "tiny test case" that triggers the issue isn't realistic either: as with the issue linked above, this only happens when a "deluge of data" is fed to the app.
Note that the exact same application, on exactly the same hardware, with exactly the same JVM but another version of Linux (I had Debian Etch previously) did NOT trigger that SIGSEGV once.
But this doesn't mean the JVM isn't at fault: it could still be a JVM issue.
Should I report this, and how? (Keeping in mind that writing a "reproducible tiny test case" is unrealistic and that the log contains proprietary information that shouldn't be leaked.) Should I just edit the log and send it?
What's the procedure for reporting such a reproducible SIGSEGV when your log contains proprietary information and a test case reproducing the issue isn't realistically doable?
Did any of you have success opening such a bug and then seeing it solved in a subsequent Java release?
Do you think it's good "for the Java community" to report such an issue, or should I just not bother because it's not important?
I got a similar problem upgrading to JDK 1.6.0_18, and it seems solved using the following options:
-server
-Xms256m
-Xmx748m
-XX:MaxPermSize=128m
-verbose:gc
-XX:+PrintGCTimeStamps
-Xloggc:/tmp/gc.log
-XX:+PrintHeapAtGC
-XX:+PrintGCDetails
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/tmp"
-XX:+UseParallelGC
-XX:-UseGCOverheadLimit
# The following options are just for remote monitoring with jconsole, useful to see JVM behaviour at runtime
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=12345
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=MyHost
I still haven't double-checked (it is a production environment), but I think the error was due to two causes:
1) A wrong setting for the heap and/or permanent space (I think JDK 1.6 needs more heap and permanent space than previous JVM versions) caused an OutOfMemoryError, but
2) in the wrong original setting somebody wrote
-XX:+HeapDumpOnOutOfMemoryError="/tmp"
and not
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/tmp"
so the JVM was probably not able to write the heap dump and we got only the SIGSEGV (previous versions wrote the heap dump to the working directory).
Check the -server -XX:+UseParallelGC -XX:-UseGCOverheadLimit options too. I think playing with VM parameters is not a workaround but the right approach here, also because the garbage collector (and not only that) changed between 1.5 and 1.6.
The problem is that the log file contains proprietary information that cannot be shared. Reproducing a "tiny test case" that triggers the issue isn't realistic either
If you can't provide Sun with a reproducible test case, they won't even look at it. Chances are good that they will ignore it even if you do provide a usable test case. The bug submission process at Sun leaves a lot to be desired.
Should I report this and how?
If you can't come up with a reproducible test case, don't bother. If they can't reproduce the issue, what do you expect them to do?
Note that the exact same application, on exactly the same hardware, with exactly the same JVM but another version of Linux (I had Debian Etch previously) did NOT trigger that SIGSEGV once.
Does it work on a different box with the same hardware and same version of Linux?
If it helps, the bug submission link in your crash report has this disclaimer:
In addition, Sun Microsystems respects your desire for privacy. Personal data collected from this program will not be sold, given or shared with organizations external to Sun. We will use this data for communications with you to clarify issues regarding the report you submitted and/or status of that report. The issues that you report may be made available to other JDC Members or Sun customers, however your personal data will be kept confidential. If you are not comfortable with the above conditions, please do not press the Submit button. If you have any questions, please refer to our Privacy Policy.
Personally, I would report it if it were feasible to hand over the code segment in question with logs, and if the data is not too sensitive (perhaps the data can be masked or obfuscated in the logs?).
It's impossible for you to really judge if the bug is "important" or not for others unless you can know what actually causes it. Reporting it might be the first step in Sun's engineers finding out the cause of something serious.
The very first question you should ask yourself is:
Am I using an officially supported Linux distribution?
If not, switch to one that is.
If you are, then report it to Sun!