Java goes OutOfMemory even with enough RAM

I have an app that uses the following jvm options:
-Xmx512m -Xms256m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:MaxGCPauseMillis=2 -XX:MaxDirectMemorySize=1G
I run it on Windows 7 x64 with 8gb RAM. And when the task manager says that 60% of RAM is in use, it becomes impossible to run my program; Java says "Out of memory". Even though in theory I still have almost 3gb of free RAM left. Below are screenshots of profiling my project in NetBeans (until it suddenly crashes at a random spot). What could cause these problems? Is my program really so expensive?

You should greedily allocate your minimum required heap up front. That is,
use something like -Xms1g -Xmx1g, so that when your app actually starts running,
it has already reserved its maximum heap.
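To confirm from inside the app which limits the JVM actually picked up, a minimal check like the following can help (the class name HeapCheck is just for illustration):

```java
// Print the heap limits the running JVM actually honored, so you can verify
// that your -Xms/-Xmx flags took effect.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMiB   = rt.maxMemory()   / (1024 * 1024); // upper bound, roughly your -Xmx
        long totalMiB = rt.totalMemory() / (1024 * 1024); // heap currently committed by the JVM
        long freeMiB  = rt.freeMemory()  / (1024 * 1024); // free space within the committed heap
        System.out.println("max=" + maxMiB + " MiB, committed=" + totalMiB
                + " MiB, free=" + freeMiB + " MiB");
    }
}
```

With -Xms1g -Xmx1g, "max" and "committed" should both report close to 1024 MiB from the start.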

Related

Java - Could not reserve enough space for object heap

This might be an odd question, but since it's Java-related I'll ask it here. I'm trying to play Minecraft with a mod that requires at least 2GB RAM, but every time I try to set 2048MB it shows:
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
Java HotSpot(TM) Client VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
I'm using these options, in case it helps: -Xms2048m -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:-UseAdaptiveSizePolicy -Xmn128m.
The only way to make it work is to set the Xms to 1024MB, and that makes Java run out of memory. How can I set Java to use more memory? I'm using Win10 64-Bit, Java 8 update 121 64-bit, and I have 16GB RAM.
Help?
You just need to edit the Xms and Xmn values to whatever size you want them to be. You won't get much benefit from allocating big amounts of RAM; it mainly relieves pressure on the CPU, since the garbage collector has to do less deallocation. Tell me if it works, and if it doesn't, try to be more specific about what you're doing.

Java out of memory on 64-bit jvm

I have got a problem. My program is really big and Java is throwing an OutOfMemoryError.
In .bat file, I have got the following:
java -server -Dfile.encoding=UTF-8 -Xmx1500m -Xbootclasspath/p:../libs/l2ft.jar -cp config/xml;../libs/*; l2ft.gameserver.GameServer
Java is using 6 GB of my RAM; the next 6 GB is not used.
I typed System.getProperty("sun.arch.data.model"); and it says that I am using a 64-bit JVM.
You have set the maximum heap size to 1500m, and while the JVM can use a little more than that (~200 MB), that's all you have limited the process to.
Try instead limiting your process to around 5 GB (you want to leave some memory for overhead and the OS):
-mx5g
Make the options before Xbootclasspath this:
java -XX:MaxPermSize=512m -server -Dfile.encoding=UTF-8 -Xmx8g -XX:+UseCompressedOops
This will let it use more memory, while also making sure it uses as little memory as possible to address objects on a 64-bit machine.
The -XX:MaxPermSize=512m makes the perm gen space larger (which you will probably need, as you're using a lot of heap memory), and -XX:+UseCompressedOops makes the JVM use 32-bit addressing for object references instead of 64-bit. An 8gb heap can still be addressed correctly with 32 bits, so you use less memory per object reference.
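To double-check which JVM you are actually running, a small sketch like this prints the data model and the max heap the VM picked up. Note that the sun.arch.data.model property is HotSpot-specific and may be absent on other JVMs, hence the os.arch fallback:

```java
// Report whether this is a 32- or 64-bit JVM and the max heap it will use.
public class JvmBitness {
    public static void main(String[] args) {
        // HotSpot-specific; may be null on non-HotSpot JVMs.
        String model = System.getProperty("sun.arch.data.model");
        String arch  = System.getProperty("os.arch"); // standard property, always set
        long maxMiB  = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("data model: " + (model != null ? model : "unknown")
                + " bit, os.arch: " + arch + ", max heap: " + maxMiB + " MiB");
    }
}
```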

High CPU usage in Eclipse when idle

On my multicore machine, Eclipse uses between 100 and 250 % CPU power, even when idling on a new plain install and an empty workspace. When actually doing things, it becomes slow and unresponsive.
I have tried setting the memory settings as suggested here: Eclipse uses 100 % CPU randomly. That did not help. I also tried different Java versions, namely OpenJDK and Oracle Java 7, and Eclipse versions Juno and Indigo. I am on Ubuntu 12.04 LTS.
As another, maybe unrelated, issue: when I close Eclipse, the Java process stays open at over 200% CPU usage and needs to be killed manually.
I was having the same problem today, and it turned out to be an indexing thread that was occupying the CPU. I had recently added quite a bit of files to a project and had forgotten about it. I realize it's not likely that anyone else has this problem, but it might be useful to post how I investigated it.
I'm running Ubuntu 12.10 with STS based on eclipse Juno.
Start eclipse from the command line and redirect output to a file so we can get a thread dump
Allow it to settle for a bit, then get a listing of the cpu usage for each thread: ps -mo 'pid lwp stime time pcpu' -C java. Here's a sample of the output that identified my cpu-hungry thread:
  PID   LWP STIME     TIME %CPU
 6974     - 07:42 00:15:51  133
    -  7067 07:42 00:09:49 **86.1**
Convert the thread id (in my case 7067) to hex: 0x1b9b (e.g. on the command line using: printf "0x%x\n" 7067)
Do a thread dump of the Java process using kill -3, as in: kill -3 6974. The output is saved in the file you redirected stdout to when you started Eclipse
Open the file and look for the hex id of the thread:
"Link Indexer Delayed Write-10" prio=10 tid=0x00007f66b801a800 nid=**0x1b9b** runnable [0x00007f66a9e46000]
java.lang.Thread.State: RUNNABLE
at com.ibm.etools.references.internal.bplustree.db.ExtentManager$WriteBack.r
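The same per-thread hunt can also be done from inside the JVM with the standard ThreadMXBean, without ps and kill -3. A sketch (note that ThreadMXBean reports Java thread ids, not the native nid=0x… ids from a thread dump, so match the culprit by thread name):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// List live threads with their accumulated CPU time, to spot a CPU-hungry one.
public class BusyThreads {
    public static void main(String[] args) {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        if (!tmx.isThreadCpuTimeSupported()) {
            System.out.println("per-thread CPU time not supported on this JVM");
            return;
        }
        if (!tmx.isThreadCpuTimeEnabled()) {
            tmx.setThreadCpuTimeEnabled(true);
        }
        for (long id : tmx.getAllThreadIds()) {
            ThreadInfo info = tmx.getThreadInfo(id);
            long cpuNanos = tmx.getThreadCpuTime(id); // -1 if the thread has died
            if (info != null && cpuNanos >= 0) {
                System.out.printf("%-40s %8d ms%n",
                        info.getThreadName(), cpuNanos / 1000000);
            }
        }
    }
}
```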
I've seen such behaviour only when the garbage collector went crazy because the allocated memory really reached the configured maximum memory limits of the VM. If you have a large Eclipse installation, your first step should always be to increase the memory settings in the eclipse.ini.
Please also activate Window -> Preferences -> General -> Show heap status. It will show you how much memory Eclipse currently uses (in the status line). If that goes up to the allowed maximum and doesn't drop anymore (i.e. the garbage collector cannot clean up unused objects), then that is exactly the indication for what I described above.
Edit: It would also be good to know what Eclipse package you use, as those contain different plugins by default. Classic, Modeling, Java EE developers,...?
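The heap status shown in the status line can also be read programmatically via the standard MemoryMXBean, e.g. to log heap growth over time. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Print current heap usage, the same numbers the "Show heap status" widget shows.
public class HeapStatus {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        long usedMiB = heap.getUsed() / (1024 * 1024);
        long maxMiB  = heap.getMax()  / (1024 * 1024); // getMax() can be -1 (undefined) on some JVMs
        System.out.println("heap used: " + usedMiB + " MiB of " + maxMiB + " MiB max");
        // If "used" hovers near "max" and never drops after GC runs, the heap
        // limit itself is the bottleneck: raise -Xmx (e.g. in eclipse.ini).
    }
}
```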
I've had this problem with plugins, but never with Eclipse itself.
You can try to debug it by going to Help > About Eclipse > Installation details and disabling the plugins one by one.
Uninstalling mylyn plugins fixed the issue for me and the performance boost was so drastic that I am posting it as answer to a 6 year old question.
Go to Help->About Eclipse->Installation Details->Installed Software
and uninstall all plugins that you know you are not using. I uninstalled only mylyn plugins and it did the wonder for me.
EDIT:
In Eclipse version 2018-09 (4.9.0), the freeze/unresponsiveness issue can be solved by closing the Package and Project Explorer views.
I know this may sound like a dumb solution, but I have tested this on about 5 peer machines, multiple times and believe me when I say this simple solution removed the freeze issue in each of them. As long as package/project explorer was not reopened, none of them complained about unresponsive eclipse.
Problem: Eclipse and the Eclipse indexer take up all my resources / CPU%
Tested in Eclipse IDE for C/C++ Developers Version: 2022-09 (4.25.0) on Linux Ubuntu 18.04.
Quick summary
Solution: decrease the max number of threads Eclipse can use to at most half the number of cores your computer has. So, if your computer has 8 physical "cores" (actually: hyperthreads), decrease the max number of threads Eclipse can use to 4, as follows:
In $HOME/eclipse/cpp-2022-09/eclipse/eclipse.ini on Linux Ubuntu, or equivalent for your OS, make this change (reducing from 10 threads max, to 4, in my case):
Change from:
-Declipse.p2.max.threads=10
to:
-Declipse.p2.max.threads=4
Restart Eclipse.
Now, Eclipse can only take up to 4 of my 8 hyperthreads, and my system runs much better!
If on Linux, you should also reduce your "swappiness" setting to improve system performance. See below.
Details and additional improvements to make
I noticed a huge improvement in my ability to use my computer while Eclipse was indexing projects once I made this change. Before, Eclipse used to make my computer almost totally unusable for hours or days at a time as it indexed my huge repos (many GiB in size).
You should also give Eclipse more RAM, if needed. In that same eclipse.ini file mentioned above, the -Xms setting sets the starting RAM given to Eclipse's Java runtime environment, and the -Xmx setting sets the max RAM given to it. For indexing large projects, ensure it has a large enough max RAM to successfully index the project. The defaults, if I remember correctly, are:
-Xms256m
-Xmx2048m
...which means: starting RAM given to the Eclipse Java runtime environment is 256 MiB, and max it is allowed to grow to if needed is 2048 MiB.
I have 32 GiB of RAM and 64 GiB of swap space, and my indexer was stalling if I gave Eclipse < 12 GiB of max RAM, so I set my settings as follows to start Eclipse with 1 GiB (1024 MiB) of RAM, and allow it up to 12 GiB (12288 MiB) of RAM:
-Xms1024m
-Xmx12288m
So, my total changes were from:
-Declipse.p2.max.threads=10
-Xms256m
-Xmx2048m
...to:
-Declipse.p2.max.threads=4
-Xms1024m
-Xmx12288m
Here is my final /home/gabriel/eclipse/cpp-2022-09/eclipse/eclipse.ini file, with those changes in-place:
-startup
plugins/org.eclipse.equinox.launcher_1.6.400.v20210924-0641.jar
--launcher.library
/home/gabriel/.p2/pool/plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.2.600.v20220720-1916
-product
org.eclipse.epp.package.cpp.product
-showsplash
/home/gabriel/.p2/pool/plugins/org.eclipse.epp.package.common_4.25.0.20220908-1200
--launcher.defaultAction
openFile
--launcher.appendVmargs
-vm
/home/gabriel/.p2/pool/plugins/org.eclipse.justj.openjdk.hotspot.jre.full.linux.x86_64_19.0.1.v20221102-1007/jre/bin
-vmargs
--add-opens=java.base/java.io=ALL-UNNAMED
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED
--add-opens=java.base/java.net=ALL-UNNAMED
--add-opens=java.base/sun.security.ssl=ALL-UNNAMED
-Dosgi.requiredJavaVersion=17
-Dosgi.instance.area.default=#user.home/eclipse-workspace
-Dsun.java.command=Eclipse
-XX:+UseG1GC
-XX:+UseStringDeduplication
--add-modules=ALL-SYSTEM
-Dosgi.requiredJavaVersion=11
-Dosgi.dataAreaRequiresExplicitInit=true
-Dorg.eclipse.swt.graphics.Resource.reportNonDisposed=true
-Xms1024m
-Xmx12288m
--add-modules=ALL-SYSTEM
-Declipse.p2.max.threads=4
-Doomph.update.url=https://download.eclipse.org/oomph/updates/milestone/latest
-Doomph.redirection.index.redirection=index:/->http://git.eclipse.org/c/oomph/org.eclipse.oomph.git/plain/setups/
--add-opens=java.base/java.lang=ALL-UNNAMED
-Djava.security.manager=allow
How to see how many "cores" (again, actually: hyperthreads) you have on your hardware
On Linux Ubuntu, simply open the "System Monitor" app and count the cores. In my case, I have 8.
How many threads should I give Eclipse?
A good starting point is to give Eclipse half of your total cores, to keep it from bogging down your system all the time while indexing and refreshing large projects. So, I have 8 cores (hyperthreads), so I should give Eclipse 4 of them by setting -Declipse.p2.max.threads=4 in the .ini file.
This may sound counter-intuitive, but the larger your project and the weaker your computer, the fewer threads you should give Eclipse! This is because the larger your project and the weaker your computer, the more your computer will get bogged down using things like your Chrome web browser. So, to keep Eclipse from sucking up all your resources and freezing your computer, limit the number of threads it can have even more. If I find Eclipse to be bogging down my computer again, I'll reduce its threads to 2 or 3 max instead of 4. I previously gave it 7 of my 8 threads, and it was horrible! My computer ran so stinking slow and I could never use things like Chrome or Slack properly!
How much max RAM (-Xmx) should I give Eclipse?
The starting setting of -Xmx2048m (2048 MiB, or 2 GiB) is fine for most users. It handles most normal projects you'll encounter.
Perhaps as few as -Xmx512m (512 MiB, or 0.5 GiB) or so can index the entire Arduino AVR (8-bit mcu) source code just fine
I need at least -Xmx12288m (12288 MiB, or 12 GiB) for my large mono-repo.
You might need a whopping 32 GiB to 64 GiB (-Xmx32768m to -Xmx65536m) to index the entire C++ Boost library, which is totally nuts. So, in most cases, exclude the Boost library from your indexer. I mention that in my Google document linked-to below.
The rule of thumb is to increase your -Xmx setting a bit whenever you see your indexer struggling or stalled while Eclipse's usage of the available RAM is continually maxed out. For instance, the heap status at the bottom of my Eclipse window currently shows Eclipse using 8456 MiB of the 12288 MiB it has allocated on the heap.
If it was rapidly increasing to the max often and staying there frequently, I'd need to increase my -Xmx setting further, to let Eclipse further grow the heap.
To turn on showing the heap status at the bottom of the Eclipse window (if it isn’t already on by default):
Window → Preferences → General → check the box for "Show heap status" → click "Apply and Close".
NB: When Eclipse first starts, the right-hand number of the heap-usage indicator will equal your starting heap allocation, which is defined by the -Xms value. As Eclipse needs more memory, it will allocate more, growing that right-hand number up to the -Xmx value you've defined. Again, if your indexer stalls or freezes because it's out of RAM, increase the -Xmx number to allow Eclipse's indexer to use more heap memory (RAM).
What other options can I pass to Eclipse's underlying Java virtual machine (JVM)?
Eclipse's article, FAQ How do I increase the heap size available to Eclipse?, states (emphasis added):
Some JVMs put restrictions on the total amount of memory available on the heap. If you are getting OutOfMemoryErrors while running Eclipse, the VM can be told to let the heap grow to a larger amount by passing the -vmargs command to the Eclipse launcher. For example, the following command would run Eclipse with a heap size of 2048MB:
eclipse [normal arguments] -vmargs -Xmx2048m [more VM args]
The arguments after -vmargs are directly passed to the VM. Run java -X for the list of options your VM accepts. Options starting with -X are implementation-specific and may not be applicable to all VMs.
You can also put the extra options in eclipse.ini.
So, as it says, run this:
java -X
...for a list of all possible arguments you can pass to the underlying Java virtual machine (JVM). Here are the descriptions from that output for -Xms and -Xmx:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
(For Linux users) reduce your system's "swappiness"
If on Linux, you should also reduce your "swappiness" setting from the default of 60 down to the range of 0 to 10 (I prefer 0) to increase your system's performance and reduce lagging and freezing when you get above about 75% RAM usage.
"Swappiness" describes how likely your system is to move the contents of RAM to your "swap space", or virtual memory, which is on your hard disk drive (HDD) or solid state drive (SSD). Swappiness setting values range from 0 to 200 (see my answer quoting the Linux kernel source code here), where 0 means it will try not to use your swap space until it has to, and 200 means it will favor using your swap space earlier.
The benefit of virtual memory, or swap space, is that it can expand your computer's "RAM-like" memory for free practically, allowing you to run a program or do a heavy task like compiling a large application. Such a heavy process might want 64 GiB of RAM even if you only have 8 GiB of RAM. Normally, your computer would crash and couldn't do it, but with swap space it can, as it treats your swap file or partition like extra RAM. That's pretty amazing. The downside of swap memory, however, is that it's much slower than RAM, even when it is running on a high-speed m.2 SSD.
So, to limit swapping and improve performance, just reduce your swappiness to 0. Follow my instructions here: How do I configure swappiness?.
I mentioned and described how decreasing my system's swappiness from 60 to 0 really improved my performance and decreased periodic freezing in these two places here:
https://github.com/ElectricRCAircraftGuy/bug_reports/issues/3#issuecomment-1347864603
Unix & Linux: what is the different between settings swappiness to 0 to swapoff
As an alternative, if you have >= 64 GB of RAM (since that's a large enough amount for me to reasonably consider doing this), you may consider disabling all swap space entirely, and just running on RAM. On my bigger machines with that much RAM, that's what I've done.
References:
My Google document: Eclipse setup instructions on a new Linux (or other OS) computer
"Troubleshooting" section of that doc
My answer: java.lang.OutOfMemoryError when running bazel build
My answer: Ask Ubuntu: How do I increase the size of swapfile without removing it in the terminal?
I cross-linked back to here from my short answer: Eclipse uses 100 % CPU randomly and on Super User here: High CPU usage and very slow performance with Eclipse
How to view memory usage in eclipse (beginner)
I also put this info. in my Google document linked-to above.
https://wiki.eclipse.org/FAQ_How_do_I_increase_the_heap_size_available_to_Eclipse%3F
My answer: How do I configure swappiness?
The Java multi-threaded garbage collector is the problem here.
Add the -XX:-UseLoopPredicate option to the java command line.
See e.g. the bug https://bugzilla.redhat.com/show_bug.cgi?id=882968
I was facing the same issue. Passing the following VM arguments in Eclipse worked fine for me:
-Xmx1300m
-XX:PermSize=256m
-XX:MaxPermSize=256m

Why Sun Java on Solaris take more than twice RSS memory?

I've got a problem on my Solaris servers. When I launch a Sun Java process with restricted memory, it takes more than twice the resources.
For example, I have 64 GB of memory on each of my servers. One runs Linux, the others run Solaris. I run the same software (only Java) on all of them.
When the servers start, they use between 400 MB and 1.2 GB of RAM. I launch my Java processes (generally between 4 and 16 GB per process), and I can't allocate more than 32 GB in total as defined by the -Xms and -Xmx values. I get this kind of error:
> /java -d64 -Xms8G -Xmx8G -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
As you can see here, a lot of memory is reserved, and it's reserved by the Java processes:
> swap -s
total: 22303112k bytes allocated + 33845592k reserved = 56148704k used, 704828k available
As soon as I kill them one by one, I recover my reserved space and can launch others. But in effect I can't use more than half my memory.
Does anybody know how to resolve this problem?
Thanks
I believe the issue is that Linux overcommits memory allocations, while Solaris makes sure that what you allocate actually fits in virtual memory.
If you think that's a Linux advantage, you might reconsider when the Linux OOM killer randomly kills your mission-critical application at its worst moment.
To fix the issue, just add more swap space to Solaris.

How can I track down a non-heap JVM memory leak in JBoss AS 5.1?

After upgrading to JBoss AS 5.1, running JRE 1.6_17, CentOS 5 Linux, the JRE process runs out of memory after about 8 hours (hits 3G max on a 32-bit system). This happens on both servers in the cluster under moderate load. Java heap usage settles down, but the overall JVM footprint just continues to grow. Thread count is very stable and maxes out at 370 threads with a thread stack size set at 128K.
The footprint of the JVM reaches 3G, then it dies with:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
Internal Error (allocation.cpp:117), pid=8443, tid=1667668880
Error: ChunkPool::allocate
Current JVM memory args are:
-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ThreadStackSize=128
Given these settings, I would expect the process footprint to settle in around 1.5G. Instead, it just keeps growing until it hits 3G.
It seems none of the standard Java memory tools (Eclipse MAT, jmap, etc.) can tell me what on the native side of the JVM is eating all this memory. pmap on the PID just gives me a bunch of [ anon ] allocations, which don't really help much. This memory problem occurs with no JNI nor java.nio classes loaded, as far as I can tell.
How can I troubleshoot the native/internal side of the JVM to find out where all the non-heap memory is going?
Thank you! I am rapidly running out of ideas and restarting the app servers every 8 hours is not going to be a very good solution.
As @Thorbjørn suggested, profile your application.
If you need more memory, you could go for a 64bit kernel and JVM.
Attach with jvisualvm (included in the JDK) to get an idea of what is going on; jvisualvm can attach to a running process.
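Before digging into truly native allocations, it may be worth ruling out the non-heap regions the JVM can report about itself. A sketch using the standard java.lang.management beans (BufferPoolMXBean needs Java 7+, so on the JRE 1.6 in the question only the non-heap MemoryUsage part applies):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Report the JVM-visible non-heap memory: perm gen, code cache, and NIO
// buffer pools, which all live outside the Java heap.
public class NonHeapReport {
    public static void main(String[] args) {
        MemoryUsage nonHeap =
                ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
        System.out.println("non-heap (perm gen, code cache, ...): "
                + nonHeap.getUsed() / (1024 * 1024) + " MiB used");
        // Direct and mapped NIO buffers are a common source of process growth
        // that heap tools like MAT and jmap never see:
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println("buffer pool '" + pool.getName() + "': "
                    + pool.getMemoryUsed() / 1024 + " KiB");
        }
    }
}
```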
Walton:
I had a similar issue, and posted my question/findings in https://community.jboss.org/thread/152698 .
Please try adding -Djboss.vfs.forceCopy=false to the Java startup parameters to see if it helps.
WARN: even if it cuts down the process size, you need to test more to make sure everything is all right.
