I am facing a problem with a Java application I built in JavaFX. On Windows it consumes only 2-3% CPU and around 50 to 80 MB of memory. On Mac, however, the same application starts at around 50 MB of memory, continuously climbs to 1 GB, and uses over 90% CPU. I found this by checking the Mac Activity Monitor. When I use a Java profiler to look for memory leaks, the profiler shows the same memory usage as on Windows (not more than 100 MB).
I am confused by this behaviour on Mac.
Has anyone encountered this problem before, or am I doing something wrong with my application?
Lots of things are possible, but I suspect this: depending on the memory size and CPU count, the JVM may run in server mode, which changes how memory is managed. Use the -server option to force server mode on both platforms and compare again.
You can also take heap dumps (jmap -dump) to see what is using so much memory, and thread dumps (kill -3) to see what is using so much CPU.
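For example (the JAR name and the PID 12345 are placeholders; substitute your own):

java -server -jar yourapp.jar
jmap -dump:live,format=b,file=heap.hprof 12345
kill -3 12345

The kill -3 (SIGQUIT) thread dump is written to the process's standard output, so check the console or log file the application was started from.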
Related
I can see in jconsole that my simple Java hello-world app takes 1 or 2 MB, yet Task Manager shows 12 MB. I need to understand this in order to analyze a problem in our Java/native-layer application, which shows only 40 MB of memory in jconsole. We find that normal, and even in the native layer there are no memory-intensive operations. In the production environment, however, Task Manager shows 373 MB, which is far beyond our expectations.
Note: we don't have an out-of-memory error yet; rather, we have a watchdog service that complains when memory goes beyond 250 MB and starts logging it to a log file.
This article might help you. The reason is that Windows does not show only the heap memory; it shows the overall memory of the Windows process (heap plus thread stacks, class metadata, JIT code cache and other native allocations). JVM tools like jvisualvm or jconsole show only the heap space used by a Java process.
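As a rough illustration, here is a minimal, self-contained sketch (the class name is illustrative, not the asker's code) that prints the heap figures the JVM reports about itself; the OS-level number will always be larger:

public class HeapVsProcess {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        // These figures correspond to what jconsole/jvisualvm show as "heap".
        System.out.println("Used heap (MB):      " + used / (1024 * 1024));
        System.out.println("Committed heap (MB): " + rt.totalMemory() / (1024 * 1024));
        System.out.println("Max heap (MB):       " + rt.maxMemory() / (1024 * 1024));
        // Task Manager / top will report a noticeably larger number for the same
        // process, because they also count thread stacks, class metadata, the
        // JIT code cache and other native allocations.
    }
}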
We are facing a peculiar problem in our clustered application. After the system runs for some time, the application suddenly freezes, and we cannot find any clue as to what is causing it.
After enabling the JVM HotSpot safepoint logs, we see that "ParallelGCFailedAllocation" and "Revoke Bias" are taking more and more time.
Refer to the attached graph, which was plotted by parsing the HotSpot logs and converting them to CSV.
The graph shows that at certain times "ParallelGCFailedAllocation" and "Revoke Bias" spike and take around 13 seconds, which is not normal.
We are trying to find out what is causing them to take so much time.
Does anybody have a clue on how to debug such an issue?
Environment details:
32-core machine running on a VMware hypervisor.
Heap size: 12 GB
RHEL 7 with OpenJDK 8
Wow, you have about 2,800 threads in your application; that's too much!
Your heap is also huge: 4 GB in the young generation and 8 GB in the old generation. What do you expect in this case?
From the PrintSafepointStatistics output, you have no problem with safepoint synchronization; the VM operation itself is what takes all the time.
You can disable biased locking (-XX:-UseBiasedLocking) and use a concurrent collector (CMS or G1) instead of the parallel old GC. This may help and reduce pauses a little, but the main problem is the configuration and possibly the code/design.
Use size-limited thread pools; ~2,800 threads is far too many (see the sketch below).
12 GB is a huge heap, and the young generation should not be so big.
Profile your application (JFR, YourKit, JProfiler, VisualVM); that can help you find allocation hotspots.
Eclipse MAT can also help you analyze the heap.
If you want to trace RevokeBias, add -XX:+TraceBiasedLocking.
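As a starting point only (not a tuned configuration; the JAR name is a placeholder and the flags need to be validated against your own workload), the options above would look like:

java -XX:-UseBiasedLocking -XX:+UseG1GC -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1 -XX:+TraceBiasedLocking -jar yourapp.jar

And a minimal sketch of a size-limited thread pool (all names are illustrative), so work is queued instead of each task getting its own thread:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedWorkers {
    public static void main(String[] args) throws InterruptedException {
        // Bound the pool to the hardware instead of creating thousands of threads.
        int workers = Runtime.getRuntime().availableProcessors() * 2;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < 10_000; i++) {
            final int task = i;
            pool.submit(() -> handle(task));   // queued, not one thread per task
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }

    private static void handle(int task) {
        // placeholder for the real work
    }
}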
I'm debugging a fairly large project I've been working on (but did not originally create) and I've noticed that sometimes it crashes with an OutOfMemoryError. The code is loading a lot of data from files so this isn't entirely surprising in general.
However, what confuses me is that I'm using VisualVM 1.3.4 to profile the program, and it behaves inconsistently. Most times I've run it, the heap gradually expands to about 2 GB (the computer has 16 GB of RAM; it's for academic research), with the used heap spiking higher and higher underneath it. Around 2 GB, it crashes. The program isn't processing more information as time goes on, though, so it shouldn't grow the heap to 2 GB in just a few minutes.
Sometimes, though, I get a sudden crash after about 30 seconds, with a heap size of 250MB and only about 100MB in use. How am I getting a java.lang.OutOfMemoryError: Java heap space if my heap isn't full?
Edit: I'm using Eclipse to run the program, and I have the VisualVM plugin so it gets launched automatically. Also, I'm using Java 7.
Start the application with the VM argument -XX:+HeapDumpOnOutOfMemoryError.
Analyse the heap dump to find out what is causing the issue.
Eclipse MAT is an excellent tool for finding out such issues.
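For example (the JAR name and the dump directory are placeholders):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar yourapp.jar

The resulting .hprof file can then be opened in Eclipse MAT or VisualVM.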
You need to set the JVM's minimum and maximum heap memory:
set JAVA_OPTS="-Xms128m -Xmx256m"
Something like that, but with bigger values such as 2 GB or 4 GB (e.g. -Xms2g -Xmx4g), whatever fits your data.
Later edit: As you all know, you can't force the JVM to run the garbage collector (even though you can ask for it), but there are ways of convincing it to get rid of some objects by nulling their references. Another thing to watch out for is database objects that might be lazily initialised. The error can also appear when you try to create an object that exceeds the maximum heap memory.
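A minimal sketch of what that looks like (illustrative only; System.gc() is just a request the JVM is free to ignore):

import java.util.ArrayList;
import java.util.List;

public class ReleaseExample {
    public static void main(String[] args) {
        List<byte[]> cache = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            cache.add(new byte[5 * 1024 * 1024]);   // ~100 MB in total
        }
        // ... work with the data ...
        cache = null;    // the list and its arrays become eligible for collection
        System.gc();     // a hint, not a command
    }
}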
Another possibility is that some developer programmatically threw the OutOfMemoryError in some method for some reason. When you reach that part of the code, that's what you get (search the project for it).
There are at least two reasons why the application could crash with an OutOfMemoryError.
Your Java heap is just too small for the amount of data it needs to process. In that case you can either increase it, as Matei suggested, or analyze a heap dump, as Ajay suggested.
Your application leaks memory, which means it keeps references to data it no longer needs after processing it. In that case, increasing the heap will not help in the long run, and your options are either heap dump analysis (again) or a specialised memory leak detection tool such as Plumbr.
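For the second case, a leak typically looks like a collection that only ever grows, as in this illustrative sketch (not the asker's code):

import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // A static collection that is only ever added to keeps every record reachable.
    private static final List<byte[]> PROCESSED = new ArrayList<>();

    static void process(byte[] record) {
        // ... do the real work ...
        PROCESSED.add(record);   // never removed, so the GC can never reclaim it
    }

    public static void main(String[] args) {
        while (true) {
            process(new byte[1024 * 1024]);   // eventually: OutOfMemoryError: Java heap space
        }
    }
}

In a heap dump of a leaking application, MAT's dominator tree will usually point straight at such a collection.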
Turned out the crash was caused by using the OpenJDK JRE rather than Oracle's JRE. I don't know exactly what the bug is in OpenJDK that makes it crash like this, but changing to Oracle's JRE ultimately solved the problem.
(I was using OpenJDK because I'm on a Linux computer that someone was using for open-source work before me. When I mentioned the crash to him he had the idea that that might be the cause. He was right.)
Do you have a 32-bit operating system without large-memory support (PAE on Windows, a hugemem kernel on Linux, ...)? If so, you may be hitting the 2 GB per-process memory limit of 32-bit systems.
As a workaround, try setting the JVM parameter -Xss192k to allocate only 192 KB of stack space per thread, and -Xmx1024m to use no more than 1 GB of heap.
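For example, assuming the application is started from a JAR (the JAR name is a placeholder):

java -Xss192k -Xmx1024m -jar yourapp.jar

On a 32-bit JVM the heap, the thread stacks and all native allocations have to fit into the same limited address space, which is why shrinking the per-thread stack can buy some room.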
After upgrading to JBoss AS 5.1, running JRE 1.6_17 on CentOS 5 Linux, the JRE process runs out of memory after about 8 hours (it hits the 3 GB maximum on a 32-bit system). This happens on both servers in the cluster under moderate load. Java heap usage settles down, but the overall JVM footprint just continues to grow. The thread count is very stable and maxes out at 370 threads, with the thread stack size set to 128 KB.
The footprint of the JVM reaches 3G, then it dies with:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
Internal Error (allocation.cpp:117), pid=8443, tid=1667668880
Error: ChunkPool::allocate
Current JVM memory args are:
-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ThreadStackSize=128
Given these settings, I would expect the process footprint to settle in at around 1.5 GB. Instead, it just keeps growing until it hits 3 GB.
It seems none of the standard Java memory tools (Eclipse MAT, jmap, etc.) can tell me what on the native side of the JVM is eating all this memory. Running pmap on the PID just gives me a bunch of [ anon ] allocations, which don't really help much. This memory problem occurs even though I have no JNI or java.nio classes loaded, as far as I can tell.
How can I troubleshoot the native/internal side of the JVM to find out where all the non-heap memory is going?
Thank you! I am rapidly running out of ideas and restarting the app servers every 8 hours is not going to be a very good solution.
As #Thorbjørn suggested, profile your application.
If you need more memory, you could go for a 64bit kernel and JVM.
Attach with jvisualvm (included in the JDK) to get an idea of what is going on; jvisualvm can attach to a running process.
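For example, to open the JBoss process from the error output above (PID 8443) directly (the --openpid option may not exist in every VisualVM build; if not, just start jvisualvm and pick the process from its Local applications list):

jvisualvm --openpid 8443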
Walton:
I had a similar issue and posted my question/findings in https://community.jboss.org/thread/152698.
Please try adding -Djboss.vfs.forceCopy=false to the Java start-up parameters to see if it helps.
Warning: even if it cuts down the process size, you need to test more to make sure everything is all right.
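In JBoss AS 5 on Linux this is usually added to the JAVA_OPTS line in bin/run.conf (the exact file may differ in your installation); a sketch:

JAVA_OPTS="$JAVA_OPTS -Djboss.vfs.forceCopy=false"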
When running the same Java process (a JAR) under Windows and Linux (Debian), the Linux process uses a lot more memory just from starting up (12 MB vs 36 MB). Even when I try to limit the heap size with -Xmx/-Xms/etc., it stays the same; nothing I try seems to help, and the process always takes 36 MB. What explains this difference between Linux and Windows, and how can I reduce the memory usage?
EDIT:
I measure memory with the Windows Task Manager and the Linux top command.
The JVM versions are the same, and both are 32-bit systems.
I recommend using a profiler such as VisualVM to get a more granular view on what's going on.
One question I would ask to help me understand the problem better is:
Does my Java application's memory profile look dramatically different on the two platforms? You can answer this by running with GC logging enabled (for example -verbose:gc with -Xloggc, shown below) and viewing the output in a GC visualizer like HPjmeter. You should try to look at a sample set with a statistically significant amount of data, perhaps 1,000 or 10,000 GC events. If the answer is no, I would be tempted to attribute the difference you see to the JVM's heap allocation requirements at start-up. As 'nos' pointed out, pinpointing the difference can be notoriously hard. When you specified the -Xmx value on Linux, did the memory utilization exceed your -Xmx value?
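For example (Java 6/7-era flags; the JAR name and log file name are placeholders):

java -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar yourapp.jar

The resulting gc.log can then be loaded into HPjmeter or another GC log visualizer.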
It is probably measuring shared memory as well: on Linux, top's RES column includes resident pages of shared libraries and memory-mapped files, whereas the Windows Task Manager by default reports the process's private working set.