I can see in jconsole that my simple Java hello-world app takes 1 or 2 MB, yet Task Manager shows 12 MB. I need to understand this in order to analyze a problem in our Java/native-layer application, which shows only 40 MB of memory in jconsole. We find that figure normal, and even on the native layer there are no memory-intensive operations. In the production environment, however, Task Manager shows 373 MB, which is far beyond our expectations.
Note: we don't have an out-of-memory error yet; rather, we have a watchdog service that complains when memory goes beyond 250 MB and starts logging it to a log file.
This article might help you. The reason is that Windows does not show only the heap memory; it shows the overall memory of the Windows process. JVM tools like jvisualvm or jconsole show the heap space actually used by the Java process.
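To see the gap for yourself, compare the JVM's own view of its heap with the figure Task Manager reports for the same PID. A minimal sketch (class name is illustrative):

    // Prints the JVM's view of its heap; Task Manager's number for the same
    // process also includes JVM code, thread stacks, GC structures, and other
    // native allocations, so it will be noticeably larger.
    public class HeapVsProcess {
        public static void main(String[] args) throws Exception {
            Runtime rt = Runtime.getRuntime();
            long used = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long committed = rt.totalMemory() / (1024 * 1024);
            System.out.printf("heap used: %d MB, heap committed: %d MB%n", used, committed);
            Thread.sleep(60_000); // keep the process alive while you check Task Manager
        }
    }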
I have a Java app. Its purpose is to automate login to websites and create wifi hotspots. It is a GUI app with many features, such as a notification manager and a system tray icon. My JAR file has a size of 3 MB, but the app consumes about 100 MB of RAM. Should I be worried?
I checked if any of my methods were recursive and I could not find any.
My java app's code can be found here : https://github.com/mavrk/bitm-cyberoam-client/tree/master/JavaApplication13
A one line program can use 8 GB of memory.
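For instance, this sketch's single meaningful statement reserves roughly 8 GB of heap (class name is illustrative; it assumes the JVM is launched with a large enough limit, e.g. -Xmx9g):

    public class OneLiner {
        public static void main(String[] args) {
            long[] eightGig = new long[1 << 30]; // 2^30 longs * 8 bytes = ~8 GB
            System.out.println(eightGig.length);
        }
    }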
If you are concerned about the size of the heap you can:
reduce it further, though this might slow the application or prevent it from working (see the example after this list).
use a memory profiler to see where the memory is being utilised.
not worry about 50 cents' worth of memory. If you earn minimum wage, you shouldn't spend more than 6 minutes on it, or your time will be worth more than the memory you save.
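Shrinking the maximum heap is just a launch flag; the value and jar name below are illustrative:

    java -Xmx32m -jar myapp.jar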
I face a problem with a JavaFX application I built. On Windows it consumes only 2-3% of CPU and around 50 to 80 MB of memory. But on Mac, the same application initially starts with 50 MB of memory, continuously grows to 1 GB, and uses over 90% of the CPU. I found this when I checked the Mac task manager (Activity Monitor). When I use a Java profiler to look for memory leaks, however, the profiler shows memory usage just like on Windows (not more than 100 MB).
I am confused by this behaviour on Mac.
Has anyone encountered this problem before, or am I doing something wrong with my application?
Lots of things are possible, but I suspect this: depending on the memory size and CPU count, the JVM may run in server mode, which causes memory management to be different. Use the -server option to force server mode on both machines and compare again.
You can also take heap dumps (jmap -dump) to see what is taking up so much memory, and stack traces (kill -3) to see what is taking up so much CPU.
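Illustrative commands (the PID and jar name are placeholders):

    java -server -jar app.jar                        # force server mode
    jmap -dump:live,format=b,file=heap.hprof <pid>   # heap dump to inspect memory
    kill -3 <pid>                                    # thread dump, written to the JVM's stdout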
I have a server-side Java component with a huge memory demand at startup that tapers off gradually. As an example, at startup the memory requirement might shoot up to 4 GB, and after the initial surge is over it goes down to 2 GB. I have configured the component to start with 5 GB of memory, and it starts well; the used memory surges up to 4 GB and then comes down close to 2 GB. However, the memory held by the heap at this point still hovers around 4 GB, and I want to bring this down (essentially keeping the free memory to a few hundred MB rather than 2 GB). I tried lowering -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio from their default values, but this resulted in garbage collection not being triggered after the initial spike, and the used memory stayed at a higher-than-usual level. Any pointers would greatly help.
First, I ask why you are worried about freeing up 2 GB of RAM on a server? 2 GB of RAM costs about $100 or less. If this is on a desktop, I guess I can understand the worry.
If you really do have a good reason to think about it, this may be tied to the garbage collection algorithm you are using. Some algorithms will release unused memory back to the OS, some will not. There are some graphs and such related to this at http://www.stefankrause.net/wp/?p=14 . You might want to try the G1 collector, as it seems to release memory back to the OS easily.
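A hypothetical invocation combining G1 with the free-ratio bounds from the question (sizes and jar name are placeholders, and it assumes a JVM version in which G1 honors these flags):

    java -Xms2g -Xmx5g -XX:+UseG1GC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 -jar server.jar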
Edit from comments
What if they all choose to max out their load at once? Are you OK with some of them paging memory to disk and slowing the server to a crawl? I would run them on a server with enough memory to run ALL applications at max heap, plus another 4-6 GB for the OS / caching. Servers with 32 or 64 GB are pretty common, and you can get more.
You have to remember that the JVM reserves virtual memory on startup and never gives it back to the OS (until the program exits). The most you can hope for is that unused memory is swapped out (this is not a good plan). If you don't want the application to use more than a certain amount of memory, you have to restrict it to that amount.
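Restricting it means capping each pool explicitly at launch; the flag values here are illustrative:

    java -Xmx2g -Xss512k -XX:MaxDirectMemorySize=256m -jar component.jar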
If you have a number of components which use a spike of memory, you should consider rewriting them to use less memory on startup, or running multiple components in the same JVM so the extra memory is less significant.
Over the past year I've made huge improvements in my application's Java heap usage--a solid 66% reduction. In pursuit of that, I've been monitoring various metrics, such as Java heap size, CPU, Java non-heap, etc. via SNMP.
Recently, I've been monitoring how much real memory (RSS, resident set) is consumed by the JVM, and I am somewhat surprised. The real memory consumed by the JVM seems totally independent of my application's heap size, non-heap, eden space, thread count, etc.
Heap size as measured by Java SNMP:
(graph) http://lanai.dietpizza.ch/images/jvm-heap-used.png
Real memory (RSS); the y-axis is in KB, so 1,000,000 KB = 1 GB:
(graph) http://lanai.dietpizza.ch/images/jvm-rss.png
(The three dips in the heap graph correspond to application updates/restarts.)
This is a problem for me because all that extra memory the JVM is consuming is 'stealing' memory that could be used by the OS for file caching. In fact, once the RSS value reaches ~2.5-3 GB, I start to see slower response times and higher CPU utilization from my application, mostly due to I/O wait. At some point, paging to the swap partition kicks in. This is all very undesirable.
So, my questions:
Why is this happening? What is going on "under the hood"?
What can I do to keep the JVM's real memory consumption in check?
The gory details:
RHEL4 64-bit (Linux - 2.6.9-78.0.5.ELsmp #1 SMP Wed Sep 24 ... 2008 x86_64 ... GNU/Linux)
Java 6 (build 1.6.0_07-b06)
Tomcat 6
Application (on-demand HTTP video streaming)
High I/O via java.nio FileChannels
Hundreds to low thousands of threads
Low database use
Spring, Hibernate
Relevant JVM parameters:
-Xms128m
-Xmx640m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:+CMSIncrementalMode
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintGCApplicationStoppedTime
-XX:+CMSLoopWarn
-XX:+HeapDumpOnOutOfMemoryError
How I measure RSS:
ps x -o command,rss | grep java | grep latest | cut -b 17-
This goes into a text file and is read into an RRD database by the monitoring system at regular intervals. Note that ps outputs kilobytes.
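An equivalent but less fragile capture, assuming pgrep is available and "latest" uniquely matches the process:

    ps -o rss= -p $(pgrep -f latest)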
The Problem & Solutions:
While in the end it was ATorras's answer that proved ultimately correct, it was kdgregory who guided me down the correct diagnostic path with the use of pmap. (Go vote up both their answers!) Here is what was happening:
Things I know for sure:
My application records and displays data with JRobin 1.4, something I coded into my app over three years ago.
The busiest instance of the application currently creates:
over 1000 new JRobin database files (at about 1.3 MB each) within an hour of starting up
~100+ more each day after start-up
The app updates these JRobin database objects once every 15 seconds, if there is something to write.
In the default configuration, JRobin:
uses a java.nio-based file access back-end. This back-end maps the database files into memory as MappedByteBuffers.
once every five minutes, a JRobin daemon thread calls MappedByteBuffer.force() on every underlying JRobin database MBB
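A minimal sketch of that pattern (file name and sizes are illustrative): map a file with java.nio, write to it, then force the dirty pages to disk, which is what the JRobin daemon does every five minutes:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedFlushDemo {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile("demo.rrd", "rw");
                 FileChannel ch = raf.getChannel()) {
                // ~1.3 MB mapping, about the size of one JRobin database file
                MappedByteBuffer mbb = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1_300_000);
                mbb.putLong(0, System.currentTimeMillis()); // an update
                mbb.force(); // flush the mapping's dirty pages to disk
            }
        }
    }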
pmap listed:
6500 mappings
5500 of which were 1.3 MB JRobin database files, which works out to ~7.1 GB
That last point was my "Eureka!" moment.
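For reference, something along these lines surfaces the largest mappings (the PID is a placeholder; column layout may vary by platform):

    pmap -x <pid> | sort -n -k3 | tail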
My corrective actions:
Consider updating to the latest JRobinLite (1.5.2), which is apparently better.
Implement proper resource handling on JRobin databases. At the moment, once my application creates a database, it never releases it, even after the database is no longer actively used.
Experiment with moving the MappedByteBuffer.force() to database update events, and not a periodic timer. Will the problem magically go away?
Immediately, change the JRobin back-end to the java.io implementation--a one-line change. This will be slower, but it is possibly not an issue. Here is a graph showing the immediate impact of this change.
(graph) http://lanai.dietpizza.ch/images/stackoverflow-rss-problem-fixed.png
Questions that I may or may not have time to figure out:
What is going on inside the JVM with MappedByteBuffer.force()? If nothing has changed, does it still write the entire file? Part of the file? Does it load it first?
Is a certain portion of each MBB always resident in RSS? (RSS was roughly half the total allocated MBB size. Coincidence? I suspect not.)
If I move the MappedByteBuffer.force() to database update events, and not a periodic timer, will the problem magically go away?
Why was the RSS slope so regular? It does not correlate to any of the application load metrics.
Just an idea: NIO buffers are placed outside the JVM heap.
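That is easy to demonstrate: direct buffers come from native memory, so the heap view in jconsole barely moves while RSS grows (sizes are illustrative):

    import java.nio.ByteBuffer;

    public class DirectBufferDemo {
        public static void main(String[] args) throws Exception {
            ByteBuffer direct = ByteBuffer.allocateDirect(512 * 1024 * 1024); // 512 MB off-heap
            for (int i = 0; i < direct.capacity(); i += 4096) {
                direct.put(i, (byte) 1); // touch each page so it becomes resident
            }
            Thread.sleep(60_000); // heap stays tiny; process RSS grows by ~512 MB
        }
    }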
EDIT:
As of 2016, it's worth considering Lari Hotari's comment [ Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable? ], because back in 2009, RHEL4 had glibc < 2.10 (~2.3).
Regards.
RSS represents pages that are actively in use -- for Java, it's primarily the live objects in the heap, and the internal data structures in the JVM. There's not much that you can do to reduce its size except use fewer objects or do less processing.
In your case, I don't think it's an issue. The graph appears to show 3 meg consumed, not 3 gig as you write in the text. That's really small, and is unlikely to be causing paging.
So what else is happening in your system? Is it a situation where you have lots of Tomcat servers, each consuming 3M of RSS? You're throwing in a lot of GC flags, do they indicate the process is spending most of its time in GC? Do you have a database running on the same machine?
Edit in response to comments
Regarding the 3M RSS size - yeah, that seemed too low for a Tomcat process (I checked my box, and have one at 89M that hasn't been active for a while). However, I don't necessarily expect it to be > heap size, and I certainly don't expect it to be almost 5 times heap size (you use -Xmx640m) -- it should at worst be heap size plus some per-app constant.
Which causes me to suspect your numbers. So, rather than a graph over time, please run the following to get a snapshot (replace 7429 by whatever process ID you're using):
ps -p 7429 -o pcpu,cutime,cstime,cmin_flt,cmaj_flt,rss,size,vsize
(Edit by Stu so we can have formatted results to the above request for ps info:)
[stu#server ~]$ ps -p 12720 -o pcpu,cutime,cstime,cmin_flt,cmaj_flt,rss,size,vsize
%CPU - - - - RSS SZ VSZ
28.8 - - - - 3262316 1333832 8725584
Edit to explain these numbers for posterity
RSS, as noted, is the resident set size: the pages in physical memory. SZ holds the number of pages writable by the process (the commit charge); the manpage describes this value as "very rough". VSZ holds the size of the virtual memory map for the process: writable pages plus shared pages.
Normally, VSZ is slightly > SZ, and very much > RSS. This output indicates a very unusual situation.
Elaboration on why the only solution is to reduce objects
RSS represents the number of pages resident in RAM -- the pages that are actively accessed. With Java, the garbage collector will periodically walk the entire object graph. If this object graph occupies most of the heap space, then the collector will touch every page in the heap, requiring all of those pages to become memory-resident. The GC is very good about compacting the heap after each major collection, so if you're running with a partial heap, most of the pages should not need to be in RAM.
And some other options
I noticed that you mentioned having hundreds to low thousands of threads. The stacks for these threads will also add to the RSS, although it shouldn't be much. Assuming that the threads have a shallow call depth (typical for app-server handler threads), each should only consume a page or two of physical memory, even though there's a half-meg commit charge for each.
Why is this happening? What is going on "under the hood"?
The JVM uses more memory than just the heap. For example, Java methods, thread stacks and native handles are allocated in memory separate from the heap, as are the JVM's internal data structures.
In your case, possible causes of troubles may be: NIO (already mentioned), JNI (already mentioned), excessive threads creation.
About JNI: you wrote that the application isn't using JNI, but... what type of JDBC driver are you using? Could it be a type 2 driver, and leaking? It's very unlikely though, as you said database usage was low.
About excessive thread creation: each thread gets its own stack, which may be quite large. The stack size actually depends on the VM, OS and architecture; e.g. for JRockit it's 256 KB on Linux x64 (I didn't find the reference in Sun's documentation for Sun's VM). This directly impacts the thread memory (thread memory = thread stack size * number of threads). And if you create and destroy lots of threads, the memory is probably not reused.
What can I do to keep the JVM's real memory consumption in check?
To be honest, hundreds to low thousands of threads seems enormous to me. That said, if you really need that many threads, the thread stack size can be configured via the -Xss option. This may reduce the memory consumption. But I don't think this will solve the whole problem. I tend to think that there is a leak somewhere when I look at the real memory graph.
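For example, at the half-meg per-thread commit charge mentioned earlier, 1000 threads commit roughly 500 MB of stacks; capping the stack size shrinks that (values are illustrative):

    java -Xss256k -jar app.jar    # ~1000 threads * 256 KB = ~250 MB of stacks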
The current garbage collector in Java is well known for not releasing allocated memory, even though the memory is not required anymore. It's quite strange, however, that your RSS size increases to >3 GB although your heap size is limited to 640 MB. Are you using any native code in your application, or do you have the native performance optimization pack for Tomcat enabled? In that case, you may of course have a native memory leak in your code or in Tomcat.
With Java 6u14, Sun introduced the new "Garbage-First" garbage collector, which is able to release memory back to the operating system if it's not required anymore. It's still categorized as experimental and not enabled by default, but if it is a feasible option for you, I would try to upgrade to the newest Java 6 release and enable the new garbage collector with the command line arguments "-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC". It might solve your problem.
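A full invocation might look like this (the jar name is a placeholder; keep your existing sizing flags):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -Xms128m -Xmx640m -jar app.jar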