I'm investigating some memory bloat in a Java project and I'm confounded by the different statistics reported by different tools (we are using Java 8 on Solaris 10).
jconsole gives me three numbers:
Committed: the amount reserved for this process by the OS
Used: the amount actually being used by this process
Max: the amount available to the process (in our case it is limited to 128MB via Java command line option -Xmx128m).
For my project, jconsole reports 119.5MB max, 61.9MB committed, 35.5MB used.
The OS tools report something totally different:
ps -o vsz,rss and prstat -s rss and pmap -x all report that this process is using around 310MB virtual, 260MB physical
So my questions are:
Why does the OS report that I'm using around 5x as much as jconsole says is "committed" to my process?
Which of these measurements is actually accurate? (By "accurate" I mean: if I have 12GB of memory, can I run 40 of these (@ 300MB) before I hit OutOfMemoryException, or can I run 200 of them (@ 60MB)? Yes, I know I can't use all 12GB of memory, and yes, I understand that virtual memory exists; I'm just using that number to illuminate the question.)
This question goes quite deep. I'm just going to mention 3 of the many reasons:
VMs
Shared libraries
Stacks and permgen
VMs
Java is like a virtual mini computer. Imagine you ran an emulator on your computer that emulates an old Macintosh computer, for example. The emulator app has a config screen where you set how much RAM is in the virtual computer. If you pick 1GB and start the emulator, your OS is going to say the 'Old Mac Emulator' application is taking 1GB, even though inside the virtual machine, that virtual old Mac might be reporting 800MB of 1GB free.
A JVM is the same thing. The JVM has its own memory management. As far as the OS is concerned, java.exe is an app that takes 1GB. As far as the JVM is concerned, there's 400MB available on the heap right now.
A JVM is slightly more convoluted, in that the total amount of memory a JVM 'claims' from the OS can fluctuate. Out of the box, a JVM will generally not ask for the maximum right away; it asks for more over time, interleaved with garbage collection: Heap full? Garbage collect. That only freed up maybe 20% or so? Ask the OS for more. -Xms and -Xmx control this; set them to the same value, and the JVM will ask for that much memory on startup and will never ask for more. In general a JVM will never relinquish any memory it claimed.
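If you want to see the three numbers jconsole shows from inside the process, the Runtime API exposes rough equivalents. A minimal sketch (class name is mine; works on any Java 8+ runtime):

class HeapNumbers {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                 // roughly jconsole's "Max" (capped by -Xmx)
        long committed = rt.totalMemory();         // roughly "Committed"
        long used = committed - rt.freeMemory();   // roughly "Used"
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                max >> 20, committed >> 20, used >> 20);
    }
}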
JVMs are still primarily aimed at server deployments, where you want the RAM dedicated to your VM to be constant. There's generally little point in having each app take whatever it wants whenever it wants it. That's in contrast to desktop machines, where you tend to have a ton of apps running and, given that a human is 'operating' them, generally only one app at a time has particularly significant RAM requirements.
This explains jconsole, which is akin to reporting the free memory inside the virtual old mac app: It's reporting on the state of the heap as the JVM sees it.
Whereas ps -o and friends are memory introspection tools at the OS level, and they just see the JVM as a big black box.
Which one is actually accurate
They both are. From their perspective, they are correct.
Shared libraries
OSes are highly complex beasts these days. To put things in Java terms: you can have a single JVM that is concurrently handling 100 simultaneous incoming https calls. One could want to see a breakdown of how much memory each of the currently 100 running 'handlers' is taking up. Okay... so how do we 'file' the memory load of String - the class itself, not any particular instance of String, but the code, e.g. the instructions for how .toLowerCase() runs? Those are in memory too, someplace! The web framework needs it, so does the core JVM, and so does probably every single last one of those 100 concurrent handlers. So how do we 'bookkeep' this?
In other words, the memory load on an entire system cannot be strictly divided up as 'that memory is 100% part of that app, and this memory is 100% part of this app'. Shared libraries make that difficult.
The JVM is technically capable of rendering UIs, processing images, opening files both using the synchronous as well as the asynchronous API, and even the random access API if your OS offers a separate access library for it, sending network requests in async mode, in sync mode, and more. In effect, a JVM will immediately tell the OS: I can do allllll these things.
In my experience/recollection, most OSes report the total memory load of a single application as the sum of the memory it needs itself plus, in full, all the memory of every (shared) library it loads.
That means ps and friends overreport JVMs considerably: The JVM loads in a ton of libraries. This doesn't actually cost RAM (The OS also loaded these libraries, the JVM doesn't use any large DLLs/.SO/.JNILIB files of its own, just hooks up the ones the OS provides, pretty much all of them), but is often 'bookkept' as such. You know this is happening if this trivial app:
class Test {
    public static void main(String[] args) throws Exception {
        System.out.println("Hello!");
        Thread.sleep(100000L);
    }
}
Already takes around 60MB or more.
I mean, if I have 12GB of memory, can I run 40 of these (@ 300MB)
That shared-library stuff means each VM's memory load according to ps and friends is over-inflated by however much the shared libraries 'cost', because each JVM shares those libraries - the OS only loads them once, not 40 times.
Stacks and permgen
The 'heap', which is where newly created objects go, is the largest chunk of any JVM's memory load. It's also generally the only one JVM introspection tools like jconsole show you. However, it's not the only memory a JVM needs. There's a small slice it needs for its core self (the 'C code', so to speak). Each active thread has a stack, and each stack needs memory too: by default, whatever you pass to -Xss, times the number of concurrent threads. But that's not a certainty: you can construct a thread with an alternate stack size (check the constructors of j.l.Thread). There used to be 'permgen', which is where class code lived. Modern JVM versions got rid of it; in general, newer JVM versions try to do more and more on the heap instead of in magic, hard-to-introspect places like permgen.
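To illustrate the per-thread stack point: the four-argument constructor of j.l.Thread lets you request a stack size other than the -Xss default. A small sketch (the 256KB figure is just an example; the JVM is free to round or ignore the request):

class StackSizeDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("running on a small stack");
        // request roughly 256KB of stack for this one thread
        new Thread(null, task, "small-stack", 256 * 1024).start();
    }
}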
I mean, if I have 12GB of memory, can I run 40 of these (@ 300MB) before I hit OutOfMemoryException?
Run all 40 at once, and always specify both -Xms and -Xmx, setting them to equal sizes. Assuming all those 40 JVMs are relatively stable in terms of how many concurrent threads they ever run, if you're ever going to run into memory issues, it'll happen immediately: with -Xms and -Xmx equal you've removed the dynamism from the situation, and every JVM pretty much instantly claims all the memory it will ever claim, so it either 'works' or it won't. (Stacks mess with the cleanliness of this somewhat, hence the caveat about stable-ish thread counts.)
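As a concrete launch line for one of those 40 JVMs (sizes and jar name are placeholders, not recommendations):

java -Xms256m -Xmx256m -Xss512k -jar yourservice.jar

With 40 of those, the heaps alone account for 40 * 256MB = 10GB, and you can budget the remainder for stacks, JVM overhead, and the OS itself.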
I am building a very complex piece of software that will be used in production and will run on a server as a service.
I need to make this jar have a set maximum RAM usage at runtime, based on some calculations made by my program. I have seen that there are ways of setting the memory before running the built program, but I would like to set how much memory the jar is going to use while it is running. Is this possible?
There are two issues here. As mentioned above, you can only request up to a specific amount of memory. Efficient garbage collection can help you reclaim memory that is no longer needed.
The second, and probably real, issue here is metering how much memory is actually used by the application. There are many frameworks (e.g., JMeter) for measuring how much memory is used - and this can be done with respect to the amount of data processed. When tackling NP-complete (or even just worse-than-O(n)) problems, this can be very useful from the user's perspective ("This works well with up to 2 ||| objects")
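As a starting point for that kind of metering, you can sample the heap around a workload with the Runtime API. A rough sketch (the workload and names are made up, and System.gc() is only a hint, so treat the numbers as approximate):

import java.util.ArrayList;
import java.util.List;

class MemoryMeter {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        long before = rt.totalMemory() - rt.freeMemory();

        List<int[]> data = new ArrayList<>();   // stand-in for the real workload
        for (int i = 0; i < 1000; i++) {
            data.add(new int[1024]);
        }

        System.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        // printing data.size() also keeps the list reachable during measurement
        System.out.println("approx bytes held: " + (after - before) + " for " + data.size() + " chunks");
    }
}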
I am trying to get a handle on proper memory usage and garbage collection in Java. I'm not a novice programmer by any means, but it always seems to me that once Java touches some memory, it will never be released for other applications to use. In that case, you have to make sure your peak memory is never too high, or your application will continually use whatever the peak memory usage was.
I wrote a small sample program trying to demonstrate this. It basically has 4 buttons...
Fill the class-scope variable BigList = new ArrayList<String>() with about 25,000,000 long string items.
Call BigList.clear()
Reallocate the list - BigList = new ArrayList<String>() again (to shrink the list size)
A call to System.gc() - Yes, I know this doesn't mean that GC will really run, but it's what we have.
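Put together as a console program, those four steps look roughly like this (a sketch of the test; the original used buttons, and you may need a big -Xmx or a smaller count to fill the list):

import java.util.ArrayList;
import java.util.List;

class BigListTest {
    static List<String> BigList = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 25_000_000; i++) {
            BigList.add("some fairly long string item number " + i);  // step 1: fill
        }
        BigList.clear();                  // step 2: clear (the backing array keeps its capacity)
        BigList = new ArrayList<>();      // step 3: reallocate so the old backing array can be collected
        System.gc();                      // step 4: request a collection (only a hint)
        Thread.sleep(120_000);            // stay alive so the OS task monitor can be checked
    }
}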
So next I did some testing on Windows, Linux, and Mac OS while using the default task monitors to check on the process's reported memory usage. Here is what I found...
Windows - Pumping the list, calling clear, and then calling GC several times will not reduce memory usage at all. However, reallocating the list using new and then calling GC several times will reduce the memory usage back to starting levels. IMO, this is acceptable.
Linux (I used Mint 11 distro with Sun JVM) - Same results as Windows.
Mac OS - I followed the same steps as above, but even when reinitializing the list, calls to GC seemingly have no effect. The program will sit using hundreds of MB of RAM even though I have nothing in memory.
Can anyone explain this to me? Some people have told me some stuff about "heap" memory, but I still don't fully understand it and I'm not sure it applies here. From what I have heard, I shouldn't be seeing the behavior I'm seeing on Windows and Linux anyway.
Is this just a difference in the way Mac OS's Activity Monitor measures memory usage or is there something else going on? I would prefer to not have my program idling with tons of RAM usage. Thanks for your insight.
The Sun/Oracle JVM does not return unneeded memory to the system. If you give it a large, maximum heap size, and you actually use that heap space at some point, the JVM won't give it back to the OS for other uses. Other JVMs will do that (JRockit used to, but I don't think it does any more).
So, for Oracle's JVM you need to tune your app and your system for peak usage; that's just how it works. If the memory that you're using can be managed with byte arrays (such as working with images or something), then you can use mapped byte buffers instead of Java byte arrays. Mapped byte buffers are taken straight from the system and are not part of the heap. When you free up these objects (AND they are GC'd, I believe, but not sure), the memory will be returned to the system. You'll likely have to play with that one, assuming it's even applicable at all.
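A minimal sketch of the off-heap idea (sizes are illustrative; whether and when the memory returns to the OS after GC is implementation-dependent, as noted above):

import java.nio.ByteBuffer;

class DirectBufferDemo {
    public static void main(String[] args) {
        // 64MB allocated outside the Java heap
        ByteBuffer pixels = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        pixels.putInt(0, 42);
        System.out.println(pixels.getInt(0));
        // once the buffer object itself becomes unreachable and is collected,
        // the off-heap block can be freed back to the system
    }
}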
... but it always seems to me that once Java touches some memory, it's gone forever. You will never get it back.
It depends on what you mean by "gone forever".
I've also heard it said that some JVMs do give memory back to the OS when they are ready and able to. Unfortunately, given the way that the low-level memory APIs typically work, the JVM has to give back entire segments, and it tends to be complicated to "evacuate" a segment so that it can be given back.
But I wouldn't rely on that ... because there are various things that could prevent the memory being given back. The chances are that the JVM won't give the memory back to the OS. But it is not "gone forever" in the sense that the JVM will continue to make use of it. Even if the JVM never approaches the peak usage again, all of that memory will help to make the garbage collector run more efficiently.
In that case, you have to make sure your peak memory is never too high, or your application will continually eat up hundreds of MB of RAM.
That is not true. Assuming that you are adopting the strategy of starting with a small heap and letting it grow, the JVM won't ask for significantly more memory than the peak memory. The JVM won't continually eat up more memory ... unless your application has a memory leak and (as a result) its peak memory requirement has no bound.
(The OP's comments below indicate that this is not what he was trying to say. Even so, it is what he did say.)
On the topic of garbage collection efficiency, we can model the cost of a run of an efficient garbage collector as:
cost ~= (amount_of_live_data * W1) + (amount_of_garbage * W2)
where W1 and W2 are (we assume) constants that depend on the collector. (Actually, this is an over-simplification. The first part is not a linear function of the number of live objects. However, I claim that it doesn't matter for the following.)
The efficiency of the collector can then be stated as:
efficiency = cost / amount_of_garbage_collected
which (if we assume that the GC collects all garbage) expands to
efficiency ~= (amount_of_live_data * W1) / amount_of_garbage + W2.
When the GC runs,
heap_size ~= amount_of_live_data + amount_of_garbage
so
efficiency ~= W1 * (amount_of_live_data / (heap_size - amount_of_live_data)) + W2
In other words:
as you increase the heap size, the efficiency tends to a constant (W2), but
you need a large ratio of heap_size to amount_of_live_data for this to happen.
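To make that concrete, plug in some made-up numbers: say W1 = 10, W2 = 1 and amount_of_live_data = 100MB. With heap_size = 200MB, efficiency ~= 10 * (100 / (200 - 100)) + 1 = 11 units of cost per unit of garbage collected. Grow the heap to 1100MB and it drops to 10 * (100 / 1000) + 1 = 2. A 5.5x bigger heap makes each collected megabyte 5.5x cheaper.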
The other point is that for an efficient copying collector, W2 covers just the cost of zeroing the space occupied by the garbage objects in 'from space'. The rest (tracing, copying of live objects to 'to space', and zeroing the 'from space' that they occupied) is part of the first term of the initial equation; i.e. covered by W1. What this means is that W2 is likely to be considerably smaller than W1 ... and that the first term of the final equation is significant for longer.
Now obviously this is a theoretical analysis, and the cost model is a simplification of how real garbage collectors really work. (And it doesn't take account of the "real" work that the application is doing, or the system-level effects of tying down too much memory.) However, the maths tells me that from the standpoint of GC efficiency, a big heap really does help a lot.
Some JVMs do not, or cannot, release previously acquired memory back to the host OS if it isn't needed at the moment, because doing so is a costly and complex task. The garbage collector applies only to the heap memory within the Java virtual machine, so it does not give memory back (free() in C terms) to the OS. E.g. if a big object isn't used any more, the GC will mark its memory as free within the JVM's heap, but that memory is not released to the OS.
However, the situation is changing, for example ZGC will return memory to the operating system.
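For instance, on a recent JDK you might launch with the following (ZGC shipped in JDK 11 and was experimental until JDK 15; on JDK 11-14 you additionally need -XX:+UnlockExperimentalVMOptions):

java -XX:+UseZGC -Xmx4g MyApp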
Once the program terminates, does the memory usage go down in Task Manager on Windows? I think the memory is getting released, but not shown as released by the default task monitors of the OS you are monitoring. Go through this C++ question: Problem with deallocating vector of pointers
A common misconception is that Java uses up memory as it runs and therefore should be able to return memory to the OS. Actually, the Oracle/Sun JVM reserves the virtual memory as one continuous block of memory as soon as it starts. If there isn't enough continuous virtual memory available, it fails on startup, even if the program isn't going to use that much.
What then happens is the OS is smart enough not to allocate physical memory to the program until it is used. It cannot easily reclaim the memory but it can be swapped to disk if it needs to and it hasn't been used for a while. Java doesn't handle having parts of the heap swapped to disk very well so this should be avoided.
Java allocates memory only to objects; there is no explicit allocation of memory. In fact, Java even treats array types as objects. Each time an object is created, it goes on the heap.
The Java runtime employs a garbage collector that reclaims the memory occupied by an object once it determines that the object is no longer accessible. This is an automatic process.
Calling System.gc() may not collect garbage at the time you call it; that's why your memory is not reduced. In general, it is better to let the system decide when it needs to collect the heap and whether or not to do a full collection.
System.gc() doesn't even force a garbage collection; it's simply a hint to the JVM that "now may be a good time to clean up a bit"
There are some great documents produced by Sun/Oracle describing Java's garbage collection. A quick search on "Java Garbage Collection Tuning" yields results such as:
http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
and
http://java.sun.com/docs/hotspot/gc1.4.2/
The introduction of the Oracle doc states;
The Java™ 2 Platform, Standard Edition (J2SE™ platform) is used for a wide variety of applications from small applets on desktops to web services on large servers. In the J2SE platform version 1.4.2 there were four garbage collectors from which to choose but without an explicit choice by the user the serial garbage collector was always chosen. In version 5.0 the choice of the collector is based on the class of the machine on which the application is started.

This “smarter choice” of the garbage collector is generally better but is not always the best. For the user who wants to make their own choice of garbage collectors, this document will provide information on which to base that choice. This will first include the general features of the garbage collections and tuning options to take the best advantage of those features. The examples are given in the context of the serial, stop-the-world collector. Then specific features of the other collectors will be discussed along with factors that should be considered when choosing one of the other collectors.
They describe the various types of collectors available and the situations in which they should be used. I remember using this alongside JConsole to monitor how the application performed when started with various different options.
These docs will give you a bit more insight into how collection occurs depending on the parameters you are using.
I ran into this problem on Windows and have found a solution, so I'm posting it as an answer in case it can help others.
A lot of answers on here suggest that Java's behavior is 1. good and/or 2. an unavoidable consequence of garbage collecting. These are both false.
The Problem:
If you are like me and want to use Java to write small applications for a workstation, or even run multiple smaller processes on a server, then Oracle's JVM memory allocation behavior makes it almost completely useless. Even when running with -client, every JVM process hoards memory once allocated and never gives it back. This behavior cannot be disabled. As the OP notices: each JVM process holds on to its unused memory indefinitely, even if it will never use it again and even while other JVM processes are starving. This inexplicable behavior makes Oracle's implementation useless for all but monolithic, single-application scenarios.
Also: this is NOT a consequence of garbage collection. Witness .Net applications which run on Windows, use garbage collection, and do not suffer from this problem at all.
The Solution:
The solution I found to this was to use the IKVM.NET JVM, which you use as a drop-in replacement for java.exe on Windows. It compiles Java bytecode to .Net IL code and runs as a .Net process. It also contains utilities to convert .jar files into .Net .dll and .exe assemblies. The performance is often better than Oracle's JVM, and after a GC, memory is instantly returned to the OS. (Note: this also works on Linux with Mono.)
To be clear, I still rely on Oracle's JVM for all but my small applications, and also to debug my small applications, but once stable, I use ikvm to run them as if they were native Windows applications, and this works so well I've been amazed. It has numerous beneficial side effects. Once compiled, DLLs shared between processes are loaded only once, and applications show up in the task manager as .exe instead of all showing as javaw.exe.
Unfortunately, not everyone can use ikvm to solve this problem, but I hope this helps those in my situation.
It seems that the JVM uses some fixed amount of memory. At least I have often seen parameters -Xmx (for the maximum size) and -Xms (for the initial size) which suggest that.
I got the feeling that Java applications don't handle memory very well. Some things I have noticed:
Even some very small sample demo applications load huge amounts of memory. Maybe this is because of the Java library which is loaded. But why is it needed to load the library for each Java instance? (It seems that way because multiple small applications linearly take more memory. See here for some details where I describe this problem.) Or why is it done that way?
Big Java applications like Eclipse often crash with some OutOfMemory exception. This was always strange because there was still plenty of memory available on my system. Often, they consume more and more memory over runtime. I'm not sure if they have some memory leaks or if this is because of fragmentation in the memory pool -- I got the feeling that the latter is the case.
The Java library seems to require much more memory than similarly powerful libraries like Qt, for example. Why is this? (To compare, start some Qt applications and look at their memory usage, then start some Java apps.)
Why doesn't it just use the underlying system techniques like malloc and free? Or if they don't like the libc implementation, they could use jemalloc (as in FreeBSD and Firefox), which seems to be quite good. I am quite sure that this would perform better than the JVM memory pool. And not only perform better, but also require less memory, especially for small applications.
Addition: Has somebody tried that already? I would be much interested in an LLVM-based JIT compiler for Java which just uses malloc/free for memory handling.
Or maybe this also differs from JVM implementation to implementation? I have used mostly the Sun JVM.
(Also note: I'm not directly speaking about the GC here. The GC is only responsible for calculating which objects can be deleted and initiating the freeing of memory; the actual freeing is a different subsystem. Afaik, it is its own memory pool implementation, not just a call to free.)
Edit: A very related question: Why does the (Sun) JVM have a fixed upper limit for memory usage? Or to put it differently: Why does JVM handle memory allocations differently than native applications?
You need to keep in mind that the Garbage Collector does a lot more than just collecting unreachable objects. It also optimizes the heap space and keeps track of exactly where there is memory available to allocate for the creation of new objects.
Knowing immediately where there is free memory makes the allocation of new objects into the young generation efficient, and prevents the need to run back and forth to the underlying OS. The JIT compiler also optimizes such allocations away from the JVM layer, according to Sun's Jon Masamitsu:
Fast-path allocation does not call into the JVM to allocate an object. The JIT compilers know how to allocate out of the young generation and code for an allocation is generated in-line for object allocation. The interpreter also knows how to do the allocation without making a call to the VM.
Note that the JVM goes to great lengths to try to get large contiguous memory blocks as well, which likely have their own performance benefits (See "The Cost of Missing the Cache"). I imagine calls to malloc (or the alternatives) have a limited likelihood of providing contiguous memory across calls, but maybe I missed something there.
Additionally, by maintaining the memory itself, the Garbage Collector can make allocation optimizations based on usage and access patterns. Now, I have no idea to what extent it does this, but given that there's a registered Sun patent for this concept, I imagine they've done something with it.
Keeping these memory blocks allocated also provides a safeguard for the Java program. Since the garbage collection is hidden from the programmer, they can't tell the JVM "No, keep that memory; I'm done with these objects, but I'll need the space for new ones." By keeping the memory, the GC doesn't risk giving up memory it won't be able to get back. Naturally, you can always get an OutOfMemoryException either way, but it seems more reasonable not to needlessly give memory back to the operating system every time you're done with an object, since you already went to the trouble to get it for yourself.
All of that aside, I'll try to directly address a few of your comments:
Often, they consume more and more memory over runtime.
Assuming that this isn't just what the program is doing (for whatever reason, maybe it has a leak, maybe it has to keep track of an increasing amount of data), I imagine that it has to do with the free heap space ratio defaults set by the (Sun/Oracle) JVM. The default value for -XX:MinHeapFreeRatio is 40%, while -XX:MaxHeapFreeRatio is 70%. This means that any time there is only 40% of the heap space remaining, the heap will be resized by claiming more memory from the operating system (provided that this won't exceed -Xmx). Conversely, it will only* free heap memory back to the operating system if the free space exceeds 70%.
Consider what happens if I run a memory-intensive operation in Eclipse; profiling, for example. My memory consumption will shoot up, resizing the heap (likely multiple times) along the way. Once I'm done, the memory requirement falls back down, but it likely won't drop so far that 70% of the heap is free. That means that there's now a lot of underutilized space allocated that the JVM has no intention of releasing. This is a major drawback, but you might be able to work around it by customizing the percentages to your situation. To get a better picture of this, you really should profile your application so you can see the utilized versus allocated heap space. I personally use YourKit, but there are many good alternatives to choose from.
*I don't know if this is actually the only time and how this is observed from the perspective of the OS, but the documentation says it's the "maximum percentage of heap free after GC to avoid shrinking," which seems to suggest that.
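If you do want to experiment with those knobs, the launch line looks like this (the values are illustrative only, and how aggressively memory is actually returned depends on the collector in use):

java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar yourapp.jar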
Even some very small sample demo applications load huge amounts of memory.
I guess this depends on what kind of applications they are. I feel that Java GUI applications run memory-heavy, but I don't have any evidence one way or another. Did you have a specific example that we could look at?
But why is it needed to load the library for each Java instance?
Well, how would you handle loading multiple Java applications if not creating new JVM processes? The isolation of the processes is a good thing, which means independent loading. I don't think that's so uncommon for processes in general, though.
As a final note, the slow start times you asked about in another question likely come from several initial heap reallocations necessary to get to the baseline application memory requirement (due to -Xms and -XX:MinHeapFreeRatio), depending on what the default values are with your JVM.
Java runs inside a Virtual Machine, which constrains many parts of its behavior. Note the term "Virtual Machine." It is literally running as though the machine is a separate entity, and the underlying machine/OS are simply resources. The -Xmx value is defining the maximum amount of memory that the VM will have, while the -Xms defines the starting memory available to the application.
The VM is a product of the binary being system agnostic - this was a solution used to allow the byte code to execute wherever. This is similar to an emulator - say for old gaming systems. It is emulating the "machine" that the game runs on.
The reason why you run into an OutOfMemoryException is because the Virtual Machine has hit the -Xmx limit - it has literally run out of memory.
As far as smaller programs go, they will often require a larger percentage of their memory for the VM. Also, Java has default starting -Xmx and -Xms values (I don't remember what they are right now) that it will always start with. The overhead of the VM and the libraries becomes much less noticeable when you start to build and run "real" applications.
The memory argument related to Qt and the like is true, but is not the whole story. While Java uses more memory than some of those, those are compiled for specific architectures. It has been a while since I have used Qt or similar libraries, but I remember the memory management not being very robust, and memory leaks are still common today in C/C++ programs. The nice thing about garbage collection is that it removes many of the common "gotchas" that cause memory leaks. (Note: not all of them. It is still very possible to leak memory in Java, just a bit harder.)
Hope this helps clear up some of the confusion you may have been having.
To answer a portion of your question;
Java at start-up allocates a "heap" of memory, or a fixed size block (the -Xms parameter). It doesn't actually use all this memory right off the bat, but it tells the OS "I want this much memory". Then as you create objects and do work in the Java environment, it puts the created objects into this heap of pre-allocated memory. If that block of memory gets full then it will request a little more memory from the OS, up until the "max heap size" (the -Xmx parameter) is reached.
Once that max size is reached, Java will no longer request more RAM from the OS, even if there is a lot free. If you try to create more objects, there is no heap space left, and you will get an OutOfMemory exception.
Now if you are looking at Windows Task Manager or something like that, you'll see "java.exe" using X megs of memory. That sort-of corresponds to the amount of memory that it has requested for the heap, not really the amount of memory inside the heap that's used.
In other words, I could write the application:
class myfirstjavaprog
{
    public static void main(String args[])
    {
        System.out.println("Hello World!");
    }
}
Which would basically take very little memory. But if I ran it with the cmd line:
java.exe -Xms1024M myfirstjavaprog
then on startup Java will immediately ask the OS for 1,024 MB of RAM, and that's what will show in Windows Task Manager. In actuality, that RAM isn't being used, but Java reserved it for later use.
Conversely, if I had an app that tried to create a 10,000-byte array:
class myfirstjavaprog
{
    public static void main(String args[])
    {
        byte[] myArray = new byte[10000];
    }
}
but ran it with the command line:
java.exe -Xms100 -Xmx100 myfirstjavaprog
Then Java could only allocate up to 100 bytes of memory. Since a 10,000-byte array won't fit into a 100-byte heap, that would throw an OutOfMemory exception, even though the OS has plenty of RAM. (In practice, current JVMs refuse to start with a heap this small, but the principle holds.)
I hope that makes sense...
Edit:
Going back to "why Java uses so much memory": why do you think it's using a lot of memory? If you are looking at what the OS reports, then that isn't what it's actually using; it's only what it has reserved for use. If you want to know what Java has actually used, you can do a heap dump, explore every object in the heap, and see how much memory it's using.
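On a HotSpot JDK, one way to take such a heap dump is the bundled jmap tool (the pid placeholder is whatever ps/Task Manager reports for your java process):

jmap -dump:live,format=b,file=heap.hprof <pid>

You can then open heap.hprof in a tool like VisualVM or Eclipse MAT and see what is really on the heap.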
To answer "why doesn't it just let the OS handle it?", well I guess that is just a fundamental Java question for those that designed it. The way I look at it; Java runs in the JVM, which is a virtual machine. If you create a VMWare instance or just about any other "virtualization" of a system, you usually have to specify how much memory that virtual system will/can consume. I consider the JVM to be similar. Also, this abstracted memory model lets the JVM's for different OSes all act in a similar way. So for example Linux and Windows have different RAM allocation models, but the JVM can abstract that away and follow the same memory usage for the different OSes.
Java does use malloc and free, or at least the implementations of the JVM may. But since Java tracks allocations and garbage collects unreachable objects, they are definitely not enough.
As for the rest of your text, I'm not sure if there's a question there.
Even some very small sample demo applications load huge amounts of memory. Maybe this is because of the Java library which is loaded. But why is it needed to load the library for each Java instance? (It seems that way because multiple small applications linearly take more memory. See here for some details where I describe this problem.) Or why is it done that way?
That's likely due to the overhead of starting and running the JVM.
Big Java applications like Eclipse often crash with some OutOfMemory exception. This was always strange because there was still plenty of memory available on my system. Often, they consume more and more memory over runtime. I'm not sure if they have some memory leaks or if this is because of fragmentation in the memory pool -- I got the feeling that the latter is the case.
I'm not entirely sure what you mean by "often crash," as I don't think this has happened to me in quite a long time. If it is, it's likely due to the "maximum size" setting you mentioned earlier.
Your main question asking why Java doesn't use malloc and free comes down to a matter of target market. Java was designed to eliminate the headache of memory management from the developer. Java's garbage collector does a reasonably good job of freeing up memory when it can be freed, but Java isn't meant to rival C++ in situations with memory restrictions. Java does what it was intended to do (remove developer level memory management) well, and the JVM picks up the responsibility well enough that it's good enough for most applications.
The limits are a deliberate design decision from Sun. I've seen at least two other JVMs which do not have this design - the Microsoft one and the IBM one for their non-PC AS/400 systems. Both grow as needed, using as much memory as needed.
Java doesn't use a fixed amount of memory; it always stays within the range from -Xms to -Xmx.
If Eclipse crashes with OutOfMemoryError, then it required more memory than granted by -Xmx (a configuration issue).
Java doesn't use malloc/free (for object creation) since its memory handling is quite different due to garbage collection (GC). The GC automatically removes unused objects, which is a benefit compared to being responsible for memory management yourself.
For details on this complex topic see Tuning Garbage Collection
We ship Java applications that are run on Linux, AIX and HP-Ux (PA-RISC). We seem to struggle to get acceptable levels of performance on HP-Ux from applications that work just fine in the other two environments. This is true of both execution time and memory consumption.
Although I'm yet to find a definitive article on "why", I believe that measuring memory consumption using "top" is a crude approach due to things like the shared code giving misleading results. However, it's about all we have to go on with a customer site where memory consumption on HP-Ux has become an issue. It only became an issue this time when we moved from Java 1.4 to Java 1.5 (on HP-Ux 11.23 PA-RISC). By "an issue", I mean that the machine ceased to create new processes because we had exhausted all 16GB of physical memory.
By measuring "before" and "after" total "free memory" we are trying to gauge how much has been consumed by a Java application. I wrote a quick app that stores 10,000 random 64 bit strings in an ArrayList and tried this approach to measuring consumption on Linux and HP-Ux under Java 1.4 and Java 1.5.
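The test was along these lines (a sketch; I'm assuming "64 bit strings" means a random 64-bit value rendered as a string, and the sleep just holds the data live while "free memory" is sampled):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class ConsumptionTest {
    public static void main(String[] args) throws Exception {
        Random rnd = new Random();
        List<String> strings = new ArrayList<>(10_000);
        for (int i = 0; i < 10_000; i++) {
            strings.add(Long.toBinaryString(rnd.nextLong()));
        }
        Thread.sleep(300_000);               // keep the list live while "top" is read
        System.out.println(strings.size());  // keep a reference so nothing is collected early
    }
}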
The results:
HP-UX, Java 1.4: ~60MB
HP-UX, Java 1.5: ~150MB
Linux, Java 1.4: ~24MB
Linux, Java 1.5: ~16MB
Can anyone explain why these results might arise? Is this some idiosyncrasy of the way "top" measures free memory? Does Java 1.5 on HP really consume 2.5 times more memory than Java 1.4?
Thanks.
The JVMs might just have different default parameters. The heap will grow to the size that you have configured it to; the default on the Sun VM is a certain percentage of the RAM in the machine - which is to say that Java will, by default, use more memory on a machine that has more memory.
I'd be really surprised if the HP-UX VM hadn't had lots of tuning for this sort of thing by HP. I'd suggest you fiddle with the parameters on both - figure out the smallest max heap size you can use without hurting performance or throughput.
I don't have an HP box right now to test my hypothesis, but if I were you, I would use a profiler like JConsole (which comes with the JDK) or YourKit to measure what is happening.
That said, it appears that you started measuring only after you saw something amiss; so I'm not discounting that it's happening - just pointing out what I'd have done in the same situation.
First, it's not clear what you measured with your "10,000 random 64 bit strings" test. You're supposed to start the application, measure its bootstrap memory footprint, and then run your test. It could easily be that Java 1.5 acquires more heap right after startup (due to heap manager settings, for instance).
Second, we do run Java apps under 1.4, 1.5 and 1.6 under HP-UX, and they don't demonstrate any special memory requirements. We have Itanium hardware, though.
Third, why do you use top? Why not just print Runtime.getRuntime().totalMemory()?
Fourth, by adding values to an ArrayList you create memory fragmentation. ArrayList has to double its internal storage now and then. Depending on GC settings and the ArrayList.ensureCapacity() implementation, the amount of non-collected memory may differ dramatically between 1.4 and 1.5.
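To sidestep that doubling in a test like this, you can presize the list (the capacity figures are just examples):

import java.util.ArrayList;

class PresizeDemo {
    public static void main(String[] args) {
        // one backing array up front instead of repeated grow-and-copy
        ArrayList<String> values = new ArrayList<>(10_000);
        values.ensureCapacity(20_000);  // grow once if you know more is coming
        System.out.println("capacity reserved without regrowth churn");
    }
}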
Essentially, instead of figuring out the cause of the problem, you have run a random test that gives you no useful information. You should run a profiler on the application to figure out where the memory goes.
You might also want to look at the problem you are trying to solve... I don't imagine there are many problems that eat 16GB of memory that aren't due for a good round of optimization.
Are you launching multiple VMs? Are you reading large datasets into memory, and not discarding them quickly enough? etc etc etc.