I have to run a couple of Java services on my machine to set up a certain dev environment (and get my non-Java-related work done):
java -Xmx400m -jar foo-app/target/foo-app-SNAPSHOT.jar
java -Xmx250m -jar bar-app/target/bar-app-SNAPSHOT.jar
...
To not run out of memory, I need to limit the memory usage. The default (512m, afaik) is too high for my machine, so I lowered the values somewhat (on a wild-guess basis). Except for one, where I learned the hard way (crashes, even freezes, and thankfully some .pid error files left behind in the project folder...) that I'd better settle a little higher:
java -Xmx800m -jar doo-app/target/doo-app-SNAPSHOT.jar
Question: is there a way to track the memory usage of a certain app over time?
Via some Java command line parameter, or even with ps -ae, htop or similar? (That is, without fiddling in the source itself, remapping garbage collectors, etc., etc.)
I see plenty of numbers, but figuring out which ones belong to which running Java project, and what could roughly indicate a proper peak memory consumption (in a -Xmx___m sense)... I have no idea.
I work under Ubuntu-MATE 16.04, x64.
The best way to analyze memory consumption is a profiler. Your JDK ships with the jvisualvm profiler, which is absolutely sufficient for this task. A (lengthy) tutorial can be found here: https://engineering.talkdesk.com/ninjas-guide-to-getting-started-with-visualvm-f8bff061f7e7
Other approaches are basically shotgun-style: reduce the -Xmx, then generate load in the system and see if it runs out of memory. If you do NOT have a straight control flow, you have no way to predict the used memory.
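If you just want numbers over time without touching the source, the stock JDK tools should be enough; a rough sketch (the PID placeholder, sampling interval and log file name are illustrative, and the GC flags assume the HotSpot JDK 8 that ships with Ubuntu 16.04):
jps -l                                   # map PIDs to the jar / main class of each running java process
jstat -gc <pid> 10s                      # heap and GC columns for that PID, sampled every 10 seconds
ps -C java -o pid,rss,vsz,args           # OS view: resident and virtual size, with the full command line to tell the jars apart
java -Xmx400m -verbose:gc -XX:+PrintGCDetails -Xloggc:foo-gc.log -jar foo-app/target/foo-app-SNAPSHOT.jar
The post-GC heap occupancy in such a GC log, taken over a realistic work session, is a reasonable basis for picking an -Xmx value with some headroom.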
Related
I'm investigating some memory bloat in a Java project. Confounded by the different statistics reported by different tools (we are using Java 8 on Solaris 10).
jconsole gives me three numbers:
Committed: the amount reserved for this process by the OS
Used: the amount actually being used by this process
Max: the amount available to the process (in our case it is limited to 128MB via Java command line option -Xmx128m).
For my project, jconsole reports 119.5MB max, 61.9MB committed, 35.5MB used.
The OS tools report something totally different:
ps -o vsz,rss and prstat -s rss and pmap -x all report that this process is using around 310MB virtual, 260MB physical
So my questions are:
Why does the OS report that I'm using around 5x as much as jconsole says is "committed" to my process?
Which of these measurements is actually accurate? By "accurate" I mean: if I have 12GB of memory, can I run 40 of these (@ 300MB) before I hit an OutOfMemoryError, or can I run 200 of them (@ 60MB)? (Yes, I know I can't use all 12GB of memory, and yes, I understand that virtual memory exists; I'm just using those numbers to illuminate the question better.)
This question goes quite deep. I'm just going to mention 3 of the many reasons:
VMs
Shared libraries
Stacks and permgen
VMs
Java is like a virtual mini computer. Imagine you ran an emulator on your computer that emulates an old Macintosh computer, for example. The emulator app has a config screen where you set how much RAM is in the virtual computer. If you pick 1GB and start the emulator, your OS is going to say the 'Old Mac Emulator' application is taking 1GB, even though inside the virtual machine, that virtual old Mac might be reporting 800MB of 1GB free.
A JVM is the same thing. The JVM has its own memory management. As far as the OS is concerned, java.exe is an app that takes 1GB. As far as the JVM is concerned, there's 400MB available on the heap right now.
A JVM is slightly more convoluted, in that the total amount of memory a JVM 'claims' from the OS can fluctuate. Out of the box, a JVM will generally not ask for the maximum right away, but will ask for more over time, kicking in the garbage collector first: Heap full? Garbage collect. That only freed up maybe 20% or so? Ask the OS for more. -Xms and -Xmx control this; set them to the same value, and the JVM will ask for that much memory on bootup and will never ask for more. In general a JVM will never relinquish any memory it claimed.
JVMs, still, are primarily aimed at server deployments, where you want the RAM dedicated to your VM to be constant. There's generally little point in having each app take whatever it wants whenever it wants it. That's in contrast to desktop apps, where you tend to have a ton of apps running and, given that a human is 'operating' the machine, generally only one app has particularly significant RAM requirements at a time.
This explains jconsole, which is akin to reporting the free memory inside the virtual old mac app: It's reporting on the state of the heap as the JVM sees it.
Whereas ps -o and friends are memory introspection tools at the OS level, and they just see the JVM as a big black box.
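For illustration, the figures jconsole reports (the JVM's own view of its heap) can also be read from inside the process via the standard MemoryMXBean; a minimal sketch (the class name HeapReport is just for this example):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

class HeapReport {
    public static void main(String[] args) {
        // Heap as the JVM sees it: the used/committed/max triple jconsole shows.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("used:      " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("committed: " + heap.getCommitted() / (1024 * 1024) + " MB");
        System.out.println("max:       " + heap.getMax() / (1024 * 1024) + " MB");
    }
}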
Which one is actually accurate
They both are. From their perspective, they are correct.
Shared libraries
OSes are highly complex beasts these days. To put things in Java terms, you can have a single JVM that is concurrently handling 100 simultaneous incoming https calls. One could want to see a breakdown of how much memory each of the currently 100 running 'handlers' is taking up. Okay... so how do we 'file' the memory load of String, the class itself (not any particular instance of String, but the code, e.g. the instructions for how .toLowerCase() runs; those are in memory too, someplace!)? The web framework needs it, so does the core JVM, and so does probably every single last one of those 100 concurrent handlers. So how do we 'bookkeep' this?
In other words, the memory load on an entire system cannot be strictly divided up as 'that memory is 100% part of that app, and this memory is 100% part of this app'. Shared libraries make that difficult.
The JVM is technically capable of rendering UIs, processing images, opening files both using the synchronous as well as the asynchronous API, and even the random access API if your OS offers a separate access library for it, sending network requests in async mode, in sync mode, and more. In effect, a JVM will immediately tell the OS: I can do allllll these things.
In my experience/recollection, most OSes report the total memory load of a single application as the sum of the memory it needs itself plus, in full, all the memory of any (shared) library it loads.
That means ps and friends overreport JVMs considerably: The JVM loads in a ton of libraries. This doesn't actually cost RAM (The OS also loaded these libraries, the JVM doesn't use any large DLLs/.SO/.JNILIB files of its own, just hooks up the ones the OS provides, pretty much all of them), but is often 'bookkept' as such. You know this is happening if this trivial app:
class Test {
    public static void main(String[] args) throws Exception {
        System.out.println("Hello!");
        Thread.sleep(100000L);
    }
}
already takes around 60MB or more.
I mean, if I have 12GB of memory, can I run 40 of these (@ 300MB)
That shared library stuff means each VM's memory load according to ps and friends are over-inflated by however much the shared libraries 'cost', because each JVM is going to share that library - the OS only loads it once, not 40 times.
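One way to see this on Linux is the per-mapping breakdown; the .so entries below count against every Java process's total even though the OS keeps only one physical copy of each (the PID placeholder is illustrative):
pmap -x <pid>        # per-mapping Kbytes/RSS; lines ending in .so are shared libraries counted against each process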
Stacks and permgen
The 'heap', which is where newly created objects go, is the largest chunk of any JVM's memory load. It's also generally the only one that JVM introspection tools like jconsole show you. However, it's not the only memory a JVM needs. There's a small slice it needs for its core self (the 'C code', so to speak). Each active thread has a stack, and each stack also needs memory: by default, whatever you pass to -Xss, times the number of concurrent threads. But that's not a certainty: you can construct a new thread with an alternate stack size (check the constructors of j.l.Thread). There used to be 'permgen', which is where class code lived. Modern JVM versions got rid of it; in general, newer JVM versions try to do more and more on the heap instead of in magic, hard-to-introspect things like permgen.
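As a small sketch of the j.l.Thread constructor mentioned above (the stack size argument is a request in bytes that the JVM may adjust or ignore; the class and thread names are just for this example):
class StackSizeDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("running on a thread with a custom stack size");
        // Thread(ThreadGroup group, Runnable target, String name, long stackSize); stackSize is a hint, in bytes.
        Thread t = new Thread(null, task, "small-stack-thread", 256 * 1024);
        t.start();
    }
}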
I mean, if I have 12GB of memory, can I run 40 of these (@ 300MB) before I hit an OutOfMemoryError?
Run all 40 at once, and always specify both -Xms and -Xmx, setting them to equal sizes. Assuming all those 40 JVMs are relatively stable in terms of how many concurrent threads they ever run, if you're ever going to run into memory issues, it'll happen immediately: with -Xms and -Xmx equal you've removed the dynamism from this situation, and all JVMs pretty much instantly claim all the memory they will ever claim, so it either 'works' or it won't. (Stacks mess with the cleanliness of this somewhat, hence the caveat of stable-ish thread counts.)
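For example (jar name and all sizes purely illustrative):
java -Xms300m -Xmx300m -Xss512k -jar foo-app/target/foo-app-SNAPSHOT.jar
With -Xms equal to -Xmx, each instance claims its full heap up front, so either all 40 start or you find out immediately, rather than degrading later.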
I've been tinkering with GlassFish 2.1.1 lately, on both an Ubuntu Linux box as well as a Windows XP one.
Looking at the "java" processes representing asadmin, JavaDB server, and the GlassFish app server domain itself on Windows (using the Task Manager), they add up to just over 100 MB of memory.
However, looking at the same processes on the Linux box (using "ps aux" and the Gnome System Monitor) show memory usage in the ballpark of 800 MB.
This seems extremely odd to me. If anything, I would have assumed memory usage to be less favorable on Windows. Either way, I wouldn't have expected the swing between the two to be THAT dramatic. Is there something fundamental that I'm missing here? I don't necessarily need detailed profiling information; I just need a roughly accurate figure for total memory use (real world) on the two platforms.
Because you're measuring it differently.
It is notoriously difficult to measure memory usage on systems which support virtual memory and shared memory; both Linux and Windows fall into this category.
Basically it comes down to:
Do you count pages which are allocated but not mapped in just now?
Do you count potentially shared pages? (e.g. those from mapped files / executables / libraries etc)
The answers aren't so trivial.
Linux provides two "easy" memory measurements, RSS and VM size, neither of which exactly represents what people typically think they mean when they say "how much memory is it using". What programmers think they mean often falls somewhere in between RSS and VM size.
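For example, on Linux you can see both numbers side by side for a given process (the PID placeholder is illustrative):
ps -o pid,vsz,rss,comm -p <pid>              # VSZ = virtual size, RSS = resident set size, both in KB
grep -E 'VmSize|VmRSS' /proc/<pid>/status    # the same two figures from the kernel's accounting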
Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS CentOS 5.x (64bit)
VPS server with 384M of memory
JAVA_OPTS: tried many combinations, including the following
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I have created a blank Grails application, i.e. simply by running the command grails create-app, and then WARed it
I am running Tomcat on a VPS Server
When I simply start the Tomcat server, with no apps deployed, the free memory is about 236M and the used memory is about 156M
When I deploy my "blank" application, the memory consumption spikes to 360M and finally the Tomcat instance is killed as soon as it takes up all free memory
As you have seen, my app is as light as it can be.
Not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed down to this scenario which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine
The memory consumption of the Java process shot from 32M to 107M, but it did not crash and it remained within acceptable limits.
So the hunt for answer continues... I wonder if something is wrong about my Linux box. Not sure what though...
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple-blank app should work on my configuration.
It is a false economy to try to run a long-running Java-based application in the minimum possible memory. The garbage collector, and hence the application, will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out from a memory profiler, or by examining the GC logging (see the sketch after this list).
Then set the Java heap size to be (say) twice the measured working set size or more. (This is the point I was trying to make above.)
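For step 2, GC logging can be switched on through JAVA_OPTS roughly like this (HotSpot flags from the Java 5/6 era to match the setup above; heap sizes and the log path are illustrative):
export JAVA_OPTS='-Xms256M -Xmx256M -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -Xloggc:/tmp/tomcat-gc.log'
The heap occupancy after full GCs in that log is a decent estimate of the working set to double.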
Actually, there is another possible cause for your problems. Even though you are telling Java to use heaps of a given size, it may be that it is unable to do this. When the JVM requests memory from the OS, there are a couple of situations where the OS will refuse.
If the machine (real or virtual) that you are running the OS on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap and permgen numbers together. There is the memory used by the executable and DLLs, memory used for I/O buffers, and possibly other stuff.
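As an aside, on newer JVMs (Java 8 and later, so not the Java 6 setup above) Native Memory Tracking can show that non-heap usage directly; a sketch (jar name and PID placeholder are illustrative):
java -XX:NativeMemoryTracking=summary -Xmx128M -jar myapp.jar
jcmd <pid> VM.native_memory summary      # breaks usage down into heap, thread stacks, code cache, GC, internal, ...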
384MB is pretty small. I'm running a small Grails app in a 512MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200MB. I'm running a 32-bit Linux and JDK though; there's no sense wasting all that memory on 64-bit pointers if you don't have access to much memory anyway.
Can you try deploying a Tomcat monitoring webapp, e.g. psiprobe, and see where the memory is being used?
We ship Java applications that are run on Linux, AIX and HP-Ux (PA-RISC). We seem to struggle to get acceptable levels of performance on HP-Ux from applications that work just fine in the other two environments. This is true of both execution time and memory consumption.
Although I'm yet to find a definitive article on "why", I believe that measuring memory consumption using "top" is a crude approach due to things like the shared code giving misleading results. However, it's about all we have to go on with a customer site where memory consumption on HP-Ux has become an issue. It only became an issue this time when we moved from Java 1.4 to Java 1.5 (on HP-Ux 11.23 PA-RISC). By "an issue", I mean that the machine ceased to create new processes because we had exhausted all 16GB of physical memory.
By measuring "before" and "after" total "free memory" we are trying to gauge how much has been consumed by a Java application. I wrote a quick app that stores 10,000 random 64 bit strings in an ArrayList and tried this approach to measuring consumption on Linux and HP-Ux under Java 1.4 and Java 1.5.
The results:
HP Java 1.4 ~60MB
HP Java 1.5 ~150MB
Linux Java 1.4 ~24MB
Linux Java 1.5 ~16MB
Can anyone explain why these results might arise? Is this some idiosyncrasy of the way "top" measures free memory? Does Java 1.5 on HP really consume 2.5 times more memory than Java 1.4?
Thanks.
The JVMs might just have different default parameters. The heap will grow to the size that you have configured to let it. The default on the Sun VM is a certain percentage of the RAM in the machine - that's to say that Java will, by default, use more memory if you use a machine with more memory on it.
I'd be really surprised if the HP-UX VM hadn't had lots of tuning for this sort of thing by HP. I'd suggest you fiddle with the parameters on both and figure out the smallest max heap size you can use without hurting performance or throughput.
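On reasonably modern HotSpot JVMs you can inspect the effective defaults directly, which makes comparing two installations easier (the -XX:+PrintFlagsFinal flag appeared around Java 6u21, so it may not exist on the 1.4/1.5 JVMs in question):
java -XX:+PrintFlagsFinal -version | grep -iE 'maxheapsize|initialheapsize'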
I don't have an HP box right now to test my hypothesis. However, if I were you, I would use a profiler like JConsole (comes with the JDK) or YourKit to measure what is happening.
That said, it appears that you started measuring after you saw something amiss, so I'm NOT discounting that it's happening -- just pointing you at something I'd have done in the same situation.
First, it's not clear what you measured with the "10,000 random 64 bit strings" test. You're supposed to start the application, measure its bootstrap memory footprint, and then run your test. It could easily be that Java 1.5 acquires more heap right after start (due to heap manager settings, for instance).
Second, we do run Java apps under 1.4, 1.5 and 1.6 under HP-UX, and they don't demonstrate any special memory requirements. We have Itanium hardware, though.
Third, why do you use top? Why not just print Runtime.getRuntime().totalMemory()?
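For example, a few lines like these, dropped into the test app before and after the ArrayList is filled, would show the heap from the JVM's point of view (the MB rounding is just for readability):
Runtime rt = Runtime.getRuntime();
System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
System.out.println("used heap:  " + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024) + " MB");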
Fourth, by adding values to an ArrayList you create memory fragmentation. An ArrayList has to grow its internal storage now and then. Depending on the GC settings and the ArrayList.ensureCapacity() implementation, the amount of non-collected memory may differ dramatically between 1.4 and 1.5.
Essentially, instead of figuring out the cause of the problem, you have run a random test that gives you no useful information. You should run a profiler on the application to figure out where the memory is leaking.
You might also want to look at the problem you are trying to solve... I don't imagine there are many problems that eat 16GB of memory that aren't due for a good round of optimization.
Are you launching multiple VMs? Are you reading large datasets into memory, and not discarding them quickly enough? etc etc etc.
We have a java program that requires a large amount of heap space - we start it with (among other command line arguments) the argument -Xmx1500m, which specifies a maximum heap space of 1500 MB. When starting this program on a Windows XP box that has been freshly rebooted, it will start and run without issues. But if the program has run several times, the computer has been up for a while, etc., when it tries to start I get this error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I suspect that Windows itself is suffering from memory fragmentation, but I don't know how to confirm this suspicion. At the time that this happens, Task manager and sysinternals procexp report 2000MB free memory. I have looked at this question related to internal fragmentation
So the first question is, How do I confirm my suspicion?
The second question is, if my suspicions are correct, does anyone know of any tools to solve this problem? I've looked around quite a bit, but I haven't found anything that helps, other than periodic reboots of the machine.
PS: changing operating systems is also not currently a viable option.
Agree with Torlack: a lot of this is because other DLLs are getting loaded into certain spots, breaking up the amount of memory you can get for the VM in one big chunk.
You can do some work on WinXP if you have more than 3G of memory to get some of the Windows stuff moved around; look up PAE here:
http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx
Your best bet, if you really need more than 1.2G of memory for your Java app, is to look at 64-bit Windows or Linux or OS X. If you're using any kind of native libraries with your app you'll have to recompile them for 64 bit, but it's going to be a lot easier than trying to rebase DLLs and such to maximize the memory you can get on 32-bit Windows.
Another option would be to split your program up into multiple VMs and have them communicate with each other via RMI or messaging or something. That way each VM can have some subset of the memory you need. Without knowing what your app does, I'm not sure that this will help in any way, though...
Unless you are running out of page file space, this issue isn't that the computer is running out of memory. The whole point of virtual memory is to allow the processes to use more virtual memory than is physically available.
Not knowing how the JVM handles the heap, it is a bit hard to say exactly what the problem is, but one of the common issues is that there isn't enough contiguous free address space available in your process to allow the heap to be extended. Why this would be a problem after the machine has been running a while is a bit confusing.
I've been working on a similar problem at work. I have found that running the program under WinDBG and using the "!address" and "!address -summary" commands has been invaluable in tracking down why a process's virtual address space has become fragmented. You can also try running the program after a reboot and using the "!address" command to take a picture of the address space, and then do the same when the program no longer runs. This might clue you in on the problem. Maybe something as simple as an extra DLL getting loaded is causing the problem.
I suspect that the problem is Windows memory fragmentation. There is another question here on StackOverflow called Java Maximum Memory on Windows XP that mentions using Process Explorer to look at where DLLs are mapped into memory, and then addressing the problem by rebasing the DLLs so that they load into memory in a more compact way.
Using Minimem (http://minimem.kerkia.net/) for that application might fix your problem. However, I'm not sure this is the answer you are looking for. I hope it helps.
Maybe you should consider starting the program once, reserving the memory, and not ending the VM after each run. Look into different GC options and release your objects.
Use vmmap from Microsoft's Sysinternals tools to view the fragmentation of the virtual address space, and identify what's breaking up the space.