Does Java 6 consume more memory than you expect for largish applications?
I have an application I have been developing for years, which has, until now, taken about 30-40 MB in my particular test configuration; now with Java 6u10 and 11 it is taking several hundred while active. It bounces around a lot, anywhere between 50M and 200M, and when it idles, it does GC and drops the memory right down. In addition it generates millions of page faults. All of this is observed via Windows Task Manager.
So, I ran it up under my profiler (jProfiler) and using jVisualVM, and both of them indicate the usual moderate heap and perm-gen usages of around 30M combined, even when fully active doing my load-test cycle.
So I am mystified! And it is not just requesting more memory from the Windows Virtual Memory pool - this is showing up as 200M "Mem Usage".
CLARIFICATION: I want to be perfectly clear on this - observed over an 18 hour period with Java VisualVM the class heap and perm gen heap have been perfectly stable. The allocated volatile heap (eden and tenured) sits unmoved at 16MB (which it reaches in the first few minutes), and the use of this memory fluctuates in a perfect pattern of growing evenly from 8MB to 16MB, at which point GC kicks in and drops it back to 8MB. Over this 18 hour period, the system was under constant maximum load since I was running a stress test. This behavior is perfectly and consistently reproducible, seen over numerous runs. The only anomaly is that while this is going on the memory taken from Windows, observed via Task Manager, fluctuates all over the place from 64MB up to 900+MB.
UPDATE 2008-12-18: I have run the program with -Xms16M -Xmx16M without any apparent adverse effect - performance is fine, total run time is about the same. But memory use in a short run still peaked at about 180M.
Update 2009-01-21: It seems the answer may be in the number of threads - see my answer below.
EDIT: And I mean millions of page faults literally - in the region of 30M+.
EDIT: I have a 4G machine, so the 200M is not significant in that regard.
In response to a discussion in the comments to Ran's answer, here's a test case that proves that the JVM will release memory back to the OS under certain circumstances:
public class FreeTest
{
    public static void main(String[] args) throws Exception
    {
        byte[][] blob = new byte[60][1024*1024];
        for(int i=0; i<blob.length; i++)
        {
            Thread.sleep(500);
            System.out.println("freeing block "+i);
            blob[i] = null;
            System.gc();
        }
    }
}
I see the JVM process' size decrease when the count reaches around 40, on both Java 1.4 and Java 6 JVMs (from Sun).
You can even tune the exact behaviour with the -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio options -- some of the options on that page may also help with answering the original question.
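For example (the ratio values here are purely illustrative), the test above can be run with tighter free-ratio bounds to encourage the JVM to return unused heap to the OS more aggressively:

    java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 FreeTest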
I don't know about the page faults, but regarding the huge amount of memory allocated for Java:

1. Sun's JVM deallocates memory only after a specific ratio between internal memory needs and allocated memory drops beneath a (tunable) value. The JVM starts with the amount specified in -Xms and can grow up to the amount specified in -Xmx. I'm not sure what the defaults are. Whenever the JVM needs more memory (new objects / primitives / arrays) it allocates an entire chunk from the OS. However, when the need subsides (a momentary need, see 2 as well) it doesn't deallocate the memory back to the OS immediately, but keeps it to itself until that ratio has been reached. I was once told that JRockit behaves better, but I can't verify it.

2. Sun's JVM runs a full GC based on several triggers. One of them is the amount of available memory - when it falls too low the JVM tries to perform a full GC to free some more. So, when more memory is allocated from the OS (momentary need) the chance of a full GC is lowered. This means that while you may see 30Mb of "live" objects, there might be a lot more "dead" (unreachable) objects, just waiting for a GC to happen. I know YourKit has a great view called "dead objects" where you can see these left-overs.

3. In "-server" mode, Sun's JVM runs GC in parallel mode (as opposed to the older serial "stop the world" GC). This means that while there may be garbage to collect, it might not be collected immediately because other threads are taking all available CPU time. It will be collected before running out of memory (well, kinda - see http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html), but if more memory can be allocated from the OS, it might be allocated before the GC runs.
Combined, a large initial memory configuration and short bursts creating a lot of short-lived objects might create a scenario as described.
edit: changed "never deallocates" to "only after ratio reached".
Excessive thread creation explains your problem perfectly:
Each Thread gets its own stack, which is separate from heap memory and therefore not registered by profilers
The default thread stack size is quite large, IIRC 256KB (at least it was for Java 1.3)
Thread stack memory is probably not reused, so if you create and destroy lots of threads, you'll get lots of page faults
If you ever really need to have hundreds of threads around, the thread stack size can be configured via the -Xss command line parameter; a rough sketch of the arithmetic is below.
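As a back-of-the-envelope illustration (the 256KB figure is just the assumption from above; the real default depends on JVM version, OS, and -Xss), here is a small sketch of how thread stacks add up to memory that never shows up on the heap:

    public class ThreadStackEstimate {
        public static void main(String[] args) {
            int threads = 200;              // e.g. a couple of hundred surplus threads
            long stackBytes = 256L * 1024;  // assumed per-thread stack size (tune with -Xss)
            long totalBytes = threads * stackBytes;
            System.out.println(threads + " threads x " + (stackBytes / 1024)
                    + " KB stack = ~" + (totalBytes / (1024 * 1024))
                    + " MB of native memory outside the Java heap");
        }
    }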
Garbage collection is a rather arcane science. As the state of the art develops, un-tuned behaviour will change in response.
Java 6 has different default GC behaviour and different "ergonomics" to earlier JVM versions. If you tell it that it can use more memory (either explicitly on the command line, or implicitly by failing to specify anything more explicit), it will use more memory if it believes that this is likely to improve performance.
In this case, Java 6 appears to believe that reserving the extra space which the heap could grow into will give it better performance - presumably because it believes that this will cause more objects to die in Eden space, and limit the number of objects promoted to the tenured generation space. And from the specifications of your hardware, the JVM doesn't think that this extra reserved heap space will cause any problems. Note that many (though not all) of the assumptions the JVM makes in reaching its conclusion are based on "typical" applications, rather than your specific application. It also makes assumptions based on your hardware and OS profile.
If the JVM has made the wrong assumptions, you can influence its behaviour through the command line, though it is easy to get things wrong...
Information about performance changes in Java 6 can be found here.
There is a discussion about memory management and performance implications in the Memory Management White Paper.
Over the last few weeks I had cause to investigate and correct a problem with a thread pooling object (a pre-Java 6 multi-threaded execution pool), where it was launching far more threads than required. In the jobs in question there could be up to 200 unnecessary threads. And the threads were continually dying and new ones replacing them.
Having corrected that problem, I thought to run a test again, and now it seems the memory consumption is stable (though 20 or so MB higher than with older JVMs).
So my conclusion is that the spikes in memory were related to the number of threads running (several hundred). Unfortunately I don't have time to experiment.
If someone would like to experiment and answer this with their conclusions, I will accept that answer; otherwise I will accept this one (after the 2 day waiting period).
Also, the page fault rate is way down (by a factor of 10).
Also, the fixes to the thread pool corrected some contention issues.
Lots of memory allocated outside Java's heap after upgrading to Java 6u10? Can only be one thing:
Java6 u10 Release Notes: "New Direct3D Accelerated Rendering Pipeline (...) Enabled by Default"
Sun enabled Direct3D acceleration by default in Java 6u10. This option creates lots of (temporary?) native memory buffers, which are allocated outside the Java heap. Add the following VM argument to disable it again:
-Dsun.java2d.d3d=false
Note that this will NOT disable 2D hardware acceleration, just some features that can make use of 3D hardware acceleration. You will see that your Java heap usage will increase by up to 7MB, but that's a good trade-off because you'll save ~100MB(+) of this temporary volatile memory.
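For example, if the application is launched from the command line (the jar name is just a placeholder):

    java -Dsun.java2d.d3d=false -jar YourApp.jar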
I did a fair amount of testing with two Swing desktop applications, on two platforms:
a high-end Intel i7 with an nVidia GTX 260 graphics card,
a 3-year-old laptop with Intel graphics.
On both hardware platforms the option made practically zero subjective difference. (Tests included: scrolling tables, zooming graphical flowsheets, charts, etc.). On the few tests where something was subtly different, disabling d3d counter-intuitively increased performance. I suspect that memory management/bandwidth problems counteracted whatever benefits the d3d accelerated functions were supposed to achieve. (Your mileage may vary!)
If you need to do some performance tuning, here's an excellent reference (e.g. "Troubleshooting Java 2D")
Are you using the ConcMarkSweep collector? It can increase the amount of memory required for your application due to increased memory fragmentation, and "floating garbage" - objects that become unreachable only after the collector has examined them, and therefore are not collected until the next pass.
Related
Hello, I have a program with a 150GB heap using an in-memory data grid. I have a crazy requirement from the operational department to use a single machine. Now we all know what happens if the parallel garbage collector is used over 150GB: a Full GC will probably mean tens of minutes of garbage collection.
My hope was that the Shenandoah low-pause GC was coming with Java 9. Unfortunately, from what I see it is not listed for delivery in Java 9. Does anyone know anything about that?
Nevertheless, I am wondering how the G1 GC will perform for this amount of heap memory.
And one last question. I have a non-interactive batch application that is supposed to complete in, let's say, 2 hours. The main goal here is to ensure that a Full GC never kicks in. If I ensure that there is plenty of memory - let's say the maximum heap that can be reached is 150GB and I allocate 250GB - can I say with good confidence that a Full GC will never kick in? Usually a Full GC is triggered if the new generation plus the old generation touches the maximum heap. Can it be triggered in a different way?
A duplicate request was made, so I will try to explain here why this question is not a duplicate. First, we are talking about a 150GB heap, which adds a completely different dimension to the question. Second, I don't use RMI as mentioned in that question. Third, I am asking a question about the G1 garbage collector between the lines. Also, once we go beyond the 32GB heap barrier we are in the 64-bit address space; you cannot convince me that a question about a <32GB heap is the same as a question about a >32GB heap. Not to mention that things have changed a bit since Java 7 - for instance, PermSpace does not exist anymore.
The rule of thumb for a compacting GC is that it should be able to process 1 GB of live objects per core per second.
Example on a Haswell i7 (4 cores/8 threads) and a 20GB heap with the parallel collector:
[24.757s][info][gc,heap ] GC(109) PSYoungGen: 129280K->0K(917504K)
[24.757s][info][gc,heap ] GC(109) ParOldGen: 19471666K->7812244K(19922944K)
[24.757s][info][gc ] GC(109) Pause Full (Ergonomics) 19141M->7629M(20352M) (23.791s, 24.757s) 966.174ms
[24.757s][info][gc,cpu ] GC(109) User=6.41s Sys=0.02s Real=0.97s
The live set after compacting is 7.6GB. It takes 6.4 seconds worth of cpu-time, due to parallelism this translates to <1s pause time.
In principle the parallel collector should be able to handle a 150GB heap with full GC times < ~2 minutes on a multi-core system, even when most of the heap consists of live objects.
Of course this is just a rule of thumb. Some things that can affect it negatively:
paging
thermal CPU throttling
workloads consisting of very large, reference-heavy objects
non-local memory traffic in NUMA configurations
other processes competing for CPU time
heavy use of weak/soft references
In some cases tuning may be necessary to achieve this throughput.
If the Parallel collector does not work despite all that then CMS and G1 can be viable alternatives but only if there is enough spare heap capacity and CPU cores available to the JVM. They need significant breathing room to do their concurrent work without risking a full GC.
It is correct that I said non-interactive, but I still have a strict license agreement. I need to be finished with the whole processing in an hour, so I cannot afford a 30-minute stop-the-world event.
Basically, you don't really need low pause times in the sense that CMS, G1, Shenandoah or Zing aim for (they aim for <100ms or even <10ms even on large heaps).
All you need is that STW pauses are not so catastrophically bad that they eat a significant portion of your compute time.
This should be feasible with most of the available collectors, ignoring the serial one.
In practice there are some pathological edge cases where they may fall down, but to get to that point you need to set up a system with your actual workload and do some test runs. If you experience real problems, then you can ask a question with more details.
My simple goal: monitor the memory usage of a Java application so I can be warned when the application is getting dangerously close to throwing an OutOfMemoryError.
Yes, simple to state, but coming up with a correct solution seems very complicated. Some of the complicating factors are:
There are different heap regions, each of which can throw an OutOfMemoryError:
The permgen space, which has its own size limit (set via -XX:MaxPermSize=)
The overall heap space (set via -Xmx)
The VM may allocate almost all of the heap before bothering to garbage collect. If the application uses a lot of soft references, then in fact this will surely happen. So just a high heap allocation percentage does not imply the application is near to throwing an OutOfMemoryError.
It would be nice if System.gc() guaranteed that the VM would reclaim all possibly reclaimable objects (unreferenced and/or weakly referenced objects), but it doesn't. So invoking System.gc() and then Runtime.freeMemory() is not reliable.
Objects that are queued for finalization take up memory, but (usually) are freed after they are finalized. So whether the finalizer thread has gotten to them or not affects the (apparent) memory usage. (Does the VM run the finalizer as a last desperate act before throwing OOM? Doesn't look like it.)
Native code takes up memory as well and too much usage of it can lead to OOM (this is not a likely case in my specific application, but does add another complication to the overall picture).
So what is a good and reliable way to answer the question: is my Java application getting close to throwing an OutOfMemoryError?
Put another way, suppose application version X runs fine and has no memory leak, but version X + 1 has a slow unrecognized memory leak. I'd like to be alerted by this monitoring before version X + 1 throws an OutOfMemoryError, but I'd like the exact same monitoring to not give false positives for version X. There may be some tuning required in setting up this monitoring - that's OK.
One possible answer might be something like: what is the maximum, over the past N "full" GC runs, of the heap utilization immediately after the GC run? If this value exceeds X% of the total allocated memory, then sound the alarms.
The idea is to express "application memory usage" as a simple number like a percentage, or even something like LOW, MEDIUM, or HIGH, and then monitor this value.
The jstat command gives lots of relevant information, the problem is boiling it down to a simple answer and avoiding false positives (or negatives) caused by the complicating factors listed above.
If you watch a memory graph of a long-running application (collected with a tool like jconsole, for example) you'll see a characteristic sawtooth pattern: memory usage climbs, then is GC'd back to a baseline, and then it climbs again. For a healthy app, the peaks and valleys are in two straight horizontal lines. For a leaking app, though, the baseline climbs. That's really what you need to watch for: if each successive GC is less effective than the last, then something is rotten in Denmark.
Search the Oracle docs page for the term Detecting Low Memory and Threshold Notifications -- you may be able to devise some alert system based upon built-in MXBeans. Garbage collection appears to be a focus of at least some of the metrics collection.
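As a rough sketch of that approach (not a drop-in solution; the 80% threshold and the logging are arbitrary choices), the platform MemoryPoolMXBeans can notify you when heap usage measured right after a collection stays above a threshold, which avoids false alarms from the normal sawtooth growth:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryNotificationInfo;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;
    import javax.management.Notification;
    import javax.management.NotificationEmitter;
    import javax.management.NotificationListener;

    public class LowMemoryWatcher {

        // Arms a "usage after collection" threshold on every heap pool that supports it.
        public static void install(double fraction) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP && pool.isCollectionUsageThresholdSupported()) {
                    long max = pool.getUsage().getMax();
                    if (max > 0) {
                        pool.setCollectionUsageThreshold((long) (max * fraction));
                    }
                }
            }
            // The platform MemoryMXBean emits the threshold-exceeded notifications.
            NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
            emitter.addNotificationListener(new NotificationListener() {
                public void handleNotification(Notification notification, Object handback) {
                    if (MemoryNotificationInfo.MEMORY_COLLECTION_THRESHOLD_EXCEEDED
                            .equals(notification.getType())) {
                        System.err.println("Heap still above threshold after GC - possible leak or undersized heap");
                    }
                }
            }, null, null);
        }

        public static void main(String[] args) {
            install(0.8); // alert when a heap pool is still more than 80% full after a collection
        }
    }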
I have developed a J2ME web browser application and it is working fine. I am testing its memory consumption, and it seems to me that it has a memory leak, because the green curve that represents consumed memory in the memory monitor (of the Wireless Toolkit) reaches the maximum allocated memory (which is 687768 bytes) every 7 requests made by the browser (i.e. when the end user navigates in the web browser from one page to another for 7 pages); after that the garbage collector runs and frees the allocated memory.
My question is:
is it a memory leak when the garbage collector runs automatically every 7 page navigations?
Do I need to run the garbage collector (System.gc()) manually once per request to prevent the maximum allocated memory from being reached?
Please guide me, thanks
To determine if it is a memory leak, you would need to observe it more.
From your description, i.e. that once the maximum memory is reached, the GC kicks in and is able to free memory for your application to run, it does not sound like there is a leak.
Also you should not call GC yourself since
calling it is only a hint to the JVM, not a guarantee
it could potentially interfere with the underlying GC algorithm and hurt its performance.
You should instead focus on why your application needs so much memory in such a short period.
My question is: is it a memory leak when the garbage collector runs automatically every 7 page navigation?
Not necessarily. It could also be that:
your heap is too small for the size of problem you are trying to solve, or
your application is generating (collectable) garbage at a high rate.
In fact, given the numbers you have presented, I'm inclined to think that this is primarily a heap size issue. If the interval between GC runs decreased over time, then THAT would be evidence that pointed to a memory leak, but if the rate stays steady on average, then it would suggest that the rate of memory usage and reclamation are in balance; i.e. no leak.
Do I need to run the garbage collector (System.gc()) manually one time per request to prevent the maximum allocated memory to be reached?
No. No. No.
Calling System.gc() won't cure a memory leak. If it is a real memory leak, then calling System.gc() will not reclaim the leaked memory. In fact, all you will do is make your application RUN A LOT SLOWER ... assuming that the JVM doesn't ignore the call entirely.
Direct and indirect evidence that the default behaviour of HotSpot JVMs is to honour System.gc() calls:
"For example, the default setting for the DisableExplicitGC option causes JVM to honor Explicit garbage collection requests." - http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.express.doc/info/exp/ae/rprf_hotspot_parms.html
"When JMX is enabled in this way, some JVMs (such as Sun's) that do distributed garbage collection will periodically invoke System.gc, causing a Full GC." - http://static.springsource.com/projects/tc-server/2.0/getting-started/html/ch11s07.html
"It is best to disable explicit GC by using the flag -XX:+DisableExplicitGC." - http://docs.oracle.com/cd/E19396-01/819-0084/pt_tuningjava.html
And from the Java 7 source code:
./openjdk/hotspot/src/share/vm/runtime/globals.hpp
product(bool, DisableExplicitGC, false, \
"Tells whether calling System.gc() does a full GC") \
where the false is the default value for the option. (And note that this is in the OS / M/C independent part of the code tree.)
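If you decide to go that route, the flag mentioned in the last quote simply goes on the launch command line, e.g. (the jar name is a placeholder):

    java -XX:+DisableExplicitGC -jar yourapp.jar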
I wrote a library that makes a good effort to force the GC. As mentioned before, System.gc() is asynchronous and won't do anything by itself. You may want to use this library to profile your application and find the spots where too much garbage is being produced. You can read more about it in this article where I describe the GC problem in detail.
That is (semi) normal behavior. Available (unreferenced) storage is not collected until the size of the heap reaches some threshold, triggering a collection cycle.
You can reduce the frequency of GC cycles by being a bit more "heap aware". Eg, a common error in many programs is to parse a string by using substring to not only parse off the left-most word, but also shorten the remaining string by substringing to the right. Creating a new String for the word is not easily avoided, but one can easily avoid repeatedly substringing the "tail" of the original string.
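A minimal sketch of that idea (the names are made up): walk the string with an index instead of repeatedly calling substring on the shrinking tail, so the only per-token allocation is the word itself:

    public class TokenWalk {

        public static void main(String[] args) {
            parse("one two three four");
        }

        static void parse(String line) {
            int start = 0;
            while (start < line.length()) {
                int end = line.indexOf(' ', start);
                if (end < 0) {
                    end = line.length();
                }
                String word = line.substring(start, end); // one String per token, hard to avoid
                System.out.println(word);                 // stand-in for real processing
                start = end + 1;                          // advance the index; no tail copy created
            }
        }
    }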
Running System.gc() will accomplish nothing -- on most platforms it's a no-op, since it's so commonly abused.
Note that (outside of brain-dead Android) you can't have a true "memory leak" in Java (unless there's a serious JVM bug). What's commonly referred to as a "leak" in Java is the failure to remove all references to objects that will never be used again. Eg, you might keep putting data into a chain and never clear pointers to the stuff on the far end of the chain that is no longer going to be used. The resulting symptom is that the MINIMUM heap used (ie, the size immediately after GC runs) keeps rising each cycle.
Adding to the other excellent answers:
Looks like you are confusing memory leak with garbage collection.
Memory leak is when unused memory cannot be garbage collected because it still has references somewhere (although they're not used for anything).
Garbage collection is when a piece of software (the garbage collector) frees unreferenced memory automatically.
You should not call the garbage collector manually because that would affect its performance.
Currently in our testing environment the max and min JVM heap size are set to the same value, basically as much as the dedicated server machine will allow for our application. Is this the best configuration for performance or would giving the JVM a range be better?
Peter's answer is correct in that -Xms is allocated at startup and it will grow up to -Xmx (max heap size), but it's a little misleading in how he has worded his answer. (Sorry Peter, I know you know this stuff cold.)
Setting ms == mx effectively turns off this behavior. While this used to be a good idea in older JVMs, it is no longer the case. Growing and shrinking the heap allows the JVM to adapt to increases in pressure on memory yet reduce pause time by shrinking the heap when memory pressure is reduced. Sometimes this behavior doesn't give you the performance benefits you'd expect and in those cases it's best to set mx == ms.
OOME is thrown when more than 98% of the time is spent collecting and the collections cannot recover more than 2% of the heap. If you are not at the max heap size then the JVM will simply grow the heap so that you stay clear of those boundaries. You cannot have an OutOfMemoryError on startup unless your heap hits the max heap size and meets the other conditions that define an OutOfMemoryError.
For the comments that have come in since I posted: I don't know what the JMonitor blog entry is showing, but this is from the PSYoung collector.
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
                           min_gen_size());
I could do more digging about but I'd bet I'd find code that serves the same purpose in the ParNew and PSOldGen and CMS Tenured implementations. In fact it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF the serial collector will run and that should include a compaction after which top of heap would most likely be clean and therefore eligible to be deallocated.
The main reason to set -Xms is if you need a certain heap size at startup. (It prevents OutOfMemoryErrors from happening on startup.) As mentioned above, if you need the startup heap to match the max heap, that is when you would set them the same. Otherwise you don't really need it; it just asks the application to take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load testing and using your application should give you a good feel for what to set them to. But it isn't the worst thing to set them to the same value at startup. For a lot of our apps, I actually start out with something like 128, 256, or 512 for min (startup) and one gigabyte for max (this is for non application server applications).
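For example (the sizes are just the ballpark figures mentioned above, not a recommendation for your workload):

    java -Xms256m -Xmx1g -jar yourapp.jar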
Just found this question on Stack Overflow which may also be helpful: side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth a look.
AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you pretty much know how much heap you are going to use. Also, having a large heap size reduces GC invocations to the point that they happen very few times. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same, fairly large value (around 8 GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
Also, I found an interesting discussion here.
Definitely yes for a server app. What's the point of having so much memory but not using it?
(No it doesn't save electricity if you don't use a memory cell)
JVM loves memory. For a given app, the more memory JVM has, the less GC it performs. The best part is more objects will die young and less will tenure.
Especially during server startup, the load is even higher than normal. It's brain-dead to give the server a small amount of memory to work with at this stage.
From what I see here at http://java-monitor.com/forum/showthread.php?t=427
the JVM under test begins with the Xms setting, but WILL deallocate memory it doesn't need, and it will take it up to the Xmx mark when it needs it.
Unless you need a chunk of memory dedicated to a big memory consumer initially, there's not much of a point in putting in a high Xms=Xmx. Deallocation and allocation appear to occur even with Xms=Xmx.
Background
I have a Spring batch program that reads a file (example file I am working with is ~ 4 GB in size), does a small amount of processing on the file, and then writes it off to an Oracle database.
My program uses 1 thread to read the file, and 12 worker threads to do the processing and database pushing.
I am churning through lots and lots of young gen memory, which is causing my program to run slower than I think it should.
Setup
JDK 1.6.18
Spring batch 2.1.x
4 Core Machine w 16 GB ram
-Xmx12G
-Xms12G
-XX:NewRatio=1
-XX:+UseParallelGC
-XX:+UseParallelOldGC
Problem
With these JVM params, I get somewhere around 5.x GB of memory for the Tenured Generation, and around 5.x GB of memory for the Young Generation.
In the course of processing this one file, my Tenured Generation is fine. It grows to a max of maybe 3 GB, and I never need to do a single full GC.
However, the Young Generation hits its max many times. It goes up to the 5 GB range, and then a parallel minor GC happens and clears Young Gen down to 500MB used. Minor GCs are good and better than a full GC, but it still slows down my program a lot (I am pretty sure the app still freezes when a young gen collection occurs, because I see the database activity die off). I am spending well over 5% of my program time frozen for minor GCs, and this seems excessive. I would say over the course of processing this 4 GB file, I churn through 50-60GB of young gen memory.
I don't see any obvious flaws in my program. I am trying to obey the general OO principles and write clean Java code. I am trying not to create objects for no reason. I am using thread pools, and whenever possible passing objects along instead of creating new objects. I am going to start profiling the application, but I was wondering if anyone had some good general rules of thumb or anti-patterns to avoid that lead to excessive memory churn? Is 50-60GB of memory churn to process a 4GB file the best I can do? Do I have to revert to JDK 1.2 tricks like object pooling? (Although Brian Goetz gave a presentation that included why object pooling is stupid and we don't need to do it anymore. I trust him a lot more than I trust myself .. :) )
I have a feeling that you are spending time and effort trying to optimize something that you should not bother with.
I am spending well over 5% of my program time frozen for minor GCs, and this seems excessive.
Flip that around. You are spending just under 95% of your program time doing useful work. Or, to put it another way, even if you managed to optimize the GC to run in ZERO time, the best you can get is something over a 5% improvement.
If your application has hard timing requirements that are impacted by the pause times, you could consider using a low-pause collector. (Be aware that reducing pause times increases the overall GC overheads ...) However for a batch job, the GC pause times should not be relevant.
What probably matters most is the wall clock time for the overall batch job. And the (roughly) 95% of the time spent doing application specific stuff is where you are likely to get more pay-off for your profiling / targeted optimization efforts. For example, have you looked at batching the updates that you send to the database?
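As a rough illustration of that suggestion (the table and column names are invented, and Spring Batch's JdbcBatchItemWriter already does something equivalent for you), batching the inserts cuts the number of database round trips dramatically:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class BatchedWriter {
        private static final int BATCH_SIZE = 1000;

        public void write(Connection conn, List<String[]> rows) throws SQLException {
            String sql = "INSERT INTO staging_table (col_a, col_b) VALUES (?, ?)"; // hypothetical table
            PreparedStatement ps = conn.prepareStatement(sql);
            try {
                int count = 0;
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();
                    if (++count % BATCH_SIZE == 0) {
                        ps.executeBatch(); // one round trip per BATCH_SIZE rows instead of one per row
                    }
                }
                ps.executeBatch(); // flush the remainder
            } finally {
                ps.close();
            }
        }
    }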
So.. 90% of my total memory is in char[] in "oracle.sql.converter.toOracleStringWithReplacement"
That would tend to indicate that most of your memory usage occurs in the Oracle JDBC drivers while preparing stuff to be sent to the database. There's very little you can do about that. I'd chalk it up as an unavoidable overhead.
It would be really useful if you clarified your terms "young" and "tenured" generation, because Java 6 has a slightly different GC model: Eden, S0+S1, Old, Perm.
Have you experimented with the different garbage collection algorithms? How have "UseConcMarkSweepGC" or "UseParNewGC" performed?
And don't forget that simply increasing the available space is NOT the solution, because a GC run will take much longer; decrease the size to normal values ;)
Are you sure you have no memory leaks? In the consumer-producer pattern you describe, data should rarely end up in the Old Gen, because those jobs are processed really fast and then "thrown away" - or is your work queue filling up?
You should definitely observe your program with a memory analyzer.
I think a session with a memory profiler will shed a lot of light on the subject. It gives a nice overview of how many objects are created, and this is sometimes revealing.
I am always amazed how many strings are generated.
For domain objects, cross-referencing them is also revealing. If you suddenly see 3 times more objects derived from a source object than source objects themselves, then something is going on there.
NetBeans has a nice one built in. I used JProfiler in the past. I think if you bang on Eclipse long enough you can get the same info from the TPTP tools.
You need to profile your application to see what is happening exactly. And I would also try first to use the ergonomics feature of the JVM, as recommended:
2. Ergonomics

A feature referred to here as ergonomics was introduced in J2SE 5.0. The goal of ergonomics is to provide good performance with little or no tuning of command line options by selecting the garbage collector, heap size, and runtime compiler at JVM startup, instead of using fixed defaults. This selection assumes that the class of the machine on which the application is run is a hint as to the characteristics of the application (i.e., large applications run on large machines). In addition to these selections is a simplified way of tuning garbage collection. With the parallel collector the user can specify goals for a maximum pause time and a desired throughput for an application. This is in contrast to specifying the size of the heap that is needed for good performance. This is intended to particularly improve the performance of large applications that use large heaps. The more general ergonomics is described in the document entitled "Ergonomics in the 5.0 Java Virtual Machine". It is recommended that the ergonomics as presented in this latter document be tried before using the more detailed controls explained in this document. Included in this document are the ergonomics features provided as part of the adaptive size policy for the parallel collector. This includes the options to specify goals for the performance of garbage collection and additional options to fine tune that performance.
See the more detailed section about Ergonomics in the Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning guide.
In my opinion, the young generation should not be the same size as the old generation, so that the minor garbage collections stay fast.
Do you have many objects that represent the same value? If you do, merge these duplicate objects using a simple HashMap:
import java.util.concurrent.ConcurrentHashMap;

public class MemorySavingUtils {

    private final ConcurrentHashMap<String, String> knownStrings = new ConcurrentHashMap<String, String>();

    // Returns the canonical instance for s. Note that putIfAbsent returns null
    // when s was not present yet; in that case s itself becomes the canonical copy.
    public String unique(String s) {
        String previous = knownStrings.putIfAbsent(s, s);
        return previous != null ? previous : s;
    }

    public void clear() {
        knownStrings.clear();
    }
}
With Sun's HotSpot JVM, the native String.intern() is really slow for large numbers of Strings; that's why I suggest building your own String interner.
Using this method, strings from the old generation are reused and strings from the new generation can be garbage collected quickly.
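A tiny usage sketch (assuming the corrected unique() above, which returns the canonical instance):

    public class InternerDemo {
        public static void main(String[] args) {
            MemorySavingUtils interner = new MemorySavingUtils();
            // Two distinct String objects with equal content, e.g. parsed from different lines:
            String a = new String("ACTIVE");
            String b = new String("ACTIVE");
            System.out.println(interner.unique(a) == interner.unique(b)); // true: one shared instance
        }
    }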
Read a line from a file, store it as a string and put it in a list. When the list has 1000 of these strings, put it in a queue to be read by worker threads. Have said worker thread make a domain object, peel a bunch of values off the string to set the fields (int, long, java.util.Date, or String), and pass the domain object along to a default Spring Batch JDBC writer.
if that's your program, why not set a smaller memory size, like 256MB?
I'm guessing with a memory limit that high you must be reading the file entirely into memory before doing the processing. Could you consider using a java.io.RandomAccessFile instead?
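A minimal sketch of that suggestion (the file name and chunk size are placeholders): process the file through a fixed-size buffer with java.io.RandomAccessFile so only one chunk is ever resident:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class ChunkedReader {
        public static void main(String[] args) throws IOException {
            RandomAccessFile raf = new RandomAccessFile("input.dat", "r");
            try {
                byte[] buffer = new byte[8 * 1024 * 1024]; // 8 MB window, reused for every chunk
                long position = 0;
                long length = raf.length();
                while (position < length) {
                    raf.seek(position);
                    int read = raf.read(buffer);
                    if (read <= 0) {
                        break;
                    }
                    // process(buffer, read) would go here; only one chunk is ever in memory
                    position += read;
                }
            } finally {
                raf.close();
            }
        }
    }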