Java 6 Update 25 VM crash: insufficient memory

For an update to this question, see below.
I am experiencing a (reproducible, at least for me) JVM crash (not an OutOfMemoryError); the application that crashes is Eclipse 3.6.2. However, looking at the crash log makes me wonder:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 65544 bytes for Chunk::new
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32-bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
Current thread (0x531d6000): JavaThread "C2 CompilerThread1" daemon
[_thread_in_native, id=7812, stack(0x53af0000,0x53bf0000)]
Stack: [0x53af0000,0x53bf0000], sp=0x53bee860, free space=1018k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [jvm.dll+0x1484aa]
V [jvm.dll+0x1434fc]
V [jvm.dll+0x5e6fc]
V [jvm.dll+0x5e993]
V [jvm.dll+0x27a571]
V [jvm.dll+0x258672]
V [jvm.dll+0x25ed93]
V [jvm.dll+0x260072]
V [jvm.dll+0x24e59a]
V [jvm.dll+0x47edd]
V [jvm.dll+0x48a6f]
V [jvm.dll+0x12dcd4]
V [jvm.dll+0x155a0c]
C [MSVCR71.dll+0xb381]
C [kernel32.dll+0xb729]
I am using Windows XP 32-bit SP3 and have 4GB of RAM. Before starting the application, the task manager reported 2 GB free (plus 1 GB of system cache which could probably be freed as well), so I definitely had enough free RAM.
From startup until the crash I logged the JVM memory statistics using VisualVM and JConsole, capturing the memory consumption up to the last moments before the crash. The statistics show the following allocated memory sizes:
HeapSize: 751 MB (used 248 MB)
Non-HeapSize(PermGen & CodeCache): 150 MB (used 95 MB)
Size of memory management areas (Eden space, old gen, etc.): 350 MB
Thread stack sizes: 17 MB (based on Oracle's documented defaults and the 51 threads that were running)
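(Aside: the same numbers VisualVM and JConsole show can also be polled programmatically via java.lang.management. A minimal sketch - the class name and the 5-second interval are arbitrary:)

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemLog {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = mem.getHeapMemoryUsage();       // what VisualVM charts as "Heap"
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // PermGen + code cache on Java 6
            System.out.printf("heap used=%dMB committed=%dMB | non-heap used=%dMB committed=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20,
                    nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
            Thread.sleep(5000);
        }
    }
}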
I am running the application (jre 6 update 25, server vm) using the parameters:
-XX:PermSize=128m
-XX:MaxPermSize=192m
-XX:ReservedCodeCacheSize=96m
-Xms500m
-Xmx1124m
Question:
Why does the JVM crash when there is obviously enough memory left on the VM and the OS?
With the above settings, I don't think I can hit the 2GB 32-bit limit (1124MB + 192MB + 96MB + thread stacks < 2GB). In any other case (such as too much heap allocation), I would rather expect an OutOfMemoryError than a JVM crash.
Who can help me to figure out what is going wrong here?
(Note: I upgraded recently to Eclipse 3.6.2 from Eclipse 3.4.2 and from Java 5 to Java 6. I suspect that there's a connection between the crashes and these changes because I haven't seen these before)
UPDATE
It seems to be a JVM bug introduced in Java 6 Update 25, related to the new JIT compiler. See also this blog entry.
According to the blog, a fix for this bug should be part of the next Java 6 update.
In the meantime, I obtained a native stack trace during a crash and have updated the crash log above.
The proposed workaround, the VM argument -XX:-DoEscapeAnalysis, works (at least it notably lowers the probability of a crash).
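Since the crashing application is Eclipse, the flag can be added to the eclipse.ini file next to the Eclipse executable; everything after the -vmargs line is passed to the JVM. A sketch of the tail of that file, using the settings from above:

-vmargs
-XX:PermSize=128m
-XX:MaxPermSize=192m
-XX:ReservedCodeCacheSize=96m
-Xms500m
-Xmx1124m
-XX:-DoEscapeAnalysis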

The commonly quoted 2GB limit for a 32-bit JVM on Windows is not accurate; see https://blogs.sap.com/2019/10/07/does-32-bit-or-64-bit-jvm-matter-anymore/
Since you are on Windows-XP you are stuck with a 32 bit JVM.
The maximum heap is about 1.5GB on a 32-bit VM on Windows. You are at 1412MB to begin with, before counting thread stacks. Did you try decreasing the thread stack size (-Xss), and have you tried eliminating the initial PermGen allocation (-XX:PermSize=128m)? It sounds like this is an Eclipse problem, not a memory problem per se.
Can you move to a newer JVM or a different (64-bit) JVM on a different machine? Even if you are targeting Windows XP, there is no reason to develop on it unless you HAVE to; Eclipse can run, debug and deploy code on remote machines easily.
Eclipse's own JVM can be different from the JVM of the things you run in or with Eclipse. Eclipse is a memory pig. You can remove unnecessary Eclipse plug-ins to use less memory; it comes with things out of the box you probably don't need or want.
Try to null out references (to eliminate circular, uncollectible GC objects), re-use allocated memory, use singletons, and profile your memory usage to eliminate unnecessary objects, references and allocations. Additional tips:
Prefer static memory allocation, i.e. allocate once per VM, as opposed to dynamically.
Avoid creating temporary objects within functions - consider a reset() method which allows an object to be reused (a minimal sketch follows this list).
Avoid String mutations and mutation of auto-boxed types.
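A minimal sketch of the reset-and-reuse idea (the class and method names are made up for illustration):

// One instance is allocated once and recycled, instead of creating
// a fresh object (and garbage) on every call.
public class ScratchBuffer {
    private final int[] data = new int[1024];
    private int length = 0;

    // Clears the logical state so the same instance can be reused.
    public void reset() {
        length = 0;
    }

    public void append(int value) {
        data[length++] = value;
    }

    public int length() {
        return length;
    }
}

Allocate one ScratchBuffer per worker, call reset() at the start of each unit of work, and no per-call garbage is created at all.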

I think that @ggb667 has nailed the reason your JVM is crashing. 32-bit Windows architectural constraints limit the available RAM for a Java application to 1.5GB [1]... not 2GB as you surmised. Also, you have neglected to include the address space occupied by the code segment of the executable, shared libraries, the native heap, and "other things".
Basically, this is not a JVM bug. You are simply running against the limitations of your hardware and operating system.
There is a possible solution in the form of PAE (Physical Address Extension) support in some versions of Windows. According to the link, Windows XP with PAE makes available up to 4GB of usable address spaces to user processes. However, there are caveats about device driver support.
Another possible solution is to reduce the max heap size, and do other things to reduce the application's memory utilization; e.g. in Eclipse reduce the number of "open" projects in your workspace.
See also: Java maximum memory on Windows XP
[1] Different sources say different things about the actual limit, but it is significantly less than 2GB. To be frank, it doesn't matter what the actual limit is.
In an ideal world this question should no longer be of practical interest to anyone. In 2020:
You shouldn't be running Windows XP. It has been end-of-life since April 2014.
You shouldn't be running Java 6. It has been end-of-life since April 2013.
If you are still running Java 6, you should at least be on the last public patch release: 1.6.0_45. (Or a later 1.6 non-public release if you have / had a support contract.)
Either way, you should not be running Eclipse on this system. Seriously, you can get a new 64-bit machine for a few hundred dollars with more memory, etc that will allow you to run an up-to-date operating system and an up-to-date Java release. You should use that to run Eclipse.
If you really need to do Java development on an old 32-bit machine with an old version of Java (because you can't afford a newer machine) you would be advised to use a simple text editor and the Java 6 JDK command line tools (and a 3rd-party Java build tool like Ant, Maven, Gradle).
Finally, if you are still trying to run / maintain Java software that is stuck on Java 6, you should really be trying to get out of that hole. Life is only going to get harder for you:
If the Java 6 software was developed in-house or you have source code, port it.
If you depend on proprietary software that is stuck on Java 6, look for a new vendor.
If management says no, put it to them that they may need to "turn it off".
You / your organization should have dealt with this issue SEVEN years ago.

I stumbled upon a similar problem at work. We had set -Xmx65536M for our application but kept getting exactly the same kind of errors. The funny thing is that the errors always happened at times when our application was doing relatively lightweight calculations, nowhere near this limit.
We found a possible solution online: http://www.blogsoncloud.com/jsp/techSols/java-lang-OutOfMemoryError-unable-to-create-new-native-thread.jsp , and it seemed to work. After lowering -Xmx to 50G, we've had none of these issues.
What actually happens in this case is still somewhat unclear to us; given the linked article, a plausible explanation is that the huge heap reservation left too little native memory for thread stacks.

The JVM has its own limits that will stop it long before it hits the physical or virtual memory limits. What you need to adjust is the maximum heap size, which is set with another one of the -X flags. (I think it's something creative like -XHeapSizeLimit, but I'll check in a second.)
Here we go:
-Xmsn  Specify the initial size, in bytes, of the memory allocation pool. This value must be a multiple of 1024 greater than 1MB. Append the letter k or K to indicate kilobytes, or m or M to indicate megabytes. The default value is 2MB. Examples:
-Xms6291456
-Xms6144k
-Xms6m
-Xmxn  Specify the maximum size, in bytes, of the memory allocation pool. This value must be a multiple of 1024 greater than 2MB. Append the letter k or K to indicate kilobytes, or m or M to indicate megabytes. The default value is 64MB. Examples:
-Xmx83886080
-Xmx81920k
-Xmx80m
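Putting the two flags together, a typical invocation looks like this (the sizes and the jar name are placeholders):
java -Xms256m -Xmx1024m -jar MyApp.jar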


Java JVM Heap Size

I have an application that on launch requests a specific amount of RAM using the following command:
java -Xms512m -Xmx985m -jar someJarfile.jar
This command fails to run on my computer, which has 8.0GB of RAM, because it cannot create an object heap of the specified size. If I lower the max to something below 700MB it works fine.
What is even stranger is that even a simple java -Xmx768m -version fails when the -Xmx flag value exceeds 700m. I am trying to run it with Java 1.7u67 32-bit (that is what the jar was built with), and also with newer versions of Java 1.7 and even Java 1.8. I would understand if the max heap were higher and I were using 32-bit, but it is not above the ~1.4GB cap of 32-bit Java.
Is there a configuration parameter that I am missing somewhere, or some sort of software that may be interfering? It does not make sense to me why I cannot allocate 700MB of RAM on a machine with 8.0GB of RAM.
I should also note that there are no other processes running that are taking up all of my RAM. It is a fresh install of Windows 7.
While 700 MB is pretty low, it is not surprising.
The 32-bit emulation layer in Windows works the same way as 32-bit Windows XP, with all its limitations: of the potential 4 GB of address space, 2 GB is lost to the OS. On top of that, programs already loaded into the process use up virtual address space, and if your program uses shared libraries or off-heap storage such as direct memory and memory-mapped files, that too costs virtual memory which could otherwise hold the heap. Effectively you are limited to about 1.4 GB of virtual memory for your application's heap, no matter how much memory you actually have.
The simple way around this is to use the 64-bit JVM, which runs in your 64-bit OS and is also limited, but to 192 TB of virtual memory on Windows.
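To confirm which JVM bitness you are actually running, you can ask the VM itself (sun.arch.data.model is a HotSpot-specific property, so treat this as a sketch rather than portable code):

public class Bitness {
    public static void main(String[] args) {
        System.out.println(System.getProperty("sun.arch.data.model")); // "32" or "64" on HotSpot
        System.out.println(System.getProperty("os.arch"));
    }
}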
You should try using a 64-bit Java Runtime. It is probably the case that there is no contiguous 985 MB block of memory free within the 32-bit address space of your process (the 32-bit address space is only 4GB). With a 64-bit Java Runtime, Java can allocate the memory within the 64-bit address space, where a sufficiently large free block is much more likely to be available.
It doesn't matter that your JAR file was built using a 32-bit version.
The answer to your question may lie in the fact that Windows tries and fails to find a contiguous block of memory that is large enough: see http://javarevisited.blogspot.nl/2013/04/what-is-maximum-heap-size-for-32-bit-64-JVM-Java-memory.html. (Though this suggests that other processes are hogging memory, which seems to be contradicted by your last remark.)
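If you want to find the exact threshold on a machine like this without manual trial and error, a small probe can do it (a sketch; the class name, range and step size are arbitrary, and it assumes java is on the PATH):

import java.io.IOException;

public class MaxHeapProbe {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Walk down from 1600 MB in 32 MB steps until "java -Xmx<n>m -version" succeeds.
        for (int mb = 1600; mb >= 512; mb -= 32) {
            Process p = new ProcessBuilder("java", "-Xmx" + mb + "m", "-version")
                    .redirectErrorStream(true)
                    .start();
            if (p.waitFor() == 0) {
                System.out.println("Largest working -Xmx in the probed range: " + mb + "m");
                return;
            }
        }
        System.out.println("No -Xmx value in the probed range worked.");
    }
}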

Analyze/track down potential native memory leak in JVM

We're running an application on Linux using Java 1.6 (OpenJDK as well as Oracle JDK). The JVM itself is configured with a maximum of 3.5 GB heap and 512 MB permgen space. However, after running for a while, top reports the process is using about 8 GB of virtual memory, and smem -s swap p reports about 3.5 GB being swapped.
After running a bigger import of thousands of image files on one server, almost no swap space is left, and calls to native applications (in our case Im4java calls to ImageMagick) fail because the OS cannot allocate memory for those applications.
In another case the swap space filled over the course of several weeks resulting in the OS killing the JVM due to being out of swap space.
I understand that the JVM will need more than 4 GB of memory for heap (max 3.5 GB), permgen (max 512 MB), code cache, loaded libraries, JNI frames etc.
The problem I'm having is how to find out what is actually using how much of the memory. If the JVM was out of heap memory, I'd get a dump which I could analyze, but in our case it's the OS memory that is eaten up and thus the JVM doesn't generate a dump.
I know there's jrcmd for JRockit, but unfortunately we can't just switch the JVM.
There also seem to be a couple of libraries that allow tracking native memory usage, but most of those seem to require recompiling native code - and besides Im4java (which AFAIK just runs a native process; we don't use DLL/SO integration here) and the JVM, there is no other native code involved that we know of.
Besides that, we can't use a library/tool that might have a huge impact on performance or stability in order to track memory usage on a production system over a long period (several weeks).
So the question is:
How can we get information on what the JVM is actually needing all that memory for, ideally with some detailed information?
You may find references to "zlib/gzip" (PDF handling, or HTTP encoding since Java 7), "java2d" or "jai" when replacing the JVM's memory allocator (with jemalloc or tcmalloc).
But to really diagnose a native memory leak, JIT code symbol mapping and recent Linux profiling tools are required: perf, perf-map-agent and bcc.
Please refer to the details in the related answer https://stackoverflow.com/a/52767721/737790
Many thanks to Brendan Gregg.
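A rough sketch of the perf side of that workflow, assuming perf and perf-map-agent are already installed and <jvm-pid> stands in for the JVM's process id (exact paths and flags vary by distribution):

# sample on-CPU stacks of the JVM for 30 seconds
perf record -g -p <jvm-pid> -- sleep 30
# turn the recorded samples into readable stack traces
perf script > stacks.txt

perf-map-agent generates a /tmp/perf-<pid>.map file so that perf can resolve JIT-compiled Java frames to method names; without it the Java frames show up as bare hex addresses.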

Java Heap Space Error - How To Increase Maximum Amount Able To Be Reserved? [Cannot Exceed 1505M]

I am running 64-bit Windows 7 with 4GB of RAM, using 32-bit Java, and I am trying to run a graph search algorithm in Eclipse. I commented absolutely everything out except for a simple println("Hello World"). After a lot of tinkering, I found that I cannot reserve more than 1505M-1507M (it varies within that range - I've no idea why). That is to say, I set the following as my JVM arguments:
-Xms1505M
I read online that I should be able to reserve a maximum of 2G. A quick ctrl-alt-del check showed that I had 2400M available and 1200M cached. Here is where things get strange: as a stupid experiment, I opened 50 tabs in Google Chrome so that I had only 400M available and 450M cached. I ran my Eclipse program with the flag above and it still ran - I had reserved 1500M of non-existent RAM.
Someone please help! This program is for a grade and I've been stuck on this for hours.
An operating system with virtual memory can perform strange tricks, and the memory-usage statistics may not always tell you what you think they are. Some of the memory may be swapped out to disk, which sounds like what you're describing here, but some of the memory that's listed for each program is actually shared (e.g., copies of system libraries that are used by each program, but only one copy is loaded in memory).
The more fundamental question is why your graph algorithm is taking up such an inordinate amount of memory; unless you're trying to work on the global Internet routing table, you're probably implementing the algorithm incorrectly.
-Xms sets the minimum (initial) heap; in your case you need to change the maximum heap using -Xmx.
There are other posts here on SO that discuss -Xms vs -Xmx; here is one of them.
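For example, keeping the asker's 1505M figure (the jar name is a placeholder):
java -Xms512m -Xmx1505m -jar MyApp.jar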
A 32-bit Windows program runs in an emulated 32-bit environment which is designed to work just like Windows XP for compatibility. This means it also has the same limitations as 32-bit Windows, and you cannot have a heap larger than about 1.2 - 1.4 GB, depending on what has run before.
The simplest solution is: don't use 32-bit Java. The 64-bit JVM will run better/faster unless you are forced to use a 32-bit DLL. In that case I suggest you run one JVM in 32-bit mode and communicate with it (RMI/messaging/shared memory) from a 64-bit program which does all the real work.
I read online that I should be able to reserve a maximum of 2G.
That was never possible with 32-bit Java on Windows. The problem is that the heap has to be contiguous and can only use whatever memory is left after all the shared libraries have been loaded.
A quick ctrl-alt-del check showed that I have 2400M available and 1200 cached.
Time to get some more memory I think. I wouldn't buy a laptop with less than 8 GB if you want to use memory seriously and I wouldn't buy a PC with less than 32 GB.
Here is where things get strange: As a stupid experiment, I opened 50 tabs in Google Chrome such that I had 400M available memory, 450M cached. I ran my Eclipse program with the flag above and it still ran. I reserved 1500M of non-existent RAM.
The OS has access to more memory; it just won't let you use more within the 32-bit emulation. You can have 32 GB of main memory and a 32-bit JVM still will not be able to allocate more.

JVM Initialization error [duplicate]

I've always been able to allocate 1400 megabytes for Java SE running on 32-bit Windows XP (Java 1.4, 1.5 and 1.6).
java -Xmx1400m ...
Today I tried the same option on a new Windows XP machine using Java 1.5_16 and 1.6.0_07 and got the error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Through trial and error it seems 1200 megabytes is the most I can allocate on this machine.
Any ideas why one machine would allow 1400 and another only 1200?
Edit: The machine has 4GB of RAM with about 3.5GB that Windows can recognize.
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system shouldn't necessarily impact your heap size. What will get in your way are DLL's that get loaded in to your address space. Unfortunately optimizations in Windows that minimize the relocation of DLL's during linking make it more likely you'll have a fragmented address space. Things that are likely to cut in to your address space aside from the usual stuff include security software, CBT software, spyware and other forms of malware. Likely causes of the variances are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2GB of the 4GB 32-bit space).
You could try going through the DLL bindings in your JVM process and look at rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, while it will chew up more RAM, you will have much more contiguous virtual address space, and allocating 2GB contiguously would be trivial.
This has to do with contiguous memory.
Here's some info I found online for somebody asking that before, supposedly from a "VM god":
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.

Usually we don't have trouble getting modest contiguous regions (up to about 1.5GB on Windohs, up to about 3.8GB on Solaris. YMMV.). On Windohs, the problem is mostly that there are some libraries that get loaded before the JVM starts up that break up the address space. Using the /3GB switch won't rebase those libraries, so they are still a problem for us.

We know how to make chunked heaps, but there would be some overhead to using them. We have more requests for faster storage management than we do for larger heaps in the 32-bit JVM. If you really want large heaps, switch to the 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.
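To make the card-mark arithmetic above concrete, here is a small simulation (illustration only - a real card table lives inside the VM, not in Java code, and the sizes here are invented):

public class CardMarkDemo {
    public static void main(String[] args) {
        byte[] cardMarks = new byte[1 << 12]; // one byte per 512-byte card: covers a 2 MiB "heap"
        long heapBase = 0;                    // pretend start of the heap
        long storeAddress = 123456;           // pretend address of a reference store
        int card = (int) ((storeAddress - heapBase) >> 9); // right shift by 9 because 512 = 2^9
        cardMarks[card] = 1;                  // dirty the corresponding card
        System.out.println("dirtied card " + card);
    }
}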
The Java heap size limits for Windows are:
maximum possible heap size on 32-bit Java: 1.8 GB
recommended heap size limit on 32-bit Java: 1.5 GB (or 1.8 GB with /3GB option)
This doesn't help you getting a bigger Java heap, but now you know you can't go beyond these values.
Oracle JRockit, which can handle a non-contiguous heap, can have a Java heap size of 2.85 GB on Windows 2003/XP with the /3GB switch. It seems that fragmentation can have quite an impact on how large a Java heap can be.
The JVM needs contiguous memory and depending on what else is running, what was running before, and how windows has managed memory you may be able to get up to 1.4GB of contiguous memory. I think 64bit Windows will allow larger heaps.
Sun's JVM needs contiguous memory. So the maximal amount of available memory is dictated by memory fragmentation. Drivers' DLLs in particular tend to fragment the address space when they load at predefined base addresses. So your hardware and its drivers determine how much memory you can get.
Two sources for this with statements from Sun engineers: forum blog
Maybe another JVM? Have you tried Harmony? I think they planned to allow non-contiguous heaps.
I think it has more to do with how Windows is configured as hinted by this response:
Java -Xmx Option
Some more testing: I was able to allocate 1300MB on an old Windows XP machine with only 768MB physical RAM (plus virtual memory). On my 2GB RAM machine I can only get 1220MB. On various other corporate machines (with older Windows XP) I was able to get 1400MB. The machine with the 1220MB limit is pretty new (just purchased from Dell), so maybe it has newer (and more bloated) Windows and DLLs (it's running Windows XP Pro Version 2002 SP2).
I got this error message when running a java program from a (limited memory) virtuozzo VPS. I had not specified any memory arguments, and found I had to explicitly set a small amount as the default must have been too high. E.g. -Xmx32m (obviously needs to be tuned depending on the program you run).
Just putting this here in case anyone else gets the above error message without specifying a large amount of memory like the questioner did.
Sun's JDK/JRE needs a contiguous region of memory if you allocate a huge heap.
The OS and the initially loaded applications tend to allocate bits and pieces of the address space during loading, which fragments the available RAM. If a large enough contiguous block is NOT available, the Sun JDK cannot use it. JRockit from BEA (acquired by Oracle) can allocate memory in pieces.
Everyone seems to be answering about contiguous memory, but have neglected to acknowledge a more pressing issue.
Even with 100% contiguous memory allocation, you can't have a 2 GiB heap size on a 32-bit Windows OS (by default). This is because 32-bit Windows processes cannot address more than 2 GiB of space.
The Java process will contain perm gen (pre Java 8), stack size per thread, JVM / library overhead (which pretty much increases with each build) all in addition to the heap.
Furthermore, JVM flags and their default values change between versions. Just run the following and you'll get some idea:
java -XX:+PrintFlagsFinal
Lots of the options affect memory division in and out of the heap. Leaving you with more or less of that 2 GiB to play with...
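For example, to see just the heap-related defaults on Windows (findstr plays the role grep does elsewhere):
java -XX:+PrintFlagsFinal -version | findstr /i HeapSize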
To reuse portions of this answer of mine (about Tomcat, but applies to any Java process):
The Windows OS limits the memory allocation of a 32-bit process to 2 GiB in total (by default).
[You will only be able] to allocate around 1.5 GiB heap space because there is also other memory allocated to the process (the JVM / library overhead, perm gen space etc.).
Why does 32-bit Windows impose a 2 GB process address space limit, but 64-bit Windows impose a 4GB limit?
Other modern operating systems [cough Linux] allow 32-bit processes to
use all (or most) of the 4 GiB addressable space.
That said, 64-bit Windows OS's can be configured to increase the limit
of 32-bit processes to 4 GiB (3 GiB on 32-bit):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
Here is how to increase the paging size:
right click on My Computer ---> Properties ---> Advanced
in the Performance section click Settings
click the Advanced tab
in the Virtual memory section, click Change; it will show your current paging size
select a drive where HDD space is available
provide the initial size and max size, e.g. initial size 0 MB and max size 4000 MB (as much as you will require)
There are numerous ways to change the heap size, e.g.:
File -> Settings -> Build, Execution, Deployment -> Compiler: here you will find the heap size.
File -> Settings -> Build, Execution, Deployment -> Compiler -> Android: here you will also find the heap size; refer to this one for an Android project if you are facing the same issue.
What worked for me was:
Set the appropriate JAVA_HOME path in case your Java got updated.
Create a new system variable: Computer -> Properties -> Advanced settings -> create new system variable
name: _JAVA_OPTIONS value: -Xmx750m
FYI: you can find the default VM options in IntelliJ under Help -> Edit Custom VM Options; in this file you can see the min and max heap size.
First, using a page file when you have 4 GB of RAM is useless: 32-bit Windows can't access more than 4GB (actually, less, because of memory holes), so the page file is not used.
Second, the address space is split in two: half for the kernel, half for user mode. If you need more RAM for your applications, use the /3GB option in boot.ini and make sure java.exe is marked as "large address aware" (google for more info).
Third, I think you can't allocate the full 2 GB of address space because Java wastes some memory internally (for threads, the JIT compiler, VM initialization, etc.). Use the /3GB switch for more.
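If you go the "large address aware" route, the flag can be set on an executable with the editbin tool from the Microsoft Visual C++ toolchain (a sketch; back up the file first, and be aware that patching java.exe may not be supported by your JVM vendor):
editbin /LARGEADDRESSAWARE java.exe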

32-bit Java on 64-bit OS: is there a limit to number of JVMs?

I have a Solaris sparc (64-bit) server, which has 16 GB of memory. There are a lot of small Java processes running on it, but today I got the "Could not reserve enough space for object heap" error when trying to launch a new one. I was surprised, since there was still more than 4GB free on the server. The new process was able to successfully launch after some of the other processes were shut down; the system had definitely hit a ceiling of some kind.
After searching the web for an explanation, I began to wonder if it was somehow related to the fact that I'm using the 32-bit JVM (none of the java processes on this server require very much memory).
I believe the default max memory pool is 64MB, and I was running close to 64 of these processes. So that would be 4GB all told ... right at the 32-bit limit. But I don't understand why or how any of these processes would be affected by the others. If I'm right, then in order to run more of these processes I'll either have to tune the max heap to be lower than the default, or else switch to using the 64-bit JVM (which may mean raising the max heap to be higher than the default for these processes). I'm not opposed to either of these, but I don't want to waste time and it's still a shot in the dark right now.
Can anyone explain why it might work this way? Or am I completely mistaken?
If I am right about the explanation, then there is probably documentation on this: I'd very much like to find it. (I'm running Sun's JDK 6 update 17 if that matters.)
Edit: I was completely mistaken. The answers below confirmed my gut instinct that there's no reason why I shouldn't be able to run as many JVMs as I can hold. A little while later I got an error on the same server trying to run a non-java process: "fork: not enough space". So there's some other limit I'm encountering that is not java-specific. I'll have to figure out what it is (no, it's not swap space). Over to serverfault I go, most likely.
I believe the default max memory pool is 64MB, and I was running close to 64 of these processes. So that would be 4GB all told ... right at the 32-bit limit.

No. The 32-bit limit is per process (at least on a 64-bit OS). But the default maximum heap is not fixed at 64MB:

initial heap size: Larger of 1/64th of the machine's physical memory or some reasonable minimum.
maximum heap size: Smaller of 1/4th of the physical memory or 1GB.

Note: The boundaries and fractions given for the heap size are correct for J2SE 5.0. They are likely to be different in subsequent releases as computers get more powerful.
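You can check what defaults a given JVM actually selected on a given machine by running (output varies by version and hardware):
java -XX:+PrintCommandLineFlags -version
Among other flags, this prints the -XX:MaxHeapSize value the ergonomics chose.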
I suspect the memory is fragmented. Check also Tools to view/solve Windows XP memory fragmentation for a confirmation that memory fragmentation can cause such errors.
