I heard that on real-time systems it is preferred to use pre-allocated memory to avoid generating garbage as much as possible. But what exactly does that mean? As far as I know, whenever we call the new operator we allocate heap memory at runtime. So how do we achieve the use of pre-allocated memory?
"Pre-allocated memory" means that a program should allocate all the required memory blocks once after startup (using the new operator, as usual), rather than allocating memory repeatedly during execution and leaving memory that is no longer needed for the garbage collector to free.
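As a rough illustration of that approach, here is a minimal sketch of a fixed-size buffer pool (the class and method names are made up for the example): all byte[] blocks are created once in the constructor, and steady-state acquire/release produces no garbage.

```java
import java.util.ArrayDeque;

// Minimal fixed-size buffer pool: every byte[] block is allocated once
// up front; acquire()/release() then reuse the same blocks, so the
// steady state creates no new objects for the GC to collect.
public class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();

    public BufferPool(int count, int blockSize) {
        for (int i = 0; i < count; i++) {
            free.push(new byte[blockSize]); // all allocation happens here
        }
    }

    public byte[] acquire() {
        byte[] b = free.poll();
        if (b == null) {
            throw new IllegalStateException("pool exhausted");
        }
        return b;
    }

    public void release(byte[] b) {
        free.push(b); // return the block instead of letting it become garbage
    }
}
```

A real-time variant would also need to decide what to do on exhaustion (block, grow, or fail) and whether the pool must be thread-safe.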
Pre-allocated memory is memory that is allocated at the time the program is loaded. In Java, this can be achieved with the static keyword.
In the GraalVM project, the GC in SubstrateVM is written in Java. What makes me curious is how to manage the memory of a garbage collector that is itself written in a garbage-collected language.
If it manages its own memory by itself, it may cause infinite recursion. Here I assume that the garbage collector includes the functions of memory allocation and reclamation, and give an example.
For example: my code is the garbage collector -> I need to create an object and allocate memory -> I call the garbage collector (myself) -> I need to create an object and allocate memory -> I call the garbage collector (myself) ......
How does it solve the infinite recursion problem? My idea is to use a lightweight garbage collector written in an additional native language (such as C) to run the one written in Java. Although SubstrateVM seems to be compiled into a native executable binary by native-image, I think the problem still exists.
SubstrateVM's GC is written in a subset of Java that has a few restrictions. One of them is that GC code never allocates memory on the Java heap -- see com.oracle.svm.core.heap.RestrictHeapAccess.NO_ALLOCATION. That makes sense, as GC is often started in response to the heap being full, so it would not be able to allocate anything anyway. Instead, it requests memory chunks directly from the OS using mmap and the like -- see the CommittedMemoryProvider and VirtualMemoryProvider classes.
The Task
Allocate a byte array of X = 4..8 MB on the heap, e.g. using ByteBuffer.allocate(), such that it will not cause an OutOfMemoryError. It is not allowed to split the array and process it in smaller portions. Note that the allocation happens on the heap; this is not a direct ByteBuffer.
The Challenges
Memory can be fragmented, so even if there is enough free memory in total (greater than X), a contiguous region of X bytes may still be unavailable for allocating the array (an API to find out whether a contiguous region of X bytes is available would probably help).
Heap memory is divided into regions to keep objects of different generations, and an object cannot span two or more regions of the heap; see "Huge arrays throws out of memory despite enough memory available" and "Large Array allocation across young and tenured portions of java Heap".
Large objects are immediately allocated in a tenured region, but it is tricky to reliably reason about which region exactly, even using ManagementFactory.getMemoryPoolMXBeans(); see "how can I know size of each generation in java heap with jmx". Some JVMs dynamically adjust the large object area (LOA): https://www.ibm.com/docs/en/sdk-java-technology/8?topic=SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/mm_allocation_loa.html
Question
Is there a way in Java to code as follows?
if (<I can reliably allocate an array sized X bytes on heap right now>) {
    ByteBuffer.allocate(X);
}
There’s a fundamental problem with the idea to do
if (<I can reliably allocate an array sized X bytes on heap right now>) {
    ByteBuffer.allocate(X);
}
known as “check-then-act” anti-pattern. Regardless of how the check in the if’s condition is supposed to work, you need to ensure that it doesn’t change between the check and the subsequent action, i.e. the allocation.
To ensure that the result doesn't change, you'd not only need to stop all other threads of the same JVM from performing allocations (or concurrent garbage collection from completing) but also prevent all other processes on the same machine from allocating memory, as it is possible that the operating system did not reserve memory for your JVM exclusively but still allows other processes to take it right at this point.
The condition itself has the challenges already named in your question and, as you said yourself, all this fiddling with implementation-specific memory regions might be moot when the JVM is capable of reconfiguring them on the fly. Since this is usually done in response to the result of a garbage collection, you'd need to perform a full garbage collection first to determine the resulting situation. Even then, you could only be sure that another GC won't change the situation if you could stop all other threads and processes from allocating.
And on some JVMs the only way to reliably trigger a garbage collection, is to perform an actual allocation.
So you need a way to atomically perform the check followed by an actual allocation, one that either guarantees the memory stays available to you no matter what happens in the environment, or answers that the memory is not available. This mechanism does exist: just call ByteBuffer.allocate(X). If it completes normally, the returned reference ensures that the memory stays available as long as you keep it; otherwise, the thrown OutOfMemoryError signals the unavailability of the memory. Since this mechanism exists, there is no reason to provide a second one with the same outcome.
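A minimal sketch of that idea (the class and helper names are illustrative): the allocation itself is the atomic check, and the OutOfMemoryError is the "not available" answer. Catching OutOfMemoryError is normally discouraged, but here the error is precisely the signal being asked for.

```java
import java.nio.ByteBuffer;

// The allocation itself is the only reliable "check": if allocate()
// returns, the memory is yours for as long as you hold the reference;
// if it throws, the memory was not available at that moment.
public class TryAllocate {
    static ByteBuffer tryAllocate(int size) {
        try {
            return ByteBuffer.allocate(size); // heap-backed buffer
        } catch (OutOfMemoryError e) {
            return null; // signal unavailability to the caller
        }
    }
}
```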
No, there is no reliable way to do this in Java.
There are several ways to get estimates or best-effort guesses for the available memory, but nothing reliable. Also note that even if there were such a thing, another thread could change the available amount between the condition and the call to allocate.
This related answer contains a way to get such an estimate, and also explains some of the reasons why this can not be reliable.
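For completeness, the usual best-effort estimate comes from the Runtime methods. The helper below is only a sketch (the class name is made up), and the number it returns is a snapshot, not a guarantee.

```java
// Best-effort snapshot of heap memory that could still be allocated.
// Another thread or a GC can change these numbers immediately, so this
// is only an estimate, never a guarantee.
public class HeapEstimate {
    static long estimateAvailableHeap() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();        // -Xmx limit (or Long.MAX_VALUE)
        long reserved = rt.totalMemory(); // heap currently reserved by the JVM
        long free = rt.freeMemory();      // free space within the reserved heap
        return max - reserved + free;     // unreserved headroom + free space
    }
}
```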
I'm allocating a lot of byte buffers. After I'm done with them, I set all references to null. This is supposedly the "correct" way to release ByteBuffers: dereference them and let the GC clean them up? I also call System.gc() to try to help it along.
Anyway, I create a bunch of buffers and dereference them; but after "some time" I get all sorts of memory errors: java.lang.OutOfMemoryError: Direct buffer memory
I can increase MaxDirectMemorySize, but it just delays the above error.
I'm 99% positive I don't have anything referencing the old ByteBuffers. Is there a way to check this, to see what the heck still holds a ByteBuffer?
You can use a tool like MAT (the free Eclipse Memory Analyzer) to see what is keeping your byte buffer alive by doing some heap-dump analysis.
Another way I can think of is to wrap your byte buffer in something else that has a finalizer method.
Also, System.gc() does not guarantee that finalizers will be executed; you need to call System.runFinalization() to increase the likelihood.
Setting the references to null is the correct way to let the garbage collector know that you are finished with that object. There must still be some other dangling reference. The best tool I have found for finding memory leaks is YourKit. A free alternative that is also very good is VisualVM from the JDK.
Remember that the slice() operation creates a new byte buffer that references the first one.
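To illustrate that point with a small self-contained example: the slice shares the parent's backing array, so writes through the slice are visible in the original, and holding on to the slice keeps the whole backing array reachable.

```java
import java.nio.ByteBuffer;

// slice() creates a view over the same backing array: writes through
// the slice show up in the original, and keeping a reference to the
// slice keeps the original buffer's memory from being collected.
public class SliceDemo {
    public static void main(String[] args) {
        ByteBuffer original = ByteBuffer.allocate(1024);
        original.position(512);
        ByteBuffer view = original.slice(); // shares memory from index 512 on

        view.put(0, (byte) 42);             // write through the slice...
        System.out.println(original.get(512)); // ...visible in the original: 42
    }
}
```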
This is a problem with older versions of Java. Recent versions of Java 6 call System.gc() before throwing an OutOfMemoryError. If you don't want to trigger a GC, you can release the direct memory manually on the Oracle/Sun JVM (DirectBuffer here is the internal sun.nio.ch.DirectBuffer interface) with
((DirectBuffer) buffer).cleaner().clean();
However, a better approach is to recycle the direct buffers yourself, so that doing this is not so critical. (Creating direct ByteBuffers is relatively expensive.)
A direct java.nio.ByteBuffer is (by definition) stored outside of the Java heap. Its native memory is not freed until a GC runs on the heap and collects the buffer object. So you can imagine a situation where heap usage is low, and therefore no GC is needed, but non-heap memory keeps being allocated and runs out of bounds.
Based on a very interesting read: http://www.ibm.com/developerworks/library/j-nativememory-linux/
"The pathological case would be that the native heap becomes full and one or more direct ByteBuffers are eligible for GC (and could be freed to make some space on the native heap), but the Java heap is mostly empty so GC doesn't occur."
Project: Java, JNI (C++), Android.
I'm going to manage a native C++ object's lifetime by creating a managed wrapper class, which will hold a pointer to the native object (as a long member) and will delete the native object in its overridden finalize() method. See this question for details.
The C++ object does not consume other types of resources, only memory. The memory footprint of the object is not extremely high, but it is substantially higher than the 64 bits of a long in Java. Is there any way to tell Java's GC that my wrapper is responsible for more than just a long value, and that it's not a good idea to create millions of such objects before running garbage collection? In .NET there is the GC.AddMemoryPressure() method, which exists for exactly this purpose. Is there an equivalent in Java?
After some more googling, I've found a good article from IBM Research Center.
Briefly, they recommend using the Java heap instead of the native heap for native objects. This way the memory pressure on the JVM's garbage collector realistically reflects the native objects referenced from Java code through handles.
To achieve this, one needs to override the default C++ heap allocation and deallocation functions, operator new and operator delete. In operator new, if the JVM is available (JNI_OnLoad has already been called), one calls NewByteArray and GetByteArrayElements, which yield the allocated memory needed. To protect the created byte array from being garbage collected, one also needs to create a NewGlobalRef to it and store it, e.g. in the same allocated memory block; in that case, one allocates as much memory as requested plus room for the reference. In operator delete, one calls DeleteGlobalRef and ReleaseByteArrayElements. If the JVM is not available, one uses the native malloc and free functions instead.
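On the Java side, on newer JVMs (Java 9+), a wrapper like the one planned in the question is usually built on java.lang.ref.Cleaner instead of the deprecated finalize(). The sketch below simulates the native allocation with a counter; in real code, nativeNew/nativeDelete would be JNI calls, and the class name is made up for the example.

```java
import java.lang.ref.Cleaner;

// Sketch of a managed wrapper for a native resource using Cleaner
// (Java 9+) rather than finalize(). The "native" calls are simulated
// with a counter here; in real code they would be JNI functions.
public class NativeWrapper implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    static long liveNativeObjects = 0; // stands in for the native heap
    private static long nativeNew()            { return ++liveNativeObjects; }
    private static void nativeDelete(long ptr) { liveNativeObjects--; }

    // The cleanup action must not reference the wrapper itself,
    // or the wrapper would never become unreachable.
    private static final class State implements Runnable {
        private final long ptr;
        State(long ptr) { this.ptr = ptr; }
        public void run() { nativeDelete(ptr); }
    }

    private final long ptr;
    private final Cleaner.Cleanable cleanable;

    public NativeWrapper() {
        this.ptr = nativeNew();
        this.cleanable = CLEANER.register(this, new State(ptr));
    }

    @Override
    public void close() { cleanable.clean(); } // deterministic release
}
```

Preferring try-with-resources (close()) over waiting for the Cleaner also sidesteps the memory-pressure problem from the question, since the native memory is released deterministically.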
I believe that native memory is allocated outside the scope of Java's heap. Meaning, you don't have to worry about your allocation taking memory away from the amount you reserved using -Xmx<size>.
That being said, you could use ByteBuffer.allocateDirect() to allocate a buffer and GetDirectBufferAddress to access it from your native code. You can control the size of the direct memory heap using -XX:MaxDirectMemorySize=<size>.
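A small sketch of the Java side of that (the class name is illustrative); the native side would fetch the buffer's address with JNI's GetDirectBufferAddress.

```java
import java.nio.ByteBuffer;

// A direct buffer is backed by native memory outside the Java heap,
// so it counts against -XX:MaxDirectMemorySize rather than -Xmx.
// Native code can reach the same memory via GetDirectBufferAddress.
public class DirectDemo {
    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(1 << 20); // 1 MB
        System.out.println(direct.isDirect()); // true: lives off-heap
    }
}
```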
I have an application that, basically, creates a new byte array (less than 1 KB), stores some data in it, and after a few seconds (generally less than 1 minute, but some data is stored for up to 1 hour) writes it to disk, after which the data becomes garbage. Approximately 400 packets per second are created. I have read some articles saying not to worry about the GC, especially for quickly created and released memory (on Java 6).
Long GC runs cause problems in my application.
I set some GC parameters (a bigger Xmx and ParallelGC); this decreased the Full GC time, but not enough yet. So I have two ideas:
Should I focus on GC parameters or create a byte-array memory pool mechanism? Which one is better?
The frequency of GC runs depends on the object size, but the cost (the clean-up time) depends more on the number of objects. I suspect the long-lived arrays are being copied between the spaces until they end up in the old space and are finally discarded. Cleaning the old gen is relatively expensive.
I suggest you try using ByteBuffer to store data. These are like byte[] but have a variable effective size and can be slightly more efficient if you can use direct byte buffers with NIO. Pre-allocating your buffers can be more efficient, though it can waste virtual memory.
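A minimal sketch of that pre-allocation idea, assuming each packet fits in a fixed-size buffer (sizes and names here are illustrative): allocate the buffer once and cycle through clear()/put()/flip() for every packet, so the steady state allocates nothing.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Allocate once, reuse for every packet: put the data, flip() to
// drain it, then clear() to reset position/limit for the next packet.
// In steady state this creates no garbage at all.
public class PacketBuffer {
    public static void main(String[] args) {
        ByteBuffer packet = ByteBuffer.allocate(1024); // one-time allocation

        for (int i = 0; i < 3; i++) {
            packet.clear();                   // reset for the next packet
            packet.put(("data-" + i).getBytes(StandardCharsets.UTF_8));
            packet.flip();                    // switch to draining mode
            // ... write `packet` to disk here ...
        }
    }
}
```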
BTW: The direct byte buffers use little heap space as they use memory in the "C" space.
I suggest you do some analysis into why GC is not working well enough for you. You can use jmap to dump out the heap and then use jhat or Eclipse Memory Analyser to see what objects are living in it. You might find that you are holding on to references that you no longer need.
The GC is very clever and you could actually make things worse by trying to outsmart it with your own memory management code. Try tuning the parameters and maybe you can try out the new G1 Garbage Collector too.
Also, remember, that GC loves short-lived, immutable objects.
Use a profiler to identify the problematic code snippet.
Try WeakReferences.
Suggest a GC algorithm to the VM:
-Xgc:parallel
Set a big heap and shared memory:
-XX:+UseISM -XX:+AggressiveHeap
Set the following for garbage collection:
-XX:SurvivorRatio=8
This may help
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/perform/JVMTuning.html#wp1130305