Can I find out how much memory remains for my program? - java

My program does many tasks that need a lot of memory, and I can't know exactly when I need to stop it; but when there is very little memory left, I can force it to stop using resources. So can I find out how much memory (in bytes) my program can still use?
P/s: There's NO way to release the process memory. The tasks need as much memory as possible; that is how it works (and there is no garbage for the collector, since the old objects are still needed).

Try something like:
Debug.MemoryInfo memoryInfo = new Debug.MemoryInfo();
Debug.getMemoryInfo(memoryInfo);
// the getTotal* methods return kilobytes as int, so divide by a double
// to keep the %.2f conversions from throwing at runtime
String memMessage = String.format(
        "Memory: Pss=%.2f MB, Private=%.2f MB, Shared=%.2f MB",
        memoryInfo.getTotalPss() / 1000.0,
        memoryInfo.getTotalPrivateDirty() / 1000.0,
        memoryInfo.getTotalSharedDirty() / 1000.0);
You can read more at this blog: http://huenlil.pixnet.net/blog/post/26872625

http://www.javaspecialists.eu/archive/Issue029.html
http://www.exampledepot.com/egs/java.lang/GetHeapSize.html

public static long getCurrentFreeMemoryBytes() {
    long heapSize = Runtime.getRuntime().totalMemory();      // heap currently committed to the JVM
    long heapRemaining = Runtime.getRuntime().freeMemory();  // unused portion of the committed heap
    long nativeUsage = Debug.getNativeHeapAllocatedSize();   // native heap allocated by this process
    // maximum heap minus what the Java heap and the native heap already consume
    return Runtime.getRuntime().maxMemory() - (heapSize - heapRemaining) - nativeUsage;
}
While not perfect, it should do the trick for the most part.

Check out the tools that Android provides for memory tracking here.
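If you also want the system-wide picture on Android, rather than just your own heap, ActivityManager can report it. A minimal sketch, assuming it is called with some Context such as an Activity (the method name is illustrative):

import android.app.ActivityManager;
import android.content.Context;

// Sketch: query system-wide memory state via ActivityManager.MemoryInfo.
public static long availableSystemBytes(Context context) {
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    ActivityManager.MemoryInfo mi = new ActivityManager.MemoryInfo();
    am.getMemoryInfo(mi);
    // mi.lowMemory becomes true once mi.availMem drops below mi.threshold
    return mi.availMem;
}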

Related

Drop part of a List<> when encountering OutOfMemoryException

I'm writing a program that is supposed to continually push generated data into a List sensorQueue. The side effect is that I will eventually run out of memory. When that happens, I'd like to drop parts of the list, in this example the first, or older, half. I imagine that if I encounter an OutOfMemoryException, I won't be able to just use sensorQueue = sensorQueue.subList((sensorQueue.size() / 2), sensorQueue.size());, so I came here looking for an answer.
My code:
public static void pushSensorData(String sensorData) {
    try {
        sensorQueue.add(parsePacket(sensorData));
    } catch (OutOfMemoryError e) {
        System.out.println("Backlog full");
        //TODO: Cut the sensorQueue in half to make room
    }
    System.out.println(sensorQueue.size());
}
Is there an easy way to detect an impending OutOfMemoryException then?
You can use something like the below to determine MAX memory and USED memory. Using that information you can decide the next set of actions in your programme, e.g. reduce its size or drop some elements.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

final int MEGABYTE = (1024 * 1024);
MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
long maxMemory = heapUsage.getMax() / MEGABYTE;
long usedMemory = heapUsage.getUsed() / MEGABYTE;
Hope this helps!
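Tying this back to the question's sensorQueue, a rough sketch of such an action; the 80% threshold is an illustrative choice, not a recommendation:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Sketch: drop the older half of the backlog once heap usage crosses a threshold.
// Note: getMax() can return -1 if no heap limit is defined.
MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
if (heap.getUsed() > 0.8 * heap.getMax()) {
    sensorQueue.subList(0, sensorQueue.size() / 2).clear();
}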
The problem with subList is that it creates a sublist while keeping the original list in memory. However, ArrayList and other extensions of AbstractList have removeRange(int fromIndex, int toIndex), which removes elements from the current list in place and so doesn't require additional memory.
For other List implementations there is the similar remove(int index), which you can call repeatedly for the same purpose.
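One caveat: removeRange is protected, so calling it from outside requires a small subclass. A minimal sketch (the class and method names are illustrative):

import java.util.ArrayList;

// removeRange is protected in AbstractList/ArrayList, so expose it via a subclass.
class TrimmableList<E> extends ArrayList<E> {
    // removes elements [0, count) in place, without allocating a copy
    public void dropHead(int count) {
        removeRange(0, count);
    }
}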
I think your idea is severely flawed (sorry).
There is no OutOfMemoryException; there is only OutOfMemoryError! Why is that important? Because errors leave the app in an unstable state. Well, I'm not that sure about that claim in general, but it definitely holds for OutOfMemoryError, because there is no guarantee that you will be able to catch it! You can consume all of the memory within your try-catch block, and the OutOfMemoryError will be thrown somewhere in JDK code. So your catching is pointless.
And what is the reason for this anyway? How many messages do you want in the list? Say your message is 1 MB and your heap is 1000 MB. So, if we stop considering other classes, your heap size defines that your list will contain up to 1000 messages, right? Wouldn't it be easier to set the heap sufficiently big for your desired number of messages, and specify the message count in an easier, integral form? And if your answer is "no", then you still cannot catch OutOfMemoryError reliably, so I'd advise that your answer rather be "yes".
If you really need to consume everything possible, then checking memory usage as a percentage, as @fabsas recommended, could be a way. But I'd go with the integral definition -- it is easier to manage. Your list will contain up to N messages; a minimal sketch follows.
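A minimal sketch of that integral approach, with an illustrative capacity and a plain deque standing in for the message list:

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: cap the backlog at a fixed count instead of reacting to OutOfMemoryError.
static final int MAX_MESSAGES = 1000;          // illustrative capacity
static final Deque<String> backlog = new ArrayDeque<>();

static void push(String message) {
    if (backlog.size() >= MAX_MESSAGES) {
        backlog.removeFirst();                 // drop the oldest entry to make room
    }
    backlog.addLast(message);
}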
You can drop a range of elements from an ArrayList using subList:
list.subList(from, to).clear();
Where from is the first index of the range to be removed and to is the exclusive upper bound (the index just past the last element removed). In your case, you can do something like:
sensorQueue.subList(0, sensorQueue.size() / 2).clear();
Note that subList returns a view of the original list, so clearing the view removes that range from the list itself.

Is there a maximum number of files that a Java application can have open at one moment?

I am writing a Java application that needs to work with many files open at the same time. I know how to get the maximum memory available for the files to be read, but is there such a thing as having too many files open, regardless of available memory?
I've seen questions about the "too many open files" error, but reading the documentation I am not able to reproduce the conditions, and therefore I am not sure how to handle such an error/exception.
I've looked into java.io.File and java.nio.Files for such errors or exceptions. Of course I looked for it online, especially on Stack Overflow, and the search for similar questions did not return what I need.
public static int MAXIMUM_NUMBER_OF_FILES;
/* maximum file number inside a folder depends on the system */
/* maximum number of open files inside RAM ? */

//methods
public static long getAvailableMemory() {
    System.gc(); //clear as much as possible
    Runtime r = Runtime.getRuntime();
    long maxMemory = r.maxMemory();     //add case if this is greater than Long.MAX_VALUE
    long totalMemory = r.totalMemory(); //add case if this is greater than Long.MAX_VALUE
    long freeMemory = r.freeMemory();
    return maxMemory - (totalMemory - freeMemory);
}

public static long determineChunkSize(final long fileSize) {
    return getAvailableMemory() / MAXIMUM_NUMBER_OF_FILES;
}
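As a side note on detecting the limit itself: on Unix-like systems the HotSpot JVM exposes the process file-descriptor numbers through a com.sun.management bean. A sketch, assuming a HotSpot JVM (the instanceof check simply fails elsewhere):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

// Sketch: report how many file descriptors are open versus the OS-imposed maximum.
public static void printFileDescriptorLimits() {
    Object os = ManagementFactory.getOperatingSystemMXBean();
    if (os instanceof UnixOperatingSystemMXBean) {
        UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
        System.out.println("open: " + unixOs.getOpenFileDescriptorCount()
                + " / max: " + unixOs.getMaxFileDescriptorCount());
    }
}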

Measure memory usage of a certain data structure

I'm trying to measure the memory usage of my own data structure in my Tomcat Java EE application at various levels of usage.
To measure the memory usage I have tried two strategies:
Runtime freeMemory and totalMemory:
System.gc(); //about 20 times
long start = Runtime.getRuntime().freeMemory();
useMyDataStructure();
long end = Runtime.getRuntime().freeMemory();
System.out.println(start - end);
MemoryPoolMXBean.getPeakUsage():
long before = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
List<MemoryPoolMXBean> memorymxbeans = ManagementFactory.getMemoryPoolMXBeans();
for (MemoryPoolMXBean memorybean : memorymxbeans) {
    memorybean.resetPeakUsage();
}
useMyDataStructure();
for (MemoryPoolMXBean memorybean : memorymxbeans) {
    MemoryUsage peak = memorybean.getPeakUsage();
    System.out.println(memorybean.getName() + ": " + (peak.getUsed() - before));
}
Method 1 does not output reliable data at all. The data is useless.
Method 2 outputs negative values. Besides, its getName() tells me it's outputting Code Cache, PS Eden Space, PS Survivor Space and PS Old Gen separately.
How can I acquire somewhat consistent memory usage numbers before and after my useMyDataStructure() call in Java? I do not wish to use VisualVM; I prefer to capture the number in a long and write it to a file myself.
Thanks in advance.
edit 1:
useMyDataStructure() in the above examples was an attempt to simplify the code. What's really there:
int key = generateKey();
MyOwnObject obj = makeAnObject();
MyContainerClass.getSingleton().addToHashMap(key, obj);
So in essence I'm really trying to measure how much memory the HashMap<Integer, MyOwnObject> in MyContainerClass takes. I will use this memory measurement to perform an experiment where I fill up both the HashMap and MyOwnObject instances.
First of all, sizing objects in Java is non-trivial (as explained very well here).
If you wish to know the size of a particular object, there are at least two open source libraries that will do the math for you: java.sizeof and javabi-sizeof.
Now, as for your specific test: System.gc() is mostly ignored by modern (HotSpot) JVMs, no matter how many times you call it. Also, is it possible that your useMyDataStructure() method does not retain a reference to the object(s) it creates? In that case, measuring free memory after calling it is no good, as any allocated objects might have been cleared out.
You could try https://github.com/jbellis/jamm, which works great for me.
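jamm measures through java.lang.instrument, so the JVM must be launched with -javaagent pointing at the jamm jar. A usage sketch based on jamm's documented MemoryMeter API (verify against the version you use; map stands for the HashMap from the question):

import org.github.jamm.MemoryMeter;

// Sketch: requires the JVM flag -javaagent:<path-to>/jamm.jar at startup.
MemoryMeter meter = new MemoryMeter();
long shallow = meter.measure(map);      // size of the map object itself
long deep    = meter.measureDeep(map);  // the map plus everything it references
System.out.println("shallow=" + shallow + " bytes, deep=" + deep + " bytes");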

Java: Filling in-memory sorted batches

So I'm using Java to do multi-way external merge sorts of large on-disk files of line-delimited tuples. Batches of tuples are read into a TreeSet, which is then dumped into an on-disk sorted batch. Once all of the data has been exhausted, these batches are merge-sorted to the output.
Currently I'm using magic numbers for figuring out how many tuples we can fit into memory. This is based on a static figure indicating roughly how many tuples fit per MB of heap space, and on how much heap space is available, using:
long max = Runtime.getRuntime().maxMemory();
long used = Runtime.getRuntime().totalMemory();
long free = Runtime.getRuntime().freeMemory();
long space = free + (max - used);
However, this does not always work so well, since we may be sorting tuples of different lengths (for which the static tuples-per-MB figure might be too conservative), and I now want to use flyweight patterns to jam more in there, which may make the figure even more variable.
So I'm looking for a better way to fill the heap-space to the brim. Ideally the solution should be:
reliable (no risk of heap-space exceptions)
flexible (not based on static numbers)
efficient (e.g., not polling runtime memory estimates after every tuple)
Any ideas?
Filling the heap to the brim might be a bad idea due to garbage collector thrashing. (As the memory gets nearly full, the efficiency of garbage collection approaches 0, because the effort for a collection depends on heap size, but the amount of memory freed depends on the size of the objects identified as unreachable.)
However, if you must, can't you simply do it as follows?
for (;;) {
    long freeSpace = getFreeSpace();
    if (freeSpace < 1000000) break;
    while (freeSpace > 0) {
        treeSet.add(readRecord());
        freeSpace -= MAX_RECORD_SIZE;
    }
}
The calls to discover the free memory will be rare, so they shouldn't tax performance much. For instance, if you have 1 GB of heap space, leave 1 MB empty, and MAX_RECORD_SIZE is ten times the average record size, getFreeSpace() will be invoked a mere log(1000) / -log(0.9) ~= 66 times.
Why bother calculating how many items you can hold? How about letting Java tell you when you've used up all your memory, catching the error, and continuing? For example,
// prepare output medium now so we don't need to worry about having enough
// memory once the treeset has been filled.
BufferedWriter writer = new BufferedWriter(new FileWriter("output"));

Set<String> set = new TreeSet<>(); // String stands in for your tuple type
int linesRead = 0;
{
    BufferedReader reader = new BufferedReader(new FileReader("input"));
    try {
        String line = reader.readLine();
        while (line != null) {
            set.add(parseTuple(line));
            linesRead += 1;
            line = reader.readLine();
        }
        // end of file reached
        linesRead = -1;
    } catch (OutOfMemoryError e) {
        // while loop broken
    } finally {
        reader.close();
    }
    // since reader and line were declared in a block their resources will
    // now be released
}

// output treeset to file
for (Object o : set) {
    writer.write(o.toString());
}
writer.close();

// use linesRead to find position in file for next pass
// or continue on to next file, depending on value of linesRead
If you still have trouble with memory, just make the reader's buffer extra large so as to reserve more memory.
The default size for the buffer in a BufferedReader is 8192 characters, i.e. about 16 KB. So when you finish reading you will release upwards of 16 KB of memory. After this, your additional memory needs will be minimal. You need enough memory to create an iterator for the set; let's be generous and assume 200 bytes. You will also need memory to store the string output of your tuples (but only temporarily). You say the tuples contain about 200 characters. Let's double that to account for separators -- 400 characters, which is 800 bytes. So all you really need is an additional 1 KB, and you're fine, as you've just released about 16 KB.
The reason you don't need to worry about the memory used to store the string output of your tuples is that they are short-lived and only referred to within the output for loop. Note that the Writer will copy the contents into its buffer and then discard the string. Thus, the next time the garbage collector runs, the memory can be reclaimed.
I've checked, and an OOME in add will not leave a TreeSet in an inconsistent state: the memory allocation for a new Entry (the internal implementation for storing a key/value pair) happens before the internal representation is modified.
You can really fill the heap to the brim using direct memory writing (it does exist in Java!). It's in sun.misc.Unsafe, but isn't really recommended for use. See here for more details. I'd probably advise writing some JNI code instead, and using existing C++ algorithms.
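For the record, a sketch of what the Unsafe route looks like; the instance must be obtained reflectively because the constructor is private, and allocateMemory hands out native memory outside the garbage-collected heap, which the caller must free manually (misuse can crash the JVM):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Sketch: raw native allocation via sun.misc.Unsafe (not recommended, as noted above).
public static void unsafeDemo() throws Exception {
    Field f = Unsafe.class.getDeclaredField("theUnsafe");
    f.setAccessible(true);
    Unsafe unsafe = (Unsafe) f.get(null);

    long address = unsafe.allocateMemory(1024); // 1 KB of raw, uncollected memory
    unsafe.putByte(address, (byte) 42);
    byte value = unsafe.getByte(address);
    unsafe.freeMemory(address);                 // must be released by hand
    System.out.println(value);
}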
I'll add this as an idea I was playing around with, involving using a SoftReference as a "sniffer" for low memory.
SoftReference<byte[]> sniffer = new SoftReference<>(new byte[8192]);
while (iter.hasNext()) {
    tuple = iter.next();
    treeset.add(tuple);
    if (sniffer.get() == null) { // the sniffer was collected: memory is getting tight
        dump(treeset);
        treeset.clear();
        sniffer = new SoftReference<>(new byte[8192]);
    }
}
This might work well in theory, but I don't know the exact behaviour of SoftReference.
All soft references to softly-reachable objects are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError. Otherwise no constraints are placed upon the time at which a soft reference will be cleared or the order in which a set of such references to different objects will be cleared. Virtual machine implementations are, however, encouraged to bias against clearing recently-created or recently-used soft references.
I would like to hear feedback, as it seems to me like an elegant solution, although behaviour might vary between VMs.
Testing on my laptop, I found that the soft reference is cleared infrequently, but sometimes it is cleared too early, so I'm thinking of combining it with meriton's answer:
SoftReference<byte[]> sniffer = new SoftReference<>(new byte[8192]);
while (iter.hasNext()) {
    tuple = iter.next();
    treeset.add(tuple);
    if (sniffer.get() == null) {
        free = MemoryManager.estimateFreeSpace();
        if (free < MIN_SAFE_MEMORY) {
            dump(treeset);
            treeset.clear();
            sniffer = new SoftReference<>(new byte[8192]);
        }
    }
}
Again, thoughts welcome!

JNI_ENOMEM from JNI_CreateJavaVM when calling dll that uses JNI from VB6

I work on a legacy system that has a VB6 app that needs to call Java code. The solution we use is to have the VB app call a C++ dll that uses JNI to call the Java code. A bit funky, but it's actually worked pretty well. However, I'm moving to a new dev box, and I've just run into a serious problem with this. The built VB app works fine on the new box, but when I try to run it from VB, the dll fails to load the VM, getting a return code of -4 (JNI_ENOMEM) from JNI_CreateJavaVM.
Both the built app and VB are calling the exact same dll, and I've tried it with both Java 1.5 and 1.6. I've tried the suggestions here (redirecting stdout and stderr to files, adding a vfprintf option, adding an -Xcheck:jni option), but to no avail. I can't seem to get any additional information out of the jvm. As far as I can tell, the new box is configured pretty much the same as the old one (installed software, Path, Classpath, etc.), and both are running the same release of Windows Server 2003. The new machine is an x64 box with more memory (4GB rather than 2GB), but it's running 32-bit Windows.
Any suggestions or ideas about what else to look into? Rewriting the whole thing in a more sane way is not an option -- I need to find a way to have the dll get the jvm to load without thinking that it's out of memory. Any help would be much appreciated.
OK, I've figured it out. As kschneid points out, the JVM needs a pretty large contiguous chunk of memory inside the application's memory space. So I used the Sysinternals VMMap utility to see what VB's memory looked like. There was, in fact, no large chunk of memory available, and there were some libraries belonging to Visio that were loaded at locations that seemed designed to fragment memory. It turns out that when I installed Visio on the new machine, it automatically installed the Visio UML add-in into VB. Since I don't use this add-in, I disabled it. With the add-in disabled, there was a large contiguous chunk of free memory available, and now the JVM loads just fine.
FYI - I found the following extremely useful article: https://forums.oracle.com/forums/thread.jspa?messageID=6463655
I'm going to repeat some insanely useful code here because I'm not sure that I trust Oracle to keep the above forum around.
When I set up my JVM, I use a call to getMaxHeapAvailable(), then set my heap space accordingly (-Xmx<n>m). This works great for workstations with less RAM available, without having to penalize users with large amounts of RAM.
#include <windows.h>

#define NUM_BYTES_PER_MB (1024 * 1024)

bool canAllocate(DWORD bytes)
{
    LPVOID lpvBase = VirtualAlloc(NULL, bytes, MEM_RESERVE, PAGE_READWRITE);
    if (lpvBase == NULL) return false;
    VirtualFree(lpvBase, 0, MEM_RELEASE);
    return true;
}

int getMaxHeapAvailable(int permGenMB, int maxHeapMB)
{
    DWORD originalMaxHeapBytes = 0;
    DWORD maxHeapBytes = 0;
    int numMemChunks = 0;
    SYSTEM_INFO sSysInfo;
    DWORD maxPermBytes = permGenMB * NUM_BYTES_PER_MB; // Perm space is in addition to the heap size
    DWORD numBytesNeeded = 0;

    GetSystemInfo(&sSysInfo);

    // jvm aligns as follows:
    // quoted from size_t GenCollectorPolicy::compute_max_alignment() of jdk 7 hotspot code:
    //   The card marking array and the offset arrays for old generations are
    //   committed in os pages as well. Make sure they are entirely full (to
    //   avoid partial page problems), e.g. if 512 bytes heap corresponds to 1
    //   byte entry and the os page size is 4096, the maximum heap size should
    //   be 512*4096 = 2MB aligned.
    // card_size computation from CardTableModRefBS::SomePublicConstants of jdk 7 hotspot code
    int card_shift = 9;
    int card_size = 1 << card_shift;

    DWORD alignmentBytes = sSysInfo.dwPageSize * card_size;

    maxHeapBytes = maxHeapMB * NUM_BYTES_PER_MB;
    // round the request up to the next alignment boundary
    maxHeapBytes += (alignmentBytes - maxHeapBytes % alignmentBytes) % alignmentBytes;
    numMemChunks = maxHeapBytes / alignmentBytes;
    originalMaxHeapBytes = maxHeapBytes;

    // loop and decrement requested amount by one chunk
    // until the available amount is found
    numBytesNeeded = maxHeapBytes + maxPermBytes;
    // 50 MB is an overhead fudge factor per https://forums.oracle.com/forums/thread.jspa?messageID=6463655
    // (they had 28, I'm bumping it 'just in case')
    while (!canAllocate(numBytesNeeded + 50 * NUM_BYTES_PER_MB) && numMemChunks > 0)
    {
        numMemChunks--;
        maxHeapBytes = numMemChunks * alignmentBytes;
        numBytesNeeded = maxHeapBytes + maxPermBytes;
    }

    if (numMemChunks == 0) return 0;

    // we can allocate the requested size, return it now
    if (maxHeapBytes == originalMaxHeapBytes) return maxHeapMB;

    // calculate the new MaxHeapSize in megabytes
    return maxHeapBytes / NUM_BYTES_PER_MB;
}
I had the same problem described by "the klaus" and read http://support.microsoft.com/kb/126962. I changed the registry as described in the mentioned article. I exaggerated my change to: "%SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=3072,3072,3072 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16"
The field to look at is "SharedSection=3072,3072,3072". It solved my problem, but I may see side effects because of this change.
