So I'm working on a project for my algorithms class. While researching online, I see that some people use an ArrayList<Integer> and some use an int[] array. My question is: which is better for a min heap, and why? The project requires me to keep the 10000 largest numbers from a very large list of numbers.
If you know the array size in advance, using a bare int[] array is faster. Of course, the performance difference is probably negligible -- but the idea is that ArrayList is internally implemented as an Object[] array, so you're saving yourself that overhead, plus the overhead of dealing with Integer vs int.
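If it helps to see the idea concretely, here is a minimal sketch of a fixed-capacity int[] min heap for keeping the top N values (the class name TopN and the capacity handling are my own, not from the question):

public class TopN {
    private final int[] heap;
    private int size;

    public TopN(int capacity) {
        heap = new int[capacity];
    }

    // Keeps the `capacity` largest values seen so far. heap[0] is always the
    // smallest kept value, so a new value only enters if it beats that minimum.
    public void offer(int value) {
        if (size < heap.length) {
            heap[size] = value;
            int i = size++;
            while (i > 0 && heap[(i - 1) / 2] > heap[i]) { // sift up
                swap(i, (i - 1) / 2);
                i = (i - 1) / 2;
            }
        } else if (value > heap[0]) {
            heap[0] = value; // replace the minimum and restore heap order
            siftDown(0);
        }
    }

    private void siftDown(int i) {
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left < size && heap[left] < heap[smallest]) smallest = left;
            if (right < size && heap[right] < heap[smallest]) smallest = right;
            if (smallest == i) return;
            swap(i, smallest);
            i = smallest;
        }
    }

    private void swap(int a, int b) {
        int t = heap[a]; heap[a] = heap[b]; heap[b] = t;
    }
}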
An int[] will consume less memory than an ArrayList<Integer>. Part of that is simply the overhead of the Integer wrapper, which adds ~16 bytes per instance. This video goes through the memory impact of various objects and collections in 32-bit and 64-bit JVMs. At about the 9:30 mark it covers the memory associated with each object, and at about the 11:15 mark it covers how much memory various types (including object references) take.
For an int[], you have 1 Object (the int[]) and it will actually contain all of the individual int values as contiguous memory.
For an ArrayList<Integer>, you have the ArrayList object, the Object[] object and all of the Integer objects. Additionally, the Object[] doesn't actually contain the Integer objects in contiguous memory, rather it contains object references in contiguous memory. The Integer objects themselves are elsewhere on the heap.
So the end result is that an ArrayList<Integer> requires ~6x the amount of memory as an int[]. The backing Object[] and the int[] take the same amount of memory (~40,000 bytes). The 10k Integer objects take ~20 bytes each for a total of 200,000 bytes. So the ArrayList will be a minimum of 240,000 bytes compared to the int[] at approximately 40,000 bytes.
Related
I have a use case where I need to store approximately 500 million key-value pairs in a single VM with 8 GB of heap. Key and value are both of type Long. The key is auto-incremented, starting from 1, 2, 3, and so on.
I build this Map[K-V] structure once at the start of the program, as an exclusive operation. Once built, it is used only for lookups; no updates or deletes are performed on it.
I have tried this with java.util.HashMap but, as expected, it consumes a lot of memory and the program fails with an OutOfMemoryError (heap usage exceeded).
I need some guidance on the following, which would help reduce the memory footprint; I am OK with some degradation in access performance.
What other alternatives (from the Java collections or other libraries) can be tried here?
What is a recommended way to measure the memory footprint of this map, for comparison purposes?
Just use a long[] or long[][].
500 million ascending keys is less than 2^31. And if you go over 2^31, use a long[][] where the first dimension is small and the second one is large.
(When the key type is an integer, you only need a complicated "map" data structure if the key space is sparse.)
The space wastage in a 1-D array is insignificant. Every Java array has a 12-byte header, and the object size is rounded up to a multiple of 8 bytes. So a 500 million entry long[] will take so close to 500 million x 8 bytes == 4 billion bytes that the difference doesn't matter.
However, a JVM typically cannot allocate a single object that takes up the entire available heap space. If virtual address space is at a premium, it would be advisable to use a 2-D array; e.g. new long[4][125_000_000]. This makes the lookups slightly more complicated, but you will most likely reduce the memory footprint by doing this.
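To make that concrete, here is a hedged sketch of the chunked layout (the chunk size comes from the new long[4][125_000_000] example above; the put/get names are my own). Note it needs a roughly 4 GB heap to actually run:

static final int CHUNK = 125_000_000;
static final long[][] data = new long[4][CHUNK];

static void put(long key, long value) {
    long idx = key - 1; // keys are auto-incremented starting at 1
    data[(int) (idx / CHUNK)][(int) (idx % CHUNK)] = value;
}

static long get(long key) {
    long idx = key - 1;
    return data[(int) (idx / CHUNK)][(int) (idx % CHUNK)];
}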
If you don't know the number of keys beforehand, you could do the same thing with a combination of arrays and ArrayList objects. But an ArrayList has the problem that if you don't set an (accurate) capacity, the memory utilization is liable to be suboptimal. And if you populate an ArrayList by appending to it, the instantaneous memory demand during an append can be as much as 3 times the list's current space usage.
There is no reason for using a Map in your case.
If you just have a start index and further indices are constant increments, just use a List:
List<Long> data = new ArrayList<>(510_000_000); // capacity should ideally not be reached; if it is, the array behind the ArrayList is reallocated and the allocated memory is doubled
data.add(1337L); // insert as often as you want
long value = data.get(1 - 1); // your indices start at 1, so subtract 1 when accessing
If you don't even add more elements and know the size from the start, an array will be even better:
long[] data = new long[510_000_000]; // capacity should surely not be reached; if it is, you will need to create a bigger array and copy all the data over
int currentIndex = 0;
data[currentIndex++] = 1337L; // insert as long as currentIndex is smaller than the array length
long value = data[1 - 1]; // your indices start at 1, so subtract 1 when accessing
Note that you should check the index (currentIndex) before inserting so that it is smaller than the array length.
When iterating, use currentIndex as the length instead of data.length, since after the insertions above currentIndex equals the number of elements actually stored.
Create an array with the size you need and whenever you need to access it, use arr[i-1] (-1 because your indices start at 1 instead of 0).
If you "just" have 500 million entries, you will not reach the integer limit and a simple array will be fine.
If you need more entries and have sufficient memory, use an array of arrays.
The memory footprint of using an array this big is the memory footprint of the data and a bit more.
However, if you don't know the size, you should use a higher length/capacity than you may need. If you use an ArrayList, the memory footprint will be doubled (temporarily tripled) whenever the capacity is reached, because it needs to allocate a bigger array.
A Map would need an object for each entry, plus an array of buckets referencing all those objects, which greatly increases the memory footprint. The growth in memory footprint when using a HashMap is even worse than with ArrayLists, as the underlying array is reallocated even before the Map is completely filled (due to the load factor).
But consider saving the data to an HDD/SSD if you need to store that much. In most cases, this works much better. You can use RandomAccessFile to access the data on the HDD/SSD at any point.
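A hedged sketch of that approach, assuming the values are written as packed 8-byte longs so that entry i sits at byte offset i * 8 (the class name and file layout are assumptions, not a prescribed format):

import java.io.IOException;
import java.io.RandomAccessFile;

class LongFileLookup {
    // Reads entry `index` (zero-based) from a file of packed 8-byte longs.
    static long readValue(String path, long index) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            file.seek(index * 8L); // each entry occupies exactly 8 bytes
            return file.readLong();
        }
    }
}

In practice you would keep the RandomAccessFile open across lookups instead of reopening it every time.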
I am parsing data where precision is not my main concern. I often get java.lang.OutOfMemoryError even if I use the maximum Java heap size. So my main concern here is memory usage and Java heap space. Should I use the double or float data type?
I consistently get OOM exceptions because I use a great number of ArrayLists with numbers.
Well that is your problem!
An ArrayList of N 32-bit floating point values takes at least [1] 20 * N bytes in a 32-bit JVM and 24 * N bytes in a 64-bit JVM [2].
An ArrayList of N 64-bit floating point values takes the same amount of space [3].
The above only accounts for the backing array and the list elements. If you have huge numbers of small ArrayList objects, the overhead of the ArrayList object itself may be significant. (Add 16 or 24 bytes for each ArrayList object.)
If you make use of dynamic resizing, this may generate object churn as the backing array grows. At some points, the backing array may be as much as twice as large as it needs to be.
By contrast:
An array of 32-bit floating point values takes approximately 4 * N bytes [4].
An array of 64-bit floating point values takes approximately 8 * N bytes [4].
There is no wastage due to dynamic resizing.
Solutions:
ArrayList<Float> versus ArrayList<Double> makes no difference. It is NOT a solution.
For maximal saving, use float[] or double[] depending on your precision requirements. Preallocate the arrays to hold the exact number of elements required.
If you want the flexibility of dynamic resizing there are 3rd-party libraries that implement space efficient lists of primitive types. Alternatively implement your own. However, you won't be able to use the standard List<...> API because that forces you down the path of using Float OR Double.
[1] - The actual space used depends on how the ArrayList was created and populated. If you pre-allocate an ArrayList with exactly the correct capacity, you will use the space stated above. If you build the list by repeatedly appending to an ArrayList with the default initial capacity, you will use on average 2 * N bytes of extra space on a 32-bit JVM. This is due to the heuristic that ArrayList uses to grow the backing array when it is full.
[2] - On a 64-bit JVM, a pointer occupies 8 bytes rather than 4 ... unless you are using compressed oops.
[3] - The reason it takes the same number of bytes is that on a typical JVM a Float and a Double are both 16 bytes due to heap node padding.
[4] - There is a header overhead of (typically) 12 bytes per array, and the array's heap node size is padded to a multiple of 8 bytes.
If your memory usage is related to a huge amount (many millions) of floating-point numbers (which can be verified with a decent memory profiler), then you're most probably storing them in some data structures like arrays or lists.
Recommendations (I guess, you are already following most of them...):
Prefer float over double if number range and precision are sufficient, as that consumes only half the size.
Do not use the java.lang.Float or java.lang.Double classes for storage, as they have a considerable memory overhead compared to the naked scalar values.
Be sure to use arrays, not collections like java.util.List, as collections store boxed java.lang.Float instances instead of the naked numbers.
But above that, have a decent memory profiler show you which instances occupy most of your memory. Maybe there are other memory consumers besides the float/double data.
EDIT:
The OP's recent comment "I consistently get OOM exceptions because I use a great number of ArrayLists with numbers" makes it clear. ArrayList<Float> wastes a lot of memory when compared to float[] (Stephen C gave detailed numbers in his answer), but gives the benefit of dynamic resizing.
So, I see the following possibilities:
If you can tell the array size from the beginning, then immediately use float[] arrays.
If you need the dynamic size while initializing instances, use ArrayList<Float> while building one object (when size still increases), and then copy the contents to a float[] array for long-term storage. Then the wasteful ArrayLists exist only for a limited timespan.
If you need dynamic sizes over the whole lifespan of your data, create your own FloatArrayList class based on a float[] array, resembling the ArrayList<Float> as far as your code needs it (that can range from a very shallow implementation up to a full-featured List, maybe based on AbstractList).
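As a rough illustration of that third option, here is a minimal sketch of such a class (the name FloatArrayList, the initial capacity, and the doubling growth are my own choices, not a standard JDK API):

import java.util.Arrays;

public class FloatArrayList {
    private float[] data = new float[16];
    private int size;

    public void add(float value) {
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length * 2); // grow by doubling when full
        }
        data[size++] = value;
    }

    public float get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("index: " + index + ", size: " + size);
        }
        return data[index];
    }

    public int size() {
        return size;
    }

    public float[] toArray() {
        return Arrays.copyOf(data, size); // trimmed copy for long-term storage
    }
}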
I'm trying to get a general idea of the memory cost difference between an Integer array and int array. While there seems to be a lot of information out there about the differences between a primitive int and Integer object, I'm still a little confused as to how to calculate the memory costs of an int[] and Integer[] array (overhead costs, padding, etc).
Any help would be appreciated. Thanks!
In addition to storing the length of the array, an array of ints needs space for N 4-byte elements, while an array of Integers needs space for N references, whose size is platform-dependent; commonly, that would be 4 bytes on 32-bit platforms or 8 bytes on 64-bit platforms.
As far as int[] goes, there is no additional memory required to store data. Integer[], on the other hand, needs objects of type Integer, which could be all distinct or shared (e.g. through interning of small numbers implemented by the Java platform itself). Therefore, Integer[] requires up to N additional objects, each one containing a 4-byte int.
Assuming that all Integers in an Integer[] array are distinct objects, the array and its contents will take two to three times the space of an int[] array. On the other hand, if all objects are shared (and the memory cost of the shared objects is therefore not attributed to this array), there may be no additional overhead at all on 32-bit platforms, or a 2x overhead on 64-bit platforms.
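The sharing comes from the auto-boxing cache: Integer.valueOf (which auto-boxing uses) returns cached instances for small values, by default -128 to 127. A quick illustration:

Integer a = Integer.valueOf(100);
Integer b = Integer.valueOf(100);
System.out.println(a == b); // true: both refer to the same cached instance

Integer c = Integer.valueOf(1000);
Integer d = Integer.valueOf(1000);
System.out.println(c == d); // false: distinct objects outside the cache range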
Here is a comparison on jdk6u26 of the size of an array of 1024 Integers as opposed to 1024 ints. Note that in the case of an Integer[] array containing low-valued Integers, these can be shared with other uses of those Integers in the JVM by the auto-box cache.
I'm using a java program to get some data from a DB. I then calculate some numbers and start storing them in an array. The machine I'm using has 4 gigs of RAM. Now, I don't know how many numbers there will be in advance, so I use an ArrayList<Double>. But I do know there will be roughly 300 million numbers.
So, since one double is 8 bytes a rough estimate of the memory this array will consume is 2.4 gigs (probably more because of the overheads of an ArrayList). After this, I want to calculate the median of this array and am using the org.apache.commons.math3.stat.descriptive.rank.Median library which takes as input a double[] array. So, I need to convert the ArrayList<Double> to double[].
I did see many questions where this is raised, and they all mention there is no way around looping through the entire array. Now this is fine, but since both objects are then kept in memory, this brings my memory requirements up to 4.8 gigs. Now we have a problem, since the total RAM available is 4 gigs.
First of all, is my suspicion correct that the program will at some point give me a memory error (it is currently running)? And if so, how can I calculate the median without having to allocate double the memory? I want to avoid sorting the array, since the median can be found in O(n) with a selection algorithm, while sorting takes O(n log n).
Your problem is even worse than you realize, because ArrayList<Double> is much less efficient than 8 bytes per entry. Each entry is actually an object, to which the ArrayList keeps an array of references. A Double object typically takes 16 bytes (an object header plus the 8-byte double itself), and the reference to it adds another 4, bringing the total up to at least 20 bytes per entry, even excluding overhead for memory management and such.
If the constraints were a little wider, you could implement your own DoubleArray that is backed by a double[] but knows how to resize itself. However, the resizing means you'll have to keep a copy of both the old and the new array in memory at the same time, also blowing your memory limit.
That still leaves a few options though:
Loop through the input twice: once to count the entries, once to read them into a right-sized double[] (sketched after this list). Whether this is possible depends on the nature of your input, of course.
Make some assumption on the maximum input size (perhaps user-configurable), and allocate a double[] up front that is this fixed size. Use only the part of it that's filled.
Use float instead of double to cut memory requirements in half, at the expense of some precision.
Rethink your algorithm to avoid holding everything in memory at once.
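Here is a hedged sketch of the first option; fetchValues() is a hypothetical stand-in for re-running the DB query, so the numbers in it are placeholders:

import java.util.Arrays;

public class TwoPassLoad {
    // Hypothetical stand-in: the real program would stream fresh results
    // from the database on each call.
    static Iterable<Double> fetchValues() {
        return Arrays.asList(3.0, 1.0, 2.0);
    }

    public static void main(String[] args) {
        int count = 0;
        for (double ignored : fetchValues()) count++; // pass 1: count the entries

        double[] values = new double[count]; // right-sized, never resized
        int i = 0;
        for (double v : fetchValues()) values[i++] = v; // pass 2: fill the array

        System.out.println(values.length + " values loaded");
    }
}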
There are many open source libraries that create dynamic arrays for primitives. One of these:
http://trove.starlight-systems.com/
The median is the value at the middle of a sorted list. So you don't have to use a second array; you can just sort in place:
Collections.sort(myArray);
final double median = myArray.get(myArray.size() / 2); // for an even size, this takes the upper of the two middle values
And since you get that data from a DB anyways, you could just tell the DB to give you the median instead of doing it in Java, which will save all the time (and memory) for transmitting the data as well.
I agree: use the Trove4j TDoubleArrayList class (see javadoc) to store double values, or TFloatArrayList for float. And by combining the previous answers, we get:
import gnu.trove.list.array.TDoubleArrayList; // Trove 3.x package (an assumption; older versions used gnu.trove)

// guess initialCapacity to remove the need for resizing
TDoubleArrayList data = new TDoubleArrayList(initialCapacity);
// fill data
data.sort();
double median = data.get(data.size() / 2);
An array of ints in Java is stored as a block of 32-bit values in memory. How is an array of Integer objects stored? i.e.
int[] vs. Integer[]
I'd imagine that each element in the Integer array is a reference to an Integer object, and that the Integer object has object storage overheads, just like any other object.
I'm hoping however that the JVM does some magical cleverness under the hood given that Integers are immutable and stores it just like an array of ints.
Is my hope woefully naive? Is an Integer array much slower than an int array in an application where every last ounce of performance matters?
No VM I know of will store an Integer[] array like an int[] array for the following reasons:
There can be null Integer references in the array, and you have no bits left to indicate this in an int array. The VM could store this 1-bit information per array slot in a hidden bit array, though.
You can synchronize on the elements of an Integer array. This is much harder to overcome than the first point, since you would have to store a monitor object for each array slot.
The elements of an Integer[] can be compared for identity. You could, for example, create two Integer objects with the value 1 via new, store them in different array slots, and later retrieve and compare them via ==. This must evaluate to false, so you would have to store this information somewhere. Alternatively, you could keep a reference to one of the Integer objects somewhere and use it for comparison, making sure one of the == comparisons is false and one true. This means the whole concept of object identity is quite hard to handle for an optimized Integer array.
You can cast an Integer[] to e.g. Object[] and pass it to methods expecting just an Object[]. This means all the code which handles Object[] must now be able to handle the special Integer[] object too, making it slower and larger.
Taking all this into account, it would probably be possible to make a special Integer[] which saves some space in comparison to a naive implementation, but the additional complexity will likely affect a lot of other code, making it slower in the end.
The overhead of using Integer[] instead of int[] can be quite large in both space and time. On a typical 32-bit VM, an Integer object consumes 16 bytes (8 bytes for the object header, 4 for the payload and 4 additional bytes for alignment), while the Integer[] itself uses as much space as an int[]. On 64-bit VMs (using 64-bit pointers, which is not always the case), an Integer object consumes 24 bytes (16 for the header, 4 for the payload and 4 for alignment). In addition, a slot in the Integer[] uses 8 bytes instead of the 4 in an int[]. This means you can expect an overhead of 16 to 28 bytes per slot, a factor of 4 to 7 compared to plain int arrays.
The performance overhead can be significant too for mainly two reasons:
Since you use more memory, you put much more pressure on the memory subsystem, making cache misses more likely in the case of Integer[]. For example, if you traverse the contents of an int[] linearly, the cache will have most of the entries already fetched when you need them (since the layout is linear too). But in the case of the Integer array, the Integer objects themselves might be scattered randomly across the heap, making it hard for the cache to guess where the next memory reference will point.
The garbage collector has to do much more work because of the additional memory used, and because it has to scan and move each Integer object separately, while in the case of int[] it is just one object whose contents don't have to be scanned (they contain no references to other objects).
To sum it up, using an int[] in performance-critical work will be both much faster and more memory-efficient than using an Integer array in current VMs, and it is unlikely this will change much in the near future.
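If you want to see the space difference on your own JVM, here is a rough sketch (not a rigorous measurement; System.gc() is only a hint, so treat the output as indicative):

public class FootprintDemo {
    public static void main(String[] args) {
        long before = usedMemory();
        int[] primitives = new int[1_000_000];
        long afterInt = usedMemory();

        Integer[] boxed = new Integer[1_000_000];
        for (int i = 0; i < boxed.length; i++) {
            boxed[i] = new Integer(i); // deliberately distinct objects, bypassing the valueOf cache (deprecated in newer JDKs)
        }
        long afterInteger = usedMemory();

        System.out.println("int[]     ~ " + (afterInt - before) + " bytes");
        System.out.println("Integer[] ~ " + (afterInteger - afterInt) + " bytes");
        // reference the arrays so the allocations can't be optimized away
        System.out.println(primitives.length + " / " + boxed.length);
    }

    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // best-effort hint only
        return rt.totalMemory() - rt.freeMemory();
    }
}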
John Rose is working on fixnums in the JVM to fix this problem.
I think your hope is woefully naive. Specifically, it needs to deal with the issue that Integer can potentially be null, whereas int cannot be. That alone is reason enough to store the object pointer.
That said, the actual object pointer will be to an immutable Integer instance, which is notably cached for a small subset of integer values.
It won't be much slower, but because an Integer[] must accept null as an entry and an int[] doesn't have to, there would be some amount of bookkeeping involved even if Integer[] were backed by an int[].
So if every last ounce of performance matters, use int[].
The reason that Integer can be null, whereas int cannot, is that Integer is a full-fledged Java object, with all of the overhead that entails. There's value in this, since you can write
Integer foo = null;
which is good for saying that foo will have a value, but doesn't have one yet.
Another difference is that neither int nor Integer performs overflow checking. For instance,
int bar = Integer.MAX_VALUE;
bar++;
will merrily increment bar and you end up with a very negative number, which is probably not what you intended in the first place. The same is true for Integer:
foo = Integer.MAX_VALUE;
foo++;
auto-unboxes, increments, and re-boxes, so it silently wraps around as well. If you want an exception on overflow, Math.addExact (Java 8+) provides that behavior.
One last point is that Integer, being a Java object, carries with it the space overhead of an object. I think someone else may need to chime in here, but I believe every object consumes 12 to 16 bytes for overhead, plus the space for the data storage itself. If you're after performance and space, I wonder whether Integer is the right solution.