Java programs can be very memory hungry. For example, a Double object takes 24 bytes: 8 bytes of data and 16 bytes of JVM-imposed overhead. In general, the wrapper objects that represent the primitive types are very expensive.
The same applies to every collection in the Java standard library. There are even counterintuitive cases, such as a HashSet being more memory hungry than a HashMap, since a HashSet is backed by a HashMap internally (http://docs.oracle.com/javase/7/docs/api/java/util/HashSet.html).
Could you offer some advice on modeling data and object delegation in high-performance settings, so that these "weaknesses" of Java are mitigated?
Some techniques I use to reduce memory:
Make your own IntArrayList (etc.) class that prevents boxing
Make your own IntHashMap (etc.) class where the keys are primitives (see the first sketch after this list)
Use nio's ByteBuffer to store large arrays of data efficiently (and in native memory, outside the heap). It's like a byte array, but it has methods to store/retrieve any primitive type at any arbitrary offset, trading some access speed for memory (see the ByteBuffer sketch after this list)
Don't use pooling because pools keep unused instances explicitly alive.
Use threads sparingly; they're super memory hungry (each one costs native memory, outside the heap)
When you take substrings of a big string and discard the original, the substrings still refer to the original's backing char array (in JVMs before Java 7u6). Wrap them in new String(...) to let the old big string be disposed of.
A linear array is smaller than a multidimensional array, and if the size of every dimension but the last is a power of two, calculating indices is fastest: array[x | y << 4] for a 16-by-N array.
Initialize collections and StringBuilder with an initial capacity chosen so that it prevents internal reallocation in typical circumstances.
Use StringBuilder instead of string concatenation, because the compiled class files use new StringBuilder() without an initial capacity to concatenate strings.
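A minimal sketch of such a primitive-specialized list, hand-rolled rather than taken from any particular library:

import java.util.Arrays;

// Grows like ArrayList, but stores primitive ints, so no Integer boxing ever occurs.
public class IntArrayList {
    private int[] data = new int[16];
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length * 2);  // double when full
        }
        data[size++] = value;
    }

    public int get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException(String.valueOf(index));
        }
        return data[index];
    }

    public int size() {
        return size;
    }
}

An IntHashMap follows the same idea: primitive int keys in flat arrays instead of boxed Integer keys in Entry objects.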
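And a sketch of the ByteBuffer technique (the record layout and sizes are invented for illustration); allocateDirect places the buffer in native memory, outside the heap:

import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // one million (int, double) records stored off-heap, 12 bytes each
        ByteBuffer buf = ByteBuffer.allocateDirect(1_000_000 * 12);

        int i = 42;                        // record index
        buf.putInt(i * 12, 123);           // absolute put: int at offset i*12
        buf.putDouble(i * 12 + 4, 3.14);   // the double sits right after it

        int id = buf.getInt(i * 12);       // read back from the same offsets
        double value = buf.getDouble(i * 12 + 4);
        System.out.println(id + " " + value);
    }
}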
Depends on the application, but generally speaking
Lay out data structures in (parallel) arrays of primitives (see the sketch after this list)
Try to make big "flat" objects, inlining what would otherwise be sensible sub-structures
Specialize collections of primitives
Reuse objects, use object pools, ThreadLocals
Go off-heap
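As an illustration of the first two points, here is a sketch of the parallel-arrays layout (Particle is an invented example type):

// Object layout: one Particle instance (header + padding) per element.
// class Particle { double x, y; int charge; }

// Parallel-arrays layout: three flat arrays, no per-element object overhead.
public class Particles {
    final double[] x;
    final double[] y;
    final int[] charge;

    Particles(int n) {
        x = new double[n];
        y = new double[n];
        charge = new int[n];
    }

    void move(int i, double dx, double dy) {
        x[i] += dx;   // a field access becomes plain array indexing
        y[i] += dy;
    }
}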
I cannot call these practices "best", because they, unfortunately, make you suffer: they defeat part of the reason you are using Java, and they reduce the flexibility, supportability, reliability, testability, and other "good" properties of the codebase.
But they certainly lower the memory footprint and GC pressure.
One memory problem that is easy to overlook in Java is memory leakage. Nicholas Greene already pointed you to memory profiling.
Many people assume that Java's garbage collection prevents memory leaks, but that is not actually true - all it takes is one forgotten reference somewhere to keep an object around in perpetuity. Paradoxically, trying to optimize your program may introduce more opportunities for memory leaks because you end up with more complex data structures.
One example of a memory leak, if you are implementing, say, a stack:
Integer[] stack = new Integer[10];
int stackPtr = 0;

// a few push operations on our stack
stack[stackPtr++] = new Integer(5);
stack[stackPtr++] = new Integer(3);

// and pop from the stack again
--stackPtr;
--stackPtr;

// at this point the stack is logically empty, but
// the Integer objects are still referenced by the array,
// and are effectively leaked
The correct solution would have been:
stack[--stackPtr] = null;
If you have high performance constraints and need collections for primitive types, you might take a look at some implementations of primitive collections for Java.
Some are:
HPPC
GNU Trove
Apache Commons Primitives
Also, as a reference, take a look at this question: Why can Java Collections not directly store Primitives types?
Luís Bianchin already gave you a few libraries which implement optimal collections in Java.
Nevertheless, it seems that you are especially concerned about the memory allocation of Java collections. In that case, there are a few alternatives which are quite straightforward.
Cache
You could use a cache to limit the memory the collection (the cache) can allocate. That way you only keep the most frequently used entries in main memory and don't need to load the whole data set from disk/network/whatever. I highly recommend Guava Cache as it's very well documented and pretty mature.
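A minimal sketch of a bounded Guava cache (Record and fetchFromDisk are placeholders for your entry type and your real lookup):

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

LoadingCache<String, Record> cache = CacheBuilder.newBuilder()
        .maximumSize(10_000)                  // hard cap: old entries get evicted
        .build(new CacheLoader<String, Record>() {
            @Override
            public Record load(String key) {  // called only on a cache miss
                return fetchFromDisk(key);    // hypothetical loader
            }
        });

Record r = cache.getUnchecked("some-key");    // loads on miss, memory hit otherwise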
Persistent Collections
Sometimes a cache is not a solution for your problem. For example, in an ETL solution, you might know you will only load each entry once. For this scenario I recommend persistent collections: disk-backed collections that are way faster than traditional databases but have nice Java APIs.
MapDB and PCollections are, for me, the best libraries.
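A minimal MapDB sketch, assuming MapDB 3.x (the file name and map name are placeholders):

import java.util.concurrent.ConcurrentMap;
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.Serializer;

DB db = DBMaker.fileDB("entries.db").make();       // disk-backed store
ConcurrentMap<String, String> map = db
        .hashMap("entries", Serializer.STRING, Serializer.STRING)
        .createOrOpen();                           // a Map whose entries live on disk

map.put("key", "value");                           // not kept on the Java heap
db.close();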
Profile memory usage
On top of that, if you really want to know the actual state of your program's memory allocation, I highly recommend using a profiler. That way you will not only know how much memory your collections occupy, but also how the GC behaves over time.
In fact, you should only try an alternative to Java's collections and data structures if there is an actual memory problem, and that is something a profiler can tell you.
VisualVM does a great job here (it was bundled with the JDK through Java 8 and is now a separate download). Nevertheless, I recommend a commercial profiler if you can afford it: commercial profilers usually have a lower impact on the application's performance than VisualVM.
Memory-efficient data over the network
Finally, this is not strictly related to your question, but it is closely connected. In case you want to serialize your Java objects into an optimal binary representation, I recommend Google Protocol Buffers in Java. Protocol buffers are ideal for transferring data structures through the network using the least bandwidth possible, with really fast encoding/decoding.
Well, there are a lot of things you can do.
Here are a few problems and solutions:
When you change the value of a string in Java, the string is not actually overwritten. Instead, a new string is created to replace the old one, while the old string still exists. This can be a problem when using RAM efficiently is a concern. Here are some solutions:
When using a string to specify something like the "state" of an object or anything else that can only have a specific set of possible values, don't use a string. Instead use an enum. If you don't know what an enum is or how to use one yet, here's a link to a tutorial on what enums are and how to use them!
If you are using a string as a variable whose value will change at some point in the program, don't build it with plain String operations. Instead, use the StringBuilder class from the java.lang package. StringBuilder maintains a mutable character buffer: when you change its contents, it doesn't create a duplicate string with a different value to replace the old one, it modifies the buffer in place. Since you aren't creating duplicate strings, this saves RAM. See the StringBuilder class in the Java API.
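For example (the sizes are invented; the point is the single mutable buffer):

// one buffer grown in place, instead of a new String per step
StringBuilder sb = new StringBuilder(64);   // preallocated capacity
for (int i = 0; i < 10; i++) {
    sb.append(i).append(',');               // each append modifies the buffer
}
String result = sb.toString();              // a single String at the very end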
Writer and reader objects, such as FileWriter and FileReader, also take up RAM and other resources. If you have a lot of them, this can also cause problems. Here are some solutions:
All reader and writer objects have a method called close(). As you can probably guess, it closes the reader or writer, releasing the underlying system resources (such as file handles). Whenever you reach the point in your code where you know you will never use a reader or writer again, call this method; once you also drop your references to the object, the garbage collector can reclaim it and free some RAM.
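The idiomatic way to guarantee that is try-with-resources, which calls close() automatically even if an exception is thrown (the file name and the process() call are placeholders):

import java.io.BufferedReader;
import java.io.FileReader;

try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
    String line;
    while ((line = reader.readLine()) != null) {
        process(line);   // hypothetical per-line handler
    }
} // reader.close() runs here automatically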
Every object in Java takes up memory, and keeping around objects you will never use again is wasteful.
Note, however, that contrary to a common misconception, the Object method finalize() is not something you call to get rid of an object: it is invoked by the garbage collector itself before it reclaims the object. To free memory, simply drop all references to the object and let the GC do its work.
Beware of early optimisation.
See When is optimisation premature?
While not knowing the exact requirements of your application or runtime environment, in my experience Java was able to handle anything I threw at it. Doing some profiling on your demo/proof-of-concept app might be time well spent if performance or garbage collection (you tagged memory leaks) is an issue.
Related
Suppose I have a file with a lot (most likely 100K+, potentially millions) of serialized objects of the same class. I read these objects and do something with them:
// open stream
try {
    while (true) {
        Object o = ois.readObject();
        foo(o);
    }
} catch (EOFException e) {
    // reached the end of the stream
}
// close stream...
When this is done, an uncomfortably large number of objects has been created. My problem is that I have no control over those objects, and they won't be freed until the GC decides to do it.
Is there a way to put an upper limit on the number of new objects created? For example, if my file has 100K serialized objects, is there a way to tweak the readObject mechanism so that a fixed-size pool is used?
More Details
The ~100K-object file is the merged result of many smaller files. What this small process does is create a sorted CSV file.
None of the suggested comments or answers so far will work (most of them are also unnecessary), because the ObjectInputStream itself holds a reference to every object it has ever deserialized, for preservation of object graphs.
You need to constrain how much data is written to the file, so you don't have to handle 100,000 objects per file, and if possible you should also make use of ObjectOutputStream.reset() or ObjectOutputStream.writeUnshared(), for the reasons described in their respective Javadoc comments.
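A sketch of the writing side with periodic reset() calls (Record and records are placeholders):

import java.io.FileOutputStream;
import java.io.ObjectOutputStream;

try (ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream("out.bin"))) {
    int count = 0;
    for (Record r : records) {       // hypothetical data
        oos.writeObject(r);
        if (++count % 1_000 == 0) {
            oos.reset();             // clears the stream's handle table; the reading
        }                            // ObjectInputStream drops its references too
    }
}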
You can try creating a fixed-sized collection of PhantomReferences to each of the objects from the file.
Once the collection is full, you read another object from the file only if an existing PhantomReference can be retrieved/removed (as a blocking call) from the ReferenceQueue; then you remove it from the fixed-size collection and allow another object to be created.
Remember to call 'clear()' on the PhantomReference after you remove it from the ReferenceQueue.
Hope this helps!
Refer to this document for more information regarding Phantom References:
https://weblogs.java.net/blog/kcpeppe/archive/2011/09/29/mysterious-phantom-reference
And Here:
http://java.dzone.com/articles/finalization-and-phantom
I guess you also have some influence on the design of the program that wrote those serialized objects. Isn't this kind of problem suggesting that the Java serialization format is not a good fit for your problem? Perhaps you should write and read the objects in some other format, which allows you to discard old objects as garbage during processing of the stream?
If you have to read objects, you have to create objects; there is little you can do about that. Changing your code to foo(ois.readObject()); avoids keeping a named local reference, but the objects are still created.
That leaves you two options:
You trust the Garbage Collector to be highly efficient and well designed.
You change your underlying data structure to not store Objects, but design it in a form that completely relies on primitive data types.
As I understand it, EJP proposed regenerating your input files using the writeUnshared technique instead of writeObject, to make each object available for GC during reading. If it's about regenerating the original content anyway, then maybe you could switch to some other serializer, like Kryo?
Java's built-in serialization is slow, inefficient, and has many well-known problems (see Effective Java by Josh Bloch, p. 213).
Kryo's advertised serialized object size is 5x smaller than standard Java's, so memory consumption should be at least 5x smaller, I think.
EDIT
Better wording: serialized objects that are 5x-7x heavier most probably mean that ObjectInputStream is a memory eater, i.e. it uses more memory than the job requires, even though it frees that memory in the end.
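A rough sketch of the Kryo route, assuming Kryo 5.x on the classpath (Record, records, recordCount, and foo() are placeholders):

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.io.FileInputStream;
import java.io.FileOutputStream;

Kryo kryo = new Kryo();
kryo.register(Record.class);                   // registration keeps the output compact

// write
try (Output out = new Output(new FileOutputStream("records.bin"))) {
    for (Record r : records) {
        kryo.writeObject(out, r);
    }
}

// read back one object at a time; nothing holds the previous ones alive
try (Input in = new Input(new FileInputStream("records.bin"))) {
    for (int i = 0; i < recordCount; i++) {
        Record r = kryo.readObject(in, Record.class);
        foo(r);                                // process and forget
    }
}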
Once a String object is created, we can't modify it. But if we do any operation on it, the JVM creates a new object. By creating new objects, the JVM consumes more memory. Doesn't that cause a memory issue?
You are correct. It is definitely worth being aware of this issue, even if it doesn't affect you every time.
As you say, Strings cannot change after creation: they're immutable and don't expose any way to change them.
However, operations such as split() generate additional string objects in the background, and each of those strings has a memory overhead if you are holding onto references to them.
As the other posters note, the objects will be small and garbage collection will usually clean up the old ones after they have gone out of scope, so you generally won't have to worry about this.
However, if you're doing something specific and holding onto large numbers of string references, this could bite you.
Look at String interning depending on your use case, noting the warnings on the linked page.
Two things to note:
1) Hard-coded String literals will be automatically interned by Java, reducing the impact of this.
2) The + operator is more efficient in this regard: the compiler uses StringBuilder underneath, giving performance and memory benefits.
No, it does not. If you do not hold strong references to String instances, they will eventually be collected by the garbage collector.
For example:
while (true) {
    new String("that is a string");
}
In this snippet you continuously create new object instances, yet you will never get an OutOfMemoryError, because the created instances become garbage (there are obviously no strong references to them).
It consumes more memory for the new objects, that's right. But that fact in itself does not create an issue, because the garbage collector reclaims all unreachable memory. You can of course turn it into an issue by keeping references to the newly created strings, but that would be an issue with your program, not with the JVM.
The biggest memory issue you have to know about (in JVMs before Java 7u6) is taking a small substring of a huge string. The substring shares the original string's char array, and even if the original string gets GC'd, the huge char array will still be referenced by the substring. The workaround is to use new String(hugeString.substring(i)).
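For example (hugeString stands in for some very large input; this is the behavior before Java 7u6):

// 'leaky' keeps hugeString's entire backing char[] alive
String leaky = hugeString.substring(0, 10);

// the copy constructor trims the backing array down to 10 chars
String safe = new String(hugeString.substring(0, 10));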
The "issue" generated here is simply that garbage is generated, and the virtual machine resolves it by running the garbage collector, which frees the memory used by that garbage.
As soon as the old object is no longer used, the garbage collector can remove it (which will happen well before any memory issue arises).
If you want to prevent the copying of the data, use a StringBuilder.
Unused objects are collected by the GC, and immutability has many benefits in Java.
Achieving as much immutability as possible is good practice in Java; immutable objects can also be safely used in the collections framework (for example, as map keys).
As far as I know, StringBuilder (or StringBuffer, for thread safety) is useful for managing strings, making them effectively mutable.
Manipulating a few characters in a huge string then does not 'eat' many bytes of memory.
It is also much faster for concatenation.
Since a String instance is immutable, it can be reused by the JVM. The String class follows the Flyweight design pattern (via the string pool), which helps avoid memory issues.
Possible Duplicate:
Why can't strings be mutable in Java and .NET?
Why .NET String is immutable?
Several languages have made this choice, such as C#, Java, and Python. If it is intended to save memory or to make operations like comparison efficient, what effect does it have on concatenation and other modifying operations?
Immutable types are a good thing generally:
They work better for concurrency (you don't need to lock something that can't change!)
They reduce errors: mutable objects are vulnerable to being changed when you don't expect it which can introduce all kinds of strange bugs ("action at a distance")
They can be safely shared (i.e. multiple references to the same object) which can reduce memory consumption and improve cache utilisation.
Sharing also makes copying a very cheap O(1) operation when it would be O(n) if you had to take a defensive copy of a mutable object. This is a big deal, because copying is an incredibly common operation (e.g. whenever you want to pass parameters around).
As a result, it's a pretty reasonable language design choice to make strings immutable.
Some languages (particularly functional languages like Haskell and Clojure) go even further and make pretty much everything immutable. This enlightening video is very much worth a look if you are interested in the benefits of immutability.
There are a couple of minor downsides for immutable types:
Operations that create a changed string like concatenation are more expensive because you need to construct new objects. Typically the cost is O(n+m) for concatenating two immutable Strings, though it can go as low as O(log (m+n)) if you use a tree-based string data structure like a Rope. Plus you can always use special tools like Java's StringBuilder if you really need to concatenate Strings efficiently.
A small change on a large string can result in the need to construct a completely new copy of the large String, which obviously increases memory consumption. Note however that this isn't usually a big issue in garbage-collected languages since the old copy will get garbage collected pretty quickly if you don't keep a reference to it.
Overall though, the advantages of immutability vastly outweigh the minor disadvantages. Even if you are only interested in performance, the concurrency advantages and cheapness of copying will in general make immutable strings much more performant than mutable ones with locking and defensive copying.
It's mainly intended to prevent programming errors. For example, Strings are frequently used as keys in hash tables. If they could change, the hashtable would become corrupted. And that's just one example where having a piece of data change while you're using it causes problems. Security is another: if you are checking whether a user is allowed to access a file at a given path before executing the operation they requested, the string containing the path had better not be mutable...
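A small sketch of that corruption, using an invented mutable key type in place of a mutable String:

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

class MutableKey {
    String name;
    MutableKey(String name) { this.name = name; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && Objects.equals(name, ((MutableKey) o).name);
    }
    @Override public int hashCode() { return Objects.hashCode(name); }
}

public class MutableKeyDemo {
    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey("a");
        map.put(key, "value");

        key.name = "b";  // mutate the key after insertion

        System.out.println(map.get(new MutableKey("b"))); // null: entry sits in the "a" bucket
        System.out.println(map.get(new MutableKey("a"))); // null: stored key no longer equals "a"
        // the entry is effectively lost (and leaked) inside the map
    }
}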
It becomes even more important when you're doing multithreading. Immutable data can be safely passed around between threads while mutable data causes endless headaches.
Basically, immutable data makes the code that works on it easier to reason about. Which is why purely functional languages try to keep everything immutable.
In Java, not only String but all primitive wrapper classes (Integer, Double, Character, etc.) are immutable. I am not sure of the exact reason, but I think these are the basic data types on which all programming schemes work; if they could change, things could go wild. To be more specific, an example: say you have opened a socket connection to a remote host. The host name would be a String and the port would be an Integer. What would happen if these values could be modified after the connection is established?
As far as performance is concerned, Java keeps string literals in a dedicated string constant pool (in modern JVMs this pool actually lives on the heap). The pool is indexed, and if you use the literal "String" twice, both uses point to the same object from the pool.
Having strings immutable is also what makes this sharing safe: the same or identical strings can be served from the pool of previously created Strings, reducing the cost of new object creation.
Would a hashtable/hashmap use a lot of memory if it only consists of object references and int's?
For a school project we had to map a database to objects (which is what ORMs/Hibernate do nowadays), and, eager to find a good way to avoid storing IDs in the objects in order to save them again, we thought of putting all the objects we created in a HashMap/Hashtable so we could easily retrieve their IDs. My question is whether this, in my opinion more elegant, way of solving the problem would cost me performance.
Would a hashtable/hashmap use a lot of memory if it only consists of object references and ints?
"a lot" depends on how many objects you have. For a few hundreds or a few thousands, you're not going to notice.
But typically the default Java collections are incredibly inefficient when you're working with primitives (because of the constant boxing/unboxing from primitive to wrapper, say int to Integer), both from a performance and a memory standpoint (the two being related but not identical).
If you have a lot of entries, like hundreds of thousands or millions, I suggest using for example the Trove collections.
In your case, you'd use this:
TIntObjectHashMap<SomeJavaClass>
or this:
TObjectIntHashMap<SomeJavaClass>
In any case, those will run circles around the default Java collections performance-wise and CPU-wise (and they will trigger way less GC, etc.).
You're dodging the unnecessary automatic (un)boxing from/to int/Integer, and the collections create way less garbage and resize in a much smarter way.
Don't even get me started on the default Java HashMap<Integer,Integer> compared to Trove's TIntIntHashMap, or I'll go berserk ;)
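For instance, a sketch with Trove 3 (assumes trove4j on the classpath):

import gnu.trove.map.hash.TObjectIntHashMap;

TObjectIntHashMap<String> ids = new TObjectIntHashMap<>();
ids.put("alice", 1);                       // the int is stored directly, no Integer wrapper
ids.put("bob", 2);
int id = ids.get("alice");                 // returns a primitive int
boolean known = ids.containsKey("carol");  // false, and still no boxing anywhere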
Minimally, you'd need an implementation of the Map.Entry interface with a reference to the key object and a reference to the value object. If either the key or the value is a primitive type, such as int, you'll need a wrapper type (e.g. Integer) to wrap it as well. The Map.Entry instances are referenced from the map's internal bucket array.
Take a look at this question for more information on how to measure your memory consumption in Java.
It's impossible to answer this without some figures. How many objects are you looking to store? Don't forget you're storing the objects already, so the key/object reference combination should be fairly small.
The only sensible thing to do is to try this and see if it works for you. Don't forget that the JVM will have a default maximum memory allocation and you can increase this (if you need) via -Xmx
Which type of data structure uses more memory?
Hashtable
HashMap
ArrayList
Could you please give me a brief explanation which one is less prone to memory leakage?
...which one to use for avoiding the memory leakage
The answer is all of them and none of them.
Memory leakage is not related to the data structure, but to the way you use it.
The amount of memory used by each one is irrelevant if your aim is to avoid "memory leakage".
The best thing you can do is this: when you detect that an object won't be used any longer by the application, remove it from the collection (not only those you've listed, but any other you might use: List, Map, Set, or even arrays).
That way the garbage collector will be able to release the memory used by that object.
You can take a look at the article "How the garbage collector works" for further explanation of how Java releases the memory used by objects.
Edit:
There are other data structures in Java which help with reference management, such as WeakHashMap, but this may be considered an "advanced topic".
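A small sketch of the WeakHashMap idea; note that GC timing is not deterministic, so the entry disappears eventually rather than immediately:

import java.util.Map;
import java.util.WeakHashMap;

Map<Object, String> cache = new WeakHashMap<>();
Object key = new Object();
cache.put(key, "expensive result");

key = null;      // drop the only strong reference to the key
System.gc();     // only a hint, but after some GC cycle...
// ...the entry is removed from the map on its own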
Most likely you should really just use a Collection that suits your current need. In the most common cases, if you need a List, use ArrayList, and if you need a Map, use HashMap. For a tutorial, see e.g. http://java.sun.com/docs/books/tutorial/collections/
When your profiler shows you there is an actual memory leak related to the use of Java Collections, then do something about it.
Your question is woefully underspecified because the concrete data structures you specify are not of comparable structure.
HashMap and Hashtable are comparable, since both function as maps (key -> value lookups).
ArrayLists (and lists in general) do not.
The HashMap/Hashtable part is easy to answer, as they are largely identical (the major difference is null keys): the former is not synchronized and the latter is, thus HashMap will generally be faster (assuming the synchronization is not required). Modern JVMs are reasonably fast at uncontended locks, though, so the difference will be small in a microbenchmark.
Well, I've actually been, recently, in a situation where I had to hold onto large collections of custom objects, where the size of the collections was one of the application's limiting factors. If that's your situation, a few suggestions -
there are a few implementations of collections using primitives (list here). Played around a bit with trove4j, and found a somewhat smaller memory footprint (as long as you're dealing with primitives, of course).
If you're dealing with large collections, you'll probably get more bang for your buck, in terms of reducing memory footprint, by optimizing the objects you're holding. After all, you've got a lot more of them, otherwise you wouldn't need a collection, right?
Some collections are naturally smaller than others per element (note that an ArrayList is generally smaller than a LinkedList, which allocates a node object per entry), but the difference will probably be swamped by the differences in how they're used.
Most of the Java collections can be manually sized: you can initialize your ArrayList of 100 elements with a capacity of 100, and you can set your maps' load factor to keep less open space at the cost of slower performance. It's all in the Javadocs, and sketched briefly below.
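A small sketch of both knobs (the sizes and load factor are invented for illustration):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// sized up front: no internal array reallocation while filling it
List<String> names = new ArrayList<>(100);

// higher load factor = denser table (less empty space), slightly slower lookups
Map<String, Integer> index = new HashMap<>(256, 0.9f);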
Ultimately the simplest thing to do is to test for yourself.
You're not comparing like with like: HashMap and Hashtable implement the Map interface, and ArrayList implements the List interface.
In a direct comparison between Hashtable and HashMap, HashMap will probably offer better performance because Hashtable is synchronized.
If you give some indication about what you're using the collections for, you might get a more insightful answer.
Hash tables (be it HashMap or Hashtable) take a little more memory than what they need to actually store the information.
Hashing performance comes at a price.
A java.util.Collection stores references to objects.
A java.util.Map stores references to Map.Entry instances, which hold references to keys and objects.
So a java.util.Collection holding N references to objects will require less memory than a java.util.Map holding onto the same N references, because the Map has to point to the keys as well.
Performance for reading and writing differs depending on the implementation of each of these interfaces.
I don't see any java.util.Collection analogous to WeakHashMap. I'd read about that class if you're worried about garbage collection and memory leaks.
As others have pointed out, your question is underspecified.
Still, sometimes an ArrayList-based implementation can replace a HashMap-based one (I would not consider Hashtable at all; it's obsolete). You might need to search the ArrayList linearly, but for small lists that might still be fast enough, and the ArrayList will need less memory (for the same data) because it has less overhead.
In most languages it depends on how good you are at picking up your toys after you're done with them.
In Java, it matters less, since garbage collection is done automatically; you rarely need to worry about memory leaks as long as you don't hold onto stale references.