Memory problems with Java in the context of Hadoop

I want to compute a multiway join in the Hadoop framework. When the number of records per relation grows beyond a threshold, I face two memory problems:
1) Error: GC overhead limit exceeded,
2) Error: Java heap space.
The threshold is about 1,000,000 records per relation for a chain join and a star join.
In the join computation I use some hash tables, i.e.
Hashtable<V, LinkedList<K>> ht = new Hashtable<V, LinkedList<K>>(someSize, 0.75F);
For the moment these errors occur only while I hash the input records. During the hashing I have quite a few for loops which produce a lot of temporary objects, which is how I ran into problem 1). I solved problem 1) by setting K = StringBuilder, which is a final class. In other words, I reduced the number of temporary objects by keeping only a few objects whose contents change, rather than creating new objects all the time.
Now I am dealing with problem 2). I increased the heap space on each node of my cluster by setting the appropriate variable in the file $HADOOP_HOME/hadoop/conf/hadoop-env.sh. The problem still remained. I did some very basic monitoring of the heap using VisualVM, but only on the master node, specifically the JobTracker and the local TaskTracker daemons. I didn't notice any heap overflow during this monitoring, and the PermGen space didn't overflow either.
So for the moment, in the declaration,
Hashtable<V, LinkedList<K>> ht = new Hashtable<V, LinkedList<K>>(someSize, 0.75F);
I am thinking of setting V = SomeFinalClass. This SomeFinalClass would help me keep the number of objects, and consequently the memory usage, low. Of course, by default a SomeFinalClass object will have the same hash code regardless of its content, so I would not be able to use it as a key in the hash table above as it stands. To solve this I am thinking of overriding the default hashCode() method with something similar to String.hashCode(), which would produce a hash code based on the content of a SomeFinalClass object.
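Roughly, I imagine something like the following sketch (the class name and fields are just placeholders, not my actual code):
public final class SomeFinalClass {
    // caution: if a key's content changes after it is inserted into the hash table,
    // the entry can no longer be looked up
    private final StringBuilder content = new StringBuilder();

    // reuse the same object: only the content changes, no new instances
    public void setContent(CharSequence s) {
        content.setLength(0);
        content.append(s);
    }

    @Override
    public int hashCode() {
        // content-based hash, in the spirit of String.hashCode()
        int h = 0;
        for (int i = 0; i < content.length(); i++) {
            h = 31 * h + content.charAt(i);
        }
        return h;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof SomeFinalClass
                && content.toString().contentEquals(((SomeFinalClass) o).content);
    }
}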
What is your opinion on the problems and the solutions above? What would you do?
Should I also monitor the DataNode daemon? Are the two errors above TaskTracker errors, DataNode errors, or both?
Finally, will the solutions above work for an arbitrary number of records per relation, or will I sooner or later hit the same problem again?

Use an ArrayList instead of a LinkedList and it will use a lot less memory.
Also, I suggest using a HashMap instead of Hashtable, as the latter is a legacy class.
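For example, keeping your V, K and someSize exactly as in your own declaration:
HashMap<V, ArrayList<K>> ht = new HashMap<V, ArrayList<K>>(someSize, 0.75F);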

Related

Spark streaming gc setup questions

My logic is as follows.
Use createDirectStream to get a topic by log type in Kafka.
After repartitioning, the logs go through various processing steps.
Create a single string using combineByKey for each log type (use StringBuilder).
Finally, save to HDFS by log type.
There are a lot of operations that add strings, so GC happens frequently.
What is the best way to set up GC in this situation?
//////////////////////
There is various other logic, but I think the problem is in the combineByKey step.
rdd.combineByKey[StringBuilder](
(s: String) => new StringBuilder(s),
(sb: StringBuilder, s: String) => sb.append(s),
(sb1: StringBuilder, sb2: StringBuilder) => sb1.append(sb2)
).mapValues(_.toString)
The simplest thing with the greatest impact you can do with that combineByKey expression is to size the StringBuilder you create so that it does not have to expand its backing character array as you merge string values into it; the resizing amplifies the allocation rate and wastes memory bandwidth by copying from the old to the new backing array. As a guesstimate, I would pick the 90th percentile of the string length of the resulting data set's records.
A second thing to look at (after collecting some statistics on your intermediate values) would be to have your combiner function pick the StringBuilder instance that has room to fit the other one when you call sb1.append(sb2).
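Expressed in plain Java rather than the Scala closures above (EXPECTED_LENGTH stands in for a statistic you would measure yourself), the idea is roughly:
// assumed estimate of the final string length, e.g. the 90th percentile you measured
static final int EXPECTED_LENGTH = 4096;

static StringBuilder create(String s) {
    // pre-size once so appends do not trigger repeated array growth and copying
    return new StringBuilder(EXPECTED_LENGTH).append(s);
}

static StringBuilder merge(StringBuilder a, StringBuilder b) {
    // let the builder that already has room absorb the other one;
    // this assumes the concatenation order of the partial results does not matter
    if (a.capacity() - a.length() >= b.length()) {
        return a.append(b);
    }
    if (b.capacity() - b.length() >= a.length()) {
        return b.append(a);
    }
    return a.append(b);
}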
A good thing to take care of would be to use Java 8; it has optimizations that make a significant difference when there's heavy work on strings and string buffers.
Last but not least, profile to see where you are actually spending your cycles. This workload (excluding any additional custom processing you are doing) shouldn't need to promote a lot of objects (if any) to old generation, so you should make sure that young generation has ample size and is collected in parallel.
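For example, something along these lines when submitting the job (the young-generation size is a placeholder you would tune to your executor memory):
spark-submit ... --conf "spark.executor.extraJavaOptions=-XX:+UseParallelGC -Xmn2g"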

JVM Tuning of Java Class

My Java class reads in a 60 MB file and produces a HashMap of HashMaps with over 300 million entries.
HashMap<Integer, HashMap<Integer, Double>> pairWise =
new HashMap<Integer, HashMap<Integer, Double>>();
I already tuned the VM arguments to:
-Xms512M -Xmx2048M
But the system still fails with:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap.createEntry(HashMap.java:869)
at java.util.HashMap.addEntry(HashMap.java:856)
at java.util.HashMap.put(HashMap.java:484)
at com.Kaggle.baseline.BaselineNew.createSimMap(BaselineNew.java:70)
at com.Kaggle.baseline.BaselineNew.<init>(BaselineNew.java:25)
at com.Kaggle.baseline.BaselineNew.main(BaselineNew.java:315)
How big a heap will it take to run without failing with an OOME?
Your dataset is far too large to process entirely in memory, so this is not a final solution, just an optimization.
You're using boxed primitives, which is a very painful thing to look at.
According to this question, a boxed integer can be 20 bytes larger than an unboxed integer. This is not what I call memory efficient.
You can optimize this with specialized collections, which don't box the primitive values. One project providing these is Trove. You could use a TIntDoubleMap instead of your HashMap<Integer, Double> and a TIntObjectHashMap instead of your HashMap<Integer, …>.
Therefore your type would look like this:
TIntObjectHashMap<TIntDoubleHashMap> pairWise =
new TIntObjectHashMap<TIntDoubleHashMap>();
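A minimal usage sketch, assuming the Trove 3 package layout (firstId, secondId and similarity are placeholder values for your own data):
import gnu.trove.map.hash.TIntDoubleHashMap;
import gnu.trove.map.hash.TIntObjectHashMap;

TIntObjectHashMap<TIntDoubleHashMap> pairWise =
        new TIntObjectHashMap<TIntDoubleHashMap>();

int firstId = 1, secondId = 2;   // placeholder ids
double similarity = 0.5;         // placeholder score

TIntDoubleHashMap inner = pairWise.get(firstId);
if (inner == null) {
    inner = new TIntDoubleHashMap();
    pairWise.put(firstId, inner);
}
inner.put(secondId, similarity); // primitive int/double keys and values, no boxing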
Now, do the math.
300,000,000 boxed Doubles, at 24 bytes each, use 7,200,000,000 bytes of memory, that is 7.2 GB.
If you store 300,000,000 primitive doubles, taking 8 bytes each, you only need 2,400,000,000 bytes, which is 2.4 GB.
Congrats, you saved around 67% of the memory you previously used for storing your numbers!
Note that this calculation is rough, depends on the platform and implementation, and does not account for the memory used for the HashMap/T*Maps.
Your data set is large enough that holding all of it in memory at one time is not going to happen.
Consider storing the data in a database and loading partial data sets to perform manipulation.
Edit: My assumption was that you were going to do more than one pass over the data. If all you are doing is loading it and performing one action on each item, then Lex Webb's suggestion (comment below) is a better solution than a database. If you are performing more than one action per item, then a database appears to be the better solution. The database does not need to be an SQL database; if your data is record oriented, a NoSQL database might be a better fit.
You are using the wrong data structures for data of this volume. Java adds significant overhead in memory and time for every object it creates, and at the 300 million object level you are looking at a lot of overhead. You should consider leaving this data in the file and using random access techniques to address it in place; take a look at memory mapped files using NIO.
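A minimal sketch of that idea, assuming a file of fixed-width binary records (the file name and record layout are assumptions, not your actual format):
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

RandomAccessFile raf = new RandomAccessFile("pairwise.bin", "r"); // placeholder file
FileChannel channel = raf.getChannel();
// map the file into memory; a single mapping is limited to ~2 GB,
// so a larger file has to be mapped in several chunks
MappedByteBuffer buf =
        channel.map(FileChannel.MapMode.READ_ONLY, 0, Math.min(channel.size(), Integer.MAX_VALUE));

int recordIndex = 12345;                        // which record to read (placeholder)
double value = buf.getDouble(recordIndex * 8);  // read in place, nothing loaded onto the heap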

What is a fast alternative to HashMap for mapping to primitive types?

First of all, let me tell you that I have read the previously asked question Java HashMap performance optimization / alternative, and I have a similar question.
What I want to do is take a LOT of dependencies from New York Times text that will be processed by the Stanford parser, and store the dependencies in a HashMap along with their scores, i.e. if I see a dependency twice I will increment its score in the HashMap by 1.
The task starts off really quickly, about 10 sentences a second, but slows down fast. At 30,000 sentences (assuming 10 words per sentence and about 3-4 dependencies per word, which I'm storing) there are about 300,000 entries in my HashMap.
How can I increase the performance of my HashMap? What kind of hash key can I use?
Thanks a lot
Martinos
EDIT 1:
OK, maybe I phrased my question wrongly. The byte arrays are not used in MY project but in the similar question of another person above; I don't know what they are used for, which is why I asked.
Secondly, I will not post the full code, as I think it would make things very hard to understand, but here is a sample:
For the sentence "i am going to bed" I have the dependencies:
(i , am , -1)
(i, going, -2)
(i,to,-3)
(am, going, -1)
.
.
.
(to,bed,-1)
The dependencies of all sentences (1,000,000 sentences) will be stored in a HashMap.
If I see a dependency twice, I will get the score of the existing dependency and add 1.
And that is pretty much it. All is well, but the rate of adding to (or retrieving from) the HashMap degrades on this line:
dependancyBank.put(newDependancy, dependancyBank.get(newDependancy) + 1);
Can anyone tell me why?
Regards
Martinos
Trove has optimized hashmaps for the case where key or value are of primitive type.
However, much will still depend on smart choice of structure and hash code for your keys.
This part of your question is unclear: "The task starts off really quickly, about 10 sentences a second but scales off quickly. At 30 000 sentences (which is assuming 10 words in each sentence and about 3-4 dependences for each word which im storing) is about 300 000 entries in my hashmap." But you don't say what the performance is for the larger data. Your map grows, which is kind of obvious. Hashmaps are O(1) only in theory; in practice you will see some performance changes with size, due to reduced cache locality and occasional pauses caused by rehashing. So put() and get() times will not be constant, but they should still be close to constant. Perhaps you are using the hashmap in a way which doesn't guarantee fast access, e.g. by iterating over it? In that case your time will grow linearly with size, and you can't change that unless you change your algorithm.
Google 'fastutil' and you will find a superior solution for mapping object keys to scores.
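For example, a rough sketch with fastutil (Dependency and newDependency are placeholders for your own key class, which needs a content-based hashCode/equals):
import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;

Object2IntOpenHashMap<Dependency> dependencyBank =
        new Object2IntOpenHashMap<Dependency>();
dependencyBank.defaultReturnValue(0);

// increments the primitive counter in place, no Integer boxing on every update
dependencyBank.addTo(newDependency, 1);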
Take a look at the Guava multimaps: http://www.coffee-bytes.com/2011/12/22/guava-multimaps They are designed to basically keep a list of things that all map to the same key. That might solve your need.
How can I increase the performance of my HashMap?
If it's taking more than 1 microsecond per get() or put(), you have a bug IMHO. You need to determine why it's taking as long as it is. Even in the worst case where every object has the same hashCode, you won't have performance this bad.
What kind of hash key can I use?
That depends on the data type of the key. What is it?
And finally, what are byte[] a = new byte[2]; byte[] b = new byte[3]; in the question that was posted above?
They are arrays of bytes. They could be used as values to look up, but it's likely that you need a different value type.
HashMap has an overloaded constructor which takes the initial capacity as input. The slowdown you see is because of rehashing, during which the HashMap is virtually unusable. To prevent frequent rehashing you need to start with a HashMap of greater initial capacity. You can also set a load factor, which indicates what percentage of the capacity may be filled before rehashing.
public HashMap(int initialCapacity).
Pass the initial capacity to the HashMap during object construction. It is preferable to set the capacity to roughly twice the number of elements you expect to add to the map during the execution of your program.
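For example (the expected count is your own estimate, and Dependency stands in for your key type):
import java.util.HashMap;
import java.util.Map;

int expectedEntries = 300000; // rough estimate of the final number of dependencies
// capacity about twice the expected size, with the default 0.75 load factor
Map<Dependency, Integer> dependencyBank =
        new HashMap<Dependency, Integer>(2 * expectedEntries, 0.75f);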

Tokenize big files to hashtable in Java

I'm having this problem: I'm reading 900 files and, after processing them, my final output will be a HashMap<String, HashMap<String, Double>>. The first String is the file name, the second String is a word, and the Double is the word's frequency. The processing order is as follows:
read the first file
read the first line of the file
split the important tokens to a string array
copy the string array to my final map, incrementing word frequencies
repeat for all files
I'm using a BufferedReader. The problem is that, after processing the first few files, the hash becomes so big that performance drops sharply after a while. I would like to hear solutions for this. My idea is to create a size-limited hash; once the limit is reached, store it to a file, repeat until everything is processed, and merge all the hashes at the end.
Why not just read one file at a time, and dump that file's results to disk, then read the next file etc? Clearly each file is independent of the others in terms of the mapping, so why keep the results of the first file while you're writing the second?
You could possibly write the results for each file to another file (e.g. foo.txt => foo.txt.map), or you could create a single file with some sort of delimiter between results, e.g.
==== foo.txt ====
word - 1
the - 3
get - 3
==== bar.txt ====
apple - 2
// etc
By the way, why are you using double for the frequency? Surely it should be an integer value...
The time for a hash map operation shouldn't increase significantly as the map grows. It is possible that your map is skewed because of an unsuitable hash function, or is filling up too much. Unless you're using more RAM than you can get from the system, you shouldn't have to break things up.
What I have seen with Java when running huge hash maps (or any collection) with lots of objects in memory is that the VM goes crazy trying to run the garbage collector. It gets to the point where 90% of the time is spent on the JVM kicking off the garbage collector, which takes a while and finds that almost every object still has a reference.
I suggest profiling your application, and if it is the garbage collector, then increasing heap space and tuning the garbage collector. Also, it will help if you can approximate the needed size of your hash maps and provide sufficiently large allocations (see initialCapacity and loadFactor options in the constructor).
I am trying to rethink your problem:
Since you are trying to construct an inverted index:
Use a Multimap rather than Map<String, Map<String, Integer>>
Multimap<word, (frequency, fileName, something else tomorrow)>
Now, read one file, construct the Multimap and save it on disk (similar to Jon's answer).
After reading x files, merge all the Multimaps together with putAll(multimap) if you really need one common map of all the values.
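A rough sketch with Guava (the String encoding of the value is just an assumption about how you might model the file/frequency pair):
import com.google.common.collect.HashMultimap;
import com.google.common.collect.Multimap;

// one multimap per processed file
Multimap<String, String> perFile = HashMultimap.create();
perFile.put("apple", "bar.txt:2"); // word -> "fileName:frequency"

// later, merge the per-file multimaps into one, if you really need a single map
Multimap<String, String> merged = HashMultimap.create();
merged.putAll(perFile);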
You could try using this library to improve your performance.
http://high-scale-lib.sourceforge.net/
It is similar to the Java collections API, but built for high performance. It would be ideal if you can batch and merge these results after processing them in small batches.
Here is an article that will help you with some more inputs.
http://www.javaspecialists.eu/archive/Issue193.html
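A minimal sketch, assuming the drop-in ConcurrentMap implementation that the library provides:
import java.util.HashMap;
import java.util.concurrent.ConcurrentMap;
import org.cliffc.high_scale_lib.NonBlockingHashMap;

// drop-in replacement for a synchronized HashMap that scales across many threads
ConcurrentMap<String, HashMap<String, Double>> frequencies =
        new NonBlockingHashMap<String, HashMap<String, Double>>();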
Why not use a custom class,
public class CustomData {
private String word;
private double frequency;
//Setters and Getters
}
and use your map as
Map<String, List<CustomData>>  // keyed by file name
This way, at least, you will have only 900 keys in your map.
-Ivar

Java Object memory usage - IBM JVM 1.4.2

Is it possible to find the memory usage of an object in Java from within the application?
I want the object memory usage to be part of the debug output when the application runs.
I don't want to connect to the VM using an external application.
I have a problem where a few classes eat up a huge amount of memory and cause memory
problems that crash my app. I need to find that memory usage (I am working with limited memory resources).
EDIT: I am using Java 1.4 :/
See my pet project, MemoryMeasurer. A tiny example:
long memory = MemoryMeasurer.measureBytes(new HashMap());
You may also derive a more qualitative memory breakdown:
Footprint footprint = ObjectGraphMeasurer.measure(new HashMap());
For example, I used the latter to derive the per-entry cost of various data structures, where the overhead is measured in the number of objects created, references, and primitives, instead of just bytes (which is also doable). So, next time you use a (default) HashSet, you can be informed that each element in it costs 1 new object (not your element), 5 references, and an int, which is exactly the same cost as an entry in HashMap (not unexpectedly, since every HashSet element ends up in a HashMap), and so on.
You can use it on any object graph. If your object graph contains links to other structures you wish to ignore, you should use a predicate to avoid exploring them.
Edit: Instrumentation is not available in Java 1.4 (wow, people still use that?!), so the measureBytes call above wouldn't work for you, but the second would. Then you can write something like this (if you are on a 32-bit machine):
long memory = footprint.getObjects() * 8 + footprint.getReferences() * 4 +
footprint.getPrimitives().count(int.class) * 4 +
footprint.getPrimitives().count(long.class) * 8 + ...;
That gives you an approximation. A better answer would be to round this up to the closest multiple of 16:
long alignedMemory = (memory + 15) & ~0xF; // the mask zeros the lowest 4 bits
But the answer might still be off, since if you find, say, 16 booleans, it's one thing if they are in the same object and quite another if they are spread across multiple objects (and cause excessive space usage due to alignment). This logic could be implemented as another visitor (similar to how MemoryMeasurer and ObjectGraphMeasurer are implemented, quite simply as you may see), but I didn't bother, since that's what Instrumentation does, so it would only make sense for Java versions below 1.5.
Eclipse MAT is a really good tool to analyze memory.
There are tools that come with the JDK, such as jmap and jhat, which provide object-level details.
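For example (the PID and dump file name are placeholders):
jmap -histo <pid>                          # per-class instance counts and sizes
jmap -dump:format=b,file=heap.hprof <pid>  # write a full heap dump
jhat heap.hprof                            # browse the dump in a local web UI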
The following link provides a piece of Java code for computing the size of objects:
http://www.javaworld.com/javaworld/javatips/jw-javatip130.html
