Java collection and memory optimization

I wrote a custom index for a custom table which uses 500MB of heap for 500k strings. Only 10% of the strings are unique; the rest are repeats. Every string is of length 4.
How can I optimize my code? Should I use another collection? I tried to implement a custom string pool to save memory:
public class StringPool {
    // NOTE: a WeakHashMap entry whose value strongly references its own key
    // can never be garbage-collected, so this pool never actually evicts anything.
    private static WeakHashMap<String, String> map = new WeakHashMap<>();

    public static String getString(String str) {
        String pooled = map.get(str);
        if (pooled == null) {
            map.put(str, str);
            pooled = str;
        }
        return pooled;
    }
}
private void buildIndex() {
    if (monitorModel.getMessageIndex() == null) {
        // the index: every column gets its own map
        ArrayList<HashMap<String, TreeSet<Integer>>> messageIndex = new ArrayList<>(filterableColumn.length);
        // (the original loop ran from filterableColumn.length down to 0,
        // creating one map too many)
        for (int i = 0; i < filterableColumn.length; i++) {
            // key -> string, value -> treeset of the rows which contain the key
            messageIndex.add(new HashMap<>());
        }
        // create an index for every column
        for (int i = monitorModel.getParser().getMyMessages().getMessages().size() - 1; i >= 0; --i) {
            TreeSet<Integer> tempList;
            for (int j = 0; j < filterableColumn.length; j++) {
                String value = StringPool.getString(getValueAt(i, j).toString());
                if (!messageIndex.get(j).containsKey(value)) {
                    tempList = new TreeSet<>();
                    messageIndex.get(j).put(value, tempList);
                } else {
                    tempList = messageIndex.get(j).get(value);
                }
                tempList.add(i);
            }
        }
        monitorModel.setMessageIndex(messageIndex);
    }
}

No need to come up with a custom pool. Just use String.intern().
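For instance, a minimal demonstration of the deduplication intern() provides:

public class InternDemo {
    public static void main(String[] args) {
        // Two distinct String objects with equal contents...
        String a = new String("abcd");
        String b = new String("abcd");
        // ...collapse to the same canonical instance after interning.
        System.out.println(a == b);                   // false
        System.out.println(a.intern() == b.intern()); // true
    }
}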

You might want to examine your memory heap in a profiler. My guess is that the memory consumption isn't primarily in the String storage, but in the many TreeSet<Integer> instances. If so, you could optimize considerably by using primitive arrays (int[], short[], or byte[], depending on the actual size of the integer values you're storing). Or you could look into a primitive collection type, such as those provided by FastUtil or Trove.
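For illustration, here is a minimal growable int list that could stand in for each TreeSet<Integer>. This is only a sketch, not the FastUtil/Trove API; those libraries ship ready-made primitive collections, including sorted int sets:

import java.util.Arrays;

// A minimal growable int list, sketched as a stand-in for TreeSet<Integer>.
// Values stay sorted if the caller appends row indices in ascending order.
public class IntList {
    private int[] data = new int[8];
    private int size;

    public void add(int value) {
        if (size == data.length) {
            data = Arrays.copyOf(data, size * 2); // grow the backing array
        }
        data[size++] = value;
    }

    public int get(int index) {
        return data[index];
    }

    public int size() {
        return size;
    }
}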
If you do find that the String storage is problematic, I'll assume that you want to scale your application beyond 500k Strings, or that especially tight memory constraints require you to deduplicate even short Strings.
As Dev said, String.intern() will deduplicate Strings for you. One caveat, however - in the Oracle and OpenJDK virtual machines, String.intern() will store those Strings in the VM permanent-generation, such that they will not be garbage-collected in the future. That's appropriate (and helpful) if:
The Strings you're storing do not change throughout the life of the VM (e.g., if you read in a static list at startup and use it throughout the life of your application).
The Strings you need to store fit comfortably in the VM permanent generation (with adequate room for classloading and other consumers of PermGen). Update: see below.
If either of those conditions is false, you are probably correct to build a custom pool. But my recommendation is that you consider a simple HashMap in place of the WeakHashMap you're currently using. You probably don't want these values to be garbage-collected while they're in your cache, and WeakHashMap adds another level of indirection (and the associated object pointers), increasing memory consumption further.
Update: I'm told that JDK 7 stores interned Strings (String.intern()) in the main heap, not in perm-gen, as earlier JDKs did. That makes String.intern() less risky if you're using JDK 7.
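A minimal sketch of the plain-HashMap pool recommended above (single-threaded; a ConcurrentHashMap would be the natural swap for concurrent use):

import java.util.HashMap;
import java.util.Map;

// Sketch of the HashMap-backed pool. Entries are held strongly, so calling
// clear() (or discarding the pool itself) is how the memory is reclaimed.
public class SimpleStringPool {
    private final Map<String, String> pool = new HashMap<>();

    public String get(String s) {
        String canonical = pool.get(s);
        if (canonical == null) {
            pool.put(s, s);
            canonical = s;
        }
        return canonical;
    }

    public void clear() {
        pool.clear();
    }
}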

Related

Java JNI interface to implement an object destructor

So my question is basically whether it is even possible to implement a custom JNI mechanism that removes an object from the heap immediately, instead of waiting for the Garbage Collector to act on it.
My question focuses mostly on the memory consumed by temporary methods that create a lot of objects to calculate something.
For example, I have a function like this:
public Integer[] countDuplicates(int[] values) {
    Map<Integer, Integer> l = new HashMap<Integer, Integer>();
    Integer c;
    for (int v : values) {
        c = l.get(v);
        if (c == null) {
            c = 0;
        }
        l.put(v, c + 1);
    }
    // toArray needs an Integer[], not an int[]
    Integer[] result = l.values().toArray(new Integer[l.size()]);
    // <A way to free the Map from the heap>
    return result;
}
As you can see, there is no need for the Map after the method ends. So my question is: is there any way, whether through JNI, through some command to the GC, or even through a keyword, to force its removal from the heap once the method has run?

HashMap performs better than array? [duplicate]

Is it (performance-wise) better to use Arrays or HashMaps when the indexes of the Array are known? Keep in mind that the 'objects array/map' in the example is just an example; in my real project it is generated by another class, so I can't use individual variables.
ArrayExample:
SomeObject[] objects = new SomeObject[2];
objects[0] = new SomeObject("Obj1");
objects[1] = new SomeObject("Obj2");
void doSomethingToObject(String Identifier) {
    SomeObject object;
    if (Identifier.equals("Obj1")) {
        object = objects[0];
    } else if (Identifier.equals("Obj2")) {
        object = objects[1];
    }
    //do stuff
}
HashMapExample:
HashMap objects = new HashMap();
objects.put("Obj1",new SomeObject());
objects.put("Obj2",new SomeObject());
void doSomethingToObject(String Identifier) {
    SomeObject object = (SomeObject) objects.get(Identifier);
    //do stuff
}
The HashMap one looks much much better but I really need performance on this so that has priority.
EDIT: Well, arrays it is then; suggestions are still welcome
EDIT: I forgot to mention, the size of the Array/HashMap is always the same (6)
EDIT: It appears that HashMaps are faster
Array: 128ms
Hash: 103ms
When using fewer cycles the HashMap was even twice as fast
test code:
import java.util.HashMap;
import java.util.Random;

public class Optimizationsest {
    private static Random r = new Random();
    private static HashMap<String, SomeObject> hm = new HashMap<String, SomeObject>();
    private static SomeObject[] o = new SomeObject[6];
    private static String[] Indentifiers = {"Obj1", "Obj2", "Obj3", "Obj4", "Obj5", "Obj6"};
    private static int t = 1000000;

    public static void main(String[] args) {
        CreateHash();
        CreateArray();
        long loopTime = ProcessArray();
        long hashTime = ProcessHash();
        System.out.println("Array: " + loopTime + "ms");
        System.out.println("Hash: " + hashTime + "ms");
    }

    public static void CreateHash() {
        for (int i = 0; i <= 5; i++) {
            hm.put("Obj" + (i + 1), new SomeObject());
        }
    }

    public static void CreateArray() {
        for (int i = 0; i <= 5; i++) {
            o[i] = new SomeObject();
        }
    }

    public static long ProcessArray() {
        StopWatch sw = new StopWatch(); // StopWatch: an elapsed-time helper from the asker's project
        sw.start();
        for (int i = 1; i <= t; i++) {
            checkArray(Indentifiers[r.nextInt(6)]);
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    private static void checkArray(String Identifier) {
        SomeObject object;
        if (Identifier.equals("Obj1")) {
            object = o[0];
        } else if (Identifier.equals("Obj2")) {
            object = o[1];
        } else if (Identifier.equals("Obj3")) {
            object = o[2];
        } else if (Identifier.equals("Obj4")) {
            object = o[3];
        } else if (Identifier.equals("Obj5")) {
            object = o[4];
        } else if (Identifier.equals("Obj6")) {
            object = o[5];
        } else {
            object = new SomeObject();
        }
        object.kill();
    }

    public static long ProcessHash() {
        StopWatch sw = new StopWatch();
        sw.start();
        for (int i = 1; i <= t; i++) {
            checkHash(Indentifiers[r.nextInt(6)]);
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    private static void checkHash(String Identifier) {
        SomeObject object = (SomeObject) hm.get(Identifier);
        object.kill();
    }
}
HashMap uses an array underneath so it can never be faster than using an array correctly.
Random.nextInt() is many times slower than what you are testing; even when using the array to test an array, it is going to bias your results.
The reason your array benchmark is so slow is due to the equals comparisons, not the array access itself.
Hashtable is usually much slower than HashMap because it does much the same thing but is also synchronized.
A common problem with micro-benchmarks is the JIT, which is very good at removing code which doesn't do anything. If you are not careful, you will only be testing whether you have confused the JIT enough that it cannot work out that your code doesn't do anything.
This is one of the reasons you can write micro-benchmarks which outperform C++ systems: Java is a simpler language, easier to reason about, and thus it is easier to detect code which does nothing useful. This can lead to tests which show that Java does "nothing useful" much faster than C++ ;)
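One common defense, sketched below as a hypothetical ProcessHashObservable variant of the question's ProcessHash (it would live in the same benchmark class; JMH's Blackhole is the rigorous tool for this), is to accumulate something from every iteration and print it, so the JIT cannot prove the work is unused:

public static long ProcessHashObservable() {
    long start = System.nanoTime();
    long checksum = 0; // accumulate something from every iteration
    for (int i = 1; i <= t; i++) {
        SomeObject object = hm.get(Indentifiers[r.nextInt(6)]);
        checksum += object.hashCode(); // consume the lookup result so the JIT can't drop it
    }
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("checksum=" + checksum); // keep the loop's work observable
    return elapsedMs;
}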
Arrays, when the indexes are known, are faster (HashMap uses an array of linked lists behind the scenes, which adds a bit of overhead on top of the array accesses, not to mention the hashing operations that need to be done).
And FYI, HashMap<String,SomeObject> objects = new HashMap<String,SomeObject>(); means you won't have to cast.
For the example shown, the HashMap wins, I believe. The problem with the array approach is that it doesn't scale. I imagine you want to have more than two entries in the table, and the conditional branch tree in doSomethingToObject will quickly get unwieldy and slow.
Logically, HashMap is definitely the right fit in your case. From a performance standpoint it also wins, since in the case of arrays you would need to do a number of string comparisons (in your algorithm), while in a HashMap you just use the hash code if the load factor is not too high. Both the array and the HashMap will need to be resized if you add many elements, but in the case of the HashMap the elements will also need to be redistributed. In that respect the HashMap loses.
Arrays will usually be faster than Collections classes.
PS. You mentioned HashTable in your post. Hashtable has even worse performance than HashMap; I assume your mention of HashTable was a typo:
"The HashTable one looks much much better"
The example is strange. The key problem is whether your data is dynamic. If it is, you could not write your program that way (as in the array case). In other words, comparing your array and hash implementations is not fair: the hash implementation works for dynamic data, but the array implementation does not.
If you only have static data (6 fixed objects), either an array or a hash works as a data holder. You could even define static objects.

Calculate all permutations of a collection in parallel

I need to calculate all permutations of a collection and I have code for that, but the problem is that it runs sequentially and takes a lot of time.
public static <E> Set<Set<E>> getAllCombinations(Collection<E> inputSet) {
    List<E> input = new ArrayList<>(inputSet);
    Set<Set<E>> ret = new HashSet<>();
    int len = inputSet.size();
    // run over all numbers between 1 and 2^length (one number per subset);
    // each bit represents an object: include the object if the corresponding bit is 1
    for (int i = (1 << len) - 1; i > 0; i--) {
        Set<E> comb = new HashSet<>();
        for (int j = 0; j < len; j++) {
            if ((i & 1 << j) != 0) {
                comb.add(input.get(j));
            }
        }
        ret.add(comb);
    }
    return ret;
}
I am trying to make the computation run in parallel.
I thought of writing the logic using recursion and then executing the recursive calls in parallel, but I am not exactly sure how to do that.
Would appreciate any help.
There is no need to use recursion, in fact, that might be counter-productive. Since the creation of each combination can be performed independently of the others, it can be done using parallel Streams. Note that you don’t even need to perform the bit manipulations by hand:
public static <E> Set<Set<E>> getAllCombinations(Collection<E> inputSet) {
    // use inputSet.stream().distinct().collect(Collectors.toList());
    // to get only distinct combinations
    // (in case the source contains duplicates, i.e. is not a Set)
    List<E> input = new ArrayList<>(inputSet);
    final int size = input.size();
    // sort out input that is too large. In fact, even lower numbers might
    // be way too large. But using <63 bits allows us to use long values
    if (size >= 63) throw new OutOfMemoryError("not enough memory for "
        + BigInteger.ONE.shiftLeft(input.size()).subtract(BigInteger.ONE) + " permutations");
    // the actual operation is quite compact when using the Stream API
    return LongStream.range(1, 1L << size).parallel()
        .mapToObj(l -> BitSet.valueOf(new long[] {l}).stream()
            .mapToObj(input::get).collect(Collectors.toSet()))
        .collect(Collectors.toSet());
}
The inner stream operation, i.e. iterating over the bits, is too small to benefit from parallel operations, especially as it would have to merge the result into a single Set. But if the number of combinations to produce is sufficiently large, running the outer stream in parallel will already utilize all CPU cores.
The alternative is not to use a parallel stream, but to return the Stream<Set<E>> itself instead of collecting into a Set<Set<E>>, to allow the caller to chain the consuming operation directly.
By the way, hashing an entire Set (or lots of them) can be quite expensive, so the cost of the final merging step(s) is likely to dominate the performance. Returning a List<Set<E>> instead can dramatically improve performance. The same applies to the alternative of returning a Stream<Set<E>> without collecting the combinations at all, as this also works without hashing the Sets.
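A sketch of that alternative, reusing the names from the method above: the combinations are produced lazily, nothing is collected into an outer Set, and no Set is hashed unless the caller asks for it.

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.BitSet;
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.LongStream;
import java.util.stream.Stream;

public class Combinations {
    // Sketch: same bit-mask enumeration as above, but the combinations are
    // produced lazily; no Set<Set<E>> is materialized.
    public static <E> Stream<Set<E>> streamAllCombinations(Collection<E> inputSet) {
        List<E> input = new ArrayList<>(inputSet);
        final int size = input.size();
        if (size >= 63) throw new OutOfMemoryError("not enough memory for "
            + BigInteger.ONE.shiftLeft(size).subtract(BigInteger.ONE) + " combinations");
        return LongStream.range(1, 1L << size)
            .mapToObj(l -> BitSet.valueOf(new long[] {l}).stream()
                .mapToObj(input::get).collect(Collectors.toSet()));
    }
}

A caller can then opt into parallelism with .parallel() before the terminal operation, or collect into a List<Set<E>> to avoid hashing whole Sets, e.g. streamAllCombinations(input).parallel().collect(Collectors.toList()).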

Java G1: Monitoring for memory leaks in production

For years, we've been running Java services with modest heap sizes using +UseParallelOldGC. Now, we're starting to roll out a new service using a larger heap and the G1 collector. This is going pretty well.
For our services that use +UseParallelOldGC, we monitor for memory leaks by looking at the old generation size after collection and alerting on a threshold. This works quite well, and in fact saved our bacon just two weeks ago.
Specifically, for +UseParallelOldGC, we do the following (a minimal sketch follows the list):
ManagementFactory.getMemoryPoolMXBeans()
Search for the MemoryPoolMXBean result with the name ending in "Old Gen"
Compare getCollectionUsage().getUsed() (if available) with getMax()
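For reference, a minimal sketch of that check; the "Old Gen" name-suffix match and the 0.85 alert threshold are illustrative choices, not part of the original setup:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class OldGenCheck {
    // Sketch: alert when the old generation is still mostly full after collection.
    public static void checkOldGen() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage afterGc = pool.getCollectionUsage(); // may be null if unsupported
            if (pool.getName().endsWith("Old Gen") && afterGc != null && afterGc.getMax() > 0) {
                double usedFraction = (double) afterGc.getUsed() / afterGc.getMax();
                if (usedFraction > 0.85) { // illustrative threshold
                    System.err.println("Possible leak: old gen is "
                            + (usedFraction * 100) + "% full after collection");
                }
            }
        }
    }
}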
Unfortunately, it seems like G1 no longer has a concept of getCollectionUsage().
Fundamentally, though, we'd like to monitor the G1 heap size following the last mixed collection it chooses to do in a mixed cycle, or something similar.
For example, outside the VM I would be happy with an awk script that merely found the last '(mixed)' that is followed by a '(young)' and looked at what the final heap size was (e.g., '1540.0M' in 'Heap: 3694.5M(9216.0M)->1540.0M(9216.0M)').
Is there any way to do this inside the Java VM?
Yes, the JVM gives you enough tools to retrieve such information for G1. For instance, you could use something like the following class, which prints the details of every garbage collection (just call MemoryUtil.startGCMonitor()):
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import javax.management.ListenerNotFoundException;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;
import javax.management.openmbean.CompositeData;
import com.sun.management.GarbageCollectionNotificationInfo;

public class MemoryUtil {
    private static final Set<String> heapRegions;

    static {
        heapRegions = ManagementFactory.getMemoryPoolMXBeans().stream()
                .filter(b -> b.getType() == MemoryType.HEAP)
                .map(MemoryPoolMXBean::getName)
                .collect(Collectors.toSet());
    }

    private static NotificationListener gcHandler = (notification, handback) -> {
        if (notification.getType().equals(GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION)) {
            GarbageCollectionNotificationInfo gcInfo = GarbageCollectionNotificationInfo.from((CompositeData) notification.getUserData());
            Map<String, MemoryUsage> memBefore = gcInfo.getGcInfo().getMemoryUsageBeforeGc();
            Map<String, MemoryUsage> memAfter = gcInfo.getGcInfo().getMemoryUsageAfterGc();
            StringBuilder sb = new StringBuilder(250);
            sb.append("[").append(gcInfo.getGcAction()).append(" / ").append(gcInfo.getGcCause())
                    .append(" / ").append(gcInfo.getGcName()).append(" / (");
            appendMemUsage(sb, memBefore);
            sb.append(") -> (");
            appendMemUsage(sb, memAfter);
            sb.append("), ").append(gcInfo.getGcInfo().getDuration()).append(" ms]");
            System.out.println(sb.toString());
        }
    };

    public static void startGCMonitor() {
        for (GarbageCollectorMXBean mBean : ManagementFactory.getGarbageCollectorMXBeans()) {
            ((NotificationEmitter) mBean).addNotificationListener(gcHandler, null, null);
        }
    }

    public static void stopGCMonitor() {
        for (GarbageCollectorMXBean mBean : ManagementFactory.getGarbageCollectorMXBeans()) {
            try {
                ((NotificationEmitter) mBean).removeNotificationListener(gcHandler);
            } catch (ListenerNotFoundException e) {
                // Do nothing
            }
        }
    }

    private static void appendMemUsage(StringBuilder sb, Map<String, MemoryUsage> memUsage) {
        memUsage.entrySet().forEach((entry) -> {
            if (heapRegions.contains(entry.getKey())) {
                sb.append(entry.getKey()).append(" used=").append(entry.getValue().getUsed() >> 10).append("K; ");
            }
        });
    }
}
In this code, gcInfo.getGcAction() gives you enough information to separate minor collections from major/mixed ones.
But there's an important caveat to applying your approach (with a threshold) to G1. A single mixed collection in G1 usually affects only a few old-gen regions: enough to free a sufficient amount of memory, but not so many that the GC pause grows too long. So after a mixed collection in G1 you cannot be sure that all your garbage is gone. As a result, you need a more sophisticated strategy to detect memory leaks (perhaps based on collection frequency, statistics gathered across several collections, etc.).
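As one illustration of such a strategy (the window size and threshold here are invented for the sketch), you could record the post-collection old-gen occupancy over a sliding window and alert only when even the best recent collection left the old gen mostly full:

import java.lang.management.MemoryUsage;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: suspect a leak only if the old gen stayed above the threshold after
// each of the last N collections, rather than reacting to a single mixed GC.
public class LeakHeuristic {
    private static final int WINDOW = 20;         // invented window size
    private static final double THRESHOLD = 0.85; // invented threshold

    private final Deque<Double> recentUsedFractions = new ArrayDeque<>();

    // Call from the GC notification handler with the old-gen usage after GC.
    public boolean recordAndCheck(MemoryUsage oldGenAfterGc) {
        double fraction = (double) oldGenAfterGc.getUsed() / oldGenAfterGc.getMax();
        recentUsedFractions.addLast(fraction);
        if (recentUsedFractions.size() > WINDOW) {
            recentUsedFractions.removeFirst();
        }
        // Even the best recent collection left the old gen mostly full.
        return recentUsedFractions.size() == WINDOW
                && recentUsedFractions.stream().allMatch(f -> f > THRESHOLD);
    }
}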

Performance issue - clear and reuse a collection OR throw it and get a new one [duplicate]

This question already has answers here:
list.clear() vs list = new ArrayList<Integer>(); [duplicate]
Say we try to implement a merge sort algorithm. Given an Array of Arrays to merge, which is the better approach, this:
public void merge(ArrayList<ArrayList<E>> a) {
    ArrayList<ArrayList<E>> tmp = new ArrayList<ArrayList<E>>();
    while (a.size() > 1) {
        for (int i = 1; i < a.size(); i += 2) {
            tmp.add(merge(a.get(i - 1), a.get(i)));
        }
        if (a.size() % 2 == 1) tmp.add(a.get(a.size() - 1));
        a = tmp;
        tmp = new ArrayList<ArrayList<E>>();
    }
}
or this :
public void merge(ArrayList<ArrayList<E>> a) {
    ArrayList<ArrayList<E>> tmp = new ArrayList<ArrayList<E>>(), tmp2;
    while (a.size() > 1) {
        for (int i = 1; i < a.size(); i += 2) {
            tmp.add(merge(a.get(i - 1), a.get(i)));
        }
        if (a.size() % 2 == 1) tmp.add(a.get(a.size() - 1));
        tmp2 = a;
        a = tmp;
        tmp = tmp2;
        tmp.clear();
    }
}
To make it clearer: what I was doing is merging each pair of neighbors in a and putting the resulting merged arrays in an external Array of Arrays tmp. After merging all pairs, one approach is to clear a, move tmp to a, and then use the cleared a as the new tmp.
The second approach is to "throw away" the old tmp and get a new tmp instead of reusing the old one.
As a general rule, don't spend energy trying to reuse old collections; it just makes your code harder to read (and frequently doesn't give you any actual benefit). Only try optimizations like these if you already have your code working, and you have hard numbers that say the speed of your algorithm is improved.
Always allocating a new ArrayList and filling it will result in more garbage collections, which generally slows everything down (minor GCs are cheap, but not free).
Reusing the ArrayList will result in fewer Arrays.copyOf() calls, which happen when the array inside the ArrayList needs to be resized (resizing is cheap, but not free).
On the other hand, clear() will also null out the array contents to let the GC collect unused objects, which is of course also not free.
Still, if execution speed is a concern, I would reuse the ArrayList.
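A possible middle ground, sketched against the first variant (this assumes the same enclosing class, its type parameter E, and the two-list merge(...) helper from the question): presize each fresh list so the "allocate new" approach avoids the Arrays.copyOf() resizing mentioned above.

// Sketch: the "allocate fresh" variant, but with a presized list so the new
// ArrayList never has to grow; each round produces about half as many lists.
public void merge(ArrayList<ArrayList<E>> a) {
    while (a.size() > 1) {
        ArrayList<ArrayList<E>> tmp = new ArrayList<>(a.size() / 2 + 1);
        for (int i = 1; i < a.size(); i += 2) {
            tmp.add(merge(a.get(i - 1), a.get(i)));
        }
        if (a.size() % 2 == 1) tmp.add(a.get(a.size() - 1));
        a = tmp;
    }
}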
