The performance of HashMap in Java 8

I had a question while reading the HashMap source code in Java 8.
The source code is complicated; how efficient is it, really?
So I wrote a small benchmark that provokes hash collisions.
import java.util.HashMap;
import java.util.Random;

public class Test {
    final int i;

    public Test(int i) {
        this.i = i;
    }

    public static void main(String[] args) {
        HashMap<Test, Test> map = new HashMap<Test, Test>();
        long time;
        Test last;
        Random random = new Random(0);
        int i = 0;
        for (int max = 1; max < 200000; max <<= 1) {
            long c1 = 0, c2 = 0;
            int t = 0;
            for (; i < max; i++, t++) {
                last = new Test(random.nextInt());
                time = System.nanoTime();
                map.put(last, last);
                c1 += (System.nanoTime() - time);

                last = new Test(random.nextInt());
                time = System.nanoTime();
                map.get(last);
                c2 += (System.nanoTime() - time);
            }
            System.out.format("%d\t%d\t%d\n", max, (c1 / t), (c2 / t));
        }
    }

    @Override
    public int hashCode() {
        // Deliberately constant, so that every key collides.
        return 0;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Test)) // also covers obj == null
            return false;
        Test t = (Test) obj;
        return t.i == this.i;
    }
}
I charted the results in Excel.
I am using Java 6u45, Java 7u80, and Java 8u131.
I do not understand why the performance of Java 8 is so much worse here.
I'm trying to write my own HashMap. I would like to learn what makes the Java 8 HashMap better, but I did not find it.

Your test scenario is non-optimal for the Java 8 HashMap. HashMap in Java 8 mitigates collisions by using binary trees for any hash chain longer than a given threshold. However, this only works if the key type is comparable. If it isn't, the overhead of testing whether the optimization is possible actually makes the Java 8 HashMap slower. (The slow-down is more than I expected ... but that's another topic.)
Change your Test class to implement Comparable<Test> ... and you should see that Java 8 performs better than the others when the proportion of hash collisions is large enough.
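For instance, a minimal sketch of that change (the compareTo implementation below is my illustration; any total order consistent with equals() will do):

public class Test implements Comparable<Test> {
    final int i;

    public Test(int i) {
        this.i = i;
    }

    @Override
    public int compareTo(Test other) {
        // Compare the wrapped value; lets HashMap order colliding keys in its tree bins.
        return Integer.compare(this.i, other.i);
    }

    // hashCode() and equals() stay as in the question.
}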
Note that the tree optimization should be considered a defensive measure for the case where the hash function performs badly. The optimization turns O(N) worst-case performance into O(log N) worst-case.
If you want your HashMap instances to have O(1) lookup, you should make sure that you use a good hash function for the key type. If the probability of collision is minimized, the optimization is moot.
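For the Test class above, a good hash function is a one-liner. A sketch (my suggestion, not part of the original answer):

@Override
public int hashCode() {
    // Derive the hash from the field instead of returning a constant,
    // spreading keys across buckets.
    return Integer.hashCode(i);
}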
The source code is so complicated; how efficient is it?
It is explained in the comments in the source code. And probably other places that Google can find for you :-)

Related

Large Map freezes my program

I have a large Map where I store some objects. The Map is large: it has around 200k objects. When I try to run some methods that require reading the map values, the program freezes. When I debug it, it seems that my IDE is 'collecting data' (picture); it has never completed the task.
I have 16GB RAM.
What can I do to speed this up?
I get performance issues around 61 million elements.
import java.util.*;

public class BreakingMaps {
    public static void main(String[] args) {
        int count = Integer.MAX_VALUE >> 5;
        System.out.println(count + " objects tested");
        HashMap<Long, String> set = new HashMap<>(count);
        for (long i = 0; i < count; i++) {
            Long l = i;
            set.put(l, l.toString());
        }
        Random r = new Random();
        for (int i = 0; i < 1000; i++) {
            long k = r.nextInt() % count;
            k = k < 0 ? -k : k;
            System.out.println(set.get(k));
        }
    }
}
I run the program with java -Xms12G -Xmx13G BreakingMaps
I suspect your problem is not the map, but the circumstances surrounding the map. If I write the same program but use a class with hashCode collisions, then the program cannot handle 200k elements.
static class Key {
    final long l;

    public Key(long l) {
        this.l = l;
    }

    @Override
    public int hashCode() {
        // Deliberately constant: every Key collides.
        return 1;
    }

    @Override
    public boolean equals(Object o) {
        if (o instanceof Key) { // instanceof already handles null
            return ((Key) o).l == l;
        }
        return false;
    }
}
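For contrast, a minimal sketch of the corresponding fix (my addition, not from the original answer): derive the hash from the stored value so that keys spread across buckets.

@Override
public int hashCode() {
    // The same recipe java.lang.Long uses: mix the two halves of the long.
    return (int) (l ^ (l >>> 32));
}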
Look at this: as a quick solution you can increase the heap size for your app:
java -Xmx6g myprogram
But it's not a very good one. I'd suggest you try to rework your data-processing approach. Maybe you can apply some filtering before fetching the data to decrease the data size, or implement some of the calculation at the database level.

Using Java 7 HashMap in Java 8

I have updated a Java application to Java 8. The application relies heavily on HashMaps.
When I run the benchmarks, I see unpredictable behavior. For some inputs the application runs faster than before, but for larger inputs it is consistently slower.
I've checked the profiler, and the most time-consuming operation is HashMap.get. I suspect the change is due to the HashMap modifications in Java 8, but it may not be, as I have changed some other parts too.
Is there an easy way to hook the original Java 7 HashMap into my Java 8 application, so that I change only the HashMap implementation and can see whether I still observe the difference in performance?
The following is a minimal program that tries to simulate what my application is doing.
The basic idea is that I need to share nodes in the application. At some point at runtime, a node should be retrieved, or created if it does not already exist, based on some integer properties. The following uses only two integers, but in the real application I have one-, two- and three-integer keys.
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class Test1 {
    static int max_k1 = 500;
    static int max_k2 = 500;
    static Map<Node, Node> map;
    static Random random = new Random();

    public static void main(String[] args) {
        for (int i = 0; i < 15; i++) {
            long start = System.nanoTime();
            run();
            long end = System.nanoTime();
            System.out.println((end - start) / 1_000_000);
        }
    }

    private static void run() {
        map = new HashMap<>();
        for (int i = 0; i < 10_000_000; i++) {
            Node key = new Node(random.nextInt(max_k1), random.nextInt(max_k2));
            Node val = getOrElseUpdate(key);
        }
    }

    private static Node getOrElseUpdate(Node key) {
        Node val;
        if ((val = map.get(key)) == null) {
            val = key;
            map.put(key, val);
        }
        return val;
    }

    private static class Node {
        private int k1;
        private int k2;

        public Node(int k1, int k2) {
            this.k1 = k1;
            this.k2 = k2;
        }

        @Override
        public int hashCode() {
            int result = 17;
            result = 31 * result + k1;
            result = 31 * result + k2;
            return result;
        }

        @Override
        public boolean equals(Object obj) {
            if (this == obj)
                return true;
            if (!(obj instanceof Node))
                return false;
            Node other = (Node) obj;
            return k1 == other.k1 && k2 == other.k2;
        }
    }
}
The benchmarking is primitive, but still, this is the result of 15 runs on Java 8:
8143
7919
7984
7973
7948
7984
7931
7992
8038
7975
7924
7995
6903
7758
7627
and this is for Java 7:
7247
6955
6510
6514
6577
6489
6510
6570
6497
6482
6540
6462
6514
4603
6270
The benchmarking is primitive, so I would appreciate it if someone familiar with JMH or another benchmarking tool could run it; but from what I observe, the results are better for Java 7. Any ideas?
Your hashCode() is very poor. In the example you posted you have 250,000 unique values but only 15,969 unique hash codes. Because of the large number of collisions, Java 8 swaps the bucket lists for trees. In your case this only adds overhead, because many elements not only have the same position in the hash table but also the same hash code, so the tree ends up as a linked list anyway.
There are a couple of ways to fix this (see the sketch below):
Improve your hashCode: return k1 * 500 + k2; resolves the issue.
Use THashMap. Open addressing should work better in the case of collisions.
Make Node implement Comparable. This will be used by HashMap to construct a balanced tree in case of collisions.
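A minimal sketch combining the first and third fixes (my illustration; the original answer gives only the one-line hashCode):

private static class Node implements Comparable<Node> {
    private final int k1;
    private final int k2;

    public Node(int k1, int k2) {
        this.k1 = k1;
        this.k2 = k2;
    }

    @Override
    public int hashCode() {
        // Perfect hash while 0 <= k2 < 500: every (k1, k2) pair maps to a distinct int.
        return k1 * 500 + k2;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (!(obj instanceof Node))
            return false;
        Node other = (Node) obj;
        return k1 == other.k1 && k2 == other.k2;
    }

    @Override
    public int compareTo(Node other) {
        // Lets Java 8's HashMap build a balanced tree if collisions do occur.
        int c = Integer.compare(k1, other.k1);
        return c != 0 ? c : Integer.compare(k2, other.k2);
    }
}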

Groovy collections performance considerations regarding space/time

What is the performance of Groovy's collection methods (regarding space(!) and time) in comparison to plain Java for-loops?
E.g. for these use cases:
sum() vs. for-loop with variable
each() vs. for-loop with variable
inject() vs. for-loop with variable
collect() vs. for-loop with temporary collection
findAll() vs. for-loop with temporary collection
find() vs. for-loop with variable
So, considering those results, is it advisable to prefer for-loops over Groovy collection methods in performance-critical environments (e.g. a Grails web app)? Are there resources on Groovy/Grails performance (optimization)?
Using this GBench test I got the following results for CPU time:

              user     system   cpu      real
forLoop       2578777  67       2578844  2629592
forEachLoop   2027941  47       2027988  2054320
groovySum     3946137  91       3946228  3958705
groovyEach    4703000  0        4703000  4703000
groovyInject  4280792  108      4280900  4352287
import groovyx.gbench.BenchmarkBuilder

def testSize = 10000
def testSet = (0..testSize) as Set

def bm = new BenchmarkBuilder().run {
    'forLoop' {
        def n = 0
        for (int i = 0; i < testSize; i++) {
            n += i
        }
        return n
    }
    'forEachLoop' {
        def n = 0
        for (int i in testSet) {
            n += i
        }
        return n
    }
    'groovySum' {
        def n = testSet.sum()
        return n
    }
    'groovyEach' {
        def n = 0
        testSet.each { n += it } // accumulate (was 'n + it', which discards the result)
        return n
    }
    'groovyInject' {
        // inject's closure takes (accumulator, element), in that order
        def n = testSet.inject(0) { sum, el -> sum + el }
        return n
    }
}
bm.prettyPrint()
Interesting benchmark. No surprise that sum() is slower. Here's what the implementation looks like:
private static Object sum(Iterable self, Object initialValue, boolean first) {
    Object result = initialValue;
    Object[] param = new Object[1];
    for (Object next : self) {
        param[0] = next;
        if (first) {
            result = param[0];
            first = false;
            continue;
        }
        // Dynamic dispatch through the meta-object protocol on every element.
        MetaClass metaClass = InvokerHelper.getMetaClass(result);
        result = metaClass.invokeMethod(result, "plus", param);
    }
    return result;
}
As you can see, it has to be generic, so it uses metaprogramming; the result is a bigger time cost.
The results of the benchmark you pasted are clear and pretty self-descriptive. If you really need better performance, it seems the better idea is to use for loops.

Java Mutable BigInteger Class

I am doing calculations with BigIntegers using a loop that calls multiply() about 100 billion times, and the creation of a new object for every result is making it very slow. I was hoping somebody had written or found a MutableBigInteger class. I found MutableBigInteger in the java.math package, but it is package-private, and when I copy the code into a new class many errors come up, most of which I don't know how to fix.
What implementations exist of a Java class like MutableBigInteger that allows modifying the value in place?
Is there any particular reason you cannot use reflection to gain access to the class?
I was able to do so without any problems; here is the code:
public static void main(String[] args) throws Exception {
    Constructor<?> constructor = Class.forName("java.math.MutableBigInteger").getDeclaredConstructor(int.class);
    constructor.setAccessible(true);
    Object x = constructor.newInstance(new Integer(17));
    Object y = constructor.newInstance(new Integer(19));
    Constructor<?> constructor2 = Class.forName("java.math.MutableBigInteger").getDeclaredConstructor(x.getClass());
    constructor2.setAccessible(true);
    Object z = constructor.newInstance(new Integer(0));
    Object w = constructor.newInstance(new Integer(0));
    Method m = x.getClass().getDeclaredMethod("multiply", new Class[] { x.getClass(), x.getClass() });
    Method m2 = x.getClass().getDeclaredMethod("mul", new Class[] { int.class, x.getClass() });
    m.setAccessible(true);
    m2.setAccessible(true);

    // Slightly faster than BigInteger
    for (int i = 0; i < 200000; i++) {
        m.invoke(x, y, z);
        w = z;
        z = x;
        x = w;
    }

    // Significantly faster than BigInteger and the above loop
    for (int i = 0; i < 200000; i++) {
        m2.invoke(x, 19, x);
    }

    BigInteger n17 = new BigInteger("17");
    BigInteger n19 = new BigInteger("19");
    BigInteger bigX = n17;

    // Slowest
    for (int i = 0; i < 200000; i++) {
        bigX = bigX.multiply(n19);
    }
}
Edit:
I decided to play around with it a bit more, and it does appear that java.math.MutableBigInteger doesn't behave exactly as you would expect.
It operates differently when you multiply, and it throws an exception when it has to grow the internal array while assigning the result to itself, which I guess is fairly expected. Instead, I have to swap the objects around so that the result is always placed into a different MutableBigInteger. After a couple of thousand calculations the overhead from reflection becomes negligible. MutableBigInteger ends up pulling ahead and offers increasingly better performance as the number of operations increases. If you use the 'mul' method with an int primitive as the multiplier, MutableBigInteger runs almost 10 times faster than BigInteger. I guess it really boils down to what value you need to multiply with. Either way, if you ran this calculation "100 billion times" using reflection with MutableBigInteger, it would run faster than BigInteger, because there would be "less" memory allocation and the reflective operations would be cached, removing most of the reflection overhead.
JScience has a class called LargeInteger, which is also immutable, but which they claim has significantly improved performance compared to BigInteger.
http://jscience.org/
Apfloat's Apint might be worth checking out too. http://www.apfloat.org/apfloat_java/
I copied MutableBigInteger, then commented out the bodies of the methods I didn't need, adding a nice
throw new UnsupportedOperationException("...");
when invoked.
Here is how it looks.
In Revisions you can see what's changed from the original java.math.MutableBigInteger.
I also added some convenience methods:

public void init(long val) {}
public MutableBigInteger(long val) {}

// To save the previous value before modifying.
public void addAndBackup(MutableBigInteger addend) {}

// To restore the previous value after modifying.
public void restoreBackup() {}
Here is how I used it:
private BigInteger traverseToFactor(BigInteger offset, BigInteger toFactorize, boolean forward) {
    MutableBigInteger mbiOffset = new MutableBigInteger(offset);
    MutableBigInteger mbiToFactorize = new MutableBigInteger(toFactorize);
    MutableBigInteger blockSize = new MutableBigInteger(list.size);
    if (!MutableBigInteger.ZERO.equals(mbiOffset.remainder(blockSize))) {
        throw new ArithmeticException("Offset not multiple of blockSize");
    }
    LongBigArrayBigList pattern = (LongBigArrayBigList) list.getPattern();
    while (true) {
        MutableBigInteger divisor = new MutableBigInteger(mbiOffset);
        for (long i = 0; i < pattern.size64(); i++) {
            long testOperand = pattern.getLong(i);
            MutableBigInteger.UNSAFE_AUX_VALUE.init(testOperand);
            divisor.addAndBackup(MutableBigInteger.UNSAFE_AUX_VALUE);
            if (MutableBigInteger.ZERO.equals(mbiToFactorize.remainder(divisor))) {
                return divisor.toBigInteger();
            }
            divisor.restoreBackup();
        }
        if (forward) {
            mbiOffset.add(blockSize);
        } else {
            mbiOffset.subtract(blockSize);
        }
        System.out.println(mbiOffset);
    }
}
}

Modular increment with Java's Atomic classes

I was surprised that Java's AtomicInteger and AtomicLong classes don't have methods for modular increments (so that the value wraps around to zero after hitting a limit).
I figure I've got to be missing something obvious. What's the best way to do this?
For example, I want to share a simple int between threads, and I want each thread to be able to increment it, say, mod 10.
I can create a class which uses synchronization/locks, but is there a better, easier way?
Just mod 10 the value when you read from it?
public class AtomicWrappingCounter {
    private final AtomicLong counter = new AtomicLong();
    private final int max;

    public AtomicWrappingCounter(int max) {
        this.max = max;
    }

    public int get() {
        return (int) (counter.get() % max);
    }

    public int incrementAndGet() {
        return (int) (counter.incrementAndGet() % max);
    }
}
Obviously if you might increment this counter more than Long.MAX_VALUE times, you couldn't use this approach, but 9 quintillion is a lot of times to be incrementing (around 292 years at a rate of 1 per nanosecond!).
In Java 8 you can use getAndUpdate (and updateAndGet) on AtomicInteger.
For example, if we want a counter that wraps to zero each time it hits 10:

AtomicInteger counter = new AtomicInteger(0);
// to get & update
counter.getAndUpdate(value -> (value + 1) % 10);
I would think the simplest way is to build a wrapping counter yourself which stores its value in an AtomicInteger, something like:
public class AtomicWrappingCounter {
    private final AtomicInteger value;
    private final int max;

    public AtomicWrappingCounter(int start, int max) {
        this.value = new AtomicInteger(start);
        this.max = max;
    }

    public int get() {
        return value.get();
    }

    /* Simple modification of AtomicInteger.incrementAndGet():
       a classic compare-and-set retry loop. */
    public int incrementAndGet() {
        for (;;) {
            int current = get();
            int next = (current + 1) % max;
            if (value.compareAndSet(current, next))
                return next;
        }
    }
}
Why doesn't AtomicInteger provide something like this itself? Who knows, but I think the intention of the concurrency framework authors was to provide building blocks that you could use to create your own higher-level functions.
What's difficult about adding a synchronized modifier or block to your addModular() method?
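A minimal sketch of that suggestion (my illustration; the class and method names are hypothetical):

public class SynchronizedModularCounter {
    private int value;
    private final int max;

    public SynchronizedModularCounter(int max) {
        this.max = max;
    }

    // synchronized makes the read-modify-write step atomic across threads.
    public synchronized int addModular() {
        value = (value + 1) % max;
        return value;
    }

    public synchronized int get() {
        return value;
    }
}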
The reason the Atomic classes don't have this functionality is that they're based on the specific atomic hardware instructions offered by current CPUs, and modular arithmetic cannot be implemented with those without resorting to locking or to more complex and potentially inefficient algorithms like the one suggested by matt.
