What is the current complexity of the multiply, divide and pow methods in BigInteger? There is no mention of the computational complexity in the documentation (nor anywhere else).
If you look at the code for BigInteger (provided with the JDK), it appears to me that
multiply(..) is O(n^2) (the actual work is done in multiplyToLen(..)). The code for the other methods is a bit more complex, but you can see for yourself.
Note: this is for Java 6. I assume it won't differ in Java 7.
As noted in the comments on #Bozho's answer, Java 8 and onwards use more efficient algorithms to implement multiplication and division than the naive O(N^2) algorithms in Java 7 and earlier.
Java 8 multiplication adaptively uses either the naive O(N^2) long multiplication algorithm, the Karatsuba algorithm or the 3-way Toom-Cook algorithm, depending on the sizes of the numbers being multiplied. The latter two are (respectively) O(N^1.58) and O(N^1.46).
Java 8 division adaptively uses either Knuth's O(N^2) long division algorithm or the Burnikel-Ziegler algorithm. (According to the research paper, the latter is 2·K(N) + O(N log N) for dividing a 2N-digit number by an N-digit number, where K(N) is the Karatsuba multiplication time for two N-digit numbers.)
Likewise some other operations have been optimized.
There is no mention of the computational complexity in the documentation (nor anywhere else).
Some details of the complexity are mentioned in the Java 8 source code. The reason that the javadocs do not mention complexity is that it is implementation specific, both in theory and in practice. (As illustrated by the fact that the complexity of some operations is significantly different between Java 7 and 8.)
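To make the Karatsuba step concrete, here is a minimal sketch of the idea written on top of BigInteger itself. This is my own illustration, not the JDK's implementation (which works directly on the internal int[] magnitude); the class name and the 2048-bit cut-over threshold are arbitrary choices for the sketch.

import java.math.BigInteger;

// Karatsuba multiplication sketch: three half-size multiplications instead of
// four, giving O(N^log2(3)) ~= O(N^1.585) instead of O(N^2).
public class KaratsubaSketch {
    private static final int THRESHOLD_BITS = 2048;   // arbitrary cut-over point

    public static BigInteger multiply(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n <= THRESHOLD_BITS) {
            return x.multiply(y);                      // small operands: plain multiply
        }
        int half = n / 2;
        // Split each operand: x = xh * 2^half + xl, with 0 <= xl < 2^half
        BigInteger xh = x.shiftRight(half);
        BigInteger xl = x.subtract(xh.shiftLeft(half));
        BigInteger yh = y.shiftRight(half);
        BigInteger yl = y.subtract(yh.shiftLeft(half));

        BigInteger p1 = multiply(xh, yh);              // high * high
        BigInteger p2 = multiply(xl, yl);              // low * low
        BigInteger p3 = multiply(xh.add(xl), yh.add(yl));

        // x*y = p1*2^(2*half) + (p3 - p1 - p2)*2^half + p2
        return p1.shiftLeft(2 * half)
                 .add(p3.subtract(p1).subtract(p2).shiftLeft(half))
                 .add(p2);
    }
}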
There is a newer "better" BigInteger class that is not being used by the Sun JDK, out of conservatism and for lack of useful regression tests (huge data sets). The guy who did the better algorithms may have discussed the old BigInteger in the comments.
Here you go http://futureboy.us/temp/BigInteger.java
Measure it. Do operations with linearly increasing operands and draw the times on a diagram.
Don't forget to warm up the JVM (several runs) to get valid benchmark results.
Whether the operations are linear O(n), quadratic O(n^2), polynomial or exponential should then be obvious.
EDIT: While you can give algorithms theoretical bounds, they may not be that useful in practice. First of all, the complexity does not give the constant factor. Some linear or subquadratic algorithms are simply not useful because they eat so much time and resources that they are not adequate for the problem at hand (e.g. Coppersmith-Winograd matrix multiplication).
Then your computation may have all kinds of quirks that you can only detect by experiment. There are preparatory algorithms which do nothing to solve the problem itself but speed up the real solver (matrix preconditioning). There are suboptimal implementations. With longer lengths, your speed may drop dramatically (cache misses, memory movement etc.). So for practical purposes, I advise doing experiments.
The best thing is to double each time the length of the input and compare the times.
And yes, you can find out whether an algorithm is n^1.5 or n^1.8. Quadruple the input length: a quadratic algorithm takes 16 times as long, an n^1.5 algorithm only 8 times as long (half of that), and an n^1.8 algorithm about 12 times as long. More generally, the exponent is roughly log(t2/t1) / log(n2/n1).
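A rough sketch of such an experiment for BigInteger.multiply (not a rigorous harness like JMH; the iteration counts, operand sizes and class name are arbitrary choices for this sketch):

import java.math.BigInteger;
import java.util.Random;

// Times BigInteger.multiply while doubling the operand size, then estimates
// the exponent e from t2/t1 ~ 2^e.
public class MultiplyBench {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        BigInteger sink = BigInteger.ZERO;   // keep results "live" so the JIT can't drop them
        long previous = 0;
        for (int bits = 1 << 14; bits <= 1 << 20; bits <<= 1) {
            BigInteger a = new BigInteger(bits, rnd);
            BigInteger b = new BigInteger(bits, rnd);
            for (int i = 0; i < 20; i++) {   // warm-up so the JIT compiles the hot path
                sink = sink.xor(a.multiply(b));
            }
            long start = System.nanoTime();
            for (int i = 0; i < 20; i++) {
                sink = sink.xor(a.multiply(b));
            }
            long elapsed = (System.nanoTime() - start) / 20;
            double exponent = previous == 0
                    ? Double.NaN
                    : Math.log((double) elapsed / previous) / Math.log(2);
            System.out.printf("%8d bits: %12d ns  (estimated exponent %.2f)%n",
                              bits, elapsed, exponent);
            previous = elapsed;
        }
        System.out.println("(ignore: " + sink.bitLength() + ")");
    }
}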
In the program I am writing, I have used the following method to check whether a number is a perfect square or not.
// Checks whether x is a perfect square
public static boolean issqr(BigInteger x) {
    BigInteger a = x.sqrt();         // integral part of the square root (Java 9+)
    return x.equals(a.multiply(a));  // x is a perfect square iff a*a == x
}
In the above code, the following methods from the BigInteger class are used :-
BigInteger multiply(BigInteger num) : Returns the product of this and num.
boolean equals(Object obj) : Checks for equality between this and obj.
BigInteger sqrt() : Returns the integral part of the square root of this.
I believe that the sqrt() method in Java uses Newton's method, which would model a binary search algorithm. The issqr(BigInteger x) method above must have the same complexity as the sqrt() method in BigInteger class. However, on comparing the run times for different values of x in the issqr(BigInteger x) method, it looks as though the run time is growing exponentially instead.
What is the reason for a binary search algorithm to have exponential run time complexity? Does it have anything to do with memory and the immutability of BigInteger datatype? Is there a more efficient algorithm to check if a number is a perfect square? Thank you in advance.
TL;DR - it is complicated!
According to Emil Jeřábek in https://cstheory.stackexchange.com/a/9709
The square root of an N-digit number can be computed in time O(M(N)) using e.g. Newton's iteration, where M(N) is the time needed to multiply two N-digit integers. The current best bound on M(N) is N log N 2^O(log* N), using Fürer's algorithm.
So the theoretical complexity of the complete check would be O(M(N)) + O(M(N/2)) which reduces to O(M(N)).
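For illustration, here is a minimal sketch of Newton's iteration for the integer square root of a BigInteger. This is the textbook recurrence cited above, not the JDK's MutableBigInteger.sqrt() code; the class and method names are made up for the sketch.

import java.math.BigInteger;

// Newton's iteration for floor(sqrt(x)) on a non-negative BigInteger.
public final class IntegerSqrt {
    public static BigInteger isqrt(BigInteger x) {
        if (x.signum() < 0) throw new ArithmeticException("negative argument");
        if (x.signum() == 0) return BigInteger.ZERO;
        // Initial guess 2^ceil(bitLength/2) >= sqrt(x), so the iteration
        // decreases monotonically until it converges.
        BigInteger guess = BigInteger.ONE.shiftLeft((x.bitLength() + 1) / 2);
        while (true) {
            // Newton step for f(y) = y^2 - x:  y' = (y + x/y) / 2
            BigInteger next = guess.add(x.divide(guess)).shiftRight(1);
            if (next.compareTo(guess) >= 0) {
                return guess;        // converged: guess == floor(sqrt(x))
            }
            guess = next;
        }
    }
}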
In practice, we need to look at how BigInteger is implemented. According to comments in the Java 11 source code:
"The implementation [of MutableBigInteger.sqrt()] is based on the material in Henry S. Warren, Jr., Hacker's Delight (2nd ed.) (Addison Wesley, 2013), 279-282."
According to the source code, Java 11 BigInteger.multiply(BigInteger) implementation uses:
a naive "grade school" algorithm for small numbers,
the Karatsuba algorithm, for intermediate numbers, or
an "optimal" 3-way Toom-Cook algorithm for really large numbers.
The latter is described in Towards Optimal Toom-Cook Multiplication for Univariate and Multivariate Polynomials in Characteristic 2 and 0. by Marco BODRATO; In C.Carlet and B.Sunar, Eds., "WAIFI'07 proceedings".
I don't have access to the references to check what they say about the complexity of 3-way Toom-Cook or Warren's algorithm respectively. However, Wikipedia says that Karatsuba multiplication for N-digit numbers has an asymptotic bound of Θ(N**log2(3)).
Based on that, we can say that checking if an N-digit number is a perfect square using BigInteger is likely to be O(N**log2(3)) == O(N**~1.585) or better.
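If you want to check that growth rate empirically, here is a rough timing sketch (it assumes the issqr method from the question and Java 9+ for BigInteger.sqrt(); sizes and iteration counts are arbitrary). For an O(N^1.585) bound, the time should grow by roughly 3x per doubling of the bit length, rather than by a constant factor per added bit, which is what exponential growth would look like.

import java.math.BigInteger;
import java.util.Random;

// Times issqr on perfect squares whose bit length doubles each round.
public class IsSqrBench {
    public static boolean issqr(BigInteger x) {
        BigInteger a = x.sqrt();
        return x.equals(a.multiply(a));
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        for (int bits = 1 << 12; bits <= 1 << 17; bits <<= 1) {
            BigInteger r = new BigInteger(bits, rnd);
            BigInteger square = r.multiply(r);      // guaranteed perfect square
            for (int i = 0; i < 10; i++) {          // warm-up
                if (!issqr(square)) throw new AssertionError();
            }
            long start = System.nanoTime();
            for (int i = 0; i < 10; i++) {
                if (!issqr(square)) throw new AssertionError();
            }
            System.out.printf("%8d-bit squares: %12d ns%n",
                              bits * 2, (System.nanoTime() - start) / 10);
        }
    }
}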
I was trying to get all primes below 600851475143.
I was using Sieve of Eratosthenes for this.
This requires me to create a boolean array of that huge size.
Bad idea, you can run out of memory.
Is there any other way? I tried using a String, with each index holding 0 or 1 to represent true or false, but the indexOf method also returns an int.
Next, I am using a 2D array for my problem.
Any other better way to store such a huge array?
The memory requirement for 600851475143 booleans is at best (one bit each) about 70GB. This isn't feasible. You need to either use compression as suggested by Stephan, or find a different algorithm for calculating the primes.
I had a similar problem and I used a bit set (basically setting 1 or 0 at the desired offset, in order). I recommend EWAHCompressedBitmap, which will also compress your bit set.
EDIT
As Alan said, the BitSet will occupy 70GB of memory, but you can do another thing: use multiple BitSets (consecutive ones, so that you can calculate the absolute position) and load into memory just the BitSet that you need at that moment, something like a lazy load. That way you keep control of how much memory is used.
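A minimal sketch of that idea, using a made-up ChunkedBitSet class: the global index is split into a chunk number and an offset, and each chunk is only allocated when first touched. Persisting and evicting chunks to disk, which the lazy-load idea would need for a full sieve, is not shown here.

import java.util.BitSet;

// Splits a huge bit index space into consecutive BitSet chunks.
public class ChunkedBitSet {
    private static final int CHUNK_BITS = 1 << 30;     // ~1 Gbit => 128 MB per chunk
    private final BitSet[] chunks;

    public ChunkedBitSet(long totalBits) {
        this.chunks = new BitSet[(int) ((totalBits + CHUNK_BITS - 1) / CHUNK_BITS)];
    }

    private BitSet chunkFor(long index) {
        int c = (int) (index / CHUNK_BITS);
        if (chunks[c] == null) {
            chunks[c] = new BitSet(CHUNK_BITS);         // lazy allocation (or load from disk)
        }
        return chunks[c];
    }

    public void set(long index) {
        chunkFor(index).set((int) (index % CHUNK_BITS));
    }

    public boolean get(long index) {
        return chunkFor(index).get((int) (index % CHUNK_BITS));
    }
}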
It's not really practical to remember for each number whether it is prime or not over such a large range (the sieve is a very slow approach for large numbers in general).
From this link you get an idea how many primes there are to be expected smaller than X. For your 600 billion range you can expect roughly 20 billion primes to exist within that range. Storing them as a long[] would require about 160GB of memory... that's notably more than the suggested 70GB for storing a single bit for each number, and half of that if you exclude even numbers (2 is the only even prime).
For a desktop computer, 35GB in memory may be a bit much, but a good workstation can have that much RAM. I would try a two-dimensional array with bit shifting/masking.
I still would expect your sieve code to run for a considerable amount of time (anything from days to years). I suggest you investigate more advanced prime detection methods than a plain sieve.
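A sketch of what such a two-dimensional bit array could look like, addressing more than Integer.MAX_VALUE bits with shifts and masks. The class name and row size are arbitrary choices for this sketch, and the odd-only storage that would halve the memory is omitted.

// Flat bit array larger than Integer.MAX_VALUE bits, backed by long[][].
public class BigBitArray {
    private static final int WORDS_PER_ROW = 1 << 26;   // 2^26 longs = 512 MB per row
    private final long[][] words;

    public BigBitArray(long bits) {
        long wordCount = (bits + 63) >>> 6;
        int rows = (int) ((wordCount + WORDS_PER_ROW - 1) / WORDS_PER_ROW);
        words = new long[rows][];
        for (int r = 0; r < rows; r++) {
            long remaining = wordCount - (long) r * WORDS_PER_ROW;
            words[r] = new long[(int) Math.min(WORDS_PER_ROW, remaining)];
        }
    }

    public void set(long bit) {
        long word = bit >>> 6;                           // which 64-bit word
        words[(int) (word / WORDS_PER_ROW)][(int) (word % WORDS_PER_ROW)]
                |= 1L << (int) (bit & 63);               // which bit inside it
    }

    public boolean get(long bit) {
        long word = bit >>> 6;
        return (words[(int) (word / WORDS_PER_ROW)][(int) (word % WORDS_PER_ROW)]
                & (1L << (int) (bit & 63))) != 0;
    }
}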
You could use HotSpot's internal sun.misc.Unsafe API to allocate a bigger array. I wrote a blog post on how to simulate an array with it. However, it's not an official Java API, so it qualifies as a hack.
Use BitSet. You can then set the bit at any index. 600851475143 is less than 2^40, so the value itself needs only 40 bits (in practice it will occupy 64 bits, as it is held in a long).
You can in fact go up to 2^63, which is massive for most purposes.
http://en.wikipedia.org/wiki/Binary_GCD_algorithm
This Wikipedia entry has a very dissatisfying implication: the Binary GCD algorithm was at one time as much as 60% more efficient than the standard Euclid Algorithm, but as late as 1998 Knuth concluded that there was only a 15% gain in efficiency on his contemporary computers.
Well, another 15 years have passed... how do these two algorithms stack up today with advances in hardware?
Does the Binary GCD continue to outperform the Euclidean Algorithm in low-level languages but languish behind due to its complexity in higher level languages like Java? Or is the difference moot in modern computing?
Why do I care you might ask? I just so happen to have to process like 100 billion of these today :) Here's a toast to living in an era of computing (poor Euclid).
The answer is of course "it depends". It depends on hardware, compiler, the specific implementation, and whatever I forgot. On machines with slow division, binary GCD tends to outperform the Euclidean algorithm. I benchmarked it a couple of years ago on a Pentium 4 in C, Java and a few other languages. Overall in that benchmark, binary GCD with a 256-element lookup table beat the Euclidean algorithm by a factor of between 1.6 and nearly 3. Euclidean came closer when, instead of dividing immediately, a few rounds of subtraction were performed first. I don't remember the figures, but binary was still considerably faster.
If the machine has fast division, things may be different, since the Euclidean algorithm needs fewer operations. If the difference of cost between division and subtraction/shifts is small enough, binary will be slower. Which one is better in your circumstances, you have to find out by benchmarking yourself.
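For reference, here are textbook versions of both algorithms in plain Java that you can drop into your own benchmark. This is the plain binary GCD without the 256-element lookup-table acceleration mentioned above, and inputs are assumed to be non-negative.

// Textbook binary GCD and division-based Euclidean GCD for non-negative longs.
public class GcdDemo {
    static long binaryGcd(long a, long b) {
        if (a == 0) return b;
        if (b == 0) return a;
        int shift = Long.numberOfTrailingZeros(a | b);  // common factors of two
        a >>>= Long.numberOfTrailingZeros(a);           // make a odd
        do {
            b >>>= Long.numberOfTrailingZeros(b);       // make b odd
            if (a > b) { long t = a; a = b; b = t; }    // keep a <= b
            b -= a;                                     // subtract instead of divide
        } while (b != 0);
        return a << shift;                              // restore the common power of two
    }

    static long euclidGcd(long a, long b) {
        while (b != 0) { long t = a % b; a = b; b = t; }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(binaryGcd(1071, 462) + " " + euclidGcd(1071, 462)); // both print 21
    }
}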
Per the Java documentation, the hash code for a String object is computed as:
s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
using int arithmetic, where s[i] is the
ith character of the string, n is the length of
the string, and ^ indicates exponentiation.
Why is 31 used as a multiplier?
I understand that the multiplier should be a relatively large prime number. So why not 29, or 37, or even 97?
According to Joshua Bloch's Effective Java (a book that can't be recommended enough, and which I bought thanks to continual mentions on stackoverflow):
The value 31 was chosen because it is an odd prime. If it were even and the multiplication overflowed, information would be lost, as multiplication by 2 is equivalent to shifting. The advantage of using a prime is less clear, but it is traditional. A nice property of 31 is that the multiplication can be replaced by a shift and a subtraction for better performance: 31 * i == (i << 5) - i. Modern VMs do this sort of optimization automatically.
(from Chapter 3, Item 9: Always override hashcode when you override equals, page 48)
Goodrich and Tamassia computed from over 50,000 English words (formed as the union of the word lists provided in two variants of Unix) that using the constants 31, 33, 37, 39, and 41 will produce fewer than 7 collisions in each case. This may be the reason that so many Java implementations choose such constants.
See section 9.2 Hash Tables (page 522) of Data Structures and Algorithms in Java.
On (mostly) old processors, multiplying by 31 can be relatively cheap. On an ARM, for instance, it is only one instruction:
RSB r1, r0, r0, ASL #5 ; r1 := - r0 + (r0<<5)
Most other processors would require a separate shift and subtract instruction. However, if your multiplier is slow this is still a win. Modern processors tend to have fast multipliers so it doesn't make much difference, so long as 32 goes on the correct side.
It's not a great hash algorithm, but it's good enough and better than the 1.0 code (and very much better than the 1.0 spec!).
By multiplying, bits are shifted to the left. This uses more of the available space of hash codes, reducing collisions.
By not using a power of two, the lower-order, rightmost bits are populated as well, to be mixed with the next piece of data going into the hash.
The expression n * 31 is equivalent to (n << 5) - n.
You can read Bloch's original reasoning under "Comments" in http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4045622. He investigated the performance of different hash functions in regards to the resulting "average chain size" in a hash table. P(31) was one of the common functions during that time which he found in K&R's book (but even Kernighan and Ritchie couldn't remember where it came from). In the end he basically had to choose one and so he took P(31) since it seemed to perform well enough. Even though P(33) was not really worse and multiplication by 33 is equally fast to calculate (just a shift by 5 and an addition), he opted for 31 since 33 is not a prime:
Of the remaining
four, I'd probably select P(31), as it's the cheapest to calculate on a RISC
machine (because 31 is the difference of two powers of two). P(33) is
similarly cheap to calculate, but it's performance is marginally worse, and
33 is composite, which makes me a bit nervous.
So the reasoning was not as rational as many of the answers here seem to imply. But we're all good in coming up with rational reasons after gut decisions (and even Bloch might be prone to that).
Actually, 37 would work pretty well! z := 37 * x can be computed as y := x + 8 * x; z := x + 4 * y. Both steps correspond to one x86 LEA instruction each, so this is extremely fast.
In fact, multiplication with the even-larger prime 73 could be done at the same speed by setting y := x + 8 * x; z := x + 8 * y.
Using 73 or 37 (instead of 31) might be better, because it leads to denser code: the two LEA instructions only take 6 bytes vs. the 7 bytes for move+shift+subtract for the multiplication by 31. One possible caveat is that the 3-argument LEA instructions used here became slower on Intel's Sandy Bridge architecture, with an increased latency of 3 cycles.
Moreover, 73 is Sheldon Cooper's favorite number.
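For what it's worth, the two identities behind those LEA sequences are easy to sanity-check in plain Java; this snippet is just my own illustration, not part of the answer above.

// Checks 37*x == x + 4*(x + 8*x) and 73*x == x + 8*(x + 8*x), which hold
// exactly and therefore also hold under int overflow (mod 2^32).
public class LeaIdentities {
    public static void main(String[] args) {
        java.util.Random rnd = new java.util.Random();
        for (int i = 0; i < 1_000_000; i++) {
            int x = rnd.nextInt();
            int y = x + 8 * x;                          // first LEA: y = 9*x
            if (37 * x != x + 4 * y) throw new AssertionError("37*x mismatch");
            if (73 * x != x + 8 * y) throw new AssertionError("73*x mismatch");
        }
        System.out.println("identities hold");
    }
}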
Neil Coffey explains why 31 is used under Ironing out the bias.
Basically using 31 gives you a more even set-bit probability distribution for the hash function.
From JDK-4045622, where Joshua Bloch describes the reasons why that particular (new) String.hashCode() implementation was chosen
The table below summarizes the performance of the various hash
functions described above, for three data sets:
1) All of the words and phrases with entries in Merriam-Webster's
2nd Int'l Unabridged Dictionary (311,141 strings, avg length 10 chars).
2) All of the strings in /bin/, /usr/bin/, /usr/lib/, /usr/ucb/
and /usr/openwin/bin/* (66,304 strings, avg length 21 characters).
3) A list of URLs gathered by a web-crawler that ran for several
hours last night (28,372 strings, avg length 49 characters).
The performance metric shown in the table is the "average chain size"
over all elements in the hash table (i.e., the expected value of the
number of key compares to look up an element).
Webster's Code Strings URLs
--------- ------------ ----
Current Java Fn. 1.2509 1.2738 13.2560
P(37) [Java] 1.2508 1.2481 1.2454
P(65599) [Aho et al] 1.2490 1.2510 1.2450
P(31) [K+R] 1.2500 1.2488 1.2425
P(33) [Torek] 1.2500 1.2500 1.2453
Vo's Fn 1.2487 1.2471 1.2462
WAIS Fn 1.2497 1.2519 1.2452
Weinberger's Fn(MatPak) 6.5169 7.2142 30.6864
Weinberger's Fn(24) 1.3222 1.2791 1.9732
Weinberger's Fn(28) 1.2530 1.2506 1.2439
Looking at this table, it's clear that all of the functions except for
the current Java function and the two broken versions of Weinberger's
function offer excellent, nearly indistinguishable performance. I
strongly conjecture that this performance is essentially the
"theoretical ideal", which is what you'd get if you used a true random
number generator in place of a hash function.
I'd rule out the WAIS function as its specification contains pages of random numbers, and its performance is no better than any of the
far simpler functions. Any of the remaining six functions seem like
excellent choices, but we have to pick one. I suppose I'd rule out
Vo's variant and Weinberger's function because of their added
complexity, albeit minor. Of the remaining four, I'd probably select
P(31), as it's the cheapest to calculate on a RISC machine (because 31
is the difference of two powers of two). P(33) is similarly cheap to
calculate, but it's performance is marginally worse, and 33 is
composite, which makes me a bit nervous.
Josh
Bloch doesn't quite go into this, but the rationale I've always heard/believed is that this is basic algebra. Hashes boil down to multiplication and modulus operations, which means that you never want to use numbers with common factors if you can help it. In other words, relatively prime numbers provide an even distribution of answers.
The numbers involved in using a hash are typically:
modulus of the data type you put it into
(2^32 or 2^64)
modulus of the bucket count in your hashtable (varies; in Java it used to be prime, now it's 2^n)
multiply or shift by a magic number in your mixing function
The input value
You really only get to control a couple of these values, so a little extra care is due.
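As a rough sketch of where those numbers show up during a lookup (simplified; the real java.util.HashMap additionally spreads the high bits into the low bits before masking):

// Simplified illustration of hash -> bucket selection.
public class BucketDemo {
    public static void main(String[] args) {
        Object[] table = new Object[16];          // bucket count: a power of two
        String key = "example";
        int h = key.hashCode();                   // mixing function with the magic multiplier 31;
                                                  // int arithmetic gives an implicit mod 2^32
        int bucket = h & (table.length - 1);      // cheap "modulus of the bucket count"
        System.out.println("hash=" + h + " -> bucket " + bucket + " of " + table.length);
    }
}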
In the latest version of the JDK, 31 is still used. https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/lang/String.html#hashCode()
The purpose of a string hash is to be:
unique (see the ^ operator in the hashcode calculation documentation; it helps with uniqueness)
cheap to calculate
31 fits in an 8-bit (= 1 byte) register, is a prime number, and is an odd number.
Multiplying by 31 is a shift left by 5 followed by subtracting the value itself, so it needs only cheap resources.
Java String hashCode() and 31
This is because 31 has a nice property: its multiplication can be replaced by a bitwise shift and a subtraction, which is faster than the standard multiplication:
31 * i == (i << 5) - i
I'm not sure, but I would guess they tested some sample of prime numbers and found that 31 gave the best distribution over some sample of possible Strings.
A big expectation from hash functions is that their result's uniform randomness survives an operation such as hash(x) % N where N is an arbitrary number (and in many cases, a power of two), one reason being that such operations are used commonly in hash tables for determining slots. Using prime number multipliers when computing the hash decreases the probability that your multiplier and the N share divisors, which would make the result of the operation less uniformly random.
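A toy illustration of that point, with a made-up key set: a multiplier of 32 shares factors with a 64-bucket table and leaves most buckets unused, while 31 spreads the keys over far more of them.

import java.util.HashSet;
import java.util.Set;

// Counts how many of 64 buckets are actually hit by 10,000 generated keys
// when the hash multiplier is 32 (shares factors with the table size) vs 31.
public class MultiplierDemo {
    static int hash(String s, int multiplier) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = h * multiplier + s.charAt(i);
        }
        return h;
    }

    public static void main(String[] args) {
        int buckets = 64;                               // a power of two, as in many hash tables
        for (int multiplier : new int[] {32, 31}) {
            Set<Integer> used = new HashSet<>();
            for (int k = 0; k < 10_000; k++) {
                used.add(Math.floorMod(hash("key" + k, multiplier), buckets));
            }
            System.out.println("multiplier " + multiplier + ": "
                    + used.size() + " of " + buckets + " buckets used");
        }
    }
}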
Others have pointed out the nice property that multiplication by 31 can be done by a shift and a subtraction. I just want to point out that there is a mathematical term for such primes: Mersenne prime
All Mersenne primes are one less than a power of two, so we can write them as:
p = 2^n - 1
Multiplying x by p:
x * p = x * (2^n - 1) = x * 2^n - x = (x << n) - x
Shifts (SAL/SHL) and subtractions (SUB) are generally faster than multiplications (MUL) on many machines. See instruction tables from Agner Fog
That's why GCC seems to optimize multiplications by Mersenne primes by replacing them with shifts and subs, see here.
However, in my opinion, such a small prime is a bad choice for a hash function. With a relatively good hash function, you would expect to have randomness at the higher bits of the hash. However, with the Java hash function, there is almost no randomness at the higher bits with shorter strings (and still highly questionable randomness at the lower bits). This makes it more difficult to build efficient hash tables. See this nice trick you couldn't do with the Java hash function.
Some answers mention that they believe it is good that 31 fits into a byte. This is actually useless since:
(1) We execute shifts instead of multiplications, so the size of the multiplier does not matter.
(2) As far as I know, there is no specific x86 instruction to multiply an 8-byte value by a 1-byte value, so you would have needed to convert "31" to an 8-byte value anyway, even if you were multiplying. See here, you multiply entire 64-bit registers.
(And 127 is actually the largest mersenne prime that could fit in a byte.)
Does a smaller value increase randomness in the middle-lower bits? Maybe, but it also seems to greatly increase the possible collisions :).
One could list many different issues but they generally boil down to two core principles not being fulfilled well: Confusion and Diffusion
But is it fast? Probably, since it doesn't do much. However, if performance is really the focus here, one character per loop is quite inefficient. Why not do 4 characters at a time (8 bytes) per loop iteration for longer strings, like this? Well, that would be difficult to do with the current definition of hash where you need to multiply every character individually (please tell me if there is a bit hack to solve this :D).