Hi, I want to multiply 2 big integers in the most time-efficient way. I am currently using the Karatsuba algorithm. Can anyone suggest a more optimized way or algorithm to do it?
Thanks
public static BigInteger karatsuba(BigInteger x, BigInteger y) {
    // cutoff to brute force
    int N = Math.max(x.bitLength(), y.bitLength());
    System.out.println(N); // debug output: operand size at each recursion
    if (N <= 2000) return x.multiply(y); // optimize this parameter

    // number of bits divided by 2, rounded up
    N = (N / 2) + (N % 2);

    // x = a + 2^N b,   y = c + 2^N d
    BigInteger b = x.shiftRight(N);
    BigInteger a = x.subtract(b.shiftLeft(N));
    BigInteger d = y.shiftRight(N);
    BigInteger c = y.subtract(d.shiftLeft(N));

    // compute sub-expressions
    BigInteger ac = karatsuba(a, c);
    BigInteger bd = karatsuba(b, d);
    BigInteger abcd = karatsuba(a.add(b), c.add(d));

    return ac.add(abcd.subtract(ac).subtract(bd).shiftLeft(N)).add(bd.shiftLeft(2 * N));
}
The version of BigInteger in JDK 8 switches between the naive algorithm, the Toom-Cook algorithm, and Karatsuba depending on the size of the input to get excellent performance.
Complexity and actual speed are very different things in practice, because of the constant factors involved in the O notation. There is always a point where complexity prevails, but it may very well be out of the range (of input size) you are working with. The implementation details (level of optimization) of an algorithm also directly affect those constant factors.
My suggestion is to try a few different algorithms, preferably from a library that the authors already spent some effort optimizing, and actually measure and compare their speeds on your inputs.
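For example, a minimal timing harness along these lines (karatsuba here stands for the method from the question; for serious measurements add JIT warm-up runs or use a tool like JMH):

import java.math.BigInteger;
import java.util.Random;

public class MultiplyTiming {
    public static void main(String[] args) {
        Random random = new Random(42);
        for (int bits : new int[] { 10_000, 100_000, 1_000_000 }) {
            BigInteger x = new BigInteger(bits, random);
            BigInteger y = new BigInteger(bits, random);

            long t0 = System.nanoTime();
            BigInteger viaBuiltin = x.multiply(y);
            long t1 = System.nanoTime();
            BigInteger viaKaratsuba = karatsuba(x, y);
            long t2 = System.nanoTime();

            System.out.printf("%9d bits: multiply %.1f ms, karatsuba %.1f ms, equal: %b%n",
                    bits, (t1 - t0) / 1e6, (t2 - t1) / 1e6, viaBuiltin.equals(viaKaratsuba));
        }
    }

    // Stand-in so the harness compiles; paste the karatsuba method from the question here.
    static BigInteger karatsuba(BigInteger x, BigInteger y) {
        return x.multiply(y);
    }
}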
Regarding SPOJ, don't forget the possibility that the main problem lies elsewhere (i.e. not in the multiplication speed of large integers).
Related
So I need to take the square root of a BigInteger in pre-Java 9, and I found the function below to do that. I do understand the code, but I don't really get why it is the way it is; I guess I don't really get the math behind it. Like, why is (n / 32 + 8) used? Why is mid calculated the way it is? Etc.
public static BigInteger sqrt(BigInteger n) {
    BigInteger a = BigInteger.ONE;
    BigInteger b = n.shiftRight(5).add(BigInteger.valueOf(8)); // n / 32 + 8
    while (b.compareTo(a) >= 0) {
        BigInteger mid = a.add(b).shiftRight(1); // (a + b) / 2
        if (mid.multiply(mid).compareTo(n) > 0) {
            b = mid.subtract(BigInteger.ONE);
        } else {
            a = mid.add(BigInteger.ONE);
        }
    }
    return a.subtract(BigInteger.ONE);
}
EDIT: James Reinstate Monica Polk is correct, this is not the Babylonian Method but rather the Bisection method. I did not look at the code carefully enough before answering. Please see his answer as it is more accurate than mine.
This looks to be the Babylonian Method for approximating square roots. (n/32 + 8) is just used as a "seed", as providing a sane starting value can provide a better approximation in fewer iterations than just picking any number.
The algorithm is the bisection method applied to finding the zero of the polynomial x^2 - n = 0. Why is (n / 32 + 8) used as a seed? I have no idea, as it is a rather poor approximation. A much better approximation that is almost as cheap to compute is n.shiftRight(n.bitLength()/2);
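For illustration, here is the same bisection loop seeded that way (isqrt is a hypothetical helper name; the initial bracket is widened by a factor of two in each direction so it is guaranteed to contain the root):

import java.math.BigInteger;

public class BigSqrt {
    // floor(sqrt(n)) by bisection, seeded near sqrt(n) instead of n/32 + 8.
    static BigInteger isqrt(BigInteger n) {
        if (n.signum() <= 0) return BigInteger.ZERO;
        // n >> (bitLength/2) is within roughly a factor of two of sqrt(n),
        // so [seed/2, 2*seed + 2] safely brackets the true root.
        BigInteger seed = n.shiftRight(n.bitLength() / 2);
        BigInteger a = seed.shiftRight(1);
        BigInteger b = seed.shiftLeft(1).add(BigInteger.valueOf(2));
        while (b.compareTo(a) >= 0) {
            BigInteger mid = a.add(b).shiftRight(1);
            if (mid.multiply(mid).compareTo(n) > 0) {
                b = mid.subtract(BigInteger.ONE);
            } else {
                a = mid.add(BigInteger.ONE);
            }
        }
        return a.subtract(BigInteger.ONE);
    }
}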
I am writing a program which requires multiplication of very big numbers (a million digits) at some point. Can anyone suggest a Java library for fast multiplication of big numbers? I have found this, but I'm not sure if it is the right solution, so I'm trying to find another to try.
The solution you link to — Schönhage-Strassen — is indeed a good way to make multiplying very very large BigIntegers faster.
Due to its big overhead, it is not faster for much smaller BigIntegers, so you can use it recursively down to a certain threshold (you'll have to find out empirically what that threshold is) and then switch to BigInteger's own multiplication, which already implements the Karatsuba and Toom-Cook divide-and-conquer algorithms (since Java 8, IIRC), each in turn applied recursively down to certain thresholds.
Forget the answers telling you to use Karatsuba. Java already implements Karatsuba, as well as the even faster (for very large BigIntegers) Toom-Cook algorithm, and for such huge values both are a lot slower than Schönhage-Strassen.
Conclusion
Again: for small values, use simple schoolbook multiplication (but using – unsigned – integers as "digits" or "bigits"). For much larger values, use Karatsuba (a recursive divide-and-conquer algorithm that breaks large BigIntegers down into several smaller ones and multiplies those). For even larger BigIntegers, use Toom-Cook (also divide-and-conquer). For very large BigIntegers, use Schönhage-Strassen (IIRC, an FFT-based algorithm). Note that Java already implements schoolbook (or "base case"), Karatsuba, and Toom-Cook multiplication for differently sized BigIntegers. It does not implement Schönhage-Strassen yet.
But even with all these optimizations, multiplications of very huge values tend to be slow, so don't expect miracles.
Note:
The Schönhage-Strassen implementation you link to falls back to Karatsuba for smaller sub-products. Instead of Karatsuba, fall back to the much-improved (since Christmas Day 2012) implementation in BigInteger and simply use BigInteger::multiply() directly. You may also have to change the thresholds used.
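A rough sketch of that dispatch, assuming a schoenhageStrassen(x, y) method supplied by the linked implementation (the method name and the threshold value are placeholders; the threshold has to be tuned empirically on your hardware):

import java.math.BigInteger;

public class HybridMultiply {

    // Placeholder value; tune empirically.
    private static final int SS_THRESHOLD_BITS = 500_000;

    static BigInteger multiply(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n < SS_THRESHOLD_BITS) {
            // BigInteger.multiply already dispatches to schoolbook, Karatsuba
            // or Toom-Cook internally, depending on the operand size (Java 8+).
            return x.multiply(y);
        }
        // Assumed to be provided by the linked library; its recursive
        // sub-multiplications should call back into this method.
        return schoenhageStrassen(x, y);
    }

    static BigInteger schoenhageStrassen(BigInteger x, BigInteger y) {
        throw new UnsupportedOperationException("supplied by the linked library");
    }
}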
As far as I can tell, the Karatsuba algorithm can be implemented in this manner:
This link provides a C++ implementation of the same; it can easily be adapted to Java as well.
import java.math.BigInteger;
import java.util.Random;

class Karatsuba {

    public static BigInteger karatsuba(BigInteger x, BigInteger y) {
        // cutoff to brute force
        int N = Math.max(x.bitLength(), y.bitLength());
        if (N <= 2000) return x.multiply(y); // optimize this parameter

        // number of bits divided by 2, rounded up
        N = (N / 2) + (N % 2);

        // x = a + 2^N b,   y = c + 2^N d
        BigInteger b = x.shiftRight(N);
        BigInteger a = x.subtract(b.shiftLeft(N));
        BigInteger d = y.shiftRight(N);
        BigInteger c = y.subtract(d.shiftLeft(N));

        // compute sub-expressions
        BigInteger ac = karatsuba(a, c);
        BigInteger bd = karatsuba(b, d);
        BigInteger abcd = karatsuba(a.add(b), c.add(d));

        return ac.add(abcd.subtract(ac).subtract(bd).shiftLeft(N)).add(bd.shiftLeft(2 * N));
    }

    public static void main(String[] args) {
        Random random = new Random();
        int N = Integer.parseInt(args[0]);
        BigInteger a = new BigInteger(N, random);
        BigInteger b = new BigInteger(N, random);

        long start = System.currentTimeMillis();
        BigInteger c = karatsuba(a, b);
        long stop = System.currentTimeMillis();
        System.out.println(stop - start);

        start = System.currentTimeMillis();
        BigInteger d = a.multiply(b);
        stop = System.currentTimeMillis();
        System.out.println(stop - start);

        System.out.println(c.equals(d));
    }
}
Hope this answers your question well.
So I am attempting to implement Pollard's Rho factoring algorithm in Java, using the BigInteger class to support very large integers. The code mostly works, but it cannot find a factor for 4 or 8 (which should be 2). Currently I have capped it at 10,000,000 cycles through the algorithm, and it still can't find 2 as a factor. a is generated randomly (between 0 and 1000). Is this just a flaw in the Pollard Rho algorithm, or is there a mistake somewhere in the implementation?
The n being passed is 4
The initial a is generated randomly the same way as in the code below, between 0 and 1000
The sqrt(n) method returns the floor of the square root of n (in this case sqrt(sqrt(4)) = 1)
I printed count at the end to make sure it was actually iterating how many times it was supposed to.
private static BigInteger PollardRho(BigInteger a, BigInteger n) {
    BigInteger gcd = BigInteger.ZERO;
    BigInteger Tort = a; // tortoise: one step per iteration
    BigInteger Hare = a; // hare: two steps per iteration
    BigInteger count = BigInteger.ZERO;
    // makes sure that the algorithm does not surpass (4th root of n)*10000000 iterations
    BigInteger iterationLim = (sqrt(sqrt(n))).multiply(BigInteger.valueOf(10000000));

    while (count.compareTo(iterationLim) != 0) {
        Tort = ((Tort.pow(2)).add(BigInteger.ONE)).mod(n); // f(x) = x^2 + 1
        //System.out.println("Tort: "+Tort);
        Hare = (((Hare.pow(2)).add(BigInteger.ONE).pow(2)).add(BigInteger.ONE)).mod(n); // f(f(x))
        //System.out.println("Hare: "+Hare);
        gcd = (Tort.subtract(Hare)).gcd(n);
        //System.out.println("gcd: "+gcd);
        if (gcd.compareTo(BigInteger.ONE) != 0 && gcd.compareTo(n) != 0) {
            //System.out.println("took if, gcd = "+gcd);
            return gcd;
        }
        if (gcd.compareTo(n) == 0) {
            // tortoise and hare met without finding a factor: restart with a new random seed
            a = BigInteger.valueOf((long) (1000 * Math.random()));
            Tort = a;
            Hare = a;
        }
        count = count.add(BigInteger.ONE);
    }
    System.out.println(count);
    return n;
}
Pollard's Rho method can usually only split numbers composed of distinct primes. It fails most of the time for numbers that are prime powers. 4 and 8 are powers of the single prime 2 and are therefore unlikely to be split by this method.
The method works by iterating a random function f(x) mod n; in this case f(x) = x^2 + 1 is used, but other functions work as well. The trick is that f(x) mod p, where p is a prime factor of n, enters a cycle after a different number of iterations for different primes. So f(x) mod p1 may already be in a cycle while f(x) mod p2 is not yet. The gcd calculation is then able to find the factor p1.
By the way, it is very easy to check whether a number is a proper power of an integer: just calculate the 2nd, 3rd, 4th, ... roots and check whether any of them is an integer.
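As a sketch of that check (hypothetical helper names; a proper power n = r^k with r >= 2 forces k <= log2(n), and the integer k-th root can be found by binary search):

import java.math.BigInteger;

public class PerfectPower {

    // Returns a proper root r with r^k == n for some k >= 2, or null if none exists.
    static BigInteger properRoot(BigInteger n) {
        for (int k = 2; k <= n.bitLength(); k++) {
            BigInteger r = kthRoot(n, k);
            if (r.pow(k).equals(n)) return r;
        }
        return null;
    }

    // Largest r with r^k <= n, by binary search; assumes n >= 1.
    static BigInteger kthRoot(BigInteger n, int k) {
        BigInteger lo = BigInteger.ONE;
        BigInteger hi = BigInteger.ONE.shiftLeft(n.bitLength() / k + 1); // > n^(1/k)
        while (lo.compareTo(hi) < 0) {
            BigInteger mid = lo.add(hi).add(BigInteger.ONE).shiftRight(1); // round up
            if (mid.pow(k).compareTo(n) <= 0) lo = mid;
            else hi = mid.subtract(BigInteger.ONE);
        }
        return lo;
    }
}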
I can calculate the multiplication of two BigIntegers (say a and b) modulo n.
This can be done by:
a.multiply(b).mod(n);
However, assuming that a and b are of the same order as n, this implies that during the calculation a new BigInteger is computed whose length (in bytes) is roughly twice that of n.
I wonder whether there is a more efficient implementation that I can use. Something like a modMultiply that is implemented like modPow (which, I believe, does not calculate the power and then take the modulo).
I can only think of
a.mod(n).multiply(b.mod(n)).mod(n)
and you seem already to be aware of this.
BigInteger has a toByteArray(), but internally ints are used, hence n must be quite large for this to have any effect. Maybe in cryptographic key-generation code there might be such work.
Furthermore, if you think of short-cutting the multiplication, you'll get something like the following:
public static BigInteger multiply(BigInteger a, BigInteger b, int mod) {
    if (a.signum() == -1) {
        return multiply(a.negate(), b, mod).negate();
    }
    if (b.signum() == -1) {
        return multiply(a, b.negate(), mod).negate();
    }
    // mod in bytes: bit length of mod - 1, rounded up to whole bytes.
    // (The original Integer.bitCount(mod - 1) only works when mod is a power of two.)
    int n = (32 - Integer.numberOfLeadingZeros(mod - 1) + 7) / 8;
    byte[] aa = a.toByteArray(); // Highest byte at [0] !!
    int na = Math.min(n, aa.length); // Heuristic.
    byte[] bb = b.toByteArray();
    int nb = Math.min(n, bb.length); // Heuristic.
    byte[] prod = new byte[n];
    for (int ia = 0; ia < na; ++ia) {
        int m = ia + nb >= n ? n - ia - 1 : nb; // Heuristic.
        for (int ib = 0; ib < m; ++ib) {
            int p = (0xFF & aa[aa.length - 1 - ia]) * (0xFF & bb[bb.length - 1 - ib]);
            addByte(prod, ia + ib, p & 0xFF);
            if (ia + ib + 1 < n) {
                addByte(prod, ia + ib + 1, (p >> 8) & 0xFF);
            }
        }
    }
    // Note: truncating to n bytes computes the product mod 256^n, which agrees
    // with the product mod `mod` only when mod divides 256^n (i.e. a power of two).
    // Still need to do an expensive mod (signum 1: treat the bytes as an unsigned magnitude):
    return new BigInteger(1, prod).mod(BigInteger.valueOf(mod));
}

private static void addByte(byte[] prod, int i, int value) {
    // Add value at byte position i (counted from the least significant end),
    // propagating the carry upward.
    while (value != 0 && i < prod.length) {
        value += prod[prod.length - 1 - i] & 0xFF;
        prod[prod.length - 1 - i] = (byte) value;
        value >>= 8;
        ++i;
    }
}
That code does not look appetizing. BigInteger has the problem of exposing the internal value only as big-endian byte[] where the first byte is the most significant one.
Much better would be to have the digits in base N. That is not unimaginable: if N is a power of 2 some nice optimizations are feasible.
(BTW the code is untested - as it does not seem convincingly faster.)
First, the bad news: I couldn't find any existing Java libraries that provided this functionality.
I couldn't find any pure-Java big integer libraries ... apart from java.math.BigInteger.
There are Java / JNI wrappers for the GMP library, but GMP doesn't implement this either.
So what are your options?
Maybe there is some pure-Java library that I missed.
Maybe some other native (C / C++) big integer library supports this operation ... though you may need to write your own JNI wrappers.
You should be able to implement such a method for yourself, by copying the source code of java.math.BigInteger and adding an extra custom method. Alternatively, it looks like you could extend it.
Having said that, I'm not sure that there is a "substantially faster" algorithm for computing a * b mod n in Java, or any other language. (Apart from special cases; e.g. when n is a power of 2).
Specifically, the "Montgomery Reduction" approach wouldn't help for a single multiplication step. (The Wikipedia page says: "Because numbers have to be converted to and from a particular form suitable for performing the Montgomery step, a single modular multiplication performed using a Montgomery step is actually slightly less efficient than a "naive" one.")
So maybe the most effective way to speedup the computation would be to use the JNI wrappers for GMP.
You can use generic maths, like:
(A*B) mod N = ((A mod N) * (B mod N)) mod N
It may be more CPU intensive, but one should choose between CPU and memory, right?
If we are talking about modular arithmetic, then Montgomery reduction may indeed be what you need. I don't know of any out-of-the-box solutions, though.
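For the mechanics, here is a bare-bones sketch of Montgomery multiplication with BigInteger (somewhat self-defeating at this level, since BigInteger.mod is exactly the division Montgomery reduction tries to avoid, but it shows the shape; the modulus must be odd):

import java.math.BigInteger;

public class Montgomery {
    final BigInteger n;      // odd modulus
    final int k;             // R = 2^k with R > n
    final BigInteger rMask;  // R - 1, so and(rMask) is "mod R"
    final BigInteger nPrime; // -n^(-1) mod R

    Montgomery(BigInteger n) {
        this.n = n;
        this.k = n.bitLength();
        BigInteger r = BigInteger.ONE.shiftLeft(k);
        this.rMask = r.subtract(BigInteger.ONE);
        this.nPrime = n.negate().mod(r).modInverse(r);
    }

    // REDC: for t < n*R, returns t * R^(-1) mod n using only shifts and masks.
    BigInteger redc(BigInteger t) {
        BigInteger m = t.multiply(nPrime).and(rMask);      // m = t * n' mod R
        BigInteger u = t.add(m.multiply(n)).shiftRight(k); // (t + m*n) / R, exact
        return u.compareTo(n) >= 0 ? u.subtract(n) : u;
    }

    // a*b mod n for operands already in Montgomery form (x*R mod n).
    BigInteger multiply(BigInteger aBar, BigInteger bBar) {
        return redc(aBar.multiply(bBar));
    }

    BigInteger toMont(BigInteger x)   { return x.shiftLeft(k).mod(n); }
    BigInteger fromMont(BigInteger x) { return redc(x); }
}

Note that the toMont/fromMont conversions cost extra work, which is exactly why, as quoted above, a single modular multiplication does not benefit; the form only pays off across many chained multiplications, e.g. inside modPow.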
You can write a BigInteger multiplication as a standard long multiplication in a very large base -- for example, in base 2^32. It is fairly straightforward. If you want only the result modulo n, then it is advantageous to choose a base that is a factor of n or of which n is a factor. Then you can ignore all but one or a few of the lowest-order result (Big)digits as you perform the computation, saving space and maybe time.
That's most practical if you know n in advance, of course, but such pre-knowledge is not essential. It's especially nice if n is a power of two, and it's fairly messy if n is neither a power of 2 nor smaller than the maximum operand handled directly by the system's arithmetic unit, but all of those cases can be handled in principle.
If you must do this specifically with Java BigInteger instances, however, then be aware that any approach not provided by the BigInteger class itself will incur overhead for converting between internal and external representations.
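As a tiny illustration of the power-of-two case mentioned above, where the reduction degenerates to a bit mask and no division is needed at all:

import java.math.BigInteger;

public class ModPow2 {
    // a * b mod 2^k: masking the inputs first keeps the intermediate product at ~2k bits.
    // and(mask) also yields the correct non-negative residue for negative inputs,
    // because BigInteger models an infinite two's-complement representation.
    static BigInteger mulModPow2(BigInteger a, BigInteger b, int k) {
        BigInteger mask = BigInteger.ONE.shiftLeft(k).subtract(BigInteger.ONE);
        return a.and(mask).multiply(b.and(mask)).and(mask);
    }
}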
Maybe this:
static BigInteger multiply(BigInteger c, BigInteger x)
{
    BigInteger sum = BigInteger.ZERO;
    BigInteger addOperand;
    for (int i = 0; i < FIELD_ELEMENT_BIT_SIZE; i++)
    {
        if (c.testBit(i))
            addOperand = x;
        else
            addOperand = BigInteger.ZERO;
        sum = add(sum, addOperand);
        // double x for the next bit (the original had shiftRight(1), which is a bug),
        // reducing mod the field order so the operands stay small
        x = modOrder(x.shiftLeft(1));
    }
    return sum;
}
with the following helper functions:
static BigInteger add(BigInteger a, BigInteger b)
{
    return modOrder(a.add(b));
}

static BigInteger modOrder(BigInteger n)
{
    return n.remainder(FIELD_ORDER);
}
To be honest though, I'm not sure if this is really efficient at all since none of these operations are performed in-place.
How do I determine the time complexity of this code? I guess that the modPow method is the most "expensive" one.
import java.math.BigInteger;

public class FermatOne
{
    public static void main(String[] args)
    {
        BigInteger a = new BigInteger("2");
        BigInteger k = new BigInteger("15");
        BigInteger c = new BigInteger("1");
        int b = 332192810;
        BigInteger n = new BigInteger("2");

        BigInteger power = a.pow(b);
        BigInteger exponent = k.multiply(power);
        BigInteger mod = exponent.add(c);

        BigInteger result = n.modPow(exponent, mod);
        System.out.println("Result is ==> " + result);
    }
}
Well this particular code deterministically runs in O(1).
However, in more general terms for arbitrary variables, multiply() will run in O(n log n), where n is the number of bits.
The pow() method will run in O(log b) for small a and b. This is achieved by exponentiation by squaring. For larger values, the number of bits grows linearly, and so the multiplications take more time. I'll leave it up to you to figure out the exact analysis.
I'm not 100% sure about the details of modPow(), but I suspect it runs similarly to pow(), except with the extra mod at each step of the exponentiation by squaring. So it will still be O(log b) multiplications, with the added benefit that the number of bits is bounded by log m, where m is the modulus.
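The suspected structure is ordinary square-and-multiply with a reduction after every step. A textbook sketch of that idea (not the actual JDK implementation, which adds windowing and Montgomery arithmetic on top):

import java.math.BigInteger;

public class ModPowSketch {
    // Right-to-left binary exponentiation: O(log e) modular multiplications,
    // with every intermediate value reduced below the modulus m.
    // Assumes e >= 0 and m >= 1.
    static BigInteger modPow(BigInteger base, BigInteger e, BigInteger m) {
        BigInteger result = BigInteger.ONE.mod(m); // handles m == 1
        BigInteger b = base.mod(m);
        for (int i = 0; i < e.bitLength(); i++) {
            if (e.testBit(i)) result = result.multiply(b).mod(m);
            b = b.multiply(b).mod(m); // square at every step
        }
        return result;
    }
}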
tskuzzy is correct.
But maybe, reading between the lines a bit and assuming this is a homework question: they probably want you to realize that several operations with varying complexities happen in this method, and then that the complexity of the overall method is the same as that of the most complex operation in it.