Time complexity of simple Java code

How do you determine the time complexity of this code? I guess that the modPow method is the most "expensive" part.
import java.math.BigInteger;

public class FermatOne
{
    public static void main(String[] args)
    {
        BigInteger a = new BigInteger("2");
        BigInteger k = new BigInteger("15");
        BigInteger c = new BigInteger("1");
        int b = 332192810;
        BigInteger n = new BigInteger("2");

        BigInteger power = a.pow(b);
        BigInteger exponent = k.multiply(power);
        BigInteger mod = exponent.add(c);

        BigInteger result = n.modPow(exponent, mod);
        System.out.println("Result is ==> " + result);
    }
}

Well this particular code deterministically runs in O(1).
However, in more general terms for arbitrary inputs, multiply() will run in O(n log n), where n is the number of bits.
The pow() method will run in O(log b) multiplications for small a and b. This is achieved by exponentiation by squaring. For larger values, the number of bits grows (linearly), so each multiplication takes more time. I'll leave the exact analysis up to you.
I'm not 100% sure about the details of modPow(), but I suspect it runs similarly to pow(), except with an extra mod at each step of the exponentiation by squaring. So it will still be O(log b) multiplications, with the added benefit that the number of bits is bounded by log m, where m is the modulus.
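For intuition, here is a minimal sketch of modular exponentiation by squaring over plain longs (not the actual BigInteger.modPow implementation, just an illustration of why it takes O(log b) multiplications and why the operands stay bounded by the modulus):

// Illustrative only: computes (base^exp) mod m with O(log exp) multiplications.
// Assumes m < 2^31 so that the intermediate products fit in a long.
static long modPowSketch(long base, long exp, long m) {
    long result = 1 % m;
    base %= m;
    while (exp > 0) {
        if ((exp & 1) == 1) {
            result = (result * base) % m;  // multiply in the current bit
        }
        base = (base * base) % m;          // square for the next bit
        exp >>= 1;
    }
    return result;
}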

tskuzzy is correct.
But reading between the lines a bit, and assuming this is a homework question, they probably want you to realize that several operations with different complexities happen in this method, and that the complexity of the overall method is the same as that of the most expensive operation it performs.

Related

Sum a series n^n for values 1 through n with no overflow? Only last digits of answer needed

I want to write a Java program that sums all the integers n^n from 1 through n. I only need the last 10 digits of this number, but the values given for n exceed 800.
I have already written a basic java program to calculate this, and it works fine for n < 16. But it obviously doesn't deal with such large numbers. I am wondering if there is a way to just gather the last 10 digits of a number that would normally overflow a long, and if so, what that method or technique might be.
I have no code to show, just because the code I wrote already is exactly what you'd expect: a for loop that computes each power while i <= n and an accumulator that adds each term to the running total. It works. I just don't know how to approach the problem for bigger numbers, and need guidance.
Around n=16, the number overflows a long, and returns negative values. Will BigInteger help with this, or is that still too small a data type? Or could someone point me towards a technique for gathering the last 10 digits of a massive number? I could store it in an array and then sum them up if I could just get that far.
Anyhow, I don't expect a finished piece of code, but maybe some suggestions as to how I could look at this problem anew? Some techniques my n00b self is missing?
Thank you!
sums all the integers n^n from 1 through n. I only need the last 10 digits of this number
If you only need the last 10 digits, that means you need sum % 10¹⁰.
The sum is 1¹ + 2² + 3³ + ... nⁿ.
According to the rules of modular arithmetic:
(a + b) % n = [(a % n) + (b % n)] % n
So you need to calculate iⁱ % 10¹⁰ for i = 1 to n, sum the results, and take that sum modulo 10¹⁰ one last time.
According to the modular exponentiation article on Wikipedia, there are efficient ways to calculate aⁱ % m on a computer. You should read the article.
However, as the article also says:
Java's java.math.BigInteger class has a modPow() method to perform modular exponentiation
Combining all that into an efficient Java implementation that doesn't use excessive amounts of memory:
static BigInteger calc(int n) {
    final BigInteger m = BigInteger.valueOf(10_000_000_000L);
    BigInteger sum = BigInteger.ZERO;
    for (int i = 1; i <= n; i++) {
        BigInteger bi = BigInteger.valueOf(i);
        sum = sum.add(bi.modPow(bi, m));
    }
    return sum.mod(m);
}
Or the same using streams:
static BigInteger calc(int n) {
    final BigInteger m = BigInteger.valueOf(10).pow(10);
    return IntStream.rangeClosed(1, n)
            .mapToObj(BigInteger::valueOf)
            .map(i -> i.modPow(i, m))
            .reduce(BigInteger.ZERO, BigInteger::add)
            .mod(m);
}
Test
System.out.println(calc(800)); // prints: 2831493860
BigInteger would be suitable to work with these kinds of numbers. It's quite frankly what it's designed for.
Do note that instances of BigInteger are immutable and any operations you do on one will give you back a new BigInteger instance. You're going to want to store some of your results in variables.
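For example (illustrative only):

BigInteger sum = BigInteger.ZERO;
sum.add(BigInteger.ONE);       // the result is discarded; sum is still 0
sum = sum.add(BigInteger.ONE); // reassign to keep the result; sum is now 1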

How to calculate 2 to-the-power N where N is a very large number

I need to find 2 to the power N, where N is a very large number (a Java BigInteger).
The Java BigInteger class has a pow method, but it only accepts an int value as the exponent.
So, I wrote a method as follows:
static BigInteger twoToThePower(BigInteger n)
{
    BigInteger result = BigInteger.valueOf(1L);
    while (n.compareTo(BigInteger.valueOf((long) Integer.MAX_VALUE)) > 0)
    {
        result = result.shiftLeft(Integer.MAX_VALUE);
        n = n.subtract(BigInteger.valueOf((long) Integer.MAX_VALUE));
    }
    long k = n.longValue();
    result = result.shiftLeft((int) k);
    return result;
}
My code works fine; I am just sharing my idea and am curious to know whether there is a better approach.
Thank you.
You cannot use BigInteger to store the result of your computation. From the javadoc:
BigInteger must support values in the range -2^Integer.MAX_VALUE (exclusive) to +2^Integer.MAX_VALUE (exclusive) and may support values outside of that range.
This is the reason why the pow method takes an int. On my machine, BigInteger.ONE.shiftLeft(Integer.MAX_VALUE) throws a java.lang.ArithmeticException (message is "BigInteger would overflow supported range").
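For illustration, a tiny program that trips over that limit (whether it throws or succeeds may depend on the JDK implementation, since values outside the documented range are only optionally supported):

import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        try {
            BigInteger huge = BigInteger.ONE.shiftLeft(Integer.MAX_VALUE); // would be 2^(2^31 - 1)
            System.out.println("This JDK supports values beyond the documented range: " + huge.signum());
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage()); // e.g. "BigInteger would overflow supported range"
        }
    }
}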
Emmanuel Lonca's answer is correct. But, by Manoj Banik's idea, I would like to share my idea too.
My code does the same thing as Manoj Banik's code, but faster. The idea is to initialize a byte buffer and put the bit 1 in the correct location, using the shift-left operator on a single byte instead of the shiftLeft method.
Here is my code:
static BigInteger twoToThePower(BigInteger n) {
    BigInteger eight = BigInteger.valueOf(8);
    BigInteger[] divideResult = n.divideAndRemainder(eight);
    BigInteger bufferSize = divideResult[0].add(BigInteger.ONE);
    int offset = divideResult[1].intValue();
    byte[] buffer = new byte[bufferSize.intValueExact()];
    buffer[0] = (byte) (1 << offset);
    return new BigInteger(1, buffer);
}
But it is still slower than BigInteger.pow.
Then I found that class BigInteger has a method called setBit. It also accepts an int parameter, like the pow method. Using this method is faster than BigInteger.pow.
The code can be:
static BigInteger twoToThePower(BigInteger n) {
    return BigInteger.ZERO.setBit(n.intValueExact());
}
Class BigInteger also has a method called modPow. But it needs one more parameter: you have to specify the modulus, and your result will be smaller than that modulus. I did not run a performance test for modPow, but I think it should be slower than the pow method.
By using repeated squaring you can achieve your goal. I've posted sample code below to illustrate the logic of repeated squaring.
static BigInteger pow(BigInteger base, BigInteger exponent) {
    BigInteger result = BigInteger.ONE;
    while (exponent.signum() > 0) {
        if (exponent.testBit(0)) result = result.multiply(base);
        base = base.multiply(base);
        exponent = exponent.shiftRight(1);
    }
    return result;
}
An interesting question. Just to add a little more information to the fine accepted answer, examining the OpenJDK 8 source code for BigInteger reveals that the bits are stored in an array final int[] mag;. Since arrays can contain at most Integer.MAX_VALUE elements, this immediately puts a theoretical bound on this particular implementation of BigInteger of 2^(32 * Integer.MAX_VALUE). So even your method of repeated left-shifting can only exceed the size of an int by at most a factor of 32.
So, are you ready to produce your own implementation of BigInteger?

Java library for fast multiplication of very big numbers

I am writing a program which requires multiplication of very big numbers (million digits) at one point. Can anyone suggest a Java library for fast multiplication of big numbers? I have found this, but I'm not sure if it's the right solution, so I'm trying to find another one to try.
The solution you link to — Schönhage-Strassen — is indeed a good way to make multiplying very very large BigIntegers faster.
Due to its large overhead, it is not faster for much smaller BigIntegers, so you can use it recursively down to a certain threshold (you'll have to find out empirically what that threshold is) and then use BigInteger's own multiplication, which already implements the Karatsuba and Toom-Cook divide-and-conquer algorithms (since Java 8, IIRC), also recursively down to certain thresholds.
Forget the answers telling you to use Karatsuba. Not only does Java implement this already, as well as the even faster (for very large BigIntegers) Toom-Cook algorithm, it is also a lot slower (for such huge values) than Schönhage-Strassen.
Conclusion
Again: for small values, use simple schoolbook multiplication (but using – unsigned – integers as "digits" or "bigits"). For much larger values, use Karatsuba (which is a recursive algorithm, breaking large BigIntegers down to several smaller ones and multiplying these -- a divide-and-conquer algorithm). For even larger BigIntegers, use Toom-Cook (also a divide-and-conquer). For very large BigIntegers, use Schönhage-Strassen (IIRC, an FFT-based algorithm). Note that Java already implements schoolbook (or "base case"), Karatsuba and Toom-Cook multiplications, for differently sized BigIntegers. It does not implement Schönhage-Strassen yet.
But even with all these optimizations, multiplications of very huge values tend to be slow, so don't expect miracles.
Note:
The Schönhage-Strassen algorithm you link to reverts to Karatsuba for smaller sub-products. Instead of Karatsuba, revert to the much improved (since Christmas Day 2012) implementation in BigInteger and simply use BigInteger::multiply() directly. You may also have to change the thresholds used.
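A minimal sketch of that threshold idea, hedged: the large-number multiplier is passed in as a function, since which Schönhage-Strassen implementation you plug in (and what the right cutoff is) depends entirely on the library you choose.

import java.math.BigInteger;
import java.util.function.BinaryOperator;

class ThresholdMultiplier {
    private final int thresholdBits;                  // crossover point in bits; tune empirically
    private final BinaryOperator<BigInteger> bigMul;  // e.g. a Schönhage-Strassen routine from a library

    ThresholdMultiplier(int thresholdBits, BinaryOperator<BigInteger> bigMul) {
        this.thresholdBits = thresholdBits;
        this.bigMul = bigMul;
    }

    BigInteger multiply(BigInteger x, BigInteger y) {
        int bits = Math.max(x.bitLength(), y.bitLength());
        if (bits < thresholdBits) {
            // Below the threshold, the JDK's own multiply (schoolbook/Karatsuba/Toom-Cook) wins.
            return x.multiply(y);
        }
        return bigMul.apply(x, y); // above the threshold, delegate to the big-number routine
    }
}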
As far as I can tell, the Karatsuba algorithm can be implemented in this manner.
This link provides a C++ implementation of the same, which can easily be adapted to a Java implementation as well.
import java.math.BigInteger;
import java.util.Random;

class Karatsuba {
    private final static BigInteger ZERO = new BigInteger("0");

    public static BigInteger karatsuba(BigInteger x, BigInteger y) {
        // cutoff to brute force
        int N = Math.max(x.bitLength(), y.bitLength());
        if (N <= 2000) return x.multiply(y);   // optimize this parameter

        // number of bits divided by 2, rounded up
        N = (N / 2) + (N % 2);

        // x = a + 2^N b,   y = c + 2^N d
        BigInteger b = x.shiftRight(N);
        BigInteger a = x.subtract(b.shiftLeft(N));
        BigInteger d = y.shiftRight(N);
        BigInteger c = y.subtract(d.shiftLeft(N));

        // compute sub-expressions
        BigInteger ac = karatsuba(a, c);
        BigInteger bd = karatsuba(b, d);
        BigInteger abcd = karatsuba(a.add(b), c.add(d));

        return ac.add(abcd.subtract(ac).subtract(bd).shiftLeft(N)).add(bd.shiftLeft(2 * N));
    }

    public static void main(String[] args) {
        long start, stop;
        Random random = new Random();
        int N = Integer.parseInt(args[0]);
        BigInteger a = new BigInteger(N, random);
        BigInteger b = new BigInteger(N, random);

        start = System.currentTimeMillis();
        BigInteger c = karatsuba(a, b);
        stop = System.currentTimeMillis();
        System.out.println(stop - start);

        start = System.currentTimeMillis();
        BigInteger d = a.multiply(b);
        stop = System.currentTimeMillis();
        System.out.println(stop - start);

        System.out.println(c.equals(d));
    }
}
Hope this answers your question well.

more efficient Fibonacci for BigInteger

I am working on a class project to create a more efficient Fibonacci than the recursive version of Fib(n-1) + Fib(n-2). For this project I need to use BigInteger. So far I have had the idea to use a map to store the previous fib numbers.
public static BigInteger theBigFib(BigInteger n) {
    Map<BigInteger, BigInteger> store = new TreeMap<BigInteger, BigInteger>();
    if (n.intValue() <= 2) {
        return BigInteger.ONE;
    } else if (store.containsKey(n)) {
        return store.get(n);
    } else {
        BigInteger one = new BigInteger("1");
        BigInteger two = new BigInteger("2");
        BigInteger val = theBigFib(n.subtract(one)).add(theBigFib(n.subtract(two)));
        store.put(n, val);
        return val;
    }
}
I think that the map is storing more than it should be. I also think this line
BigInteger val = theBigFib(n.subtract(one)).add(theBigFib(n.subtract(two)));
is an issue. Could anyone shed some light on what I'm doing wrong, or possibly suggest another solution to make it faster than the basic code?
Thanks!
You don't need all the previous BigIntegers; you just need the last two.
Instead of a recursive solution you can use a loop.
public static BigInteger getFib(int n) {
    BigInteger a = BigInteger.ONE;
    BigInteger b = BigInteger.ONE;
    if (n < 2) {
        return a;
    }
    BigInteger c = null;
    while (n-- >= 2) {
        c = a.add(b);
        a = b;
        b = c;
    }
    return c;
}
If you want to store all the previous values, you can use an array instead.
static final int MAX = 1000; // upper bound on n; adjust as needed
static BigInteger[] memo = new BigInteger[MAX];

public static BigInteger getFib(int n) {
    if (n < 2) {
        return new BigInteger("1");
    }
    if (memo[n] != null) {
        return memo[n];
    }
    memo[n] = getFib(n - 1).add(getFib(n - 2));
    return memo[n];
}
If you just want the nth Fibonacci value fast and efficiently, you can use the matrix form of Fibonacci:
A   = | 1  1 |
      | 1  0 |

A^n = | F(n+1)  F(n)   |
      | F(n)    F(n-1) |
You can efficiently calculate A^n using Exponentiation by Squaring.
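A minimal sketch of that approach in Java (assuming the standard indexing F(0) = 0, F(1) = 1 that the matrix identity above uses), storing each 2x2 matrix as four BigIntegers:

import java.math.BigInteger;

class MatrixFib {
    // Multiplies two 2x2 matrices, each stored as {a, b, c, d} meaning | a b |
    //                                                                  | c d |
    static BigInteger[] mul(BigInteger[] x, BigInteger[] y) {
        return new BigInteger[] {
            x[0].multiply(y[0]).add(x[1].multiply(y[2])),
            x[0].multiply(y[1]).add(x[1].multiply(y[3])),
            x[2].multiply(y[0]).add(x[3].multiply(y[2])),
            x[2].multiply(y[1]).add(x[3].multiply(y[3]))
        };
    }

    // Computes F(n) by raising A = {1, 1, 1, 0} to the n-th power by squaring:
    // O(log n) matrix multiplications instead of n additions.
    static BigInteger fib(int n) {
        BigInteger[] result = {BigInteger.ONE, BigInteger.ZERO,
                               BigInteger.ZERO, BigInteger.ONE};  // identity matrix
        BigInteger[] a = {BigInteger.ONE, BigInteger.ONE,
                          BigInteger.ONE, BigInteger.ZERO};
        while (n > 0) {
            if ((n & 1) == 1) result = mul(result, a);
            a = mul(a, a);
            n >>= 1;
        }
        return result[1];  // the off-diagonal entry of A^n is F(n)
    }
}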
I believe the main issue in your code is that you create a new Map on each function call. Note that it is still a local variable, despite your method being static. So you are guaranteed that the store.containsKey(n) condition never holds, and your solution is no better than the naive one, i.e. it still has complexity exponential in n. More precisely, it takes about F(n) steps to get to the answer (basically because every "one" that makes up your answer is returned by some function call).
I'd suggest making the map a static field instead of a local variable; see the sketch below. Then the number of calls becomes linear instead of exponential and you will see a significant improvement. Other solutions include a for loop with three variables which iteratively calculates Fibonacci numbers from 0, 1, 2 up to the n-th, and the best solutions I know involve matrix exponentiation or the explicit formula with real numbers (which is bad for precision), but that's a question better suited for the Computer Science Stack Exchange site, imho.
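A minimal sketch of the static-field change described above, keeping the original method otherwise intact (for very large n the recursion depth itself eventually becomes the limiting factor):

import java.math.BigInteger;
import java.util.Map;
import java.util.TreeMap;

class BigFib {
    // Shared across calls, so previously computed values are actually reused.
    private static final Map<BigInteger, BigInteger> store = new TreeMap<>();

    public static BigInteger theBigFib(BigInteger n) {
        if (n.intValue() <= 2) {
            return BigInteger.ONE;
        }
        BigInteger cached = store.get(n);
        if (cached != null) {
            return cached;
        }
        BigInteger val = theBigFib(n.subtract(BigInteger.ONE))
                .add(theBigFib(n.subtract(BigInteger.valueOf(2))));
        store.put(n, val);
        return val;
    }
}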

BigInteger most time optimized multiplication

Hi, I want to multiply two BigIntegers in the most time-optimized way. I am currently using the Karatsuba algorithm. Can anyone suggest a more optimized way or algorithm to do it?
Thanks
public static BigInteger karatsuba(BigInteger x, BigInteger y) {
    // cutoff to brute force
    int N = Math.max(x.bitLength(), y.bitLength());
    System.out.println(N);
    if (N <= 2000) return x.multiply(y);   // optimize this parameter

    // number of bits divided by 2, rounded up
    N = (N / 2) + (N % 2);

    // x = a + 2^N b,   y = c + 2^N d
    BigInteger b = x.shiftRight(N);
    BigInteger a = x.subtract(b.shiftLeft(N));
    BigInteger d = y.shiftRight(N);
    BigInteger c = y.subtract(d.shiftLeft(N));

    // compute sub-expressions
    BigInteger ac = karatsuba(a, c);
    BigInteger bd = karatsuba(b, d);
    BigInteger abcd = karatsuba(a.add(b), c.add(d));

    return ac.add(abcd.subtract(ac).subtract(bd).shiftLeft(N)).add(bd.shiftLeft(2 * N));
}
The version of BigInteger in JDK 8 switches between the naive algorithm, Karatsuba, and the Toom-Cook algorithm depending on the size of the input, to get excellent performance.
Complexity and actual speed are very different things in practice, because of the constant factors involved in the O notation. There is always a point where complexity prevails, but it may very well be out of the range (of input size) you are working with. The implementation details (level of optimization) of an algorithm also directly affect those constant factors.
My suggestion is to try a few different algorithms, preferably from a library that the authors already spent some effort optimizing, and actually measure and compare their speeds on your inputs.
Regarding SPOJ, don't forget the possibility that the main problem lies elsewhere (i.e. not in the multiplication speed of large integers).
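For example, a rough timing sketch for the JDK's built-in multiply at a few sizes (the sizes here are arbitrary, and a serious comparison would use a proper benchmark harness such as JMH and would also include the candidate library's routine):

import java.math.BigInteger;
import java.util.Random;

public class MultiplyTiming {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int bits : new int[] {100_000, 500_000, 1_000_000}) {
            BigInteger x = new BigInteger(bits, rnd);
            BigInteger y = new BigInteger(bits, rnd);
            long t0 = System.nanoTime();
            x.multiply(y);  // JDK multiply: schoolbook/Karatsuba/Toom-Cook depending on size
            long t1 = System.nanoTime();
            System.out.printf("%,d bits: %.1f ms%n", bits, (t1 - t0) / 1e6);
        }
    }
}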
