I have been trying to implement two methods:
A method to perform integer polynomial evaluation in O(d) time, where d is the degree of the polynomial.
A method to calculate exponentiation. I need it to run in O(log b) time.
Here is what I've come up with so far:
public static int polynomialEvaluation(int[] coefficients, int x) {
    int n = coefficients.length - 1;
    int y = coefficients[n];
    // Horner's rule: work down from the highest coefficient
    for (int i = n - 1; i >= 0; i--) {
        y = coefficients[i] + (x * y);
    }
    return y;
}
public static int exponentiation(int a, int b) {
    int res = 1;
    while (b > 0) {
        res = res * a;
        b--;
    }
    return res;
}
Do either of these two meet the time complexity requirements? I think the exponentiation function does, but I'm not sure about the cost of the first one.
Edit: I rewrote the exponentiation function to avoid the iterative loop, as follows. I think it may compute more efficiently now:
public static int exponentiation(int a, int b) {
    if (b == 0) return 1;
    int res = exponentiation(a, b / 2);
    if (b % 2 == 0)         // even exponent: a^b = (a^(b/2))^2
        return res * res;
    else                    // odd exponent: one extra factor of a
        return a * res * res;
}
Basic algebraic operations (such as addition and multiplication), array lookups, and assignments are all considered to take constant time. Since your code only consists of these operations in a loop, the complexity is the number of iterations of the loop (plus a constant for the operations outside, but that disappears in the O notation). How many iterations does each of your loops perform?
This will hopefully tell you that the polynomial calculation has the desired complexity while the exponentiation does not. Hint for a better algorithm: if you have already computed b^2, what is the fastest way to use that answer to compute b^4? And if you have that result, how can you compute b^8? If you have computed, e.g., b^2, b^4, b^8, b^16, b^32, and b^64 in this manner (and, of course, you still have the original b^1), how can you use these results to compute, e.g., b^93?
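Following that hint through leads to exponentiation by squaring. A minimal sketch (the class name FastPow is illustrative, and overflow is ignored, just as in the question's int version):

```java
public class FastPow {
    // Exponentiation by squaring: one loop iteration per bit of b,
    // so O(log b) multiplications in total.
    static long fastPow(long a, long b) {
        long result = 1;
        while (b > 0) {
            if ((b & 1) == 1)   // current bit of the exponent is set
                result *= a;
            a *= a;             // square the base
            b >>= 1;            // move to the next bit
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fastPow(2, 10)); // 1024
        System.out.println(fastPow(3, 13)); // 1594323
    }
}
```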
Related: https://www.geeksforgeeks.org/program-to-calculate-the-value-of-ncr-efficiently/
This is the code I want to understand. Here is a video that explains it more in depth: https://www.youtube.com/watch?v=lhXwT7Zm3EU. However, I still don't understand a certain aspect of it.
Here is the code:
// Java implementation to find nCr
class GFG {

    // Function to find nCr
    static void printNcR(int n, int r)
    {
        // p holds the value of n*(n-1)*(n-2)...,
        // k holds the value of r*(r-1)...
        long p = 1, k = 1;

        // C(n, r) == C(n, n-r),
        // choosing the smaller value
        if (n - r < r) {
            r = n - r;
        }

        if (r != 0) {
            while (r > 0) {
                p *= n;
                k *= r;

                // gcd of p, k
                long m = __gcd(p, k);

                // dividing by the gcd simplifies the
                // running fraction as we go, which
                // saves us from overflow
                p /= m;
                k /= m;

                n--;
                r--;
            }
            // k should be simplified to 1,
            // as C(n, r) is a natural number
            // (the denominator should be 1)
        }
        else {
            p = 1;
        }

        // if our approach is correct, p == ans and k == 1
        System.out.println(p);
    }

    static long __gcd(long n1, long n2)
    {
        long gcd = 1;
        for (int i = 1; i <= n1 && i <= n2; ++i) {
            // checks whether i is a factor of both integers
            if (n1 % i == 0 && n2 % i == 0) {
                gcd = i;
            }
        }
        return gcd;
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 50, r = 25;
        printNcR(n, r);
    }
}
Specifically, why does this code work:
if (n - r < r)
    r = n - r;
Why does this simple operation eventually produce the right answer after going through and exiting the main while loop? I don't understand why it is necessary or why it makes sense to do. Why would omitting this code make the nCr calculation fail or behave incorrectly? If someone could either explain this or point me to something that explains it, or the math concept behind it, that would be great. Another way of coding the same thing might also help. I just want to understand, as a math and computing student, why this produces the right answer.
To give a bit of perspective on my abilities (so you know what level I'm at): I am learning object-oriented programming, and have completed high school maths and basic computer science. I am by no means an expert.
The nCr operation has a special property, and it is mentioned in the comment above the if condition: // C(n, r) == C(n, n-r). The while loop iterates while r > 0, and with each iteration the value of r is decremented by 1. So in order to reduce the number of times the loop executes, we want to reduce the value of r (if possible). Since C(n, r) == C(n, n-r), we take the smaller value among r and n-r, so that the number of iterations is minimized but the result remains the same.
Assume that n = 100 and r = 99. If we skip the if condition, the loop executes 99 times, whereas with the if condition we update r as r = n-r, so r = 1 and the loop executes only once. We thus save 98 unnecessary iterations.
So there is a big performance improvement if we include the if condition.
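To see the symmetry numerically, here is a quick check using a straightforward BigInteger nCr (an independent implementation written for illustration, not the code from the question): C(100, 99) and C(100, 1) come out equal, even though one takes 99 loop iterations and the other takes 1.

```java
import java.math.BigInteger;

public class NcrSymmetry {
    // Multiplicative formula: after step i the partial result is
    // exactly C(n, i+1), so every intermediate division is exact.
    static BigInteger ncr(int n, int r) {
        BigInteger res = BigInteger.ONE;
        for (int i = 0; i < r; i++) {
            res = res.multiply(BigInteger.valueOf(n - i))
                     .divide(BigInteger.valueOf(i + 1));
        }
        return res;
    }

    public static void main(String[] args) {
        System.out.println(ncr(100, 99)); // same value as below,
        System.out.println(ncr(100, 1));  // with 98 fewer iterations
    }
}
```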
Suppose the algorithm is as below:
public static BigInteger getFactorial(int num) {
    BigInteger fact = BigInteger.valueOf(1);
    for (int i = 1; i <= num; i++)
        fact = fact.multiply(BigInteger.valueOf(i)); // ? time complexity
    return fact;
}
It seems hard to calculate the number of digits of fact.
Optimized version:
public BigInteger getFactorial2(long n) {
    return subFactorial(1, n);
}

private BigInteger subFactorial(long a, long b) {
    if ((b - a) < 10) {
        BigInteger res = BigInteger.ONE;
        for (long i = a; i <= b; i++) {
            res = res.multiply(BigInteger.valueOf(i));
        }
        return res;
    } else {
        long mid = a + (b - a) / 2;
        return subFactorial(a, mid).multiply(subFactorial(mid + 1, b));
    }
}
The number of digits contained in fact is log(fact). It can be shown that O(log(n!)) == O(n log n), so the number of digits in n! grows proportionally to n log n. Since your algorithm piles values onto a partial product without splitting them into smaller intermediate values (divide-and-conquer fashion), we can assert that one of the numbers being multiplied will be less than n when calculating n!. Using grade-school multiplication, each such multiplication takes O(log n * n log n) time, and we have n-1 multiplications, so this is O(n * log n * n log n) == O((n log n)^2). I believe this is a tight upper bound for grade-school multiplication, because even though the early multiplications are far smaller, the latter half are all larger than O((n/2) log^2 (n/2)), and there are n/2 of them, so O((n/2)^2 * log^2 (n/2)) == O((n log n)^2).
However, it is entirely possible that BigInteger uses Karatsuba multiplication, Toom-Cook multiplication, or maybe even the Schönhage–Strassen algorithm. I do not know how these perform on integers of such drastically varying sizes (log n vs n log n), so I cannot give a tight upper bound on those. The best I can do is speculate that it will be less than O(n * F(n log n)), where F(x) is the time to multiply two numbers of length x using a specific algorithm.
Let M(n,k) be the sum of all possible multiplications of k distinct factors with largest possible factor n, where order is irrelevant.
For example, M(5,3) = 225 , because:
1*2*3 = 6
1*2*4 = 8
1*2*5 = 10
1*3*4 = 12
1*3*5 = 15
1*4*5 = 20
2*3*4 = 24
2*3*5 = 30
2*4*5 = 40
3*4*5 = 60
6+8+10+12+15+20+24+30+40+60 = 225.
One can easily notice that there are C(n,k) such multiplications, corresponding to the number of ways one can pick k objects out of n possible objects. In the example above, C(5,3) = 10 and there really are 10 such multiplications, stated above.
The question can also be visualized as all possible n-sized arrays containing exactly k zeros, where each cell that does not contain 0 holds the value of its index + 1. For example, one possible such array is {0,2,3,0,5}. From there, one multiplies the values in the array that are different from 0.
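Before getting into the recursion, it can help to have a brute-force reference implementation of M(n,k) to check against. A sketch that enumerates every k-subset of {1,...,n} with bitmasks (the class name BruteForceM is just illustrative; this is exponential in n, so only usable for small inputs):

```java
public class BruteForceM {
    // Brute-force check of the definition: each bitmask with exactly
    // k set bits picks one set of k distinct factors; sum the products.
    static long bruteM(int n, int k) {
        long total = 0;
        for (int mask = 0; mask < (1 << n); mask++) {
            if (Integer.bitCount(mask) != k) continue;
            long product = 1;
            for (int i = 0; i < n; i++)
                if ((mask & (1 << i)) != 0)
                    product *= i + 1;   // bit i means factor i+1 is chosen
            total += product;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(bruteM(5, 3)); // 225, matching the worked example
    }
}
```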
My approach is a recursive algorithm. Similarly to the definition of M(n,k) above, I define M(n,j,k) to be the sum of all possible multiplications of exactly k distinct factors with largest possible factor n AND smallest possible factor j. Hence, my approach would yield the desired value if run on M(n,1,k). So I start my recursion on M(n,1,k), with the following code, written in Java:
public static long M(long n, long j, long k)
{
    if (k == 1)
        return usefulFunctions.sum(j, n);
    for (long i = j; i <= n - k + 1 + 1; i++)
        return i * M(n, i + 1, k - 1);
}
Explanation of the code:
Starting with, for example, n=5, j=1, k=3, the algorithm continues to run as long as we need more factors (k >= 1), and it uses only distinct factors thanks to the for loop, which increases the minimal possible value j as more factors are added. The loop decreases the number of needed factors as they are 'added', which is achieved by applying M(n, j+1, k-1). The j+1 ensures that the factors are distinct, because the minimal value of the factor increases, and k-1 signifies that we need one less factor.
The function sum(j,n) returns the sum of all numbers from j until n, so sum(1,10) = 55. This is done with a proper, elegant and simple mathematical formula, with no loops: sum(j,n) = (n+1)*n/2 - (j-1)*j/2
public static long sum(long i, long n)
{
    final long s1 = n * (n + 1) / 2;
    final long s2 = i * (i - 1) / 2;
    return s1 - s2;
}
The reason for applying this sum when k == 1, I will explain with an example:
Say we have started with 1*2. Now we need a third factor, which can be any of 3, 4, 5. Because all the multiplications 1*2*3, 1*2*4, 1*2*5 are valid, we can return 1*2*(3+4+5) = 1*2*sum(3,5) = 24.
Similar logic explains the coefficient i next to M(n, j+1, k-1).
Say we now have the sole factor 2. We need 2 more factors, so we multiply 2 by the next iterations, which should result in:
2*(3*sum(4,5) + 4*sum(5,5))
However, for a reason I can't explain yet, the code doesn't work. It returns wrong values and also has "return" issues that cause the function not to return anything; I don't know why.
This is why I'm posting this question here, in the hope that someone will help, either by fixing this code or by sharing code of their own. An explanation of where I'm going wrong would be most appreciated.
Thanks a lot in advance, and sorry for this very long question,
Matan.
-----------------------EDIT------------------------
Below is my fixed code, which solves this question. Posting it in case anyone ever needs it :) Have fun!
public static long M(long n, long j, long k)
{
    if (k == 0)
        return 0;
    if (k == 1)
        return sum(j, n);
    else
    {
        long summation = 0;
        for (long i = j; i <= n; i++)
            summation += i * M(n, i + 1, k - 1);
        return summation;
    }
}
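For anyone reusing it, here is that fixed recursion packaged as a self-contained class (the class name is illustrative; the sum helper is the corrected formula from the post), sanity-checked against the worked example M(5,3) = 225:

```java
public class SumOfProducts {
    // Sum over all products of k distinct factors drawn from {j, ..., n}.
    static long M(long n, long j, long k) {
        if (k == 0) return 0;
        if (k == 1) return sum(j, n);
        long summation = 0;
        for (long i = j; i <= n; i++)
            summation += i * M(n, i + 1, k - 1);
        return summation;
    }

    // Sum of the integers from i to n; returns 0 when i == n + 1.
    static long sum(long i, long n) {
        return n * (n + 1) / 2 - i * (i - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(M(5, 1, 3)); // 225, matching the worked example
    }
}
```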
I see that you got your answer, and I really like your algorithm, but I can't stop myself from posting a better one. Here is the idea:
M(n,k) = the coefficient of x^k in (1+x)(1+2x)(1+3x)...(1+nx)
You can expand the above product with a divide-and-conquer algorithm (split the factors in half, expand each half recursively, and multiply the two resulting polynomials with a fast polynomial multiplication) to get a polynomial of the form a*x^n + b*x^(n-1) + ... + c.
The overall time complexity of that algorithm is O(n log^2 n).
And the best part of the above algorithm is:
in the process of finding the solution for M(n,k), you also find the solution for M(n,x) for every 1 <= x <= n.
I hope this is useful to know :)
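To see the identity in action without the full divide-and-conquer machinery, here is a naive O(n^2) expansion of (1+x)(1+2x)...(1+nx) (the class name is illustrative); the coefficient of x^k should match M(n,k), so expanding for n = 5 gives 225 at x^3:

```java
public class PolyCoeffs {
    // Multiply in one factor (1 + factor*x) at a time, updating the
    // coefficient array from high degree to low so each old value is
    // consumed exactly once. coeffs[d] ends up as M(n, d).
    static long[] expand(int n) {
        long[] coeffs = new long[n + 1];
        coeffs[0] = 1;
        for (int factor = 1; factor <= n; factor++)
            for (int d = factor; d >= 1; d--)
                coeffs[d] += factor * coeffs[d - 1];
        return coeffs;
    }

    public static void main(String[] args) {
        long[] c = expand(5);
        System.out.println(c[3]); // coefficient of x^3, i.e. M(5,3)
    }
}
```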
I am not sure about your algorithm, but you certainly messed up your sum function. The problem you have is connected to type casting and integer division. Try something like this:
public static long sum(long i, long n)
{
    final long s1 = n * (n + 1) / 2;
    final long s2 = (i * i - i) / 2;
    return s1 - s2;
}
You have a problem with your sum function; here is the correct formula:
public static long sum(long i, long n)
{
    double s1 = n * (n + 1) / 2;
    double s2 = i * (i - 1) / 2;
    return (long) (s1 - s2);
}
Here is the full solution:
static int n = 5;
static long k = 3;

// no need to pass n and k into your M function, since they are fixed
public static long M(long start) // start = 1
{
    if (start > k) // if start is greater than k (like your example, going from 1..3), return 0
        return 0;
    int res = 0; // result of the function
    for (long i = start + 1; i < n; i++) {
        res += start * i * sum(i + 1, n); // e.g. 1*2*sum(3,5) + 1*3*sum(4,5), etc.
    }
    return res + M(start + 1); // return res and start again from start+1, which would be 2
}

public static long sum(long i, long n)
{
    if (i > n)
        return 0;
    double s1 = n * (n + 1) / 2;
    double s2 = i * (i - 1) / 2;
    return (long) (s1 - s2);
}

public static void main(String[] args) {
    System.out.println(M(1));
}
Hope it helped.
I just tried implementing code (in Java) for various means by which the nth term of the Fibonacci sequence can be computed and I'm hoping to verify what I've learnt.
The iterative implementation is as follows:
public int iterativeFibonacci(int n)
{
    if (n == 1) return 0;
    else if (n == 2) return 1;
    int i = 0, j = 1, sum = 0;
    for (; (n - 2) != 0; --n)
    {
        sum = i + j;
        i = j;
        j = sum;
    }
    return sum;
}
The recursive implementation is as follows:
public int recursiveFibonacci(int n)
{
    if (n == 1) return 0;
    else if (n == 2) return 1;
    return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2);
}
The memoized implementation is as follows:
public int memoizedFibonacci(int n)
{
    if (n <= 0) return -1;
    else if (n == 1) return 0;
    else if (n == 2) return 1;
    if (memory[n - 1] == 0)
        memory[n - 1] = memoizedFibonacci(n - 1);
    if (memory[n - 2] == 0)
        memory[n - 2] = memoizedFibonacci(n - 2);
    return memory[n - 1] + memory[n - 2];
}
I'm having some doubt when trying to figure out the Big-O of these implementations. I believe the iterative implementation is O(n), as it loops N-2 times.
In the recursive function, some values are recomputed, hence I think it's O(n^2).
In the memoized function, more than half of the values are retrieved via memoization. I've read that an algorithm is O(log N) if it takes constant time to reduce the problem space by a fraction, and O(N) if it takes constant time to reduce the problem space by a constant amount. Am I right in believing that the memoized implementation is O(n) in complexity? If so, wouldn't the iterative implementation be the best of the three, as it does not use the additional memory that memoization requires?
The recursive version is not polynomial time: it's exponential, tightly bounded at φ^n, where φ is the golden ratio (≈ 1.618034). The recursive version uses O(n) memory (the usage comes from the stack).
The memoized version takes O(n) time on the first run, since each number is computed only once. In exchange, it also takes O(n) memory in your current implementation (the n comes from storing the computed values, and also from the stack on the first run). If you run it many times, the time complexity becomes O(M + q), where M is the max of all input n and q is the number of queries. The memory complexity becomes O(M), which comes from the array holding all the computed values.
The iterative implementation is the best if you consider a single run, as it also runs in O(n) but uses only a constant amount of memory, O(1), to compute. For a large number of runs it recomputes everything, so its performance may not be as good as the memoized version's.
(However, practically speaking, long before performance and memory become a problem, the number is likely to overflow even a 64-bit integer, so an accurate analysis must take into account the time it takes to do the additions if you are computing the full number.)
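To illustrate the overflow point: a Java long already overflows at F(93), so computing full values beyond that needs BigInteger. A minimal iterative sketch (the class name is illustrative), using the F(0) = 0, F(1) = 1 indexing:

```java
import java.math.BigInteger;

public class BigFib {
    // Iterative Fibonacci with BigInteger; same O(n) loop as the
    // primitive version, but exact for arbitrarily large n.
    static BigInteger fib(int n) {
        BigInteger a = BigInteger.ZERO, b = BigInteger.ONE;
        for (int i = 0; i < n; i++) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fib(100)); // 354224848179261915075, well past long's range
    }
}
```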
As plesiv mentioned, the Fibonacci number can also be computed in O(log n) time by matrix exponentiation (using the same trick as fast exponentiation: halving the exponent at every step).
A Java implementation to find the Fibonacci number using matrix exponentiation is as follows:
private static int fibonacci(int n)
{
    if (n <= 1)
        return n;
    int[][] f = new int[][]{{1, 1}, {1, 0}};
    power(f, n - 1);
    return f[0][0];
}
// method to raise the initial matrix M = {{1,1},{1,0}} to the n-th power
// by repeated squaring, so only O(log n) matrix multiplications are needed
private static void power(int[][] f, int n)
{
    if (n <= 1)
        return;
    int[][] m = new int[][]{{1, 1}, {1, 0}};
    power(f, n / 2);
    multiply(f, f);        // square: safe, since multiply reads all entries before writing
    if (n % 2 != 0)
        multiply(f, m);    // odd exponent: one extra factor of m
}
// method to multiply two matrices, storing the result in f
private static void multiply(int[][] f, int[][] m)
{
    int x = f[0][0] * m[0][0] + f[0][1] * m[1][0];
    int y = f[0][0] * m[0][1] + f[0][1] * m[1][1];
    int z = f[1][0] * m[0][0] + f[1][1] * m[1][0];
    int w = f[1][0] * m[0][1] + f[1][1] * m[1][1];
    f[0][0] = x;
    f[0][1] = y;
    f[1][0] = z;
    f[1][1] = w;
}
An even faster method than matrix exponentiation for calculating Fibonacci numbers is the Fast Doubling method. The time complexity of both methods is O(log n).
The method follows these formulas:
F(2n) = F(n) * (2*F(n+1) - F(n))
F(2n+1) = F(n)^2 + F(n+1)^2
One Java implementation of the Fast Doubling method is as below. It computes Fibonacci numbers modulo a constant (e.g. static final long MOD = 1_000_000_007L) and uses long intermediates so the modular products do not overflow:
private static void fastDoubling(int n, int[] ans)
{
    // base case: F(0) = 0, F(1) = 1
    if (n == 0)
    {
        ans[0] = 0;
        ans[1] = 1;
        return;
    }
    fastDoubling(n / 2, ans);
    long a = ans[0];                /* F(n/2)     */
    long b = ans[1];                /* F(n/2 + 1) */
    long c = 2 * b - a;
    if (c < 0)
        c += MOD;
    c = (a * c) % MOD;              /* F(2*(n/2))     */
    long d = (a * a + b * b) % MOD; /* F(2*(n/2) + 1) */
    if (n % 2 == 0)
    {
        ans[0] = (int) c;
        ans[1] = (int) d;
    }
    else
    {
        ans[0] = (int) d;
        ans[1] = (int) ((c + d) % MOD);
    }
}
I was just going through the iterative version of the Fibonacci sequence algorithm and found the following code:
int Fibonacci(int n)
{
    int f1 = 0;
    int f2 = 1;
    int fn = n - 1;   // covers n = 1 (returns 0) and n = 2 (returns 1)
    for (int i = 2; i < n; i++)
    {
        fn = f1 + f2;
        f1 = f2;
        f2 = fn;
    }
    return fn;        // the original snippet was missing this return
}
A silly question just came to my mind. The function above adds the two previous numbers and returns the third one, then gets the variables ready for the next iteration. What if it were instead "return the number of the series which is the sum of the previous three numbers"? How can we change the above code to find such a number?
As a hint, notice that the above algorithm works by "cycling" the numbers through some variables. In the above code, at each point you are storing
F_0 F_1
a b
You then "shift" them over by one step in the loop:
F_1 F_2
a b
You then "shift" them again in the next loop iteration:
F_2 F_3
a b
If you want to update the algorithm to sum the last three values, think about storing them like this:
T_0 T_1 T_2
a b c
Then shift them again:
T_1 T_2 T_3
a b c
Then shift them again:
T_2 T_3 T_4
a b c
Converting this intuition into code is a good exercise, so I'll leave those details to you.
That said, there is a much, much faster way to compute the nth term of the Fibonacci and "Tribonacci" sequences. This article describes a very clever trick using matrix multiplication to compute terms more quickly than the above loop, and there is code available here that implements this algorithm.
Hope this helps!
I like recursion. Call me a sadist.
static int rTribonacci(int n, int a, int b, int c) {
    if (n == 0) return a;
    return rTribonacci(n - 1, b, c, a + b + c);
}

int Tribonacci(int n) { return rTribonacci(n, 0, 0, 1); }
I don't normally answer questions that "smell" like homework, but since someone else already replied, this is what I would do:
int Tribonacci(int n)
{
    int[] last = { 0, 0, 1 }; // the start of our sequence
    for (int i = 3; i <= n; i++)
        last[i % 3] = last[i % 3] + last[(i + 1) % 3] + last[(i + 2) % 3];
    return last[n % 3];
}
It can be improved a bit to avoid all the ugly modular arithmetic (which I left in to make the circular nature of the last[] array clear) by changing the loop to this:
for (int i = 3; i <= n; i++)
    last[i % 3] = last[0] + last[1] + last[2];
It can be optimized a bit more and frankly, there are much better ways to calculate such sequences, as templatetypedef said.
If you want to use recursion, you don't need any other parameters:
int FibonacciN(int position)
{
    if (position < 0) throw new IllegalArgumentException("invalid position");
    if (position == 0 || position == 1) return position;
    return FibonacciN(position - 1) + FibonacciN(position - 2);
}