I am currently struggling with computing the Big O for a recursive exponent function that takes a shortcut whenever n%2 == 0. The code is as follows:
public static int fasterExponent(int x, int n) {
    if (n == 0) return 1;
    if (n % 2 == 0) {
        int temp = fasterExponent(x, n / 2);
        return temp * temp;
    }
    return x * fasterExponent(x, --n); // n is odd: reduce to the even case n - 1
}
I understand that, without the (n % 2 == 0) case, this recursive exponent function would be O(n). Including the (n % 2 == 0) case speeds up the execution, but I do not know how to determine either its complexity or the value of its witness constant c.
The answer is O(log n).
Reason: fasterExponent(x, n/2) halves the input at each step, and when it reaches 0 we are done. This obviously needs log n steps.
But what about fasterExponent(x, --n)? We do this when the input is odd; in the next step it will be even and we fall back to the n/2 case. Consider the worst case, where we have to do this every time we divide n by 2. Then we do the second kind of recursive step once for every time we do the first kind, so we need 2 * log n operations. That is still O(log n).
I hope my explanation helps.
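If you want to convince yourself empirically, here is a minimal sketch (the class name and the static multiplications counter are mine, added purely for illustration; the numeric result overflows int for large n, but only the count matters here):

public class FasterExponentDemo {
    static int multiplications = 0; // added for illustration, not part of the original code

    public static int fasterExponent(int x, int n) {
        if (n == 0) return 1;
        if (n % 2 == 0) {
            int temp = fasterExponent(x, n / 2);
            multiplications++;          // the temp * temp below
            return temp * temp;
        }
        multiplications++;              // the x * ... below
        return x * fasterExponent(x, n - 1);
    }

    public static void main(String[] args) {
        // n of the form 2^k - 1 forces an odd step before every halving step (the worst case)
        for (int n : new int[] {15, 255, 4095, 65535}) {
            multiplications = 0;
            fasterExponent(2, n);
            System.out.println("n = " + n + " -> " + multiplications
                    + " multiplications, roughly 2 * log2(n) = "
                    + 2 * (int) (Math.log(n) / Math.log(2)));
        }
    }
}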
It's intuitive to see that at each stage you are cutting the problem size in half. For instance, to find x^4, you find x^2 (let's call this A) and return the result as A*A. x^2 itself is found by splitting it into x times x.
Considering multiplication of two numbers as a primitive operation, you can see that the recurrence is:
T(N) = T(N/2) + O(1)
Solving this recurrence (using, say, the Master Theorem) yields:
T(N) = O(logN)
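Unrolling the recurrence shows where the logarithm comes from (writing the O(1) term as a constant c):

$$T(N) = T(N/2) + c = T(N/4) + 2c = \cdots = T(1) + c\,\log_2 N = O(\log N)$$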
So n is the length of the array a, and p is an int array of length 2; both elements of p start out as zero. The first call is findbigO(a, n-1, p).
static void findbigO(int[] a, int i, int[] p) {
    if (i == 0) {
        p[0] = a[0];
        p[1] = a[0];
    } else {
        findbigO(a, i - 1, p);
        if (a[i] < p[0]) {
            p[0] = a[i];
        }
        if (a[i] > p[1]) {
            p[1] = a[i];
        }
    }
}
The code basically finds the max and min in an array and stores them in a different array p. I am trying to figure out the Big O of this code. I think it's O(n), since the recursion is called n times, depending on the length of the array. What do you guys think?
Well, i in the first call is by definition n-1, i.e. of the same magnitude as n. Thus, for big-O-over-n notation purposes, the initial i can be treated as n.
The code itself is, other than the recursive invocation, constant time: there are no fors or whiles, nor any other way in which the number of operations this code performs is affected by anything.
The recursive call necessarily marches towards the end condition (i == 0, when no recursion occurs), and does so in O(n) time: in fact, after exactly n steps, 0 will have been reached.
Thus, we have an O(1) 'loop body' being executed O(i-initial) times, where i-initial is of the same magnitude as n; that combines to O(1) * O(n), which is just O(n).
To help you out and to confirm your big-O reasoning, here's the 'point' of big-O:
Make a 2D line graph. On the x-axis, put 'n'. On the y-axis, put 'time taken by the CPU'.
Then fill in this chart. It'll be messy at first (maybe in one of the runs your winamp switches songs or whatnot), but go far enough to the right and the algorithm's complexity will become the deciding factor. It 'balances out', in other words, into a recognizable graph. What does that graph look like?
For this algorithm: a straight line that is not horizontal. In other words, it looks like y = C*x, with C being some constant. That's what O(n) means: that graph will eventually stabilize and look like y = C*x does, for some C.
O(n^2) would mean: The graph eventually stabilizes into something that looks like y = x^2. That's also why O(x^2 + x) is not a thing, because y = x^2 + x, once you go far enough to the right, looks like just y = x^2 does on its right flank.
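If you actually want to produce that chart, here is a minimal sketch (class name, input sizes, and the use of System.nanoTime are my choices for illustration; the timings will be noisy, and the recursion is as deep as the array, so keep n modest or raise the JVM stack size with -Xss):

import java.util.Random;

public class BigOPlot {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int n = 500; n <= 4000; n *= 2) {
            int[] a = new int[n];
            for (int k = 0; k < n; k++) a[k] = rnd.nextInt();
            int[] p = new int[2];
            long start = System.nanoTime();
            findbigO(a, n - 1, p);
            long elapsed = System.nanoTime() - start;
            System.out.println("n = " + n + ", time = " + elapsed + " ns"); // one point on the chart
        }
    }

    static void findbigO(int[] a, int i, int[] p) {
        if (i == 0) { p[0] = a[0]; p[1] = a[0]; }
        else {
            findbigO(a, i - 1, p);
            if (a[i] < p[0]) p[0] = a[i];
            if (a[i] > p[1]) p[1] = a[i];
        }
    }
}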
findbigO(n) = findbigO(n-1) + O(1)
findbigO(n) = (findbigO(n-2) + O(1)) + O(1)
...
findbigO(n) = findbigO(n-n) + n*O(1)
findbigO(n) = findbigO(0) + n*O(1)
findbigO(n) = O(1) + n*O(1)
findbigO(n) = O(1) + O(n)
findbigO(n) <= O(n) + O(n)
findbigO(n) <= 2*O(n)
findbigO(n) is in O(n)
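If you unroll the recursion by hand, you get an ordinary single pass over the array, which makes the O(n) bound immediate. A sketch of that equivalent iterative version (my own rewrite, not the original code; call it as findbigOIterative(a, n - 1, p), like the recursive one):

static void findbigOIterative(int[] a, int i, int[] p) {
    p[0] = a[0]; // running minimum
    p[1] = a[0]; // running maximum
    for (int k = 1; k <= i; k++) {   // one pass, exactly as many steps as there are recursive calls
        if (a[k] < p[0]) p[0] = a[k];
        if (a[k] > p[1]) p[1] = a[k];
    }
}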
I've been given a simple piece of pseudocode and told to determine the big O running time of the function myMethod() by counting the approximate number of operations it performs. The thing I am unsure about is that within the while loop of myMethod() there is a call to doIt(), which contains another while loop. I know that I need to include the operations inside doIt(), but I am unsure whether they should count as n or n^2, since doIt() is a separate function, despite being a while loop within a while loop.
I've written what I think the number of basic operations is beside each line. I would appreciate some guidance on this problem, as I've looked around on the internet without much luck on this specific issue.
static int doIt(int n)
{
    count = 0           //1
    j = 1               //1
    while j < n         //n x n
    {
        count = count + 1   //n x n
        j = j + 2           //n x n
    }
    return count        //1
}

static int myMethod(int n)
{
    i = 1               //1
    while (i < n)       //log n
    {
        doIt(i)         //log n
        i = i * 2       //log n
    }
    return 1;           //1
}
First, your doIt function is a basic while loop. I don't know what "n x n" is supposed to mean, but that loop is not O(n^2); it's O(n), because it executes n/2 times, which we can write as (1/2) * n, and since we don't care about constant factors when writing Big-O notation, doIt has a Big-O complexity of O(n).
You correctly identified myMethod's loop as running log(N) times. Since each call to doIt runs in at most O(N) time, the overall complexity of myMethod is bounded by the log(N) iterations of the outer loop multiplied by the cost of doIt: log(N) * N, which is O(N log(N)).
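If you would rather count the operations than trust the algebra, here is a small sketch (the counter fields are mine, added purely for illustration) that tallies how often each loop body actually runs:

public class CountOps {
    static long innerIterations = 0; // body of doIt's while loop
    static long outerIterations = 0; // body of myMethod's while loop

    static int doIt(int n) {
        int count = 0;
        int j = 1;
        while (j < n) {
            innerIterations++;
            count = count + 1;
            j = j + 2;
        }
        return count;
    }

    static int myMethod(int n) {
        int i = 1;
        while (i < n) {
            outerIterations++;
            doIt(i);
            i = i * 2;
        }
        return 1;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1_000, 10_000, 100_000}) {
            innerIterations = 0;
            outerIterations = 0;
            myMethod(n);
            System.out.println("n = " + n + ": outer body ran " + outerIterations
                    + " times, inner body ran " + innerIterations + " times in total");
        }
    }
}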
I'm doing a practice interview question from HackerRank: Coin Change.
I got stuck and I'm trying to understand a solution to this problem. Here is the recursive solution (with my comments added to understand it):
public static int numWays(int[] coins, int sumTo) {
    return numWaysWhichCoin(coins, sumTo, 0);
}

private static int numWaysWhichCoin(int[] coins, int sumTo, int whichCoin) {
    if (sumTo == 0) {
        //empty set
        return 1;
    } else if (sumTo < 0) {
        //no way to form a negative sum with positive coin values
        return 0;
    } else {
        //sumTo is positive
        //case: gone through all the coins but still a positive sum. Impossible
        if (sumTo > 0 && whichCoin == coins.length) {
            return 0;
        }
        //with and without. With, you can keep using the same coin
        return numWaysWhichCoin(coins, sumTo - coins[whichCoin], whichCoin)
                + numWaysWhichCoin(coins, sumTo, whichCoin + 1);
    }
}
The author states that the algorithm runs in O(2^n) time. From my experience in interviews, you are expected to justify your answers.
How would you justify this time complexity? In my previous work showing that algorithms run in O(2^n) time, I would use a recurrence relation, as for Fibonacci: T(n) = T(n-1) + T(n-2) + c <= 2T(n-1) + c, T(1) = d. But I can't derive a recurrence relation like that here. Is there another way of going about this justification?
The two recursive calls make it behave like a binary tree that grows at a 2^n rate.
As far as complexity goes, your algorithm is identical to the recursive Fibonacci algorithm, so you can look up the many answers, explanations, and even proofs of why recursive Fibonacci is of order 2^n.
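One way to get a Fibonacci-style recurrence after all (a sketch of my own; here M stands for coins.length, and the key observation is that each of the two recursive calls decreases the quantity k = sumTo + (M - whichCoin) by at least 1, because the coin values are positive integers):

$$T(k) \le 2\,T(k-1) + c,\qquad T(0) = d \;\Rightarrow\; T(k) = O(2^{k}) = O\!\left(2^{\,\mathrm{sumTo} + M}\right)$$

Treating the number of coin denominations M as a constant, this is the stated O(2^n) bound with n = sumTo.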
Suppose there are R different combinations (the requested result).
The number of coins of the i-th denomination (0 <= i <= M-1) used in a specific solution r (0 <= r <= R-1) is C(i, r). So for each r in 0...R-1 we have C(0,r)*V0 + C(1,r)*V1 + ... + C(M-1,r)*V(M-1) = N, where Vi is the value of coin i.
The maximum value of C(i, r) over all r in 0...R-1 is therefore c_max(i) = floor(N/Vi), which is less than or equal to N.
The sum of c_max(i) over i = 0..M-1 is <= N*M. So the total number of individual coins that can appear across all the combinations is O(NM).
The algorithm you presented simply iterates over all sub-groups of that group of c_max(i) individual coins for each coin value Vi, which is O(2^(NM)).
So I have a question about an algorithm I'm supposed to "invent"/"find". It's an algorithm which calculates 2^n - 1 in Θ(n^n), Θ(1), and Θ(n) time.
I was thinking about it for several hours but couldn't find a solution for the first two (the last one was the easiest, imo; I posted that algorithm below). I'm just not skilled enough to "invent"/"find" one for a very slow and a very fast algorithm.
So far my algorithms are (In Pseudocode):
The one for Θ(n):
int f(int n) {
    int number = 2
    if (n == 0) then return 0
    if (n == 1) then return 1
    while (n > 1) {
        number = number * 2
        n--
    }
    number = number - 1
    return number
}
A simple and kinda obvious one which uses recursion, though I don't know how fast it is (it would be nice if someone could tell me that):
int f(int n) {
    if (n == 0) then return 0
    if (n == 1) then return 1
    return 3*f(n-1) - 2*f(n-2)
}
Assuming n is not bounded by any constant (and that the output is not a plain int, but a data type that can hold arbitrarily large integers), there is no algorithm that yields 2^n - 1 in Θ(1): the output itself has size Θ(log(2^n)) = Θ(n), since it is n ones in binary. So if we assume such an algorithm exists, runs in constant time, and makes fewer than C operations, then for n = C + 1 you would already need at least C + 1 operations just to write out the output, which contradicts the assumption that C is the upper bound. Hence there is no such algorithm.
For Θ(n^n): if you have a more efficient algorithm (the Θ(n) one, for example), you can add a pointless loop that runs an extra n^n iterations and does nothing important; that makes your algorithm Θ(n^n).
There is also a Θ(log(n) * M(n)) algorithm using exponentiation by squaring, then simply subtracting 1 from the result. Here M(x) is the complexity of your multiplication operation for numbers containing x digits.
As commented by @kajacx, you can even improve the exponentiation-by-squaring approach by applying a Fourier-transform-based multiplication, which lowers M(x).
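A rough sketch of that squaring idea with java.math.BigInteger (my own illustration; the method name is made up, and for 2^n specifically a single shift suffices, as another answer here shows):

import java.math.BigInteger;

// Computes 2^n - 1 using exponentiation by squaring: O(log n) BigInteger multiplications.
static BigInteger twoToNMinusOne(int n) {
    BigInteger result = BigInteger.ONE;
    BigInteger base = BigInteger.valueOf(2);
    int e = n;
    while (e > 0) {
        if ((e & 1) == 1) {
            result = result.multiply(base); // include this power of the base
        }
        base = base.multiply(base);         // square; each multiply costs M(digits)
        e >>= 1;
    }
    return result.subtract(BigInteger.ONE);
}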
Something like:
HugeInt h = 1;
h = h << n;
h = h - 1;
Obviously HugeInt is pseudo-code for an integer type that can be of arbitrary size allowing for any n.
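In Java, for instance, java.math.BigInteger already plays the role of HugeInt, so the same idea is (a sketch):

import java.math.BigInteger;

static BigInteger twoToNMinusOneByShift(int n) {
    // shiftLeft and subtract are BigInteger's equivalents of << and -
    return BigInteger.ONE.shiftLeft(n).subtract(BigInteger.ONE);
}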
=====
Look at amit's answer instead!
The Θ(n^n) one is too tricky for me, but a real Θ(1) algorithm on any "binary" architecture would be:
return n bits filled with 1
(assuming your architecture can allocate and fill n bits in constant time)
;)
I've been doing some questions, but the answers weren't provided, so I was wondering if my answers are correct.
a) given that a[i....j] is an integer array with n elements and x is an integer
int front, back;
while (i <= j) {
    front = (i + j) / 3;
    back = 2 * (i + j) / 3;
    if (a[front] == x)
        return front;
    if (a[back] == x)
        return back;
    if (x < a[front])
        j = front - 1;
    else if (x > a[back])
        i = back + 1;
    else {
        j = back - 1;
        i = front + 1;
    }
}
My answer would be O(1) but I have a feeling I'm wrong.
B)
public static void whatIs(int n) {
    if (n > 0) {
        System.out.print(n + " ");
        whatIs(n / 2);
        whatIs(n / 2);
    }
}
Ans: I'm not sure whether it is log(4n) or log(n), since the recursion happens twice.
A) Yes. O(1) is wrong. You are going around the loop a number of times that depends on i, j, x ... and the contents of the array. Work out how many times you go around the loop in the best and worst cases.
B) Simplify log(4*n) using log(a*b) -> log(a) + log(b) (basic high-school mathematics) and then apply the definition of big O.
But that isn't the right answer either. Once again, you should go back to first principles and count the number of times that the method gets called for a given value of the parameter n. And do a proof by induction.
Both answers are incorrect.
In the first example, on each iteration you either find the number or you shrink the length of the interval by a third, i.e. if the length used to be n, you make it (2/3)*n. In the worst case you find x on the last iteration, when the length of the interval is 1. So, just as with binary search, the complexity comes out of a logarithm: it is O(log_{3/2}(n)), which is in fact simply O(log(n)).
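For completeness, the base of the logarithm never matters for big O, by the change-of-base identity:

$$\log_{3/2} n = \frac{\log n}{\log(3/2)} = C \cdot \log n \;\Rightarrow\; O(\log_{3/2} n) = O(\log n)$$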
In the second example for a given number n you perform twice the number of operations needed for n/2. Start from n = 0 and n = 1 and use induction to prove the complexity is in fact O(n).
Hope this helps.
A) This algorithm seems similar to golden-section search. When analyzing complexity, it is sometimes easier to imagine what would happen if we extended the data structure rather than contracted it. Think of it like this: every loop removes a third of the search interval. That means that if we know exactly how long a certain length takes, we could handle 50% more elements if we were allowed one more loop iteration: exponential growth. Thus, the search algorithm must have complexity O(log n).
B) Every time we add a "layer" of function calls, we need twice as many of them (since the function always calls itself twice). In other words, for a given length and time consumption, doubling n requires twice as many function calls in the last layer. The algorithm is O(n).
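Written as a recurrence for the number of calls C(n) made by whatIs(n), that doubling argument is (a sketch):

$$C(n) = 2\,C(\lfloor n/2 \rfloor) + 1,\quad C(0) = 1 \;\Rightarrow\; C(n) = \Theta(n)$$

(for example by the Master Theorem, with a = b = 2 and constant work per call).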