I've been doing some practice questions, but the answers aren't provided, so I was wondering if my answers are correct.
a) Given that a[i...j] is an integer array with n elements and x is an integer:
int front, back;
while (i <= j) {
    front = (i + j) / 3;
    back = 2 * (i + j) / 3;
    if (a[front] == x)
        return front;
    if (a[back] == x)
        return back;
    if (x < a[front])
        j = front - 1;
    else if (x > a[back])
        i = back + 1;
    else {
        j = back - 1;
        i = front + 1;
    }
}
My answer would be O(1) but I have a feeling I'm wrong.
B)
public static void whatIs(int n) {
    if (n > 0)
        System.out.print(n + " " + whatIs(n/2) + " " + whatIs(n/2));
}
ans: I'm not sure whether it is log 4n or log n, since the recursion happens twice.
A) Yes. O(1) is wrong. You are going around the loop a number of times that depends on i, j, x ... and the contents of the array. Work out how many times you go around the loop in the best and worst cases.
B) Simplify log(4*n) using log(a*b) -> log(a) + log(b) (basic high-school mathematics) and then apply the definition of big O.
But that isn't the right answer either. Once again, you should go back to first principles and count the number of times that the method gets called for a given value of the parameter n. And do a proof by induction.
Both answers are incorrect.
In the first example, on each iteration you either find the number or you shrink the length of the interval by 1/3, i.e. if the length used to be n you make it (2/3)*n. In the worst case you find x on the last iteration, when the length of the interval is 1. So just like with binary search, the complexity is calculated via a log: the complexity is O(log_{3/2}(n)), and this in fact is simply O(log(n)).
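To spell that out (my own sketch of the arithmetic, not part of the original answer): after k iterations the interval has length (2/3)^k * n, and the loop can only keep running while that length is at least 1, so (2/3)^k * n >= 1 gives k <= log_{3/2}(n) = log(n) / log(3/2) = O(log(n)).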
In the second example, for a given number n you perform twice the number of operations needed for n/2. Start from n = 0 and n = 1 and use induction to prove that the complexity is in fact O(n).
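In recurrence form (again my own sketch): the number of calls satisfies T(n) = 2*T(n/2) + 1 with T(0) = 1, so the calls form a binary tree of depth log2(n), and the total is 1 + 2 + 4 + ... + 2n, roughly 4n calls, i.e. O(n).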
Hope this helps.
A) This algorithm seems similar to the golden-section search. When analyzing complexity, it's sometimes easier to imagine what would happen if we extended the data structure rather than contracting it. Think of it like this: every loop iteration removes a third of the search interval. That means that if we know exactly how long a certain length takes, we could handle 50% more data if we're allowed to loop once more (exponential growth). Thus, the search algorithm must have complexity O(log n).
B) Every time we add a "layer" of function calls, we need to double the number of them (since the function always calls itself twice). In other words, given a certain length and time consumption, doubling n also requires twice as many function calls in the last layer. The algorithm is O(n).
Related
I'm learning about algorithms and I am slightly puzzled when it comes to calculating time complexity. To my understanding, if the output of an algorithm does not depend on the input size, it takes constant time, i.e. O(1). Whereas when it does depend on the input, it is known as linear time, i.e. O(n).
However, how does the time complexity work out when we know the size of the input?
For example, I have the following code which prints out all the prime numbers between 1 and 100. In this scenario, I know the size of the input (100) so how would that translate to the Time Complexity?
public void findPrime(){
    for(int i = 2; i <= 100; i++){
        boolean isPrime = true;
        for(int j = 2; j < i; j++){
            int x = i % j;
            if(x == 0)
                isPrime = false;
        }
        if (isPrime)
            System.out.println(i);
    }
}
In this case, would the complexity still be O(1) because the time is constant? Or would it be O(n), with n being the bound on i that affects the number of iterations of both for loops?
Am I also right in saying that the bound on i affects the algorithm the most in terms of run time? The greater that bound, the longer the algorithm runs for?
Would appreciate any help.
The output is not dynamic and always the same (like the input), which is by definition a constant. The complexity of calculating it is constant; it's always the same. If the upper bound were not fixed, then the complexity wouldn't be constant.
To introduce a dynamic upper bound, we need to change the code and check out the complexities of the lines:
public void findPrime(int n){
    for(int i = 2; i <= n; i++){      // sum from 2 to n
        boolean isPrime = true;       // 1
        for(int j = 2; j < i; j++){   // sum from 2 to i - 1
            int x = i % j;            // 1
            if(x == 0)                // 1
                isPrime = false;      // 1
        }
        if (isPrime)                  // 1
            System.out.println(i);    // 1, see below
    }
}
As the number i gets larger, the complexity of printing it is not strictly constant, but for simplicity we say that printing to System.out is constant.
Now when we know the complexities of the lines, we translate that into an equation and simplify it.
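Written out (my own rough reconstruction; the exact constants don't matter for the bound), the sum is roughly:
T(n) = sum from i = 2 to n of (1 + 3*(i - 2) + 2) = sum from i = 2 to n of (3*i - 3) = (3/2)*(n^2 - n)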
As the result is a polynomial, due to the properties of O notation, we can see that this function is O(n^2).
As the other answers have shown, you can also say it's O(n^2) just by looking at it. You need mathematical proofs only for more difficult cases (and to be sure).
If the algorithm's scalability depends on the input size, it's not always/necessarily O(n^2). It may be cubic O(n^3), logarithmic O(log2(n)), etc.
When an algorithm doesn't depend on the input size, i.e. you have a constant number of operations which doesn't grow when your input grows, that algorithm is said to have constant time complexity, which in asymptotic notation is O(1).
Usually, we want to measure the worst-case complexity of an algorithm, because that is what interests us for sufficiently large inputs (for small inputs it mostly doesn't make any difference). So the worst case is the case when every possible iteration will execute/happen.
Now, pay attention to your double for loop. If you have the static range [2, 100] in your code, every execution does exactly the same amount of work, so it has constant time complexity O(1); but usually we want to find prime numbers in some dynamically given range, and if that's the case, then in the worst case both loops may iterate over the entire range, and as the range grows the number of iterations, and hence operations, grows.
So, your code's worst-case time complexity is definitely O(n^2).
"Whereas when it does depend on the input, it is known as linear time, i.e. O(n)."
That's not true. When it depends on the input size, it is simply not constant.
It could be polynomial, meaning that its complexity is represented by a polynomial f(n), but it could also be some other function of n. Examples are:
f(n) = n - linear
f(n) = log(n) - logarithmic
f(n) = n*n - quadratic
...and so on
f(n) could also be exponential, for example f(n) = 2^n, which represents an algorithm whose complexity grows very fast.
Time complexity depends on what algorithm you use. You can calculate the time complexity of an algorithm using the following simple rules:
A primitive expression: 1
N primitive expressions: N
If you have 2 separate code blocks, the 1st with time complexity A and the 2nd with time complexity B, the total time complexity is A + B.
If you loop N times over a code block with time complexity M, the total time complexity is N*M (see the small example below).
If you use a recursive function, you can calculate the time complexity using the Master theorem: https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
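For instance (my own made-up snippet, not code from the question), applying the block rule and the loop rule:
static void example(int n) {
    int sum = 0, count = 0;
    // Block A: one primitive expression executed n times => complexity n
    for (int i = 0; i < n; i++) {
        sum += i;
    }
    // Block B: one primitive expression inside two nested loops => complexity n * n
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            count++;
        }
    }
    // Two separate blocks: total is n + n*n, which is O(n^2)
}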
Big O notation is a mathematical notation (https://en.wikipedia.org/wiki/Big_O_notation) that describes the bound of a function. Time complexity is usually a function of the input size, so we can use big O notation to describe the bound of the time complexity. Some simple rules:
constant = O(constant) = O(1)
n = O(n)
n^2 = O(n^2)
...
a*f(n) = O(f(n)), where a is a constant
O(f(n) + g(n)) = O(max(f(n), g(n)))
...
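For example, applying these rules to a made-up cost function: 3*n^2 + 5*n + 7 = O(max(3*n^2, 5*n, 7)) = O(3*n^2) = O(n^2).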
An algorithm that goes through all possible sequences of indexes inside an array.
The time complexity of a single loop is linear O(n), and of two nested loops it is quadratic O(n^2). But what if another loop is nested inside and goes through all the indexes between those two indexes? Does the time complexity rise to cubic O(n^3)? When N becomes very large it doesn't seem that there are enough iterations to consider the complexity cubic, yet it seems too big to be quadratic O(n^2).
Here is the algorithm considering N = array length
for(int i=0; i < N; i++)
{
    for(int j=i; j < N; j++)
    {
        for(int start=i; start <= j; start++)
        {
            //statement
        }
    }
}
(Visual of the iterations for N=7, which goes on until i=7, omitted.)
Should we consider the time complexity here quadratic, cubic, or some other order of complexity?
For the basic
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        // something
    }
}
we execute something n * (n+1) / 2 times => O(n^2). As to why: it is the simplified form of
sum (sum 1 from y=x to n) from x=1 to n.
For your new case we have a similar formula:
sum (sum (sum 1 from z=x to y) from y=x to n) from x=1 to n. The result is n * (n + 1) * (n + 2) / 6 => O(n^3) => the time complexity is cubic.
The 1 in both formulas is where you enter the cost of something. This is in particular where you extend the formula further.
Note that all the indices may be off by one, I did not pay particular attention to < vs <=, etc.
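If you want to check that closed form numerically, here is a small throwaway Java method (my own sketch, using the same loop structure as the question):
static void countIterations(int n) {
    long count = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i; j < n; j++) {
            for (int start = i; start <= j; start++) {
                count++; // how often the innermost statement runs
            }
        }
    }
    long formula = (long) n * (n + 1) * (n + 2) / 6; // closed form from above
    System.out.println(n + ": counted = " + count + ", formula = " + formula);
}
For n = 7 both come out to 84.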
Short answer, O(choose(N+k, N)) which is the same as O(choose(N+k, k)).
Here is the long answer for how to get there.
You have the basic question version correct. With k nested loops, your complexity is going to be O(N^k) as N goes to infinity. However as k and N both vary, the behavior is more complex.
Let's consider the opposite extreme. Suppose that N is fixed, and k varies.
If N is 0, your time is constant because the outermost loop fails on the first iteration. If N = 1 then your time is O(k) because you go through all of the levels of nesting with only one choice every time. If N = 2 then something more interesting happens: you go through the nesting over and over again and it takes time O(k^2). And in general, with fixed N the time is O(k^N), where one factor of k is due to the time taken to traverse the nesting, and O(k^(N-1)) comes from the ways your sequence advances. This is an unexpected symmetry!
Now what happens if k and N are both big? What is the time complexity of that? Well here is something to give you intuition.
Can we describe all of the times that we arrive at the innermost loop? Yes!
Consider k+N-1 slots, with k of them being "entered one more loop" and N-1 of them being "we advanced the index by 1". I assert the following:
These correspond 1-1 to the sequences of decisions by which we reach the innermost loop, as can be seen by looking at which indexes are bigger than others, and by how much.
The "entered one more loop" entries at the end are the work needed to get to the innermost loop for this iteration that did not lead to any other loop iterations.
If 1 < N we actually need one more than that in unique work to get to the end.
Now this looks like a mess, but there is a trick that simplifies it quite unexpectedly.
The trick is this. Suppose that we took one of those patterns and inserted one extra "we advanced the index by 1" somewhere in that final stretch of "entered one more loop" entries at the end. How many ways are there to do that? The answer is that we can insert that last entry in between any two spots in that last stretch, including beginning and end, and there is one more way to do that than there are entries. In other words, the number of ways to do that matches how much unique work there was getting to this iteration!
And what that means is that the total work is proportional to O(choose(N+k, N)) which is also O(choose(N+k, k)).
It is worth knowing that from the normal approximation to the binomial formula, if N = k then this turns out to be O(2^(N+k)/sqrt(N+k)) which indeed grows faster than polynomial. If you need a more general or precise approximation, you can use Stirling's approximation for the factorials in choose(N+k, N) = (N+k)! / ( N! k! ).
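As a sanity check (my own, not part of the answer), the formula is consistent with the two extremes discussed above: for fixed k, choose(N+k, k) = (N+k)*(N+k-1)*...*(N+1) / k! grows like N^k, matching O(N^k); for fixed N, choose(N+k, N) grows like k^N, matching O(k^N).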
I am currently struggling with computing the Big O for a recursive exponent function that takes a shortcut whenever n%2 == 0. The code is as follows:
public static int fasterExponent(int x, int n){
    if ( n == 0 ) return 1;
    if ( n % 2 == 0 ){
        int temp = fasterExponent(x, n/2);
        return temp * temp;
    }
    return x * fasterExponent(x, --n);
}
I understand that, without the (n%2 == 0) case, this recursive exponent function would be O(n). The inclusion of the (n%2 == 0) case speeds up the execution time, but I do not know how to determine either its complexity or the value of its witness constant c.
The answer is O(log n).
Reason: fasterExponent(x, n/2) halves the input in each step, and when it reaches 0 we are done. This obviously needs log n steps.
But what about fasterExponent(x, --n)? We do this when the input is odd; in the next step it will be even and we fall back to the n/2 case. Let's consider the worst case, where we have to do this every time before we divide n by 2. Then we do the second kind of recursive step once for every time we do the first kind, so we need 2 * log n operations. That is still O(log n).
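If you want to see that bound concretely, here is a small instrumented sketch (my own, hypothetical) that only counts the recursive calls:
static int calls = 0;

static long countingExponent(long x, int n) {
    calls++; // count every recursive call
    if (n == 0) return 1;
    if (n % 2 == 0) {
        long temp = countingExponent(x, n / 2);
        return temp * temp; // the numeric result overflows for large n; only the call count matters here
    }
    return x * countingExponent(x, n - 1);
}
Calling countingExponent(2, 1_000_000) makes at most about 2 * log2(1_000_000) ≈ 40 calls, not a million, which matches the O(log n) argument above.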
I hope my explanation helps.
It's intuitive to see that at each stage you are cutting the problem size in half. For instance, to find x^4, you find x^2 (let's call this A), and return the result as A*A. Again, x^2 itself is found by splitting it into x and x.
Considering multiplication of two numbers as a primitive operation, you can see that the recurrence is:
T(N) = T(N/2) + O(1)
Solving this recurrence (using, say, the Master Theorem) yields:
T(N) = O(log N)
We have the following Java method:
static void comb(int[] a, int i, int max) {
    if(i < 0) {
        for(int h = 0; h < a.length; h++)
            System.out.print((char)('a' + a[h]));
        System.out.print("\n");
        return;
    }
    for(int v = max; v >= i; v--) {
        a[i] = v;
        comb(a, i-1, v-1);
    }
}

static void comb(int[] a, int n) { // a.length <= n
    comb(a, a.length-1, n - 1);
    return;
}
I have to determine an asymptotic estimate of the cost of the algorithm comb(int[], int) as a function of the size of the input.
Since I'm just starting out with this type of exercise, I can't tell whether the input size here means the size of the array a or some other method parameter.
Once the input size is identified, how do I proceed to determine the cost when there is multiple recursion?
Can you tell me the recurrence equation that determines the cost?
To determine the complexity of this algorithm you have to understand on which "work" you spend most of the time. Different kinds of algorithms may depend on different aspects of their parameters, like input size, input type, input order, and so on. This one depends on the array size and on n.
Operations like System.out.print, (char), 'a' + a[h], a.length, h++ and so on are constant-time operations; they mostly depend on the processor instructions you get after compilation and on the processor that executes them. Eventually they can all be summed into a constant, say C. This constant does not depend on the algorithm or the input size, so you can safely omit it from the estimation.
This algorithm depends linearly on the input size because it cycles over its input array (with a cycle from h = 0 to the last array element). And because n can be equal to the array size (a.length = n is the worst case for this algorithm, because it forces it to execute the recursion "array size" times), we should consider this input case in our estimation. And then we get another cycle with recursion, which will execute the method comb another n times.
So in the worst case we get O(n*n*C) execution steps; for sufficiently large input sizes the constant C becomes insignificant, so you can omit it from the estimation. Thus the final estimate is O(n^2).
The original method being called is comb(int[] a, int n), and you know that a.length <= n. This means you can bound the running time of the method with a function of n, but you should think whether you can compute a better bound with a function of both n and a.length.
For example, if the method executes a.length * n steps and each step takes a constant amount of time, you can say that the method takes O(n^2) time, but O(a.length * n) would be more accurate (especially if n is much larger than a.length).
You should analyze how many times the method is called recursively, and how many operations occur in each call.
Basically for a given size of input array, how many steps does it take to compute the answer? If you double the input size, what happens to the number of steps? The key is to examine your loops and work out how many times they get executed.
So I have a question about an algorithm I'm supposed to "invent"/"find". It's an algorithm which calculates 2^n - 1 in Ө(n^n), Ө(1) and Ө(n).
I've been thinking for several hours but I couldn't find a solution for the first two (the last one was the easiest imo; I posted the algorithm below). I'm not skilled enough to "invent"/"find" a very slow and a very fast algorithm.
So far my algorithms are (In Pseudocode):
The one for Ө(n)
int f(int n) {
    int number = 2
    if(n == 0) then return 0
    if(n == 1) then return 1
    while(n > 1) {
        number = number * 2
        n--
    }
    number = number - 1
    return number
}
A simple and kinda obvious one which uses recursion, though I don't know how fast it is (it would be nice if someone could tell me that):
int f(int n) {
    if(n == 0) then return 0
    if(n == 1) then return 1
    return 3*f(n-1) - 2*f(n-2)
}
Assuming n is not bounded by any constant (and the output should not be a simple int, but a data type that can hold large integers), there is no algorithm that yields 2^n - 1 in Ө(1): the output itself has Ө(n) bits, so if we assume there is such an algorithm that runs in constant time and makes fewer than C operations, then for any n > C you would need more than C operations just to write the output, which contradicts the assumption that C is an upper bound. So there is no such algorithm.
For Ө(n^n): if you have a more efficient algorithm (Ө(n), for example), you can add a pointless loop that runs an extra n^n iterations and does nothing important; it will make your algorithm Ө(n^n).
There is also a Ө(log(n)*M(log(n))) algorithm, using exponentiation by squaring and then simply subtracting 1 from the result. Here M(x) is the complexity of your multiplication operator for numbers containing x digits.
As commented by @kajacx, you can even improve the exponentiation-by-squaring approach by applying a Fourier-transform-based multiplication.
Something like:
HugeInt h = 1;
h = h << n;
h = h - 1;
Obviously HugeInt is pseudo-code for an integer type that can be of arbitrary size allowing for any n.
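In Java, for instance, BigInteger can play the role of HugeInt (my own sketch, not from the answer):
import java.math.BigInteger;

static BigInteger f(int n) {
    // shift a single 1 bit left n times to get 2^n, then subtract 1;
    // building the n-bit result is still Ө(n) work, as explained in the Ө(1) discussion above
    return BigInteger.ONE.shiftLeft(n).subtract(BigInteger.ONE);
}
For example, f(10) returns 1023.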
=====
Look at amit's answer instead!
The Ө(n^n) one is too tricky for me, but a real Ө(1) algorithm on any "binary" architecture would be:
return n bits filled with 1
(assuming your architecture can allocate and fill n bits in constant time)
;)