Explain Time Complexity? - java

How does one find the time complexity of a given algorithm notated both in N and Big-O? For example,
//One iteration of the parameter - n is the basic variable
void setUpperTriangular (int intMatrix[0,…,n-1][0,…,n-1]) {
    for (int i=1; i<n; i++) { //Time Complexity {1 + (n+1) + n} = {2n + 2}
        for (int j=0; j<i; j++) { //Time Complexity {1 + (n+1) + n} = {2n + 2}
            intMatrix[i][j] = 0; //Time complexity {n}
        }
    } //Combining both, it would be {2n + 2} * {2n + 2} = 4n^2 + 4n + 4 TC
} //O(n^2)
Is the Time Complexity for this O(n^2) and 4n^2 + 4n + 4? If not, how did you get to your answer?
Also, I have a question about the time complexity of a method with two array parameters.
//Two iterations in the parameter, n^2 is the basic variable
void division (double dividend [0,…,n-1], double divisor [0,…,n-1]) {
    for (int i=0; i<n; i++) { //TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
        if (divisor[i] != 0) { //TC n^2
            for (int j=0; j<n; j++) { //TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
                dividend[j] = dividend[j] / divisor[i]; //TC n^2
            }
        }
    } //Combining all, it would be {2n^2 + 2} + n^2(2n^2 + 2) = 2n^3 + 4n^2 + 2 TC
} //O(n^3)
Would this one be O(N^3) and 2n^3 + 4n^2 + 2? Again, if not, can somebody please explain why?

Both are O(N^2). You are processing N^2 items in the worst case.
The second example might be just O(N) in the best case (if the second argument is all zeros).
I am not sure how you get the other polynomials. Usually the exact polynomial is of no importance (especially when working with a higher-level language).
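To make the best-case/worst-case point concrete, here is a small sketch (the class name, counter, and test arrays are my own illustration, not part of the question) that counts how many times the inner division statement runs for an all-zero divisor versus an all-nonzero divisor:

class DivisionCount {
    // Counts how many times the inner division statement executes.
    static long countDivisions(double[] dividend, double[] divisor) {
        int n = divisor.length;
        long count = 0;
        for (int i = 0; i < n; i++) {
            if (divisor[i] != 0) {
                for (int j = 0; j < n; j++) {
                    dividend[j] = dividend[j] / divisor[i];
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        double[] dividend = new double[n];
        java.util.Arrays.fill(dividend, 1.0);
        double[] allZero = new double[n];            // best case: every check fails, inner loop never runs
        double[] allOnes = new double[n];
        java.util.Arrays.fill(allOnes, 1.0);         // worst case: inner loop runs for every i

        System.out.println(countDivisions(dividend.clone(), allZero)); // 0       -> only n zero checks, O(n)
        System.out.println(countDivisions(dividend.clone(), allOnes)); // 1000000 -> n*n divisions, O(n^2)
    }
}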

What you're looking for in big O time complexity is the approximate number of times an instruction is executed. So, in the first function, you have the executable statement:
intMatrix[i][j] = 0;
Since the executable statement takes the same amount of time every time, it is O(1). So, for the first function, you can cut it down to look like this and work back from the executable statement:
i: execute n times{ //Time complexity=n*(n+1)/2
    j: execute i times{
        intMatrix[i][j] = 0; //Time complexity=1
    }
}
Working back, the i loop executes n times and the j loop executes i times. For example, if n = 5, the number of instructions executed would be 5+4+3+2+1=15. This is an arithmetic series, and can be represented by n(n+1)/2. The time complexity of the function is therefore n(n+1)/2=n^2/2+n/2=O(n^2).
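As a quick illustration of the arithmetic series, here is a small sketch (class name and counter are my own, with the counter standing in for the assignment in the simplified loops above):

class ArithmeticSeriesCount {
    public static void main(String[] args) {
        int n = 5;
        long count = 0;
        for (int i = 1; i <= n; i++) {      // outer loop: n times
            for (int j = 1; j <= i; j++) {  // inner loop: i times
                count++;                    // stands in for intMatrix[i][j] = 0;
            }
        }
        System.out.println(count);           // 15 for n = 5, i.e. 5+4+3+2+1
        System.out.println(n * (n + 1) / 2); // 15: the closed form n(n+1)/2
    }
}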
For the second function, you're looking at something similar. Your executable statement is:
dividend[j] = dividend[j] / divisor[i];
Now, with this statement it's a little more complicated: as you can see from Wikipedia, the complexity of schoolbook long division is O(n^2) in the number of digits. However, the dividend and divisor DO NOT use your variable n, so they're not dependent on it. Let's call the size of the dividend and divisor (i.e. the actual contents of the arrays) "m". So the time complexity of the executable statement is O(m^2). Moving on to simplify the second function:
i: execute n times{ //Time complexity=n*(n*(1*m^2))=O(n^2*m^2)
    j: execute n times{ //Time complexity=n*(1*m^2)
        if statement: execute ONCE{ //Time complexity=1*m^2
            dividend[j] = dividend[j] / divisor[i]; //Time complexity=m^2
        }
    }
}
Working backwards, you can see that the inner statement will take O(m^2), and since the if statement takes the same amount of time every time, its time complexity is O(1). Your final answer is then O(n^2 * m^2). Since division of fixed-width numbers takes so little time on modern processors, it is usually estimated at O(1) (see this for a better explanation of why), so what your professor is probably looking for is O(n^2) for the second function.

Big O notation, or time complexity, describes the relationship between a change in data size (n) and the magnitude of time/space required for a given algorithm to process it.
In your case you have two loops. For each of the n iterations of the outer loop, you process n items in the inner loop. Thus you have O(n^2), or "quadratic", time complexity.
So for small numbers of n the difference is negligible, but for larger numbers of n, it quickly grinds to a halt.
Eliminating 0 from the divisor as in algorithm 2 does not significantly change the time complexity, because checking whether a number equals 0 is O(1) and several orders of magnitude less than O(n^2). Eliminating the inner loop in that specific case still costs O(n), which is dwarfed by the time it takes to do the O(n^2) work. Your second algorithm thus technically becomes O(n) in the best case (if there are only zeros in the divisor array).

Related

Big O notation of the following min and max recursive code in Java

So n is the length of array a, and p is an int array of length 2; both elements of p are zero. The first call is findbigO(a, n-1, p).
void findbigO(int[] a, int i, int[] p) {
    if (i == 0) {
        p[0] = a[0];
        p[1] = a[0];
    } else {
        findbigO(a, i - 1, p);
        if (a[i] < p[0]) {
            p[0] = a[i];
        }
        if (a[i] > p[1]) {
            p[1] = a[i];
        }
    }
}
The code basically finds the max and min in an array and stores them in a different array p. I am trying to figure out the Big O of this code. I think it is O(n), since the recursion is called n times depending on the length of the array. What do you think?
Well, i in the first call is by definition n-1, i.e. of the same magnitude as n. Thus, for big-O-over-n notation purposes, the initial i can be treated as n.
The code itself is, other than the recursive invocation, constant time: There are no fors or whiles or any other way in which this code's # of executions is affected by anything.
The recursive call necessarily marches towards the end condition (i == 0, when no recursion occurs), and does so in O(n) time: in fact, after exactly n steps, 0 will have been reached.
Thus, we have an O(1) 'loop' being executed O(i-initial) times, where i-initial is the same magnitude as n, which combines to O(1) * O(n) which is just O(n).
To help you out, and to confirm your intuition about big-O notation, here's the 'point' of big-O:
Make a 2D line graph. On the x-axis, put 'n'. On the y-axis, put 'time taken by the CPU'.
Then fill in this chart. It'll be messy at first (maybe in one of the runs, your winamp switches songs or whatnot), but go far enough to the right and the algorithmic complexity of the input will start being the deciding factor. It 'balances out', in other words, into a recognizable graph. What does that graph look like?
For this algorithm, a straight line, that is not horizontal. In other words, it looks like y = C*x with C being some constant. That's what O(n) means: That graph will eventually stabilize and look like y = C*x does, for some C.
O(n^2) would mean: The graph eventually stabilizes into something that looks like y = x^2. That's also why O(x^2 + x) is not a thing, because y = x^2 + x, once you go far enough to the right, looks like just y = x^2 does on its right flank.
findbigO(n) = findbigO(n-1) + O(1)
findbigO(n) = (findbigO(n-2) + O(1)) + O(1)
...
findbigO(n) = findbigO(n-n) + n*O(1)
findbigO(n) = findbigO(0) + n*O(1)
findbigO(n) = O(1) + n*O(1)
findbigO(n) = O(1) + O(n)
findbigO(n) <= O(n) + O(n)
findbigO(n) <= 2*O(n)
findbigO is in O(n)
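Here is a minimal runnable sketch (the class, counter, and sample data are mine) that uses the corrected method from the question and counts the recursive calls, confirming there are exactly n of them:

class FindBigODemo {
    static long calls = 0;                       // counts recursive invocations

    static void findbigO(int[] a, int i, int[] p) {
        calls++;
        if (i == 0) {
            p[0] = a[0];
            p[1] = a[0];
        } else {
            findbigO(a, i - 1, p);
            if (a[i] < p[0]) p[0] = a[i];
            if (a[i] > p[1]) p[1] = a[i];
        }
    }

    public static void main(String[] args) {
        int[] a = {3, -7, 5, 0, 9};
        int n = a.length;
        int[] p = new int[2];                    // p[0] becomes the min, p[1] the max
        findbigO(a, n - 1, p);
        System.out.println("min=" + p[0] + ", max=" + p[1]); // min=-7, max=9
        System.out.println("calls=" + calls);                // 5, i.e. exactly n
    }
}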

What is the time complexity of an iteration through all possible sequences of an array

Consider an algorithm that goes through all possible sequences of indexes inside an array.
The time complexity of a single loop is linear, and of two nested loops it is quadratic, O(n^2). But what if another loop is nested inside and goes through all indexes between these two indexes? Does the time complexity rise to cubic O(n^3)? When N becomes very large it doesn't seem that there are enough iterations to consider the complexity cubic, yet it seems too big to be quadratic O(n^2).
Here is the algorithm, considering N = array length:
for(int i=0; i < N; i++)
{
    for(int j=i; j < N; j++)
    {
        for(int start=i; start <= j; start++)
        {
            //statement
        }
    }
}
(A visual of the iterations when N=7, which goes on until i=7, is omitted here.)
Should we consider the time complexity here quadratic, cubic, or something else entirely?
For the basic
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        // something
    }
}
we execute something n * (n+1) / 2 times => O(n^2). As to why: it is the simplified form of
sum (sum 1 from y=x to n) from x=1 to n.
For your new case we have a similar formula:
sum (sum (sum 1 from z=x to y) from y=x to n) from x=1 to n. The result is n * (n + 1) * (n + 2) / 6 => O(n^3) => the time complexity is cubic.
The 1 in both formulas is where you enter the cost of something. This is in particular where you extend the formula further.
Note that all the indices may be off by one, I did not pay particular attention to < vs <=, etc.
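A quick way to check the closed form is to count the iterations directly; here is a small sketch (class name and the choice of N are mine) comparing the triple loop's count with n(n+1)(n+2)/6:

class TripleLoopCount {
    public static void main(String[] args) {
        int N = 7;
        long count = 0;
        for (int i = 0; i < N; i++) {
            for (int j = i; j < N; j++) {
                for (int start = i; start <= j; start++) {
                    count++;                           // stands in for //statement
                }
            }
        }
        System.out.println(count);                     // 84 for N = 7
        System.out.println(N * (N + 1) * (N + 2) / 6); // 84: matches the closed form
    }
}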
Short answer, O(choose(N+k, N)) which is the same as O(choose(N+k, k)).
Here is the long answer for how to get there.
You have the basic question version correct. With k nested loops, your complexity is going to be O(N^k) as N goes to infinity. However as k and N both vary, the behavior is more complex.
Let's consider the opposite extreme. Suppose that N is fixed, and k varies.
If N is 0, your time is constant because the outermost loop fails on the first iteration. If N = 1 then your time is O(k) because you go through all of the levels of nesting with only one choice at every level. If N = 2 then something more interesting happens: you go through the nesting over and over again and it takes time O(k^N). And in general, with fixed N the time is O(k^N), where one factor of k is due to the time taken to traverse the nesting, and O(k^(N-1)) is taken by the places where your sequence advances. This is an unexpected symmetry!
Now what happens if k and N are both big? What is the time complexity of that? Well here is something to give you intuition.
Can we describe all of the times that we arrive at the innermost loop? Yes!
Consider k+N-1 slots, with k of them being "entered one more loop" and N-1 of them being "we advanced the index by 1". I assert the following:
These correspond 1-1 to the sequences of decisions by which we reach the innermost loop, as can be seen by looking at which indexes are bigger than others, and by how much.
The "entered one more loop" entries at the end are the work needed to get to the innermost loop for this iteration that did not lead to any other loop iterations.
If 1 < N we actually need one more than that in unique work to get to the end.
Now this looks like a mess, but there is a trick that simplifies it quite unexpectedly.
The trick is this. Suppose that we took one of those patterns and inserted one extra "we advanced the index by 1" somewhere in that final stretch of "entered one more loop" entries at the end. How many ways are there to do that? The answer is that we can insert that last entry in between any two spots in that last stretch, including beginning and end, and there is one more way to do that than there are entries. In other words, the number of ways to do that matches how much unique work there was getting to this iteration!
And what that means is that the total work is proportional to O(choose(N+k, N)) which is also O(choose(N+k, k)).
It is worth knowing that from the normal approximation to the binomial formula, if N = k then this turns out to be O(2^(N+k)/sqrt(N+k)) which indeed grows faster than polynomial. If you need a more general or precise approximation, you can use Stirling's approximation for the factorials in choose(N+k, N) = (N+k)! / ( N! k! ).

Time complexity: why O(nlogn)?

I have a document that says the average case time-complexity for the given code is O(nlog2n)
Random r = new Random();
int k = 1 + r.nextInt(n);
for (int i = 0; i < n ; i += k);
I have computed the best and worst cases as:
Best case, k = n leading to time complexity of O(1).
Worst case, k = 1 leading to time complexity of O(n).
How can the average case be O(nlog2n), which is higher than the worst case? Am I missing something?
Edit: The document could be prone to mistakes, so in that case what would be the average time-complexity of the above code, and why?
For a given value of k, the for loop runs n/k times. (I'm ignoring rounding, which makes the analysis a bit more complicated but doesn't change the result).
Averaging over all values of k gives: (n/1 + n/2 + n/3 + ... + n/n) / n. That's the n-th harmonic number, and the harmonic numbers grow like log(n).
Thus the average runtime complexity of this code is log(n). That's O(log n) or equivalently O(log_2 n).
Perhaps your book had an additional outer loop that ran this code n times?
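To check the harmonic-number argument numerically, here is a small sketch of mine that averages the iteration count n/k over all possible k (ignoring rounding, as above) and compares it with ln(n):

class HarmonicAverage {
    public static void main(String[] args) {
        int n = 1_000_000;
        double total = 0;
        for (int k = 1; k <= n; k++) {
            total += (double) n / k;     // the loop "for (int i = 0; i < n; i += k)" runs about n/k times
        }
        double average = total / n;      // this is H_n, the n-th harmonic number
        System.out.println(average);     // about 14.39 for n = 1,000,000
        System.out.println(Math.log(n)); // ln(n) is about 13.82; H_n ~ ln(n) + 0.5772, so the average grows like log n
    }
}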

What is the complexity of an empty for loop?

I was wondering if the complexity of an empty for loop like below is still O(n^2):
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}
Update: changed the height and width variables to n.
If it won't get optimized out by the compiler, the complexity will still be O(n^2) (or actually O(N*M)) - even though the loops bodies are empty, the condition checks and incrementation of both counters are still valid operations which have to be performed.
The complexity of any for loop that runs from 1..n is O(n), even if it does not do anything inside it. So in your case it is always going to be O(n^2), irrespective of what you are doing inside the loops.
Here in your example, i and j both run up to n and hence individually depend on the value of n, giving the nested for loops a complexity of O(n^2).
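As a sanity check, here is a tiny sketch (the counter is my own instrumentation) showing that the body of the empty inner loop is reached n*n times:

class EmptyLoopCount {
    public static void main(String[] args) {
        int n = 2000;
        long visits = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                visits++;                     // one visit to the (otherwise empty) body per inner iteration
            }
        }
        System.out.println(visits);           // 4000000
        System.out.println((long) n * n);     // n^2 condition checks and increments -> O(n^2)
    }
}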
Note also that the update expression could be something other than i++, e.g. fun(i).
Based on my understanding of the time complexity of an algorithm, we assume that there are one or more fundamental operations. Rewriting the code using while loops and expanding the for logic:
int i = 0;
while(i < n)
{
    int j = 0; // reset j on every pass of the outer loop, as the original for loop does
    while(j < n)
    {
        ; //nop or no-operation
        j = j + 1; // let jInc be an alias for j + 1
    }
    i = i + 1; // let iInc be an alias for i + 1
}
Now, if your objective is to perform a 'nop' n^2 times, then the time complexity is O(0), where 'nop' is the fundamental operation. However, if the objective is to iterate two counters ('i' and 'j') from 0 to n-1, i.e. to count n^2 times, then the fundamental operations can be addition (j + 1 and i + 1), comparison (i < n and j < n) or assignment (j = jInc and i = iInc), i.e. O(n^2).
Big O is just an approximation of the count of steps in an algorithm.
We could write formulas for the exact count of steps, but they are complex and make it hard to see the actual complexity.
1) O(0.000000001*n^2 - 1000000000) = O(n^2)
2) O(1000000000*n) = O(n)
Despite the Big O classes, the first expression is smaller than the second, e.g. for N = 0..1,000,000.
Moreover, Big O does not take into account how fast each particular step is.
So your loop is a case where something that is O(n^2) could in practice run faster than something that is O(1).
The inner loop performs constant work O(1) n times, so n*O(1) = O(n)*O(1) = O(n).
The outer loop performs the above-mentioned O(n) work n times, so n*O(n) = O(n)*O(n) = O(n^2).
In general:
f(n) ∈ O(f(n))
c*f(n) ∈ O(f(n)) if c is constant
f(n)*g(n) ∈ O(f(n)g(n))
It depends on the compiler.
Theoretically, it's O(n), where n is the number of iterations, even if there is no work inside the loop.
But in some cases the compiler optimizes the loop away and doesn't iterate n times at all. In that situation, the complexity is O(1).
For the loop mentioned above, it's both O(n) and O(n^2). But it's good practice to write O(n^2), as Big O covers the upper bound.

Finding maximum in O(logn) time?

I've always taken it for granted that iterative search is the go-to method for finding maximum values in an unsorted list.
The thought came to me rather randomly, but in a nutshell: I believe I can accomplish the task in O(logn) time with n being the input array's size.
The approach piggy-backs on merge sort: divide and conquer.
Step 1: divide the findMax() task into two sub-tasks, findMax(leftHalf) and findMax(rightHalf). This division should be finished in O(logn) time.
Step 2: merge the two maximum candidates back up. Each layer in this step should take constant time O(1), and there are, per the previous step, O(logn) such layers. So it should also be done in O(1) * O(logn) = O(logn) time (pardon the abuse of notation). This is so wrong: each comparison is done in constant time, but there are 2^j/2 such comparisons to be done (2^j candidates, i.e. 2^j/2 pairs, at the j-th level).
Thus, the whole task should be completed in O(logn) time. (Correction: O(n) time.)
However, when I try to time it, I get results that clearly reflect a linear O(n) running time.
size = 100000000 max = 0 time = 556
size = 200000000 max = 0 time = 1087
size = 300000000 max = 0 time = 1648
size = 400000000 max = 0 time = 1990
size = 500000000 max = 0 time = 2190
size = 600000000 max = 0 time = 2788
size = 700000000 max = 0 time = 3586
How come?
Here's the code (I left the arrays uninitialized to save on pre-processing time; the method, as far as I'd tested it, accurately identifies the maximum value in unsorted arrays):
public static short findMax(short[] list) {
    return findMax(list, 0, list.length);
}

public static short findMax(short[] list, int start, int end) {
    if (end - start == 1) {
        return list[start];
    }
    else {
        short leftMax = findMax(list, start, start + (end - start) / 2);
        short rightMax = findMax(list, start + (end - start) / 2, end);
        return (leftMax <= rightMax) ? (rightMax) : (leftMax);
    }
}

public static void main(String[] args) {
    for (int j = 1; j < 10; j++) {
        int size = j * 100000000; // 100mil to 900mil
        short[] x = new short[size];
        long start = System.currentTimeMillis();
        int max = findMax(x);
        long end = System.currentTimeMillis();
        System.out.println("size = " + size + "\t\t\tmax = " + max + "\t\t\t time = " + (end - start));
        System.out.println();
    }
}
You should count the number of comparisons that actually take place:
In the final step, after you find the maximum of the first n/2 numbers and the last n/2 numbers, you need 1 more comparison to find the maximum of the entire set of numbers.
On the previous step you have to find the maximum of the first and second groups of n/4 numbers and the maximum of the third and fourth groups of n/4 numbers, so you have 2 comparisons.
Finally, at the end of the recursion, you have n/2 groups of 2 numbers, and you have to compare each pair, so you have n/2 comparisons.
When you sum them all you get :
1 + 2 + 4 + ... + n/2 = n-1 = O(n)
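To verify that count, here is a sketch of the question's findMax with a comparison counter added (the counter and test size are mine; n is a power of two so the halves split evenly):

class FindMaxComparisons {
    static long comparisons = 0;

    static short findMax(short[] list, int start, int end) {
        if (end - start == 1) {
            return list[start];
        }
        int mid = start + (end - start) / 2;
        short leftMax = findMax(list, start, mid);
        short rightMax = findMax(list, mid, end);
        comparisons++;                            // one comparison per merge of two halves
        return (leftMax <= rightMax) ? rightMax : leftMax;
    }

    public static void main(String[] args) {
        int n = 1 << 20;                          // 1,048,576 elements, all zero
        short[] x = new short[n];
        findMax(x, 0, n);
        System.out.println(comparisons);          // n - 1 = 1,048,575 comparisons -> O(n), not O(log n)
    }
}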
You indeed create log(n) layers.
But at the end of the day, you still go through each element of every created bucket. Therefore you go through every element. So overall you are still O(n).
With Eran's answer, you already know what's wrong with your reasoning.
But anyway, there is a theorem called the Master Theorem, which aids in the running time analysis of recursive functions.
It deals with recurrences of the following form:
T(n) = a*T(n/b) + O(n^d)
Where T(n) is the running time for a problem of size n.
In your case, the recurrence would be T(n) = 2*T(n/2) + O(1), so a=2, b=2, and d=0. That is because, for each n-sized instance of your problem, you break it into 2 (a) subproblems of size n/2 (b), and combine them in O(1) = O(n^0) time.
The master theorem simply states three cases:
if a = b^d, then the total running time is O(n^d*log n)
if a < b^d, then the total running time is O(n^d)
if a > b^d, then the total running time is O(n^(log a / log b))
Your case matches the third, so the total running time is O(n^(log 2 / log 2)) = O(n)
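If you want to sanity-check that result numerically, here is a short sketch of mine that unrolls T(n) = 2*T(n/2) + 1 with T(1) = 1 for powers of two and shows the linear growth:

class MasterTheoremCheck {
    // Solves T(n) = 2*T(n/2) + 1 with T(1) = 1, for n a power of two.
    static long T(long n) {
        if (n == 1) return 1;
        return 2 * T(n / 2) + 1;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= (1L << 20); n *= 2) {
            System.out.println("n=" + n + "  T(n)=" + T(n) + "  T(n)/n=" + (double) T(n) / n);
        }
        // T(n) works out to 2n - 1, so T(n)/n approaches 2: linear growth, i.e. O(n).
    }
}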
It is a nice exercise to try to understand the reason behind these three cases. They are merely the cases for which:
1st) We do the same total amount of work at each recursion level (this is the case for mergesort), so we just multiply the merging time, O(n^d), by the number of levels, log n.
2nd) We do less work at the second recursion level than at the first, and so on. Therefore the total work is essentially that of the topmost level (the final merge at the root), O(n^d).
3rd) We do more work at deeper levels (your case), so the running time is O(number of leaves in the recursion tree). In your case you have n leaves at the deepest recursion level, so O(n).
There are some short videos in a Stanford Coursera course which explain the Master Method very nicely, available at https://www.coursera.org/course/algo. I believe you can always preview the course, even if not enrolled.
