Well, I am trying to figure out the Big O runtimes for this loop nest. I have the answers, but I would like to check with the community.
m = 1;                                   // 1
for (i = 1; i <= n; i++)                 // n+1
    for (j = 1; j <= n*n; j++)           // n(n+1) = n^2+n
        for (k = 1; k <= n*n*n; k++)     // n(n^2+n) = n^3+n^2
            m++;                         // 1*n*n*n
My answer for the complexity is O(n^3).
How many times does the instruction m++ execute?
My answer was n^3 * n^2 * n, but I am not sure.
What is the value of m when the process has finished?
My answer was 1 * n * n * n.
Is this correct?
Your problem is somewhat simplified in that the three loops have no functional dependence on each other (other than being nested together), so we can just multiply the complexities of each loop together to get a final answer.
The k loop is O(n^3), because the upper bound of the loop is n*n*n. By similar reasoning, the middle loop is O(n^2) and the outer loop is O(n). Multiplying together we get O(n^6).
I think the "trap" of this question would be duping someone into thinking that because the loop with the highest complexity is O(n^3) that therefore this term dominates the expression, which then must also be overall O(n^3). This isn't the case as hopefully you can see by now.
I have the code below and am trying to figure out the big O worst-case running time for it. I think that the first loop is O(log N), but I am not sure what the second loop is. I thought maybe it was O(N), but that didn't seem right. Any insights would be very helpful.
for (int jump = inList.size(); jump > 0; jump /= 2) {
    for (int i = 0; i < inList.size(); i = ++i * jump) {
    }
}
This is going to be an O(log(n)) algorithm, because the outer loop is clearly O(log(n)) and, for large enough n, the inner loop finishes in a constant number (2) of iterations, since n/2 * (n+1)/4 = (n^2+n)/8 > n for n > 7. For all values greater than 7 the inner for loop iterates only twice; big O deals with large cases (approaching infinity), in which case the inner loop is constant.
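If you want to check the iteration counts empirically, here is a small harness of my own (with inList.size() replaced by an assumed plain n) that counts how many times the inner body runs; the last few passes, where jump is small, are worth watching in the output:

public class JumpLoopCount {
    static long count(int n) {
        long steps = 0;
        for (int jump = n; jump > 0; jump /= 2) {
            for (int i = 0; i < n; i = ++i * jump) {
                steps++;   // count every execution of the inner body
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1_000, 10_000}) {
            System.out.println("n=" + n + ": " + count(n) + " inner iterations");
        }
    }
}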
An algorithm that goes through all possible sequences of indexes inside an array.
The time complexity of a single loop is linear, and of two nested loops it is quadratic, O(n^2). But what if another loop is nested inside and goes through all the indexes between the two outer indexes? Does the time complexity rise to cubic, O(n^3)? When N becomes very large, there don't seem to be enough iterations to consider the complexity cubic, yet it seems too big to be quadratic, O(n^2).
Here is the algorithm, with N = array length:
for (int i = 0; i < N; i++)
{
    for (int j = i; j < N; j++)
    {
        for (int start = i; start <= j; start++)
        {
            //statement
        }
    }
}
Here is a simple visual of the iterations when N=7 (the pattern continues until i reaches 7 and the outer loop exits):
i=0: j=0 -> start: 0; j=1 -> start: 0,1; j=2 -> start: 0,1,2; ... j=6 -> start: 0..6
i=1: j=1 -> start: 1; j=2 -> start: 1,2; ... j=6 -> start: 1..6
And so on.
Should we consider the time complexity here quadratic, cubic, or something else entirely?
For the basic
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        // something
    }
}
we execute "something" n * (n+1) / 2 times => O(n^2). As to why: it is the simplified form of

sum (sum 1 from y=x to n) from x=1 to n
For your new case we have a similar formula:
sum (sum (sum 1 from z=x to y) from y=x to n) from x=1 to n

The result is n * (n + 1) * (n + 2) / 6 => O(n^3), i.e. the time complexity is cubic.
The 1 in both formulas is where you put the cost of something. In particular, this is where you extend the formula further if something is not constant-time.
Note that all the indices may be off by one; I did not pay particular attention to < vs. <=, etc.
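To double-check the closed form, here is a quick counting sketch (my own) that compares the number of //statement executions against n * (n + 1) * (n + 2) / 6:

public class TripleLoopCount {
    public static void main(String[] args) {
        for (int n : new int[]{1, 2, 5, 7, 20}) {
            long count = 0;
            for (int i = 0; i < n; i++)
                for (int j = i; j < n; j++)
                    for (int start = i; start <= j; start++)
                        count++;                      // stands in for //statement
            long formula = (long) n * (n + 1) * (n + 2) / 6;
            System.out.println("n=" + n + ": count=" + count + ", formula=" + formula);
        }
    }
}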
Short answer: O(choose(N+k, N)), which is the same as O(choose(N+k, k)).
Here is the long answer for how to get there.
You have the basic version of the question correct. With k nested loops, your complexity is going to be O(N^k) as N goes to infinity. However, as k and N both vary, the behavior is more complex.
Let's consider the opposite extreme. Suppose that N is fixed, and k varies.
If N is 0, your time is constant, because the outermost loop fails on the first iteration. If N = 1, then your time is O(k), because you go through all of the levels of nesting with only one choice at every level. If N = 2, something more interesting happens: you go through the nesting over and over again, and it takes time O(k^2). And in general, with fixed N, the time is O(k^N), where one factor of k is the time taken to traverse the nesting and O(k^(N-1)) is taken up by where your sequence advances. This is an unexpected symmetry!
Now what happens if k and N are both big? What is the time complexity of that? Well, here is something to give you intuition.
Can we describe all of the times that we arrive at the innermost loop? Yes!
Consider k+N-1 slots, with k of them being "entered one more loop" and N-1 of them being "we advanced the index by 1". I assert the following:
These correspond 1-1 to the sequences of decisions by which we reach the innermost loop, as can be seen by looking at which indexes are bigger than which others, and by how much.
The "entered one more loop" entries at the end represent the work needed to get to the innermost loop for this iteration that did not lead to any other loop iterations.
If 1 < N, we actually need one more than that in unique work to get to the end.
Now this looks like a mess, but there is a trick that simplifies it quite unexpectedly.
The trick is this. Suppose that we took one of those patterns and inserted one extra "we advanced the index by 1" somewhere in that final stretch of "entered one more loop" entries at the end. How many ways are there to do that? We can insert that entry between any two spots in that last stretch, including the beginning and the end, so there is one more way to do it than there are entries. In other words, the number of ways to do that matches the amount of unique work there was in getting to this iteration!
And what that means is that the total work is O(choose(N+k, N)), which is also O(choose(N+k, k)).
It is worth knowing that, by the normal approximation to the binomial distribution, if N = k then this turns out to be O(2^(N+k)/sqrt(N+k)), which indeed grows faster than any polynomial. If you need a more general or precise approximation, you can use Stirling's approximation for the factorials in choose(N+k, N) = (N+k)! / (N! k!).
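Here is a sketch of my own (not from the answer) that counts arrivals at the innermost body for k nested loops of this non-decreasing-index pattern and compares the count with choose(N+k-1, k); the total work including loop overhead is then O(choose(N+k, k)):

import java.math.BigInteger;

public class NestedLoopCount {
    // Counts arrivals at the innermost body for k nested loops where each
    // index starts at the previous one and runs up to N-1.
    static long arrivals(int n, int k, int lo) {
        if (k == 0) return 1;          // reached the innermost body once
        long total = 0;
        for (int i = lo; i < n; i++)
            total += arrivals(n, k - 1, i);
        return total;
    }

    // Exact binomial coefficient choose(n, r).
    static BigInteger choose(int n, int r) {
        BigInteger result = BigInteger.ONE;
        for (int i = 0; i < r; i++)
            result = result.multiply(BigInteger.valueOf(n - i))
                           .divide(BigInteger.valueOf(i + 1));
        return result;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 6; n++)
            for (int k = 1; k <= 4; k++)
                System.out.println("N=" + n + " k=" + k
                        + " arrivals=" + arrivals(n, k, 0)
                        + " choose(N+k-1,k)=" + choose(n + k - 1, k));
    }
}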
I was wondering if the complexity of an empty for loop like the one below is still O(n^2):
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}
Update: changed the height and width variables to n.
If it doesn't get optimized out by the compiler, the complexity will still be O(n^2) (or rather O(N*M) with the original height and width variables): even though the loop bodies are empty, the condition checks and the incrementation of both counters are still valid operations which have to be performed.
The complexity of any for loop that runs from 1 .. n is O(n), even if it does not do anything inside it. So in your case it is always going to be O(n^2), irrespective of what you do inside the loops.
Here, in your example, i and j both run up to n, and each depends individually on the value of n, making the nested for loops O(n^2).
Pay attention: the update step can be something other than i++, e.g. fun(i), and then its cost matters too.
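For instance, here is a hypothetical sketch (fun is an assumed helper, not from the answer) where the update expression itself does O(n) work, making the "empty" loop O(n^2) overall:

class UpdateCost {
    // fun is a hypothetical helper that does O(n) work per call
    // before advancing i by one.
    static int fun(int i, int n) {
        int acc = 0;
        for (int t = 0; t < n; t++) acc += t;  // simulated O(n) work
        return i + 1;
    }

    static void emptyLoop(int n) {
        for (int i = 0; i < n; i = fun(i, n)) {
            // empty body: all the cost sits in the update step,
            // n iterations * O(n) per update = O(n^2) overall
        }
    }
}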
Based on my understanding of the time complexity of an algorithm, we assume that there are one or more fundamental operations. Re-writing the code using while loops and expanding the for logic:
int i = 0;
while (i < n)
{
    int j = 0;         // j must be reset at the start of each outer pass
    while (j < n)
    {
        ;              // nop, or no-operation
        j = j + 1;     // let jInc be an alias for j + 1
    }
    i = i + 1;         // let iInc be an alias for i + 1
}
Now, if your objective is to perform a 'nop' n^2 times and 'nop' is the fundamental operation (which costs nothing), then the time complexity is O(0). However, if the objective is to advance the two counters ('i' and 'j') from 0 to n-1, i.e. to count n^2 times, then the fundamental operations can be addition (j + 1 and i + 1), comparison (i < n and j < n), or assignment (j = jInc and i = iInc), giving O(n^2).
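To make that concrete, here is an instrumented variant of my own that counts the additions performed by the expanded loops for a small n:

public class OperationCount {
    public static void main(String[] args) {
        int n = 4;
        long additions = 0;       // counts every j + 1 and i + 1
        int i = 0;
        while (i < n) {
            int j = 0;
            while (j < n) {
                j = j + 1;
                additions++;
            }
            i = i + 1;
            additions++;
        }
        // n inner additions per outer pass, plus n outer additions:
        // n*n + n = 20 for n = 4
        System.out.println(additions);
    }
}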
Big O is just an approximation for evaluating the number of steps in an algorithm.
We could have formulas for the exact number of steps, but they are complex and obscure the actual growth rate.
1) O(0.000000001 * n^2 - 1000000000) = O(n^2)
2) O(1000000000 * n) = O(n)
Despite the Big O ranking, the first case is smaller for, e.g., N = 0..1,000,000.
What's more, Big O doesn't take into account how fast each particular step is.
So your loop is a case where O(n^2) could be less than O(1).
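As a quick illustration (my numbers, just evaluating the two step-count formulas above), the first formula stays far below the second across a wide range of n:

public class ConstantsMatter {
    public static void main(String[] args) {
        for (long n : new long[]{1_000, 1_000_000, 1_000_000_000}) {
            double case1 = 0.000000001 * n * n - 1_000_000_000.0;  // the O(n^2) formula
            double case2 = 1_000_000_000.0 * n;                    // the O(n) formula
            System.out.println("n=" + n + ": case 1 = " + case1
                    + ", case 2 = " + case2);
        }
    }
}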
The inner loop performs constant work, O(1), n times, so that is n * O(1) = O(n) * O(1) = O(n).
The outer loop performs the above-mentioned O(n) work n times, so that is n * O(n) = O(n) * O(n) = O(n^2).
In general:
f(n) ∈ O(f(n))
c * f(n) ∈ O(f(n)) if c is a constant
f(n) * g(n) ∈ O(f(n) * g(n))
It depends on the compiler.
Theoretically, it's O(n), where n is the number of iterations, even if there's no task inside the loop.
But some compilers optimize the loop away so that it doesn't iterate n times at all; in that situation, the complexity is O(1).
For the nested loop mentioned above, each single loop is O(n) and the pair together is O(n^2). It's good practice to write O(n^2), as Big O covers the upper bound.
For example, if I have
for (int i = 1; i < 500; i++) {
    for (int j = 0; j < N; j++) {
        if (array[j] == someNumber && i == someNumber)
            counter++;
    }
}
If all the numbers in the array were the same, so that counter increased for each of them, would this make the complexity O(2N), O(N^2), or still just O(N), such that it never becomes O(2N)? I hope the question makes sense. I find it hard to understand how the statements inside a loop might affect the complexity.
Since the statements inside your loop take constant time, they don't have any impact on the overall complexity of your algorithm. The complexity of the code you posted is just O(N), since your outer loop is executed a constant number of times, and your inner loop executes N times.
An example of when it would matter would be if you did something inside the loop like calling a function that searches for a value in an array, or sorts a list. Since those operations depend on the size of the array (or list), you'd have to take them into account when calculating the complexity of your algorithm.
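For instance, here is a hypothetical example (my own construction) where a single loop hides a linear-time call, making the whole thing O(N^2) rather than O(N):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class HiddenCost {
    // Looks like one loop over N elements, but contains() scans the list,
    // so the body costs O(N) and the total is O(N^2).
    static int countDuplicates(List<Integer> values) {
        List<Integer> seen = new ArrayList<>();
        int duplicates = 0;
        for (int v : values) {
            if (seen.contains(v)) {   // O(N) scan hidden inside the loop body
                duplicates++;
            } else {
                seen.add(v);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        System.out.println(countDuplicates(Arrays.asList(1, 2, 2, 3, 1)));  // prints 2
    }
}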
I have seen in a lot of places that the complexity of bubble sort is O(n^2).
But how can that be, given that the inner loop always runs n-i times?
for (int i = 0; i < toSort.length - 1; i++) {
    for (int j = 0; j < toSort.length - 1 - i; j++) {
        if (toSort[j] > toSort[j + 1]) {
            int swap = toSort[j + 1];
            toSort[j + 1] = toSort[j];
            toSort[j] = swap;
        }
    }
}
And what is the "average" value of n-i? n/2.
So it runs in O(n * n/2), which is considered O(n^2).
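If you want to verify the count, here is a small sketch of my own that tallies the comparisons for the loop bounds above and checks them against n*(n-1)/2:

public class BubbleCount {
    public static void main(String[] args) {
        int n = 10;
        int comparisons = 0;
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - 1 - i; j++)
                comparisons++;                       // one comparison per inner pass
        System.out.println(comparisons + " vs " + n * (n - 1) / 2);  // 45 vs 45
    }
}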
There are different types of time complexity; you are using big O notation, which means that every case of this function runs in at most this time complexity.
As n approaches infinity, this is basically n^2 time complexity in the worst-case scenario. Time complexity is not an exact art but more of a ballpark for the sort of speed you can expect from this class of algorithm, so you are trying to be too exact.
For example, the stated time complexity might very well be n^2 even though the exact count is more like n*(n-1), plus whatever processing overhead might be involved.
Since the outer loop runs n times and, on its i-th iteration, the inner loop runs about (n-i) times, the total number of operations is the sum
(n-1) + (n-2) + ... + 1 = n * (n-1) / 2 = O(n^2).
It's O(n^2), because it is roughly length * length iterations.