Big O notation explanation nested while loop - java

I was wondering what the big O notation is for the following (Java) code:
while (n > 0) {
    while (n > 0) {
        n--;
    }
}
If I use n = 10 it will do one iteration in the outer loop and 10 inside the inner loop. So a total of 11 iterations, right?
If I use n = 100 it will do one iteration in the outer loop and 100 inside the inner loop. So a total of 101 iterations, right?
But this is the point where I got stuck. I think the notation is O(n), simply because the number of iterations is almost equal to n.
But I don't know how to prove it.
I am not that strong in math, so a clear explanation would be appreciated.

Informally speaking, for positive arguments the outer loop takes exactly one iteration, since the inner loop decreases n to zero. The inner loop takes exactly n iterations, so its runtime complexity is O(n). In total, although the termination condition of the outer loop syntactically depends on n, it is in fact independent of n. The overall complexity can be seen as O(n + c), where c is a constant representing the execution of the outer loop. However, O(n + c) equals O(n).
What probably puzzles you is that you speak of 101 iterations of a loop, when you are in fact referring to two different loops.
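To make the counting argument concrete, here is a small, self-contained sketch (my own instrumentation, not part of the original post) that counts how often each loop body runs for a few values of n; the outer count stays at 1 while the inner count equals n, so the total grows linearly:
public class LoopCount {
    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            int m = n;             // work on a copy so n stays readable below
            long outer = 0, inner = 0;
            while (m > 0) {
                outer++;
                while (m > 0) {
                    inner++;
                    m--;
                }
            }
            // Expected: outer = 1, inner = n, so the total is n + 1.
            System.out.println("n=" + n + " outer=" + outer + " inner=" + inner);
        }
    }
}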

It's O(n), because the outer loop runs one single time. When the inner loop finishes, then the condition for the outer loop is also false. Therefore the outer loop is not important to the O notation.

Yes, it's O(n). Mathematical proofs for even simple algorithms are not easy, however.
What you could do is apply weakest precondition to formally analyze this.
See this https://en.wikipedia.org/wiki/Predicate_transformer_semantics#While_loop
Informally, it's easy to see that after the inner while, n <= 0 must be true, regardless of what happens inside the inner loop, because that is exactly the negation of the inner loop's condition. And if n <= 0, then the outer while will end. As this happens every single time after the inner while (regardless of its contents), the outer loop never executes more than once.
Weakest precondition can be used to prove this more formally, but if you apply it to bigger problems your head will definitely start to ache. It's educational, though.
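For this particular loop, that argument can be written directly as comments on the code; this is a sketch of the reasoning, not a formal weakest-precondition derivation:
while (n > 0) {        // outer loop: entered only while n > 0
    while (n > 0) {
        n--;
    }
    // The inner loop has just exited, so its condition is false:
    // n <= 0 holds here, no matter what the inner body did.
    // Therefore the outer condition n > 0 is also false, and the
    // outer loop cannot start a second iteration.
}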

Related

In Java, does while(true) behave the same as an endless for-loop?

Do the following loops behave the same in terms of speed & CPU usage:
Loop 1
while(true){}
Loop 2
for(int i = 0; i != -1 ; i++) {}
Are there any differences or is it basically the same code in 2 different expressions?
EDIT:
to specify:
Do the following loops behave the same in terms of speed & CPU usage in Java?
Loop 1
while(true){}
Loop 2
for(;;) {}
These two loops are not the same:
The first loop will never stop
The second loop will run for a very long time (about 2^32 iterations): i overflows past Integer.MAX_VALUE to Integer.MIN_VALUE and then counts back up until it reaches -1, at which point the loop will exit.
Given that these two loops are not the same, the byte code, CPU usage, and speed for each would be different as well.
It is similar but not identical. Keep in mind, the i++ of the for loop is actually being executed each time through the loop, so it will be slower.
Also, the initialization of i to 0 is also something that does not happen with the while loop.
Finally, consider the fact that an integer will eventually roll over from its greatest possible positive value to its smallest possible negative value.
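A minimal sketch (my own example, not from the original answers) showing the wrap-around behaviour that makes the for-loop version terminate:
public class OverflowDemo {
    public static void main(String[] args) {
        int i = Integer.MAX_VALUE;
        i++; // silent overflow: wraps to Integer.MIN_VALUE
        System.out.println(i == Integer.MIN_VALUE); // prints true
        // From Integer.MIN_VALUE the counter keeps incrementing through the
        // negative range until it reaches -1, where the condition i != -1
        // finally becomes false and the loop exits.
    }
}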

Calculating time complexity by just seeing the algorithm code

I have learned the code of the common sorting algorithms and understood how they work. As part of this, one should also be able to find the time and space complexity. I have seen people derive the complexity just by looking at the loops. Can someone guide me towards the best practice for doing this? The given example code is for shell sort. What strategy should be used to understand and calculate the complexity from the code itself, something like the step-count method? I need to understand how to do asymptotic analysis from the code itself. Please help!
int i, n = a.length, diff = n / 2, interchange, temp;
while (diff > 0) {
    interchange = 0;
    for (i = 0; i < n - diff; i++) {
        if (a[i] > a[i + diff]) {
            temp = a[i];
            a[i] = a[i + diff];
            a[i + diff] = temp;
            interchange = 1;
        }
    }
    if (interchange == 0) {
        diff = diff / 2;
    }
}
Since the absolute lower bound on the worst case of a comparison-based sorting algorithm is Ω(n log n), evidently one can't do any better than that. This lower bound applies to the algorithm here as well.
Worst-case time complexity:
1. Inner loop
Let's first start analyzing the inner loop:
for (i = 0; i < n - diff; i++) {
    if (a[i] > a[i + diff]) {
        temp = a[i];
        a[i] = a[i + diff];
        a[i + diff] = temp;
        interchange = 1;
    }
}
Since we don't know much (anything) about the structure of a at this level, it is definitely possible that the condition holds and thus a swap occurs. A conservative analysis therefore says that interchange can be either 0 or 1 at the end of the loop. We know, however, that if interchange is 1, we will execute the outer loop again with the same diff value.
As you comment yourself, the loop is executed O(n-diff) times. Since all instructions inside the loop take constant time, the time complexity of the loop itself is O(n-diff) as well.
Now the question is how many times interchange can be 1 before it turns to 0. An upper bound is given by the case where the minimal element starts at the far right of the list, and thus keeps "swapping" towards the start of the list; the pass for a fixed diff is therefore repeated at most O(n/diff) times. As a result, the computational effort for a single diff value is, in the worst case:
O((n-diff) * n/diff) = O(n^2/diff - n)
2. Outer loop with different diff
The outer loop depends on the value of diff, which starts at n/2. Only when interchange equals 0 at the end of a pass (something we cannot guarantee happens immediately) is diff halved; otherwise another pass is made with the same diff. The halving is repeated until diff reaches 0. This means diff takes (roughly) all powers of 2 up to n/2:
1 2 4 8 ... n/2
Now we can make an analysis by summing:
sum_{i=0}^{log2 n} O(n^2/2^i - n) = O(n^2)
where i represents log2(diff) for a given iteration. If we work this out, we get O(n^2) worst-case time complexity.
Note (on the lower bound of worst-case comparison sorting): One can prove that no comparison-sort algorithm exists with a worst-case time complexity better than O(n log n).
This is because for a list with n items, there are n! possible orderings. For each ordering, there is a different way one needs to reorganize the list.
Since a comparison can, at best, split the set of possible orderings into two equal parts, it requires at least log2(n!) comparisons to find out which ordering we are dealing with. log2(n!) can be estimated using the Stirling approximation:
integral from 1 to n of log(x) dx = n log n - n = O(n log n)
Best-case time complexity: in the best case, the list is already ordered. In that case the inner loop never performs the if-then part, so interchange is never set to 1 and diff is halved after every single pass of the for loop. The outer loop is therefore repeated only O(log n) times, and each pass costs O(n), so the best-case time complexity is O(n log n).
Look at the loops and try to figure out how many times they execute. Start from the innermost ones.
In the given example (not the easiest one to begin with), the for loop (the innermost one) is executed for i in the range [0, n-diff), i.e. it is executed exactly n-diff times.
What is done inside that loop doesn't really matter as long as it takes "constant time", i.e. there is a finite number of atomic operations.
Now the outer loop is executed as long as diff>0. This behavior is complex because an iteration can decrease diff or not (it is decreased when no inverted pair was found).
Now you can say that diff will be decreased log(n) times (because it is halved until 0), and between every decrease the inner loop is run "a certain number of times".
A practiced eye will also recognize interleaved passes of bubble sort and conclude that this number of times will not exceed the number of elements involved, i.e. n-diff, but that's about all that can be said at a glance.
A complete analysis of the algorithm is a horrible mess, as the array gets progressively better and better sorted, which influences the number of inner-loop passes.
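When the exact analysis gets messy like this, one pragmatic supplement is the step-count approach the question asks about: instrument the code with a counter and watch how the count grows. Below is a rough sketch of that idea (my own instrumentation of the code from the question, not part of the original answers); the counts for an already-sorted input should grow roughly like n log n, while less friendly inputs (a reversed array, for example) grow faster, and either way the numbers can be compared against the bounds derived above:
public class ShellSortStepCount {
    // Same loop structure as in the question, plus a step counter.
    static long countSteps(int[] a) {
        int n = a.length, diff = n / 2, interchange, temp;
        long steps = 0;
        while (diff > 0) {
            interchange = 0;
            for (int i = 0; i < n - diff; i++) {
                steps++;                 // one comparison per inner iteration
                if (a[i] > a[i + diff]) {
                    temp = a[i];
                    a[i] = a[i + diff];
                    a[i + diff] = temp;
                    interchange = 1;
                }
            }
            if (interchange == 0) {
                diff = diff / 2;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1000, 2000, 4000}) {
            int[] sorted = new int[n];
            int[] reversed = new int[n];
            for (int i = 0; i < n; i++) {
                sorted[i] = i;
                reversed[i] = n - i;
            }
            System.out.println("n=" + n
                    + " steps(sorted)=" + countSteps(sorted)
                    + " steps(reversed)=" + countSteps(reversed));
        }
    }
}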

Big O notation (The complexity) of the following code?

I am just wondering what the big O is for the code below:
I am thinking O(n). What do you guys think? Thank you for your help!
for (w = Length; w >= 0; w = w / 2) {
    for (i = Length; i >= 0; --i) {
        if (randomNumber() == 4)
            return;
    }
}
Since you are asking for the Big O notation, which is the worst case time complexity, the answer is:
O(n^x) , where x is the denominator used in the outer-for loop.
This pretty much looks like a class assignment, so I will not answer it outright, but just give you some pointers (homework should not be done by copying the assignment to the web ;) ). Also, the assignment is incomplete; I hope your teacher/lecturer did not give it like this.
The missing information is:
Are you looking for worst case runtime or average case runtime? Big-O can be used for both. [originally I included best case runtime, but this is done with big omega, as Jerry pointed out in the comments]
Another missing piece is the datatype of the variables. If they are doubles, it takes much longer until w = w/2 is 0 than with integers.
Worst-case runtime:
The inner loop has i = i-1, so it is executed length times. This gives you O(n) for the inner loop.
This already shows that your estimate is wrong. It has to be the number of executions of the outer loop TIMES the number of executions of the inner loop, so it must be more than linear (unless the outer loop has constant number of executions).
The outer loop has w = w/2, so, in terms of length, how long will this need to be 0? This gives you how often the outer loop is executed. And, by multiplication, the total number of executions.
Then there is this randomNumber(). As I said, I am assuming worst-case analysis; the worst case is clearly that it never returns 4, and thereby we can ignore this return.
Average-case runtime:
The analysis for the loops does not change. For the randomNumber(), we need to estimate how long it takes until the probability of NOT having 4 is sufficiently small. However, I do not have enough information about randomNumber() to do this.
Best-case runtime [should be big omega, not big o]:
In the best case, randomNumber() returns 4 on the first call. So the best case runtime is constant, O(1).

Time complexity for a triple for loop

What I have done before
Asymptotic analysis of three nested for loops
I was solving the time complexity of this algorithm.
Seeing that the outer loop runs n times and the first inner loop runs i times, I applied the summation and got the two outer loops' combined complexity to be n(n+1)/2.
Then the innermost loop executes j times, which equals the summation of j from j = 0 to j = n(n+1)/2. This yields a total complexity of O(n^4).
The problem
Asymptotic analysis of three nested for loops
Seems like my answer is wrong. Where did I make the mistake?
You counted the wrong thing. The inner loop executes j times, and j is always less than n. Therefore, for each of your n(n-1)/2 times the inner loop starts, the body of the inner loop will be executed less than n times, which means the total number of times the loop executes is at most n(n-1)/2 * O(n), which is at most O(n^3). What I think you did was double-counting. You tried to use the "summation" of j, which is the total number of times the for (int j... loop executes. But that information is already contained in the computation you already made, n(n-1)/2; using that information again and multiplying it with n(n-1)/2 is double-counting.
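The linked code is not reproduced above, so as a hedge, here is a sketch assuming the usual loop structure where each bound depends on the previous index; an explicit counter shows the body runs roughly n^3/6 times, i.e. O(n^3) rather than O(n^4):
public class TripleLoopCount {
    public static void main(String[] args) {
        for (int n : new int[] {50, 100, 200}) {
            long count = 0;
            // Assumed shape of the loops in the linked question:
            // each inner bound depends on the enclosing index.
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < i; j++) {
                    for (int k = 0; k < j; k++) {
                        count++;
                    }
                }
            }
            // count is close to n^3 / 6, so doubling n multiplies it by ~8.
            System.out.println("n=" + n + " count=" + count
                    + " n^3/6=" + (long) n * n * n / 6);
        }
    }
}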

Big O for 3 nested loops

A Big O notation question...What is the Big O for the following code:
for (int i = n; i > 0; i = i / 2) {
    for (int j = 0; j < n; j++) {
        for (int k = 0; k < n; k++) {
            count++;
        }
    }
}
My Thoughts:
So breaking it down, I think the outside loop is O(log2(n)) and each of the inner loops is O(n), which would result in O(n^2 * log2(n)). Question #1: is that correct?
Question #2:
When combining nested loops, is it always as simple as multiplying the Big O of each loop?
When loop counters do not depend on one another, it's always possible to work from the inside outward.
The innermost loop always takes time O(n), because it loops n times regardless of the values of j and i.
When the second loop runs, it runs for O(n) iterations, on each iteration doing O(n) work to run the innermost loop. This takes time O(n^2).
Finally, when the outer loop runs, it does O(n^2) work per iteration. It also runs for O(log n) iterations, since it runs for the number of times you have to divide n by two before you reach 1. Consequently, the total work is O(n^2 log n).
In general, you cannot just multiply loops together, since their bounds might depend on one another. In this case, though, since there is no dependency, the runtimes can just be multiplied. Hopefully the above reasoning sheds some light on why this is - it's because if you work from the inside out thinking about how much work each loop does and how many times it does it, the runtimes end up getting multiplied together.
Hope this helps!
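If you want to sanity-check the inside-out reasoning empirically, a small counter works (a sketch of my own, not from the answer above): the innermost statement should run n^2 * (floor(log2 n) + 1) times.
public class NestedLoopCount {
    public static void main(String[] args) {
        for (int n : new int[] {64, 128, 256}) {
            long count = 0;
            for (int i = n; i > 0; i = i / 2) {      // ~log2(n) + 1 iterations
                for (int j = 0; j < n; j++) {        // n iterations
                    for (int k = 0; k < n; k++) {    // n iterations
                        count++;
                    }
                }
            }
            // Expected: count = n * n * (floor(log2 n) + 1)
            System.out.println("n=" + n + " count=" + count);
        }
    }
}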
Yes, this is correct: the outer loop is log N, the other two are N each, for a total of O(N^2 * log N).
In the simple cases, yes. In more complex cases, when loop indexes start at numbers indicated by other indexes, the calculations are more complex.
To answer this slightly (note: slightly) more formally, say T(n) is the time (or number of operations) required to complete the algorithm. Then, for the outer loop, T(n) = log n*T2(n), where T2(n) is the number of operations inside the loop (ignoring any constants). Similarly, T2(n) = n*T3(n) = n*n.
Then, use the following theorem:
If f1(n) = O(g1(n)) and f2(n) = O(g2(n)), then f1(n)×f2(n) = O(g1(n)×g2(n))
(source and proof)
This leaves us with T(n) = O(n^2 log n).
"Combining nested loops" is just an application of this theorem. The trouble can be in figuring out exactly how many operations each loop uses, which in this case is simple.
You can proceed formally using Sigma Notation, to faithfully imitate your loops:
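The summation itself is not reproduced above, so here is a sketch of what it presumably looks like, written out in plain notation: the outer index runs over the values n, n/2, n/4, ..., 1 (about log2(n) + 1 of them), and each of the two inner loops contributes n iterations:
sum over i in {n, n/2, n/4, ..., 1} of ( sum_{j=0}^{n-1} sum_{k=0}^{n-1} 1 ) = (floor(log2 n) + 1) * n^2 = O(n^2 log n)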
