I'm trying to understand the growth function of this code.
for (int count = 0; count < n; count++) {
    for (int count2 = 1; count2 < n; count2 = count2 * 2) {
        System.out.println(count + ", " + count2);
    }
}
The outer loop is linear, since count is incremented by one each time. The inner loop is O(log n), since count2 doubles on every pass, so the upper bound n would have to grow exponentially to keep up with it. Therefore the entire nested iteration is O(n log n).
A good way to approach these problems when you're just starting to learn about growth and order is to think about them visually. Refer to the walkthrough below:
Assume n = 8 and printing is a constant time operation.
Look at the loops separately first.
The outer loop is fairly trivial:
we start with count = 0 and increment count by 1 each time until we reach n = 8. Therefore we must increment 8/1 = 8 times in order to complete the loop. Generalized to n, the loop runs n times.
The inner loop is slightly more involved:
we start with count2 = 1 and multiply count2 by 2 on each pass until we reach n = 8. In this way count2 increases by a factor of 2 at each iteration, and its value after k iterations is 2^k. To figure out how many iterations it takes to reach n = 8, solve 2^k = 8, i.e. k = log2(8) = 3. Generalized to n, the loop runs log2(n) times.
Combining these two facts: the inner loop runs in full for every iteration of the outer loop, so all in all the body executes n * log2(n) times. Therefore the runtime complexity is O(n log n).
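If you want to check this empirically, here is a minimal sketch (the harness and names are my own, not from the question) that counts how often the inner body runs and compares it against n times the inner iteration count:

public class GrowthCheck {
    public static void main(String[] args) {
        for (int n = 8; n <= (1 << 20); n *= 4) {
            long total = 0;
            for (int count = 0; count < n; count++) {
                for (int count2 = 1; count2 < n; count2 = count2 * 2) {
                    total++; // one execution of the println body
                }
            }
            // count the inner iterations once, separately
            int innerIters = 0;
            for (int count2 = 1; count2 < n; count2 = count2 * 2) {
                innerIters++;
            }
            System.out.println("n=" + n + "  inner=" + innerIters
                + "  total=" + total + "  n*inner=" + ((long) n * innerIters));
        }
    }
}

The printed totals factor exactly into n times a logarithmically growing inner count, which is the n * log2(n) behavior described above.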
I have two functions and I need to find the execution time of both in Big O, but I am confused about fnB.
int fnA(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        for (int j = n; i < j; j = j - 2) {
            sum += i * j;
        }
    }
    return sum;
}
I got O(n^2) for fnA
int fnB(int n) {
    int sum = 0;
    for (int size = 1; size < n; size = 2 * size) {
        sum += fnA(size);
    }
    return sum;
}
Since size increases exponentially within the for loop in fnB, I am leaning toward fnB being O(n^3). Am I correct? If not, please correct me. Thank you.
fnA has a running time of O(n^2).
However, fnB has a running time of O(n^2 log n), since it has log2(n) iterations, and each iteration takes O(n^2) time (it actually takes O(size^2), but since size < n, we can bound it with O(n^2)).
A more detailed explanation:
fnA(n) has n iterations in the outer loop and at most n/2 iterations in the inner loop, which gives the O(n^2) upper bound. Since each iteration of fnB(n) calls fnA(size), it takes O(size^2) == O(n^2) (since size < n).
Now, the loop of fnB(n) assigns the following values to size: 2^0, 2^1, 2^2, ..., 2^k where 2^k <= n. Therefore the number of iterations is k <= log2(n), and the upper bound of fnB is O(n^2 log2(n)).
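As a sanity check, here is an instrumented version (the counter and test harness are mine, not part of the question) that tallies the inner-loop work across all the calls fnB makes:

public class CountOps {
    static long ops = 0;

    static int fnA(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            for (int j = n; i < j; j = j - 2) {
                sum += i * j;
                ops++; // one unit of inner-loop work
            }
        }
        return sum;
    }

    static int fnB(int n) {
        int sum = 0;
        for (int size = 1; size < n; size = 2 * size) {
            sum += fnA(size);
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n = 1 << 10; n <= (1 << 14); n *= 2) {
            ops = 0;
            fnB(n);
            System.out.println("n=" + n + "  ops=" + ops + "  n^2=" + ((long) n * n));
        }
    }
}

Doubling n roughly quadruples ops, which suggests the log factor in the bound above is safe but not tight; see the geometric-series argument in the next answer.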
Big-O notation can be used to represent the time complexity or space complexity of an algorithm.
In your program, fnA has a time complexity of O(n^2) because of its two nested for loops. Note that the condition i < j is true at first (j starts at n and i < n), so the inner loop does execute: for each i it steps j down by 2 and performs about (n - i)/2 iterations, and summing over i gives roughly n^2/4 steps.
Your fnB function calls fnA(size) in a single for loop with size = 1, 2, 4, ..., so the total work is a geometric series dominated by its last term. Therefore, its time complexity is O(n^2).
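Written out (using the same size values 2^0 through 2^k, with 2^k <= n, as in the answer above), the geometric series is:

\[
\sum_{i=0}^{k} (2^i)^2 \;=\; \sum_{i=0}^{k} 4^i \;=\; \frac{4^{k+1} - 1}{3} \;\le\; \frac{4}{3}\, 4^{k} \;\le\; \frac{4}{3}\, n^2 \;=\; O(n^2), \qquad \text{since } 2^k \le n .
\]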
I have the code below and am trying to figure out the big O worst-case running time for it. I think the first loop is O(log N), but I am not sure what the second loop is. I thought maybe it was O(N), but that didn't seem right. Any insights would be very helpful.
for (int jump = inList.size(); jump > 0; jump /= 2) {
    for (int i = 0; i < inList.size(); i = ++i * jump) {
        // constant-time work
    }
}
The outer loop is clearly O(log(n)). The inner loop sets i to (i + 1) * jump on each pass, so while jump is large it finishes in a couple of iterations: with jump = n/2, two passes already push i to n/2 * (n/2 + 1) > n. Big O deals with large cases (approaching infinity), and for most values of jump the inner loop is effectively constant. The catch is the last outer iteration: when jump == 1 the update degenerates to i = i + 1, so that one inner loop runs n times and dominates everything else. The worst-case running time is therefore O(n), not O(log(n)).
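To see both effects, here is a small harness (inList and its size are my own choices) that prints the inner-iteration count for each value of jump:

import java.util.ArrayList;
import java.util.List;

public class JumpLoopCount {
    public static void main(String[] args) {
        List<Integer> inList = new ArrayList<>();
        for (int x = 0; x < 1024; x++) {
            inList.add(x); // contents don't matter, only the size
        }

        for (int jump = inList.size(); jump > 0; jump /= 2) {
            int iterations = 0;
            for (int i = 0; i < inList.size(); i = ++i * jump) {
                iterations++;
            }
            System.out.println("jump=" + jump + "  iterations=" + iterations);
        }
    }
}

For jump >= 2 the counts stay small (growing to about log2(n) at jump = 2), but jump = 1 produces all 1024 iterations, which is why the linear term wins.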
An algorithm that goes through all possible sequences of indexes inside an array.
The time complexity of a single loop is linear, and of two nested loops quadratic, O(n^2). But what if a third loop is nested inside and goes through all indexes between the indexes of the outer two? Does the time complexity rise to cubic, O(n^3)? When N becomes very large it doesn't seem that there are enough iterations to consider the complexity cubic, yet it seems too big to be quadratic, O(n^2).
Here is the algorithm, where N is the array length:
for (int i = 0; i < N; i++)
{
    for (int j = i; j < N; j++)
    {
        for (int start = i; start <= j; start++)
        {
            //statement
        }
    }
}
Here is a simple visual of the iterations when N = 7 (it goes on until i = 7): [visual omitted]
Should we consider the time complexity here quadratic, cubic, or something in between?
For the basic
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        // something
    }
}
we execute something n * (n+1) / 2 times => O(n^2). As to why: it is the simplified form of
sum from x=1 to n of ( sum from y=x to n of 1 ).
For your new case we have a similar formula:
sum from x=1 to n of ( sum from y=x to n of ( sum from z=x to y of 1 ) ). The result is n * (n + 1) * (n + 2) / 6 => O(n^3) => the time complexity is cubic.
The 1 in both formulas is where the cost of something enters; it is also the place to extend the formula further if the innermost work is not constant.
Note that all the indices may be off by one; I did not pay particular attention to < vs <=, etc.
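The closed form is easy to verify with a quick counter (my own snippet; N = 7 matches the example above):

public class TripleLoopCount {
    public static void main(String[] args) {
        int N = 7;
        long count = 0;
        for (int i = 0; i < N; i++)
        {
            for (int j = i; j < N; j++)
            {
                for (int start = i; start <= j; start++)
                {
                    count++; // one execution of //statement
                }
            }
        }
        System.out.println(count);                            // 84
        System.out.println((long) N * (N + 1) * (N + 2) / 6); // 84
    }
}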
Short answer: O(choose(N+k, N)), which is the same as O(choose(N+k, k)).
Here is the long answer for how to get there.
You have the basic version of the question correct. With k nested loops, your complexity is going to be O(N^k) as N goes to infinity. However, as k and N both vary, the behavior is more complex.
Let's consider the opposite extreme. Suppose that N is fixed, and k varies.
If N is 0, your time is constant because the outermost loop fails on its first iteration. If N = 1, your time is O(k) because you go through all of the levels of nesting with only one choice at each. If N = 2, something more interesting happens: you go through the nesting over and over again, and it takes time O(k^N). In general, with fixed N the time is O(k^N), where one factor of k is the time taken to traverse the nesting and the remaining O(k^(N-1)) comes from the positions where your sequence advances. This is an unexpected symmetry!
Now what happens if k and N are both big? What is the time complexity of that? Well here is something to give you intuition.
Can we describe all of the times that we arrive at the innermost loop? Yes!
Consider k+N-1 slots, with k of them being "entered one more loop" and N-1 of them being "we advanced the index by 1". I assert the following:
These correspond 1-1 to the sequences of decisions by which we reach the innermost loop, as can be seen by looking at which indexes are bigger than others, and by how much.
The "entered one more loop" entries at the end represent the work needed to get to the innermost loop for this iteration that did not lead to any other loop iterations.
If 1 < N we actually need one more than that in unique work to get to the end.
Now this looks like a mess, but there is a trick that simplifies it quite unexpectedly.
The trick is this. Suppose we took one of those patterns and inserted one extra "we advanced the index by 1" somewhere in that final stretch of "entered one more loop" entries. How many ways are there to do that? We can insert the extra entry between any two spots in that last stretch, including at its beginning and end, so there is one more way to do it than there are entries. In other words, the number of ways to do that matches how much unique work there was in getting to this iteration!
And what that means is that the total work is proportional to O(choose(N+k, N)), which is also O(choose(N+k, k)).
It is worth knowing that from the normal approximation to the binomial formula, if N = k then this turns out to be O(2^(N+k)/sqrt(N+k)) which indeed grows faster than polynomial. If you need a more general or precise approximation, you can use Stirling's approximation for the factorials in choose(N+k, N) = (N+k)! / ( N! k! ).
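To make the counting concrete, here is a small recursive sketch (entirely my own; it uses the equivalent formulation where each index runs from the previous index up to N-1, which produces the same number of index sequences as the question's loops) that counts arrivals at the innermost body and compares them with a binomial coefficient:

public class NestedLoopCount {
    // Counts how many times the innermost body runs for k nested loops,
    // where each loop's index starts at the previous loop's current index.
    static long countInnermost(int lo, int n, int k) {
        if (k == 0) {
            return 1; // reached the innermost body once
        }
        long total = 0;
        for (int i = lo; i < n; i++) {
            total += countInnermost(i, n, k - 1);
        }
        return total;
    }

    // choose(n, r) via the multiplicative formula; exact for small inputs
    static long choose(int n, int r) {
        long result = 1;
        for (int i = 1; i <= r; i++) {
            result = result * (n - r + i) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        int n = 7, k = 3;
        System.out.println(countInnermost(0, n, k)); // 84
        System.out.println(choose(n + k - 1, k));    // C(9, 3) = 84
    }
}

The innermost body runs choose(N+k-1, k) times (the nondecreasing index sequences), and adding the loop-traversal overhead lifts the total work to O(choose(N+k, k)), as claimed above.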
I have seen in a lot of places that the complexity of bubble sort is O(n^2).
But how can that be, given that the inner loop always runs n-i times?
for (int i = 0; i < toSort.length - 1; i++) {
    for (int j = 0; j < toSort.length - 1 - i; j++) {
        if (toSort[j] > toSort[j + 1]) {
            int swap = toSort[j + 1];
            toSort[j + 1] = toSort[j];
            toSort[j] = swap;
        }
    }
}
And what is the "average" value of n-i? It is n/2.
So the loop body runs about n * n/2 times, which is considered O(n^2).
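A quick counter makes the exact number visible (my own snippet; the array contents are arbitrary, since the comparisons happen regardless):

public class BubbleCount {
    public static void main(String[] args) {
        int n = 10;
        int[] toSort = new int[n];
        for (int i = 0; i < n; i++) {
            toSort[i] = n - i; // reverse-sorted input
        }

        long comparisons = 0;
        for (int i = 0; i < toSort.length - 1; i++) {
            for (int j = 0; j < toSort.length - 1 - i; j++) {
                comparisons++; // one comparison per inner iteration
                if (toSort[j] > toSort[j + 1]) {
                    int swap = toSort[j + 1];
                    toSort[j + 1] = toSort[j];
                    toSort[j] = swap;
                }
            }
        }
        System.out.println(comparisons);            // 45
        System.out.println((long) n * (n - 1) / 2); // 45
    }
}

The exact count n(n-1)/2 and the n * n/2 estimate differ only in lower-order terms, so both are O(n^2).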
There are different types of time complexity. You are using big O notation, which is an upper bound: no input can make the function grow faster than this.
As n approaches infinity this is basically n^2 time complexity in the worst case. Time complexity is not an exact art but more of a ballpark for what sort of speed you can expect from this class of algorithm, and you are trying to be too exact.
For example, the exact count here is closer to n*(n-1)/2 operations than to n^2, but constant factors and lower-order terms are dropped, so it is still reported as O(n^2).
Since the outer loop runs n times and for each iteration i the inner loop runs about (n-i) times, the total number of operations is the sum
(n-1) + (n-2) + ... + 1 = n(n-1)/2 = O(n^2).
It's O(n^2), because the work is roughly length * length.
Say I have the following for loops:
for (int i = 1; i < n; i *= 2) {
    for (int j = 1; j < i; j *= 2) {
        //constant functions here
    }
}
The outer loop will run log(n) times; my question is about how many times the inner loop will run. We can approximate j as 2^s for some s, and the loop stops once 2^s < i no longer holds; i in turn can be approximated as 2^r for some r. Taking the log of both sides, the second for loop runs while s < r. Based on this, can we say that the inner loop runs a constant number of times, and that the total number of times this function runs is just log(n)?
First neglect the outer loop and suppose i = 4: the inner loop runs log(4) = 2 times. Generalizing, the inner loop runs log(i) times for each value of i. Now consider the outer loop: i takes log(n) different values. Hence the complexity is log(n) * log(i), or simply (log(n))^2.
In one word, the answer is: (log2(n))^2
Proof
Consider log2(n) = k. In the first useful iteration of the outer loop the inner loop runs 1 time, because i = 2 (when i = 1 the inner loop does not run at all, since j < 1 never holds); in the next iteration the inner loop runs 2 times, because i = 4; and so on, until in the k'th iteration of the outer loop the inner loop runs k times, because i = 2^k. So the number of iterations is:
1 + 2 + ... + k = k(k+1)/2
so the answer is O((log2(n))^2)
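A quick empirical check (my own snippet; n = 2^20 is arbitrary) that the count matches the k(k±1)/2 shape:

public class LogSquaredCount {
    public static void main(String[] args) {
        int n = 1 << 20; // 2^20
        long count = 0;
        for (int i = 1; i < n; i *= 2) {
            for (int j = 1; j < i; j *= 2) {
                count++; // one execution of the constant-time body
            }
        }
        long k = 20; // log2(n)
        System.out.println(count);           // 190
        System.out.println(k * (k - 1) / 2); // 190
    }
}

With the strict < bounds the exact count comes out to k(k-1)/2 rather than k(k+1)/2, but either way it is Θ((log2(n))^2).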