I just need some clarification or help on this Big O problem. I don't know if I'm explaining this correctly, but I noticed that the for loop has a false condition, so that means it won't loop at all. And my professor said it's possible to still determine the run time of the loops. So what I'm thinking is this:
1 + (n - 1 - n) * (n) = 1 + 1 * n = 1 + n = O(n)
Explanation: the 1 is for the operation outside of the loop, (n - 1 - n) is the number of iterations of the outer loop, and n is the number of iterations of the inner loop.
Note: I'm still learning Big O, so please correct me if any of my logic is wrong.
int total = 0;
for (int i = n; i < n - 1; i++) {
for (int j = 0; j < n; j++) {
total = total + 1;
}
}
There shouldn't be any negative number in Big O analysis; a negative running time doesn't make sense. Also, (n - 1 - n) evaluates to -1, not 1. More importantly, your outer loop never gets through a single iteration: i starts at n, and n < n - 1 is false from the start, so the time complexity of whatever statement is inside the loops doesn't matter.
To conclude, the running time is 1 (the statement outside the loop) + 1 (the single failed loop test) = O(1).
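If you want to see this, here is a minimal sketch (the value n = 5 is an arbitrary assumption; any n behaves the same):

int n = 5;
int total = 0;
for (int i = n; i < n - 1; i++) {  // n < n - 1 is false, so the body is never entered
    for (int j = 0; j < n; j++) {
        total = total + 1;
    }
}
System.out.println(total);  // prints 0: only the one failed loop test ran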
Big O notation describes the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4 n^2 - 2 n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows at the order of n^2" and write: T(n) = O(n^2)
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g.
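For the T(n) above, for example, the constants C = 6 and N = 1 work: for all n > 1 we have |4n^2 - 2n + 2| <= 4n^2 + 2n^2 = 6n^2, so T(n) = O(n^2) follows directly from the definition.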
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
So for your case it would be O(n^2), as |f(x)| <= C|g(x)| holds for g(x) = x^2.
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
int total = 0;
for (int i = n; i < n - 1; i++) { // --> n loop
for (int j = 0; j < n; j++) { // --> n loop
total = total + 1; // --> 1 time
}
}
Big O notation makes an assumption about what happens when the value is very big: here the outer loop would run n times and the inner loop runs n times.
Assume n -> 100.
Then the total is n^2 = 10,000 run times.
I am trying to analyze the code below. I wish to calculate both the complexity and the number of operations / iterations (which leads to the complexity). My guess is that the complexity is O(n^2), since I have nested for loops. However, inside the inner loop, the values are switching places. Doesn't this operation make the algorithm repeat things more than once, and hence make it more than O(n^2), or is that only possible with a while loop? How do I find the exact number of iterations / operations done?
for (int i = 0; i < b.Length; i++)
{
for (int j = i + 1; j < b.Length; j++)
{
if (b[i] > b[j])
{
int t = b[i];
b[i] = b[j];
b[j] = t;
}
}
}
The outer loop has b.length iterations. Let's call that n.
The inner loop has n - i - 1 iterations.
The total number of iterations of the inner loop is
(n - 1) + (n - 2) + ... + 1 = n * (n - 1) / 2 = O(n^2).
Each iteration of the inner loop does constant work - at most 1 condition + 3 assignments - so the total running time is O(n^2).
The exact number of operations depends on the input, since the input determines how many times the condition is true.
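If you want to count them yourself, here is a small Java sketch (the sample array is an arbitrary choice) that tallies the inner-loop iterations and compares the tally against n * (n - 1) / 2:

int[] b = {5, 3, 8, 1, 9, 2};  // any sample input works
int n = b.length;
int iterations = 0;
for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
        iterations++;  // counted whether or not a swap happens
        if (b[i] > b[j]) {
            int t = b[i];
            b[i] = b[j];
            b[j] = t;
        }
    }
}
System.out.println(iterations);       // 15
System.out.println(n * (n - 1) / 2);  // 15 as well

The swaps never change how many times the loops run, which is why the complexity stays O(n^2) regardless of the input order.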
The number of loop iterations is controlled by b.Length, which is fixed for a given input, and by the index variables i and j. As long as you don't meddle with i and j inside the loop, the complexity remains the same.
OK, so I know that two nested for-loops, each incrementing by 1, give a quadratic time complexity. Then I was curious: if I change the update of one of the loops to multiply by 2 instead, would I get O(n log n) instead of O(n^2), and vice versa for the other loop?
In each inner loop I have a variable to count how many times the loop executes. The array size is 2^20, so 1,048,576. I'm thinking both methods should have the same complexity of n log n (20 * 1,048,576), but only Algorithm 2 gets close to that count; Algorithm 1 comes out at about n * 2.
To my understanding one loop is O(n) and the other is O(log n), so together they should be O(n log n), and if I swap them I should get O(log n * n), which is the same thing.
int[] arr = new int[1048576];
// Algorithm 1
int counter1 = 0;
for (int i = 1; i < arr.length; i++) {
for (int j = i; j < arr.length; j *= 2) {
counter1++;
}
}
System.out.println(counter1);
// Algorithm 2
int counter2 = 0;
for (int i = 1; i < arr.length; i *= 2) {
for (int j = i; j < arr.length; j++) {
counter2++;
}
}
System.out.println(counter2);
// counter1: 2097130 (n * 2)
// counter2: 19922945 (n log n)
Simple math. Assume for now that j *= 2 will take essentially 20 steps to reach roughly 1 million.
Algo 1 will run roughly 1 million j-loops, but each of those loops takes at most about 20 j-steps to complete. With algo 2 you're running the inner loop (especially the first time) on the order of a million iterations, whereas algo 1's inner loop runs <= 20 times, a million times over. However, you need to account for the decay, especially at the start. By the time you've hit i = 2, you're already down to 19 j-steps per inner loop. By i = 4, you're down to 18, and so on. This early decay essentially "kills" the momentum of the step count. The last ~500,000 i-steps will only increment the counter once each.
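To see that decay numerically, here is a small sketch (it reuses the 2^20 bound from the question; the sampled i values are arbitrary) that prints how many j-steps Algorithm 1's inner loop takes for a few values of i:

int n = 1048576;  // 2^20, as in the question
for (int i : new int[]{1, 2, 4, 1024, 524289}) {
    int steps = 0;
    for (int j = i; j < n; j *= 2) {  // Algorithm 1's inner loop
        steps++;
    }
    System.out.println("i = " + i + " -> " + steps + " j-steps");
}

This prints 20, 19, 18, 10 and 1 j-steps respectively: once i passes n/2, the inner loop runs exactly once.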
Algo 2, by contrast, runs about 1 million j-steps on its first pass alone (i = 1), followed by another N - i j-steps for each later value of i (i = 2, then 4, then 8, etc.). You're running roughly a million steps each time.
Let's count the number of passes in each loop for the second algorithm. Let's take N=arr.length.
First, the outermost loop: i ranges from 1 to N and is multiplied by 2 each time; that makes log(N) iterations.
Then, in the innermost loop, j ranges from i to N and is incremented by 1 each time; that makes (N - i) iterations.
Let's now take k = log(i), so i = 2^k with k running from 0 to log(N) - 1 (i stays strictly below N). The total number of times counter2 is incremented is then the sum of (N - 2^k) for k = 0 to log(N) - 1.
The sum of 2^k for k = 0 to log(N) - 1 is a geometric sum that adds up to 2^log(N) - 1, i.e. N - 1.
Therefore the total count for the second algorithm is N*log(N) - N + 1, which is O(N*log(N)) and matches the 19,922,945 you measured exactly.
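You can check that closed form against the counter printed by the question's code; a quick sketch assuming N = 2^20:

int n = 1048576;  // N = 2^20
int logN = 20;    // log2(N)
System.out.println(n * logN - n + 1);  // 19922945, exactly the counter2 output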
The first algorithm is different. Its counter is incremented once for every pair (i, m) with i * 2^m < N, since j takes the values i, 2i, 4i, and so on. For a fixed m there are about N / 2^m valid values of i, so the total is about N * (1 + 1/2 + 1/4 + ...) = 2N. That is the counter1 of about 2N you measured, so the first algorithm is O(N), not O(N log N): because its inner loop starts at i rather than at a constant, the log factor collapses.
Both solutions you posted have exactly the same complexity; I assume you forgot to swap the variables in the inner for loop's begin clause.
O (n^2) with a constant factor.
Big-O notation works like this:
n/2 * n is the time your loop needs (either of the two you posted)
-> 1/2 * n * n = 1/2 * n^2
1/2 is the constant factor.
Complexity is polynomial, n^2
I am trying to figure out the Big O notation of the following 2 algorithms below but am having trouble.
The first one is:
public static int fragment3 (int n){
int sum = 0;
for (int i = 1; i <= n*n; i *= 4)
for (int j = 0; j < i*i; j++)
sum++;
return sum;
} //end fragment 3
The answer should be O(n^4). When I try to do it myself, this is what I get:
I look at the first for loop and think it runs n^2 log n times. Then for the inner for loop it runs n times, plus the run time of the outer loop, which gives n^3 log n times. I know this is wrong, but I just don't get why.
For the code fragment below, the answer is O(n^9).
public static int fragment6(int n) {
int sum = 0;
for(int i=0; i < n*n*n; i++) {
if(i%100 == 0) {
for(int j=0; j < i*i; j += 10)
sum++;
} // if
else {
for(int k=0; k <= i; k++)
sum++;
} // else
} // outer loop
return sum;
} // fragment 6
When I attempt it I get: n^3 for the outer for loop. For the if statement I get n; for the second for loop I get n, plus the other for loop and if statement, making it n^5. Finally, I get n for the final for loop, and everything adds up to O(n^6).
What am I doing wrong, and what is the correct way to get the O(n^9) complexity?
For the first one:
Let's look at the inner loop.
At the first iteration of the outer loop (i = 1) it runs 1 time. At the second iteration (i = 4) it runs 16 (4*4) times. At the third iteration (i = 16) it runs 256 (16*16) times. In general, at the (k+1)-th iteration of the outer loop the inner loop runs 16^k times, as i = 4^k at that iteration. So the total number of iterations will be
16^0 + 16^1 + ... + 16^K
Now, how many numbers will we have in that sum? To determine that, we should have a look at the outer loop. In it, i grows as 4^k until it reaches n^2. So the last exponent K satisfies 4^K <= n^2 < 4^(K+1), and the total number of outer iterations is K + 1 with K = log_4(n^2).
This means that the total number of runs of the inner loop is at least
16^K = (4^K)^2 > (n^2 / 4)^2 = n^4 / 16
(by dropping all the numbers from the sum but the last one).
Now we know that the inner loop runs at least n^4/16 times, so we are not faster than O(n^4).
Now, the whole sum is
16^0 + 16^1 + ... + 16^K = (16^(K+1) - 1) / 15 < (16/15) * 16^K <= C * n^4,
where C = 16/15 is a constant, so we're not slower than O(n^4) either.
Your approach to computing big-O is flat-out wrong, and you've made computation errors.
In some common cases you can take the worst case number of iterations and multiply them together, but this isn't a sound method and fails for cases like this:
for (i = 1; i < n; i *= 2) {
for (j = 0; j < i; j++) {
sum++;
}
}
Here, the outer loop runs log_2(n) times, and the inner loop worst case is n iterations. So the wrong method that you're using will tell you that the complexity of this code is O(n log n).
The correct method is to count accurately the number of iterations, and approximate at the end only. The number of iterations is actually:
1 + 2 + 4 + 8 + ... + 2^k
where 2^k is the largest power of two less than n. This sum is 2^(k+1) - 1, which is less than 2n. So the accurate complexity is O(n).
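A quick empirical check of that count (n = 1000 is an arbitrary choice):

int n = 1000;
int iterations = 0;
for (int i = 1; i < n; i *= 2) {
    for (int j = 0; j < i; j++) {
        iterations++;
    }
}
System.out.println(iterations);  // 1023 = 2^10 - 1, just under 2n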
Applying this idea to your first example:
for (int i = 1; i <= n*n; i *= 4)
for (int j = 0; j < i*i; j++)
sum++;
i takes the values 4^0, 4^1, 4^2, ..., 4^k where 4^k is the largest power of 4 less than or equal to n^2.
The inner loop executes i^2 times for a given value of i.
So overall, the inner sum++ is executed this many times:
(4^0)^2 + (4^1)^2 + ... + (4^k)^2
= 4^0 + 4^2 + ... + 4^(2k)
= 16^0 + 16^1 + ... + 16^k
= (16^(k+1) - 1) / 15
Now by definition of k we have n^2/4 < 4^k <= n^2. So n^4/16 < 4^2k <= n^4, and since 16^k = 4^2k, we get that the total number of times the inner loop is executed is O(16^k) = O(n^4).
The second example can be solved using a similar approach.
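As a sanity check on the first example, here is a sketch (it assumes the fragment3 method from the question is in scope; the range of n is an arbitrary choice) that watches sum / n^4 stay bounded while n doubles:

for (int n = 4; n <= 128; n *= 2) {
    double ratio = fragment3(n) / Math.pow(n, 4);  // should stay bounded if sum = Theta(n^4)
    System.out.println("n = " + n + ", sum / n^4 = " + ratio);
}

For these powers of two the ratio settles at 16/15, about 1.067, consistent with the O(16^k) = O(n^4) result above.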
First case:
The last run of the inner loop, with i close to n^2, does about n^4 iterations. The outer loop goes up to n^2 with exponential growth, and the sum of all inner-loop runs except the last is smaller than the last run itself. So everything before the last run contributes only a constant factor, and the total is O(n^4).
Second case:
i % 100 == 0 does not really matter in O thinking; it only cuts the count by a constant factor
the else part does not matter; it is much smaller than the main part
the outer loop runs from 0 to n^3 => n^3
the inner loop runs from 0 to i^2, which is up to n^6 => n^6
outer loop times inner loop => n^9
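To make the dominant term concrete: the if branch fires for about n^3/100 values of i, and for i = 100m its inner loop does (100m)^2 / 10 = 1000 * m^2 steps. Summing 1000 * m^2 for m = 1 to n^3/100 gives roughly 1000 * (n^3/100)^3 / 3 = n^9 / 3000 steps, which is Theta(n^9); the divisions by 100 and 10 only change the constant.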
int y = 1;
for (int x = 1 ; x <= n+2 ; x++)
for (int w = n ; w > 0 ; w--)
y = y + 1;
I'm a little confused about determining the Big O of the above code. If the outermost loop were for (int x = 1; x <= n; x++), then the Big O of the loops would be O(n^2), because the outermost loop would iterate n times and the innermost loop would also iterate n times.
However, given that the outermost loop iterates n + 2 times, would that change the Big O, or does the rule that additive constants don't matter apply? Lastly, would it change anything if the innermost loop were to iterate n + 2 times instead of n?
Thank you!
The outer loop runs n + 2 times and the inner loop runs n times, so the code block runs (n + 2) * n times, which is n * n + 2 * n times. With increasing values of n, the 2 * n becomes insignificant, so you're left with n * n, giving you the answer: O(n^2).
Long-ish answer short, the additive constants don't matter.
Suppose we did count the constants. Then, the inner loop is executed
(n+2)(n) = n^2 + 2n
times. This is still O(n^2), since the squared term takes precedence over the linear term.
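A tiny check of that count (n = 10 is an arbitrary choice):

int n = 10;
int y = 1;
for (int x = 1; x <= n + 2; x++)
    for (int w = n; w > 0; w--)
        y = y + 1;
System.out.println(y - 1);          // 120 increments
System.out.println(n * n + 2 * n);  // 120 as well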
n and n + 2 are of the same order of magnitude, so this code runs in O(n^2) even if the inner loop also ran n + 2 times.
for (int x = 1 ; x <= n+2 ; x++)
the outer loop runs (n + 2) times.
for (int w = n ; w > 0 ; w--)
the inner loop runs n times.
((n + 2) * n) => n^2 + 2n => O(n^2), because we keep the dominant term.
The reason is that for large values of n, 2n is insignificant compared to n^2, so we drop the 2n.
You can read here for more explanation: Big O Analysis
I am following an online course and I can't quite understand how to estimate the order of growth of an algorithm. Here's an example:
What is the order of growth of the worst-case running time of the following code fragment, as a function of N?
int sum = 0;
for (int i = 1; i <= 4*N; i = i*4)
    for (int j = 0; j < i; j++)
        sum++;
Can anyone explain to me how to get it?
Just calculate how many times the statement sum++; is executed.
= 1 + 4 + 16 + 64 + ... + 4*N
This is a geometric progression with common ratio 4. If the number of terms in this series is k, then the last term is
4^(k-1) = 4*N.
Sum of series = (4^k - 1)/(4 - 1) = (16*N - 1)/3.
In order of growth we neglect the constant factors.
Hence the complexity is O(N).
This is fairly straightforward:
There are log(N) + 2 iterations of the outer loop (logarithm is base 4): i takes the values 4^0, 4^1, ..., 4*N.
Let x be the outer loop iteration number, counting from 0. The inner loop iterates 4^x times.
Hence the total runtime is Sum(4^x) for x in [0..c], where c is log(N) + 1. This is a geometric series, and its sum is easily calculated using the formula from the wiki:
Sum(4^x) for x in [0..c] = (4^(c+1) - 1)/(4 - 1), which is about (4/3) * 4^c. Now c is log(N) + 1 in base 4, hence 4^c = 4*N. The total answer is hence N, up to constant factors.
While finding the order of an algorithm, we count the total number of steps the algorithm goes through.
Here the innermost loop performs a number of steps equal to the current value of i.
Let i go through the values i1, i2, i3, ..., in.
Then the total number of steps in the algorithm is i1 + i2 + i3 + ... + in.
Here the values i1, i2, i3, ..., in are 1, 4, 16, 64, ..., 4N, which is a GP with first term a = 1 and last term equal to 4N. So the complexity of this algorithm is the sum of all terms in this GP.
SUM=1+4+64+...4N
sum of a GP with n terms a, ar, ar^2, ..., ar^(n-1) = a((r^n) - 1)/(r - 1) = (L*r - a)/(r - 1),
where L = last term.
Here in our case sum = ((4 * 4N) - 1)/3,
which is approximately 1.33 times the last term L:
SUM ≈ 1.33 * 4N,
which is linear in N.
Thus the number of steps is a linear function of N,
and so the complexity of the algorithm is of order N, i.e. O(N).
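To verify, a small sketch (assuming N is a power of 4, so that the last term of the GP is exactly 4N):

int N = 256;  // 4^4, so the loop variable ends exactly at 4*N
int sum = 0;
for (int i = 1; i <= 4 * N; i = i * 4)
    for (int j = 0; j < i; j++)
        sum++;
System.out.println(sum);               // 1365
System.out.println((16 * N - 1) / 3);  // 1365 as well, i.e. linear in N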