Big O of nested for loop - java

int y = 1;
for (int x = 1; x <= n + 2; x++)
    for (int w = n; w > 0; w--)
        y = y + 1;
I'm a little confused about determining the Big O of the above code. If the outermost loop were for (int x = 1; x <= n; x++), then the Big O of the loop would be O(n^2), because the outermost loop would iterate n times and the innermost loop would also iterate n times.
However, given that the outermost loop iterates n + 2 times, does that change the Big O, or does the rule that additive constants don't matter apply? Lastly, would it change anything if the innermost loop were to iterate n + 2 times instead of n?
Thank you!

The outer loop runs n + 2 times, and the inner loop runs n times, so the code block runs (n + 2) * n = n * n + 2 * n times. With increasing values of n, the 2 * n becomes insignificant, so you're left with n * n, giving you the answer: O(n^2)

Long answer short: the additive constants don't matter.
Suppose we did count the constants. Then, the inner loop is executed
(n+2)(n) = n^2 + 2n
times. This is still O(n^2), since the squared term takes precedence over the linear term.
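A quick empirical check of the count above (a sketch; the class and method names are mine):

```java
// Sketch: count how many times the loop body from the question runs.
public class OuterPlusTwo {
    static long count(int n) {
        long runs = 0;
        for (int x = 1; x <= n + 2; x++)   // n + 2 outer iterations
            for (int w = n; w > 0; w--)    // n inner iterations each
                runs++;
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(count(10));   // (10 + 2) * 10 = 120
        System.out.println(count(100));  // (100 + 2) * 100 = 10200
    }
}
```

The counts match (n + 2) * n = n^2 + 2n exactly, and the quadratic term dominates as n grows.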

n and n + 2 are the same order of magnitude, so this code runs in O(n^2).
Even if the inner loop runs n + 2 times.

for (int x = 1; x <= n + 2; x++)
The outer loop runs (n + 2) times.
for (int w = n; w > 0; w--)
The inner loop runs n times.
(n + 2) * n => n^2 + 2n => O(n^2), because we keep only the larger term.
The reason is that for large values of n, 2n is insignificant next to n^2, so we drop the 2n.
You can read here for more explanation: Big O Analysis

Related

Algorithm Analysis Dependent quadratic

I want to ask, for this question, what is the algorithm analysis?
The outer loop will execute n + 2 times.
How do we determine the inner loop? Is it (n + 1) / 2?
for (int i = 0; i <= n; i++)
    for (int j = n; j > i; j--)
The outer loop has n + 1 iterations.
The inner loop has n - i iterations.
To find the overall time complexity, you have to sum the iterations of the inner loop for all the values of i:
n + (n - 1) + (n - 2) + ... + 1 = (n+1)*n/2 = O(n^2)
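You can verify that closed form by counting the inner iterations directly (a small sketch; the class name is mine):

```java
// Sketch: count how often the inner body of the dependent loops runs.
public class TriangularCount {
    static long count(int n) {
        long runs = 0;
        for (int i = 0; i <= n; i++)     // n + 1 outer iterations
            for (int j = n; j > i; j--)  // n - i inner iterations
                runs++;
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(count(10));   // 55 = 10 * 11 / 2
        System.out.println(count(100));  // 5050 = 100 * 101 / 2
    }
}
```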
@Eran's answer works, but it's easier to figure this out without calculating exactly how many iterations there are of the inner loop.
For n/2 iterations of the outer loop (the ones with small i), the inner loop iterates at least n/2 times. So in total you have at least (n/2)^2 = n^2/4 = O(n^2).
We can also see that the outer loop iterates at most n + 1 times, and the inner loop at most n times for each of those, so in total you have at most about n^2 inner iterations, which is also O(n^2).

What is the Big O complexity of this algorithm

I can't seem to figure out the time complexity of this algorithm. I know that the while loop will execute O(log n) times because n keeps halving, but I'm not sure how to represent the time complexity of the inner for loop.
while (n > 0) {
    for (int j = 0; j < n; j++) {
        System.out.println("*");
    }
    n = n / 2;
}
Each iteration of the while loop divides n by 2.
Therefore, the first for loop runs n times, the second for loop runs n/2 times, the third n/4 times, and so on.
If we let n be an arbitrarily large value, the total work is:
n + n/2 + n/4 + n/8 + n/16 + n/32 + ... = 2n
Of course, n is an integer, and programmatically the division reaches 0 after enough repetitions, so the sum is finite.
Therefore, the time complexity of the algorithm is O(2n) = O(n)
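Counting the prints instead of doing them makes the geometric sum visible (a sketch; names are mine):

```java
// Sketch: total work of the halving while-loop, counted instead of printed.
public class HalvingStars {
    static long stars(int n) {
        long printed = 0;
        while (n > 0) {
            for (int j = 0; j < n; j++)
                printed++;          // stands in for println("*")
            n = n / 2;
        }
        return printed;
    }

    public static void main(String[] args) {
        System.out.println(stars(8));     // 8 + 4 + 2 + 1 = 15
        System.out.println(stars(16));    // 16 + 8 + 4 + 2 + 1 = 31
        System.out.println(stars(1000));  // 1994, safely below 2 * 1000
    }
}
```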
We can figure out the complexity of both loops by inspection. For an input of n = 8, the inner loop would print out this many stars at each iteration:
8 + 4 + 2 + 1 = 15 ~ 2^4
For n = 16:
16 + 8 + 4 + 2 + 1 = 31 ~ 2^5
The complexity here is:
O(2^(floor(log_2(N)) + 1)) = 2 * O(2^(log_2(N)))
= O(N)
We can see that at each step the number of print statements is roughly twice the value of the input N. So the overall number of print operations is actually O(N), which is linear in the actual value of the input N.
Let n₀ be the start value of n.
At the i-th iteration of the while loop (counting from 0), n = n₀ * ½ⁱ.
The cost of one iteration of the while loop is n (because of the for loop).
Hence the global cost is the sum, for i from 0 upward, of n₀ * ½ⁱ,
or: n₀ times the sum, for i from 0 upward, of ½ⁱ.
That sum is a geometric series, and is less than 2.
Hence the global cost is O(2 * n₀), i.e. O(n₀).

Why am I getting different time complexities in these two different nested for-loops?

Ok, so I know two nested for-loops that each increment by 1 give a quadratic time complexity. Then I was curious: if I change the update of one of the loops to multiply by 2, would I get O(n log n) instead of O(n^2), and vice versa for the other loop?
In each inner loop I have a variable to count how many times the loop executes. The array is of size 2^20, so 1,048,576. I'm thinking both methods should have the same complexity of n log n (20 * 1,048,576). But only Algorithm 2 gets close to that count; Algorithm 1 comes out at about n * 2.
To my understanding one loop is O(n) and the other is O(log n), so together it should be O(n log n), and if I switch them I should get O(log n * n), which is the same thing.
int[] arr = new int[1048576];

// Algorithm 1
int counter1 = 0;
for (int i = 1; i < arr.length; i++) {
    for (int j = i; j < arr.length; j *= 2) {
        counter1++;
    }
}
System.out.println(counter1);

// Algorithm 2
int counter2 = 0;
for (int i = 1; i < arr.length; i *= 2) {
    for (int j = i; j < arr.length; j++) {
        counter2++;
    }
}
System.out.println(counter2);

// counter1: 2097130 (about n * 2)
// counter2: 19922945 (about n log n)
Simple math. Assume for now that j *= 2 will take essentially 20 steps to reach roughly 1 million.
Algo 1 will run roughly 1 million j-loops, but each of those loops takes at most about 20 j-steps to complete. You're running the inner loop of algo 2 (especially the first time) on the order of a million steps, whereas the other runs <= 20 steps, a million times. However, you need to account for the decay, especially at the start: by the time you've hit i = 2, you're already down to 19 j-steps per i-step; by i = 4, you're down to 18, and so on. This early decay essentially "kills" the momentum of the step count — the last ~500,000 i-steps only increment the counter once each.
Algo 2 runs 1 million j-steps on the first pass alone, and each later pass runs N - i j-steps (with i = 2, then 4, then 8, etc.), so you're still running close to a million steps on most passes.
Let's count the number of passes in each loop for the second algorithm. Take N = arr.length.
First, the outermost loop: i ranges from 1 to N and is multiplied by 2 each time; that makes log(N) iterations.
Then, in the innermost loop, j ranges from i to N and is incremented by 1 each time; that makes (N - i) iterations.
Now take k = log(i). The total number of times counter2 is incremented is the sum of (N - 2^k) for k = 0 to log(N) - 1.
The sum of 2^k for k = 0 to log(N) - 1 is a geometric sum that adds up to 2^log(N) - 1, i.e. N - 1.
Therefore the exact count for the second algorithm is N*log(N) - N + 1 (which is 19,922,945 for N = 2^20), and that is O(N*log(N)).
The first algorithm is different. For a given i, the inner loop doubles j from i up to N, which takes only about log(N/i) + 1 steps. Counting instead, for each doubling depth m, how many values of i satisfy i * 2^m < N gives roughly
N + N/2 + N/4 + ... ≈ 2N
so the first algorithm performs about 2N increments. That is O(N), which matches the counter1 ≈ 2,000,000 you measured.
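The closed form for the second algorithm can be checked against the measured counter directly (a sketch reusing the question's loop; the class name is mine):

```java
// Sketch: recompute counter2 and compare it with N*log2(N) - N + 1.
public class DoublingOuterCheck {
    static long count2(int n) {
        long c = 0;
        for (int i = 1; i < n; i *= 2)   // log2(n) outer iterations
            for (int j = i; j < n; j++)  // n - i inner iterations
                c++;
        return c;
    }

    public static void main(String[] args) {
        int n = 1 << 20;                       // 1048576, as in the question
        long logN = 20;                        // log2(n)
        System.out.println(count2(n));         // 19922945
        System.out.println(logN * n - n + 1);  // 19922945 as well
    }
}
```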
Both versions you posted would have exactly the same complexity only if both loops incremented by 1 — I assume you forgot to swap the inner for loop's update expression along with its begin-clause variable. Big-O notation works like this for that all-incrementing case:
n/2 * n is the time such a loop pair needs
-> 1/2 * n * n = 1/2 * n^2
1/2 is just a constant factor.
The complexity would then be polynomial, n^2.

Calculating Big O complexity of these algorithms?

I am trying to figure out the Big O notation of the following 2 algorithms below but am having trouble.
The first one is:
public static int fragment3(int n) {
    int sum = 0;
    for (int i = 1; i <= n*n; i *= 4)
        for (int j = 0; j < i*i; j++)
            sum++;
    return sum;
} // end fragment 3
The answer should be O(n^4). When I try to work it out myself, this is what I get:
I look at the first for loop and think it runs n^2 log n times. Then for the inner for loop, it runs n times plus the runtime of the outer loop, which is n^3 log n times. I know this is wrong, but I just don't get why.
For the code fragment below, the answer is O(n^9).
public static int fragment6(int n) {
    int sum = 0;
    for (int i = 0; i < n*n*n; i++) {
        if (i % 100 == 0) {
            for (int j = 0; j < i*i; j += 10)
                sum++;
        } // if
        else {
            for (int k = 0; k <= i; k++)
                sum++;
        } // else
    } // outer loop
    return sum;
} // fragment 6
When I attempt it, I get n^3 for the outer for loop. For the if statement I get n; for the second for loop I get n plus the other for loop and if statement, making it n^5. Finally, I get n for the final for loop, and everything adds up to O(n^6).
What am I doing wrong, and what is the correct way to arrive at its O(n^9) complexity?
For the first one.
Let's look at the inner loop.
At the first iteration of the outer loop (i = 1) it runs 1 time. At the second iteration (i = 4) it runs 16 (4 * 4) times. At the third iteration (i = 16) it runs 256 (16 * 16) times. In general, at the (k + 1)-th iteration of the outer loop the inner loop runs 16^k times, since i = 4^k at that iteration. So the total number of inner-loop runs will be
1 + 16 + 16^2 + ... + 16^K
Now, how many terms does that sum have? To determine that we should have a look at the outer loop. In it, i grows as 4^k until it reaches n^2. So the number of outer iterations is K + 1, where K = floor(log_4(n^2)).
This means that the total number of runs of the inner loop is at least
16^K = (4^K)^2 > (n^2 / 4)^2 = n^4 / 16
(by dropping all the terms of the sum but the last one). So the inner loop runs at least n^4 / 16 times, and we are not faster than O(n^4).
On the other hand, the full geometric sum is
1 + 16 + ... + 16^K = (16^(K+1) - 1) / 15 <= C * 16^K <= C * n^4
where C is a constant, so we're not slower than O(n^4) either.
Your approach to computing big-O is flat-out wrong, and you've made computation errors.
In some common cases you can take the worst case number of iterations and multiply them together, but this isn't a sound method and fails for cases like this:
for (i = 1; i < n; i *= 2) {
    for (j = 0; j < i; j++) {
        sum++;
    }
}
Here, the outer loop runs log_2(n) times, and the inner loop worst case is n iterations. So the wrong method that you're using will tell you that the complexity of this code is O(n log n).
The correct method is to count accurately the number of iterations, and approximate at the end only. The number of iterations is actually:
1 + 2 + 4 + 8 + ... + 2^k
where 2^k is the largest power of two less than n. This sum is 2^(k+1) - 1, which is less than 2n. So the accurate complexity is O(n).
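Counting the iterations of that doubling example confirms the bound (a sketch; the class name is mine):

```java
// Sketch: total inner iterations of the doubling outer loop stay below 2n.
public class GeometricSumLoop {
    static long count(int n) {
        long c = 0;
        for (int i = 1; i < n; i *= 2)   // log2(n) outer iterations
            for (int j = 0; j < i; j++)  // i inner iterations
                c++;
        return c;                        // 1 + 2 + 4 + ... + 2^k
    }

    public static void main(String[] args) {
        System.out.println(count(1000));     // 1023, below 2 * 1000
        System.out.println(count(1 << 16));  // 65535, below 2^17
    }
}
```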
Applying this idea to your first example:
for (int i = 1; i <= n*n; i *= 4)
for (int j = 0; j < i*i; j++)
sum++
i takes the values 4^0, 4^1, 4^2, ..., 4^k where 4^k is the largest power of 4 less than or equal to n^2.
The inner loop executes i^2 times for a given value of i.
So overall, the inner sum++ is executed this many times:
(4^0)^2 + (4^1)^2 + ... + (4^k)^2
= 4^0 + 4^2 + ... + 4^(2k)
= 16^0 + 16^1 + ... + 16^k
= (16^k - 1) / 15
Now by definition of k we have n^2/4 < 4^k <= n^2. So n^4/16 < 4^2k <= n^4, and since 16^k = 4^2k, we get that the total number of times the inner loop is executed is O(16^k) = O(n^4).
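Running fragment3 for a few powers of two shows the count settling at a constant multiple of n^4 (a small check; the ratio printout is mine):

```java
// Sketch: fragment3's returned sum divided by n^4 stays near 16/15 ~ 1.0667.
public class Fragment3Check {
    static long fragment3(int n) {
        long sum = 0;
        for (long i = 1; i <= (long) n * n; i *= 4)
            for (long j = 0; j < i * i; j++)
                sum++;
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[] {8, 16, 32}) {
            long s = fragment3(n);
            // ratio ~ 1.0667 for each n, i.e. Theta(n^4)
            System.out.println(n + ": " + s + "  ratio = " + s / Math.pow(n, 4));
        }
    }
}
```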
The second example can be solved using a similar approach.
First case:
The last run of the inner loop, with i ≈ n^2, takes about n^4 steps. The outer loop grows i exponentially up to n^2, so the sum of all inner-loop runs except the last is less than the last run itself; the earlier runs contribute only a constant factor. Total: O(n^4).
Second case:
The i % 100 == 0 test doesn't really matter in O thinking; it only divides one branch's count by a constant.
The else part doesn't matter either; its cost is much less than the main part's.
The outer loop runs i from 0 to n^3 => n^3.
The inner loop runs j up to i^2, i.e. up to n^6 => n^6.
Outer loop times inner loop => n^9.

Big O of a for loop with a false condition

I just need some clarification or help on this Big O problem. I don't know if I'm explaining this correctly, but I noticed that the for loop has a false condition, so that means it won't loop at all. And my professor said it's possible to still determine the run time of the loops. So what I'm thinking is this:
1 + (n - 1 - n) * (n) = 1 + 1 * n = 1 + n = O(n)
Explanation: 1 is for the operation outside of the loop. (n - 1 - n) is the iteration of the outer loop and n is the iteration of the inner loop.
Note: I'm still learning Big O, so please correct me if any of my logic is wrong.
int total = 0;
for (int i = n; i < n - 1; i++) {
    for (int j = 0; j < n; j++) {
        total = total + 1;
    }
}
There shouldn't be any negative numbers in Big O analysis; a negative running time makes no sense. Also, (n - 1 - n) is not an iteration count at all: since i starts at n, the condition i < n - 1 is false immediately, so your outer loop doesn't even go into one iteration. Thus, the time complexity of whatever statements are inside the loop doesn't matter.
To conclude, the running time is 1 + 1 = O(1).
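A direct run shows the loop body never executes (a sketch; names are mine):

```java
// Sketch: the outer condition i < n - 1 is false when i starts at n,
// so total stays 0 no matter how large n is.
public class FalseCondition {
    static int run(int n) {
        int total = 0;
        for (int i = n; i < n - 1; i++) {
            for (int j = 0; j < n; j++) {
                total = total + 1;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(run(10));       // 0
        System.out.println(run(1000000));  // 0 -- the work is constant
    }
}
```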
Big O notation describes the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4n^2 - 2n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows at the order of n^2" and write: T(n) = O(n^2)
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
In your case, though, the loop condition i < n - 1 is false from the start, so f(n) is a constant and |f(x)| <= C|g(x)| holds with g(x) = 1: the running time is O(1).
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
int total = 0;
for (int i = n; i < n - 1; i++) {   // condition is false on the very first check
    for (int j = 0; j < n; j++) {   // would run n times, if ever reached
        total = total + 1;          // 1 unit of work
    }
}
Big O notation describes behavior when the value is very big: if the outer loop ran n times and the inner loop n times, then for n = 100 the total would be n^2 = 10,000 runs. As written, though, i starts at n, the condition i < n - 1 is false immediately, and neither loop runs at all.
