Could somebody please help me find the big-O of this code? I've already tried to calculate it but I just don't understand how.
int n = /* user input */;
for (int i = 0; i < n; i++) {
    for (int j = 1; j < n; j = j * 2) {
        System.out.println(i * j);
    }
}
You should ask yourself how many iterations are in the outer loop and how many iterations are in the inner loop. Then you multiply the two.
The outer loop is simple - i grows from 0 to n-1 in increments of 1, so the total number of iterations is n, which is O(n).
In the inner loop j grows from 1 to n-1, but j is multiplied by 2 in each iteration. If n=2^k, for some integer k, there would be k iterations, and log n = k. Therefore there are O(log n) iterations in the inner loop.
Multiplying the two, you get O(n) * O(log n) = O(n log n).
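A quick empirical check can back this up. This is a sketch of mine (the class and method names are illustrative): it counts how many times the inner statement would execute and compares that with n * log2(n).

```java
// Counts the inner-loop executions of the original snippet for a given n.
class NLogNCount {
    static long iterations(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < n; j = j * 2) {
                count++; // stands in for System.out.println(i * j)
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // For n = 2^k the inner loop runs exactly k times per outer iteration.
        System.out.println(iterations(16)); // 16 outer * 4 inner = 64
        System.out.println(iterations(8));  // 8 outer * 3 inner = 24
    }
}
```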
O(n log n). Why?
for (int i = 0; i < n; i++) {          // n iterations
    for (int j = 1; j < n; j = j * 2) { // log n iterations (j doubles each time)
        System.out.println(i * j);
    }
}
Using Sigma notation: the total cost is the sum over i = 0..n-1 of the inner loop's cost, and the inner loop performs about log2(n) iterations, so the total is n * log2(n) = O(n log n).
I am writing a Java method that finds the stability indexes for an array. My algorithm works fine, but I am unsure of its runtime complexity. I believe it is O(n) since the first loop is O(n) and the two inner loops are O(2n), but again I'm not sure.
int[] arr = {0, -3, 5, -4, -2, 3, 1, 0};
for (int num = 0; num < arr.length; num++) {
    int sumLeft = 0;
    int sumRight = 0;
    for (int i = 0; i < num; i++) {
        sumLeft = sumLeft + arr[i];
    }
    for (int i = num + 1; i < arr.length; i++) {
        sumRight = sumRight + arr[i];
    }
    if (sumLeft == sumRight) {
        System.out.println(num);
    }
}
Output:
0
3
7
We can do two things:
We can give you an answer, and that answer is O(N^2).
We can explain how to work it out for yourself.
The way to work it out is to count the operations.
When I say "count", I don't mean that literally. What I actually mean is that you need to work out an algebraic formula for the number of times some indicative operation is performed.
So in your example, I would identify these two statements as the most important ones:
sumLeft= sumLeft + arr[i];
sumRight= sumRight + arr[i];
(Why did I pick those statements? Intuition / experience! The pedantic way to do this is to count all operations. But with experience, you can pick the important ones ... and the rest don't matter.)
So now for the formulae:
In one iteration of the outer loop, the first statement is executed with i running from 0 to num - 1; i.e. num times.
In one iteration of the outer loop, the second statement is executed with i running from num + 1 to array.length - 1; i.e. array.length - num - 1 times.
So, in one iteration of the outer loop, the two statements are executed num + array.length - num - 1 times which reduces to array.length - 1 times.
But the outer loop runs array.length times. So the two statements are executed array.length x (array.length - 1) times.
Finally, by the definition of Big Oh, array.length x (array.length - 1) is in the complexity class O(N^2) where N is the array size.
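To make the counting argument concrete, here is a small sketch (the class name is mine): it instruments the two addition statements with a counter and checks that the total matches array.length * (array.length - 1).

```java
// Counts how many times the two addition statements execute for an array of length n.
class OpCounter {
    static long countAdds(int[] arr) {
        long ops = 0;
        for (int num = 0; num < arr.length; num++) {
            for (int i = 0; i < num; i++) ops++;              // sumLeft additions: num of them
            for (int i = num + 1; i < arr.length; i++) ops++; // sumRight additions: n - num - 1 of them
        }
        return ops;
    }

    public static void main(String[] args) {
        // Each outer iteration contributes num + (n - num - 1) = n - 1 additions.
        System.out.println(countAdds(new int[8])); // 8 * 7 = 56
        System.out.println(countAdds(new int[5])); // 5 * 4 = 20
    }
}
```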
It is O(n^2):
for (int num = 0; num < arr.length; num++) {
    int sumLeft = 0;
    int sumRight = 0;
    for (int i = 0; i < num; i++) { // runs 0, 1, 2, ..., n-1 times as num grows
        // https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF
        sumLeft = sumLeft + arr[i];
    }
    for (int i = num + 1; i < arr.length; i++) {
        sumRight = sumRight + arr[i];
    }
    if (sumLeft == sumRight) {
        System.out.println(num);
    }
}
Hi, I was solving this problem, but I'm stuck choosing which answer is right, so I'm hoping to get some help from you guys.
Here is the code:
for (int i = 0; i < n; i++) {        // loop1: from 0 to n-1
    int total = 0;
    for (int j = 0; j < n; j++)      // loop2: from 0 to n-1
        for (int k = 0; k <= j; k++) // loop3: from 0 to j (inclusive)
            total += first[k];
    if (second[i] == total) count++;
}
return count;
From the above code, loop1 and loop2 each run n times, which gives n^2 processing time.
But the problem is loop3: I really don't want to say it also runs n times, since the limit of the loop is j, not n.
In the worst case, how should I work out the complexity? Will it be ((n+1)(n+2))/2 for loop3 only, or ((n+1)(n+2))/2 for loop2 and loop3 together?
My final answers are like this
first case: n * n * (((n+1)(n+2))/2) = O(n^4)
second case: n * (((n+1)(n+2))/2) = O(n^3)
which one will be the correct one? Or did I get both wrong?
p.s. please mind loop3 is 0 to j, not 0 to j-1
It is not clear from the question what different cases you are concerned about. When discussing Big O, we only care about the most significant part of the complexity. Here j in the third loop can go up to n, as per the second loop, so there is no problem substituting n for it when calculating the total complexity. Since all three loops depend, directly or indirectly, on n, the complexity is O(n^3). Also, you can ignore constants, so n - 1 can simply be treated as n.
See
https://stackoverflow.com/a/487278/945214
Big-O for Eight Year Olds?
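If it helps, the exact count can be checked by hand and by machine. Per outer iteration of i, loop2 and loop3 together execute the innermost statement 1 + 2 + ... + n = n(n+1)/2 times, and loop1 repeats that n times. A small sketch of mine (names illustrative):

```java
// Counts executions of the innermost statement (total += first[k]) for a given n.
class TripleLoopCount {
    static long countInnermost(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k <= j; k++)
                    count++; // stands in for total += first[k]
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(countInnermost(n));          // 550
        System.out.println((long) n * n * (n + 1) / 2); // n * n(n+1)/2 = 550
    }
}
```

So the exact count is n^2(n+1)/2, which is in O(n^3).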
I have just begun learning about Big O Notation and honestly I don't think I have the hang of it, and I am not quite sure how to determine the O() performance by just looking at for loops. I have listed a few examples and then some of the answers that I think are correct! Please let me know if they are wrong and any explanations would be greatly appreciated!
for (int i = 0; i < 1000; i++) {
    count++;
}
I believe this would be O(n), because nothing else is going on in the for loop except constant time printing. We iterate 'n' times, or in this case 1000?
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++)
        count++;
}
Would this one have an O(n^2) because the loops are nested and it iterates n twice, n*n?
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++)
        count++;
}
Is this one another O(n^2) but in the worst case? Or is this O(n log n)?
Big-O notation is supposed to be a guide to help the user understand how the runtime increases with input.
For the first example, the loop runs exactly 1000 times no matter what n is, thus it is O(1000) = O(1) time.
For the second example, the nested loop runs n times for every time the outer loop runs, which runs n times. This is a total of n*n = n^2 operations. Thus the number of operations increases proportional to the square of n, thus we say it is O(n^2).
For the third example, it is still O(n^2). This is because Big-O notation ignores constants in the exact formula of the time complexity. If you calculate the number of times j runs as i increases, you get this pattern.
i: 0 1 2 3 ...
j: n n-1 n-2 n-3 ...
In total, the number of operations is around (1/2)n^2. Since Big-O notation ignores constant factors, this is still O(n^2).
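That count can be verified directly. This is a sketch of mine (names illustrative) that runs the third example's loops with a counter and compares against n + (n-1) + ... + 1 = n(n+1)/2:

```java
// Counts executions of count++ in the loop "for (j = i; j < n; j++)".
class TriangleCount {
    static long count(int n) {
        long c = 0;
        for (int i = 0; i < n; i++)
            for (int j = i; j < n; j++)
                c++;
        return c;
    }

    public static void main(String[] args) {
        int n = 100;
        System.out.println(count(n));               // 5050
        System.out.println((long) n * (n + 1) / 2); // 100 * 101 / 2 = 5050
    }
}
```

n(n+1)/2 is about n^2/2, and dropping the constant factor 1/2 leaves O(n^2).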
I am trying to print all triplets in array, unlike 3SUM or anything similiar, they don't satisfy any condition. I just want to print them all.
My simple solution
for (int i = 0; i < arr.length - 2; i++) {
    for (int j = i + 1; j < arr.length - 1; j++) {
        for (int k = j + 1; k < arr.length; k++) {
            System.out.println(arr[i] + " " + arr[j] + " " + arr[k]);
        }
    }
}
runs in O(n^3), with the number of triplets being (n * (n-1) * (n-2)) / 3!.
So my question is, can this be done any faster than O(n^3)?
n choose 3, which is the number of combinations of triplets, is O(n^3).
So no, theoretically it is impossible to do better than that since the mere operation of printing them will take O(n^3) operations.
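To see that the loops produce exactly n choose 3 lines of output, here is a sketch of mine (names illustrative) that counts the print statements and compares against n(n-1)(n-2)/6:

```java
// Counts how many triplets the original three nested loops would print.
class TripletCount {
    static long countTriplets(int n) {
        long count = 0;
        for (int i = 0; i < n - 2; i++)
            for (int j = i + 1; j < n - 1; j++)
                for (int k = j + 1; k < n; k++)
                    count++; // each iteration prints exactly one triplet
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(countTriplets(n));                 // 120
        System.out.println((long) n * (n - 1) * (n - 2) / 6); // C(10, 3) = 120
    }
}
```

Since every one of the C(n, 3) = Θ(n^3) triplets must be printed, no algorithm can do the job in fewer operations.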
You can't print n^3 triplets in fewer than n^3 operations.
If computing time is a problem, you can divide the print work across n threads, which in theory reduces the wall-clock time by a factor of n.
1.
for (i = 0; i < 3; i++) {
    for (j = 0; j < 10; j++) {
        print i+j;
    }
}
I would assume Big O would be 30 since the most amount of times would be 3*10.
2.
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        print i+j;
    }
}
Would be O be n*m?
3.
for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        for (int k = 1; k < 1000; k *= 2) {
            print i+j+k;
        }
    }
}
I think this is n * m * log base 2 (1000), so the Big O would be in n log(n) time?
4.
for (i = 0; i < n - 10; i++) {
    for (j = 0; j < m/2; j++) {
        print i+j;
    }
}
5.
for (i = 0; i < n; i++) {
    print i;
}
// n and m are some integers
for (j = 1; j < m; j *= 2) {
    print j;
}
Can someone give me a hand with this if you know Big O. I am looking at these and at a loss. I hope I am posting this in the right location, I find these problems difficult. I appreciate any help.
I think it's important just to point out that Big O notation is about functions that, up to an arbitrary constant factor, serve as upper bounds from some point onward.
O(1)
This is because each loop runs a constant number of times. We refer to this as O(1) instead of O(30) because the upper-bound function is the constant function 1, with an arbitrary constant factor >= 30.
O(n*m)
Simply because we have to loop through m iterations n times.
O(n*m)
This is the same as the previous one, only we're throwing in another loop in the middle. Notice that this new loop, like the first problem, runs in constant time: k doubles from 1 up to 1000, so it always performs the same number of iterations regardless of n and m. It is O(1), so the whole thing is O(n*m*1), which we can simply call O(n*m).
O(n*m)
For the outer loop, don't get caught up on the "- 10": we can just say that loop runs in O(n). We can ignore the "- 10" for the same reason we ignored the exact values in the first problem; constants don't really matter. The same principle applies to the m/2, because you can think of m as just being scaled by a constant factor of 1/2. So we can call this O(n*m).
T(n) = O(n) + O(lg m) => O(n + lg m)
So there are two components to look at here: the first loop and the second loop. The first loop is clearly O(n), so that's no problem. The second loop is a little trickier. Notice that the iterator j grows exponentially (powers of 2), so the number of iterations grows only logarithmically in m. So this function runs in O(n + lg m).
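The logarithmic claim for the doubling loop is easy to verify. A sketch of mine (names illustrative): count the iterations of "for (j = 1; j < m; j *= 2)" and compare with ceil(log2 m).

```java
// Counts iterations of a loop whose counter doubles each time.
class LogLoop {
    static int doublingIterations(int m) {
        int count = 0;
        for (int j = 1; j < m; j *= 2)
            count++;
        return count;
    }

    public static void main(String[] args) {
        // j takes the values 1, 2, 4, ..., so the count is ceil(log2 m).
        System.out.println(doublingIterations(1024)); // 10
        System.out.println(doublingIterations(1000)); // 10
    }
}
```

Doubling m adds only one more iteration, which is exactly the O(lg m) behavior described above.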
Any constant factor can be ignored. O(30) is equal to O(1), which is what one would typically say for 1).
2) Just so.
3) in O(n*m*log_2(1000)), log_2(1000) is constant, so it's O(n*m).
4) O(n-10) is same as O(n). O(m/2) is same as O(m). Thus, O(n*m) again.
5) Trivially O(n).
6) O(log_2(m)).