For example, if I have
for (int i = 1; i < 500; i++) {
    for (int j = 0; j < N; j++) {
        if (array[j] == someNumber && i == someNumber)
            counter++;
    }
}
If all the numbers in the array were the same, so that counter increased for each of them, would this make the complexity O(2N), O(N squared), or still just O(N)? I hope the question makes sense; I find it hard to understand how the statements inside a loop might affect the complexity.
Since the statements inside your loop take constant time, they don't have any impact on the overall complexity of your algorithm. The complexity of the code you posted is just O(N): your outer loop executes a constant number of times (500), and your inner loop executes N times.
An example of when it would matter would be if you did something inside a loop like call a function that searches for a value in an array, or sorts a list. Since those operations depend on the size of the array (or list), you'd have to take them into account when calculating the complexity of your algorithm.
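For example (a hypothetical sketch; linearSearch and countHits are made-up names standing in for any helper that does non-constant work):

public class NonConstantBody {
    // Hypothetical helper: scans the whole array, so each call is O(M).
    static boolean linearSearch(int[] array, int target) {
        for (int value : array) {
            if (value == target) return true;
        }
        return false;
    }

    // N iterations, each doing O(M) work inside the body,
    // so the total is O(N * M) rather than O(N).
    static int countHits(int[] queries, int[] array) {
        int hits = 0;
        for (int q : queries) {           // runs N times
            if (linearSearch(array, q)) { // each call costs O(M)
                hits++;
            }
        }
        return hits;
    }
}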
Related
I have 2 functions and I need to find the execution time for both functions in Big O; however, I am confused about fnB.
int fnA(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        for (int j = n; i < j; j = j - 2) {
            sum += i * j;
        }
    }
    return sum;
}
I got O(n^2) for fnA
int fnB(int n) {
    int sum = 0;
    for (int size = 1; size < n; size = 2 * size) {
        sum += fnA(size);
    }
    return sum;
}
Since, within the for loop in fnB, size increases exponentially, I am leaning toward fnB being O(n^3). Am I correct? If not, please correct me, thank you.
fnA has a running time of O(n^2).
However, fnB has a running time of O(n^2 log n), since it has log_2(n) iterations, and each iteration takes O(n^2) time (it actually takes O(size^2), but since size < n, we can bound it with O(n^2)).
A more detailed explanation:
fnA(n) has n iterations in the outer loop and at most n/2 iterations in the inner loop, which gives an O(n^2) upper bound. Since each iteration of fnB(n) calls fnA(size), it takes O(size^2) == O(n^2) (since size < n).
Now, the loop of fnB(n) assigns the following values to size: 2^0, 2^1, 2^2, ..., 2^k, where 2^k <= n. Therefore the number of iterations is k <= log_2(n), and the upper bound of fnB is O(n^2 log_2(n)). Note that this bound is correct but not tight: because the values of size form a geometric series, the sum 1^2 + 2^2 + ... + (2^k)^2 is dominated by its last term, so the total work is in fact Θ(n^2).
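If you want to sanity-check this empirically, here is a rough sketch (the ops counter and the ComplexityCheck wrapper are added purely for instrumentation; they are not part of the original code):

public class ComplexityCheck {
    static long ops = 0; // counts executions of the innermost statement

    static int fnA(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            for (int j = n; i < j; j = j - 2) {
                ops++; // one unit of work
                sum += i * j;
            }
        }
        return sum;
    }

    static int fnB(int n) {
        int sum = 0;
        for (int size = 1; size < n; size = 2 * size) {
            sum += fnA(size); // each call costs O(size^2)
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{256, 512, 1024, 2048}) {
            ops = 0;
            fnB(n);
            // ops roughly quadruples each time n doubles,
            // consistent with Theta(n^2) total work.
            System.out.println(n + " -> " + ops);
        }
    }
}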
Big-O notation can be used to represent time-complexity or space-complexity of an algorithm.
In your program, the function fnA has two nested for loops. Note that the inner loop's condition i < j does not always fail: j starts at n and decreases by 2, so for each i the inner loop runs roughly (n - i) / 2 times. Summing over i, fnA has a time complexity of O(n^2).
Your fnB function calls fnA(size) in a single for loop in which size doubles each iteration, so the loop body runs O(log n) times. Bounding each call by O(n^2) gives O(n^2 log n), and the geometric-series argument above tightens this to Θ(n^2).
I have the code below and am trying to figure out the big O worst case running time for it. I think that the first loop is O(log N), but I am not sure what the second loop is. I thought maybe it was O(N), but that didn't seem right. Any insights would be very helpful.
for (int jump = inList.size(); jump > 0; jump /= 2) {
    for (int i = 0; i < inList.size(); i = ++i * jump) {
        // ...
    }
}
The outer loop is clearly O(log(n)), since jump is halved on every iteration. For jump >= 2, the inner loop finishes in very few iterations, because i = ++i * jump makes i grow at least geometrically: with jump = n/2, for example, i goes from 0 to n/2 to (n/2 + 1) * (n/2) > n, so that pass takes just two iterations. The catch is the final pass, when jump == 1: there i = ++i * 1 simply increments i by one, so the inner loop runs n times on that pass alone. That last pass dominates all the others, and the worst-case running time is O(n), not O(log n).
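To see this concretely, here is a quick counting sketch (innerSteps is just an instrumentation counter, and inList is assumed to be a List<Integer> with n elements):

import java.util.ArrayList;
import java.util.List;

public class JumpLoopCount {
    public static void main(String[] args) {
        for (int n : new int[]{1_000, 10_000, 100_000}) {
            List<Integer> inList = new ArrayList<>();
            for (int k = 0; k < n; k++) inList.add(k);

            long innerSteps = 0;
            for (int jump = inList.size(); jump > 0; jump /= 2) {
                for (int i = 0; i < inList.size(); i = ++i * jump) {
                    innerSteps++; // count one inner-loop iteration
                }
            }
            // innerSteps grows linearly with n: the jump == 1 pass
            // alone contributes n iterations.
            System.out.println(n + " -> " + innerSteps);
        }
    }
}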
I know this is easy but my textbook doesn't talk about Big-Oh order with do-while loops, and neither do any of my other algorithm sources.
This problem states that the following code fragment is parameterized on the variable "n", and that a tight upper bound is also required.
int i = 0, j = 0;
do {
    do {
        System.out.println("...looping..."); // growth should be measured in calls to println
        j = j + 5;
    } while (j < n);
    i++;
    j = 0;
} while (i < n);
Can anyone help me with this and explain Big-Oh order in terms of do-while loops? Are they just the same as for loops?
A good maxim for working with nested loops and big-O is
"When in doubt, work from the inside out!"
Here's the code you have posted:
int i = 0, j = 0;
do {
    do {
        // do something
        j = j + 5;
    } while (j < n);
    i++;
    j = 0;
} while (i < n);
Let's look at that inner loop. It runs roughly n / 5 times, since j starts at 0 and grows by five at each step. (We also see that j is always reset to 0 before the inner loop begins, either in the initialization before the outer loop or at the end of each outer iteration.) We can therefore replace that inner loop with something that basically says "do Θ(n) operations that we care about," like this:
int i = 0;
do {
    // do Θ(n) operations that we care about
    i++;
} while (i < n);
Now we just need to see how much work this does. Notice that the loop runs Θ(n) times, since i counts 0, 1, 2, ..., up to n. Since each of those Θ(n) iterations does Θ(n) operations that we care about, the total is Θ(n) · Θ(n) = Θ(n^2) of the printouts that you're trying to count.
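If you want to check the Θ(n^2) count empirically, here is a sketch that counts the calls instead of printing (the calls counter stands in for calls to println):

public class DoWhileCount {
    public static void main(String[] args) {
        int n = 1000;
        long calls = 0; // one increment per would-be println
        int i = 0, j = 0;
        do {
            do {
                calls++;
                j = j + 5;
            } while (j < n);
            i++;
            j = 0;
        } while (i < n);
        // Expect roughly n * (n / 5) = n^2 / 5 calls: 200,000 here.
        System.out.println(calls);
    }
}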
I have seen in a lot of places that the complexity of bubble sort is O(n^2).
But how can that be so, given that the inner loop always runs n-i times?
for (int i = 0; i < toSort.length - 1; i++) {
    for (int j = 0; j < toSort.length - 1 - i; j++) {
        if (toSort[j] > toSort[j + 1]) {
            int swap = toSort[j + 1];
            toSort[j + 1] = toSort[j];
            toSort[j] = swap;
        }
    }
}
And what is the "average" value of n-i? n/2.
So it runs in O(n * n/2), which is considered O(n^2).
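You can also get the same answer from the exact sum of the inner-loop iterations:

(n-1) + (n-2) + ... + 1 = n(n-1)/2 ≈ n^2/2 = O(n^2)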
There are different kinds of complexity bounds - you are using big O notation, which means the function will take at most this time, asymptotically, in every case.
As n approaches infinity this is basically n^2 time complexity in the worst case. Time complexity is not an exact count but a ballpark for how the running time scales for this class of algorithm, so you are trying to be too exact.
For example, the exact number of comparisons here is n(n-1)/2, but big O discards constant factors and lower-order terms, so it is still written as O(n^2).
Since the outer loop runs n times and for iteration i the inner loop runs (n-i) times, the total number of operations can be calculated as
(n-1) + (n-2) + ... + 1 = n(n-1)/2 = O(n^2).
It's O(n^2), because it does roughly length * length operations.
I have this Java code for bubble sort:
public void sort() {
    for (int i = 1; i < getElementCount(); ++i) {
        for (int j = getElementCount() - 1; j >= i; j--) {
            if (cmp(j, j - 1) < 0) swap(j, j - 1);
        }
    }
}
The methods "cmp" and "swap" are as follows:
public int cmp(int i, int j) {
    return get(i).intValue() - get(j).intValue();
}

public void swap(int i, int j) {
    Integer tmp = get(i);
    set(i, get(j));
    set(j, tmp);
}
I have now written an improved version of the Bubblesort where the sorting method "sort()" looks like this:
public void sort() {
    boolean done = false;
    for (int i = 1; i < getElementCount() && !done; ++i) {
        done = true;
        for (int j = getElementCount() - 1; j >= i; j--) {
            if (cmp(j, j - 1) < 0) {
                swap(j, j - 1);
                done = false;
            }
        }
    }
}
Can anyone explain how to compute the time complexity of the latter algorithm? I'm thinking it's comparing n elements one time, and therefore it has complexity O(1) in its best case and O(n^2) in its worst case, but I don't know if I'm right and would like to know how to think about this issue.
The complexity tells the programmer how long it takes to process the data.
O(1) complexity says that, no matter how many elements there are, the operation takes a constant amount of time. Inserting a value at a known index of an array is O(1). For example:
array[100] = value;
In your best case you will have to loop through the entire array once and compare each element.
The complexity is then O(n), where n is the number of elements in the array.
In the worst case you will have to run through the array once for each element, which gives a complexity of O(n*n) = O(n^2).
Look closely at what done actually does: it is set to true at the start of every pass, reset to false only when a swap occurs, and tested in the outer loop's condition (!done). So the improved version is not the same as the original: if a full pass makes no swaps, the outer loop terminates early. That gives a best case of O(n) (a single pass over an already sorted list) and a worst case of O(n^2). There is no way any sorting algorithm is O(1), as at the very least you have to move through the list once, which gives O(n).
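To make the early exit visible, here is a self-contained sketch on an int[] (the passes counter is added just for illustration, and cmp and swap are inlined):

public class ImprovedBubbleSort {
    static int sortAndCountPasses(int[] a) {
        int passes = 0;
        boolean done = false;
        for (int i = 1; i < a.length && !done; ++i) {
            done = true;
            passes++;
            for (int j = a.length - 1; j >= i; j--) {
                if (a[j] < a[j - 1]) {  // same test as cmp(j, j-1) < 0
                    int tmp = a[j];     // same effect as swap(j, j-1)
                    a[j] = a[j - 1];
                    a[j - 1] = tmp;
                    done = false;       // a swap happened: maybe not sorted yet
                }
            }
        }
        return passes;
    }

    public static void main(String[] args) {
        System.out.println(sortAndCountPasses(new int[]{1, 2, 3, 4, 5})); // 1 pass: best case O(n)
        System.out.println(sortAndCountPasses(new int[]{5, 4, 3, 2, 1})); // n-1 passes: worst case O(n^2)
    }
}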
Worst case: if the array is sorted in reverse order (descending), the algorithm shows its worst time complexity, O(N^2).
Best case: if the array is already in sorted order, the inner loop still compares each adjacent pair once, but no swap occurs, so done stays true and the !done check stops the outer loop after the first pass, giving O(N).
At no point can it be O(1). In fact, it is mathematically impossible to do better than Ω(N) here.
Ω(N) is the lowest possible bound, since any sorting algorithm has to examine every element at least once.
The best way is to represent your loops using Sigma notation like the following (General Case):
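For the sort() posted above, with n = getElementCount(), that gives:

T(n) = Σ_{i=1}^{n-1} Σ_{j=i}^{n-1} c = c · Σ_{i=1}^{n-1} (n - i) = c · n(n-1)/2 = O(n^2)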
'c' here refers to the constant time of if, cmp, and swap that execute inside the innermost loop.
For the best case (modified bubble sort), the running time should look like this:
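T(n) = Σ_{j=1}^{n-1} c = c · (n - 1) = O(n)

since the modified version performs one full pass of n-1 comparisons, finds done still true, and exits the outer loop.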