Complexity of Bubble Sort - java

I have seen in a lot of places that the complexity of bubble sort is O(n^2).
But how can that be so, given that the inner loop always runs n-i times?
for (int i = 0; i < toSort.length - 1; i++) {
    for (int j = 0; j < toSort.length - 1 - i; j++) {
        if (toSort[j] > toSort[j + 1]) {
            int swap = toSort[j + 1];
            toSort[j + 1] = toSort[j];
            toSort[j] = swap;
        }
    }
}

And what is the "average" value of n-i? About n/2.
So it runs in O(n * n/2), which is treated as O(n^2).

There are different notions of time complexity - you are using big O notation, which gives an upper bound: it says the running time grows no faster than this as n approaches infinity.
So as n approaches infinity this is an O(n^2) algorithm in the worst case. Time complexity is not an exact count but a ballpark for what sort of speed you can expect from this class of algorithm, and hence you are trying to be too exact.
For example, the quoted complexity is n^2 even though the exact count is closer to n*(n-1)/2, because constant factors and lower-order terms are dropped.

Since the outer loop runs n-1 times and for iteration i the inner loop runs (n-1-i) times, the total number of comparisons is
(n-1) + (n-2) + ... + 1 = n*(n-1)/2 = O(n^2).
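To make that sum concrete, here is a rough, hypothetical sketch (the countComparisons helper and the counter are mine, not part of the question) that counts how many times the comparison in the inner loop runs and checks it against n*(n-1)/2:

public class BubbleSortCount {
    // Same bubble sort as in the question, but counting comparisons.
    static long countComparisons(int[] toSort) {
        long comparisons = 0;
        for (int i = 0; i < toSort.length - 1; i++) {
            for (int j = 0; j < toSort.length - 1 - i; j++) {
                comparisons++;                        // one comparison per inner iteration
                if (toSort[j] > toSort[j + 1]) {
                    int swap = toSort[j + 1];
                    toSort[j + 1] = toSort[j];
                    toSort[j] = swap;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            int[] a = new java.util.Random(42).ints(n).toArray();
            long expected = (long) n * (n - 1) / 2;   // the closed-form sum
            System.out.println(n + ": " + countComparisons(a) + " comparisons, n*(n-1)/2 = " + expected);
        }
    }
}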

It's O(n^2), because the work is roughly length * length.

Related

What will be the execution time in Big O notation in fnB?

I got 2 functions and I need to find the execution time for both functions in Big O; however, I am confused about fnB.
int fnA(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        for (int j = n; i < j; j = j - 2) {
            sum += i * j;
        }
    }
    return sum;
}
I got O(n^2) for fnA
int fnB(int n) {
    int sum = 0;
    for (int size = 1; size < n; size = 2 * size) {
        sum += fnA(size);
    }
    return sum;
}
Since within the for loop in fnB, size increases exponentially, I am leaning towards fnB having O(n^3). Am I correct? If not, please correct me. Thank you.
fnA has a running time of O(n^2).
However, fnB has a running time of O(n^2 * log n), since it has log2(n) iterations, and each iteration takes O(n^2) time (it actually takes O(size^2), but since size < n, we can bound it with O(n^2)).
A more detailed explanation:
fnA(n) has n iterations in the outer loop and at most n/2 iterations in the inner loop, which gives an O(n^2) upper bound. Since each iteration of fnB(n) calls fnA(size), it takes O(size^2) == O(n^2) (since size < n).
Now, the loop of fnB(n) assigns the following values to size: 2^0, 2^1, 2^2, ..., 2^k where 2^k <= n. Therefore the number of iterations is k <= log2(n), and the upper bound of fnB is O(n^2 * log2(n)).
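If you want to see this on concrete numbers, here is a rough, hypothetical sketch (the steps counter and class name are mine) that counts the inner-loop iterations performed by fnA across all the calls made by fnB, so you can compare the measured growth against the O(n^2 * log n) upper bound derived above:

public class FnCount {
    static long steps;   // counts inner-loop iterations of fnA

    static int fnA(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            for (int j = n; i < j; j = j - 2) {
                steps++;         // one unit of work per inner iteration
                sum += i * j;
            }
        }
        return sum;
    }

    static int fnB(int n) {
        int sum = 0;
        for (int size = 1; size < n; size = 2 * size) {
            sum += fnA(size);
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1 << 8, 1 << 10, 1 << 12}) {
            steps = 0;
            fnB(n);
            long n2 = (long) n * n;
            long n2logn = (long) (n2 * (Math.log(n) / Math.log(2)));
            System.out.println("n=" + n + "  steps=" + steps + "  n^2=" + n2 + "  n^2*log2(n)=" + n2logn);
        }
    }
}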
Big-O notation can be used to represent time-complexity or space-complexity of an algorithm.
In your program, the function fnA has two nested for loops. Note that the inner condition i < j is true to begin with (j starts at n and i < n), so the inner loop does execute: it decreases j by 2 until it reaches i, i.e. roughly (n - i)/2 times. Summed over all i, that gives a time complexity of O(n^2) for fnA.
Your fnB function calls fnA in a single for loop in which size doubles every iteration. Summing O(size^2) over size = 1, 2, 4, ..., up to n is a geometric series dominated by its largest term, so its time complexity is O(n^2).

Big O notation with nested for loop

I have the code below and am trying to figure out the big O worst-case running time for it. I think that the first loop is O(log N), but I am not sure what the second loop is. I thought maybe it was O(N), but that didn't seem right. Any insights would be very helpful.
for (int jump = inList.size(); jump > 0; jump /= 2) {
    for (int i = 0; i < inList.size(); i = ++i * jump) {
        // loop body
    }
}
The outer loop is clearly O(log(n)), and while jump is large the inner loop finishes in a couple of iterations, because i jumps to (i + 1) * jump and quickly exceeds n (for jump near n/2, for example, the second update already pushes i past n, so that pass iterates only twice). However, as jump shrinks the inner loop runs longer, and in the final pass, when jump == 1, the update degenerates to i = i + 1 and the inner loop makes a full O(n) sweep. That last pass dominates everything else, so the overall worst-case running time is O(n), not O(log n).
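If you want to check that empirically, here is a small, hypothetical sketch (the count variable is mine) that tallies how many times the inner-loop body runs for growing list sizes; the total grows roughly in proportion to n rather than log n:

import java.util.ArrayList;
import java.util.List;

public class JumpLoopCount {
    public static void main(String[] args) {
        for (int n : new int[]{1_000, 10_000, 100_000}) {
            List<Integer> inList = new ArrayList<>();
            for (int x = 0; x < n; x++) inList.add(x);

            long count = 0;
            for (int jump = inList.size(); jump > 0; jump /= 2) {
                for (int i = 0; i < inList.size(); i = ++i * jump) {
                    count++;    // one inner-loop iteration
                }
            }
            // The jump == 1 pass alone contributes n iterations, so count / n stays near 1.
            System.out.println("n=" + n + "  inner iterations=" + count + "  count/n=" + (double) count / n);
        }
    }
}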

What is the time complexity of an iteration through all possible sequences of an array

Consider an algorithm that goes through all possible sequences of indexes inside an array.
The time complexity of a single loop is linear, and of two nested loops it is quadratic, O(n^2). But what if another loop is nested inside that goes through all indexes between those two indexes? Does the time complexity rise to cubic, O(n^3)? When N becomes very large there don't seem to be enough iterations to consider the complexity cubic, yet it seems too big to be quadratic, O(n^2).
Here is the algorithm considering N = array length
for (int i = 0; i < N; i++)
{
    for (int j = i; j < N; j++)
    {
        for (int start = i; start <= j; start++)
        {
            //statement
        }
    }
}
Here is a simple visual of the iterations when N = 7 (image omitted); the same pattern continues for each value of i.
Should we consider the time complexity here quadratic, cubic or as a different size complexity?
For the basic
for (int i = 0; i < N; i++) {
    for (int j = i; j < N; j++) {
        // something
    }
}
we execute something n * (n+1) / 2 times => O(n^2). As to why: it is the simplified form of
sum_{x=1..n} ( sum_{y=x..n} 1 ).
For your new case we have a similar formula:
sum_{x=1..n} ( sum_{y=x..n} ( sum_{z=x..y} 1 ) ). The result is n * (n+1) * (n+2) / 6 => O(n^3) => the time complexity is cubic.
The 1 in both formulas is where you enter the cost of something. This is in particular where you extend the formula further.
Note that all the indices may be off by one, I did not pay particular attention to < vs <=, etc.
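As a sanity check, here is a small, hypothetical sketch (the count variable is mine) that counts how often the innermost statement executes and prints it next to n * (n+1) * (n+2) / 6:

public class TripleLoopCount {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 500}) {
            long count = 0;
            for (int i = 0; i < n; i++) {
                for (int j = i; j < n; j++) {
                    for (int start = i; start <= j; start++) {
                        count++;    // stands in for the //statement from the question
                    }
                }
            }
            long formula = (long) n * (n + 1) * (n + 2) / 6;
            System.out.println("n=" + n + "  count=" + count + "  n*(n+1)*(n+2)/6=" + formula);
        }
    }
}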
Short answer, O(choose(N+k, N)) which is the same as O(choose(N+k, k)).
Here is the long answer for how to get there.
You have the basic question version correct. With k nested loops, your complexity is going to be O(N^k) as N goes to infinity. However as k and N both vary, the behavior is more complex.
Let's consider the opposite extreme. Suppose that N is fixed, and k varies.
If N is 0, your time is constant because the outermost loop fails on the first iteration. If N = 1 then your time is O(k), because you go through all of the levels of nesting with only one possible choice at each level. If N = 2 then something more interesting happens: you go through the nesting over and over again, and it takes time O(k^2). And in general, with fixed N the time is O(k^N), where one factor of k is the time taken to traverse the nesting and the O(k^(N-1)) factor counts the points at which your sequence advances. This is an unexpected symmetry!
Now what happens if k and N are both big? What is the time complexity of that? Well here is something to give you intuition.
Can we describe all of the times that we arrive at the innermost loop? Yes!
Consider k+N-1 slots, with k of them being "entered one more loop" and N-1 of them being "we advanced the index by 1". I assert the following:
These correspond 1-1 to the sequences of decisions by which we reach the innermost loop, as can be seen by looking at which indexes are bigger than others, and by how much.
The "entered one more loop" entries at the end are the work needed to get to the innermost loop for this iteration that did not lead to any other loop iterations.
If 1 < N we actually need one more than that in unique work to get to the end.
Now this looks like a mess, but there is a trick that simplifies it quite unexpectedly.
The trick is this. Suppose that we took one of those patterns and inserted one extra "we advanced the index by 1" somewhere in that final stretch of "entered one more loop" entries at the end. How many ways are there to do that? The answer is that we can insert that last entry in between any two spots in that last stretch, including beginning and end, and there is one more way to do that than there are entries. In other words, the number of ways to do that matches how much unique work there was getting to this iteration!
And what that means is that the total work is proportional to O(choose(N+k, N)) which is also O(choose(N+k, k)).
It is worth knowing that from the normal approximation to the binomial formula, if N = k then this turns out to be O(2^(N+k)/sqrt(N+k)) which indeed grows faster than polynomial. If you need a more general or precise approximation, you can use Stirling's approximation for the factorials in choose(N+k, N) = (N+k)! / ( N! k! ).

What is the complexity of empty for loop?

I was wondering whether the complexity of an empty for loop like the one below is still O(n^2):
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}
Update: changed the height and width variables to n.
If it doesn't get optimized out by the compiler, the complexity will still be O(n^2) (or actually O(N*M)): even though the loop bodies are empty, the condition checks and the incrementation of both counters are still valid operations which have to be performed.
The complexity of any for loop that runs from 1 .. n is O(n), even if it does not do anything inside it. So in your case it is always going to be O(n^2) irrespective of what you are doing inside the loops.
Here in your example, i and j both run up to n and hence each depends on the value of n, making the nested for loops have a complexity of O(n^2).
Note that the update could also do something other than i++, e.g. fun(i).
Based on my understanding of the time complexity of an algorithm, we assume that there are one or more fundamental operations. Rewriting the code using while loops and expanding the for logic:
int i = 0;
while (i < n)
{
    int j = 0;           // j must be re-initialized for every outer iteration
    while (j < n)
    {
        ;                // nop or no-operation
        j = j + 1;       // let jInc be an alias for j + 1
    }
    i = i + 1;           // let iInc be an alias for i + 1
}
Now if the only fundamental operation you count is the 'nop' itself, the loop body contributes nothing, since a no-operation costs nothing. However, if the objective is to iterate the two counters ('i' and 'j') from 0 to n-1, i.e. to count n^2 times, then the fundamental operations are the additions (j + 1 and i + 1), the comparisons (i < n and j < n) and the assignments (i = iInc and j = jInc), which gives O(n^2).
Big O is just an approximation for evaluating the count of steps in an algorithm.
We could write formulas for the exact count of steps, but they are complex and it is hard to read the actual growth rate off them, so we simplify:
1) O(0.000000001*n^2 - 1000000000) = O(n^2)
2) O(1000000000*n) = O(n)
Despite the Big O classification, the first expression is smaller than the second for, e.g., n = 0 .. 1,000,000.
Moreover, Big O does not take into account how fast each particular step is.
So your loop is a case where code that is O(n^2) can run faster than code that is O(1).
The inner loop performs constant work O(1) n times, so n * O(1) = O(n) * O(1) = O(n).
The outer loop performs the above-mentioned O(n) work n times, so n * O(n) = O(n) * O(n) = O(n^2).
In general:
f(n) ∈ O(f(n))
c * f(n) ∈ O(f(n)) if c is a constant
f(n) * g(n) ∈ O(f(n) * g(n))
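As a tiny, hypothetical illustration (the counter variable is mine, and adding it technically makes the body non-empty), counting how many times the inner loop is entered shows the n * n behaviour directly:

public class EmptyLoopCount {
    public static void main(String[] args) {
        int n = 1_000;
        long innerIterations = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                innerIterations++;   // stands in for the condition check + increment done on each pass
            }
        }
        // Prints 1000000, i.e. n * n, even though the loops do no "useful" work
        System.out.println(innerIterations);
    }
}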
It depends on the compiler.
Theoretically it's O(n), where n is the number of iterations, even if there's no task inside the loop.
But in some cases the compiler optimizes the loop away and doesn't iterate n times at all; in that situation the complexity is O(1).
For the nested loops above, each individual loop is O(n) and together they are O(n^2); since Big O describes an upper bound on the growth, O(n^2) is the figure to quote.

I believe this algorithm is O(N). Am I correct?

This algorithm reverses an array of N integers. I believe this algorithm is O(N) because for each loop iteration, the four lines of code are executed once thus completing the job in 4N time.
public static void reverseTheNumbers(int[] list) {
    for (int i = 0; i < list.length / 2; i++) {
        int j = list.length - 1 - i;
        int temp = list[i];
        list[i] = list[j];
        list[j] = temp;
    }
}
There isn't such a thing as 4N time. The algorithm is linear because as you increase the size of the input the runtime of the algorithm increases proportionally. In other words if you doubled the size of list you would expect the algorithm to take twice as long.
It doesn't matter how many operations you do inside your loop - as long as they are each constant time (relative to the input) the runtime of the loop is determined simply by the number of iterations.
Put another way, these four statements are - all together - an O(1) operation.
int j = list.length - 1 - i;
int temp = list[i];
list[i] = list[j];
list[j] = temp;
There's nothing significant about the fact that this sequence of steps is expressed as four statements in Java syntax - experimenting with javap suggests these four lines compile into ~20 bytecode commands, and who knows how many processor instructions that bytecode gets converted into. The good news is that Big-O notation works the same regardless of the particular syntax - a sequence of operations is O(1), or constant time, if its execution time is bounded by a constant regardless of the input.
Therefore you're doing an O(1) operation N times; aka O(N).
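To see the linear growth directly, here is a small, hypothetical sketch (the reverseAndCount helper is mine) that counts the loop iterations performed by the reversal for doubling input sizes; doubling N doubles the count:

public class ReverseCount {
    // Same reversal as in the question, but returning the number of loop iterations.
    static long reverseAndCount(int[] list) {
        long iterations = 0;
        for (int i = 0; i < list.length / 2; i++) {
            int j = list.length - 1 - i;
            int temp = list[i];
            list[i] = list[j];
            list[j] = temp;
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1_000, 2_000, 4_000}) {
            // floor(n / 2) iterations, each doing a constant amount of work: O(N) overall
            System.out.println("n=" + n + "  iterations=" + reverseAndCount(new int[n]));
        }
    }
}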
Yes, you are correct. The number of operations is linearly dependent on the size of the array (N), making it an O(N) algorithm.
Yes, the complexity of the algorithm is O(n).
However, the exact "time" (there are no constant factors in asymptotic complexity, see the comment below) is not 4 times the size of the array; we could say it is 1/2*(c1+c2+c3+c4) times the size of the array, where the 1/2 comes from the loop iterating over only half the array and each c corresponds to the time needed for one of the operations inside the loop.
It would be 4 times the size of the array, if the algorithm was iterating the whole array 4 times.
