I have the following:
public static int[] MyAlgorithm(int[] A, int n) {
    boolean done = true;
    int j = 0;
    while (j <= n - 2) {
        if (A[j] > A[j+1]) {
            int temp = A[j + 1];
            A[j + 1] = A[j];
            A[j] = temp;
            done = false;
        }
        j++;
    }
    j = n - 1;
    while (j >= 1) {
        if (A[j] < A[j-1]) {
            int temp = A[j - 1];
            A[j - 1] = A[j];
            A[j] = temp;
            done = false;
        }
        j--;
    }
    if (!done)
        return MyAlgorithm(A, n);
    else
        return A;
}
This essentially sorts an array 'A' of length 'n'. However, while trying to figure out the time complexity of this algorithm, I keep running in circles. If I look at the first while-loop, its body gets executed 'n-2' times, making it O(n). The second while-loop executes 'n-1' times, also making it O(n), once we've dropped the constants for both. Now, the recursive portion of this algorithm is what throws me off.
The recursion looks to be tail-recursive, given that nothing else is done after the recursive call. At this point, I'm not sure whether the recursion being tail-recursive has anything to do with the time complexity... If this really is O(n), does that necessarily mean it's Omega(n) as well?
Please correct any of my assumptions I've made if there are any. Any hints would be great!
This is O(n^2).
This is because with each recursion you iterate the entire array twice: once up (bubbling the largest element to the top) and once down (bubbling the smallest element to the bottom).
On the next recursion you pay yet another 2n. However, we know the topmost and bottommost elements are now correct, so there are n-2 unsorted elements left. Each repetition sorts 2 more elements, and so on. If we want to find the number of iterations, "i", we solve n - 2i = 0, giving i = n/2 iterations.
n/2 iterations * 2n operations per iteration = n^2 operations.
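If you want to sanity-check the "n/2 passes times 2n operations" claim empirically, here is a rough instrumentation sketch; the class wrapper, counters, and reverse-sorted test input are my additions, and the sweeps do the same work as in the question (written as for loops here):
public class MyAlgorithmCount {
    static long passes = 0;       // number of recursive calls, i.e. full up+down sweeps
    static long comparisons = 0;  // number of element comparisons

    public static int[] myAlgorithm(int[] A, int n) {
        passes++;
        boolean done = true;
        for (int j = 0; j <= n - 2; j++) {          // upward sweep
            comparisons++;
            if (A[j] > A[j + 1]) {
                int temp = A[j + 1]; A[j + 1] = A[j]; A[j] = temp;
                done = false;
            }
        }
        for (int j = n - 1; j >= 1; j--) {          // downward sweep
            comparisons++;
            if (A[j] < A[j - 1]) {
                int temp = A[j - 1]; A[j - 1] = A[j]; A[j] = temp;
                done = false;
            }
        }
        return done ? A : myAlgorithm(A, n);
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] A = new int[n];
        for (int i = 0; i < n; i++) A[i] = n - i;   // reverse-sorted: the worst case
        myAlgorithm(A, n);
        // Expect roughly n/2 passes and on the order of n^2 comparisons.
        System.out.println("passes = " + passes + ", comparisons = " + comparisons);
    }
}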
EDIT: tail recursion doesn't change the time complexity, but it DOES help with memory in languages that perform tail-call optimization. Because the recursive call is the last thing the function does, such a language can reuse the current stack frame instead of pushing a new one, so the stack stays at constant depth instead of growing with every recursive call.
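To make that concrete for this code: Java itself does not eliminate tail calls, but because the recursive call is in tail position you can do the rewrite by hand. A minimal sketch of the loop version (same O(n^2) work, but constant stack depth; the name is mine):
public static int[] myAlgorithmIterative(int[] A, int n) {
    boolean done = false;
    while (!done) {                                  // replaces the tail-recursive call
        done = true;
        for (int j = 0; j <= n - 2; j++) {           // upward sweep
            if (A[j] > A[j + 1]) {
                int temp = A[j + 1]; A[j + 1] = A[j]; A[j] = temp;
                done = false;
            }
        }
        for (int j = n - 1; j >= 1; j--) {           // downward sweep
            if (A[j] < A[j - 1]) {
                int temp = A[j - 1]; A[j - 1] = A[j]; A[j] = temp;
                done = false;
            }
        }
    }
    return A;
}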
Also, to be precise, big-O gives an upper bound (usually quoted for the worst case), whereas Omega gives a lower bound (usually quoted for the best case). This algorithm is Omega(n), because in the best case it iterates the array twice, finds everything already sorted, and doesn't recurse.
In your case the recurrence relation is something like:
T(n) = T(n) + n.
But if we assume the biggest number ends up at the end on every pass, we can approximate:
T(n) = T(n-1) +n
T(n-1) = T(n-2) + n-1
T(n-2) = T(n-3) + n-2
T(n) = T(n-2) + n-1 +n
T(n) = T(n-3) + n-2 + n-1 + n
T(n) = T(n-k) + kn -k(k-1)/2
If n-k = 1
then k = n-1.
Substituting that:
T(n) = T(1) + (n-1)n - (n-1)(n-2)/2
which is of order O(n^2).
Also, the smallest number ends up at the beginning, so we could equally well have approximated
T(n) = T(n-2) + n
and the order would still be O(n^2).
Without that approximation we can't estimate exactly when done will become true. But we can be sure that after each full pass the biggest number ends up at the end and the smallest at the beginning, so no more work is needed at positions 0 and n-1.
I hope this helps you understand why it is n^2.
double expRecursive(double x, int n) {
    if (n <= 4) {
        return expIterativ(x, n);
    }
    return expRecursive(x, n/2) *
           expRecursive(x, (n + 1)/2);
}
So the problem I am dealing with is how to express the time complexity of this method in big O notation.
Here is what I've done so far; I'm not sure if it is correct, but let me explain it. I get that T(n) = 2T(n/2) + 1 for n > 4, since we have 2 recursive calls plus other constant-time operations. But when it comes to n <= 4, that is where I got stuck. There is still a call being made there, which makes me think it should be something like T(n) = T(n/2) + 1, but that doesn't feel right either. I would really appreciate it if someone could help me.
Assuming a constant x for our purposes (i.e., we are not interested in the growth rate as a function of x), expIterative is also just a function of n, and is only called for cases where n <= 4. There is some largest time t* that expIterative takes to run on x and n where n goes from 0 to 4. We can simply use that largest time t* as a constant, since the range of n that can be sent as an input is bounded.
double expRecursive(double x, int n) {
    if (n <= 4) {                       // a+b
        return expIterativ(x, n);       // c+t*
    }
    return expRecursive(x, n/2) *       // c+T(n/2)
           expRecursive(x, (n + 1)/2);  // d+T((n+1)/2)
}
As you pointed out, we can make the simplifying assumption that n is even and just worry about that case. If we assume n is a power of 2, even easier, since then all recursive calls will be for even numbers.
We get
T(n) <= 2T(n/2) + (a+b+2c+d+t*)
The stuff in parentheses at the end is just a sum of constants, so we can add them together and call the result k:
T(n) <= 2T(n/2) + k
We can write out some terms here:
n      T(n)
4      t*
8      2t* + k
16     4t* + 2k + k
32     8t* + 4k + 2k + k
...
2^m    2^(m-2)t* + (2^(m-2) - 1)k
     = (2^m)(t* + k)/4 - k
So for an input of size 2^m, the running time is proportional to 2^m, i.e. proportional to the size of the input itself. That means T(n) = O(n).
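One way to convince yourself of the linear bound is to count the calls for doubling inputs; doubling n should roughly double the count. This is only a sketch: the counter and driver are my additions, and since the body of expIterativ isn't shown in the question I've stubbed it with a plain loop (which is what matters for the analysis: its cost is bounded by a constant for n <= 4):
public class ExpCount {
    static long calls = 0;

    // Hypothetical stand-in for expIterativ from the question: a simple loop,
    // so for n <= 4 its cost is bounded by a constant.
    static double expIterative(double x, int n) {
        double result = 1.0;
        for (int i = 0; i < n; i++) result *= x;
        return result;
    }

    static double expRecursive(double x, int n) {
        calls++;
        if (n <= 4) {
            return expIterative(x, n);
        }
        return expRecursive(x, n / 2) * expRecursive(x, (n + 1) / 2);
    }

    public static void main(String[] args) {
        for (int n = 8; n <= (1 << 20); n *= 2) {
            calls = 0;
            expRecursive(1.000001, n);
            // The call count grows linearly: doubling n roughly doubles it.
            System.out.println("n = " + n + ", calls = " + calls);
        }
    }
}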
I am trying to calculate the time complexity of these two algorithms. The book I am referring to specifies these time complexities of each.
A) Algorithm A: O(nlogn)
int i = n;
while (i > 0)
{
    for (int j = 0; j < n; j++)
        System.out.println("*");
    i = i / 2;
}
B) Algorithm B: O(n)
while (n > 0)
{
    for (int j = 0; j < n; j++)
        System.out.println("*");
    n = n / 2;
}
I can see how Algorithm A is O(n log n): the for loop is O(n) and the while loop is O(log n). However, I am failing to see why Algorithm B has a time complexity of O(n); I was expecting it to be O(n log n) as well. Any help would be appreciated.
Let's look at Algorithm B from a mathematical standpoint.
The number of * printed by the first pass of the loop is n. Each subsequent pass prints at most half as many as the one before it, which leads to the series:
n + n/2 + n/4 + n/8 + ...
If this were an infinite series, the sum would be given by the formula n/(1 - r), where n is the first term and r is the ratio between consecutive terms. Here r is 1/2, so the infinite series sums to 2n.
Your algorithm certainly won't go on forever, and each time it loops it may print even fewer than half the stars of the previous pass. The number of stars printed is therefore less than or equal to 2n.
Because constant factors drop out of time complexity, the algorithm is O(n).
This concept is called amortized complexity, which averages the cost of operations in a loop, even if some of the operations might be relatively expensive. Please see this question and the Wikipedia page for a more thorough explanation of amortized complexity.
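If you want to check the 2n bound numerically, here is a small sketch (the names are mine; it counts the stars instead of printing them):
public class AlgorithmBCount {
    // Counts the stars Algorithm B would print, instead of printing them.
    static long countStars(int n) {
        long stars = 0;
        while (n > 0) {
            for (int j = 0; j < n; j++) stars++;
            n = n / 2;
        }
        return stars;
    }

    public static void main(String[] args) {
        for (int n = 1000; n <= 1000000; n *= 10) {
            // The total always stays below 2n, matching the geometric-series bound.
            System.out.println("n = " + n + ", stars = " + countStars(n) + ", 2n = " + (2L * n));
        }
    }
}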
Algorithm B prints half as many stars on each iteration as on the previous one. Assume n=10; then:
n=10 -> 10*
n=5 -> 5*
n=2 -> 2*
n=1 -> 1*
In total 18 stars are printed. In general you will print n + n/2 + n/4 + ... + n/(2^i) stars. What is i? It is the number of halvings needed before n reaches 0; in other words, roughly the exponent to which 2 must be raised to produce n: log_2(n). Summing the series gives
n * (1 + 1/2 + 1/4 + ... + 1/2^i) < 2n,
which is O(n).
Hello, I need to apply an algorithm similar to this, but the problem is that I need the complexity to be O(log n). The complexity of the code below is said to be O(log n), but from what I understand a recursive method has a growth order of O(n). So the question is: what is the growth order of the code below?
public static int findPeak(int[] array, int start, int end) {
    int index = start + (end - start) / 2;
    if (index - 1 >= 0 && array[index] < array[index - 1]) {
        return findPeak(array, start, index - 1);
    } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
        return findPeak(array, index + 1, end);
    } else {
        return array[index];
    }
}
It should be O(log n). For intuition, think of this as walking down a binary tree: each function call divides the input range into two halves (each call corresponding to a node of the tree). So if the input size is n, the tree has about log(n) levels, and one level corresponds to one function call.
Also note that within one function call only one recursive call actually happens (either the one in the if block or the one in the else-if block, never both). That may be what made it feel like O(n) growth.
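A quick way to see the log n growth directly is to count the calls for arrays of doubling size; the count should go up by about one each time the size doubles. The method body below is copied from the question; only the counter and the test driver (with a strictly increasing array, so the peak is the last element) are my additions:
public class FindPeakCount {
    static int calls = 0;

    public static int findPeak(int[] array, int start, int end) {
        calls++;
        int index = start + (end - start) / 2;
        if (index - 1 >= 0 && array[index] < array[index - 1]) {
            return findPeak(array, start, index - 1);
        } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
            return findPeak(array, index + 1, end);
        } else {
            return array[index];
        }
    }

    public static void main(String[] args) {
        for (int n = 16; n <= (1 << 20); n *= 2) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = i;   // strictly increasing, so the search always goes right
            calls = 0;
            findPeak(a, 0, n - 1);
            // The number of calls grows by about 1 each time n doubles, i.e. ~log2(n).
            System.out.println("n = " + n + ", calls = " + calls);
        }
    }
}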
The range handed to each recursive call is half the size of the range passed into the function. Hence, if T(n) is the time complexity of the function, we can write:
T(n) = T(n/2) + 1
The 1 accounts for the constant-time comparisons that choose a branch, and T(n/2) is the cost of whichever branch is selected. Hence, T(n) is in O(log(n)).
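Unrolling that recurrence (assuming n is a power of two for simplicity) makes the log explicit:
T(n) = T(n/2) + 1
     = T(n/4) + 2
     = ...
     = T(n/2^k) + k
Setting n/2^k = 1 gives k = log_2(n), so T(n) = T(1) + log_2(n), which is O(log n).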
I've been researching a lot about time complexity for my Data Structures class, and I've been tasked with reporting on the Shell sort algorithm and explaining its time complexity (best/worst/average case). I found this site https://stackabuse.com/shell-sort-in-java/ which says that the time complexity of this Shell sort implementation:
void shellSort(int array[], int n) {
    // n = array.length
    for (int gap = n/2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i += 1) {
            int temp = array[i];
            int j;
            for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                array[j] = array[j - gap];
            }
            array[j] = temp;
        }
    }
}
is O(n log n). But the problem is that I'm still confused about what makes the log n a log n, or what n log n really means.
I also tried the step count method, but again I didn't know where to start, so I just copied from the site above and did this:
void shellSort(int array[], int n) {
    // n = array.length
    for (int gap = n/2; gap > 0; gap /= 2) {                        // step 1 = runs logn times
        for (int i = gap; i < n; i += 1) {                          // step 2 = runs n-gap times
            int temp = array[i];                                    // step 3 = 1
            int j;                                                  // step 4 = 1
            for (j = i; j >= gap && array[j - gap] > temp; j -= gap) { // step 5 = i/gap times
                array[j] = array[j - gap];                          // step 6 = 1
            }
            array[j] = temp;                                        // step 7 = 1
        }
    }
}
But I don't know if this is correct; I just based it on that same site, https://stackabuse.com/shell-sort-in-java/.
I've also tried comparing the total number of moves between Insertion Sort and Shell Sort, since Shell Sort is a generalization of Insertion and Bubble Sort. I'll attach the pics below. I used an online number generator to get 100 random numbers, copied them, and used them as the array to sort for both the Insertion Sort and the Shell Sort.
And this was what came up,
Total number of moves of Insertion Sort = 4731
Total number of moves of Shell Sort = 1954
(Screenshots: the Shell Sort and Insertion Sort implementations, each instrumented to report the total number of moves it makes.)
What I've understood from all of this is that, even though Shell sort is a generalization of Insertion sort, when it comes to sorting larger arrays such as 100 elements Shell Sort makes roughly half as many moves as Insertion Sort.
But the ultimate question is: is there a beginner-friendly way to calculate the time complexity of an algorithm like this Shell Sort?
You have to take a look at the big O or big Theta analysis of your function. The outer loop variable is divided by half on every iteration, so the outer loop runs about log n times. The inner loops initially run from n/2 up to n, and in later passes from smaller and smaller gaps up to n, so across the passes their iteration counts look like n/2 + n/4 + ... + n/2^k; factoring out n gives n(1/2 + 1/4 + ... + 1/2^k), a geometric series, and over the log n passes this kind of per-pass work adds up to about n log n.
The best case, where the list is already sorted to some extent, is Ω(n log n): the passes find an (almost) sorted list well before the outer loop finishes, so n log n is a lower bound, meaning the running time is definitely equal to or bigger than that. For the average case we can say Θ(n log^2 n), meaning it lies within a tight bound of that (note that for the average case I use big Theta). Now, if we assume the list is completely reversed, the outer loop runs all the way, i.e. log n times, and the inner loops do about n log n work, so the total time is n log^2(n), which we can state as O(n log^2(n)). (We could also use big O throughout, but Theta is better; you can look up how Theta provides a tight bound while big O only provides an upper bound.) Therefore we can also say the worst case is O(n^2), which is relatively correct in some contexts.
I suggest you take a look at Big-O and Big-Theta as well as Big-Omega which can also be useful in this case.
However, the most precise mathematical bound usually quoted for the Shell sort algorithm is O(n^(3/2)), although there is still debate and ongoing analysis around it (the exact bound depends on the gap sequence used).
I hope this helps.
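If the "where does the log come from" part is still unclear, one concrete check is to count how often each loop level actually runs. The sort below is the one from the question; only the counters, the class wrapper, and the random test input are my additions:
public class ShellSortCount {
    static long gapPasses = 0;   // how many different gaps the outer loop goes through
    static long innerSteps = 0;  // total iterations of the innermost shifting loop

    static void shellSort(int[] array, int n) {
        for (int gap = n / 2; gap > 0; gap /= 2) {
            gapPasses++;                          // happens about log2(n) times
            for (int i = gap; i < n; i += 1) {
                int temp = array[i];
                int j;
                for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                    innerSteps++;                 // data-dependent: this is where best and worst case differ
                    array[j] = array[j - gap];
                }
                array[j] = temp;
            }
        }
    }

    public static void main(String[] args) {
        int n = 1 << 16;                          // 65536, so log2(n) = 16
        int[] a = new int[n];
        java.util.Random rnd = new java.util.Random(42);
        for (int i = 0; i < n; i++) a[i] = rnd.nextInt();
        shellSort(a, n);
        System.out.println("gapPasses = " + gapPasses + ", innerSteps = " + innerSteps);
    }
}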
First, I'll show that the algorithm never does more than O(n^2) work, and then I'll show that the worst-case running time really is quadratic, i.e. at least Ω(n^2).
Assume n is a power of two. We know the worst case for insertion sort is O(n^2). When h-sorting the array, we're performing h insertion sorts, each on an array of size n / h. So the complexity of an h-sort pass is O(h * (n / h)^2) = O(n^2 / h). The complexity of the whole algorithm is then the sum of n^2 / h, where h ranges over each power of two up to n / 2. This is a geometric series with first term n^2, common ratio 1 / 2, and log2(n) terms. Using the geometric series sum formula gives n^2 * ((1 / 2)^log2(n) - 1) / (1 / 2 - 1) = n^2 * (1 / n - 1) / (-1 / 2) = n^2 * (-2 / n + 2) = 2n^2 - 2n = O(n^2).
Consider an array created by interleaving two increasing sequences, where every element of one sequence is greater than every element of the other, such as [1, 5, 2, 6, 3, 7, 4, 8]. Since this array is already 2-sorted, all passes except the last one do nothing. In the last pass, an element at an even index i has to be moved to index i / 2, which takes O(i / 2) operations. So we get 1 + 2 + 3 + ... + n / 2 = (n / 2) * (n / 2 + 1) / 2 = O(n^2).
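To see this second argument in action, you can generate that interleaved input for any power-of-two n and count the element moves; the count grows roughly like n^2/8, i.e. quadratically. The generator, counter, and driver below are my own sketch around the shellSort from the question:
public class ShellSortWorstCase {
    static long shifts = 0;   // counts element moves in the innermost loop

    static void shellSort(int[] array, int n) {
        for (int gap = n / 2; gap > 0; gap /= 2) {
            for (int i = gap; i < n; i += 1) {
                int temp = array[i];
                int j;
                for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                    shifts++;
                    array[j] = array[j - gap];
                }
                array[j] = temp;
            }
        }
    }

    public static void main(String[] args) {
        for (int n = 16; n <= 4096; n *= 4) {     // powers of two, so every gap in the sequence is a power of two
            int[] a = new int[n];
            for (int i = 0; i < n / 2; i++) {
                a[2 * i] = i + 1;                 // small values at even indices: 1, 2, 3, ...
                a[2 * i + 1] = n / 2 + i + 1;     // large values at odd indices: n/2+1, n/2+2, ...
            }
            shifts = 0;
            shellSort(a, n);
            // Only the gap = 1 pass moves anything; shifts grows like n^2/8
            // (quadrupling n multiplies it by about 16).
            System.out.println("n = " + n + ", shifts = " + shifts);
        }
    }
}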
Preparing for exams I came across this question in an old exam:
What is the worst case/big-O complexity of this function:
float foo(float[] A) {
    int n = A.length;
    if (n == 1) return A[0];
    float[] A1 = new float[n/2];
    float[] A2 = new float[n/2];
    float[] A3 = new float[n/2];
    float[] A4 = new float[n/2];
    for (int i = 0; i <= (n/2)-1; i++) {
        for (int j = 0; j <= (n/2)-1; j++) {
            A1[i] = A[i];
            A2[i] = A[i+j];
            A3[i] = A[n/2+j];
            A4[i] = A[j];
        }
    }
    return foo(A1)
         + foo(A2)
         + foo(A3)
         + foo(A4);
}
(Yes, the code makes no sense, but this is exactly the way it was written).
What's tripping me up is that the total amount of data doubles at each recursive level (four calls on half the array each is 4 * n/2 = 2n elements), but the suggested answer (with the end result O(log n * n^2)) seems to ignore that part. Am I misunderstanding things?
Edit: Replaced the semi-pseudocode with syntactically correct(but still nonsensical) code.
If you solve this recurrence relation, you'll be able to determine the complexity.
T(n) = 4T(n/2) + O(n²)
With
T(1) = c
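For reference, assuming you're allowed to use it, this is exactly the "equal" case of the Master theorem: with a = 4 subproblems of size n/2 and f(n) = Θ(n^2), we have n^(log_2 4) = n^2 = f(n), so
T(n) = 4T(n/2) + Θ(n^2)  =>  T(n) = Θ(n^2 log n),
which matches the suggested answer of O(log n * n^2).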
Okay, I finally figured it out.
Every time we recurse we make 4 times as many function calls as on the previous level, so if we call the recursion level m, the number of function calls at level m is 4^m.
Every time we recurse we also halve the size of the array, so the amount of work per function call at level m is (n/2^m)^2.
At each recursive level the total work done is therefore 4^m * (n/2^m)^2.
In fact 4^m is the same as (2^m)^2, so the (2^m)^2 factors cancel:
4^m * (n/2^m)^2 = (2^m)^2 * n^2/(2^m)^2 = n^2. Thus the amount of work per level is just n^2.
There are log n recursive levels.
Thus the total amount of work is O(n^2 * log n), and that is because there are 4 recursive calls.
If there were just 2 recursive calls, the amount of work at each level would be 2^m * (n/2^m)^2 = n^2/2^m,
which doesn't cancel as nicely (but summing it over the levels turns out to be O(n^2), if my math is correct).
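If you want to double-check the n^2 * log n result empirically, here is a sketch that counts the inner-loop iterations for inputs of doubling size; the copying logic is kept as in the question, and only the counter, the int declarations for the loop variables, and the test driver are my additions:
public class FooCount {
    static long innerIterations = 0;   // executions of the innermost loop body

    static float foo(float[] A) {
        int n = A.length;
        if (n == 1) return A[0];
        float[] A1 = new float[n / 2];
        float[] A2 = new float[n / 2];
        float[] A3 = new float[n / 2];
        float[] A4 = new float[n / 2];
        for (int i = 0; i <= (n / 2) - 1; i++) {
            for (int j = 0; j <= (n / 2) - 1; j++) {
                innerIterations++;
                A1[i] = A[i];
                A2[i] = A[i + j];
                A3[i] = A[n / 2 + j];
                A4[i] = A[j];
            }
        }
        return foo(A1) + foo(A2) + foo(A3) + foo(A4);
    }

    public static void main(String[] args) {
        for (int n = 16; n <= 1024; n *= 2) {
            innerIterations = 0;
            foo(new float[n]);
            // The ratio iterations / n^2 grows by a constant each time n doubles,
            // i.e. the total work behaves like n^2 * log(n).
            System.out.println("n = " + n + ", inner iterations = " + innerIterations
                    + ", ratio to n^2 = " + (double) innerIterations / ((double) n * n));
        }
    }
}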