what's the growth order of "find a peak" algorithm - java

Hello, I need to apply an algorithm similar to this one, but the problem is that I need the complexity to be O(log n). The complexity of the code below is said to be O(log n), but from what I understand a recursive method has a growth order of O(n). So the question is: what is the growth order of the code below?
public static int findPeak(int[] array, int start, int end) {
    int index = start + (end - start) / 2;   // midpoint of the current range
    if (index - 1 >= 0 && array[index] < array[index - 1]) {
        // the left neighbour is larger, so a peak must exist in the left half
        return findPeak(array, start, index - 1);
    } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
        // the right neighbour is larger, so a peak must exist in the right half
        return findPeak(array, index + 1, end);
    } else {
        // neither neighbour is larger: array[index] is a peak
        return array[index];
    }
}

It should be O(log n). For simplicity (and as an easy way to think about it), think of this as building a binary tree: each function call divides the input array into two halves (creating the nodes of a binary tree). So if the input size is n, the binary tree has log(n) levels, and one level corresponds to one function call.
Also note that in any one call only one recursive call actually happens (either the one in the if block or the one in the else-if block, but not both). Seeing two recursive calls in the code might make it feel like O(n) growth.

The size of the input range in each branch of the code is half of the original range passed into the function. Hence, if T(n) is the time complexity of the function, we can write:
T(n) = T(n/2) + 1
The 1 accounts for the comparisons used to choose a branch, and T(n/2) is the cost of whichever branch is selected. Hence, T(n) is in O(log n).
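
If you want to convince yourself empirically, a small test harness around the same findPeak code (the harness and the strictly increasing test array are just for illustration) counts the recursive calls; the count stays near log2(n) even for a million elements:

public class PeakDepthDemo {
    static int depth = 0; // counts recursive calls for one invocation

    static int findPeak(int[] array, int start, int end) {
        depth++;
        int index = start + (end - start) / 2;
        if (index - 1 >= 0 && array[index] < array[index - 1]) {
            return findPeak(array, start, index - 1);
        } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
            return findPeak(array, index + 1, end);
        } else {
            return array[index];
        }
    }

    public static void main(String[] args) {
        for (int n = 16; n <= 1 << 20; n *= 16) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = i;   // strictly increasing: the peak is the last element
            depth = 0;
            findPeak(a, 0, n - 1);
            // depth grows roughly like log2(n), not like n
            System.out.println("n = " + n + ", recursive calls = " + depth);
        }
    }
}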


How to find the time complexity of a recursive method?

double expRecursive(double x, int n) {
    if (n <= 4) {
        return expIterativ(x, n);
    }
    return expRecursive(x, n/2) *
           expRecursive(x, (n + 1)/2);
}
So the problem I am dealing with is how to write the time complexity of this method using big O notation.
Here is what I've done so far. I'm not sure if it is correct, but let me explain it. I get that T(n) = 2T(n/2) + 1 for n > 4, since we have 2 recursive calls plus the other operations. But when it comes to n <= 4, that is where I get stuck. There is a recursive call, which makes me think that even that case would be something like T(n) = T(n/2) + 1, but this doesn't feel right. I would really appreciate it if someone could help me.
Assuming a constant x for our purposes (i.e., we are not interested in the growth rate as a function of x), expIterative is also just a function of n, and is only called for cases where n <= 4. There is some largest time t* that expIterative takes to run on x and n where n goes from 0 to 4. We can simply use that largest time t* as a constant, since the range of n that can be sent as an input is bounded.
double expRecursive(double x, int n) {
    if (n <= 4) {                         // a+b
        return expIterativ(x, n);         // c+t*
    }
    return expRecursive(x, n/2) *         // c+T(n/2)
           expRecursive(x, (n + 1)/2);    // d+T((n+1)/2)
}
As you pointed out, we can make the simplifying assumption that n is even and just worry about that case. If we assume n is a power of 2, even easier, since then all recursive calls will be for even numbers.
We get
T(n) <= 2T(n/2) + (a+b+2c+d+t*)
The stuff in parentheses at the end is just a sum of constants, so we can add them together and call the result k:
T(n) <= 2T(n/2) + k
We can write out some terms here (using m for the exponent, so the input size is 2^m):
n      T(n)
4      t*
8      2t* + k
16     4t* + 2k + k
32     8t* + 4k + 2k + k
...
2^m    2^(m-2) t* + (2^(m-2) - 1) k = (2^m)(t* + k)/4 - k
So for an input of size 2^m, the running time is proportional to 2^m, i.e. proportional to the input size itself. That means T(n) = O(n).
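
As a sanity check, you can count the calls. In the sketch below expIterativ is stubbed out with Math.pow, since its body isn't shown in the question; the call count grows linearly with n:

public class ExpRecursiveDemo {
    static long calls = 0; // counts invocations of expRecursive

    static double expIterativ(double x, int n) {
        return Math.pow(x, n); // stand-in for the iterative base case (not the original code)
    }

    static double expRecursive(double x, int n) {
        calls++;
        if (n <= 4) {
            return expIterativ(x, n);
        }
        return expRecursive(x, n / 2) * expRecursive(x, (n + 1) / 2);
    }

    public static void main(String[] args) {
        for (int n = 1 << 4; n <= 1 << 20; n <<= 4) {
            calls = 0;
            expRecursive(1.0000001, n);
            // the number of calls is Theta(n): it roughly multiplies by 16 as n does
            System.out.println("n = " + n + ", calls = " + calls);
        }
    }
}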

Binary-search with duplicate elements in array

I want to find the single (non-duplicated) element in a list of duplicated elements. I started from this code:
private static int findDuplicate(int array[]) {
    int low = 0;
    int high = array.length - 1;
    while (low <= high) {
        int mid = (low + high) >>> 1;
        int midVal = array[mid];
        if (midVal == mid)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return high;
}
It finds the duplicated number, but I want to find only the single number in the duplicated, sorted array.
For example, given this int[] input:
[1,1,2,2,3,3,4,5,5]
Output would be '4'.
Or this int[] input:
[1,1,2,2,3,4,4,5,5,6,6]
Output would be '3'.
In this int[] input:
[1,1,2,7,7,9,9]
Output would be '2'.
I'm working in Java now, but any language or pseudo-code is fine.
I know the obvious traversal at O(n) linear time, but I'm trying to see if this is possible via binary search at O(log n) time.
The elements are sorted and each duplicated element appears exactly twice!
I know the way with a simple loop, but I want to do it by binary search.
Consider the elements in consecutive pairs of 2 (note that there's a stray element at the end when the length is odd). For example, for the array [1,1,2,2,3,3,4,5,5,6,6,7,7,8,8] the pairs are:
(1 1) (2 2) (3 3) (4 5) (5 6) (6 7) (7 8) (8)
Observe that the single non-duplicated element (4 here) makes its own pair and every later pair contain two different values, while every pair before it contains two equal values.
So just binary search for the index of the first pair whose two values differ.
This approach doesn't even require the list to be sorted; it only requires that exactly one element appears once and every other element appears twice, in consecutive indices.
Special case: if the last (stray) element is the unique one, then all the pairs have equal values.
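
In Java, that pair-based binary search could look something like this (one possible sketch of the idea above; the class name, findSingle, and the test arrays are just for illustration):

public class SingleElementFinder {
    // Returns the element that appears exactly once; every other element is
    // assumed to appear exactly twice, in consecutive positions (array length is odd).
    static int findSingle(int[] arr) {
        int lo = 0, hi = arr.length / 2;          // hi indexes the stray "half pair" at the end
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (arr[2 * mid] == arr[2 * mid + 1]) {
                lo = mid + 1;                     // this pair is still equal: go right
            } else {
                hi = mid;                         // this pair differs: answer is here or to the left
            }
        }
        return arr[2 * lo];                       // first differing pair (or the stray element)
    }

    public static void main(String[] args) {
        System.out.println(findSingle(new int[]{1,1,2,2,3,3,4,5,5}));        // 4
        System.out.println(findSingle(new int[]{1,1,2,2,3,4,4,5,5,6,6}));    // 3
        System.out.println(findSingle(new int[]{1,1,2,7,7,9,9}));            // 2
    }
}
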
Every pair of equal values will line up with indices like this:
(0,1), (2,3), (4,5), (6,7), ...
You can clearly see that if an index is even, you should check the next element for equality, and if it is odd, you should check the previous element. If this symmetry is broken, move the search towards the left side; if everything is fine so far, keep moving right.
Pseudocode (not tested):
low = 0, high = arr.length - 1
while low <= high:
    mid = (low + high) / 2
    // corner positions, or both neighbours differ: this is the single element
    if mid == 0 or mid == arr.length - 1 or (arr[mid] != arr[mid - 1] and arr[mid] != arr[mid + 1]):
        return arr[mid]
    if mid % 2 == 0:
        if arr[mid + 1] != arr[mid]:   // even index: its partner should be the next element
            high = mid
        else:
            low = mid + 2
    else:
        if arr[mid - 1] != arr[mid]:   // odd index: its partner should be the previous element
            high = mid
        else:
            low = mid + 1
return -1
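
One possible Java rendering of that pseudocode (with the combined condition parenthesized explicitly; it can be dropped into the same class as findDuplicate above) could be:

static int findSingleElement(int[] arr) {
    int low = 0, high = arr.length - 1;
    while (low <= high) {
        int mid = (low + high) >>> 1;
        // corner positions, or both neighbours differ: this is the single element
        if (mid == 0 || mid == arr.length - 1
                || (arr[mid] != arr[mid - 1] && arr[mid] != arr[mid + 1])) {
            return arr[mid];
        }
        if (mid % 2 == 0) {
            if (arr[mid + 1] != arr[mid]) {
                high = mid;       // symmetry broken: single element is at mid or to its left
            } else {
                low = mid + 2;    // pair intact: skip past it
            }
        } else {
            if (arr[mid - 1] != arr[mid]) {
                high = mid;       // symmetry broken: single element is to the left of mid
            } else {
                low = mid + 1;    // pair intact: continue to the right
            }
        }
    }
    return -1;                    // no single element found
}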

A beginner way to understand how time complexity works [closed]

I've been researching a lot about time complexity for my Data Structures class, and I've been tasked to report on the Shell sort algorithm and explain its time complexity (best/worst/average case). I found this site https://stackabuse.com/shell-sort-in-java/ which says that the time complexity of this Shell sort implementation:
void shellSort(int array[], int n) {
    // n = array.length
    for (int gap = n/2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i += 1) {
            int temp = array[i];
            int j;
            for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                array[j] = array[j - gap];
            }
            array[j] = temp;
        }
    }
}
is O(n log n). But the problem is that I'm still confused about what makes a log n a log n, and what n log n actually means.
I also tried the step-count method, but again I didn't know where to start, so I just copied from the site above and did this:
void shellSort(int array[], int n) {
    // n = array.length
    for (int gap = n/2; gap > 0; gap /= 2) {             // step 1 = runs log n times
        for (int i = gap; i < n; i += 1) {               // step 2 = runs n - gap times
            int temp = array[i];                         // step 3 = 1
            int j;                                       // step 4 = 1
            for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {   // step 5 = i/gap times
                array[j] = array[j - gap];               // step 6 = 1
            }
            array[j] = temp;                             // step 7 = 1
        }
    }
}
But I don't know if this is correct; I just based it on this site: https://stackabuse.com/shell-sort-in-java/.
I've also tried comparing the total number of moves between Insertion Sort and Shell Sort, since Shell Sort is a generalization of Insertion and Bubble Sort (pics attached below). I used an online number generator to get 100 random numbers, copied them, and used that same array as the input to both sorts.
And this was what came up,
Total number of moves of Insertion Sort = 4731
Total number of moves of Shell Sort = 1954
[Image: Shell Sort implementation that reports the total number of moves it does]
[Image: Insertion Sort implementation that reports the total number of moves it does]
What I've understood from all of this is that even though Shell sort is a generalization of Insertion sort, on a larger array such as 100 elements it needed fewer than half as many moves as Insertion sort.
But the ultimate question is: is there a beginner-friendly way to work out the time complexity of an algorithm like this Shell sort?
You have to take a look at the big-O / big-Theta analysis of your function.
The outer loop halves the gap on every iteration, so it runs about log n times. For a given gap, the middle loop runs n - gap times, so summed over all gaps it performs roughly (n - n/2) + (n - n/4) + ... + (n - 1) ≈ n log n - n iterations, which is on the order of n log n. Since every one of those iterations does at least a constant amount of work, n log n is a lower bound for the whole algorithm: the best case (an input that is already sorted, so the innermost loop never shifts anything) is Ω(n log n), and the average case is often quoted as Θ(n log^2 n).
If the input is badly ordered, the innermost loop does extra shifting on top of each middle-loop iteration. For some gap sequences the total stays within O(n log^2 n), but for the plain n/2, n/4, ..., 1 sequence used here the worst case can degrade further, so saying the worst case is O(n^2) is also correct in this context.
I suggest you take a look at Big-O and Big-Theta as well as Big-Omega, which can also be useful in this case; big-Theta gives a tight bound, whereas big-O only gives an upper bound.
You will also see O(n^(3/2)) quoted as the precise bound for Shell sort; the exact complexity depends on the gap sequence, and there are still arguments and analyses taking place.
I hope this helps.
First, I'll show that the algorithm will never be slower than O(n^2), and then I'll show that the worst-case run time is at least Ω(n^2).
Assume n is a power of two. We know the worst case for insertion sort is O(n^2). When h-sorting the array, we're performing h insertion sorts, each on an array of size n / h. So the complexity for an h-sort pass is O(h * (n / h)^2) = O(n^2 / h). The complexity of the whole algorithm is now the sum of n^2 / h where h is each power of two up to n / 2. This is a geometric series with first term n^2, common ratio 1 / 2, and log2(n) terms. Using the geometric series sum formula gives n^2*((1 / 2)^log2(n) - 1) / (1 / 2 - 1) = n^2*(1 / n - 1) / (-1 / 2) = n^2*(-2 / n + 2) = 2n^2 - 2n = O(n^2).
Consider an array created by interweaving two increasing sequences, where all elements in one sequence are greater than all elements in the other sequence, such as [1, 5, 2, 6, 3, 7, 4, 8]. Since this array is two-sorted, all passes except the last one do nothing. In the last pass, an element at index i where i is even has to be moved to index i / 2, which uses O(i / 2) operations. So we have 1 + 2 + 3 + ... + n / 2 = (n / 2) * (n / 2 + 1) / 2 = O(n^2).
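
A small sketch that builds that kind of interleaved input and counts the shifts made by the gap = n/2, n/4, ..., 1 Shell sort illustrates the quadratic growth; the class, counter, and input generator below are just a test harness around the code from the question:

public class ShellSortWorstCase {
    static long shifts = 0;  // counts element moves in the innermost loop

    static void shellSort(int[] array, int n) {
        for (int gap = n / 2; gap > 0; gap /= 2) {
            for (int i = gap; i < n; i++) {
                int temp = array[i];
                int j;
                for (j = i; j >= gap && array[j - gap] > temp; j -= gap) {
                    array[j] = array[j - gap];
                    shifts++;
                }
                array[j] = temp;
            }
        }
    }

    // builds [1, n/2+1, 2, n/2+2, ...]: two interleaved increasing runs
    static int[] interleaved(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n / 2; i++) {
            a[2 * i] = i + 1;
            a[2 * i + 1] = n / 2 + i + 1;
        }
        return a;
    }

    public static void main(String[] args) {
        for (int n = 1 << 8; n <= 1 << 14; n <<= 1) {
            int[] a = interleaved(n);
            shifts = 0;
            shellSort(a, n);
            // the shift count grows roughly 4x each time n doubles, i.e. like n^2
            System.out.println("n = " + n + ", shifts = " + shifts);
        }
    }
}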

Time complexity of algorithm with recursion

I have the following:
public static int[] MyAlgorithm(int[] A, int n) {
    boolean done = true;
    int j = 0;
    while (j <= n - 2) {
        if (A[j] > A[j + 1]) {
            int temp = A[j + 1];
            A[j + 1] = A[j];
            A[j] = temp;
            done = false;
        }
        j++;
    }
    j = n - 1;
    while (j >= 1) {
        if (A[j] < A[j - 1]) {
            int temp = A[j - 1];
            A[j - 1] = A[j];
            A[j] = temp;
            done = false;
        }
        j--;
    }
    if (!done)
        return MyAlgorithm(A, n);
    else
        return A;
}
This essentially sorts an array A of length n. However, when trying to figure out the time complexity of this algorithm, I keep going in circles. Looking at the first while-loop, its body executes n-1 times, making it O(n). The second while-loop also executes n-1 times, again O(n), once we drop the constants for both. Now, the recursive portion of this algorithm is what throws me off.
The recursion looks to be tail-recursive, given that it doesn't call anything else afterwards. At this point, I'm not sure whether the recursion being tail-recursive has anything to do with the time complexity. If this really is O(n), does that necessarily mean it's Omega(n) as well?
Please correct any assumptions I've made if they're wrong. Any hints would be great!
This is O(n^2).
This is because with each recursion, you iterate the entire array twice: once up (bubbling the highest value to the top) and once down (bubbling the lowest value to the bottom).
On the next recursion, you do yet another 2n operations. However, we now know the topmost and bottommost elements are correct, so there are n-2 unsorted elements left. Each repetition sorts 2 more elements, and so on. If we want to find the number of iterations i, we solve n - 2i = 0, which gives i = n/2 iterations.
n/2 iterations * 2n operations per iteration = n^2 operations.
EDIT: tail recursion doesn't change the time order, but it DOES help with memory in languages that optimize it: the tail call can reuse the current stack frame, which significantly reduces the stack space required.
Also, I'm a bit rusty on this, but big-O gives an upper bound (commonly used for the worst case), whereas big-Omega gives a lower bound (commonly used for the best case). This algorithm is Omega(n), because the best case is that it iterates the array twice, finds everything already sorted, and doesn't recurse.
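
To see those roughly n/2 recursive calls in practice, here is the same algorithm with a call counter added and a reverse-sorted test input (the counter, class name, and test harness are just for illustration):

public class CocktailRecursionDemo {
    static int calls = 0;  // number of times MyAlgorithm is entered

    public static int[] MyAlgorithm(int[] A, int n) {
        calls++;
        boolean done = true;
        for (int j = 0; j <= n - 2; j++) {          // forward pass: bubble the largest value up
            if (A[j] > A[j + 1]) {
                int temp = A[j + 1]; A[j + 1] = A[j]; A[j] = temp;
                done = false;
            }
        }
        for (int j = n - 1; j >= 1; j--) {          // backward pass: bubble the smallest value down
            if (A[j] < A[j - 1]) {
                int temp = A[j - 1]; A[j - 1] = A[j]; A[j] = temp;
                done = false;
            }
        }
        return done ? A : MyAlgorithm(A, n);
    }

    public static void main(String[] args) {
        for (int n = 500; n <= 4000; n *= 2) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = n - i;   // reverse sorted: a bad case
            calls = 0;
            MyAlgorithm(a, n);
            // roughly n/2 recursive calls, each doing about 2n work => ~n^2 total operations
            System.out.println("n = " + n + ", recursive calls = " + calls);
        }
    }
}
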
In your case the literal recurrence relation is something like:
T(n) = T(n) + n,
because the recursive call is made with the same n. But if we use the fact that the biggest number ends up at the end after every pass, we can approximate:
T(n) = T(n-1) + n
T(n-1) = T(n-2) + (n-1)
T(n-2) = T(n-3) + (n-2)
Substituting back:
T(n) = T(n-2) + (n-1) + n
T(n) = T(n-3) + (n-2) + (n-1) + n
T(n) = T(n-k) + kn - k(k-1)/2
If n - k = 1, then k = n - 1. Substituting that:
T(n) = T(1) + (n-1)n - (n-1)(n-2)/2,
which is of order O(n^2).
The smallest number also ends up at the beginning after each pass, so we could even have approximated
T(n) = T(n-2) + n,
and the order would still be O(n^2).
Without that approximation we can't estimate exactly when done becomes true. But in this case we can be sure that after each call the biggest number is at the end and the smallest is at the beginning, so positions 0 and n-1 never need to be touched again.
I hope this helps you understand why it is n^2.

Time complexity of n-vertex subgraph enumeration

I have an algorithm for creating a list of all possible subgraphs on P vertices through a given vertex. It's not perfect, but I think it should be working alright. The problem is I get lost when I try to calculate its time complexity.
I conjured up something like T(p) = 2^d + 2^d * (n * T(p-1) ), where d=Δ(G), p=#vertices required, n=|V|. It's really just a guess.
Can anyone help me with this?
The powerSet() algorithm used should be O(2^d) or O(d*2^d).
private void connectedGraphsOnNVertices(int n, Set<Node> connectedSoFar, Set<Node> neighbours, List<Set<Node>> graphList) {
    if (n == 1) return;
    for (Set<Node> combination : powerSet(neighbours)) {
        if (connectedSoFar.size() + combination.size() > n || combination.size() == 0) {
            continue;
        } else if (connectedSoFar.size() + combination.size() == n) {
            Set<Node> newGraph = new HashSet<Node>();
            newGraph.addAll(connectedSoFar);
            newGraph.addAll(combination);
            graphList.add(newGraph);
            continue;
        }
        connectedSoFar.addAll(combination);
        for (Node node : combination) {
            Set<Node> k = new HashSet<Node>(node.getNeighbours());
            connectedGraphsOnNVertices(n, connectedSoFar, k, graphList);
        }
        connectedSoFar.removeAll(combination);
    }
}
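
The powerSet() helper isn't shown in the question; a typical implementation (a hypothetical sketch, not necessarily what was actually used) enumerates all 2^d subsets of a d-element set, building each one in O(d) time, which is where the O(d*2^d) bound comes from:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class PowerSet {
    // Enumerates every subset of the input set: 2^d subsets, each built in O(d) time.
    static <T> List<Set<T>> powerSet(Set<T> input) {
        List<T> items = new ArrayList<T>(input);
        List<Set<T>> subsets = new ArrayList<Set<T>>();
        int d = items.size();
        for (long mask = 0; mask < (1L << d); mask++) {
            Set<T> subset = new HashSet<T>();
            for (int bit = 0; bit < d; bit++) {
                if ((mask & (1L << bit)) != 0) {
                    subset.add(items.get(bit));
                }
            }
            subsets.add(subset);
        }
        return subsets;
    }
}
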
It looks like the algorithm has a bug because after the recursive call, it is possible that nodes that appear in combination also appear in connectedSoFar, so the check that connectedSoFar.size() + combination.size() equals n seems incorrect, as it might count a node twice.
Anyway, otherwise, to analyze the algorithm: you have 2^d elements in the power set; every operation in the "else" branch takes O(n) time because connectedSoFar and combination together can't contain more than n nodes. Adding elements to connectedSoFar then takes O(n log n) time because |combination| ≤ n. The iteration over the combination's nodes happens O(n) times; within it there is an O(d) operation to construct the hash set k, and then the recursive call.
Denote then the complexity of the procedure by X(n) where n is the parameter. You have
X(n) ~ 2^d (n + n log n + n (d + X(n - 1)))
because in the recursive call you have added at least one vertex to the graph, so in practice the parameter n in the recursive call decreases virtually by at least one.
Simplify this to
X(n) ~ 2^d (n (1 + d + log n + X(n - 1)))
Because d is constant, write D = 2^d, eliminate the constant 1, and you get
X(n) ~ D n (d + log n + X(n - 1))
which you can analyze as
X(n) ~ (2^d)^n n! (d + log n)
showing that your algorithm is really a time hog :)
