Binary search recursive number of calls? - java

So I was wondering, in my book, recursive binary search is implemented as follows:
private static int bin(int[] arr, int low, int high, int target) {
    counter++; // ignore this; it was used to count how many times this function was invoked
    if (low > high) return -1;
    else {
        int mid = (low + high) / 2;
        if (target == arr[mid]) return mid;
        else if (target < arr[mid]) return bin(arr, low, mid - 1, target);
        else return bin(arr, mid + 1, high, target);
    }
}
And it says that "If n, the number of elements, is a power of 2, express n as a power of 2... Case 3: The key is not in the array, and its value lies between a[0] and a[n-1]. Here the number of comparisons to determine that the key is not in the array is equal to the exponent. There will be one fewer comparison than in the worst case."
But when I sat down and counted the number of function calls using the array {1,2,3,4,5,6,7,9} and a key of 8, the number of calls was 4. The book says the number of COMPARISONS is 3 (which excludes the 3rd line, I am guessing?), but I'm pretty sure the number of function calls is 4. I also worked through an iterative implementation of binary search (sketched below) and concluded that the number of iterations, or recursive function calls, is always floor(log2(n)) + 1.
Can someone explain what's going on here?
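For reference, the iterative version I was comparing against looks roughly like this (my own sketch, not from the book; it reuses the same static counter field, which here counts how many times the loop body runs):
private static int binIterative(int[] arr, int target) {
    int low = 0;
    int high = arr.length - 1;
    while (low <= high) {
        counter++; // counts loop iterations, analogous to counting calls in the recursive version
        int mid = (low + high) / 2;
        if (target == arr[mid]) return mid;
        else if (target < arr[mid]) high = mid - 1;
        else low = mid + 1;
    }
    return -1;
}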

Only 3 target == arr[mid] comparisons are made. On the fourth iteration the base case if(low > high) is reached, so that comparison is never made. As you quoted: "Here the number of comparisons to determine that the key is not in the array is equal to the exponent." You are correct that we are not counting the comparison statement on line 3; we are only concerned with the comparison against our target value.
Let's look at the iterations until one of the two base cases is reached.
Binary search for 8 in array {1,2,3,4,5,6,7,9}
First iteration:
low = 0
high = 7
mid = 3
arr[mid] = 4
(target == arr[mid]) == false
Second iteration:
low = 4
high = 7
mid = 5
arr[mid] = 6
(target == arr[mid]) == false
Third iteration:
low = 7
high = 7
mid = 7
arr[mid] = 7
(target == arr[mid]) == false
Fourth iteration:
low = 8
high = 7
low > high == true
Also, the time complexity is O(log n). The + 1 is considered insignificant in Big O terms and is therefore dropped. See Wikipedia's list of common time complexities for the orders of growth from fastest to slowest.

Related

What's the growth order of a "find a peak" algorithm?

Hello, I need to apply an algorithm similar to this, but the problem is that I need the complexity to be O(log n). The complexity of the code below is said to be O(log n), but from what I understand a recursive method has a growth order of O(n). So the question is: what is the growth order of the code below?
public static int findPeak(int[] array, int start, int end) {
    int index = start + (end - start) / 2;
    if (index - 1 >= 0 && array[index] < array[index - 1]) {
        return findPeak(array, start, index - 1);
    } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
        return findPeak(array, index + 1, end);
    } else {
        return array[index];
    }
}
It should be O(log n). For simplicity (and as an easy way to think about it), picture the calls as a binary tree: each call divides the input range into two halves (creating the next level of nodes). So if the input size is n, the tree has about log(n) levels, and one level corresponds to one function call.
Also note that each call makes at most one recursive call (in the if block or the else-if block, but never both). Seeing two recursive calls written in the source is what might make it feel like O(n) growth.
In each recursive branch, the range passed on is half the size of the range passed into the function. Hence, if T(n) is the time complexity of the function, we can write:
T(n) = T(n/2) + 1
The 1 accounts for the constant work (the comparisons) done in each call, and T(n/2) is for whichever branch is taken. Hence T(n) is in O(log n).
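If you want to see the logarithmic growth empirically, here is a rough sketch (my own addition, not from the question or answers): it copies findPeak, adds a call counter, and runs it on strictly increasing arrays of doubling size, so each doubling should add roughly one extra call.
class PeakCallCount {
    static int callCount = 0;

    // Same logic as findPeak above, plus a call counter.
    static int findPeakCounted(int[] array, int start, int end) {
        callCount++;
        int index = start + (end - start) / 2;
        if (index - 1 >= 0 && array[index] < array[index - 1]) {
            return findPeakCounted(array, start, index - 1);
        } else if (index + 1 <= array.length - 1 && array[index] < array[index + 1]) {
            return findPeakCounted(array, index + 1, end);
        } else {
            return array[index];
        }
    }

    public static void main(String[] args) {
        for (int n = 8; n <= 1024; n *= 2) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = i; // strictly increasing, peak at the last index
            callCount = 0;
            findPeakCounted(a, 0, n - 1);
            System.out.println(n + " elements -> " + callCount + " calls");
        }
    }
}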

Binary-search with duplicate elements in array

I want to find whether there is a single (non-duplicated) element in a list of duplicated elements.
For this code
private static int findDuplicate(int array[]) {
    int low = 0;
    int high = array.length - 1;
    while (low <= high) {
        int mid = (low + high) >>> 1;
        int midVal = array[mid];
        if (midVal == mid)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return high;
}
It finds the duplicate number, but I want to find only the single number
in the sorted array of duplicates.
For example, given this int[] input:
[1,1,2,2,3,3,4,5,5]
Output would be '4'.
Or this int[] input:
[1,1,2,2,3,4,4,5,5,6,6]
Output would be '3'.
In this int[] input:
[1,1,2,7,7,9,9]
Output would be '2'.
I'm working in Java now, but any language or pseudo-code is fine.
I know the obvious traversal at O(n) linear time, but I'm trying to see if this is possible via binary search at O(log n) time.
The elements are sorted and every duplicate appears exactly twice!
I know the way with a simple loop, but I want to do it by binary search.
Consider each pair of two consecutive elements (note that there is a stray element at the end):
(1 1) (2 2) (3 3) (4 5) (5 6) (6 7) (7 8) (8)
Here the pairs whose two elements differ are (4 5), (5 6), (6 7) and (7 8): everything from the non-duplicated element onward.
Observe that the only non-duplicated element makes its own pair and all later pairs hold two different values, while all the pairs before it hold two equal values.
So just binary search for the index of the first "different" pair.
This algorithm doesn't even require that the list is sorted, only that exactly one element appears once and every other element appears twice, in consecutive indices.
Special case: if the last (stray) element is the unique one, then all the full pairs will have equal values.
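Here is a rough Java sketch of that pair-based search (my own code; the method name findSingle and the pair-index framing are illustrative, not part of the original answer):
static int findSingle(int[] arr) {
    // Binary search over pair indices: pair p covers positions 2p and 2p+1.
    // Pairs before the single element are intact (two equal values);
    // the pair containing it and every later pair are "broken".
    int lo = 0;
    int hi = arr.length / 2;                   // index of the stray last "pair"
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (2 * mid + 1 < arr.length && arr[2 * mid] == arr[2 * mid + 1]) {
            lo = mid + 1;                      // pair intact: the single element is further right
        } else {
            hi = mid;                          // broken pair: the single element is here or to the left
        }
    }
    return arr[2 * lo];                        // the first broken pair starts at the single element
}
For example, findSingle(new int[]{1, 1, 2, 7, 7, 9, 9}) returns 2, and findSingle(new int[]{1, 1, 2, 2, 3, 3, 4, 5, 5}) returns 4.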
Every pair of equal values will look like this in terms of indices:
(0,1),
(2,3),
(4,5)
(6,7)
etc. You can clearly see that if the index is even, you check the next element for equality, and if the index is odd, you check the previous element.
If this symmetry is broken, move towards the left side; if everything is fine, keep moving right.
Pseudocode (not tested):
low = 0, high = arr.length - 1
while low <= high:
    mid = (low + high) / 2
    if mid == 0 || mid == arr.length - 1 || (arr[mid] != arr[mid-1] and arr[mid] != arr[mid+1]): // corner index, or different from both neighbours: done
        return arr[mid]
    if mid % 2 == 0:
        if arr[mid + 1] != arr[mid]: // check the next index, since mid is even
            high = mid
        else:
            low = mid + 2
    else:
        if arr[mid - 1] != arr[mid]: // check the previous index, since mid is odd
            high = mid
        else:
            low = mid + 1
return -1
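For completeness, here is a lightly tested Java translation of the pseudocode above (my own translation, not part of the original answer):
static int findSingleElement(int[] arr) {
    int low = 0;
    int high = arr.length - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        // corner index, or an element different from both neighbours: this is the single one
        if (mid == 0 || mid == arr.length - 1
                || (arr[mid] != arr[mid - 1] && arr[mid] != arr[mid + 1])) {
            return arr[mid];
        }
        if (mid % 2 == 0) {
            if (arr[mid + 1] != arr[mid]) high = mid;  // symmetry broken: look at mid and to its left
            else low = mid + 2;                        // pair (mid, mid+1) intact: look right
        } else {
            if (arr[mid - 1] != arr[mid]) high = mid;  // symmetry broken: look at mid and to its left
            else low = mid + 1;                        // pair (mid-1, mid) intact: look right
        }
    }
    return -1; // not reached for valid input
}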

How to calculate Big O of Dynamic programming (Memoization) algorithm

How would I go about calculating the Big O of a DP algorithm? I've come to realize that my usual tricks for extracting the Big O don't always work. For example, if I were evaluating the non-memoized version of the algorithm below (removing the cache mechanism), I would look at the number of times the recursive method calls itself, in this case 3, and raise that to n, giving O(3^n). With DP that isn't right at all, because the recursion doesn't go as deep. My intuition tells me that the Big O of the DP solution would be O(n^3). How would we verbally explain how we came up with this answer? More importantly, what technique can be used to find the Big O of similar problems? Since it is DP, I'm sure the number of subproblems is important; how do we calculate the number of subproblems?
public class StairCase {

    public int getPossibleStepCombination(int n) {
        Integer[] memo = new Integer[n + 1];
        return getNumOfStepCombos(n, memo);
    }

    private int getNumOfStepCombos(int n, Integer[] memo) {
        if (n < 0) return 0;
        if (n == 0) return 1;
        if (memo[n] != null) return memo[n];
        memo[n] = getNumOfStepCombos(n - 1, memo) + getNumOfStepCombos(n - 2, memo) + getNumOfStepCombos(n - 3, memo);
        return memo[n];
    }
}
The first three lines of getNumOfStepCombos do nothing but compare int values, access an array by index, and check whether an Integer reference is null. Those operations are all O(1), so the only question is how many times the method is called recursively.
This question is very complicated, so I usually cheat: I just use a counter to see what's going on. (I've made your methods static for this, but in general you should avoid static mutable state wherever possible.)
static int counter = 0;

public static int getPossibleStepCombination(int n) {
    Integer[] memo = new Integer[n + 1];
    return getNumOfStepCombos(n, memo);
}

private static int getNumOfStepCombos(int n, Integer[] memo) {
    counter++;
    if (n < 0) return 0;
    if (n == 0) return 1;
    if (memo[n] != null) return memo[n];
    memo[n] = getNumOfStepCombos(n - 1, memo) + getNumOfStepCombos(n - 2, memo) + getNumOfStepCombos(n - 3, memo);
    return memo[n];
}

public static void main(String[] args) {
    for (int i = 0; i < 10; i++) {
        counter = 0;
        getPossibleStepCombination(i);
        System.out.print(i + " => " + counter + ", ");
    }
}
This program prints
0 => 1, 1 => 4, 2 => 7, 3 => 10, 4 => 13, 5 => 16, 6 => 19, 7 => 22, 8 => 25, 9 => 28,
so it looks like the final counter values are given by 3n + 1.
In a more complicated example, I might not be able to spot the pattern, so I enter the first few numbers (e.g. 1, 4, 7, 10, 13, 16) into the Online Encyclopedia of Integer Sequences and I usually get taken to a page containing a simple formula for the pattern.
Once you've cheated in this way to find out the rule, you can set about understanding why the rule works.
Here's how I understand where 3n + 1 comes from. For each value of n you only have to do the line
memo[n] = getNumOfStepCombos(n - 1, memo) + getNumOfStepCombos(n - 2, memo) + getNumOfStepCombos(n-3,memo);
exactly once. This is because we are recording the results and only doing this line if the answer has not already been calculated.
Therefore, when we start with n == 5 we run that line exactly 5 times; once with n == 5, once with n == 4, once with n == 3, once with n == 2 and once with n == 1. So that's 3 * 5 == 15 times the method getNumOfStepCombos gets called from itself. The method also gets called once from outside itself (from getPossibleStepCombination), so the total number of calls is 3n + 1.
Therefore this is an O(n) algorithm.
If an algorithm has lines that are not O(1) this counter method cannot be used directly, but you can often adapt the approach.
Paul's answer is technically not wrong, but it is a bit misleading. We should calculate Big O by how the function responds to changes in the size of the input, i.e. the number of digits (or bits) used to write n. Paul's answer of O(n) makes the complexity appear linear, when it is really exponential in the number of digits needed to represent n. For example, n = 10 needs ~30 calculations and has 2 decimal digits; n = 100 needs ~300 calculations and has 3 digits; n = 1000 needs ~3000 calculations and has 4 digits.
I believe your function's complexity would be O(2^m), where m is the number of bits needed to represent n. I referred to https://www.quora.com/Why-is-the-Knapsack-problem-NP-complete-even-when-it-has-complexity-O-nW for a lot of my answer.

Convert algorithm from O(n) to O(1)

Basically, what I want is: if a number n is divisible by b exactly a times (the count), then find that count a and divide n by b that many times.
That is,
count = 0;
while (n % b == 0) {
    n = n / b;
    count = count + 1;
}
How can I optimize this so that everything is obtained in one step?
You can do it in O(log a) by applying binary search on a sorted conceptual "list" to find the last element that equals 1.
The list is metaphoric; each element in it is calculated on the fly when queried, by a simple check:
list[i] = 1 if n % b^i == 0
          0 otherwise
You can first find the range of possible values of a using repeated squaring:
curr = b
tempA = 1
while n % curr == 0:
    curr = curr * curr
    tempA = tempA * 2
Then run the binary search on the range [tempA/2, tempA]. This range has size about a/2, so finding the last "element" for which the symbolic list holds 1 takes O(log a) multiplications.
Code + Demo:
private static int specialBinarySearch(int n, int b, int aLow, int aHigh) {
    if (aHigh == aLow) return aHigh;
    int mid = (aHigh - aLow) / 2 + aLow;
    // the pow call can be optimized to remember pre-calculated values and reuse them
    int curr = (int) Math.round(Math.pow(b, mid));
    if (n % curr == 0) {                                  // 2nd half, or found it:
        if (n % (curr * b) != 0) return mid;              // found it
        return specialBinarySearch(n, b, mid + 1, aHigh); // 2nd half
    } else {
        return specialBinarySearch(n, b, aLow, mid);      // first half
    }
}

public static int findA(int n, int b) {
    int curr = b;
    int tempA = 1;
    while (n % curr == 0) {
        curr = curr * curr;
        tempA = tempA * 2;
    }
    return specialBinarySearch(n, b, tempA / 2, tempA);
}

public static void main(String args[]) {
    System.out.println(findA(62, 2));   // 1
    System.out.println(findA(1024, 2)); // 10
    System.out.println(findA(1, 2));    // 0
    System.out.println(findA(100, 2));  // 2
    System.out.println(findA(6804, 3)); // 5
}
You cannot solve this in O(1), but there is a different kind of approach to the problem if you start using a numeral system where b is the base.
For example, if we have a number like 154200 and b is 10, we know the answer is 2 immediately, because we can simply count how many zeros there are on the right-hand side.
Similarly, if b is 2, you simply count how many zeros there are on the right side of the binary representation.
If b is 5, we have to use the base-5 representation, where a number like 8 is written as 13. Again we know that the answer a is zero when n = 8 and b = 5, because there are no zeros on the right-hand side.
This won't necessarily give you speed gains, except possibly when b is a power of two, where you can use bitwise logic to deduce the answer; but it gives you a different way of looking at the problem, lexically by digits instead of arithmetically.
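For the power-of-two case specifically, a minimal sketch of that bitwise shortcut (my own illustration; Integer.numberOfTrailingZeros is the standard JDK method) could look like this:
static int countFactorsOfTwo(int n) {
    // The number of trailing zero bits in the binary representation of n
    // equals the largest a such that 2^a divides n.
    if (n == 0) return 0; // guard: numberOfTrailingZeros(0) returns 32, which is not meaningful here
    return Integer.numberOfTrailingZeros(n);
}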

Binary search 1/4 modification

My task is to write a 1/4 - 3/4 binary search modification, in which the first element compared when searching for an item in the list is the 'pivot' element at a distance of 1/4th from one end of the list (assuming the end chosen is the start of the 'remaining' list). If there is no match (the 'pivot' element is not equal to the search key) and the part of the list that should be examined further is the 1/4th part, continue with the same strategy. Whenever the part of the list that should be examined further is the 3/4th part, switch to a binary search once and then return to the 1/4th-3/4th strategy.
My code is here, but it doesn't work, and I don't even know if I am doing it right:
public static int ThreeFour(int[] Array, int item) {
    int counter = 0;
    int high = Array.length - 1;
    int low = 0;
    int pivot = 0;
    boolean split = true;
    boolean last = true;
    while (high >= low) {
        if (split) {
            pivot = (high + low) / 4;
            last = true;
        } else {
            pivot = (high + low) / 2;
            split = true;
            last = false;
        }
        if (Array[pivot] == item) {
            counter++;
            System.out.println("Pivot" + pivot);
            return counter;
        }
        if (Array[pivot] < item) {
            low = pivot + 1;
            counter++;
        }
        if (Array[pivot] > item) {
            high = pivot - 1;
            counter++;
            if (last)
                split = false;
        }
    }
    return 0;
}
It doesn't work. Maybe there is a simpler strategy for this? The hardest part is making it remember that it has already split in half once :/
Your formula for the pivot is wrong for the 1/4 - 3/4 split. If you want to split an interval between low and high at some point c with 0 <= c <= 1, you get:
pivot = low + c * (high - low)
      = (1 - c) * low + c * high
This will give you low for c == 0, high for c == 1, and for your 1/4 - 3/4 split:
pivot = 0.75 * low + 0.25 * high
or, with integer arithmetic:
pivot = (3 * low + high) / 4
In particular, the coefficients of low and high should sum to 1.
I also think that your function has a logic error: you return the step counter, which has no meaning with respect to the array. You should return the pivot, i.e. the array index at which the item is found. That also means that you can't return 0 on failure, because 0 is a valid array index; return an illegal index like -1 to indicate that the item wasn't found.
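Putting both fixes together, one possible reading of the assignment could look roughly like this (my own sketch, not a verified solution; the flag handling is just one interpretation of "switch to a binary search once"):
public static int threeFourSearch(int[] arr, int item) {
    int low = 0;
    int high = arr.length - 1;
    boolean quarterRound = true;                     // whether this round uses the 1/4 pivot
    while (low <= high) {
        boolean usedQuarter = quarterRound;
        int pivot = usedQuarter ? (3 * low + high) / 4   // 1/4 of the way up from low
                                : (low + high) / 2;      // plain binary split
        quarterRound = true;                         // by default, go back to the 1/4 strategy next round
        if (arr[pivot] == item) {
            return pivot;                            // found: return the index, not a step count
        }
        if (arr[pivot] < item) {
            low = pivot + 1;
            if (usedQuarter) {
                quarterRound = false;                // remaining part is the 3/4 side: one binary split next
            }
        } else {
            high = pivot - 1;                        // remaining part is the 1/4 side: keep the strategy
        }
    }
    return -1;                                       // not found
}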
