Can someone please explain how the recursion works? I tried to figure it out by writing down low, mid, and high at each recursive call, but I seem to be making a mistake somewhere. Can someone please show me the steps? Also, I don't understand where the values are returned to in
if (leftmax > rightmax) {
    return leftmax;
} else {
    return rightmax;
}
Here's the code:
public class Maximum {
    public static int max(int[] a, int low, int high) {
        int mid, leftmax, rightmax;
        if (low == high) {
            return a[low];
        } else {
            mid = (low + high) / 2;
            leftmax = max(a, low, mid);
            rightmax = max(a, mid + 1, high);
            if (leftmax > rightmax) {
                return leftmax;
            } else {
                return rightmax;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {32, 8, 12};
        System.out.println(max(a, 0, 2));
    }
}
So it appears this is the famous Divide and Conquer algorithm for finding an element, in this case, the maximum element.
The method starts with this input:
max(int[] a, int low, int high)
a[] = { 32, 8, 12 }; //The target array. You want to find the maximum element in it.
low = 0; //The index of the array where you will start searching
high = 2; //The index of the array where you will stop searching
So, basically, low and high define a "range"; you will search for the maximum element within this range. If you want to search the whole array, specify the range from index 0 to the maximum possible index, which is the length of the array minus 1 (remember, indexes begin at zero, so you need to subtract one).
The first condition in the method checks whether you gave a range with one element. In this case, low and high will be the same, because they refer to the same index of the array. Therefore, you just return the element at that index (you don't have to do any searching; if you are looking for the maximum element in a one-element range, the maximum is that single element). Returning a[low] or a[high] is the same.
If you gave a range with more than one element, you go into the else section.
Here, you get the middle index of the range.
So, if you specified a range from index 3 (low) to index 7 (high), then the middle index would be 5 ((low + high) / 2).
You then partition the range into two ranges, the left range and the right range; they both come from the original range, which was split in two.
You then perform all the above operations on each of those two ranges until, at some point, you have split so many ranges that you end up with a one-element range and return its element.
Let's stop here for a second.
Look at the code: we store the return values for the split ranges in leftmax and rightmax accordingly, but since you are calling the same method, each split range will also have its own leftmax and rightmax and its own split ranges.
It's like you are diving deeper and deeper, the surface being the initial execution of the method.
The last level of depth is reached when a call receives a one-element range (the result of a previous split). In this case, the method stops calling itself, because it actually returns a value. Who catches this returned value? The level above, which can itself be very deep. That level captures the return values from the level below, does the comparison in the code (returning the larger element), and passes the maximum of those two numbers up to the level above it, which does the same, and so on, until you reach the surface level with the maximums of the two halves of the original array. The surface level then checks which of those two numbers is larger and returns it to you (you are the highest level!).
I hope I explained the whole process clearly and helped you!!
1 max(a, 0, 2): low=0, high=2, mid=1
2   max(a, 0, 1): low=0, high=1, mid=0
3     max(a, 0, 0): low==high --> return a[0] = 32 (leftmax of call 2)
4     max(a, 1, 1): low==high --> return a[1] = 8 (rightmax of call 2)
    call 2: leftmax > rightmax --> return leftmax = 32 (this becomes leftmax of call 1)
5   max(a, 2, 2): low==high --> return a[2] = 12 (rightmax of call 1)
  call 1: leftmax > rightmax --> return leftmax = 32
The max() function's job is:
If low and high are equal, return the value pointed to by the index
Otherwise, partition the section of the array we're looking at into two segments and call max() recursively on both halves. Then return the larger of the values returned by the two calls.
We would be given an array of integers and a value k. We need to find the total number of sub-arrays whose sum equals k.
I found some interesting code online (on Leetcode) which is as follows:
import java.util.HashMap;
import java.util.Map;

public class Solution {
    public int subarraySum(int[] nums, int k) {
        int sum = 0, result = 0;
        Map<Integer, Integer> preSum = new HashMap<>();
        preSum.put(0, 1);
        for (int i = 0; i < nums.length; i++) {
            sum += nums[i];
            if (preSum.containsKey(sum - k)) {
                result += preSum.get(sum - k);
            }
            preSum.put(sum, preSum.getOrDefault(sum, 0) + 1);
        }
        return result;
    }
}
To understand it, I walked through some specific examples like [1,1,1,1,1] with k=3 and [1,2,3,0,3,2,6] with k=6. While the code works perfectly in both cases, I fail to follow how it actually computes the output.
I have two specific points of confusion:
1) Why does the code continuously add the values in the array, without ever zeroing sum out? For example, in the case of [1,1,1,1,1] with k=3, once sum=3, don't we need to reset sum to zero? Doesn't not resetting sum interfere with finding later subarrays?
2) Shouldn't we simply do result++ when we find a subarray of sum k? Why do we add preSum.get(sum-k) instead?
Let's handle your first point of confusion first:
The reason the code keeps summing the array and doesn't reset sum is that we are saving the running sums in preSum (previous sums) as we go. Then, any time we get to a point where sum - k equals a previously saved sum (say, the running sum at index i), we know that the elements after index i up to our current index sum to exactly k.
For example, with the array [1, 2, 3, 0, 3, 2, 6] and k = 6, say our current index is 4, where the running sum is 9. The running sum 3 was saved back at index 1, and since 9 - 3 = 6, the sum between indexes 2 and 4 (inclusive) is 6.
Another way to think about this is to see that discarding [1, 2] from the front of the array (while standing at index 4) leaves us with the subarray [3, 0, 3], whose sum is 6, for the same reason as above.
Using this way of thinking, we can say we want to discard elements from the front of the array until we are left with a subarray of sum k. We could do this by saying, for each index, "discard just 1, then discard 1+2, then discard 1+2+3, etc." (these numbers are from our example) until we find a subarray of sum k (k=6 in our example).
That gives a perfectly valid solution, but notice we would be doing this at every index of our array, and thus summing the same numbers over and over. A way to save computation would be to save these sums for later use. Even better, we already sum these same numbers to get our current sum, so we can just save that total as we go.
To find a subarray, we could look through our saved sums, subtracting each one and testing whether what we are left with is k. It is a bit annoying to subtract every saved sum, so note that if sum - x = k, then sum - k = x. This way we can simply check whether x = sum - k is a saved sum and, if it is, we know we have found a subarray of sum k. A hash map makes this lookup efficient.
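To make this concrete, here is a small standalone trace of the running sums for the example [1, 2, 3, 0, 3, 2, 6] with k = 6 (the class name PrefixSumDemo and the printout are mine, purely for illustration):

public class PrefixSumDemo {
    public static void main(String[] args) {
        int[] nums = {1, 2, 3, 0, 3, 2, 6};
        int sum = 0;
        for (int i = 0; i < nums.length; i++) {
            sum += nums[i];
            System.out.println("index " + i + ", running sum = " + sum);
        }
        // Prints the running sums 1, 3, 6, 6, 9, 11, 17.
        // At index 4 the running sum is 9; 9 - 6 = 3 was the running sum back at index 1,
        // so the elements after index 1 up to index 4 ([3, 0, 3]) sum to exactly 6.
    }
}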
Now for your second point of confusion:
Most of the time you are right, upon finding an appropriate subarray we could just do result++. Almost always, the values in preSum will be 1, so result+=preSum.get(sum-k) will be equivalent to result+=1, or result++.
The only time it isn't is when preSum.put is called on a sum that has been reached before. How can we get back to a sum we already had? The only way is with either negative numbers, which cancel out previous numbers, or with zero, which doesn't affect the sum at all.
Basically, we get back to a previous sum when a subarray's sum is equal to 0. Two examples of such subarrays are [2,-2] or the trivial [0]. With such a subarray, when we find a later, adjoining subarray with sum k, we need to add more than 1 to result as we have found more than one new subarray, one with the zero-sum subarray (sum=k+0) and one without it (sum=k).
This is the reason for that +1 in the preSum.put as well. Every time we reach the same sum again, we have found another zero-sum subarray. With two zero-sum subarrays, finding a new adjoining subarray with sum=k actually gives 3 subarrays: the new subarray (sum=k), the new subarray plus the first zero-sum (sum=k+0), and the original with both zero-sums (sum=k+0+0). This logic holds for higher numbers of zero-sum subarrays as well.
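As a quick check of this counting behaviour, using the Solution class from the question (the example numbers are mine): an array with leading zeros makes the counts stored in preSum exceed 1.

Solution s = new Solution();
// nums = [0, 0, 1], k = 1: the subarrays [1], [0, 1], and [0, 0, 1] all sum to 1.
// At the last index the running sum is 1, and preSum.get(1 - 1) = preSum.get(0) = 3
// (recorded once at initialization and once per leading zero), so result becomes 3.
System.out.println(s.subarraySum(new int[]{0, 0, 1}, 1)); // prints 3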
Not so long ago I was confronted with an algorithmic problem.
I needed to find whether a value stored in an array was at its "place".
An example will be easier to understand.
Let's take an Array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place).
On the contrary, if we take an Array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever.
The array is always sorted !
I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves?
Can somebody help me on this one ?
A Java algorithm would be best, but pseudocode would also help me a lot!
Here is an algorithm (based on binary search) to find all matching indices that has a best-case complexity of O(log(n)) and a worst case complexity of O(n):
1- Check the element at position m = array.length / 2
2- if the value array[m] is strictly smaller than m, you can forget about the left half of the array (from index 0 to index m-1), and apply recursively to the right half.
3- if array[m]==m, add one to the counter and apply recursively to both halves
4- if array[m]>m, forget about the right half of the array and apply recursively to the left half.
Using threads can accelerate things here. I assume that there are no repetitions in the array.
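Here is a minimal sketch of that recursion (the method name countMatches is mine; it assumes a sorted array with no repeated values and, as in the steps above, a 0-indexed match, i.e. array[i] == i):

static int countMatches(int[] a, int lo, int hi) {
    if (lo > hi) return 0;
    int m = (lo + hi) / 2;
    if (a[m] < m) return countMatches(a, m + 1, hi); // step 2: the left half cannot contain a match
    if (a[m] > m) return countMatches(a, lo, m - 1); // step 4: the right half cannot contain a match
    // step 3: a[m] == m, count it and recurse into both halves
    return 1 + countMatches(a, lo, m - 1) + countMatches(a, m + 1, hi);
}

The initial call would be countMatches(array, 0, array.length - 1).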
Since there are no duplicates, you can use the fact that the function f(x) = A[x] - x is monotonic (non-decreasing) and apply binary search to solve the problem in O(log n) worst-case time.
You want to find a point where A[x] - (x + 1) is zero, the offset of 1 accounting for the array being 0-indexed while the "place" is 1-indexed. This code should work:
boolean binarySearch(int[] data, int size) {
    int low = 0;
    int high = size - 1;
    while (high >= low) {
        int middle = (low + high) / 2;
        if (data[middle] - 1 == middle) {
            // data[middle] == middle + 1: the element is at its place
            return true;
        }
        if (data[middle] - 1 < middle) {
            low = middle + 1;  // everything to the left is too small
        } else {
            high = middle - 1; // everything to the right is too large
        }
    }
    return false;
}
Watch out for the fact that arrays in Java are 0-indexed; that is why I subtract 1 from the array value before comparing it to the index.
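For example, on the arrays from the question (this assumes the method is reachable from where you call it, e.g. declared static):

int[] A = {-10, -3, 3, 5, 7};
int[] B = {5, 7, 9, 10};
System.out.println(binarySearch(A, A.length)); // true: 3 sits at the 3rd place (index 2)
System.out.println(binarySearch(B, B.length)); // false: no element is at its place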
If you want to find the first number in the array that is at its own place, you just have to iterate over the array:
static int find_in_place(int[] a) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == i + 1) {
            return a[i];
        }
    }
    return 0;
}
It has a complexity of O(n) and an average cost of n/2 comparisons.
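For example, with the two arrays from the question:

System.out.println(find_in_place(new int[]{-10, -3, 3, 5, 7})); // prints 3
System.out.println(find_in_place(new int[]{5, 7, 9, 10}));      // prints 0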
You can skip iterating entirely when no such element can exist (assuming the values are distinct integers) by adding a special condition:
if (a[0] > 1 || a[a.length - 1] < a.length) {
    // no element can be at its place, so don't iterate through the array
    return false;
} else {
    // make a loop here
}
Using binary search (or a similar algorithm) you could get better than O(n). Since the array is sorted, we can make the following assumptions:
if the value at index x is smaller than its place (a[x] <= x, i.e. a[x] < x + 1), you know that all previous values must also be smaller than their places (because no duplicates are allowed)
if a[x] > x + 1, all following values must be greater than their places (again, no duplicates allowed).
Using that, you can take a binary approach: pick the center value, check it against its index, and discard the left or right half if it matches one of the conditions above. Of course, you stop when a[x] == x + 1.
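A minimal sketch of this binary approach, assuming distinct values and the question's 1-indexed "place" convention (a[i] == i + 1 is a match); the method name findInPlace is mine:

static int findInPlace(int[] a) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (a[mid] == mid + 1) return a[mid]; // element at its place
        if (a[mid] < mid + 1) lo = mid + 1;   // everything to the left is too small
        else hi = mid - 1;                    // everything to the right is too large
    }
    return 0; // no element is at its place
}

For the question's examples, findInPlace(new int[]{-10, -3, 3, 5, 7}) returns 3 and findInPlace(new int[]{5, 7, 9, 10}) returns 0.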
Simply do a binary search for 0, using as the comparison key the value in the array minus its (1-based) place in the array. O(log n)
I had an interview and there was the following question:
Find unique numbers from a sorted array in less than O(n) time.
Ex: 1 1 1 5 5 5 9 10 10
Output: 1 5 9 10
I gave a solution, but it was O(n).
Edit: The sorted array's size is approximately 20 billion, and there are approximately 1000 unique numbers.
Divide and conquer:
look at the first and last element of a sorted sequence (the initial sequence is data[0]..data[data.length-1]).
If both are equal, the sequence contains only one distinct value, which is the first element (no matter how long the sequence is).
If they are different, divide the sequence in two and repeat for each subsequence.
Solves in O(log(n)) in the average case, and O(n) only in the worst case (when each element is different).
Java code:
import java.util.LinkedList;
import java.util.List;

public static List<Integer> findUniqueNumbers(int[] data) {
    List<Integer> result = new LinkedList<Integer>();
    findUniqueNumbers(data, 0, data.length - 1, result, false);
    return result;
}

private static void findUniqueNumbers(int[] data, int i1, int i2, List<Integer> result, boolean skipFirst) {
    int a = data[i1];
    int b = data[i2];
    // homogeneous sequence a...a
    if (a == b) {
        if (!skipFirst) {
            result.add(a);
        }
    } else {
        // divide & conquer
        int i3 = (i1 + i2) / 2;
        findUniqueNumbers(data, i1, i3, result, skipFirst);
        findUniqueNumbers(data, i3 + 1, i2, result, data[i3] == data[i3 + 1]);
    }
}
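For example, calling it on the sample data from the question (assuming the two methods above sit in a class from which findUniqueNumbers can be called):

int[] data = {1, 1, 1, 5, 5, 5, 9, 10, 10};
System.out.println(findUniqueNumbers(data)); // [1, 5, 9, 10]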
I don't think it can be done in less than O(n). Take the case where the array contains 1 2 3 4 5: in order to get the correct output, each element of the array would have to be looked at, hence O(n).
If your sorted array of size n has m distinct elements, you can do it in O(m log n).
Note that this is going to be efficient when m << n (e.g. m=2 and n=100).
Algorithm:
Initialization: Current element y = first element x[0]
Step 1: Do a binary search for the last occurrence of y in x (this can be done in O(log(n)) time). Let its index be i
Step 2: y = x[i+1] and go to step 1
Edit: In cases where m = O(n), this algorithm performs badly. To alleviate that, you can run it in parallel with the regular O(n) algorithm: the meta-algorithm consists of my algorithm and the O(n) algorithm running in parallel, and it stops when either of the two completes.
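Here is a minimal sketch of this jump-to-the-last-occurrence idea (the class and method names are mine; it assumes a sorted int array):

import java.util.ArrayList;
import java.util.List;

public class UniqueByJumps {
    public static List<Integer> uniqueValues(int[] x) {
        List<Integer> result = new ArrayList<>();
        int i = 0;
        while (i < x.length) {
            int y = x[i];
            result.add(y);
            // Step 1: binary search for the last occurrence of y; Step 2: move just past it.
            i = lastOccurrence(x, y, i, x.length - 1) + 1;
        }
        return result;
    }

    // Binary search for the last index in [lo, hi] whose value equals y
    // (y is known to occur at index lo).
    private static int lastOccurrence(int[] x, int y, int lo, int hi) {
        int last = lo;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (x[mid] <= y) {
                last = mid;
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return last;
    }

    public static void main(String[] args) {
        int[] data = {1, 1, 1, 5, 5, 5, 9, 10, 10};
        System.out.println(uniqueValues(data)); // [1, 5, 9, 10]
    }
}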
Since the data consists of integers, there are a finite number of unique values that can occur between any two values. So, start by looking at the first and last value in the array. If a[length-1] - a[0] < length - 1, there will be some repeating values. Put a[0] and a[length-1] into some constant-access-time container like a hash set. If the two values are equal, you know that there is only one unique value in the array and you are done. You know that the array is sorted. So, if the two values are different, you can look at the middle element now. If the middle element is already in the set of values, you know that you can skip the whole left part of the array and only analyze the right part recursively. Otherwise, analyze both left and right parts recursively.
Depending on the data in the array you will be able to get the set of all unique values in a different number of operations. You get them in constant time O(1) if all the values are the same since you will know it after only checking the first and last element. If there are "relatively few" unique values, your complexity will be close to O(log N) because after each partition you will "quite often" be able to throw away at least one half of the analyzed sub-array. If the values are all unique and a[length-1] - a[0] = length - 1, you can also "define" the set in constant time because they have to be consecutive numbers from a[0] to a[length-1]. However, in order to actually list them, you will have to output each number, and there are N of them.
Perhaps someone can provide a more formal analysis, but my estimate is that this algorithm is roughly linear in the number of unique values rather than the size of the array. This means that if there are few unique values, you can get them in few operations even for a huge array (e.g. in constant time, regardless of array size, if there is only one unique value). Since the number of unique values is no greater than the size of the array, I claim that this makes this algorithm "better than O(N)" (or, strictly: "not worse than O(N) and better in many cases").
import java.util.*;

/**
 * Remove duplicates from a sorted array in average O(log(n)), worst case O(n).
 * @author XXX
 */
public class UniqueValue {
    public static void main(String[] args) {
        int[] test = {-1, -1, -1, -1, 0, 0, 0, 0, 2, 3, 4, 5, 5, 6, 7, 8};
        UniqueValue u = new UniqueValue();
        System.out.println(u.getUniqueValues(test, 0, test.length - 1));
    }

    // i must be the start index, j must be the end index
    public List<Integer> getUniqueValues(int[] array, int i, int j) {
        if (array == null || array.length == 0) {
            return new ArrayList<Integer>();
        }
        List<Integer> result = new ArrayList<>();
        if (array[i] == array[j]) {
            result.add(array[i]);
        } else {
            int mid = (i + j) / 2;
            result.addAll(getUniqueValues(array, i, mid));
            // advance mid past the remaining duplicates of array[mid] so the same
            // value is not reported again from the right half
            while (mid < j && array[mid] == array[++mid]);
            if (array[(i + j) / 2] != array[mid]) {
                result.addAll(getUniqueValues(array, mid, j));
            }
        }
        return result;
    }
}
I have to code a recursive method that iterates through a linked list and returns the number of integers that are positive. Here is the question:
The method countPos below must be a recursive method that takes a Node head
as its argument, goes down the list headed by head, and counts the number of nodes which have a positive data field.
The code I have works; however, I don't understand how it works.
public int countPos(Node head) {
    int count = 0;
    if (head == null) { return count; }
    if (head.data > 0) {
        count++;
        return count + countPos(head.next);
    } else {
        return count + countPos(head.next);
    }
}
The problem I'm having is I don't understand how count doesn't get set back to 0 every time the method is called. For some reason the statement int count = 0; is ignored the next time the method gets called. Is this because I'm returning count also? Any explanation would be greatly appreciated.
Thanks.
DON'T begin by tracing execution or debugging. The power of recursion is that it lets you reason about complicated programs with simple logic.
Your code works by chance. It reflects that whoever wrote it (you?) doesn't understand how recursion solves problems. It's more complex than necessary.
To exploit recursion, take the problem at hand and:
Define the function interface.
Split the problem into parts, at least one of which is a smaller version of the same problem.
Solve that (or those) smaller version(s) by calling the function interface itself.
Find the "base case" or cases that are solutions to very small instances of the same problem.
With all this done, the pseudocode for most recursive algorithms is:
function foo(args)
    if args describe a base case
        return the base case answer
    solve the smaller problem or problems by calling foo with
        args that describe the smaller problem!
    use the smaller problem solution(s) to get the answer for this set of args
    return that answer
end
Let's apply this to your case:
PROBLEM: Count the number of positive items in a list.
Define the function interface: int countPos(Node head).
Split the problem up into parts: Get the number of positives in the list remaining after the head, then add one if the head is positive and nothing if the head is zero or negative.
The smaller version of the problem is finding the number of positives in the list with head removed: countPos(head.next).
Find the base case: The empty list has zero positives.
Put this all together:
int countPos(Node head) {
    // Take care of the base case first.
    if (head == null) return 0;
    // Solve the smaller problem.
    int positiveCountWithoutHead = countPos(head.next);
    // Now the logic in step 2. Return either the positive count or 1 + the positive count:
    return head.data > 0 ? positiveCountWithoutHead + 1 : positiveCountWithoutHead;
}
You might learn a little bit by tracing execution of something like this one time. But trying to write recursive code by reasoning about what's going on with the stack is a dead end. To be successful, you must think at a higher level.
Let's try one that doesn't quite follow the standard template: Recursive binary search. We have an array a of integers and are trying to find the index of x if it exists in the array and return -1 if not.
PROBLEM: Search the array between positions i0 and i1-1.
(The above is an example of how you must sometimes "specialize" the problem by adding parameters so that smaller subproblems can be described in the recursive call or calls. Here we are adding the new parameters i0 and i1 so that we can specify a subarray of a. Knowing how and when to do this is a matter of practice. The parameters needed can vary with language features.)
Function interface: int search(int [] a, int x, int i0, int i1)
Split the problem in parts: We'll pick a "middle" element index: mid = (i0 + i1) / 2. Then the subproblem is either searching the first half of the array up to but excluding mid or the second half of the array starting after mid and continuing to the end.
The calls are search(a, x, i0, mid) and search(a, x, mid + 1, i1).
The base cases are that 1) if i0 >= i1, there are no elements to search, so return -1 and 2) if we have a[mid] == x, then we've found x and can return mid.
Putting this all together
int search(int[] a, int x, int i0, int i1) {
    // Take care of one base case.
    if (i0 >= i1) return -1;
    // Set up mid and take care of the other base case.
    int mid = (i0 + i1) / 2;
    if (a[mid] == x) return mid;
    // Solve one or the other subproblem. They're both smaller!
    return x < a[mid] ? search(a, x, i0, mid) : search(a, x, mid + 1, i1);
}
And to start the search:
int search(int [] a, int x) { return search(a, x, 0, a.length); }
Each time you call countPos(), a new invocation of that function starts. This invocation starts from a clean slate, meaning all of its local variables (count) are its own, and no other "copy" of countPos can see or modify them.
The only state passed between these "copies" of countPos is whatever is passed in as a parameter (Node head).
So here's a rough workflow, assuming the list [1, -2, 3]:
countPos starts and, since 1 is positive, says the total number of positive nodes is equal to 1 + whatever the next call returns.
The next function says the number of positive nodes is equal to 0 + whatever the next function returns.
The next function says the number of positive nodes is equal to 1 + whatever the next function returns
The next function sees that head == null and so returns 0.
Now each recursive function returns one after another to the original function that called it, with the total number of positive nodes "snowballing" as we return.
The total number returned in the end will be 2.
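As a quick check of that trace (this assumes a minimal Node class with int data and Node next fields, like the one the question uses; the constructor is my own addition for brevity):

class Node {
    int data;
    Node next;
    Node(int data, Node next) { this.data = data; this.next = next; }
}

// Build the list [1, -2, 3] from the walkthrough and count its positive nodes.
Node list = new Node(1, new Node(-2, new Node(3, null)));
System.out.println(countPos(list)); // prints 2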
I'm taking an online class, so there isn't any help from the teachers or other classmates. Our assignment is to find the max value and its index in an array of random numbers. We need to do it in two ways: a regular loop (brute force) and divide and conquer. In the divide and conquer version we need to split the array into two smaller arrays, find the max of both, and then merge.
I got the brute force to work, and I got the divide and conquer to find the max as well. But I can't seem to get the max of the two smaller arrays and merge the two. We also need to count how many comparisons are made by both methods and print the output.
Here's what I have so far:
public class MinMaxValues {
    // Find maximum (largest) value in array using Divide and Conquer
    public static int findMax(int[] numbers, int left, int right) {
        int middle;
        int max_l, max_r, max_m;
        if (left == right) {
            // Base case: only one element, solved easily...
            return numbers[left];
        } else {
            // Solve smaller problems
            middle = (left + right) / 2;                 // Divide into 2 halves
            max_l = findMax(numbers, left, middle);      // Find max in first half
            max_r = findMax(numbers, middle + 1, right); // Find max in second half
            //System.out.println("Maximum Value = " + max_r);
            max_m = max_l + max_r;
            // Use the solutions to solve original problem
            if (max_l > max_r)
                return (max_l);
            else
                return (max_r);
            //return(max_m);
        }
    }
}
You are never returning an array.
Also you don't make any changes to the array.
You must change the array in some way once you find the max.
Try wrapping it with a method.
public static int[] maxSort(int[] array, int length) {
    int[] sorted = new int[array.length];
    // assumes findMax returns the maximum value of array[0..length-1]
    sorted[length - 1] = findMax(array, 0, length - 1);
    while (length > 1) {
        // note: as written, each recursive call builds a fresh array, so the earlier
        // assignment is discarded; the max found so far would need to be removed from
        // (or marked in) the array for this to sort correctly
        sorted = maxSort(array, --length);
    }
    return sorted;
}
I am not 100% sure it's working, but I think it's a step in the right direction.
You need to more carefully address points in your program where you are comparing the index or the value at that index. For example, instead of checking whether max_l > max_r, I believe you mean to be checking whether numbers[max_l] > numbers[max_r].
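A minimal sketch of that change (the name findMaxIndex is mine): have the recursion return the index of the maximum instead of its value, and compare the values at the two returned indices. That gives you both the maximum value and where it lives, which is what the assignment asks for.

public static int findMaxIndex(int[] numbers, int left, int right) {
    if (left == right) {
        return left; // base case: a one-element range, its only index is the answer
    }
    int middle = (left + right) / 2;
    int maxLeftIndex = findMaxIndex(numbers, left, middle);       // index of max in first half
    int maxRightIndex = findMaxIndex(numbers, middle + 1, right); // index of max in second half
    return numbers[maxLeftIndex] > numbers[maxRightIndex] ? maxLeftIndex : maxRightIndex;
}

Then numbers[findMaxIndex(numbers, 0, numbers.length - 1)] is the maximum value, and the call itself gives its index.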