Confusion on quicksort that takes logarithmic space - java

If QS is only called recursively on the smaller partition, how does the larger one get sorted? I only see code that changes positions in b via the recursive calls (QS is called in both the if and the else branches).
public static void QS(int[] b, int h, int k) {
    int h1= h; int k1= k;
    // invariant: b[h..k] is sorted if b[h1..k1] is sorted
    while (b[h1..k1] has more than 1 element) {
        int j= partition(b, h1, k1);
        // b[h1..j-1] <= b[j] <= b[j+1..k1]
        if (b[h1..j-1] smaller than b[j+1..k1]) {
            QS(b, h, j-1); h1= j+1;
        } else {
            QS(b, j+1, k1); k1= j-1;
        }
    }
}

That is some hard-to-read pseudocode. This might be a bit easier to understand:
QuickSort(b[], low, hi)
    while the range low..hi contains more than 1 element
        1: Pick a random pivot 'j' from the array
        2: Re-order the array so that all elements less than the pivot are in
           b[low..j-1], and all elements greater than the pivot are in b[j+1..hi]
        3: Call quicksort on the side with fewer elements, and update the range to
           exclude the sorted side
Roughly half of the values will be less than the pivot, and half of the values will be greater than the pivot. This means that after step 3, the size of the range low..hi has roughly halved. Thus, it takes log|N| iterations before the range contains only one element.
It's hard to explain this bit, but see how step 3 only calls QuickSort on one half of the array? It's because the remainder of the while-loop sorts the other half. The function could easily be re-written as the following:
QuickSort(b[], low, hi)
    if the range low..hi contains more than 1 element
        1: Pick a random pivot 'j' from the array
        2: Re-order the array so that all elements less than the pivot are in
           b[low..j-1], and all elements greater than the pivot are in b[j+1..hi]
        3: Call quicksort on both sides
The while-loop has been replaced by an if statement and a second recursive call. I hope from here that you can see the complexity is roughly N log|N|.
Edit
So how does the while-loop sort the remaining elements? After step 3, the range has been updated to exclude the smaller half, because we just sorted it with a call to QuickSort. This means that the range now only contains the larger half - the unsorted elements. So we repeat steps 1 - 3 on these unsorted elements, and update the range again.
The number of unsorted elements gets smaller and smaller with every iteration, and eventually we will be left with only one unsorted element. But of course, one element on its own is sorted, so at this point we know we have sorted every element in the array.
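For reference, here is the scheme above as a minimal runnable Java sketch (the Lomuto partition using the last element as pivot is an illustrative choice, not part of the original code; the h1/k1 bookkeeping is folded into the loop variables):
public static void quickSort(int[] b, int h, int k) {
    while (k - h > 0) {                 // b[h..k] has more than 1 element
        int j = partition(b, h, k);     // b[h..j-1] <= b[j] <= b[j+1..k]
        if (j - h < k - j) {
            quickSort(b, h, j - 1);     // recurse on the smaller (left) side
            h = j + 1;                  // loop on the larger (right) side
        } else {
            quickSort(b, j + 1, k);     // recurse on the smaller (right) side
            k = j - 1;                  // loop on the larger (left) side
        }
    }
}

// illustrative Lomuto partition using b[k] as the pivot
private static int partition(int[] b, int h, int k) {
    int pivot = b[k], i = h;
    for (int m = h; m < k; m++) {
        if (b[m] <= pivot) { int t = b[m]; b[m] = b[i]; b[i] = t; i++; }
    }
    int t = b[k]; b[k] = b[i]; b[i] = t;
    return i;
}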

Note that after the recursive call to QS, h1 is updated if b[h1..j-1] was smaller than b[j+1..k1], and k1 is updated if b[h1..j-1] was greater than or equal to b[j+1..k1].
There's a bug in the code: the first call after the if should be QS(b, h1, j-1);
Logarithmic space usage is referring to the stack space used by quicksort due to recursion. In the example code, only the smaller partition is sorted with a recursive call, then the code loops back to split up the larger partition into two parts, and again, only use a recursive call for the smaller part of the now split up larger partition.
Link to articles:
http://en.wikipedia.org/wiki/Quicksort#Optimizations
http://blogs.msdn.microsoft.com/devdev/2006/01/18/efficient-selection-and-partial-sorting-based-on-quicksort
I'm not sure about the reference to tail recursion, since the code includes an actual loop instead of using tail recursion. Tail recursion would look like a recursive call on the last line to be executed in a function, where a compiler can optimize it into a loop.

Related

Best way to retrieve K largest elements from large unsorted arrays?

I recently had a coding test during an interview. I was told:
There is a large unsorted array of one million ints. User wants to retrieve K largest elements. What algorithm would you implement?
During this, I was strongly hinted that I needed to sort the array.
So, I suggested using the built-in sort() or maybe a custom implementation if performance really mattered. I was then told that by using a Collection or array to store the k largest and a for-loop, it is possible to achieve approximately O(N). In hindsight, I think it's O(N*k), because each iteration needs to compare against the K-sized array to find the smallest element to replace, whereas sorting the array would make the code at least O(N log N).
I then reviewed this link on SO, which suggests a priority queue of K numbers, removing the smallest number every time a larger element is found, which would also give O(N log N): Write a program to find 100 largest numbers out of an array of 1 billion numbers
Is the for-loop method bad? How should I justify the pros/cons of using the for-loop versus the priority-queue/sorting methods? I'm thinking that if the array is already sorted, it could help by not needing to iterate through the whole array again, i.e. if some other method of retrieval is called on the sorted array, it should be constant time. Is there some performance factor when running actual code that I didn't consider when theorizing about pseudocode?
Another way of solving this is using Quickselect. This should give you a total average time complexity of O(n). Consider this:
Find the kth largest number x using Quickselect (O(n))
Iterate through the array again (or just through the right-side partition) (O(n)) and save all elements ≥ x
Return your saved elements
(If there are repeated elements, you can avoid them by keeping count of how many duplicates of x you need to add to the result.)
The difference between your problem and the one in the SO question you linked to is that you have only one million elements, so they can definitely be kept in memory to allow normal use of Quickselect.
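For reference, here is a minimal Quickselect sketch in Java (the randomized Lomuto partition is an illustrative choice). After it runs, arr[k] holds the k-th smallest element and everything at higher indices is at least as large, so the K largest elements are arr[n-K..n-1] after calling quickselect(arr, n - K):
import java.util.concurrent.ThreadLocalRandom;

// Rearranges arr so that arr[k] is the element that would be at index k in
// sorted order, and returns it. Average O(n); the random pivot avoids the
// quadratic worst case on adversarial inputs.
static int quickselect(int[] arr, int k) {
    int lo = 0, hi = arr.length - 1;
    while (lo < hi) {
        int p = partition(arr, lo, hi);
        if (p == k) break;
        if (p < k) lo = p + 1; else hi = p - 1;
    }
    return arr[k];
}

// Lomuto partition around a randomly chosen pivot; returns the pivot's final index.
static int partition(int[] arr, int lo, int hi) {
    swap(arr, ThreadLocalRandom.current().nextInt(lo, hi + 1), hi);
    int pivot = arr[hi], i = lo;
    for (int m = lo; m < hi; m++)
        if (arr[m] <= pivot) swap(arr, m, i++);
    swap(arr, hi, i);
    return i;
}

static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }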
There is a large unsorted array of one million ints. The user wants to retrieve the K largest elements.
During this, I was strongly hinted that I needed to sort the array.
So, I suggested using a built-in sort() or maybe a custom implementation
That wasn't really a hint, I guess, but rather a sort of trick to deceive you (to test how strong your knowledge is).
If you choose to approach the problem by sorting the whole source array using the built-in Dual-Pivot Quicksort, you can't obtain time complexity better than O(n log n).
Instead, we can maintain a PriorityQueue that stores the result. While iterating over the source array, for each element we check whether the queue has reached size K. If not, the element is added to the queue; otherwise (its size equals K), we compare the next element against the lowest element in the queue: if the next element is smaller or equal, we ignore it; if it is greater, the lowest element is removed and the new element is added.
The time complexity of this approach is O(n log k), because adding a new element into a PriorityQueue of size k costs O(log k), and in the worst case this operation is performed n times (since we're iterating over an array of size n).
Note that the best case time complexity would be Ω(n), i.e. linear.
So the difference between sorting and using a PriorityQueue, in terms of Big O, boils down to the difference between O(n log n) and O(n log k). When k is much smaller than n, this approach gives a significant performance gain.
Here's an implementation:
import java.util.Collection;
import java.util.PriorityQueue;
import java.util.Queue;

public static int[] getHighestK(int[] arr, int k) {
    Queue<Integer> queue = new PriorityQueue<>();   // min-heap: smallest of the k at the head
    for (int next : arr) {
        // queue is full and the new element beats its smallest: evict the smallest
        if (queue.size() == k && queue.peek() < next) queue.remove();
        // add the new element if there is room (not full yet, or we just evicted)
        if (queue.size() < k) queue.add(next);
    }
    return toIntArray(queue);
}

public static int[] toIntArray(Collection<Integer> source) {
    return source.stream().mapToInt(Integer::intValue).toArray();
}
main()
public static void main(String[] args) {
    System.out.println(Arrays.toString(getHighestK(new int[]{3, -1, 3, 12, 7, 8, -5, 9, 27}, 3)));
}
Output:
[9, 12, 27]
Sorting in O(n)
We can achieve worst case time complexity of O(n) when there are some constraints regarding the contents of the given array. Let's say it contains only numbers in the range [-1000,1000] (sure, you haven't been told that, but it's always good to clarify the problem requirements during the interview).
In this case, we can use Counting sort which has linear time complexity. Or better, just build a histogram (first step of Counting Sort) and look at the highest-valued buckets until you've seen K counts. (i.e. don't actually expand back to a fully sorted array, just expand counts back into the top K sorted elements.) Creating a histogram is only efficient if the array of counts (possible input values) is smaller than the size of the input array.
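A minimal sketch of that histogram idea, assuming the values are known to lie in [-1000, 1000] (the range constraint stated above):
static int[] topKByHistogram(int[] arr, int k) {
    final int MIN = -1000, MAX = 1000;            // assumed value range
    int[] counts = new int[MAX - MIN + 1];
    for (int v : arr) counts[v - MIN]++;          // build the histogram: O(n)
    int[] result = new int[k];
    int idx = 0;
    // walk buckets from the highest value down, expanding only the top k counts
    for (int v = MAX; v >= MIN && idx < k; v--)
        for (int c = counts[v - MIN]; c > 0 && idx < k; c--)
            result[idx++] = v;
    return result;                                // the k largest, in descending order
}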
Another possibility is when the given array is partially sorted, consisting of several sorted chunks. In this case, we can use Timsort, which is good at finding sorted runs; it will deal with them in linear time.
And Timsort is already implemented in Java, where it's used to sort objects (not primitives). So we can take advantage of a well-optimized and thoroughly tested implementation instead of writing our own, which is great. But since we are given an array of primitives, using the built-in Timsort has an additional cost: we need to copy the contents of the array into a list (or array) of a wrapper type.
This is a classic problem that can be solved with so-called heapselect, a simple variation on heapsort. It can also be solved with quickselect, but that, like quicksort, has a poor quadratic worst-case time complexity.
Simply keep a priority queue, implemented as a binary heap, holding the k largest values seen so far. Walk through the array and insert each value into the heap (worst case O(log k)). When the priority queue grows past size k, delete the minimum value at the root (worst case O(log k)). After going through the n array elements, you have removed the n-k smallest elements, so the k largest elements remain. It's easy to see the worst-case time complexity is O(n log k), which is faster than O(n log n), at the cost of only O(k) space for the heap.
Here is one idea: create an int array of the maximum possible size (2147483647, the maximum value of an int). Then, for every number taken from the original array in a for-each loop, increment the element at that same index (the number itself) in the created array.
At the end of this for-each I will have something like [1,0,2,0,3] (the array that I created), which represents the numbers [0, 2, 2, 4, 4, 4] (the initial array).
To find the K largest elements, iterate backward over the created array and count down from K to 0 each time you encounter an element other than 0. If an element is, for example, 2, you have to count that number 2 times.
The limitation of this approach is that it works only with integers, because of the nature of the array.
Also, since the representation of int in Java ranges from -2147483648 to 2147483647, only the non-negative numbers can be placed in the created array.
NOTE: if you know the maximum int value in the input, you can lower the created array's size to that maximum. For example, if the max int is 1000, then the array you need to create has size 1000, and this algorithm should perform very fast.
I think you misunderstood what you needed to sort.
You need to keep the K-sized list sorted, you don't need to sort the original N-sized input array. That way the time complexity would be O(N * log(K)) in the worst case (assuming you need to update the K-sized list almost every time).
The requirements said that N was very large, but K is much smaller, so O(N * log(K)) is also smaller than O(N * log(N)).
You only need to update the K-sized list for each record that is larger than the K-th largest element before it. For a randomly distributed list with N much larger than K, that will be negligible, so the time complexity will be closer to O(N).
For the K-sized list, you can take a look at the implementation in Is there a PriorityQueue implementation with fixed capacity and custom comparator?, which uses a PriorityQueue with some additional logic around it.
There is an algorithm that does this with worst-case time complexity O(n*log(k)) and very benign time constants (since there is just one pass through the original array, and the inner part that contributes the log(k) is accessed relatively seldom if the input data is well-behaved).
Initialize a priority queue A, implemented with a binary heap, of maximum size k (internally using an array for storage). In the worst case, it has O(log(k)) cost for inserting, deleting, and searching/manipulating the minimum element (in fact, retrieving the minimum is O(1)).
Iterate through the original unsorted array, and for each value v:
    If A is not yet full then
        insert v into A,
    else, if v > min(A) then (*)
        insert v into A,
        remove the lowest value from A.
(*) Note that A can contain repeated values if some of the highest k values occur repeatedly in the source set. You can avoid that with a search operation that makes sure v is not yet in A. You'd also want a suitable data structure for that (since searching the priority queue has linear complexity), e.g. a secondary hash table or balanced binary search tree, both of which are available in java.util.
The java.util.PriorityQueue helpfully guarantees the time complexity of its operations:
this implementation provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size).
Note that as laid out above, we only ever remove the lowest (first) element from A, so we enjoy the O(log(k)) for that. If you want to avoid duplicates as mentioned above, then you also need to search for any new value added to it (with O(k)), which opens you up to a worst-case overall scenario of O(n*k) instead of O(n*log(k)) in case of a pre-sorted input array, where every single element v causes the inner loop to fire.

Operation and time complexity in arbitrary array

Suppose you're given an arbitrary array of length n. Which of the following operations can you perform on the array in worst-case O(1) time?
A. Remove the ith element, decreasing the size by 1
B. Insert an element at the ith position, increasing the size by 1
C. Find the maximum element
D. Swap the elements at locations i and j
In this question, I'm not sure about the definition of arbitrary array. It seems that D is correct, but I'm not sure. Could anybody explain it? Many thanks!
I think by arbitrary, it might just mean that the values of the array don't matter.
A) Removing the ith element and decreasing the array's size by one has an O(n) complexity: when removing an ith element, you have to move all the other elements down one index. If you removed the 0th element, you would need to move n - 1 elements down one index.
B) Inserting an element at the ith position and increasing the array's size by one has an O(n) complexity, for a similar reason. If I add an element to the beginning of the array, I need to move all the other elements up one. Not to mention that because arrays have fixed size, I would also need to create a new array and copy the former elements.
C) Finding the maximum element takes at least O(n) time. You need to look through each element to find the max value, right? Well, if the max value is in the last position of the array, you need to go through all the values up to that point.
D) Swapping elements is just O(1). It's a constant-time operation that doesn't require for or while loops or whatnot.
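For instance, a swap reads and writes only the two positions involved, no matter how long the array is:
// swapping touches a fixed number of cells, independent of n: O(1)
static void swap(int[] a, int i, int j) {
    int tmp = a[i];
    a[i] = a[j];
    a[j] = tmp;
}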
Something like that. Hope it helps!

Running time between Linked List and ArrayList? Code analysis

I have a midterm next week, and some of it has to do with code analysis/ sum simplification. I'm very lost, and am trying to understand this question my professor gave us on a practice work sheet.
Here is the pseudo code:
List<Integer> method(List<Integer> ints) {
    for (int i = 0; i < ints.size() / 2; i++) {
        swap(i, n - i - 1);   // n is the size of the list
    }
}
The question is asking: Express the worst case running time of this method as a sum assuming that the List is an ArrayList?
For this I got O(log n), since the size of the list is being divided in half every time.
But then the next question is: Express the worst case running time of this method as a sum assuming that the List is a LinkedList?
Now I am confused. I know that ArrayLists and LinkedLists have different time complexities, but wouldn't the answer be the same, O(log n)?
Also how would I express this as a sum for each question? This is not homework, but I am trying my best to understand this subject.
If ints is an ArrayList, it can access any element in constant time. So it goes through the first half of the elements and swaps each one with the corresponding element from the second half. This is still considered O(n), because the total number of iterations is n/2 and you drop the constant for big-O notation.
If ints is a LinkedList, you do not have constant-time access to an element: you have to traverse the list to get to it. So for each element in the first half of the list, you are iterating through the list again to find the corresponding element from the second half. This leads to a worst-case runtime of O(n^2).
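To make this concrete, here is the method as runnable Java, a minimal sketch using Collections.swap (which calls set() on the list: O(1) on an ArrayList, but an O(n) traversal on a LinkedList):
import java.util.Collections;
import java.util.List;

static void method(List<Integer> ints) {
    int n = ints.size();
    for (int i = 0; i < n / 2; i++) {
        // O(1) per swap on an ArrayList -> O(n) total;
        // O(n) per swap on a LinkedList -> O(n^2) total
        Collections.swap(ints, i, n - i - 1);
    }
}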

Find k-th smallest number of a subsequence in a circular array

Hi I am trying to solve this problem from IEEEXtreme 2014:
You are given N integers that are arranged circularly. There are N ways to pick consecutive subsequences of length M (M < N). For any such subsequence we can find the “K”-value of that subsequence. “K”-value for a given subsequence is the K-th smallest number in that subsequence. Given the array of N, find the smallest K-value of all possible subsequences. For example N=5 M=3 K=2 and the array 1 5 3 4 2 give the result 2.
My approach: first, I create a sorted array list that inserts each new input at the correct position. I add the first M integers into the list and record the K-th smallest value. Then I keep removing the oldest integer, adding the next integer into the list, and comparing the new K-th value with the old one. This is my sorted array list:
class SortedArrayList extends ArrayList<Integer> {
    // insert value so that the list stays sorted in ascending order
    public void insertSorted(int value) {
        for (int i = size() - 1; i >= 0; i--) {
            if (value >= get(i)) {
                add(i + 1, value);
                return;
            }
        }
        add(0, value);
    }
}
I think this brute-force method is not efficient, but I haven't been able to come up with any better ideas yet. Do you know any better solutions for this? Thanks.
Here is a more efficient solution:
1. Let's get rid of circularity to keep things simpler. We can do it by appending the given array to itself.
2. We can assume that all numbers in the input are unique. If that is not the case, we may use a pair (element, position) instead of each element.
3. Let's sort the given array. Now we will use binary search over the answer (that is, over the position of the k-th smallest element among all subarrays in the sorted global array).
4. How do we check that a fixed candidate x is at least as large as the k-th smallest number? Let's mark all positions of the numbers less than or equal to x with 1 and the rest with 0. Now we just need to check whether there is a subarray of length M that contains at least k ones. We can do it in linear time using rolling sums.
The time complexity is O(N log N) for sorting the input plus O(N log N) for the binary search over the answer (there are O(log N) checks, and each of them is done in linear time as described in 4). Thus, the total time complexity is O(N log N).
P.S. I can think of several other solutions with the same time complexity, but this one seems to be the simplest to implement (it does not require any custom data structures).
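A minimal Java sketch of steps 3 and 4 (the rolling count over a virtually doubled array handles circularity; names are illustrative):
// does some circular window of length m contain at least k elements <= x?
static boolean atLeastKWithin(int[] a, int m, int k, int x) {
    int n = a.length, ones = 0;
    for (int i = 0; i < n + m - 1; i++) {              // scan the doubled array
        if (a[i % n] <= x) ones++;                     // element enters the window
        if (i >= m && a[(i - m) % n] <= x) ones--;     // element leaves the window
        if (i >= m - 1 && ones >= k) return true;
    }
    return false;
}

// binary search over the sorted values for the smallest feasible candidate
static int smallestKValue(int[] a, int m, int k) {
    int[] sorted = a.clone();
    java.util.Arrays.sort(sorted);
    int lo = 0, hi = sorted.length - 1;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (atLeastKWithin(a, m, k, sorted[mid])) hi = mid;
        else lo = mid + 1;
    }
    return sorted[lo];   // e.g. 2 for N=5, M=3, K=2 and the array 1 5 3 4 2
}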
A more elegant way to handle the circular array itself is to simply use modulo. So, if you're just looking for a way to simulate a circular array, I would suggest something like this:
int n = somevalue;        // the starting point of the subsequence
int m = someothervalue;   // the index within the subsequence
int absolute_index = (n + m) % N;
where N is the total number of elements in the sequence.
The next step toward more efficiency would be to store the index of the K-th value. That way, you'd only have to calculate a new K-value every M-th step (worst case) and simply compare it to one new value on every other step.
But I'll leave that to you ;)

Sorted Array Distinct Values Sum to Target

I am currently working on this coding problem for class.
Given a sorted array of n distinct values as well as a target value T, determine in O(n) time whether or not there exist two distinct values in the array that sum to T. (For example, if the array contained 3, 5, 6, 7, and 9 and T = 14, then the method you are to write should return true, since 5+9 = 14. It should return false if for the same array of values T = 17.)
So, initially, I wrote the problem with a nested linear-search approach, which obviously results in an O(n^2) runtime, to establish a baseline to simplify from. However, so far I have only been able to simplify it to O(n log(n)). I did this by creating a new array made up of the differences Target - array[i], and then comparing the new array to the original array using a binary search nested within a loop that goes linearly up the new array.
I am not asking for an answer, but rather a hint at where to look to simplify my code. I feel like the fact that the array is sorted is important for getting it down to O(n), but I'm not sure how to go about doing it.
Thanks for your time!
Imagine you have two pointers (s, e) set at the start and end of your array.
If you move them toward each other (with a specific algorithm) and look at the sum of the elements they point to, you will see that moving one pointer increases the sum and moving the other decreases it.
The only thing you need is to find the balance.
If that doesn't help, ask for the next tip.
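For reference, a minimal sketch of that balance in Java:
// two pointers at the ends of the sorted array; each step discards one
// candidate, so the scan is O(n)
static boolean hasPairWithSum(int[] a, int target) {
    int s = 0, e = a.length - 1;
    while (s < e) {
        int sum = a[s] + a[e];
        if (sum == target) return true;
        if (sum < target) s++;   // need a bigger sum: move the left pointer right
        else e--;                // need a smaller sum: move the right pointer left
    }
    return false;
}
For the example above, hasPairWithSum(new int[]{3, 5, 6, 7, 9}, 14) returns true (5+9=14), and the same call with 17 returns false.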
Some tips/steps:
1 - Start one pointer at array[i], the nearest value below T
2 - Move another pointer to array[0]
3 - Sum both values and compare with T
4 - If bigger or lower, move the appropriate pointer and repeat step 3
A Hint:
Something like binary search, start with the middle (compare with the middle).
We have startindex = 0, endindex = N-1:
while (some condition) {
    middleindex = (startindex + endindex) / 2, middle = array[middleindex]
    if T - array[middleindex] > middle, startindex = middleindex
    if T - array[middleindex] < middle, endindex = middleindex
}
It will do the task in O(log(n)) :D
