Is the space complexity of this subset algorithm actually O(n)? - java

This is problem 9.4 from Cracking the Coding Interview, 5th edition.
The Problem: Write a method to return all the subsets of a set.
Here is my solution in Java (I tested it, and it works!).
public static List<Set<Integer>> subsets(Set<Integer> s) {
    Queue<Integer> copyToProtectData = new LinkedList<Integer>();
    for (int member : s) {
        copyToProtectData.add(member);
    }
    List<Set<Integer>> subsets = new ArrayList<Set<Integer>>();
    generateSubsets(copyToProtectData, subsets, new HashSet<Integer>());
    return subsets;
}

private static void generateSubsets(Queue<Integer> s,
        List<Set<Integer>> subsets, Set<Integer> hashSet) {
    if (s.isEmpty()) {
        subsets.add(hashSet);
    } else {
        int member = s.remove();
        Set<Integer> copy = new HashSet<Integer>();
        for (int i : hashSet) {
            copy.add(i);
        }
        hashSet.add(member);
        Queue<Integer> queueCopy = new LinkedList<Integer>();
        for (int i : s) {
            queueCopy.add(i);
        }
        generateSubsets(s, subsets, hashSet);
        generateSubsets(queueCopy, subsets, copy);
    }
}
I looked at the solutions for this problem and the author said that this algorithm runs in O(2^n) time complexity and O(2^n) space complexity. I agree with her that this algorithm runs in O(2^n) time, because to solve this problem you have to consider that any element has two possibilities: it is either in the subset or it isn't. Since you have n elements, the problem has 2^n possibilities, so it is solved in O(2^n) time.
However, I believe I have a compelling argument that my algorithm runs in O(n) space. I know that space complexity is "the total space taken by an algorithm with respect to the input size", and that for a recursive algorithm it is proportional to the maximum depth of the recursive calls (I remember this from some YouTube video I watched).
An example I have is generating [1,2,3] as a subset of [1,2,3]. Here is the chain of recursive calls that generates that subset:
generateSubsets([], subsets, [1,2,3])
generateSubsets([3],subsets,[1,2])
generateSubsets([2,3],subsets,[1])
generateSubsets([1,2,3],subsets,[])
This shows that the greatest depth of a recursive call with respect to the original set size n is n itself. Each of these recursive calls will have its own stack frame, so from this I concluded that the space complexity is O(n). Does anyone see any flaws in my proof?

You need to take into account all memory that is allocated by your algorithm (or, rather, the greatest amount of allocated memory that is "in use" at any time), not only on the stack but also on the heap. Each of the generated subsets is stored in the subsets list, which will eventually contain 2^n sets, each of size somewhere between 0 and n (with most of the sets containing around n/2 elements), so the space complexity is actually O(n·2^n).
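To make that concrete, here is a hypothetical driver for the subsets method above (the main method and the n = 5 input are mine, not the poster's, and it assumes it lives in the same class as subsets()). The output list alone stores n·2^(n-1) integers in total, which is Θ(n·2^n), dwarfing the O(n) stack depth:

import java.util.*;

public static void main(String[] args) {
    Set<Integer> input = new HashSet<>(Arrays.asList(1, 2, 3, 4, 5));
    List<Set<Integer>> result = subsets(input);
    int totalElements = 0;
    for (Set<Integer> subset : result) {
        totalElements += subset.size();  // count every stored integer
    }
    System.out.println(result.size());   // 32 subsets (2^5)
    System.out.println(totalElements);   // 80 integers stored (5 * 2^4)
}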

Related

Time complexity for all subsets using backtracking

I am trying to understand the time complexity while using backtracking. The problem is:
Given a set of unique integers, return all possible subsets.
E.g. the input [1,2,3] would return [[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]
I am solving it using backtracking like this:
private List<List<Integer>> result = new ArrayList<>();

public List<List<Integer>> getSubsets(int[] nums) {
    for (int length = 1; length <= nums.length; length++) { // O(n)
        backtrack(nums, 0, new ArrayList<>(), length);
    }
    result.add(new ArrayList<>());
    return result;
}

private void backtrack(int[] nums, int index, List<Integer> listSoFar, int length) {
    if (length == 0) {
        result.add(listSoFar);
        return;
    }
    for (int i = index; i < nums.length; i++) { // O(n)
        List<Integer> temp = new ArrayList<>();
        temp.addAll(listSoFar); // O(2^n)
        temp.add(nums[i]);
        backtrack(nums, i + 1, temp, length - 1);
    }
}
The code works fine, but I am having trouble understanding the time/space complexity.
What I am thinking is that the recursive method is called n times, and in each call it generates a sublist that may contain at most 2^n elements. So time and space will both be O(n x 2^n). Is that right? If not, can anyone elaborate?
Note that I saw some answers here, like this one, but was unable to understand them. When recursion comes into the picture, I find it a bit hard to wrap my head around.
You're exactly right about space complexity. The total space of the final output is O(n*2^n), and this dominates the total space used by the program. The analysis of the time complexity is slightly off, though. Optimally, the time complexity would, in this case, be the same as the space complexity, but there are a couple of inefficiencies here (one of which is that you're not actually backtracking) such that the time complexity is actually O(n^2*2^n) at best.
It can definitely be useful to analyze a recursive algorithm's time complexity in terms of how many times the recursive method is called, times how much work each call does. But be careful about saying backtrack is only called n times: it is called n times at the top level, but this ignores all the subsequent recursive calls. Also, each call at the top level, backtrack(nums, 0, new ArrayList<>(), length); is responsible for generating all subsets of size length, of which there are n Choose length. That is, no single top-level call will ever produce 2^n subsets; rather, the sum of n Choose length for lengths from 0 to n is 2^n:
(n Choose 0) + (n Choose 1) + ... + (n Choose n) = 2^n
Knowing that across all recursive calls, you generate 2^n subsets, you might then want to ask how much work is done in generating each subset in order to determine the overall complexity. Optimally, this would be O(n), because each subset varies in length from 0 to n, with the average length being n/2, so the overall algorithm might be O(n/2*2^n) = O(n*2^n), but you can't just assume the subsets are generated optimally and that no significant extra work is done.
In your case, you're building subsets through the listSoFar variable until it reaches the appropriate length, at which point it is appended to the result. However, listSoFar gets copied to a temp list in O(n) time for each of its O(n) elements, so the complexity of generating each subset is O(n^2), which brings the overall complexity to O(n^2*2^n). Also, some listSoFar subsets are created which never figure into the final output (you never check that there are enough numbers remaining in nums to fill listSoFar out to the desired length before recursing), so you end up doing unnecessary work building subsets and making recursive calls which will never reach the base case to get appended to result, which might also worsen the asymptotic complexity. You can address the first of these inefficiencies with back-tracking, and the second with a simple break statement. I wrote these changes into a JavaScript program, leaving most of the logic the same but re-naming/re-organizing a little bit:
function getSubsets(nums) {
  let subsets = [];
  for (let length = 0; length <= nums.length; length++) {
    // refactored "backtrack" function:
    genSubsetsByLength(length); // O(length*(n Choose length))
  }
  return subsets;

  function genSubsetsByLength(length, i = 0, partialSubset = []) {
    if (length === 0) {
      subsets.push(partialSubset.slice()); // O(n): copy partial and push to result
      return;
    }
    while (i < nums.length) {
      if (nums.length - i < length) break; // don't build partial results that can't finish
      partialSubset.push(nums[i]); // O(1)
      genSubsetsByLength(length - 1, ++i, partialSubset);
      partialSubset.pop(); // O(1): this is the back-tracking part
    }
  }
}
for (let subset of getSubsets([1, 2, 3])) console.log(`[`, ...subset, ']');
The key difference is using back-tracking to avoid making copies of the partial subset every time you add a new element to it, such that each is built in O(length) = O(n) time rather than O(n^2) time, because there is now only O(1) work done per element added. Popping off the last element added to the partial result after each recursive call allows you to re-use the same array across recursive calls, thus avoiding the O(n) overhead of making temp copies for each call. This, along with the fact that only subsets which appear in the final output are built, allows you to analyze the total time complexity in terms of the total number of elements across all subsets in the output: O(n*2^n).
Your code does not work efficiently.
Like the first solution in the link, you only need to think about whether each number is included or not (like generating combinations). That means you don't have to iterate in either getSubsets or backtrack: the backtrack function can walk the nums array using its index parameter.
private List<List<Integer>> result = new ArrayList<>();

public List<List<Integer>> getSubsets(int[] nums) {
    backtrack(nums, 0, new ArrayList<>());
    return result;
}

// Time complexity is O(2^n), because it explores both cases
// (number included or not) at every index.
private void backtrack(int[] nums, int index, List<Integer> listSoFar) {
    if (index == nums.length) {
        result.add(new ArrayList<>(listSoFar));
        return;
    }
    // exclude nums[index] from the subset
    backtrack(nums, index + 1, listSoFar);
    // include nums[index] in the subset
    listSoFar.add(nums[index]);
    backtrack(nums, index + 1, listSoFar);
    listSoFar.remove(listSoFar.size() - 1);
}

Finding the K largest elements on Java with O(n+klog n)? [duplicate]

What is the fastest way to find the k largest elements in an array in order (i.e. starting from the largest element to the kth largest element)?
One option would be the following:
1. Using a linear-time selection algorithm like median-of-medians or introselect, find the kth largest element and partition the array around it so that the k largest elements are grouped together.
2. Sort those k elements using a fast sorting algorithm like heapsort or quicksort.
Step (1) takes time O(n), and step (2) takes time O(k log k). Overall, the algorithm runs in time O(n + k log k), which is very, very fast.
Hope this helps!
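A minimal Java sketch of that select-then-sort idea (my code, not the answerer's): randomized quickselect stands in for median-of-medians, so step (1) is expected rather than worst-case O(n), but the structure of the algorithm is the same.

import java.util.Arrays;
import java.util.Random;

public class TopK {
    private static final Random RAND = new Random();

    // Step 1: move the k largest into a[0..k-1] in expected O(n).
    // Step 2: sort just those k elements in O(k log k).
    static int[] kLargestSorted(int[] a, int k) {
        quickselect(a, 0, a.length - 1, k - 1);
        int[] top = Arrays.copyOfRange(a, 0, k); // the k largest, unordered
        Arrays.sort(top);                        // ascending
        for (int i = 0, j = top.length - 1; i < j; i++, j--) { // reverse to descending
            int t = top[i]; top[i] = top[j]; top[j] = t;
        }
        return top;
    }

    // Places the element of rank `rank` (0 = largest) at index `rank`,
    // with all larger elements to its left.
    static void quickselect(int[] a, int lo, int hi, int rank) {
        while (lo < hi) {
            int p = partitionDesc(a, lo, hi);
            if (p == rank) return;
            if (p < rank) lo = p + 1; else hi = p - 1;
        }
    }

    // Lomuto partition around a random pivot, in descending order.
    static int partitionDesc(int[] a, int lo, int hi) {
        swap(a, lo + RAND.nextInt(hi - lo + 1), hi);
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] > pivot) swap(a, i++, j);
        }
        swap(a, i, hi);
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {3, 6, 2, 8, 9, 4, 5};
        System.out.println(Arrays.toString(kLargestSorted(a, 3))); // [9, 8, 6]
    }
}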
C++ also provides the partial_sort algorithm, which solves the problem of selecting the smallest k elements (sorted), with a time complexity of O(n log k). No algorithm is provided for selecting the greatest k elements since this should be done by inverting the ordering predicate.
For Perl, the module Sort::Key::Top, available from CPAN, provides a set of functions to select the top n elements from a list using several orderings and custom key extraction procedures. Furthermore, the Statistics::CaseResampling module provides a function to calculate quantiles using quickselect.
Python's standard library (since 2.4) includes heapq.nsmallest() and nlargest(), returning sorted lists, the former in O(n + k log n) time, the latter in O(n log k) time.
Radix sort solution:
Sort the array in descending order, using radix sort;
Print first K elements.
Time complexity: O(N*L), where L is the length (number of digits) of the largest element; for fixed-width integers we can assume L = O(1).
Space used: O(N) for radix sort.
However, I think radix sort has costly overhead, making its linear time complexity less attractive.
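A sketch of that idea in Java (my code, not the answerer's), assuming non-negative ints: a stable byte-wise LSD radix sort with descending buckets, so L is 4 passes for 32-bit values.

import java.util.Arrays;

// Sort descending by radix (one stable counting sort per byte, least
// significant byte first), then take the first k. O(N*L) time, O(N) space.
static int[] kLargestRadix(int[] a, int k) {
    int[] src = a.clone();
    int[] dst = new int[a.length];
    for (int shift = 0; shift < 32; shift += 8) {
        int[] count = new int[256];
        for (int x : src) {
            count[(x >>> shift) & 0xFF]++;
        }
        int[] pos = new int[256]; // start position of each bucket, larger bytes first
        for (int b = 254; b >= 0; b--) {
            pos[b] = pos[b + 1] + count[b + 1];
        }
        for (int x : src) { // stable scatter pass
            dst[pos[(x >>> shift) & 0xFF]++] = x;
        }
        int[] tmp = src; src = dst; dst = tmp; // swap buffers for the next pass
    }
    return Arrays.copyOf(src, k); // array is descending, so the first k are the largest
}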
1) Build a max heap in O(n)
2) Use extract-max k times to get the k maximum elements from the max heap, O(k log n)
Time complexity: O(n + k log n)
A C++ implementation using STL is given below:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    int arr[] = {4, 3, 7, 12, 23, 1, 8, 5, 9, 2};
    // Let's extract 3 maximum elements
    int k = 3;
    // First convert the array to a vector to use STL
    vector<int> vec;
    for (int i = 0; i < 10; i++) {
        vec.push_back(arr[i]);
    }
    // Build heap in O(n)
    make_heap(vec.begin(), vec.end());
    // Extract max k times
    for (int i = 0; i < k; i++) {
        cout << vec.front() << " ";
        pop_heap(vec.begin(), vec.end());
        vec.pop_back();
    }
    return 0;
}
@templatetypedef's solution is probably the fastest one, assuming you can modify or copy the input.
Alternatively, you can use heap or BST (set in C++) to store k largest elements at given moment, then read array's elements one by one. While this is O(n lg k), it doesn't modify input and only uses O(k) additional memory. It also works on streams (when you don't know all the data from the beginning).
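As a concrete illustration, a small Java sketch of that heap variant (my code, not the answerer's):

import java.util.PriorityQueue;

// Keep the k largest elements seen so far in a min-heap of size k.
// O(n lg k) time, O(k) extra memory; the input is consumed one element
// at a time and never stored or modified.
static int[] kLargestFromStream(Iterable<Integer> stream, int k) {
    PriorityQueue<Integer> heap = new PriorityQueue<>(k);
    for (int x : stream) {
        if (heap.size() < k) {
            heap.add(x);              // still filling the first k slots
        } else if (x > heap.peek()) { // x beats the smallest of the current top k
            heap.poll();
            heap.add(x);
        }
    }
    int[] result = new int[heap.size()];
    for (int i = result.length - 1; i >= 0; i--) {
        result[i] = heap.poll();      // min-heap yields ascending; fill back to front
    }
    return result;                    // descending order
}

For example, kLargestFromStream(Arrays.asList(3, 6, 2, 8, 9, 4, 5), 3) returns [9, 8, 6].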
Here's a solution (in C#) with O(N + k lg k) complexity.

int[] kLargest_Dremio(int[] A, int k) {
    int[] result = new int[k];
    shouldGetIndex = true;
    int q = AreIndicesValid(0, A.Length - 1) ? RandomizedSelet(0, A.Length - 1,
        A.Length - k + 1) : -1;
    Array.Copy(A, q, result, 0, k);
    Array.Sort(result, (a, b) => b.CompareTo(a)); // descending; the comparator must return an int, not a bool
    return result;
}
AreIndicesValid and RandomizedSelet are defined in this github source file.
There was a question on performance & restricted resources.
Make a value class for the top 3 values. Use such an accumulator for reduction in a parallel stream. Limit the parallelism according to the context (memory, power).
class BronzeSilverGold {
    int[] values = new int[] {Integer.MIN_VALUE, Integer.MIN_VALUE, Integer.MIN_VALUE};

    // For reduction
    void add(int x) {
        ...
    }

    // For combining the results of two threads
    void merge(BronzeSilverGold other) {
        ...
    }
}
The parallelism must be restricted in your situation, hence specify an N_THREADS in:
try {
    ForkJoinPool threadPool = new ForkJoinPool(N_THREADS);
    threadPool.submit(() -> {
        BronzeSilverGold result = IntStream.of(...).parallel().collect(
            BronzeSilverGold::new,
            BronzeSilverGold::add,
            (bsg1, bsg2) -> bsg1.merge(bsg2));
        ...
    }).get(); // get() is what can throw the checked exceptions below
} catch (InterruptedException | ExecutionException e) {
    prrtl();
}

Finding mean and median in constant time

This is a common interview question.
You have a stream of numbers coming in (let's say more than a million). The numbers are in the range [0, 999].
Implement a class which supports three methods in O(1)
* insert(int i);
* getMean();
* getMedian();
This is my code.
public class FindAverage {
    private int[] store;
    private long size;
    private long total;
    private int highestIndex;
    private int lowestIndex;

    public FindAverage() {
        store = new int[1000];
        size = 0;
        total = 0;
        highestIndex = Integer.MIN_VALUE;
        lowestIndex = Integer.MAX_VALUE;
    }

    public void insert(int item) throws OutOfRangeException {
        if (item < 0 || item > 999) {
            throw new OutOfRangeException();
        }
        store[item]++;
        size++;
        total += item;
        highestIndex = Integer.max(highestIndex, item);
        lowestIndex = Integer.min(lowestIndex, item);
    }

    public float getMean() {
        return (float) total / size;
    }

    public float getMedian() {
    }
}
I can't seem to think of a way to get the median in O(1) time.
Any help appreciated.
You have already done all the heavy lifting, by building the store counters. Together with the size value, it's easy enough.
You simply start iterating the store, summing up the counts until you reach half of size. That is your median value, if size is odd. For even size, you'll grab the two surrounding values and get their average.
Performance is O(1000/2) on average, which means O(1), since it doesn't depend on n, i.e. performance is unchanged even if n reaches into the billions.
Remember, O(1) doesn't mean instant, or even fast. As Wikipedia puts it:
An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input.
In your case, that bound is 1000.
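Following that recipe, a sketch of getMedian() for the class in the question (my code, not the answerer's), using the store and size fields and handling the even/odd cases:

// Walks the counters until the running count crosses the middle. At most
// 1000 iterations regardless of how many values were inserted, hence O(1).
public float getMedian() {
    long count = 0;
    int lower = -1, upper = -1; // the two middle values (equal when size is odd)
    for (int value = 0; value < store.length && upper == -1; value++) {
        count += store[value];
        if (lower == -1 && count >= (size + 1) / 2) {
            lower = value;      // element at 0-based index ceil(size/2) - 1
        }
        if (count >= size / 2 + 1) {
            upper = value;      // element at 0-based index size/2
        }
    }
    return (size % 2 == 1) ? upper : (lower + upper) / 2.0f;
}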
The possible values that you can read are quite limited: just 1000. So you can implement something like a counting sort: each time a number is input, you increase the counter for that value.
To return the median in constant time, you need to track two numbers: the median index (i.e. the value of the median) and the number of values you've read that are to the left (or right) of the median. I will just stop here, hoping you will be able to figure out how to continue on your own.
EDIT (as pointed out in the comments): you already have the array with the sorted element counts (store) and you know the number of elements to the left of the median (size/2). You only need to glue the logic together. I would like to point out that if you use linear additional memory, you won't need to iterate over the whole array on each insert.
For the general case, where the range of elements is unbounded, no such data structure exists for any comparison-based algorithm, as it would allow O(n) sorting.
Proof: Assume such a DS exists; let it be D.
Let A be the input array for sorting. (Assume A.size() is even for simplicity; that can be relaxed pretty easily by adding a garbage element and discarding it later.)
sort(A):
    ds = new D()
    for each x in A:
        ds.add(x)
    m1 = min(A) - 1
    m2 = max(A) + 1
    for (i = 0; i < A.size(); i++):
        ds.add(m1)
    # at this point, ds.median() is the smallest element in A
    for (i = 0; i < A.size(); i++):
        yield ds.median()
        # each two insertions advance the median by 1
        ds.add(m2)
        ds.add(m2)
Claim 1: This algorithm runs in O(n).
Proof: Since add() and median() are constant-time operations, each iteration is O(1), and the number of iterations is linear, so the complexity is linear.
Claim 2: The output is sorted(A).
Proof (guidelines): After inserting m1 n times, the median is the smallest element in A. Each pair of insertions after that advances the median by one item, and since the median only ever advances, the values are yielded in sorted order.
Since the above algorithm sorts in O(n), which is impossible in the comparison model, such a DS does not exist.
QED.

Find all the ways you can go up an n step staircase if you can take k steps at a time such that k <= n

This is a problem I'm trying to solve on my own to get a bit better at recursion (not homework). I believe I found a solution, but I'm not sure about the time complexity (I'm aware that DP would give me better results).
Find all the ways you can go up an n step staircase if you can take k steps at a time such that k <= n
For example, if my step sizes are [1,2,3] and the size of the stair case is 10, I could take 10 steps of size 1 [1,1,1,1,1,1,1,1,1,1]=10 or I could take 3 steps of size 3 and 1 step of size 1 [3,3,3,1]=10
Here is my solution:
static List<List<Integer>> problem1Ans = new ArrayList<List<Integer>>();

public static void problem1(int numSteps) {
    int[] steps = {1, 2, 3};
    problem1_rec(new ArrayList<Integer>(), numSteps, steps);
}

public static void problem1_rec(List<Integer> sequence, int numSteps, int[] steps) {
    if (problem1_sum_seq(sequence) > numSteps) {
        return;
    }
    if (problem1_sum_seq(sequence) == numSteps) {
        problem1Ans.add(new ArrayList<Integer>(sequence));
        return;
    }
    for (int stepSize : steps) {
        sequence.add(stepSize);
        problem1_rec(sequence, numSteps, steps);
        sequence.remove(sequence.size() - 1);
    }
}

public static int problem1_sum_seq(List<Integer> sequence) {
    int sum = 0;
    for (int i : sequence) {
        sum += i;
    }
    return sum;
}

public static void main(String[] args) {
    problem1(10);
    System.out.println(problem1Ans.size());
}
My guess is that this runtime is k^n where k is the numbers of step sizes, and n is the number of steps (3 and 10 in this case).
I came to this answer because each call loops over all k step sizes. However, the depth of the recursion is not the same for all step sizes. For instance, the sequence [1,1,1,1,1,1,1,1,1,1] involves more recursive calls than [3,3,3,1], so this makes me doubt my answer.
What is the runtime? Is k^n correct?
TL;DR: Your algorithm is O(2^n), which is a tighter bound than O(k^n), but because of some easily corrected inefficiencies the implementation runs in O(k^2 × 2^n).
In effect, your solution enumerates all of the step-sequences with sum n by successively enumerating all of the viable prefixes of those step-sequences. So the number of operations is proportional to the number of step sequences whose sum is less than or equal to n. [See Notes 1 and 2].
Now, let's consider how many possible prefix sequences there are for a given value of n. The precise computation will depend on the steps allowed in the vector of step sizes, but we can easily come up with a maximum, because any step sequence is a subset of the set of integers from 1 to n, and we know that there are precisely 2^n such subsets.
Of course, not all subsets qualify. For example, if the set of step-sizes is [1, 2], then you are enumerating Fibonacci sequences, and there are O(φ^n) such sequences. As k increases, you will get closer and closer to O(2^n). [Note 3]
Because of the inefficiencies in your code, as noted, your algorithm is actually O(k^2 α^n) where α is some number between φ and 2, approaching 2 as k approaches infinity. (φ is 1.618..., or (1+√5)/2.)
There are a number of improvements that could be made to your implementation, particularly if your intent was to count rather than enumerate the step sizes. But that was not your question, as I understand it.
Notes
1. That's not quite exact, because you actually enumerate a few extra sequences which you then reject; the cost of these rejections is a multiplier by the size of the vector of possible step sizes. However, you could easily eliminate the rejections by terminating the for loop as soon as a rejection is noticed.
2. The cost of an enumeration is O(k) rather than O(1) because you compute the sum of the sequence arguments for each enumeration (often twice). That produces an additional factor of k. You could easily eliminate this cost by passing the current sum into the recursive call (which would also eliminate the multiple evaluations); a sketch of both fixes follows these notes. It is trickier to avoid the O(k) cost of copying the sequence into the output list, but that can be done using a better (structure-sharing) data structure.
3. The question in your title (as opposed to the problem solved by the code in the body of your question) does actually require enumerating all possible subsets of {1…n}, in which case the number of possible sequences would be exactly 2^n.
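A sketch of the fixes from notes 1 and 2 (my rewrite of the question's code, assuming problem1Ans as declared there): carry the running sum instead of recomputing it, and stop the loop as soon as a step would overshoot. The break assumes the steps array is sorted in ascending order.

public static void problem1(int numSteps) {
    int[] steps = {1, 2, 3};
    problem1_rec(new ArrayList<Integer>(), 0, numSteps, steps);
}

public static void problem1_rec(List<Integer> sequence, int sum, int numSteps, int[] steps) {
    if (sum == numSteps) {
        problem1Ans.add(new ArrayList<Integer>(sequence));
        return;
    }
    for (int stepSize : steps) {
        if (sum + stepSize > numSteps) break; // reject without recursing (note 1)
        sequence.add(stepSize);
        problem1_rec(sequence, sum + stepSize, numSteps, steps); // sum carried along (note 2)
        sequence.remove(sequence.size() - 1);
    }
}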
If you want to solve this recursively, you should use a pattern that allows caching of previous values, like the one used when calculating Fibonacci numbers. The code for the Fibonacci function is basically the same as what you seek: it adds the previous and pre-previous numbers by index and returns the result as the current number. You can use the same technique in your recursive function, but instead of adding f(k-1) and f(k-2), gather the sum of f(k - steps[i]). Something like this (I don't have a Java syntax checker, so bear with syntax errors please):
static List<Integer> cache = new ArrayList<Integer>();
static List<Integer> storedSteps = null; // if reused with the same steps, keep the cache

public static Integer problem1(Integer numSteps, List<Integer> steps) {
    if (!steps.equals(storedSteps)) { // compare data-wise, not reference-wise
        storedSteps = new ArrayList<Integer>(steps); // copy
        cache.clear(); // remove all data - now invalid
        // TODO make cache+storedSteps a single structure
    }
    return problem1_rec(numSteps, steps);
}

private static Integer problem1_rec(Integer numSteps, List<Integer> steps) {
    if (numSteps < 0) { return 0; }
    if (numSteps == 0) { return 1; }
    if (cache.size() > numSteps && cache.get(numSteps) != null) { return cache.get(numSteps); } // cache hit
    Integer acc = 0;
    for (Integer i : steps) { acc += problem1_rec(numSteps - i, steps); }
    while (cache.size() <= numSteps) { cache.add(null); } // grow the list so set() below is valid
    cache.set(numSteps, acc); // cache miss: store the freshly computed value
    return acc;
}
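Assuming the cleaned-up version above, problem1(10, Arrays.asList(1, 2, 3)) should return 274, matching the count printed by the enumeration code in the question.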

Java, Finding Kth largest value from the array [duplicate]

This question already has answers at: How to find the kth largest element in an unsorted array of length n in O(n)?
I had an interview with Facebook and they asked me this question.
Suppose you have an unordered array with N distinct values
$input = [3,6,2,8,9,4,5]
Implement a function that finds the Kth largest value.
EG: If K = 0, return 9. If K = 1, return 8.
What I did was this method.
private static int getMax(Integer[] input, int k)
{
    List<Integer> list = Arrays.asList(input);
    Set<Integer> set = new TreeSet<Integer>(list);
    list = new ArrayList<Integer>(set);
    int value = (list.size() - 1) - k;
    return list.get(value);
}
I just tested it, and the method works fine based on the question. However, the interviewer said: "In order to make your life complex, let's assume that your array contains millions of numbers, so your listing becomes too slow. What do you do in this case?"
As a hint, he suggested using a min heap. Based on my knowledge, each child value of a heap should not be more than the root value. So, in this case, if we assume that 3 is the root, then 6 is its child and its value is greater than the root's value. I'm probably wrong, but what do you think, and what would an implementation based on a min heap look like?
He has actually given you the whole answer, not just a hint.
Your understanding is based on a max heap, not a min heap. In a min heap, the root has the minimum value (less than its children).
So what you need to do is iterate over the array and populate K elements in the min heap.
Once that's done, the heap automatically has the lowest of them at the root.
Now, for each (next) element you read from the array:
-> Check if the value is greater than the root of the min heap.
-> If yes, remove the root from the min heap, and add the value to it.
After you traverse your whole array, the root of the min heap will automatically contain the kth largest element.
And all other elements in the heap (k-1 elements, to be precise) will be larger than it.
Here is an implementation of the min heap using PriorityQueue in Java. Complexity: O(n log k).
import java.util.PriorityQueue;

public class LargestK {

    private static Integer largestK(Integer array[], int k) {
        PriorityQueue<Integer> queue = new PriorityQueue<Integer>(k + 1);
        int i = 0;
        while (i <= k) {
            queue.add(array[i]);
            i++;
        }
        for (; i < array.length; i++) {
            Integer value = queue.peek();
            if (array[i] > value) {
                queue.poll();
                queue.add(array[i]);
            }
        }
        return queue.peek();
    }

    public static void main(String[] args) {
        Integer array[] = new Integer[] {3, 6, 2, 8, 9, 4, 5};
        System.out.println(largestK(array, 3));
    }
}
Output: 5
The code loops over the array, which is O(n). The size of the PriorityQueue (min heap) is k, so any operation on it is O(log k). In the worst-case scenario, in which all the numbers arrive sorted ascending, the complexity is O(n log k), because for each element you need to remove the top of the heap and insert a new one.
Edit: Check this answer for O(n) solution.
You can probably make use of PriorityQueue as well to solve this problem:
public int findKthLargest(int[] nums, int k) {
    int p = 0;
    int numElements = nums.length;
    // create a priority queue where all the elements of nums will be stored
    PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
    // place all the elements of the array into this priority queue
    for (int n : nums) {
        pq.add(n);
    }
    // extract the kth largest element
    while (numElements - k + 1 > 0) {
        p = pq.poll();
        k++;
    }
    return p;
}
From the Java doc:
Implementation note: this implementation provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size).
The for loop runs n times, and the complexity of the above algorithm is O(n log n).
A heap-based solution is perfect if the number of elements in the array/stream is unknown. But what if they are finite and you still want an optimized solution in linear time?
We can use Quick Select, discussed here.
Array = [3,6,2,8,9,4,5]
Let's choose the pivot to be the first element:
pivot = 3 (at index 0)
Now partition the array so that all elements less than or equal to 3 are on the left side and numbers greater than 3 are on the right side, as is done in Quick Sort (discussed on my blog).
So after the first pass: [2,3,6,8,9,4,5]
The pivot index is 1 (i.e. it's the second lowest element). Now apply the same process again,
choosing 6 this time, the value at the index after the previous pivot: [2,3,4,5,6,8,9]
Now 6 is in its proper place.
Keep checking whether you have found the appropriate number (the kth largest or kth lowest) in each iteration. If it's found, you are done; otherwise continue.
One approach for constant values of k is to use a partial insertion sort.
(This assumes distinct values, but can easily be altered to work with duplicates as well)
last_min = -inf
output = []
for i in (0..k)
    min = +inf
    for value in input_array
        if value < min and value > last_min
            min = value
    output[i] = min
    last_min = min
print output[k-1]
(That's pseudo code, but should be easy enough to implement in Java).
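For instance, a direct Java rendering of the idea (my code, not the answerer's), mirrored to scan for successive maxima so that it returns the kth largest, as the question asks (the pseudocode above collects successive minima):

// Partial selection by repeated linear scans: find the largest value, then
// the largest value strictly below it, and so on, k times. O(n*k) overall.
// Assumes distinct values and 1-indexed k.
static int kthLargest(int[] input, int k) {
    int lastMax = Integer.MAX_VALUE; // sentinel: nothing emitted yet
    int current = Integer.MIN_VALUE;
    for (int i = 0; i < k; i++) {
        current = Integer.MIN_VALUE;
        for (int value : input) {
            if (value > current && value < lastMax) {
                current = value; // best candidate strictly below the last emitted value
            }
        }
        lastMax = current;
    }
    return current;
}

For example, kthLargest(new int[] {3, 6, 2, 8, 9, 4, 5}, 2) returns 8.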
The overall complexity is O(n*k), which means it works pretty well if and only if k is constant or known to be less than log(n).
On the plus side, it is a really simple solution. On the minus side, it is not as efficient as the heap solution.
