Can InsertionSort complexity be brought down to O(n log n)? - java

I was wondering if, by using binary search and ArrayLists in Java, the complexity of InsertionSort can be brought down from O(n^2) to O(n log n).

There are 2 things that contribute to the O(n^2) complexity of insertion sort:
1) Searching for the appropriate position for the element to be inserted - which is O(n) per iteration. This can be reduced to O(log n) using binary search.
2) Once the correct position is found, shifting the elements greater than (or smaller than) the element to the right. In the worst case you insert an element at the front of the list, which requires shifting all elements in the sorted portion to the right - worst case O(n) per iteration.
So the total complexity using binary search and an ArrayList would be:
O(n * log n + n * n), which is still O(n^2), because the shifting cost dominates.

The complexity of plain insertion sort is:
Iterate through all the remaining elements which are to be sorted (O(n) for n elements).
a. If the next element (say at position k) is greater than the element at position k-1, do nothing (the check is O(1)). So O(n * 1) is the best case, where all elements are already sorted.
b. If the element at position k is smaller than the one at k-1, move it left until it is greater than the element to its left or there are no elements left to compare. So the worst case is O(n * n).
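For illustration, here is a minimal sketch of such a "binary insertion sort" on an ArrayList (the class and method names are made up). Collections.binarySearch finds the insertion point in O(log n), but List.add(index, value) still shifts elements, so the overall worst case stays O(n^2):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BinaryInsertionSort {

    // Sorts the list in place. Binary search replaces the O(n) comparison scan
    // with O(log n), but add(pos, value) still shifts elements in O(n),
    // so the total remains O(n^2) in the worst case.
    static void binaryInsertionSort(List<Integer> list) {
        for (int i = 1; i < list.size(); i++) {
            Integer value = list.remove(i);
            // binarySearch on the sorted prefix [0, i) returns -(insertionPoint) - 1
            // when the value is not present.
            int pos = Collections.binarySearch(list.subList(0, i), value);
            if (pos < 0) pos = -pos - 1;
            list.add(pos, value); // O(n) shift
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(5, 2, 9, 1, 5, 6));
        binaryInsertionSort(list);
        System.out.println(list); // [1, 2, 5, 5, 6, 9]
    }
}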

Related

Best way to retrieve K largest elements from large unsorted arrays?

I recently had a coding test during an interview. I was told:
There is a large unsorted array of one million ints. User wants to retrieve K largest elements. What algorithm would you implement?
During this, I was strongly hinted that I needed to sort the array.
So, I suggested using the built-in sort() or maybe a custom implementation if performance really mattered. I was then told that by using a Collection or array to store the k largest elements and a for-loop, it is possible to achieve approximately O(N). In hindsight, I think it's O(N*k), because each iteration needs to compare against the K-sized array to find the smallest element to replace, while the need to sort the array would make the code at least O(N log N).
I then reviewed this link on SO that suggests a priority queue of K numbers, removing the smallest number every time a larger element is found, which would also give O(N log N): Write a program to find 100 largest numbers out of an array of 1 billion numbers
Is the for-loop method bad? How should I justify the pros/cons of using the for-loop versus the priority-queue/sorting methods? I'm thinking that if the array is already sorted, it could help by not needing to iterate through the whole array again, i.e. if some other retrieval method is called on the sorted array, it should be constant time. Is there some performance factor when running the actual code that I didn't consider when theorizing about pseudocode?
Another way of solving this is using Quickselect. This should give you a total average time complexity of O(n). Consider this:
Find the kth largest number x using Quickselect (O(n))
Iterate through the array again (or just through the right-side partition) (O(n)) and save all elements ≥ x
Return your saved elements
(If there are repeated elements, you can avoid them by keeping count of how many duplicates of x you need to add to the result.)
The difference between your problem and the one in the SO question you linked to is that you have only one million elements, so they can definitely be kept in memory to allow normal use of Quickselect.
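As a rough illustration of that approach, here is one way it could look in Java, using a hand-rolled Quickselect with a random Lomuto partition (all class and method names are made up for this sketch; it returns the k largest values unordered):

import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

public class QuickselectTopK {

    // Returns the k largest values (unordered). Average O(n), worst case O(n^2).
    static int[] topK(int[] arr, int k) {
        int[] a = arr.clone();
        // After this call, a[a.length - k] is the kth largest element and
        // everything to its right is >= it.
        quickselect(a, 0, a.length - 1, a.length - k);
        return Arrays.copyOfRange(a, a.length - k, a.length);
    }

    // Places the element with the given (0-based) sorted rank at index 'rank'.
    static void quickselect(int[] a, int lo, int hi, int rank) {
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p == rank) return;
            if (p < rank) lo = p + 1; else hi = p - 1;
        }
    }

    // Lomuto partition around a random pivot; returns the pivot's final index.
    static int partition(int[] a, int lo, int hi) {
        int pivotIndex = ThreadLocalRandom.current().nextInt(lo, hi + 1);
        swap(a, pivotIndex, hi);
        int pivot = a[hi], store = lo;
        for (int i = lo; i < hi; i++) {
            if (a[i] < pivot) swap(a, i, store++);
        }
        swap(a, store, hi);
        return store;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] result = topK(new int[]{3, -1, 3, 12, 7, 8, -5, 9, 27}, 3);
        Arrays.sort(result);
        System.out.println(Arrays.toString(result)); // [9, 12, 27]
    }
}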
There is a large unsorted array of one million ints. The user wants to retrieve the K largest elements.
During this, I was strongly hinted that I needed to sort the array.
So, I suggested using a built-in sort() or maybe a custom implementation
That wasn't really a hint I guess, but rather a sort of trick to deceive you (to test how strong your knowledge is).
If you choose to approach the problem by sorting the whole source array using the built-in Dual-Pivot Quicksort, you can't obtain time complexity better than O(n log n).
Instead, we can maintain a PriorityQueue that holds the result. While iterating over the source array, for each element we check whether the queue has reached size K. If not, the element is added to the queue; otherwise (size equals K), we compare the next element against the lowest element in the queue: if the next element is smaller or equal, we ignore it; if it is greater, the lowest element is removed and the new element is added.
The time complexity of this approach is O(n log k), because adding a new element into a PriorityQueue of size k costs O(log k), and in the worst-case scenario this operation is performed n times (because we iterate over an array of size n).
Note that the best case time complexity would be Ω(n), i.e. linear.
So the difference between sorting and using a PriorityQueue, in terms of Big O, boils down to the difference between O(n log n) and O(n log k). When k is much smaller than n, this approach gives a significant performance gain.
Here's an implementation:
public static int[] getHighestK(int[] arr, int k) {
    Queue<Integer> queue = new PriorityQueue<>(); // min-heap: its head is the smallest of the retained elements
    for (int next : arr) {
        // if the queue is full and the new element is larger than its smallest element, evict the smallest
        if (queue.size() == k && queue.peek() < next) queue.remove();
        // add the new element while there is room (i.e. fewer than k elements retained)
        if (queue.size() < k) queue.add(next);
    }
    return toIntArray(queue);
}

public static int[] toIntArray(Collection<Integer> source) {
    return source.stream().mapToInt(Integer::intValue).toArray();
}
main()
public static void main(String[] args) {
    System.out.println(Arrays.toString(getHighestK(new int[]{3, -1, 3, 12, 7, 8, -5, 9, 27}, 3)));
}
Output:
[9, 12, 27]
Sorting in O(n)
We can achieve worst case time complexity of O(n) when there are some constraints regarding the contents of the given array. Let's say it contains only numbers in the range [-1000,1000] (sure, you haven't been told that, but it's always good to clarify the problem requirements during the interview).
In this case, we can use Counting sort, which has linear time complexity. Or better, just build a histogram (the first step of Counting sort) and look at the highest-valued buckets until you've seen K counts (i.e. don't actually expand back into a fully sorted array, just expand the counts back into the top K sorted elements). Building a histogram is only efficient if the range of possible input values is smaller than the size of the input array.
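A minimal sketch of that histogram idea, assuming all values are known to lie in [-1000, 1000] and 0 < k <= arr.length (the method name is made up):

// Assumes every value lies in [-1000, 1000]; returns the k largest values
// in descending order. O(n + range) time, O(range) extra space.
static int[] topKByHistogram(int[] arr, int k) {
    final int MIN = -1000, MAX = 1000;
    int[] counts = new int[MAX - MIN + 1];
    for (int v : arr) counts[v - MIN]++;          // build the histogram

    int[] result = new int[k];
    int filled = 0;
    // Walk the buckets from the highest value down, emitting counts.
    for (int v = MAX; v >= MIN && filled < k; v--) {
        for (int c = counts[v - MIN]; c > 0 && filled < k; c--) {
            result[filled++] = v;
        }
    }
    return result;
}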
Another possibility is when the given array is partially sorted, consisting of several sorted chunks. In this case, we can use Timsort, which is good at finding sorted runs; it will deal with them in linear time.
And Timsort is already implemented in Java, where it is used to sort objects (not primitives). So we can take advantage of a well-optimized and thoroughly tested implementation instead of writing our own, which is great. But since we are given an array of primitives, using the built-in Timsort has an additional cost: we need to copy the contents of the array into an array (or list) of the wrapper type.
This is a classic problem that can be solved with so-called heapselect, a simple variation on heapsort. It can also be solved with quickselect, but that, like quicksort, has a poor quadratic worst-case time complexity.
Simply keep a min-oriented priority queue, implemented as a binary heap, of size k. Walk through the array and insert each value into the heap (worst case O(log k)). When the priority queue grows beyond size k, delete the minimum value at the root (worst case O(log k)). After going through the n array elements, you have removed the n-k smallest elements, so the k largest elements remain. It is easy to see that the worst-case time complexity is O(n log k), which is faster than O(n log n), at the cost of only O(k) space for the heap.
Here is one idea: create an int array whose size is the maximum value of int (2147483647). Then, for every number in a for-each loop over the original array, increment the element of the new array at the index equal to that number.
So at the end of this for-each I will have something like [1, 0, 2, 0, 3] (the array that I created), which represents the numbers [0, 2, 2, 4, 4, 4] (the initial array).
So to find the K biggest elements, you can iterate backwards over the created array and count down from K every time you encounter an element that is not 0. If that element is, for example, 2, you have to count the corresponding number twice.
The limitation of this approach is that it only works with integers, because of the nature of the array...
Also, the representation of int in Java ranges from -2147483648 to 2147483647, which means that only the non-negative numbers can be used as indexes into the created array.
NOTE: if you know the maximum value that can occur in the input, then you can lower the size of the created array to that maximum. For example, if the maximum int is 1000, then the array you need to create has size 1000 (plus one, to hold the index 1000), and this algorithm should perform very fast.
I think you misunderstood what you needed to sort.
You need to keep the K-sized list sorted; you don't need to sort the original N-sized input array. That way the time complexity would be O(N * log(K)) in the worst case (assuming you need to update the K-sized list almost every time).
The requirements said that N is very large and K is much smaller, so O(N * log(K)) is also smaller than O(N * log(N)).
You only need to update the K-sized list for each record that is larger than the K-th largest element seen before it. For a randomly distributed list with N much larger than K, that will be negligible, so the time complexity will be closer to O(N).
For the K-sized list, you can take a look at the implementation in Is there a PriorityQueue implementation with fixed capacity and custom comparator?, which uses a PriorityQueue with some additional logic around it.
There is an algorithm that does this with worst-case time complexity O(n*log(k)) and very benign constant factors (since there is just one pass through the original array, and the inner part that contributes the log(k) is only reached relatively seldom if the input data is well-behaved).
Initialize a priority queue A, implemented with a binary heap, of maximum size k (internally using an array for storage). In the worst case, this has O(log(k)) cost for inserting, deleting and searching/manipulating the minimum element (in fact, retrieving the minimum is O(1)).
Iterate through the original unsorted array, and for each value v:
If A is not yet full then
insert v into A,
else, if v>min(A) then (*)
insert v into A,
remove the lowest value from A.
(*) Note that A can end up with repeated values if some of the highest k values occur repeatedly in the source set. You can avoid that with a search operation to make sure that v is not yet in A. You would also want a suitable data structure for that (since searching the priority queue has linear complexity), e.g. a secondary hash table or a balanced binary search tree, both of which are available in java.util.
The java.util.PriorityQueue helpfully guarantees the time complexity of its operations:
this implementation provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size).
Note that, as laid out above, we only ever remove the lowest (first) element from A, so we enjoy the O(log(k)) bound for that. If you want to avoid duplicates as mentioned above, then you also need to search A for any new value before adding it (at O(k) per search), which opens you up to a worst-case overall scenario of O(n*k) instead of O(n*log(k)) in the case of a pre-sorted input array, where every single element v causes the inner branch to fire.
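Here is a rough sketch of that variant, using a HashSet alongside the heap so duplicate values are skipped and membership checks stay O(1) expected rather than the linear-time contains of PriorityQueue (the method name is made up):

import java.util.HashSet;
import java.util.PriorityQueue;
import java.util.Set;

// Keeps the k largest *distinct* values seen so far.
// The HashSet gives O(1) expected membership checks, so the overall
// expected cost stays O(n log k) rather than degrading to O(n * k).
static int[] topKDistinct(int[] arr, int k) {
    PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap, root = smallest kept value
    Set<Integer> seen = new HashSet<>();                 // mirrors the heap's contents
    for (int v : arr) {
        if (seen.contains(v)) continue;                  // already kept, skip duplicates
        if (heap.size() < k) {
            heap.add(v);
            seen.add(v);
        } else if (heap.peek() < v) {
            seen.remove(heap.poll());                    // evict the smallest kept value
            heap.add(v);
            seen.add(v);
        }
    }
    return heap.stream().mapToInt(Integer::intValue).toArray();
}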

Time complexity halving an array

What would be the time complexity of partitioning an array in two and finding the minimum element overall?
Is it O(n) or O(log n)?
The complexity of dividing an (unsorted) array into 2 sorted partitions is O(N log N).
Once you have two sorted partitions, it is O(1) to find the smallest element in either ... and hence both partitions.
(The smallest element of a sorted partition is the first one.)
Time Complexity for Partitioned Array
Suppose an array A is already divided into two sorted partitions P1 and P2, where P1 occupies the indexes 0 <= i < k of A and P2 the indexes k <= i < n, with k an arbitrary index within the range 0 <= k < n.
Then you know that the smallest element of each partition is its first. Accessing each partition's first element has a time complexity of O(1), and comparing the two smallest values retrieved is again O(1).
So, the overall complexity of finding the minimum value in an array divided into two sorted partitions is O(1).
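As a trivial sketch of that O(1) lookup, assuming the second partition starts at index k (the method name is made up):

// A[0..k-1] and A[k..n-1] are each sorted, so the overall minimum is the
// smaller of the two partitions' first elements - O(1).
static int minOfTwoSortedPartitions(int[] A, int k) {
    return Math.min(A[0], A[k]);
}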
Time Complexity for Array to Partition
Suppose instead that the given array A has to be divided into two sorted partitions (because this is a requirement) and then you need to find its minimum element. Then you need to split the array into two partitions at an arbitrary index k, sort the two partitions with the most efficient comparison-based sorting algorithm, which has complexity O(n log n), and then apply the same logic exposed above to find the minimum element.
For any given value of k, with 0 <= k < n, we have to apply the sorting algorithm twice (once on each partition). However, the additive property of complexity states that the sum of two complexities of the same order is still of that order. For example, for k = n/2 we would have O(n/2 log n/2) + O(n/2 log n/2), which still produces O(n log n). More generally, O(k log k) + O((n-k) log (n-k)) with 0 <= k < n still gives O(n log n) as n → ∞, due to the constant-factors property. Finally, we add to this the constant complexity O(1) of finding the minimum element among the two partitions, which still gives O(n log n).
In conclusion, the overall complexity for dividing an array A in two partitions P1 and P2 and finding the minimum element overall is O(n log n).
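And a minimal sketch of that second case, where the partitions still have to be sorted first, using Arrays.sort on the two index ranges (the method name is made up):

import java.util.Arrays;

// Sorts A[0..k-1] and A[k..n-1] independently (O(n log n) overall),
// then compares the two partitions' first elements (O(1)).
static int partitionSortAndFindMin(int[] A, int k) {
    Arrays.sort(A, 0, k);        // sort the first partition
    Arrays.sort(A, k, A.length); // sort the second partition
    return Math.min(A[0], A[k]);
}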

Algorithm with O(m (log n + log m)) time complexity for finding kth smallest element in n*m matrix with each row sorted?

I ran into an interview question recently.
We have an m*n matrix such that each row is in non-decreasing order (sorted, with distinct elements). Design an algorithm of order O(m (log m + log n)) to find the k-th smallest element in this matrix (just one element, the k-th smallest).
I think this is not possible, so I searched on Google and found this link, another solution, and this answer to a similar question.
I think as follows:
Put the median of each row into an array; we find the median of this array in O(m) and call it the pivot.
We find the rank of this element in O(m log n), i.e. in each row, how many elements are lower than the pivot found in step (1).
By comparing k and the rank of the pivot, we know for each row whether to continue working on its right half or its left half (reducing to an m*(n/2) matrix).
But the time complexity of this algorithm is O(m * log^2 n). Is there an algorithm that works in O(m (log n + log m))? Any ideas?
m - rows
n - columns
Is it compulsory that you want a solution with O(m (log m + log n)) complexity?
I can think of a solution with complexity O(k * log m), with O(m) extra space.
You can use a modified PriorityQueue (heap) data structure to get this complexity:
class PQObject {
    int value; // the PQ orders entries by this int
    int m;     // m and n are the element's position in the matrix
    int n;
}
You can just put all the values from the first column into the priority queue and start popping until you reach the kth smallest element.
Every time you pop, re-insert the next value of that row, using the m and n stored in the popped object.
Ultimately the problem comes down to finding the kth smallest element across m sorted arrays.
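A minimal sketch of that heap-of-rows approach (the PQObject position fields m and n are renamed row and col here, and the constructor, comparator, class name and test values are added just for illustration; it assumes 1 <= k <= total element count):

import java.util.Comparator;
import java.util.PriorityQueue;

class PQObject {
    int value; // the heap orders entries by this value
    int row;   // which row the value came from
    int col;   // the value's column within that row

    PQObject(int value, int row, int col) {
        this.value = value;
        this.row = row;
        this.col = col;
    }
}

public class KthSmallestInRowSortedMatrix {
    // Roughly O((m + k) log m): seed the heap with each row's first element
    // (m pushes), then pop k-1 times, each pop/push costing O(log m).
    static int kthSmallest(int[][] matrix, int k) {
        PriorityQueue<PQObject> heap =
                new PriorityQueue<>(Comparator.comparingInt((PQObject o) -> o.value));
        for (int r = 0; r < matrix.length; r++) {
            if (matrix[r].length > 0) heap.add(new PQObject(matrix[r][0], r, 0));
        }
        PQObject top = heap.poll();
        for (int popped = 1; popped < k; popped++) {
            int nextCol = top.col + 1;
            if (nextCol < matrix[top.row].length) {
                heap.add(new PQObject(matrix[top.row][nextCol], top.row, nextCol));
            }
            top = heap.poll();
        }
        return top.value;
    }

    public static void main(String[] args) {
        int[][] m = {{1, 5, 9}, {2, 6, 10}, {3, 7, 11}};
        System.out.println(kthSmallest(m, 4)); // 5
    }
}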

Why do these loop & hashing operations take O(N) time complexity?

Given the array:
int arr[] = {1, 2, 3, 2, 3, 1, 3};
You are asked to find a number within the array that occurs an odd number of times. Here it's 3 (occurring 3 times). The time complexity should be O(n).
The solution is to use a HashMap. Elements become keys and their counts become values of the HashMap.
// Code belongs to geeksforgeeks.org
// Function to find the element occurring an odd number of times
static int getOddOccurrence(int arr[], int n)
{
    HashMap<Integer, Integer> hmap = new HashMap<>();

    // Putting all elements into the HashMap
    for (int i = 0; i < n; i++)
    {
        if (hmap.containsKey(arr[i]))
        {
            // If the array element is already present,
            // increase its count.
            int val = hmap.get(arr[i]);
            hmap.put(arr[i], val + 1);
        }
        else
            // If the array element is not present, put it
            // into the HashMap and initialize its count to one.
            hmap.put(arr[i], 1);
    }

    // Checking for an odd occurrence of each element
    // present in the HashMap
    for (Integer a : hmap.keySet())
    {
        if (hmap.get(a) % 2 != 0)
            return a;
    }
    return -1;
}
I don't get why this overall operation takes O(N) time complexity. If I think about it, the loop alone takes O(N). The hmap.put (an insert operation) and hmap.get (a find operation) take O(N) each, and they are nested within the loop. So normally I would think this function takes O(N^2) time. Why does it instead take O(N)?
I don't get why this overall operation takes O(N) time complexity.
You must examine all elements of the array - O(N)
For each element of the array you call containsKey, get and put on the map. These are O(1) operations; or more precisely, they are O(1) on average, amortized over the lifetime of the HashMap. This is due to the fact that a HashMap grows its hash array when the ratio of the number of elements to the array size exceeds the load factor.
O(N) repetitions of 2 or 3 O(1) operations is O(N). QED
Reference:
Is a Java hashmap really O(1)?
Strictly speaking there are a couple of scenarios where a HashMap is not O(1).
If the hash function is poor (or the key distribution is pathological), the hash chains will be unbalanced. With early HashMap implementations, this could lead to (worst case) O(N) operations, because operations like get had to search a long hash chain. With recent implementations, HashMap constructs a balanced binary tree for any hash chain that gets too long, which leads to worst-case O(log N) operations.
HashMap is unable to grow the hash array beyond 2^30 hash buckets, so at that point HashMap complexity starts transitioning towards O(log N). However, if you have a map that size, other secondary effects will probably have affected the real performance anyway.
The algorithm first iterates over the array of numbers, of size n, to build the map of occurrence counts. It should be clear why this is an O(n) operation. Then, after the HashMap has been built, it iterates over that map and finds all entries whose counts are odd. The size of this map is in practice somewhere between 1 (when all input numbers are the same) and n (when all inputs are different). So this second pass is also bounded by O(n), leaving the entire algorithm O(n).

First list is sorted and the other is unsorted - which is the better approach to merge both lists?

I have one sorted ArrayList A and one unsorted ArrayList B. Now I want to merge the items of B into A such that A remains sorted.
Now I can think of only two ways to do this.
The first one is to sort ArrayList B and then keep two index positions, one for ArrayList A and the other for ArrayList B; we then move the indexes one by one to insert B's items into A.
Let us assume the size of ArrayList A is n and the size of ArrayList B is m.
The order of complexity will be O(m Log(m)) (for sorting ArrayList B) + O(n + m).
The second approach is to just keep an index on ArrayList B and then use binary search to place each item from ArrayList B into A.
The order of complexity will be O(Log(n) * m).
Now can anybody please tell me which approach I should opt for? Also, if you can think of any other approach better than these two, please mention it.
It depends on the relative size of n and m.
When n > m*log(m), the running time of the first algorithm, with complexity O(m*Log(m) + max(n,m)), is dominated by the linear term in n (notice that in this scenario max(n,m) = n, since n > m*log(m)). In this case the second algorithm, with complexity O(log(n) * m), would be better.
The exact practical cutoff point depends on the constant factors of each algorithm's particular implementation, but in principle the second algorithm becomes better as n gets bigger in relation to m, and eventually becomes the better option. In other words, for every possible value of m there exists a big enough value of n for which the second algorithm is better.
EDIT: THE ABOVE IS PARTLY WRONG
I answered assuming the given complexities for both algorithms, but now I'm not sure the complexity of the second one is correct. You propose inserting each number from the unsorted list into the sorted list using binary search, but how exactly would you do this? If you have a linked list you cannot do binary search. If you have an array you need to shift part of the array on each insert, and that is a linear overhead on each insert. I'm not sure if there is a way to achieve this with a more complex data structure, but you cannot do it with either a linked list or an array.
To clarify, if you had two algorithms with those time complexities, then my original answer holds, but your second algorithm doesn't have the O(m log(n)) complexity we assumed.
Binary-search insertion: m * log(n) = O(m log n)
Sort B, then merge: m * log(m) + n + m = O(m log m + n)
if n <= m {
    use binary-search insertion
} else {
    sort B, then merge
}
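For reference, a minimal sketch of the sort-then-merge approach (O(m log m) to sort B plus O(n + m) for the merge); it returns a new sorted list rather than modifying A in place, and the method name is made up:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Merges the unsorted list B into the sorted list A and returns a new sorted list.
static List<Integer> mergeIntoSorted(List<Integer> a, List<Integer> b) {
    List<Integer> sortedB = new ArrayList<>(b);
    Collections.sort(sortedB);                       // O(m log m)

    List<Integer> result = new ArrayList<>(a.size() + b.size());
    int i = 0, j = 0;
    while (i < a.size() && j < sortedB.size()) {     // standard two-pointer merge, O(n + m)
        if (a.get(i) <= sortedB.get(j)) result.add(a.get(i++));
        else result.add(sortedB.get(j++));
    }
    while (i < a.size()) result.add(a.get(i++));
    while (j < sortedB.size()) result.add(sortedB.get(j++));
    return result;
}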
