So I have a program that performs a heap sort, and I have a remove-element function. I do this by taking the last element, all the way on the right side, and replacing the removed n-th element with it. To keep the heap valid, I then bubble that element down to its correct place in the heap. My friend doesn't think this will work. Will it?
Heapsort uses a max-heap. Each time, you swap the root node, which is the maximum, with the nth element, placing that maximum into the result list from right to left.
Then, bubble the new root element down to its proper position, and decrease the heap's size from n to n-1.
The root node of the new heap is then the maximum of the remaining n-1 elements. Repeat the process: swap the root element with the (n-1)th element, again put the maximum into the result list, and decrease the heap's size by 1, until the heap's size is 0.
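To make the "replace with the last element, then bubble down" step concrete, here is a minimal sketch of removing the maximum from an array-backed max-heap. The names `removeMax` and `siftDown` are hypothetical helpers, not from the asker's program:

```java
// Sketch of removing the max from a binary max-heap stored in an array.
// Assumes heap[0..size-1] satisfies the max-heap property.
public class MaxHeap {
    // Remove and return the maximum (root), restoring the heap property.
    static int removeMax(int[] heap, int size) {
        int max = heap[0];
        heap[0] = heap[size - 1];    // move the last element into the root
        siftDown(heap, 0, size - 1); // bubble it down to its correct place
        return max;
    }

    // Bubble the element at index i down until both children are smaller.
    static void siftDown(int[] heap, int i, int size) {
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, largest = i;
            if (left < size && heap[left] > heap[largest]) largest = left;
            if (right < size && heap[right] > heap[largest]) largest = right;
            if (largest == i) return; // both children smaller: done
            int tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
            i = largest;
        }
    }
}
```

The same bubble-down also works for removing an arbitrary element, which is essentially what the question describes.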
Suppose you're given an arbitrary array of length n. Which of the following operations can you perform on the array in worst-case O(1) time?
A. Remove the ith element, decreasing the size by 1
B. Insert an element at the ith position, increasing the size by 1
C. Find the maximum element
D. Swap the elements at locations i and j
In this question, I'm not sure about the definition of "arbitrary array". It seems that D is correct, but I'm not sure. Could anybody explain? Many thanks!
I think by "arbitrary", it might just mean that the values in the array don't matter.
A) Removing the ith element and decreasing the array's size by one is O(n): when you remove the ith element, you have to shift every element after it down one index. If you removed the 0th element, you would need to move n - 1 elements down one index.
B) Inserting an element at the ith position and increasing the array's size by one is O(n) for a similar reason: if I add an element at the beginning of the array, I need to move all the other elements up one. Not to mention that because arrays have a fixed size, I would also need to allocate a new array and copy over the existing elements.
C) Finding the maximum element takes O(n) time in the worst case. You need to look through the elements to find the max value, and if the max happens to sit at the very end of the array, you have to examine every value before it.
D) Swapping elements is just O(1). It's a constant-time operation that doesn't require any loops.
Something like that. Hope it helps!
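To see why D is constant time: a swap touches exactly two slots and uses a fixed number of assignments, no matter how long the array is. A minimal sketch:

```java
// Swapping two elements uses a fixed number of assignments, so it is O(1)
// regardless of the array's length.
public class SwapDemo {
    static void swap(int[] a, int i, int j) {
        int tmp = a[i]; // three assignments, independent of a.length
        a[i] = a[j];
        a[j] = tmp;
    }
}
```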
I am confused about the search complexity of LinkedList in Java. I have read that the time complexity to search for an element in a LinkedList is O(n).
say for example,
LinkedList<String> link=new LinkedList<String>();
link.add("A");
link.add("B");
link.add("C");
System.out.println(link.get(1));
Now, from this it seems that, via the get(index) method, searching for an element should take O(1) time. But I have read that it takes O(n).
Can anybody help me get a clear understanding of this?
Access in a linked list implementation, like java.util.LinkedList, is O(n). To get an element from the list, there is a loop that follows links from one element to the next. In the worst case, in a list of n elements, n iterations of the loop are executed.
Contrast that with an array-based list, like java.util.ArrayList. Given an index, one random-access operation is performed to retrieve the data. That's O(1).
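To make the contrast concrete, here is the question's own snippet again: the call looks identical on both list types, but the LinkedList reaches index 1 by following a link from the head, while an ArrayList would index directly into its backing array.

```java
import java.util.LinkedList;
import java.util.List;

// LinkedList.get(i) walks links from the head (O(n) in the worst case);
// ArrayList.get(i) indexes directly into an array (O(1)).
public class GetDemo {
    public static void main(String[] args) {
        List<String> link = new LinkedList<>();
        link.add("A");
        link.add("B");
        link.add("C");
        // prints B, but only after following one link from the head
        System.out.println(link.get(1));
    }
}
```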
A linked list is just that: a list of items linked together by some means, such as pointers. To search a linked list, you iterate over each item in the list. The most time this will take is T(n), where n is the length of your list.
Big-O notation describes an upper bound, i.e. the worst-case scenario.
If you are searching for an item at index 1 it will finish almost immediately, T(1), since it was the first item in the list. If you are to search for the n^th item it will take T(n) time, thus staying in O(n).
A visual representation of a linked list:
[1] -> [2] -> [3] -> [4] -> ... [n-1] -> [n]
An example of what a get() method might look like
Node get(int i)
{
    Node current = head;
    // follow i links from the head of the list
    while (i > 0)
    {
        current = current.getNext();
        i--;
    }
    return current;
}
As you see, it iterates over each node within the list.
Big-O (the "Ordo" function) gives an upper bound. So the search is O(n) because, if the list has n entries and you want the last one, the search goes through all n items.
On the other hand, lookup in an ArrayList is O(1), because the lookup time is essentially constant, regardless of the size of the list and the index you are looking for.
If I am trying to remove the first element (index 0), would it be more time-efficient to do list.remove(0), or to use a queue and queue.dequeue()? I know delete for an ArrayList is O(n); does this still hold true if you provide the index to remove? I am new to Java and algorithms, so please bear with me if this is a dumb question.
Yes, ArrayList is only fast at adding and removing close to the end. It takes O(n) time to add or remove at or near the beginning, even if you provide an index.
If you need a queue, use ArrayDeque. It's fast at both ends.
If I am trying to remove the first element (index 0), would it be more time efficient to do list.remove(0) <- removes index 0 or use a queue and queue.dequeue().
If you are always removing from the beginning, i.e. index 0, then a queue is better, because it just needs to shift the head index and the complexity is O(1), whereas for an ArrayList it involves left-shifting and the complexity is O(n). Otherwise, a list is the right choice if you want to remove at an arbitrary index.
I know delete for arraylist is o(n), does this still hold true if you provide the index to remove from?
Yes. When you delete an element at a particular index in an ArrayList, it leaves a hole (or gap). This has to be filled, because arrays are contiguous, so we have to left-shift all the elements to the right of the removed element.
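The ArrayDeque suggested above avoids exactly this shifting: removing from the front just advances an internal head index. A minimal sketch (`takeFirst` is a hypothetical helper name):

```java
import java.util.ArrayDeque;

// ArrayDeque used as a queue: pollFirst() just advances the internal head
// index, so removing from the front is O(1) -- no element shifting, unlike
// ArrayList.remove(0).
public class DequeDemo {
    static String takeFirst(ArrayDeque<String> queue) {
        return queue.pollFirst(); // O(1): moves the head pointer
    }
}
```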
I was curious about a specific issue regarding unsorted linked lists. Let's say we have an unsorted linked list based on an array implementation. Would it be important or advantageous to maintain the current order of elements when removing an element from the middle of the list? That hole would have to be filled, so suppose we take the last element in the list and insert it into the hole. Is the time complexity of shifting all the elements over greater than that of moving the single element?
You can remove an item from a linked list without leaving a hole.
A linked list is not represented as an array of contiguous elements. Instead, it's a chain of elements with links. You can remove an element merely by linking its adjacent elements to each other, in a constant-time operation.
Now, if you had an array-based list, you could choose to implement deletion of an element by shifting the last element into position. This would give you O(1) deletion instead of O(n) deletion. However, you would want to document this behavior.
Is the time complexity of shifting all elements over greater than moving that single element?
Yes, for an array-based list. Shifting all the subsequent elements is O(n), and moving a single element is O(1).
java.util.List
If your list were an implementation of java.util.List, note that java Lists are defined to be ordered collections, and the List.remove(int index) method is defined to shift the remaining elements.
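Here is a minimal sketch of the swap-in-the-last-element idea for an array-backed list. `removeAt` is a hypothetical helper; as noted above, `java.util.List.remove(int)` does not behave this way, since it is specified to shift the remaining elements:

```java
// Unordered removal from an array-backed list: overwrite the removed slot
// with the last element instead of shifting everything left. O(1), but the
// relative order of the elements is not preserved.
public class UnorderedRemove {
    // Returns the new logical size of the list stored in a[0..size-1].
    static int removeAt(int[] a, int size, int i) {
        a[i] = a[size - 1]; // fill the hole with the last element
        return size - 1;
    }
}
```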
Yes. Using an array implementation, it would have a larger time complexity: up to n/2 shifts (if the element was in the middle of the array) to move all the entries over, whereas moving the one element is constant time.
Since you are using an array, the answer is yes, because you have to make multiple assignments.
If you had used nodes instead, it would be better in terms of complexity.
So I am currently learning Java, and I was asking myself: why doesn't the insertion sort method need to use a swap operation? As far as I understood, elements get swapped, so wouldn't it be useful to use the swap operation in this sorting algorithm?
As I said, I am new to this, but I try to understand the background of these algorithms and why they are the way they are.
Would be happy for some insights :)
Wikipedia's article for Insertion sort states:
Each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain. [...] If smaller, it finds the correct position within the sorted list, shifts all the larger values up to make a space, and inserts into that correct position.
You can consider this shift an extreme swap. What actually happens is that the value is stored in a placeholder and compared against the other values. If those values are larger, they are simply shifted, i.e. moved one position over in the list/array. The placeholder's value is then put into the position from which the last element was shifted.
Insertion Sort does not perform swapping. It performs insertions by shifting elements in a sequential list to make room for the element that is being inserted.
That is why it is an O(N^2) algorithm: for each element out of N, there can be O(N) shifts.
So, you could do insertion sort by swapping.
But is that the best way to do it? Think about what a swap is...
temp = a;
a = b;
b = temp;
There are 3 assignments for a single swap.
e.g. [2, 3, 1]
If the above list is to be sorted, you could 1. swap 3 and 1, then 2. swap 1 and 2.
That's a total of 6 assignments.
Now, instead of swapping, if you just shift 2 and 3 one place to the right (1 assignment each) and then put 1 in array[0], you end up with just 3 assignments instead of the 6 you would do with swapping.
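The shifting version described above might look like this (a sketch, not a reference implementation):

```java
// Insertion sort by shifting rather than swapping: hold the current value
// in a placeholder, shift the larger sorted values one slot to the right
// (one assignment each), then drop the placeholder into the gap.
public class InsertionSort {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int value = a[i];      // placeholder for the element to insert
            int j = i - 1;
            // shift larger values up to make a space
            while (j >= 0 && a[j] > value) {
                a[j + 1] = a[j];   // one assignment per shifted element
                j--;
            }
            a[j + 1] = value;      // insert into the correct position
        }
    }
}
```

On [2, 3, 1], the inner loop shifts 3 and 2 right (one assignment each) and then writes 1 into index 0, matching the 3-assignment count above.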