If I am trying to remove the first element (index 0), would it be more time efficient to do list.remove(0) (removes index 0) or to use a queue and queue.dequeue()? I know delete for an ArrayList is O(n); does this still hold true if you provide the index to remove from? I am new to Java and algorithms, please bear with me if this is a dumb question.
Yes, ArrayList is only fast at adding and removing close to the end. It takes O(N) time to add or remove at or near the beginning, even if you provide the index.
If you need a queue, use ArrayDeque. It's fast at both ends.
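For example, here is a quick sketch of the two options side by side (ArrayDeque's pollFirst versus ArrayList's remove(0)):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;

public class FrontRemoval {
    public static void main(String[] args) {
        // ArrayDeque: removal at the head is O(1)
        ArrayDeque<Integer> deque = new ArrayDeque<>(java.util.List.of(1, 2, 3));
        int first = deque.pollFirst(); // just advances the head index
        System.out.println(first);     // 1

        // ArrayList: remove(0) is O(n) because the rest shifts left
        ArrayList<Integer> list = new ArrayList<>(java.util.List.of(1, 2, 3));
        int removed = list.remove(0);  // shifts 2 and 3 down one slot
        System.out.println(removed);   // 1
        System.out.println(list);      // [2, 3]
    }
}
```

Both calls return the same element; the difference is only in how much work happens behind the scenes.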
If I am trying to remove the first element (index 0), would it be more time efficient to do list.remove(0) (removes index 0) or to use a queue and queue.dequeue()?
If you are always removing from the beginning, i.e. index 0, then a queue is better because it just needs to advance the head index, so the complexity is O(1), whereas for an ArrayList it involves left-shifting all the remaining elements, so the complexity is O(n). Otherwise, a list is the right choice if you want to remove an element at an arbitrary index.
I know delete for an ArrayList is O(n); does this still hold true if you provide the index to remove from?
Yes. When you delete an element at a particular index in an ArrayList, it leaves a hole (or gap). This has to be filled because arrays are contiguous, so all elements to the right of the removed element have to be shifted one position to the left.
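Roughly, this is what remove(index) has to do internally. Here is a simplified sketch over a plain int[] (the removeAt helper is hypothetical and ignores the real ArrayList's bookkeeping):

```java
import java.util.Arrays;

public class RemoveSketch {
    // Simplified sketch of ArrayList.remove(int): shift everything
    // right of `index` one slot to the left -- O(n - index) moves.
    static int[] removeAt(int[] arr, int size, int index) {
        for (int i = index; i < size - 1; i++) {
            arr[i] = arr[i + 1]; // left shift fills the gap
        }
        return Arrays.copyOf(arr, size - 1);
    }

    public static void main(String[] args) {
        int[] arr = {10, 20, 30, 40};
        // removing index 0 forces 3 shifts -- the worst case
        System.out.println(Arrays.toString(removeAt(arr, 4, 0))); // [20, 30, 40]
    }
}
```

Notice that removing the last element (index == size - 1) runs the loop zero times, which is exactly why removal at the end is O(1).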
Related
I have an array of 10 elements: Integer[] arr = new Integer[10];
What is the time complexity when I set the 5th element, like arr[5]=999;? I strongly believe that the time complexity of insertion in arrays is O(1), but some literature says that for insertion in arrays (not ArrayList) the time complexity is O(N) because shifting occurs. How can shifting occur? It is not a dynamic array.
Here are the sources which I believe are wrong:
Big O Notation Arrays vs. Linked List insertions
O(1) accurately describes inserting at the end of the array. However,
if you're inserting into the middle of an array, you have to shift all
the elements after that element, so the complexity for insertion in
that case is O(n) for arrays. End appending also discounts the case
where you'd have to resize an array if it's full.
https://iq.opengenus.org/time-complexity-of-array/
Inserting and deleting elements take linear time depending on the
implementation. If we want to insert an element at a specific index,
we need to skip all elements from that index to the right by one
position. This takes linear time O(N).
3. https://www.log2base2.com/data-structures/array/insert-element-particular-index-array.html
If we want to insert an element to index 0, then we need to shift all
the elements to right.
For example, if we have 5 elements in the array and need to insert an
element in arr[0], we need to shift all those 5 elements one position
to the right.
There's a misunderstanding:
arr[5]=999; is a direct assignment, not an 'insertion' where shifting would occur, so yes: it's O(1)
Insertion, on the other hand, consists of the following steps:
if the array is full, create a new, larger array
copy the data below the index (< index) to the same positions, starting at 0
copy the data at and above the index (>= index) to the positions starting at index + 1
assign the value: arr[index] = newValue;
So insertion, because it has to copy/move n - index elements on each insert, is considered O(n), because big-O notation describes the worst case.
Now, with MMX registers, parallel pipelines on GPUs, and things like that, we can speed this up by another big (constant) factor, but the problem itself remains in the O(n) category.
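The steps above might look like this in a minimal sketch (the insert helper is hypothetical, not ArrayList's actual implementation):

```java
import java.util.Arrays;

public class InsertSketch {
    // Insert `value` at `index` into the first `size` slots of `arr`,
    // growing the backing array when it is full -- O(n) worst case.
    static int[] insert(int[] arr, int size, int index, int value) {
        if (size == arr.length) {
            // full: allocate a bigger array (doubling is typical)
            arr = Arrays.copyOf(arr, Math.max(1, arr.length * 2));
        }
        // shift data at and above `index` one slot to the right
        for (int i = size; i > index; i--) {
            arr[i] = arr[i - 1];
        }
        arr[index] = value; // the actual assignment is the cheap part
        return arr;
    }

    public static void main(String[] args) {
        int[] arr = {2, 3, 0};      // logical size 2, one free slot
        arr = insert(arr, 2, 0, 1); // shift 2 and 3, put 1 in front
        System.out.println(Arrays.toString(arr)); // [1, 2, 3]
    }
}
```

The shifting loop is what makes this O(n); the single assignment at the end is O(1) on its own.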
Not to argue with the other answers, but I think the OP asked about direct assignment on an array: arr[5]=999;. This operation assigns the value 999 to index 5 in the array (if the array has length 6 or more). It does not do any shifting or allocation, and for this operation the complexity is definitely O(1).
There are two use cases for putting data into an array:
Inplace replacement
Shift the data
In the first case (in-place replacement), we generally have code like arr[5] = 99. For this use case, we are not going to shift the data from index 5 to index 6 and so on. Hence, the time complexity of such an operation is O(1).
Whereas, in the second case (shift the data), to insert data at index 5 we first shift the data at index 5 to index 6, the data at index 6 to index 7, and so on. In this case, the time complexity of such an operation is O(N).
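As a concrete contrast, a small sketch using the standard array and ArrayList APIs:

```java
import java.util.ArrayList;
import java.util.List;

public class ReplaceVsInsert {
    public static void main(String[] args) {
        // In-place replacement: O(1), no shifting, just one assignment
        int[] arr = new int[10];
        arr[5] = 99;
        System.out.println(arr[5]);  // 99

        // Shift insert: O(n), everything from index 1 onward moves right
        List<Integer> list = new ArrayList<>(List.of(10, 20, 30));
        list.add(1, 15);             // shifts 20 and 30 one slot right
        System.out.println(list);    // [10, 15, 20, 30]
    }
}
```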
I need to sequentially remove the middle element from sorted data. What would be the best way to do that? Since the LinkedList operation takes n/2 steps just to reach the middle element, an ArrayList would be faster there. But an ArrayList, on the other hand, then takes time to shift all the elements to the left, which isn't efficient either. Would other data structures be of use, maybe?
Removing the middle element is composed of two parts:
Finding the middle element
Deleting that element
ArrayList is O(1) at random access, so the 1st step is fast for an ArrayList, while LinkedList is O(1) at deletion (given the node), so the 2nd step is easy for a LinkedList.
What you want is best of both worlds.
IMO this is easily achievable if you write a custom LinkedList (or extend an existing one). You'll need an extra middle reference variable. If it tracks the lower middle, it will:
move to the next node upon insertion at the tail if the size becomes odd;
move upon deletion of the middle (the old middle node is gone either way): to the next node if the remaining size is odd, or to the previous node if it is even.
You can track the upper middle instead; that just flips the parity in both rules, but the two rules must use the same convention.
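One possible sketch of such a middle-tracking list (the MiddleList class and its method names are hypothetical; it tracks the lower middle, i.e. index (size - 1) / 2):

```java
// Doubly linked list that tracks its middle node so removeMiddle() is O(1).
public class MiddleList<T> {
    private static class Node<T> {
        T value;
        Node<T> prev, next;
        Node(T v) { value = v; }
    }

    private Node<T> head, tail, mid;
    private int size;

    public void addLast(T v) {
        Node<T> n = new Node<>(v);
        if (tail == null) {
            head = tail = mid = n;
        } else {
            tail.next = n;
            n.prev = tail;
            tail = n;
        }
        size++;
        // keep mid at index (size - 1) / 2: advance when size becomes odd
        if (size % 2 == 1 && size > 1) mid = mid.next;
    }

    public T removeMiddle() {
        if (mid == null) throw new IllegalStateException("empty");
        Node<T> m = mid;
        // unlink m in O(1)
        if (m.prev != null) m.prev.next = m.next; else head = m.next;
        if (m.next != null) m.next.prev = m.prev; else tail = m.prev;
        size--;
        // restore the invariant: next if new size is odd, prev if even
        mid = (size % 2 == 1) ? m.next : m.prev;
        if (size == 0) mid = null;
        return m.value;
    }

    public int size() { return size; }

    public static void main(String[] args) {
        MiddleList<Integer> ml = new MiddleList<>();
        for (int i = 1; i <= 5; i++) ml.addLast(i);
        System.out.println(ml.removeMiddle()); // 3 (lower middle of 5)
    }
}
```

Both steps, finding and deleting the middle, are now O(1), at the cost of a little extra bookkeeping on every insert and delete.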
Might a TreeSet fit the bill? It offers removal in O(log n) by key.
Suppose you're given an arbitrary array of length n. Which of the following operations can you perform on the array in worst-case O(1) time?
A. Remove the ith element, decreasing the size by 1
B. Insert an element at the ith position, increasing the size by 1
C. Find the maximum element
D. Swap the elements at location i and j
In this question, I'm not sure about the definition of "arbitrary array". It seems that D is correct, but I'm not sure. Could anybody explain it? Many thanks!
I think by arbitrary, it might just mean that the values of the array don't matter.
A) Removing the ith element and decreasing the array's size by one has an O(n) complexity: when removing an ith element, you have to move all the other elements down one index. If you removed the 0th element, you would need to move n - 1 elements down one index.
B) Inserting an element at the ith position and increasing the array's size by one has an O(n) complexity, for a similar reason. If I add an element to the beginning of the array, I need to move all the other elements up one. Not to mention that because arrays have fixed size, I would also need to create a new array and copy the former elements.
C) Finding the maximum element will take at least O(n) time. You need to look through each element to find the max value, right? Well, if the max value happens to be in the last position of the array, you need to go through all the values before it to find it.
D) Swapping elements is just O(1). It's a constant operation that doesn't require for or while loops or what not.
Something like that. Hope it helps!
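For example, option D in Java: the swap touches exactly three slots, no matter how long the array is.

```java
import java.util.Arrays;

public class SwapDemo {
    // Swapping two elements is O(1): three assignments,
    // independent of the array's length.
    static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {7, 8, 9};
        swap(arr, 0, 2);
        System.out.println(Arrays.toString(arr)); // [9, 8, 7]
    }
}
```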
So I am currently learning Java, and I was asking myself why the insertion sort method doesn't need to use the swap operation. As far as I understood, elements get swapped, so wouldn't it be useful to use the swap operation in this sorting algorithm?
As I said, I am new to this, but I am trying to understand the background of these algorithms and why they are the way they are.
Would be happy for some insights :)
Wikipedia's article for Insertion sort states
Each iteration, insertion sort removes one element from the input
data, finds the location it belongs within the sorted list, and
inserts it there. It repeats until no input elements remain. [...] If
smaller, it finds the correct position within the sorted list, shifts
all the larger values up to make a space, and inserts into that
correct position.
You can consider this shift an extreme swap. What actually happens is that the value is stored in a placeholder and compared against the other values. If those values are larger, they are simply shifted, i.e. moved to the previous (or next) position in the list/array. The placeholder's value is then put into the position from which the last element was shifted.
Insertion Sort does not perform swapping. It performs insertions by shifting elements in a sequential list to make room for the element that is being inserted.
That is why it is an O(N^2) algorithm: for each element out of N, there can be O(N) shifts.
So, you could do insertion sort by swapping.
But is that the best way to do it? You should think about what a swap is:
temp = a
a = b
b = temp
There are 3 assignments for a single swap.
e.g. [2, 3, 1]
If the above list is to be sorted, you could 1. swap 3 and 1, then 2. swap 1 and 2:
6 assignments in total.
Now, instead of swapping, if you just shift 2 and 3 one place to the right (1 assignment each) and then put 1 in array[0], you end up with just 3 assignments instead of the 6 you would do with swapping.
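Put together, a standard shifting insertion sort might look like this:

```java
import java.util.Arrays;

public class InsertionSort {
    // Insertion sort via shifting: hold the current value in a
    // placeholder, shift larger values right, then drop it in place.
    // One assignment per shift instead of three per swap.
    static void sort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];            // placeholder
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];     // shift, not swap
                j--;
            }
            arr[j + 1] = key;            // insert into the gap
        }
    }

    public static void main(String[] args) {
        int[] arr = {2, 3, 1};
        sort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 3]
    }
}
```

On the [2, 3, 1] example, the inner loop performs exactly the two shifts described above, followed by one insertion of the key.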
This is from Wikipedia (http://en.wikipedia.org/wiki/Arraylist), under Performance:
ArrayList: constant time for remove() and add() at the end of the array, linear time for add() and remove() at the beginning.
LinkedList: both of the operations stated: constant time; indexing: linear.
1) Why the difference in ArrayList processing time between the two operations?
2) LinkedList is linear for indexing but constant for adding at the end. Why?
1) Because to add/remove at the beginning it has to shift everything and reindex.
2) Because it maintains references to the head and tail (beginning & end). Indexing means traversing the list.
When you add to the end of an ArrayList, it will grow itself to have some room to spare. So if you have a ten-element ArrayList, adding at the end will cause it to internally allocate room for twenty elements, copy the ten you already had, and then add one. Then, when you add another element at the end, it just sticks that twelfth element into the space it already created.
This does not technically give it constant time insertion at the end, but it does give it amortized constant time insertion. That is to say, over a large number of operations, the cost approaches constant time; each time it grows, it doubles, so you'll have an ever-larger number of "free" constant-time inserts before you have to grow-and-copy again.
When you insert at the beginning, it can't do this and must always copy the whole list into a new location (linear time).
Removal from the end is always constant time because you just switch the last cell from being "filled" to "free space". You never need to copy the list.
As for your second question, a LinkedList keeps a pointer to the end of the list, so add and remove there just use that pointer and are thus constant time. There are no quick pointers into the middle of the list, so accessing an arbitrary element requires a linear-time traversal from start to (potentially) finish.
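A toy sketch of that doubling strategy (the GrowDemo class and its copies counter are hypothetical, just to make the amortized cost visible):

```java
// Sketch of amortized growth at the end of a dynamic array:
// doubling the capacity makes n appends cost O(n) total work,
// i.e. O(1) amortized per append.
public class GrowDemo {
    private int[] data = new int[2];
    private int size = 0;
    int copies = 0; // counts element copies caused by growing

    void addLast(int v) {
        if (size == data.length) {
            // grow-and-copy: rare, and each doubling buys many free appends
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            copies += size;
            data = bigger;
        }
        data[size++] = v; // the common case: one cheap assignment
    }

    public static void main(String[] args) {
        GrowDemo g = new GrowDemo();
        for (int i = 0; i < 1000; i++) g.addLast(i);
        // total copies stay below 2 * n, so the per-append cost is constant
        System.out.println(g.copies < 2 * 1000); // true
    }
}
```

The copies from all the doublings form a geometric series (2 + 4 + 8 + ... ), which sums to less than 2n, which is where the amortized O(1) bound comes from.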
i) ArrayList -> You've got to push all the elements over by one position in case of removal/addition at the beginning, hence linear time. At the end of the array, you simply add or remove.
ii) LinkedList -> You have references to the head and tail, hence you can add/remove there in constant time.
Because removing at the end does not require moving the data. Adding may require copying to resize the storage array, but its time is amortized.
Because adding at the end does not require walking the list, but indexing does.