1st Part :-
I was reading in the book "Data Structures and Algorithms Made Easy in Java" that the time complexity of deleting the last element from a LinkedList or an ArrayList is O(n). But LinkedList is internally implemented as a doubly linked list, so the time complexity should be O(1); and similarly for ArrayList, since it is internally backed by an array, it should be O(1).
2nd Part :-
It also says that inserting an element at the end of a LinkedList has a time complexity of O(n), but the LinkedList maintains pointers to both the front and the end. So is this statement correct? Moreover, it says that the time complexity of inserting an element at the end of an ArrayList is O(1) if the array is not full and O(n) if it is full. Why O(n) if the array is full?
Thanks for answering the 1st part. Can anyone please also explain the 2nd part? Thanks :)
It depends on what methods you're calling.
A glance at the implementation shows that if you're calling LinkedList.removeLast(), that's O(1). The LinkedList maintains pointers both to the first and last node in the list. So it doesn't have to traverse the list to get to the last node.
Calling LinkedList.remove(index) with the index of the last element is also O(1), because it traverses the list from the closest end. [Noted by user #andreas in comment below.]
But if you're calling LinkedList.remove(Object), then there's an O(n) search for the first matching node.
Similarly, for ArrayList, if you're calling ArrayList.remove(index) with the index of the last element, then that's O(1). For all other indices, there's a System.arraycopy() call that can be O(n) -- but that call is skipped entirely for the last element.
But if you call ArrayList.remove(Object), then again there's an O(n) search for the first matching element.
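A quick sketch of those calls side by side (the class name `RemoveDemo` is invented for the example):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class RemoveDemo {
    public static void main(String[] args) {
        LinkedList<String> linked = new LinkedList<>(List.of("a", "b", "c"));
        linked.removeLast();              // O(1): follows the tail pointer
        linked.remove(linked.size() - 1); // O(1): traverses from the closer end
        linked.remove("a");               // O(n): linear search for the element

        ArrayList<String> array = new ArrayList<>(List.of("a", "b", "c"));
        array.remove(array.size() - 1);   // O(1): no System.arraycopy() needed
        array.remove("a");                // O(n): search, then shift the rest

        System.out.println(linked);       // []
        System.out.println(array);        // [b]
    }
}
```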
Related
Which would be faster when using a LinkedList? I haven't studied sorting and searching yet. I was thinking adding them directly in a sorted manner would be faster, as manipulating the nodes and pointers afterward intuitively seems to be very expensive. Thanks.
In general, working with a linked list involves some subtleties and can be expensive.
i) In the usual case, if you want to insert a value into a sorted linked list, you go through the following steps:
1. If the linked list is empty, make the new node the head and return it.
2. If the value of the node to be inserted is smaller than the value of the head node, insert the node at the start and make it the head.
3. Otherwise, in a loop, find the appropriate node after which the input node is to be inserted: start from the head and keep moving until you reach a node GN whose value is greater than the input node's value. The node just before GN is the appropriate node.
4. Insert the node after the appropriate node found in step 3.
The time complexity in this case is O(n) per element.
But don't forget that the linked list must already be sorted before you add further elements this way; if it isn't, you have to pay the extra cost of sorting it first.
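The steps above can be sketched with a minimal hand-rolled singly linked node (the `Node` class and `sortedInsert` method here are illustrative, not from the book):

```java
class Node {
    int value;
    Node next;
    Node(int value) { this.value = value; }
}

public class SortedInsert {
    // Inserts value into an already-sorted list and returns the
    // (possibly new) head. O(n) per insert in the worst case.
    static Node sortedInsert(Node head, int value) {
        Node node = new Node(value);
        // Steps 1-2: empty list, or the new value belongs before the head.
        if (head == null || value < head.value) {
            node.next = head;
            return node;
        }
        // Step 3: walk until the next node's value exceeds the input value.
        Node current = head;
        while (current.next != null && current.next.value <= value) {
            current = current.next;
        }
        // Step 4: splice the new node in after the appropriate node.
        node.next = current.next;
        current.next = node;
        return head;
    }

    public static void main(String[] args) {
        Node head = null;
        for (int v : new int[]{5, 1, 3}) {
            head = sortedInsert(head, v);
        }
        for (Node n = head; n != null; n = n.next) {
            System.out.print(n.value + " "); // 1 3 5
        }
    }
}
```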
ii) But if you add elements to the end of the linked list and then sort them, the cost depends on the sorting algorithm: merge sort, for example, has O(n log n) time complexity, while insertion sort has O(n^2).
Note: how to sort is a separate issue.
As you have already noticed, it is not easy to say which method is faster; it depends on the conditions of the problem, such as the number of elements in the linked list, the number of elements to be added, and whether the cost of the initial sort is counted or not.
You have presented two options:
Keep the linked list sorted by inserting each new node at its sorted location.
Insert each new node at the end (or start) of the linked list and when all nodes have been added, sort the linked list
The worst case time complexity for the first option occurs when the list has each time to be traversed completely to find the position where a new node is to be inserted. In that case the time complexity is O(1+2+3+...+n) = O(n²)
The worst time complexity for the second option is O(n) for inserting n elements and then O(nlogn) for sorting the linked list with a good sorting algorithm. So in total the sorting algorithm determines the overall worst time complexity, i.e. O(nlogn).
So the second option has the better time complexity.
Merge sort has a complexity of O(nlogn) and is suitable for linked list.
If your data has a limited range, you can use radix sort and achieve O(kn) complexity, where k is log(range size).
In practice it is better to insert elements in a vector (dynamic array), then sort the array, and finally turn the array into a list. This will almost certainly give better running times.
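A rough sketch of that approach (the class name is invented for the example): append everything into an ArrayList, sort it once, then convert to a LinkedList if a linked list is actually required:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

public class AppendThenSort {
    public static void main(String[] args) {
        // Append everything first: O(n) total.
        List<Integer> buffer = new ArrayList<>();
        Collections.addAll(buffer, 5, 1, 4, 2, 3);
        // One O(n log n) sort at the end.
        Collections.sort(buffer);
        // Turn the sorted array into a linked list if one is needed.
        LinkedList<Integer> result = new LinkedList<>(buffer);
        System.out.println(result); // [1, 2, 3, 4, 5]
    }
}
```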
I have to implement an algorithm where entries are always inserted at the end and removed from the first position.
I checked the following link:
Performing the fastest search - which collection should i use?
They say "ArrayList is stored in a continuous space in the memory. This allows the Operating System to use optimisations such as "when a byte in memory was accessed, most likely the next byte will be accessed soon". Because of this, ArrayList is faster than LinkedList in all"
but one case: when inserting/deleting the element at the beginning of the list (because all elements in the array have to be shifted). Adding/deleting at the end or in the middle, iterating over, accessing the element are all faster in case of ArrayList.
Here, my algorithm always removes the first element, so shifting happens on every removal. In that case, should I not use an ArrayList?
It really depends on what else you wish to do with the structure.
If most of the time you are just adding at the end and removing from the start then any implementation of Deque would do. So ArrayDeque or LinkedList are probably your best candidates.
ArrayDeque is backed by an array and can therefore be accessed quickly by index with O(1) complexity but it has the downside that adding can be slower than LinkedList because sometimes the backing array needs to be resized.
LinkedList is just a linked list, so growing/shrinking it is consistently O(1), but accessing by index is not, because finding the nth entry is O(n).
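A minimal sketch of the add-last/remove-first pattern over an ArrayDeque:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class QueueDemo {
    public static void main(String[] args) {
        // Add at the end, remove from the front: a FIFO queue over a Deque.
        Deque<String> queue = new ArrayDeque<>();
        queue.addLast("first");
        queue.addLast("second");
        queue.addLast("third");
        System.out.println(queue.pollFirst()); // first
        System.out.println(queue.pollFirst()); // second
    }
}
```

Because ArrayDeque is a circular buffer, removing from the front does not shift the remaining elements the way ArrayList.remove(0) does.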
I was curious regarding a specific issue regarding unsorted linked lists. Let's say we have an unsorted linked list based on an array implementation. Would it be important or advantageous to maintain the current order of elements when removing an element from the center of the list? That hole would have to be filled, so let's say we take the last element in the list and insert it into that hole. Is the time complexity of shifting all elements over greater than moving that single element?
You can remove an item from a linked list without leaving a hole.
A linked list is not represented as an array of contiguous elements. Instead, it's a chain of elements with links. You can remove an element merely by linking its adjacent elements to each other, in a constant-time operation.
Now, if you had an array-based list, you could choose to implement deletion of an element by shifting the last element into position. This would give you O(1) deletion instead of O(n) deletion. However, you would want to document this behavior.
Is the time complexity of shifting all elements over greater than moving that single element?
Yes, for an array-based list. Shifting all the subsequent elements is O(n), and moving a single element is O(1).
java.util.List
If your list were an implementation of java.util.List, note that java Lists are defined to be ordered collections, and the List.remove(int index) method is defined to shift the remaining elements.
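If you control the data structure yourself (a plain array rather than a java.util.List), a swap-with-last deletion might look like this hypothetical sketch (`swapRemove` is an invented helper, and it does not preserve element order):

```java
public class SwapDelete {
    // Removes the element at index by moving the last element into its
    // slot. O(1), but the list's order is NOT preserved.
    static int swapRemove(int[] data, int size, int index) {
        data[index] = data[size - 1];
        return size - 1; // new logical size
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30, 40, 50};
        int size = swapRemove(data, 5, 1); // remove the 20
        for (int i = 0; i < size; i++) {
            System.out.print(data[i] + " "); // 10 50 30 40
        }
    }
}
```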
Yes, with an array implementation it would have a larger time complexity: up to n/2 shifts (if the element was in the middle of the array) to move all the entries over, whereas moving one element takes constant time.
Since you are using an array, the answer is yes, because you have to make multiple assignments.
If you had used nodes instead, it would be better in terms of complexity.
First of all, my basic understanding of a singly linked list has been that every node only points to the next subsequent node, so my problem might stem from the fact that my definition of such list is incorrect.
Given the list setup, getting to node n requires iterating through the n-1 nodes that come before it, so search and access are O(n). Now, apparently node insertion and deletion take O(1), but unless they are talking about insertion at the head, in reality it is O(n) + O(1) to insert an item between nodes n and n+1.
Now, indexing a list would also have O(n) complexity, yet apparently building such indexes is frowned upon, and I cannot understand why. Couldn't we build an index for a singly linked list, which would give us true O(1) insertion and deletion without an O(n) walk to reach a specific node? It wouldn't even need to index every node: it could point to sub-indexes, i.e. for a list of 1000 items the first index would point to 10 indexes covering items 1-100, 101-200, etc., and those indexes could point to smaller indexes that go by 10. This way, getting to node 543 could take only about 3 iterations plus the index traversal, instead of 543 as in a typical singly linked list.
I guess, what I am asking is why such indexing should typically be avoided?
You are describing a skip-list.
A skip list has search, insert, and delete time complexity of O(log N), since for those "smaller sub-indexes" you describe, you have a logarithmic number of levels (what happens if your list has 100 elements? How many levels do you need? How many for 1,000,000 elements? And for 10^30?).
Note that a skip list is usually maintained sorted, but you can do it unsorted (which is sorted by index - actually) if you wish as well.
With a singly-linked list, even if you already have a direct reference to the node to delete, the complexity is not O(1). This is because you have to update the prior node's next-node reference, which requires you to iterate through the list -- resulting in O(N) complexity. To get O(1) complexity for deletion, you'd need a doubly-linked list.
There is already a collections class that combines a HashMap with a doubly-linked list: the LinkedHashMap.
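A quick illustration of that combination: LinkedHashMap gives O(1) removal by key while still iterating in insertion order, because each entry sits in both a hash table and a doubly-linked list:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapDemo {
    public static void main(String[] args) {
        // Hash lookup plus a doubly-linked list threaded through the
        // entries: O(1) removal by key, iteration in insertion order.
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.remove("b"); // O(1): hash to the node, then unlink it in place
        System.out.println(map.keySet()); // [a, c]
    }
}
```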
From what I understand, ConcurrentSkipListSet has an average complexity of O(log n) for insertion, search and removal of elements and a worst case of O(n). How about accessing the first and the last element? Is it any lower than log? I see that it retains a pointer to the head. Hence, I am guessing O(1) for the first element.
Yes, you are right about the head. => O(1)
However, when accessing the last element, you don't know which one it is, since it is, after all, a linked list. But because it is a skip list, you get O(log n) instead of traversing all elements in linear time: you look for a nil next pointer, but since you don't know at which level to check, it is still O(log n).
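For illustration, both end accesses on a ConcurrentSkipListSet (first() via the head pointer, last() by descending the index levels):

```java
import java.util.concurrent.ConcurrentSkipListSet;

public class SkipListEnds {
    public static void main(String[] args) {
        ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
        set.add(30);
        set.add(10);
        set.add(20);
        System.out.println(set.first()); // 10 -- O(1), via the head pointer
        System.out.println(set.last());  // 30 -- O(log n), via the index levels
    }
}
```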
There is a difference to note between real-measured time and asymptotic approximations!
I hope this helps.