Singly linked list iteration complexity - java

First of all, my basic understanding of a singly linked list has been that every node only points to the next subsequent node, so my problem might stem from the fact that my definition of such list is incorrect.
Given the list setup, getting to node n would require iterating through the n-1 nodes that come before it, so search and access would be O(n). Now, apparently node insertion and deletion take O(1), but unless they are talking about first-item insertion, in reality it would be O(n) + O(1) to insert an item between nodes n and n+1, since you first have to reach node n.
Now, indexing a list would also have O(n) complexity, yet apparently building such indexes is frowned upon, and I cannot understand why. Couldn't we build an index over a singly linked list, which would allow us true O(1) insertion and deletion without having to perform an O(n) iteration over the list to get to our specific node? It wouldn't even need to index every node; it could point to subindexes instead, e.g. for a list of 1000 items, the first index would point to 10 subindexes for items 1-100, 101-200, etc., and those subindexes could point to smaller indexes that go by 10. This way, getting to node 543 could take only 3 + (index traversal) iterations, instead of the 543 it would take in a typical singly linked list.
I guess, what I am asking is why such indexing should typically be avoided?

You are describing a skip-list.
A skip list has O(log N) search, insert, and delete complexity, because you only need a logarithmic number of these "smaller subindexes" you describe (how many levels do you need if your list has 100 elements? How many for 1,000,000? For 10^30?).
Note that a skip list is usually maintained sorted, but you can keep it unsorted (which is, in effect, sorted by index) if you wish.
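Incidentally, the JDK already ships a skip-list-backed sorted set, so you can experiment with this idea without building the index levels yourself. A minimal usage sketch (the values are just for illustration):

    import java.util.concurrent.ConcurrentSkipListSet;

    public class SkipListDemo {
        public static void main(String[] args) {
            ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
            for (int i = 0; i < 1000; i++) {
                set.add(i);                       // expected O(log n) per insert
            }
            set.remove(543);                      // expected O(log n) delete
            System.out.println(set.ceiling(543)); // expected O(log n) search; prints 544
        }
    }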

With a singly-linked list, even if you already have a direct reference to the node to delete, the complexity is not O(1). This is because you have to update the prior node's next-node reference, which requires you to iterate through the list -- resulting in O(N) complexity. To get O(1) complexity for deletion, you'd need a doubly-linked list.
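To illustrate why holding the node is enough in a doubly-linked list, here is a minimal sketch; Node and unlink are illustrative names, not part of java.util:

    class Node<T> {
        T value;
        Node<T> prev, next;
    }

    static <T> void unlink(Node<T> node) {
        // Both neighbours are reachable directly from the node, so no traversal is needed.
        if (node.prev != null) node.prev.next = node.next;
        if (node.next != null) node.next.prev = node.prev;
        node.prev = node.next = null; // clear stale links
    }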
There is already a collections class that combines a HashMap with a doubly-linked list: the LinkedHashMap.

Related

Given a LinkedList, is it better to add elements to the end and then sort, or add the elements in a sorted manner directly?

Which would be faster when using a LinkedList? I haven't studied sorting and searching yet. I was thinking adding them directly in a sorted manner would be faster, as manipulating the nodes and pointers afterward intuitively seems to be very expensive. Thanks.
In general, working with a linked list involves many difficulties and can be expensive.
i) In the usual case, if you want to add values to a sorted linked list, you must go through the following steps (a code sketch follows below):
1. If the linked list is empty, make the new node the head and return it.
2. If the value of the node to be inserted is smaller than the value of the head node, insert the node at the start and make it the head.
3. Otherwise, find the appropriate node after which the input node is to be inserted: starting from the head, keep moving until you reach a node GN whose value is greater than the input node's. The node just before GN is the appropriate node.
4. Insert the node after the appropriate node found in step 3.
The time complexity in this case is O(n) for each element.
But don't forget that the linked list must already be sorted before you add more elements in a sorted manner; this means you may have to pay the extra cost of sorting the linked list first.
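Here is a self-contained sketch of the four steps above for a hand-rolled singly linked list of ints; Node, head and insertSorted are illustrative names, not java.util classes:

    class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    static Node insertSorted(Node head, int value) {
        Node node = new Node(value);
        if (head == null || value < head.value) { // steps 1 and 2
            node.next = head;
            return node;                          // the new node becomes the head
        }
        Node prev = head;                         // step 3: find the last node <= value
        while (prev.next != null && prev.next.value <= value) {
            prev = prev.next;
        }
        node.next = prev.next;                    // step 4: splice the new node in after prev
        prev.next = node;
        return head;
    }

The loop condition uses <= so that a new node lands after any equal values already in the list.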
ii) But if you want to add elements to the end of the linked list and then sort them, the situation depends on the way of sorting! For example, you can use merge sort, which has O(n log n) time complexity, or you can even use insertion sort with O(n^2) time complexity.
Note: how to sort is a separate issue.
As you have already noticed, it is not easy to say which method is faster; it depends on the conditions of the problem, such as the number of elements in the linked list and the number of elements to be added, or whether the cost of the initial sorting is considered or not.
You have presented two options:
Keep the linked list sorted by inserting each new node at its sorted location.
Insert each new node at the end (or start) of the linked list and when all nodes have been added, sort the linked list
The worst-case time complexity for the first option occurs when the list has to be traversed completely each time to find the position where a new node is to be inserted. In that case the time complexity is O(1+2+3+...+n) = O(n²).
The worst-case time complexity for the second option is O(n) for inserting n elements and then O(n log n) for sorting the linked list with a good sorting algorithm. So in total the sorting algorithm determines the overall worst-case time complexity, i.e. O(n log n).
So the second option has the better time complexity.
Merge sort has a complexity of O(n log n) and is well suited to linked lists.
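For reference, here is a sketch of top-down merge sort on a hand-rolled singly linked list (Node is an illustrative class with an int value field, a Node next field, and an int constructor); merging needs only sequential access, which is why it suits linked lists:

    static Node mergeSort(Node head) {
        if (head == null || head.next == null) return head;
        Node slow = head, fast = head.next;          // fast advances two links per step,
        while (fast != null && fast.next != null) {  // so slow stops at the middle
            slow = slow.next;
            fast = fast.next.next;
        }
        Node second = slow.next;
        slow.next = null;                            // cut the list into two halves
        return merge(mergeSort(head), mergeSort(second));
    }

    static Node merge(Node a, Node b) {
        Node dummy = new Node(0), tail = dummy;      // dummy head simplifies splicing
        while (a != null && b != null) {
            if (a.value <= b.value) { tail.next = a; a = a.next; }
            else                    { tail.next = b; b = b.next; }
            tail = tail.next;
        }
        tail.next = (a != null) ? a : b;             // append the leftover run
        return dummy.next;
    }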
If your data has a limited range, you can use radix sort and achieve O(kn) complexity, where k is log(range size).
In practice it is better to insert elements in a vector (dynamic array), then sort the array, and finally turn the array into a list. This will almost certainly give better running times.
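In Java terms that approach is only a few lines; buildSortedList is an illustrative helper name, the rest is standard java.util:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.LinkedList;
    import java.util.List;

    static List<Integer> buildSortedList(List<Integer> input) {
        List<Integer> buffer = new ArrayList<>(input); // gather everything first
        Collections.sort(buffer);                      // O(n log n), sorts an array internally
        return new LinkedList<>(buffer);               // O(n) copy into a linked list
    }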

Which collection is better, ArrayList or LinkedList, when all entries shift position because the first entry is frequently removed?

I have to implement an algorithm where the entry is always inserted at the end and removed from the first position.
I checked the following link:
Performing the fastest search - which collection should i use?
They say "ArrayList is stored in a continuous space in the memory. This allows the Operating System to use optimisations such as "when a byte in memory was accessed, most likely the next byte will be accessed soon". Because of this, ArrayList is faster than LinkedList in all"
but one case: when inserting/deleting the element at the beginning of the list (because all elements in the array have to be shifted). Adding/deleting at the end or in the middle, iterating over, accessing the element are all faster in case of ArrayList.
Here In my algorithm,always remove first element .So,always shifting happens.In that case i should not use arraylist??
It really depends on what else you wish to do with the structure.
If most of the time you are just adding at the end and removing from the start then any implementation of Deque would do. So ArrayDeque or LinkedList are probably your best candidates.
ArrayDeque is backed by an array, so its operations at both ends run in amortized O(1) time, but it has the downside that adding can occasionally be slower than LinkedList because the backing array sometimes needs to be resized.
LinkedList is just a linked list, so growing and shrinking it is consistently O(1), but accessing by index is not: finding the nth entry is O(n).
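For the insert-last/remove-first pattern in the question, a Deque used as a FIFO queue looks like this (a minimal sketch using only standard java.util classes):

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class QueueDemo {
        public static void main(String[] args) {
            Deque<String> queue = new ArrayDeque<>();
            queue.addLast("first");                // enqueue at the end, amortized O(1)
            queue.addLast("second");
            System.out.println(queue.pollFirst()); // dequeue from the front, O(1); prints "first"
        }
    }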

Maintaining order of elements in unsorted linked list after removal of element

I was curious regarding a specific issue regarding unsorted linked lists. Let's say we have an unsorted linked list based on an array implementation. Would it be important or advantageous to maintain the current order of elements when removing an element from the center of the list? That hole would have to be filled, so let's say we take the last element in the list and insert it into that hole. Is the time complexity of shifting all elements over greater than moving that single element?
You can remove an item from a linked list without leaving a hole.
A linked list is not represented as an array of contiguous elements. Instead, it's a chain of elements with links. You can remove an element merely by linking its adjacent elements to each other, in a constant-time operation.
Now, if you had an array-based list, you could choose to implement deletion of an element by shifting the last element into position. This would give you O(1) deletion instead of O(n) deletion. However, you would want to document this behavior.
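Here is a minimal sketch of that order-destroying O(1) removal; removeUnordered is an illustrative name, not a standard List method:

    import java.util.List;

    // Removes the element at index by moving the last element into its slot.
    // O(1) on an ArrayList, but it does not preserve the order of the remaining elements.
    static <T> T removeUnordered(List<T> list, int index) {
        T removed = list.get(index);
        int last = list.size() - 1;
        list.set(index, list.get(last)); // fill the hole with the last element
        list.remove(last);               // removing the last entry shifts nothing
        return removed;
    }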
Is the time complexity of shifting all elements over greater than moving that single element?
Yes, for an array-based list. Shifting all the subsequent elements is O(n), and moving a single element is O(1).
java.util.List
If your list were an implementation of java.util.List, note that java Lists are defined to be ordered collections, and the List.remove(int index) method is defined to shift the remaining elements.
Yes, using an array implementation it would have a larger time complexity: up to n/2 assignments (if the element was in the middle of the array) to shift all entries over, whereas moving the single last element would be constant time.
Since you are using an array, the answer is yes, because you have to make multiple assignments.
If you had used nodes instead, it would be better in terms of complexity.

The Best Search Algorithm for a Linked List

I have to write a program, as efficiently as possible, that will insert given nodes into a sorted LinkedList. I'm thinking of how binary search is faster than linear search in the average and worst case, but when I Googled it the runtime was O(n log n)? Should I do linear search on a singly linked list or binary search on a doubly linked list, and why is that one faster?
Also, how is binary search worse than O(log n) for a doubly linked list?
(Please, no one recommend a SkipList; I think they're against the rules since we have another implementation strictly for that data structure.)
You have two choices.
1. Linearly search an unordered list. This is O(N).
2. Linearly search an ordered list. This is also O(N), but about twice as fast on average: the item you search for will on average be in the middle, and if it isn't present you can stop as soon as you pass the place it would occupy (sketched below).
You don't have the choice of binary searching it, as you don't have direct access to elements of a linked list.
But if you consider search to be a rate-determining step, you shouldn't use a linked list at all: you should use a sorted array, a heap, a tree, etc.
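The early exit in choice 2 is a small change to a plain traversal; a sketch, assuming a minimal illustrative Node class with int value and Node next fields:

    class Node {
        int value;
        Node next;
    }

    static boolean contains(Node head, int target) {
        // Stop as soon as we pass the position where target would have to be.
        for (Node n = head; n != null && n.value <= target; n = n.next) {
            if (n.value == target) return true;
        }
        return false;
    }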
Binary search is very fast on arrays simply because it's very fast and simple to access the middle index between any two given indexes. This makes its running time O(log n), using only O(1) space.
For a linked list it's different: in order to access the middle element we need to traverse the list node by node, so finding the middle node takes O(n).
Thus binary search is slow on linked lists and fast on arrays.
Binary search is possible by using a skip list. If you maintain skip pointers of lengths 2, 4, 8, ..., 2^n at the same time, you spend about twice as many pointers as a plain linked list, and in return you get O(log n) for each search.
If the data you store in each node is quite big, this pointer overhead is small, and applying this is very efficient.
You can read more on https://www.geeksforgeeks.org/skip-list/amp/
So basically, binary search on a linked list is O(n log n) because each of the log n probes into the searched set requires traversing the list from the beginning, at O(n) per probe. But this is only true if you are traversing the list from the beginning every time.
Ideally, if you can start each probe from somewhere else, like the middle of the searched set, you eliminate the repeated full traversals. Note, though, that on a plain linked list the pointer-walking still sums to O(n) in total (n/2 + n/4 + ...); to get a true O(log n) search you need auxiliary structure, such as the express pointers of a skip list.

Java: Inserting into LinkedList efficiently

I am optimizing an implementation of a sorted LinkedList.
To insert an element I traverse the list and compare each element until I have the correct index, and then break loop and insert.
I would like to know if there is any other way that I can insert the element at the same time as traversing the list to reduce the insert from O(n + (n capped at size()/2)) to O(n).
A ListIterator is almost what I'm after because of its add() method, but unfortunately, in the case where there are elements in the list equal to the insert, the insert has to be placed after them in the list. To implement this, ListIterator would need a peek(), which it doesn't have.
edit: I have my answer, but will add this anyway since a lot of people haven't understood correctly:
I am searching for an insertion point AND inserting, which combined is higher than O(n).
You may consider a skip list, which is implemented using multiple linked lists at varying granularity. E.g. the linked list at level 0 contains all items, level 1 links to only every 2nd item on average, level 2 to only every 4th item on average, and so on. Searching starts from the top level and gradually descends to lower levels until it finds an exact match. This logic is similar to a binary search; thus search and insertion are O(log n) operations.
A concrete example in the Java class library is ConcurrentSkipListSet (although it may not be directly usable for you here).
I'd favor Péter Török's suggestion, but I'd still like to add something for the iterator approach:
Note that ListIterator provides a previous() method to iterate through the list backwards. Thus, first iterate until you find the first element that is greater, then go to the previous element and call add(...). If you hit the end, i.e. all elements are smaller, just call add(...) without going back.
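Here is a sketch of that iterator dance; it places the new element after any equal elements, which is exactly the behaviour the question asks for (insertSorted is an illustrative name):

    import java.util.List;
    import java.util.ListIterator;

    static <T extends Comparable<T>> void insertSorted(List<T> list, T value) {
        ListIterator<T> it = list.listIterator();
        while (it.hasNext()) {
            if (it.next().compareTo(value) > 0) { // first element strictly greater
                it.previous();                    // step back so add() lands before it
                break;
            }
        }
        it.add(value); // if every element was smaller or equal, this simply appends
    }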
"I have my answer, but will add this anyway since a lot of people haven't understood correctly: I am searching for an insertion point AND inserting, which combined is higher than O(n)."
You require a collection of (possibly) non-unique elements that can be iterated in an order given by an ordering function. This can be achieved in a variety of ways. (In the following, I use "total insertion cost" to mean the cost of inserting a number (N) of elements into an initially empty data structure.)
A singly or doubly linked list offers O(N^2) total insertion cost (whether or not you combine the steps of finding the position and doing the insertion!), and O(N) iteration cost.
A TreeSet offers O(N log N) total insertion cost and O(N) iteration cost, but has the restriction of no duplicates.
A tree-based multiset (e.g. TreeMultiset) has the same complexity as a TreeSet, but allows duplicates.
A skip-list data structure also has the same complexity as the previous two.
Clearly, the complexity measures say that a data structure that uses a linked list performs the worst as N gets large. For this particular group of requirements, a well-implemented tree-based multiset is probably the best, assuming there is only one thread accessing the collection. If the collection is heavily used by many threads (and it is a set), then a ConcurrentSkipListSet is probably better.
You also seem to have a misconception about how "big O" measures combine. If I have one step of an algorithm that is O(N) and a second step that is also O(N), then the two steps combined are STILL O(N), not "more than O(N)". You can derive this from the definition of "big O". (I won't bore you with the details, but the math is simple.)
