Problem Statement: Given a circular linked list, implement an algorithm that returns the node at the beginning of the loop.
The answer key gives a more complicated solution than what I propose. What's wrong with mine?:
public static Node loopDetection(Node n1) {
    ArrayList<Node> nodeStorage = new ArrayList<Node>();
    while (n1.next != null) {
        nodeStorage.add(n1);
        if (nodeStorage.contains(n1.next)) {
            return n1;
        } else {
            n1 = n1.next;
        }
    }
    return null;
}
Your solution is O(n^2) time (each contains() call on an ArrayList is O(n)) and O(n) space (for nodeStorage), while the "more complicated" solution is O(n) time and O(1) space.
The book offers the following solution, for whoever is interested, which is O(n) time and O(1) space:
If we move two pointers, one with speed 1 and another with speed 2,
they will end up meeting if the linked list has a loop. Why? Think
about two cars driving on a track—the faster car will always pass the
slower one! The tricky part here is finding the start of the loop.
Imagine, as an analogy, two people racing around a track, one running
twice as fast as the other. If they start off at the same place, when
will they next meet? They will next meet at the start of the next lap.
Now, let’s suppose Fast Runner had a head start of k meters on an n
step lap. When will they next meet? They will meet k meters before the
start of the next lap. (Why? Fast Runner would have made k + 2(n - k)
steps, including its head start, and Slow Runner would have made n - k
steps. Both will be k steps before the start of the loop.) Now, going
back to the problem, when Fast Runner (n2) and Slow Runner (n1) are
moving around our circular linked list, n2 will have a head start on
the loop when n1 enters. Specifically, it will have a head start of k,
where k is the number of nodes before the loop. Since n2 has a head
start of k nodes, n1 and n2 will meet k nodes before the start of the
loop. So, we now know the following:
1. Head is k nodes from LoopStart (by definition).
2. MeetingPoint for n1 and n2 is k nodes from LoopStart (as shown above). Thus, if we move n1 back to Head and keep n2 at MeetingPoint,
and move them both at the same pace, they will meet at LoopStart.
LinkedListNode FindBeginning(LinkedListNode head) {
    LinkedListNode n1 = head;
    LinkedListNode n2 = head;

    // Find meeting point. Note the n2 != null guard: without it,
    // n2 = n2.next.next can set n2 to null on a loop-free list and
    // the next n2.next dereference would throw.
    while (n2 != null && n2.next != null) {
        n1 = n1.next;
        n2 = n2.next.next;
        if (n1 == n2) {
            break;
        }
    }

    // Error check - there is no meeting point, and therefore no loop
    if (n2 == null || n2.next == null) {
        return null;
    }

    /* Move n1 to Head. Keep n2 at Meeting Point. Each are k steps
     * from the Loop Start. If they move at the same pace, they must
     * meet at Loop Start. */
    n1 = head;
    while (n1 != n2) {
        n1 = n1.next;
        n2 = n2.next;
    }

    // Now n2 points to the start of the loop.
    return n2;
}
I had trouble visualizing what was going on with this algorithm. Hopefully this helps someone else.
At time t = k (3), p2 is twice the distance from the head (0) as p1, so for them to get back in line, p2 needs to 'catch up' to p1, which takes L - k = 5 more steps (here L = 8, k = 3); p2 is travelling at 2x the speed of p1.
At time t = k + (L - k) (8), p2 needs to travel k more steps forward to get back to k. If we reset p1 back to the head (0), we know that p1 and p2 will both meet back at k (3 and 19 respectively) if p2 now travels at the same speed as p1.
There is the solution given by amit. The problem is that you either know it or you don't; you won't be able to figure it out in an interview. Since I have never needed to find a cycle in a linked list, knowing it is pointless to me except for passing interviews. So for an interviewer, posing this as an interview question and expecting amit's answer (which is nice because it has linear time and zero extra space) is quite stupid.
So your solution is mostly fine, except that you should use a hash table, and you must make sure that the hash table hashes references to nodes and not nodes. Say you have a node containing a string and a "next" pointer, and the hash function hashes the string and compares nodes as equal if the strings are equal. In that case you'd find the first node with a duplicate string, and not the node at the start of the loop, unless you are careful.
(amit's solution has a very similar problem in languages where == compares the objects, and not the references. For example, in Swift you'd have to use === (which compares references) and not == (which compares objects).)
I wrote code for an enqueue method in a singly linked list and I'm wondering if anyone can tell me the Big O is for this code. I at first assumed it was O(n) because of the loop. However, the loop will always iterate a specific number of times depending on how many items are in the list. This makes me believe it's actually O(1). Am I wrong?
public Node<T> enqueue(T data) {
    Node<T> toQueue = new Node<>(data);
    if (this.head == null) {
        this.head = toQueue;
        return toQueue;
    }
    Node<T> lastNode = this.head;
    while (lastNode.next != null) {
        lastNode = lastNode.next;
    }
    lastNode.next = toQueue;
    return toQueue;
}
Let's start from the following excerpt from the question:
However, the loop will always iterate a specific number of times depending on how many items are in the list.
This is a correct statement.
Please, note the dependency on the input size:
iterate a specific number of times depending on how many items are in the list
Therefore, the algorithm has the linear time complexity — O(n).
Linear time complexity
A slightly reformatted excerpt from the Wikipedia article Time complexity, section "Linear time":
An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that the running time increases at most linearly with the size of the input. More precisely, this means that there is a constant c such that the running time is at most cn for every input of size n. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list, if the adding time is constant, or, at least, bounded by a constant.
A slightly reformatted excerpt from the Wikipedia article Big O notation, section "Orders of common functions"; let's refer to the corresponding row:
Notation: O(n)
Name: linear
Example: Finding an item in an unsorted list or in an unsorted array; adding two n-bit integers by ripple carry
However, the loop will always iterate a specific number of times depending on how many items are in the list. This makes me believe it's actually O(1).
Am I wrong?
Your reasoning is wrong.
A complexity analysis of an algorithm needs to take account of all possible inputs.
While the number of elements in a single given list can be assumed to be constant while you are looping, the number of elements in any list is not a constant. If you consider all possible lists that could be inputs to the algorithm, the length of the list is a variable whose value can get arbitrarily large1.
If we call that variable N then it is clear that the complexity class for your algorithm is O(N). (I won't go into the details because I think you already understand them.)
The only way that your reasoning could be semi-correct would be if you could categorically state that the input list length was less than some constant L. The complexity class then collapses to O(1). However even this reasoning is dubious2, since the algorithm as written does not check that constraint. It has no control over the list length!
On the other hand, if you rewrote the algorithm as this:
public static final int L = 42;

public Node<T> enqueue(T data) {
    Node<T> toQueue = new Node<>(data);
    if (this.head == null) {
        this.head = toQueue;
        return toQueue;
    }
    Node<T> lastNode = this.head;
    int count = 0;
    while (lastNode.next != null) {
        lastNode = lastNode.next;
        if (count++ > L) {
            throw new IllegalArgumentException("list too long");
        }
    }
    lastNode.next = toQueue;
    return toQueue;
}
then we can legitimately say that the method is O(1). It will either give a result or throw an exception within a constant time.
1 - I am ignoring the fact that there are practical limits on how long a simple linked list like this can be in a Java program. If the list is too large, it won't fit in the heap. And there are limits on how large you could make the heap.
2 - A more mathematically sound way to describe the scenario is that your algorithm is O(N) but your use of the algorithm is O(1) because the calling code (not shown) enforces a bound on the list length.
I am trying to calculate the runtime of a function I wrote in Java: a function that calculates the sum of all the right children in a binary tree.
I used recursion in the function, and I don't really understand how to calculate the runtime of a recursive function, let alone one on a binary tree (I just started studying the subject).
This is the code I wrote:
public int sumOfRightChildren() {
    return sumOfRightChildren(this.root);
}

private int sumOfRightChildren(Node root) {
    if (root == null)                  // O(1)
        return 0;                      // O(1)
    int sum = 0;                       // O(1)
    if (root.right != null)            // O(1)
        sum += root.right.data;        // O(1)
    sum += sumOfRightChildren(root.right); // worst case O(n)?
    if (root.left != null) {
        sum += sumOfRightChildren(root.left); // worst case O(n)?
    }
    return sum;
}
I tried writing down the runtimes I think it takes, but I don't think I am doing it right.
If someone can help guide me I'd be very thankful.
I'm trying to calculate T(n).
Since you visit every node exactly once, it is easy to see that the runtime cost is T(n) = n * K, where n is the number of nodes in the binary tree and K is the per-node cost.
If you want to explicitly consider the cost of certain operations you may not be able to calculate it exactly (without having an input example). For example, calculating the number of times sum+=... is executed is not possible because it depends on the particular tree.
In this case the worst case is a full binary tree. If n = 1, 2, ... is its depth:
the complexity is O(2^n) in terms of the depth (no matter the operations, since all of them take O(1) as you have posted).
the cost of sum += root.right.data; is T(n) = 2^n - 1 (all internal nodes).
the cost of sum += ... is T(n) = 3 * (2^n - 1) (twice for every internal node and once more for each node).
...
(NOTE: the exact final expression may vary, since your if (root.left != null) check is not useful; it is preferable to leave that condition to the if (root == null) check.)
OK, I think I understood.
The worst case is that it has to check all the nodes in the tree, so the answer is: O(n)
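To see the "every node exactly once" argument concretely, here is a sketch of the same traversal instrumented with a visit counter (the Node class and names are illustrative, not from the question):

```java
class RightSum {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Instrumentation: counts how many non-null nodes the recursion touches.
    static int visits = 0;

    // Same shape as the question's code. Each node triggers exactly one
    // non-null call, so visits ends up equal to n, giving T(n) = n * K.
    static int sumOfRightChildren(Node root) {
        if (root == null) return 0;
        visits++;
        int sum = 0;
        if (root.right != null) sum += root.right.data;
        sum += sumOfRightChildren(root.right);
        sum += sumOfRightChildren(root.left);
        return sum;
    }
}
```

Running it on any tree shows visits equal to the node count, which is the whole content of the O(n) claim.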
How can I write a function in Java that reverses a singly linked list by dividing it in half, that is, n/2 nodes for the first part and the rest for the second part (n is the size of the linked list), recursing until it reaches one node, and then merging the divided parts? Using two new linked lists is allowed in each divide, but using a list node is not. The function must be void and takes no parameters. I have n, head and tail of the main linked list.
I found this code on websites, but it doesn't divide the linked list in half, so it is not helpful.
static ListNode reverseR(ListNode head) {
    if (head == null || head.next == null) {
        return head;
    }
    ListNode first = head;
    ListNode rest = head.next;
    // reverse the rest of the list recursively
    head = reverseR(rest);
    // fix the first node after recursion
    first.next.next = first;
    first.next = null;
    return head;
}
Because you're working with a linked list, the approach you suggest is unlikely to be efficient. Ascertaining where the midpoint lies is a linear time operation, and even if the size of the list was known, you would still have to iterate up to the midpoint. Because you have a linear term at each node of the recurrence tree, the overall performance will be O(n lg n), slower than the O(n) bound for the code which you have provided.
That being said, you could still reverse the list by the following strategy:
Step 1: Split the list L into A (the first half) and B (the second half).
Step 2: Recurse on each of A and B. This recursion should bottom out
when given a list of length 1.
Step 3: Attach the new head of the reversed A to the new tail of the reversed B.
You can see that to begin with, our list is AB. We then recurse to get A' and B', each the reversed version of the half lists. We then output our new reversed list B'A'. The first element of the original A is now the last element of the list overall, and the last element of the original B is now the first overall.
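As a rough illustration of that strategy, here is a sketch in Java (the Node class and method names are mine, not from the question, and it assumes the length is known, as the question states):

```java
class SplitReverse {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // Reverse by halving: split into A and B, recurse, reattach as B'A'.
    // Each level walks to the midpoint (and to B's tail), so the overall
    // cost is O(n log n), as discussed above.
    static Node reverseByHalves(Node head, int length) {
        if (head == null || length <= 1) return head;
        int half = length / 2;

        // Step 1: walk to the midpoint and split into A (first half) and B (rest)
        Node aTail = head;
        for (int i = 1; i < half; i++) aTail = aTail.next;
        Node bHead = aTail.next;
        aTail.next = null;

        // Step 2: recurse on each half, bottoming out at length 1
        Node aRev = reverseByHalves(head, half);
        Node bRev = reverseByHalves(bHead, length - half);

        // Step 3: attach reversed A after reversed B, producing B'A'
        Node bRevTail = bRev;
        while (bRevTail.next != null) bRevTail = bRevTail.next;
        bRevTail.next = aRev;
        return bRev;
    }
}
```

The recursion mirrors the three steps above: split, recurse on each half, then stitch B' in front of A'.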
If one binary tree has x nodes and the other has y nodes, where x is bigger than y, what is the complexity of comparing the two trees' contents? I was thinking O(n^2) because searching for each node is O(n).
And how about inserting then comparing the trees?
Assuming your binary trees are sorted, this is an O(n) operation (where n is the sum of the nodes in both trees, not the product).
You can simply run two "indexes" side by side through the trees stopping when an element is different. If you get to the end of both and no differences were found, then the trees were identical, something like the following pseudo-code:
def areEqual (tree1, tree2):
    pos1 = first (tree1)
    pos2 = first (tree2)
    while pos1 != END and pos2 != END:
        if tree1[pos1] != tree2[pos2]:
            return false
        pos1 = next (tree1, pos1)
        pos2 = next (tree2, pos2)
    if pos1 != END or pos2 != END:
        return false
    return true
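One way to realize the two-"index" idea in Java is to give each tree an explicit stack that yields its nodes in sorted (in-order) order and advance both stacks in lockstep; the Node class and names below are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class TreeCompare {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Push a node and all of its left descendants: the standard way to
    // advance an explicit-stack in-order traversal.
    static void pushLeft(Deque<Node> stack, Node n) {
        for (; n != null; n = n.left) stack.push(n);
    }

    // Walk both trees in sorted order, one element at a time, stopping at
    // the first difference. O(n) time; the stacks cost O(height) space.
    static boolean areEqual(Node t1, Node t2) {
        Deque<Node> s1 = new ArrayDeque<>(), s2 = new ArrayDeque<>();
        pushLeft(s1, t1);
        pushLeft(s2, t2);
        while (!s1.isEmpty() && !s2.isEmpty()) {
            Node a = s1.pop(), b = s2.pop();
            if (a.data != b.data) return false;
            pushLeft(s1, a.right);
            pushLeft(s2, b.right);
        }
        return s1.isEmpty() && s2.isEmpty(); // both must run out together
    }
}
```

Note that this compares sorted contents, so two trees of different shapes holding the same values compare equal, which matches the "two indexes" formulation above.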
If they're not sorted, and you have no other information that may allow you to optimise the function, and cannot use extra data structures, it will be O(n^2), since you'll have to find an arbitrary equal node in the second tree for every single node in the first (as well as mark it somehow to indicate you've used it).
Keep in mind there are usually ways to trade space for time if the former is more important (and it often is).
For example, even with totally unordered trees, you can reduce the complexity considerably by using hashing for example:
def areEqual (tree1, tree2):
    hash = {}
    # Add all items from first tree.
    for item in tree1.allItems():
        if not exists hash[item]:
            hash[item] = 0
        hash[item] += 1
    # Subtract all items from second tree.
    for item in tree2.allItems():
        if not exists hash[item]:
            hash[item] = 0
        hash[item] -= 1
        if hash[item] == 0:
            delete hash[item]
    if hash.size != 0:
        return false
    return true
Since hashing tends to amortise toward O(1), the problem as a whole can be considered O(n).
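The hashing pseudocode above translates to Java roughly as follows, assuming the trees' items have already been flattened into lists (the class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class UnorderedCompare {
    // Multiset comparison via a map of counts: add for the first
    // collection, subtract for the second, and expect everything to
    // cancel out. O(n) expected time, O(n) extra space.
    static <T> boolean sameElements(List<T> items1, List<T> items2) {
        Map<T, Integer> counts = new HashMap<>();
        for (T item : items1) {
            counts.merge(item, 1, Integer::sum);
        }
        for (T item : items2) {
            Integer c = counts.merge(item, -1, Integer::sum);
            if (c == 0) counts.remove(item); // fully cancelled
        }
        return counts.isEmpty(); // leftovers mean a mismatch either way
    }
}
```

A negative leftover count catches extra items in the second collection, just as the pseudocode's final size check does.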
If you are provided the head of a linked list, and are asked to reverse every k-node sequence, how might this be done in Java? e.g., a->b->c->d->e->f->g->h with k = 3 would be c->b->a->f->e->d->h->g
Any general help or even pseudocode would be greatly appreciated! Thanks!
If k is expected to be reasonably small, I would just go for the simplest thing: ignore the fact that it's a linked list at all, and treat each subsequence as just an array-type thing of things to be reversed.
So, if your linked list's node class is a Node<T>, create a Node<?>[] of size k. For each segment, load k nodes into the array, then just reverse their elements with a simple for loop. In pseudocode:
// reverse the elements within the k nodes
for i from 0 to k/2:
nodeI = segment[i]
nodeE = segment[segment.length-i-1]
tmp = nodeI.elem
nodeI.elem = nodeE.elem
nodeE.elem = tmp
Pros: very simple, O(N) performance, takes advantage of an easily recognizable reversing algorithm.
Cons: requires a k-sized array (just once, since you can reuse it per segment)
Also note that this means that each Node doesn't move in the list, only the objects the Node holds. This means that each Node will end up holding a different item than it held before. This could be fine or not, depending on your needs.
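Putting the loading and swapping together, a sketch of the whole walk might look like this (the Node class and method names are illustrative; the final partial segment is reversed too, matching the question's example):

```java
class SegmentReverse {
    static class Node<T> {
        T elem;
        Node<T> next;
        Node(T elem) { this.elem = elem; }
    }

    // Reverse the element values of every k-node segment. Nodes keep their
    // positions; only the payloads move. O(n) time, O(k) extra space for
    // the reusable segment buffer.
    static <T> void reverseSegments(Node<T> head, int k) {
        Object[] segment = new Object[k];
        Node<T> cur = head;
        while (cur != null) {
            // load up to k nodes into the buffer
            int len = 0;
            for (Node<T> n = cur; n != null && len < k; n = n.next) {
                segment[len++] = n;
            }
            // swap payloads from the two ends toward the middle
            for (int i = 0; i < len / 2; i++) {
                @SuppressWarnings("unchecked")
                Node<T> lo = (Node<T>) segment[i];
                @SuppressWarnings("unchecked")
                Node<T> hi = (Node<T>) segment[len - i - 1];
                T tmp = lo.elem;
                lo.elem = hi.elem;
                hi.elem = tmp;
            }
            // advance past this segment
            for (int i = 0; i < len; i++) cur = cur.next;
        }
    }
}
```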
This is pretty high-level, but I think it'll give some guidance.
I'd have a helper method like void swap3(Node first, Node last) that takes three elements at an arbitrary position of the list and reverses them. This shouldn't be hard, and could be done recursively (swap the outer elements, recurse on the inner elements until the size of the sublist is 0 or 1). Now that I think of it, you could generalize this into swapK() easily if you're using recursion.
Once that is done, you can just walk along your linked list and call swapK() every k nodes. If the size of the list isn't divisible by k, you could either just not swap that last bit, or reverse the last length % k nodes using your swapping technique.
TIME O(n); SPACE O(1)
A usual requirement of list reversal is that you do it in O(n) time and O(1) space. This eliminates recursion or stack or temporary array (what if K==n?), etc.
Hence the challenge here is to modify an in-place reversal algorithm to account for the K factor. Instead of K I use dist for distance.
Here is a simple in-place reversal algorithm: Use three pointers to walk the list in place: b to point to the head of the new list; c to point to the moving head of the unprocessed list; a to facilitate swapping between b and c.
A->B->C->D->E->F->G->H->I->J->L //original
A<-B<-C<-D E->F->G->H->I->J->L //during processing
^ ^
| |
b c
`a` is the variable that allow us to move `b` and `c` without losing either of
the lists.
Node simpleReverse(Node n) { // n is head
    if (null == n || null == n.next)
        return n;

    Node a = n, b = a.next, c = b.next;
    a.next = null;
    b.next = a;
    while (null != c) {
        a = c;
        c = c.next;
        a.next = b;
        b = a;
    }
    return b;
}
To convert the simpleReverse algorithm to a chunkReverse algorithm, do the following:
1] After reversing the first chunk, set head to b; head is the permanent head of the resulting list.
2] for all the other chunks, set tail.next to b; recall that b is the head of the chunk just processed.
some other details:
3] If the list has one or fewer nodes or the dist is 1 or less, then return the list without processing.
4] use a counter cnt to track when dist consecutive nodes have been reversed.
5] use variable tail to track the tail of the chunk just processed and tmp to track the tail of the chunk being processed.
6] notice that before a chunk is processed, its head, which is bound to become its tail, is the first node you encounter: so, set it to tmp, which is a temporary tail.
public Node reverse(Node n, int dist) {
    if (dist <= 1 || null == n || null == n.right)
        return n;

    Node tail = n, head = null, tmp = null;
    while (true) {
        Node a = n, b = a.right;
        n = b.right;
        a.right = null;
        b.right = a;
        int cnt = 2;
        while (null != n && cnt < dist) {
            a = n; n = n.right; a.right = b; b = a;
            cnt++;
        }
        if (null == head) head = b;
        else {
            tail.right = b;
            tail = tmp;
        }
        tmp = n;
        if (null == n) return head;
        if (null == n.right) {
            tail.right = n;
            return head;
        }
    } // while (true)
}
E.g. in Common Lisp:
(defun rev-k (k sq)
  (if (<= (length sq) k)
      (reverse sq)
      (concatenate 'list (reverse (subseq sq 0 k)) (rev-k k (subseq sq k)))))
Another way, e.g. in F# using a Stack:
open System.Collections.Generic

let rev_k k (list: 'T list) =
    seq {
        let stack = new Stack<'T>()
        for x in list do
            stack.Push(x)
            if stack.Count = k then
                while stack.Count > 0 do
                    yield stack.Pop()
        while stack.Count > 0 do
            yield stack.Pop()
    }
    |> Seq.toList
Use a stack and recursively remove k items from the list, push them to the stack then pop them and add them in place. Not sure if it's the best solution, but stacks offer a proper way of inverting things. Notice that this also works if instead of a list you had a queue.
Simply dequeue k items, push them to the stack, pop them from the stack and enqueue them :)
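A sketch of that stack idea in Java, shown over a list of values for simplicity (the class and method names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class StackReverse {
    // Reverse every k items using a stack: push k items, then pop them,
    // which emits them in reverse order. A leftover partial group at the
    // end is drained (and therefore reversed) the same way.
    static <T> List<T> reverseEveryK(List<T> input, int k) {
        Deque<T> stack = new ArrayDeque<>();
        List<T> out = new ArrayList<>();
        for (T x : input) {
            stack.push(x);
            if (stack.size() == k) {
                while (!stack.isEmpty()) out.add(stack.pop());
            }
        }
        while (!stack.isEmpty()) out.add(stack.pop()); // drain the remainder
        return out;
    }
}
```

The same push-k/pop-k rhythm works if the input is a queue: dequeue k, push, pop, enqueue, exactly as described above.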
This implementation uses the ListIterator class (shown for a single k-element segment; listIterator() already returns a ListIterator, so no cast is needed):
LinkedList<T> list;

// Inside the method, after the method's parameter checks
ListIterator<T> it = list.listIterator();
ListIterator<T> reverseIt = list.listIterator(k);
for (int i = 0; i < k / 2; i++) {
    T element = it.next();
    it.set(reverseIt.previous());
    reverseIt.set(element);
}