I have an n-ary tree which contains key values (integers) in each node. I would like to calculate the minimum depth of the tree. Here is what I have come up with so far:
int min = 0;

private int getMinDepth(Node node, int counter, int temp){
    if(node == null){
        //if it is the first branch record min
        //otherwise compare min to this value
        //and record the minimum value
        if(counter == 0){
            temp = min;
        }else{
            temp = Math.min(temp, min);
            min = 0;
        }
        counter++;//counter should increment by 1 when at end of branch
        return temp;
    }
    min++;
    getMinDepth(node.q1, counter, min);
    getMinDepth(node.q2, counter, min);
    getMinDepth(node.q3, counter, min);
    getMinDepth(node.q4, counter, min);
    return temp;
}
The code is called like so:
int minDepth = getMinDepth(root, 0, 0);
The idea is that if the tree is traversing down the first branch (branch number is tracked by counter), then we set the temp holder to store this branch's depth. From then on, compare the next branch's length, and if it is smaller, make temp that length. For some reason counter is not incrementing at all and always staying at zero. Anyone know what I am doing wrong?
I think you're better off doing a breadth-first search. Your current implementation tries to be depth-first, which means it could end up exploring the whole tree if the branches happen to be in an awkward order.
To do a breadth-first search, you need a queue (an ArrayDeque is probably the right choice). You'll then need a little class that holds a node and a depth. The algorithm goes a little something like this:
Queue<NodeWithDepth> q = new ArrayDeque<NodeWithDepth>();
q.add(new NodeWithDepth(root, 1));
while (true) {
    NodeWithDepth nwd = q.remove();
    if (hasNoChildren(nwd.node())) return nwd.depth();
    if (nwd.node().q1 != null) q.add(new NodeWithDepth(nwd.node().q1, nwd.depth() + 1));
    if (nwd.node().q2 != null) q.add(new NodeWithDepth(nwd.node().q2, nwd.depth() + 1));
    if (nwd.node().q3 != null) q.add(new NodeWithDepth(nwd.node().q3, nwd.depth() + 1));
    if (nwd.node().q4 != null) q.add(new NodeWithDepth(nwd.node().q4, nwd.depth() + 1));
}
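For completeness, here's roughly what NodeWithDepth and hasNoChildren could look like. This is only a sketch, assuming the Node type from your question, with accessor names chosen to match their use above:

class NodeWithDepth {
    private final Node node;
    private final int depth;

    NodeWithDepth(Node node, int depth) {
        this.node = node;
        this.depth = depth;
    }

    Node node() { return node; }
    int depth() { return depth; }
}

// In a quadtree-style node, "no children" means all four are null.
boolean hasNoChildren(Node n) {
    return n.q1 == null && n.q2 == null && n.q3 == null && n.q4 == null;
}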
This looks like it uses more memory than a depth-first search, but when you consider that stack frames consume memory, and that this will explore less of the tree than a depth-first search, you'll see that's not the case. Probably.
Anyway, see how you get on with it.
You are passing the counter variable by value, not by reference. Thus, any changes made to it are local to the current stack frame and are lost as soon as the function returns and that frame is popped off the stack. Java doesn't support passing primitives (or anything, really) by reference, so you'd either have to pass it as a single-element array or wrap it in an object to get the behavior you're looking for.
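To illustrate the single-element-array trick (a sketch only; countBranchEnds is a made-up name for illustration, not your method):

// The array object is shared between caller and callee, so increments
// made inside the call are still visible after it returns.
private void countBranchEnds(Node node, int[] counter) {
    if (node == null) {
        counter[0]++; // this change survives the return
        return;
    }
    countBranchEnds(node.q1, counter);
    countBranchEnds(node.q2, counter);
    countBranchEnds(node.q3, counter);
    countBranchEnds(node.q4, counter);
}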
Here's a simpler (untested) version that avoids the need to pass a variable by reference:
private int getMinDepth(QuadTreeNode node){
    if(node == null)
        return 0;
    return 1 + Math.min(
            Math.min(getMinDepth(node.q1), getMinDepth(node.q2)),
            Math.min(getMinDepth(node.q3), getMinDepth(node.q4)));
}
Both your version and the one above are inefficient because they search the entire tree, when really you only need to search down to the shallowest depth. To do it efficiently, use a queue to do a breadth-first search like Tom recommended. Note however, that the trade-off required to get this extra speed is the extra memory used by the queue.
Edit:
I decided to go ahead and write a breadth-first search version that doesn't assume you have a class that keeps track of the nodes' depths (like Tom's NodeWithDepth). Once again, I haven't tested it or even compiled it... but I think it should be enough to get you going even if it doesn't work right out of the box. This version should perform faster on large, complex trees, but also uses more memory to store the queue.
private int getMinDepth(QuadTreeNode node){
    // Handle the empty tree case
    if(node == null)
        return 0;

    // Perform a breadth first search for the shallowest null child
    // while keeping track of how deep into the tree we are.
    LinkedList<QuadTreeNode> queue = new LinkedList<QuadTreeNode>();
    queue.addLast(node);
    int currentCountTilNextDepth = 1;
    int nextCountTilNextDepth = 0;
    int depth = 1;
    while(!queue.isEmpty()){
        // Check if we're transitioning to the next depth
        if(currentCountTilNextDepth <= 0){
            currentCountTilNextDepth = nextCountTilNextDepth;
            nextCountTilNextDepth = 0;
            depth++;
        }

        // If this node has a null child, we're done
        QuadTreeNode current = queue.removeFirst();
        if(current.q1 == null || current.q2 == null || current.q3 == null || current.q4 == null)
            break;

        // If it didn't have a null child, add all the children to the queue
        queue.addLast(current.q1);
        queue.addLast(current.q2);
        queue.addLast(current.q3);
        queue.addLast(current.q4);

        // Housekeeping to keep track of when we need to increment our depth
        nextCountTilNextDepth += 4;
        currentCountTilNextDepth--;
    }

    // Return the depth of the shallowest node that had a null child
    return depth;
}
Counter is always staying at zero because primitives in Java are passed by value. This means that if you overwrite the value inside a function call, the caller won't see the change. Or, if you're familiar with C++ notation, it's foo(int x) instead of foo(int& x).
One workaround is to wrap the counter in a mutable object (for example, a single-element int array or an AtomicInteger). What gets passed is still a copy of the reference, but both copies point at the same object, so mutations are visible to the caller. A plain Integer won't do, since it's immutable.
Since you're interested in the minimum depth, a breadth-first solution will work just fine, but you may get memory problems for large trees.
If you assume that the tree may become rather large, an iterative deepening search (IDS) would be best. This way you get the time complexity of the breadth-first variant with the space complexity of a depth-first solution.
Here's a small example, since IDS isn't as well known as its brethren (though much more useful for serious stuff!). I assume that every node has a list of children, for simplicity (and since it's more general).
public static <T> int getMinDepth(Node<T> root) {
    int depth = 0;
    while (!getMinDepth(root, depth)) depth++;
    return depth;
}

private static <T> boolean getMinDepth(Node<T> node, int depth) {
    if (depth == 0)
        return node.children.isEmpty();
    for (Node<T> child : node.children)
        if (getMinDepth(child, depth - 1)) return true;
    return false;
}
For a short explanation see http://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
I'm practicing Java by working through algorithms on leetcode. I just solved the "Construct a binary tree from inorder and postorder traversal" problem and was playing with my code to try to get better performance (as measured by the leetcode compilation/testing). Here is the code I wrote:
class Solution {
    public TreeNode buildTree(int[] inorder, int[] postorder) {
        if(inorder.length == 1){
            TreeNode root = new TreeNode(inorder[0]);
            return root;
        }
        if(inorder.length == 0)
            return null;
        //int j = inorder.length; //Calculate this once, instead of each time the for loop executes
        return reBuild(inorder, postorder, 0, inorder.length - 1, 0, postorder.length - 1);
    }

    public TreeNode reBuild(int[] inorder, int[] postorder, int inStart, int inEnd, int postStart, int postEnd){ //j passed in as argument here
        if(inStart > inEnd)
            return null; //base case
        int rIndex = 0;
        int j = inorder.length;
        TreeNode root = new TreeNode(postorder[postEnd]); //Root is the last item in the postorder array
        if(inStart == inEnd)
            return root; //This node has no children
        //for(int i = 0; i < inorder.length; ++i)
        for(int i = 0; i < j; ++i){ //Find the next root value in inorder and get index
            if(inorder[i] == root.val){
                rIndex = i;
                break;
            }
        }
        root.left = reBuild(inorder, postorder, inStart, rIndex - 1, postStart, postStart - inStart + rIndex - 1); //Build left subtree
        root.right = reBuild(inorder, postorder, rIndex + 1, inEnd, postEnd - inEnd + rIndex, postEnd - 1); //Build right subtree
        return root;
    }
}
My question concerns the for loop in the reBuild function. My first submission calculated the length of inorder each time the loop ran, which is obviously inefficient. I then took this out, and stored the length in a variable j, and used that in the for loop instead. This gave me a boost of ~1ms runtime. So far, so good. Then, I tried moving the calculation of j to the buildTree function, rationalizing that I don't need to calculate it in each recursive call since it doesn't change. When I moved it there and passed it in as a parameter, my runtime went back up 1ms, but my memory usage decreased significantly. Is this a quirk of how leetcode measures efficiency? If not, why would that move increase runtime?
If by calculating the length you mean accessing inorder.length, then this is likely why you are losing performance.
When created, arrays hold onto a fixed value for their length, called "length". This is a field, not a method, so reading it costs next to nothing.
If j is never changed (i.e., j always equals inorder.length), the compiler likely ignores "j = inorder.length;" and simply accesses inorder.length wherever it sees j. You are then adding complexity to each function call by passing j where inorder (and thus inorder.length) is already present. This depends on the compiler implementation, though, and may not actually happen.
In terms of access time, I think fields reached through an object reference are slower than in-scope local variables (think: access inorder, then access length).
Warning, hardware talk:
Another thing to consider is registers. These are data storage locations on the CPU itself that the code actually runs from (think HDD/SSD > RAM > cache > registers), and they generally can't hold much more than a hundred values at a time. Thus, depending on the size of the current method (the number of variables in scope), the code can run much faster or slower. Java seems to add a lot of overhead here, so for small functions, 1 or 2 extra values in scope can noticeably affect the speed (as the program has to fall back to the cache).
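If you want timing numbers you can trust more than leetcode's measurements, a JMH microbenchmark is the standard tool. A minimal sketch (the input arrays here are made up for illustration; real benchmarks should use realistically sized inputs):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class BuildTreeBenchmark {
    // Hypothetical small inputs, for illustration only.
    int[] inorder = {9, 3, 15, 20, 7};
    int[] postorder = {9, 15, 7, 20, 3};
    Solution solution = new Solution();

    @Benchmark
    public TreeNode buildTree() {
        return solution.buildTree(inorder, postorder);
    }
}

Run through the JMH harness, this warms up the JIT before measuring, which removes most of the noise that makes single-run timings misleading.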
I'm a student, and my team and I have to make a simulation of students' behaviour on a campus (making "groups of friends", walking, etc.). To find the path a student has to take, I used the A* algorithm (as I found out that it's one of the fastest path-finding algorithms). Unfortunately, our simulation doesn't run fluently (it takes 1-2 seconds between successive iterations). I wanted to optimize the algorithm, but I don't have any idea what more I can do. Can you guys help me out and tell me whether it's possible to optimize my A* implementation? Here goes the code:
public LinkedList<Field> getPath(Field start, Field exit) {
    LinkedList<Field> foundPath = new LinkedList<Field>();
    LinkedList<Field> opensList = new LinkedList<Field>();
    LinkedList<Field> closedList = new LinkedList<Field>();
    Hashtable<Field, Integer> gscore = new Hashtable<Field, Integer>();
    Hashtable<Field, Field> cameFrom = new Hashtable<Field, Field>();
    Field x = new Field();
    gscore.put(start, 0);
    opensList.add(start);

    while(!opensList.isEmpty()){
        int min = -1;
        //searching for minimal F score
        for(Field f : opensList){
            if(min == -1){
                min = gscore.get(f) + getH(f, exit);
                x = f;
            }else{
                int currf = gscore.get(f) + getH(f, exit);
                if(min > currf){
                    min = currf;
                    x = f;
                }
            }
        }
        if(x == exit){
            //path reconstruction
            Field curr = exit;
            while(curr != start){
                foundPath.addFirst(curr);
                curr = cameFrom.get(curr);
            }
            return foundPath;
        }
        opensList.remove(x);
        closedList.add(x);
        for(Field y : x.getNeighbourhood()){
            if(!(y.getType() == FieldTypes.PAVEMENT || y.getType() == FieldTypes.GRASS)
                    || closedList.contains(y) || !(y.getStudent() == null)){
                continue;
            }
            int tentGScore = gscore.get(x) + getDist(x, y);
            boolean distIsBetter = false;
            if(!opensList.contains(y)){
                opensList.add(y);
                distIsBetter = true;
            }else if(tentGScore < gscore.get(y)){
                distIsBetter = true;
            }
            if(distIsBetter){
                cameFrom.put(y, x);
                gscore.put(y, tentGScore);
            }
        }
    }
    return foundPath;
}

private int getH(Field start, Field end){
    int x = start.getX() - end.getX();
    int y = start.getY() - end.getY();
    if(x < 0){
        x = x * (-1);
    }
    if(y < 0){
        y = y * (-1);
    }
    return x + y;
}

private int getDist(Field start, Field end){
    int ret = 0;
    if(end.getType() == FieldTypes.PAVEMENT){
        ret = 8;
    }else if(start.getX() == end.getX() || start.getY() == end.getY()){
        ret = 10;
    }else{
        ret = 14;
    }
    return ret;
}
//EDIT
This is what I got from jProfiler:
So getH is a bottleneck, yes? Maybe remembering each field's H score would be a good idea?
A linked list is not a good data structure for the open set. You have to find the node with the smallest F score in it, and with a list you can either search through it in O(n) or insert in sorted position in O(n); either way it's O(n) per operation. With a heap it's only O(log n). Updating the G score would remain O(n) (since you have to find the node first), unless you also add a hash map from nodes to their indexes in the heap.
A linked list is also not a good data structure for the closed set, where you need a fast contains check, which is O(n) on a linked list. You should use a HashSet for that.
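To make that concrete, here's an untested sketch of getPath with those two changes, keeping the Field, getH and getDist from the question. Since java.util.PriorityQueue has no decrease-key operation, it stores the f-score alongside the node at insertion time, and simply re-inserts a node when its score improves, skipping stale entries as they are polled. The terrain and occupancy checks from your neighbour loop are elided for brevity:

public LinkedList<Field> getPath(Field start, Field exit) {
    // Pair a field with the f-score it had when enqueued, so later score
    // updates can't corrupt the heap ordering.
    class Entry {
        final Field field;
        final int f;
        Entry(Field field, int f) { this.field = field; this.f = f; }
    }
    Map<Field, Integer> gscore = new HashMap<>();
    Map<Field, Field> cameFrom = new HashMap<>();
    Set<Field> closed = new HashSet<>();                     // O(1) contains
    PriorityQueue<Entry> open = new PriorityQueue<>((a, b) -> a.f - b.f);

    gscore.put(start, 0);
    open.add(new Entry(start, getH(start, exit)));

    while (!open.isEmpty()) {
        Field x = open.poll().field;                          // O(log n), no linear scan
        if (!closed.add(x)) continue;                         // stale duplicate, skip it
        if (x == exit) {
            // path reconstruction, same as in the question
            LinkedList<Field> path = new LinkedList<>();
            for (Field curr = exit; curr != start; curr = cameFrom.get(curr))
                path.addFirst(curr);
            return path;
        }
        for (Field y : x.getNeighbourhood()) {
            if (closed.contains(y)) continue;                 // plus your walkability checks
            int tentGScore = gscore.get(x) + getDist(x, y);
            Integer known = gscore.get(y);
            if (known == null || tentGScore < known) {
                gscore.put(y, tentGScore);
                cameFrom.put(y, x);
                open.add(new Entry(y, tentGScore + getH(y, exit)));
            }
        }
    }
    return new LinkedList<>();                                // no path found
}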
You can also optimize the problem by using a different algorithm; the following page illustrates and compares many different algorithms and heuristics:
A*
IDA*
Dijkstra
JumpPoint
...
http://qiao.github.io/PathFinding.js/visual/
From your implementation it seems that you are using a naive A* algorithm. Use the following approach:
A* is implemented using a priority queue, similarly to BFS.
The heuristic function is evaluated at each node to define its fitness to be selected as the next node to visit.
As a new node is visited, its unvisited neighbouring nodes are added to the queue, with their heuristic values as keys.
Do this until every heuristic value in the queue is worse than the value calculated for the goal state.
Find the bottlenecks of your implementation using a profiler (jProfiler, for example, is easy to use).
Use threads in areas where the algorithm can run in parallel.
Tune your JVM to run faster.
Allocate more RAM.
a) As mentioned, you should use a heap in A* - either a basic binary heap or a pairing heap, which should be theoretically faster.
b) On larger maps, it is unavoidable that the algorithm needs some time to run (i.e., when you request a path, it will simply take a while to compute). What you can do is use some local navigation strategy (e.g., "run directly toward the target") while the real path is being computed.
c) If you have a reasonable number of locations (e.g., in a navmesh) and some time at the start of your program, why not use the Floyd-Warshall algorithm? With it, you can look up where to go next in O(1).
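For illustration, a sketch of Floyd-Warshall with a next-hop table; it assumes locations are numbered 0..n-1 and dist starts out holding the direct edge weights (INF where there is no edge):

static final int INF = Integer.MAX_VALUE / 2; // large, but safe to add without overflow

// next[i][j] = the first hop on a shortest path from i to j (-1 if unreachable)
static int[][] next;

static void floydWarshall(int[][] dist) {
    int n = dist.length;
    next = new int[n][n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            next[i][j] = dist[i][j] < INF ? j : -1;

    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j]) {
                    dist[i][j] = dist[i][k] + dist[k][j];
                    next[i][j] = next[i][k]; // head toward k first
                }
}

// After the O(n^3) precomputation, each "where do I go next?" query is O(1).
static int nextHop(int from, int to) {
    return next[from][to];
}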
I built a new pathfinding algorithm called Fast* (or Fastaer). It is a BFS-like A*, but faster and more efficient than A*, with about 90% of A*'s accuracy. Please see this link for info and a demo:
https://drbendanilloportfolio.wordpress.com/2015/08/14/fastaer-pathfinder/
It has a fast greedy line tracer to make the path straighter.
The demo file has it all. Check the Task Manager while using the demo for performance metrics. So far, the profiler results show a maximum surviving generation of 4 and low-to-nil GC time.
Suppose you want to find the middle node of a linked list as efficiently as possible. The most typical "best" answer given is to maintain 2 pointers, a middle and a current, and to increment the middle pointer when the number of elements encountered is divisible by 2. Hence, we can find the middle in 1 pass. Efficient, right? Better than brute force, which involves 1 pass to the end, then 1 more pass until we reach size/2.
BUT... not so fast. Why is the first method faster than the "brute force" way? In the first method, we're incrementing the middle pointer approximately size/2 times. But in the brute-force way, in our 2nd pass, we're traversing the list until we reach the size/2th node. So aren't these 2 methods the same? Why is the first better than the 2nd?
//finding middle element of LinkedList in single pass
LinkedList.Node current = head;
int length = 0;
LinkedList.Node middle = head;

while(current.next() != null){
    length++;
    if(length % 2 == 0){
        middle = middle.next();
    }
    current = current.next();
}

if(length % 2 == 1){
    middle = middle.next();
}
If we modify the code to be:
while(current.next() != null){
    current = current.next();
    middle = middle.next();
    if(current.next() != null){
        current = current.next();
    }
}
Now there are fewer assignments, since length does not have to be incremented, and I do believe this will give an identical result.
At the end of the day both solutions are O(N) so it is a micro-optimization.
As #Oleg Mikheev suggested, why can't we use Floyd's cycle-finding algorithm to find the middle element, as follows:
private int findMiddleElement() {
    if (head == null)
        return -1; // return -1 for empty linked list

    Node oneHop, twoHop;
    oneHop = twoHop = head;
    while (twoHop != null && twoHop.next != null) {
        oneHop = oneHop.next;
        twoHop = twoHop.next.next;
    }
    return oneHop.data;
}
The first method has multiple advantages:
Since the two methods are of the same complexity, O(N), any analysis of their efficiency needs to be careful, and may involve the specific implementation and cost model. However, for the most naive implementation, the first method saves some loop-variable increments.
It also saves you one variable's worth of space: the two pointers, versus the length, the counter, and one pointer. Also, what if it is a huge list and the length overflows?
However, under a specific cost model, the second method might be better. If the elements are all adjacent in memory and the list is large enough that the cache can only hold one contiguous region at a time, the first method's two simultaneous traversals might incur extra memory-access cost. At the end of the day, these two methods are mostly equivalent. Of course, the technique used in the first method is more flashy, and the thought process might be useful in other contexts.
public void middle(){
    node slow = start.next;
    node fast = start.next;

    // guard fast itself as well, or even-length lists throw a NullPointerException
    while(fast != null && fast.next != null)
    {
        slow = slow.next;
        fast = fast.next.next;
    }
    System.out.println(slow.data);
}
10->9->8->7->6->5->4->3->2->1->
5
This is a classic job interview question.
They don't want you to come up with a different algorithm, because both approaches have O(n) complexity. The common answer is that there's no way to know where the middle is without traversing the list once (so traversing once to find the length, and then traversing a second time to find the middle, counts as two passes to your interviewer). They want you to think outside the box and figure out the two-pointer approach you mentioned.
So the complexity is the same, but the way of thinking is different, and the people who interview you want to see that.
I'm trying to answer the following programming question:
In the heap.java program, the insert() method inserts a new node in the heap and ensures the heap condition is preserved. Write a toss() method that places a new node in the heap array without attempting to maintain the heap condition. (Perhaps each new item can simply be placed at the end of the array.) Then write a restoreHeap() method that restores the heap condition throughout the entire heap. Using toss() repeatedly followed by a single restoreHeap() is more efficient than using insert() repeatedly when a large amount of data must be inserted at one time. See the description of heapsort for clues. To test your program, insert a few items, toss in some more, and then restore the heap.
I've written the code for the toss function, which successfully inserts the node at the end without maintaining the heap condition. I'm having problems with the restoreHeap function, though, and I can't wrap my head around it. I've included the two functions below.
The full code of heap.java is here (includes toss() and restoreHeap() )
toss() - I based this off the insert function
public boolean toss(int key)
{
    if(currentSize == maxSize)
        return false;
    Node newNode = new Node(key);
    heapArray[currentSize] = newNode;
    currentSize++;
    return true;
} // end toss()
restoreHeap() - I based this off the trickleUp function and I'm getting a StackOverflowError.
public void restoreHeap(int index)
{
    int parent = (index-1) / 2;
    Node bottom = heapArray[index];

    while( index > 0 &&
           heapArray[parent].getKey() < bottom.getKey() )
    {
        heapArray[index] = heapArray[parent]; // move it down
        index = parent;
        parent = (parent-1) / 2;
    } // end while
    heapArray[index] = bottom;

    while(index != 0)
    {
        restoreHeap(parent++);
    }
} // end restoreHeap()
Any ideas? Help appreciated.
I'll give it a shot. Here is a way to do what you asked, with some explanation.
Since half of all nodes in a heap are leaves, and a leaf by itself is a valid heap, you only have to run through the other half of the nodes to make sure they are also valid. If we do this from the bottom up, we can maintain a valid heap structure "below" as we work our way up through the heap. This is easily accomplished with a for loop:
public void rebuildHeap()
{
    int half = heapArray.length / 2;
    for(int i = half; i >= 0; i--)
        restoreHeap(i);
}
How is restoreHeap implemented, then?
It's supposed to check the node at index against its children to see if the node needs to be relocated. Because we made sure that the trees below the index node are valid heaps, we only have to move the index node down to its right position. Hence we move it down the tree.
First we need to locate the children. Since each row in the tree has twice as many nodes as the row before it, the children can be located like this:
private void restoreHeap(int index)
{
    int leftChild = (index * 2) + 1; //+1 because arrays start at 0
    int rightChild = leftChild + 1;
    ...
Now you just have to compare the children's values against your index node's value. If a child has a bigger value, you need to swap the index node with that child node. If both children have bigger values, swap with the child that has the biggest value of the two (to maintain the heap structure after the swap). Once the nodes have been swapped, you need to call the method again to see if the index node must move further down the tree.
    ...
    int biggest = index;
    if(leftChild < currentSize && heapArray[leftChild].getKey() > heapArray[index].getKey())
        biggest = leftChild; //LeftChild is bigger
    if(rightChild < currentSize && heapArray[rightChild].getKey() > heapArray[biggest].getKey())
        biggest = rightChild; //RightChild is bigger than both leftChild and the index node

    if(biggest != index) //If a swap is needed
    {
        //Swap
        Node swapper = heapArray[biggest];
        heapArray[biggest] = heapArray[index];
        heapArray[index] = swapper;
        restoreHeap(biggest);
    }
}
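To test it the way the exercise suggests, something like this should do (a sketch; the Heap constructor and method names are assumed to match heap.java):

Heap heap = new Heap(31);   // capacity; constructor signature assumed
heap.insert(50);            // normal inserts keep the heap condition
heap.insert(40);
heap.insert(70);
heap.toss(90);              // tosses break the heap condition...
heap.toss(20);
heap.toss(80);
heap.rebuildHeap();         // ...and one bottom-up pass restores it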
Recently I have been asked how, in a singly linked list, we can get to the middle of the list in one iteration.
A --> B --> C --> D (even nodes)
For this it should return the address which points to B.
A --> B --> C (odd nodes)
For this also it should return the address which points to B.
There is one solution: take two pointers, where one moves one node at a time and the other moves two nodes at a time. But it does not seem to be working here:
LinkedList p1, p2;
while(p2.next != null)
{
    p1 = p1.next;
    p2 = p2.next.next;
}
System.out.print("middle of the node" + p1.data); //This does not give accurate result in odd and even
Please help if anyone has done this before.
The basic algorithm would be:
0 Take two pointers.
1 Make both point to the first node.
2 Advance the first pointer by two nodes; if that succeeds, advance the second pointer by one node.
3 When the first pointer reaches the end, the second one will be at the middle.
Update:
It will definitely work in the odd case. For the even case, you need to check one more condition: if the first pointer is allowed to move one node ahead but not two, then both pointers end up around the middle, and you have to decide which one to take as the middle.
You can't advance p1 unless you successfully advanced p2 twice; otherwise, with a list of length 2 you end up with both pointing at the end (and you indicated even-length lists should round toward the beginning).
So:
while ( p2.next != null ) {
    p2 = p2.next;
    if (p2.next != null) {
        p2 = p2.next;
        p1 = p1.next;
    }
}
I know you've already accepted an answer, but this whole question sounds like an exercise in cleverness rather than an attempt to get the correct solution. Why would you do something in n steps when you can do it in n/2?
EDIT: This used to assert O(1) performance, and that is simply not correct. Thanks to ysth for pointing that out.
In practice, you would do this in zero iterations:
LinkedList list = ...
int size = list.size();
int middle = (size / 2) + (size % 2 == 0 ? 0 : 1) - 1; //index of middle item
Object o = list.get(middle); //or ListIterator it = list.listIterator(middle);
The solution of taking two pointers, where one moves at half the rate of the other, should work fine. Most likely it is not the approach but your actual implementation that is the problem. Post more details of your implementation.
static ListElement findMiddle(ListElement head){
    if (head == null) return null; // guard against an empty list

    ListElement slower = head;
    ListElement faster = head;
    while(faster.next != null && faster.next.next != null){
        faster = faster.next.next;
        slower = slower.next;
    }
    return slower;
}
public static Node middle(Node head){
    Node slow = head, fast = head;

    // note: && (short-circuit), not &, or a null fast would still evaluate fast.next
    while(fast != null && fast.next != null && fast.next.next != null){
        slow = slow.next;
        fast = fast.next.next;
    }
    if(fast != null && fast.next != null){
        slow = slow.next;
    }
    return slow;
}
public ListNode middleNode(ListNode head) {
    if(head == null) return head;

    ListNode slow = head;
    ListNode fast = head;
    while(fast != null && fast.next != null) {
        fast = fast.next.next;
        slow = slow.next;
    }
    return slow;
}