A* (A star) algorithm optimization - java

I'm a student, and my team and I have to make a simulation of students' behaviour on a campus (making "groups of friends", walking, etc.). To find the path a student has to take, I used the A* algorithm (as I found out it's one of the fastest path-finding algorithms). Unfortunately, our simulation doesn't run fluently (it takes about 1-2 s between successive iterations). I wanted to optimize the algorithm, but I have no idea what more I can do. Can you guys help me out and share information on whether it's possible to optimize my A* algorithm? Here goes the code:
public LinkedList<Field> getPath(Field start, Field exit) {
    LinkedList<Field> foundPath = new LinkedList<Field>();
    LinkedList<Field> opensList = new LinkedList<Field>();
    LinkedList<Field> closedList = new LinkedList<Field>();
    Hashtable<Field, Integer> gscore = new Hashtable<Field, Integer>();
    Hashtable<Field, Field> cameFrom = new Hashtable<Field, Field>();
    Field x = new Field();
    gscore.put(start, 0);
    opensList.add(start);
    while (!opensList.isEmpty()) {
        int min = -1;
        // searching for the minimal F score
        for (Field f : opensList) {
            if (min == -1) {
                min = gscore.get(f) + getH(f, exit);
                x = f;
            } else {
                int currf = gscore.get(f) + getH(f, exit);
                if (min > currf) {
                    min = currf;
                    x = f;
                }
            }
        }
        if (x == exit) {
            // path reconstruction
            Field curr = exit;
            while (curr != start) {
                foundPath.addFirst(curr);
                curr = cameFrom.get(curr);
            }
            return foundPath;
        }
        opensList.remove(x);
        closedList.add(x);
        for (Field y : x.getNeighbourhood()) {
            if (!(y.getType() == FieldTypes.PAVEMENT || y.getType() == FieldTypes.GRASS)
                    || closedList.contains(y) || y.getStudent() != null) {
                continue;
            }
            int tentGScore = gscore.get(x) + getDist(x, y);
            boolean distIsBetter = false;
            if (!opensList.contains(y)) {
                opensList.add(y);
                distIsBetter = true;
            } else if (tentGScore < gscore.get(y)) {
                distIsBetter = true;
            }
            if (distIsBetter) {
                cameFrom.put(y, x);
                gscore.put(y, tentGScore);
            }
        }
    }
    return foundPath;
}

private int getH(Field start, Field end) {
    int x = start.getX() - end.getX();
    int y = start.getY() - end.getY();
    if (x < 0) {
        x = x * (-1);
    }
    if (y < 0) {
        y = y * (-1);
    }
    return x + y;
}

private int getDist(Field start, Field end) {
    int ret = 0;
    if (end.getType() == FieldTypes.PAVEMENT) {
        ret = 8;
    } else if (start.getX() == end.getX() || start.getY() == end.getY()) {
        ret = 10;
    } else {
        ret = 14;
    }
    return ret;
}
EDIT:
This is what I got from jProfiler:
So getH is the bottleneck, yes? Maybe remembering the H score of each field would be a good idea?

A linked list is not a good data structure for the open set. You have to find the node with the smallest F in it, and you can either search the list in O(n) or insert into sorted position in O(n); either way it's O(n). With a heap it's only O(log n). Updating the G score would remain O(n) (since you have to find the node first), unless you also added a hash table mapping nodes to their indexes in the heap.
A linked list is also not a good data structure for the closed set, where you need a fast contains check, which is O(n) in a linked list. You should use a HashSet for that.
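A minimal sketch of those two substitutions, assuming the Field class and getH from the question, placed inside getPath. This form compares live g-scores, so a node whose g-score improves should simply be re-added to the queue (strictly, the score of an already-queued entry should never change; storing the f-value in the entry itself at insertion time avoids this, at the cost of a small wrapper):
final Map<Field, Integer> gscore = new HashMap<Field, Integer>();
final Set<Field> closed = new HashSet<Field>();   // O(1) contains()
// exit is the (effectively final) parameter of getPath
PriorityQueue<Field> open = new PriorityQueue<Field>(11, new Comparator<Field>() {
    public int compare(Field a, Field b) {        // order by f = g + h
        return (gscore.get(a) + getH(a, exit)) - (gscore.get(b) + getH(b, exit));
    }
});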

You can also optimize the problem by using a different algorithm; the following page illustrates and compares many different algorithms and heuristics:
A*
IDA*
Dijkstra
Jump Point Search
...
http://qiao.github.io/PathFinding.js/visual/

From your implementation it seems that you are using a naive A* algorithm. Use the following approach:
A* is implemented with a priority queue, similar to the plain queue in BFS.
A heuristic function is evaluated at each node to rank its fitness as the next node to visit.
As a node is visited, its unvisited neighbours are added to the queue with their f-values (cost so far plus heuristic) as keys.
Continue until the goal state is dequeued, i.e. until no entry left in the queue beats the value calculated for the goal.
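A minimal sketch of that formulation, reusing the question's Field, getH and getDist. Queue entries snapshot their f-value at insertion time and stale duplicates are skipped when polled; this is the usual lazy alternative to a decrease-key operation:
public LinkedList<Field> getPath(Field start, Field exit) {
    Map<Field, Integer> gscore = new HashMap<Field, Integer>();
    Map<Field, Field> cameFrom = new HashMap<Field, Field>();
    Set<Field> closed = new HashSet<Field>();
    // entries are (field, f-value) pairs; the f-value is fixed at insertion,
    // so later g-score updates cannot corrupt the heap order
    PriorityQueue<Map.Entry<Field, Integer>> open =
            new PriorityQueue<Map.Entry<Field, Integer>>(11,
                    new Comparator<Map.Entry<Field, Integer>>() {
                        public int compare(Map.Entry<Field, Integer> a, Map.Entry<Field, Integer> b) {
                            return a.getValue() - b.getValue();
                        }
                    });
    gscore.put(start, 0);
    open.add(new AbstractMap.SimpleEntry<Field, Integer>(start, getH(start, exit)));
    while (!open.isEmpty()) {
        Field x = open.poll().getKey();
        if (!closed.add(x)) continue;              // stale duplicate, already settled
        if (x.equals(exit)) {                      // goal dequeued: nothing queued can beat it
            LinkedList<Field> path = new LinkedList<Field>();
            for (Field cur = exit; !cur.equals(start); cur = cameFrom.get(cur))
                path.addFirst(cur);
            return path;
        }
        for (Field y : x.getNeighbourhood()) {
            // the question's walkability / occupancy filter, unchanged
            if (!(y.getType() == FieldTypes.PAVEMENT || y.getType() == FieldTypes.GRASS)
                    || closed.contains(y) || y.getStudent() != null)
                continue;
            int tent = gscore.get(x) + getDist(x, y);
            Integer old = gscore.get(y);
            if (old == null || tent < old) {
                gscore.put(y, tent);
                cameFrom.put(y, x);
                open.add(new AbstractMap.SimpleEntry<Field, Integer>(y, tent + getH(y, exit)));
            }
        }
    }
    return new LinkedList<Field>();                // no path found
}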

Find the bottlenecks of your implementation using a profiler (jProfiler, for example, is easy to use).
Use threads in areas where parts of the algorithm can run concurrently.
Tune your JVM settings.
Allocate more RAM.

a) As mentioned, you should use a heap in A*: either a basic binary heap or a pairing heap, which should be theoretically faster.
b) On larger maps it always happens that the algorithm needs some time to run (i.e., when you request a path, it will simply take a while). What can be done is to use some local navigation algorithm (e.g., "run directly to the target") while the path computes.
c) If you have a reasonable number of locations (e.g., in a navmesh) and some time at the start of your program, why not use the Floyd-Warshall algorithm? With it, you can look up where to go next in O(1), as sketched below.
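For (c), a hedged sketch of the precomputation with a next-hop table; dist, next and INF are illustrative names, and dist starts as an adjacency matrix (0 on the diagonal, edge weight or INF elsewhere):
// O(n^3) precomputation, O(n^2) memory, so only viable for a modest number of locations
static final int INF = Integer.MAX_VALUE / 2;   // halved so INF + INF cannot overflow

static int[][] next;                            // next[u][v] = first hop on the path from u to v

static void floydWarshall(int[][] dist) {
    int n = dist.length;
    next = new int[n][n];
    for (int u = 0; u < n; u++)
        for (int v = 0; v < n; v++)
            next[u][v] = (dist[u][v] < INF) ? v : -1;
    for (int k = 0; k < n; k++)
        for (int u = 0; u < n; u++)
            for (int v = 0; v < n; v++)
                if (dist[u][k] + dist[k][v] < dist[u][v]) {
                    dist[u][v] = dist[u][k] + dist[k][v];
                    next[u][v] = next[u][k];    // go toward k first
                }
}
// at runtime: step = next[current][target], repeated until the target is reached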

I built a new pathfinding algorithm called Fast* (or Fastaer). It is a BFS-like A*, but faster and more efficient than A*; its accuracy is about 90% of A*'s. Please see this link for info and a demo:
https://drbendanilloportfolio.wordpress.com/2015/08/14/fastaer-pathfinder/
It has a fast greedy line tracer to make the path straighter.
The demo file has it all. Check Task Manager while running the demo for performance metrics. So far, profiler results for this build show a maximum surviving generation of 4 and low to no GC time.

Related

Really slow Dijkstra algorithm, what am I doing wrong?

I was tasked to run Dijkstra's algorithm on big graphs (25 million nodes). These are represented as 2D arrays:
- each node as a double[] with latitude, longitude and offset (offset meaning the index of the first outgoing edge of that node)
- each edge as an int[] with sourceNodeId, targetNodeId and the weight of that edge
Below is the code; I used an int[] as a tuple for the comparison in the priority queue.
The algorithm works and gets the right results. HOWEVER, it is required to finish in 15 s but takes around 8 min on my laptop. Is my algorithm fundamentally slow? Am I using the wrong data structures? Am I missing something? I tried my best to optimize as far as I saw fit.
Any help or any ideas would be greatly appreciated <3
public static int[] oneToAllArray(double[][] nodeList, int[][] edgeList, int sourceNodeId) {
    int[] distance = new int[nodeList[0].length]; // the array that will be returned
    // the priorityQueue will use arrays of length 2, representing [index, weight]
    // for each node, and order them by their weight
    PriorityQueue<int[]> prioQueue = new PriorityQueue<>((a, b) -> a[1] - b[1]);
    int offset1; // used for determining the number of outgoing edges
    int offset2;
    int newWeight; // declared here so we don't need to declare it repeatedly later
    // currentSourceNode here means the node that will be looked at for OUTGOING edges
    int[] currentSourceNode = {sourceNodeId, 0};
    prioQueue.add(currentSourceNode);
    // at the start we only add the sourceNode, then we start the actual algorithm
    while (!prioQueue.isEmpty()) {
        if (prioQueue.size() % 55 == 2) {
            System.out.println(prioQueue.size());
        }
        currentSourceNode = prioQueue.poll();
        int sourceIndex = currentSourceNode[0];
        if (sourceIndex == nodeList[0].length - 1) {
            offset1 = (int) nodeList[2][sourceIndex];
            offset2 = edgeList[0].length;
        } else {
            offset1 = (int) nodeList[2][sourceIndex];
            offset2 = (int) nodeList[2][sourceIndex + 1];
        }
        // checking every outgoing edge of the current node
        for (int i = offset1; i < offset2; i++) {
            int targetIndex = edgeList[1][i];
            // if the node hasn't been looked at yet, the weight is just the weight
            // of this edge + the distance to the sourceNode
            if (distance[targetIndex] == 0 && targetIndex != sourceNodeId) {
                distance[targetIndex] = distance[sourceIndex] + edgeList[2][i];
                int[] targetArray = {targetIndex, distance[targetIndex]};
                prioQueue.add(targetArray);
            } else if (prioQueue.stream().anyMatch(e -> e[0] == targetIndex)) {
                // the else-if above checks whether this index is already in the prioQueue
                newWeight = distance[sourceIndex] + edgeList[2][i];
                // if the new weight is better, we have to update the distance + the prio queue
                if (newWeight < distance[targetIndex]) {
                    distance[targetIndex] = newWeight;
                    int[] targetArray;
                    targetArray = prioQueue.stream().filter(e -> e[0] == targetIndex).toList().get(0);
                    prioQueue.remove(targetArray);
                    targetArray[1] = newWeight;
                    prioQueue.add(targetArray);
                }
            }
        }
    }
    return distance;
}
For each node that you process, you do a linear scan of the priority queue to see if something is already queued, and a second scan to find all the queued entries when you have to update a distance. Instead, keep a separate multi-set of the things that are in the queue.
This is not a proper Dijkstra implementation.
One of the key elements of Dijkstra's algorithm is that you mark nodes as "visited" when they have been evaluated and prevent looking at them again, because you can't do any better. You are not doing that, so your algorithm does many, many more computations than necessary. The only place a priority queue or sort is required is picking the next node to visit from among the unvisited. You should re-read the algorithm, implement the visitation tracking, and re-formulate.
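A sketch with both fixes applied to the question's data layout (a minimal sketch, untested against the real inputs; note that unreachable nodes end up with Integer.MAX_VALUE here instead of 0, and path lengths are assumed to fit in an int):
public static int[] oneToAllArray(double[][] nodeList, int[][] edgeList, int sourceNodeId) {
    int n = nodeList[0].length;
    int[] distance = new int[n];
    boolean[] settled = new boolean[n];               // Dijkstra's "visited" marking
    java.util.Arrays.fill(distance, Integer.MAX_VALUE);
    distance[sourceNodeId] = 0;
    PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[1] - b[1]);
    pq.add(new int[] {sourceNodeId, 0});
    while (!pq.isEmpty()) {
        int u = pq.poll()[0];
        if (settled[u]) continue;                     // stale entry, skipped in O(1)
        settled[u] = true;
        int off1 = (int) nodeList[2][u];
        int off2 = (u == n - 1) ? edgeList[0].length : (int) nodeList[2][u + 1];
        for (int i = off1; i < off2; i++) {
            int v = edgeList[1][i];
            int w = edgeList[2][i];
            if (!settled[v] && distance[u] + w < distance[v]) {
                distance[v] = distance[u] + w;        // relax the edge
                pq.add(new int[] {v, distance[v]});   // lazy decrease-key: just re-add
            }
        }
    }
    return distance;
}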

Weighted Quick Union Find

I am taking an algorithms course where they go over weighted quick-union find. I am confused about why we are concerned with the size of a tree as opposed to its depth.
When I tried writing out the code, mine looked different from the provided solution.
From my understanding, the size of the tree (the total number of nodes in it) is not as important as the depth of the tree when it comes to the run time of the union function (lg n), because it is the depth that determines how many lookups are needed to get from a node to its root.
Thanks
My code:
public void union(int p, int q) {
    int root_p = root(p);
    int root_q = root(q);
    // If the two trees are not already connected, union them
    if (root_p != root_q) {
        // The two trees aren't connected; check which is deeper
        // and attach the shallower one to the deeper one
        if (depth[root_p] > depth[root_q]) {
            // p is deeper, point q's root to p
            id[root_q] = root_p;
        } else if (depth[root_q] > depth[root_p]) {
            // q is deeper, point p's root to q
            id[root_p] = root_q;
        } else {
            // They are of equal depth; point q's root to p and increment p's depth by 1
            id[root_q] = root_p;
            depth[root_p] += 1;
        }
    }
}
Solution code provided:
public void union(int p, int q) {
    int rootP = find(p);
    int rootQ = find(q);
    if (rootP == rootQ) return;
    // make smaller root point to larger one
    if (size[rootP] < size[rootQ]) {
        parent[rootP] = rootQ;
        size[rootQ] += size[rootP];
    } else {
        parent[rootQ] = rootP;
        size[rootP] += size[rootQ];
    }
    count--;
}
You are correct that the depth (actually the height) is more directly related to the run time, but using either one will result in O(log N) run time for union and find.
The proof is easy: given that when we begin (when all sets are disjoint) every root of height h has at least 2^(h-1) nodes, this invariant is maintained by union and find operations. Therefore, if a root has size n, its height will be at most floor(log2(n)) + 1.
So either one will do. BUT, very soon you will learn about path compression, which makes it difficult to keep track of the height of roots, while the size will still be available. At that point you will be able to use rank, which is kind of like height, or continue to use the size. Again, either one will do, but I find the size easier to reason about, so I always use that.
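A sketch of that rank-based variant with path compression folded into find (a minimal sketch, assuming parent and rank are int arrays initialized to parent[i] = i and rank[i] = 0):
private int find(int p) {
    while (parent[p] != p) {
        parent[p] = parent[parent[p]];  // path halving: point every other node to its grandparent
        p = parent[p];
    }
    return p;
}

public void union(int p, int q) {
    int rootP = find(p);
    int rootQ = find(q);
    if (rootP == rootQ) return;
    // attach the lower-rank tree under the higher-rank root; rank only grows on ties
    if (rank[rootP] < rank[rootQ]) {
        parent[rootP] = rootQ;
    } else if (rank[rootP] > rank[rootQ]) {
        parent[rootQ] = rootP;
    } else {
        parent[rootQ] = rootP;
        rank[rootP]++;
    }
}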

How to quickly insert an element into array with duplicates after all of the equal elements?

I have an ArrayList which contains game objects sorted by their 'Z' (float) position, from lower to higher. I'm not sure ArrayList is the best choice for this, but I have come up with the following solution to find the insertion index in better than linear (worst case) time:
GameObject go = new GameObject();
int index = 0;
int start = 0, end = displayList.size(); // displayList is the ArrayList
while (end - start > 0) {
    index = (start + end) / 2;
    if (go.depthZ >= displayList.get(index).depthZ)
        start = index + 1;
    else if (go.depthZ < displayList.get(index).depthZ)
        end = index - 1;
}
while (index > 0 && go.depthZ < displayList.get(index).depthZ)
    index--;
while (index < displayList.size() && go.depthZ >= displayList.get(index).depthZ)
    index++;
The catch is that the element has to be inserted at a specific place in the chain of elements with equal depthZ: at the end of that chain. That's why I need the two additional while loops after the binary search, which I assume aren't too expensive because the binary search gives me an approximation of the place.
Still, I'm wondering if there's a better solution, or some known algorithm for this problem that I haven't heard of? Maybe a different data structure than ArrayList? At the moment I ignore the worst-case O(n) insertion (inserting at the beginning or middle), because with a plain List I wouldn't be able to find the insertion index with the method above.
You should try to use a balanced search tree (a red-black tree, for example) instead of an array. First, you can try TreeMap, which uses a red-black tree internally, to see if it satisfies your requirements. A possible implementation:
Map<Float, List<Object>> map = new TreeMap<Float, List<Object>>() {
    @Override
    public List<Object> get(Object key) {
        List<Object> list = super.get(key);
        if (list == null) {
            list = new ArrayList<Object>();
            put((Float) key, list);
        }
        return list;
    }
};
Example of usage:
map.get(0.5f).add("hello");
map.get(0.5f).add("world");
map.get(0.6f).add("!");
System.out.println(map);
One way to do it would be a halving search, where the first probe is halfway through your list (list.size()/2), the next covers half of that, and so on. With this exponential method, instead of having to do 4096 comparisons when you have 4096 objects, you only need 12.
Sorry for the complete disregard for technical terms, I am not the best at terms :P
Unless I overlook something, your approach is essentially correct (but there's an error, see below), in the sense that your first while loop tries to compute the insert index such that the element will be placed after all lower OR EQUAL Z values: there is correctly an equals sign in your first test (updating start if it yields TRUE).
Then, of course, there is no need to worry anymore about its position among equals. However, your follow-up whiles destroy this nice situation: the test in the first follow-up while always yields TRUE (one time), so you move back; and then you need the second follow-up while to undo that. So you should remove BOTH follow-up whiles and you're done...
However, there is a little problem with your first while, such that it doesn't always do exactly what it is meant to. I guess the faulty outcomes triggered you to implement the follow-up whiles to "repair" that.
Here's the issue in your while. Suppose you have a try index (start+end)/2 that points to a larger Z, but the one just before it has value Z. You then get into your second test (the else if) and set end to the position where that Z value resides. Finally you wind up at precisely that position.
The remedy is simple: in the else-if assignment, put end = index (without the -1). Final remark: the test in the else if is unnecessary; a plain else is sufficient.
So, all in all you get
GameObject go = new GameObject();
int index = 0;
int start = 0, end = displayList.size(); // displayList is the ArrayList
while (end - start > 0) {
    index = (start + end) / 2;
    if (go.depthZ >= displayList.get(index).depthZ)
        start = index + 1;
    else
        end = index;
}
(I hope I haven't overlooked something trivial...)
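With an index computed this way, the insertion itself is simply displayList.add(index, go); this still pays the O(n) element shift inside the ArrayList that the question already accepts.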
Add 1 to the least significant byte of the key (with carry); binary search for that insert position; and insert it there.
Your binary search has to be so constructed as to end at the leftmost of a sequence of duplicates, but this is trivial given an understanding of the various Binary search algorithms.
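A sketch of that search, using Math.nextUp as the float version of "add 1 with carry" (names taken from the question; this is the classic lower-bound loop, ending at the leftmost index whose depthZ is >= key):
float key = Math.nextUp(go.depthZ);    // smallest float strictly above go.depthZ
int lo = 0, hi = displayList.size();
while (lo < hi) {
    int mid = (lo + hi) >>> 1;         // overflow-safe midpoint
    if (displayList.get(mid).depthZ < key)
        lo = mid + 1;
    else
        hi = mid;
}
displayList.add(lo, go);               // lands just after the run of equal depthZ values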

Alpha-beta move ordering

I have a basic implementation of alpha-beta pruning, but I have no idea how to improve the move ordering. I have read that it can be done with a shallow search, iterative deepening, or by storing the bestMove in a transposition table.
Any suggestions on how to implement one of these improvements in this algorithm?
public double alphaBetaPruning(Board board, int depth, double alpha, double beta, int player) {
    if (depth == 0) {
        return board.evaluateBoard();
    }
    Collection<Move> children = board.generatePossibleMoves(player);
    if (player == 0) {
        for (Move move : children) {
            Board tempBoard = new Board(board);
            tempBoard.makeMove(move);
            int nextPlayer = next(player);
            double result = alphaBetaPruning(tempBoard, depth - 1, alpha, beta, nextPlayer);
            if (result > alpha) {
                alpha = result;
                if (depth == this.origDepth) {
                    this.bestMove = move;
                }
            }
            if (alpha >= beta) {
                break;
            }
        }
        return alpha;
    } else {
        for (Move move : children) {
            Board tempBoard = new Board(board);
            tempBoard.makeMove(move);
            int nextPlayer = next(player);
            double result = alphaBetaPruning(tempBoard, depth - 1, alpha, beta, nextPlayer);
            if (result < beta) {
                beta = result;
                if (depth == this.origDepth) {
                    this.bestMove = move;
                }
            }
            if (beta <= alpha) {
                break;
            }
        }
        return beta;
    }
}

public int next(int player) {
    if (player == 0) {
        return 4;
    } else {
        return 0;
    }
}
Node reordering with a shallow search is trivial: calculate the heuristic value for each child of the state before recursively checking them. Then, sort the values of these states [descending for a max vertex, and ascending for a min vertex], and recursively invoke the algorithm on the sorted list. The idea is: if a state is good at a shallow depth, it is more likely to be good deeper as well, and if that is true, you will get more prunings.
The sorting should be done before this [in both the if and else clauses]:
for (Move move : children) {
Storing moves is also trivial: many states are calculated twice. When you finish calculating any state, store it [with the depth of the calculation! it is important!] in a HashMap. The first thing you do when you start a calculation on a vertex is check if it has already been calculated, and if it has, return the cached value. The idea behind it is that many states are reachable from different paths, so this way you can eliminate redundant calculations.
The changes should be made in the first line of the method [something like if (cache.contains(new State(board,depth,player))) return cache.get(new State(board,depth,player))] [excuse me for the lack of elegance and efficiency, I am just explaining an idea here].
You should also add cache.put(...) before each return statement.
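A sketch of that cache wrapped around the question's method; State is a hypothetical key class (it must override equals and hashCode over board, depth and player). Note that a value produced under a narrowed alpha-beta window is really only a bound, which full transposition tables record with an extra flag:
private final Map<State, Double> cache = new HashMap<State, Double>();

public double cachedAlphaBeta(Board board, int depth, double alpha, double beta, int player) {
    State key = new State(board, depth, player);   // hypothetical key class
    Double cached = cache.get(key);
    if (cached != null) return cached;             // already computed via another path
    double result = alphaBetaPruning(board, depth, alpha, beta, player);
    cache.put(key, result);                        // stored together with its depth
    return result;
}
The recursive calls inside alphaBetaPruning would then go through cachedAlphaBeta instead of calling alphaBetaPruning directly.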
First of all, one has to understand the reasoning behind move ordering in an alpha-beta pruning algorithm. Alpha-beta produces the same result as minimax, but in a lot of cases can do it faster because it does not search through irrelevant branches.
It is not always faster, because it does not guarantee pruning; in fact, in the worst case it will not prune at all, searching absolutely the same tree as minimax while being slower because of the a/b value book-keeping. In the best case (maximal pruning) it allows searching a tree twice as deep in the same time. For a random tree it can search 4/3 times deeper in the same time.
Move ordering can be implemented in a couple of ways:
You have a domain expert who gives you suggestions of which moves are better. For example, in chess, promoting a pawn or capturing a high-value piece with a lower-value piece are on average good moves. In checkers it is better to capture more checkers in a move than fewer, and it is better to create a queen. So your move generation function should return better moves first.
You get a heuristic of how good a move is from evaluating the position one level of depth shallower (your shallow search / iterative deepening): evaluate at depth n-1, sort the moves, and then evaluate at depth n; a sketch of this follows the answer.
The second approach you mentioned has nothing to do with move ordering. It relates to the fact that the evaluation function can be expensive and many positions are evaluated many times. To bypass this, you can store a position's value in a hash table once it has been calculated, and reuse it later.
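A sketch of the shallow-search ordering for the question's code; Board, Move and evaluateBoard are reused from the question, and the depth-1 "shallow search" is just a direct evaluation of each child position:
// java.util.* assumed imported
private List<Move> orderMoves(Board board, Collection<Move> children, final int player) {
    final Map<Move, Double> score = new HashMap<Move, Double>();
    for (Move m : children) {
        Board tempBoard = new Board(board);          // evaluate each child one ply deep
        tempBoard.makeMove(m);
        score.put(m, tempBoard.evaluateBoard());
    }
    List<Move> ordered = new ArrayList<Move>(children);
    Collections.sort(ordered, new Comparator<Move>() {
        public int compare(Move a, Move b) {
            int cmp = Double.compare(score.get(a), score.get(b));
            return player == 0 ? -cmp : cmp;         // descending for the maximizer (player 0)
        }
    });
    return ordered;
}
Both loops in alphaBetaPruning would then iterate over orderMoves(board, children, player) instead of the raw children collection.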

Calculating longest path

I have an n-ary tree which contains key values (integers) in each node. I would like to calculate the minimum depth of the tree. Here is what I have come up with so far:
int min = 0;

private int getMinDepth(Node node, int counter, int temp) {
    if (node == null) {
        // if it is the first branch, record min;
        // otherwise compare min to this value
        // and record the minimum value
        if (counter == 0) {
            temp = min;
        } else {
            temp = Math.min(temp, min);
            min = 0;
        }
        counter++; // counter should increment by 1 when at the end of a branch
        return temp;
    }
    min++;
    getMinDepth(node.q1, counter, min);
    getMinDepth(node.q2, counter, min);
    getMinDepth(node.q3, counter, min);
    getMinDepth(node.q4, counter, min);
    return temp;
}
The code is called like so:
int minDepth = getMinDepth(root, 0, 0);
The idea is that when the traversal goes down the first branch (the branch number is tracked by counter), we set the temp holder to store that branch's depth. From then on, we compare each next branch's length against it, and if it is smaller, make temp equal to that length. For some reason counter is not incrementing at all and always stays at zero. Anyone know what I am doing wrong?
I think you're better off doing a breadth-first search. Your current implementation tries to be depth-first, which means it could end up exploring the whole tree if the branches happen to be in an awkward order.
To do a breadth-first search, you need a queue (an ArrayDeque is probably the right choice). You'll then need a little class that holds a node and a depth. The algorithm goes a little something like this:
Queue<NodeWithDepth> q = new ArrayDeque<NodeWithDepth>();
q.add(new NodeWithDepth(root, 1));
while (true) {
    NodeWithDepth nwd = q.remove();
    if (hasNoChildren(nwd.node())) return nwd.depth();
    if (nwd.node().q1 != null) q.add(new NodeWithDepth(nwd.node().q1, nwd.depth() + 1));
    if (nwd.node().q2 != null) q.add(new NodeWithDepth(nwd.node().q2, nwd.depth() + 1));
    if (nwd.node().q3 != null) q.add(new NodeWithDepth(nwd.node().q3, nwd.depth() + 1));
    if (nwd.node().q4 != null) q.add(new NodeWithDepth(nwd.node().q4, nwd.depth() + 1));
}
This looks like it uses more memory than a depth-first search, but when you consider that stack frames consume memory, and that this will explore less of the tree than a depth-first search, you'll see that's not the case. Probably.
Anyway, see how you get on with it.
You are passing the counter variable by value, not by reference. Thus, any changes made to it are local to the current stack frame and are lost as soon as the function returns and that frame is popped off the stack. Java doesn't support passing primitives (or anything, really) by reference, so you'd either have to pass it as a single-element array or wrap it in an object to get the behavior you're looking for.
Here's a simpler (untested) version that avoids the need to pass a variable by reference:
private int getMinDepth(QuadTreeNode node) {
    if (node == null)
        return 0;
    return 1 + Math.min(
            Math.min(getMinDepth(node.q1), getMinDepth(node.q2)),
            Math.min(getMinDepth(node.q3), getMinDepth(node.q4)));
}
Both your version and the one above are inefficient because they search the entire tree, when really you only need to search down to the shallowest depth. To do it efficiently, use a queue to do a breadth-first search like Tom recommended. Note, however, that the trade-off for this extra speed is the extra memory used by the queue.
Edit:
I decided to go ahead and write a breadth-first search version that doesn't assume you have a class that keeps track of the nodes' depths (like Tom's NodeWithDepth). Once again, I haven't tested it or even compiled it... but I think it should be enough to get you going even if it doesn't work right out of the box. This version should perform faster on large, complex trees, but also uses more memory to store the queue.
private int getMinDepth(QuadTreeNode node) {
    // Handle the empty tree case
    if (node == null)
        return 0;
    // Perform a breadth-first search for the shallowest null child
    // while keeping track of how deep into the tree we are.
    LinkedList<QuadTreeNode> queue = new LinkedList<QuadTreeNode>();
    queue.addLast(node);
    int currentCountTilNextDepth = 1;
    int nextCountTilNextDepth = 0;
    int depth = 1;
    while (!queue.isEmpty()) {
        // Check if we're transitioning to the next depth
        if (currentCountTilNextDepth <= 0) {
            currentCountTilNextDepth = nextCountTilNextDepth;
            nextCountTilNextDepth = 0;
            depth++;
        }
        // If this node has a null child, we're done
        QuadTreeNode current = queue.removeFirst(); // renamed so it doesn't shadow the parameter
        if (current.q1 == null || current.q2 == null || current.q3 == null || current.q4 == null)
            break;
        // If it didn't have a null child, add all the children to the queue
        queue.addLast(current.q1);
        queue.addLast(current.q2);
        queue.addLast(current.q3);
        queue.addLast(current.q4);
        // Housekeeping to keep track of when we need to increment our depth
        nextCountTilNextDepth += 4;
        currentCountTilNextDepth--;
    }
    // Return the depth of the shallowest node that had a null child
    return depth;
}
Counter always stays at zero because primitives in Java are passed by value. This means that if you overwrite the value inside a function call, the caller won't see the change. Or, if you're familiar with C++ notation, it's foo(int x) instead of foo(int& x).
One solution would be to pass a mutable holder object, such as a single-element int[] or an AtomicInteger (note that Integer itself is immutable, and Java passes object references by value as well).
Since you're interested in the minimum depth, a breadth-first solution will work just fine, but you may get memory problems for large trees.
If you assume that the tree may become rather large, an IDS solution would be best. This way you get the time complexity of the breadth-first variant with the space complexity of a depth-first solution.
Here's a small example, since IDS isn't as well known as its brethren (though much more useful for serious stuff!). I assume that every node has a list of its children, for simplicity (and since it's more general).
public static <T> int getMinDepth(Node<T> root) {
    int depth = 0;
    while (!getMinDepth(root, depth)) depth++;
    return depth;
}

private static <T> boolean getMinDepth(Node<T> node, int depth) {
    if (depth == 0)
        return node.children.isEmpty();
    for (Node<T> child : node.children)
        if (getMinDepth(child, depth - 1)) return true;
    return false;
}
For a short explanation see http://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
