How to improve the performance of my A* path finder? - java

So basically I coded an A* pathfinder that can find paths through obstacles and move diagonally. I implemented the pseudocode from Link into real code and used a binary heap to add and delete items from the open list.
Using a binary heap led to a significant performance boost, about 500 times faster than the insertion sort I used before.
The problem is that it still takes around 1.5 million nanoseconds on average, which is about 0.0015 of a second.
So here is the question. My plan is to make a tower defense game where the pathfinding for each mob needs to update every time I add a tower to the map. If I have a maximum of around 50 mobs on the map, updating all their paths takes about 0.0015 * 50 = 0.075 seconds. The game ticks (all the in-game stuff updates) every 1/60 of a second, which is 0.016 seconds, so updating the paths takes longer than a tick, which will lead to massive lag. How should I go about this? Do I need a better algorithm for sorting the open list, or should I somehow divide the pathfinding tasks so that each tick only does X pathfinding tasks rather than all of them?

Rather than searching from each enemy to the checkpoint, search outwards from the checkpoint to every enemy at once. This way, rather than doing 50 searches, you only need to do one.
More specifically, just do a breadth-first search (or Dijkstra's, if your graph is weighted) from the player outwards, until every enemy has been reached.
You could alter this strategy to work with A* by changing your heuristic EstimatedDistanceToEnd (aka h(x)) to be the minimum estimate to any enemy, but with a lot of enemies this may end up being slower than the simpler option. The heuristic must be consistent for this to work.
Additionally, make sure you are using the correct tie-breaking criteria.
Also, and most importantly, remember that you don't need to run your pathfinder every single frame for most games - often you can get away with only once or twice a second, or even less, depending on the game.
If that is still too slow, you could look into using D* lite to reuse information between subsequent searches. But, I would bet money that running a single breadth-first search will be more than fast enough.
(copied from my answer to a similar question on gamedev)
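The "search outwards from the checkpoint" idea can be sketched as a single BFS that fills a distance field over the whole grid; every mob then just steps to a neighbour with a smaller value. The grid encoding, 4-way movement and class name here are assumptions for illustration, not the asker's actual code.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FlowField {
    // walls[y][x] == true means the cell is blocked.
    public static int[][] compute(boolean[][] walls, int goalX, int goalY) {
        int h = walls.length, w = walls[0].length;
        int[][] dist = new int[h][w];
        for (int[] row : dist) java.util.Arrays.fill(row, Integer.MAX_VALUE);
        Queue<int[]> queue = new ArrayDeque<>();
        dist[goalY][goalX] = 0;
        queue.add(new int[]{goalX, goalY});
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (!queue.isEmpty()) {
            int[] cell = queue.poll();
            for (int i = 0; i < 4; i++) {
                int nx = cell[0] + dx[i], ny = cell[1] + dy[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (walls[ny][nx] || dist[ny][nx] != Integer.MAX_VALUE) continue;
                dist[ny][nx] = dist[cell[1]][cell[0]] + 1;
                queue.add(new int[]{nx, ny});
            }
        }
        return dist; // each mob moves to any neighbour with a smaller value
    }
}
```

One BFS like this replaces 50 separate A* runs, and adding a tower only requires recomputing the single field.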

Have you considered the Floyd-Warshall algorithm?
Essentially, A* is for path-finding from a single source to one or more destinations. However, in tower defense (depending on your rules of course), it is about multiple sources navigating around a map.
So for this, Floyd's algorithm seems more optimal. However, you could have your A* algorithm find paths for unit groups instead of individual units, which should optimize your calculation times.

Presumably, you can back-search from the exit towards all the creeps, so you need to explore your maze only once.

Related

Connect4 on Android, timing issue

I'm developing a simple Connect4 game in Android.
Currently I'm using a minimax algorithm with alpha-beta pruning and bit-board state representation so the search is very effective and fast.
The skill is set by setting the maximum depth the algorithm should reach during its DFS search inside the game tree.
I noticed that the time required to choose a move depends on how far we are into the game: at the beginning it takes more time (as there are many possibilities to explore), in the mid-game it takes a reasonable amount of time, and near the end it is very fast.
My problem is that at a given skill level the user has to wait too long on the first/second/third moves. I'd like to speed up the opening, but I suspect the right amount of speed-up depends on the hardware itself.
Can I set a timeout for the thread running the DFS minimax?
The simplest way to circumvent this issue is to use an opening book for the first few moves. An opening book is a set of predetermined moves for a given scenario. Since there are relatively few possible board states for the opening moves, you can easily compile a database of all possible moves for the first three turns and call upon it instead of actually doing the search. Thus you no longer require a timeout, and you speed up the search with zero cost to accuracy.
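A minimal sketch of the opening-book idea, assuming the board state can be keyed by the sequence of columns played so far; the entries themselves are placeholders, not verified Connect4 theory.

```java
import java.util.HashMap;
import java.util.Map;

public class OpeningBook {
    private final Map<String, Integer> book = new HashMap<>();

    public OpeningBook() {
        // key = sequence of columns played so far, value = column to play next
        book.put("", 3);   // first move: take the centre column
        book.put("3", 3);  // example replies (placeholders, not real theory)
        book.put("33", 2);
    }

    // Returns the book move, or -1 to signal "fall back to minimax search".
    public int lookup(String movesSoFar) {
        return book.getOrDefault(movesSoFar, -1);
    }
}
```

In the game loop you would try `lookup` first and only run the (slow) search when it returns -1.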

Modifying AStar algorithm to connect gates in logic scheme

I've been working on a logic scheme simulator for a while now. The whole thing is pretty much working but there's an issue that I can't seem to solve.
Connections in a logic scheme should be vertical and horizontal. They should avoid logic gates and have as few turns as possible (avoiding the staircase effect). Connections can also intersect but they can never overlap.
I used AStar algorithm to find the shortest and the nicest path between two logic gates. The heuristics for pathfinding is Manhattan distance while the cost between the two nodes is a dimension of a single square on the canvas.
The whole issue arises between two conditions, "min turns" and "no overlap". I solved the "min turns" issue by punishing the algorithm with double the normal cost when it tries to make a turn. That causes the algorithm to postpone all turns to the latest possible moment, which leads to the situation in my next picture.
My condition of no overlapping is forbidding the second input from connecting to the second free input of the AND gate (note: simulator is made for Android and the number of inputs is variable, that's why inputs are close to each other). A situation like this is bound to happen sooner or later but I would like to make it as late as possible.
I have tried to:
introduce an "int turnNumber" to count how many turns have been made so far and punish paths that make too many turns (the algorithm takes too long to complete, sometimes very, very long)
calculate the Manhattan distance from the start to the end, divide that number by two, and then remove the "double cost" punishment from nodes whose heuristics are near that middle (for some situations the algorithm fell into an infinite loop)
Are there any ideas on how to redistribute turns in the middle so as many as possible connections can be made between logic gates while still satisfying the "min turn" condition?
In case you'd like to see the code: https://gist.github.com/linaran/c8c493bb54cfca764aeb
Note: The canvas that I'm working with isn't bounded.
EDIT: method for calculating cost and heuristics are -- "calculateCost" and "manhattan"
1. You wrote that you already tried swapping the start/end positions,
but my gut tells me that if you compute the path from In to Out
then the turn is near the Output, which in most cases is OK because most gates have a single output.
2. Anyway I would change your turn cost policy a bit:
let P0,P1 be the path endpoints
let Pm=(P0+P1)/2 be the mid point
so you want the turn be as close to the mid point as it can be
so change the turn cost tc to be dependent on the distance from Pm
tc(x,y)=const0+const1*|(x,y)-Pm|
that should do the trick (but I didn't test this so handle with prejudice)
it could create some weird patterns, so try Euclidean and Manhattan distances
and choose the one with better results
3. Another approach is
fill the map from both start and end points at once
And stop when they meet
you need to distinguish between costs from start and end point
so either use negative values for the start and positive values for the end point origin
or allocate ranges for the two (each range should be larger than the map size xs*ys in cells/pixels)
or add some flag value to the cost inside map cell
4. you can mix 1. and 2. together
so compute Pm
find nearest free point to Pm and let it be P2
and solve path P0->P2 and P1->P2
so the turns will be near P2 which is near Pm which is desired
[notes]
the 3rd approach is the most robust one and should lead to the desired results
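Point 2 above can be sketched as a small cost function; the constant values and the choice of Euclidean distance are assumptions to tune, not tested settings.

```java
public class TurnCost {
    static final double CONST0 = 2.0; // base penalty for any turn (assumed value)
    static final double CONST1 = 0.5; // extra penalty per cell away from Pm (assumed value)

    // Penalty for turning at (x, y), given path endpoints (x0, y0) and (x1, y1):
    // tc(x, y) = const0 + const1 * |(x, y) - Pm|
    public static double turnCost(int x, int y, int x0, int y0, int x1, int y1) {
        double mx = (x0 + x1) / 2.0, my = (y0 + y1) / 2.0; // midpoint Pm
        double d = Math.hypot(x - mx, y - my);             // Euclidean |(x,y) - Pm|
        return CONST0 + CONST1 * d;
    }
}
```

Turns right at the midpoint cost only the base penalty, and the penalty grows toward the endpoints, which is exactly the redistribution asked for.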

Minimising AI usage in a game

I have been working on a 2D top-down shooter game for a while. I've implemented most of the game and wrote the engine from scratch in JOGL, but I ran into a small problem and would like to get other people's views on how best to approach it. I have creeps spawning at random locations on the map, and each of these creeps uses A* pathfinding. It has been optimized to minimize unnecessary checks, but the maps are massive, anything from 10x10 to 200x200 tiles, and the only thing slowing down the game significantly is the AI. I've also tried a distance-based solution where the creeps idle until I am within a certain range, but that still slows down the game a lot because a lot of creeps are spawned. Any advice would be appreciated.
There are a number of ways of speeding up your code.
First - there are many modifications of the A* algorithm, which may be used, like:
Hierarchical A*, which is often used method in games, where the map is analyzed on many resolution levels, from "general planning" to the "local path search" http://aigamedev.com/open/review/near-optimal-hierarchical-pathfinding/
Jump Point Search A*, which dramatically speeds up A* for maps with lots of open spaces (like RPG games) http://gamedev.tutsplus.com/tutorials/implementation/speed-up-a-star-pathfinding-with-the-jump-point-search-algorithm/
Other modifications can be more application specific, if your creeps are searching a path to the player (there is one goal for all creeps), then you can change your search to one of following algorithms:
calculate the distance from the player to each point in the map using Dijkstra's algorithm; for 200x200 it will be very quick (40,000 vertices with an O(n log n) algorithm), and then simply move each creep to any adjacent point with a smaller distance to the player than its current one
run an A* search from the player to any creep (the one with the lowest id, for example); once the path is found, change the goal to the next creep but do not reset the algorithm itself, letting it reuse the already computed paths and distances (as they are already optimal paths from the player); obviously, if during execution you encounter another creep before your goal, you simply record it (the found path is optimal)
Another possible modification, which can be applied if your map is somehow specific (contains doors/entrances to some parts of it) is to place triggers, which "enable" creeps AI. This is O(1) solution, but requires a specific type of map.
And one final idea would be to implement some suboptimal solutions, by for example:
First, calculate A* for each creep
If the distance to the player is smaller than some threshold value T, then in the next iteration recalculate the path, so there is no lag
otherwise - follow your path for at least 10-50 iterations before another path search
There are countless more optimizations, but we would need more details regarding your game as well as time you wish to spend on those optimizations.
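The first "one goal for all creeps" suggestion could look roughly like this: a single Dijkstra run from the player fills a cost field, and each creep then steps to its cheapest neighbour. The grid encoding, terrain weights and 4-way movement are assumptions for illustration.

```java
import java.util.PriorityQueue;

public class CreepNavigation {
    // cost[y][x] = cost of entering that cell; Integer.MAX_VALUE = wall.
    public static int[][] distanceField(int[][] cost, int px, int py) {
        int h = cost.length, w = cost[0].length;
        int[][] dist = new int[h][w];
        for (int[] row : dist) java.util.Arrays.fill(row, Integer.MAX_VALUE);
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[2] - b[2]);
        dist[py][px] = 0;
        pq.add(new int[]{px, py, 0});
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            if (cur[2] > dist[cur[1]][cur[0]]) continue; // skip stale queue entries
            for (int i = 0; i < 4; i++) {
                int nx = cur[0] + dx[i], ny = cur[1] + dy[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (cost[ny][nx] == Integer.MAX_VALUE) continue;
                int nd = cur[2] + cost[ny][nx];
                if (nd < dist[ny][nx]) {
                    dist[ny][nx] = nd;
                    pq.add(new int[]{nx, ny, nd});
                }
            }
        }
        return dist; // each creep moves to the neighbour with the smallest value
    }
}
```

The field only needs recomputing when the player moves to a new tile, not per creep per frame.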

Guide on how to solve the Rush Hour puzzle in Java obtaining the lowest number of moves using A*. Need help on how to start and steps to follow

Firstly, I have read every thread that I could find on stackoverflow or other internet searching. I did learn about different aspects, but it isn't exactly what I need.
I need to solve a Rush Hour puzzle of size no larger than 8 X 8 tiles.
As I have stated in title I want to use A*, as a heuristic for it I was going to use :
the number of cars blocking the red car's (the one that needs to be taken out) path should decrease or stay the same.
I have read the BFS solution for Rush hour.
I don't know how to start or better said, what steps to follow.
In case anyone needs any explanation, here is the link to the task :
http://www.cs.princeton.edu/courses/archive/fall04/cos402/assignments/rushhour/index.html
So far, from what I have read (especially from polygenelubricants's answer), I need to generate a graph of states, including the initial one and the "success" one, and determine the minimum path from initial to final using the A* algorithm?
Should I create a backtracking function to generate all the possible ( valid ) moves ?
As I have previously stated, I need help on outlining the steps I need to take rather than having issues with the implementation.
Edit: Do I need to generate all the possible moves so I can convert them into graph nodes? Isn't that time consuming? I need to solve an 8x8 puzzle in less than 10 seconds.
A* is an algorithm for searching graphs. Graphs consist of nodes and edges. So we need to represent your problem as a graph.
We can call each possible state of the puzzle a node. Two nodes have an edge between them if they can be reached from each other using exactly one move.
Now we need a start node and an end node. Which puzzle-states would represent our start- and end-nodes?
Finally, A* requires one more thing: an admissible distance heuristic - a guess at how many moves the puzzle will take to complete. The only restriction on this guess is that it must never overestimate the actual number of moves, so what we're really looking for is a minimum bound. Setting the heuristic to 0 would satisfy this, but if we can come up with a better minimum bound, the algorithm will run faster. Can you come up with a minimum bound on the number of moves the puzzle will take to complete?
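One valid minimum bound is the heuristic the asker already proposed: each distinct car blocking the red car's row must move at least once, and so must the red car itself. A sketch, with an assumed board encoding ('.' empty, 'R' the red car, other letters for other cars):

```java
public class RushHourHeuristic {
    // board is a grid of chars; the red car moves right along row redRow
    // toward the exit at the right edge.
    public static int blockingCars(char[][] board, int redRow) {
        int col = board[redRow].length - 1;
        java.util.Set<Character> blockers = new java.util.HashSet<>();
        // scan right-to-left until we hit the red car's front
        while (col >= 0 && board[redRow][col] != 'R') {
            char c = board[redRow][col];
            if (c != '.') blockers.add(c); // each distinct blocker must move >= 1 time
            col--;
        }
        // every blocker needs at least one move, plus one move for the red car
        // (a solved position should return 0; check the goal before calling this)
        return blockers.size() + 1;
    }
}
```

Because this never overestimates the remaining moves, A* with it still returns a shortest solution.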

java draughts AI (multithreaded)

This is my first question here, if I did something wrong, tell me...
I'm currently making a draughts game in Java. In fact everything works except the AI.
The AI is at the moment single-threaded, using minimax and alpha-beta pruning. This code works, I think; it's just very slow, so I can only go 5 levels deep into my game tree.
I have a function that receives my main board, a depth (starting at 0) and a maxdepth. At this maxdepth it stops, returns the value (-1, 1 or 0) of the player with the most pieces on the board, and ends the recursive call.
If maxdepth isn't reached yet, I calculate all the possible moves, I execute them one by one, storing my changes to the mainboard in someway.
I also use alpha-beta pruning, e.g. when I found a move that can make the player win I don't bother about the next possible moves.
I calculate the next set of moves from that mainboard state recursively. I undo those changes (from point 2) when coming out of the recursive call. I store the values returned by those recursive calls and use minimax on those.
That's the situation, now I have some questions.
I'd like to go deeper into my game tree, thus I have to diminish the time it takes to calculate moves.
Is it normal that the values of the possible moves of the AI (e.g. the moves that the AI can choose between) are always 0? Or will this change if I can go deeper into the recursion? Since at this moment I can only go 5 deep (maxdepth) into my recursion because otherwise it takes way too long.
I don't know if it's useful, but how can I convert this recursion into a multithreaded recursion? I think this could divide the working time by some factor...
Can someone help me with this please?
1. Is it normal that the values of the possible moves of the AI (e.g. the moves that the AI can choose between) are always 0?
Sounds strange to me. If the number of possible moves is 0, then that player can't play his turn. This shouldn't be very common, or have I misunderstood something?
If the value you're referring to represents the "score" of that move, then obviously "always 0" would indicate that all move are equally good, which obviously doesn't make a very good AI algorithm.
2. I don't know if it's usefull, but how I can convert this recursion into a multithreaded recursion. I think this can divide the working time by some value...
I'm sure it would be very useful, especially considering that most machines have several cores these days.
What makes it complicated is your "try a move, record it, undo it, try next move" approach. This indicates that you're working with a mutable data structure, which makes it extremely complicated to parallelize the algorithm.
If I were you, I would let the board / game state be represented by an immutable data structure. You could then let each recursive call be treated as a separate task, and use a pool of threads to process them. You would get close to maximum utilization of the CPU(s) and at the same time simplify the code considerably (by removing the whole restore-to-previous-state code).
Assuming you do indeed have several cores on your machine, this could potentially allow you to go deeper in the tree.
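The immutable-state suggestion can be sketched like this: because each move produces a new board instead of mutating a shared one, the root moves can be scored in parallel (here via a parallel stream over the common fork/join pool) with no undo logic. The Board class is a toy stand-in, not the asker's draughts board.

```java
import java.util.List;

public class ParallelMinimax {
    // Toy immutable "board": just a value and a branching rule, for illustration.
    public static final class Board {
        final int value;
        public Board(int value) { this.value = value; }
        List<Board> moves() {
            if (Math.abs(value) >= 4) return List.of(); // leaf position
            return List.of(new Board(value + 1), new Board(value - 1));
        }
    }

    static int minimax(Board b, int depth, boolean maximizing) {
        List<Board> moves = b.moves();
        if (depth == 0 || moves.isEmpty()) return b.value; // static evaluation
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Board next : moves) {
            int score = minimax(next, depth - 1, !maximizing);
            best = maximizing ? Math.max(best, score) : Math.min(best, score);
        }
        return best;
    }

    // Score each root move as its own task; no shared mutable state, no undo.
    public static int bestScore(Board root, int depth) {
        return root.moves().parallelStream()
                .mapToInt(m -> minimax(m, depth - 1, false)) // opponent moves next
                .max()
                .orElse(root.value);
    }
}
```

One caveat: naive root-splitting like this weakens alpha-beta pruning, since the subtrees can no longer share cutoff bounds; techniques like Young Brothers Wait address that, at the cost of more complexity.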
I would strongly recommend reading this book:
One Jump Ahead: Computer Perfection At Checkers
It will give you a deep history about computer AI in the game of Checkers and will probably given you some help with your evaluation function.
Instead of having an evaluation function that just gives 1/0/-1 for differing pieces, give a score of 100 for every regular piece and 200 for a king. Then give bonuses for piece structures. For instance, if my pieces form a safe structure that can't be captured, then I get a bonus. If my piece is all alone in the middle of the board, then I get a negative bonus. It is this richness of features for piece configurations that will allow your program to play well. The final score is the difference in the evaluation for both players.
Also, you shouldn't stop your search at a uniform depth. A quiescence search extends search until the board is "quiet". In the case of Checkers, this means that there are no forced captures on the board. If you don't do this, your program will play extremely poorly.
As others have suggested, transposition tables will do a great job of reducing the size of your search tree, although the program will run slightly slower. I would also recommend the history heuristic, which is easy to program and will greatly improve the ordering of moves in the tree. (Google history heuristic for more information on this.)
Finally, the representation of your board can make a big difference. Fast implementations of search do not make copies of the board each time a move is applied, instead they try to quickly modify the board to apply and undo moves.
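A minimal sketch of the evaluation scheme described above (100 per man, 200 per king, final score as the difference between the two players); the board encoding is an assumption, and the structural bonuses are left as comments to fill in.

```java
public class CheckersEval {
    static final int MAN = 100, KING = 200;

    // 'm'/'k' = our man/king, 'M'/'K' = opponent's, '.' = empty (assumed encoding)
    public static int evaluate(char[][] board) {
        int score = 0;
        for (char[] row : board) {
            for (char c : row) {
                switch (c) {
                    case 'm': score += MAN;  break;
                    case 'k': score += KING; break;
                    case 'M': score -= MAN;  break;
                    case 'K': score -= KING; break;
                    default: break; // empty square
                }
            }
        }
        // structural terms would go here: bonus for uncapturable formations,
        // penalty for a lone piece in the middle of the board, etc.
        return score;
    }
}
```

With 100-point units, a bonus of, say, 15 points is worth a meaningful fraction of a man, which is why the material values are scaled up from 1/0/-1.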
(I assume by draughts you mean what we would call checkers here in the States.)
I'm not sure if I understand your scoring system inside the game tree. Are you scoring by saying, "Position scores 1 point if player has more pieces than the opponent, -1 point is player has fewer pieces, 0 points if they have the same number of pieces?"
If so, then your algorithm might just be capture-averse for the first five moves, or things are working out so that all captures are balanced. I'm not deeply familiar with checkers, but it doesn't seem impossible that this is so for only five moves into the game. And if it's only 5 plies (where a ply is one player's move, rather than a complete set of opposing moves), maybe it's not unusual at all.
You might want to test this by feeding in a board position where you know absolutely the right answer, perhaps something with only two checkers on the board with one in a position to capture.
As a matter of general principle, though, the board evaluation function doesn't make a lot of sense: it ignores the difference between a piece and a crowned piece, and it treats a three-piece advantage the same as a one-piece advantage.
