Minimising AI usage in a game - Java

I have been working on a 2D top-down shooter game for a while. I've implemented most of the game and wrote the engine from scratch in JOGL, but I ran into a small problem and would like to get other people's views on how best to approach it. I have creeps spawning at random locations on the map, and each of these creeps uses A* pathfinding. It has been optimized to minimize unnecessary checks, but the maps are massive (anything from 10x10 to 200x200 tiles) and the only thing slowing down the game significantly is the AI. I've also tried a distance-based solution where the creeps idle until I am within a certain range, but that still slows the game down a lot because so many creeps are spawned. Any advice would be appreciated.

There are a number of ways to speed up your code.
First, there are many modifications of the A* algorithm that can be used, like:
Hierarchical A*, an often-used method in games, where the map is analyzed at many resolution levels, from "general planning" down to "local path search": http://aigamedev.com/open/review/near-optimal-hierarchical-pathfinding/
Jump Point Search A*, which dramatically speeds up A* on maps with lots of open space (like RPG games): http://gamedev.tutsplus.com/tutorials/implementation/speed-up-a-star-pathfinding-with-the-jump-point-search-algorithm/
Other modifications can be more application-specific. If your creeps are searching for a path to the player (there is one goal for all creeps), then you can change your search to one of the following:
calculate the distance from the player to each point on the map using Dijkstra's algorithm (for a 200x200 map this will be very quick: 40,000 vertices with an O(n log n) algorithm), and simply move each creep to any adjacent point with a smaller distance to the player than its current one; a sketch follows after these two options
run an A* search from the player to any creep (the one with the lowest id, for example); once the path is found, change the goal to the next creep but do not reset the algorithm itself, letting it reuse the already computed paths and distances (they are already optimal paths from the player). Obviously, if during execution you encounter a creep other than your goal, you simply record it (the found path is optimal).
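A minimal sketch of the first of these two options, assuming a boolean walkable grid with the player at (playerX, playerY); those names are placeholders for your own map data. On a uniform-cost tile grid, Dijkstra degenerates into a plain breadth-first flood fill:

    // Flood-fill distances outward from the player; each creep then steps
    // to any neighbour whose value is smaller than its own tile's value.
    static int[][] distanceField(boolean[][] walkable, int playerX, int playerY) {
        int w = walkable.length, h = walkable[0].length;
        int[][] dist = new int[w][h];
        for (int[] row : dist) java.util.Arrays.fill(row, Integer.MAX_VALUE);
        java.util.ArrayDeque<int[]> queue = new java.util.ArrayDeque<>();
        dist[playerX][playerY] = 0;
        queue.add(new int[] { playerX, playerY });
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            for (int[] d : dirs) {
                int nx = p[0] + d[0], ny = p[1] + d[1];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h
                        && walkable[nx][ny] && dist[nx][ny] == Integer.MAX_VALUE) {
                    dist[nx][ny] = dist[p[0]][p[1]] + 1;
                    queue.add(new int[] { nx, ny });
                }
            }
        }
        return dist;
    }

One field computed per frame (or even less often) then serves every creep at once, regardless of how many are spawned.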
Another possible modification, which can be applied if your map is somehow specific (contains doors/entrances to some parts of it), is to place triggers that "enable" the creeps' AI. This is an O(1) solution, but it requires a specific type of map.
And one final idea would be to implement some suboptimal solutions (sketched after this list), for example:
First, calculate A* for each creep
If the distance to the player is smaller than some threshold value T, then recalculate the path in the next iteration, so there is no lag
otherwise, follow your current path for at least 10-50 iterations before another path search
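A hedged sketch of that throttling idea; Creep, its fields and aStar(...) are stand-ins for whatever your engine already has, and the constants are illustrative:

    static final double T = 8.0;             // threshold distance in tiles (tune it)
    static final int REPLAN_INTERVAL = 30;   // the 10-50 iterations suggested above

    void updatePath(Creep c, double playerX, double playerY) {
        boolean near = Math.hypot(c.x - playerX, c.y - playerY) < T;
        if (near || ++c.ticksSinceSearch >= REPLAN_INTERVAL) {
            c.path = aStar(c.x, c.y, playerX, playerY);  // your existing search
            c.ticksSinceSearch = 0;
        }
        // otherwise the creep keeps following its slightly stale path
    }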
There are countless more optimizations, but we would need more details about your game, as well as the time you wish to spend on them.

Related

Connect4 on Android, timing issue

I'm developing a simple Connect4 game in Android.
Currently I'm using a minimax algorithm with alpha-beta pruning and bit-board state representation so the search is very effective and fast.
The skill level is set by the maximum depth the algorithm is allowed to reach during its DFS search of the game tree.
I noticed that the time required to choose a move depends on how far into the game we are: at the beginning it takes more time (as there are many possibilities to explore), in mid-game it takes a reasonable amount of time, and near the end it is very fast.
My problem is that if I set a given skill, the user has to wait too long on the first/second/third moves. I'd like to speed up the opening, but I suspect that how I should implement the speed-up depends on the hardware itself.
Can I set a timeout for the thread running the DFS minimax?
The simplest way to circumvent this issue is to use an opening book for the first few moves. An opening book is a set of predetermined moves for a given scenario. Since there are relatively few possible board states for the opening moves, you can easily compile a database of all possible moves for the first three turns and call upon it instead of actually doing the search. Thus you no longer require a timeout, and you speed up the search with zero cost to accuracy.
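A minimal sketch of the lookup, assuming your bitboard representation already yields a long key per position; the book contents (filled offline from a full-depth search) and alphaBetaSearch are placeholders, not real API:

    // Maps a canonical position key to a precomputed best column.
    static final java.util.Map<Long, Integer> BOOK = new java.util.HashMap<>();
    // filled offline, e.g. BOOK.put(positionKey, bestColumn);

    int chooseMove(long positionKey, int movesPlayed) {
        if (movesPlayed < 6) {                   // first three turns per player
            Integer bookMove = BOOK.get(positionKey);
            if (bookMove != null) return bookMove;
        }
        return alphaBetaSearch(positionKey);     // your existing minimax
    }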

How to improve the perfomance of my A* path finder?

So basically I coded an A* pathfinder that can find paths through obstacles and move diagonally. I basically implemented the pseudocode from Link into real code and also used a binary heap to add and delete items from the open list.
Using a binary heap led to a significant performance boost, about 500 times faster than the insertion sort I used before.
The problem is that it still takes around 1.5 million nanoseconds on average, which is about 0.0015 of a second.
So here's the question. My plan is to make a tower defense game where the pathfinding for each mob needs to update every time I add a tower to the map. If I have a maximum of around 50 mobs on the map, it will take around 0.0015 * 50 = 0.075 seconds to update the paths for all of them. The game ticks (all the in-game stuff updates) every 1/60 of a second, which is about 0.016 seconds, so it takes longer to update the paths than it takes to tick, which will lead to massive lag. So how should I go about this? Do I need a better algorithm for sorting the open list, or should I somehow divide the pathfinding tasks so that each tick only does X pathfinding tasks instead of all of them?
Rather than searching from each enemy to the checkpoint, search outwards from the checkpoint to every enemy at once. This way, rather than doing 50 searches, you only need to do one.
More specifically, just do a breadth-first search (or Dijkstra's, if your graph is weighted) from the player outwards, until every enemy has been reached.
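A hedged sketch of the weighted variant: one Dijkstra pass from the checkpoint gives every mob its distance at once. Here cost[x][y] < 0 marks a blocked tile; all names are illustrative:

    static double[][] distancesFrom(int sx, int sy, double[][] cost) {
        int w = cost.length, h = cost[0].length;
        double[][] dist = new double[w][h];
        for (double[] row : dist) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
        java.util.PriorityQueue<double[]> pq = new java.util.PriorityQueue<>(
                java.util.Comparator.comparingDouble((double[] a) -> a[0]));
        dist[sx][sy] = 0;
        pq.add(new double[] { 0, sx, sy });
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            int x = (int) top[1], y = (int) top[2];
            if (top[0] > dist[x][y]) continue;   // stale queue entry, skip it
            for (int[] d : dirs) {
                int nx = x + d[0], ny = y + d[1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || cost[nx][ny] < 0) continue;
                double nd = dist[x][y] + cost[nx][ny];
                if (nd < dist[nx][ny]) {
                    dist[nx][ny] = nd;
                    pq.add(new double[] { nd, nx, ny });
                }
            }
        }
        return dist;   // each mob walks to its cheapest neighbour every tick
    }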
You could alter this strategy to work with A* by changing your heuristic EstimatedDistanceToEnd (aka h(x)) to be the minimum estimate to any enemy, but with a lot of enemies this may end up being slower than the simpler option. The heuristic must be consistent for this to work.
Additionally, make sure you are using the correct tie-breaking criteria.
Also, and most importantly, remember that you don't need to run your pathfinder every single frame for most games - often you can get away with only once or twice a second, or even less, depending on the game.
If that is still too slow, you could look into using D* lite to reuse information between subsequent searches. But, I would bet money that running a single breadth-first search will be more than fast enough.
(copied from my answer to a similar question on gamedev)
Have you considered the Floyd-Warshall algorithm?
Essentially, A* is for path-finding from a single source to one or more destinations. However, in tower defense (depending on your rules, of course), it is about multiple sources navigating around a map.
So for this, Floyd's algorithm seems more suitable. Alternatively, you could have your A* algorithm find paths for unit groups instead of individual units, which should cut down your calculation times.
Presumably, you can back-search from the exit towards all the creeps, so you need to explore your maze only once.

java draughts AI (multithreaded)

This is my first question here; if I did something wrong, tell me...
I'm currently making a draughts game in Java. In fact everything works except the AI.
The AI is at the moment single-threaded, using minimax and alpha-beta pruning. This code works, I think; it's just very slow, and I can only go 5 deep into my game tree.
I have a function that receives my mainboard, a depth (starting at 0) and a maxdepth. At the maxdepth it stops, returns a value (-1, 1 or 0) for whichever player has the most pieces on the board, and ends the recursive call.
If maxdepth isn't reached yet, I calculate all the possible moves and execute them one by one, storing my changes to the mainboard in some way.
I also use alpha-beta pruning, e.g. when I've found a move that can make the player win, I don't bother with the remaining possible moves.
I calculate the next set of moves from that mainboard state recursively. I undo those changes (from point 2) when coming out of the recursive call. I store the values returned by those recursive calls and use minimax on those.
That's the situation, now I have some questions.
I'd like to go deeper into my game tree, so I have to reduce the time it takes to calculate moves.
Is it normal that the values of the possible moves of the AI (e.g. the moves that the AI can choose between) are always 0? Or will this change if I can go deeper into the recursion? At the moment I can only go 5 deep (maxdepth), because otherwise it takes way too long.
I don't know if it's useful, but how can I convert this recursion into a multithreaded recursion? I think this could divide the working time by some factor...
Can someone help me with this please?
1. Is it normal that the values of the possible moves of the AI (e.g. the moves that the AI can choose between) are always 0?
Sounds strange to me. If the number of possible moves is 0, then that player can't play his turn. This shouldn't be very common, or have I misunderstood something?
If the value you're referring to represents the "score" of that move, then obviously "always 0" would indicate that all moves are equally good, which obviously doesn't make for a very good AI algorithm.
2. I don't know if it's useful, but how can I convert this recursion into a multithreaded recursion? I think this could divide the working time by some factor...
I'm sure it would be very useful, especially considering that most machines have several cores these days.
What makes it complicated is your "try a move, record it, undo it, try next move" approach. This indicates that you're working with a mutable data structure, which makes it extremely complicated to parallelize the algorithm.
If I were you, I would let the board / game state be represented by an immutable data structure. You could then treat each recursive call as a separate task and use a pool of threads to process them. You would get close to maximum utilization of the CPU(s) and at the same time simplify the code considerably (by removing the whole restore-to-previous-state code).
Assuming you do indeed have several cores on your machine, this could potentially allow you to go deeper in the tree.
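A hedged sketch of what that could look like with an immutable board and Java's parallel streams; Board, Move, legalMoves, apply and evaluate are stand-ins for your own types, and scores are from the side to move (negamax form):

    int parallelSearch(Board board, int depth) {
        if (depth == 0) return evaluate(board);        // leaf: static evaluation
        java.util.List<Move> moves = board.legalMoves();
        if (moves.isEmpty()) return evaluate(board);   // no legal moves left
        return moves.parallelStream()                  // one task per child position
                .mapToInt(m -> -parallelSearch(board.apply(m), depth - 1))
                .max()
                .getAsInt();
    }

Note that naive parallelism like this sacrifices some alpha-beta cutoffs, since sibling subtrees can no longer prune each other; a common compromise is to search the first move serially and only fork the remaining siblings.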
I would strongly recommend reading this book:
One Jump Ahead: Computer Perfection At Checkers
It will give you a deep history of computer AI in the game of Checkers and will probably give you some help with your evaluation function.
Instead of having an evaluation function that just gives 1/0/-1 for differing pieces, give a score of 100 for every regular piece and 200 for a king. Then give bonuses for piece structures. For instance, if my pieces form a safe structure that can't be captured, then I get a bonus. If my piece is all alone in the middle of the board, then I get a negative bonus. It is this richness of features for piece configurations that will allow your program to play well. The final score is the difference in the evaluation for both players.
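A hedged sketch of such an evaluation; Board, Piece and the structure tests are placeholders, and the bonus sizes are invented for illustration:

    int evaluate(Board b, int player) {
        return sideScore(b, player) - sideScore(b, opponent(player));
    }

    int sideScore(Board b, int player) {
        int score = 0;
        for (Piece p : b.pieces(player)) {
            score += p.isKing() ? 200 : 100;            // a king is worth two men
            if (b.isSafeStructure(p)) score += 15;      // uncapturable formation
            if (b.isLoneCentrePiece(p)) score -= 10;    // exposed in the middle
        }
        return score;
    }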
Also, you shouldn't stop your search at a uniform depth. A quiescence search extends the search until the board is "quiet". In the case of Checkers, this means that there are no forced captures on the board. If you don't do this, your program will play extremely poorly.
As others have suggested, transposition tables will do a great job of reducing the size of your search tree, although the program will run slightly slower. I would also recommend the history heuristic, which is easy to program and will greatly improve the ordering of moves in the tree. (Google history heuristic for more information on this.)
Finally, the representation of your board can make a big difference. Fast implementations of search do not make copies of the board each time a move is applied; instead, they try to quickly modify the board to apply and undo moves.
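A hedged sketch of that apply/undo style; the int[] squares array and the Move fields (from, to, jumped, captured) are invented for illustration, with jumped < 0 meaning a plain slide:

    static final int EMPTY = 0;

    void makeMove(int[] squares, Move m) {
        squares[m.to] = squares[m.from];
        squares[m.from] = EMPTY;
        if (m.jumped >= 0) {                     // a capture: remember the victim
            m.captured = squares[m.jumped];
            squares[m.jumped] = EMPTY;
        }
    }

    void unmakeMove(int[] squares, Move m) {     // exact reverse of makeMove
        if (m.jumped >= 0) squares[m.jumped] = m.captured;
        squares[m.from] = squares[m.to];
        squares[m.to] = EMPTY;
    }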
(I assume by draughts you mean what we would call checkers here in the States.)
I'm not sure I understand your scoring system inside the game tree. Are you scoring by saying, "A position scores 1 point if the player has more pieces than the opponent, -1 point if the player has fewer pieces, and 0 points if they have the same number of pieces"?
If so, then your algorithm might just be capture-averse for the first five moves, or things are working out so that all captures are balanced. I'm not deeply familiar with checkers, but it doesn't seem impossible that this is so for only five moves into the game. And if it's only 5 plies (where a ply is one player's move, rather than a complete set of opposing moves), maybe it's not unusual at all.
You might want to test this by feeding in a board position where you know absolutely the right answer, perhaps something with only two checkers on the board with one in a position to capture.
As a matter of general principle, though, the board evaluation function doesn't make a lot of sense: it ignores the difference between a piece and a crowned piece, and it treats a three-piece advantage the same as a one-piece advantage.

Find location using only distance and bearing?

Triangulation works by checking your angle to three KNOWN targets.
"I know the that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and it's to my right at 90 degrees." Repeat 2 more times for different targets and angles.
Trilateration works by checking your distance from three KNOWN targets.
"I know the that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and I'm 100 meters away from that." Repeat 2 more times for different targets and ranges.
But both of those methods rely on knowing WHAT you're looking at.
Say you're in a forest and you can't differentiate between trees, but you know where key trees are. These trees have been hand picked as "landmarks."
You have a robot moving through that forest slowly.
Do you know of any ways to determine location based solely off of angle and range, exploiting geometry between landmarks? Note, you will see other trees as well, so you won't know which trees are key trees. Ignore the fact that a target may be occluded. Our pre-algorithm takes care of that.
1) If this exists, what's it called? I can't find anything.
2) What do you think the odds are of having two identical location 'hits'? I imagine it's fairly rare.
3) If there are two identical location 'hits', how can I determine my exact location after I next move the robot? (I assume the chances of having 2 occurrences of EXACT angles in a row, after I reposition the robot, would be statistically impossible, barring a forest growing in rows like corn.) Would I just calculate the position again and hope for the best? Or would I somehow incorporate my previous position estimate into my next guess?
If this exists, I'd like to read about it, and if not, develop it as a side project. I just don't have time to reinvent the wheel right now, nor the time to implement this from scratch. So if it doesn't exist, I'll have to figure out another way to localize the robot, since that's not the aim of this research; if it does, let's hope it's semi-easy.
Great question.
The name of the problem you're investigating is localization; together with mapping, it is one of the two most important and challenging problems in robotics at the moment. Put simply, localization is the problem of "given some sensor observations, how do I know where I am?"
Landmark identification is one of the hidden 'tricks' that underpin so much of the practice of robotics. If it isn't possible to uniquely identify a landmark, you can end up with a high proportion of misinformation, particularly given that real sensors are stochastic (i.e. there will be some uncertainty associated with the result). Your choice of an appropriate localization method will almost certainly depend on how well you can uniquely identify a landmark, or associate patterns of landmarks with a map.
The simplest method of self-localization in many cases is Monte Carlo localization. One common way to implement this is with particle filters. The advantage is that they cope well when you don't have great models of motion or sensor capability and need something robust that can deal with unexpected effects (like moving obstacles or landmark obscuration). A particle represents one possible state of the vehicle. Initially, particles are distributed uniformly; as the vehicle moves and more sensor observations are incorporated, particle states are updated to move away from unlikely states. In the example given, particles would move away from areas where the ranges/bearings don't match what should be visible from the current position estimate. Given sufficient time and observations, particles tend to clump together in areas where there is a high probability of the vehicle being located. Look up the work of Sebastian Thrun, particularly the book "Probabilistic Robotics".
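A bare-bones sketch of one particle-filter update, assuming you supply motionModel (move a particle by the commanded motion plus noise), likelihood (how well the particle's predicted ranges/bearings match the sensor reading) and copyOf; Control and Observation are placeholder types:

    static class Particle { double x, y, heading, weight; }

    void step(java.util.List<Particle> particles, Control u, Observation z,
              java.util.Random rng) {
        double total = 0;
        for (Particle p : particles) {
            motionModel(p, u);              // predict: move and add noise
            p.weight = likelihood(p, z);    // update: score against sensors
            total += p.weight;
        }
        // resample: draw a new population in proportion to weight
        // (O(n^2) here for brevity; use a low-variance resampler in practice)
        java.util.List<Particle> next = new java.util.ArrayList<>();
        int n = particles.size();
        for (int i = 0; i < n; i++) {
            double r = rng.nextDouble() * total;
            Particle chosen = particles.get(n - 1);
            for (Particle p : particles) {
                r -= p.weight;
                if (r <= 0) { chosen = p; break; }
            }
            next.add(copyOf(chosen));
        }
        particles.clear();
        particles.addAll(next);
    }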
What you're looking for is Monte Carlo localization (also known as a particle filter). Here's a good resource on the subject.
Or nearly anything from the probabilistic robotics crowd, Dellaert, Thrun, Burgard or Fox. If you're feeling ambitious, you could try to go for a full SLAM solution - a bunch of libraries are posted here.
Or if you're really really ambitious, you could implement from first principles using Factor Graphs.
I assume you want to start by turning on the robot inside the forest. I further assume that the robot can calculate the position of every tree using angle and distance.
Then you can identify the landmarks by iterating through the trees and calculating the distance to all of each tree's neighbours. In Matlab you can use pdist to get a list of all (unique) pairwise distances.
Then you can iterate through the trees to identify landmarks. For every tree, compare the distances to all of its neighbours against the known distances between landmarks. Whenever you find a candidate landmark, check its possible landmark neighbours for the correct distance signature. Since you say that you should always be able to see five landmarks at any given time, you will be trying to match 20 distances, so I'd say the chance of false positives is not too high. If the candidate landmark and its candidate fellow landmarks do not match the complete relative distance pattern, you go on and check the next tree.
Once you have found all the landmarks, you simply triangulate.
Note that depending on how accurately you can measure angles and distances, you need to be able to see more landmark trees at any given time. My guess is that you need to space landmarks with sufficient density that you can see at least three at a time if you have high measurement accuracy.
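A rough Java counterpart to the pdist step, assuming a hypothetical Tree record with x and y fields; each tree gets an order-free signature, the sorted list of distances to its visible neighbours:

    double[] signature(Tree t, java.util.List<Tree> visible, double range) {
        java.util.List<Double> ds = new java.util.ArrayList<>();
        for (Tree other : visible) {
            if (other == t) continue;
            double d = Math.hypot(t.x - other.x, t.y - other.y);
            if (d <= range) ds.add(d);
        }
        java.util.Collections.sort(ds);          // make the signature order-free
        double[] out = new double[ds.size()];
        for (int i = 0; i < out.length; i++) out[i] = ds.get(i);
        return out;
    }

Comparing two signatures is then a matter of matching distances within your measurement tolerance.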
I guess you only need the distances to two landmarks and the order in which you see them (i.e. from left to right you see point A then B).
(1) "Robotic mapping" and "perceptual aliasing".
(2) Two identical hits are inevitable. Since the robot can only distinguish between a finite number X of distinguishable tree configurations, even if the configurations are completely random, there is almost certainly at least one location that looks "the same" as some other location even if you encounter far fewer than X/2 different trees. Those are called "birthday paradox collisions". You may be lucky that the particular location you are at is in fact actually unique, but I wouldn't bet my robot on it.
So you:
(a) have a map of a large area with some, but not all, trees on it.
(b) have a robot somewhere in the actual forest that, without looking at the map, has looked at the nearby trees and generated an internal map of all the trees in a tiny area and its relative position to them.
(c) know that, to the robot, every tree looks the same as every other tree.
You want to find: Where is the robot on the large map?
If only each actual tree had a unique name written on it that the robot could read, and then (some of) those trees and their names were on the map, this would be trivial.
One approach is to attach a (not necessarily unique) "signature" to each tree that describes its position relative to nearby trees.
Then, as you travel along, the robot drives up to a tree and finds a "signature" for that tree, and you find all the trees on the map that "match" that signature.
If only one unique tree on the map matches, then the tree the robot is looking at might be that tree on the map (yay, you know where the robot is): put down a weighty but tentative dot on the map at the robot's position relative to the matching tree; the tree the robot is next to is certainly not any of the other trees on the map.
If several of the trees on the map match (they all have the same non-unique signature), then you could put some less-weighty tentative dots on the map at the robot's position relative to each one of them.
Alas, even if you find one or more matches, it is still possible that the tree the robot is looking at is not on the map at all, that its signature is coincidentally the same as one or more trees on the map, and that the robot could therefore be anywhere on the map.
If none of the trees on the map matches, then the tree the robot is looking at is definitely not on the map. (Perhaps later on, once the robot knows exactly where it is, it should start adding these trees to the map?)
As you drive down the path, you push the dots in your estimated direction and speed of travel.
Then, as you inspect other trees, possibly after driving down the path a little further, you eventually have lots of dots on the map, and hopefully one heavy, highly overlapping cluster at the actual position, with each other dot an easily-ignored isolated coincidence.
The simplest signature is a list of distances from a particular tree to nearby trees.
A particular tree on the map is "matched" to a particular tree in the forest when, for each and every nearby tree on the map, there is a corresponding nearby tree in the forest at "the same" distance, as far as you can tell with your known distance and angular errors.
(By "nearby", I mean "close enough that the robot should be able to definitely confirm that the tree is actually there", although it's probably simpler to approximate this with something like "My robot can see all trees out to a range of R, so I'm only going to bother even trying to match trees that are within a circle of R*1/3 from my robot, and my list of distances only include trees that are within a circle of R*2/3 from the particular tree I'm trying to match").
If you know your north-south orientation even very roughly, you can create signatures that are "more unique", i.e., have fewer spurious matches on the map and (hopefully) in the real forest.
A "match" for the tree the robot is next to occurs when, for each nearby tree on the map, there is a corresponding tree in the forest at "the same" distance and direction, as far as you can tell with your known distance and angular errors.
Say you see that tree "Fred" on the map has another tree 10 meters away in the N-to-W quadrant, but the robot is next to a tree that definitely doesn't have any tree at that distance in its N-to-W quadrant, though it does have a tree 10 meters away to the south.
In that case, using the more complex signature, you can definitely tell the robot is not next to Fred, even though the simple signature would give a (false) match.
Another approach:
The "digital paper" solves a similar problem ... Can you plant a few trees in a pattern that is specifically designed to be easily recognized?

Why does A* path finding sometimes go in straight lines and sometimes diagonals? (Java)

I'm in the process of developing a simple 2d grid based sim game, and have fully functional path finding.
I used the answer found in my previous question as my basis for implementing A* path finding. (Pathfinding 2D Java game?).
To show you what I'm really asking, I need to show you this video screen capture that I made.
I was just testing to see how the person would move to a location and back again, and this was the result...
http://www.screenjelly.com/watch/Bd7d7pObyFo
A different choice of path depending on the direction: an unexpected result. Any ideas?
If you're looking for a simple-ish solution, may I suggest a bit of randomization?
What I mean is this: in the cokeandcode code example, there are the nested for-loops that generate the "successor states" (to use the AI term). I'm referring to the point where it loops over the 3x3 square around the "current" state, adding new locations to the pile to consider.
A relatively simple fix would (should :)) be to isolate that code a bit and have it, say, generate a linked list of nodes before the rest of the processing step. Then Collections.shuffle that linked list and continue the processing from there. Basically, have a routine, say,
"createNaiveNeighbors(node)"
that returns a LinkedList = {(node.x-1, node.y), (node.x, node.y-1), ...} (please pardon the pidgin Java, I'm trying (and always failing) to be brief).
Once you build the linked list, however, you should just be able to do a "for (Node n : myNewLinkedList)" instead of the
    for (int x = -1; x < 2; x++) {
        for (int y = -1; y < 2; y++) {
And still use the exact same body code!
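For concreteness, here is a fleshed-out version of that sketch (Node is whatever type your implementation already uses; a constructor taking grid coordinates is assumed):

    java.util.List<Node> createNaiveNeighbors(Node node) {
        java.util.LinkedList<Node> result = new java.util.LinkedList<>();
        for (int x = -1; x < 2; x++) {
            for (int y = -1; y < 2; y++) {
                if (x == 0 && y == 0) continue;      // skip the node itself
                result.add(new Node(node.x + x, node.y + y));
            }
        }
        java.util.Collections.shuffle(result);       // the randomization step
        return result;
    }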
What this would do, ideally, is sort of "shake up" the order of nodes considered and create paths closer to the diagonal, but without having to change the heuristic. The paths will still be the most efficient, but usually closer to the diagonal.
The downside is, of course, that if you go from A to B multiple times, a different path may be taken. If that is unacceptable, you may need to consider a more drastic modification.
Hope this helps!
-Agor
Both of the paths are the same length, so the algorithm is doing its job just fine: it's finding a shortest path. However, the A* algorithm doesn't specify WHICH shortest path it will take. Implementations normally take the "first" shortest path. Without seeing yours, it's impossible to know exactly why, but if you want the same result each time, you're going to have to add priority rules of some sort (so that your desired path comes up first in the search).
The reason why is actually pretty simple: the path will always try to have the lowest heuristic possible, because the search is greedy. Moving closer to the goal always looks like the optimal choice.
If you allowed diagonal movement, this wouldn't happen.
The reason lies in the path you want the algorithm to take.
I don't know the heuristic your A* uses, but in the first case it has to go to the end of the tunnel first and then plan the way from the end of the tunnel to the target.
In the second case, the simplest move towards the target is to go down until it hits the wall, and then plan the way from the wall to the target.
Most A* implementations I know work with a line-of-sight heuristic, or a Manhattan distance in the case of a block world. These heuristics give you the shortest way, but when obstacles force a route that differs from the line of sight, the resulting paths depend on your starting point.
The algorithm will follow the line of sight as long as possible.
The most likely answer is that going straight south gets it closest to its goal first; going the opposite way, this is not a choice, so it optimizes the sub-path piecewise with the result that alternating up/across moves are seen as best.
If you want it to go along the diagonal going back, you are going to have to identify some points of interest along the path (for example the mouth of the tunnel) and take those into account in your heuristic. Alternatively, you could take them into account in your algorithm by re-computing any sub-path that passes through a point of interest.
Back in the day they used to do a pre-compiled static analysis of maps and placed pathfinding markers at chokepoints. Depending on what your final target is, that might be a good idea here as well.
If you're really interested in learning what's going on, I'd suggest rendering the steps of the A* search. Given your question, it might be very eye-opening for you.
In each case it's preferring the path that takes it closer to its goal node sooner, which is what A* is designed for.
If I saw right, the sphere moves first to the right in a straight line, because it cannot go directly toward the goal (the path is blocked).
Then it goes in a straight line toward the goal. It only looks diagonal.
Does your search look in the 'down' direction first? This might explain the algorithm. Try changing it to look 'up' first and I bet you would see the opposite behavior.
Depending on the implementation of your A*, you will see different results with the same heuristic, as many people have mentioned. This is because of ties: when two or more paths tie, the way you order your open set determines the way the final path will look. You will always get the optimal path if you have an admissible heuristic, but the number of nodes visited will increase with the number of ties (relative to a heuristic producing fewer ties).
If you don't think visiting more nodes is a problem, I would suggest using the randomization suggestion (which is your current accepted answer). If you think searching more nodes is a problem and want to optimize, I would suggest using some sort of tiebreaker. It seems you are using Manhattan distance; if you use Euclidean distance as the tiebreaker when two nodes tie, you will get straighter paths to the goal and visit fewer nodes. This of course assumes no traps or blocks of the line of sight to the goal.
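A hedged sketch of such a tiebreaker; Node and its f, x, y fields stand in for your open-set entries, and (gx, gy) is the goal tile:

    java.util.PriorityQueue<Node> newOpenSet(double gx, double gy) {
        return new java.util.PriorityQueue<>((Node a, Node b) -> {
            int byF = Double.compare(a.f, b.f);      // primary key: the usual f = g + h
            if (byF != 0) return byF;
            double da = Math.hypot(a.x - gx, a.y - gy);
            double db = Math.hypot(b.x - gx, b.y - gy);
            return Double.compare(da, db);           // on ties, prefer the straighter node
        });
    }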
To avoid visiting nodes with blocking elements in the line-of-sight path, I would suggest finding a heuristic that takes these blocking elements into account. Of course, a new heuristic shouldn't do more work than a normal A* search would.
I would suggest looking at my question as it might produce some ideas and solutions to this problem.
