I would like to solve/implement the 8-puzzle problem using the A* algorithm in Java. I'm asking if someone can explain the steps I must follow to solve it. I have read on the net how A* works, but I don't know how to begin the implementation in Java.
I would be very grateful if you could give me the guidelines so that I can implement it myself in Java. I really want to do it myself in order to understand it, so I just need the guidelines to start.
I will use priority queues and will read the initial configuration from a text file that looks, for example, like this:
4 3 6
1 2 5
7 8
Pointers to other sites for more explanation/tutorials are welcome.
I'd begin by deciding how you want to represent the game board states,
then implement the operators (e.g. move the blank tile up, move the blank tile down, ...).
Typically you will have a data structure to represent the open list (i.e. those states
discovered but not yet explored, that is, not yet compared with the goal state) and another for the
closed list (i.e. those states discovered, explored, and found not to be the goal state).
You seed the open list with the starting state, and repeatedly take the "next" state to
be explored from the open list, apply the operators to it to generate new possible states
and so on ...
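A rough sketch of that loop in Java might look like the following; Board, neighbors(), isGoal() and heuristic() are placeholder names for whatever representation you pick, not a fixed API:

import java.util.*;

// Placeholder for your own board representation -- a minimal sketch, not a prescribed design.
// equals() and hashCode() should be based on the tile layout.
interface Board {
    boolean isGoal();
    int heuristic();            // admissible estimate of moves remaining
    List<Board> neighbors();    // states reachable by one blank move
}

class Node implements Comparable<Node> {
    final Board board;   // the state this node represents
    final Node parent;   // where we came from, for rebuilding the path
    final int g;         // moves made so far
    final int f;         // g + heuristic estimate

    Node(Board board, Node parent, int g) {
        this.board = board;
        this.parent = parent;
        this.g = g;
        this.f = g + board.heuristic();
    }

    public int compareTo(Node o) { return Integer.compare(f, o.f); }
}

class Solver {
    List<Board> solve(Board start) {
        PriorityQueue<Node> open = new PriorityQueue<>();  // discovered, not yet explored
        Set<Board> closed = new HashSet<>();               // already explored
        open.add(new Node(start, null, 0));

        while (!open.isEmpty()) {
            Node current = open.poll();                    // state with the lowest f
            if (current.board.isGoal()) return path(current);
            if (!closed.add(current.board)) continue;      // skip states seen before
            for (Board next : current.board.neighbors()) {
                if (!closed.contains(next)) {
                    open.add(new Node(next, current, current.g + 1));
                }
            }
        }
        return null;  // no solution (unsolvable start configuration)
    }

    private List<Board> path(Node goal) {
        LinkedList<Board> result = new LinkedList<>();
        for (Node n = goal; n != null; n = n.parent) result.addFirst(n.board);
        return result;
    }
}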
There is a tutorial I prepared many years ago at:
http://www.cs.rmit.edu.au/AI-Search/
It is far from the definitive word on state space searching, though; it is simply an educational tool for those brand new to the concept.
Check http://olympiad.cs.uct.ac.za/presentations/camp1_2004/heuristics.pdf; it describes ways of tackling this very problem.
A* is a lot like Dijkstra's algorithm except that it includes a heuristic. You might want to read the wiki article on it, or read about single-source shortest path algorithms in general.
A lot of the basic stuff is important but obvious. You'll need to represent the board and create a method for generating the possible next states.
The base score for any position will obviously be the minimum number of actual moves taken to arrive at it. For A* to work, you also need a heuristic that estimates how far a state is from the goal, so it can help you pick the best of the possible next states. One simple heuristic is the number of tiles that are out of place; a stronger one is the sum of each tile's Manhattan distance from its goal square.
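For example, a sketch of the Manhattan-distance version, assuming the board is stored as an int[9] in row-major order with 0 for the blank and the goal placing tiles 1-8 in order with the blank last (that layout is just an assumption):

// Sum of each tile's horizontal + vertical distance from its goal square.
static int manhattan(int[] tiles) {
    int distance = 0;
    for (int i = 0; i < tiles.length; i++) {
        int tile = tiles[i];
        if (tile == 0) continue;                 // don't count the blank
        int goalIndex = tile - 1;                // tile 1 belongs at index 0, etc.
        int rowDiff = Math.abs(i / 3 - goalIndex / 3);
        int colDiff = Math.abs(i % 3 - goalIndex % 3);
        distance += rowDiff + colDiff;
    }
    return distance;
}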
I'm currently in an advanced data structures class and have learned a good bit about graphs. For this summer, I was asked to help write an algorithm to match roommates. For my data structures class I wrote a city-path graph that performs some sorting and Prim's algorithm, and I'm thinking that a graph may be a great place to start with my roommate-matching algorithm.
I was thinking that our database could just be a text file, nothing too fancy. I could initialize each node in the graph as a student; each student would have an undirected edge to many other students (no edge between students who don't want to be roommates with each other, and the sorority also doesn't want repeat roommates). I could also increase the edge weights depending on shared special interests.
Everything listed above is quite simple and I don't think I'll run into any problem implementing it. But here is my question:
How should I update the common interest field? Should I start that with a physical survey and then go back into the text file and update the weight of the edge manually? Or should I be creating a field that keeps track of the matching interests?
What you're trying to design is called bipartite matching. Fortunately, unlike other bipartite matching problems, you won't need fancy graph algorithms or a complex implementation for this one. It is very close to the Stable Marriage Problem, and surprisingly there are very effective, even simpler algorithms for it.
If you are interested, I can share my C++ implementation of the stable marriage problem.
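In the meantime, here is a rough Java sketch of the Gale-Shapley proposal algorithm for the classic two-sided version; the array-based preference lists are an assumed input format, not something from the original question:

import java.util.*;

class StableMatching {
    // Returns matchedTo[r] = proposer matched to reviewer r.
    // proposerPrefs[p] lists reviewers in p's order of preference;
    // reviewerPrefs[r] lists proposers in r's order of preference.
    static int[] match(int[][] proposerPrefs, int[][] reviewerPrefs) {
        int n = proposerPrefs.length;

        // rank[r][p] = how reviewer r ranks proposer p (lower is better)
        int[][] rank = new int[n][n];
        for (int r = 0; r < n; r++)
            for (int i = 0; i < n; i++)
                rank[r][reviewerPrefs[r][i]] = i;

        int[] matchedTo = new int[n];          // reviewer -> proposer, -1 = free
        Arrays.fill(matchedTo, -1);
        int[] nextProposal = new int[n];       // next index in each proposer's list
        Deque<Integer> free = new ArrayDeque<>();
        for (int p = 0; p < n; p++) free.push(p);

        while (!free.isEmpty()) {
            int p = free.pop();
            int r = proposerPrefs[p][nextProposal[p]++];   // propose to next choice
            if (matchedTo[r] == -1) {
                matchedTo[r] = p;                          // r was free: accept
            } else if (rank[r][p] < rank[r][matchedTo[r]]) {
                free.push(matchedTo[r]);                   // r prefers p: drop current partner
                matchedTo[r] = p;
            } else {
                free.push(p);                              // r rejects p: p stays free
            }
        }
        return matchedTo;
    }
}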
I am developing a chess game, and at the moment I'm trying to implement a minimax algorithm. I haven't done this before, and the little I know about how to programmatically represent and implement the following evaluation-function features (material, mobility, piece-square tables, centre control, trapped pieces, king safety, tempo, and pawn structure) is not quite clear to me (I would be grateful if someone could explain them in detail). I have been able to assign values to each chess piece, piece action values, and a square table for each piece. The problem I'm having at the moment is how to generate piece attacked and defended values, which will be added to or subtracted from the score. The idea is that I want to reward the AI agent for protecting its pieces and penalize it for having pieces attacked. Thanks in advance.
Each of the evaluation features you mentioned will take up compute time. As you may already be aware, the playing strength of a chess engine comes from two sources:
Search
Evaluation
And both contend for the same valuable resource, compute time. Evaluation tends to be heuristics based and hence a bit fuzzy, whereas search tends to yield more concrete and relevant results. If you are starting to build an engine then I would recommend focusing on search while keeping evaluation basic (but not weak!). That way you will be able to tell exactly where something went wrong and hence avoid possible early disappointments. Moreover, popular engines like Stockfish also started out by first building a strong search algorithm.
If you've been patient enough to read this far, let me point you to two useful resources for evaluation:
Chess Programming Wiki's evaluation page: This website is probably the best online resource for chess engine development in general.
Link to a basic but not weak evaluation function: This is C# code. Unfortunately I can't find the original article that I based this evaluation on.
Hope it helps :)
I think that you shouldn't include the computation of attacked and defended pieces. That functionality is already handled by the minimax algorithm in a more efficient way.
A piece is under attack if on the following move the opponent can take it. If you try to evaluate this possibility in a static evaluation function, you will get into trouble if you want to do it correctly. If my protected pawn can be taken by the opponent's queen, that is not a problem; how do you take this into account? What if my queen can be taken by an opposing pawn, but moving that pawn would expose its own king to attack?
These considerations are better handled by the minimax algorithm, not the evaluator. Consider that to know how many pieces you can capture or can lose, you would have to look at all possible moves, and you would probably spend about the same time it takes to go one level deeper in the minimax search. Moreover, that time is wasted if you later decide to search one level deeper anyway.
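To illustrate, here is a bare-bones minimax sketch where the static evaluation counts material only; hanging pieces are punished simply because the opponent gets to reply one ply deeper. Position, Move and the method names are hypothetical placeholders, not the asker's actual classes:

import java.util.List;

interface Move {}

interface Position {
    List<Move> legalMoves();
    void play(Move m);
    void undo(Move m);
    int materialScore();   // e.g. pawn=100, knight/bishop=300, rook=500, queen=900
}

class Minimax {
    static int search(Position pos, int depth, boolean maximizing) {
        if (depth == 0) return pos.materialScore();   // static eval: no attack/defence terms
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Move move : pos.legalMoves()) {
            pos.play(move);
            // The reply is searched one ply deeper, so a hanging piece simply
            // gets captured in the subtree and the score drops by itself.
            int score = search(pos, depth - 1, !maximizing);
            pos.undo(move);
            best = maximizing ? Math.max(best, score) : Math.min(best, score);
        }
        return best;
    }
}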
Firstly, I have read every thread that I could find on Stack Overflow or through other internet searches. I learned about different aspects, but it isn't exactly what I need.
I need to solve a Rush Hour puzzle of size no larger than 8 X 8 tiles.
As I have stated in the title, I want to use A*. As a heuristic for it I was going to use:
the number of cars blocking the red car's path (the car that needs to be taken out) should decrease or stay the same.
I have read the BFS solution for Rush hour.
I don't know how to start, or rather, what steps to follow.
In case anyone needs any explanation, here is the link to the task :
http://www.cs.princeton.edu/courses/archive/fall04/cos402/assignments/rushhour/index.html
So far, from what I have read (especially polygenelubricants's answer), I need to generate a graph of states, including the initial one and the "success" one, and determine the minimum path from the initial state to the final one using the A* algorithm?
Should I create a backtracking function to generate all the possible (valid) moves?
As I have previously stated, I need help outlining the steps I should take, rather than help with implementation issues.
Edit: Do I need to generate all the possible moves up front so I can convert them into graph nodes? Isn't that time-consuming? I need to solve an 8x8 puzzle in less than 10 seconds.
A* is an algorithm for searching graphs. Graphs consist of nodes and edges. So we need to represent your problem as a graph.
We can call each possible state of the puzzle a node. Two nodes have an edge between them if they can be reached from each other using exactly one move.
Now we need a start node and an end node. Which puzzle-states would represent our start- and end-nodes?
Finally, A* requires one more thing: an admissible distance heuristic - a guess at how many moves the puzzle will take to complete. The only restriction on this guess is that it must never overestimate the actual number of moves, so what we're really looking for is a lower bound. Setting the heuristic to 0 would satisfy this, but if we can come up with a tighter lower bound, the algorithm will run faster. Can you come up with a lower bound on the number of moves the puzzle will take to complete?
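As a concrete starting point, here is a sketch of the bound the question itself suggests: one move for the red car plus one for every car sitting between it and the exit. It assumes a char-grid board with '.' for empty cells and 'R' for the red car, which lies horizontally on exitRow and exits to the right; that representation is an assumption, not part of the assignment:

// Lower bound: if the red car isn't at the exit, at least one move is needed,
// plus one for every blocker (each blocking car must move at least once).
static int blockingHeuristic(char[][] grid, int exitRow) {
    int rightEnd = -1;
    for (int col = 0; col < grid[exitRow].length; col++)
        if (grid[exitRow][col] == 'R') rightEnd = col;   // rightmost cell of the red car
    if (rightEnd == grid[exitRow].length - 1) return 0;  // already at the exit
    int blockers = 0;
    for (int col = rightEnd + 1; col < grid[exitRow].length; col++)
        if (grid[exitRow][col] != '.') blockers++;       // a car in the way
    return 1 + blockers;
}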
I have been struggling to think of some decent uses for things like vectors and stacks, since I find myself best able to remember things once I've done something useful with them. I'm after some short but useful applications you've found for the various Java data structures.
I'm not after code samples, but rather things that stick in your mind as 'that was a really great use of a hashmap/linked list, etc.' - things that I could then go and try myself.
"Usefulness" is a subjective term, but in any case, an intuitive way to learn data structures is to use them to simulate real-life activities.
Stack
Simulate a secretary that is shredding a bunch of documents. She has -- guess what? -- a stack of documents on her desk, and she shreds them one by one by picking the top document and feeding it into the shredder, repeating this until all documents are shredded.
Her boss would intermittently come over to her desk and put a new document to shred on top of her stack.
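In Java the simulation is only a few lines, using an ArrayDeque as the stack (the document names here are made up):

import java.util.ArrayDeque;
import java.util.Deque;

public class Shredder {
    public static void main(String[] args) {
        Deque<String> pile = new ArrayDeque<>();    // the stack on her desk
        pile.push("expense report");                // boss drops documents on top
        pile.push("old memo");
        pile.push("draft contract");
        while (!pile.isEmpty()) {
            System.out.println("Shredding: " + pile.pop());   // always take the top one
        }
    }
}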
Circular doubly linked list
Simulate kids playing in the playground. The kids stand in a circle, then each kid would -- guess what? -- link up by holding hands with the kid to the left (with the left hand) and to the right (with the right hand).
Do "Eeny, meeny, miny, moe" around the circle, say starting from the youngest kid. The "it" kid would then have to leave the circle, and the gap is closed in the most natural way, i.e. by having the two kids around the gap link up.
Restart the "Eeny, meeny, miny, moe" from the gap. Go the opposite direction on a whim. Do this until one kid remains.
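A bare-bones version with a hand-rolled circular doubly linked list might look like this; the names and the four-beat rhyme length are arbitrary choices for the sketch:

// Each kid holds hands with a left and a right neighbour.
class KidNode {
    String name;
    KidNode left, right;
    KidNode(String name) { this.name = name; }
}

public class Playground {
    public static void main(String[] args) {
        String[] kids = {"Ana", "Ben", "Cleo", "Dov", "Eli"};
        KidNode first = new KidNode(kids[0]);
        KidNode prev = first;
        for (int i = 1; i < kids.length; i++) {      // link the kids into a line
            KidNode k = new KidNode(kids[i]);
            prev.right = k; k.left = prev;
            prev = k;
        }
        prev.right = first; first.left = prev;       // close the circle

        KidNode current = first;                     // start from the youngest
        boolean goRight = true;
        int remaining = kids.length;
        while (remaining > 1) {
            for (int beat = 1; beat < 4; beat++)     // 4-beat rhyme: land on the 4th kid
                current = goRight ? current.right : current.left;
            System.out.println(current.name + " is out");
            current.left.right = current.right;      // close the gap: neighbours link up
            current.right.left = current.left;
            current = goRight ? current.right : current.left;  // restart from the gap
            goRight = !goRight;                      // go the opposite direction on a whim
            remaining--;
        }
        System.out.println(current.name + " wins");
    }
}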
Map
Dog says woof. Cow says moo. Yeah, simulate that.
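In Java terms, that is little more than:

import java.util.HashMap;
import java.util.Map;

public class AnimalSounds {
    public static void main(String[] args) {
        Map<String, String> sounds = new HashMap<>();
        sounds.put("dog", "woof");                  // key -> value lookup
        sounds.put("cow", "moo");
        System.out.println("The dog says " + sounds.get("dog"));
        System.out.println("The cow says " + sounds.get("cow"));
    }
}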
A good use for a Stack would be bracket matching. Write a small program that will parse some input and report back if the bracket syntax was correct (i.e. every open bracket has a corresponding closing bracket).
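One possible shape for that exercise, using an ArrayDeque as the stack (just a sketch covering the three common bracket types):

import java.util.ArrayDeque;
import java.util.Deque;

public class Brackets {
    static boolean isBalanced(String input) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : input.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                               // remember the opener
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty()) return false;           // closer with no opener
                char open = stack.pop();
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[') ||
                    (c == '}' && open != '{')) return false; // mismatched pair
            }
        }
        return stack.isEmpty();                              // leftovers = unclosed openers
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("{[()]}"));   // true
        System.out.println(isBalanced("([)]"));     // false
    }
}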
How about an RPN Calculator for Stacks? Vectors can be applied to almost any problem.
I'm in the process of developing a simple 2D grid-based sim game, and I have fully functional pathfinding.
I used the answer found in my previous question as my basis for implementing A* path finding. (Pathfinding 2D Java game?).
To show you really what I'm asking, I need to show you this video screen capture that I made.
I was just testing to see how the person would move to a location and back again, and this was the result...
http://www.screenjelly.com/watch/Bd7d7pObyFo
A different path is chosen depending on the direction, which was an unexpected result. Any ideas?
If you're looking for a simple-ish solution, may I suggest a bit of randomization?
What I mean is this: in the cokeandcode code example, there are nested for-loops that generate the "successor states" (to use the AI term). I'm referring to the point where it loops over the 3x3 square around the "current" state, adding new locations to the pile of those to consider.
A relatively simple fix would (should :)) be to isolate that code a bit and have it, say, generate a LinkedList of nodes before the rest of the processing step. Then Collections.shuffle that linked list, and continue the processing from there. Basically, have a routine, say,
"createNaiveNeighbors(node)"
that returns a LinkedList = {(node.x-1, node.y), (node.x, node.y-1), ...} (please pardon the pidgin Java; I'm trying (and always failing) to be brief).
Once you build the linked list, however, you should just be able to do a "for (Node n : myNewLinkedList)" instead of the
for (int x = -1; x < 2; x++) {
    for (int y = -1; y < 2; y++) {
And still use the exact same body code!
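Roughly like this, where Node is a hypothetical (x, y) holder and the bounds/validity checks stay in the existing body code:

import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

class Node {
    final int x, y;
    Node(int x, int y) { this.x = x; this.y = y; }
}

class Neighbors {
    static List<Node> createNaiveNeighbors(Node node) {
        LinkedList<Node> result = new LinkedList<>();
        for (int dx = -1; dx < 2; dx++) {
            for (int dy = -1; dy < 2; dy++) {
                if (dx == 0 && dy == 0) continue;        // skip the current node itself
                result.add(new Node(node.x + dx, node.y + dy));
            }
        }
        Collections.shuffle(result);                     // randomize the tie order
        return result;                                   // then: for (Node n : result) { ...same body... }
    }
}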
What this would do, ideally, is "shake up" the order in which nodes are considered and produce paths closer to the diagonal, without having to change the heuristic. The paths will still be optimal in length, just usually closer to the diagonal.
The downside is, of course, that if you go from A to B multiple times, a different path may be taken. If that is unacceptable, you may need to consider a more drastic modification.
Hope this helps!
-Agor
Both of the paths are of the same length, so the algorithm is doing its job just fine - it's finding a shortest path. However, the A* algorithm doesn't specify WHICH shortest path it will take. Implementations normally take the "first" shortest path. Without seeing yours, it's impossible to know exactly why, but if you want the same results each time, you're going to have to add priority rules of some sort (so that your desired path comes up first in the search).
The reason is actually pretty simple: among options of equal cost, the search expands the node with the lowest heuristic first, so it greedily heads toward the goal whenever it can. Moving closer to the goal is still an optimal path.
If you allowed diagonal movement, this wouldn't happen.
The reason lies in the path you expect the algorithm to take.
I don't know which heuristic your A* uses, but in the first case it has to go to the end of the tunnel first and then plan the way from the end of the tunnel to the target.
In the second case, the simplest moves toward the target are to go down until it hits the wall, and then it plans the way from the wall to the target.
Most A* implementations I know work with a line-of-sight heuristic, or Manhattan distance in the case of a block world. These heuristics give you the shortest way, but when obstacles force a route that deviates from the line of sight, the route depends on your starting point.
The algorithm will follow the line of sight as long as possible.
The most likely answer is that going straight south gets it closest to its goal first; going the opposite way, that is not an option, so it optimizes the sub-path piecewise, with the result that alternating up/across moves are seen as best.
If you want it to go along the diagonal going back, you are going to have to identify some points of interest along the path (for example the mouth of the tunnel) and take those into account in your heuristic. Alternatively, you could take them into account in your algorithm by re-computing any sub-path that passes through a point of interest.
Back in the day they used to do a pre-compiled static analysis of maps and placed pathfinding markers at chokepoints. Depending on what your final target is, that might be a good idea here as well.
If you're really interested in learning what's going on, I'd suggest rendering the steps of the A* search. Given your question, it might be very eye-opening for you.
In each case it's preferring the path that takes it closer to its goal node sooner, which is what A* is designed for.
If I saw it right, the sphere first moves to the right in a straight line, because it cannot go directly toward the goal (the path is blocked).
Then it goes in a straight line toward the goal; it only looks diagonal.
Does your search look in the 'down' direction first? That might explain the behavior. Try changing it to look 'up' first, and I bet you would see the opposite behavior.
Depending on the implementation of your A*, you will see different results with the same heuristic, as many people have mentioned. This is because of ties: when two or more paths tie, the way you order your open set determines what the final path looks like. You will always get an optimal path if you have an admissible heuristic, but the number of nodes visited grows with the number of ties (relative to a heuristic that produces fewer ties).
If you don't think visiting more nodes is a problem, I would suggest the randomization suggestion (your currently accepted answer). If you do think searching more nodes is a problem and want to optimize, I would suggest using some sort of tiebreaker. It seems you are using Manhattan distance; if you use Euclidean distance as a tiebreaker when two nodes tie, you will get straighter paths to the goal and visit fewer nodes. This of course assumes there are no traps or obstacles blocking the line of sight to the goal.
To avoid visiting extra nodes when obstacles block the line-of-sight path, I would suggest finding a heuristic that takes these blocking elements into account. Of course, a new heuristic shouldn't do more work than a normal A* search would do.
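For instance, a comparator along those tie-breaking lines might look like this; the field names are assumptions about your node type, not a known API:

import java.util.Comparator;
import java.util.PriorityQueue;

class SearchNode {
    int x, y;   // grid position
    int f;      // g + Manhattan heuristic
}

class TieBreak {
    // Order by f as usual; only when two nodes tie on f, prefer the one with the
    // smaller straight-line distance to the goal, which favors straighter paths.
    static Comparator<SearchNode> towardGoal(int goalX, int goalY) {
        return Comparator.comparingInt((SearchNode n) -> n.f)
                         .thenComparingDouble(n -> Math.hypot(n.x - goalX, n.y - goalY));
    }

    static PriorityQueue<SearchNode> newOpenSet(int goalX, int goalY) {
        return new PriorityQueue<>(towardGoal(goalX, goalY));
    }
}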
I would suggest looking at my question as it might produce some ideas and solutions to this problem.