I'm having some difficulties with the following problem:
I'm making a little game where you're at a specific spot, and each spot has some possible directions.
The available directions are N(orth), E(ast), S(outh), W(est). I use the function getPosDirections to get the possible directions of that spot. The function returns the directions in an ArrayList<String>, e.g. for spot J3: [E,W].
Now the game goes like this: 2 dice will be rolled so you get a number between 2 and 12, this number represents the number of steps you can make.
What I want is an ArrayList of all the possible routes
clarification of all the possible routes:
When I'm at the current position I check what the possibilities are from there. Let's say those are go East and go West. So we get 2 new positions, and from each of them we need to check the next possibilities again (until we have taken x directions)
(x equals the number thrown by the dice).
e.g.: I throw 3 and I'm currently at spot J3:
[[E,N,E],[E,N,S],[E,S,E],[E,S,S],[W,N,E],[W,N,S],[W,S,E],[W,S,S]]
How would I obtain the last-mentioned ArrayList?
First, you might wish to think about your approach some more. In the worst case (a 12 is rolled, and all 4 directions are possible at every location), there will be 4^12 ≈ 17 million routes. Is it really necessary to iterate over them all? And is it necessary to fill about 1 GB of memory to store that list?
Next, it is probably a good idea to represent directions in a type-safe manner, for instance using an enum.
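For example, a minimal sketch of such an enum (the dx/dy step convention here is an assumption and should match however your board coordinates work):

public enum Direction {
    // Each constant carries the step it represents; adjust the signs to your grid.
    N(0, -1), E(1, 0), S(0, 1), W(-1, 0);

    public final int dx;
    public final int dy;

    Direction(int dx, int dy) {
        this.dx = dx;
        this.dy = dy;
    }
}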
That being said, recursion is your friend:
private void iteratePaths(Location currentLoc, List<Direction> currentPath, List<List<Direction>> allPaths, int pathLength) {
    if (currentPath.size() >= pathLength) {
        allPaths.add(new ArrayList<Direction>(currentPath));
        return;
    }
    for (Direction d : currentLoc.getPosDirections()) {
        currentPath.add(d);
        Location newLoc = currentLoc.walk(d);
        iteratePaths(newLoc, currentPath, allPaths, pathLength);
        currentPath.remove(currentPath.size() - 1);
    }
}
public List<List<Direction>> getAllPaths(Location loc, int length) {
    List<List<Direction>> allPaths = new ArrayList<List<Direction>>();
    List<Direction> currentPath = new ArrayList<Direction>();
    iteratePaths(loc, currentPath, allPaths, length);
    return allPaths;
}
You can treat your field of spots as a graph. Then you need to implement BFS or DFS while saving the paths.
You can implement all the game logic (such as getting the list of possible directions from a certain node) inside either of these algorithms.
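For illustration, a rough sketch of the BFS-style variant, reusing the Location/Direction API assumed in the answer above (getPosDirections() and walk() are assumptions):

// Sketch: grow all partial paths one step per round, keeping each path's end Location
// so it never has to be re-walked. Requires java.util.ArrayList and java.util.List.
static List<List<Direction>> allPaths(Location start, int steps) {
    List<Location> ends = new ArrayList<>();
    List<List<Direction>> paths = new ArrayList<>();
    ends.add(start);
    paths.add(new ArrayList<Direction>());
    for (int i = 0; i < steps; i++) {
        List<Location> nextEnds = new ArrayList<>();
        List<List<Direction>> nextPaths = new ArrayList<>();
        for (int j = 0; j < paths.size(); j++) {
            for (Direction d : ends.get(j).getPosDirections()) {
                List<Direction> extended = new ArrayList<>(paths.get(j));
                extended.add(d);
                nextPaths.add(extended);
                nextEnds.add(ends.get(j).walk(d));
            }
        }
        ends = nextEnds;
        paths = nextPaths;
    }
    return paths;
}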
I am facing a problem with the implementation of the tree used for the mini-max algorithm in my AI module.
The tree I need to write will have 4 levels: root (0), AI move (1), player move (2) and AI move (3). Every level will contain n children, each with fields like board state, field rating and the coordinates of the move. By my calculations, the third level of the tree could have about 25,000 nodes. How should I implement this?
At the moment I've implemented 3 different ArrayLists of Objects, one list per level:
firstDepthList contains objects with a possible board state, field rating and the coordinates of the move;
secondDepthList contains objects with a possible board state (for every element from firstDepthList), field rating and the coordinates of the move; and
thirdDepthList contains objects like the above for every element from secondDepthList. Of course I've linked the lists together for board and move continuity.
Or maybe you would recommend better solution?
The mini-max algorithm only needs one value per node: the maximum (or minimum, depending on the level). You don't have to store the whole tree.
It can be implemented as a recursive function, and there is no need to create many states: a single board state can be modified and then reverted (for example, a chess piece's move from A to B can be undone).
double getRating(BoardState state, int currentPlayer, int depth) {
    // currentPlayer has to be 1 or -1
    if (depth <= 0) {
        return state.positionRating();
    }
    double bestRating = -Double.MAX_VALUE;
    for (Move m : state.possibleMoves(currentPlayer)) {
        state.apply(m);  // modify state
        double rating = currentPlayer * getRating(state, -currentPlayer, depth - 1);
        // player 1 wants the biggest number, player -1 the lowest
        bestRating = Math.max(bestRating, rating);
        state.revert(m); // restore state
    }
    return bestRating * currentPlayer;
}
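To actually pick a move at the top level, a small wrapper over the same assumed BoardState/Move API could look like this (a sketch, not the only way to do it):

// Sketch: at the root, keep the move that produced the best rating for the given player.
Move bestMove(BoardState state, int player, int depth) {
    Move best = null;
    double bestRating = -Double.MAX_VALUE;
    for (Move m : state.possibleMoves(player)) {
        state.apply(m);
        double rating = player * getRating(state, -player, depth - 1);
        state.revert(m);
        if (rating > bestRating) {
            bestRating = rating;
            best = m;
        }
    }
    return best;
}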
I have created a gameboard (5x5) and I now want to decide as fast as possible whether a move is legal. For example, a piece at (0,0) wants to go to (1,1); is that legal? First I tried to work this out with computations, but that seemed bothersome. I would like to hard-code the possible moves based on a position on the board and then iterate through all the possible moves to see if they match the destination of the piece. I have problems getting this down on paper. This is what I would like:
//game piece is at 0,0 now, decide if 1,1 is legal
Point destination = new Point(1,1);
destination.findIn(legalMoves[0][0]);
The first problem I face is that I don't know how to put a list of possible moves in an array at for example index [0][0]. This must be fairly obvious, but I have been stuck on it for some time. I would like to create an array in which each cell holds a list of Point objects. So in semi-code: legalMoves[0][0] = {Point(1,1), Point(0,1), Point(1,0)}
I am not sure if this is efficient, but it makes more sense logically than, say, [[1,1],[0,1],[1,0]]; I am not sold on it though.
The second problem I have is that instead of creating the object at every start of the game with an instance variable legalMoves, I would rather have it read from disk. I think that it should be quicker this way? Is the serializable class the way to go?
My 3rd small problem is that for the 25 positions the legal moves are unbalanced. Some have 8 possible legal moves, others have 3. Maybe this is not a problem at all.
You are looking for a structure that will give you the candidates for a given point, i.e. Point -> List<Point>.
Typically, I would go for a Map<Point, List<Point>>.
You can initialise this structure statically at program start or dynamically when needed. For instance, here I use two helper arrays that contain the possible translations from a point; these will yield the neighbours of the point.
// (-1 1) (0 1) (1 1)
// (-1 0) (----) (1 0)
// (-1 -1) (0 -1) (1 -1)
// from (1 0) anti-clockwise:
static int[] xOffset = {1,1,0,-1,-1,-1,0,1};
static int[] yOffset = {0,1,1,1,0,-1,-1,-1};
The following Map contains the actual neighbours for a Point, with a function that computes, stores and returns these neighbours. You could choose to initialise all neighbours in one pass instead, but given the small numbers, I would not think this is a problem performance-wise.
static Map<Point, List<Point>> neighbours = new HashMap<>();
static List<Point> getNeighbours(Point a) {
    List<Point> nb = neighbours.get(a);
    if (nb == null) {
        nb = new ArrayList<>(xOffset.length); // size the list
        for (int i = 0; i < xOffset.length; i++) {
            int x = a.getX() + xOffset[i];
            int y = a.getY() + yOffset[i];
            if (x >= 0 && y >= 0 && x < 5 && y < 5) {
                nb.add(new Point(x, y));
            }
        }
        neighbours.put(a, nb);
    }
    return nb;
}
Now checking a legal move is a matter of finding the point in the neighbours:
static boolean isLegalMove(Point from, Point to) {
    boolean legal = false;
    for (Point p : getNeighbours(from)) {
        if (p.equals(to)) {
            legal = true;
            break;
        }
    }
    return legal;
}
Note: the class Point must define equals() and hashCode() for the map to behave as expected.
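If you are not using a library class for this, a minimal sketch of such a Point could be:

// Immutable value class; equals()/hashCode() make it usable as a HashMap key.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }
    int getY() { return y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}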
The first problem I face is that I don't know how to put a list of possible moves in an array at for example index [0][0]
Since the board is 2D, and the number of legal moves could generally be more than one, you would end up with a 3D data structure:
Point[][][] legalMoves = new Point[5][5][];
legalMoves[0][0] = new Point[] { new Point(1, 1), new Point(0, 1), new Point(1, 0) };
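Checking a destination is then just a lookup in that array. A sketch, assuming Point overrides equals() (java.awt.Point does):

// Requires java.util.Arrays.
static boolean isLegal(Point[][][] legalMoves, int r, int c, Point destination) {
    return Arrays.asList(legalMoves[r][c]).contains(destination);
}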
instead of creating the object at every start of the game with an instance variable legalMoves, I would rather have it read from disk. I think that it should be quicker this way? Is the serializable class the way to go?
This cannot be answered without profiling. I cannot imagine that computing legal moves of any kind for a 5x5 board could be so computationally intensive as to justify any kind of additional I/O operation.
for the 25 positions the legal moves are unbalanced. Some have 8 possible legal moves, others have 3. Maybe this is not a problem at all.
This can be handled nicely with a 3D "jagged array" described above, so it is not a problem at all.
I'm trying to write a time-efficient algorithm that can detect a group of overlapping circles and make a single circle in the "middle" of the group that will represent that group. The practical application of this is representing GPS locations on a map (the conversion to Cartesian co-ordinates is already handled, so that's not relevant). The desired effect is that at different zoom levels clusters of close-together points just appear as a single circle (which will have the number of points printed in the centre in the final version).
In this example the circles just have a radius of 15, so the distance calculation (Pythagoras) is not being square-rooted and is compared to 225 for the collision detection. I was trying anything to shave off time, but the problem is this really needs to happen very quickly because it's a user-facing bit of code that needs to be snappy and good looking.
I've given this a go and it works pretty well with small data sets. There are 2 big problems: it takes too long, and it can run out of memory if all the points are on top of one another.
The route I've taken is to calculate the distance between each pair of points in a first pass, then take the shortest distance first and start to combine from there; anything that's been combined becomes ineligible for combination on that pass, and the whole list is passed back around to the distance calculations again until nothing changes.
To be honest I think it needs a radical shift in approach and I think it's a little beyond me. I've refactored my code into one class for ease of posting and generated random points to give an example.
package mergepoints;

import java.awt.Point;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Merger {

    public static void main(String[] args) {
        Merger m = new Merger();
        m.subProcess(m.createRandomList());
    }

    private List<Plottable> createRandomList() {
        List<Plottable> points = new ArrayList<>();
        for (int i = 0; i < 50000; i++) {
            Plottable p = new Plottable();
            p.location = new Point((int) Math.floor(Math.random() * 1000),
                    (int) Math.floor(Math.random() * 1000));
            points.add(p);
        }
        return points;
    }

    private List<Plottable> subProcess(List<Plottable> visible) {
        List<PlottableTuple> tuples = new ArrayList<PlottableTuple>();
        // create a tuple to store distance and matching objects together.
        for (Plottable p : visible) {
            PlottableTuple tuple = new PlottableTuple();
            tuple.a = p;
            tuples.add(tuple);
        }
        // Work out each Plottable's relative distance from one another and
        // order them by shortest first. We may need to do this multiple times
        // for one set, so it's going in its own method.
        // This is the bit that takes ages.
        setDistances(tuples);
        // Sort so that smallest distances are at the top.
        // Parse the set and combine any pair less than the smallest distance
        // into a combined pin.
        // Any plottable that's been combined is no longer eligible for
        // combining, so ignore it on this parse.
        List<PlottableTuple> sorted = new ArrayList<>(tuples);
        Collections.sort(sorted);
        Set<Plottable> done = new HashSet<>();
        Set<Plottable> mergedSet = new HashSet<>();
        for (PlottableTuple pt : sorted) {
            if (!done.contains(pt.a) && pt.distance <= 225) {
                Plottable merged = combine(pt, done);
                done.add(pt.a);
                for (PlottableTuple tup : pt.others) {
                    done.add(tup.a);
                }
                mergedSet.add(merged);
            }
        }
        // If we haven't processed anything we are done; just return the
        // visible list.
        if (done.size() == 0) {
            return visible;
        } else {
            // Change the list to represent the new combined plottables and
            // repeat the process.
            visible.removeAll(done);
            visible.addAll(mergedSet);
            return subProcess(visible);
        }
    }

    private Plottable combine(PlottableTuple pt, Set<Plottable> done) {
        List<Plottable> plottables = new ArrayList<>();
        plottables.addAll(pt.a.containingPlottables);
        for (PlottableTuple otherTuple : pt.others) {
            if (!done.contains(otherTuple.a)) {
                plottables.addAll(otherTuple.a.containingPlottables);
            }
        }
        int x = 0;
        int y = 0;
        for (Plottable p : plottables) {
            Point position = p.location;
            x += position.x;
            y += position.y;
        }
        x = x / plottables.size();
        y = y / plottables.size();
        Plottable merged = new Plottable();
        merged.containingPlottables.addAll(plottables);
        merged.location = new Point(x, y);
        return merged;
    }

    private void setDistances(List<PlottableTuple> tuples) {
        System.out.println("pins: " + tuples.size());
        int loops = 0;
        // Start from the first item and loop through, then repeat but starting
        // with the next item.
        for (int startIndex = 0; startIndex < tuples.size() - 1; startIndex++) {
            // Get the data for the start Plottable
            PlottableTuple startTuple = tuples.get(startIndex);
            Point startLocation = startTuple.a.location;
            for (int i = startIndex + 1; i < tuples.size(); i++) {
                loops++;
                PlottableTuple compareTuple = tuples.get(i);
                double distance = distance(startLocation, compareTuple.a.location);
                setDistance(startTuple, compareTuple, distance);
                setDistance(compareTuple, startTuple, distance);
            }
        }
        System.out.println("loops " + loops);
    }

    private void setDistance(PlottableTuple from, PlottableTuple to, double distance) {
        if (distance < from.distance || from.others == null) {
            from.distance = distance;
            from.others = new HashSet<>();
            from.others.add(to);
        } else if (distance == from.distance) {
            from.others.add(to);
        }
    }

    private double distance(Point a, Point b) {
        if (a.equals(b)) {
            return 0.0;
        }
        double result = (((double) a.x - (double) b.x) * ((double) a.x - (double) b.x))
                + (((double) a.y - (double) b.y) * ((double) a.y - (double) b.y));
        return result;
    }

    class PlottableTuple implements Comparable<PlottableTuple> {
        public Plottable a;
        public Set<PlottableTuple> others;
        public double distance;

        @Override
        public int compareTo(PlottableTuple other) {
            return Double.compare(distance, other.distance);
        }
    }

    class Plottable {
        public Point location;
        private Set<Plottable> containingPlottables;

        public Plottable(Set<Plottable> plots) {
            this.containingPlottables = plots;
        }

        public Plottable() {
            this.containingPlottables = new HashSet<>();
            this.containingPlottables.add(this);
        }

        public Set<Plottable> getContainingPlottables() {
            return containingPlottables;
        }
    }
}
Map all your circles onto a 2D grid first. You then only need to compare the circles in a cell with the other circles in that cell and in the 8 neighbouring cells, 9 cells in total (you can reduce that to five by using a brick pattern instead of a regular grid).
If you only need to be really approximate, then you can just group all the circles that fall into a cell together. You will probably also want to merge cells that only have a small number of circles with their neighbours, but this will be fast.
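A rough sketch of that grid bucketing, reusing the Plottable class from the question (the cell size is an assumption, roughly the merge distance; coordinates are assumed non-negative, as in the question's random data):

// Sketch: hash each point into a coarse grid cell; only points in the same or
// neighbouring cells ever need to be compared. Uses java.awt.Point as the cell key
// since it already defines equals()/hashCode().
static Map<Point, List<Plottable>> buildGrid(List<Plottable> points, int cellSize) {
    Map<Point, List<Plottable>> grid = new HashMap<>();
    for (Plottable p : points) {
        Point key = new Point(p.location.x / cellSize, p.location.y / cellSize);
        List<Plottable> bucket = grid.get(key);
        if (bucket == null) {
            bucket = new ArrayList<>();
            grid.put(key, bucket);
        }
        bucket.add(p);
    }
    return grid;
}

// Sketch: the merge candidates for p are the points in its own cell and the 8 neighbours.
static List<Plottable> candidates(Map<Point, List<Plottable>> grid, Plottable p, int cellSize) {
    List<Plottable> result = new ArrayList<>();
    int cx = p.location.x / cellSize;
    int cy = p.location.y / cellSize;
    for (int dx = -1; dx <= 1; dx++) {
        for (int dy = -1; dy <= 1; dy++) {
            List<Plottable> bucket = grid.get(new Point(cx + dx, cy + dy));
            if (bucket != null) {
                result.addAll(bucket);
            }
        }
    }
    return result;
}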
This problem is going to take a reasonable amount of computation no matter how you do it, so the question is: can you do all the computation up-front so that at run-time it's just a look-up? I would build a tree-like structure where each layer holds the points that need to be drawn for a given zoom level. It takes more computation up-front, but at run-time you are simply drawing a list of points, which is fast.
My idea is to decide what the resolution of each zoom level is (i.e. at zoom level 1 points closer than 15 get merged; at zoom level 2 points closer than 30 get merged), then go through your points making groups of points that are within 15 of each other and pick a point to represent each group at the higher zoom. Now you have a 2-layer tree. Then you pass over the second layer grouping all points that are within 30 of each other, and so on all the way up to your highest zoom level. Now save this tree structure to file, and at run-time you can very quickly change zoom levels by simply drawing all points at the appropriate tree level. If you need to add or remove points, that can be done dynamically by figuring out where to attach them to the tree.
There are two downsides to this method that come to mind: 1) it will take a long time to compute the tree, but you only have to do this once, and 2) you'll have to think really carefully about how you build the tree, based on how you want the groupings to be done at higher levels. For example, the grouping at the top level may not be the one you want. Maybe instead of building the tree based off the previous layer, you always want to go back to the original points. That said, some loss of precision always happens when you're trading off for faster run-time.
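A sketch of what building those layers could look like. ClusterPass here is a placeholder for whatever single-level merging routine is chosen (grid bucketing, the question's own pass, etc.); the base distance of 15 and the doubling per level follow the example above:

// Sketch: precompute one merged layer per zoom level; drawing a zoom level is then
// just iterating the corresponding list.
interface ClusterPass {
    List<Plottable> merge(List<Plottable> points, int mergeDistance);
}

static List<List<Plottable>> buildZoomLayers(List<Plottable> base, int levels, ClusterPass pass) {
    List<List<Plottable>> layers = new ArrayList<>();
    layers.add(base);                      // level 0: the original points
    List<Plottable> current = base;
    int mergeDistance = 15;                // assumed base resolution
    for (int level = 1; level < levels; level++) {
        current = pass.merge(current, mergeDistance);
        layers.add(current);
        mergeDistance *= 2;                // each level merges over twice the distance
    }
    return layers;
}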
EDIT
So you have a problem which requires O(n^2) comparisons, and you say it has to be done in real time, cannot be pre-computed, and has to be fast. Good luck with that.
Let's analyze the problem a bit: if you do no pre-computation, then in order to decide which points can be merged you have to compare every pair of points, which is O(n^2) comparisons. I suggested building a tree beforehand, O(n^2 log n) once, but then run-time is just a lookup, O(1). You could also do something in between where you do some work before and some at run-time, but that's how these problems always go: you have to do a certain amount of computation, and you can play games by doing some of it earlier, but at the end of the day you still have to do the computation.
For example, if you're willing to do some pre-computation, you could try keeping two copies of the list of points, one sorted by x-value and one sorted by y-value. Then, instead of comparing every pair of points, you can do 4 binary searches to find all the points within, say, a 30-unit box of the current point. That is more complicated, so it would be slower for a small number of points (say < 100), but it reduces the overall complexity to O(n log n), making it faster for large amounts of data.
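A sketch of the sorted-by-x part of that idea. Here the binary search narrows the x-window and the y bound is checked inside it, which is slightly simpler than four searches but the same principle; box is the assumed half-width of the search box, and Plottable is the class from the question:

// Sketch: sortedByX is the point list kept sorted by x. Binary search finds the first
// candidate, then the scan stops as soon as x leaves the window.
static List<Plottable> pointsInBox(List<Plottable> sortedByX, Plottable centre, int box) {
    int lo = 0, hi = sortedByX.size();
    while (lo < hi) {                      // first index with x >= centre.x - box
        int mid = (lo + hi) >>> 1;
        if (sortedByX.get(mid).location.x < centre.location.x - box) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    List<Plottable> result = new ArrayList<>();
    for (int i = lo; i < sortedByX.size()
            && sortedByX.get(i).location.x <= centre.location.x + box; i++) {
        Plottable p = sortedByX.get(i);
        if (p != centre && Math.abs(p.location.y - centre.location.y) <= box) {
            result.add(p);
        }
    }
    return result;
}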
EDIT 2
If you're worried about multiple points at the same location, why not do a first pass removing the redundant points? Then you'll have a smaller "search list":

// First pass: keep only one representative point per epsilon-sized cluster.
// Plottable and distance() as in the question; since distance() returns a squared
// value, epsilon should be the squared tolerance here.
List<Plottable> searchList = new ArrayList<>();
for (Plottable pt1 : points) {
    boolean clean = true;
    for (Plottable pt2 : searchList) {
        if (distance(pt1.location, pt2.location) < epsilon) {
            clean = false;
            break;
        }
    }
    if (clean) {
        searchList.add(pt1);
    }
}
// Now you have a smaller list to act on, with only one point per cluster.
// ... I guess this is actually the same as my first suggestion if you make one of
// these search lists per zoom level. Huh.
EDIT 3: Graph Traversal
A totally new approach would be to build a graph out of the points and do some sort of longest-edge-first graph traversal on them. So pick a point, draw it, and traverse its longest edge, draw that point, etc. Repeat this until you come to a point which doesn't have any untraversed edges longer than your zoom resolution. The number of edges per point gives you an easy way to tradeoff speed for correctness. If the number of edges per point was small and constant, say 4, then with a bit of cleverness you could build the graph in O(n) time and also traverse it to draw points in O(n) time. Fast enough to do it on the fly with no pre-computation.
Just a wild guess and something that occurred to me while reading responses from others.
Do a multi-step comparison. Assume your combining distance at the current zoom level is 20 meters. First, take the difference |X1 - X2|. If this is bigger than 20 meters then you are done: the points are too far apart. Next, do the same with |Y1 - Y2| to reject combining the points.
You could stop here and be happy if you are good with using only horizontal/vertical distances as your metric for combining. Much less math (no squaring or square roots). Pythagoras wouldn't be happy, but your users might be.
If you really insist on exact answers, do the two subtraction/comparison steps above. If the points are within the horizontal and vertical limits, THEN you do the full Pythagorean check.
Assuming your points are not all clustered very close to the combining limit, this should save some CPU cycles.
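A sketch of that rejection ordering; limit is the combining distance for the current zoom level, and the exact check can compare squared values (as the question already does), so no square root is needed:

// Sketch: cheap axis checks first, full distance check only for survivors.
// Point here is java.awt.Point, as in the question's code.
static boolean shouldCombine(Point a, Point b, double limit) {
    double dx = Math.abs(a.x - b.x);
    if (dx > limit) return false;               // too far apart horizontally
    double dy = Math.abs(a.y - b.y);
    if (dy > limit) return false;               // too far apart vertically
    return dx * dx + dy * dy <= limit * limit;  // exact check on squared values
}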
This is still approximately an O(n^2) technique, but the math should be simpler. If you have the memory, you could store distances between each set of points and then you never have to compute it again. This could take up more memory than you have and also grows at a rate of approximately O(n^2), so be careful.
Also, you could make a linked list or sorted array of all your points, sorted in order of increasing X or increasing Y (I don't think you need both, just one). Then walk through the list in sorted order. For each point, check the neighbors until (X1 - X2) is bigger than your combining distance, then stop. You don't have to compare every pair of points, which is O(n^2); you only have to compare neighbors that are close in one dimension, which quickly prunes your large list down to a small one. As you move through the list, you only have to compare points that have a bigger X than your current candidate, because you have already compared and combined with all previous X values. This gets you closer to the O(n) complexity you want. Of course, you still need to check the Y dimension and fully qualify the points before actually combining them; don't use the X distance alone to make the combining decision.
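A sketch of that sweep over a list sorted by x (Plottable is the class from the question; limit is the combining distance; requires java.util.Collections and java.util.Comparator):

// Sketch: after sorting by x, the inner loop stops as soon as the x gap alone
// exceeds the limit, so each point is only compared with its near neighbours.
static List<int[]> closePairs(List<Plottable> points, int limit) {
    Collections.sort(points, new Comparator<Plottable>() {
        @Override
        public int compare(Plottable a, Plottable b) {
            return Integer.compare(a.location.x, b.location.x);
        }
    });
    List<int[]> pairs = new ArrayList<>();
    for (int i = 0; i < points.size(); i++) {
        for (int j = i + 1; j < points.size(); j++) {
            int dx = points.get(j).location.x - points.get(i).location.x;
            if (dx > limit) break;      // everything further right is even further away in x
            int dy = points.get(j).location.y - points.get(i).location.y;
            if (dx * dx + dy * dy <= limit * limit) {
                pairs.add(new int[] { i, j });
            }
        }
    }
    return pairs;
}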
OK, so I have a 3x3 jigsaw puzzle game that I am writing and I am stuck on the solution method.
public Piece[][] solve(int r, int c) {
    if (isSolved())
        return board;
    board[r][c] = null;
    for (Piece p : pieces) {
        if (tryInsert(p, r, c)) {
            pieces.remove(p);
            break;
        }
    }
    if (getPieceAt(r, c) != null)
        return solve(nextLoc(r, c).x, nextLoc(r, c).y);
    else {
        pieces.add(getPieceAt(prevLoc(r, c).x, prevLoc(r, c).y));
        return solve(prevLoc(r, c).x, prevLoc(r, c).y);
    }
}
I know I haven't provided much info on the puzzle, but my algorithm should work regardless of the specifics. I've tested all the helper methods; pieces is a List of all the unused Pieces, and tryInsert attempts to insert the piece in all possible orientations, so if the piece can be inserted, it will be. Unfortunately, when I test it, I get a StackOverflowError.
Your DFS-style solution algorithm never re-adds Piece objects to the pieces variable. This is not sound, and can easily lead to infinite recursion.
Suppose, for example, that you have a simple 2-piece puzzle, a 2x1 grid, where the only valid arrangement of pieces is [2, 1]. This is what your algorithm does:
1) Put piece 1 in slot 1
2) It fits! Remove this piece, pieces now = {2}. Solve on nextLoc()
3) Now try to fit piece 2 in slot 2... doesn't work
4) Solve on prevLoc()
5) Put piece 2 in slot 1
6) It fits! Remove this piece, pieces is now empty. Solve on nextLoc()
7) No pieces to try, so we fail. Solve on prevLoc()
8) No pieces to try, so we fail. Solve on prevLoc()
9) No pieces to try, so we fail. Solve on prevLoc()
Repeat ad infinitum...
As commenters have mentioned, though, this may only be part of the issue. A lot of critical code is missing from your post, and there may be errors there as well.
I think you need to structure your recursion differently. I'm also not sure that adding and removing pieces from different places in the list is safe; much as I'd rather avoid allocation in the recursion, it might be safest to create a list copy, or to scan the board so far for instances of the same piece to avoid re-use.
public Piece[][] solve(int r, int c, List<Piece> piecesLeft) {
    // Note that this check is equivalent to
    // 'have r and c gone past the last square on the board?'
    // or 'are there no pieces left?'
    if (isSolved())
        return board;
    // Try each remaining piece in this square
    for (Piece p : piecesLeft) {
        // in each rotation
        for (int orientation = 0; orientation < 4; ++orientation) {
            if (tryInsert(p, r, c, orientation)) {
                // It fits: recurse to try the next square
                // Create the new list of pieces left
                List<Piece> piecesLeft2 = new ArrayList<Piece>(piecesLeft);
                piecesLeft2.remove(p);
                // (can stop here and return success if piecesLeft2 is empty)
                // Find the next point
                Point next = nextLoc(r, c);
                // (could also stop here if this is past end of board)
                // Recurse to try next square
                Piece[][] solution = solve(next.x, next.y, piecesLeft2);
                if (solution != null) {
                    // This sequence worked - success!
                    return solution;
                }
            }
        }
    }
    // no solution with this piece
    return null;
}
A StackOverflowError in a recursive function means that you're either lacking a valid stop condition or trying to solve too big a problem, in which case you should try an iterative algorithm instead. A puzzle containing 9 pieces isn't too big a problem, so the former must be the case.
The condition for ending the recursion is board completion. You're only trying to insert a piece in the for loop, so the problem is probably either that the tryInsert() method doesn't insert the piece or that it doesn't get invoked. As you're sure that this method works fine, I'd suggest removing break; from
if (p.equals(prev[r][c]))
{
    System.out.println("Hello");
    break;
}
because it's the only thing that may prevent the piece from being inserted. I'm still not sure I understand the role of prev, though.
I am generating my world (random, infinite and 2D) in sections that are x by y; when I reach the end of x, a new section is formed. If in section one I have hills, how can I make it so that in section two those hills will continue? Is there some way I could make this happen?
So it would look something like this
1221
1 = generated land
2 = non-generated land that will fill in between the two 1s
I get this now:
Is there any way to make this flow better?
This seems like just an algorithm issue. Your generation mechanism needs a start point. On the initial call it would be say 0, on subsequent calls it would be the finishing position of the previous "chunk".
If I were doing this, I'd probably make the height of the next point plus or minus 0-3 from the previous one, using some sort of distribution - e.g. 10% of the time it's +/- 3, 25% of the time it's +/- 2, 25% of the time it's 0 and 40% of the time it's +/- 1.
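A sketch of that kind of weighted step generator; generateChunk and the chunk length are illustrative names, and the key point is passing the previous chunk's last height in as the start so the terrain joins up at the seam:

// Sketch: random-walk heights using the suggested distribution
// (10% step 3, 25% step 2, 25% flat, 40% step 1), each step randomly up or down.
// Requires java.util.Random.
static int[] generateChunk(int length, int startHeight, Random rng) {
    int[] heights = new int[length];
    int current = startHeight;
    for (int i = 0; i < length; i++) {
        double roll = rng.nextDouble();
        int step;
        if (roll < 0.10)      step = 3;   // 10% of the time
        else if (roll < 0.35) step = 2;   // 25% of the time
        else if (roll < 0.60) step = 0;   // 25% of the time
        else                  step = 1;   // 40% of the time
        if (step != 0 && rng.nextBoolean()) step = -step;  // up or down
        current += step;
        heights[i] = current;
    }
    return heights;
}

The next chunk then starts from heights[heights.length - 1] of the previous one.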
If I understood your problem correctly, here is a solution:
If you generate the delta (difference) between the hills and cap it at a fixed value (so changes are never too big), then you can carry over the height of the last hill from the previous section when generating the new one and apply the first randomly generated delta (of the new section) to the carried-over hill height.
If you're generating these "hills" sequentially, I would create an accessor method that provides a starting value for the next section so that the hill continues. It seems you already constrain the random hill height by some value when drawing a hill within a single section; extend that functionality with this new accessor method.
My take on a possible implementation of this:
public class DrawHillSection {
    private int index;
    private int[] x = new int[50];

    public void drawHillSection() {
        for (int i = 0; i < 50; i++) {
            if (i == 0) {
                // Start from the last height of the previous section.
                x[0] = getPreviousHillSectionHeight(index - 1);
            } else {
                // ...
                // Your current implementation to create a random
                // height with some delta-y limit.
                // ...
            }
        }
    }

    public int getPreviousHillSectionHeight(int index) {
        // Would look up the section at 'index' and return its last column's height.
        return x[49];
    }
}
}