I am taking an algorithms course where they go over weighted quick-union. I am confused about why we are concerned with the size of a tree as opposed to its depth.
When I tried writing out the code, mine looked different from the solution provided.
From my understanding, the size of the tree (the total number of nodes in it) is less important than the depth of the tree when it comes to the run time of the union function (lg n), because it is the depth that determines how many lookups are needed to get to the root of a node. Is that right?
Thanks
My code:
public void union(int p, int q) {
    int root_p = root(p);
    int root_q = root(q);
    // If the two trees are not already connected, union them
    if (root_p != root_q) {
        // The two trees aren't connected; check which is deeper
        // and attach the shallower one to the deeper one
        if (depth[root_p] > depth[root_q]) {
            // p's tree is deeper, point q's root to p's root
            id[root_q] = root_p;
        } else if (depth[root_q] > depth[root_p]) {
            // q's tree is deeper, point p's root to q's root
            id[root_p] = root_q;
        } else {
            // Equal depth: point q's root to p's root and increment p's depth by 1
            id[root_q] = root_p;
            depth[root_p] += 1;
        }
    }
}
Solution code provided:
public void union(int p, int q) {
    int rootP = find(p);
    int rootQ = find(q);
    if (rootP == rootQ) return;

    // make smaller root point to larger one
    if (size[rootP] < size[rootQ]) {
        parent[rootP] = rootQ;
        size[rootQ] += size[rootP];
    } else {
        parent[rootQ] = rootP;
        size[rootP] += size[rootQ];
    }
    count--;
}
You are correct that the depth (actually the height) is more directly related to the run time, but using either one will result in O(log N) run time for union and find.
The proof is easy. When we begin (when all sets are disjoint), every root of height h has at least 2^(h-1) nodes, and this invariant is maintained by the union and find operations: a tree's height only grows when two trees of equal height h are merged, and the merged tree then has at least 2^(h-1) + 2^(h-1) = 2^h nodes. Therefore, if a tree has size n, its height is at most floor(log2 n) + 1.
So either one will do. But very soon you will learn about path compression, which makes it difficult to keep track of the height of roots, while the size will still be available. At that point you will be able to use rank, which is kind of like height, or continue to use the size. Again, either one will do, but I find the size easier to reason about, so I always use that.
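For concreteness, here is a minimal sketch of weighted union by size combined with path compression (path halving, one common variant); it assumes the parent, size, and count fields from the solution code above:

public int find(int p) {
    while (p != parent[p]) {
        parent[p] = parent[parent[p]]; // path halving: point p at its grandparent
        p = parent[p];
    }
    return p;
}

public void union(int p, int q) {
    int rootP = find(p);
    int rootQ = find(q);
    if (rootP == rootQ) return;
    if (size[rootP] < size[rootQ]) { // attach the smaller tree under the larger
        parent[rootP] = rootQ;
        size[rootQ] += size[rootP];
    } else {
        parent[rootQ] = rootP;
        size[rootP] += size[rootQ];
    }
    count--;
}

With both weighting and compression, the amortized cost per operation becomes nearly constant (inverse-Ackermann).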
I'm trying to write a time-efficient algorithm that can detect a group of overlapping circles and make a single circle in the "middle" of the group to represent that group. The practical application of this is representing GPS locations over a map, but the conversion into Cartesian coordinates is already handled, so that's not relevant. The desired effect is that at different zoom levels, clusters of close-together points just appear as a single circle (which will have the number of points printed in the centre in the final version).
In this example the circles just have a radius of 15, so for collision detection the distance calculation (Pythagoras) is not square-rooted; it is compared against 225 instead. I was trying anything to shave off time, but the problem is this really needs to happen very quickly, because it's a user-facing bit of code that needs to be snappy and good-looking.
I've given this a go and it works pretty well with small data sets. Two big problems: it takes too long, and it can run out of memory if all the points are on top of one another.
The route I've taken is to calculate the distance between each pair of points in a first pass, then take the shortest distance first and start combining from there. Anything that's been combined becomes ineligible for combination on that pass, and the whole list is passed back to the distance calculation again until nothing changes.
To be honest I think it needs a radical shift in approach, and I think it's a little beyond me. I've refactored my code into one class for ease of posting and generated random points to give an example.
package mergepoints;

import java.awt.Point;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Merger {

    public static void main(String[] args) {
        Merger m = new Merger();
        m.subProcess(m.createRandomList());
    }

    private List<Plottable> createRandomList() {
        List<Plottable> points = new ArrayList<>();
        for (int i = 0; i < 50000; i++) {
            Plottable p = new Plottable();
            p.location = new Point((int) Math.floor(Math.random() * 1000),
                    (int) Math.floor(Math.random() * 1000));
            points.add(p);
        }
        return points;
    }

    private List<Plottable> subProcess(List<Plottable> visible) {
        List<PlottableTuple> tuples = new ArrayList<PlottableTuple>();
        // create a tuple to store distance and matching objects together
        for (Plottable p : visible) {
            PlottableTuple tuple = new PlottableTuple();
            tuple.a = p;
            tuples.add(tuple);
        }
        // Work out each Plottable's distance from one another and order them
        // shortest first. We may need to do this multiple times for one set,
        // so it goes in its own method.
        // This is the bit that takes ages.
        setDistances(tuples);
        // Sort so that the smallest distances are at the top, then parse the
        // set and combine the closest eligible pairs (within the merge
        // distance) into a combined pin. Any plottable that's been combined
        // is no longer eligible for combining, so ignore it on this parse.
        List<PlottableTuple> sorted = new ArrayList<>(tuples);
        Collections.sort(sorted);
        Set<Plottable> done = new HashSet<>();
        Set<Plottable> mergedSet = new HashSet<>();
        for (PlottableTuple pt : sorted) {
            if (!done.contains(pt.a) && pt.distance <= 225) {
                Plottable merged = combine(pt, done);
                done.add(pt.a);
                for (PlottableTuple tup : pt.others) {
                    done.add(tup.a);
                }
                mergedSet.add(merged);
            }
        }
        // If we haven't processed anything we are done; just return the
        // visible list.
        if (done.size() == 0) {
            return visible;
        } else {
            // Change the list to represent the new combined plottables and
            // repeat the process.
            visible.removeAll(done);
            visible.addAll(mergedSet);
            return subProcess(visible);
        }
    }

    private Plottable combine(PlottableTuple pt, Set<Plottable> done) {
        List<Plottable> plottables = new ArrayList<>();
        plottables.addAll(pt.a.containingPlottables);
        for (PlottableTuple otherTuple : pt.others) {
            if (!done.contains(otherTuple.a)) {
                plottables.addAll(otherTuple.a.containingPlottables);
            }
        }
        int x = 0;
        int y = 0;
        for (Plottable p : plottables) {
            Point position = p.location;
            x += position.x;
            y += position.y;
        }
        x = x / plottables.size();
        y = y / plottables.size();
        Plottable merged = new Plottable();
        merged.containingPlottables.addAll(plottables);
        merged.location = new Point(x, y);
        return merged;
    }

    private void setDistances(List<PlottableTuple> tuples) {
        System.out.println("pins: " + tuples.size());
        int loops = 0;
        // Start from the first item and loop through, then repeat but
        // starting with the next item.
        for (int startIndex = 0; startIndex < tuples.size() - 1; startIndex++) {
            // Get the data for the start Plottable
            PlottableTuple startTuple = tuples.get(startIndex);
            Point startLocation = startTuple.a.location;
            for (int i = startIndex + 1; i < tuples.size(); i++) {
                loops++;
                PlottableTuple compareTuple = tuples.get(i);
                double distance = distance(startLocation, compareTuple.a.location);
                setDistance(startTuple, compareTuple, distance);
                setDistance(compareTuple, startTuple, distance);
            }
        }
        System.out.println("loops " + loops);
    }

    private void setDistance(PlottableTuple from, PlottableTuple to,
            double distance) {
        if (distance < from.distance || from.others == null) {
            from.distance = distance;
            from.others = new HashSet<>();
            from.others.add(to);
        } else if (distance == from.distance) {
            from.others.add(to);
        }
    }

    private double distance(Point a, Point b) {
        if (a.equals(b)) {
            return 0.0;
        }
        double result = (((double) a.x - (double) b.x) * ((double) a.x - (double) b.x))
                + (((double) a.y - (double) b.y) * ((double) a.y - (double) b.y));
        return result;
    }

    class PlottableTuple implements Comparable<PlottableTuple> {
        public Plottable a;
        public Set<PlottableTuple> others;
        public double distance;

        @Override
        public int compareTo(PlottableTuple other) {
            return Double.compare(distance, other.distance);
        }
    }

    class Plottable {
        public Point location;
        private Set<Plottable> containingPlottables;

        public Plottable(Set<Plottable> plots) {
            this.containingPlottables = plots;
        }

        public Plottable() {
            this.containingPlottables = new HashSet<>();
            this.containingPlottables.add(this);
        }

        public Set<Plottable> getContainingPlottables() {
            return containingPlottables;
        }
    }
}
Map all your circles onto a 2D grid first. You then only need to compare the circles in a cell with the other circles in that cell and in its 8 neighbours (you can reduce that to five cells by using a brick pattern instead of a regular grid).
If you only need to be really approximate, then you can just group together all the circles that fall into a cell. You will probably also want to merge cells that only have a small number of circles together with their neighbours, but this will be fast.
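A rough sketch of that bucketing, assuming the Plottable class from the question, non-negative coordinates, and a merge radius of 15 (a cell size of 30 guarantees every merge candidate sits in the same cell or one of the 8 neighbours); cellKey, buildGrid and candidates are hypothetical helper names:

// Pack both cell coordinates into one long key (high 32 bits = x cell).
private static long cellKey(int cx, int cy) {
    return ((long) cx << 32) | (cy & 0xffffffffL);
}

private Map<Long, List<Plottable>> buildGrid(List<Plottable> points, int cellSize) {
    Map<Long, List<Plottable>> grid = new HashMap<>();
    for (Plottable p : points) {
        long key = cellKey(p.location.x / cellSize, p.location.y / cellSize);
        grid.computeIfAbsent(key, k -> new ArrayList<>()).add(p);
    }
    return grid;
}

private List<Plottable> candidates(Map<Long, List<Plottable>> grid, Plottable p, int cellSize) {
    List<Plottable> result = new ArrayList<>();
    int cx = p.location.x / cellSize;
    int cy = p.location.y / cellSize;
    for (int dx = -1; dx <= 1; dx++) { // this cell plus its 8 neighbours
        for (int dy = -1; dy <= 1; dy++) {
            List<Plottable> cell = grid.get(cellKey(cx + dx, cy + dy));
            if (cell != null) result.addAll(cell);
        }
    }
    return result; // each candidate still needs the exact squared-distance test (<= 225)
}

Building the grid is O(n), and each candidate query touches at most 9 cells, so a full pass stays close to linear when the points are reasonably spread out.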
This problem is going to take a reasonable amount of computation no matter how you do it; the question then is: can you do all the computation up-front so that at run-time it's just doing a look-up? I would build a tree-like structure where each layer is all the points that need to be drawn for a given zoom level. It takes more computation up-front, but at run-time you are simply drawing a list of points, which is fast.
My idea is to decide what the resolution of each zoom level is (i.e. at zoom level 1, points closer than 15 get merged; at zoom level 2, points closer than 30 get merged), then go through your points making groups of points that are within 15 of each other, and pick a point to represent that group at the higher zoom. Now you have a 2-layer tree. Then you pass over the second layer, grouping all points that are within 30 of each other, and so on all the way up to your highest zoom level. Now save this tree structure to a file, and at run-time you can change zoom levels very quickly by simply drawing all points at the appropriate tree level. If you need to add or remove points, that can be done dynamically by figuring out where to attach them to the tree.
There are two downsides to this method that come to mind: 1) it will take a long time to compute the tree, but you only have to do this once, and 2) you'll have to think really carefully about how you build the tree, based on how you want the groupings to be done at higher levels. For example, in the image below the top level may not be the grouping that you want. Maybe instead of building the tree off the previous layer, you always want to go back to the original points. That said, some loss of precision always happens when you trade off for faster run-time.
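A sketch of one layer of that precomputation, under the assumption that a simple greedy grouping is acceptable (every ungrouped point within radius of a chosen representative joins its group, and the group's centroid represents it at the next level); buildLayer is a hypothetical name and java.awt.Point is used as in the question:

private List<Point> buildLayer(List<Point> points, double radius) {
    List<Point> reps = new ArrayList<>(); // representatives for the next zoom level
    boolean[] grouped = new boolean[points.size()];
    for (int i = 0; i < points.size(); i++) {
        if (grouped[i]) continue;
        Point rep = points.get(i);
        long sx = rep.x, sy = rep.y;
        int n = 1;
        for (int j = i + 1; j < points.size(); j++) {
            if (!grouped[j] && points.get(j).distance(rep) <= radius) {
                grouped[j] = true; // joins rep's group, ineligible for others
                sx += points.get(j).x;
                sy += points.get(j).y;
                n++;
            }
        }
        reps.add(new Point((int) (sx / n), (int) (sy / n))); // group centroid
    }
    return reps;
}

Calling this repeatedly with a doubling radius (15, 30, 60, ...) yields one list of representatives per zoom level; each pass is O(n^2), but as noted it runs once, offline.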
EDIT
So you have a problem which requires O(n^2) comparisons, and you say it has to be done in real-time, cannot be pre-computed, and has to be fast. Good luck with that.
Let's analyze the problem a bit: if you do no pre-computation, then in order to decide which points can be merged you have to compare every pair of points; that's O(n^2) comparisons. I suggested building a tree beforehand, O(n^2 log n) once, but then the runtime is just a lookup, O(1). You could also do something in between, where you do some work before and some at run-time, but that's how these problems always go: you have to do a certain amount of computation, and you can play games by doing some of it earlier, but at the end of the day you still have to do the computation.
For example, if you're willing to do some pre-computation, you could try keeping two copies of the list of points, one sorted by x-value and one sorted by y-value; then instead of comparing every pair of points, you can do 4 binary searches to find all the points within, say, a 30-unit box of the current point. This is more complicated, so it would be slower for a small number of points (say < 100), but it reduces the overall complexity to O(n log n), making it faster for large amounts of data.
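A sketch of the box query with a single x-sorted list (a hand-rolled lower-bound binary search, then a bounded scan that filters on y); pointsInBox is a hypothetical helper name, and extending it to the two-list, four-search variant described above is mechanical:

// Assumes sortedByX is a List<java.awt.Point> pre-sorted by x.
private static List<Point> pointsInBox(List<Point> sortedByX, Point c, int half) {
    // Lower bound of (c.x - half) via binary search on the x-sorted list.
    int lo = 0, hi = sortedByX.size();
    while (lo < hi) {
        int mid = (lo + hi) >>> 1;
        if (sortedByX.get(mid).x < c.x - half) lo = mid + 1;
        else hi = mid;
    }
    // Scan rightwards while still inside the x-range, filtering on y.
    List<Point> inBox = new ArrayList<>();
    for (int i = lo; i < sortedByX.size() && sortedByX.get(i).x <= c.x + half; i++) {
        Point p = sortedByX.get(i);
        if (Math.abs(p.y - c.y) <= half) inBox.add(p);
    }
    return inBox;
}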
EDIT 2
If you're worried about multiple points at the same location, why don't you do a first pass removing the redundant points? Then you'll have a smaller "search list". In Java (epsilon being your "same location" tolerance and distance the existing helper):

List<Point> searchList = new ArrayList<>();
for (Point pt1 : points) {
    boolean clean = true;
    for (Point pt2 : searchList) {
        if (distance(pt1, pt2) < epsilon) {
            clean = false;
            break;
        }
    }
    if (clean) {
        searchList.add(pt1);
    }
}
// Now you have a smaller list to act on with only 1 point per cluster
// ... I guess this is actually the same as my first suggestion if you make
// one of these search lists per zoom level. Huh.
EDIT 3: Graph Traversal
A totally new approach would be to build a graph out of the points and do some sort of longest-edge-first graph traversal on them. So pick a point, draw it, traverse its longest edge, draw that point, and so on. Repeat this until you come to a point which doesn't have any untraversed edges longer than your zoom resolution. The number of edges per point gives you an easy way to trade off speed for correctness. If the number of edges per point is small and constant, say 4, then with a bit of cleverness you could build the graph in O(n) time and also traverse it to draw points in O(n) time. Fast enough to do it on the fly with no pre-computation.
Just a wild guess, and something that occurred to me while reading responses from others.
Do a multi-step comparison. Assume your combining distance at the current zoom level is 20 meters. First, compute (X1 - X2). If this alone is bigger than 20 meters, you are done: the points are too far apart. Next, compute (Y1 - Y2) and do the same thing to reject combining the points.
You could stop here and be happy if you are fine with using only horizontal/vertical distances as your metric for combining. Much less math (no squaring or square roots). Pythagoras wouldn't be happy, but your users might.
If you really insist on exact answers, do the two subtraction/comparison steps above. If the points are within both the horizontal and vertical limits, THEN do the full Pythagorean check with square roots.
Assuming all your points are not highly clustered very close to the combining limit, this should save some CPU cycles.
This is still approximately an O(n^2) technique, but the math should be simpler. If you have the memory, you could store the distance between each pair of points and then never have to compute it again. This could take up more memory than you have, and it also grows at approximately O(n^2), so be careful.
Also, you could make a linked list or sorted array of all your points, sorted in order of increasing X or increasing Y (I don't think you need both, just one). Then walk through the list in sorted order. For each point, check the neighbours until (X1 - X2) is bigger than your combining distance, then stop. You don't have to compare every pair of points in O(n^2); you only have to compare neighbours that are close in one dimension to quickly prune the large list to a small one. As you move through the list, you only have to compare points that have a bigger X than your current candidate, because you have already compared and combined with all previous X values. This gets you closer to the O(n) complexity you want. Of course, you still need to check the Y dimension and fully qualify the points before you actually combine them; don't use the X distance alone to make your combining decision.
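A sketch of that sweep, assuming java.awt.Point and a combining distance named limit (both assumptions, not from the original post); note the break as soon as the X gap alone rules out every later point:

int limit = 15; // combining distance at this zoom level (assumed)
List<Point> pts = new ArrayList<>(points);
pts.sort(Comparator.comparingInt(p -> p.x)); // sweep order: increasing X
for (int i = 0; i < pts.size(); i++) {
    Point a = pts.get(i);
    for (int j = i + 1; j < pts.size(); j++) {
        Point b = pts.get(j);
        if (b.x - a.x > limit) break;              // every later point is even further in X
        if (Math.abs(b.y - a.y) > limit) continue; // cheap vertical reject
        long dx = b.x - a.x, dy = b.y - a.y;
        if (dx * dx + dy * dy <= (long) limit * limit) {
            // a and b fully qualify as merge candidates
        }
    }
}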
I was doing Codeforces problems and wanted to implement Dijkstra's shortest-path algorithm for a directed graph in Java using an adjacency matrix, but I'm having difficulty making it work for sizes other than the one it is hard-coded to handle.
Here is my working code
int max = Integer.MAX_VALUE; // substitute for infinity
int[][] points = { // I used -1 to denote non-adjacency/edges
    // 0, 1, 2, 3, 4, 5, 6, 7
    {-1,20,-1,80,-1,-1,90,-1}, // 0
    {-1,-1,-1,-1,-1,10,-1,-1}, // 1
    {-1,-1,-1,10,-1,50,-1,20}, // 2
    {-1,-1,-1,-1,-1,-1,20,-1}, // 3
    {-1,50,-1,-1,-1,-1,30,-1}, // 4
    {-1,-1,10,40,-1,-1,-1,-1}, // 5
    {-1,-1,-1,-1,-1,-1,-1,-1}, // 6
    {-1,-1,-1,-1,-1,-1,-1,-1}  // 7
};
int[] record = new int[8]; // keeps track of the distance from start to each node
Arrays.fill(record, max);
int sum = 0;
int q1 = 0;
int done = 0;
ArrayList<Integer> Q1 = new ArrayList<Integer>(); // nodes to traverse
ArrayList<Integer> Q2 = new ArrayList<Integer>(); // nodes collected while traversing
Q1.add(0); // starting point
q1 = Q1.get(0);
while (done < 9) { // <<< My Problem
    for (int q2 = 1; q2 < 8; q2++) { // skips over the first/starting node
        if (points[q1][q2] != -1) { // if node is connected by an edge
            if (record[q1] == max) // never visited before
                sum = 0;
            else
                sum = record[q1]; // starts from where it left off
            int total = sum + points[q1][q2]; // total distance of route
            if (total < record[q2]) // connected node distance
                record[q2] = total; // if smaller
            Q2.add(q2); // collected node
        }
    }
    done++;
    Q1.remove(0); // removes the first node because it has just been used
    if (Q1.size() == 0) { // if there are no more nodes to traverse
        Q1 = Q2; // pours all the collected connecting nodes into Q1
        Q2 = new ArrayList<Integer>();
        q1 = Q1.get(0);
    } else {
        q1 = Q1.get(0); // sets starting point
    }
}
However, my version of the algorithm only works because I set the while-loop bound to the solved answer. In other words, it only works for this problem/graph because I solved it by hand first.
How could I make it work for graphs of any size?
Here is the pictorial representation of the example graph my problem was based on:
I think the main answer you are looking for is that you should let the while-loop run until Q1 is empty. What you're doing is essentially a best-first search. There are more changes required, though, since your code is a bit unorthodox.
Commonly, Dijkstra's algorithm is used with a priority queue. Q1 is your "todo list", as I understand from your code. The specification of Dijkstra's says that the vertex closest to the starting vertex should be explored next, so rather than an ArrayList, you should use a PriorityQueue for Q1 that sorts vertices according to their distance from the starting vertex. The most common Java implementation uses the PriorityQueue together with a tuple class: an internal class which stores a reference to a vertex and a "distance" to the starting vertex. The specification for Dijkstra's also says that if a new edge is discovered that makes a vertex closer to the start, the DecreaseKey operation should be used on that entry in the priority queue to make the vertex come up earlier (since it is now closer). However, since PriorityQueue doesn't support that operation, a completely new entry is simply added to the queue. If you have a good implementation of a heap that supports this operation (I made one myself, here), then decreaseKey can significantly increase efficiency, as you won't need to create those tuples any more.
So I hope that is a sufficient answer: make a proper 'todo' list instead of Q1, and to make the algorithm generic, let the while-loop run until the todo list is empty.
Edit: I made you an implementation based on your format, that seems to work:
public void run() {
    final int[][] points = { // I used -1 to denote non-adjacency/edges
        // 0, 1, 2, 3, 4, 5, 6, 7
        {-1,20,-1,80,-1,-1,90,-1}, // 0
        {-1,-1,-1,-1,-1,10,-1,-1}, // 1
        {-1,-1,-1,10,-1,50,-1,20}, // 2
        {-1,-1,-1,-1,-1,-1,20,-1}, // 3
        {-1,50,-1,-1,-1,-1,30,-1}, // 4
        {-1,-1,10,40,-1,-1,-1,-1}, // 5
        {-1,-1,-1,-1,-1,-1,-1,-1}, // 6
        {-1,-1,-1,-1,-1,-1,-1,-1}  // 7
    };
    final int[] result = dijkstra(points, 0);
    System.out.print("Result:");
    for (final int i : result) {
        System.out.print(" " + i);
    }
}

public int[] dijkstra(final int[][] points, final int startingPoint) {
    final int[] record = new int[points.length]; // distance from start to each vertex
    final boolean[] explored = new boolean[points.length]; // whether each vertex is completely explored
    Arrays.fill(record, Integer.MAX_VALUE);
    final PriorityQueue<VertexAndDistance> todo = new PriorityQueue<>(points.length); // vertices left to traverse
    todo.add(new VertexAndDistance(startingPoint, 0)); // starting point (and distance 0)
    record[startingPoint] = 0; // we already know that the distance to the starting point is 0
    while (!todo.isEmpty()) { // continue until we have nothing left to do
        final VertexAndDistance next = todo.poll(); // take the next closest vertex
        final int q1 = next.vertex;
        if (explored[q1]) { // we have already done this one, don't do it again
            continue; // ...with the next vertex
        }
        // Find connected vertices (start at 0 so edges back into the source
        // are not skipped on graphs where that matters).
        for (int q2 = 0; q2 < points.length; q2++) {
            if (points[q1][q2] != -1) { // if the vertices are connected by an edge
                final int distance = record[q1] + points[q1][q2];
                if (distance < record[q2]) { // and it is closer than anything we've seen so far
                    record[q2] = distance;
                    todo.add(new VertexAndDistance(q2, distance)); // explore it later
                }
            }
        }
        explored[q1] = true; // we're done with this vertex now
    }
    return record;
}

private class VertexAndDistance implements Comparable<VertexAndDistance> {

    private final int distance;
    private final int vertex;

    private VertexAndDistance(final int vertex, final int distance) {
        this.vertex = vertex;
        this.distance = distance;
    }

    /**
     * Compares two {@code VertexAndDistance} instances by their distance.
     *
     * @param other The instance with which to compare this instance.
     * @return A positive integer if this distance is more than the distance
     *         of the specified object, a negative integer if it is less, or
     *         {@code 0} if they are equal.
     */
    @Override
    public int compareTo(final VertexAndDistance other) {
        return Integer.compare(distance, other.distance);
    }
}
Output: 0 20 40 50 2147483647 30 70 60 (the 2147483647, i.e. Integer.MAX_VALUE, means vertex 4 is unreachable from vertex 0).
I'm a student, and my team and I have to make a simulation of students' behaviour on a campus (making "groups of friends", walking, etc.). To find the path a student has to take, I used the A* algorithm (as I found out it's one of the fastest pathfinding algorithms). Unfortunately, our simulation doesn't run smoothly (it takes 1-2 seconds between successive iterations). I wanted to optimize the algorithm, but I don't have any idea what more I can do. Can you guys help me out and share information on whether it's possible to optimize my A* further? Here is the code:
public LinkedList<Field> getPath(Field start, Field exit) {
    LinkedList<Field> foundPath = new LinkedList<Field>();
    LinkedList<Field> opensList = new LinkedList<Field>();
    LinkedList<Field> closedList = new LinkedList<Field>();
    Hashtable<Field, Integer> gscore = new Hashtable<Field, Integer>();
    Hashtable<Field, Field> cameFrom = new Hashtable<Field, Field>();
    Field x = new Field();
    gscore.put(start, 0);
    opensList.add(start);
    while (!opensList.isEmpty()) {
        int min = -1;
        // searching for minimal F score
        for (Field f : opensList) {
            if (min == -1) {
                min = gscore.get(f) + getH(f, exit);
                x = f;
            } else {
                int currf = gscore.get(f) + getH(f, exit);
                if (min > currf) {
                    min = currf;
                    x = f;
                }
            }
        }
        if (x == exit) {
            // path reconstruction
            Field curr = exit;
            while (curr != start) {
                foundPath.addFirst(curr);
                curr = cameFrom.get(curr);
            }
            return foundPath;
        }
        opensList.remove(x);
        closedList.add(x);
        for (Field y : x.getNeighbourhood()) {
            if (!(y.getType() == FieldTypes.PAVEMENT || y.getType() == FieldTypes.GRASS)
                    || closedList.contains(y) || !(y.getStudent() == null)) {
                continue;
            }
            int tentGScore = gscore.get(x) + getDist(x, y);
            boolean distIsBetter = false;
            if (!opensList.contains(y)) {
                opensList.add(y);
                distIsBetter = true;
            } else if (tentGScore < gscore.get(y)) {
                distIsBetter = true;
            }
            if (distIsBetter) {
                cameFrom.put(y, x);
                gscore.put(y, tentGScore);
            }
        }
    }
    return foundPath;
}

private int getH(Field start, Field end) {
    int x = start.getX() - end.getX();
    int y = start.getY() - end.getY();
    if (x < 0) {
        x = x * (-1);
    }
    if (y < 0) {
        y = y * (-1);
    }
    return x + y;
}

private int getDist(Field start, Field end) {
    int ret = 0;
    if (end.getType() == FieldTypes.PAVEMENT) {
        ret = 8;
    } else if (start.getX() == end.getX() || start.getY() == end.getY()) {
        ret = 10;
    } else {
        ret = 14;
    }
    return ret;
}
EDIT
This is what I got from JProfiler:
So getH is a bottleneck, yes? Maybe caching the H score of each field would be a good idea?
A linked list is not a good data structure for the open set. You have to find the node with the smallest F in it: you can either search through the list in O(n), or insert in sorted position in O(n); either way it's O(n). With a heap it's only O(log n). Updating the G score would remain O(n) (since you have to find the node first), unless you also added a HashTable from nodes to indexes in the heap.
A linked list is also not a good data structure for the closed set, where you need a fast "contains", which is O(n) in a linked list. You should use a HashSet for that.
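A sketch of what the two swaps could look like, reusing Field, getH, getDist, start and exit from the question; OpenEntry is a hypothetical helper that freezes f = g + h at insertion time, and improved nodes are simply re-inserted, the usual substitute for decrease-key since java.util.PriorityQueue doesn't support it:

class OpenEntry {
    final Field field;
    final int f; // g + h, frozen when the entry is created
    OpenEntry(Field field, int f) { this.field = field; this.f = f; }
}

PriorityQueue<OpenEntry> open = new PriorityQueue<>(Comparator.comparingInt(e -> e.f));
Set<Field> closed = new HashSet<>();
Map<Field, Integer> gscore = new HashMap<>();
Map<Field, Field> cameFrom = new HashMap<>();

gscore.put(start, 0);
open.add(new OpenEntry(start, getH(start, exit)));
while (!open.isEmpty()) {
    Field x = open.poll().field; // smallest f in O(log n) instead of an O(n) scan
    if (!closed.add(x)) {
        continue; // stale entry: this field was already expanded with a better f
    }
    if (x == exit) {
        // reconstruct the path from cameFrom, as in the original code
        break;
    }
    for (Field y : x.getNeighbourhood()) {
        // (apply the same terrain/occupancy filters as the original here)
        if (closed.contains(y)) continue;
        int tent = gscore.get(x) + getDist(x, y);
        if (tent < gscore.getOrDefault(y, Integer.MAX_VALUE)) {
            gscore.put(y, tent);
            cameFrom.put(y, x);
            open.add(new OpenEntry(y, tent + getH(y, exit))); // re-insert instead of decrease-key
        }
    }
}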
You can also optimize the problem by using a different algorithm; the following page illustrates and compares many different algorithms and heuristics:
A*
IDA*
Dijkstra
JumpPoint
...
http://qiao.github.io/PathFinding.js/visual/
From your implementation it seems that you are using a naive A* algorithm. Try the following:
A* is an algorithm that is implemented with a priority queue, similar to BFS.
The heuristic function is evaluated at each node to define its fitness for being selected as the next node to visit.
As each new node is visited, its unvisited neighbours are added to the queue with their heuristic values as keys.
Do this until every heuristic value in the queue is worse than the calculated value of the goal state.
Find the bottlenecks of your implementation using a profiler (e.g. JProfiler is easy to use).
Use threads in areas where the algorithm can run in parallel.
Tune your JVM to run faster.
Allocate more RAM.
a) As mentioned, you should use a heap in A*: either a basic binary heap or a pairing heap, which should be theoretically faster.
b) On larger maps, it always happens that the algorithm needs some time to run (i.e., when you request a path, it will simply take a while). What can be done is to use some local navigation (e.g., "run directly toward the target") while the real path is being computed.
c) If you have a reasonable number of locations (e.g., in a navmesh) and some time at the start of your program, why not use the Floyd-Warshall algorithm? With it, you can look up where to go next in O(1).
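A minimal Floyd-Warshall sketch with a next-hop matrix, assuming the usual adjacency-matrix setup (INF for missing edges, 0 on the diagonal); after the O(n^3) precomputation, next[u][v] answers "where to go next" in O(1):

static final int INF = Integer.MAX_VALUE / 2; // large, but safe to add without overflow

// dist starts as the adjacency matrix; next[i][j] ends up holding the first
// vertex to visit on the shortest path from i to j, or -1 if j is unreachable.
static void floydWarshall(int[][] dist, int[][] next) {
    int n = dist.length;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            next[i][j] = (dist[i][j] < INF) ? j : -1;
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j]) {
                    dist[i][j] = dist[i][k] + dist[k][j];
                    next[i][j] = next[i][k]; // head toward k first
                }
}
// At run-time, the next step from u toward v is simply next[u][v]: an O(1) lookup.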
I built a new pathfinding algorithm called Fast* (or Fastaer). It is BFS-like, similar to A*, but faster and more efficient than A*; the accuracy is about 90% of A*. Please see this link for info and a demo:
https://drbendanilloportfolio.wordpress.com/2015/08/14/fastaer-pathfinder/
It has a fast greedy line tracer to make paths straighter.
The demo file has it all. Check Task Manager when using the demo for performance metrics. So far, the profiler results show a maximum surviving generation of 4 and low to nil GC time.
I have a basic implementation of alpha-beta pruning, but I have no idea how to improve the move ordering. I have read that it can be done with a shallow search, iterative deepening, or by storing the best moves in a transposition table.
Any suggestions on how to implement one of these improvements in this algorithm?
public double alphaBetaPruning(Board board, int depth, double alpha, double beta, int player) {
    if (depth == 0) {
        return board.evaluateBoard();
    }
    Collection<Move> children = board.generatePossibleMoves(player);
    if (player == 0) {
        for (Move move : children) {
            Board tempBoard = new Board(board);
            tempBoard.makeMove(move);
            int nextPlayer = next(player);
            double result = alphaBetaPruning(tempBoard, depth - 1, alpha, beta, nextPlayer);
            if (result > alpha) {
                alpha = result;
                if (depth == this.origDepth) {
                    this.bestMove = move;
                }
            }
            if (alpha >= beta) {
                break;
            }
        }
        return alpha;
    } else {
        for (Move move : children) {
            Board tempBoard = new Board(board);
            tempBoard.makeMove(move);
            int nextPlayer = next(player);
            double result = alphaBetaPruning(tempBoard, depth - 1, alpha, beta, nextPlayer);
            if (result < beta) {
                beta = result;
                if (depth == this.origDepth) {
                    this.bestMove = move;
                }
            }
            if (beta <= alpha) {
                break;
            }
        }
        return beta;
    }
}

public int next(int player) {
    if (player == 0) {
        return 4;
    } else {
        return 0;
    }
}
Node reordering with shallow search is trivial: calculate the heuristic value for each child of the state before recursively checking them. Then sort these children [descending for a max vertex, ascending for a min vertex] and recursively invoke the algorithm on the sorted list. The idea is that if a state is good at a shallow depth, it is more likely to be good at a deeper search as well, and if that is true, you will get more prunings.
The sorting should be done before this line [in both the if and else clauses]:

for (Move move : children) {
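For instance, a sketch of that ordering step, reusing the Board and Move types from the question; shallowScore holds each child's 1-ply evaluation, and the sort is descending for the max player (make it ascending for the min player):

Map<Move, Double> shallowScore = new HashMap<>();
for (Move m : children) {
    Board b = new Board(board); // same copy-and-apply pattern as the question
    b.makeMove(m);
    shallowScore.put(m, b.evaluateBoard()); // 1-ply heuristic value
}
List<Move> ordered = new ArrayList<>(children);
// Most promising moves first, so cutoffs happen earlier:
ordered.sort((m1, m2) -> Double.compare(shallowScore.get(m2), shallowScore.get(m1)));
for (Move move : ordered) {
    // ... the existing loop body, unchanged
}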
Storing moves is also trivial: many states are calculated twice. When you finish calculating any state, store it [with the depth of the calculation! it is important!] in a HashMap. The first thing you do when you start a calculation on a vertex is check whether it has already been calculated, and if it has, return the cached value. The idea behind this is that many states are reachable from different paths, so this way you can eliminate redundant calculations.
The changes should be made in the first line of the method [something like if (cache.contains(new State(board, depth, player))) return cache.get(new State(board, depth, player))] [excuse the lack of elegance and efficiency, I'm just explaining an idea here].
You should also add cache.put(...) before each return statement.
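A sketch of that cache, where State is the hypothetical key class from the bracketed example above (written as a record for brevity, and assuming Board implements equals/hashCode over the position):

// Hypothetical cache key: the position plus remaining depth and player to move.
record State(Board board, int depth, int player) {}
private final Map<State, Double> cache = new HashMap<>();

// At the top of alphaBetaPruning:
State key = new State(board, depth, player);
Double cached = cache.get(key);
if (cached != null) {
    return cached; // this position was already searched to this depth
}
// ...and before each return statement:
cache.put(key, alpha); // or beta, whichever value is being returned

One caveat the idea glosses over: values returned by alpha-beta depend on the [alpha, beta] window, so a production transposition table also records whether the stored value is exact or only a lower/upper bound.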
First of all, one has to understand the reasoning behind move ordering in an alpha-beta pruning algorithm. Alpha-beta produces the same result as minimax, but in a lot of cases it can do it faster because it does not search through irrelevant branches.
It is not always faster, because it is not guaranteed to prune; in fact, in the worst case it will not prune at all, searching absolutely the same tree as minimax, and it will be slower because of the a/b value bookkeeping. In the best case (maximum pruning) it allows searching a tree twice as deep in the same time. For a random tree it can search 4/3 times deeper in the same time.
Move ordering can be implemented in a couple of ways:
You have a domain expert who gives you suggestions about which moves are better. For example, in chess, promoting a pawn or capturing a high-value piece with a lower-value piece is on average a good move. In checkers it is better to take more checkers in a move than fewer, and it is better to create a queen. So your move generation function should return better moves first.
You get a heuristic for how good a move is by evaluating the position at a depth one level shallower (your shallow search / iterative deepening): you calculate the evaluation at depth n-1, sort the moves, and then evaluate at depth n.
The second approach you mentioned has nothing to do with move ordering. It relates to the fact that the evaluation function can be expensive and many positions are evaluated many times. To bypass this, you can store the value of a position in a hash table once you have calculated it and reuse it later.