I am trying to implement the above community detection algorithm in Java, and while I have access to C++ code and the original paper, I can't make it work at all. My major issue is that I don't understand the purpose of the code, i.e. how the algorithm works. In practical terms, my code gets stuck in what seems to be an infinite loop in mergeBestQ: the heap list seems to be getting larger on each iteration (as I would expect from the code), but the value of topQ always returns the same value.
The graph I am testing this on is quite large (300,000 nodes, 650,000 edges). The original code I am using for my implementation is from the SNAP library (https://github.com/snap-stanford/snap/blob/master/snap-core/cmty.cpp). What would be great is if someone could explain to me the intuition of the algorithm. It seems to initially set each node to be in its own community, then record the modularity value (whatever that is) of each pair of connected nodes in the graph, then find the pair of nodes with the highest modularity and move them into the same community. In addition, if someone could provide some mid-level pseudocode, that would be great. Here is my implementation thus far; I have tried to keep it in one file for the sake of brevity, however CommunityGraph and CommunityNode are elsewhere (they should not be required). The graph maintains a list of all nodes, and each node maintains a list of its connections to other nodes. When running, it never gets past the line while(this.mergeBestQ()){}
UPDATE - found several bugs in my code after a thorough review. The code now completes VERY quickly, but doesn't fully implement the algorithm; for example, of the 300,000 nodes in the graph, it states there are approximately 299,000 communities (i.e. roughly 1 node per community). I have listed the updated code below.
/// Clauset-Newman-Moore community detection method.
/// At every step two communities that contribute maximum positive value to global modularity are merged.
/// See: Finding community structure in very large networks, A. Clauset, M.E.J. Newman, C. Moore, 2004
public class CNMMCommunityMetric implements CommunityMetric{
private static class DoubleIntInt implements Comparable<DoubleIntInt>{
public double val1;
public int val2;
public int val3;
DoubleIntInt(double val1, int val2, int val3){
this.val1 = val1;
this.val2 = val2;
this.val3 = val3;
}
@Override
public int compareTo(DoubleIntInt o) {
//int this_sum = this.val2 + this.val3;
//int oth_sum = o.val2 + o.val3;
if(this.equals(o)){
return 0;
}
else if(val1 < o.val1 || (val1 == o.val1 && val2 < o.val2) || (val1 == o.val1 && val2 == o.val2 && val3 < o.val3)){
return 1;
}
else{
return -1;
}
//return this.val1 < o.val1 ? 1 : (this.val1 > o.val1 ? -1 : this_sum - oth_sum);
}
@Override
public boolean equals(Object o){
return this.val2 == ((DoubleIntInt)o).val2 && this.val3 == ((DoubleIntInt)o).val3;
}
@Override
public int hashCode() {
int hash = 3;
hash = 79 * hash + this.val2;
hash = 79 * hash + this.val3;
return hash;
}
}
private static class CommunityData {
double DegFrac;
TIntDoubleHashMap nodeToQ = new TIntDoubleHashMap();
int maxQId;
CommunityData(){
maxQId = -1;
}
CommunityData(double nodeDegFrac, int outDeg){
DegFrac = nodeDegFrac;
maxQId = -1;
}
void addQ(int NId, double Q) {
nodeToQ.put(NId, Q);
if (maxQId == -1 || nodeToQ.get(maxQId) < Q) {
maxQId = NId;
}
}
void updateMaxQ() {
maxQId=-1;
int[] nodeIDs = nodeToQ.keys();
double maxQ = nodeToQ.get(maxQId);
for(int i = 0; i < nodeIDs.length; i++){
int id = nodeIDs[i];
if(maxQId == -1 || maxQ < nodeToQ.get(id)){
maxQId = id;
maxQ = nodeToQ.get(maxQId);
}
}
}
void delLink(int K) {
int NId=getMxQNId();
nodeToQ.remove(K);
if (NId == K) {
updateMaxQ();
}
}
int getMxQNId() {
return maxQId;
}
double getMxQ() {
return nodeToQ.get(maxQId);
}
};
private TIntObjectHashMap<CommunityData> communityData = new TIntObjectHashMap<CommunityData>();
private TreeSet<DoubleIntInt> heap = new TreeSet<DoubleIntInt>();
private HashMap<DoubleIntInt,DoubleIntInt> set = new HashMap<DoubleIntInt,DoubleIntInt>();
private double Q = 0.0;
private UnionFind uf = new UnionFind();
@Override
public double getCommunities(CommunityGraph graph) {
init(graph);
//CNMMCommunityMetric metric = new CNMMCommunityMetric();
//metric.getCommunities(graph);
// maximize modularity
while (this.mergeBestQ(graph)) {
}
// reconstruct communities
HashMap<Integer, ArrayList<Integer>> IdCmtyH = new HashMap<Integer, ArrayList<Integer>>();
Iterator<CommunityNode> ns = graph.getNodes();
int community = 0;
TIntIntHashMap communities = new TIntIntHashMap();
while(ns.hasNext()){
CommunityNode n = ns.next();
int r = uf.find(n);
if(!communities.contains(r)){
communities.put(r, community++);
}
n.setCommunity(communities.get(r));
}
//System.exit(0); // exiting here prevents the method from ever returning Q
return this.Q;
}
private void init(Graph graph) {
double M = 0.5/graph.getEdgesList().size();
Iterator<Node> ns = graph.getNodes();
while(ns.hasNext()){
Node n = ns.next();
uf.add(n);
int edges = n.getEdgesList().size();
if(edges == 0){
continue;
}
CommunityData dat = new CommunityData(M * edges, edges);
communityData.put(n.getId(), dat);
Iterator<Edge> es = n.getConnections();
while(es.hasNext()){
Edge e = es.next();
Node dest = e.getStart() == n ? e.getEnd() : e.getStart();
double dstMod = 2 * M * (1.0 - edges * dest.getEdgesList().size() * M);//(1 / (2 * M)) - ((n.getEdgesList().size() * dest.getEdgesList().size()) / ((2 * M) * (2 * M)));// * (1.0 - edges * dest.getEdgesList().size() * M);
dat.addQ(dest.getId(), dstMod);
}
Q += -1.0 * (edges*M) * (edges*M);
if(n.getId() < dat.getMxQNId()){
addToHeap(createEdge(dat.getMxQ(), n.getId(), dat.getMxQNId()));
}
}
}
void addToHeap(DoubleIntInt o){
heap.add(o);
}
DoubleIntInt createEdge(double val1, int val2, int val3){
DoubleIntInt n = new DoubleIntInt(val1, val2, val3);
if(set.containsKey(n)){
DoubleIntInt n1 = set.get(n);
heap.remove(n1);
if(n1.val1 < val1){
n1.val1 = val1;
}
n = n1;
}
else{
set.put(n, n);
}
return n;
}
void removeFromHeap(Collection<DoubleIntInt> col, DoubleIntInt o){
//set.remove(o);
col.remove(o);
}
DoubleIntInt findMxQEdge() {
while (true) {
if (heap.isEmpty()) {
break;
}
DoubleIntInt topQ = heap.first();
removeFromHeap(heap, topQ);
//heap.remove(topQ);
if (!communityData.containsKey(topQ.val2) || ! communityData.containsKey(topQ.val3)) {
continue;
}
if (topQ.val1 != communityData.get(topQ.val2).getMxQ() && topQ.val1 != communityData.get(topQ.val3).getMxQ()) {
continue;
}
return topQ;
}
return new DoubleIntInt(-1.0, -1, -1);
}
boolean mergeBestQ(Graph graph) {
DoubleIntInt topQ = findMxQEdge();
if (topQ.val1 <= 0.0) {
return false;
}
// joint communities
int i = topQ.val3;
int j = topQ.val2;
uf.union(i, j);
Q += topQ.val1;
CommunityData datJ = communityData.get(j);
CommunityData datI = communityData.get(i);
datI.delLink(j);
datJ.delLink(i);
int[] datJData = datJ.nodeToQ.keys();
for(int _k = 0; _k < datJData.length; _k++){
int k = datJData[_k];
CommunityData datK = communityData.get(k);
double newQ = datJ.nodeToQ.get(k);
//if(datJ.nodeToQ.containsKey(i)){
// newQ = datJ.nodeToQ.get(i);
//}
if (datI.nodeToQ.containsKey(k)) {
newQ = newQ + datI.nodeToQ.get(k);
datK.delLink(i);
} // K connected to I and J
else {
newQ = newQ - 2 * datI.DegFrac * datK.DegFrac;
} // K connected to J not I
datJ.addQ(k, newQ);
datK.addQ(j, newQ);
addToHeap(createEdge(newQ, Math.min(j, k), Math.max(j, k)));
}
int[] datIData = datI.nodeToQ.keys();
for(int _k = 0; _k < datIData.length; _k++){
int k = datIData[_k];
if (!datJ.nodeToQ.containsKey(k)) { // K connected to I not J
CommunityData datK = communityData.get(k);
double newQ = datI.nodeToQ.get(k) - 2 * datJ.DegFrac * datK.DegFrac;
datJ.addQ(k, newQ);
datK.delLink(i);
datK.addQ(j, newQ);
addToHeap(createEdge(newQ, Math.min(j, k), Math.max(j, k)));
}
}
datJ.DegFrac += datI.DegFrac;
if (datJ.nodeToQ.isEmpty()) {
communityData.remove(j);
} // isolated community (done)
communityData.remove(i);
return true;
}
}
UPDATE: the currently listed code is fairly quick and uses half the memory of the "quickest" solution, while being only ~5% slower. The difference is in the use of a HashMap + TreeSet vs. a priority queue, and in ensuring that only a single object exists for a given i, j pair at any time.
So here's the original paper, a neat lil' six pages, only two of which are about the design & implementation. Here's the CliffsNotes version:
For a partition of a given graph, the authors define the modularity, Q, of the partition to be the fraction of the graph's edges that fall within communities, minus the fraction you'd expect if the edges were placed completely at random (keeping each node's degree the same).
So it's effectively "how much better is this partition at defining communities than a completely random one?"
Given two communities i and j of a partition, they then define deltaQ_ij to be how much the modularity of the partition would change if communities i and j were merged. So if deltaQ_ij > 0, merging i and j will improve the modularity of the partition.
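To pin that down a little (the normalisation constants differ between write-ups, so treat this as a sketch of the convention the SNAP code uses): write e_ij for the fraction of edges running between communities i and j, counted once in each direction, and a_i for the fraction of edge ends attached to community i, so a_i = k_i/(2m) for a single node of degree k_i in a graph with m edges. Then
Q = sum over i of (e_ii - a_i^2)
deltaQ_ij = e_ij + e_ji - 2*a_i*a_j
For two single-node communities joined by one edge this works out to 2*(1/(2m)) - 2*(k_i/(2m))*(k_j/(2m)), which is exactly what the dstMod line in your init() computes (the M there is 1/(2m)), and the Q += -(edges*M)*(edges*M) line is the -a_i^2 term, since e_ii starts at zero.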
Which leads to a simple greedy algorithm: start with every node in its own community. Calculate deltaQ_ij for every pair of communities. Whichever two communities i, j have the largest deltaQ_ij, merge those two. Repeat.
You'll get maximum modularity when the deltaQ_ij all turn negative, but in the paper the authors let the algorithm run until there's only one community left.
That's pretty much it for understanding the algorithm. The details are in how to compute deltaQ_ij quickly and store the information efficiently.
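And since you asked for mid-level pseudocode, here is a rough sketch of that greedy loop with the clever data structures stripped out (far too slow as written for 300k nodes, but it is the logic everything else is in service of):
place every node in its own community
compute a_i for every community, and deltaQ_ij for every pair of communities joined by an edge
Q = - (sum over i of a_i^2)                     // no edges are inside any community yet
loop:
    find the pair (i, j) with the largest deltaQ_ij
    if deltaQ_ij <= 0: stop                     // (the paper instead keeps merging to build the full dendrogram)
    merge community i into community j:
        Q = Q + deltaQ_ij
        for every community k adjacent to i or j:
            recompute deltaQ_jk from the old deltaQ_ik and deltaQ_jk
        a_j = a_j + a_i
        delete community i and all of its deltaQ_ik entries
report the communities (or the partition with the best Q seen along the way)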
Edit: Data structure time!
So first off, I think the implementation you're referencing does things a different way to the paper. I'm not quite sure how, because the code is impenetrable, but it seems to use union-find and hashsets in place of the authors' binary trees and multiple heaps. Not a clue why they do it a different way. You might want to email the guy who wrote it and ask.
Anyway, the algorithm in the paper needs several things from the format deltaQ is stored in:
First, it needs to be able to recover the largest value in dQ quickly.
Second, it needs to be able to remove all deltaQ_ik and deltaQ_ki for a fixed i quickly.
Third, it needs to be able to update all deltaQ_kj and deltaQ_jk for a fixed j quickly.
The solution the authors come up with for this is as follows:
For each community i, each non-zero deltaQ_ik is stored in a balanced binary tree, indexed by k (so elements can be found easily), and in a heap (so the maximum for that community can be found easily).
The maximum deltaQ_ik from each community i's heap is then stored in another heap, so that the overall maximums can be found easily.
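If it helps to see that in Java terms, here is a hedged sketch of that layout (the class and field names are mine, not the paper's or SNAP's):
import java.util.*;
// Hypothetical sketch of the paper's storage for the deltaQ values.
class DeltaQStore {
    // dq.get(i) is community i's "row": every non-zero deltaQ_ik indexed by k
    // (the paper's balanced binary tree per community).
    final Map<Integer, TreeMap<Integer, Double>> dq = new HashMap<Integer, TreeMap<Integer, Double>>();
    // The paper also keeps a heap per community so the row maximum is cheap to read;
    // here a plain map from community id to {bestDeltaQ, bestK} stands in for it.
    final Map<Integer, double[]> rowMax = new HashMap<Integer, double[]>();
    // One global heap of {deltaQ, i, k} entries, largest deltaQ first, holding the
    // best entry of each row; stale entries are skipped when they are popped.
    final PriorityQueue<double[]> globalHeap =
            new PriorityQueue<double[]>(11, new Comparator<double[]>() {
                public int compare(double[] x, double[] y) {
                    return Double.compare(y[0], x[0]);
                }
            });
}
The "skip stale entries when popping" trick is also what the findMxQEdge loop in your code is doing, which is why no real decrease-key operation is needed anywhere.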
When community i is merged with community j, several things happen to the binary trees:
First, each element from the ith community is added to the jth community's binary tree. If an element with the same index k already exists, you sum the old and new values.
Second, we update all the remaining "old" values in the jth community's binary tree to reflect the fact that the jth community has just increased in size.
And for each other community's binary tree k, we update any deltaQ_kj.
Finally, the tree for community i is thrown away.
And similarly, several things must happen to the heaps:
First, the heap for community i is thrown away.
Then the heap for community j is rebuilt from scratch using the elements from the community's balanced binary tree.
And for each other community k's heap, the position of entry deltaQ_kj is updated.
Finally, the entry for community i in the overall heap is thrown away (causing bubbling) and the entries for community j and each community k connected to i or j are updated.
Strangely, when two communities are merged there's no reference in the paper as to removing deltaQ_ki values from the kth community's heap or tree. I think this might be dealt with by the setting of a_i = 0, but I don't understand the algorithm well enough to be sure.
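For reference, the update rules the paper gives for that merge, which are what the two loops over datJ.nodeToQ and datI.nodeToQ in your mergeBestQ appear to implement (with DegFrac playing the role of a):
when community i is merged into community j, for every other community k:
    k was connected to both i and j:   deltaQ'_jk = deltaQ_ik + deltaQ_jk
    k was connected to i but not j:    deltaQ'_jk = deltaQ_ik - 2*a_j*a_k
    k was connected to j but not i:    deltaQ'_jk = deltaQ_jk - 2*a_i*a_k
Afterwards a_j = a_j + a_i and community i disappears along with all of its deltaQ_ik entries, which I suspect is what the paper means by setting a_i = 0.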
Edit: Trying to decipher the implementation you linked. Their primary data structures are
CmtyIdUF, a union-find structure that keeps track of which nodes are in which community (something that's neglected in the paper, but seems necessary unless you want to reconstruct community membership from a trace of the merge or something),
MxQHeap, a heap to keep track of which deltaQ_ij is largest overall. Strangely, when they update the value of a TFltIntIntTr in the heap, they don't ask the heap to re-heapify itself. This is worrying. Does it do it automatically or something?
CmtyQH, a hashmap that maps a community ID i to a structure TCmtyDat which holds what looks like a heap of the deltaQ_ik for that community. I think. Strangely though, the UpdateMaxQ of the TCmtyDat structure takes linear time, obviating any need for a heap. What's more, the UpdateMaxQ method only appears to be called when an element of the heap is deleted. It should definitely also be getting called when the value of any element in the heap is updated.
Related
I'm trying to implement the min-cut Karger's algorithm in Java. For this, I created a Graph class which stores a SortedMap, with an integer index as key and a Vertex object as value, and an ArrayList of Edge objects. Each Edge stores the indices of its incident vertices. Then I merge the vertices of some random edge until the number of vertices reaches 2. I repeat these steps a safe number of times. Curiously, in my output I get 2x the number of crossing edges. I mean, if the right answer is 10, after executing the algorithm n times (for n sufficiently large), the minimum of these execution results is 20, which makes me believe the implementation is almost correct.
This is the relevant part of code:
void mergeVertex(int iV, int iW) {
for (int i = 0; i < edges.size(); i++) {
Edge e = edges.get(i);
if (e.contains(iW)) {
if (e.contains(iV)) {
edges.remove(i);
i--;
} else {
e.replace(iW, iV);
}
}
}
vertices.remove(iW);
}
public int kargerContraction(){
Graph copy = new Graph(this);
Random r = new Random();
while(copy.getVertices().size() > 2){
int i = r.nextInt(copy.getEdges().size());
Edge e = copy.getEdges().get(i);
copy.mergeVertex(e.getVertices()[0], e.getVertices()[1]);
}
return copy.getEdges().size()/2;
}
Actually the problem was much simpler than I thought. While reading the .txt which contains the graph data, I was counting each edge twice, so logically the minCut returned was 2 times the right minCut.
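In case it helps anyone else hitting the same thing, here is a hedged sketch of a loader that guards against that, assuming the file has one whitespace-separated "u v" pair per line with non-negative integer ids (the method name is made up):
// Reads an undirected edge list, ignoring duplicate or reversed copies of an edge.
static List<int[]> readEdges(String path) throws IOException {
    List<int[]> edges = new ArrayList<int[]>();
    Set<Long> seen = new HashSet<Long>();
    BufferedReader in = new BufferedReader(new FileReader(path));
    String line;
    while ((line = in.readLine()) != null) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length < 2) {
            continue;                              // skip blank or malformed lines
        }
        int u = Integer.parseInt(parts[0]);
        int v = Integer.parseInt(parts[1]);
        // Normalise so that (u,v) and (v,u) map to the same key.
        long key = ((long) Math.min(u, v) << 32) | (Math.max(u, v) & 0xffffffffL);
        if (seen.add(key)) {
            edges.add(new int[] { u, v });
        }
    }
    in.close();
    return edges;
}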
I am trying to build a 4 x 4 sudoku solver by using a genetic algorithm. I have some issues with values converging to local minima. I am using a ranked approach, removing the bottom two ranked answer possibilities and replacing them with a crossover between the two highest ranked answer possibilities. For additional help avoiding local minima, I am also using mutation. If an answer is not determined within a specific number of generations, my population is filled with completely new and random state values. However, my algorithm seems to get stuck in local minima. As a fitness function, I am using:
(Total Amount of Open Squares * 7 (possible violations at each square; row, column, and box)) - total Violations
population is an ArrayList of integer arrays in which each array is a possible end state for sudoku based on the input. Fitness is determined for each array in the population.
Would someone be able to assist me in determining why my algorithm converges on local minima, or perhaps recommend a technique to use to avoid local minima? Any help is greatly appreciated.
Fitness Function:
public int[] fitnessFunction(ArrayList<int[]> population)
{
int emptySpaces = this.blankData.size();
int maxError = emptySpaces*7;
int[] fitness = new int[populationSize];
for(int i=0; i<population.size();i++)
{
int[] temp = population.get(i);
int value = evaluationFunc(temp);
fitness[i] = maxError - value;
System.out.println("Fitness(i)" + fitness[i]);
}
return fitness;
}
Crossover Function:
public void crossover(ArrayList<int[]> population, int indexWeakest, int indexStrong, int indexSecStrong, int indexSecWeak)
{
int[] tempWeak = new int[16];
int[] tempStrong = new int[16];
int[] tempSecStrong = new int[16];
int[] tempSecWeak = new int[16];
tempStrong = population.get(indexStrong);
tempSecStrong = population.get(indexSecStrong);
tempWeak = population.get(indexWeakest);
tempSecWeak = population.get(indexSecWeak);
population.remove(indexWeakest);
population.remove(indexSecWeak);
int crossoverSite = random.nextInt(14)+1;
for(int i=0;i<tempWeak.length;i++)
{
if(i<crossoverSite)
{
tempWeak[i] = tempStrong[i];
tempSecWeak[i] = tempSecStrong[i];
}
else
{
tempWeak[i] = tempSecStrong[i];
tempSecWeak[i] = tempStrong[i];
}
}
mutation(tempWeak);
mutation(tempSecWeak);
population.add(tempWeak);
population.add(tempSecWeak);
for(int j=0; j<tempWeak.length;j++)
{
System.out.print(tempWeak[j] + ", ");
}
for(int j=0; j<tempWeak.length;j++)
{
System.out.print(tempSecWeak[j] + ", ");
}
}
Mutation Function:
public void mutation(int[] mutate)
{
if(this.blankData.size() > 2)
{
Blank blank = this.blankData.get(0);
int x = blank.getPosition();
Blank blank2 = this.blankData.get(1);
int y = blank2.getPosition();
Blank blank3 = this.blankData.get(2);
int z = blank3.getPosition();
int rando = random.nextInt(4) + 1;
if(rando == 2)
{
int rando2 = random.nextInt(4) + 1;
mutate[x] = rando2;
}
if(rando == 3)
{
int rando2 = random.nextInt(4) + 1;
mutate[y] = rando2;
}
if(rando==4)
{
int rando3 = random.nextInt(4) + 1;
mutate[z] = rando3;
}
}
}
The reason you see rapid convergence is that your methodology for "mating" is not very good. You are always producing two offspring from "mating" of the top two scoring individuals. Imagine what happens when one of the new offspring is the same as your top individual (by chance, no crossover and no mutation, or at least none that have an effect on the fitness). Once this occurs, the top two individuals are identical which eliminates the effectiveness of crossover.
A more typical approach is to replace EVERY individual on every generation. There are lots of possible variations here, but you might do a random choice of two parents weighted by fitness.
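Here is a minimal sketch of fitness-proportional ("roulette wheel") parent selection over your int[] fitness array; the method name and the Random argument are mine, and it assumes all fitness values are non-negative:
// Picks an index with probability proportional to its fitness.
public int selectParent(int[] fitness, Random random) {
    long total = 0;
    for (int f : fitness) {
        total += f;
    }
    if (total == 0) {
        return random.nextInt(fitness.length);    // degenerate case: pick uniformly
    }
    long pick = (long) (random.nextDouble() * total);
    long running = 0;
    for (int i = 0; i < fitness.length; i++) {
        running += fitness[i];
        if (pick < running) {
            return i;
        }
    }
    return fitness.length - 1;                    // guard against rounding at the top end
}
Call it twice per child to choose two parents, and build the entire next generation this way instead of only replacing the bottom two.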
Regarding population size: I don't know how hard of a problem sudoku is given your genetic representation and fitness function, but I suggest that you think about millions of individuals, not dozens.
If you are working on really hard problems, genetic algorithms are much more effective when you place your population on a 2-D grid and choose "parents" for each point in the grid from the nearby individuals. You will get local convergence, but each locality will have converged on different solutions; you get a huge amount of variation produced from the borders between the locally-converged areas of the grid.
Another technique you might think about is running to convergence from random populations many times and storing the top individual from each run. After you build up a bunch of different local-minima genomes, build a new population from those top individuals.
I think Sudoku is a permutation problem; therefore I suggest you use random permutations to initialize the population, and use a crossover method that is compatible with permutation problems.
I was doing Codeforces and wanted to implement Dijkstra's Shortest Path Algorithm for a directed graph using Java with an adjacency matrix, but I'm having difficulty making it work for sizes other than the one it is coded to handle.
Here is my working code
int max = Integer.MAX_VALUE;//substitute for infinity
int[][] points={//I used -1 to denote non-adjacency/edges
//0, 1, 2, 3, 4, 5, 6, 7
{-1,20,-1,80,-1,-1,90,-1},//0
{-1,-1,-1,-1,-1,10,-1,-1},//1
{-1,-1,-1,10,-1,50,-1,20},//2
{-1,-1,-1,-1,-1,-1,20,-1},//3
{-1,50,-1,-1,-1,-1,30,-1},//4
{-1,-1,10,40,-1,-1,-1,-1},//5
{-1,-1,-1,-1,-1,-1,-1,-1},//6
{-1,-1,-1,-1,-1,-1,-1,-1} //7
};
int [] record = new int [8];//keeps track of the distance from start to each node
Arrays.fill(record,max);
int sum =0;int q1 = 0;int done =0;
ArrayList<Integer> Q1 = new ArrayList<Integer>();//nodes to traverse
ArrayList<Integer> Q2 = new ArrayList<Integer>();//nodes collected while traversing
Q1.add(0);//starting point
q1= Q1.get(0);
while(done<9) {// <<< My Problem
for(int q2 = 1; q2<8;q2++) {//skips over the first/starting node
if(points[q1][q2]!=-1) {//if node is connected by an edge
if(record[q1] == max)//never visited before
sum=0;
else
sum=record[q1];//starts from where it left off
int total = sum+points[q1][q2];//total distance of route
if(total < record[q2])//connected node distance
record[q2]=total;//if smaller
Q2.add(q2);//collected node
}
}
done++;
Q1.remove(0);//removes the first node because it has just been used
if(Q1.size()==0) {//if there are no more nodes to traverse
Q1=Q2;//Pours all the collected connecting nodes to Q1
Q2= new ArrayList<Integer>();
q1=Q1.get(0);
}
else//
q1=Q1.get(0);//sets starting point
}
However, my version of the algorithm only works because I set the while loop to the solved answer. So in other words, it only works for this problem/graph because I solved it by hand first.
How could I make it so it works for graphs of any size?
Here is the pictorial representation of the example graph my problem was based on:
I think the main answer you are looking for is that you should let the while-loop run until Q1 is empty. What you're doing is essentially a best-first search. There are more changes required though, since your code is a bit unorthodox.
Commonly, Dijkstra's algorithm is used with a priority queue. Q1 is your "todo list" as I understand from your code. The specification of Dijkstra's says that the vertex that is closest to the starting vertex should be explored next, so rather than an ArrayList, you should use a PriorityQueue for Q1 that sorts vertices according to which is closest to the starting vertex. The most common Java implementation uses the PriorityQueue together with a tuple class: An internal class which stores a reference to a vertex and a "distance" to the starting vertex. The specification for Dijkstra's also specifies that if a new edge is discovered that makes a vertex closer to the start, the DecreaseKey operation should then be used on the entry in the priority queue to make the vertex come up earlier (since it is now closer). However, since PriorityQueue doesn't support that operation, a completely new entry is just added to the queue. If you have a good implementation of a heap that supports this operation (I made one myself, here) then decreaseKey can significantly increase efficiency as you won't need to create those tuples any more either then.
So I hope that is a sufficient answer then: Make a proper 'todo' list instead of Q1, and to make the algorithm generic, let that while-loop run until the todo list is empty.
Edit: I made you an implementation based on your format, that seems to work:
public void run() {
final int[][] points = { //I used -1 to denote non-adjacency/edges
//0, 1, 2, 3, 4, 5, 6, 7
{-1,20,-1,80,-1,-1,90,-1}, //0
{-1,-1,-1,-1,-1,10,-1,-1}, //1
{-1,-1,-1,10,-1,50,-1,20}, //2
{-1,-1,-1,-1,-1,-1,20,-1}, //3
{-1,50,-1,-1,-1,-1,30,-1}, //4
{-1,-1,10,40,-1,-1,-1,-1}, //5
{-1,-1,-1,-1,-1,-1,-1,-1}, //6
{-1,-1,-1,-1,-1,-1,-1,-1} //7
};
final int[] result = dijkstra(points,0);
System.out.print("Result:");
for(final int i : result) {
System.out.print(" " + i);
}
}
public int[] dijkstra(final int[][] points,final int startingPoint) {
final int[] record = new int[points.length]; //Keeps track of the distance from start to each vertex.
final boolean[] explored = new boolean[points.length]; //Keeps track of whether we have completely explored every vertex.
Arrays.fill(record,Integer.MAX_VALUE);
final PriorityQueue<VertexAndDistance> todo = new PriorityQueue<>(points.length); //Vertices left to traverse.
todo.add(new VertexAndDistance(startingPoint,0)); //Starting point (and distance 0).
record[startingPoint] = 0; //We already know that the distance to the starting point is 0.
while(!todo.isEmpty()) { //Continue until we have nothing left to do.
final VertexAndDistance next = todo.poll(); //Take the next closest vertex.
final int q1 = next.vertex;
if(explored[q1]) { //We have already done this one, don't do it again.
continue; //...with the next vertex.
}
for(int q2 = 0;q2 < points.length;q2++) { //Find connected vertices (include index 0 so edges into it are not missed).
if(points[q1][q2] != -1) { //If the vertices are connected by an edge.
final int distance = record[q1] + points[q1][q2];
if(distance < record[q2]) { //And it is closer than we've seen so far.
record[q2] = distance;
todo.add(new VertexAndDistance(q2,distance)); //Explore it later.
}
}
}
explored[q1] = true; //We're done with this vertex now.
}
return record;
}
private class VertexAndDistance implements Comparable<VertexAndDistance> {
private final int distance;
private final int vertex;
private VertexAndDistance(final int vertex,final int distance) {
this.vertex = vertex;
this.distance = distance;
}
/**
* Compares two {@code VertexAndDistance} instances by their distance.
* @param other The instance with which to compare this instance.
* @return A positive integer if this distance is more than the distance
* of the specified object, a negative integer if it is less, or
* {@code 0} if they are equal.
*/
@Override
public int compareTo(final VertexAndDistance other) {
return Integer.compare(distance,other.distance);
}
}
Output: 0 20 40 50 2147483647 30 70 60
I'm a student, and my team and I have to make a simulation of students' behaviour on a campus (like making "groups of friends", walking, etc.). For finding the path a student has to take, I used the A* algorithm (as I found out that it's one of the fastest path-finding algorithms). Unfortunately our simulation doesn't run fluently (it takes like 1-2 sec between successive iterations). I wanted to optimize the algorithm but I don't have any idea what more I can do. Can you guys help me out and tell me whether it's possible to optimize my A* algorithm? Here goes the code:
public LinkedList<Field> getPath(Field start, Field exit) {
LinkedList<Field> foundPath = new LinkedList<Field>();
LinkedList<Field> opensList= new LinkedList<Field>();
LinkedList<Field> closedList= new LinkedList<Field>();
Hashtable<Field, Integer> gscore = new Hashtable<Field, Integer>();
Hashtable<Field, Field> cameFrom = new Hashtable<Field, Field>();
Field x = new Field();
gscore.put(start, 0);
opensList.add(start);
while(!opensList.isEmpty()){
int min = -1;
//searching for minimal F score
for(Field f : opensList){
if(min==-1){
min = gscore.get(f)+getH(f,exit);
x = f;
}else{
int currf = gscore.get(f)+getH(f,exit);
if(min > currf){
min = currf;
x = f;
}
}
}
if(x == exit){
//path reconstruction
Field curr = exit;
while(curr != start){
foundPath.addFirst(curr);
curr = cameFrom.get(curr);
}
return foundPath;
}
opensList.remove(x);
closedList.add(x);
for(Field y : x.getNeighbourhood()){
if(!(y.getType()==FieldTypes.PAVEMENT ||y.getType() == FieldTypes.GRASS) || closedList.contains(y) || !(y.getStudent()==null))
{
continue;
}
int tentGScore = gscore.get(x) + getDist(x,y);
boolean distIsBetter = false;
if(!opensList.contains(y)){
opensList.add(y);
distIsBetter = true;
}else if(tentGScore < gscore.get(y)){
distIsBetter = true;
}
if(distIsBetter){
cameFrom.put(y, x);
gscore.put(y, tentGScore);
}
}
}
return foundPath;
}
private int getH(Field start, Field end){
int x;
int y;
x = start.getX()-end.getX();
y = start.getY() - end.getY();
if(x<0){
x = x* (-1);
}
if(y<0){
y = y * (-1);
}
return x+y;
}
private int getDist(Field start, Field end){
int ret = 0;
if(end.getType() == FieldTypes.PAVEMENT){
ret = 8;
}else if(start.getX() == end.getX() || start.getY() == end.getY()){
ret = 10;
}else{
ret = 14;
}
return ret;
}
//EDIT
This is what I got from jProfiler:
So getH is a bottleneck, yes? Maybe remembering the H score of each field would be a good idea?
A linked list is not a good data structure for the open set. You have to find the node with the smallest F in it; you can either search through the list in O(n) or insert in sorted position in O(n), so either way it's O(n). With a heap it's only O(log n). Updating the G score would remain O(n) (since you have to find the node first), unless you also added a hash table from nodes to indexes in the heap.
A linked list is also not a good data structure for the closed set, where you need fast "Contains", which is O(n) in a linked list. You should use a HashSet for that.
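Here is a minimal sketch of those two changes, keeping your Field, getH, getDist and cameFrom as they are (imports from java.util assumed; the Object[] pair is just a stand-in for a small tuple class holding a field and its F value):
// Open set as a priority queue of {field, f} pairs ordered by f = g + h,
// closed set as a HashSet for O(1) contains(). Stale queue entries are
// skipped when polled, so no decrease-key operation is needed.
PriorityQueue<Object[]> openSet =
    new PriorityQueue<Object[]>(Comparator.comparingInt((Object[] e) -> (Integer) e[1]));
Set<Field> closedSet = new HashSet<Field>();
Hashtable<Field, Integer> gscore = new Hashtable<Field, Integer>();
Hashtable<Field, Field> cameFrom = new Hashtable<Field, Field>();
gscore.put(start, 0);
openSet.add(new Object[] { start, getH(start, exit) });
while (!openSet.isEmpty()) {
    Field x = (Field) openSet.poll()[0];       // smallest F in O(log n), no list scan
    if (!closedSet.add(x)) {
        continue;                              // already expanded via a cheaper entry
    }
    if (x == exit) {
        // reconstruct the path via cameFrom, exactly as in your code
        break;
    }
    for (Field y : x.getNeighbourhood()) {
        // keep your walkability / student checks here, then:
        if (closedSet.contains(y)) {
            continue;
        }
        int tentGScore = gscore.get(x) + getDist(x, y);
        if (!gscore.containsKey(y) || tentGScore < gscore.get(y)) {
            gscore.put(y, tentGScore);
            cameFrom.put(y, x);
            openSet.add(new Object[] { y, tentGScore + getH(y, exit) });  // re-add instead of decrease-key
        }
    }
}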
You can optimize the problem by using a different algorithm; the following page illustrates and compares many different algorithms and heuristics:
A*
IDA*
Dijkstra
JumpPoint
...
http://qiao.github.io/PathFinding.js/visual/
From your implementation it seems that you are using the naive A* algorithm. Try the following approach instead:
A* is an algorithm that is implemented using a priority queue, similar to BFS.
A heuristic function is evaluated at each node to define its fitness to be selected as the next node to visit.
As a new node is visited, its neighbouring unvisited nodes are added to the queue with their heuristic values as keys.
Do this until every value in the queue is worse than the calculated value of the goal state.
Find the bottlenecks of your implementation using a profiler, e.g. JProfiler is easy to use.
Use threads in areas where the algorithm can run in parallel.
Tune your Java VM to run faster.
Allocate more RAM.
a) As mentioned, you should use a heap in A* - either a basic binary heap or a pairing heap which should be theoretically faster.
b) In larger maps, it always happens that you need some time for the algorithm to run (i.e., when you request a path, it will simply have to take some time). What can be done is to use some local navigation algorithm (e.g., "run directly to the target") while the path computes.
c) If you have a reasonable number of locations (e.g., in a navmesh) and some time at the start of your code, why not use the Floyd-Warshall algorithm? Using that, you can get the information about where to go next in O(1).
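For what it's worth, here is a minimal Floyd-Warshall sketch over an adjacency matrix. It assumes dist[i][j] is pre-filled with the edge weight, 0 on the diagonal, and a large sentinel such as Integer.MAX_VALUE / 2 (not -1) where there is no edge, so the sums below cannot overflow; the next table then gives you the first hop of the shortest path from i to j in O(1):
// All-pairs shortest paths with first-hop reconstruction.
// dist[i][j]: current best distance, next[i][j]: first vertex to step to.
static void floydWarshall(int[][] dist, int[][] next) {
    int n = dist.length;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            next[i][j] = (dist[i][j] < Integer.MAX_VALUE / 2) ? j : -1;   // -1 = unreachable
        }
    }
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (dist[i][k] + dist[k][j] < dist[i][j]) {
                    dist[i][j] = dist[i][k] + dist[k][j];
                    next[i][j] = next[i][k];   // go the way you would go towards k
                }
            }
        }
    }
}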
I built a new pathfinding algorithm called Fast* (or Fastaer). It is a BFS-like algorithm similar to A*, but faster and more efficient than A*; the accuracy is about 90% of A*. Please see this link for info and a demo.
https://drbendanilloportfolio.wordpress.com/2015/08/14/fastaer-pathfinder/
It has a fast greedy line tracer to make the path straighter.
The demo file has it all. Check Task Manager when using the demo for performance metrics. So far, the profiler results for this show a maximum surviving generation of 4 and low to nil GC time.
I am currently having heavy performance issues with an application I'm developing in natural language processing. Basically, given texts, it gathers various data and does a bit of number crunching.
And for every sentence, it does EXACTLY the same thing. The algorithms applied to gather the statistics do not evolve with previously read data and therefore stay the same.
The issue is that the processing time does not evolve linearly at all: 1 min for 10k sentences, 1 hour for 100k and days for 1M...
I tried everything I could, from re-implementing basic data structures to object pooling to recycling instances. The behavior doesn't change. I get a non-linear increase in time that seems impossible to justify by a few more hashmap collisions, nor by IO waiting, nor by anything else! Java starts to be sluggish as the data increases and I feel totally helpless.
If you want an example, just try the following: count the number of occurrences of each word in a big file. Some code is shown below. By doing this, it takes me 3 seconds over 100k sentences and 326 seconds over 1.6M, so a multiplier of 110 instead of 16. As the data grows further, it just gets worse...
Here is a code sample:
Note that I compare strings by reference (for efficiency reasons), this can be done thanks to the 'String.intern()' method which returns a unique reference per string. And the map is never re-hashed during the whole process for the numbers given above.
public class DataGathering
{
SimpleRefCounter<String> counts = new SimpleRefCounter<String>(1000000);
private void makeCounts(String path) throws IOException
{
BufferedReader file_src = new BufferedReader(new FileReader(path));
String line_src;
int n = 0;
while (file_src.ready())
{
n++;
if (n % 10000 == 0)
System.out.print(".");
if (n % 100000 == 0)
System.out.println("");
line_src = file_src.readLine();
String[] src_tokens = line_src.split("[ ,.;:?!'\"]");
for (int i = 0; i < src_tokens.length; i++)
{
String src = src_tokens[i].intern();
counts.bump(src);
}
}
file_src.close();
}
public static void main(String[] args) throws IOException
{
String path = "some_big_file.txt";
long timestamp = System.currentTimeMillis();
DataGathering dg = new DataGathering();
dg.makeCounts(path);
long time = (System.currentTimeMillis() - timestamp) / 1000;
System.out.println("\nElapsed time: " + time + "s.");
}
}
public class SimpleRefCounter<K>
{
static final double GROW_FACTOR = 2;
static final double LOAD_FACTOR = 0.5;
private int capacity;
private Object[] keys;
private int[] counts;
private int key_count;   // number of distinct keys currently stored
private int total;       // running total of all counts
public SimpleRefCounter()
{
this(1000);
}
public SimpleRefCounter(int capacity)
{
this.capacity = capacity;
keys = new Object[capacity];
counts = new int[capacity];
}
public synchronized int increase(K key, int n)
{
int id = System.identityHashCode(key) % capacity;
while (keys[id] != null && keys[id] != key) // if it's occupied, let's move to the next one!
id = (id + 1) % capacity;
if (keys[id] == null)
{
key_count++;
keys[id] = key;
if (key_count > LOAD_FACTOR * capacity)
{
resize((int) (GROW_FACTOR * capacity));
}
}
counts[id] += n;
total += n;
return counts[id];
}
public synchronized void resize(int capacity)
{
System.out.println("Resizing counters: " + this);
this.capacity = capacity;
Object[] new_keys = new Object[capacity];
int[] new_counts = new int[capacity];
for (int i = 0; i < keys.length; i++)
{
Object key = keys[i];
int count = counts[i];
int id = System.identityHashCode(key) % capacity;
while (new_keys[id] != null && new_keys[id] != key) // if it's occupied, let's move to the next one!
id = (id + 1) % capacity;
new_keys[id] = key;
new_counts[id] = count;
}
this.keys = new_keys;
this.counts = new_counts;
}
public int bump(K key)
{
return increase(key, 1);
}
public int get(K key)
{
int id = System.identityHashCode(key) % capacity;
while (keys[id] != null && keys[id] != key) // if it's occupied, let's move to the next one!
id = (id + 1) % capacity;
if (keys[id] == null)
return 0;
else
return counts[id];
}
}
Any explanations? Ideas? Suggestions?
...and, as said in the beginning, it is not for this toy example in particular but for the more general case. This same exploding behavior occurs for no reason in the more complex and larger program.
Rather than feeling helpless, use a profiler! That would tell you exactly where in your code all this time is spent.
Bursting the processor cache and thrashing the Translation Lookaside Buffer (TLB) may be the problem.
For String.intern you might want to do your own single-threaded implementation.
However, I'm placing my bets on the relatively bad hash values from System.identityHashCode. It clearly isn't using the top bit, as you don't appear to get ArrayIndexOutOfBoundsExceptions. I suggest replacing that with String.hashCode.
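In this class that would mean changing the slot computation in increase(), get() and resize() to something like the line below. The mask keeps the index non-negative (unlike identityHashCode, String.hashCode() can be negative), and the probe loop can keep comparing by reference as long as every key is interned, which your calling code already does:
int id = (key.hashCode() & 0x7fffffff) % capacity;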
String[] src_tokens = line_src.split("[ ,.;:?!'\"]");
Just an idea -- you are creating a new Pattern object for every line here (look at the String.split() implementation). I wonder if this is also contributing to a ton of objects that need to be garbage collected?
I would create the Pattern once, probably as a static field:
final private static Pattern TOKEN_PATTERN = Pattern.compile("[ ,.;:?!'\"]");
And then change the split line do this:
String[] src_tokens = TOKEN_PATTERN.split(line_src);
Or if you don't want to create it as a static field, as least only create it once as a local variable at the beginning of the method, before the while.
In get, when you search for a nonexistent key, search time is proportional to the size of the set of keys.
My advice: if you want a HashMap, just use a HashMap. They got it right for you.
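For comparison, here is the counting loop of the toy example done with a plain HashMap (no interning needed, since HashMap compares keys with equals()/hashCode()); only a sketch, with the same tokenisation as your code:
Map<String, Integer> counts = new HashMap<String, Integer>();
BufferedReader in = new BufferedReader(new FileReader(path));
String line;
while ((line = in.readLine()) != null) {
    for (String token : line.split("[ ,.;:?!'\"]")) {
        Integer c = counts.get(token);
        counts.put(token, c == null ? 1 : c + 1);   // bump the count for this word
    }
}
in.close();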
You are filling up the Perm Gen with the string intern. Have you tried viewing the -Xloggc output?
I would guess it's just memory filling up, growing outside the processor cache, memory fragmentation and the garbage collection pauses kicking in. Have you checked memory use at all? Tried to change the heap size the JVM uses?
Try to do it in Python, and run the Python module from Java.
Enter all the keys in the database, and then execute the following query:
select key, count(*)
from keys
group by key
Have you tried to only iterate through the keys without doing any calculations? Is it faster? If yes, then go with option (2).
Can't you do this? You can get your answer in no time.
It's me, the original poster; something went wrong during registration, so I'm posting separately. I'll try the various suggestions given.
PS for Tom Hawtin: thanks for the hints; perhaps String.intern() takes more and more time as the vocabulary grows. I'll check that tomorrow, along with everything else.