I'm really stuck. I don't know how to modify the code to print each cycle that has been found. The code below returns whether the graph contains a cycle, but I also want to know what all the possible cycles are.
For example, the following graph contains three cycles 0->2->0, 0->1->2->0 and 3->3, so your function must return true.
// A Java Program to detect cycle in a graph
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
class Graph {
private final int V;
private final List<List<Integer>> adj;
public Graph(int V)
{
this.V = V;
adj = new ArrayList<>(V);
for (int i = 0; i < V; i++)
adj.add(new LinkedList<>());
}
// This function is a variation of DFSUtil() in
// https://www.geeksforgeeks.org/archives/18212
private boolean isCyclicUtil(int i, boolean[] visited, boolean[] recStack)
{
// Mark the current node as visited and
// part of recursion stack
if (recStack[i])
return true;
if (visited[i])
return false;
visited[i] = true;
recStack[i] = true;
List<Integer> children = adj.get(i);
for (Integer c: children)
if (isCyclicUtil(c, visited, recStack))
return true;
recStack[i] = false;
return false;
}
private void addEdge(int source, int dest) {
adj.get(source).add(dest);
}
// Returns true if the graph contains a
// cycle, else false.
// This function is a variation of DFS() in
// https://www.geeksforgeeks.org/archives/18212
private boolean isCyclic()
{
// Mark all the vertices as not visited and
// not part of recursion stack
boolean[] visited = new boolean[V];
boolean[] recStack = new boolean[V];
// Call the recursive helper function to
// detect cycle in different DFS trees
for (int i = 0; i < V; i++)
if (isCyclicUtil(i, visited, recStack))
return true;
return false;
}
// Driver code
public static void main(String[] args)
{
Graph graph = new Graph(4);
graph.addEdge(0, 1);
graph.addEdge(0, 2);
graph.addEdge(1, 2);
graph.addEdge(2, 0);
graph.addEdge(2, 3);
graph.addEdge(3, 3);
if(graph.isCyclic())
System.out.println("Graph contains cycle");
else
System.out.println("Graph doesn't "
+ "contain cycle");
}
}
Thank you so much.
Edit:
Previously I mentioned the possibility of using dfs instead of bfs,
however using dfs might produce non-minimal cycles (e.g. if a cycle A->B->C->A exists and a cycle A->B->A exists, it might find the longer one first and never find the second one, as nodes are only visited once).
By definition an elementary cycle is one where no node repeats itself (besides the starting one), so the case is a bit different. As the questioner (of the bounty #ExceptionHandler) wanted such cycles excluded from the output, using bfs solves that issue.
For a pure (brute-force) elementary cycle search a different path finding algorithm would be required.
A general purpose (aka brute force) implementation would entail the following steps:
Step 1: For every node n of a directed graph g, find all paths (using bfs) back to n. If multiple edges between two nodes (with the same direction) exist, they can be ignored at this step, as the algorithm itself should work on nodes rather than edges. Multiple edges can be reintroduced into the cycles during step 5.
Step 2: If no paths are found, continue in step 1 with n+1.
Step 3: Every identified path is a cycle; add it to a list of cycles, and continue with step 1 and n+1.
Step 4: After all nodes have been processed, a list containing all possible cycles has been found (including permutations). Subcycles cannot have been formed, as every node can only be visited once during bfs. In this step all permutations of previously identified cycles are grouped into sets, and only one cycle per set is kept. This can be done by ordering the nodes and removing duplicates.
Step 5: Now the minimal set of cycles has been identified and can be printed out. In case you are looking for edge-specific cycles, replace the connection between two nodes with their respective edge(s).
Example for the graph A->B B->C C->D D->C C->A:
Step 1-3: node A
path identified: A,B,C (A->B B->C C->A)
Step 1-3: node B
path identified: B,C,A (B->C C->A A->B)
Step 1-3: node C
path identified: C,A,B (C->A A->B B->C)
path identified: C,D (C->D D->C)
Step 1-3: node D
path identified: D,C (D->C C->D)
Step 4:
Identified as identical after ordering:
Set1:
A,B,C (A->B B->C C->A)
B,C,A (B->C C->A A->B)
C,A,B (C->A A->B B->C)
Set2:
C,D (C->D D->C)
D,C (D->C C->D)
Therefore remaining cycles:
A,B,C (A->B B->C C->A)
C,D (C->D D->C)
Step 5:
Simply printing out the cycles
(Check the bracket expressions for that,
I simply added them to highlight the relevant edges).
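To make the steps above more concrete, here is a rough sketch of the brute-force search (steps 1 to 4). It is only an illustration, not the more efficient implementation referenced below: it assumes the graph is given as an adjacency list (List<List<Integer>>), and all names are mine.

import java.util.*;

class BruteForceCycles {
    // Steps 1-3: for every node n, BFS over simple paths that lead back to n.
    // Step 4: keep only one cycle per set of nodes, dropping permutations.
    static Set<List<Integer>> findElementaryCycles(List<List<Integer>> adj) {
        Set<Set<Integer>> seenNodeSets = new HashSet<>();
        Set<List<Integer>> cycles = new LinkedHashSet<>();
        for (int n = 0; n < adj.size(); n++) {
            Deque<List<Integer>> queue = new ArrayDeque<>();
            queue.add(new ArrayList<>(Collections.singletonList(n)));
            while (!queue.isEmpty()) {
                List<Integer> path = queue.poll();
                int last = path.get(path.size() - 1);
                for (int next : adj.get(last)) {
                    if (next == n) {
                        // The path leads back to n, so it is a cycle; keep it only if its
                        // node set has not been seen before (this removes permutations).
                        if (seenNodeSets.add(new HashSet<>(path))) {
                            cycles.add(path);
                        }
                    } else if (!path.contains(next)) {
                        // Extend the path; every node may appear at most once.
                        List<Integer> longer = new ArrayList<>(path);
                        longer.add(next);
                        queue.add(longer);
                    }
                }
            }
        }
        return cycles;
    }
}

For the example graph above (A=0, B=1, C=2, D=3) this reports the node lists [0, 1, 2] and [2, 3], matching the two remaining cycles.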
A more efficient sample implementation to identify elementary cycles can be found here, which was taken directly from this answer. If someone wants to come up with a more detailed explanation of how that algorithm works exactly, feel free to do so.
Modifying the main method to:
public static void main(String[] args) {
String nodes[] = new String[4];
boolean adjMatrix[][] = new boolean[4][4];
for (int i = 0; i < 4; i++) {
nodes[i] = String.valueOf((char) ('A' + i));
}
adjMatrix[0][1] = true;
adjMatrix[1][2] = true;
adjMatrix[2][3] = true;
adjMatrix[3][2] = true;
adjMatrix[2][0] = true;
ElementaryCyclesSearch ecs = new ElementaryCyclesSearch(adjMatrix, nodes);
List cycles = ecs.getElementaryCycles();
for (int i = 0; i < cycles.size(); i++) {
List cycle = (List) cycles.get(i);
for (int j = 0; j < cycle.size(); j++) {
String node = (String) cycle.get(j);
if (j < cycle.size() - 1) {
System.out.print(node + " -> ");
} else {
System.out.print(node + " -> " + cycle.get(0));
}
}
System.out.print("\n");
}
}
leads to the desired output of:
A -> B -> C -> A
C -> D -> C
Donald B. Johnson's paper, which describes the approach in more detail, can be found here.
Related
Given a parent list with start and end times as numbers say (p1, p2):
1,5
2,2
4,10
Also another child list with their start and end times as (c1, c2):
2, 4
15,20
Find all the index positions from the parent and child list such that the below condition is satisfied:
p1 <= c1 <= c2 <= p2
For this example, the expected result is (0,0).
Explanation:
The valid combination is :
1 <= 2 <= 4 <= 5 that is position 0 from the parent list (1,5) matches with the condition for position 0 (2,4) of the child list.
So position 0 from the parent list and position 0 from the child list that is (0,0)
Constraints:
size of the parent and child list can be from 1 to 10^5
each element of this list can be from 1 to 10^9
Code that I tried:
static List<List<Integer>> process(List<List<Integer>> parent, List<List<Integer>> child) {
List<List<Integer>> answer = new ArrayList<>();
for(int i=0; i<parent.size(); i++) {
List<Integer> p = parent.get(i);
int p1 = p.get(0);
int p2 = p.get(1);
for(int j=0; j<child.size(); j++) {
List<Integer> c = child.get(j);
int c1 = c.get(0);
int c2 = c.get(1);
if((p1 <= c1) && (c1 <= c2) && (c2 <= p2)) {
answer.add(Arrays.asList(i, j));
}
}
}
return answer;
}
This code works for small inputs but fails for larger list sizes with time-out errors. What is the best approach to solve this problem?
Consider an alternative algorithm
The posted code is slow for large inputs,
because it checks all combinations of parents and children,
even for inputs where the number of answers will be a relatively small set.
I put an emphasis on the last point,
to highlight that when all children are within all parents,
then the answer must contain all pairings.
A more efficient solution is possible for inputs where the number of answers is significantly smaller than all possible pairings. (And without degrading the performance in case the answer is the complete set.)
Loop over the interesting positions from left to right. An interesting position is where a parent or child interval starts or ends.
If the position is a parent:
If this is the start of the parent, add the parent to a linked hashset of started parents.
Otherwise it's the end of the parent. Remove this parent from the linked hashset.
If the position is the end of a child:
Loop over the linked hashset of started parents:
If the parent started at or before the child's start, add the index pair to the answers.
Otherwise break out of the loop; the remaining started parents started after the child.
The key elements that make this fast are the following properties of a linked hashset:
Adding an item is O(1)
Removing an item is O(1)
The insertion order of items is preserved
The last point is especially important,
combined with the idea that we are looping over positions from left to right,
so we have the ordering that we need to eliminate parent-child pairs that won't be part of the answer.
The step of looping over interesting positions above is a bit tricky.
Here's one way to do it:
Define a new class to use for sorting, let's call it Tracker. It must have:
Position of an interesting index: the start or end of a parent or child
A flag to indicate if this position is a start or an end
A flag to indicate if this is a parent or a child
The original index in the parent or child list
Build a list of Tracker instances from the parent and child lists
For each parent, add two instances, one for the start and one for the end
For each child, add two instances, one for the start and one for the end
Sort the list, keeping in mind that the ordering is a bit tricky:
Must be ordered by position
When the position is the same, then:
The start of a parent must come before its own end
The start of a child must come before its own end
The start of a parent at some position X must come before the start of a child at the same position X
The end of a child at some position X must come before the end of a parent at the same position X
Evaluating the alternative algorithm
Given input with M parents and N children,
there are M * N possible combination of pairs.
To contrast the performance of the original and the suggested algorithms,
let's also consider a case where only a small subset of parents contain only a small subset of children,
that is, let's say that on average X parents contain Y children.
The original code will perform M * N comparisons, most of which will not be part of the answer.
The suggested alternative will first sort 2 * (M + N) tracker items, which is a log-linear operation: O((M + N) log (M + N)).
Then the main part of the algorithm is a linear pass,
generating the X * Y pairs with constant overhead per item: O(M + N + X * Y).
The linked hashset makes this possible.
When X * Y is very close to M * N,
the overhead of the alternative algorithm may outweigh the benefits it brings.
However, the overhead grows log-linearly with M + N,
which is significantly smaller than M * N.
In other words, for large values of M and N and a uniformly random distribution of X and Y, the alternative algorithm will perform significantly better on average.
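For a rough sense of scale (illustrative numbers only): with M = N = 10^5, the original approach performs M * N = 10^10 comparisons, whereas sorting the 2 * (M + N) = 4 * 10^5 tracker items costs on the order of 4 * 10^5 * log2(4 * 10^5) ≈ 7 * 10^6 operations, after which the sweep only pays extra work for pairs that actually end up in the answer.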
Ordering of the pairs in the answer
I want to point out that the question doesn't specify the ordering of pairs in the answers.
If a specific ordering is required,
it should be easy to modify the algorithm accordingly.
Alternative implementation
Here's an implementation of the ideas above,
and assuming that the pairs in the answer can be in any order.
List<List<Integer>> findPositions(List<List<Integer>> parent, List<List<Integer>> child) {
List<Tracker> items = new ArrayList<>();
// add the intervals with their original indexes from parent, and the parent flag set to true
for (int index = 0; index < parent.size(); index++) {
List<Integer> item = parent.get(index);
items.add(new Tracker(item.get(0), true, index, true));
items.add(new Tracker(item.get(1), false, index, true));
}
// add the intervals with their original indexes from child, and the parent flag set to false
for (int index = 0; index < child.size(); index++) {
List<Integer> item = child.get(index);
items.add(new Tracker(item.get(0), true, index, false));
items.add(new Tracker(item.get(1), false, index, false));
}
// sort the items by their position,
// parent start before child start,
// child end before parent end,
// start before end of child/parent
items.sort(Comparator.<Tracker>comparingInt(tracker -> tracker.position)
.thenComparing((a, b) -> {
if (a.isStart) {
if (b.isStart) return a.isParent ? -1 : 1;
return -1;
}
if (b.isStart) return 1;
return a.isParent ? 1 : -1;
}));
// prepare the list where we will store the answers
List<List<Integer>> answer = new ArrayList<>();
// track the parents that are started, in their insertion order
LinkedHashSet<Integer> startedParents = new LinkedHashSet<>();
// process the items one by one from left to right
for (Tracker item : items) {
if (item.isParent) {
if (item.isStart) startedParents.add(item.index);
else startedParents.remove(item.index);
} else {
if (!item.isStart) {
int childStart = child.get(item.index).get(0);
for (int parentIndex : startedParents) {
int parentStart = parent.get(parentIndex).get(0);
if (parentStart <= childStart) {
answer.add(Arrays.asList(parentIndex, item.index));
} else {
break;
}
}
}
}
}
return answer;
}
private static class Tracker {
final int position;
final boolean isStart;
final int index;
final boolean isParent;
Tracker(int position, boolean isStart, int index, boolean isParent) {
this.position = position;
this.isStart = isStart;
this.index = index;
this.isParent = isParent;
}
}
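For reference, a quick usage sketch with the example from the question; it assumes the main method sits in the same class as findPositions and the Tracker class above (the class name Solution is just a placeholder):

public static void main(String[] args) {
    List<List<Integer>> parent = Arrays.asList(Arrays.asList(1, 5), Arrays.asList(2, 2), Arrays.asList(4, 10));
    List<List<Integer>> child = Arrays.asList(Arrays.asList(2, 4), Arrays.asList(15, 20));
    // Expected output for the question's example: [[0, 0]]
    System.out.println(new Solution().findPositions(parent, child));
}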
Let's consider each interval as an event. One idea would be to sort the parent and child lists and then scan them from left to right. While doing this, we keep track of the "active events" from the parent list. For example, if the parent list has events e1 = (1, 5), e2 = (8, 11) and the child list has events e1' = (2, 6), e2' = (9, 10), a scan would look like this: start event e1 -> start event e1' -> end event e1 -> end event e1' -> start event e2 -> start event e2' -> end event e2' -> end event e2.
While scanning, we keep track of the active events from the parent list by adding them to a binary search tree, sorted by starting point. When we reach the end of an event ek' from the child list, we search for the starting point of ek' in the binary tree, and that way find all active intervals that have a smaller key. We can pair each of these up with the child interval and add the pairs to the solution.
The total time complexity is still O(n^2), since it is possible that every child interval is in every parent interval. However, the complexity should be close to n*log(n) if there are only a few such pairs. I got part of the idea from the following link, so looking at it might help you understand what I am doing: Sub O(n^2) algorithm for counting nested intervals?
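Below is a rough, untested sketch of the event scan described above, using a TreeMap keyed by parent start time in place of a hand-written binary search tree; the method and variable names are mine and it assumes the usual java.util imports. Only two event types per parent (start and end) and one per child (end) are needed, because the lookup at a child's end event already filters on the child's start.

static List<List<Integer>> sweep(List<List<Integer>> parent, List<List<Integer>> child) {
    // Each event is {position, order, index}. 'order' breaks ties at equal positions:
    // 0 = parent starts, 1 = child ends, 2 = parent ends, so that boundary-touching
    // intervals (p1 == c1 or c2 == p2) still count as contained.
    List<int[]> events = new ArrayList<>();
    for (int i = 0; i < parent.size(); i++) {
        events.add(new int[]{parent.get(i).get(0), 0, i});
        events.add(new int[]{parent.get(i).get(1), 2, i});
    }
    for (int j = 0; j < child.size(); j++) {
        events.add(new int[]{child.get(j).get(1), 1, j});
    }
    events.sort(Comparator.<int[]>comparingInt(e -> e[0]).thenComparingInt(e -> e[1]));

    TreeMap<Integer, List<Integer>> active = new TreeMap<>(); // active parents, keyed by start
    List<List<Integer>> answer = new ArrayList<>();
    for (int[] e : events) {
        int position = e[0], kind = e[1], index = e[2];
        if (kind == 0) {            // a parent becomes active
            active.computeIfAbsent(position, k -> new ArrayList<>()).add(index);
        } else if (kind == 2) {     // a parent ends and is no longer active
            int start = parent.get(index).get(0);
            List<Integer> list = active.get(start);
            list.remove(Integer.valueOf(index));
            if (list.isEmpty()) active.remove(start);
        } else {                    // a child ends: every active parent that started at or
                                    // before the child's start contains this child
            int childStart = child.get(index).get(0);
            for (List<Integer> starts : active.headMap(childStart, true).values()) {
                for (int parentIndex : starts) {
                    answer.add(Arrays.asList(parentIndex, index));
                }
            }
        }
    }
    return answer;
}

As with the linked-hashset version above, the cost is roughly proportional to the number of events plus the number of reported pairs, with an extra logarithmic factor for the TreeMap operations.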
Firstly, here you can add a break inside the if condition:
if((p1 <= c1) && (c1 <= c2) && (c2 <= p2)) {
answer.add(i);
answer.add(j);
break;
}
Secondly, as this code has a time complexity of O(n^2) and slows down as your input grows, you can use other data structures like trees, where searching takes O(log n) time, to reduce it.
RBinaryTree<Pair> tree = new RBinaryTree<>();
and in if condition...
tree.add(new Pair(i, j));
Create a Pair class like
private static class Pair {
int p;
int c;
Pair(int p, int c) {
this.p = p;
this.c = c;
}
}
Also, you can use other approaches like divide and conquer, by splitting the lists into sublists.
It's my honor to share my thoughts. There may still be some shortcomings that I haven't found; please correct them. This is for reference only.
First, process the parent list and child list, and add a third element to represent their input order. Then we need to write a Comparator
Comparator<List<Integer>> listComparator = (o1, o2) -> {
if (o1.get(0) < o2.get(0)) {
return -1;
} else if (o1.get(0) > o2.get(0)) {
return 1;
}
if (o1.get(1) < o2.get(1)) {
return -1;
} else if (o1.get(1) > o2.get(1)) {
return 1;
}
return 0;
};
and use list.stream().sorted() to sort the elements in the list. At the same time, we can use list.stream().filter() to filter out invalid elements, so that we get an ordered list. For the ordered list, we can walk through the parent list, find the elements in the child list that satisfy the size relationship, and record the index. For subsequent elements of the parent list, we can start the search directly from the recorded index.
Finally, the collected results are sorted and output in ascending order.
Here is the completion code:
static List<List<Integer>> process(List<List<Integer>> parent, List<List<Integer>> child) {
// The third element represents the original order number
int index = 0;
for (List<Integer> list : parent) {
list.add(index++);
}
index = 0;
for (List<Integer> list : child) {
list.add(index++);
}
Comparator<List<Integer>> listComparator = (o1, o2) -> {
if (o1.get(0) < o2.get(0)) {
return -1;
} else if (o1.get(0) > o2.get(0)) {
return 1;
}
if (o1.get(1) < o2.get(1)) {
return -1;
} else if (o1.get(1) > o2.get(1)) {
return 1;
}
return 0;
};
List<List<Integer>> parentSorted = parent.stream().filter(integers -> integers.get(0) <= integers.get(1)).sorted(listComparator).collect(Collectors.toList());
List<List<Integer>> childSorted = child.stream().filter(integers -> integers.get(0) <= integers.get(1)).sorted(listComparator).collect(Collectors.toList());
int childPointer = 0;
List<List<Integer>> answer = new ArrayList<>();
for (int i = 0; i < parentSorted.size() && childPointer < childSorted.size(); i++) { // stop once every remaining child has been consumed
// Search the child list elements that meet the requirement that the parent list is greater than or equal to the ith element. The elements behind the parent list must be greater than or equal to the ith element. Therefore, for the following elements, you can directly search from the child list elements of the childPointer
if (parentSorted.get(i).get(0) <= childSorted.get(childPointer).get(0)) {
for (int j = childPointer; j < childSorted.size(); j++) {
if (parentSorted.get(i).get(0) <= childSorted.get(j).get(0)) {
if (childSorted.get(j).get(1) <= parentSorted.get(i).get(1)) {
answer.add(Arrays.asList(parentSorted.get(i).get(2), childSorted.get(j).get(2)));
} else {
break;
}
} else {
break;
}
}
} else {
// The child list pointer moves backward, and the parent list continues to judge the ith element
childPointer++;
i--;
}
}
return answer.stream().sorted(listComparator).collect(Collectors.toList());
}
Idea: it is similar to balanced parentheses, where ()()) is invalid and (())() is valid. Now we use (P1, -P1), (C1, -C1), ... as the symbols instead of ( and ), where Pi is the start time for parent i and -Pi is the end time, and similarly for the other variables. We say Ci is balanced with Pi iff both Ci and -Ci are present between Pi and -Pi.
Some implementation detail: first sort all the numbers, then make a stack and push the symbols starting from the earliest time (the first event). An example stack might look like start: [P1, C3, P2, C2, C1, P3, -C2, -P1, -C3, -P3, -C1, -P2] :top. Now maintain lists for all parents, keeping track of the children between them, and find the ones that start and end in the scope of the parent, i.e. both Ci and -Ci are in the list of Pi. The list for Pi closes when -Pi is read.
Hope this helps!
Using the Streams API from Java 8 might allow more efficient processing, but I'm not sure if it would help in your context.
static List<List<Integer>> process(List<List<Integer>> parent, List<List<Integer>> child) {
List<List<Integer>> answer = new ArrayList<>();
IntStream.range(0, parent.size()).forEach(parentIndex -> IntStream.range(0, child.size()).forEach(childIndex -> {
List<Integer> p = parent.get(parentIndex);
List<Integer> c = child.get(childIndex);
int p1 = p.get(0);
int p2 = p.get(1);
int c1 = c.get(0);
int c2 = c.get(1);
if((p1 <= c1) && (c1 <= c2) && (c2 <= p2)) {
answer.add(Arrays.asList(parentIndex, childIndex));
}
}));
return answer;
}
Following is another implementation using Streams API
static List<List<Integer>> process(List<List<Integer>> parent, List<List<Integer>> child) {
return
IntStream.range(0, parent.size()).mapToObj(parentIndex ->
IntStream.range(0, child.size()).filter(childIndex -> {
List<Integer> p = parent.get(parentIndex);
List<Integer> c = child.get(childIndex);
int p1 = p.get(0);
int p2 = p.get(1);
int c1 = c.get(0);
int c2 = c.get(1);
return ((p1 <= c1) && (c1 <= c2) && (c2 <= p2));
}).mapToObj(childIndex -> Arrays.asList(parentIndex, childIndex))
.collect(Collectors.toList())
).flatMap(Collection::stream).collect(Collectors.toList());
}
I want to implement dfs for nodes that are of type long in Java.
My algorithm correctly calculates the number of nodes and the number
of edges, but not the sequence of nodes. Could you please help me
modify my algorithm so that I correctly calculate the order in which
the nodes are visited?
This is my code:
private int getNumberOfNodes(long firstNode) {
List<Long> marked = new ArrayList<>(); //------------------------------------------->
Stack<Long> stack = new Stack<Long>(); //step 1 Create/declare stack
stack.push(firstNode); //Step 2 Put/push inside the first node
while (!stack.isEmpty()) { //Repeat till stack is empty:
Long node = stack.pop(); //Step 3 Extract the top node in the stack
marked.add(node); //------------------------------------------->
long[] neighbors = xgraph.getNeighborsOf(node); //Get neighbors
if (neighbors.length % 2 == 0) {
} else {
numOfNodesWithOddDegree++;
}
int mnt = 0;
for (long currentNode : neighbors) {
if (!marked.contains(currentNode) && !stack.contains(currentNode) ) { //&& !stack.contains(currentNode)
stack.push(currentNode);
} else {
}
if (!marked.contains(currentNode)) {
numOfEdges++;
}
}
}
return marked.size(); //(int) Arrays.stream(neighbors).count();
}
I guess you examine the marked list for the sequence.
As your graph is undirected, the traversal sequence can vary based on which neighbor you push onto the stack first, which means the logic of your function:
xgraph.getNeighborsOf(node)
could impact your sequence. See "Vertex orderings" in this wiki article: https://en.wikipedia.org/wiki/Depth-first_search
So my conclusion is: you may get a different traversal sequence, but that does not mean your DFS is wrong. As long as it is a depth-first search, it is OK for it to be a little bit different from the given answer.
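If you do want the marked list to reflect a conventional DFS preorder, one option is to record each node the first time it comes off the stack and to skip nodes that were already recorded. The sketch below reuses the question's xgraph.getNeighborsOf call; everything else (the names, the reverse push order) is just illustrative:

private List<Long> dfsOrder(long start) {
    List<Long> order = new ArrayList<>();      // nodes in the order they are first visited
    Set<Long> visited = new HashSet<>();
    Deque<Long> stack = new ArrayDeque<>();
    stack.push(start);
    while (!stack.isEmpty()) {
        long node = stack.pop();
        if (!visited.add(node)) {
            continue;                          // already visited via another neighbor
        }
        order.add(node);
        long[] neighbors = xgraph.getNeighborsOf(node);
        // Push in reverse so the first neighbor returned is explored first;
        // a different push order produces a different (but still valid) DFS sequence.
        for (int i = neighbors.length - 1; i >= 0; i--) {
            if (!visited.contains(neighbors[i])) {
                stack.push(neighbors[i]);
            }
        }
    }
    return order;
}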
I implemented Held-Karp in Java following Wikipedia, and it gives the correct solution for the total distance of a cycle; however, I need it to give me a path (one that doesn't end on the same vertex where it started). I can get a path if I remove the edge with the largest weight from the cycle, but two different cycles may have the same total distance yet different maximum weights, so one of the resulting paths would be wrong.
Here is my implementation:
//recursion is called with tspSet = [0, {set of all other vertices}]
private static TSPSet recursion (TSPSet tspSet) {
int end = tspSet.endVertex;
HashSet<Integer> set = tspSet.verticesBefore;
if (set.isEmpty()) {
TSPSet ret = new TSPSet(end, new HashSet<>());
ret.secondVertex = -1;
ret.totalDistance = matrix[end][0];
return ret;
}
int min = Integer.MAX_VALUE;
int minVertex = -1;
HashSet<Integer> copy;
for (int current: set) {
copy = new HashSet<>(set);
copy.remove(current);
TSPSet candidate = new TSPSet(current, copy);
int distance = matrix[end][current] + recursion(candidate).totalDistance;
if (distance < min) {
min = distance;
minVertex = current;
}
}
tspSet.secondVertex = minVertex;
tspSet.totalDistance = min;
return tspSet;
}
class TSPSet {
int endVertex;
int secondVertex;
int totalDistance;
HashSet<Integer> verticesBefore;
public TSPSet(int endVertex, HashSet<Integer> vertices) {
this.endVertex = endVertex;
this.secondVertex = -1;
this.verticesBefore = vertices;
}
}
You can slightly alter the dynamic programming state.
Let the path start in a node S. Let f(subset, end) be the optimal cost of a path that goes through all the vertices in the subset and ends in the end vertex (S and end must always be in the subset). A transition is just adding a new vertex V that is not yet in the subset, by using the end->V edge.
If you need a path that ends at T, the answer is f(all vertices, T).
A side note: what you're doing now is not a dynamic programming. It's an exhaustive search as you do not memoize answers for subsets and end up checking all possibilities (which results in O(N! * Poly(N)) time complexity).
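To illustrate the reformulated state, here is a hedged sketch of a memoized f(subset, end): it is not the poster's code, it fixes the start vertex S at index 0, reads the same static matrix as the question (matrix[a][b] being the distance from a to b, with a complete matrix and a target other than vertex 0), and represents subsets as bitmasks.

static int[][] memo;   // memo[subset][end] caches f(subset, end); -1 means "not computed yet"

static int shortestPathTo(int target) {
    int n = matrix.length;
    memo = new int[1 << n][n];
    for (int[] row : memo) {
        Arrays.fill(row, -1);
    }
    return f((1 << n) - 1, target);            // f(all vertices, target)
}

// Cheapest path that starts at vertex 0, visits exactly the vertices in 'subset'
// (a bitmask that must contain 0 and 'end'), and ends at 'end'.
static int f(int subset, int end) {
    if (subset == ((1 << 0) | (1 << end))) {
        return matrix[0][end];                 // only the start vertex and 'end' remain
    }
    if (memo[subset][end] != -1) {
        return memo[subset][end];
    }
    int best = Integer.MAX_VALUE;
    int rest = subset & ~(1 << end);           // the subset before 'end' was appended
    for (int prev = 1; prev < matrix.length; prev++) {
        if (prev == end || (rest & (1 << prev)) == 0) {
            continue;                          // the predecessor must be in the remaining subset
        }
        best = Math.min(best, f(rest, prev) + matrix[prev][end]);
    }
    memo[subset][end] = best;
    return best;
}

The path itself can be recovered the same way as in the cycle version, by also remembering which prev achieved the minimum for each state.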
Problem with current approach
Consider this graph:
The shortest path visiting all vertices (exactly once each) has length 3, but the shortest cycle is 1+100+200+300 = 601, which still leaves a path of length 301 even after you remove the maximum weight edge.
In other words, it is not correct to construct the shortest path by deleting an edge from the shortest cycle.
Suggested approach
An alternative approach to convert your cycle algorithm into a path algorithm is to add a new node to the graph which has a zero cost edge to all of the other nodes.
Any path in the original graph corresponds to a cycle in this graph (the start and end points of the path are the nodes that the extra node connects to).
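A minimal sketch of that idea, assuming a square distance matrix like the one in the question (matrix[a][b] = distance from a to b): append one extra row and column for the new vertex with zero-cost edges in both directions, run the existing cycle algorithm on the extended matrix, and drop the extra vertex from the resulting cycle to read off the path.

// Extend the distance matrix with an extra vertex (index n) that has zero-cost
// edges to and from every original vertex. The optimal cycle in the extended
// graph, with the extra vertex removed, is the optimal path in the original graph.
static int[][] withExtraVertex(int[][] matrix) {
    int n = matrix.length;
    int[][] extended = new int[n + 1][n + 1];
    for (int i = 0; i < n; i++) {
        System.arraycopy(matrix[i], 0, extended[i], 0, n);
        extended[i][n] = 0;   // edge i -> extra vertex
        extended[n][i] = 0;   // edge extra vertex -> i
    }
    return extended;
}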
I made a little recursive algorithm to find a solution to a maze in the following format
###S###
##___##
##_#_##
#__#_##
#E___##
Where a '#' represents a wall, and '_' represents an open space (free to move through). 'S' represents the start location, 'E' represents the end location.
My algorithm works fine, but I'm wondering how to modify it to work for the shortest path.
/**
* findPath()
*
* @param location - Point to search
* @return true when maze solution is found, false otherwise
*/
private boolean findPath(Point location) {
// We have reached the end point, and solved the maze
if (location.equals(maze.getEndCoords())) {
System.out.println("Found path length: " + pathLength);
maze.setMazeArray(mazeArray);
return true;
}
ArrayList<Point> possibleMoves = new ArrayList<Point>();
// Move Right
possibleMoves.add(new Point(location.x + 1, location.y));
// Down Move
possibleMoves.add(new Point(location.x, location.y - 1));
// Move Left
possibleMoves.add(new Point(location.x - 1, location.y));
// Move Up
possibleMoves.add(new Point(location.x, location.y + 1));
for (Point potentialMove : possibleMoves) {
if (spaceIsFree(potentialMove)) {
// Move to the free space
mazeArray[potentialMove.x][potentialMove.y] = currentPathChar;
// Increment path characters as alphabet
if (currentPathChar == 'z')
currentPathChar = 'a';
else
currentPathChar++;
// Increment path length
pathLength++;
// Find the next path to traverse
if (findPath(potentialMove)) {
return true;
}
// Backtrack, this route doesn't lead to the end
mazeArray[potentialMove.x][potentialMove.y] = Maze.SPACE_CHAR;
if (currentPathChar == 'a')
currentPathChar = 'z';
else
currentPathChar--;
// Decrease path length
pathLength--;
}
}
// Previous space needs to make another move
// We will also return false if the maze cannot be solved.
return false;
}
The first block is where I find the path and break out of the recursion. The char[][] array with the path written on it is set as well, and is later printed out as the result.
It works well, but I'm wondering what would be the best way to modify it to not break out after it finds the first successful path, but keep going until it finds the shortest possible path.
I tried doing something like this, modifying the findPath() method and adding shortestPathLength and hasFoundPath variables. The first indicates the length of the shortest path found so far, and hasFoundPath indicates whether or not we have found any path yet.
// We have reached the end point, and solved the maze
if (location.equals(maze.getEndCoords())) {
System.out.println("Found path length: " + pathLength);
// Is this path shorter than the previous?
if (hasFoundPath && pathLength < shortestPathLength) {
maze.setMazeArray(mazeArray);
shortestPathLength = pathLength;
} else if (!hasFoundPath) {
hasFoundPath = true;
maze.setMazeArray(mazeArray);
shortestPathLength = pathLength;
}
//return true;
}
But I haven't been able to get it to set the mazeArray to the correct values of any shortest path it may find.
Any guidance would be appreciated :) Thanks
The spaceIsFree() method simply makes sure the up/left/down/right coordinates are valid before moving to them. That is, it makes sure the char is a '_' or 'E' and that it isn't out of bounds.
Your code appears to perform a depth-first search (DFS). To find the shortest path you will want to switch to a breadth-first search (BFS). It's not something you can do by adding a few variables to your existing code. It will require rewriting your algorithm.
One way to convert a DFS into a BFS is to get rid of the recursion and switch to using an explicit stack to keep track of which nodes you've visited so far. Each iteration of your search loop, you (1) pop a node off the stack; (2) check if that node is the solution; and (3) push each of its children onto the stack. In pseudo code, that looks like:
Depth-first search
stack.push(startNode)
while not stack.isEmpty:
node = stack.pop()
if node is solution:
return
else:
stack.pushAll(node.children)
If you then switch the stack to a queue this will implicitly become a BFS, and a BFS will naturally find the shortest path(s).
Breadth-first search
queue.add(startNode)
while not queue.isEmpty:
node = queue.remove()
if node is solution:
return
else:
queue.addAll(node.children)
A couple of additional notes:
The above algorithms are suitable for trees: mazes that don't have loops. If your mazes have loops then you'll need to make sure you don't revisit nodes you've already seen. In that case, you'll need to add logic to keep track of all the already visited nodes and avoid adding them onto the stack/queue a second time.
As written, these algorithms will find the target node but they don't remember the path that got them there. Adding that is an exercise for the reader.
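For example, one way to cover both notes in Java (a sketch only; Point is java.awt.Point as in the question, and the neighbors function is a stand-in for however you enumerate legal moves, e.g. your spaceIsFree check) is to keep a cameFrom map that doubles as the visited set and lets you rebuild the path once the goal is reached:

static List<Point> bfsPath(Point start, Point goal, Function<Point, List<Point>> neighbors) {
    Queue<Point> queue = new ArrayDeque<>();
    Map<Point, Point> cameFrom = new HashMap<>();   // node -> predecessor; also the visited set
    queue.add(start);
    cameFrom.put(start, null);
    while (!queue.isEmpty()) {
        Point node = queue.remove();
        if (node.equals(goal)) {
            LinkedList<Point> path = new LinkedList<>();
            for (Point p = goal; p != null; p = cameFrom.get(p)) {
                path.addFirst(p);                   // walk backwards from the goal to the start
            }
            return path;                            // shortest path, start first
        }
        for (Point next : neighbors.apply(node)) {
            if (!cameFrom.containsKey(next)) {      // don't enqueue already-seen cells (handles loops)
                cameFrom.put(next, node);
                queue.add(next);
            }
        }
    }
    return null;                                    // goal not reachable
}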
Here's the BFS-search solution I came up with.
It marks the starting point as "1", then marks each adjacent one that it can travel to as "2", and each adjacent one to the 2's that can be traveled to as "3" and so on.
Then it starts at the end, and goes backwards using the decrementing "level" values which results in the shortest path.
private LinkedList<Point> findShortestPath(Point startLocation) {
// This double array keeps track of the "level" of each node.
// The level increments, starting at the startLocation to represent the path
int[][] levelArray = new int[mazeArray.length][mazeArray[0].length];
// Assign every free space as 0, every wall as -1
for (int i=0; i < mazeArray.length; i++)
for (int j=0; j< mazeArray[0].length; j++) {
if (mazeArray[i][j] == Maze.SPACE_CHAR || mazeArray[i][j] == Maze.END_CHAR)
levelArray[i][j] = 0;
else
levelArray[i][j] = -1;
}
// Keep track of the traversal in a queue
LinkedList<Point> queue = new LinkedList<Point>();
queue.add(startLocation);
// Mark starting point as 1
levelArray[startLocation.x][startLocation.y] = 1;
// Mark every adjacent open node with a numerical level value
while (!queue.isEmpty()) {
Point point = queue.poll();
// Reached the end
if (point.equals(maze.getEndCoords()))
break;
int level = levelArray[point.x][point.y];
ArrayList<Point> possibleMoves = new ArrayList<Point>();
// Move Up
possibleMoves.add(new Point(point.x, point.y + 1));
// Move Left
possibleMoves.add(new Point(point.x - 1, point.y));
// Down Move
possibleMoves.add(new Point(point.x, point.y - 1));
// Move Right
possibleMoves.add(new Point(point.x + 1, point.y));
for (Point potentialMove: possibleMoves) {
if (spaceIsValid(potentialMove)) {
// Able to move here if it is labeled as 0
if (levelArray[potentialMove.x][potentialMove.y] == 0) {
queue.add(potentialMove);
// Set this adjacent node as level + 1
levelArray[potentialMove.x][potentialMove.y] = level + 1;
}
}
}
}
// Couldn't find solution
if (levelArray[maze.getEndCoords().x][maze.getEndCoords().y] == 0)
return null;
LinkedList<Point> shortestPath = new LinkedList<Point>();
Point pointToAdd = maze.getEndCoords();
while (!pointToAdd.equals(startLocation)) {
shortestPath.push(pointToAdd);
int level = levelArray[pointToAdd.x][pointToAdd.y];
ArrayList<Point> possibleMoves = new ArrayList<Point>();
// Move Right
possibleMoves.add(new Point(pointToAdd.x + 1, pointToAdd.y));
// Down Move
possibleMoves.add(new Point(pointToAdd.x, pointToAdd.y - 1));
// Move Left
possibleMoves.add(new Point(pointToAdd.x - 1, pointToAdd.y));
// Move Up
possibleMoves.add(new Point(pointToAdd.x, pointToAdd.y + 1));
for (Point potentialMove: possibleMoves) {
if (spaceIsValid(potentialMove)) {
// The shortest level will always be level - 1, from this current node.
// Longer paths will have higher levels.
if (levelArray[potentialMove.x][potentialMove.y] == level - 1) {
pointToAdd = potentialMove;
break;
}
}
}
}
return shortestPath;
}
The spaceIsValid() method simply ensures that the space is not out of bounds.
I was doing Codeforces problems and wanted to implement Dijkstra's shortest path algorithm for a directed graph in Java using an adjacency matrix, but I'm having difficulty making it work for sizes other than the one it is coded to handle.
Here is my working code
int max = Integer.MAX_VALUE;//substitute for infinity
int[][] points={//I used -1 to denote non-adjacency/edges
//0, 1, 2, 3, 4, 5, 6, 7
{-1,20,-1,80,-1,-1,90,-1},//0
{-1,-1,-1,-1,-1,10,-1,-1},//1
{-1,-1,-1,10,-1,50,-1,20},//2
{-1,-1,-1,-1,-1,-1,20,-1},//3
{-1,50,-1,-1,-1,-1,30,-1},//4
{-1,-1,10,40,-1,-1,-1,-1},//5
{-1,-1,-1,-1,-1,-1,-1,-1},//6
{-1,-1,-1,-1,-1,-1,-1,-1} //7
};
int [] record = new int [8];//keeps track of the distance from start to each node
Arrays.fill(record,max);
int sum =0;int q1 = 0;int done =0;
ArrayList<Integer> Q1 = new ArrayList<Integer>();//nodes to transverse
ArrayList<Integer> Q2 = new ArrayList<Integer>();//nodes collected while transversing
Q1.add(0);//starting point
q1= Q1.get(0);
while(done<9) {// <<< My Problem
for(int q2 = 1; q2<8;q2++) {//skips over the first/starting node
if(points[q1][q2]!=-1) {//if node is connected by an edge
if(record[q1] == max)//never visited before
sum=0;
else
sum=record[q1];//starts from where it left off
int total = sum+points[q1][q2];//total distance of route
if(total < record[q2])//connected node distance
record[q2]=total;//if smaller
Q2.add(q2);//colleceted node
}
}
done++;
Q1.remove(0);//removes the first node because it has just been used
if(Q1.size()==0) {//if there are no more nodes to transverse
Q1=Q2;//Pours all the collected connecting nodes to Q1
Q2= new ArrayList<Integer>();
q1=Q1.get(0);
}
else//
q1=Q1.get(0);//sets starting point
}
However, my version of the algorithm only works because I set the while loop to the solved answer. So in other words, it only works for this problem/graph because I solved it by hand first.
How could I make it so it works for all groups of all sizes?
Here is the pictorial representation of the example graph my problem was based on:
I think the main answer you are looking for is that you should let the while-loop run until Q1 is empty. What you're doing is essentially a best-first search. There are more changes required though, since your code is a bit unorthodox.
Commonly, Dijkstra's algorithm is used with a priority queue. Q1 is your "todo list" as I understand from your code. The specification of Dijkstra's says that the vertex that is closest to the starting vertex should be explored next, so rather than an ArrayList, you should use a PriorityQueue for Q1 that sorts vertices according to which is closest to the starting vertex. The most common Java implementation uses the PriorityQueue together with a tuple class: An internal class which stores a reference to a vertex and a "distance" to the starting vertex. The specification for Dijkstra's also specifies that if a new edge is discovered that makes a vertex closer to the start, the DecreaseKey operation should then be used on the entry in the priority queue to make the vertex come up earlier (since it is now closer). However, since PriorityQueue doesn't support that operation, a completely new entry is just added to the queue. If you have a good implementation of a heap that supports this operation (I made one myself, here) then decreaseKey can significantly increase efficiency as you won't need to create those tuples any more either then.
So I hope that is a sufficient answer then: Make a proper 'todo' list instead of Q1, and to make the algorithm generic, let that while-loop run until the todo list is empty.
Edit: I made you an implementation based on your format, that seems to work:
public void run() {
final int[][] points = { //I used -1 to denote non-adjacency/edges
//0, 1, 2, 3, 4, 5, 6, 7
{-1,20,-1,80,-1,-1,90,-1}, //0
{-1,-1,-1,-1,-1,10,-1,-1}, //1
{-1,-1,-1,10,-1,50,-1,20}, //2
{-1,-1,-1,-1,-1,-1,20,-1}, //3
{-1,50,-1,-1,-1,-1,30,-1}, //4
{-1,-1,10,40,-1,-1,-1,-1}, //5
{-1,-1,-1,-1,-1,-1,-1,-1}, //6
{-1,-1,-1,-1,-1,-1,-1,-1} //7
};
final int[] result = dijkstra(points,0);
System.out.print("Result:");
for(final int i : result) {
System.out.print(" " + i);
}
}
public int[] dijkstra(final int[][] points,final int startingPoint) {
final int[] record = new int[points.length]; //Keeps track of the distance from start to each vertex.
final boolean[] explored = new boolean[points.length]; //Keeps track of whether we have completely explored every vertex.
Arrays.fill(record,Integer.MAX_VALUE);
final PriorityQueue<VertexAndDistance> todo = new PriorityQueue<>(points.length); //Vertices left to traverse.
todo.add(new VertexAndDistance(startingPoint,0)); //Starting point (and distance 0).
record[startingPoint] = 0; //We already know that the distance to the starting point is 0.
while(!todo.isEmpty()) { //Continue until we have nothing left to do.
final VertexAndDistance next = todo.poll(); //Take the next closest vertex.
final int q1 = next.vertex;
if(explored[q1]) { //We have already done this one, don't do it again.
continue; //...with the next vertex.
}
for(int q2 = 1;q2 < points.length;q2++) { //Find connected vertices.
if(points[q1][q2] != -1) { //If the vertices are connected by an edge.
final int distance = record[q1] + points[q1][q2];
if(distance < record[q2]) { //And it is closer than we've seen so far.
record[q2] = distance;
todo.add(new VertexAndDistance(q2,distance)); //Explore it later.
}
}
}
explored[q1] = true; //We're done with this vertex now.
}
return record;
}
private class VertexAndDistance implements Comparable<VertexAndDistance> {
private final int distance;
private final int vertex;
private VertexAndDistance(final int vertex,final int distance) {
this.vertex = vertex;
this.distance = distance;
}
/**
* Compares two {@code VertexAndDistance} instances by their distance.
* @param other The instance with which to compare this instance.
* @return A positive integer if this distance is more than the distance
* of the specified object, a negative integer if it is less, or
* {@code 0} if they are equal.
*/
@Override
public int compareTo(final VertexAndDistance other) {
return Integer.compare(distance,other.distance);
}
}
Output: 0 20 40 50 2147483647 30 70 60