Can Dijkstra's algorithm be applied to the Travelling Salesman Problem? - java

This is a general query. Dijkstra's algorithm finds the shortest distances between nodes, while the TSP asks for the shortest route that starts and ends on the same node and travels through every node at least once.
Is there any way I can solve it using Dijkstra's algorithm approach because I am unable to implement the complex approach using dynamic programming?

Dijkstra's algorithm can be used, but it doesn't help (much).
First you need to see that the graph you "need to use" to find a solution is not the input graph G = <V, E> but a graph derived from it, say
Gd = <Vd, Ed>, where Vd is the set of ordered sequences of distinct vertices of V, and Ed contains an edge ([v1, ..., vn], [v1, ..., vn, vm]) whenever (vn, vm) is in E.
The cost of an edge in Gd is the cost of the corresponding edge (vn, vm) in G, and a node of Gd is a goal state when it contains all nodes of G.
Brute-force depth-first/breadth-first/iterative-deepening search would work. How about Dijkstra? You need
a consistent heuristic, whose estimate is always less than or equal to
the estimated distance from any neighboring vertex to the goal, plus
the step cost of reaching that neighbor.
Obviously, the constant zero is such a heuristic. Can you get a better heuristic?
Not really, due to the NP-hard nature of TSP. Interestingly, in real-world problems you can sometimes find inconsistent heuristics which still produce good results.
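To make this concrete, here is a minimal Java sketch of a Dijkstra-style (uniform-cost, i.e. zero-heuristic) search over the derived graph: each state is an ordered list of visited vertices, extending a state costs the corresponding edge weight in G, and a state is a goal once every vertex has been visited and the tour is closed back to the start. It assumes a complete graph with non-negative weights given as a matrix w; the class and method names are made up for illustration.

    import java.util.*;

    public class TspUniformCostSketch {

        // Cheapest tour that starts and ends at 'start', assuming w describes a
        // complete graph with non-negative edge weights.
        static int bestTour(int[][] w, int start) {
            int n = w.length;
            // Priority queue ordered by accumulated cost (the "zero heuristic").
            PriorityQueue<State> pq = new PriorityQueue<>(Comparator.comparingInt(State::cost));
            pq.add(new State(List.of(start), 0));
            while (!pq.isEmpty()) {
                State s = pq.poll();
                List<Integer> path = s.path();
                int last = path.get(path.size() - 1);
                if (path.size() == n + 1)          // goal: all vertices visited, tour closed
                    return s.cost();
                if (path.size() == n) {            // all vertices visited: close the tour
                    List<Integer> closed = new ArrayList<>(path);
                    closed.add(start);
                    pq.add(new State(closed, s.cost() + w[last][start]));
                    continue;
                }
                for (int v = 0; v < n; v++) {      // extend the partial tour by one unvisited vertex
                    if (path.contains(v)) continue;
                    List<Integer> next = new ArrayList<>(path);
                    next.add(v);
                    pq.add(new State(next, s.cost() + w[last][v]));
                }
            }
            return -1;                             // unreachable for a complete graph
        }

        // A node of the derived graph: a partial tour and its accumulated cost.
        record State(List<Integer> path, int cost) {}
    }

Because the queue always expands the cheapest partial tour first and weights are non-negative, the first closed tour popped is optimal; but the queue can still hold on the order of n! states, which is exactly the "doesn't help much" caveat above.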

Related

Brute Force Solution to Shortest Path in a directed weighted graph with negative cycle

I'm trying to write a brute-force algorithm to find the shortest path from s to t. The graph is weighted and also has negative-weight edges. There is no need to handle negative cycles; basically, exit if there is one.
I've written the Bellman-Ford algorithm to solve this problem and it works very well (in case of "use better algorithms" comments). However, as a second step, I need to implement a brute-force algorithm. I tried to write it on top of breadth-first search; however, as I mentioned, there are negative edges, so in some cases I need to revisit some nodes.
A brute-force algorithm for graphs with non-negative edges:

    Distance(s, t):
        for each path p from s to t:
            compute w(p)
        return the p encountered with the smallest w(p)
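As a rough Java sketch of this pseudocode (the names and the adjacency-list shape are illustrative): it enumerates every simple path from s to t by DFS and keeps the cheapest one. Restricting the search to simple paths is safe here because, with no negative cycles, some shortest path is always simple - any cycle on a walk has non-negative total weight and can be cut out.

    import java.util.*;

    public class BruteForceShortestPath {

        static double best;                 // weight of the cheapest s-t path seen so far
        static List<Integer> bestPath;      // the corresponding path

        // adj.get(u) holds {neighbor, weight} pairs; weights may be negative.
        static double shortest(List<List<int[]>> adj, int s, int t) {
            best = Double.POSITIVE_INFINITY;
            bestPath = null;
            boolean[] onPath = new boolean[adj.size()];
            Deque<Integer> path = new ArrayDeque<>();
            path.addLast(s);
            onPath[s] = true;
            dfs(adj, s, t, 0, onPath, path);
            return best;
        }

        static void dfs(List<List<int[]>> adj, int u, int t, double w,
                        boolean[] onPath, Deque<Integer> path) {
            if (u == t) {                   // a complete s-t path: keep it if it is the cheapest
                if (w < best) {
                    best = w;
                    bestPath = new ArrayList<>(path);
                }
                return;
            }
            for (int[] e : adj.get(u)) {    // e[0] = neighbor, e[1] = weight
                int v = e[0];
                if (onPath[v]) continue;    // keep the path simple
                onPath[v] = true;
                path.addLast(v);
                dfs(adj, v, t, w + e[1], onPath, path);
                path.removeLast();
                onPath[v] = false;
            }
        }
    }

This is exponential in the worst case, of course, but that is the point of a brute-force baseline.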

find a cyclic path with a certain node with a certain weight

I am trying to build a navigation app. I'm trying to think of an algorithm that finds a cyclic path which includes a certain node and sums up to a certain weight.
The input to the algorithm would be a node and a weight.
Example: algo(a, 30) - the algorithm will return a path that starts at node a, finishes at node a, and whose total weight is 30.
Extra info: all edge weights are positive (w > 0), and the graph is directed (as streets are).
thanks ahead
Gal
This problem is stronger (more difficult) than the Hamiltonian Cycle Problem: if we already had a solution algo(a, b) for this problem, then for any Hamiltonian Cycle instance P we could build a new graph with weight 1 for the edges in P and 0 for the edges not in P, and then use algo(1, n), where n is the number of nodes in the graph, to find a Hamiltonian Cycle. So we have an NP-hard problem here.
For applications with small n, a brute-force search with some pruning should work fast enough; a sketch follows below.
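As a rough illustration of that, here is a small Java sketch (the names and the adjacency-list shape are illustrative, and it assumes positive integer weights so the recursion depth is bounded): a DFS from the start node subtracts each edge weight from the remaining target and prunes any edge that would overshoot it, which is valid precisely because all weights are positive. Intermediate nodes may repeat, since the question does not ask for a simple cycle, but paths that pass through the start midway are not extended in this sketch.

    import java.util.*;

    public class ExactWeightCycle {

        // adj.get(u) holds {neighbor, weight} pairs of the directed graph.
        // Returns one cycle start -> ... -> start of total weight 'target', or null.
        static List<Integer> findCycle(List<List<int[]>> adj, int start, int target) {
            Deque<Integer> path = new ArrayDeque<>();
            path.addLast(start);
            return dfs(adj, start, start, target, path) ? new ArrayList<>(path) : null;
        }

        static boolean dfs(List<List<int[]>> adj, int start, int u, int remaining,
                           Deque<Integer> path) {
            for (int[] e : adj.get(u)) {           // e[0] = neighbor, e[1] = weight
                int v = e[0], w = e[1];
                if (w > remaining) continue;       // pruning: positive weights only grow the total
                if (v == start) {
                    if (w == remaining) {          // closed the cycle with the exact weight
                        path.addLast(v);
                        return true;
                    }
                    continue;                      // back at start with the wrong total
                }
                path.addLast(v);
                if (dfs(adj, start, v, remaining - w, path)) return true;
                path.removeLast();
            }
            return false;
        }
    }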
The general problem is NP-hard: the longest path problem reduces to it, so there is no known polynomial solution (and the general assumption is that no such solution exists).
The longest path problem is: given a graph G with a weight function w and a pair of vertices u, v, find the longest simple path from u to v.
Proof:
Assume there is a polynomial algorithm for your problem; then one could build a polynomial algorithm for the longest path problem using binary search: exponentially increase the wanted weight until there is no solution, and then binary-search within that range. Each step is polynomial, and there are O(log |PATH|) steps. Since log |PATH| is polynomial in the size of the input (assuming simple paths), the whole algorithm is polynomial.
It is also closely related to the Hamiltonian Path Problem and the Traveling Salesman Problem.

Shortest path in an adjacency list in non-weighted graph

First, I would like to make sure I got the structure correct.
As far as I know, an adjacency list representing a graph looks like this:
AdjList is an ArrayList, where each element is an object. Each object contains an ArrayList representing the vertices it is connected to. So, for example, in the image above, Vertex 1 (the first index in the AdjList) is connected to the vertices at indices 2, 4, and 5 of the AdjList. Is this representation of an adjacency list correct? (PS: I know indices start at 0; I put 1 here for simplicity.)
If it is correct, which algorithm should I use to find the shortest path between two vertices?
There is no algorithm to give you just the shortest path between two vertices. You can use either:
Dijkstra's algorithm to find the shortest path between one vertex and all the others (and then choose the one you need).
Roy-Floyd algorithm to find the shortest path between all possible pairs of vertices.
The links also include pseudocode.
Here's an example of Dijkstra's shortest path algorithm in Java, along with explanations.
You can use Dijkstra's or Floyd-Warshall. For an unweighted graph, assume the weight of each edge to be 1 and apply the algorithm.
Previous answers mention the Dijkstra and Floyd algorithms, and those are valid solutions, but when the graph is unweighted the best choice is BFS: simpler and optimal.
BFS runs in O(V + E), while Dijkstra is O((V + E) log V) with a binary heap and Floyd-Warshall is O(V^3).
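For completeness, here is a short Java sketch of BFS shortest path on exactly the adjacency-list structure described in the question (an ArrayList of lists of neighbor indices); the names are illustrative. It returns the vertex sequence from source to target, or null if the target is unreachable.

    import java.util.*;

    public class BfsShortestPath {

        static List<Integer> shortestPath(List<List<Integer>> adjList, int source, int target) {
            int n = adjList.size();
            int[] parent = new int[n];
            Arrays.fill(parent, -1);
            boolean[] visited = new boolean[n];
            Deque<Integer> queue = new ArrayDeque<>();
            visited[source] = true;
            queue.add(source);

            while (!queue.isEmpty()) {
                int u = queue.poll();
                if (u == target) break;            // first visit = shortest distance in an unweighted graph
                for (int v : adjList.get(u)) {
                    if (!visited[v]) {
                        visited[v] = true;
                        parent[v] = u;
                        queue.add(v);
                    }
                }
            }
            if (!visited[target]) return null;     // unreachable

            List<Integer> path = new ArrayList<>(); // rebuild the path by walking the parent links
            for (int v = target; v != -1; v = parent[v]) path.add(v);
            Collections.reverse(path);
            return path;
        }
    }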

Question about k-Connected Graphs

Given an undirected graph G, is there any standard algorithm to find the value of k such that removing any (k-1) vertices leaves the graph connected, but there is some set of k vertices whose removal disconnects it?
Thank you!
Hop
I don't know of any standard algorithm, but for a graph to have this property, every pair of vertices must have >= k independent paths between them (it's a simple proof by contradiction to see that this is the case).
So a potential algorithm would be to check that for all pairs of vertices in your graph there are at least k independent paths. To find this you can use a maximum-flow algorithm. Unfortunately, doing this naively will probably take a long time: Ford-Fulkerson network flow takes O(EV) time (on the graph you would use for this), and there are O(V^2) pairs of nodes to check, so the worst-case time is roughly O(EV^3), i.e. about O(V^5) on dense graphs.
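Here is a rough Java sketch of that approach for small graphs (all names are illustrative; a simple undirected graph is assumed, given as an adjacency matrix). For each pair of vertices it counts internally vertex-disjoint paths with an Edmonds-Karp max flow on a "split" graph, where every intermediate vertex becomes an in-node and an out-node joined by a capacity-1 edge. One standard simplification used below, which the answer above does not spell out, is that it is enough to take the minimum over non-adjacent pairs (a complete graph has connectivity V - 1).

    import java.util.*;

    public class VertexConnectivitySketch {

        // Vertex connectivity: minimum over non-adjacent pairs (s, t) of the number
        // of internally vertex-disjoint s-t paths; n - 1 for a complete graph.
        static int vertexConnectivity(boolean[][] adj) {
            int n = adj.length, k = n - 1;
            for (int s = 0; s < n; s++)
                for (int t = s + 1; t < n; t++)
                    if (!adj[s][t]) k = Math.min(k, disjointPaths(adj, s, t));
            return k;
        }

        // Split every vertex w into w_in (index w) and w_out (index w + n) joined by a
        // capacity-1 edge (capacity n for s and t), turn each original edge a-b into
        // a_out -> b_in and b_out -> a_in, and run max flow from s_out to t_in.
        static int disjointPaths(boolean[][] adj, int s, int t) {
            int n = adj.length;
            int[][] cap = new int[2 * n][2 * n];
            for (int w = 0; w < n; w++)
                cap[w][w + n] = (w == s || w == t) ? n : 1;
            for (int a = 0; a < n; a++)
                for (int b = 0; b < n; b++)
                    if (adj[a][b]) cap[a + n][b] = 1;
            return maxFlow(cap, s + n, t);
        }

        // Plain Edmonds-Karp (BFS augmenting paths) on a capacity matrix.
        static int maxFlow(int[][] cap, int s, int t) {
            int n = cap.length, flow = 0;
            int[][] residual = new int[n][n];
            for (int i = 0; i < n; i++) residual[i] = cap[i].clone();
            while (true) {
                int[] parent = new int[n];
                Arrays.fill(parent, -1);
                parent[s] = s;
                Deque<Integer> queue = new ArrayDeque<>(List.of(s));
                while (!queue.isEmpty() && parent[t] == -1) {
                    int u = queue.poll();
                    for (int v = 0; v < n; v++)
                        if (parent[v] == -1 && residual[u][v] > 0) {
                            parent[v] = u;
                            queue.add(v);
                        }
                }
                if (parent[t] == -1) return flow;             // no augmenting path left
                int bottleneck = Integer.MAX_VALUE;
                for (int v = t; v != s; v = parent[v])
                    bottleneck = Math.min(bottleneck, residual[parent[v]][v]);
                for (int v = t; v != s; v = parent[v]) {      // push flow along the path
                    residual[parent[v]][v] -= bottleneck;
                    residual[v][parent[v]] += bottleneck;
                }
                flow += bottleneck;
            }
        }
    }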

Why are you guaranteed to find your result if it is in the graph with BFS but not with DFS?

I've read somewhere that DFS is not guaranteed to find a solution while BFS is... why? I don't really get how this is true; could someone demonstrate a case for me that proves it?
DFS, since it's a depth-first search, could get stuck in an infinite branch and never reach the vertex you're looking for. BFS goes through all vertices at the same distance from the root in each iteration, no matter which branch they're on, so it will find the desired vertex eventually.
Example:

    root -> v1 -> v2 -> v3 -> ... (goes on forever)
      \-> u

In this example, if DFS begins at the root and then goes on to v1, it will never reach u, since the branch it entered is infinite. BFS will go from root to either v1 or u, and then to the other.
Both DFS and BFS (on graphs with a finite number of vertices) terminate and yield a path (or rather a tree, but the OP only seems to be interested in one path of that tree). It does not matter whether there are cycles in the graph, because both procedures keep a record of which vertices have already been visited and thus avoid visiting the same vertex more than once. Any sane implementation of DFS/BFS does this - otherwise you'd be constrained to acyclic graphs only (see the pseudocode given in CLRS).
As @yurib mentioned, if the graph has an infinite number of nodes, DFS can take forever. Since there are infinitely many nodes, we cannot practically keep track of which vertices have already been visited (that would take potentially infinite memory), and even if we could, there may be an infinite path of unique vertices which does not contain the vertex we are looking for.
However, that is not the only reason DFS does not always find the shortest path. Even in finite graphs, DFS may fail to find the shortest path.
The main reason is that BFS always explores all nodes at distance x from the root before moving on to those at distance x+1. Thus if a node is found at distance k, we can be sure the minimum distance from the root to that node is k and not k-1, k-2,...,0 (otherwise we would have encountered it earlier).
DFS, on the other hand, explores nodes along one path until there are no more new nodes down that path before looking at a different path. DFS explores the successors of a node one by one, in an essentially arbitrary order, which means it may find a longer path to the target node simply because it happened to explore that path first.
In the image above, a BFS would explore B and E first, and then reach D via E - giving us the path to D as root->E->D. A DFS might start search from B first, thus finding the path root->B->C->D, which is clearly not the shortest.
Notice the crucial decision was to go for exploring B before E. A DFS might well have chosen E and arrived at the correct answer. But there is in general no way to know which path to go down first (if we knew that we would know the shortest path anyway). For a DFS the path which it finds simply depends on the order in which it explores the successor nodes, which may or may not yield a shortest path.
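A tiny runnable Java illustration of this, using the graph described above (root -> B, root -> E, B -> C, C -> D, E -> D): BFS reaches D via root -> E -> D, while a DFS that happens to explore B before E returns the longer path root -> B -> C -> D.

    import java.util.*;

    public class BfsVsDfsDemo {

        // The example graph from the answer above, as an adjacency map.
        static Map<String, List<String>> graph = Map.of(
                "root", List.of("B", "E"),
                "B", List.of("C"),
                "C", List.of("D"),
                "E", List.of("D"));

        // BFS: the first time the target is dequeued, the parent chain is a shortest path.
        static List<String> bfs(String start, String target) {
            Map<String, String> parent = new HashMap<>();
            Deque<String> queue = new ArrayDeque<>(List.of(start));
            parent.put(start, null);
            while (!queue.isEmpty()) {
                String u = queue.poll();
                if (u.equals(target)) return buildPath(parent, target);
                for (String v : graph.getOrDefault(u, List.of()))
                    if (!parent.containsKey(v)) { parent.put(v, u); queue.add(v); }
            }
            return null;
        }

        // DFS: commits to successors in list order, so it fully explores B's branch first.
        static List<String> dfs(String u, String target, Set<String> visited, List<String> path) {
            visited.add(u);
            path.add(u);
            if (u.equals(target)) return path;
            for (String v : graph.getOrDefault(u, List.of()))
                if (!visited.contains(v)) {
                    List<String> found = dfs(v, target, visited, path);
                    if (found != null) return found;
                }
            path.remove(path.size() - 1);          // dead end: backtrack
            return null;
        }

        static List<String> buildPath(Map<String, String> parent, String target) {
            List<String> path = new ArrayList<>();
            for (String v = target; v != null; v = parent.get(v)) path.add(v);
            Collections.reverse(path);
            return path;
        }

        public static void main(String[] args) {
            System.out.println("BFS: " + bfs("root", "D"));                                    // [root, E, D]
            System.out.println("DFS: " + dfs("root", "D", new HashSet<>(), new ArrayList<>())); // [root, B, C, D]
        }
    }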
@yurib is correct, but there is a further complication.
If the desired vertex is NOT in the graph, then neither BFS nor DFS will terminate if there is a cycle ... unless you take steps to detect cycles. And if you are taking steps to detect cycles, both BFS and DFS will terminate.
