Application of BFS or DFS - java

I need help solving this problem. I tried using a 2D array and then finding the least number of swaps, but I'm not sure exactly how to go about it. Should I use BFS or DFS?
You are given two four-digit numbers. The first number is the initial number, and the second one is the target number. Write a Java program to transform the initial number into the target number using the fewest possible operations. The available operations are as follows:
Add 1 to one of the four digits. Adding 1 to a 9 results in 0.
Subtract 1 from one of the four digits. Subtracting 1 from 0 results in 9.
Swap two adjacent digits.
Example 1:
initial number: 1111
final number: 9999
minimum number of operations: 8
Example 2:
initial number: 1234
final number: 2144
minimum number of operations: 2

BFS.
When DFS finds its first solution, it is usually not one reached in the smallest possible number of moves. DFS can also explore long, pointless paths even when the solution is close (and it can get stuck in an infinite loop if you don't remember visited nodes). These problems could be addressed with iterative deepening DFS, which might be desirable if there are memory constraints, but BFS is simpler for such a small search space.

You should use BFS, because it will give you the shortest possible sequence of operations to transform the first number into the target one. DFS merely explores paths, with no guarantee of finding the shortest one. In some cases DFS might happen to find a solution faster than BFS, but there is no algorithmic guarantee of that.
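For reference, here is a minimal BFS sketch over the 10,000 possible states (the class and method names are mine, and I'm assuming leading zeros are allowed):

import java.util.*;

// Minimal sketch: BFS over the states "0000".."9999".
public class DigitBfs {

    static int minOperations(String start, String target) {
        Map<String, Integer> dist = new HashMap<>();   // also serves as the visited set
        Deque<String> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String cur = queue.poll();
            if (cur.equals(target)) return dist.get(cur);
            for (String next : neighbours(cur)) {
                if (!dist.containsKey(next)) {         // remember visited states
                    dist.put(next, dist.get(cur) + 1);
                    queue.add(next);
                }
            }
        }
        return -1;  // unreachable (cannot happen for this state space)
    }

    // All states reachable from s with one operation.
    static List<String> neighbours(String s) {
        List<String> out = new ArrayList<>();
        char[] c = s.toCharArray();
        for (int i = 0; i < 4; i++) {
            char orig = c[i];
            c[i] = (char) ('0' + (orig - '0' + 1) % 10);  // add 1 (9 wraps to 0)
            out.add(new String(c));
            c[i] = (char) ('0' + (orig - '0' + 9) % 10);  // subtract 1 (0 wraps to 9)
            out.add(new String(c));
            c[i] = orig;
        }
        for (int i = 0; i < 3; i++) {                     // swap adjacent digits
            char t = c[i]; c[i] = c[i + 1]; c[i + 1] = t;
            out.add(new String(c));
            t = c[i]; c[i] = c[i + 1]; c[i + 1] = t;      // swap back
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(minOperations("1111", "9999"));  // 8
        System.out.println(minOperations("1234", "2144"));  // 2
    }
}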


Find the lowest sum path in a 2D array

I'm just thinking about one algorithm; below is the problem statement.
Given a matrix in which each cell has a value, you start from (0,0) and have to reach (n,m). From (i,j) you can go either to (i+1,j) or to (i,j+1). When you step on a cell, its value gets added to your current score. What is the minimum initial score you must carry so that you can reach (n,m) through any possible path and still have a positive score at the end?
Example:
Matrix:
2 3 4
-5 -6 7
8 3 1
Answer: 6. For the path 2, -5, -6, 3, 1 we need an initial score of 6 so that when we land on the final 1 we still have a positive score of 1.
I can do this using brute force or dynamic programming, but I'm still thinking about an approach that could be better than that. Please share your thoughts; just ideas are fine, I don't need an implementation, as I can do that myself.
There are many search algorithms; I encourage you to read these Wikipedia pages:
https://en.wikipedia.org/wiki/Pathfinding
https://en.wikipedia.org/wiki/Tree_traversal
One possible solution is to transform the array into a graph and apply shortest-path algorithms to it; another is to use an AI search algorithm such as A*.
Link to Wikipedia for A* (pronounced "A star"):
https://en.wikipedia.org/wiki/A*_search_algorithm
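Since the question already mentions dynamic programming, here is a hedged sketch of one reading of the problem (the score only has to be positive at the very end, and that has to hold even for the worst possible path, so the answer follows from the minimum path sum). The class and method names are mine:

// Sketch: minimum initial score = max(0, 1 - minimum path sum).
public class MinInitialScore {

    static int minInitialScore(int[][] a) {
        int n = a.length, m = a[0].length;
        int[][] best = new int[n][m];          // best[i][j] = minimum path sum ending at (i,j)
        best[0][0] = a[0][0];
        for (int j = 1; j < m; j++) best[0][j] = best[0][j - 1] + a[0][j];
        for (int i = 1; i < n; i++) best[i][0] = best[i - 1][0] + a[i][0];
        for (int i = 1; i < n; i++)
            for (int j = 1; j < m; j++)
                best[i][j] = Math.min(best[i - 1][j], best[i][j - 1]) + a[i][j];
        int minSum = best[n - 1][m - 1];
        return Math.max(0, 1 - minSum);        // smallest start that keeps the final score >= 1
    }

    public static void main(String[] args) {
        int[][] matrix = {{2, 3, 4}, {-5, -6, 7}, {8, 3, 1}};
        System.out.println(minInitialScore(matrix));   // 6, as in the example above
    }
}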

Simplest algorithm to find 4-cycles in an undirected graph

I have an input text file containing a line for each edge of a simple undirected graph. The file contains reciprocal edges, i.e. if there's a line u,v, then there's also the line v,u.
I need an algorithm that just counts the number of 4-cycles in this graph. I don't need it to be optimal, because I only have to use it as a term of comparison. If you can suggest a Java implementation, I will appreciate it for the rest of my life.
Thank you in advance.
Construct the adjacency matrix M, where M[i,j] is 1 if there's an edge between i and j. M² is then a matrix which counts the numbers of paths of length 2 between each pair of vertices.
The number of 4-cycles is sum_{i<j} (M²[i,j] * (M²[i,j]-1)/2) / 2. This is because if there are n paths of length 2 between a pair of points, the graph has n choose 2 (that is, n*(n-1)/2) 4-cycles through that pair. We sum only the top half of the matrix to avoid double counting and degenerate paths like a-b-a-b-a. We still count each 4-cycle twice (once per pair of opposite points on the cycle), so we divide the overall total by another factor of 2.
If you use a matrix library, this can be implemented in very few lines of code.
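If you would rather not pull in a matrix library, a plain-Java sketch of the same counting formula might look like this (class and method names are mine; vertices are assumed to be numbered 0..n-1):

// Sketch: count 4-cycles from the adjacency matrix via paths of length 2.
public class FourCycles {

    static long countFourCycles(int[][] adj) {
        int n = adj.length;
        long total = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {          // top half of the matrix only
                long paths2 = 0;                       // M²[i,j]: # of length-2 paths i-k-j
                for (int k = 0; k < n; k++) paths2 += adj[i][k] * adj[k][j];
                total += paths2 * (paths2 - 1) / 2;    // choose 2 common neighbours
            }
        }
        return total / 2;   // each 4-cycle was counted once per pair of opposite vertices
    }

    public static void main(String[] args) {
        int[][] adj = {          // the single 4-cycle 0-1-2-3-0
            {0, 1, 0, 1},
            {1, 0, 1, 0},
            {0, 1, 0, 1},
            {1, 0, 1, 0}
        };
        System.out.println(countFourCycles(adj));   // 1
    }
}

The inner loop just computes M²[i,j] directly, so this is O(n³) overall, the same as a straightforward matrix multiplication.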
Detecting a cycle is one thing, but counting all of the 4-cycles is another. I think what you want is a variant of breadth-first search (BFS) rather than the DFS that has been suggested. I'll not go deeply into the implementation details, but note the important points.
1) A path is a concatenation of edges sharing the same vertex.
2) A 4-cycle is a 4-edge path where the start and end vertices are the same.
So I'd approach it this way.
Read in graph G and maintain it using Java objects Vertex and Edge. Every Vertex object will have an ArrayList of all of the Edges that are connected to that Vertex.
The object Path will contain all of the vertices in the path, in order.
PathList will contain all of the paths.
Initialize PathList to all of the 1-edge paths, which are exactly the edges of G. (If the graph had self-loops, this list would also contain the 1-cycles, i.e. vertices connected to themselves.)
Create a function that will (pseudocode, infer the meaning from the method names):
PathList iterate(PathList currentPathList)
{
    PathList newPathList = new PathList();
    for (Path path : currentPathList.getPaths())
    {
        for (Edge edge : path.lastVertex().getEdges())
        {
            newPathList.addPath(Path.newPathFromPathAndEdge(path, edge));
        }
    }
    return newPathList;
}
Replace currentPathList with iterate(currentPathList) once and you will have all of the 2-edge paths (including the 2-cycles); call it twice and you will have all of the 3-edge paths; call it three times and you will have all of the 4-edge paths.
Search through all of the paths and find the 4-cycles by checking
path.firstVertex().isEqualTo(path.lastVertex())
Depth-first search (DFS) is what you need.
Construct an adjacency matrix, as prescribed by Anonymous on Jan 18th, and then find all the cycles of size 4.
It is an enumeration problem. If we know that the graph is a complete graph, then there is a known generating function for the number of cycles of any length. But for most other graphs, you have to find all the cycles to get the exact count.
Depth-first search with backtracking should be the ideal strategy. Run it with each node as the starting node, one by one, and keep track of visited nodes. If you run out of nodes without finding a cycle of size 4, just backtrack and try a different route.
Backtracking is not ideal for larger graphs; for example, even a complete graph of order 11 is a little too much for backtracking algorithms. For larger graphs you can look into randomized algorithms.
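For completeness, here is a hedged sketch of that backtracking idea with my own class and method names. It counts rather than just detects: every 4-cycle is discovered 8 times (4 possible starting vertices times 2 directions), so the raw count is divided by 8 at the end. As noted above, this is only reasonable for small graphs.

import java.util.*;

// Sketch: DFS with backtracking, counting closed walks over 4 distinct vertices.
public class FourCycleDfs {

    static long count(List<Set<Integer>> adj) {
        long raw = 0;
        boolean[] visited = new boolean[adj.size()];
        for (int start = 0; start < adj.size(); start++) {
            raw += dfs(adj, visited, start, start, 0);
        }
        return raw / 8;   // each 4-cycle is found 8 times
    }

    // Counts paths of exactly 4 edges from 'start' back to 'start'
    // that do not revisit intermediate vertices.
    static long dfs(List<Set<Integer>> adj, boolean[] visited, int start, int v, int depth) {
        if (depth == 3) {
            return adj.get(v).contains(start) ? 1 : 0;   // can the path close into a cycle?
        }
        long found = 0;
        visited[v] = true;
        for (int w : adj.get(v)) {
            if (!visited[w]) {
                found += dfs(adj, visited, start, w, depth + 1);
            }
        }
        visited[v] = false;                              // backtrack
        return found;
    }

    public static void main(String[] args) {
        List<Set<Integer>> adj = new ArrayList<>();      // the square 0-1-2-3-0
        for (int i = 0; i < 4; i++) adj.add(new HashSet<>());
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}, {3, 0}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(count(adj));   // 1
    }
}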

divide and conquer assignment

I have to write a Java program that simulates a robot matching lids with their corresponding jars. The robot has two arms, one for the lids and one for the jars. I can't compare lids with lids or jars with jars. The user will enter three lines:
5 (n)
9 7 2 5 6 (sizes of the lids)
2 6 5 7 9 (sizes of the jars)
The output should be:
3 5 4 2 1
The 3rd number in line 2 is equal to the 1st number in line 3, and so on.
We are supposed to use a divide and conquer algorithm, and I really have no idea where to start. All I have to go by is that it's similar to quicksort. Any help would be greatly appreciated.
Divide and conquer algorithms might be confusing at first. Think about it as if you have some relatively large problem that you can't solve, but if that problem were much, much smaller you could find the answer. Applying that to this situation: suppose instead of having two big lists of lid and jar sizes, you have one lid size and some number of jar sizes. You could easily tell me which jar that lid fits on, right? Solving the problem for one lid is essentially breaking the large problem (several lids) into a smaller one (one lid). Once that makes sense, you can move on to the algorithm.
You will likely employ some recursion in order to write your algorithm. Start with the base case and solve the simplest meaningful problem (I like the 1 lid example). Once you can solve that problem, can you recursively solve the same problem for every lid? I'm not attaching any code because I don't want to spoil the learning experience for you (and this is clearly homework).
The whole point of "divide and conquer" is to divide up the work into multiple, smaller problems; then you solve the smaller problems and roll them up until they are combined into a solution. This pretty much implies a recursive solution.
With any recursive function, you always need a "basis case". This will be a simple case that is trivially easy to solve. For example, if you only have one jar and one lid, then you simply return that the jar matches the lid. (Because as part of the problem statement, you always have one matching lid for each jar.)
So one place to start is a trivial program that only works right for a length-1 list of jars/lids. Then add more machinery to make it more capable.
With quicksort, you choose a place to divide up the numbers (the "pivot"), then do a very rough sort (just take numbers that should be on the left of the pivot but are on the right and move them to the left, and vice versa). Then you call quicksort recursively on the sublist. Eventually each of the recursive calls to quicksort hits a basis case (a length-1 sublist); once they all have hit the basis case the quicksort is done. (Note: there are ways to optimize quicksort and make it faster by adding more code, but I'm talking about the simplest implementation of quicksort here.)
Maybe in this case you should start with a length-n list of just the numbers from 1 to n, and then swap the numbers around until you have a correct list?
Hmm.  With length-2 lists, there are only two possibilities: the lists line up, or not.  If they line up you are done.  If not, you swap the numbers to make them line up, and you are done.  Hmm.  This is similar to sorting in a way, but you can't just compare numbers directly like you can when you are sorting.  (In sorting you always know that 3 sorts below 5, but here it might not be so.) So, now think about a way to break down the list and keep doing it until you have a length-2 or length-1 sublist, then handle those trivial cases.
Sounds like a fun problem. I hope you enjoy working on it.
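To make the quicksort analogy a little more concrete, here is a hedged sketch of one possible shape of that recursion (the classic "nuts and bolts" style of matching, assuming all sizes are distinct). The class and method names are mine, and mapping the matched sizes back to the 1-based positions that the assignment prints is deliberately left out.

import java.util.*;

// Sketch: a lid is used as the pivot to partition the jars, its matching jar
// is then used to partition the lids, and the two halves are solved recursively.
// Lids are only ever compared against jars, never against other lids.
public class LidsAndJars {

    static void match(int[] lids, int[] jars, int lo, int hi) {
        if (lo >= hi) return;
        int j = partition(jars, lo, hi, lids[lo]);  // index of the jar matching lids[lo]
        partition(lids, lo, hi, jars[j]);           // puts the matching lid at the same index,
                                                    // because both ranges hold the same sizes
        match(lids, jars, lo, j - 1);
        match(lids, jars, j + 1, hi);
    }

    // Partition a[lo..hi] around the external value 'pivot': smaller values end up on
    // the left, larger on the right, and the element equal to 'pivot' lands at the
    // returned index.
    static int partition(int[] a, int lo, int hi, int pivot) {
        int i = lo;
        for (int k = lo; k <= hi; k++) {
            if (a[k] < pivot) { swap(a, i, k); i++; }
        }
        for (int k = i; k <= hi; k++) {
            if (a[k] == pivot) { swap(a, i, k); break; }
        }
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] lids = {9, 7, 2, 5, 6};
        int[] jars = {2, 6, 5, 7, 9};
        match(lids, jars, 0, lids.length - 1);
        System.out.println(Arrays.toString(lids));  // after matching, the lid and the jar
        System.out.println(Arrays.toString(jars));  // at the same index have the same size
    }
}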

Algorithm Complexity (Big-O) of sudoku solver

I'm looking for the "how do you find it", because I have no idea how to approach finding the algorithmic complexity of my program.
I wrote a Sudoku solver in Java, without efficiency in mind (I wanted to try to make it work recursively, which I succeeded in doing!).
Some background:
My strategy employs backtracking to determine, for a given Sudoku puzzle, whether the puzzle has exactly one solution or not. So I basically read in a given puzzle and solve it. Once I have found one solution, I'm not necessarily done; I need to continue exploring for further solutions. At the end, one of three possible outcomes happens: the puzzle is not solvable at all, the puzzle has a unique solution, or the puzzle has multiple solutions.
My program reads the puzzle coordinates from a file that has one line for each given digit, consisting of the row, the column, and the digit. By my own convention, a 7 in the upper-left square is written as 007.
Implementation:
I load the values from the file and store them in a 2D array.
I go down the array until I find a blank (unfilled value) and set it to 1, then check for conflicts (whether the value I entered is valid or not).
If it is valid, I move on to the next value.
If not, I increment the value by 1 until I find a digit that works, or, if none of them work (1 through 9), I go back one step to the last value that I adjusted and increment that one (using recursion).
I am done solving when all 81 elements have been filled without conflicts.
If any solutions are found, I print them to the terminal.
Otherwise, if I try to "go back one step" on the FIRST element that I initially modified, it means that there were no solutions.
How can I determine my program's algorithmic complexity? I thought it might be linear [O(n)], but I access the array multiple times, so I'm not sure.
Any help is appreciated.
O(n ^ m) where n is the number of possibilities for each square (i.e., 9 in classic Sudoku) and m is the number of spaces that are blank.
This can be seen by working backwards from only a single blank. If there is only one blank, then you have n possibilities that you must work through in the worst case. If there are two blanks, then you must work through n possibilities for the first blank and n possibilities for the second blank for each of the possibilities for the first blank. If there are three blanks, then you must work through n possibilities for the first blank. Each of those possibilities will yield a puzzle with two blanks that has n^2 possibilities.
This algorithm performs a depth-first search through the possible solutions. Each level of the graph represents the choices for a single square. The depth of the graph is the number of squares that need to be filled. With a branching factor of n and a depth of m, finding a solution in the graph has a worst-case performance of O(n ^ m).
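For concreteness, here is a minimal sketch (not the asker's actual code) of that kind of backtracking search, written so that it can report zero, one, or multiple solutions:

// Sketch: recursive backtracking over the 81 cells; 0 marks a blank.
public class SudokuCounter {

    // Returns the number of solutions found, pruning once 2 are known,
    // which is enough to decide whether the solution is unique.
    static int countSolutions(int[][] grid, int pos) {
        if (pos == 81) return 1;                 // all 81 cells filled without conflicts
        int r = pos / 9, c = pos % 9;
        if (grid[r][c] != 0) return countSolutions(grid, pos + 1);
        int count = 0;
        for (int d = 1; d <= 9 && count < 2; d++) {
            if (isValid(grid, r, c, d)) {
                grid[r][c] = d;                  // place the digit
                count += countSolutions(grid, pos + 1);
                grid[r][c] = 0;                  // backtrack
            }
        }
        return count;
    }

    static boolean isValid(int[][] g, int r, int c, int d) {
        for (int i = 0; i < 9; i++)
            if (g[r][i] == d || g[i][c] == d) return false;
        int br = (r / 3) * 3, bc = (c / 3) * 3;  // top-left corner of the 3x3 box
        for (int i = br; i < br + 3; i++)
            for (int j = bc; j < bc + 3; j++)
                if (g[i][j] == d) return false;
        return true;
    }

    public static void main(String[] args) {
        int[][] empty = new int[9][9];           // all blanks: many solutions exist
        int n = countSolutions(empty, 0);
        System.out.println(n >= 2 ? "multiple solutions" : n + " solution(s)");
    }
}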
In many Sudokus, there will be a few numbers that can be placed directly with a bit of thought. By placing a number in the first empty cell, you give up on a lot of opportunities to reduce the possibilities. If the first ten empty cells have lots of possibilities, you get exponential growth. I'd ask the questions:
Where in the first line can the number 1 go?
Where in the first line can the number 2 go?
...
Where in the last line can the number 9 go?
The same, but for the nine columns?
The same, but for the nine boxes?
Which number can go into the first cell?
Which number can go into the 81st cell?
That's 324 questions. If any question has exactly one answer, you pick that answer. If any question has no answer at all, you backtrack. If every question has two or more answers, you pick a question with the minimal number of answers.
You may get exponential growth, but only for problems that are really hard.

Fast counting of 2D sub-matrices within a large, dense 2D matrix?

What's a good algorithm for counting submatrices within a larger, dense matrix? If I had a single line of data, I could use a suffix tree, but I'm not sure if generalizing a suffix tree into higher dimensions is exactly straightforward or the best approach here.
Thoughts?
My naive solution of indexing the first element of the dense matrix to cut down full-matrix searching provided only a modest improvement over full-matrix scanning.
What's the best way to solve this problem?
Example:
Input:
Full matrix:
123
212
421
Search matrix:
12
21
Output:
2
This sub-matrix occurs twice in the full matrix, so the output is 2. The full matrix could be 1000x1000, however, with search matrices as large as 100x100 (variable size), and I need to process a number of search matrices in a row. Ergo, brute-forcing this problem is far too inefficient to meet my sub-second search time for several matrices.
For an algorithms course, I once worked an exercise in which the Rabin-Karp string-search algorithm had to be extended slightly to search for a matching two-dimensional submatrix in the way you describe.
I think if you take the time to understand the algorithm as it is described on Wikipedia, the natural way of extending it to two dimensions will be clear to you. In essence, you just make several passes over the matrix, creeping along one column at a time. There are some little tricks to keep the time complexity as low as possible, but you probably won't even need them.
Searching an N×N matrix for an M×M matrix, this approach should give you an O(N²⋅M) algorithm. With tricks, I believe it can be refined to O(N²).
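Here is a hedged sketch of what that row-then-column hashing might look like. The class, constants and method names are mine; hashes are taken modulo 2^64 through long overflow, and candidate matches are verified directly to rule out hash collisions.

// Sketch: 2D Rabin-Karp. Hash every width-m2 window of each row, then roll a
// second hash down each column of those row hashes over m1 consecutive rows.
public class SubmatrixCount {

    static final long B1 = 1_000_003L;   // base for hashing along a row
    static final long B2 = 1_000_033L;   // base for hashing down a column

    static int count(int[][] text, int[][] pat) {
        int n1 = text.length, n2 = text[0].length;
        int m1 = pat.length,  m2 = pat[0].length;
        if (m1 > n1 || m2 > n2) return 0;

        long[][] rowHash = rowWindowHashes(text, m2);   // n1 x (n2 - m2 + 1)
        long[][] patRow  = rowWindowHashes(pat, m2);    // m1 x 1

        long patHash = 0;                               // combined hash of the whole pattern
        for (int i = 0; i < m1; i++) patHash = patHash * B2 + patRow[i][0];

        long pow = 1;                                   // B2^m1, used by the rolling update
        for (int i = 0; i < m1; i++) pow *= B2;

        int count = 0;
        for (int c = 0; c <= n2 - m2; c++) {
            long h = 0;
            for (int r = 0; r < n1; r++) {
                h = h * B2 + rowHash[r][c];
                if (r >= m1) h -= pow * rowHash[r - m1][c];   // drop the row leaving the window
                if (r >= m1 - 1 && h == patHash && matchesAt(text, pat, r - m1 + 1, c)) {
                    count++;
                }
            }
        }
        return count;
    }

    // Polynomial hash of every horizontal window of width w in each row of a.
    static long[][] rowWindowHashes(int[][] a, int w) {
        int rows = a.length, cols = a[0].length;
        long[][] out = new long[rows][cols - w + 1];
        long pow = 1;
        for (int i = 0; i < w; i++) pow *= B1;
        for (int r = 0; r < rows; r++) {
            long h = 0;
            for (int c = 0; c < cols; c++) {
                h = h * B1 + a[r][c];
                if (c >= w) h -= pow * a[r][c - w];
                if (c >= w - 1) out[r][c - w + 1] = h;
            }
        }
        return out;
    }

    // Direct element-by-element comparison to confirm a hash match.
    static boolean matchesAt(int[][] text, int[][] pat, int r0, int c0) {
        for (int i = 0; i < pat.length; i++)
            for (int j = 0; j < pat[0].length; j++)
                if (text[r0 + i][c0 + j] != pat[i][j]) return false;
        return true;
    }

    public static void main(String[] args) {
        int[][] text = {{1, 2, 3}, {2, 1, 2}, {4, 2, 1}};
        int[][] pat  = {{1, 2}, {2, 1}};
        System.out.println(count(text, pat));   // 2, matching the example above
    }
}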
The Algorithms and Theory of Computation Handbook suggests what is an N² * log(alphabet size) solution. Given a sub-matrix to search for, first of all de-duplicate its rows. Now note that if you search the large matrix row by row, at most one of the de-duplicated rows can appear at any position. Use Aho-Corasick to search for these rows in time N² * log(alphabet size), and write down at each cell of the large matrix either null or an identifier for the matching row of the sub-matrix. Now use Aho-Corasick again to search down the columns of this matrix of row matches, and signal a match wherever all of the sub-matrix rows appear below each other.
This sounds similar to template matching. If you're motivated, you could probably transform your original array with the FFT and drop a log from the brute-force search: O(N log M) instead of O(NM).
I don't have a ready answer but here's how I would start:
-- You want very fast lookup, how much (time) can you spend on building index structures? When brute-force isn't fast enough you need indexes.
-- What do you know about your data that you haven't told us? Are all the values in all your matrices single-digit integers?
-- If they are single-digit integers (or anything else you can represent as a single character or index value), think about linearising your 2D structures. One way to do this would be to read the matrix along anti-diagonals running from top-right to bottom-left, taking the diagonals in order from top-left to bottom-right. It's difficult to explain in words, but read the matrix:
1234
5678
90ab
cdef
as 125369470c8adbef
(get it?)
Now you can index your super-matrix to whatever depth your speed and space requirements demand; in my example key 1253... points to element (1,1), key abef points to element (3,3). Not sure if this works for you, and you'll have to play around with the parameters to your solution. Choose your favourite method for storing the key-value pairs: a hash, a list, or even build some indexes into the index if things get wild.
Regards
Mark
