Algorithm for finding all permutations of a number [closed] - java

I have to write a program that gives all permutations of a number. For example, if the user enters the number 4, I have to return all possible arrangements of the numbers 1, 2, 3, 4. I am having trouble finding a good method for getting all of the permutations. My program already puts the numbers 1 to n in an array; however, I cannot think of a good way of producing the permutations from there. I know I need to use recursion, and that swapping the numbers around is probably the way to go, but what is the best method for doing that?

The easiest way to understand the algorithm is this. To get all the permutations of 1 to 4, you want
1, followed by the permutations of 2,3,4
2, followed by the permutations of 1,3,4
3, followed by the permutations of 1,2,4
4, followed by the permutations of 1,2,3
This is a fairly standard bit of recursion. For each number, recursively find all the permutations of the other numbers, and add this number on the beginning.
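A minimal sketch of that idea in Java (the class and method names are illustrative, not part of the question):

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    static List<List<Integer>> permute(List<Integer> numbers) {
        List<List<Integer>> result = new ArrayList<>();
        if (numbers.isEmpty()) {            // base case: one empty permutation
            result.add(new ArrayList<>());
            return result;
        }
        for (int i = 0; i < numbers.size(); i++) {
            Integer chosen = numbers.get(i);
            List<Integer> rest = new ArrayList<>(numbers);
            rest.remove(i);
            // chosen, followed by every permutation of the remaining numbers
            for (List<Integer> tail : permute(rest)) {
                List<Integer> perm = new ArrayList<>();
                perm.add(chosen);
                perm.addAll(tail);
                result.add(perm);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // For n = 4, permute the numbers 1..4 as described above.
        List<Integer> numbers = new ArrayList<>(List.of(1, 2, 3, 4));
        permute(numbers).forEach(System.out::println);
    }
}
```

For n = 4 this prints the 24 permutations, grouped exactly as in the list above: first those starting with 1, then those starting with 2, and so on.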

We can simply use recursion to get the possibilities: recursively select one element, then apply the same recursion to the remaining subset of the array. For example, for the numbers 1 to 3: pick 1 and permute {2, 3}, pick 2 and permute {1, 3}, pick 3 and permute {1, 2}.

Recursion is your friend. If you have one element, your solution is obviously [[1]]. For two elements, you can take all your "one"-solutions (well, there is only one) and stick the 2 either before or after the 1, so you get [[2,1], [1,2]]. For three elements, take all your "two" solutions and put the three in first, second or third position. From [2,1] you get [3,2,1], [2,3,1] and [2,1,3]. From [1,2] you get [3,1,2], [1,3,2] and [1,2,3]. All together, your solution is [[3,2,1], [2,3,1], [2,1,3], [3,1,2], [1,3,2], [1,2,3]].
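A sketch of this insertion idea in Java (names are illustrative): build the permutations of 1..n by sticking n into every position of each permutation of 1..n-1.

```java
import java.util.ArrayList;
import java.util.List;

public class InsertPermutations {
    static List<List<Integer>> permutations(int n) {
        List<List<Integer>> result = new ArrayList<>();
        if (n == 1) {                         // base case: [[1]]
            result.add(new ArrayList<>(List.of(1)));
            return result;
        }
        for (List<Integer> smaller : permutations(n - 1)) {
            // put n into the first, second, ..., last position of each
            // permutation of 1..n-1
            for (int pos = 0; pos <= smaller.size(); pos++) {
                List<Integer> copy = new ArrayList<>(smaller);
                copy.add(pos, n);
                result.add(copy);
            }
        }
        return result;
    }
}
```

permutations(3) reproduces exactly the six lists given above, in the same order.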

Notice how chiastic-security's algorithm for generating all permutations of n numbers relies on being able to generate all permutations of n-1 numbers? To use his algorithm, all we need is that ability, and one more thing: a way to generate all permutations of a set containing just one number.
Generating all permutations of a single number is trivial: there's only one, consisting of just that number by itself.
What about the other requirement -- the ability to generate all permutations of n-1 numbers? That seems harder, but it turns out not to be. Suppose n=2 for the problem we want to solve: then n-1 = 1, and we're OK, because we know how to generate all permutations of 1 number. So, applying chiastic-security's algorithm here gives us a way to generate all permutations of 2 numbers. If instead n=3 for the problem we want to solve, then n-1 = 2, and we've just established that we can generate all permutations of 2 numbers, so we're still OK -- we know how to generate all permutations of 3 numbers. If instead n=4, then n-1 = 3, and we've just established that we can generate all permutations of 3 numbers, so we can also generate all permutations of 4 numbers. Clearly this reasoning can continue on forever -- so you can generate all permutations of any number of numbers this way.
That reasoning is informal, but it can be made completely precise with mathematical induction. If induction can be applied, then the surprising result is that we never have to solve any "big" problems "the hard way" -- we can always break them down into one or more small problems, which can in turn be broken into still smaller problems, until one or more "base cases" are reached that are easy to solve -- and then the solutions to the bigger problems can be pieced together from these solutions to the simpler subproblems.

Related

Given a set of inputs and a result, how would I get the equation used to get the result?

I looked through here and found questions similar but not solving for the equation. Here is what I'm looking to do. I need to be able to determine a collection of equations that potentially can be used to solve a set of input parameters and a result. I will always know the inputs and the results. I need to figure out a way to solve for the solution for lack of a better term.
For instance:
Input parameters: 5,1,1,2
Result: 8
I would like to input these numbers and result and get something like:
FirstNumber(5) * (SecondNumber(1) + ThirdNumber(1)) - FourthNumber(2) = 8
FirstNumber(5) * FourthNumber(2) - SecondNumber(1) - ThirdNumber(1) = 8
Obviously it can be complex and given more numbers could have many possible solutions. My general question is around feasibility.
This is actually quite a difficult task to solve.
First, you have to be able to evaluate any equation built from +, -, *, / and parentheses. How do you do that? You build a tree-like structure out of these operators and the numbers, and then compute the result from it.
If you visualize such an expression as a tree, each operator is an internal node and each number is a leaf; evaluating the root gives the result.
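A minimal sketch of such a tree in Java (the class and field names are illustrative, not taken from the answer):

```java
// Expression tree: operators are internal nodes, numbers are leaves.
abstract class Expr {
    abstract int eval();
}

class Num extends Expr {
    final int value;
    Num(int value) { this.value = value; }
    int eval() { return value; }
}

class BinOp extends Expr {
    final char op;           // one of + - * /
    final Expr left, right;
    BinOp(char op, Expr left, Expr right) {
        this.op = op; this.left = left; this.right = right;
    }
    int eval() {
        int a = left.eval(), b = right.eval();
        switch (op) {
            case '+': return a + b;
            case '-': return a - b;
            case '*': return a * b;
            default:  return a / b;   // caller must avoid division by zero
        }
    }
}
```

With this, the first example above becomes new BinOp('-', new BinOp('*', new Num(5), new BinOp('+', new Num(1), new Num(1))), new Num(2)), and calling eval() on that root returns 8.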
When you are finished with that, you can start generating all the possibilities. Backtracking is quite useful here, as it automatically prunes the paths that do not lead anywhere, and with a proper implementation it finds all possible solutions.
To get a collection of all possible candidate equations, I would try permutations without repetition of the inputs except the result (ignoring possible identical inputs), combined with every arrangement of the operators taken G at a time, where G is the number of gaps between the input numbers, and then try every position and every length greater than one and less than N for the numbers placed inside the parentheses, where N is the number of inputs excluding the result.
Check the answers to the question linked below for how to generate permutations in Java:
Generating all permutations of a given string

Where to start on a "generate and test" approach using Java

I am required to solve the "N Queens" problem using a generate and test approach in Java, so basically if N=8 my program must generate the 8^8 possible lists and test each one to return the 92 distinct lists that result in a solution to the problem. I must also use a DFS algorithm with backtracking to enumerate the possibilities.
To provide an example, list (2,4,6,8,3,1,7,5) means that the first queen is column 1 row 2, the second is column 2 row 4, the third is column 3 row 6...and so on.
The two main things preventing me from making headway on this are:
1) I have no idea how to generate every possible list of length N (with integer values of N or less) in Java.
2) I don't really understand how, once I have all these lists, to abstract them into a data type that can be traversed with a DFS algorithm.
I'm not begging someone to do my homework for me; rather, I'd like a conceptual walkthrough of how #2 can be thought of and a (somewhat) tangible example of how, given an input N, I can generate all N^N lists.
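One way to picture both parts: a depth-first search whose recursion has one level per column and branches N ways at each level enumerates all N^N lists at its leaves, and each complete list is then tested. A rough sketch of that, with illustrative names only:

```java
import java.util.ArrayList;
import java.util.List;

public class GenerateAndTest {
    // Each level of the recursion chooses the row of one queen, so the
    // leaves of this depth-first search enumerate all N^N candidate lists.
    static void search(int[] rows, int col, int n, List<int[]> solutions) {
        if (col == n) {                         // a complete list of length n
            if (isSolution(rows)) solutions.add(rows.clone());
            return;
        }
        for (int r = 1; r <= n; r++) {          // try every row 1..n for this column
            rows[col] = r;
            search(rows, col + 1, n, solutions);
        }
    }

    // Test a complete candidate: no two queens share a row or a diagonal.
    static boolean isSolution(int[] rows) {
        for (int i = 0; i < rows.length; i++)
            for (int j = i + 1; j < rows.length; j++)
                if (rows[i] == rows[j] || Math.abs(rows[i] - rows[j]) == j - i)
                    return false;
        return true;
    }

    public static void main(String[] args) {
        List<int[]> solutions = new ArrayList<>();
        search(new int[8], 0, 8, solutions);
        System.out.println(solutions.size());   // prints 92 for N = 8
    }
}
```

For N = 8 this visits all 8^8 candidate lists and reports 92 solutions, matching the numbers in the question; the backtracking refinement is to check for conflicts in the partial list before recursing, so bad prefixes are pruned instead of fully expanded.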

Suffix array nlogn creation

I have been learning suffix array construction, and I understand that we first sort all suffixes by their first character, then by their first 2 characters, then by their first 4 characters, and so on, while the number of characters considered is smaller than 2n.
But my question is: why don't we choose the first 3 characters, then 9, and so on? Why are only 2 characters taken into account, since the strings are parts of the same string and not different random strings?
I haven't analyzed the suffix array construction algorithm thoroughly, but still would like to share my thoughts.
In my humble opinion, your question is similar to the following ones:
Why do computers use binary encoding of information instead of ternary?
Why does binary search bisect the range instead of trisecting it?
Why are there two sexes rather than three?
The reason is that the number 2 is special - it is the smallest plural number. The difference between 1 and 2 is qualitative, whereas the difference between 2 and 3 (as well as any other positive integer) is quantitative and therefore not as drastic.
As a result, binary formulation of many algorithms and data structures turns out to be the simplest one, though some of them may be generalized, with various degrees of added complexity, for an arbitrary base.
The answer is given in the post you linked, and as #Leon answered, the algorithm works because it uses a dichotomous approach to the sorting problem. If you read the answer carefully, the main purpose is to divide the word into small 2-character fragments, so that 4 characters can easily be sorted based on the arrangement of the two pairs of characters, 8 characters based on two already-ranked 4-character halves, and so on. Keeping a 3-letter fragment in the table makes no sense, since a word of 3 characters can already be seen as a 2-character fragment plus the position in the alphabet of the last character.
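To make the pairing concrete, here is a compact sketch of the doubling idea in Java. It uses a comparison sort in each round, so it is the simpler O(n log^2 n) variant rather than the radix-sorted O(n log n) one, but the point it illustrates is the same: a 2k-character prefix is ordered by the pair (rank of its first k characters, rank of its next k characters). Names are illustrative.

```java
import java.util.Arrays;
import java.util.Comparator;

public class SuffixArrayDoubling {
    static int[] build(String s) {
        int n = s.length();
        Integer[] sa = new Integer[n];
        int[] rank = new int[n];
        int[] tmp = new int[n];
        for (int i = 0; i < n; i++) { sa[i] = i; rank[i] = s.charAt(i); }
        for (int k = 1; k < n; k <<= 1) {
            final int len = k;
            // Order suffixes by their first 2k characters using two ranks of length k.
            Comparator<Integer> byPair = (a, b) -> {
                if (rank[a] != rank[b]) return Integer.compare(rank[a], rank[b]);
                int ra = a + len < n ? rank[a + len] : -1;   // shorter suffix sorts first
                int rb = b + len < n ? rank[b + len] : -1;
                return Integer.compare(ra, rb);
            };
            Arrays.sort(sa, byPair);
            // Re-rank: suffixes with equal 2k-character prefixes share a rank.
            tmp[sa[0]] = 0;
            for (int i = 1; i < n; i++)
                tmp[sa[i]] = tmp[sa[i - 1]] + (byPair.compare(sa[i - 1], sa[i]) < 0 ? 1 : 0);
            System.arraycopy(tmp, 0, rank, 0, n);
        }
        int[] result = new int[n];
        for (int i = 0; i < n; i++) result[i] = sa[i];
        return result;
    }

    public static void main(String[] args) {
        // Suffixes of "banana" in sorted order: a, ana, anana, banana, na, nana
        System.out.println(Arrays.toString(build("banana")));  // [5, 3, 1, 0, 4, 2]
    }
}
```

Each round reuses the ranks from the previous round for both halves of the prefix, which is exactly why the length is doubled rather than tripled.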
I think you are considering only the speed of 2^x versus 3^x where you obviously would prefer the latter.
But you have to consider the effort you need for each step.
Since the 3^x scheme needs about 1.58 times fewer rounds than the 2^x scheme, you would need to be able to compute a single step of the 3^x version in less than 1.58 times the cost of a single step of the 2^x version for it to perform better.
Generally the problems will get much more complex when you have to handle three elements in each step instead of two.
Also, if you could expand it to 3^x, you could also do it for a bigger n^x, and then with a big n your algorithm would suddenly not be exponential but effectively linear.

Addition of Two Numbers represented by Linked List nodes [closed]

I'm preparing for an interview. The question I got is: two numbers are represented by linked lists, where each node contains a single digit. The digits are stored in reverse order, such that the 1's digit is at the head of the list. Write a function that adds the two numbers and returns the sum as a linked list.
The suggested answer adds the digits individually and keeps a "carry" number. For example, if the first digits of the two numbers are 5 and 7, the algorithm records 2 as the first digit of the resulting sum and keeps 1 as a carry to add into the 10's digit of the result.
However, my solution is to traverse the two linked lists and translate them into two integers. Then I add the numbers and translate the sum into a new linked list. Wouldn't my solution be more straightforward and efficient?
Thanks!
While your solution may be more straightforward, I don't think it's actually more efficient.
I'm assuming the "correct" algorithm is something like this (a sketch follows the steps):
1) Pop the first element of both lists.
2) Add them together (with the carry, if there is one) and make a new node from the ones digit.
3) Pass the carry along (dividing the sum by 10 to get the actual digit to carry) and repeat 1) and 2), with each successive node being pointed to by the previous one.
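A minimal sketch of those steps in Java; the Node class and method names are illustrative, not part of the original answer:

```java
class Node {
    int digit;
    Node next;
    Node(int digit) { this.digit = digit; }
}

class DigitListAddition {
    // Both input lists store the 1's digit at the head, as in the question.
    static Node add(Node a, Node b) {
        Node dummy = new Node(0);   // placeholder head for the result list
        Node tail = dummy;
        int carry = 0;
        while (a != null || b != null || carry != 0) {
            int sum = carry;
            if (a != null) { sum += a.digit; a = a.next; }
            if (b != null) { sum += b.digit; b = b.next; }
            tail.next = new Node(sum % 10);  // ones digit of this column
            carry = sum / 10;                // carry into the next column
            tail = tail.next;
        }
        return dummy.next;
    }
}
```

For example, 617 + 295 = 912 would be the lists 7->1->6 and 5->9->2, producing 2->1->9.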
The main things I see when comparing your algorithm with that one are:
In terms of memory, you want to create two BigIntegers to store intermediate values (I'm assuming you're using BigInteger or some equivalent Object to avoid the constraints of an int or long), on top of the final linked list itself. The original algorithm doesn't use more than a couple of ints to do intermediate calculations, and so in the long run, the original algorithm actually uses less memory.
You're also suggesting that you do all of your arithmetic on the BigIntegers rather than on ints. Although it's possible that BigInteger is optimized to the point where it isn't much slower than primitive operations, I highly doubt that calling BigInteger#add is faster than the + operation on ints.
Also, some more food for thought. Suppose you didn't have something handy like BigInteger to store arbitrarily large integers. Then, for your algorithm to work properly, you would have to build some way to store arbitrarily large integers yourself, plus a way to add them, and you would end up either doing something like the original algorithm anyway or using a completely different representation in a subroutine (yikes).
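For concreteness, the question's convert-then-add approach might look like the following, reusing the Node class from the earlier sketch and letting BigInteger stand in for the arbitrary-precision storage discussed above (names are illustrative):

```java
import java.math.BigInteger;

class ConvertAndAdd {
    static Node add(Node a, Node b) {
        return toList(toNumber(a).add(toNumber(b)));
    }

    // Digits are stored in reverse order, so the head is the least significant digit.
    static BigInteger toNumber(Node head) {
        BigInteger value = BigInteger.ZERO;
        BigInteger place = BigInteger.ONE;
        for (Node n = head; n != null; n = n.next) {
            value = value.add(place.multiply(BigInteger.valueOf(n.digit)));
            place = place.multiply(BigInteger.TEN);
        }
        return value;
    }

    static Node toList(BigInteger value) {
        Node dummy = new Node(0);
        Node tail = dummy;
        do {
            BigInteger[] qr = value.divideAndRemainder(BigInteger.TEN);
            tail.next = new Node(qr[1].intValue());   // next digit of the sum
            tail = tail.next;
            value = qr[0];
        } while (value.signum() > 0);
        return dummy.next;
    }
}
```

It produces the same list as the digit-by-digit version, but it allocates the two intermediate BigIntegers that the answer above counts against it.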
(Assuming by "integer" you mean int.)
Your solution does not scale beyond numbers that can fit in an int, whereas the original solution is only limited by the amount of available memory.
As far as efficiency is concerned, there is nothing about your solution that would make it more efficient than the original.
Your solution is more straightforward to describe, certainly - and an argument might be made in certain situations that the readability of the code your solution would produce would be preferable, when working with large teams.
However, most of the time - their suggested answer is a lot more memory efficient, and probably more CPU-efficient.
You're suggesting going through the first linked-list, storing it as a number (+1 store). Going through the second, storing it as a number (+1 store). Adding the 2 numbers, saving the result (+1 store). Converting this number into a linked list, and saving it (+1 store)
Their solution involves going through the first and second linked list, while writing to the third, and storing it as a new one (+1 store)
This is a +1 store, vs. your +4 store. This might seem like not much, but if we were to try and add n pairs of numbers at the same time (on a distributed system or something), you're looking at 4n stores, rather than just n stores. Which could be a big deal.

Algorithm Complexity (Big-O) of sudoku solver

I'm looking for the "how do you find it", because I have no idea how to approach finding the algorithmic complexity of my program.
I wrote a Sudoku solver in Java, without efficiency in mind (I wanted to try to make it work recursively, which I succeeded with!).
Some background:
My strategy employs backtracking to determine, for a given Sudoku puzzle, whether the puzzle has exactly one unique solution or not. So I basically read in a given puzzle and solve it. Once I have found one solution, I'm not necessarily done; I need to continue to explore for further solutions. At the end, one of three possible outcomes happens: the puzzle is not solvable at all, the puzzle has a unique solution, or the puzzle has multiple solutions.
My program reads the puzzle coordinates from a file that has one line for each given digit, consisting of the row, column, and digit. By my own convention, a 7 in the upper-left square is written as 007.
Implementation:
I load the values in, from the file, and stored them in a 2-D array
I go down the array until I find a blank (unfilled value), set it to 1, and check for any conflicts (whether the value I entered is valid or not).
If it is valid, I move on to the next value.
If not, I increment the value by 1 until I find a digit that works, or, if none of them work (1 through 9), I go back one step to the last value that I adjusted and increment that one (using recursion).
I am done solving when all 81 elements have been filled, without conflicts.
If any solutions are found, I print them to the terminal.
Otherwise, if I try to "go back one step" on the FIRST element that I initially modified, it means that there were no solutions.
How can I find my program's algorithmic complexity? I thought it might be linear [O(n)], but I am accessing the array multiple times, so I'm not sure :(
Any help is appreciated
O(n ^ m) where n is the number of possibilities for each square (i.e., 9 in classic Sudoku) and m is the number of spaces that are blank.
This can be seen by working backwards from only a single blank. If there is only one blank, then you have n possibilities that you must work through in the worst case. If there are two blanks, then you must work through n possibilities for the first blank and n possibilities for the second blank for each of the possibilities for the first blank. If there are three blanks, then you must work through n possibilities for the first blank. Each of those possibilities will yield a puzzle with two blanks that has n^2 possibilities.
This algorithm performs a depth-first search through the possible solutions. Each level of the graph represents the choices for a single square. The depth of the graph is the number of squares that need to be filled. With a branching factor of n and a depth of m, finding a solution in the graph has a worst-case performance of O(n ^ m).
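As a rough illustration of that depth-first search (and of the strategy described in the question), here is a minimal backtracking sketch that counts solutions; the class and method names are illustrative:

```java
public class SudokuCounter {
    // Counts solutions of a 9x9 grid (0 = blank), stopping as soon as it
    // knows there is more than one, which is enough to decide uniqueness.
    static int count(int[][] g, int cell) {
        if (cell == 81) return 1;                     // all 81 squares filled
        int r = cell / 9, c = cell % 9;
        if (g[r][c] != 0) return count(g, cell + 1);  // given digit: skip it
        int found = 0;
        for (int d = 1; d <= 9 && found < 2; d++) {
            if (ok(g, r, c, d)) {
                g[r][c] = d;                  // try this digit
                found += count(g, cell + 1);  // explore the rest of the grid
                g[r][c] = 0;                  // undo the guess (backtrack)
            }
        }
        return found;
    }

    // Returns true if digit d can be placed at row r, column c without conflict.
    static boolean ok(int[][] g, int r, int c, int d) {
        for (int i = 0; i < 9; i++)
            if (g[r][i] == d || g[i][c] == d) return false;
        int br = r - r % 3, bc = c - c % 3;   // top-left corner of the 3x3 box
        for (int i = br; i < br + 3; i++)
            for (int j = bc; j < bc + 3; j++)
                if (g[i][j] == d) return false;
        return true;
    }
}
```

count(grid, 0) returning 0, 1, or 2-or-more corresponds to the three outcomes in the question; branching happens only at the m blank cells, with at most n = 9 choices each, which is where the O(n ^ m) bound comes from.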
In many Sudokus, there will be a few numbers that can be placed directly with a bit of thought. By placing a number in the first empty cell, you give up on a lot of opportunities to reduce the possibilities. If the first ten empty cells have lots of possibilities, you get exponential growth. I'd ask the questions:
Where in the first line can the number 1 go?
Where in the first line can the number 2 go?
...
Where in the last line can the number 9 go?
Same but with nine columns?
Same but with the nine boxes?
Which number can go into the first cell?
...
Which number can go into the 81st cell?
That's 324 questions. If any question has exactly one answer, you pick that answer. If any question has no answer at all, you backtrack. If every question has two or more answers, you pick a question with the minimal number of answers.
You may get exponential growth, but only for problems that are really hard.
