Why does this code work for this TopCoder prob? - java

I've been thinking for hours about this TopCoder problem and couldn't come up with a working solution, and then I found the one given below, which is beautifully simple!
I'm trying to figure out how this solution works for the given problem, and how I could have come up with it myself. After reading the solution I figured it's a variant of Huffman coding, but that's as far as I could get. I'm really enthralled and would like to know what line of thought could lead to this solution.
Here's the problem:
http://community.topcoder.com/stat?c=problem_statement&pm=11860&rd=15091
Fox Ciel has lots of homework to do. The homework consists of some
mutually independent tasks. Different tasks may take different amounts
of time to complete. You are given a int[] workCost. For each i, the
i-th task takes workCost[i] seconds to complete. She would like to
attend a party and meet her friends, thus she wants to finish all
tasks as quickly as possible.
The main problem is that all foxes, including Ciel, really hate doing
homework. Each fox is only willing to do one of the tasks. Luckily,
Doraemon, a robotic cat from the 22nd century, gave Fox Ciel a split
hammer: a magic gadget which can split any fox into two foxes.
You are given an int splitCost. Using the split hammer on a fox is
instantaneous. Once a hammer is used on a fox, the fox starts to
split. After splitCost seconds she will turn into two foxes -- the
original fox and another completely new fox. While a fox is splitting,
it is not allowed to use the hammer on her again.
The work on a task cannot be interrupted: once a fox starts working on
a task, she must finish it. It is not allowed for multiple foxes to
cooperate on the same task. A fox cannot work on a task while she is
being split using the hammer. It is possible to split the same fox
multiple times. It is possible to split a fox both before and after
she solves one of the tasks.
Compute and return the smallest amount of time in which the foxes can
solve all the tasks.
And here's the solution I found at this link
import java.util.*;

public class FoxAndDoraemon {
    public int minTime(int[] workCost, int splitCost) {
        // Min-heap of "time needed to finish this piece of work".
        PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
        for (int i : workCost) pq.offer(i);
        // Repeatedly combine the two cheapest pieces: one fox splits
        // (splitCost seconds) and the two foxes then work in parallel,
        // so the combined cost is the larger piece plus the split.
        while (pq.size() >= 2) {
            int i = pq.poll();
            int j = pq.poll();
            pq.offer(Math.max(i, j) + splitCost);
        }
        return pq.poll();
    }
}

First of all, do you see the reasoning behind `max(i, j) + splitCost`? Basically, if one fox has to handle two tasks of sizes i and j, she splits herself into two (splitCost seconds) and the two foxes then perform the tasks in parallel, so the total time is the larger task plus the split. Let us call this process "merging": merge(i, j) = max(i, j) + splitCost.
Now assume we have three jobs a, b and c such that a > b > c. You can either do merge(merge(a,b),c), merge(merge(a,c),b) or merge(merge(b,c),a). Do the math and you can prove that merge(merge(b,c),a) is the least of the three.
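For instance, with made-up numbers a = 9, b = 5, c = 2 and splitCost = 3 (so merge(x, y) = max(x, y) + 3):

merge(merge(9,5), 2) = merge(12, 2) = 15
merge(merge(9,2), 5) = merge(12, 5) = 15
merge(merge(5,2), 9) = merge(8, 9)  = 12

Merging the two smallest jobs first wins, and that is exactly the pair the priority queue picks on every iteration.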
You can now use induction to prove that this solution is valid for any number of jobs (not just 3).

It's building a tree from the ground up.
The algorithm uses one basic fact to work:
Merging the two smallest elements first is always safe: a merge costs the larger of the two elements plus a split, so pairing a small element with anything larger can only raise the result. (See ElKamina's post for a much more thorough proof of this.)
So if you only had two elements in your tree (say 4 and 2) and your split cost was 4, the lowest amount of time is always going to be 8 (the larger element plus the cost of splitting).
So, to build our tree, let's say we have workCost [1,3,4,5,7,8,9,10] and splitCost 3.
We look at the two smallest elements: 1 and 3. The 'cost' of computing these is at minimum 6 (the larger number plus the cost of a split). By then reinserting 6 into the queue, you're essentially adding the subtree:

   6
  / \
 1   3
Where the 'height'/'cost' is 6 total. By repeating this process, you will build a tree using the smallest subelements you can (either an existing subtree, or a new unfinished homework).
The solution will eventually look like this: (Where S(X) is the cost of splitting plus solving everything below it)
                 S(17)
                /      \
            S(13)       S(14)
            /   \       /    \
        S(10)    9    10    S(11)
        /   \               /   \
     S(6)    7           S(8)    8
     /  \                /  \
    1    3              4    5
If you trace backwards up this tree, it's easy to see how the formula solved it: the cost of 1 and 3 is S(6), 4 and 5 is S(8), then S(6) and 7 is S(10), 8 and S(8) is S(11), etc.
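As a quick sanity check (assuming the FoxAndDoraemon class from the question above), this input does indeed yield 17, the root of the tree:

int[] workCost = {1, 3, 4, 5, 7, 8, 9, 10};
System.out.println(new FoxAndDoraemon().minTime(workCost, 3)); // prints 17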

Related

Where to start on a "generate and test" approach using Java

I am required to solve the "N Queens" problem using a generate and test approach in Java, so basically if N=8 my program must generate the 8^8 possible lists and test each one to return the 92 distinct lists that result in a solution to the problem. I must also use a DFS algorithm with backtracking to enumerate the possibilities.
To provide an example, list (2,4,6,8,3,1,7,5) means that the first queen is column 1 row 2, the second is column 2 row 4, the third is column 3 row 6...and so on.
The two main things preventing me from making headway on this are:
1) I have no idea how to generate every possible list of length N (containing integers of size N or less) in Java.
2) I don't really understand how, once I have all these lists, to abstract them into a datatype that can be traversed with a DFS algorithm.
I'm not begging someone to do my homework for me; rather, I'd like a conceptual walkthrough of how #2 can be thought of, and a (somewhat) tangible example of how, given an input N, I can generate all N^N lists.
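For what it's worth, here is a minimal sketch (my own illustration, with hypothetical names) of how a DFS can enumerate all N^N lists: each recursion level fixes the row value of one column, and reaching depth N yields one complete candidate to test.

import java.util.Arrays;

public class EnumerateLists {
    static void dfs(int[] rows, int col, int n) {
        if (col == n) {
            // One complete candidate list; test it for queen conflicts here.
            System.out.println(Arrays.toString(rows));
            return;
        }
        for (int r = 1; r <= n; r++) { // try every row for this column
            rows[col] = r;
            dfs(rows, col + 1, n);     // recurse on the next column
        }
    }

    public static void main(String[] args) {
        int n = 3;                     // n = 8 would enumerate 8^8 lists
        dfs(new int[n], 0, n);
    }
}

Backtracking then just means testing for conflicts before the recursive call and skipping rows that already clash, instead of testing only the finished lists.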

Behind the scenes of recursion? [duplicate]

One of the topics that seems to come up regularly on mailing lists and online discussions is the merits (or lack thereof) of doing a Computer Science Degree. An argument that seems to come up time and again for the negative party is that they have been coding for some number of years and they have never used recursion.
So the question is:
What is recursion?
When would I use recursion?
Why don't people use recursion?
There are a number of good explanations of recursion in this thread; this answer is about why you shouldn't use it in most languages.* In the majority of major imperative language implementations (i.e. every major implementation of C, C++, Basic, Python, Ruby, Java, and C#) iteration is vastly preferable to recursion.
To see why, walk through the steps that the above languages use to call a function:
space is carved out on the stack for the function's arguments and local variables
the function's arguments are copied into this new space
control jumps to the function
the function's code runs
the function's result is copied into a return value
the stack is rewound to its previous position
control jumps back to where the function was called
Doing all of these steps takes time, usually a little bit more than it takes to iterate through a loop. However, the real problem is in step #1. When many programs start, they allocate a single chunk of memory for their stack, and when they run out of that memory (often, but not always due to recursion), the program crashes due to a stack overflow.
So in these languages recursion is slower and it makes you vulnerable to crashing. There are still some arguments for using it though. In general, code written recursively is shorter and a bit more elegant, once you know how to read it.
There is a technique that language implementers can use called tail call optimization which can eliminate some classes of stack overflow. Put succinctly: if a function's return expression is simply the result of a function call, then you don't need to add a new level onto the stack; you can reuse the current one for the function being called. Regrettably, few imperative language implementations have tail-call optimization built in.
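To make the shape concrete, here is a sketch of a tail call in Java (my own example; note that the JVM does not actually perform this optimization). The recursive call is the entire return expression, so a TCO-capable implementation could reuse the current stack frame:

// Tail-recursive sum of 1..n: no work remains in this frame after the call.
static int sumTo(int n, int acc) {
    if (n == 0) return acc;
    return sumTo(n - 1, acc + n); // tail position: the call is the whole result
}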
* I love recursion. My favorite static language doesn't use loops at all, recursion is the only way to do something repeatedly. I just don't think that recursion is generally a good idea in languages that aren't tuned for it.
** By the way Mario, the typical name for your ArrangeString function is "join", and I'd be surprised if your language of choice doesn't already have an implementation of it.
Simple English example of recursion.
A child couldn't sleep, so her mother told her a story about a little frog,
who couldn't sleep, so the frog's mother told her a story about a little bear,
who couldn't sleep, so the bear's mother told her a story about a little weasel...
who fell asleep.
...and the little bear fell asleep;
...and the little frog fell asleep;
...and the child fell asleep.
In the most basic computer science sense, recursion is a function that calls itself. Say you have a linked list structure:
struct Node {
    Node* next;
};
And if you want to find out how long the linked list is, you can do it with recursion:
int length(const Node* list) {
    if (!list) {
        return 0; // base case: the empty list has length 0
    } else {
        return 1 + length(list->next);
    }
}
(This could of course be done with a for loop as well, but is useful as an illustration of the concept)
Whenever a function calls itself, that's recursion. As with anything, there are good uses and bad uses for recursion.
The most simple example is tail recursion where the very last line of the function is a call to itself:
int FloorByTen(int num)
{
    if (num % 10 == 0)
        return num;
    else
        return FloorByTen(num - 1);
}
However, this is a lame, almost pointless example because it can easily be replaced by more efficient iteration. After all, recursion suffers from function call overhead, which in the example above could be substantial compared to the operation inside the function itself.
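For the record, here is the iterative replacement (my own sketch, valid for non-negative num), which needs no calls at all:

// Rounds num down to the nearest multiple of 10.
static int floorByTen(int num) {
    return num - num % 10;
}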
So the whole reason to do recursion rather than iteration should be to take advantage of the call stack to do some clever stuff. For example, if you call a function multiple times with different parameters inside the same loop then that's a way to accomplish branching. A classic example is the Sierpinski triangle.
You can draw one of those very simply with recursion, where the call stack branches in 3 directions:
private void BuildVertices(double x, double y, double len)
{
    if (len > 0.002)
    {
        mesh.Positions.Add(new Point3D(x, y + len, -len));
        mesh.Positions.Add(new Point3D(x - len, y - len, -len));
        mesh.Positions.Add(new Point3D(x + len, y - len, -len));
        len *= 0.5;
        BuildVertices(x, y + len, len);
        BuildVertices(x - len, y - len, len);
        BuildVertices(x + len, y - len, len);
    }
}
If you attempt to do the same thing with iteration I think you'll find it takes a lot more code to accomplish.
Other common use cases might include traversing hierarchies, e.g. website crawlers, directory comparisons, etc.
Conclusion
In practical terms, recursion makes the most sense whenever you need iterative branching.
Recursion is a method of solving problems based on the divide and conquer mentality.
The basic idea is that you take the original problem and divide it into smaller (more easily solved) instances of itself, solve those smaller instances (usually by using the same algorithm again) and then reassemble them into the final solution.
The canonical example is a routine to generate the Factorial of n. The Factorial of n is calculated by multiplying all of the numbers between 1 and n. An iterative solution in C# looks like this:
public int Fact(int n)
{
    int fact = 1;
    for (int i = 2; i <= n; i++)
    {
        fact = fact * i;
    }
    return fact;
}
There's nothing surprising about the iterative solution and it should make sense to anyone familiar with C#.
The recursive solution is found by recognising that the nth Factorial is n * Fact(n-1). Or to put it another way, if you know what a particular Factorial number is you can calculate the next one. Here is the recursive solution in C#:
public int FactRec(int n)
{
    if (n < 2)
    {
        return 1;
    }
    return n * FactRec(n - 1);
}
The first part of this function is known as a Base Case (or sometimes Guard Clause) and is what prevents the algorithm from running forever. It just returns the value 1 whenever the function is called with a value of 1 or less. The second part is more interesting and is known as the Recursive Step. Here we call the same method with a slightly modified parameter (we decrement it by 1) and then multiply the result with our copy of n.
When first encountered this can be kind of confusing so it's instructive to examine how it works when run. Imagine that we call FactRec(5). We enter the routine, are not picked up by the base case and so we end up like this:
// In FactRec(5)
return 5 * FactRec( 5 - 1 );
// which is
return 5 * FactRec(4);
If we re-enter the method with the parameter 4 we are again not stopped by the guard clause and so we end up at:
// In FactRec(4)
return 4 * FactRec(3);
If we substitute this return value into the return value above we get
// In FactRec(5)
return 5 * (4 * FactRec(3));
This should give you a clue as to how the final solution is arrived at so we'll fast track and show each step on the way down:
return 5 * (4 * FactRec(3));
return 5 * (4 * (3 * FactRec(2)));
return 5 * (4 * (3 * (2 * FactRec(1))));
return 5 * (4 * (3 * (2 * (1))));
That final substitution happens when the base case is triggered. At this point we have a simple algebraic formula to solve, which equates directly to the definition of Factorials in the first place.
It's instructive to note that every call into the method results in either a base case being triggered or a call to the same method where the parameters are closer to a base case (often called a recursive call). If this is not the case then the method will run forever.
Recursion is solving a problem with a function that calls itself. A good example of this is a factorial function. Factorial is a math problem where factorial of 5, for example, is 5 * 4 * 3 * 2 * 1. This function solves this in C# for positive integers (not tested - there may be a bug).
public int Factorial(int n)
{
    if (n <= 1)
        return 1;
    return n * Factorial(n - 1);
}
Recursion refers to a method which solves a problem by solving a smaller version of the problem and then using that result plus some other computation to formulate the answer to the original problem. Often times, in the process of solving the smaller version, the method will solve a yet smaller version of the problem, and so on, until it reaches a "base case" which is trivial to solve.
For instance, to calculate a factorial for the number X, one can represent it as X times the factorial of X-1. Thus, the method "recurses" to find the factorial of X-1, and then multiplies whatever it got by X to give a final answer. Of course, to find the factorial of X-1, it'll first calculate the factorial of X-2, and so on. The base case would be when X is 0 or 1, in which case it knows to return 1 since 0! = 1! = 1.
Consider an old, well known problem:
In mathematics, the greatest common divisor (gcd) … of two or more non-zero integers, is the largest positive integer that divides the numbers without a remainder.
The definition of gcd is surprisingly simple:

gcd(m, 0) = m
gcd(m, n) = gcd(n, m mod n)

where mod is the modulo operator (that is, the remainder after integer division).
In English, this definition says the greatest common divisor of any number and zero is that number, and the greatest common divisor of two numbers m and n is the greatest common divisor of n and the remainder after dividing m by n.
If you'd like to know why this works, see the Wikipedia article on the Euclidean algorithm.
Let's compute gcd(10, 8) as an example. Each step is equal to the one just before it:
gcd(10, 8)
gcd(8, 10 mod 8)
gcd(8, 2)
gcd(2, 8 mod 2)
gcd(2, 0)
2
In the first step, 8 does not equal zero, so the second part of the definition applies. 10 mod 8 = 2 because 8 goes into 10 once with a remainder of 2. At step 3, the second part applies again, but this time 8 mod 2 = 0 because 2 divides 8 with no remainder. At step 5, the second argument is 0, so the answer is 2.
Did you notice that gcd appears on both the left and right sides of the equals sign? A mathematician would say this definition is recursive because the expression you're defining recurs inside its definition.
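As a sketch (mine, not part of the original answer), the definition transcribes into Java almost verbatim for non-negative inputs:

// gcd(m, 0) = m;  gcd(m, n) = gcd(n, m mod n)
static int gcd(int m, int n) {
    if (n == 0) return m; // base case
    return gcd(n, m % n); // the expression recurs inside its definition
}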
Recursive definitions tend to be elegant. For example, a recursive definition for the sum of a list is
sum l =
    if empty(l)
        return 0
    else
        return head(l) + sum(tail(l))
where head is the first element in a list and tail is the rest of the list. Note that sum recurs inside its definition at the end.
Maybe you'd prefer the maximum value in a list instead:
max l =
    if empty(l)
        error
    elsif length(l) = 1
        return head(l)
    else
        tailmax = max(tail(l))
        if head(l) > tailmax
            return head(l)
        else
            return tailmax
You might define multiplication of non-negative integers recursively to turn it into a series of additions:
a * b =
    if b = 0
        return 0
    else
        return a + (a * (b - 1))
If that bit about transforming multiplication into a series of additions doesn't make sense, try expanding a few simple examples to see how it works.
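For example, expanding 3 * 4 step by step:

3 * 4 = 3 + (3 * 3)
      = 3 + (3 + (3 * 2))
      = 3 + (3 + (3 + (3 * 1)))
      = 3 + (3 + (3 + (3 + (3 * 0))))
      = 3 + (3 + (3 + (3 + 0)))
      = 12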
Merge sort has a lovely recursive definition:
sort(l) =
    if empty(l) or length(l) = 1
        return l
    else
        (left, right) = split l
        return merge(sort(left), sort(right))
Recursive definitions are all around if you know what to look for. Notice how all of these definitions have very simple base cases, e.g., gcd(m, 0) = m. The recursive cases whittle away at the problem to get down to the easy answers.
With this understanding, you can now appreciate the other algorithms in Wikipedia's article on recursion!
A function that calls itself
When a function can be (easily) decomposed into a simple operation plus the same function on some smaller portion of the problem. I should say, rather, that this makes it a good candidate for recursion.
They do!
The canonical example is the factorial which looks like:
int fact(int a)
{
    if (a <= 1) /* base case; <= guards against fact(0) recursing forever */
        return 1;
    return a * fact(a - 1);
}
In general, recursion isn't necessarily fast (function call overhead tends to be high because recursive functions tend to be small, see above) and can suffer from some problems (stack overflow anyone?). Some say they tend to be hard to get 'right' in non-trivial cases but I don't really buy into that. In some situations, recursion makes the most sense and is the most elegant and clear way to write a particular function. It should be noted that some languages favor recursive solutions and optimize them much more (LISP comes to mind).
A recursive function is one which calls itself. The most common reason I've found to use it is traversing a tree structure. For example, if I have a TreeView with checkboxes (think installation of a new program, "choose features to install" page), I might want a "check all" button which would be something like this (pseudocode):
function cmdCheckAllClick {
    checkRecursively(TreeView1.RootNode);
}

function checkRecursively(Node n) {
    n.Checked = True;
    foreach (n.Children as child) {
        checkRecursively(child);
    }
}
So you can see that checkRecursively first checks the node it is passed, then calls itself for each of that node's children.
You do need to be a bit careful with recursion. If you get into an infinite recursive loop, you will get a Stack Overflow exception :)
I can't think of a reason why people shouldn't use it, when appropriate. It is useful in some circumstances, and not in others.
I think that because it's an interesting technique, some coders perhaps end up using it more often than they should, without real justification. This has given recursion a bad name in some circles.
Recursion is an expression directly or indirectly referencing itself.
Consider recursive acronyms as a simple example:
GNU stands for GNU's Not Unix
PHP stands for PHP: Hypertext Preprocessor
YAML stands for YAML Ain't Markup Language
WINE stands for Wine Is Not an Emulator
VISA stands for Visa International Service Association
More examples on Wikipedia
Recursion works best with what I like to call "fractal problems", where you're dealing with a big thing that's made of smaller versions of that big thing, each of which is an even smaller version of the big thing, and so on. If you ever have to traverse or search through something like a tree or nested identical structures, you've got a problem that might be a good candidate for recursion.
People avoid recursion for a number of reasons:
Most people (myself included) cut their programming teeth on procedural or object-oriented programming as opposed to functional programming. To such people, the iterative approach (typically using loops) feels more natural.
Those of us who cut our programming teeth on procedural or object-oriented programming have often been told to avoid recursion because it's error prone.
We're often told that recursion is slow. Calling and returning from a routine repeatedly involves a lot of stack pushing and popping, which is slower than looping. I think some languages handle this better than others, and those languages are most likely not those where the dominant paradigm is procedural or object-oriented.
For at least a couple of programming languages I've used, I remember hearing recommendations not to use recursion if it gets beyond a certain depth because its stack isn't that deep.
A recursive statement is one in which you define the process of what to do next as a combination of the inputs and what you have already done.
For example, take factorial:
factorial(6) = 6*5*4*3*2*1
But it's easy to see factorial(6) also is:
6 * factorial(5) = 6*(5*4*3*2*1).
So generally:
factorial(n) = n*factorial(n-1)
Of course, the tricky thing about recursion is that if you want to define things in terms of what you have already done, there needs to be some place to start.
In this example, we just make a special case by defining factorial(1) = 1.
Now we see it from the bottom up:
factorial(6) = 6*factorial(5)
             = 6*5*factorial(4)
             = 6*5*4*factorial(3)
             = 6*5*4*3*factorial(2)
             = 6*5*4*3*2*factorial(1)
             = 6*5*4*3*2*1
Since we defined factorial(1) = 1, we reach the "bottom".
Generally speaking, recursive procedures have two parts:
1) The recursive part, which defines some procedure in terms of new inputs combined with what you've "already done" via the same procedure. (i.e. factorial(n) = n*factorial(n-1))
2) A base part, which makes sure that the process doesn't repeat forever by giving it some place to start (i.e. factorial(1) = 1)
It can be a bit confusing to get your head around at first, but just look at a bunch of examples and it should all come together. If you want a much deeper understanding of the concept, study mathematical induction. Also, be aware that some languages optimize for recursive calls while others do not. It's pretty easy to make insanely slow recursive functions if you're not careful, but there are also techniques to make them performant in most cases.
Hope this helps...
I like this definition:
In recursion, a routine solves a small part of a problem itself, divides the problem into smaller pieces, and then calls itself to solve each of the smaller pieces.
I also like Steve McConnell's discussion of recursion in Code Complete, where he criticises the examples used in computer science books on recursion.
Don't use recursion for factorials or Fibonacci numbers
One problem with computer-science textbooks is that they present silly examples of recursion. The typical examples are computing a factorial or computing a Fibonacci sequence. Recursion is a powerful tool, and it's really dumb to use it in either of those cases. If a programmer who worked for me used recursion to compute a factorial, I'd hire someone else.
I thought this was a very interesting point to raise and may be a reason why recursion is often misunderstood.
EDIT:
This was not a dig at Dav's answer - I had not seen that reply when I posted this
1.)
A method is recursive if it can call itself; either directly:
void f() {
    ... f() ...
}

or indirectly:

void f() {
    ... g() ...
}

void g() {
    ... f() ...
}
2.) When to use recursion
Q: Does using recursion usually make your code faster?
A: No.
Q: Does using recursion usually use less memory?
A: No.
Q: Then why use recursion?
A: It sometimes makes your code much simpler!
3.) People use recursion mainly when writing the iterative version would be complex. For example, tree traversals like preorder and postorder can be written both iteratively and recursively, but we usually write the recursive version because of its simplicity, as in the sketch below.
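A minimal sketch (mine, with a hypothetical Node class) showing how short the recursive preorder is:

// Hypothetical binary-tree node, just for illustration.
class Node {
    int value;
    Node left, right;
}

// Preorder: visit the node, then its left subtree, then its right subtree.
static void preorder(Node n) {
    if (n == null) return;       // base case: empty subtree
    System.out.println(n.value); // "visit"
    preorder(n.left);
    preorder(n.right);
}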
Here's a simple example: how many elements in a set. (there are better ways to count things, but this is a nice simple recursive example.)
First, we need two rules:
if the set is empty, the count of items in the set is zero (duh!).
if the set is not empty, the count is one plus the number of items in the set after one item is removed.
Suppose you have a set like this: [x x x]. Let's count how many items there are.
The set is [x x x], which is not empty, so we apply rule 2: the number of items is one plus the number of items in [x x] (i.e. we removed an item).
The set is [x x], so we apply rule 2 again: one + number of items in [x].
The set is [x], which still matches rule 2: one + number of items in [].
Now the set is [], which matches rule 1: the count is zero!
Now that we know the answer in step 4 (0), we can solve step 3 (1 + 0)
Likewise, now that we know the answer in step 3 (1), we can solve step 2 (1 + 1)
And finally now that we know the answer in step 2 (2), we can solve step 1 (1 + 2) and get the count of items in [x x x], which is 3. Hooray!
We can represent this as:
count of [x x x] = 1 + count of [x x]
                 = 1 + (1 + count of [x])
                 = 1 + (1 + (1 + count of []))
                 = 1 + (1 + (1 + 0))
                 = 1 + (1 + 1)
                 = 1 + 2
                 = 3
When applying a recursive solution, you usually have at least 2 rules:
the basis, the simple case which states what happens when you have "used up" all of your data. This is usually some variation of "if you are out of data to process, your answer is X"
the recursive rule, which states what happens if you still have data. This is usually some kind of rule that says "do something to make your data set smaller, and reapply your rules to the smaller data set."
If we translate the above to pseudocode, we get:
numberOfItems(set)
    if set is empty
        return 0
    else
        remove 1 item from set
        return 1 + numberOfItems(set)
There's a lot more useful examples (traversing a tree, for example) which I'm sure other people will cover.
Well, that's a pretty decent definition you have. And Wikipedia has a good definition too. So I'll add another (probably worse) definition for you.
When people refer to "recursion", they're usually talking about a function they've written which calls itself repeatedly until it is done with its work. Recursion can be helpful when traversing hierarchies in data structures.
An example: A recursive definition of a staircase is:
A staircase consists of:
- a single step and a staircase (recursion)
- or only a single step (termination)
To recurse on a solved problem: do nothing, you're done.
To recurse on an open problem: do the next step, then recurse on the rest.
In plain English:
Assume you can do 3 things:
Take one apple
Write down tally marks
Count tally marks
You have a lot of apples in front of you on a table and you want to know how many apples there are.
start:
    Is the table empty?
    yes: Count the tally marks and cheer like it's your birthday!
    no:  Take 1 apple and put it aside
         Write down a tally mark
         goto start
The process of repeating the same thing till you are done is called recursion.
I hope this is the "plain english" answer you are looking for!
A recursive function is a function that contains a call to itself. A recursive struct is a struct that contains an instance of itself. You can combine the two as a recursive class. The key part of a recursive item is that it contains an instance/call of itself.
Consider two mirrors facing each other. We've seen the neat infinity effect they make. Each reflection is an instance of a mirror, which is contained within another instance of a mirror, etc. The mirror containing a reflection of itself is recursion.
A binary search tree is a good programming example of recursion. The structure is recursive, with each Node containing references to two child Nodes. Functions that work on a binary search tree are also recursive.
This is an old question, but I want to add an answer from a logistical point of view (i.e. not from an algorithm-correctness or performance point of view).
I use Java for work, and Java doesn't support nested functions. As such, if I want to do recursion, I might have to define an external function (which exists only because my code bumps against Java's bureaucratic rules), or I might have to refactor the code altogether (which I really hate to do).
Thus, I often avoid recursion and use an explicit stack instead, because recursion itself is essentially a stack operation, as sketched below.
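For illustration (my own sketch, with a hypothetical TreeNode class), here is a preorder traversal written with an explicit stack instead of the call stack:

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical tree node, just for illustration.
class TreeNode {
    int value;
    TreeNode left, right;
}

class IterativeTraversal {
    // Preorder traversal using an explicit stack in place of recursion.
    static void preorder(TreeNode root) {
        Deque<TreeNode> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            TreeNode n = stack.pop();
            System.out.println(n.value);
            // Push right first so the left subtree is processed first.
            if (n.right != null) stack.push(n.right);
            if (n.left != null) stack.push(n.left);
        }
    }
}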
You want to use it anytime you have a tree structure. It is very useful in reading XML.
Recursion as it applies to programming is basically calling a function from inside its own definition (inside itself), with different parameters so as to accomplish a task.
"If I have a hammer, make everything look like a nail."
Recursion is a problem-solving strategy for huge problems, where at every step you just "turn 2 small things into one bigger thing," each time with the same hammer.
Example
Suppose your desk is covered with a disorganized mess of 1024 papers. How do you make one neat, clean stack of papers from the mess, using recursion?
Divide: Spread all the sheets out, so you have just one sheet in each "stack".
Conquer:
Go around, putting each sheet on top of one other sheet. You now have stacks of 2.
Go around, putting each 2-stack on top of another 2-stack. You now have stacks of 4.
Go around, putting each 4-stack on top of another 4-stack. You now have stacks of 8.
... on and on ...
You now have one huge stack of 1024 sheets!
Notice that this is pretty intuitive, aside from counting everything (which isn't strictly necessary). You might not go all the way down to 1-sheet stacks, in reality, but you could and it would still work. The important part is the hammer: With your arms, you can always put one stack on top of the other to make a bigger stack, and it doesn't matter (within reason) how big either stack is.
Recursion is the process where a method calls itself in order to perform a certain task. It reduces redundancy in code. Most recursive functions or methods must have a condition to break the recursive call, i.e. stop it from calling itself when a condition is met - this prevents an infinite loop. Not all functions are suited to being used recursively.
Sorry if my answer agrees with someone else's; I'm just trying to explain recursion in plain English.
Suppose you have three managers - Jack, John and Morgan.
Jack manages 2 programmers, John 3, and Morgan 5.
You are going to give every manager $300 and want to know what it would cost.
The answer is obvious - but what if 2 of Morgan's employees are also managers?
HERE comes the recursion.
You start from the top of the hierarchy. The total cost is $0.
You start with Jack, then check whether he has any managers among his employees. If any of them are managers, check whether they have any managers as employees, and so on. Add $300 to the total cost every time you find a manager.
When you are finished with Jack, go on to John, his employees, and then to Morgan.
You'll never know how many cycles you'll go through before getting an answer, though you do know how many managers you have and how much budget you can spend.
Recursion is a tree, with branches and leaves, called parents and children respectively.
When you use a recursive algorithm, you are, more or less consciously, building a tree from the data.
In plain English, recursion means repeating something again and again.
In programming, one example is calling a function within itself.
Look at the following example of calculating the factorial of a number:

public int fact(int n)
{
    if (n == 0) return 1;
    else return n * fact(n - 1);
}
Any algorithm exhibits structural recursion on a datatype if it basically consists of a switch statement with a case for each case of the datatype.
for example, when you are working on a type
tree = null
     | leaf(value: integer)
     | node(left: tree, right: tree)
a structural recursive algorithm would have the form
function computeSomething(x : tree) =
    if x is null: base case
    if x is leaf: do something with x.value
    if x is node: do something with x.left,
                  do something with x.right,
                  combine the results
This is really the most obvious way to write any algorithm that works on a data structure.
Now, when you look at the integers (well, the natural numbers) as defined using the Peano axioms
integer = 0 | succ(integer)
you see that a structural recursive algorithm on integers looks like this
function computeSomething(x : integer) =
    if x is 0 : base case
    if x is succ(prev) : do something with prev
The too-well-known factorial function is about the most trivial example of this form.
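As an illustration (my own sketch, not the answerer's code), the tree case might look like this in Java, with the datatype's cases encoded as subclasses:

// The cases of the tree datatype; Java's null plays the "null" case.
abstract class Tree {}
class Leaf extends Tree { int value; Leaf(int v) { value = v; } }
class Node extends Tree {
    Tree left, right;
    Node(Tree l, Tree r) { left = l; right = r; }
}

class TreeSum {
    // Structural recursion: one branch per case of the datatype.
    static int sum(Tree t) {
        if (t == null) return 0;                    // base case
        if (t instanceof Leaf) return ((Leaf) t).value;
        Node n = (Node) t;
        return sum(n.left) + sum(n.right);          // combine the results
    }
}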
A function that calls itself or uses its own definition.

divide and conquer assignment

I have to write a Java program to simulate a robot that matches lids with their corresponding jars. The robot has two arms, one for the lids and one for the jars. I can't compare lids with lids or jars with jars. The user will enter three lines:
5 (n)
9 7 2 5 6 (sizes of the lids)
2 6 5 7 9 (sizes of the jars)
The output should be:
3 5 4 2 1
The 3rd number in line 2 is equal to the 1st number in line 3 and so on.
We are supposed to use a divide and conquer algorithm, and all I have to go by is that it's similar to quicksort. I really have no idea where to start. Any help would be greatly appreciated.
Divide and conquer algorithms might be confusing at first. Think about it as if you have some relatively large problem that you can't solve, but if that problem were much, much smaller you could find the answer. Applying it to this situation: suppose instead of having 2 big lists of lid and jar sizes, you instead have 1 lid size and some number of jar sizes. You could easily tell me which jar that lid fits on, right? The idea of solving the problem for 1 lid is essentially breaking the large problem (several lids) into a smaller one (1 lid). Once that makes sense, you can move on to the algorithm.
You will likely employ some recursion in order to write your algorithm. Start with the base case and solve the simplest meaningful problem (I like the 1 lid example). Once you can solve that problem, can you recursively solve the same problem for every lid? I'm not attaching any code because I don't want to spoil the learning experience for you (and this is clearly homework).
The whole point of "divide and conquer" is to divide up the work into multiple, smaller problems; then you solve the smaller problems and roll them up until they are combined into a solution. This pretty much implies a recursive solution.
With any recursive function, you always need a "basis case". This will be a simple case that is trivially easy to solve. For example, if you only have one jar and one lid, then you simply return that the jar matches the lid. (Because as part of the problem statement, you always have one matching lid for each jar.)
So one place to start is a trivial program that only works right for a length-1 list of jars/lids. Then add more machinery to make it more capable.
With quicksort, you choose a place to divide up the numbers (the "pivot"), then do a very rough sort (just take numbers that should be on the left of the pivot but are on the right and move them to the left, and vice versa). Then you call quicksort recursively on the sublist. Eventually each of the recursive calls to quicksort hits a basis case (a length-1 sublist); once they all have hit the basis case the quicksort is done. (Note: there are ways to optimize quicksort and make it faster by adding more code, but I'm talking about the simplest implementation of quicksort here.)
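For reference, a minimal sketch of that simplest quicksort (my own illustration; deliberately not a solution to the jar/lid problem, where a lid may only be compared against a jar):

import java.util.Arrays;

class QuickSort {
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;           // basis case: 0 or 1 elements
        int pivot = a[hi];              // simplest choice: the last element
        int i = lo;
        for (int j = lo; j < hi; j++) { // rough sort around the pivot
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        quicksort(a, lo, i - 1);        // recurse on each sublist
        quicksort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] a = {9, 7, 2, 5, 6};
        quicksort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [2, 5, 6, 7, 9]
    }
}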
Maybe in this case you should start with a length-n list of just the numbers from 1 to n, and then swap the numbers around until you have a correct list?
Hmm.  With length-2 lists, there are only two possibilities: the lists line up, or not.  If they line up you are done.  If not, you swap the numbers to make them line up, and you are done.  Hmm.  This is similar to sorting in a way, but you can't just compare numbers directly like you can when you are sorting.  (In sorting you always know that 3 sorts below 5, but here it might not be so.) So, now think about a way to break down the list and keep doing it until you have a length-2 or length-1 sublist, then handle those trivial cases.
Sounds like a fun problem. I hope you enjoy working on it.

Genetic Algorithm - Grouping people: Only find solutions containing criteria X,Y and Z

I am trying to solve the following problem:
I have a list of 30 people.
These people need to be divided into groups of 6.
Each person has given the names of 3 other people who they would like to be in a group with.
I thought of solving this problem using a genetic algorithm.
The fitness function could evaluate all the groups, and assign a fitness score based on how many people per room have all their preferences met. (or a scoring system similar to that)
Example:
One of the generated solutions is: 1,3,19,5,22,2,7,8,11,12,13,14,15,13,17....etc
I would assume the first 5 people are in the first group, and the next 5 in the next group, and calculate a fitness value from that.
I think that this solution would work - does anyone see a better way of doing this?
My main question is this:
If I want to make sure person A and B are definitely in the same group, I could implement the fitness function to check for this and assign a terrible fitness if this condition isn't met. Is this the best way to do it? It seems quite inefficient.
Is there a way to 'lock' certain parts of the solution ("certain genes") and just solve for the remainder?
Any help or insights will be appreciated.
Thanks in advance.
AK
Just to clarify a bit: your problem isn't about genetic programming but genetic algorithms, which are two different things. Genetic programming is about generating (using evolutionary algorithms) executable individuals that will generate your solutions, while in genetic algorithms the individuals directly represent your solutions.
That being said, your two assumptions are correct. Data representation is a key element of evolutionary algorithms in general, and a bad representation may hinder efficient exploration of the solution space. Your current data representation seems correct to me, given that groups are only allowed to have exactly 5 individuals.
Your second thought, about the way to enforce some criteria, is also right. Putting a large fitness value (preferably one that can't represent a potentially valid even if bad solution), such as infinity (if your library / language allows it easily), is the preferred way in the literature to express invalid solutions. This has multiple advantages over simply deleting invalid individuals: during the selection stage, bad individuals won't be selected, and thus the solution space they represent won't be explored as much as interesting ones, which is computationally good because it surely won't contain optimal solutions. Knowing a solution is bad is good knowledge, after all. At the same time, genetic diversity is really important in evolutionary algorithms in order to avoid stagnation. At least some bad individuals should be kept for the sake of genetic diversity, in order to explore the solution spaces between currently represented zones.
The goal of genetic algorithms is to compute solutions that are either impossible or too hard to compute analytically or by brute force. Trying to dynamically lock down some genes with heuristics would require much knowledge about the inner workings of your problem as well as the underlying evolution mechanisms, and would defeat the purpose of using evolutionary algorithms. The effective goal of evolutionary algorithms is to lock down the genes that seem correct.
In fact, if you are a priori absolutely positively certain that some given genes must have a given value, don't represent them in your individuals. For instance, make your first group 3 individuals long if you are sure that the other 2 must have some given value. You can then code your evaluation function as if there were 5 individuals in the first group, but you won't be evolving / searching to replace the 2 fixed ones.
What does your crossover operation look like? The way you have it laid out in your description, I'm not sure how you implement it cleanly. For instance if you have two solutions:
1, 2, 3, 4, 5, ....., 30
and
1, 2, 30, 29,......,10
Assuming you're using a single point crossover function, with the genomes above you would have the potential to get multiple assignments for the same person and other people not being assigned at all.
I would have a genome with 30 values, where each value defines a person's group assignment (1-6). It would look like 656324113255632....etc. So person 1 is assigned group 6, person 2 group 5, etc. This would make the crossover operation easier to implement, because you don't have to ensure that after crossover each new solution is a valid assignment regardless of whether it's optimal.
The fitness function would assign a penalty for each group not having the proper number of members (5), and additional penalties for group member assignments that are suboptimal. I would make the first penalty significantly larger than the second, and then adjust these to get the results you're looking for.
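A sketch of such a fitness function (my own illustration; the genome encoding, weights, and names are assumptions, not the poster's code), where lower is better:

// genome[p] = group (0..5) assigned to person p; preferences[p] lists the
// 3 people that person p wants to be grouped with.
static int fitness(int[] genome, int[][] preferences) {
    int numGroups = 6, targetSize = 5;
    int penalty = 0;

    // Heavy penalty for groups that don't have the proper number of members.
    int[] sizes = new int[numGroups];
    for (int g : genome) sizes[g]++;
    for (int s : sizes) penalty += 100 * Math.abs(s - targetSize);

    // Lighter penalty for each unmet preference.
    for (int p = 0; p < genome.length; p++)
        for (int friend : preferences[p])
            if (genome[p] != genome[friend]) penalty++;

    return penalty;
}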
This can be modeled as a generalized quadratic assignment problem (GQAP). This problem allows you to specify a number of pieces of equipment (people) that demand a certain capacity, a number of locations (groups) that offer a capacity, a weights matrix that specifies the closeness between pieces of equipment, and a distance matrix specifying the distance between locations. Additionally, there are installation costs, but these are not required for your problem. I have implemented this problem in HeuristicLab. It's not part of the trunk, but I can send you the plugin if you're interested (or you can compile it yourself).
It seems that the most challenging part of using a genetic algorithm for this problem is implementing the crossover. Here's how I would do it:
First choose a constant, C. C will stay constant throughout all generations, and I will explain its purpose in a moment.
I will use a smaller example than 5 groups of 6 to demonstrate this crossover, but the premise is the same. Say we have 2 parents, each consisting of 3 groups of 3. Let's make one [[1,2,3],[4,5,6],[7,8,9]], and the other [[9,4,3],[5,7,8],[6,1,2]].
1. Make a list of possible numbers (1 through the total number of people), in this case simply [1,2,3,4,5,6,7,8,9]. Remove 1 random number from the list. Let's say we remove 2. The list becomes [1,3,4,5,6,7,8,9].
2. We assign each remaining number a probability. The probability starts at 1, and goes up by C for any matches with the parents. For example, in parent 1, 3 and 2 are in the same group, so 3 would have a probability of 1+C. Same thing with 6, because it forms a match in parent 2. 1 would have a probability of 1+2C, because it is in the same group as 2 in both parents. Based on these probabilities, use a roulette wheel type selection. Let's say we pick 6.
3. Now we have 2 and 6 in the same group. We similarly look for matches with these numbers and make probabilities. For each parent, we add C if a number matches with only 2 or only 6, and 2C if it matches with both. Continue this until the group is done (for 3x3 this is the last selection, but for 5x6 there would be a few more).
4. Choose a new random number that has not been picked, and continue for the other groups.
One of the good things about this crossover is that it basically includes mutation already: there are chances built in to group people who were not grouped in either parent.
Credit: I adapted the idea from the Omicron Genetic Algorithm

Algorithm Complexity (Big-O) of sudoku solver

I'm looking for the "how do you find it" because I have no idea how to approach finding the algorithmic complexity of my program.
I wrote a Sudoku solver in Java, without efficiency in mind (I wanted to try to make it work recursively, which I succeeded at!)
Some background:
My strategy employs backtracking to determine, for a given Sudoku puzzle, whether the puzzle has exactly one unique solution or not. So I basically read in a given puzzle and solve it. Once I've found one solution, I'm not necessarily done; I need to continue to explore for further solutions. At the end, one of three possible outcomes happens: the puzzle is not solvable at all, the puzzle has a unique solution, or the puzzle has multiple solutions.
My program reads the puzzle coordinates from a file that has one line for each given digit, consisting of the row, column, and digit. By my own convention, a 7 in the upper left square is written as 007.
Implementation:
I load the values in from the file and store them in a 2-D array.
I go down the array until I find a blank (unfilled value), set it to 1, and check for any conflicts (whether the value I entered is valid or not).
If it is valid, I move on to the next value.
If not, I increment the value by 1 until I find a digit that works, or, if none of them work (1 through 9), I go back one step to the last value that I adjusted and increment that one (using recursion).
I am done solving when all 81 elements have been filled, without conflicts.
If any solutions are found, I print them to the terminal.
Otherwise, if I try to "go back one step" on the FIRST element that I initially modified, it means that there were no solutions.
How can I determine my program's algorithmic complexity? I thought it might be linear [O(n)], but I am accessing the array multiple times, so I'm not sure :(
Any help is appreciated
O(n ^ m) where n is the number of possibilities for each square (i.e., 9 in classic Sudoku) and m is the number of spaces that are blank.
This can be seen by working backwards from only a single blank. If there is only one blank, then you have n possibilities that you must work through in the worst case. If there are two blanks, then you must work through n possibilities for the first blank and n possibilities for the second blank for each of the possibilities for the first blank. If there are three blanks, then you must work through n possibilities for the first blank. Each of those possibilities will yield a puzzle with two blanks that has n^2 possibilities.
This algorithm performs a depth-first search through the possible solutions. Each level of the graph represents the choices for a single square. The depth of the graph is the number of squares that need to be filled. With a branching factor of n and a depth of m, finding a solution in the graph has a worst-case performance of O(n ^ m).
In many Sudokus, there will be a few numbers that can be placed directly with a bit of thought. By placing a number in the first empty cell, you give up on a lot of opportunities to reduce the possibilities. If the first ten empty cells have lots of possibilities, you get exponential growth. I'd ask the questions:
Where in the first line can the number 1 go?
Where in the first line can the number 2 go?
...
Where in the last line can the number 9 go?
Same but with nine columns?
Same but with the nine boxes?
Which number can go into the first cell?
Which number can go into the 81st cell?
That's 324 questions. If any question has exactly one answer, you pick that answer. If any question has no answer at all, you backtrack. If every question has two or more answers, you pick a question with the minimal number of answers.
You may get exponential growth, but only for problems that are really hard.
