nth fibonacci number using dynamic programming - java

I am able to understand the dynamic programming implementation given HERE.
But I am not clear about another version given in the Cracking the Coding Interview book, which I am pasting below. Can someone please help me understand it? Moreover, is it not more expensive than the GeeksforGeeks dynamic programming implementation above?
int[] fib = new int[max];

int fibonacci(int i) {
    if (i == 0) return 0;
    if (i == 1) return 1;
    if (fib[i] != 0) return fib[i];
    fib[i] = fibonacci(i - 1) + fibonacci(i - 2);
    return fib[i];
}

Basically, int[] fib is a cache in which the ith Fibonacci number is stored once computed.
This is a great time saver. Otherwise the recursive fibonacci procedure would need to recalculate a lot of values.
E.g.
fib[8] = fibonacci(7) + fibonacci(6)
But then:
fib[7] = fibonacci(6) + fibonacci(5)
As you can see, without caching, the value for fibonacci(6) would need to be calculated twice.
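For reference, a minimal runnable sketch of the same idea (my own completion of the snippet; I'm assuming the cache array is sized to fit the largest index used):

```java
public class MemoFib {
    // Cache: fib[i] holds the ith Fibonacci number once computed (0 means "not yet computed").
    static int[] fib;

    static int fibonacci(int i) {
        if (i == 0) return 0;
        if (i == 1) return 1;
        if (fib[i] != 0) return fib[i];             // already cached: no recomputation
        fib[i] = fibonacci(i - 1) + fibonacci(i - 2);
        return fib[i];
    }

    public static void main(String[] args) {
        int n = 8;
        fib = new int[n + 1];                        // size the cache for the largest index used
        System.out.println(fibonacci(n));            // prints 21
    }
}
```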

Related

Java Recursion Fibonacci Value

Question:
How many calls are needed to recursively calculate the 7th Fibonacci value?
So this was a problem given to me, and the answer I was given was 41. Then I went to a professor because I didn't understand it, and I was given another answer; I think it was 25 (don't quote me on that). Then I went to another professor, and he told me that the person who gave me this problem should have provided the sample code, because there are multiple ways to write this recursive function, and they result in different numbers of calls.
So if this is true can you guys find different recursive functions that would result in a different amount of calls needed to get the 7th value of the sequence?
One way:
static long fibonacciR(int i)
{
    if (i <= 1)
        return i;
    return fibonacciR(i - 1) + fibonacciR(i - 2);
}
Another way:
static final int f[] = {0,1,1,2,3,5,8,13,21,34,55,89,144};

static long fibonacciR2(int i)
{
    if (i < f.length)
        return f[i];
    return fibonacciR2(i - 1) + fibonacciR2(i - 2);
}
In fact, 'another way' is any number of other ways, depending on how big you make the table. When the table has two elements, both methods make the same number of calls (41 for the 7th value). When it has three, there are 25 calls; when it has four, 15. And so on.
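These counts can be checked by instrumenting the recursion with a counter (the harness below is my own sketch; countCalls and the calls field are not from the original question):

```java
public class FibCallCounter {
    static int calls;     // incremented on every invocation
    static int[] table;   // precomputed prefix of the sequence, acting as the base cases

    static long fib(int i) {
        calls++;
        if (i < table.length) return table[i];
        return fib(i - 1) + fib(i - 2);
    }

    // Count how many calls fib(n) makes when the given base-case table is used.
    static int countCalls(int n, int[] baseTable) {
        table = baseTable;
        calls = 0;
        fib(n);
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(countCalls(7, new int[]{0, 1}));       // prints 41
        System.out.println(countCalls(7, new int[]{0, 1, 1}));    // prints 25
        System.out.println(countCalls(7, new int[]{0, 1, 1, 2})); // prints 15
    }
}
```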
Yet another way, to get specifically 25 calls:
static long fibonacciR3(int i)
{
    if (i == 0)
        return 0;
    if (i <= 2)
        return 1;
    return fibonacciR3(i - 1) + fibonacciR3(i - 2);
}

Recursion and Recursive Methods

I'm studying for my computer science final and am going back over some of the things that I never quite grasped when we went over them in class. The main thing being recursion. I think I've got the hang of the simple recursion example but am trying to work through one that was on a previous exam and am having trouble figuring out how it should be done.
Here is the question:
Texas numbers (Tx(n)) are defined as follows for non-negative numbers (assume true):
Tx(n) = 10 if n is 0
Tx(n) = 5 if n is 1
Tx(n) = 2*(Tx(n-1) + Tx(n-2)) if n >= 2
We are then asked to write the recursive function for Texas numbers. After making some corrections after the test, here's what I've come up with. I think it's right, but I'm not 100% sure.
public int Tx(int n) {
    if (n == 0)
        return 10;
    else if (n == 1)
        return 5;
    else
        return 2 * (Tx(n - 1) + Tx(n - 2));
}
Then we are asked to compute the value of Tx(5). This is where I'm stuck. If the return statement for the else were simply n-1, I think I'd be able to figure it out, but the n-1 + n-2 is completely throwing me off.
Can anyone explain how this would work, or share some links that have similar examples. I have tried looking this up online and in my textbook but the examples I've found are either so advanced that I have no clue what's going on, or they only deal with something like return n-1, which I already know how to do.
Let's start with Tx(2). n > 1, so we have 2*(Tx(n-1) + Tx(n-2)) which is 2*(Tx(1) + Tx(0)).
But we already know Tx(1) and Tx(0)! So just substitute them in and you get 2*(5 + 10) -> 30. Great, so now we know Tx(2).
What about Tx(3)? 2*(Tx(2) + Tx(1)). Nice, we already know these too :) Again, just fill them in to get 2*(30 + 5) -> 70.
You can work forwards to get to Tx(5).
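Working forwards like that all the way to Tx(5) can be sketched as a bottom-up loop (my own sketch, not part of the exam question; it keeps only the last two values):

```java
public class TexasNumbers {
    // Compute Tx(n) bottom-up: Tx(0)=10, Tx(1)=5, Tx(n)=2*(Tx(n-1)+Tx(n-2)).
    static int tx(int n) {
        int prev2 = 10, prev1 = 5;   // Tx(0) and Tx(1)
        if (n == 0) return prev2;
        if (n == 1) return prev1;
        int cur = 0;
        for (int i = 2; i <= n; i++) {
            cur = 2 * (prev1 + prev2);   // the recurrence, applied forwards
            prev2 = prev1;
            prev1 = cur;
        }
        return cur;
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 5; i++)
            System.out.println("Tx(" + i + ") = " + tx(i));
        // Tx(2)=30, Tx(3)=70, Tx(4)=200, Tx(5)=540
    }
}
```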
Your code is logically correct, you should just be using == to test equality, a single = is for assignment.
When you run your method, it will work backwards and solve smaller and smaller subproblems until it gets to a point where the answer is known, these are your base cases.
Tx(3)
= 2 * (Tx(2) + Tx(1))
= 2 * (2 * (Tx(1) + Tx(0)) + 5)
= 2 * (2 * (5 + 10) + 5)
= 70
In order for recursion to work, whatever you are doing each time to break the problem down into smaller problems needs to make some progress towards the base case. If it doesn't, you will just infinitely recurse until your computer runs out of space to store all of the repeated calls to the same function.
public int Tx(int n) {
    if (n == 0)
        return 10;
    else
        return Tx(n + 1); // n will never reach 0!
}
Tx(1) becomes Tx(2) -> Tx(3) -> Tx(4) -> Tx(5) etc.
Your implementation is good; there is only one minor mistake: in the conditions you should replace = with ==. It's not an assignment, it's a comparison.
By the way, what would you expect your method to return for Tx(-1) ?
You have implemented it right; just change = to ==.
If you want to further reduce the time complexity, you can store the results in an array outside the function, so that it does not compute the result for the same number again and again. This only saves you time on large computations.
You can use something like this.
public int tx(int n, int[] arr) {
    if (arr[n] == 0) {               // 0 marks "not yet computed"
        if (n == 0) {
            arr[n] = 10;             // base case Tx(0) = 10
        }
        else if (n == 1) {
            arr[n] = 5;              // base case Tx(1) = 5
        }
        else {
            arr[n] = 2 * (tx(n - 1, arr) + tx(n - 2, arr));
        }
    }
    return arr[n];
}
See, whenever you ask the computer for the value Tx(5), it calls the recursive function, and the program executes the else part because the value of n is 5.
Now in the else part, 2*(Tx(n-1) + Tx(n-2)) is executed.
In the first expansion it becomes 2*(Tx(4) + Tx(3)), then 2*((2*(Tx(3) + Tx(2))) + (2*(Tx(2) + Tx(1)))), and so on. The expansion continues until the value of n becomes 0 or 1.
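A hypothetical driver for the memoized approach (my own sketch; it relies on every Tx value being nonzero, so 0 can mark "not yet computed", and uses the base cases Tx(0)=10 and Tx(1)=5 from the definition):

```java
public class TxMemoDemo {
    static int tx(int n, int[] arr) {
        if (arr[n] == 0) {                 // 0 means "not computed yet"
            if (n == 0)      arr[n] = 10;  // Tx(0)
            else if (n == 1) arr[n] = 5;   // Tx(1)
            else             arr[n] = 2 * (tx(n - 1, arr) + tx(n - 2, arr));
        }
        return arr[n];
    }

    public static void main(String[] args) {
        int n = 5;
        int[] cache = new int[n + 1];      // all zeros initially, sized n + 1
        System.out.println(tx(n, cache));  // prints 540
    }
}
```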

Why use recursion if the same task can be accomplished with loop control structures?

As I start learning recursion, different questions cross my mind. Recursion uses more memory for the stack, and it's usually slower due to maintaining the stack.
What is the benefit of using recursion if I can still use for loops? We describe actions to be repeated until a condition is met, and we can use either recursion or loops for that.
Why would I choose the recursive version when I have faster control structures as an option?
Recursion uses more memory for the stack and it's usually slower due to maintaining the stack
That statement is far from being universally true. It applies in situations when you do not need to save the state for more than a fixed number of levels, but that does not cover many important tasks that can be solved recursively. For example, if you want to implement a depth-first search on a graph, you need to make your own data structure to store the state that would otherwise go on the stack.
What is the benefit of using recursion if I still can use a for loop?
You get more clarity when you apply a recursive algorithm to a task that is best understood through recursion, such as processing recursively-defined structures. In cases like that, a loop by itself is no longer sufficient: you need a data structure to go along with your loop.
Why would I choose the recursive version when I have faster control structures?
You wouldn't necessarily choose recursion when you could implement the same algorithm with faster control structures that are easy to understand. However, there are situations when you may want to code a recursion to improve readability, even though you know that you can code the algorithm using a loop, with no additional data structures. Modern compilers can detect situations like that, and "rewrite" your recursive code behind the scene to make it use iterations. This lets you have the best of both worlds - a recursive program that matches reader's expectations, and an iterative implementation that does not waste space on the stack.
Unfortunately, demonstrating examples of situations when recursion gives you clear advantages requires knowledge of advanced topics, so many educators take shortcuts by demonstrating recursion using wrong examples, such as factorials and Fibonacci numbers. One relatively simple example is implementing a parser for an expression with parentheses. You can do it in many different ways, with or without recursion, but the recursive way of parsing expressions gives you an amazingly concise solution that is easy to understand.
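To illustrate that last point, here is a toy recursive-descent parser (entirely my own sketch, handling only single digits, +, and parentheses; the TinyParser name and grammar are assumptions, not from the answer above). The nesting of parentheses maps directly onto the nesting of calls:

```java
public class TinyParser {
    // Grammar of this sketch:
    //   expr := term ('+' term)*
    //   term := digit | '(' expr ')'
    private final String src;
    private int pos;

    TinyParser(String src) { this.src = src; }

    int expr() {
        int value = term();
        while (pos < src.length() && src.charAt(pos) == '+') {
            pos++;                 // consume '+'
            value += term();
        }
        return value;
    }

    int term() {
        char c = src.charAt(pos);
        if (c == '(') {
            pos++;                 // consume '('
            int value = expr();    // recursion handles the nesting for us
            pos++;                 // consume ')'
            return value;
        }
        pos++;
        return c - '0';            // single-digit number
    }

    public static void main(String[] args) {
        System.out.println(new TinyParser("(1+(2+3))+4").expr()); // prints 10
    }
}
```

Doing the same with a loop requires an explicit stack to remember the open parentheses; the recursive version gets that bookkeeping for free from the call stack.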
A great example of a case where the recursive solution is better than the iterative one is the Tower of Hanoi. Consider the following two solutions:
Recursive (from this question):
public class Hanoi {
    public static void main(String[] args) {
        playHanoi(2, "A", "B", "C");
    }

    // move n disks from position "from" to "to" via "other"
    private static void playHanoi(int n, String from, String other, String to) {
        if (n == 0)
            return;
        if (n > 0)
            playHanoi(n - 1, from, to, other);
        System.out.printf("Move one disk from pole %s to pole %s\n", from, to);
        playHanoi(n - 1, other, from, to);
    }
}
Iterative (copied from RIT):
import java.io.*;
import java.lang.*;

public class HanoiIterative {
    // -------------------------------------------------------------------------
    // All integers needed for program calculations.
    public static int n;
    public static int numMoves;
    public static int second = 0;
    public static int third;
    public static int pos2;
    public static int pos3;
    public static int j;
    public static int i;

    public static void main(String args[]) {
        try {
            if (args.length == 1) {
                System.out.println();
                n = Integer.parseInt(args[0]);        // Sets n to command-line int
                int[] locations = new int[n + 2];     // Sets location size
                for (j = 0; j < n; j++) {             // For loop - initially all
                    locations[j] = 0;                 // discs are on tower 1
                }
                locations[n + 1] = 2;                 // Final disk destination
                numMoves = 1;
                for (i = 1; i <= n; i++) {            // Calculates minimum steps
                    numMoves *= 2;                    // based on disc size then
                }                                     // subtracts one. (standard
                numMoves -= 1;                        // algorithm 2^n - 1)
                // Begins iterative solution. Bound by min number of steps.
                for (i = 1; i <= numMoves; i++) {
                    if (i % 2 == 1) {                 // Determines odd or even.
                        second = locations[1];
                        locations[1] = (locations[1] + 1) % 3;
                        System.out.print("Move disc 1 to ");
                        System.out.println((char) ('A' + locations[1]));
                    }
                    else {                            // If number is even.
                        third = 3 - second - locations[1];
                        pos2 = n + 1;
                        for (j = n + 1; j >= 2; j--)  // Iterative vs Recursive.
                            if (locations[j] == second)
                                pos2 = j;
                        pos3 = n + 1;
                        for (j = n + 1; j >= 2; j--)  // Iterative vs Recursive.
                            if (locations[j] == third)
                                pos3 = j;
                        System.out.print("Move disc "); // Assumes something is moving.
                        // Iterative set. Much slower here than Recursive.
                        if (pos2 < pos3) {
                            System.out.print(pos2);
                            System.out.print(" to ");
                            System.out.println((char) ('A' + third));
                            locations[pos2] = third;
                        }
                        // Iterative set. Much slower here than Recursive.
                        else {
                            System.out.print(pos3);
                            System.out.print(" to ");
                            System.out.println((char) ('A' + second));
                            locations[pos3] = second;
                        }
                    }
                }
            }
        } // Protects Program Integrity.
        catch (Exception e) {
            System.err.println("YOU SUCK. ENTER A VALID INT VALUE FOR #");
            System.err.println("FORMAT : java HanoiIterative #");
        } // Protects Program Integrity.
        finally {
            System.out.println();
            System.out.println("CREATED BY: KEVIN SEITER");
            System.out.println();
        }
    }
} // HanoiIterative
I'm guessing you didn't really read that iterative one. I didn't either. It's much more complicated. You can change some stuff here and there, but ultimately it's always going to be complicated, and there is no way around it. While any recursive algorithm CAN be converted to iterative form, the result is sometimes much more complicated code-wise, and sometimes even significantly less efficient.
How would you search a directory full of sub directories that are themselves full of sub directories and so on (like JB Nizet stated, tree nodes) or calculate a Fibonacci sequence with less ease than using recursion?
All algorithms can be translated from recursive to iterative. Worst case scenario you can explicitly use a stack to keep track of your data (as opposed to the call stack). So if efficiency is really paramount and you know recursion is slowing you down significantly, it's always possible to fall back on the iterative version. Note that some languages have compilers that convert tail recursive methods to their iterative counterparts, e.g., Scala.
The advantage of recursive methods is that most of the time they are much easier to write and understand because they are so intuitive. It is good practice to understand and write recursive programs since many algorithms can be naturally expressed that way. Recursion is just a tool to write expressive and correct programs. And again, once you know your recursive code is correct, it's easier to convert it to its iterative counterpart.
Recursion is usually more elegant and intuitive than other methods. But it's not the universal solution to everything.
Take the Fibonacci sequence, for example. You can find the nth term recursively, straight from the definition of a Fibonacci number (plus the base cases). But you'll find yourself calculating the mth term (m < n - 2) more than once.
Instead, use an array [1, 1] and append each next term as the sum a[i-1] + a[i-2]. You'll find this linear algorithm a lot faster than the recursive one.
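A bottom-up sketch of that idea (mine; I start the array at a[0] = 0, a[1] = 1 rather than [1, 1], but the principle is the same: each term is computed exactly once, so it runs in O(n)):

```java
public class LinearFib {
    // Bottom-up Fibonacci: fill the table left to right, each term computed once.
    static long fib(int n) {
        if (n < 2) return n;
        long[] a = new long[n + 1];
        a[0] = 0;
        a[1] = 1;
        for (int i = 2; i <= n; i++)
            a[i] = a[i - 1] + a[i - 2];   // each term is the sum of the two before it
        return a[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}
```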
BUT you'll love recursion.
It's elegant and often powerful.
Imagine you want to traverse a tree to print something in order. It could be something like:
public void print() {
    // In Java, "this" can never be null inside an instance method,
    // so guard the children instead.
    if (left != null)
        left.print();
    System.out.println(value);
    if (right != null)
        right.print();
}
To do this with a while loop you need your own backtracking stack, because there are calls that are not in tail position (though one of them is). It won't be as easy to understand, and IMO it is technically still recursion, since goto plus an explicit stack is recursion.
If your trees are not too deep you won't blow the stack, and the program works. There is no need for premature optimization. I would even increase the JVM's stack size before rewriting this to manage its own stack.
In a future version of the runtime, even the JVM may get tail-call optimization, as proper runtimes should have. Then recursion in tail position won't grow the stack, making it no different from other control structures, so you can choose whichever has the clearest syntax.
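For comparison, here is one way (my own sketch) to do the in-order traversal with a while loop and an explicit backtracking stack, as described above; the inner while does the "go left" descent that the recursive call did implicitly:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeInorder {
    static class Node {
        final int value;
        final Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    // In-order traversal using an explicit stack in place of the call stack.
    static String inorder(Node root) {
        StringBuilder out = new StringBuilder();
        Deque<Node> stack = new ArrayDeque<>();
        Node cur = root;
        while (cur != null || !stack.isEmpty()) {
            while (cur != null) {        // walk left, remembering the path ourselves
                stack.push(cur);
                cur = cur.left;
            }
            cur = stack.pop();           // backtrack to the deepest unvisited node
            if (out.length() > 0) out.append(' ');
            out.append(cur.value);       // visit it
            cur = cur.right;             // then traverse its right subtree
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Node tree = new Node(2, new Node(1, null, null), new Node(3, null, null));
        System.out.println(inorder(tree)); // prints 1 2 3
    }
}
```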
My understanding is that standard iterative loops are more applicable when your data set is small, has minimal edge cases, and the conditions that determine how many times to iterate are simple.
Recursive functions, by contrast, are more useful for complex nested data structures, where you cannot intuitively or accurately estimate how many times you need to loop, because the number of iterations depends on a handful of conditions, some of which may not be mutually exclusive. Recursion also lets you deliberately shape the call stack and the path to a base case, which helps readability and debugging.
Recursion is recommended for prototype programming, for non-programmers or junior programmers. For more serious programming you should avoid recursion as much as you can. Please read the
NASA coding standard

HOW to port "compact" Python to "compact" Java?

A friend is doing an online Scala course and shared this.
# Write a recursive function that counts how many different ways you can make
# change for an amount, given a list of coin denominations. For example, there
# are 3 ways to give change for 4 if you have coins with denomination 1 and 2:
# 1+1+1+1, 1+1+2, 2+2.
If you are attending and still working on a solution, don't read this!
(disclaimer: even if my Python solution may be wrong, I don't want to influence your thinking if you are on the course, one way or the other! I guess it is the thinking that goes into it that yields learning, not just the "solving"...)
That aside...
I thought I'd have a go at it in Python as I don't have the Scala chops for it (I am not on the course myself, just interested in learning Python and Java and welcome "drills" to practice on).
Here's my solution, which I'd like to port to Java using as compact a notation as possible:
def show_change(money, coins, total, combo):
    if total == money:
        print combo, '=', money
        return 1
    if total > money or len(coins) == 0:
        return 0
    c = coins[0]
    return (show_change(money, coins, total + c, combo + [c]) +
            show_change(money, coins[1:], total, combo))

def make_change(money, coins):
    if money == 0 or len(coins) == 0:
        return 0
    return show_change(money, list(set(coins)), 0, [])

def main():
    print make_change(4, [2, 1])

if __name__ == '__main__':
    main()
Question
How compact can I make the above in Java, allowing the use of libraries external to the JDK if they help?
I tried doing the porting myself but it was getting very verbose and I thought the usual "there must be a better way of doing this"!
Here my attempt:
import java.util.ArrayList;
import java.util.List;

import com.google.common.collect.Lists;
import com.google.common.primitives.Ints;

public class MakeChange {

    static int makeChange(int money, int[] coins) {
        if (money == 0 || coins.length == 0) {
            return 0;
        }
        return showChange(money, Ints.asList(coins), 0, new ArrayList<Integer>());
    }

    static int showChange(int money, List<Integer> coins, int total,
            List<Integer> combo) {
        if (total == money) {
            System.out.printf("%s = %d%n", combo, total);
            return 1;
        }
        if (total > money || coins.isEmpty()) {
            return 0;
        }
        int c = coins.get(0);
        List<Integer> comboWithC = Lists.newArrayList(combo);
        comboWithC.add(c);
        return (showChange(money, coins, total + c, comboWithC)
                + showChange(money, coins.subList(1, coins.size()), total, combo));
    }

    public static void main(String[] args) {
        System.out.println(makeChange(4, new int[] { 1, 2 }));
    }
}
Specifically, what I dislike a lot is having to do the stuff below just to pass a copy of the list with an element appended to it:
List<Integer> comboWithC = Lists.newArrayList(combo);
comboWithC.add(c);
Please show me how compact and readable Java can be. I am still a beginner in both languages...
Really, almost everything you're doing here is directly convertible to Java, without much extra verbosity.
For example:
def make_change(money, coins):
    if money == 0 or len(coins) == 0: return 0
    return calculate_change(money, list(set(coins)), 0)
The obvious Java equivalent is:
public static int make_change(int money, int coins[]) {
    if (money == 0 || coins.length == 0) return 0;
    return calculate_change(money, coins, 0);
}
A few extra words here and there, an extra line because of the closing brace, and of course the explicit types… but beyond that, there's no big change.
Of course a more Pythonic (and Javariffic? what is the equivalent word?) version would be:
def make_change(money, coins):
    if money == 0 or len(coins) == 0:
        return 0
    return calculate_change(money, list(set(coins)), 0)
The obvious Java equivalent is:
public static int make_change(int money, int coins[]) {
    if (money == 0 || coins.length == 0) {
        return 0;
    }
    return calculate_change(money, coins, 0);
}
So, Java gets one extra closing brace plus a few chars of whitespace; still not a big deal.
Putting the whole thing inside a class, and turning main into a method, adds about 3 more lines. Initializing an explicit array variable instead of using [2, 1] as a literal is 1 more. And System.out.println is a few characters longer than print, and length is 3 characters longer than len, and each comment takes two characters // instead of one #. But I doubt any of that is what you're worried about.
Ultimately, there's a total of one line that's tricky:
return (calculate_change(money, coins, total + c, combo + [c]) +
        calculate_change(money, coins[1:], total, combo))
A Java int coins[] doesn't have any way to say "give me a new array with the tail of the current one". The easiest solution is to pass an extra start parameter, so:
public static int calculate_change(int money, int coins[], int start, int total) {
    if (total == money) {
        return 1;
    }
    if (total > money || coins.length == start) {
        return 0;
    }
    // Keep the same start when reusing the current coin; advance it to drop the coin.
    return calculate_change(money, coins, start, total + coins[start]) +
           calculate_change(money, coins, start + 1, total);
}
In fact, nearly everything can be trivially converted to C; you just need to pass yet another param for the length of the array, because you can't calculate it at runtime as in Java or Python.
The one line you're complaining about is an interesting point that's worth putting a bit more thought into. In Python, you've got (in effect):
comboWithC = combo + [c]
With Java's List, this is:
List<Integer> comboWithC = Lists.newArrayList(combo);
comboWithC.add(c);
This is more verbose. But that's intentional. Java List<> is not meant to be used this way. For small lists, copying everything around is no big deal, but for big lists, it can be a huge performance penalty. Python's list was designed around the assumption that most of the time, you're dealing with small lists, and copying them around is perfectly fine, so it should be trivial to write. Java's List was designed around the assumption that sometimes, you're dealing with huge lists, and copying them around is a very bad idea, so your code should make it clear that you really want to do that.
The ideal solution would be either to use an algorithm that doesn't need to copy lists around, or to find a data structure that was designed to be copied that way. For example, in Lisp or Haskell, the default list type is perfect for this kind of algorithm, and there are about 69105 recipes for "Lisp-style lists in Java" or "Java cons" that you should be able to find online. (Of course you could also just write a trivial wrapper around List that adds an "addToCopy" method like Python's __add__, but that's probably not the right answer; you want to write idiomatic Java, or why use Java instead of one of the many other JVM languages?)
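For a flavor of what such a "Java cons" might look like, here is a minimal immutable list sketch (entirely my own; the Cons name and prepend method are assumptions, not a library API). Note it prepends at the head in O(1), whereas Python's combo + [c] appends at the end; for counting coin combinations the order inside combo doesn't matter:

```java
public class Cons {
    final int head;
    final Cons tail;   // null marks the empty list

    Cons(int head, Cons tail) { this.head = head; this.tail = tail; }

    // "Copy with an element added" is O(1): nothing is copied,
    // because the existing list is shared and never mutated.
    static Cons prepend(int value, Cons list) { return new Cons(value, list); }

    @Override public String toString() {
        return head + (tail == null ? "" : ", " + tail);
    }

    public static void main(String[] args) {
        Cons combo = prepend(2, prepend(1, null));   // [2, 1]
        Cons comboWithC = prepend(3, combo);         // [3, 2, 1]; combo is unchanged
        System.out.println(combo);                   // prints 2, 1
        System.out.println(comboWithC);              // prints 3, 2, 1
    }
}
```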

dynamic programming - what's the asymptotic runtime?

I'm teaching myself dynamic programming. It's almost magical. But seriously. Anyway, the problem I worked on was: Given a staircase of N steps and a child who can take 1, 2, or 3 steps at a time, how many different ways can the child reach the top step? The problem wasn't too hard; my implementation is below.
import java.util.HashMap;

public class ChildSteps {
    private HashMap<Integer, Integer> waysToStep;

    public ChildSteps() {
        waysToStep = new HashMap<Integer, Integer>();
    }

    public int getNthStep(int n) {
        if (n < 0) return 0; // 0 ways to get to a negative step

        // Base case
        if (n == 0) return 1;

        // If not yet memoized
        if (!waysToStep.containsKey(n)) {
            waysToStep.put(n, getNthStep(n - 3) + getNthStep(n - 2) + getNthStep(n - 1));
        }
        return waysToStep.get(n);
    }
}
However, now I want to get the runtime. How should I figure this out? I am familiar (and not much more) with Akra-Bazzi and Master Theorem. Do those apply here?
http://en.wikipedia.org/wiki/Master_theorem
Here it would seem that it could be: T(N) = 3 * T(???) + O(1) but I'm really not sure.
thanks guys.
Thanks to memoization, each value from 1 to N is computed exactly once, and each computation does a constant amount of work: HashMap's containsKey, get, and put are all O(1) on average. So:
T(N) = N * O(1) = O(N)
Note that the Master theorem and Akra-Bazzi don't really apply here: they are for divide-and-conquer recurrences where subproblems are fractions of the input, whereas memoization turns this into a linear chain of N subproblems.
You're really overthinking this, though; you don't need a full algorithm analysis for this. Good quote for you: "Premature optimization is the root of all evil."
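One concrete way to see the linear bound is to count invocations: each n from 1 to N is computed exactly once, and each computation makes exactly three recursive calls, so the total is 3N + 1 calls. A sketch mirroring the poster's code with a call counter added (the counter is my addition):

```java
import java.util.HashMap;
import java.util.Map;

public class StepCallCounter {
    static Map<Integer, Integer> memo = new HashMap<>();
    static int calls;   // total number of invocations of ways()

    static int ways(int n) {
        calls++;
        if (n < 0) return 0;   // no ways to reach a negative step
        if (n == 0) return 1;  // one way: take no steps
        if (!memo.containsKey(n))
            memo.put(n, ways(n - 3) + ways(n - 2) + ways(n - 1));
        return memo.get(n);
    }

    public static void main(String[] args) {
        int n = 20;
        System.out.println(ways(n) + " ways in " + calls + " calls");
        // 20 steps: 121415 ways, 3*20 + 1 = 61 calls
    }
}
```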
