So I was wondering if any of you can give me tips on this. I've been doing some challenges, like the classic one of writing a method that calculates the nth number of a Fibonacci sequence using a single recursive call (i.e., avoiding return fibo(n-1) + fibo(n-2);).
I really scratched my head on that one and ended up looking at the solution, which made use of a helper method:
public static int fibonacci(int n) {
    if (n < 2) {
        return n;
    }
    return fibonacci_helper(n, 1, 0);
}

public static int fibonacci_helper(int n, int previous, int current) {
    if (n < 1) {
        return current;
    }
    return fibonacci_helper(n - 1, current, previous + current);
}
I'm not really sure what approach one takes to solve questions like that quickly (without first solving it iteratively and then translating that into a tail recursion, which takes a lot of time).
Would really appreciate some tips, thanks in advance.
You need to first decide whether the question needs a recursive solution. Typically, recursion is needed when the present solution depends on some previous (already calculated) solution.
To start with, check small inputs (call them corner/base cases). Then build on them manually, by dry-running on small inputs. Once you have done this, you can in most cases figure out the recurrence relation (as here, with Fibonacci). Test its validity, and then, using the base cases and the recurrence relation, write the recursion.
For example, the following code searches for a node with a particular value in a binary tree (see https://en.wikipedia.org/wiki/Binary_tree if you don't know what a binary tree is):
boolean search(Node root, int val) {
    if (root == null)       // base case 1: empty subtree
        return false;
    if (root.value == val)  // base case 2: found the value
        return true;
    // recurse into the left and right subtrees, looking for the value
    return search(root.left, val) || search(root.right, val);
}
Play with it on paper, and try to discover hidden computations that are redone needlessly. Then try to avoid them.
Here you have f(n) = f(n-1) + f(n-2); obviously f(n-1) = f(n-2) + f(n-3) redoes f(n-2) needlessly, and so on. What if you could do the two at once?
Have f2(n) return two values, for n and for (n-1); then you do (in pseudocode)
f(n) = let { (a,b) := f2(n-1) } in (a+b)
Now you have two functions, neither yet defined; what good does that do? Turn this f into f2 as well, so it returns two values, not one, just as we expect it to:
f2(n) = let { (a,b) := f2(n-1) } in (a+b,a)
And voila, a recursive definition where a is reused.
All that's left is to add some corner/edge/base case(s), and check for the off-by-1 errors.
Or even better, reverse the time arrow, start from the base case, and get your iterative version for free.
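The pair idea above can be sketched in Java, with a two-element array standing in for the tuple (a sketch under the definitions above; the names fib2/fib are illustrative, not canonical):

```java
public class FibPair {
    // Returns {f(n), f(n-1)}: a single recursive call computes both values.
    static long[] fib2(int n) {
        if (n == 1) {
            return new long[] {1, 0}; // base case: f(1) = 1, f(0) = 0
        }
        long[] p = fib2(n - 1);       // p = {f(n-1), f(n-2)}
        return new long[] {p[0] + p[1], p[0]}; // reuse p[0] instead of recomputing it
    }

    static long fib(int n) {
        if (n == 0) return 0;         // edge case not covered by fib2
        return fib2(n)[0];
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 10; i++) {
            System.out.print(fib(i) + " ");
        }
        System.out.println();
    }
}
```

Note how only one recursive call appears, exactly as the challenge demanded.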
Recursion is a tool which is there to help us, to make problem solving easier.
The area you're thinking of is called Dynamic Programming. The way it works is that the solution to the larger problem you're trying to solve is composed of solutions to smaller problems, and the time complexity can be reduced dramatically if you keep those solutions and reuse them, instead of calculating them multiple times. The general approach to take is to consider how the problem can be broken down, and which solutions to the smaller problems you'll need to remember in order to solve it. In this case, you could do it in linear time and linear space by keeping all the results in an array, which should be pretty easy to think of if you're looking for a DP solution. Of course that can be simplified because you don't need to keep all those numbers, but that's a separate problem.
Typically, DP solutions will be iterative rather than recursive, because you need to keep a large number of solutions available to calculate the next larger one. To change it to use recursion, you just need to figure out which solutions you need to pass on, and include those as the parameters.
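The linear-time, linear-space array version described above can be sketched like this (a sketch; names are illustrative):

```java
public class FibDP {
    // Bottom-up DP: build each solution from the stored smaller ones.
    static long fib(int n) {
        if (n < 2) return n;
        long[] memo = new long[n + 1]; // memo[i] holds fib(i)
        memo[0] = 0;
        memo[1] = 1;
        for (int i = 2; i <= n; i++) {
            memo[i] = memo[i - 1] + memo[i - 2]; // reuse, don't recompute
        }
        return memo[n];
    }
}
```

Keeping only the last two entries instead of the whole array is the simplification mentioned above.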
Contextualisation
I am implementing a bytecode instrumenter using the Soot framework in a testing context, and I want to know which design is better.
I am building the TraceMethod object for every Method in a Class that I am instrumenting and I want to run this instrumenter on multiple Classes.
Which option offers better performance (space and time)?
Option 1: (Maps)
public class TraceMethod {
    boolean[] decisionNodeList;
    boolean[] targetList;
    Map<Integer, List<Integer>> dependenciesMap;
    Map<Integer, List<Double>> decisionNodeBranchDistance;
}
Option 2: (Objects)
public class TraceMethod {
    ArrayList<Target> targets = new ArrayList<Target>();
    ArrayList<DecisionNode> decisionNodes = new ArrayList<DecisionNode>();
}

public class DecisionNode {
    int id;
    Double branchDistance;
    boolean reached;
}

public class Target {
    int id;
    boolean reached;
    List<DecisionNode> dependencies;
}
I have implemented option 2 myself, but my boss suggested option 1, arguing that it is "lighter". I saw in the article "Class Object vs Hashmap" that HashMaps use more memory than objects, but I'm still not convinced that my solution (option 2) is better.
It's a small detail, but I want to be sure I am using the optimal solution; my concern is performance (space and time). I know the second option is way better in terms of maintainability, but I can sacrifice that if it's not optimal.
In general you should always go for maintainability, not for supposed performance. There are a few good reasons for this:
We tend to be fascinated by the speed difference between an array and a HashMap, but in a real enterprise application these differences are not big enough to make a visible difference in application speed.
The most common bottlenecks in an application are in either the database or the network.
The JVM optimizes code to some extent.
It is very unlikely that your application will have performance issues due to maintainable code. The more likely outcome is that your boss runs out of money once you have millions of lines of unmaintainable code.
Approach 1 has the potential to be much faster and uses less space.
Especially for a bytecode instrumenter, I would first implement approach 1.
Then, when it works, replace both Lists with non-generic lists that use primitive types instead of Integer and Double objects.
Note that an int needs 4 bytes, while an Integer (an object) needs 16-20 bytes, depending on the machine (16 on PC, 20 on Android).
The List can be replaced with a GrowingIntArray (I found that in a statistics package of Apache, if I remember correctly), which uses primitive ints. (Or it may just be replaced by an int[] once you know the content cannot change anymore.)
Then you just write your own GrowingDoubleArray (or use double[]).
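A minimal sketch of what such a primitive-backed growing array might look like (the actual Apache class may differ; this is only illustrative):

```java
import java.util.Arrays;

public class GrowingIntArray {
    private int[] data = new int[16]; // primitive backing store, no boxing
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            // double the backing array when full: amortized O(1) appends
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = value;
    }

    public int get(int index) {
        return data[index];
    }

    public int size() {
        return size;
    }
}
```

A GrowingDoubleArray would be identical with double[] in place of int[].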
Remember: Collections are handy but slower.
Boxed objects use about 4 times more space than primitives.
A bytecode instrumenter needs performance; it is not software that runs once a week.
Finally, I would not replace the Maps with non-generic ones; that seems like too much work to me. But you may try it as a last step.
As a final optimization step: look at how many elements are in your lists or maps. If there are usually fewer than 16 (you have to try that out), you may switch to a linear search, which is the fastest for a very low number of elements.
You can even make your code switch search algorithms once the number of elements exceeds a specific threshold. (Sun/Oracle Java does this, and Apple/iOS too, in some of their collections.)
However, this last step will make your code much more complex.
Space as an example:
DecisionNode: 16 bytes for the object itself + 4 (id) + 20 (Double) + 4 (boolean) = 44, plus 4 bytes of padding to the next multiple of 8 = 48 bytes.
Objects First with Java
A Practical Introduction using BlueJ
Working through this book, I do not understand what this exercise is asking me to do.
The exercise is:
Exercise 4.51 Rewrite getLot so that it does not rely on a lot with a particular number being stored at index (number-1) in the collection. For instance, if lot number 2 has been removed, then lot number 3 will have been moved from index 2 to index 1, and all higher lot numbers will also have been moved by one index position. You may assume that lots are always stored in increasing order according to their lot numbers.
/**
 * Return the lot with the given number. Return null if a lot with this
 * number does not exist.
 *
 * @param lotNumber The number of the lot to return.
 */
public Lot getLot(int lotNumber) {
    if ((lotNumber >= 1) && (lotNumber < nextLotNumber)) {
        // The number seems to be reasonable.
        Lot selectedLot = lots.get(lotNumber - 1);
        // Include a confidence check to be sure we have the
        // right lot.
        if (selectedLot.getNumber() != lotNumber) {
            System.out.println("Internal error: Lot number "
                               + selectedLot.getNumber()
                               + " was returned instead of "
                               + lotNumber);
            // Don't return an invalid lot.
            selectedLot = null;
        }
        return selectedLot;
    } else {
        System.out.println("Lot number: " + lotNumber
                           + " does not exist.");
        return null;
    }
}
A hint in the right direction with pseudocode would be fine.
I am really confused about what the exercise is asking me to do.
I will be upfront about this: this is for a class, and the teacher is really just handing us the book with very little guidance. So I am not looking for someone to write my homework; I just want some help. Please don't flame me for asking. This is a place to ask questions about coding, no? Thanks in advance.
The algorithm of the given method relies on lot lotNumber being stored in index lotNumber-1. It just looks it up by index and verifies it has found the correct one.
The exercise is to give up this assumption. Lot number and index are no longer this closely related. So you cannot just calculate the index, you have to search for the lot.
The simplest possible approach is to look at each lot in your collection and return it once you found a matching lot number. You can use an iterator, explicitly or implicitly ("foreach"), for this. If your course hasn't covered iterators yet, you can also use a for loop to count through all existing indexes of your collection.
But the exercise specifies that the lots are still stored in order. This allows you to modify the simple approach to give up once you find a lot number higher than the one you're looking for.
The optimal approach would be using a search algorithm for sorted lists, such as binary search.
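A binary-search version of getLot over the sorted lot list might look like this (a sketch, not the book's solution; the Lot and Auction types below are minimal stand-ins for the book's classes):

```java
import java.util.ArrayList;
import java.util.List;

class Lot {
    private final int number;
    Lot(int number) { this.number = number; }
    int getNumber() { return number; }
}

public class Auction {
    private final List<Lot> lots = new ArrayList<>();

    void add(Lot lot) { lots.add(lot); } // lots are added in increasing number order

    // Binary search: relies only on the lots being sorted by number,
    // not on lot number n sitting at index n-1.
    Lot getLot(int lotNumber) {
        int low = 0;
        int high = lots.size() - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            Lot candidate = lots.get(mid);
            if (candidate.getNumber() == lotNumber) {
                return candidate;     // found it
            } else if (candidate.getNumber() < lotNumber) {
                low = mid + 1;        // look in the upper half
            } else {
                high = mid - 1;       // look in the lower half
            }
        }
        return null;                  // no lot with this number exists
    }
}
```

The simple linear scan is fine for the exercise; binary search just shows the optimal approach.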
In the code you provided, there is a big assumption: that lot number i is stored in the array at position i-1. Now what if we don't assume that? Then we have no idea where lot i might be in the array, so the only solution is to go through the array looking for lot number i, and hopefully we'll find it.
I am making a Connect 4 app for Android, and right now I am using a minimax algorithm coupled with alpha-beta pruning and a heuristic evaluation function for leaf nodes. I also ordered the moves to further maximize the pruning. Unfortunately, with that strategy the algorithm takes too much time at depth 7, leading me to abandon it in favor of using a transposition table.
Now, I've read information about transposition tables and have got a general idea of how they work, but I am unsure how to proceed with the actual implementation in code. I am not a Java expert so I need any help you can give me.
In my game I am using an int[42] array for the board positions. I thought of using a hash map storing some kind of data-structure objects, where each of these objects would include the board position (the array) and an int "score" variable (which would be the score given to this position by the evaluation function). But that means that every time I want to put a new board position in the table, I need to perform some kind of check to see whether this position already exists(?). And only if not, insert it into the table?
I will be glad for any technical help you guys can give me on this subject. I can put some code examples if needed, but this is a general question and I don't think they are really necessary at this point.
Thanks in advance.
You can use a technique from chess transposition tables: Zobrist hashing. Basically instead of storing the entire board, you compute a long that serves as a hash key for the position, and store that along with the relevant data. It has the additional benefit of being able to be incrementally updated. Instead of generating the key from scratch when making moves, you can update the key with a single bitwise XOR operation (very fast).
Basically, generate some random numbers for each square (slot?). You need one for each side. I assume that black = 0 and red = 1 for easy indexing. Initialization looks like
long[][] zobrist = new long[42][2];
for (int square = 0; square < zobrist.length; square++)
    for (int side = 0; side < zobrist[square].length; side++)
        zobrist[square][side] = RandomLong();
You will need to find a PRNG that generates a random long for RandomLong(). Make sure it has good randomness when looking at the bits. I recommend against using LCGs.
To compute the hash key for a position from scratch, you just need to XOR together all the zobrist values.
long computeKey(int[] board) {
    long hashKey = 0;
    for (int square = 0; square < board.length; square++) {
        if (hasPiece(board[square])) {
            int side = getColour(board[square]);
            hashKey ^= zobrist[square][side];
        }
    }
    return hashKey;
}
To incrementally update, just XOR the effect of the move. This is when you want to make a move and just update the key.
long updateKey(long oldKey, int moveSquare, int moveSide) {
    return oldKey ^ zobrist[moveSquare][moveSide];
}
To unmake the move and get the old key, the above function works too! XOR is its own inverse, so applying it twice gets you back your original key.
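As for the "check if the position already exists" part of the question: with Zobrist keys, the transposition table itself can just be a HashMap keyed by the long, so the existence check is a single lookup. A minimal sketch (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class TranspositionTable {
    // Maps a Zobrist key to the evaluation score stored for that position.
    private final Map<Long, Integer> table = new HashMap<>();

    // Store a score for this position (overwrites any older entry).
    public void put(long zobristKey, int score) {
        table.put(zobristKey, score);
    }

    // Returns the cached score, or null if this position has not been seen.
    public Integer lookup(long zobristKey) {
        return table.get(zobristKey);
    }
}
```

In minimax you would call lookup() before evaluating a node, and put() after, so repeated positions are scored only once.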
I'm trying to solve a problem that calls for recursive backtracking, and my solution produces a StackOverflowError. I understand that this error often indicates a bad termination condition, but my termination condition appears correct. Is there anything other than a bad termination condition that would be likely to cause a stack overflow? How can I figure out what the problem is?
EDIT: sorry, I tried to post the code but it's too ugly...
As @irreputable says, even if your code has a correct termination condition, it could be that the problem is simply too big for the stack (so that the stack is exhausted before the condition is reached). There is also a third possibility: your recursion has entered a loop. For example, in a depth-first search through a graph, if you forget to mark nodes as visited, you'll end up going in circles, revisiting nodes that you have already seen.
How can you determine which of these three situations you are in? Try to find a way to describe the "location" of each recursive call (this will typically involve the function parameters). For instance, if you are writing a graph algorithm where a function calls itself on neighbouring nodes, then the node name or node index is a good description of where the recursive function is. At the top of the recursive function, you can print the description; then you'll see what the function does, and perhaps you can tell whether it does the right thing or not, or whether it goes in circles. You can also store the descriptions in a HashMap in order to detect whether you have entered a circle.
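For the graph case, the visited-node bookkeeping that prevents circles can be as simple as a set threaded through the recursion. A small illustrative sketch (the int node ids and adjacency-map shape are assumptions, not from the question):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Dfs {
    // graph: node -> list of neighbouring nodes
    static void dfs(int node, Map<Integer, List<Integer>> graph, Set<Integer> visited) {
        if (!visited.add(node)) {
            return; // already seen this node: stop instead of looping forever
        }
        System.out.println("visiting " + node); // the "location" of this call
        for (int next : graph.getOrDefault(node, List.of())) {
            dfs(next, graph, visited);
        }
    }
}
```

Without the visited check, a graph containing any cycle would recurse forever and overflow the stack.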
Instead of using recursion, you could always have a loop which uses a stack. E.g. instead of (pseudo-code):
function sum(n){
if n == 0, return 0
return n + sum(n-1)
}
Use:
function sum(n){
Stack stack
while(n > 0){
stack.push(n)
n--
}
localSum = 0
while(stack not empty){
localSum += stack.pop()
}
return localSum
}
In a nutshell, simulate recursion by saving the state in a local stack.
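In Java, the same simulation might use an ArrayDeque as the explicit stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeSum {
    // Iterative version of sum(n) = n + sum(n-1), sum(0) = 0.
    static int sum(int n) {
        Deque<Integer> stack = new ArrayDeque<>();
        while (n > 0) {
            stack.push(n); // save the state the recursion would keep on the call stack
            n--;
        }
        int localSum = 0;
        while (!stack.isEmpty()) {
            localSum += stack.pop(); // "unwind" the saved states
        }
        return localSum;
    }
}
```

The heap-allocated deque can grow far larger than the call stack, so deep "recursions" no longer overflow.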
You can use the -Xss option to give your stack more memory if your problem is too large to fit in the default stack size limit.
As the others already mentioned, there might be a few reasons for that:
Your code has a problem in the nature or logic of the recursion. There has to be a stopping condition, base case, or termination point for any recursive function.
Your stack is too small to hold the number of recursive calls. Large Fibonacci numbers might be a good example here. Just FYI, Fibonacci is as follows (sometimes it starts at zero):
1, 1, 2, 3, 5, 8, 13, ...
F(n) = F(n-1) + F(n-2)
F(0) = 1, F(1) = 1, for n >= 2
If your code is correct, then the stack is simply too small for your problem. We don't have real Turing machines.
There are two common coding errors that could cause your program to get into an infinite loop (and therefore cause a stack overflow):
Bad termination condition
Bad recursion call
Example:
public static int factorial( int n ){
    if( n < n ) // Bad termination condition
        return 1;
    else
        return n*factorial(n+1); // Bad recursion call
}
Otherwise, your program could just be functioning properly and the stack is too small.
I saw this question, but the answers there are not very relevant.
A friend needs a bank of solved recursion problems to help him study for a test tomorrow.
He learned the material theoretically, but is having problems grasping how to actually solve recursion problems. Do you know a good source of solved recursion problems (preferably in C, but a C-style language is fine as well) available on the net?
Note: examples in functional languages will not help much here. My friend is in a race to pass his test tomorrow, and I'm sure switching languages would just confuse him at this point (it might be educational at other, less stressful times).
One of the best ways to learn recursion is to get some experience in a functional programming language such as Haskell or Lisp or Scheme.
So finding recursion problems can be reduced to finding problems and answers related to functional programming languages. Here's an example: 99 Lisp problems.
It really only takes 5 minutes to learn Scheme or Lisp so you can get started with examples right away for the test tomorrow you mentioned.
Another great way to learn recursion is to get some practice in mathematical proofs involving induction.
Key concepts relating to recursion:
With recursion you don't need to know how to solve the problem. You just need to know 2 things. 1) how to solve the smallest instance of the problem, and 2) how to break it up into smaller parts.
Equivalently, you just have to keep in mind that you need: 1) a base case and 2) a recursive case.
The base case handles 1 single instance of what you want to do with smallest input.
The recursive case breaks the problem into a subproblem. Eventually this subproblem will reduce to the base case.
Example:
// 1 + ... + n = n*(n+1)/2 = sumAll(n)
int sumAll(int x)
{
    if (x == 0) // base case
        return 0;
    else
        return sumAll(x-1) + x; // recursive case
}
It is important to understand that the base case is not hard to figure out; it just has to exist. Here is an equivalent solution for x > 0:
// 1 + ... + n = n*(n+1)/2 = sumAll(n)
int sumAll(int x)
{
    if (x == 1) // base case
        return 1;
    else
        return sumAll(x-1) + x; // recursive case
}
This article explains recursion and has some simple C examples for traversing a linked list and a binary tree.
This is going to sound like a very lame answer, but recursion is a paradigm that is often very hard for beginners to grasp at first. It will take more than a day's meditation on the subject for your friend to firmly grasp the concept.
You may want to have him peruse Project Euler for a potential direction to study.
I think Haskell's syntax is great for thinking recursively, because the pattern matching construct makes the base case and the recursive case so obvious. Translating this into another language is then fairly straightforward.
sumAll [] = 0
sumAll (x:xs) = x + sumAll xs
To understand this, you really only need to know that
[] represents an empty list,
(x:xs) splits a list into a head (x) and a tail (xs)
You don't need to learn all of Haskell (which is, let's face it, hard) - but doing some of the basics certainly helps you think in recursion.
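For instance, translated into Java over a hand-rolled linked list (the Node type here is illustrative), the same base-case/recursive-case split looks like:

```java
public class SumAll {
    // Minimal singly linked list of ints, standing in for Haskell's list.
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    static int sumAll(Node list) {
        if (list == null) {
            return 0;                          // the [] case: empty list
        }
        return list.value + sumAll(list.next); // the (x:xs) case: head + sum of tail
    }
}
```

The two equations of the Haskell version map one-to-one onto the if and the return.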
In C/C++ a function can call itself; this case is called recursion. Recursion mainly has two cases:
Base case.
Recursive case.
And we have some recursive categories, such as:
Linear recursion
Binary recursion
Nested recursion
Mutual recursion
Tail recursion
Here is an example to discuss recursion:
/* a recursive program to calculate nCr */
#include <stdio.h>
#include <stdlib.h>

int ncr(int x, int y)
{
    if (y > x)
    {
        printf("Sorry, the operation can't be processed.\n");
        exit(1);
    }
    else if (y == 0 || y == x) /* base case */
    {
        return 1;
    }
    else
    {
        /* recursive case: Pascal's rule C(x,y) = C(x-1,y-1) + C(x-1,y) */
        return ncr(x-1, y-1) + ncr(x-1, y);
    }
}
Read SICP (Structure and Interpretation of Computer Programs).
#include <iostream>
using namespace std;

int E(int x);

int main()
{
    int x;
    cin >> x; // read the number whose digits we sum
    cout << E(x) << endl;
    return 0;
}

// recursively sums the decimal digits of x
int E(int x)
{
    return x ? (x % 10 + E(x / 10)) : 0;
}