I am a beginner. I already learned C, but now Java seems difficult to me. In C my approach was simple, but when I looked at my book's programs for a simple task such as computing a factorial, it gives a fairly complex program like the one below:
class Factorial {
    // this is a recursive method
    int fact(int n) {
        int result;
        if (n == 1) return 1;
        result = fact(n - 1) * n;
        return result;
    }
}
class Recursion {
    public static void main(String args[]) {
        Factorial f = new Factorial();
        System.out.println("Factorial of 3 is " + f.fact(3));
        System.out.println("Factorial of 4 is " + f.fact(4));
        System.out.println("Factorial of 5 is " + f.fact(5));
    }
}
Instead, when I wrote my own program (given below), keeping it simple, it also worked and was easy. Can anyone tell me what the difference between the two is?
public class Simplefacto {
    public static void main(String[] args) {
        int n = 7;
        int result = 1;
        for (int i = 1; i <= n; i++) {
            result = result * i;
        }
        System.out.println("The factorial of 7 is " + result);
    }
}
Also, can anyone tell me what Java EE and Java SE are?
The first approach is recursion, which is not always fast or easy (and, if you are not careful, can lead to a StackOverflowError). The second approach is a normal for loop. Interestingly, both approaches are valid in C as well.
I think you should not compare Java programs with C programs. Both languages were designed for different reasons.
There are two main differences between those programs:
Program 1 uses recursion; Program 2 uses the iterative (imperative) approach.
Program 1 uses a class in which all the program logic is encapsulated; Program 2 has all the logic, "like the good old C programs", in one method.
The first method is recursive: the method calls itself. The idea is that recursion (when used appropriately) can yield extremely clean code, much like your factorial method. Formatted correctly, it should look more like:
private int factorial(int n) {
    if (n == 1) return n;
    return factorial(n - 1) * n;  // the method calls itself
}
So that's a factorial calculator in two lines, which is extremely clean and short. The problem is that you can run into trouble for large values of n, namely the infamous StackOverflowError.
The second method is what is known as iterative. Iterative methods usually involve some form of loop and are the alternative to recursion. The advantage is that they make for quite readable and easy-to-follow code, even if it is somewhat more verbose. This code is more robust and won't fall over for large values of n, unless n! > Integer.MAX_VALUE.
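If you do need factorials past Integer.MAX_VALUE, one option (my addition, not part of the original answer) is to keep the same iterative structure but use java.math.BigInteger for arbitrary-precision arithmetic. A minimal sketch:

```java
import java.math.BigInteger;

public class BigFactorial {
    // Iterative factorial using arbitrary-precision arithmetic,
    // so the result never overflows, no matter how large n is.
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        // 20! still fits in a long, but 21! already does not.
        System.out.println("20! = " + factorial(20));
        System.out.println("30! = " + factorial(30));
    }
}
```

The trade-off is speed: BigInteger arithmetic is much slower than int or long, so only reach for it when the values actually demand it.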
In the first case, you are adding a behavior that can be reused from multiple places, including main(), while in the second case you are writing inline code that is not reusable. The other difference is recursion vs. iteration: fact() is based on recursion, while the inline code in main() achieves the same thing using iteration.
As I said, I am working on Euler problem 12 (https://projecteuler.net/problem=12). I believe this program will give the correct answer, but it is too slow; I tried to wait it out, but even after 9 minutes it still couldn't finish. How can I modify it to run faster?
package highlydivisibletriangularnumber_ep12;

public class HighlyDivisibleTriangularNumber_EP12 {

    static long triangularValue = 0L;

    public static void findTriangular(int triangularNum) {
        triangularValue = triangularNum * (triangularNum + 1) / 2;
    }

    public static void main(String[] args) {
        long n = 1L;
        int counter = 0;
        int i = 1;
        while (true) {
            findTriangular(i);
            while (n <= triangularValue) {
                if (triangularValue % n == 0) {
                    counter++;
                }
                n++;
            }
            if (counter > 500) {
                break;
            } else {
                counter = 0;
            }
            n = 1;
            i++;
        }
        System.out.println(triangularValue);
    }
}
Just two simple tricks:
When x % n == 0, then also x % m == 0 with m = x/n. This way you only need to consider n <= Math.ceil(sqrt(x)): with each divisor smaller than the square root, you get another one for free. Beware of the case of equality (n * n == x), which must be counted only once. The speed gain is huge.
As your x is a product of the two numbers i and i+1 (with one of them halved), you can generate all its divisors as products of the divisors of those two factors. What makes it more complicated is that, in general, the same product can be created using different factors. Can that happen here? Do you need to generate the products, or can you just count them? Again, the speed gain is huge.
You could use prime factorization, but I'm sure, these tricks alone are sufficient.
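To make the tricks above concrete, here is a rough sketch (my own, with invented class and method names, not part of the original answer) that combines both ideas: counting divisors only up to sqrt(x), and splitting the triangular number i*(i+1)/2 into its two coprime factors so the divisor counts multiply:

```java
public class TriangularDivisors {
    // Counts divisors of x by checking only d <= sqrt(x): each divisor
    // d below the square root pairs with x/d, and a perfect square
    // contributes its root exactly once (the equality case).
    static int countDivisors(long x) {
        int count = 0;
        for (long d = 1; d * d <= x; d++) {
            if (x % d == 0) {
                count += (d * d == x) ? 1 : 2;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // The i-th triangular number is i*(i+1)/2. Since i and i+1 are
        // coprime, dividing the even one by 2 leaves two coprime factors,
        // and the divisor count of the product is the product of counts.
        for (long i = 1; ; i++) {
            long a = (i % 2 == 0) ? i / 2 : i;
            long b = (i % 2 == 0) ? i + 1 : (i + 1) / 2;
            if (countDivisors(a) * countDivisors(b) > 500) {
                System.out.println(i * (i + 1) / 2);
                break;
            }
        }
    }
}
```

This runs in well under a second, whereas the original program checks every candidate divisor from 1 up to the triangular value itself.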
It appears to me that your algorithm is a bit too brute-force, and because of this will consume an enormous amount of CPU time regardless of how you rearrange it.
What is needed is an algorithm that implements a formula that calculates at least part of the solution, instead of brute-forcing the whole thing.
If you get stuck, you can use your favorite search engine to find a number of solutions, with varying degrees of efficiency.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
The idea is to write the most elegant code you can. Theme of the section: loops.
Task: return the sum of the squares of the numbers from 1 to (n-1).
Example: 6 -> 55 (which is 1^2 + 2^2 + 3^2 + 4^2 + 5^2)
I chose Java as language and wrote this code:
public class Program {
    public static int Puzzle(int n) {
        int r = 0;
        for (--n; n >= 0; r += n * n--);
        return r;
    }
}
but the compiler says that my code is not elegant enough. Can you help?
Link: CodeHunt
You don't need to loop over that series. See http://en.wikipedia.org/wiki/Square_pyramidal_number
public class Program {
    public static int Puzzle(int n) {
        int x = n - 1;
        // sum of the first n-1 squares
        return x * (x + 1) * (2 * x + 1) / 6;
    }
}
Elegance may be a combination of simplicity and accuracy. The biggest issue with your method is that it isn't simple; it may produce the correct result, but it's needlessly complicated, with unusual iteration and a for loop that is run only for its side effects.
Why not go with the more direct approach instead?
public static int Puzzle(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {  // runs up to n-1, as the task requires
        sum += i * i;
    }
    return sum;
}
Several very inelegant aspects:
Inline increments/decrements: they make the code very confusing, because most people are not experts in when the variable will actually be decremented. What, for instance, happens with m = (n--)*(--n);? (The answer: m = n*(n-2); n -= 2;.) For some small, simple expressions inline decrements can make code more readable, but in nearly all cases there is no performance gain, as the compiler is smart enough to convert readable code to code with inline increments/decrements itself.
Loops with no body: most people simply get confused and think the next instruction is part of the body. Most IDEs even advise always using braces and writing something in the body.
Manipulation of parameters: this is confusing and makes code less extensible. Say you want to extend your code with some part below and you copy-paste it; since the parameters no longer have their original values, the pasted code will behave differently. IDEs mostly advise making at least a copy. Nearly every compiler can optimize the copy away if the parameter is not used any further.
Decrementing in a for loop: although this sometimes yields a small performance improvement, most programmers are used to for loops that increment.
Non-descriptive variable names (something a compiler cannot detect): it is recommended that you name your variables appropriately; use sum instead of r. The Java compiler sees names simply as identifiers, so at runtime there is no difference, but the code is more readable for other people, and for yourself when you revisit it months later.
These are all very bad ways to write an algorithm. Most books strongly suggest that, unless you really need to squeeze the absolute maximum out of your CPU, you should write nice, well-structured, readable code. And if that really is the case, there are more efficient languages than Java.
As a better version, I recommend the following code:
public class Program {
    public static int Puzzle(int n) {
        int sum = 0;
        for (int i = 1; i < n; i++) {
            sum += i * i;
        }
        return sum;
    }
}
Furthermore, you don't need a for loop at all to calculate this (as pointed out here):
public class Program {
    public static int Puzzle(int n) {
        return n * (n - 1) * (2 * n - 1) / 6;
    }
}
As I start learning recursion, different questions cross my mind. Recursion uses more memory for the stack, and it's usually slower due to maintaining the stack.
What is the benefit of using recursion if I can still use for loops? We describe actions to be repeated until a condition is met, and we can use either recursion or for loops.
Why would I choose the recursive version when I have faster control structures as an option?
Recursion uses more memory for the stack and is usually slower due to maintaining the stack
That statement is far from universally true. It applies in situations where you do not need to save state for more than a fixed number of levels, but that does not cover many important tasks that can be solved recursively. For example, to implement a depth-first search on a graph iteratively, you need to make your own data structure to store the state that would otherwise go on the stack.
What is the benefit of using recursion if I can still use a for loop?
You get more clarity when you apply a recursive algorithm to a task that is best understood through recursion, such as processing recursively-defined structures. In cases like that, a loop by itself is no longer sufficient: you need a data structure to go along with your loop.
Why would I choose the recursive version when I have faster control structures?
You wouldn't necessarily choose recursion when you could implement the same algorithm with faster control structures that are easy to understand. However, there are situations when you may want to code a recursion to improve readability, even though you know that you can code the algorithm using a loop with no additional data structures. Modern compilers can detect situations like that and "rewrite" your recursive code behind the scenes to make it use iteration. This lets you have the best of both worlds: a recursive program that matches the reader's expectations, and an iterative implementation that does not waste space on the stack.
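As a sketch of what such a rewrite looks like (my own illustration; note that javac itself does not currently perform this transformation, though some compilers for other languages do), here is a tail-recursive factorial next to the loop it can mechanically be turned into:

```java
public class TailFactorial {
    // Tail-recursive form: the recursive call is the very last action,
    // so no work is pending when it returns.
    static long factTail(long n, long acc) {
        if (n <= 1) return acc;
        return factTail(n - 1, acc * n);
    }

    // The loop a compiler could produce from the tail call above:
    // the accumulator becomes a local variable and the call becomes
    // a jump back to the top.
    static long factLoop(long n) {
        long acc = 1;
        while (n > 1) {
            acc *= n;
            n--;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(factTail(10, 1)); // 3628800
        System.out.println(factLoop(10));    // 3628800
    }
}
```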
Unfortunately, demonstrating examples of situations where recursion gives you clear advantages requires knowledge of advanced topics, so many educators take shortcuts by demonstrating recursion with poor examples, such as factorials and Fibonacci numbers. One relatively simple example is implementing a parser for an expression with parentheses. You can do it in many different ways, with or without recursion, but the recursive way of parsing expressions gives you an amazingly concise solution that is easy to understand.
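For a taste of that, here is a minimal recursive-descent evaluator sketch (my own illustration; the grammar and all names are invented) for expressions with +, * and parentheses. Each grammar rule becomes a method, and nested parentheses are handled simply by the methods calling each other:

```java
public class TinyParser {
    private final String s;
    private int pos = 0;

    TinyParser(String s) { this.s = s.replaceAll("\\s+", ""); }

    // expr := term ('+' term)*
    int expr() {
        int value = term();
        while (pos < s.length() && s.charAt(pos) == '+') {
            pos++;
            value += term();
        }
        return value;
    }

    // term := factor ('*' factor)*
    int term() {
        int value = factor();
        while (pos < s.length() && s.charAt(pos) == '*') {
            pos++;
            value *= factor();
        }
        return value;
    }

    // factor := number | '(' expr ')'
    int factor() {
        if (s.charAt(pos) == '(') {
            pos++;                 // consume '('
            int value = expr();    // the recursion mirrors the nesting
            pos++;                 // consume ')'
            return value;
        }
        int start = pos;
        while (pos < s.length() && Character.isDigit(s.charAt(pos))) pos++;
        return Integer.parseInt(s.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(new TinyParser("2*(3+4)").expr()); // prints 14
    }
}
```

An iterative version of this needs an explicit stack of partial results and operators, and is considerably harder to follow.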
A great example of a case where the recursive solution is better than the iterative one is the Tower of Hanoi. Consider the following two solutions.
Recursive (from this question):
public class Hanoi {
    public static void main(String[] args) {
        playHanoi(2, "A", "B", "C");
    }

    // move n disks from position "from" to "to" via "other"
    private static void playHanoi(int n, String from, String other, String to) {
        if (n == 0)
            return;
        playHanoi(n - 1, from, to, other);
        System.out.printf("Move one disk from pole %s to pole %s%n", from, to);
        playHanoi(n - 1, other, from, to);
    }
}
Iterative (copied from RIT):
import java.io.*;

public class HanoiIterative {
    // All integers needed for program calculations.
    public static int n;
    public static int numMoves;
    public static int second = 0;
    public static int third;
    public static int pos2;
    public static int pos3;
    public static int j;
    public static int i;

    public static void main(String args[]) {
        try {
            if (args.length == 1) {
                System.out.println();
                n = Integer.parseInt(args[0]);        // Sets n to command-line int
                int[] locations = new int[n + 2];     // Sets location size
                for (j = 0; j < n; j++) {             // Initially all
                    locations[j] = 0;                 // discs are on tower 1
                }
                locations[n + 1] = 2;                 // Final disk destination
                numMoves = 1;
                for (i = 1; i <= n; i++) {            // Calculates minimum steps
                    numMoves *= 2;                    // based on disc count, then
                }                                     // subtracts one (standard
                numMoves -= 1;                        // algorithm: 2^n - 1)
                // Begins iterative solution. Bound by min number of steps.
                for (i = 1; i <= numMoves; i++) {
                    if (i % 2 == 1) {                 // Determines odd or even.
                        second = locations[1];
                        locations[1] = (locations[1] + 1) % 3;
                        System.out.print("Move disc 1 to ");
                        System.out.println((char) ('A' + locations[1]));
                    } else {                          // If number is even.
                        third = 3 - second - locations[1];
                        pos2 = n + 1;
                        for (j = n + 1; j >= 2; j--)  // Iterative vs recursive.
                            if (locations[j] == second)
                                pos2 = j;
                        pos3 = n + 1;
                        for (j = n + 1; j >= 2; j--)  // Iterative vs recursive.
                            if (locations[j] == third)
                                pos3 = j;
                        System.out.print("Move disc "); // Assumes something is moving.
                        if (pos2 < pos3) {
                            System.out.print(pos2);
                            System.out.print(" to ");
                            System.out.println((char) ('A' + third));
                            locations[pos2] = third;
                        } else {
                            System.out.print(pos3);
                            System.out.print(" to ");
                            System.out.println((char) ('A' + second));
                            locations[pos3] = second;
                        }
                    }
                }
            }
        } catch (Exception e) {                       // Protects program integrity.
            System.err.println("YOU SUCK. ENTER A VALID INT VALUE FOR #");
            System.err.println("FORMAT : java HanoiIterative #");
        } finally {
            System.out.println();
            System.out.println("CREATED BY: KEVIN SEITER");
            System.out.println();
        }
    }
} // HanoiIterative
I'm guessing you didn't really read that iterative one. I didn't either. It's much more complicated. You can change some stuff here and there, but ultimately it is always going to be complicated; there is no way around it. While any recursive algorithm CAN be converted to iterative form, the result is sometimes much more complicated code-wise, and sometimes even significantly less efficient.
How would you search a directory full of subdirectories that are themselves full of subdirectories, and so on (as JB Nizet stated, tree nodes), or calculate a Fibonacci sequence, with less ease than by using recursion?
All recursive algorithms can be translated to iterative ones. In the worst case you can explicitly use a stack to keep track of your data (as opposed to the call stack). So if efficiency is really paramount and you know recursion is slowing you down significantly, it is always possible to fall back on the iterative version. Note that some languages have compilers that convert tail-recursive methods to their iterative counterparts, e.g. Scala.
The advantage of recursive methods is that most of the time they are much easier to write and understand because they are so intuitive. It is good practice to understand and write recursive programs since many algorithms can be naturally expressed that way. Recursion is just a tool to write expressive and correct programs. And again, once you know your recursive code is correct, it's easier to convert it to its iterative counterpart.
Recursion is usually more elegant and intuitive than other methods. But it's not the universal solution to everything.
Take the Fibonacci sequence, for example. You can find the nth term recursively, using the definition of a Fibonacci number (with the base cases for the smallest n). But you'll find yourself calculating the mth term (m < n - 2) more than once.
Use an array [1, 1] instead and append each next term as the sum a[i-1] + a[i-2]. You'll find the linear algorithm a lot faster than the other.
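A sketch of the two approaches side by side (my own illustration, not from the original answer):

```java
public class Fib {
    // Naive recursion recomputes the same terms over and over:
    // exponential time in n.
    static long fibRecursive(int n) {
        if (n <= 2) return 1;
        return fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // Iterative version: each term is computed exactly once,
    // so the running time is linear in n.
    static long fibIterative(int n) {
        long a = 1, b = 1;              // fib(1), fib(2)
        for (int i = 3; i <= n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return b;
    }

    public static void main(String[] args) {
        System.out.println(fibIterative(50));  // instant, even for n = 50
        // fibRecursive(50) computes the same value, but takes minutes
    }
}
```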
BUT you'll love recursion.
It's elegant and often powerful.
Imagine you want to traverse a tree to print something in order. It could be something like:
public void print() {
    // note: a check like "if (this == null)" can never be true in Java,
    // so guard the children before recursing instead
    if (left != null) left.print();
    System.out.println(value);
    if (right != null) right.print();
}
To do this with a while loop you need your own backtracking stack, because there are calls that are not in tail position (though one of them is). It won't be as easy to understand as this, though, and in my opinion it would technically still be recursion, since goto + stack is recursion.
If your trees are not too deep, you won't blow the stack, and the program works. There is no need for premature optimization. I would even increase the JVM's stack size before changing this code to manage its own stack.
In a future version of the runtime, even the JVM may get tail-call optimization, just as proper runtimes should have. Then recursive calls in tail position won't grow the stack, at which point there is no difference from other control structures, and you can choose whichever has the clearest syntax.
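For completeness, here is a sketch of the explicit-stack version of the in-order traversal (my own illustration, assuming a simple hypothetical Node class; the recursive original is clearer):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class InOrder {
    static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    // Iterative in-order traversal: the explicit Deque replaces the
    // call stack that the recursive version uses implicitly.
    static List<Integer> inOrder(Node root) {
        List<Integer> out = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        Node current = root;
        while (current != null || !stack.isEmpty()) {
            while (current != null) {   // descend as far left as possible
                stack.push(current);
                current = current.left;
            }
            current = stack.pop();      // visit the node
            out.add(current.value);
            current = current.right;    // then handle its right subtree
        }
        return out;
    }

    public static void main(String[] args) {
        //     2
        //    / \
        //   1   3
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        System.out.println(inOrder(root)); // [1, 2, 3]
    }
}
```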
My understanding is that standard iterative loops are more applicable when your data set is small, has minimal edge cases, and the logic for determining how many times to iterate is simple.
Recursive functions, on the other hand, are more useful for complex nested data structures, where you may not be able to intuitively or accurately estimate how many times you need to loop, because the number of iterations depends on a handful of conditions, some of which may not be mutually exclusive, and where you care about the order of the call stack and an intuitive path to a base case, for ease of readability and debugging.
Recursion is acceptable for prototyping, or for non-programmers and junior programmers. For more serious programming you should avoid recursion as much as you can. Please read the
NASA coding standard
I am trying to test what I know about big-O. I am not very confident, but not totally illiterate either, so please guide me.
This is not homework; I am not a student anywhere, but I am interested in understanding this and various other related concepts.
// What is the big-O of this application?
public class SimpleBigOTest {

    // What is the big-O of this method? -> I am certain it is O(n), but just checking
    private void useItirativeApprachToPrintNto0(int n) {
        for (int i = 0; i < n; i++) {
            System.out.println("useItirativeApprachToPrintNto0: " + i);
        }
    }

    // What is the big-O of this method? -> I am reasonably certain it is O(n)
    private void useRecurrsiveApprachToPrintNto0(int n) {
        if (n != 0) {
            System.out.println("useRecurrsiveApprachToPrintNto0: " + n);
            useRecurrsiveApprachToPrintNto0(n - 1);
        }
    }

    // What is the big-O of this method? -> I think it is O(n^2)
    private void mutltipleLinearItirationsDependentOnValueOfN(int n) {
        int localCounter = n + n;
        for (int i = 0; i < localCounter; i++) {
            System.out.println("mutltipleLinearItirationsDependentOnValueOfN: " + i);
        }
        for (int i = 0; i < n; i++) {
            System.out.println("mutltipleLinearItirationsDependentOnValueOfN: " + i);
        }
    }

    // What is the big-O of this method? -> I think this is again O(n)
    private void mutltipleLinearItirationsNotDependentOnValueOfN(int n, int j) {
        int localCounter = j;
        for (int i = 0; i < localCounter; i++) {
            System.out.println("mutltipleLinearItirationsNotDependentOnValueOfN: " + i);
        }
        for (int i = 0; i < n; i++) {
            System.out.println("mutltipleLinearItirationsNotDependentOnValueOfN: " + i);
        }
    }

    // What is the big-O of main? -> I would say O(n^2), because
    // mutltipleLinearItirationsDependentOnValueOfN has the biggest big-O of O(n^2),
    // if I am correct
    public static void main(String[] args) {
        SimpleBigOTest test = new SimpleBigOTest();
        int n = 1000;
        int j = 1234;
        test.useItirativeApprachToPrintNto0(n);
        test.useRecurrsiveApprachToPrintNto0(n);
        test.mutltipleLinearItirationsDependentOnValueOfN(n);
        test.mutltipleLinearItirationsNotDependentOnValueOfN(n, j);
    }
}
As a side question: why do all the books on algorithms speak so highly of recursion, whereas in my practical experience I have always used iteration? With recursion we can run out of memory quickly, and it is a nightmare to debug.
Your answers to the first two are correct.
Your answer to the third function is incorrect; this is also O(N). The reason is that the first loop iterates 2N times, and the second loop iterates N times. This is a total of 3N iterations, and 3N = O(N) because big-O ignores constant factors.
Your answer to the fourth function is also incorrect; this is O(N + J). It is possible to have a function's runtime dependent on multiple parameters, and that is the case here. Greatly increasing N or J will cause the runtime to depend on that parameter more than the other. Many important algorithms like Dijkstra's algorithm, the KMP string matching algorithm, etc. have runtimes that depend on multiple parameters. Some algorithms have runtimes that depend on the value they produce (these are sometimes called output-sensitive algorithms). It's good to keep this in mind when analyzing or designing algorithms.
Finally, the complexity of main is O(1), because you are calling all four functions with fixed values for the arguments, so the program always does exactly the same amount of work (some constant). If you allowed n and j to vary with the command-line arguments, then the runtime would be O(n + j), but since they're fixed the complexity is O(1).
As a final note, I'd suggest not dismissing recursion so quickly. Recursion is an extremely useful technique for designing algorithms, and many recursive algorithms (quicksort, mergesort, etc.) use little stack space and are quite practical. Thinking recursively often helps you design iterative algorithms by allowing you to think about the structure of the problem in a different way. Plus, many major data structures (linked lists, trees, tries, etc.) are defined recursively, and understanding their recursive structure will help you write algorithms that operate over them. Trust me, it's a good skill to have! :-)
Hope this helps!
Regarding the complexity scores, templatetypedef has already provided the correct ones.
Now for your question regarding recursion vs. loops.
Many problems in the real world have a recursive structure and are best designed using that property. For the Tower of Hanoi, for example, recursion provides a very simple solution, whereas an iterative approach can become quite complex.
Lastly, recursion does have some overhead for the additional calls and parameters. If you need extremely optimized behavior, you have to weigh the two against each other.
Finally, remember that programmer time is more expensive than CPU time. Before you micro-optimize your code, it is a good idea to measure whether it will really be an issue.
I have 2 parameters and I want the method to return an int result. I was given this code, but I don't understand anything about binomials etc. and don't know how to "convert" it.
It has double BC[126][126]; defined somewhere above it, but I don't need that; I just want a result for these n and m. (I probably sound like a numpty for putting it like that.)
private void binom(int n, int m) {
    int i, j;
    if (n >= 0)
        if (m > n || m < 0) System.err.println("Illegal m!!\n");
        else {
            for (i = 0; i <= n; i++) BC[i][0] = 1;
            for (i = 1; i <= m; i++) BC[0][i] = 0;
            for (j = 1; j <= m; j++)
                for (i = 1; i <= n; i++)
                    BC[i][j] = BC[i - 1][j - 1] + BC[i - 1][j];
        }
    else System.err.println("Negative n!!\n");
}
You could just return BC[n][m], which is the element you calculate with the three for loops.
By the way, you have at least three possible implementations:
trivial recursive
this one (dynamic programming)
using the formula n! / ((n-m)! m!), which is no good, since the factorial computations overflow quickly
A correction: your approach would be dynamic programming if you avoided recalculating all the coefficients every time the method is invoked, but that is not the case here.
See the article Computing Binomial Coefficients for an example with comparable complexity, O(n^2), but using only O(n) space instead of O(n^2).
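A rough sketch of that single-row idea (my own, assuming you only need the one value C(n, m) rather than the whole table): keep just one row of Pascal's triangle and update it in place, right to left, so each entry is read before it is overwritten.

```java
public class Binomial {
    // Computes C(n, m) using a single row of Pascal's triangle.
    // Updating right-to-left means row[j-1] still holds the previous
    // row's value when row[j] is updated: O(n*m) time, O(m) space.
    static long binom(int n, int m) {
        if (m < 0 || m > n) return 0;
        long[] row = new long[m + 1];
        row[0] = 1;
        for (int i = 1; i <= n; i++) {
            for (int j = Math.min(i, m); j >= 1; j--) {
                row[j] += row[j - 1];
            }
        }
        return row[m];
    }

    public static void main(String[] args) {
        System.out.println(binom(5, 2));  // 10
        System.out.println(binom(10, 3)); // 120
    }
}
```

This also sidesteps the asker's original problem: it returns the int (here long) result directly instead of writing into a shared BC array.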