First of all, I know this is a simple question and I am a beginner, so please bear with me. I have been having problems with this exercise in Java. I'm practising for a test, and this one really shook my confidence.
Anyway, the problem looks like this:
// Returns true if (and only if) n is a prime number; n > 1 is assumed.
private static boolean isPrime(long n) {
return isPrime(n, 2);
// See the method isPrime below.
}
// Helper method for isPrime ...
private static boolean isPrime(long n, long m) {
return (m * n > (m /* TODO: modify expression if necessary */))
|| (n % m == (0 /* TODO: modify expression if necessary */)
&& isPrime((m /* TODO: modify expression if necessary */), n + 1));
}
You're supposed to fill in the expressions inside the parentheses marked with TODO.
My problem is that I just can't trace what this call does:
isPrime((.....), n + 1);
If someone could offer some advice on how to start solving this problem, I would be very grateful.
This problem is not well suited to a recursive solution, or at least not an efficient one.
The definition of primality is that N > 1 is prime if it is not divisible by any positive integer other than itself and 1. The natural way to express that with recursion is to define a recursive "is_divisible" function:
# for integers m >= 1
is_prime(m):
return is_divisible(m, 1)
is_divisible(m, n):
if n >= m: return true
if n == 1: return is_divisible(m, n + 1)
if m % n == 0: return false // HERE
return is_divisible(m, n + 1)
OR (a more efficient 3-parameter version):
# for all integers m
is_prime(m):
if m == 1: return false
if m >= 2: return is_divisible(m, sqrt(m), 2)
error "m is not a positive integer"
is_divisible(m, max, n) :
if n > max: return true
if m % n == 0: return false // HERE
return is_divisible(m, max, n + 1)
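For illustration, the 3-parameter version above could be sketched in Java like this (the method names are my own, not from the exercise; it is only a sketch and will overflow the stack for very large inputs):

```java
public class RecursivePrime {
    // Recursive trial division, translated from the pseudocode above.
    static boolean isPrime(long m) {
        if (m < 2) return false;
        return isDivisibleByNone(m, (long) Math.sqrt(m), 2);
    }

    // Returns true if m has no divisor in the range [n, max].
    static boolean isDivisibleByNone(long m, long max, long n) {
        if (n > max) return true;       // no divisor found up to sqrt(m)
        if (m % n == 0) return false;   // found a divisor, so m is composite
        return isDivisibleByNone(m, max, n + 1);
    }

    public static void main(String[] args) {
        System.out.println(isPrime(97));   // true
        System.out.println(isPrime(91));   // false (7 * 13)
    }
}
```

Note that this is exactly the kind of deep, non-tail-optimized recursion the rest of this answer warns about.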
(In the literature, functions like is_divisible are often called "auxiliary" or "helper" functions. This is a common tool in functional programming.)
If you try to "optimize" that to only consider prime divisors at HERE, you will end up with double recursion ... and exponential complexity.
This is all very "unnatural" in Java, and will be horribly inefficient (even compared with a naive primality test using a loop1), because Java doesn't do tail-call optimization of recursion. Indeed, for large enough N you will get a StackOverflowError.
1 - A better, but still simple, approach is the Sieve of Eratosthenes. There are better primality tests than that, though they are rather complicated and, in some cases, probabilistic.
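For reference, the sieve mentioned in the footnote can be sketched in a few lines of Java (the method name is my own; this assumes a modest limit that fits in memory):

```java
public class Sieve {
    // Sieve of Eratosthenes: returns an array where result[i] is true
    // iff i is prime, for 0 <= i <= limit.
    static boolean[] sieve(int limit) {
        boolean[] prime = new boolean[limit + 1];
        for (int i = 2; i <= limit; i++) prime[i] = true; // 0 and 1 are not prime
        for (int p = 2; (long) p * p <= limit; p++) {
            if (prime[p]) {
                // Cross off every multiple of p, starting at p*p
                // (smaller multiples were crossed off by smaller primes).
                for (int multiple = p * p; multiple <= limit; multiple += p) {
                    prime[multiple] = false;
                }
            }
        }
        return prime;
    }
}
```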
Related
I'm trying to write a method that takes a single positive integer as its argument and returns the number of factors of 2 in that number. For example, numbers that are twice an odd number should return 1, numbers that are four times an odd number should return 2, and so on.
public int twos(int n){
int twos=0;
if(n%2!=0){
return(0);
}
if(n/3>=1){
System.out.println("\n"+n);;
return twos(n/3)+1;
}
else{
return(0);
}
// return(twos);
}
The code I have above only works for the integers 6 and 12, not for all integers. How would you make this work for all integers?
When creating a recursive implementation, you always need to define a base case and a recursive case (every recursive method contains these two parts, either implicitly or explicitly).
Base case - represents the condition (or group of conditions) under which the recursion terminates. The result for the base case is trivial and known in advance. For this task, the base case is when the number is not even, and the return value is 0.
Recursive case - is where the main logic of the recursive method resides and where the recursive calls are made. Here we need to make a recursive call with n / 2 as the argument and return the result of that call plus 1 (because the received argument is even, so we need to add one power of 2 to the total count).
Here is how the implementation might look:
public int twos(int n) {
if (n % 2 != 0) return 0; // base case
// recursive case
return 1 + twos(n / 2);
}
Or, using the ternary operator (which is equivalent to if-else), it can be written as:
public int twos(int n) {
return n % 2 != 0 ? 0 : 1 + twos(n / 2);
}
Usage example (in order to call twos() from main(), add the static modifier to it):
public static void main(String[] args) {
System.out.println(twos(2));
System.out.println(twos(4));
System.out.println(twos(12));
System.out.println(twos(38));
System.out.println(twos(128));
}
Output:
1 // 2
2 // 4
2 // 12
1 // 38
7 // 128
I am currently struggling with computing the Big O for a recursive exponent function that takes a shortcut whenever n%2 == 0. The code is as follows:
public static int fasterExponent(int x, int n){
if ( n == 0 ) return 1;
if ( n%2 == 0 ){
int temp = fasterExponent(x, n/2);
return temp * temp;
}
return x * fasterExponent(x, --n); //3
}
I understand that, without the (n%2 == 0) case, this recursive exponent function would be O(n). The inclusion of the (n%2 == 0) case speeds up the execution time, but I do not know how to determine either its complexity or the values of its witness c.
The answer is O(log n).
Reason: fasterExponent(x, n/2) halves the input at each step, and when n reaches 0 we are done. This obviously takes log n steps.
But what about fasterExponent(x, --n)? We do this when the input is odd; in the next step it will be even, and we fall back to the n/2 case. Consider the worst case, where we have to do this every time we divide n by 2. Then we perform the decrement step once for every halving step, so we need 2 * log n operations. That is still O(log n).
I hope my explanation helps.
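One quick way to convince yourself is to count the multiplications empirically. Here is an instrumented copy of the function (the mults counter is my own addition, and I replaced the side-effecting --n with n - 1):

```java
public class FastExp {
    static int mults = 0; // counts multiplications performed

    static long fasterExponent(long x, long n) {
        if (n == 0) return 1;
        if (n % 2 == 0) {
            long temp = fasterExponent(x, n / 2);
            mults++;                       // one multiplication: temp * temp
            return temp * temp;
        }
        mults++;                           // one multiplication: x * (...)
        return x * fasterExponent(x, n - 1);
    }

    public static void main(String[] args) {
        mults = 0;
        System.out.println(fasterExponent(2, 10)); // 1024
        System.out.println(mults);
        mults = 0;
        fasterExponent(1, 1_000_000);
        System.out.println(mults);         // stays well below 2 * log2(n) + 1
    }
}
```

For n = 1,000,000 the count is floor(log2 n) halvings plus one decrement per set bit of n, a few dozen operations rather than a million.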
It's intuitive to see that at each stage you are cutting the problem size in half. For instance, to find x^4, you find x^2 (let's call this A) and return the result as A*A. x^2 itself is found by splitting it into x times x.
Considering multiplication of two numbers as a primitive operation, you can see that the recurrence is:
T(N) = T(N/2) + O(1)
Solving this recurrence (using, say, the Master Theorem) yields:
T(N) = O(log N)
I need to take a number of type long modulo 1965.
Something like this -
number % 1965
Will the above modulus result always be within 0 and 1964?
Or are there cases in which it won't return a number between 0 and 1964?
I am using Java, and I will be running my program on Ubuntu machines.
Initially I thought this was purely a math question, but it actually depends on the language and its semantics... So I'm confused: will it always return a number between 0 and 1964, or are there exception cases?
This is what I have in my method -
private static int getPartitionNumber() {
return (int) (number % 1965);
}
UPDATE:
One thing I forgot to mention is, here number will always be positive number. Any negative number I am throwing IllegalArgumentException at the starting of the program.
No: Java's remainder operator returns a value in the range (-n, n) for x % n. That is, if the left operand is negative, the result can be negative. To get around this, use something like the following:
((x % n) + n) % n;
which always returns a value in the range [0, n).
EDIT (to reflect the UPDATE in the question)
If the left operand is non-negative, then plain x % n already produces values in the range [0, n).
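A small demonstration with the 1965 modulus from the question. Since Java 8 the standard library also offers Math.floorMod, which performs the same normalization as the add-then-mod trick:

```java
public class ModDemo {
    public static void main(String[] args) {
        long n = 1965L;

        // For a non-negative left operand, % already lands in [0, n).
        System.out.println(4000L % n);                  // 70

        // For a negative left operand, % can be negative.
        System.out.println(-4000L % n);                 // -70

        // Normalizing puts it back in [0, n).
        System.out.println(((-4000L % n) + n) % n);     // 1895

        // Math.floorMod (Java 8+) does the same normalization.
        System.out.println(Math.floorMod(-4000L, n));   // 1895
    }
}
```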
The actual code is simple, but I'm having trouble finding the base case as well. I was able to write pretty decent pseudocode, but I'm stuck. I don't know if I'm allowed to ask homework questions on here, but this was a question I could not answer:
Let f(n) be the number of additions performed by this computation.
Write a recurrence equation for f(n). (Note that the number of
addition steps should be exactly the same for both the non-recursive
and recursive versions. In fact, they both should make exactly the
same sequence of addition steps.)
Any help would be great, if I'm not allowed to ask the homework question that's okay.
int sum(int A[], int n) {
    T = A[0];
    for (i = 1; i <= n-1; i++)
        T = T + A[i];
    return T;
}
Use the following property of your sum function (indices are 0-based, matching the loop above):
sum(A, n) == sum(A, n-1) + A[n-1]
and take into account that:
sum(A, 1) == A[0]
Counting additions, this gives the recurrence f(n) = f(n-1) + 1 with base case f(1) = 0, which solves to f(n) = n - 1.
Just rewrote your variant:
int sum(int A[], int n) {
    if (n > 1) {
        return sum(A, n - 1) + A[n - 1];
    }
    return A[0]; // n == 1
}
I'm trying to solve Problem 3 from http://projecteuler.net. However, when I run this program, nothing prints out.
What am I doing wrong?
Problem: What is the largest prime factor of the number 600851475143?
public class project_3
{
public boolean prime(long x) // if x is prime return true
{
boolean bool = false;
for(long count=1L; count<x; count++)
{
if( x%count==0 )
{
bool = false;
break;
}
else { bool = true; }
}
return bool;
}
public static void main(String[] args)
{
long ultprime = 0L; // largest prime value
project_3 object = new project_3();
for(long x=1L; x <= 600851475143L; x++)
{
if( object.prime(x)==true )
{
ultprime = ((x>ultprime) ? x : ultprime);
}
}
System.out.println(ultprime);
}
}
Not only does your prime checking function always return false; even if it were functioning properly, your main loop does not look for the input number's factors at all, but rather just for the largest prime less than or equal to it. In pseudocode, your code is equivalent to:
foo(n):
x := 0 ;
foreach d from 1 to n step 1:
if is_prime(d): // always false
x := d
return x // always 0
is_prime(d):
not( d % 1 == 0 ) // always false
But you don't need the prime checking function here at all. The following finds all factors of a number, by trial division:
factors(n):
fs := []
d := 2
while ( d <= n/d ):
if ( n % d == 0 ): { n := n/d ; fs := append(fs,d) }
else: { d := d+1 }
if ( n > 1 ): { fs := append(fs, n) }
return fs
Divisibility is tested only up to the square root of the number. Each factor, as it is found, is divided out of the number being factorized, further reducing the run time. Factorizing the number in question runs instantly, taking just 1473 iterations.
By construction, all the factors thus found are guaranteed to be prime (that's why no prime checking is needed). It is crucial to enumerate the candidate divisors in ascending order for this to happen1. Ascending order is also the most efficient, because any given number is more likely to have a smaller prime factor than a larger one. Enumerating only primes instead of all the odd numbers, though not necessary, would be more efficient still, if you have an efficient way of obtaining those primes to test divide by.
It is trivial to augment the above to find the largest factor: just implement append as
append(fs,d):
return d
1 - because then, for any composite divisor d of the original number being factorized, by the time we reach d we will have already divided its prime factors out of the original number, so the reduced number will share no prime factors with d; i.e., d won't divide the reduced number even though it divides the original.
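A direct Java translation of the pseudocode above, keeping only the largest factor found so far instead of building a list (the class and method names are my own):

```java
public class LargestPrimeFactor {
    // Trial division: repeatedly divide out each factor found,
    // testing candidates only up to the square root of what remains.
    static long largestPrimeFactor(long n) {
        long largest = 1L;
        long d = 2L;
        while (d <= n / d) {            // i.e. d*d <= n, written to avoid overflow
            if (n % d == 0) {
                largest = d;            // the "append": remember the latest factor
                n /= d;                 // divide it out and keep going
            } else {
                d++;
            }
        }
        if (n > 1) largest = n;         // whatever remains is itself prime
        return largest;
    }

    public static void main(String[] args) {
        System.out.println(largestPrimeFactor(600851475143L)); // 6857
    }
}
```

Because factors are divided out in ascending order, the last one recorded is automatically the largest prime factor.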
Two things:
1) You are starting count at 1 instead of 2. All integers are divisible by 1.
2) You are running an O(n^2) algorithm against a rather large N (or at least you will be once you fix point #1). The runtime will be quite long.
The whole point of Project Euler is that the most obvious approaches to finding the answer will take so long to compute that they aren't worth running. That way you learn to look for the less obvious, more efficient approaches.
Your approach is technically correct in terms of whether it is capable of computing the largest prime up to some number. The reason you aren't seeing anything print out is that your algorithm cannot solve the problem quickly.
The way you've designed this, it'll take somewhere around 4,000,000 years to finish.
If you replaced the number 600851475143 with, say, 20, it would finish fairly quickly. But with the 600-billion number, it's not that simple.