I have several questions about the following algorithms for telling whether a number is prime. I also know that the Sieve of Eratosthenes can give a faster answer.
Why is it faster to compute i*i on every iteration (sqrt(n) times) than to compute sqrt(n) just once?
Why is Math.sqrt() faster than my sqrt() method?
What is the complexity of these algorithms: O(n), O(sqrt(n)), O(n log(n))?
public class Main {
    public static void main(String[] args) {
        // Case 1: while (i*i <= N)
        long startTime = System.currentTimeMillis(); // start time
        for (int i = 2; i <= 100000; ++i) {
            if (isPrime1(i))
                continue;
        }
        long stopTime = System.currentTimeMillis(); // end time
        System.out.printf("Duration: %4d ms. while (i*i <= N) Algorithm\n",
                stopTime - startTime);

        // Case 2: while (i <= sqrt(N)), recomputing sqrt on every iteration
        startTime = System.currentTimeMillis();
        for (int i = 2; i <= 100000; ++i) {
            if (isPrime2(i))
                continue;
        }
        stopTime = System.currentTimeMillis();
        System.out.printf("Duration: %4d ms. while (i <= sqrt(N)) Algorithm\n",
                stopTime - startTime);

        // Case 3: s = sqrt(N) computed once with my Newton sqrt
        startTime = System.currentTimeMillis();
        for (int i = 2; i <= 100000; ++i) {
            if (isPrime3(i))
                continue;
        }
        stopTime = System.currentTimeMillis();
        System.out.printf(
                "Duration: %4d ms. s = sqrt(N) while (i <= s) Algorithm\n",
                stopTime - startTime);

        // Case 4: s = Math.sqrt(N) computed once
        startTime = System.currentTimeMillis();
        for (int i = 2; i <= 100000; ++i) {
            if (isPrime4(i))
                continue;
        }
        stopTime = System.currentTimeMillis();
        System.out.printf(
                "Duration: %4d ms. s = Math.sqrt(N) while (i <= s) Algorithm\n",
                stopTime - startTime);
    }

    public static boolean isPrime1(int n) {
        for (long i = 2; i * i <= n; i++) {
            if (n % i == 0)
                return false;
        }
        return true;
    }

    public static boolean isPrime2(int n) {
        for (long i = 2; i <= sqrt(n); i++) {
            if (n % i == 0)
                return false;
        }
        return true;
    }

    public static boolean isPrime3(int n) {
        double s = sqrt(n);
        for (long i = 2; i <= s; i++) {
            if (n % i == 0)
                return false;
        }
        return true;
    }

    public static boolean isPrime4(int n) {
        // Testing which is faster: my sqrt method or Java's Math.sqrt
        double s = Math.sqrt(n);
        for (long i = 2; i <= s; i++) {
            if (n % i == 0)
                return false;
        }
        return true;
    }

    public static double abs(double n) {
        return n < 0 ? -n : n;
    }

    public static double sqrt(double n) {
        // Newton's method, from the book Algorithms, 4th edition, by Robert Sedgewick
        // and Kevin Wayne
        if (n < 0)
            return Double.NaN;
        double err = 1e-15;
        double p = n;
        while (abs(p - n / p) > err * n)
            p = (p + n / p) / 2.0;
        return p;
    }
}
Here is a link to my code as well: http://ideone.com/Fapj1P
1. Why is it faster to compute i*i sqrt(n) times than sqrt(n) just one time?
Look at the complexities below: the difference is the additional cost of computing the square root on every iteration.
2. Why is Math.sqrt() faster than my sqrt() method?
Math.sqrt() delegates the call to StrictMath.sqrt(), which is done in hardware or native code.
3. What is the complexity of these algorithms?
The complexity of each function you described:
i = 2 .. i*i <= n (isPrime1):                     O(sqrt(n))
i = 2 .. i <= sqrt(n), recomputed (isPrime2):     O(sqrt(n) * log(n))
i = 2 .. s, s = sqrt(n) by Newton once (isPrime3): O(sqrt(n)) + O(log(n))
i = 2 .. s, s = Math.sqrt(n) once (isPrime4):     O(sqrt(n))
Newton's method's complexity from
http://en.citizendium.org/wiki/Newton%27s_method#Computational_complexity
Squaring a number is effectively an integer operation, while sqrt is a floating-point one. Once you account for the run-time cost of the casts and the floating-point computation, the results you have observed are not surprising.
The Wikipedia page on sqrt http://en.wikipedia.org/wiki/Square_root has a nice section on computation.
As for the faster method, I trust you can investigate the cost of the squaring operation (i*i) relative to the square root.
On the note of run-times, you might like this little piece of code I wrote up to demonstrate the number of calls made to functions during iteration; you may find it, or something similar to it in Java, useful as you think about this sort of thing: gist.github.com/Krewn/1ea0c788ac7210efc475
edit: Here is a nice explanation of integer sqrt run-times: http://www.codecodex.com/wiki/Calculate_an_integer_square_root
edit: For concrete cycle counts on a Core 2, see "How slow (how many cycles) is calculating a square root?"
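Since integer square roots come up in the links above, here is a minimal sketch of one in Java (my own illustration, not code from the original post): Newton's method on longs, returning floor(sqrt(n)) with no floating point at all.

// Hedged sketch: integer Newton's method; returns floor(sqrt(n)) for n >= 0.
static long isqrt(long n) {
    if (n < 2) return n;
    long x = n;
    long y = (x + 1) / 2;
    while (y < x) {          // the iterates decrease monotonically toward floor(sqrt(n))
        x = y;
        y = (x + n / x) / 2;
    }
    return x;
}

Used as a hoisted loop bound (long s = isqrt(n); for (long i = 2; i <= s; i++) ...), it avoids both the repeated recomputation and the floating-point conversions discussed above.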
Please include output in your post when possible.
Related to your question, although it approaches primes in a different way:
def getPrimes(n):
    primes = [2]
    # instantiates our list to a list of one element, 2
    k = 3
    while len(primes) < n:
        # python uses the prefix function len(var) for lists, dictionaries and strings
        k2 = 0
        isprime = True
        # vacuously true assumption that every number is prime unless a divisor is found
        while primes[k2]**2 <= k:  # <> this is where you are substituting sqrt with squaring <> #
            if k % primes[k2] == 0:
                isprime = False
                break
            k2 += 1
        if isprime:
            primes.append(k)
        k += 2
    return primes

print(getPrimes(30))
Related: https://www.geeksforgeeks.org/program-to-calculate-the-value-of-ncr-efficiently/
This is the code I want to understand. Here is a video that explains it more in depth: https://www.youtube.com/watch?v=lhXwT7Zm3EU. However, I still don't understand a certain aspect of it.
Here is the code:
// Java implementation to find nCr
class GFG {

    // Function to find nCr
    static void printNcR(int n, int r)
    {
        // p holds the value of n*(n-1)*(n-2)...,
        // k holds the value of r*(r-1)...
        long p = 1, k = 1;

        // C(n, r) == C(n, n-r),
        // choosing the smaller value
        if (n - r < r) {
            r = n - r;
        }

        if (r != 0) {
            while (r > 0) {
                p *= n;
                k *= r;

                // gcd of p, k
                long m = __gcd(p, k);

                // dividing the running product and divisor by their gcd
                // keeps the numbers small and saves us from overflow
                p /= m;
                k /= m;

                n--;
                r--;
            }

            // k should be simplified to 1
            // as C(n, r) is a natural number
            // (the denominator should be 1).
        }
        else {
            p = 1;
        }

        // if our approach is correct, p = ans and k = 1
        System.out.println(p);
    }

    static long __gcd(long n1, long n2)
    {
        long gcd = 1;
        for (int i = 1; i <= n1 && i <= n2; ++i) {
            // Checks if i is a factor of both integers
            if (n1 % i == 0 && n2 % i == 0) {
                gcd = i;
            }
        }
        return gcd;
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 50, r = 25;
        printNcR(n, r);
    }
}
Specifically, why does this code work:
if (n - r < r)
    r = n - r;
Why does the calculation still produce the right answer after going through and exiting the main while loop? I don't understand why this is necessary or why it makes sense to do. Why would leaving this line out make the nCr calculation fail or not work the way it's intended? If someone can explain this, or point me to somewhere that explains it or the underlying math concept, that would be great :) Maybe another way of coding the same thing would help. I just want to understand why this produces the right answer, as a math and coding student.
To give a bit of perspective on my abilities (so you know what level I'm at), I am learning object-oriented programming, and have completed high school maths and basic computer science. I am by no means an expert.
The nCr operation has a special property, and it is mentioned in the comment above the if condition: // C(n, r) == C(n, n-r). The while loop iterates while r > 0, and each iteration decrements r by 1. So, to reduce the number of times the loop executes, we reduce the value of r when possible. Since C(n, r) == C(n, n-r), we take the smaller of r and n-r, so that the number of iterations is minimized but the result stays the same.
Assume that n = 100 and r = 99. If we skip the if condition, the loop executes 99 times, whereas with the if condition we update r to r = n-r = 1 and the loop executes only once. That saves 98 unneeded iterations.
So there is a big performance improvement if we include the if condition.
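To see the symmetry concretely, here is a small, hypothetical helper (not part of the posted code) that computes C(n, r) with the multiplicative formula; the number of loop iterations equals r, so passing the smaller of r and n-r does strictly less work for the same result.

// C(n, r) via the multiplicative formula; r iterations, exact integer division at each step.
static long chooseIterative(int n, int r) {
    long result = 1;
    for (int i = 1; i <= r; i++) {
        result = result * (n - i + 1) / i;   // after this line, result equals C(n, i)
    }
    return result;
}

For example, chooseIterative(20, 19) and chooseIterative(20, 1) both return 20, but the first loops 19 times and the second only once; that is exactly the saving the if (n - r < r) branch buys inside printNcR.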
The goal is to find the sum of all the multiples of 3 or 5 below N.
Here's my code:
import java.util.Scanner;

public class Solution
{
    public static void main(String[] args)
    {
        Scanner in = new Scanner(System.in);
        int t = in.nextInt();
        long n = 0;
        long sum = 0;
        for (int a0 = 0; a0 < t; a0++)
        {
            n = in.nextInt();
            sum = 0;
            for (long i = 1; i < n; i++)
            {
                if (i % 3 == 0 || i % 5 == 0)
                    sum = sum + i;
            }
            System.out.println(sum);
        }
    }
}
It's taking more than 1 second to execute for some of the test cases. Can anyone please help me reduce the time complexity?
We can find the sum of all multiples of number d that are below N as a sum of an arithmetic progression (their sum is equal to d + 2*d + 3*d + ...).
long multiplesSum(long N, long d) {
    long highestMultiple = (N - 1) / d * d;
    long numberOfMultiples = highestMultiple / d;
    return (d + highestMultiple) * numberOfMultiples / 2;
}
Then the result will be equal to:
long resultSum(long N) {
    return multiplesSum(N, 3) + multiplesSum(N, 5) - multiplesSum(N, 3 * 5);
}
We need to subtract multiplesSum(N, 15) because there are numbers that are multiples of both 3 and 5 and we added them twice.
Complexity: O(1)
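A quick sanity check, assuming the two methods above are made static members of some class: the multiples of 3 or 5 below 10 are 3, 5, 6 and 9, which sum to 23, and the classic N = 1000 case gives 233168.

public static void main(String[] args) {
    System.out.println(resultSum(10));    // 23
    System.out.println(resultSum(1000));  // 233168
}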
You can't reduce the time complexity with this kind of loop, as there are still O(N) numbers in each set of multiples. However, you can reduce the constant multiplier by using integer division:
static int findMultiples(int N, int s)
{
    int c = (N - 1) / s, sum = 0;   // count of multiples of s strictly below N
    for (int i = 0, k = s; i < c; i++, k += s)
        sum += k;
    return sum;
}
This way you only loop through the multiples themselves instead of the whole range [0, N].
Note that you will need to do findMultiples(N, 3) + findMultiples(N, 5) - findMultiples(N, 15), to remove the duplicated multiples of both 3 and 5. The number of loops is therefore N/3 + N/5 + N/15 = 0.6N instead of N.
EDIT: in general, for an arbitrary number of divisors you need full inclusion-exclusion: add the sums for single divisors, subtract the sums for the LCM of each pair, add back the sums for each triple, and so on (see the sketch below). It is only worth doing this if the total number of multiples visited stays below N, otherwise there will be more loops; luckily, for 2 divisors it is never worse.
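A minimal sketch of that general inclusion-exclusion, using the closed-form arithmetic-series sum from the earlier answer rather than loops (the helper names are mine, not from the original posts):

// gcd/lcm helpers for combining divisors
static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
static long lcm(long a, long b) { return a / gcd(a, b) * b; }

// Sum of every number below N that is a multiple of at least one divisor in ds.
static long sumOfMultiplesBelow(long N, long[] ds) {
    long total = 0;
    for (int mask = 1; mask < (1 << ds.length); mask++) {   // every non-empty subset of divisors
        long l = 1;
        int bits = 0;
        for (int i = 0; i < ds.length; i++) {
            if ((mask & (1 << i)) != 0) { l = lcm(l, ds[i]); bits++; }
        }
        long m = (N - 1) / l;                               // how many multiples of l are below N
        long subsetSum = l * m * (m + 1) / 2;               // arithmetic-series sum of those multiples
        total += (bits % 2 == 1) ? subsetSum : -subsetSum;  // add odd-sized subsets, subtract even-sized
    }
    return total;
}

For the original problem, sumOfMultiplesBelow(N, new long[]{3, 5}) reproduces the 3 + 5 - 15 formula above.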
The sum of all numbers from 1 to (including) N is known to be N(N+1)/2 (no need for iteration).
So, the sum of all multiples of K, from K to KM is K times the above formula, giving KM(M+1)/2.
Combine this with #meowgoesthedog's findMultiples(N, 3) + findMultiples(N, 5) - findMultiples(N, 15) idea, and you have a constant-time solution.
A solution for your problem, and the fastest method for solving it:
import java.util.*;

class Solution {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int t = in.nextInt();
        while (t != 0) {
            long a = in.nextLong();
            long q = a - 1;                          // largest value strictly below a
            long aa = q / 3;                         // count of multiples of 3
            long bb = q / 5;                         // count of multiples of 5
            long cc = q / 15;                        // count of multiples of 15 (counted twice above)
            long aaa = ((aa * (aa + 1)) / 2) * 3;    // sum of multiples of 3
            long bbb = ((bb * (bb + 1)) / 2) * 5;    // sum of multiples of 5
            long ccc = ((cc * (cc + 1)) / 2) * 15;   // sum of multiples of 15
            System.out.println(aaa + bbb - ccc);
            t -= 1;
        }
    }
}
I wrote the following program for "Project Euler #3: Largest prime factor". It is supposed to print out the largest prime factor of each of the provided inputs.
import java.util.Scanner;

public class euler_2 {
    public static boolean isPrime(int n) {
        if (n % 2 == 0) return false;
        for (int i = 3; i * i <= n; i += 2) {
            if (n % i == 0)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int a = sc.nextInt();
        for (int i = 0; i < a; i++) {
            int b = sc.nextInt();
            for (int j = b; j >= 1; j--) {
                boolean aa = isPrime(j);
                if (aa == true && b % j == 0) {
                    b = j;
                    break;
                }
            }
            System.out.println(b);
        }
    }
}
What changes can I make to the program to make it execute faster? What would be a better algorithm for this problem?
The problem with your approach is that for every number N, you test each number smaller than or equal to N for primality, and only after that check whether it is a divisor of N.
An obvious improvement is to check whether it is a divisor first and only then whether it is prime. But most probably this will not help that much.
What you can do instead is check each candidate in turn to see whether it is a divisor of the number; if it is, divide it out. You continue this up to sqrt(N).
I have not done anything with Java in a long time, but here is a Go implementation, which most any Java programmer will be able to transform to Java.
func biggestPrime(n uint64) uint64 {
    p, i := uint64(1), uint64(0)
    for i = 2; i < uint64(math.Sqrt(float64(n))) + uint64(1); i++ {
        for n % i == 0 {
            n /= i
            p = i
        }
    }
    if n > 1 {
        p = n
    }
    return p
}
Using my algorithm it will take you O(sqrt(N)) to find the biggest prime factor of a number. In your case it was O(N * sqrt(N)).
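Since the answer invites a Java translation, here is a rough one (a sketch on my part, not the original author's code): it uses i * i <= n instead of recomputing the square root, which is equivalent because n only shrinks.

// Largest prime factor by trial division, stripping each factor as it is found.
static long biggestPrime(long n) {
    long p = 1;
    for (long i = 2; i * i <= n; i++) {
        while (n % i == 0) {   // only primes ever divide n here, since smaller factors were removed
            n /= i;
            p = i;
        }
    }
    if (n > 1) {               // whatever remains is itself prime and is the largest factor
        p = n;
    }
    return p;
}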
Attempt to factor the number into 2 factors. Repeat on the largest factor found so far until you find one that can't be factored -- that is the largest prime factor.
There are many different ways you might try to factor the numbers, but since they are only ints, then Fermat's method or even trial division (going down from sqrt(N)) will probably do. See http://mathworld.wolfram.com/FermatsFactorizationMethod.html
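For reference, a minimal sketch of Fermat's method for an odd n (my own illustration based on the linked page, not production code): search for a such that a^2 - n is a perfect square b^2; then a - b and a + b are factors.

// Fermat factorization for odd n > 1; returns one factor (1 if n is prime).
static long fermatFactor(long n) {
    long a = (long) Math.ceil(Math.sqrt(n));
    long b2 = a * a - n;
    long b = (long) Math.round(Math.sqrt(b2));
    while (b * b != b2) {      // advance a until a*a - n is a perfect square
        a++;
        b2 = a * a - n;
        b = (long) Math.round(Math.sqrt(b2));
    }
    return a - b;              // a - b divides n; the cofactor is a + b
}

For int-sized inputs like these, the double-based square roots are exact enough; for much larger n you would want an integer square root instead.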
I need a function which can calculate the mathematical combination of (n, k) for a card game.
My current attempt is to use a function based on the usual factorial method:
static long Factorial(long n)
{
    return n < 2 ? 1 : n * Factorial(n - 1);
}

static long Combinatory(long n, long k)
{
    return Factorial(n) / (Factorial(k) * Factorial(n - k));
}
It's working very well, but the problem is that for certain ranges of numbers (n max value is 52 and k max value is 4) it returns a wrong value. E.g.:
long comb = Combinatory(52, 2); // returns 1, but should actually be 1326
I know that it's because I overflow the long when I compute Factorial(52), but the result I need is not as big as it seems.
Is there any way to get over this issue ?
Instead of using the default combinatory formula n! / (k! x (n - k)!), use the recursive property of the combinatory function.
(n, k) = (n - 1, k) + (n - 1, k - 1)
Knowing that : (n, 0) = 1 and (n, n) = 1.
This lets you avoid computing factorials and overflowing your long.
Here is a sample implementation:
static long Combinatory(long n, long k)
{
    if (k == 0 || n == k)
        return 1;
    return Combinatory(n - 1, k) + Combinatory(n - 1, k - 1);
}
EDIT: Here is a faster iterative algorithm:
static long Combinatory(long n, long k)
{
    if (n - k < k)
        k = n - k;
    long res = 1;
    for (int i = 1; i <= k; ++i)
    {
        res = (res * (n - i + 1)) / i;
    }
    return res;
}
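For readers following the Java snippets elsewhere on this page, a direct port of that iterative version might look like this (a sketch, using the same exact-division trick):

static long combinatory(long n, long k) {
    if (n - k < k)
        k = n - k;                       // C(n, k) == C(n, n - k), so use the smaller k
    long res = 1;
    for (long i = 1; i <= k; i++) {
        res = res * (n - i + 1) / i;     // res equals C(n, i) after each step, always an integer
    }
    return res;
}

combinatory(52, 2) returns 1326, the value expected in the question, with no intermediate overflow for a 52-card deck.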
In C# you can use BigInteger (I think there's a Java equivalent).
e.g.:
static long Combinatory(long n, long k)
{
    return (long)(Factorial(new BigInteger(n)) / (Factorial(new BigInteger(k)) * Factorial(new BigInteger(n - k))));
}

static BigInteger Factorial(BigInteger n)
{
    return n < 2 ? 1 : n * Factorial(n - 1);
}
You need to add a reference to System.Numerics to use BigInteger.
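The Java equivalent alluded to above is java.math.BigInteger; a sketch of the same idea in Java (the class and method names are mine) could look like this:

import java.math.BigInteger;

public class BigIntCombinatory {
    static BigInteger factorial(BigInteger n) {
        return n.compareTo(BigInteger.valueOf(2)) < 0
                ? BigInteger.ONE
                : n.multiply(factorial(n.subtract(BigInteger.ONE)));
    }

    static long combinatory(int n, int k) {
        BigInteger c = factorial(BigInteger.valueOf(n))
                .divide(factorial(BigInteger.valueOf(k))
                        .multiply(factorial(BigInteger.valueOf(n - k))));
        return c.longValueExact();   // the 52-card results comfortably fit in a long
    }

    public static void main(String[] args) {
        System.out.println(combinatory(52, 2));   // 1326
    }
}

BigInteger never overflows, so the naive factorial formula works; it is just slower and allocates more than the iterative version above.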
If this is not for a homework assignment, there is an efficient implementation in Apache's commons-math package
http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/util/ArithmeticUtils.html#binomialCoefficientDouble%28int,%20int%29
If it is for a homework assignment, start by avoiding factorials in your implementation.
Use the property that (n, k) = (n, n-k) to rewrite your choose using the highest value for k.
Then note that you can reduce n!/(k!(n-k)!) to (n * (n-1) * ... * (k+1)) / ((n-k) * (n-k-1) * ... * 1): you multiply every number in (k, n] and then divide by every number in [1, n-k].
// From memory, please verify correctness independently before trusting its use.
//
public long choose(long n, long k) {
    long kPrime = Math.max(k, n - k);    // use the larger of k and n-k so more factors cancel
    long returnValue = 1;
    for (long i = kPrime + 1; i <= n; i++) {
        returnValue *= i;                // multiply every number in (kPrime, n]
    }
    for (long i = 2; i <= n - kPrime; i++) {
        returnValue /= i;                // divide by every number in [2, n - kPrime]
    }
    return returnValue;
}
Please double check the maths, but this is a basic idea you could go down to get a reasonably efficient implementation that will work for numbers up to a poker deck.
The recursive formula is also known as Pascal's triangle, and IMO it's the easiest way to calculate combinatorials. If you're only going to need C(52,k) (for 0<=k<=52) I think it would be best to fill a table with them at program start. The following C code fills a table using this method:
#include <stdint.h>
#include <stdlib.h>

static int64_t* pascals_triangle( int N)
{
    int n, k;
    int64_t* C = calloc( N+1, sizeof *C);
    for( n=0; n<=N; ++n)
    {   C[n] = 1;
        for( k=n-1; k>0; --k)
        {   C[k] += C[k-1];
        }
    }
    return C;
}
After calling this with N=52, for example, C[k] will hold C(52,k) for k=0..52.
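A Java counterpart of that C function, for readers following the Java snippets here (a sketch; the array and method names are mine):

// Fills row N of Pascal's triangle so that row[k] == C(N, k) for k = 0..N.
static long[] pascalsTriangleRow(int N) {
    long[] row = new long[N + 1];        // zero-initialized, like calloc in the C version
    for (int n = 0; n <= N; n++) {
        row[n] = 1;
        for (int k = n - 1; k > 0; k--) {
            row[k] += row[k - 1];        // C(n, k) = C(n-1, k) + C(n-1, k-1)
        }
    }
    return row;
}

pascalsTriangleRow(52)[2] is 1326, and every entry of row 52 still fits in a long (the largest, C(52,26), is about 5e14).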
I just tried implementing code (in Java) for various means by which the nth term of the Fibonacci sequence can be computed and I'm hoping to verify what I've learnt.
The iterative implementation is as follows:
public int iterativeFibonacci(int n)
{
    if (n == 1) return 0;
    else if (n == 2) return 1;
    int i = 0, j = 1, sum = 0;
    for (; (n - 2) != 0; --n)
    {
        sum = i + j;
        i = j;
        j = sum;
    }
    return sum;
}
The recursive implementation is as follows :-
public int recursiveFibonacci(int n)
{
    if (n == 1) return 0;
    else if (n == 2) return 1;
    return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2);
}
The memoized implementation is as follows :-
// memory is assumed to be an int[] field of size at least n, initialized to zeros
public int memoizedFibonacci(int n)
{
    if (n <= 0) return -1;
    else if (n == 1) return 0;
    else if (n == 2) return 1;
    if (memory[n - 1] == 0)
        memory[n - 1] = memoizedFibonacci(n - 1);
    if (memory[n - 2] == 0)
        memory[n - 2] = memoizedFibonacci(n - 2);
    return memory[n - 1] + memory[n - 2];
}
I'm having a bit of a doubt when trying to figure out the Big-O of these implementations. I believe the iterative implementation to be O(n) as it loops through N-2 times.
In the recursive function, there are values recomputed, hence I think it's O(n^2).
In the memoized function, more than half of the values are accessed based on memoization. I've read that an algorithm is O(log N) if it takes constant time to reduce the problem space by a fraction and that an algorithm is O(N) if it takes constant time to reduce the problem space by a constant amount. Am I right in believing that the memoized implementation is O(n) in complexity? If so, wouldn't the iterative implementation be the best among all three? (as it does not use the additional memory that memoization requires).
The recursive version is not polynomial time - it's exponential, tightly bounded at φ^n, where φ is the golden ratio (≈ 1.618034). The recursive version will use O(n) memory (the usage comes from the stack).
The memoization version will take O(n) time on the first run, since each number is only computed once. However, in exchange, it also takes O(n) memory for your current implementation (the n comes from storing the computed values, and also from the stack on the first run). If you run it many times, the time complexity will become O(M + q), where M is the maximum of all inputs n and q is the number of queries. The memory complexity will become O(M), which comes from the array that holds all the computed values.
The iterative implementation is the best if you consider one run: it also runs in O(n), but uses a constant amount of memory, O(1), to compute. For a large number of runs it will recompute everything, so its performance may not be as good as the memoization version.
(However, practically speaking, long before the problem of performance and memory, the number is likely to overflow even 64-bit integer, so an accurate analysis must take into account the time it takes to do addition if you are computing the full number).
As plesiv mentioned, the Fibonacci number can also be computed in O(log n) by matrix multiplication (using the same trick as fast exponentiation by halving the exponent at every step).
A Java implementation to find the Fibonacci number using matrix multiplication is as follows:
private static int fibonacci(int n)
{
    if (n <= 1)
        return n;
    int[][] f = new int[][] { { 1, 1 }, { 1, 0 } };
    power(f, n - 1);
    return f[0][0];
}

// method to calculate the power of the initial matrix (M = {{1,1},{1,0}})
// note: as written this multiplies (n-1) times, i.e. O(n); to reach the O(log n)
// bound mentioned above, power would need to halve n at each step (exponentiation by squaring)
private static void power(int[][] f, int n)
{
    int[][] m = new int[][] { { 1, 1 }, { 1, 0 } };
    for (int i = 2; i <= n; i++)
        multiply(f, m);
}

// method to multiply two matrices
private static void multiply(int[][] f, int[][] m)
{
    int x = f[0][0] * m[0][0] + f[0][1] * m[1][0];
    int y = f[0][0] * m[0][1] + f[0][1] * m[1][1];
    int z = f[1][0] * m[0][0] + f[1][1] * m[1][0];
    int w = f[1][0] * m[0][1] + f[1][1] * m[1][1];
    f[0][0] = x;
    f[0][1] = y;
    f[1][0] = z;
    f[1][1] = w;
}
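A quick check of the matrix version, plus a caveat that is easy to miss: with int arithmetic the result overflows for n beyond 46 (F(46) = 1836311903 is the last Fibonacci number that fits in an int), so larger n would need long or BigInteger entries.

public static void main(String[] args) {
    System.out.println(fibonacci(10));   // 55
    System.out.println(fibonacci(46));   // 1836311903, the largest int-sized Fibonacci number
}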
An even faster method than matrix exponentiation for calculating the Fibonacci number is the fast doubling method. The time complexity of both methods is O(log n).
The method is based on the following identities:
F(2n) = F(n) * [2*F(n+1) - F(n)]
F(2n+1) = F(n)^2 + F(n+1)^2
One such Java implementation of the fast doubling method is below:
// MOD is assumed to be declared in the enclosing class, e.g.
// static final int MOD = 1000000007;
private static void fastDoubling(int n, int[] ans)
{
    int a, b, c, d;
    // base case
    if (n == 0)
    {
        ans[0] = 0;
        ans[1] = 1;
        return;
    }
    fastDoubling(n / 2, ans);
    a = ans[0]; /* F(n) */
    b = ans[1]; /* F(n+1) */
    c = 2 * b - a;
    if (c < 0)
        c += MOD;
    c = (int) ((long) a * c % MOD);                   /* F(2n); the long cast avoids int overflow */
    d = (int) (((long) a * a + (long) b * b) % MOD);  /* F(2n+1) */
    if (n % 2 == 0)
    {
        ans[0] = c;
        ans[1] = d;
    }
    else
    {
        ans[0] = d;
        ans[1] = (c + d) % MOD;
    }
}
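A small usage sketch, assuming MOD is defined as in the comment above and fastDoubling sits in the same class: the answer comes back through the two-element array.

public static void main(String[] args) {
    int[] ans = new int[2];
    fastDoubling(10, ans);
    System.out.println(ans[0]);   // F(10) % MOD = 55
    System.out.println(ans[1]);   // F(11) % MOD = 89
}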