I have run into a weird issue for problem 3 of Project Euler. The program works for other numbers that are small, like 13195, but it throws this error when I try to crunch a big number like 600851475143:
Exception in thread "main" java.lang.ArithmeticException: / by zero
at euler3.Euler3.main(Euler3.java:16)
Here's my code:
//Number whose prime factors will be determined
long num = 600851475143L;
//Declaration of variables
ArrayList factorsList = new ArrayList();
ArrayList primeFactorsList = new ArrayList();
//Generates a list of factors
for (int i = 2; i < num; i++)
{
if (num % i == 0)
{
factorsList.add(i);
}
}
//If the integer(s) in the factorsList are divisible by any number between 1
//and the integer itself (non-inclusive), they get replaced by a zero
for (int i = 0; i < factorsList.size(); i++)
{
for (int j = 2; j < (Integer) factorsList.get(i); j++)
{
if ((Integer) factorsList.get(i) % j == 0)
{
factorsList.set(i, 0);
}
}
}
//Transfers all non-zero numbers into a new list called primeFactorsList
for (int i = 0; i < factorsList.size(); i++)
{
if ((Integer) factorsList.get(i) != 0)
{
primeFactorsList.add(factorsList.get(i));
}
}
Why is it only big numbers that cause this error?
Your code is just using Integer, which is a 32-bit type with a maximum value of 2147483647. It's unsurprising that it's failing when used for numbers much bigger than that. Note that your initial loop uses int as the loop variable, so it would actually loop forever if it didn't throw an exception: the value of i will go from 2147483647 to -2147483648 and continue.
Use BigInteger to handle arbitrarily large values, or Long if you're happy with a limited range but a larger one. (The maximum value of long / Long is 9223372036854775807L.)
However, I doubt that this is really the approach that's expected... it's going to take a long time for big numbers like that.
Not sure if it's the case as I don't know which line is which - but I notice your first loop uses an int.
//Generates a list of factors
for (int i = 2; i < num; i++)
{
if (num % i == 0)
{
factorsList.add(i);
}
}
As num is a long, it's possible that num > Integer.MAX_VALUE, so your loop wraps around to negative at Integer.MAX_VALUE and then loops up toward 0, giving you a num % 0 operation.
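For illustration, here is the same loop with the loop variable widened to long; this avoids the wrap-around, though iterating all the way to num is still far too slow for 600851475143:
//Using long for the loop variable avoids the int overflow,
//but iterating up to num remains impractically slow for large inputs.
for (long i = 2; i < num; i++)
{
    if (num % i == 0)
    {
        factorsList.add(i);
    }
}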
Why does your solution not work?
Well, numbers in hardware are finite: each type has a minimum and a maximum value. Java uses two's complement to store negative values, so 2147483647 + 1 == -2147483648, because the maximum value of type int is 2147483647. Exceeding that range is called overflow.
It seems as if you have an overflow bug. The loop variable i first becomes negative and eventually reaches 0, at which point you get java.lang.ArithmeticException: / by zero. If your computer can execute 10 million statements a second, this would take about 1h 10min to reproduce, so I leave it as an assumption and not a proof.
This is also the reason trivially simple statements like a+b can produce bugs.
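As a quick, standalone illustration of that overflow behaviour (plain Java, nothing specific to this problem):
// int arithmetic silently wraps around on overflow:
System.out.println(Integer.MAX_VALUE + 1); // prints -2147483648
// Math.addExact makes the overflow explicit instead of wrapping:
// System.out.println(Math.addExact(Integer.MAX_VALUE, 1)); // throws ArithmeticException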
How to fix it?
package margusmartseppcode.From_1_to_9;
public class Problem_3 {
static long lpf(long nr) {
    long max = 0;
    // i <= nr / i is an overflow-safe way of writing i * i <= nr
    for (long i = 2; i <= nr / i; i++)
        // divide each factor out completely; the last one divided out is the largest so far
        while (nr % i == 0) {
            max = i;
            nr = nr / i;
        }
    // whatever remains above 1 is itself a prime larger than any factor found
    return nr > 1 ? nr : max;
}
public static void main(String[] args) {
System.out.println(lpf(600851475143L));
}
}
You might think: "So how does this work?"
Well, my thought process went like this:
(Dynamic programming approach) If I had a list of primes x = {2, 3, 5, 7, 11, 13, 17, ...} up to some value xi > nr / 2, then finding the largest prime factor would be trivial:
I would start from the largest prime and test whether the division remainder with my number is zero; if it is, that is the answer.
If, after looping over all the elements, I did not find my answer, my number must be prime itself.
(Brute force, with filters) I assumed that:
my number's largest prime factor is small (under 10 million);
if my number is a multiple of some number, then I can reduce the loop size by that multiple.
I used the second approach here.
Note, however, that if my number were just a little off and were one of {600851475013, 600851475053, 600851475067, 600851475149, 600851475151}, then my assumptions would fail and the program would take a ridiculously long time to run. If the computer could execute 10 million statements per second, it would take 6.954 days to find the right answer.
In your brute-force approach, just generating the list of factors would take even longer, assuming you do not run out of memory first.
Is there a better way?
Sure, in Mathematica you could write it as:
P3[x_] := FactorInteger[x][[-1, 1]]
P3[600851475143]
or just FactorInteger[600851475143], and lookup the largest value.
This works because Mathematica has arbitrary-size integers. Java also has an arbitrary-size integer class called BigInteger.
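For completeness, a rough Java sketch of the same trial-division idea using BigInteger (a sketch only; the method name is mine, and for 600851475143 plain long arithmetic is already sufficient):
import java.math.BigInteger;

// Sketch: trial division with BigInteger, dividing each factor out completely.
static BigInteger largestPrimeFactorBig(BigInteger n) {
    BigInteger factor = BigInteger.valueOf(2);
    BigInteger last = BigInteger.ONE;
    while (factor.multiply(factor).compareTo(n) <= 0) {
        if (n.mod(factor).signum() == 0) {
            last = factor;          // remember the factor we just found
            n = n.divide(factor);   // and divide it out
        } else {
            factor = factor.add(BigInteger.ONE);
        }
    }
    // Whatever is left above 1 is itself prime and larger than anything found so far.
    return n.compareTo(BigInteger.ONE) > 0 ? n : last;
}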
Apart from the BigInteger problem mentioned by Jon Skeet, note the following:
you only need to test factors up to sqrt(num)
each time you find a factor, divide num by that factor, and then test that factor again
there's really no need to use a collection to store the primes in advance
My solution (which was originally written in Perl) would look something like this in Java:
long n = 600851475143L; // the original input
long s = (long)Math.sqrt(n); // no need to test numbers larger than this
long f = 2; // the smallest factor to test
do {
if (n % f == 0) { // check we have a factor
n /= f; // this is our new number to test
s = (long)Math.sqrt(n); // and our range is smaller again
} else { // find next possible divisor
f = (f == 2) ? 3 : f + 2;
}
} while (f <= s); // required result is in "n" (<= so prime-square cases like n = 9 are handled)
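As a quick check, printing n after the loop gives the expected answer (6857 is the known largest prime factor of 600851475143):
System.out.println(n); // prints 6857 for the input above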
Related
I am trying to write a Java method that checks whether a number is a perfect number or not.
A perfect number is a number that is equal to the sum of all its divisors (excluding itself).
For example, 6 is a perfect number because 1+2+3=6. Then, I have to write a Java program to use the method to display the first 5 perfect numbers.
I have no problem with this EXCEPT that it is taking forever to get the 5th perfect number which is 33550336.
I am aware that this is because of the for loop in my isPerfectNumber() method. However, I am very new to coding and I do not know how to come up with a better code.
public class Labreport2q1 {
public static void main(String[] args) {
//Display the 5 first perfect numbers
int counter = 0,
i = 0;
while (counter != 5) {
i++;
isPerfectNumber(i);
if (isPerfectNumber(i)) {
counter++;
System.out.println(i + " ");
}
}
}
public static boolean isPerfectNumber(int a) {
int divisor = 0;
int sum = 0;
for (int i = 1; i < a; i++) {
if (a % i == 0) {
divisor = i;
sum += divisor;
}
}
return sum == a;
}
}
This is the output, which is missing the 5th perfect number:
6
28
496
8128
Let's check the properties of a perfect number. This Math Overflow question tells us two very interesting things:
A perfect number is never a perfect square.
A perfect number is of the form 2^(k-1) × (2^k - 1).
The 2nd point is very interesting because it reduces our search field to barely nothing. An int in Java is 32 bits. And here we see a direct correlation between powers and bit positions. Thanks to this, instead of making millions and millions of calls to isPerfectNumber, we will be making less than 32 to find the 5th perfect number.
So we can already change the search field, that's your main loop.
int count = 0;
for (int k = 1; count < 5; k++) {
// Compute candidates based on the formula; for the first 5 these fit in an int.
int candidate = (1 << (k - 1)) * ((1 << k) - 1);
// Only test candidates, not all the numbers.
if (isPerfectNumber(candidate)) {
count++;
System.out.println(candidate);
}
}
This here is our big win. No other optimization will beat this: why test for 33 million numbers, when you can test less than 100?
But even though we have a tremendous improvement, your application as a whole can still be improved, namely your method isPerfectNumber(int).
Currently, you are still testing way too many numbers. A perfect number is the sum of all proper divisors. So if d divides n, n/d also divides n. And you can add both divisors at once. But the beauty is that you can stop at sqrt(n), because sqrt(n)*sqrt(n) = n, mathematically speaking. So instead of testing n divisors, you will only test sqrt(n) divisors.
Also, this means that you have to start thinking about corner cases. The corner cases are 1 and sqrt(n):
1 is a corner case because if you divide n by 1 you get n, but you don't add n when checking whether n is a perfect number; you only add 1. So we'll start our sum at 1 just to avoid too many ifs.
sqrt(n) is a corner case because we'd have to check whether sqrt(n) is an integer or not and it's tedious. BUT the Math Overflow question I referenced says that no perfect number is a perfect square, so that eases our loop condition.
Then, if at some point sum becomes greater than n, we can stop. The sum of proper divisors being greater than n indicates that n is abundant, and therefore not perfect. It's a small improvement, but a lot of candidates are actually abundant. So you'll probably save a few cycles if you keep it.
Finally, we have to take care of a slight issue: the number 1 as candidate. 1 is the first candidate, and will pass all our tests, so we have to make a special case for it. We'll add that test at the start of the method.
We can now write the method as follows:
static boolean isPerfectNumber(int n) {
// 1 would pass the rest because it has everything of a perfect number
// except that its only divisor is itself, and we need at least 2 divisors.
if (n < 2) return false;
// divisor 1 is such a corner case that it's very easy to handle:
// just start the sum with it already.
int sum = 1;
// We can stop the divisors at sqrt(n), but this is floored.
int sqrt = (int)Math.sqrt(n);
// A perfect number is never a square.
// It's useful to make this test here if we take the function
// without the context of the sparse candidates, because we
// might get some weird results if this method is simply
// copy-pasted and tested on all numbers.
// This condition can be removed in the final program because we
// know that no number of the form indicated above is a square.
if (sqrt * sqrt == n) {
return false;
}
// Since sqrt is floored, some values can still be interesting.
// For instance if you take n = 6, floor(sqrt(n)) = 2, and
// 2 is a proper divisor of 6, so we must keep it, we do it by
// using the <= operator.
// Also, sqrt * sqrt != n, so we can safely loop to sqrt
for (int div = 2; div <= sqrt; div++) {
if (n % div == 0) {
// Add both the divisor and n / divisor.
sum += div + n / div;
// Early fail if the number is abundant.
if (sum > n) return false;
}
}
return n == sum;
}
With these optimizations you can even find the 7th perfect number in under a second, provided you adapt the code to use longs instead of ints. And you could still find the 8th within 30 seconds.
So here's that program (test it online). I removed the comments, as the explanations are above.
public class Main {
public static void main(String[] args) {
int count = 0;
for (int k = 1; count < 8; k++) {
long candidate = (1L << (k - 1)) * ((1L << k) - 1);
if (isPerfectNumber(candidate)) {
count++;
System.out.println(candidate);
}
}
}
static boolean isPerfectNumber(long n) {
if (n < 2) return false;
long sum = 1;
long sqrt = (long)Math.sqrt(n);
for (long div = 2; div <= sqrt; div++) {
if (n % div == 0) {
sum += div + n / div;
if (sum > n) return false;
}
}
return n == sum;
}
}
The result of the above program is the list of the first 8 perfect numbers:
6
28
496
8128
33550336
8589869056
137438691328
2305843008139952128
You can find further optimizations, notably in the search, if you check whether 2^k - 1 is prime or not, as Eran says in their answer. But given that we have fewer than 100 candidates for longs, I don't find it useful to potentially gain a few milliseconds, because computing primes can also be expensive in this program. If you want to check for bigger perfect numbers, it makes sense, but here? No: it adds complexity, and I tried to keep these optimizations rather simple and straight to the point.
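If you did want that extra filter, one low-effort way to test whether 2^k - 1 is prime is BigInteger.isProbablePrime (an illustrative sketch, not a proper Lucas-Lehmer test; the method name is mine):
import java.math.BigInteger;

// Returns true if 2^k - 1 is (almost certainly) prime, in which case
// 2^(k-1) * (2^k - 1) is a perfect number.
static boolean isMersennePrime(int k) {
    BigInteger m = BigInteger.ONE.shiftLeft(k).subtract(BigInteger.ONE); // 2^k - 1
    return m.isProbablePrime(30); // probabilistic, but more than enough here
}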
There are some heuristics to break early from the loops, but finding the 5th perfect number still took me several minutes (I tried similar heuristics to those suggested in the other answers).
However, you can rely on Euler's proof that all even perfect numbers (and it is still unknown if there are any odd perfect numbers) are of the form:
2^(i-1) × (2^i - 1)
where both i and 2^i - 1 must be prime.
Therefore, you can write the following loop to find the first 5 perfect numbers very quickly:
int counter = 0,
i = 0;
while (counter != 5) {
i++;
if (isPrime (i)) {
if (isPrime ((int) (Math.pow (2, i) - 1))) {
System.out.println ((int) (Math.pow (2, i -1) * (Math.pow (2, i) - 1)));
counter++;
}
}
}
Output:
6
28
496
8128
33550336
You can read more about it here.
If you switch from int to long, you can use this loop to find the first 7 perfect numbers very quickly:
6
28
496
8128
33550336
8589869056
137438691328
The isPrime method I'm using is:
public static boolean isPrime (int a)
{
if (a == 1)
return false;
else if (a < 3)
return true;
else {
for (int i = 2; i * i <= a; i++) {
if (a % i == 0)
return false;
}
}
return true;
}
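A sketch of the long-based variant mentioned above, reusing the isPrime method (shifts are used instead of Math.pow to avoid rounding; 2^i - 1 still fits in an int for the exponents involved here):
int counter = 0;
int i = 0;
while (counter != 7) {
    i++;
    if (isPrime(i) && isPrime((int) ((1L << i) - 1))) {
        long perfect = (1L << (i - 1)) * ((1L << i) - 1); // 2^(i-1) * (2^i - 1)
        System.out.println(perfect);
        counter++;
    }
}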
*disclaimer, when I say "I have verified this is the correct result", please interpret this as I have checked my solution against the answer according to WolframAlpha, which I consider to be pretty darn accurate.
*goal, to find the sum of all the prime numbers less than or equal to 2,000,000 (two million)
*issue, my code will output the correct result whenever my range of tested values is less than or equal to approximately 1,300,000; once the test input becomes larger than that, my output is off...
test input: 199,999
test output: 1,709,600,813
correct result: 1,709,600,813

test input: 799,999
test output: 24,465,663,438
correct result: 24,465,663,438

test input: 1,249,999
test output: 57,759,511,224
correct result: 57,759,511,224

test input: 1,499,999
test output: 82,075,943,263
correct result: 82,074,443,256

test input: 1,999,999
test output: 142,915,828,925
correct result: 142,913,828,922

test input: 49,999,999
test output: 72,619,598,630,294
correct result: 72,619,548,630,277
*my code, what's going on, why does it work for smaller inputs? I even used long, rather than int...
long n = 3;
long i = 2;
long prime = 0;
long sum = 0;
while (n <= 1999999) {
while (i <= Math.sqrt(n)) { // a composite number always has a divisor
                            // less than or equal to its square root,
                            // so we only need to check i up through sqrt(n)
if (n % i != 0) { // saves computation time
i += 2; // if there's a remainder, increment i and check again
} else {
i = 3; // i doesn't need to go back to 2, because n+=2 means we'll
// only ever be checking odd numbers
n += 2; // makes it so we only check odd numbers
}
} // if no divisor was found up to sqrt(n), then n is prime, so move on
prime = n; // set the current prime to what that number n was
sum = sum + prime;
i = 3; // re-initialize i to 3
n += 2; // increment n by 2 so that we can check the next odd number
}
System.out.println(sum+2); // adding 2 because we skip it at beginning
help please :)
The problem is that you don't properly check whether the latest prime to be added to the sum is less than the limit. You have two nested loops, but you only check the limit on the outer loop:
while (n <= 1999999) {
But you don't check in the inner loop:
while (i <= Math.sqrt(n)) {
Yet you repeatedly advance to the next candidate prime (n += 2;) inside that loop. This allows the candidate prime to exceed the limit, since the limit is only checked for the very first candidate prime in each iteration of the outer loop and not for any subsequent candidate primes visited by the inner loop.
To take an example, in the case with the limit value of 1,999,999, this lets in the next prime after 1,999,999, which is 2,000,003. You'll note that the correct value, 142,913,828,922, is exactly 2,000,003 less than your result of 142,915,828,925.
A simpler structure
Here's one way the code could be structured, using a label and continue with that label to simplify structure:
public static final long primeSum(final long maximum) {
if (maximum < 2L) return 0L;
long sum = 2L;
// Put a label here so that we can skip to the next outer loop iteration.
outerloop:
for (long possPrime = 3L; possPrime <= maximum; possPrime += 2L) {
for (long possDivisor = 3L; possDivisor*possDivisor <= possPrime; possDivisor += 2L) {
// If we find a divisor, continue with the next outer loop iteration.
if (possPrime % possDivisor == 0L) continue outerloop;
}
// This possible prime passed all tests, so it's an actual prime.
sum += possPrime;
}
return sum;
}
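A quick usage check (assuming the method above; 142,913,828,922 is the known sum of the primes below two million):
System.out.println(primeSum(1999999L)); // expected output: 142913828922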
I am writing a Java program that calculates the largest prime factor of a large number. But I have an issue with the program's complexity, I don't know what has caused the program to run forever for large numbers, it works fine with small numbers.
I have proceeded as follows:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
public class Largest_prime_factor {
public static void main(String[] args)
{
//ArrayList primesArray = new ArrayList();
ArrayList factorArray = new ArrayList();
long largest = 1;
long number = 600851475143L ;
long i, j, k;
//the array list factorArray will have all factors of number
for (i = 2; i < number; i++)
{
if( number % i == 0)
{
factorArray.add(i);
}
}
Here, the ArrayList will have all the factors of the number.
I'll then need to keep only the prime ones; for that, I used a method that checks whether a number is prime, and if it's not, I remove it from the list using the following method:
java.util.ArrayList.remove()
So the next part of the code is as follows:
for (i = 2; i < number; i++)
{
if (!isPrime(i))
{
factorArray.remove(i);
System.out.println(factorArray);
}
}
System.out.println(Collections.max(factorArray));
}
The last line prints the largest number of factorArray, which is what I am looking for.
public static boolean isPrime(long n)
{
if(n > 2 && (n & 1) == 0)
return false;
for(int i = 3; i * i <= n; i += 2)
if (n % i == 0)
return false;
return true;
}
}
The function above is what I used to determine if the number is a prime or not before removing it from the list.
This program works perfectly for small numbers, but it takes forever to give an output for large numbers, although the last function is pretty fast.
At first, I used to check if a number is prime or not inside of the first loop, but it was even slower.
You are looping over 600851475143 numbers.
long number = 600851475143L ;
for (i = 2; i < number; i++)
Even if we assume that each iteration takes very very small time (as small as 1 microsecond), it'll still take days before the loop finishes.
You need to optimise your prime-finding logic in order for this program to run faster.
One way to reduce the iterations to a reasonable number is to loop only up to the square root of number.
for (i = 2; i < Math.sqrt(number); i++)
or
for (i = 2; i*i < number; i++)
The calculation of the prime factors of 600851475143L should take less than a millisecond (with a not totally inefficient algorithm). The main parts your code is currently missing are:
The border should be sqrt(number) and not number.
The current value should be checked in a while-loop (to prevent non-prime factors from being added to the list, and to reduce the range still to check).
The maximum value (as well as the border) should be decreased to number/factor after finding a factor.
Further improvements are possible, e.g. to iterate only over non-even numbers (or only iterate over numbers that are neither a multiple of 2 and 3) etc.
An example implementation for the same question on codereview (link):
public static long largestPrimeFactor(
        final long input) {
    if (input < 2)
        throw new IllegalArgumentException();
    long n = input;
    long last = 0;
    // strip out all factors of 2, then all factors of 3
    for (; (n & 1) == 0; n >>= 1)
        last = 2;
    for (; n % 3 == 0; n /= 3)
        last = 3;
    // remaining candidates have the form 6k +/- 1: v starts at 5 and the
    // step "add" alternates between 2 and 4 (add ^= 6 flips 2 <-> 4);
    // each time a factor is found, n and the border shrink
    for (long v = 5, add = 2, border = (long) Math.sqrt(n); v <= border; v += add, add ^= 6)
        while (n % v == 0)
            border = (long) Math.sqrt(n /= last = v);
    // anything left above 1 is itself the largest prime factor
    return n == 1 ? last : n;
}
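Example usage (6857 being the known largest prime factor of 600851475143):
System.out.println(largestPrimeFactor(600851475143L)); // prints 6857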
for (i = 2; i < number; i++)
{
if( number % i == 0)
{
factorArray.add(i);
}
}
For a large input, you will be iterating all the way up to the value of the number. The same goes for the loop that removes factors.
long number = 600851475143L ;
This is a huge number, and you're looping over it twice. Try printing a progress count every 10,000 or 100,000 iterations (if (i % 10000 == 0) print i) and you'll get an idea of how fast it's moving.
One possible improvement is to test only whether the prime numbers smaller than the large number divide it.
So I checked
for (i=2; i < number; i++)
{
if(isPrime(i))
{
if( number % i == 0)
{
factorArray.add(i);
}
}
}
So here I'll only be dividing by prime numbers instead of dividing by all numbers smaller than 600851475143.
But this is still not fast; a complete modification of the algorithm is necessary to obtain an optimal one.
@Balkrishna Rawool's suggestion is the right way to go. For that I would suggest changing the iteration like this: for (i = 3; i < Math.sqrt(number); i += 2) and handling the 2 manually. That will decrease your looping, because no even number except 2 is prime.
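Putting that suggestion together, a rough sketch (mine, not a drop-in replacement for the original code) could look like this: handle 2 first, then test only odd candidates and divide each factor out as it is found.
long number = 600851475143L;
long largest = 1;
// handle the only even prime separately
while (number % 2 == 0) {
    largest = 2;
    number /= 2;
}
// now only odd candidates can be factors
for (long i = 3; i * i <= number; i += 2) {
    while (number % i == 0) {
        largest = i;
        number /= i;
    }
}
// whatever remains above 1 is itself a prime factor, and the largest one
if (number > 1) {
    largest = number;
}
System.out.println(largest); // 6857 for the number above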
I've managed to get a version of Euler's Totient Function working, albeit one that only works for smaller numbers (smaller here meaning small compared to the 1024-bit numbers I need it to handle).
My version is here -
public static BigInteger eulerTotientBigInt(BigInteger calculate) {
BigInteger count = new BigInteger("0");
for(BigInteger i = new BigInteger("1"); i.compareTo(calculate) < 0; i = i.add(BigInteger.ONE)) {
BigInteger check = GCD(calculate,i);
if(check.compareTo(BigInteger.ONE)==0) {//coprime
count = count.add(BigInteger.ONE);
}
}
return count;
}
While this works for smaller numbers, it does so by iterating through every possible value from 1 to the number being calculated. With large BigIntegers, this is totally unfeasible.
I've read that it's possible to divide the number on each iteration, removing the need to go through the values one by one. I'm just not sure what I'm supposed to divide by what (some of the examples I've looked at are in C and use longs and a square root; as far as I know, I can't calculate an accurate square root of a BigInteger). I'm also wondering whether, for modular arithmetic such as this, the function needs to include an argument stating what the modulus is. I'm totally unsure on that, so any advice is much appreciated.
Can anyone point me in the right direction here?
PS: I deleted this question when I found Modifying Euler Totient Function, and adapted it to work with BigIntegers -
public static BigInteger etfBig(BigInteger n) {
BigInteger result = n;
BigInteger i;
for(i = new BigInteger("2"); (i.multiply(i)).compareTo(n) <= 0; i = i.add(BigInteger.ONE)) {
if((n.mod(i)).compareTo(BigInteger.ZERO) == 0)
result = result.divide(i);
while(n.mod(i).compareTo(BigInteger.ZERO)== 0 )
n = n.divide(i);
}
if(n.compareTo(BigInteger.ONE) > 0)
result = result.subtract((result.divide(n)));
return result;
}
And it does give an accurate result, but when passed a 1024-bit number it runs forever (I'm still not sure if it will ever finish; it's been running for 20 minutes).
There is a formula for the totient function, which requires the prime factorization of n.
Look here.
The formula is:
phi(n) = n * (p1 - 1) / p1 * (p2 - 1) / p2 ....
where p1, p2, etc. are all the distinct prime divisors of n.
Note that you only need BigInteger, not floating point, because the division is always exact.
So now the problem is reduced to finding all prime factors, which is better than iteration.
Here is the whole solution:
int n = 36; //this is the number you want to find the totient of (36 here as an example)
int tot = n; //this will be the totient at the end of the sample
for (int p = 2; p*p <= n; p++)
{
if (n%p==0)
{
tot /= p;
tot *= (p-1);
while ( n % p == 0 )
n /= p;
}
}
if ( n > 1 ) { // now n is the largest prime divisor
tot /= n;
tot *= (n-1);
}
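Since the question is specifically about BigInteger, here is a rough adaptation of that snippet (a sketch with a method name of my choosing; the divisions stay exact, but note this is still trial division, so it will not make 1024-bit inputs feasible):
import java.math.BigInteger;

static BigInteger totient(BigInteger n) {
    BigInteger tot = n;
    for (BigInteger p = BigInteger.valueOf(2);
         p.multiply(p).compareTo(n) <= 0;
         p = p.add(BigInteger.ONE)) {
        if (n.mod(p).signum() == 0) {
            // apply the factor (p - 1) / p once per distinct prime p
            tot = tot.divide(p).multiply(p.subtract(BigInteger.ONE));
            while (n.mod(p).signum() == 0) {
                n = n.divide(p);
            }
        }
    }
    if (n.compareTo(BigInteger.ONE) > 0) { // n is now the largest prime divisor
        tot = tot.divide(n).multiply(n.subtract(BigInteger.ONE));
    }
    return tot;
}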
The algorithm you are trying to write is equivalent to factoring the argument n, which means you should expect it to run forever, practically speaking until either your computer dies or you die. See this post in mathoverflow for more information: How hard is it to compute the Euler totient function?.
If, on the other hand, you want the value of the totient for some large number for which you have the factorization, pass the argument as a sequence of (prime, exponent) pairs.
The etfBig method has a problem.
Euler's product formula is n*((factor-1)/factor) for all factors.
Note: Petar's code has it as:
tot /= p;
tot *= (p-1);
In the etfBig method, replace result = result.divide(i);
with
result = result.multiply(i.subtract(BigInteger.ONE)).divide(i);
Testing from 2 to 200 then produces the same results as the regular algorithm.
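For reference, the etfBig method with that single line corrected (everything else unchanged):
public static BigInteger etfBig(BigInteger n) {
    BigInteger result = n;
    BigInteger i;
    for (i = new BigInteger("2"); (i.multiply(i)).compareTo(n) <= 0; i = i.add(BigInteger.ONE)) {
        if ((n.mod(i)).compareTo(BigInteger.ZERO) == 0)
            result = result.multiply(i.subtract(BigInteger.ONE)).divide(i); // the corrected line
        while (n.mod(i).compareTo(BigInteger.ZERO) == 0)
            n = n.divide(i);
    }
    if (n.compareTo(BigInteger.ONE) > 0)
        result = result.subtract(result.divide(n));
    return result;
}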
Resolution:
It turns out there is (probably) "nothing wrong" with the code itself; it is just inefficient. If my math is correct and I leave it running, it will be done by Friday, October 14, 2011. I'll let you know!
Warning: this may contain spoilers if you are trying to solve Project Euler #3.
The problem says this:
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
Here's my attempt to solve it. I'm just starting with Java and programming in general, and I know this isn't the nicest or most efficient solution.
import java.util.ArrayList;
public class Improved {
public static void main(String[] args) {
long number = 600851475143L;
// long number = 13195L;
long check = number - 1;
boolean prime = true;
ArrayList<Number> allPrimes = new ArrayList<Number>();
do {
for (long i = check - 1; i > 2; i--) {
if (check % i == 0) {
prime = false;
}
}
if (prime == true && number % check == 0) {
allPrimes.add(check);
}
prime = true;
check--;
} while (check > 2);
System.out.println(allPrimes);
}
}
When number is set to 13195, the program works just fine, producing the result [29, 13, 7, 5] as it should.
Why doesn't this work for larger values of number?
Closely related (but not dupe): "Integer number too large" error message for 600851475143
The code is very slow; it is probably correct but will run for an unacceptably large amount of time (about n^2/2 iterations of the innermost loop for an input n). Try computing the factors from smallest to largest, and divide out each factor as you find it, such as:
for (i = 2; i*i <= n; ++i) {
if (n % i == 0) {
allPrimes.add(i);
while (n % i == 0) n /= i;
}
}
if (n != 1) allPrimes.add(n);
Note that this code will only add prime factors, even without an explicit check for primality.
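A complete version of that sketch for the problem's input (the class name is mine; 600851475143 factors as 71 × 839 × 1471 × 6857, so the last element printed is the answer):
import java.util.ArrayList;

public class Factors { // hypothetical class name
    public static void main(String[] args) {
        long n = 600851475143L;
        ArrayList<Long> allPrimes = new ArrayList<Long>();
        for (long i = 2; i * i <= n; ++i) {
            if (n % i == 0) {
                allPrimes.add(i);
                while (n % i == 0) n /= i;
            }
        }
        if (n != 1) allPrimes.add(n); // leftover prime larger than sqrt of the original
        System.out.println(allPrimes); // [71, 839, 1471, 6857]
    }
}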
Almost all the Project Euler problems can be solved using a signed datatype with 64 bits (with the exception of problems that purposefully try to go big like problem 13).
If you're going to be working with primes (hey, it's Project Euler, you're going to be working with primes), get a head start and implement the Sieve of Eratosthenes, the Sieve of Atkin, or the Sieve of Sundaram.
One mathematical trick used across many problems is to short-circuit the search for factors by working only up to the square root of the target. Any factor greater than the square root corresponds to a factor less than the square root.
You could also speed this up by only checking from 2 to the square root of the target number. Factors come in pairs, one above the square root and one below, so when you find one factor you also find its pair. In the case of the prime test, once you find any factor you can break out of the loop.
Another optimization could be to find the factors before checking that they are prime.
And for very large numbers, it really is faster to experiment with a sieve rather than brute forcing it, especially if you are testing a lot of numbers for primes. Just be careful you're not doing something algorithmically inefficient to implement the sieve (for example, adding or removing primes from lists will cost you an extra O(n)).
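For reference, a minimal Sieve of Eratosthenes sketch in Java; a flat boolean array avoids the costly list insertions and removals mentioned above:
// Marks composite[i] == true for every composite i up to limit,
// so composite[p] == false means p is prime (for p >= 2).
static boolean[] sieve(int limit) {
    boolean[] composite = new boolean[limit + 1];
    for (int i = 2; (long) i * i <= limit; i++) {
        if (!composite[i]) {
            for (int j = i * i; j <= limit; j += i) {
                composite[j] = true;
            }
        }
    }
    return composite;
}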
Another approach (there is no need to store all primes):
private static boolean isOddPrime(long x) {
/* because x is odd, the even factors can be skipped */
for ( int i = 3 ; i*i <= x ; i+=2 ) {
if ( x % i == 0 ) {
return false;
}
}
return true;
}
public static void main(String[] args) {
long nr = 600851475143L;
long max = 1;
for ( long i = 3; i <= nr ; i+=2 ) {
if ( nr % i == 0 ) {
nr/=i;
if ( isOddPrime(i) ){
max = i;
}
}
}
System.out.println(max);
}
It takes less than 1 ms.