I'm working on some Java code right now. I've gotten it working fine, but the point of the assignment is to make it factorize big numbers (over 30 digits). It does that, but it can take over 15 minutes, which is no good. My professor assures me that the algorithm I am using will work for numbers up to 2^70 and should do it in about five minutes. I have been trying to come up with ways to speed it up (incrementing by 2 instead of 1, etc.), but I can't figure out how to make it move faster without skipping some factors. Any ideas? I also figured the elliptic curve method would be better, but he told me not to deal with that right now.
Here is my code (PS: the sqrt is my own function, but I am sure it works):
public String factorizer(BigInteger monster){
    System.out.println("monster =" + monster);
    String factors = "";
    BigInteger top = maths.monsterSqrt(monster);
    if(monster.mod(two).equals(0));
    BigInteger jump = two;
    for(BigInteger bi = two; bi.compareTo(top) <= 0; bi = bi.add(jump)){
        while(monster.mod(bi).equals(zero)){
            factors += "+" + bi + "";
            monster = monster.divide(bi);
            jump = one;
        }
    }
    if(monster.compareTo(BigInteger.ONE) == 1){
        factors += "+" + monster;
    }
    return factors;
}
Here is my version of integer factorization by trial division:
public static LinkedList<BigInteger> tdFactors(BigInteger n)
{
    BigInteger two = BigInteger.valueOf(2);
    LinkedList<BigInteger> fs = new LinkedList<>();
    if (n.compareTo(two) < 0)
    {
        throw new IllegalArgumentException("must be greater than one");
    }
    while (n.mod(two).equals(BigInteger.ZERO))
    {
        fs.add(two);
        n = n.divide(two);
    }
    if (n.compareTo(BigInteger.ONE) > 0)
    {
        BigInteger f = BigInteger.valueOf(3);
        while (f.multiply(f).compareTo(n) <= 0)
        {
            if (n.mod(f).equals(BigInteger.ZERO))
            {
                fs.add(f);
                n = n.divide(f);
            }
            else
            {
                f = f.add(two);
            }
        }
        fs.add(n);
    }
    return fs;
}
This code is explained in an essay on my blog, where there is also an explanation of Pollard's rho algorithm, which may be more suitable for factoring big integers.
By the way, 30 digits is not a particularly big factorization problem these days. Anything more than a few seconds is too long.
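For reference, here is a bare-bones sketch of Pollard's rho with BigInteger (the Floyd cycle-finding variant). This is only an illustration, not the version from the blog post; drop it into a class with java.math.BigInteger and java.util.Random imported. It assumes n is an odd composite with small factors already stripped off, and a failed run (one that returns n) should simply be retried with a different constant c:

static BigInteger pollardRho(BigInteger n) {
    Random rnd = new Random();
    BigInteger x = new BigInteger(n.bitLength(), rnd).mod(n);  // tortoise
    BigInteger y = x;                                          // hare
    BigInteger c = new BigInteger(n.bitLength(), rnd).mod(n);  // constant in x -> x^2 + c
    BigInteger d = BigInteger.ONE;
    while (d.equals(BigInteger.ONE)) {
        x = x.multiply(x).add(c).mod(n);        // tortoise: one step
        y = y.multiply(y).add(c).mod(n);        // hare: two steps
        y = y.multiply(y).add(c).mod(n);
        d = x.subtract(y).abs().gcd(n);         // a collision modulo a prime factor shows up in the gcd
    }
    return d; // a nontrivial factor with high probability (equals n on a failed run)
}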
When you divide your monster by a prime factor, you should also adjust top accordingly. As it is, the outer loop always runs up to the square root of the original number, in increments of 1 or 2, which for a 30-digit number is on the order of 10^15 steps... it's a wonder you finish in only 15 minutes!
If your monster number has very large prime factors (say it is a prime itself), then you can forget about good performance in any case.
Note that your example code does the increments wrong: if the original number is not even then jump will always remain two, meaning that you only investigate even factors and hence will not find any.
Not sure why you are returning a String!
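To make the advice above concrete, here is one possible corrected sketch (not the original code): it strips out the factors of 2 first, so the step of 2 never skips odd candidates, and it shrinks top every time a factor is divided out. The two/zero constants and maths.monsterSqrt() from the question are assumed to exist:

public String factorizer(BigInteger monster) {
    StringBuilder factors = new StringBuilder();
    while (monster.mod(two).equals(zero)) {            // handle 2 separately
        factors.append("+").append(two);
        monster = monster.divide(two);
    }
    BigInteger top = maths.monsterSqrt(monster);
    for (BigInteger bi = BigInteger.valueOf(3); bi.compareTo(top) <= 0; bi = bi.add(two)) {
        while (monster.mod(bi).equals(zero)) {
            factors.append("+").append(bi);
            monster = monster.divide(bi);
            top = maths.monsterSqrt(monster);          // shrink the limit as monster shrinks
        }
    }
    if (monster.compareTo(BigInteger.ONE) > 0) {       // whatever is left is prime
        factors.append("+").append(monster);
    }
    return factors.toString();
}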
This works for me. Note that the loop condition compares i with n / i, and that n shrinks (n = n / i) each time a factor is divided out:
// Memoization of factors.
static Map<BigInteger, List<BigInteger>> factors = new HashMap<>();
private static final BigInteger TWO = BigInteger.ONE.add(BigInteger.ONE);

public static List<BigInteger> factors(BigInteger n, boolean duplicates) {
    // Have we done this one before? (Remember the original n: it gets divided down below.)
    BigInteger original = n;
    List<BigInteger> f = factors.get(original);
    if (f == null) {
        // Start empty.
        f = new ArrayList<>();
        // Check for duplicates.
        BigInteger last = BigInteger.ZERO;
        // Limit the range as far as possible.
        for (BigInteger i = TWO; i.compareTo(n.divide(i)) <= 0; i = i.add(BigInteger.ONE)) {
            // Can have multiple copies of the same factor.
            while (n.mod(i).equals(BigInteger.ZERO)) {
                if (duplicates || !i.equals(last)) {
                    f.add(i);
                    last = i;
                }
                // Remove that factor.
                n = n.divide(i);
            }
        }
        if (n.compareTo(BigInteger.ONE) > 0) {
            // Could be a residue.
            if (duplicates || !n.equals(last)) {
                f.add(n);
            }
        }
        // Memoize against the original value, not the divided-down one.
        factors.put(original, f);
    }
    return f;
}
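For instance, a quick sanity check of the method above (expected output shown as a comment; 600851475143 = 71 × 839 × 1471 × 6857):

List<BigInteger> f = factors(new BigInteger("600851475143"), true);
System.out.println(f); // [71, 839, 1471, 6857]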
The purpose of this class is to calculate the nth number of the Lucas sequence. I am using the data type long because the problem wants me to print the 215th number. The result for the 215th number in the Lucas sequence is: 855741617674166096212819925691459689505708239. The problem I am getting is that at some points the result is negative. I do not understand why I am getting a negative number when the calculation is always adding positive numbers. I also have two methods, since the question was to create an efficient algorithm. One of the methods uses recursion, but its efficiency is O(2^n), and that is of no use to me when trying to get the 215th number. The other method uses a for loop, whose efficiency is significantly better. Can someone please help me find where the error is? I am not sure if it has anything to do with the data type or if it is something else.
Note: When trying to get the 91st number I get a negative number and when trying to get the 215th number I also get a negative number.
import java.util.Scanner;

public class Problem_3
{
    static long lucasNum;
    static long firstBefore;
    static long secondBefore;

    static void findLucasNumber(long n)
    {
        if(n == 0)
        {
            lucasNum = 2;
        }
        if(n == 1)
        {
            lucasNum = 1;
        }
        if(n > 1)
        {
            firstBefore = 1;
            secondBefore = 2;
            for(int i = 1; i < n; i++)
            {
                lucasNum = firstBefore + secondBefore;
                secondBefore = firstBefore;
                firstBefore = lucasNum;
            }
        }
    }

    static long recursiveLucasNumber(int n)
    {
        if(n == 0)
        {
            return 2;
        }
        if(n == 1)
        {
            return 1;
        }
        return recursiveLucasNumber(n - 1) + recursiveLucasNumber(n - 2);
    }

    public static void main(String[] args)
    {
        System.out.println("Which number would you like to know from "
                + "the Lucas Sequence?");
        Scanner scan = new Scanner(System.in);
        long num = scan.nextInt();
        findLucasNumber(num);
        System.out.println(lucasNum);
        //System.out.println(recursiveLucasNumber(num));
    }
}
Two observations:
The answer you are expecting (855741617674166096212819925691459689505708239) is way larger than you can represent using a long. So (obviously) if you attempt to calculate it using long arithmetic you are going to get integer overflow ... and a garbage answer.
Note: this observation applies for any algorithm in which you use a Java integer primitive value to represent the Lucas numbers. You would run into the same errors with recursion ... eventually.
Solution: use BigInteger.
You have implemented iterative and pure recursive approaches. There is a third approach: recursion with memoization. If you apply memoization correctly to the recursive solution, you can calculate L(N) in O(N) arithmetic operations.
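A minimal sketch of that third approach, assuming BigInteger and a HashMap as the memo (names are illustrative); lucas(215) should then print the 45-digit value quoted in the question:

static Map<Integer, BigInteger> memo = new HashMap<>();

static BigInteger lucas(int n) {
    if (n == 0) return BigInteger.valueOf(2);
    if (n == 1) return BigInteger.ONE;
    BigInteger cached = memo.get(n);                  // reuse previously computed values
    if (cached != null) return cached;
    BigInteger result = lucas(n - 1).add(lucas(n - 2));
    memo.put(n, result);
    return result;
}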
Java data type long can contain only 64-bit numbers in range -9223372036854775808 .. 9223372036854775807. Negative numbers arise due to overflow.
It seems you need the BigInteger class for arbitrary-precision integer arithmetic.
I wasn't aware of the Lucas numbers before this thread, but from Wikipedia it looks like they are related to the Fibonacci sequence (F = Fibonacci, L = Lucas):
L_n = F_(n-1) + F_(n+1)
Thus, if your algorithm is too slow, you could compute the Fibonacci numbers in closed form and then derive the Lucas number from them; alternatively, you could use the closed form given in the Wikipedia article directly (see https://en.wikipedia.org/wiki/Lucas_number).
Example code:
public static void main(String[] args) {
    long n = 4;
    double fibo = computeFibo(n);
    double fiboAfter = computeFibo(n + 1);
    double fiboBefore = computeFibo(n - 1);
    System.out.println("fibonacci n:" + Math.round(fibo));
    System.out.println("fibonacci: n+1:" + Math.round(fiboAfter));
    System.out.println("fibonacci: n-1:" + Math.round(fiboBefore));
    System.out.println("lucas:" + (Math.round(fiboAfter) + Math.round(fiboBefore)));
}

private static double computeFibo(long n) {
    double phi = (1 + Math.sqrt(5)) / 2.0;
    double psi = -1.0 / phi;
    return (Math.pow(phi, n) - Math.pow(psi, n)) / Math.sqrt(5);
}
To work around the long size limit you could use Java's BigDecimal (https://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html). This becomes necessary earlier in this approach, as the powers in the formula grow very quickly.
I was asked to find a way to check whether a number is in the Fibonacci sequence or not.
The constraints are
1≤T≤10^5
1≤N≤10^10
where the T is the number of test cases,
and N is the given number, the Fibonacci candidate to be tested.
I wrote the following, using the fact that a number n is Fibonacci if and only if one or both of (5*n^2 + 4) or (5*n^2 - 4) is a perfect square:
import java.io.*;
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        for(int i = 0; i < n; i++){
            int cand = sc.nextInt();
            if(cand < 0){ System.out.println("IsNotFibo"); return; }
            int aTest = (5 * (cand * cand)) + 4;
            int bTest = (5 * (cand * cand)) - 4;
            int sqrt1 = (int)Math.sqrt(aTest); // Taking square root of aTest, taking into account only the integer part.
            int sqrt2 = (int)Math.sqrt(bTest); // Taking square root of bTest, taking into account only the integer part.
            if((sqrt1 * sqrt1 == aTest) || (sqrt2 * sqrt2 == bTest)){
                System.out.println("IsFibo");
            }else{
                System.out.println("IsNotFibo");
            }
        }
    }
}
But it's not clearing all the test cases. What bug fixes can I make?
A much simpler solution is based on the fact that there are only 49 Fibonacci numbers below 10^10.
Precompute them and store them in an array or hash table for existence checks.
The runtime complexity will be O(log N + T):
Set<Long> nums = new HashSet<>();
long a = 1, b = 2;
while (a <= 10000000000L) {
    nums.add(a);
    long c = a + b;
    a = b;
    b = c;
}
// then for each query, use nums.contains() to check for Fibonacci-ness
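A hypothetical driver for the T queries could then look like this (input/output format as in the question):

Scanner sc = new Scanner(System.in);
int t = sc.nextInt();
for (int i = 0; i < t; i++) {
    long cand = sc.nextLong();
    System.out.println(nums.contains(cand) ? "IsFibo" : "IsNotFibo");
}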
If you want to go down the perfect square route, you might want to use arbitrary-precision arithmetic:
// find ceil(sqrt(n)) in O(log n) steps
BigInteger ceilSqrt(BigInteger n) {
    // use binary search to find the smallest x with x^2 >= n
    BigInteger lo = BigInteger.valueOf(1),
               hi = n;
    while (lo.compareTo(hi) < 0) {
        BigInteger mid = lo.add(hi).shiftRight(1);
        if (mid.multiply(mid).compareTo(n) >= 0)
            hi = mid;
        else
            lo = mid.add(BigInteger.ONE);
    }
    return lo;
}

// checks if n is a perfect square
boolean isPerfectSquare(BigInteger n) {
    BigInteger x = ceilSqrt(n);
    return x.multiply(x).equals(n);
}
Your tests for perfect squares involve floating point calculations. That is liable to give you incorrect answers because floating point calculations typically give you inaccurate results. (Floating point is at best an approximate to Real numbers.)
In this case sqrt(n*n) might give you n - epsilon for some small epsilon and (int) sqrt(n*n) would then be n - 1 instead of the expected n.
Restructure your code so that the tests are performed using integer arithmetic. But note that N < 10^10 means that N^2 < 10^20. That is bigger than a long ... so you will need to use ...
UPDATE
There is more to it than this. First, Math.sqrt(double) is guaranteed to give you a double result that is rounded to the closest double value to the true square root. So you might think we are in the clear (as it were).
But the problem is that N multiplied by N has up to 20 significant digits ... which is more than can be represented when you widen the number to a double in order to make the sqrt call. (A double has 15.95 decimal digits of precision, according to Wikipedia.)
On top of that, the code as written does this:
int cand = sc.nextInt();
int aTest = (5 * (cand * cand)) + 4;
For large values of cand, that is liable to overflow. And it will even overflow if you use long instead of int ... given that the cand values may be up to 10^10. (A long can represent numbers up to +9,223,372,036,854,775,807 ... which is less than 10^20.) And then we have to multiply N^2 by 5.
In summary, while the code should work for small candidates, for really large ones it could either break when you attempt to read the candidate (as an int) or it could give the wrong answer due to integer overflow (as a long).
Fixing this requires a significant rethink. (Or deeper analysis than I have done to show that the computational hazards don't result in an incorrect answer for any large N in the range of possible inputs.)
According to this link, a number n is Fibonacci if and only if one or both of (5*n^2 + 4) or (5*n^2 - 4) is a perfect square, so you can basically do this check.
Hope this helps :)
Use binary search and the Fibonacci Q-matrix for an O((log n)^2) solution per test case if you use exponentiation by squaring.
Your solution does not work because it involves rounding floating point square roots of large numbers (potentially large enough not to even fit in a long), which sometimes will not be exact.
The binary search will work like this: find Q^m: if the m-th Fibonacci number is larger than yours, set right = m, if it is equal return true, else set left = m + 1.
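A sketch of that idea, using the "fast doubling" identities (equivalent to exponentiating the Q-matrix), so each Fibonacci number costs O(log m) multiplications; the index bound of 200 is an assumption that comfortably covers N ≤ 10^10:

// fibPair(m) returns {F(m), F(m+1)} in O(log m) BigInteger multiplications.
static BigInteger[] fibPair(long m) {
    if (m == 0) return new BigInteger[]{ BigInteger.ZERO, BigInteger.ONE };
    BigInteger[] h = fibPair(m / 2);
    BigInteger a = h[0], b = h[1];
    BigInteger c = a.multiply(b.shiftLeft(1).subtract(a)); // F(2k)   = F(k) * (2*F(k+1) - F(k))
    BigInteger d = a.multiply(a).add(b.multiply(b));       // F(2k+1) = F(k)^2 + F(k+1)^2
    return (m % 2 == 0) ? new BigInteger[]{ c, d } : new BigInteger[]{ d, c.add(d) };
}

// Binary search for the smallest index whose Fibonacci number is >= n.
static boolean isFibonacci(BigInteger n) {
    long left = 0, right = 200;                            // F(200) is far beyond 10^10
    while (left < right) {
        long m = (left + right) / 2;
        int cmp = fibPair(m)[0].compareTo(n);
        if (cmp == 0) return true;
        if (cmp < 0) left = m + 1; else right = m;
    }
    return fibPair(left)[0].compareTo(n) == 0;
}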
As was correctly said, the sqrt could be rounded down. So:
Even if you use long instead of int: long only has 19 digits.
Even if you use Math.round() instead of a plain (int) or (long) cast. (Notice that your function wouldn't work correctly even on small numbers because of that cast.)
double has about 15 significant digits and long has 19, so you can't work with the squares; you need 20+ digits.
BigInteger and BigDecimal have no sqrt() function (BigInteger only gained one in Java 9).
So, you have three ways:
write your own sqrt for BigInteger.
check all numbers around the imprecise double sqrt() result to see whether one of them is the real square root. That means juggling the numbers and their error bounds at the same time (it's horrible!).
count all Fibonacci numbers under 10^10 and compare against them.
The last variant is by far the simplest one.
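That said, the first variant is also only a few lines: here is a sketch of a floor integer square root for BigInteger using Newton's iteration.

static BigInteger isqrt(BigInteger n) {
    if (n.signum() == 0) return BigInteger.ZERO;
    BigInteger x = BigInteger.ONE.shiftLeft((n.bitLength() + 1) / 2); // initial guess >= sqrt(n)
    while (true) {
        BigInteger y = x.add(n.divide(x)).shiftRight(1); // Newton step: (x + n/x) / 2
        if (y.compareTo(x) >= 0) return x;               // converged to floor(sqrt(n))
        x = y;
    }
}

A value k is then a perfect square exactly when isqrt(k).multiply(isqrt(k)).equals(k).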
It looks to me like the for-loop doesn't make any sense?
When you remove the for-loop, the program works as advertised for me:
import java.io.*;
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int cand = sc.nextInt();
        if(cand < 0){ System.out.println("IsNotFibo"); return; }
        int aTest = 5 * cand * cand + 4;
        int bTest = 5 * cand * cand - 4;
        int sqrt1 = (int)Math.sqrt(aTest);
        int sqrt2 = (int)Math.sqrt(bTest);
        if((sqrt1 * sqrt1 == aTest) || (sqrt2 * sqrt2 == bTest)){
            System.out.println("IsFibo");
        }else{
            System.out.println("IsNotFibo");
        }
    }
}
You only need to test for a given candidate, yes? What is the for loop accomplishing? Could the results of the loop be throwing your testing program off?
Also, there is a missing } in the code. It will not run as posted without adding another } at the end, after which it runs fine for the following input:
10 1 2 3 4 5 6 7 8 9 10
IsFibo
IsFibo
IsFibo
IsNotFibo
IsFibo
IsNotFibo
IsNotFibo
IsFibo
IsNotFibo
IsNotFibo
Taking into account all the above suggestions, I wrote the following, which passed all the test cases:
import java.io.*;
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        long[] fib = new long[52];
        Set<Long> fibSet = new HashSet<>(52);
        fib[0] = 0L;
        fib[1] = 1L;
        for(int i = 2; i < 52; i++){
            fib[i] = fib[i - 1] + fib[i - 2];
            fibSet.add(fib[i]);
        }
        int n = sc.nextInt();
        long cand;
        for(int i = 0; i < n; i++){
            cand = sc.nextLong();
            if(cand < 0){ System.out.println("IsNotFibo"); continue; }
            if(fibSet.contains(cand)){
                System.out.println("IsFibo");
            }else{
                System.out.println("IsNotFibo");
            }
        }
    }
}
I wanted to be on the safer side, hence I chose 52 as the number of elements in the Fibonacci sequence under consideration.
I've managed to get a version of Euler's totient function working, albeit one that only works for smaller numbers (smaller here meaning small compared to the 1024-bit numbers I need it to handle).
My version is here -
public static BigInteger eulerTotientBigInt(BigInteger calculate) {
    BigInteger count = new BigInteger("0");
    for(BigInteger i = new BigInteger("1"); i.compareTo(calculate) < 0; i = i.add(BigInteger.ONE)) {
        BigInteger check = GCD(calculate, i);
        if(check.compareTo(BigInteger.ONE) == 0) { // coprime
            count = count.add(BigInteger.ONE);
        }
    }
    return count;
}
While this works for smaller numbers, it works by iterating through every possible value from 1 to the number being calculated. With large BigIntegers, this is totally infeasible.
I've read that it's possible to divide the number on each iteration, removing the need to go through the values one by one. I'm just not sure what I'm supposed to divide by what (some of the examples I've looked at are in C and use longs and a square root; as far as I know I can't calculate an accurate square root of a BigInteger). I'm also wondering whether, for modular arithmetic such as this, the function needs to include an argument stating what the modulus is. I'm totally unsure about that, so any advice is much appreciated.
Can anyone point me in the right direction here?
PS: I deleted this question when I found the question Modifying Euler Totient Function. I adapted its code to work with BigIntegers:
public static BigInteger etfBig(BigInteger n) {
    BigInteger result = n;
    BigInteger i;
    for(i = new BigInteger("2"); (i.multiply(i)).compareTo(n) <= 0; i = i.add(BigInteger.ONE)) {
        if((n.mod(i)).compareTo(BigInteger.ZERO) == 0)
            result = result.divide(i);
        while(n.mod(i).compareTo(BigInteger.ZERO) == 0)
            n = n.divide(i);
    }
    if(n.compareTo(BigInteger.ONE) > 0)
        result = result.subtract((result.divide(n)));
    return result;
}
And it does give an accurate result, but when passed a 1024-bit number it runs forever (I'm still not sure it will ever finish; it's been running for 20 minutes).
There is a formula for the totient function, which requires the prime factorization of n.
Look here.
The formula is:
phi(n) = n * (p1 - 1) / p1 * (p2 - 1) / p2 ....
where p1, p2, etc. are all the prime divisors of n.
Note that you only need BigInteger, not floating point, because the division is always exact.
So now the problem is reduced to finding all prime factors, which is better than iteration.
Here is the whole solution:
static int totient(int n) // n is the number you want to find the totient of
{
    int tot = n; // this will be the totient at the end
    for (int p = 2; p * p <= n; p++)
    {
        if (n % p == 0)
        {
            tot /= p;
            tot *= (p - 1);
            while (n % p == 0)
                n /= p;
        }
    }
    if (n > 1) { // now n is the largest prime divisor
        tot /= n;
        tot *= (n - 1);
    }
    return tot;
}
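A direct BigInteger transcription of the same loop might look like the sketch below. Note that it is still trial division, so it only helps when n has small prime factors; it will not make a 1024-bit number with two large prime factors feasible.

static BigInteger totient(BigInteger n) {
    BigInteger tot = n;
    for (BigInteger p = BigInteger.valueOf(2);
         p.multiply(p).compareTo(n) <= 0;
         p = p.add(BigInteger.ONE)) {
        if (n.mod(p).signum() == 0) {
            tot = tot.divide(p).multiply(p.subtract(BigInteger.ONE)); // tot = tot / p * (p - 1)
            while (n.mod(p).signum() == 0)
                n = n.divide(p);
        }
    }
    if (n.compareTo(BigInteger.ONE) > 0)                              // n is now the largest prime divisor
        tot = tot.divide(n).multiply(n.subtract(BigInteger.ONE));
    return tot;
}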
The algorithm you are trying to write is equivalent to factoring the argument n, which means you should expect it to run forever, practically speaking until either your computer dies or you die. See this post on MathOverflow for more information: How hard is it to compute the Euler totient function?
If, on the other hand, you want the value of the totient for some large number for which you have the factorization, pass the argument as sequence of (prime, exponent) pairs.
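For example, if the factorization is already known, phi(n) can be assembled directly from it; the map-of-exponents parameter here is just one possible way to pass the (prime, exponent) pairs:

// phi is multiplicative and phi(p^k) = p^(k-1) * (p - 1)
static BigInteger totientFromFactorization(Map<BigInteger, Integer> primePowers) {
    BigInteger tot = BigInteger.ONE;
    for (Map.Entry<BigInteger, Integer> e : primePowers.entrySet()) {
        BigInteger p = e.getKey();
        int k = e.getValue();
        tot = tot.multiply(p.pow(k - 1)).multiply(p.subtract(BigInteger.ONE));
    }
    return tot;
}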
The etfBig method has a problem.
Euler's product formula is n*((factor-1)/factor) for all factors.
Note: Petar's code has it as:
tot /= p;
tot *= (p-1);
In the etfBig method, replace result = result.divide(i);
with
result = result.multiply(i.subtract(BigInteger.ONE)).divide(i);
Testing from 2 to 200 then produces the same results as the regular algorithm.
I have run into a weird issue for problem 3 of Project Euler. The program works for other numbers that are small, like 13195, but it throws this error when I try to crunch a big number like 600851475143:
Exception in thread "main" java.lang.ArithmeticException: / by zero
at euler3.Euler3.main(Euler3.java:16)
Here's my code:
//Number whose prime factors will be determined
long num = 600851475143L;

//Declaration of variables
ArrayList factorsList = new ArrayList();
ArrayList primeFactorsList = new ArrayList();

//Generates a list of factors
for (int i = 2; i < num; i++)
{
    if (num % i == 0)
    {
        factorsList.add(i);
    }
}

//If the integer(s) in the factorsList are divisible by any number between 1
//and the integer itself (non-inclusive), it gets replaced by a zero
for (int i = 0; i < factorsList.size(); i++)
{
    for (int j = 2; j < (Integer) factorsList.get(i); j++)
    {
        if ((Integer) factorsList.get(i) % j == 0)
        {
            factorsList.set(i, 0);
        }
    }
}

//Transfers all non-zero numbers into a new list called primeFactorsList
for (int i = 0; i < factorsList.size(); i++)
{
    if ((Integer) factorsList.get(i) != 0)
    {
        primeFactorsList.add(factorsList.get(i));
    }
}
Why is it only big numbers that cause this error?
Your code is just using Integer, which is a 32-bit type with a maximum value of 2147483647. It's unsurprising that it's failing when used for numbers much bigger than that. Note that your initial loop uses int as the loop variable, so it would actually loop forever if it didn't throw an exception. The value of i will go from 2147483647 to -2147483648 and continue from there.
Use BigInteger to handle arbitrarily large values, or Long if you're happy with a limited range but a larger one. (The maximum value of long / Long is 9223372036854775807L.)
However, I doubt that this is really the approach that's expected... it's going to take a long time for big numbers like that.
Not sure if it's the case as I don't know which line is which - but I notice your first loop uses an int.
//Generates a list of factors
for (int i = 2; i < num; i++)
{
    if (num % i == 0)
    {
        factorsList.add(i);
    }
}
As num is a long, it's possible that num > Integer.MAX_VALUE, and your loop is wrapping around to negative at MAX_VALUE, then looping up to 0, giving you a num % 0 operation.
Why does your solution not work?
Well, numbers in hardware are finite: they have minimum and maximum values. Java uses two's complement to store negative values, so 2147483647 + 1 == -2147483648, because for type int the maximum value is 2147483647. Going past that is called overflow.
It seems as if you have an overflow bug. The loop variable i first becomes negative and eventually 0, so you get java.lang.ArithmeticException: / by zero. If your computer can execute 10 million loop iterations a second, this would take about 1 h 10 min to reproduce, so I leave it as an assumption and not a proof.
This is also the reason trivially simple statements like a + b can produce bugs.
How to fix it?
package margusmartseppcode.From_1_to_9;
public class Problem_3 {

    static long lpf(long nr) {
        long max = 0;
        for (long i = 2; i <= nr / i; i++)
            while (nr % i == 0) {
                max = i;
                nr = nr / i;
            }
        return nr > 1 ? nr : max;
    }

    public static void main(String[] args) {
        System.out.println(lpf(600851475143L));
    }
}
You might think: "So how does this work?"
Well, my thought process went like this:
(Dynamic programming approach) If I had a list of primes {2, 3, 5, 7, 11, 13, 17, ...} up to some value x_i > nr / 2, then finding the largest prime factor would be trivial:
I start from the largest prime and test whether the division remainder with my number is zero; if it is, then that is the answer.
If, after looping over all the elements, I did not find my answer, my number must be prime itself.
(Brute force, with filters) I assumed that:
my number's largest prime factor is small (under 10 million);
if my number is a multiple of some factor, then I can shrink the remaining search by dividing that factor out.
I used the second approach here.
Note, however, that if my number were just a little different (one of {600851475013, 600851475053, 600851475067, 600851475149, 600851475151}), then my assumptions would fail and the program would take a ridiculously long time to run. If the computer could execute 10 million statements per second, it would take about 6.954 days to find the right answer.
In your brute-force approach, just generating the list of factors would take even longer, assuming you do not run out of memory first.
Is there a better way?
Sure, in Mathematica you could write it as:
P3[x_] := FactorInteger[x][[-1, 1]]
P3[600851475143]
or just FactorInteger[600851475143], and look up the largest value.
This works because Mathematica has arbitrary-size integers. Java also has an arbitrary-size integer class, called BigInteger.
Apart from the BigInteger problem mentioned by Jon Skeet, note the following:
you only need to test factors up to sqrt(num)
each time you find a factor, divide num by that factor, and then test that factor again
there's really no need to use a collection to store the primes in advance
My solution (which was originally written in Perl) would look something like this in Java:
long n = 600851475143L;         // the original input
long s = (long)Math.sqrt(n);    // no need to test numbers larger than this
long f = 2;                     // the smallest factor to test
do {
    if (n % f == 0) {           // check we have a factor
        n /= f;                 // this is our new number to test
        s = (long)Math.sqrt(n); // and our range is smaller again
    } else {                    // find next possible divisor
        f = (f == 2) ? 3 : f + 2;
    }
} while (f <= s);               // required result is in "n"
Resolution:
It turns out there is (probably) "nothing wrong" with the code itself; it is just inefficient. If my math is correct and I leave it running, it will be done by Friday, October 14, 2011. I'll let you know!
Warning: this may contain spoilers if you are trying to solve Project Euler #3.
The problem says this:
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
Here's my attempt to solve it. I'm just starting with Java and programming in general, and I know this isn't the nicest or most efficient solution.
import java.util.ArrayList;

public class Improved {
    public static void main(String[] args) {
        long number = 600851475143L;
        // long number = 13195L;
        long check = number - 1;
        boolean prime = true;
        ArrayList<Number> allPrimes = new ArrayList<Number>();
        do {
            for (long i = check - 1; i > 2; i--) {
                if (check % i == 0) {
                    prime = false;
                }
            }
            if (prime == true && number % check == 0) {
                allPrimes.add(check);
            }
            prime = true;
            check--;
        } while (check > 2);
        System.out.println(allPrimes);
    }
}
When number is set to 13195, the program works just fine, producing the result [29, 13, 7, 5] as it should.
Why doesn't this work for larger values of number?
Closely related (but not dupe): "Integer number too large" error message for 600851475143
The code is very slow; it is probably correct but will run for an unacceptably large amount of time (about n^2/2 iterations of the innermost loop for an input n). Try computing the factors from smallest to largest, and divide out each factor as you find it, such as:
for (long i = 2; i * i <= n; ++i) {
    if (n % i == 0) {
        allPrimes.add(i);
        while (n % i == 0) n /= i;
    }
}
if (n != 1) allPrimes.add(n);
Note that this code will only add prime factors, even without an explicit check for primality.
Almost all the Project Euler problems can be solved using a signed datatype with 64 bits (with the exception of problems that purposefully try to go big like problem 13).
If you're going to be working with primes (hey, it's Project Euler, you're going to be working with primes), get a head start and implement the Sieve of Eratosthenes, the Sieve of Atkin, or the Sieve of Sundaram (a sketch of the first one follows below).
One mathematical trick used across many problems is to short-circuit the search for factors by only working up to the square root of the target. Any factor greater than the square root corresponds to a factor less than the square root.
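As promised above, here is a minimal Sieve of Eratosthenes sketch (the simplest of the three); the limit is assumed to fit comfortably in an int:

static boolean[] sieve(int limit) {
    boolean[] composite = new boolean[limit + 1];   // composite[k] == false means k is prime (for k >= 2)
    for (int p = 2; (long) p * p <= limit; p++) {
        if (!composite[p]) {
            for (long m = (long) p * p; m <= limit; m += p) {
                composite[(int) m] = true;          // mark every multiple of the prime p
            }
        }
    }
    return composite;
}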
You could also speed this up by only checking from 2 to the square root of the target number. Each factor comes in a pair, one above the square root and one below, so when you find one factor you also find its pair. In the case of the prime test, once you find any factor you can break out of the loop.
Another optimization could be to find the factors before checking that they are prime.
And for very large numbers, it really is faster to experiment with a sieve rather than brute forcing it, especially if you are testing a lot of numbers for primes. Just be careful you're not doing something algorithmically inefficient to implement the sieve (for example, adding or removing primes from lists will cost you an extra O(n)).
Another approach (there is no need to store all primes):
private static boolean isOddPrime(long x) {
    /* because x is odd, the even factors can be skipped */
    for (int i = 3; i * i <= x; i += 2) {
        if (x % i == 0) {
            return false;
        }
    }
    return true;
}

public static void main(String[] args) {
    long nr = 600851475143L;
    long max = 1;
    for (long i = 3; i <= nr; i += 2) {
        if (nr % i == 0) {
            nr /= i;
            if (isOddPrime(i)) {
                max = i;
            }
        }
    }
    System.out.println(max);
}
It takes less than 1 ms.