I'm working on a program to compare different algorithms for factorization of large integers. One of the algorithms I'm including in the comparison is Fermat's factorization method. The algorithm seems to work just fine for small numbers, but for larger numbers I get weird results.
Here's my code:
public void fermat(long n)
{
    ArrayList<Long> factors = new ArrayList<Long>();
    long a = (long) Math.ceil(Math.sqrt(n));
    long b = a * a - n;
    long b_root = (long) (Math.sqrt(b) + 0.5);
    while (b_root * b_root != b)
    {
        a++;
        b = a * a - n;
        b_root = (long) (Math.sqrt(b) + 0.5);
    }
    factors.add(a - b_root);
    factors.add(a + b_root);
}
Now, when I try to factor 42139523531366663 I get the resulting factors 6194235479 and 2984853201, which is incorrect since 6194235479 * 2984853201 = 18488883597240918279. I figured that I got this result because somewhere in the algorithm I got to a point where the numbers became too big for a long or something similar, so the algorithm got a bit messed up because of that. I added a check which calculated the product of the two factors and compared with the input value, so that I'd get an alert if the factorization was faulty:
long x, y;
x = factors.get(0);
y = factors.get(1);
if (x * y != n)
    System.out.println("Faulty factorization.");
Interestingly enough, the check passed as true and I didn't get the alert. I tried just printing the result of the multiplication and this actually resulted in the input value. So my question is why does my program behave like this, and what can I do about it?
It looks like there is an overflow in a long somewhere, because longs have 64 bits and
42139523531366663 + 2^64 = 18488883597240918279
For sufficiently large numbers, you may need to switch to using BigInteger.
Is it because there's an error in multiplying large numbers too?
That may well be the reason. The overflow is what makes the program think its factorization is right, but when you multiply the numbers outside the program, you discover the error.
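The wrap-around is easy to demonstrate. A minimal sketch (class and variable names are mine) comparing the 64-bit product against the exact one:

```java
import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        long x = 6194235479L, y = 2984853201L;

        // 64-bit multiplication silently wraps around modulo 2^64
        long wrapped = x * y;

        // BigInteger computes the exact product
        BigInteger exact = BigInteger.valueOf(x).multiply(BigInteger.valueOf(y));

        System.out.println(wrapped); // 42139523531366663, the original input n
        System.out.println(exact);   // 18488883597240918279, the real product
    }
}
```

Because the real product exceeds 2^64 by exactly n, the wrapped long product equals n again, which is why the `x*y != n` check never fires.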
Related
I have to find the log, and later, after a few computations, the antilog of many big decimal numbers. Since log and antilog are not supported for BigDecimal numbers, I used the Apfloat library and its pow method, which can take both arguments as Apfloat values, like below:
ApfloatMath.pow(Constants.BASE_OF_LOG, apFloatNum);
The problem is that I am using it in a loop, and the loop is big. Apfloat's pow takes a lot of time to compute the power, more than an hour in total. To avoid this, I thought of converting the Apfloat into a double and then using Math.pow, which runs fast but gives me infinity for a few values.
What should I do? Does anyone know an alternative to ApfloatMath.pow?
You said you are using Math.pow() now and that some of the calls return an infinite value.
If you can live with using (far less accurate) doubles instead of BigDecimals, then you should think of the fact that, mathematically,
Math.pow(a, x)
is equivalent to
Math.pow(a, x - y) * Math.pow(a, y)
Say you have a big value, let's call it big, then instead of doing:
// pow(a, big) may return infinite
BigDecimal n = BigDecimal.valueOf(Math.pow(a, big));
you can just as well do:
// do this once, outside the loop
BigDecimal large = BigDecimal.valueOf(a).pow(100);
...
// do this inside the loop
// pow(a, big - 100) should not return infinite
BigDecimal n = BigDecimal.valueOf(Math.pow(a, big - 100)).multiply(large);
Instead of 100, you may want to pick another constant that better suits the values you are using. But something like the above could be a simple solution, and much faster than what you describe.
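To illustrate the idea with concrete made-up numbers (a = 10, an exponent of 400, and a split constant of 200, all chosen purely for this demo):

```java
import java.math.BigDecimal;

public class SplitPow {
    public static void main(String[] args) {
        // naive: overflows double's range (the max double is about 1.8e308)
        System.out.println(Math.pow(10, 400));   // Infinity

        // split the exponent: 10^400 = 10^(400-200) * 10^200
        BigDecimal large = BigDecimal.TEN.pow(200);  // computed once, outside the loop
        BigDecimal n = BigDecimal.valueOf(Math.pow(10, 400 - 200)).multiply(large);
        System.out.println(n);  // a finite value, approximately 1E+400
    }
}
```

The result carries only double precision (Math.pow is allowed a small ulp error), but it stays finite where the direct call overflowed.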
Note
Perhaps ApfloatMath.pow() is only slow for large values. If that is the case, you may be able to apply the principle above to Apfloat.pow() as well. You would only have to do the following once, outside the loop:
Apfloat large = ApfloatMath.pow(Constants.BASE_OF_LOG, 100);
and then you could use the following inside the loop:
x = ApfloatMath.pow(Constants.BASE_OF_LOG, big - 100).multiply(large);
But you'll have to test if that makes things faster. I could imagine that ApfloatMath.pow() can be much faster for an integer exponent.
Since I don't know more about your data, and because I don't have Apfloat installed, I can't test this, so you should see if the above solution is good enough for you (especially if it is accurate enough for you), and if it is actually better/faster than what you have.
We have a test exercise where you need to find out whether a given number N is a square of another number or not, with the smallest possible time complexity.
I wrote:
public static boolean what2(int n) {
    double newN = (double) n;
    double x = Math.sqrt(newN);
    int y = (int) x;
    return y * y == n;
}
I looked online, and specifically on SO, to try and find the complexity of sqrt, but couldn't find it. This SO post is for C# and says it's O(1), and this Java post says it's O(1) but could potentially iterate over all doubles.
I'm trying to understand the worst time complexity of this method. All other operations are O(1) so this is the only factor.
Would appreciate any feedback!
Using the floating-point conversion is OK because Java's int type is 32 bits and Java's double type is the IEEE 64-bit format, which can represent all values of 32-bit integers exactly.
If you were to implement your function for long, you would need to be more careful because many large long values are not represented exactly as doubles, so taking the square root and converting it to an integer type might not yield the actual square root.
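For example, a hedged sketch of a long version (the method and class names are mine) that uses the double square root only as a starting guess and then verifies with integer arithmetic:

```java
public class SquareTest {
    static boolean isPerfectSquare(long n) {
        if (n < 0) return false;
        // Math.sqrt on a long can be off by one at this magnitude,
        // so use it only as a guess and verify the neighbours too.
        long r = (long) Math.sqrt((double) n);
        for (long c = Math.max(0, r - 1); c <= r + 1; c++) {
            // c * c can wrap around for the very largest c, but a wrapped
            // value is negative and can never equal the non-negative n
            if (c * c == n) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        long big = 3037000499L;  // floor(sqrt(Long.MAX_VALUE))
        System.out.println(isPerfectSquare(big * big));      // true
        System.out.println(isPerfectSquare(big * big - 1));  // false
    }
}
```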
All operations in your implementation execute in constant time, so the complexity of your solution is indeed O(1).
If I understood the question correctly, just-in-time compilation can convert the Java instruction to the native fsqrt instruction (though I don't know whether this actually happens), which, according to this table, uses a bounded number of processor cycles. That would make the complexity O(1).
Java's Math.sqrt actually delegates to StrictMath; one implementation of its source code can be found here. Looking at the sqrt function, the complexity appears to be constant time: see the while (r != 0) loop inside.
So I am making an application that can solve problems with Empirical Formulae and I need some code that would do something like:
If the numbers are 2.5, 1, 3, it should change them to 2.5*2 = 5, 1*2 = 2, 3*2 = 6, so that the number with the decimal is converted to a whole number and the other numbers are adjusted accordingly.
I thought of this logic:
for (n = 1; (Math.round(simplestRat[0]) * n) != (int) simplestRat[0]; n++)
to increment a counter that would multiply an integer to do what I want, but I am skeptical about this code even at this stage and do not think it will work.
It would be a lot of help if someone could suggest a code for this or improve upon this code or even give me a link to another post for this problem as I was unable to find anything regarding this type of problem.
Any help is appreciated. Thanks
Okay, so you have to have a few steps. First, get them all into whole numbers. The easiest way is to find an appropriate power of ten to multiply them all by that leaves them as integers. This is a useful check: How to test if a double is an integer.
Then cast them to integers and start working through them looking for common prime factors. This will be a process similar to the Sieve of Eratosthenes (http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) but with division at the end. For each prime, see if all 3 numbers divide by it exactly (number % prime == 0). If they do, divide and reset the prime to 2. If they don't, move on to the next prime.
This should give you the lowest common ratio between the numbers. Any additional multiplier that came from the original stage is shaved off by the common primes method.
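A sketch of the whole pipeline under those assumptions. Dividing by the GCD at the end is equivalent to the repeated common-prime division; the 1e-6 tolerance, the class, and the method names are my own choices, and the scaling loop assumes the decimals terminate after a few digits (as in 2.5):

```java
import java.util.Arrays;

public class Ratio {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    static long[] toWholeRatio(double[] xs) {
        // 1. scale by powers of ten until every value is (near-)integral
        long scale = 1;
        outer:
        while (true) {
            for (double x : xs) {
                double v = x * scale;
                if (Math.abs(v - Math.round(v)) > 1e-6) { scale *= 10; continue outer; }
            }
            break;
        }
        long[] out = new long[xs.length];
        for (int i = 0; i < xs.length; i++) out[i] = Math.round(xs[i] * scale);

        // 2. divide out the greatest common divisor of all entries
        long g = 0;
        for (long v : out) g = gcd(g, v);
        if (g > 1) for (int i = 0; i < out.length; i++) out[i] /= g;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(toWholeRatio(new double[]{2.5, 1, 3})));
        // [5, 2, 6]
    }
}
```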
I want to efficiently calculate ((X+Y)!/(X!Y!))% P (P is like 10^9+7)
This discussion gives some insights on distributing modulo over division.
My concern is that a modular inverse does not always exist for a given number.
Basically, I am looking for a code implementation of solving the problem.
For multiplication it is very straightforward:
public static int mod_mul(int Z, int X, int Y, int P)
{
    // Z = (X+Y); computes Z! mod P, where P is the prime
    long result = 1;
    while (Z > 1)
    {
        result = (result * Z) % P;
        Z--;
    }
    return (int) result;
}
I also realize that many factors can be cancelled in the division (before taking the modulus), but as the number of divisors increases, I'm finding it difficult to come up with an efficient algorithm for the division (looping over List(factors(X) + factors(Y) ...) to see which divides the current multiplying factor of the numerator).
Edit: I don't want to use BigInt solutions.
Is there any Java/Python-based solution, or any standard algorithm/library, for cancellation of factors (if the inverse option is not foolproof) or for approaching this type of problem?
((X+Y)!/(X!Y!)) is a low-level way of spelling a binomial coefficient ((X+Y)-choose-X). And while you didn't say so in your question, a comment in your code implies that P is prime. Put those two together, and Lucas's theorem applies directly: http://en.wikipedia.org/wiki/Lucas%27_theorem.
That gives a very simple algorithm based on the base-P representations of X+Y and X. Whether BigInts are required is impossible to guess because you didn't give any bounds on your arguments, beyond that they're ints. Note that your sample mod_mul code may not work at all if, e.g., P is greater than the square root of the maximum int (because result * Z may overflow then).
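A hedged sketch of that algorithm (all class and method names are mine; p must be prime, and the per-digit binomial uses a modular inverse via Fermat's little theorem):

```java
public class Lucas {
    static long modPow(long b, long e, long m) {
        long r = 1; b %= m;
        for (; e > 0; e >>= 1) {
            if ((e & 1) == 1) r = r * b % m;
            b = b * b % m;
        }
        return r;
    }

    // C(n, k) mod p for 0 <= n < p, p prime
    static long binomSmall(long n, long k, long p) {
        if (k > n) return 0;  // a base-p digit of k can exceed the digit of n
        long num = 1, den = 1;
        for (long i = 0; i < k; i++) {
            num = num * ((n - i) % p) % p;
            den = den * ((i + 1) % p) % p;
        }
        return num * modPow(den, p - 2, p) % p;  // Fermat inverse of den
    }

    // Lucas's theorem: C(n,k) mod p = product of C(n_i, k_i) over base-p digits
    static long binomMod(long n, long k, long p) {
        long r = 1;
        while (n > 0 || k > 0) {
            r = r * binomSmall(n % p, k % p, p) % p;
            n /= p;
            k /= p;
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(binomMod(10, 3, 7));  // C(10,3) = 120, and 120 mod 7 = 1
    }
}
```

For a prime as large as 10^9+7 and int arguments, the loop runs for a single base-p digit, so this degenerates into the plain Fermat-inverse computation; Lucas's theorem earns its keep when n and k exceed p.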
It's binomial coefficients - C(x+y,x).
You can calculate it differently: C(n,m) = C(n-1,m) + C(n-1,m-1).
If you are OK with time complexity O(x*y), the code will be much simpler.
http://en.wikipedia.org/wiki/Combination
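A sketch of that recurrence as a bottom-up, single-row table, reducing mod P at each addition so nothing overflows (class and method names are mine):

```java
public class BinomDP {
    // C(n, k) mod p via C(n,k) = C(n-1,k) + C(n-1,k-1)
    static long binomMod(int n, int k, long p) {
        long[] row = new long[k + 1];
        row[0] = 1;
        for (int i = 1; i <= n; i++) {
            // iterate j downwards so row[] still holds the previous row's values
            for (int j = Math.min(i, k); j >= 1; j--) {
                row[j] = (row[j] + row[j - 1]) % p;
            }
        }
        return row[k];
    }

    public static void main(String[] args) {
        System.out.println(binomMod(10, 3, 1_000_000_007L));  // 120
    }
}
```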
Here is a way to do what you need efficiently:
C(n,k) = C(n-1,k) + C(n-1,k-1)
Use dynamic programming to calculate it efficiently in a bottom-up approach:
C(n,k)%P = ((C(n-1,k))%P + (C(n-1,k-1))%P)%P
Therefore F(n,k) = (F(n-1,k)+F(n-1,k-1))%P
Another, faster approach:
C(n,k) = C(n-1,k-1)*n/k
F(n,k) = ((F(n-1,k-1)*n)%P*inv(k)%P)%P
inv(k)%P means the modular inverse of k.
Note: evaluate C(n, n-k) instead if n-k < k, because C(n, n-k) = C(n, k).
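A sketch putting the second approach together, using Fermat's little theorem for inv(k), which is valid because P is prime (class and method names are mine):

```java
public class BinomInv {
    static long modPow(long b, long e, long m) {
        long r = 1; b %= m;
        for (; e > 0; e >>= 1) {
            if ((e & 1) == 1) r = r * b % m;
            b = b * b % m;
        }
        return r;
    }

    // modular inverse of k mod prime p, by Fermat's little theorem
    static long inv(long k, long p) { return modPow(k, p - 2, p); }

    // C(n,k) = C(n-1,k-1)*n/k, unrolled into a product of (n-k+i)/i terms
    static long binomMod(long n, long k, long p) {
        if (n - k < k) k = n - k;  // use C(n, n-k) when it is smaller
        long f = 1;                // C(m, 0) = 1
        for (long i = 1; i <= k; i++) {
            f = f * ((n - k + i) % p) % p * inv(i, p) % p;
        }
        return f;
    }

    public static void main(String[] args) {
        System.out.println(binomMod(10, 3, 1_000_000_007L));  // 120
    }
}
```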
I am new to Java, and one of my class assignments is to find a prime number at least 100 digits long that contains the digits 273042282802155991.
I have this so far, but when I compile and run it, it seems to be stuck in a continuous loop.
I'm not sure if I've done something wrong.
public static void main(String[] args) {
BigInteger y = BigInteger.valueOf(304877713615599127L);
System.out.println(RandomPrime(y));
}
public static BigInteger RandomPrime(BigInteger x)
{
BigInteger i;
for (i = BigInteger.valueOf(2); i.compareTo(x)<0; i.add(i)) {
if ((x.remainder(i).equals(BigInteger.ZERO))) {
x.divide(i).equals(x);
i.subtract(i);
}
}
return i;
}
Since this is homework ...
There is a method on BigInteger that tests for primality. This is much, much faster than attempting to factorize a number. (If you take an approach that involves attempting to factorize 100-digit numbers, you will fail. Factorization is believed to be computationally hard; certainly, there is no known polynomial-time solution.)
The question is asking for a prime number that contains a given sequence of digits when it is represented as a sequence of decimal digits.
The approach of generating "random" primes and then testing if they contain those digits is infeasible. (Some simple high-school maths tells you that the probability that a randomly generated 100-digit number contains a given 18-digit sequence is about 82 / 10^18. And you haven't tested for primality yet ...)
But there's another way to do it ... think about it!
Only start writing code once you've figured out in your head how your algorithm will work, and done the mental estimates to confirm that it will give an answer in a reasonable length of time.
When I say infeasible, I mean infeasible for you. Given a large enough number of computers, enough time and some high-powered mathematics, it may be possible to do some of these things. Thus, technically they may be computationally feasible. But they are not feasible as a homework exercise. I'm sure that the point of this exercise is to get you to think about how to do this the smart way ...
One tip is that these statements do nothing:
x.divide(i).equals(x);
i.subtract(i);
Same with part of your for loop:
i.add(i)
They don't modify the instances themselves, but return new values - values that you're failing to capture and use. BigIntegers are "immutable". They can't be changed - but they can be operated upon to produce new values.
If you actually wanted to do something like this, you would have to do:
i = i.add(i);
Also, why would you subtract i from i? Wouldn't you always expect this to be 0?
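A quick illustration of the difference:

```java
import java.math.BigInteger;

public class Immutable {
    public static void main(String[] args) {
        BigInteger i = BigInteger.valueOf(2);

        i.add(i);               // returns 4, but the result is thrown away
        System.out.println(i);  // 2

        i = i.add(i);           // capture the returned value
        System.out.println(i);  // 4
    }
}
```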
You need to implement or use the Miller-Rabin algorithm.
Handbook of Applied Cryptography
chapter 4, Algorithm 4.24
http://www.cacr.math.uwaterloo.ca/hac/about/chap4.pdf
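Note that you don't have to implement it from scratch: BigInteger.isProbablePrime is documented to use Miller-Rabin (combined with a Lucas-Lehmer test in current JDKs). For example:

```java
import java.math.BigInteger;

public class PrimeCheck {
    public static void main(String[] args) {
        BigInteger p = new BigInteger("104729");  // the 10000th prime
        // certainty 50: the error probability is at most 2^-50
        System.out.println(p.isProbablePrime(50));                      // true
        System.out.println(p.add(BigInteger.ONE).isProbablePrime(50));  // false
    }
}
```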