I'm trying to check whether large numbers are prime or not, numbers that are 11 digits long. Here is the code I am using:
private static boolean isPrime(BigInteger eval_number){
    for(int i = 2; i < eval_number.intValue(); i++) {
        if(eval_number.intValue() % i == 0)
            return false;
    }
    return true;
}
Now the number I'm inspecting in the debugger is eval_number, which equals 11235813213. However, when I inspect eval_number.intValue() in the debugger, instead of the value being 11235813213 the value is -1649088675. How is this happening? Also, what would be a better way of checking whether large numbers are prime?
The strange value is a result of an overflow. The number held by the BigInteger instance is greater than 2^31 - 1 (Integer.MAX_VALUE), so it can't be represented by an int. For the primality check: BigInteger provides isProbablePrime(int), and there are several other more or less fast algorithms that check whether a number is prime with a given failure rate. If you prefer 100% certainty, you can optimize your code by reducing the upper bound of the numbers to check to sqrt(input) and increasing the step size to two. Or generate a prime table, if the algorithm is used several times.
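For example, a minimal sketch of the isProbablePrime approach, keeping the method signature from the question (the certainty value 50 is an arbitrary choice for illustration):
import java.math.BigInteger;

private static boolean isPrime(BigInteger eval_number) {
    // false only for numbers that are definitely composite; a true result is
    // wrong with probability less than (1/2)^50
    return eval_number.isProbablePrime(50);
}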
intValue() returns an int equivalent of the given BigInteger number.
You are passing the value 11235813213, which is much larger than Integer.MAX_VALUE (the maximum possible value for an int variable, 2147483647), so the conversion to int overflowed.
Also, what would be a better way of checking whether large numbers are prime?
You should use only BigInteger numbers for finding out large primes. Also, check this question (Determining if a BigInteger is Prime in Java) which I asked a year ago.
As others have said, the number you are checking is outside the range of int.
You could use a long, but that only delays the problem: it will still fail on numbers beyond long's range.
The solution is to use BigInteger arithmetic:
private static boolean isPrime(BigInteger eval_number) {
    for (BigInteger i = BigInteger.valueOf(2); i.compareTo(eval_number) < 0; i = i.add(BigInteger.ONE)) {
        if (eval_number.mod(i).equals(BigInteger.ZERO)) {
            return false;
        }
    }
    return true;
}
That is just a correction of the immediate problem your question is about. There are still things to improve. The primality check can be made more efficient: you don't have to check any even numbers except 2, and you only need to check up to the square root of the number in question; see the sketch below.
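A minimal sketch of both improvements, staying with plain BigInteger arithmetic (variable names are illustrative):
import java.math.BigInteger;

private static boolean isPrime(BigInteger n) {
    BigInteger two = BigInteger.valueOf(2);
    if (n.compareTo(two) < 0) return false;      // 0, 1 and negatives are not prime
    if (!n.testBit(0)) return n.equals(two);     // even numbers: only 2 is prime
    for (BigInteger i = BigInteger.valueOf(3);
         i.multiply(i).compareTo(n) <= 0;        // only check up to sqrt(n)
         i = i.add(two)) {                       // skip even divisors
        if (n.mod(i).equals(BigInteger.ZERO)) return false;
    }
    return true;
}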
You convert the BigInteger to a 32-bit integer. If it is bigger than 2^31 - 1, intValue() will return an incorrect value. You need to do all the operations on BigInteger instances. I assume you use BigInteger because long is insufficient in other cases, but for the number you gave as an example a long (instead of an int) would be sufficient. (A long is enough for numbers up to 2^63 - 1.)
You have to do all operations with BigInteger, without converting it to int:
private static boolean isPrime(BigInteger eval_number) {
    for (BigInteger i = BigInteger.valueOf(2); i.compareTo(eval_number) < 0; i = i.add(BigInteger.ONE)) {
        if (eval_number.divideAndRemainder(i)[1].equals(BigInteger.ZERO)) {
            System.out.println(i);
            return false;
        }
    }
    return true;
}
If you want to check whether a BigInteger is prime, you can use java.math.BigInteger.isProbablePrime(int certainty). It returns true if this BigInteger is probably prime and false if it is definitely composite. If certainty is ≤ 0, true is returned.
I want to find whether a given number is a power of two in a mathematical way, not with a bitwise approach. Here is my code:
private static double logBaseTwo(final double x) {
    return Math.log(x) / Math.log(2);
}

private static double roundToNearestHundredThousandth(final double x) {
    return Math.round(x * 100000.0) / 100000.0;
}

private static boolean isInteger(final double x) {
    return (int) (Math.ceil(x)) == (int) (Math.floor(x));
}

public static boolean isPowerOfTwo(final int n) {
    return isInteger(roundToNearestHundredThousandth(logBaseTwo(n)));
}
It incorrectly returns true for certain numbers, such as 524287. Why is that?
Your code fails because you may need more precision than you allow to capture the difference between the logs of BIG_NUMBER and BIG_NUMBER + 1.
The bitwise way is really best, but if you really want to use only "mathy" operations, then the best you can do is probably:
public static boolean isPowerOfTwo(final int n) {
    int exp = (int) Math.round(logBaseTwo(n));
    int test = (int) Math.round(Math.pow(2.0, exp));
    return test == n;
}
This solution does not require any super-fine precision, and will work fine for all positive ints.
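For instance, with the value from the question (assuming the logBaseTwo helper defined there):
System.out.println(isPowerOfTwo(524287)); // false: 524287 = 2^19 - 1
System.out.println(isPowerOfTwo(524288)); // true:  524288 = 2^19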
This is truly horrifyingly bad code, and I have no idea what you are trying to do. You seem to be trying to check if the log base 2 of n is an integer. Instead I would write a loop:
while (n > 1) {
    int m = (n / 2) * 2;
    if (n != m) {
        return false;
    }
    n /= 2;
}
return true;
The solution seems more complicated than it should be. I don't get the 100000d parts; they seem likely to cause problems when converting to the ceiling.
This is the simple solution that works for all cases:
public static boolean isPowerOfTwo(int n) {
    return Math.ceil(Math.log(n) / Math.log(2)) == Math.floor(Math.log(n) / Math.log(2));
}
Originally I had a problem using Math.log in my computations. I switched to Math.log10 and the problem went away. Although mathematically, any logB of base B should work, the nature of floating point math can be unpredictable.
Try this.
public static boolean isPowerOfTwo(int n) {
    return n > 0 && Integer.highestOneBit(n) == Integer.lowestOneBit(n);
}
If you prefer to use logs you can do it this way.
public static boolean isPowerOfTwo(int n) {
    return n > 0 && (Math.log10(n) / Math.log10(2)) % 1 == 0;
}
doubles and floats are 64-bit and 32-bit types, so a double can hold at the very most 18446744073709551616 (2^64) unique values. That's a lot of numbers, but not an infinite amount of them. At some point (in fact, at about 2^52), the gap between any two adjacent representable numbers becomes larger than 1. Similar rules apply to small numbers. Math.log does double-based math.
Similarly, ints are limited: they can hold at most 4294967296 different numbers. For ints it's much simpler: ints run from -2147483648 up to 2147483647. If you try to add 1 to 2147483647, you get -2147483648 (it silently wraps around). It's quite possible you're running into that when trying to convert such a large number (your double times 100000.0) to an int.
Note that ? true : false (as in the original version of the question) is literally completely useless. the thing to the left of the question mark must be a boolean, and booleans are already true or false, that's their nature.
See the other answers for simpler approaches to this problem. Although, of course, the simplest solution is to simply count bits in the number. If it's precisely 1 bit, it's a power of 2. If it's 0 bits, well, you tell me if you consider '0' a power of 2 :)
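A minimal sketch of that bit-counting idea:
public static boolean isPowerOfTwo(int n) {
    // a positive power of two has exactly one bit set in its binary representation
    return n > 0 && Integer.bitCount(n) == 1;
}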
I am currently working on a method to do an exponentiation calculation using recursion. Here is what I have so far:
public static long exponentiation(long x, int n) {
    if (n == 0) {
        return 1;
    } else if (n == 1) {
        return x;
    // i know this doesn't work since im returning long
    } else if (n < 0) {
        return (1 / exponentiation(x, -n));
    } else {
        // do if exponent is even
        if (n % 2 == 0) {
            return (exponentiation(x * x, n / 2));
        } else {
            // do if exponent is odd
            return x * exponentiation(x, n - 1);
        }
    }
}
I have two issues. The first issue is that I cannot do negative exponents; this is not a major problem since I am not required to support them. The second issue is that certain computations give me the wrong answer. For example, 2^63 gives me the right magnitude but as a negative number, and 2^64 and beyond just give me 0. Is there any way for me to fix this? I know that I could just switch the longs to double and my method would work perfectly, but my professor requires us to use long. Thank you for your help!
The maximum value a long can represent is 2^63 - 1. So if you calculate 2^63, it is bigger than what a long can hold and wraps around. long is represented using two's complement.
Just changing long to double doesn't exactly work; it changes the semantics of the method. Floating-point numbers have limited precision. With a 64-bit floating-point number, you can still only represent the same amount of numbers as with a 64-bit integer; they are just distributed differently. A long can represent every whole number between -2^63 and 2^63 - 1. A double can represent fractions as well, but at high magnitudes it can't even represent every whole number.
For example, the next double you can represent after 100000000000000000000000000000000000000000000000000 is 100000000000000030000000000000000000000000000000000, so you are missing a whopping 30000000000000000000000000000000000 values that a double cannot represent.
You are trying to fix something that you shouldn't bother with fixing. Using a long, there is a fixed maximum value your method can return. Your method should clearly state what happens if it overflows, and you might want to detect such overflows (e.g. using Math#multiplyExact), but if long is the return type you are supposed to use, then that is what you should be using.
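A minimal sketch of that idea, keeping the question's signature; Math.multiplyExact (Java 8+) throws ArithmeticException on overflow instead of silently wrapping (negative exponents are left out, since the question says they are not required):
public static long exponentiation(long x, int n) {
    if (n == 0) {
        return 1;
    }
    if (n % 2 == 0) {
        // square-and-halve step; throws ArithmeticException if x * x overflows a long
        return exponentiation(Math.multiplyExact(x, x), n / 2);
    }
    // odd exponent: peel off one factor of x
    return Math.multiplyExact(x, exponentiation(x, n - 1));
}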
You could hold the result in an array of longs, let's call it result[]. At first, apply the logic to result[0]. But, when that value goes negative,
1) increment result[1] by the excess.
2) now, your logic gets much messier and I'm typing on my phone, so this part is left as an exercise for the reader.
3) When result[1] overflows, start on result[2]...
When you print the result, combine the results, again, logic messy.
I assume this is how BigInteger works (more or less)? I've never looked at that code, you might want to.
But, basically, Polygnone is correct. Without considerable workarounds, there is an upper limit.
Here's my implementation of Fermat's little theorem. Does anyone know why it's not working?
Here are the rules I'm following:
Let n be the number to test for primality.
Pick any integer a between 2 and n-1.
compute a^n mod n.
check whether a^n = a mod n.
My code:
int low = 2;
int high = n - 1;
Random rand = new Random();

// Pick any integer a between 2 and n-1.
Double a = (double) (rand.nextInt(high - low) + low);

// compute: a^n mod n
Double val = Math.pow(a, n) % n;

// check whether a^n = a mod n
if (a.equals(val)) {
    return "True";
} else {
    return "False";
}
This is a list of primes less than 100000. Whenever I input any of these numbers, instead of getting 'true', I get 'false'.
The First 100,008 Primes
This is the reason why I believe the code isn't working.
In Java, a double only has a limited precision of about 15 to 17 significant digits. This means that while you can compute the value of Math.pow(a, n) for very large numbers, you have no guarantee you'll get an exact result once the value has more than 15 digits.
With large values of a or n, your computation will exceed that limit. For example, Math.pow(3, 67) has the value 9.270946314789783e31, which means that any digit after the last 3 is lost. For this reason, after applying the modulo operation, you have no guarantee of getting the right result.
This means that your code does not actually test what you think it does. This is inherent to the way floating-point numbers work, and you must change the way you hold your values to solve this problem. You could use long, but then you would have problems with overflow (a long cannot hold a value greater than 2^63 - 1, so again, in the case of 3^67 you'd have another problem).
One solution is to use a class designed to hold arbitrarily large numbers, such as BigInteger, which is part of the Java SE API.
As the others have noted, taking the power will quickly overflow. For example, if you are picking a number n to test for primality as small as, say, 30, and the random number a is 20, then 20^30 is about 10^39, which is roughly 2^130 and far more than a long can hold.
You want to use BigInteger, which even has the exact method you want:
public BigInteger modPow(BigInteger exponent, BigInteger m)
"Returns a BigInteger whose value is (this^exponent mod m)"
Also, I don't think that testing a single random number between 2 and n-1 will "prove" anything. You have to loop through all the integers between 2 and n-1.
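For illustration, here is the question's check rewritten with BigInteger.modPow so no precision is lost (the method name is made up, and the single random base is kept from the question; whether one base is enough is a separate matter, as noted above):
import java.math.BigInteger;
import java.util.Random;

static boolean fermatCheck(int n) {      // assumes n >= 4
    Random rand = new Random();
    int a = rand.nextInt(n - 3) + 2;     // pick an integer a between 2 and n-2, as in the question
    BigInteger bn = BigInteger.valueOf(n);
    BigInteger ba = BigInteger.valueOf(a);
    BigInteger val = ba.modPow(bn, bn);  // a^n mod n, computed exactly
    return val.equals(ba);               // check whether a^n = a mod n
}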
@evthim Even if you use the modPow function of the BigInteger class, you will not identify the primes in your chosen range completely correctly. To clarify: you will get all the prime numbers in the range, but some numbers that pass the test are not prime. If you rearrange this code using the BigInteger class and try all 64-bit numbers, some non-prime numbers will also be reported. These numbers are as follows:
341, 561, 645, 1105, 1387, 1729, 1905, 2047, 2465, 2701, 2821, 3277, 4033, 4369, 4371, 4681, 5461, 6601, 7957, 8321, 8481, 8911, 10261, 10585, 11305, 12801, 13741, 13747, 13981, 14491, 15709, 15841, 16705, 18705, 18721, 19951, 23001, 23377, 25761, 29341, ...
https://oeis.org/a001567
161038, 215326, 2568226, 3020626, 7866046, 9115426, 49699666, 143742226, 161292286, 196116194, 209665666, 213388066, 293974066, 336408382, ..., 2001038066, 2138882626, 2952654706, 3220041826, ...
https://oeis.org/a006935
As a solution, make sure that the number you tested is not in this list by getting a list of these numbers from the link below.
http://www.cecm.sfu.ca/Pseudoprimes/index-2-to-64.html
A solution in C# is as follows.
// requires: using System; using System.Numerics;
public static bool IsPrime(ulong number)
{
    return number == 2
        ? true
        : (BigInteger.ModPow(2, number, number) == 2
            ? ((number & 1) != 0 && BinarySearchInA001567(number) == false)
            : false);
}

public static bool BinarySearchInA001567(ulong number)
{
    // Is number in the list?
    // todo: binary search in A001567 (https://oeis.org/A001567) below 2^64
    // Only 2.35 gigabytes as a text file: http://www.cecm.sfu.ca/Pseudoprimes/index-2-to-64.html
    throw new NotImplementedException(); // placeholder until the lookup is implemented
}
I'm looking to randomize a BigInteger. The intent is to pick a number from 1 to 8180385048. From what I noticed, the BigInteger(int numBits, Random rnd) constructor gives a number from 0 to 2^numBits - 1, and I want an unpredictable number within my range. I tried to write a method that would do it, but I keep running into bugs and have finally given in to asking on here. :P Does anyone have any suggestions on how to do this?
Judging from the docs of Random.nextInt(int n) which obviously needs to solve the same problem, they seem to have concluded that you can't do better than "resampling if out of range", but that the penalty is expected to be negligible.
From the docs:
The algorithm is slightly tricky. It rejects values that would result in an uneven distribution (due to the fact that 2^31 is not divisible by n). The probability of a value being rejected depends on n. The worst case is n = 2^30 + 1, for which the probability of a reject is 1/2, and the expected number of iterations before the loop terminates is 2.
I'd suggest you simply use the randomizing constructor you mentioned and iterate until you reach a value that is in range, for instance like this:
public static BigInteger rndBigInt(BigInteger max) {
    Random rnd = new Random();
    do {
        BigInteger i = new BigInteger(max.bitLength(), rnd);
        if (i.compareTo(max) <= 0)
            return i;
    } while (true);
}

public static void main(String... args) {
    System.out.println(rndBigInt(new BigInteger("8180385048")));
}
For your particular case (with max = 8180385048), the probability of having to reiterate, even once, is about 4.8 %, so no worries :-)
Make a loop and get random BigIntegers of the minimum bit length that covers your range until you obtain one number in range. That should preserve the distribution of random numbers.
Reiterating if out of range, as suggested in other answers, is a solution to this problem. However if you want to avoid this, another option is to use the modulus operator:
BigInteger i = new BigInteger(max.bitLength(), rnd);
i = i.mod(max); // Now 0 <= i <= max - 1
i = i.add(BigInteger.ONE); // Now 1 <= i <= max
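Put together as a self-contained method (the name is illustrative):
import java.math.BigInteger;
import java.util.Random;

public static BigInteger rndBigIntMod(BigInteger max, Random rnd) {
    BigInteger i = new BigInteger(max.bitLength(), rnd); // uniform in [0, 2^bitLength - 1]
    i = i.mod(max);                                      // now 0 <= i <= max - 1
    return i.add(BigInteger.ONE);                        // now 1 <= i <= max
}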
For a small project (Project Euler, Problem 10) I tried to sum up all prime numbers below 2 million. So I used a brute-force method, iterated from 0 to 2'000'000, and checked if each number is prime. If it is, I added it to the sum:
private int sum = 0;

private void calculate() {
    for (int i = 0; i < 2000000; i++) {
        if (isPrime(i)) {   // isPrime(int) is my own primality check (not shown)
            sum = sum + i;
        }
    }
    System.out.println(sum);
}
The result of this calculation is 1179908154, but this is incorrect. So I changed int to BigInteger, and now I get the correct sum, 142913828922. Obviously the range of int was overflowed. But why can't Java tell me that (e.g. by an exception)?
Because it's conceivable that you might want it to behave in the traditional Integer fashion. Exceptions are reserved for things that are definitely and irrevocably wrong.
ETA: From the language spec:
"The built-in integer operators do not indicate overflow or underflow in any way. The only numeric operators that can throw an exception (§11) are the integer divide operator / (§15.17.2) and the integer remainder operator % (§15.17.3), which throw an ArithmeticException if the right-hand operand is zero."
(http://java.sun.com/docs/books/jls/second_edition/html/typesValues.doc.html)
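You can see the silent wraparound the spec describes with a one-liner:
System.out.println(Integer.MAX_VALUE + 1); // prints -2147483648, no exception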
Besides what Jim says, checking for conditions such as overflow would add a performance penalty to any calculation done with integers, which would make programs that do a lot of calculations a lot slower.
The other reason is that you can do this check yourself very easily and quickly.
if (sum + i < sum) {
    throw new ArithmeticException();
}
should do the trick nicely, given that you know i is always positive and less than Integer.MAX_VALUE.
Being aware of Integer.MAX_VALUE is always useful :)
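Since Java 8 the standard library offers the same guard: Math.addExact throws an ArithmeticException on overflow, so the manual check above can be replaced with:
sum = Math.addExact(sum, i); // throws ArithmeticException instead of silently wrapping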
Because our profession values performance over correctness. ;(
Using BigInteger by default, and only considering whether it is acceptable to use long or int if performance is a real problem, would help to avoid such problems.
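For instance, a minimal sketch of the Euler sum with a BigInteger accumulator (isPrime(int) stands in for the poster's own primality check):
import java.math.BigInteger;

BigInteger sum = BigInteger.ZERO;
for (int i = 2; i < 2000000; i++) {
    if (isPrime(i)) {                          // assumed helper, not shown
        sum = sum.add(BigInteger.valueOf(i));  // BigInteger cannot overflow
    }
}
System.out.println(sum);                       // 142913828922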