Bitwise alternative to modulo gives different results - java

I'm looking for an alternative to the modulo operator in Java. The reason is performance.
I have to run code that loops many times and performs a modulo calculation on each iteration. I read on quite a few websites that there is a bitwise alternative for this, but it gives different results in the case of 1 % 3:
1 % 3; // results in 1
1 & (3-1); // results in 0
Can somebody explain this? Most calculations went fine, but this is one combination I found which does not give equal results.

For positive integers, i & (n-1) is equivalent to i % n if n is a power of 2. It doesn't work for all numbers. Otherwise we'd all be doing it the fast way all of the time.
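For example, a quick sketch (my own, not from the answer above) showing where the identity holds and where it breaks:
int i = 37;
System.out.println(i % 8);         // 5
System.out.println(i & (8 - 1));   // 5  -- same result, because 8 is a power of two
System.out.println(1 % 3);         // 1
System.out.println(1 & (3 - 1));   // 0  -- different, because 3 is not a power of two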

Related

Beginner explanation to Bitwise operands

I am currently taking Intro programming classes. We are learning Java SE (eventually moving up to Java EE). I have a grasp on most things, but I hit a wall with bitwise manipulation and masking. For example:
EDITED HERE:
I want to figure out if a number is divisible by 3 and 5, but not 2. The only requirements are that I cannot use % to find the answer, it has to be in a method call, and I have to use masking and bitwise operators.
I already learned how to determine if a number is odd or even with this:
public static boolean isEven(int num) {
    return ((num & 1) == 0);
}
I understand what the bitwise operators (&, |, ^, >>, <<) do but can't actually implement them properly. Our book also does not have information on this; it's from our teacher's notes.
I'm not asking for just the answer, I need to understand how it actually works.
These are pretty tricky unless your prof has actually shown you how to do this sort of thing. I'll do 3, and you figure out how to do 5.
This works because 4 == 1 mod 3:
static boolean isDivisibleBy3(int n) {
    while (n < 0 || n > 3) {
        n = (n & 3) + (n >> 2);
    }
    return (n == 3 || n == 0);
}
n&3 gets the lower two bits. (n>>2) gets the rest of the bits and divides by 4. Because 4 == 1 mod 3, dividing by 4 doesn't change the remainder mod 3. Then we add them back together and get a smaller number with the same remainder as the one we started with.
Repeat until we get a number less than 4. At that point it's not going to get any smaller.
There are faster ways to do this, but this way is the easiest to understand.
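As an illustrative trace (my own example, not from the original answer), take n = 45:
boolean b = isDivisibleBy3(45);   // true
// 45 -> (45 & 3) + (45 >> 2) = 1 + 11 = 12
// 12 -> (12 & 3) + (12 >> 2) = 0 +  3 =  3   -> 3 == 3, so true (45 = 3 * 15)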

Changing A Decimal to a whole number (NOT Rounding Off)

So I am making an application that can solve problems with Empirical Formulae and I need some code that would do something like:
If numbers are 2.5, 1, 3 it should change them to 2.5*2 = 5, 1*2 = 2, 3*2 = 6 so that the number with the decimal is converted to a whole number and the other numbers are adjusted appropriately.
I thought of this logic:
for(n = 1; (Math.round(simplestRat[0]) * n) != (int) simplestRat[0]; n++)
to increment a counter that would give me the multiplier I need, but I am skeptical about this code even at this stage and do not think it will work.
It would be a lot of help if someone could suggest code for this, improve upon this code, or even give me a link to another post about this problem, as I was unable to find anything regarding this type of problem.
Any help is appreciated. Thanks
Okay, so you need a few steps. First, get them all into whole numbers. The easiest way is to find an appropriate power of ten to multiply them all by so that they all become integers. This is a useful check: How to test if a double is an integer.
Then cast them to integers, and start working through them looking for common prime factors. This'll be a process similar to Eratosthenes' Sieve (http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) but with division at the end. For each prime, see if all 3 numbers divide by it exactly (modulo prime == 0). If they do, divide and reset the primes to 2. If they don't, next prime.
This should give you the lowest common ratio between the numbers. Any additional multiplier that came from the original stage is shaved off by the common primes method.
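A minimal Java sketch of that recipe (the method and helper names are mine, not from the answer; the tolerance and the int casts are fine for small chemistry-style ratios like the ones in the question):

static int[] toWholeRatio(double[] v) {
    // Step 1: scale by powers of ten until every entry is an integer (within a tolerance).
    double scale = 1;
    while (true) {
        boolean whole = true;
        for (double x : v) {
            if (Math.abs(x * scale - Math.round(x * scale)) > 1e-9) { whole = false; break; }
        }
        if (whole) break;
        scale *= 10;
    }
    int[] n = new int[v.length];
    for (int i = 0; i < v.length; i++) n[i] = (int) Math.round(v[i] * scale);

    // Step 2: divide out common factors, resetting the trial divisor to 2 after each success.
    // (A composite divisor can never divide every entry here, because its prime factors
    // have already been divided out, so trying every d is equivalent to trying primes.)
    int d = 2;
    while (d <= smallest(n)) {
        boolean dividesAll = true;
        for (int x : n) { if (x % d != 0) { dividesAll = false; break; } }
        if (dividesAll) {
            for (int i = 0; i < n.length; i++) n[i] /= d;
            d = 2;
        } else {
            d++;
        }
    }
    return n;
}

static int smallest(int[] n) {
    int min = n[0];
    for (int x : n) min = Math.min(min, x);
    return min;
}

For the question's example, toWholeRatio(new double[] {2.5, 1, 3}) gives {5, 2, 6}.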

How to get the last digits of a power of 2

I'm working on a problem where I am required to get the last 100 digits of 2^1001. The solution must be in Java and without using BigInteger, only int or long. At the moment I am thinking of creating a class that handles 100 digits. So my question is whether there is another way to handle the overflow using int or long to get a number with 100 digits.
Thanks to all.
EDITED: I was off by a couple powers of 10 in my modulo operator (oops)
The last 100 digits of 2^1001 is the number 2^1001 (mod 10^100).
Note that 2^1001 (mod 10^100) = 2*(2^1000(mod 10^100)) (mod 10^100).
Look into properties of modulo: http://www.math.okstate.edu/~wrightd/crypt/lecnotes/node17.html
This is 99% math problem, 1% programming problem. :)
With this approach, though, you wouldn't be able to use just an integer, as 10^100 won't fit in an int or long.
However, you could use this to find, say, the last 10 digits of 2^1001, then use a separate routine to find the next 10 digits, etc, where each routine uses this feature...
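A minimal sketch of just that first step (my own, not from the answer; the follow-up routines for the next blocks of digits are not shown): the last 10 digits fit comfortably in a long, so the repeated doubling never overflows.

// Last 10 digits of 2^1001: compute 2^1001 mod 10^10 by repeated doubling.
// Every intermediate value is < 10^10 < 2^34, so doubling it stays well within a long.
long mod = 10000000000L;   // 10^10
long r = 1;
for (int i = 0; i < 1001; i++) {
    r = (r * 2) % mod;
}
System.out.println(r);     // the last 10 digits of 2^1001 (a leading zero, if any, would be dropped)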
The fastest way to do it (coding wise) is to create an array of 100 integers and then have a function which starts from the ones place and doubles every digit. Make sure to take the modulus of 10 and carry over 1s if needed. For the 100th digit, simply eliminate the carry over. Do it 1001 times and you're done.
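A sketch of that digit-array approach (my own implementation of the description above, not the poster's code):

// digits[0] is the ones place; we only ever keep the last 100 digits.
int[] digits = new int[100];
digits[0] = 1;                       // start from 2^0 = 1
for (int p = 0; p < 1001; p++) {     // double 1001 times -> 2^1001 (mod 10^100)
    int carry = 0;
    for (int i = 0; i < 100; i++) {
        int d = digits[i] * 2 + carry;
        digits[i] = d % 10;
        carry = d / 10;
    }
    // any carry out of the 100th digit is dropped, since we only want the last 100 digits
}
StringBuilder sb = new StringBuilder();
for (int i = 99; i >= 0; i--) sb.append(digits[i]);
System.out.println(sb);              // the last 100 digits of 2^1001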

Extracting rightmost N bits of an integer

In yesterday's Code Jam Qualification Round http://code.google.com/codejam/contest/dashboard?c=433101#s=a&a=0 , there was a problem called Snapper Chain. From the contest analysis I learned that the problem requires bit-twiddling, such as extracting the rightmost N bits of an integer and checking whether they are all 1. I saw a contestant's (Eireksten) code which performed the said operation like below:
(((K&(1<<N)-1))==(1<<N)-1)
I couldn't understand how this works. What is the use of the -1 there in the comparison? If somebody could explain this, it would be very useful for rookies like us. Also, any tips on recognizing this sort of problem would be much appreciated. I used a naive algorithm to solve it and ended up solving only the small dataset (it took a heck of a long time to process the large dataset, which has to be submitted within 8 minutes). Thanks in advance.
Let's use N=3 as an example. In binary, 1<<3 == 0b1000, so (1<<3) - 1 == 0b111.
In general, (1<<N) - 1 creates a number whose binary form is N ones.
Let R = (1<<N) - 1. Then the expression becomes (K&R) == R. The K&R extracts the last N bits, for example:
  101001010
& 000000111
-----------
  000000010
(Recall that the bitwise-AND will return 1 in a digit, if and only if both operands have a 1 in that digit.)
The equality holds if and only if the last N bits are all 1. Thus the expression checks if K ends with N ones.
For example: N=3, K=101010
1. (1<<N) = 001000 (shift function)
2. (1<<N)-1 = 000111 (http://en.wikipedia.org/wiki/Two's_complement)
3. K&((1<<N)-1) = 000010 (bitmask keeping only the last N bits)
4. (K&((1<<N)-1)) == ((1<<N)-1) becomes (000010 == 000111), which is FALSE
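Wrapped up as a small Java method (my own wrapper around the same check; note the explicit parentheses, since - binds tighter than << in Java):

static boolean lastNBitsAllOnes(int k, int n) {
    int mask = (1 << n) - 1;      // n low-order 1-bits, e.g. n=3 gives 0b111
    return (k & mask) == mask;    // true iff the last n bits of k are all 1
}
// e.g. lastNBitsAllOnes(42, 3) is false (42 = 0b101010), lastNBitsAllOnes(47, 3) is true (47 = 0b101111)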
I was working through the Snapper Chain problem and came here looking for an explanation on how the bit twiddling algorithm I came across in the solutions worked. I found some good info but it still took me a good while to figure it out for myself, being a bitwise noob.
Here's my attempt at explaining the algorithm and how to come up with it. If we enumerate all the possible power and ON/OFF states for each snapper in a chain, we see a pattern. Given the test case N=3, K=7 (3 snappers, 7 snaps), we show the power and ON/OFF states for each snapper for every kth snap:
binary    k    s1    s2    s3     (each entry is power.state)
0b:001    1    1.1   1.0   0.0    -> bulb ON for n=1
0b:010    2    1.0   0.1   0.0
0b:011    3    1.1   1.1   1.0    -> bulb ON for n=1, n=2
0b:100    4    1.0   0.0   0.1
0b:101    5    1.1   1.0   0.1    -> bulb ON for n=1
0b:110    6    1.0   0.1   0.1
0b:111    7    1.1   1.1   1.1    -> bulb ON for n=1, n=2, n=3
The light bulb is on when all snappers are ON and receiving power, i.e. when the kth snap leaves all n snappers in the 1 state. Even more simply, the bulb is on when all of the snappers are ON, since they must all be receiving power to be ON (and hence so is the bulb). This means that for the bulb to be on after k snaps, we need n 1s.
Further, you can note that k is all binary 1s not only for k=7, which satisfies n=3, but also for k=3, which satisfies n=2, and k=1, which satisfies n=1. Also, for n = 1 or 2 we see that for every number of snaps that turns the bulb on, the last n binary digits of k are always 1. We can generalize: every k that satisfies n snappers is a binary number ending in n ones.
We can use the expression noted by an earlier poster: (1 << n) - 1 always gives us n binary digits of 1, or in this case, (1 << 3) - 1 = 0b111. If we treat our chain of n snappers as a binary number where each digit represents ON/OFF, and we want n digits of one, this gives us exactly that representation.
Now we want to find those cases where k ends in n binary digits of 1, which we do by performing a bitwise AND, k & ((1 << n) - 1), to get the last n digits of k, and then comparing the result to (1 << n) - 1.
I suppose this type of thinking comes more naturally after working with these types of problems a lot, but it's still intimidating to me and I doubt I would ever have come up with such a solution by myself!
Here's my solution in Perl:
$tests = <>;
for (1..$tests) {
    ($n, $k) = split / /, <>;
    $m = (1 << $n) - 1;    # mask of $n one-bits; the parentheses matter, since - binds tighter than <<
    printf "Case #%d: %s\n", $_, (($k & $m) == $m) ? 'ON' : 'OFF';
}
I think we can recognize this kind of problem by calculating the answer by hand first, for some small values of N (for example 1, 2, 3, ...). After that, we can write a function that automates the process (the first function), run it for some inputs, and notice the pattern.
Once we see the pattern, we write a function that implements the pattern directly (the second function) and compare the outputs of the first and second functions.
For the Code Jam case, we can run both functions against the small dataset and verify that the outputs are identical. If they are, we have a high probability that the second function can solve the large dataset in time.

Problems with prime numbers

I am trying to write a program to find the largest prime factor of a very large number, and have tried several methods with varying success. All of the ones I have found so far have been unbelievably slow. I had a thought, and am wondering if this is a valid approach:
long number = input;
while (notPrime(number)) {
    number = number / getLowestDivisiblePrimeNumber();
}
return number;
This approach would take an input, and would do the following:
200 -> 100 -> 50 -> 25 -> 5 (return)
90 -> 45 -> 15 -> 5 (return)
It divides currentNum repeatedly by the smallest prime that divides it (most often 2 or 3) until currentNum itself is prime (no prime less than or equal to the square root of currentNum divides it), and assumes this is the largest prime factor of the original input.
Will this always work? If not, can someone give me a counterexample?
EDIT: By very large, I mean about 2^40, or 10^11.
The method will work, but will be slow. "How big are your numbers?" determines the method to use:
Less than 2^16 or so: Lookup table.
Less than 2^70 or so: Sieve of Atkin. This is an optimized version of the more well known Sieve of Eratosthenes. Edit: Richard Brent's modification of Pollard's rho algorithm may be better in this case.
Less than 10^50: Lenstra elliptic curve factorization
Less than 10^100: Quadratic Sieve
More than 10^100: General Number Field Sieve
This will always work because of the Unique Prime Factorization Theorem: repeatedly dividing out the smallest prime factor must eventually leave you with the largest one.
Certainly it will work (see Mark Byers' answer), but for "very large" inputs it may take far too long. You should note that your call to getLowestDivisiblePrimeNumber() conceals another loop, so this runs at O(N^2), and that depending on what you mean by "very large" it may have to work on BigNums which will be slow.
You could speed it up a little, by noting that your algorithm need never check factors smaller than the last one found.
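A sketch of that idea in Java (my own illustration, not the poster's code): once a factor d has been divided out completely, nothing smaller than d can divide what is left, so the trial divisor only ever moves forward.

static long largestPrimeFactor(long n) {
    long largest = 1;
    for (long d = 2; d * d <= n; d++) {   // d never goes back to smaller values
        while (n % d == 0) {              // divide d out completely before moving on
            largest = d;
            n /= d;
        }
    }
    return (n > 1) ? n : largest;         // whatever is left (> 1) is itself the largest prime factor
}

For inputs around 2^40 this takes at most about 2^20 iterations of the outer loop, which is fast.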
You are trying to find the prime factors of a number. What you are proposing will work, but will still be slow for large numbers.... you should be thankful for this, since most modern security is predicated on this being a difficult problem.
From a quick search I just did, the fastest known way to factor a number is by using the Elliptic Curve Method.
You could try throwing your number at this demo: http://www.alpertron.com.ar/ECM.HTM .
If that convinces you, you could try either stealing the code (that's no fun, they provide a link to it!) or reading up on the theory of it elsewhere. There's a Wikipedia article about it here: http://en.wikipedia.org/wiki/Lenstra_elliptic_curve_factorization but I'm too stupid to understand it. Thankfully, it's your problem, not mine! :)
The thing with Project Euler is that there is usually an obvious brute-force method to do the problem, which will take just about forever. As the questions become more difficult, you will need to implement clever solutions.
One way you can solve this problem is to use a loop that always finds the smallest (positive integer) factor of a number. When the smallest factor of a number is that number, then you've found the greatest prime factor!
Detailed Algorithm description:
You can do this by keeping three variables:
The number you are trying to factor (A)
A current divisor store (B)
A largest divisor store (C)
Initially, let (A) be the number you are interested in - in this case, it is 600851475143. Then let (B) be 2. Have a conditional that checks whether (A) is divisible by (B). If it is, record (B) as the largest divisor found so far in (C), divide (A) by (B), reset (B) to 2, and go back to checking whether (A) is divisible by (B). If (A) is not divisible by (B), increment (B) by 1 and check again. Run the loop until (A) is 1. The value left in (C) will be the largest prime divisor of 600851475143.
There are numerous ways you could make this more efficient - instead of incrementing to the next integer, you could skip to the next candidate prime, and instead of keeping a largest-divisor store, you could just return the current number once its only divisor is itself. However, the algorithm described above will run in seconds regardless.
The implementation in Python is as follows:
def lpf(x):
    lpf = 2
    while x > lpf:
        if x % lpf == 0:
            x = x / lpf
            lpf = 2
        else:
            lpf += 1
    print("Largest Prime Factor: %d" % lpf)

def main():
    x = long(raw_input("Input long int: "))
    lpf(x)
    return 0

if __name__ == '__main__':
    main()
Example: Let's find the largest prime factor of 105 using the method described above.
Let (A) = 105. (B) = 2 (we always start with 2), and we don't have a value for (C) yet.
Is (A) divisible by (B)? No. Increment (B) by +1: (B) = 3. Is (A) divisible by (B)? Yes (105/3 = 35). The largest divisor found so far is 3. Let (C) = 3. Update (A) = 35. Reset (B) = 2.
Now, is (A) divisible by (B)? No. Increment (B) by +1: (B) = 3. Is (A) divisible by (B)? No. Increment (B) by +1: (B) = 4. Is (A) divisible by (B)? No. Increment (B) by +1: (B) = 5. Is (A) divisible by (B)? Yes. (35/5 = 7). The largest divisor we found previously is stored in (C). (C) is currently 3. 5 is larger than 3, so we update (C) = 5. We update (A)=7. We reset (B)=2.
Then we repeat the process for (A), but we will just keep incrementing (B) until (B)=(A), because 7 is prime and has no divisors other than itself and 1. (We could already stop when (B)>((A)/2), as you cannot have integer divisors greater than half of a number - the smallest possible divisor (other than 1) of any number is 2!)
So at that point we return (A) = 7.
Try doing a few of these by hand, and you'll get the hang of the idea.
