Implementation using a linear congruential equation in Java

I see an LCG implementation in Java in the Random class, as shown below:
/*
 * This is a linear congruential pseudorandom number generator, as
 * defined by D. H. Lehmer and described by Donald E. Knuth in
 * <i>The Art of Computer Programming,</i> Volume 3:
 * <i>Seminumerical Algorithms</i>, section 3.2.1.
 *
 * @param  bits random bits
 * @return the next pseudorandom value from this random number
 *         generator's sequence
 * @since  1.1
 */
protected int next(int bits) {
    long oldseed, nextseed;
    AtomicLong seed = this.seed;
    do {
        oldseed = seed.get();
        nextseed = (oldseed * multiplier + addend) & mask;
    } while (!seed.compareAndSet(oldseed, nextseed));
    return (int)(nextseed >>> (48 - bits));
}
But the link below says that an LCG should have the form x2 = (a*x1 + b) mod M:
https://math.stackexchange.com/questions/89185/what-does-linear-congruential-mean
The code above does not look like that form. Instead it uses & in place of the modulo operation, as in this line:
nextseed = (oldseed * multiplier + addend) & mask;
Can somebody help me understand this approach of using & instead of the modulo operation?

Bitwise-ANDing with a mask of the form 2^n - 1 is the same as computing the number modulo 2^n: any 1-bits higher up in the number represent multiples of 2^n and so can be safely discarded. Note, however, that some multiplier/addend combinations work very poorly if you make the modulus a power of two (rather than a power of two minus one). That code is fine, but make sure it's appropriate for your constants.

This can be used if mask + 1 is a power of 2.
For instance, if you want to compute a value modulo 4, you can write x & 3 instead of x % 4 to obtain the same result.
Note, however, that this requires x to be non-negative.
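For a quick sanity check, here is a minimal sketch (plain Java, values are illustrative) showing that the AND-mask and the modulo give the same result for non-negative inputs and a power-of-two modulus:
public class MaskVsMod {
    public static void main(String[] args) {
        long mask = (1L << 48) - 1;              // same shape as Random's 48-bit mask
        long x = 123456789012345L;               // any non-negative test value
        System.out.println((x & mask) == (x % (1L << 48)));  // true
        System.out.println((17 & 3) + " " + (17 % 4));       // prints "1 1"
    }
}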


Calculate the kth power of 2

I was solving a problem whose basic idea is to calculate a power of 2 for some k and then multiply it by 10; the result should be that value mod 10^9+7.
Given constraints: 1 ≤ K ≤ 10^9.
I am using Java for this. I used the Math.pow function, but 2^10000000 exceeds its range, and I don't want to use BigInteger here. Is there any other way to calculate such large values?
The actual problem is:
For each valid i, the sign with number i had the integer i written on one side and 10^K − i − 1 written on the other side.
Now, Marichka is wondering: how many road signs have exactly two distinct decimal digits written on them (on both sides in total)? Since this number may be large, compute it modulo 10^9+7.
I'm using this pow approach, but it is not efficient. Any suggestions for solving this problem?
My original solution:
/* package codechef; // don't place package name! */
import java.util.*;

class Codechef
{
    public static void main(String[] args) throws java.lang.Exception
    {
        Scanner scan = new Scanner(System.in);
        int t = scan.nextInt();
        while (t-- > 0) {
            long k = scan.nextInt();
            long mul = 10 * (long) Math.pow(2, k - 1);
            long ans = mul % 1000000007;
            System.out.println(ans);
        }
    }
}
After trying some examples, I found that this pow solution works fine for small constraints but not for large ones.
while (t-- > 0) {
    long k = scan.nextInt();
    long mul = 10 * (long) Math.pow(2, k);
    long ans = mul % 1000000007;
    System.out.println(ans);
}
This pow function exceeds its range. Is there a good solution to this?
Basically, f(g(x)) mod M is the same as f(g(x) mod M) mod M when f is built from multiplications and additions. As exponentiation is just a lot of multiplication, you can decompose your single exponentiation into many multiplications and apply the modulo at every step, i.e.
10 * 2^5 mod 13
is the same as
10
* 2 mod 13
* 2 mod 13
* 2 mod 13
* 2 mod 13
* 2 mod 13
You can compact the loop by not breaking the exponentiation up quite so far; i.e. this would again give the same answer:
10
* 4 mod 13
* 4 mod 13
* 2 mod 13
Faruk's recursive solution shows an elegant way to do this.
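For reference, here is a minimal iterative sketch of the same "reduce at every step" idea (illustrative code, not taken from either answer); since 10^9+7 fits in 31 bits, every intermediate product stays well below the long range:
static long powMod(long base, long exp, long mod) {
    long result = 1 % mod;                 // also handles mod == 1
    long b = base % mod;
    while (exp > 0) {
        if ((exp & 1) == 1) {
            result = (result * b) % mod;   // multiply in the current bit, reduce immediately
        }
        b = (b * b) % mod;                 // square the base, reduce immediately
        exp >>= 1;
    }
    return result;
}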
You need to use the idea of dividing the power by 2.
long bigmod(long p, long e, long M) {
    if (e == 0)
        return 1;
    if (e % 2 == 0) {
        long t = bigmod(p, e / 2, M);
        return (t * t) % M;
    }
    return (bigmod(p, e - 1, M) * p) % M;
}

while (t-- > 0) {
    long k = scan.nextInt();
    long ans = bigmod(2, k, 1000000007);
    System.out.println(ans);
}
You can get details about the idea from here: https://www.geeksforgeeks.org/how-to-avoid-overflow-in-modular-multiplication/
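If the full expression 10 * 2^(K-1) mod (10^9+7) from the original loop is needed, the extra factor of 10 can be folded in with one more modular multiplication, for example (a sketch, reusing the bigmod method above):
while (t-- > 0) {
    long k = scan.nextInt();
    long ans = (10L * bigmod(2, k - 1, 1000000007L)) % 1000000007L;
    System.out.println(ans);
}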
As the size of long is 8 bytes and it is a signed datatype, the range of long is -(2^63) to (2^63 - 1). Hence, to store a value like 2^100 you have to use another datatype.
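For illustration, a value like 2^100 only fits in a BigInteger (a small sketch, not part of the solution above):
import java.math.BigInteger;

public class BigPower {
    public static void main(String[] args) {
        // Long.MAX_VALUE is 2^63 - 1, far too small for 2^100
        System.out.println(Long.MAX_VALUE);
        System.out.println(BigInteger.ONE.shiftLeft(100)); // prints 2^100 exactly
    }
}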

Converting binary representation of integers to ASCII in Java Card

I would like to convert integers of arbitrary length, represented in binary format, to ASCII form.
One example: for the integer 33023, the hexadecimal bytes are 0x80FF. I would like to represent 0x80FF in the ASCII form of 33023, which has the hexadecimal representation 0x3333303233.
I am working in a Java Card environment, which does not recognize the String type, so I have to do the conversion manually via binary manipulation.
What is the most efficient way to go about this, given that the Java Card environment on a 16-bit smart card is very constrained?
This is trickier than you may think, as it requires base conversion, and base conversion has to be performed over the entire number using big-integer arithmetic.
That of course doesn't mean we cannot create an efficient implementation of said big-integer arithmetic specifically for this purpose. Here is an implementation that left-pads with zeros (which is usually required on Java Card) and uses no additional memory (!). You may have to copy the original value of the big-endian number if you want to keep it, though - the input value is overwritten. Putting it in RAM is highly recommended.
This code simply divides the bytes by the new base (10 for decimals), returning the remainder. The remainder is the next lowest digit. As the input value has now been divided, the next remainder is the digit one position more significant than the previous one. It keeps dividing and returning the remainder until the value is zero and the calculation is complete.
The tricky part of the algorithm is the inner loop, which divides the value by 10 in place while returning the remainder, using tail division over the bytes. It provides one remainder / decimal digit per run. This also means that the order of the function is O(n), where n is the number of digits in the result (counting the tail division as a single operation). Note that n can be calculated as ceil(bigNumBytes * log_10(256)); those values are precalculated in the BYTES_TO_DECIMAL_SIZE table. log_10(256) is of course a constant, just above 2.408.
Here is the final code with optimizations (see the edit for different versions):
/**
 * Converts an unsigned big endian value within the buffer to the same value
 * stored using ASCII digits. The ASCII digits may be zero padded, depending
 * on the value within the buffer.
 * <p>
 * <strong>Warning:</strong> this method zeros the value in the buffer that
 * contains the original number. It is strongly recommended that the input
 * value is in fast transient memory as it will be overwritten multiple
 * times - until it is all zero.
 * </p>
 * <p>
 * <strong>Warning:</strong> this method fails if not enough bytes are
 * available in the output BCD buffer while destroying the input buffer.
 * </p>
 * <p>
 * <strong>Warning:</strong> the big endian number can only occupy 16 bytes
 * or less for this implementation.
 * </p>
 *
 * @param uBigBuf
 *            the buffer containing the unsigned big endian number
 * @param uBigOff
 *            the offset of the unsigned big endian number in the buffer
 * @param uBigLen
 *            the length of the unsigned big endian number in the buffer
 * @param decBuf
 *            the buffer that is to receive the BCD encoded number
 * @param decOff
 *            the offset in the buffer to receive the BCD encoded number
 * @return decLen, the length in the buffer of the received BCD encoded
 *         number
 */
public static short toDecimalASCII(byte[] uBigBuf, short uBigOff,
        short uBigLen, byte[] decBuf, short decOff) {

    // variables required to perform long division by 10 over bytes
    // possible optimization: reuse remainder for dividend (yuk!)
    short dividend, division, remainder;

    // calculate stuff outside of loop
    final short uBigEnd = (short) (uBigOff + uBigLen);
    final short decDigits = BYTES_TO_DECIMAL_SIZE[uBigLen];

    // --- basically perform division by 10 in a loop, storing the remainder

    // traverse from right (least significant) to the left for the decimals
    for (short decIndex = (short) (decOff + decDigits - 1); decIndex >= decOff; decIndex--) {

        // --- the following code performs tail division by 10 over bytes

        // clear remainder at the start of the division
        remainder = 0;

        // traverse from left (most significant) to the right for the input
        for (short uBigIndex = uBigOff; uBigIndex < uBigEnd; uBigIndex++) {

            // get rest of previous result times 256 (bytes are base 256)
            // ... and add next positive byte value
            // optimization: doing shift by 8 positions instead of mul.
            dividend = (short) ((remainder << 8) + (uBigBuf[uBigIndex] & 0xFF));

            // do the division
            division = (short) (dividend / 10);

            // optimization: perform the modular calculation using
            // ... subtraction and multiplication
            // ... instead of calculating the remainder directly
            remainder = (short) (dividend - division * 10);

            // store the result in place for the next iteration
            uBigBuf[uBigIndex] = (byte) division;
        }

        // the remainder is what we were after
        // add '0' value to create ASCII digits
        decBuf[decIndex] = (byte) (remainder + '0');
    }

    return decDigits;
}

/*
 * pre-calculated array storing the number of decimal digits for big endian
 * encoded numbers with len bytes: ceil(len * log_10(256))
 */
private static final byte[] BYTES_TO_DECIMAL_SIZE = { 0, 3, 5, 8, 10, 13,
        15, 17, 20, 22, 25, 27, 29, 32, 34, 37, 39 };
To extend the input size simply calculate and store the next decimal sizes in the table...
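A small off-card helper (plain Java SE, not Java Card code, since Math.log10 is not available on-card; class and method names are illustrative) could print the additional table entries:
public class DecimalSizeTable {
    public static void main(String[] args) {
        // next entries of BYTES_TO_DECIMAL_SIZE: ceil(len * log_10(256)) for len > 16
        for (int len = 17; len <= 32; len++) {
            int digits = (int) Math.ceil(len * Math.log10(256));
            System.out.println(len + " bytes -> " + digits + " decimal digits");
        }
    }
}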

Why does ConcurrentHashMap calculate the hash code with 0x7fffffff in Java 1.8?

When a key's hash code is calculated, the spread() method is called:
static final int spread(int h) {
    return (h ^ (h >>> 16)) & HASH_BITS;
}
where HASH_BITS equals 0x7fffffff. So, what is the purpose of HASH_BITS? Someone says it sets the sign bit to 0, but I am not sure about that.
The index of a KV Node in the hash buckets is calculated by the following formula:
index = (n - 1) & hash
hash is the result of spread()
n is the length of the hash bucket array, whose maximum is 2^30:
private static final int MAXIMUM_CAPACITY = 1 << 30;
So the maximum of n - 1 is 2^30 - 1, which means the top bit of hash is never used in the index calculation.
But I still don't understand whether it is necessary to clear the top bit of the hash to 0. It seems there must be more reasons for doing so.
/**
* Spreads (XORs) higher bits of hash to lower and also forces top
* bit to 0. Because the table uses power-of-two masking, sets of
* hashes that vary only in bits above the current mask will
* always collide. (Among known examples are sets of Float keys
* holding consecutive whole numbers in small tables.) So we
* apply a transform that spreads the impact of higher bits
* downward. There is a tradeoff between speed, utility, and
* quality of bit-spreading. Because many common sets of hashes
* are already reasonably distributed (so don't benefit from
* spreading), and because we use trees to handle large sets of
* collisions in bins, we just XOR some shifted bits in the
* cheapest possible way to reduce systematic lossage, as well as
* to incorporate impact of the highest bits that would otherwise
* never be used in index calculations because of table bounds.
*/
static final int spread(int h) {
    return (h ^ (h >>> 16)) & HASH_BITS;
}
I think it is to avoid collisions with the reserved hash codes MOVED (-1), TREEBIN (-2) and RESERVED (-3), whose sign bits are always 1.
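A small demo of that point (illustrative values, not JDK code apart from the copied constant and method): without the mask, a negative hashCode stays negative after the XOR, while ConcurrentHashMap reserves the negative hash values -1, -2 and -3 for forwarding, tree-bin and reservation nodes; with the mask, spread() is always non-negative and can never collide with them:
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff;     // same constant as in ConcurrentHashMap

    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        int h = "example".hashCode() | 0x80000000;   // force a negative hash for the demo
        System.out.println(h ^ (h >>> 16));          // still negative: the sign bit survives the XOR
        System.out.println(spread(h));               // always >= 0, so it cannot equal -1, -2 or -3
    }
}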

Java BigInteger factorization: division and multiplication differ

I'm writing code to factorize a big number (more than 30 digits) in Java.
The number (n) is: 8705702225074732811211966512111
The code seems to work and the results are:
7
2777
14742873817
By logic, the last factor should be obtainable as n / (fact1 * fact2 * fact3), and it comes out to:
30377199961175839
I was very happy with this, but then decided to run a little test: I multiplied all the factors, expecting to find n... but I didn't!
Here is my check code:
BigInteger n = new BigInteger("8705702225074732811211966512111");
BigInteger temp1 = new BigInteger("7");
BigInteger temp2 = new BigInteger("2777");
BigInteger temp3 = new BigInteger("14742873817");
BigInteger temp4 = n.divide(temp1).divide(temp2).divide(temp3);
System.out.println(n.mod(temp1));
System.out.println(n.mod(temp2));
System.out.println(n.mod(temp3));
System.out.println(n.mod(temp4));
System.out.println(n.divide(temp1).divide(temp2).divide(temp3).divide(temp4));
System.out.println(temp1.multiply(temp2).multiply(temp3).multiply(temp4));
System.out.println(n);
As you can see, I simply define the number n and the factors (the last one is defined as n / (fact1 * fact2 * fact3)), then check that n divided by each factor gives remainder 0.
Then I check that ((((n / fact1) / fact2) / fact3) / fact4) = 1.
Lastly I check that fact1 * fact2 * fact3 * fact4 = n.
The problems are:
n mod temp4 is not 0, but 245645763538854
fact1 * fact2 * fact3 * fact4 is different from n
but ((((n / fact1) / fact2) / fact3) / fact4) = 1
Here is the exact output:
0
0
0
245645763538854
1
8705702225074732565566202973257
8705702225074732811211966512111
This makes no sense... How can the fourth factor be wrong and right at the same time?
I'm sorry to report:
8705702225074732811211966512111 / (7 * 2777 * 14742873817) =
30377199961175839.8571428571
where it should be a whole number.
So your factorisation is wrong... oops.
Try bc under Linux; for Windows: http://gnuwin32.sourceforge.net/packages/bc.htm.
It can deal with these kinds of numbers.
This page says the actual factorization of your BigInteger is 7 * 2777 * 2106124831 * 212640399728230879.
System.out.println(temp3.mod(temp1));
The above code gives 0, which means temp3 is itself divisible by 7 and therefore not prime; temp4 (the truncated quotient) is not a real factor of n.
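A quick way to check this (a sketch; the four factor values are taken from the linked page, not recomputed here):
import java.math.BigInteger;

public class FactorCheck {
    public static void main(String[] args) {
        BigInteger n = new BigInteger("8705702225074732811211966512111");
        BigInteger product = new BigInteger("7")
                .multiply(new BigInteger("2777"))
                .multiply(new BigInteger("2106124831"))
                .multiply(new BigInteger("212640399728230879"));
        System.out.println(product.equals(n));   // true if the quoted factorization is correct
        // 14742873817 mod 7 == 0, so the third "factor" found originally is not prime
        System.out.println(new BigInteger("14742873817").mod(BigInteger.valueOf(7)));
    }
}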

Find the N most significant bits of a BigInteger (MSB)

I want to find the n most significant bits of a BigInteger and return them as a byte.
It's my homework and I know very little about this topic.
Please help me solve it; it's very important for me.
This is the method that must be implemented:
/**
 * Gets N bits from the MOST SIGNIFICANT BIT (inclusive).
 *
 * @param value Source from which bits will be extracted
 * @param n The number of bits taken
 * @return The n most significant bits from value
 */
private byte msb(BigInteger value, int n) {
    return 0x000;
}
You can try bitLength() in java.math.BigInteger, which returns the number of bits in the number. You can use this method to retrieve the n most significant bits as follows:
int n = 3;
BigInteger r = BigInteger.valueOf(23);
BigInteger f = r.shiftRight(r.bitLength() - n);
Byte result = Byte.valueOf(f.toString());
System.out.println(result);
This prints 5 as expected.
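Folding that into the requested method might look like this (a sketch, assuming n is at most 8 so the result fits in a byte, and that value.bitLength() >= n):
private byte msb(BigInteger value, int n) {
    // drop everything except the n most significant bits
    BigInteger top = value.shiftRight(value.bitLength() - n);
    // byteValue() returns the low 8 bits (two's complement), enough for n <= 8
    return top.byteValue();
}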
