Assign value, but only if positive - without conditionals - java

Question
I was impressed by tricks like the XOR-swap algorithm and similar. So I asked myself: is it possible to assign a variable a value, but only if that value is positive, without using any sort of if or hidden conditional; just pure math?
Alternative
Basically, this but without the if:
int a = ...
int b = ...
if (b >= 0) {
a = b;
}
Examples
Here are some example input/output setups to illustrate the desired logic:
a = 1, b = 10 -> a = 10 // b is positive
a = 1, b = 0 -> a = 0 // b is 0, also positive
a = 1, b = -10 -> a = 1 // b is negative

tl;dr
int a = ...
int b = ...
int isNegative = b >>> 31; // 1 if negative, 0 if positive
int isPositive = 1 - isNegative; // 0 if negative, 1 if positive
a = isPositive * b + isNegative * a;
Signum
An easy way to achieve the task is to acquire some sort of signum; more specifically, a factor that is
either 0 if b is negative
or 1 if b is positive, or vice versa.
Now, if you take a look at how int is represented internally with its 32-bits (this is called Two's complement):
// 1234
00000000 00000000 00000100 11010010
// -1234
11111111 11111111 11111011 00101110
You see that it has the so-called sign bit on the very left, the most significant bit. Turns out, you can easily extract that bit with a simple bit shift that moves the whole bit pattern 31 positions to the right, leaving only the 32nd bit, i.e. the sign bit:
int isNegative = b >>> 31; // 1 if negative, 0 if positive
Now, to get the opposite indicator, you simply subtract it from 1:
int isPositive = 1 - isNegative; // 0 if negative, 1 if positive
Annihilator and Identity
Once you have that, you can easily construct your desired value by exploiting the fact that
multiplication with 0 basically erases the argument (0 is an annihilator of *)
and addition with 0 does not change the value (0 is an identity element of +).
So, coming back to the logic we want to achieve in the first place:
we want b if b is positive
and we want a if b is negative
Hence, we just do b * isPositive and a * isNegative and add them together:
a = isPositive * b + isNegative * a;
Now, if b is positive, you will get:
a = 1 * b + 0 * a
= b + 0
= b
and if it is negative, you will get:
a = 0 * b + 1 * a
= 0 + a
= a
Other datatypes
The same approach can also be applied to any other signed data type, such as byte, short, long, float and double.
For example, here is a version for double:
double a = ...
double b = ...
long isNegative = Double.doubleToLongBits(b) >>> 63;
long isPositive = 1 - isNegative;
a = isPositive * b + isNegative * a;
Unfortunately, in Java you cannot use >>> directly on a double (shifting bits across the exponent and mantissa usually makes no sense anyway), but there is the helper Double#doubleToLongBits, which reinterprets the double's bit pattern as a long. Note that -0.0 has its sign bit set, so this version treats it as negative.
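Putting both variants together, here is a minimal runnable sketch (class and method names are my own):

```java
public class BranchlessAssign {
    // int version: copy b into a only when b >= 0
    static int assign(int a, int b) {
        int isNegative = b >>> 31;       // 1 if b < 0, else 0
        int isPositive = 1 - isNegative;
        return isPositive * b + isNegative * a;
    }

    // double version: the sign bit is extracted from the raw bit pattern
    static double assign(double a, double b) {
        long isNegative = Double.doubleToLongBits(b) >>> 63;
        long isPositive = 1 - isNegative;
        return isPositive * b + isNegative * a;
    }

    public static void main(String[] args) {
        System.out.println(assign(1, 10));   // 10
        System.out.println(assign(1, 0));    // 0
        System.out.println(assign(1, -10));  // 1
    }
}
```

Keep in mind that the double overload treats -0.0 as negative, since its sign bit is set.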

Related

Struggling to find the correct loop invariant

I have the following code:
public static void main(String[] args) {
    int a = 3;
    int b = 7;
    int x = b;       // x = b
    int res = a;     // res = a
    int y = 1;
    int invariant = 0;
    System.out.println("a|b|x|y|res|invariant");
    while (x > 0) {
        if (x % 2 == 0) {
            y = 2 * y;
            x = x / 2;
        } else {
            res = res + y;
            y = 2 * y;
            x = (x - 1) / 2;
        }
        invariant = y + 2;
        String output = String.format("%d|%d|%d|%d|%d|%d", a, b, x, y, res, invariant);
        System.out.println(output);
    }
    // < res = a + b >
}
Which gives the following output:
a|b|x|y|res|invariant
3|7|3|2|4|4
3|7|1|4|6|6
3|7|0|8|10|10
However, if I change the numbers, the invariant isn't equal to res anymore. Therefore my loop invariant for this problem is not correct.
I'm struggling really hard to find the correct loop invariant and would be glad for any hint that someone can give me.
My first impression after looking at the code and my results is that the loop invariant changes based on a and b. Let's say both a and b are odd numbers, as they are in my example; then my loop invariant seems correct (at least it looks that way).
Is it correct to assume a loop invariant like the following?
< res = y - 2 && a % 2 != 0 && b % 2 != 0 >
I did use different numbers, and it seems like any time I change them there's a different loop invariant; I struggle to find any pattern whatsoever.
I would really appreciate if someone can give me a hint or a general idea on how to solve this.
Thanks
This loop computes the sum a+b.
res is initialized to a.
Then, in each iteration of the loop, the next bit of the binary representation of b (starting with the least significant bit) is added to res, until the loop ends and res holds a+b.
How does it work:
x is initialized to b. In each iteration you eliminate the least significant bit. If that bit is 0, you simply divide x by 2. If it's 1, you subtract 1 and divide by 2 (actually it would be sufficient to divide by 2, since (x-1)/2 == x/2 when x is an odd positive int). Only when you encounter a 1 bit do you have to add it (multiplied by the correct power of 2) to the result. y holds the correct power of 2.
In your a=3, b=7 example, the binary representation of b is 111
In the first iteration, the value of res is a + 1 (binary) == a + 1 = 4
In the second iteration, the value of res is a + 11 (binary) == a + 3 = 6
In the last iteration, the value of res is a + 111 (binary) == a + 7 == 10
You could write the invariant as:
invariant = a + (b & (y - 1));
This takes advantage of the fact that at the end of the i'th iteration (i starting from 1), y holds 2^i, so y - 1 == 2^i - 1 is a number whose binary representation is i one-bits (i.e. 11...11 with i bits). When you & this number with b, you get the i least significant bits of b.
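To see the invariant in action, a small sketch that re-runs the loop from the question and checks res == a + (b & (y - 1)) after every iteration (for non-negative b; class and method names are my own):

```java
public class InvariantCheck {
    // Re-runs the loop from the question and verifies the candidate
    // invariant res == a + (b & (y - 1)) at the end of each iteration.
    static boolean invariantHolds(int a, int b) {
        int x = b, res = a, y = 1;
        while (x > 0) {
            if (x % 2 == 0) {
                y = 2 * y;
                x = x / 2;
            } else {
                res = res + y;
                y = 2 * y;
                x = (x - 1) / 2;
            }
            if (res != a + (b & (y - 1))) return false;
        }
        return res == a + b; // postcondition
    }

    public static void main(String[] args) {
        System.out.println(invariantHolds(3, 7));   // true
        System.out.println(invariantHolds(5, 12));  // true
        System.out.println(invariantHolds(0, 100)); // true
    }
}
```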

about &(bit and) operator

I have seen this code in HashMap:
/**
 * Returns index for hash code h.
 */
static int indexFor(int h, int length) {
    // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
    return h & (length - 1);
}
The HashMap has this document:
when length is a power of two then h & (length-1) equals h % length
I want to know the mathematical principle behind it,
that is, why h & (length-1) == h % length (when length is a power of two).
First, think about what it looks like when you take any integer n mod a power of 2.
WLOG, let this power of 2 be 10000 in binary (indeed it must be of the form 100...0). What do its multiples look like? They must look like ...whatever digits...0000; the last 4 digits must be zero.
So what is a number n mod 10000? Let this number n be ...whatever digits...1011. This number can be expressed as ...whatever digits...0000 + 1011, and now it is obvious that for n mod 10000 only the last 4 digits are left.
In general, let length be a power of 2 with x trailing zeros; then n % length is the x least significant digits of n.
So length - 1 is indeed 111...111 (x ones), and when you take the bitwise AND with the number n, the x least significant digits of n are preserved and returned, which is what we want. Using the same example as above,
Length = 10000, Length - 1 = 1111
n = 101001101 = 101000000 + 1101
=> n % Length = 1101
n & (Length - 1) = 1101
= n % Length
Just imagine: any power of two contains single bit set and has binary representation like this:
l = 00010000
if you subtract 1, it will contain ones at the right places
m = l-1 = 00001111
a binary AND with any h zeroes out the more significant bits, leaving only the less significant ones
10101010 & 00001111 = 00001010
This is equivalent to the modulo operation with modulus l.
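A quick sanity check of the identity; note it only holds for non-negative h, since Java's % can return negative values (class and method names are my own):

```java
public class ModMask {
    // True when the mask trick agrees with % for these arguments.
    static boolean sameAsMod(int h, int length) {
        return (h & (length - 1)) == h % length;
    }

    public static void main(String[] args) {
        for (int length : new int[] {1, 2, 16, 1024}) {   // powers of two
            for (int h = 0; h < 10_000; h++) {
                if (!sameAsMod(h, length))
                    throw new AssertionError("mismatch: h=" + h + " length=" + length);
            }
        }
        System.out.println("h & (length-1) == h % length for every tested h");
    }
}
```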

Compare two integers using bit operator

I need to compare two integers using bit operators.
I faced a problem where I have to compare two integers without using comparison operators. Using bit operators would help. But how?
Let's say
a = 4;
b = 5;
We have to show that a is not equal to b.
I would also like to extend it further: say we will show which is greater. Here b is greater.
You need at least a comparison to 0; notionally this is what the CPU does for a comparison. E.g.
equality can be modelled with ^, as all the bits have to be the same to return 0:
(a ^ b) == 0
if this were C you could drop the == 0, as it can be implied with
!(a ^ b)
but in Java you can't convert an int to a boolean without at least some comparison.
For comparison you usually do a subtraction, though one which handles overflows.
(long) a - b > 0 // same as a > b
Subtraction is the same as adding a negative, and negating is the same as ~x + 1, so you can do
(long) a + ~ (long) b + 1 > 0
to drop the +1 you can change this to
(long) a + ~ (long) b >= 0 // same as a > b
You could implement + as a series of bit by bit operations with >> << & | and ^ but I wouldn't inflict that on you.
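The identities above, collected into a runnable sketch (the widening cast to long is what avoids overflow; class and method names are my own):

```java
public class WideningCompare {
    // a > b via  (long) a + ~(long) b >= 0   (i.e. a - b - 1 >= 0)
    static boolean greater(int a, int b) {
        return (long) a + ~(long) b >= 0;
    }

    // a == b via XOR: all bits agree exactly when a ^ b == 0
    static boolean equal(int a, int b) {
        return (a ^ b) == 0;
    }

    public static void main(String[] args) {
        System.out.println(greater(5, 4));                   // true
        System.out.println(greater(Integer.MIN_VALUE, 1));   // false
        System.out.println(equal(4, 5));                     // false
    }
}
```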
You cannot convert 1 or 0 to a boolean without a comparison operator, as Peter mentioned. It is still possible to get max without a comparison operator.
I'm using bit (1 or 0) instead of int to avoid confusion.
bit msb(x):
    return lsb(x >> 31)

bit lsb(x):
    return x & 1

// returns 1 if x < 0, 0 if x >= 0
bit isNegative(x):
    return msb(x)
With these helpers isGreater(a, b) looks like,
// BUG: overflow possible when a is -ve and b is +ve
// returns 1 if a > b, 0 if a <= b
bit isGreater_BUG(a, b):
    return isNegative(b - a) // possible overflow
We need two helpers functions to detect same and different signs,
// toggles lsb only
bit toggle(x):
    return lsb(~x)

// returns 1 if a, b have same signs (0 is considered +ve).
bit isSameSigns(a, b):
    return toggle(isDiffSigns(a, b))

// returns 1 if a, b have different signs (0 is considered +ve).
bit isDiffSigns(a, b):
    return msb(a ^ b)
So with the overflow issue fix,
// returns 1 if a > b, 0 if a <= b
bit isGreater(a, b):
    return
        (isSameSigns(a, b) & isNegative(b - a)) |
        (isDiffSigns(a, b) & isNegative(b))
Note that isGreater works correctly for inputs 5, 0 and 0, -5 also.
It's trickier to implement isPositive(x) properly as 0 will also be considered positive. So instead of using isPositive(a - b) above, isNegative(b - a) is used as isNegative(x) works correctly for 0.
Now max can be implemented as,
// BUG: returns 0 when a == b instead of a (or b)
// returns a if a > b, b if b > a
int max_BUG(a, b):
    return
        isGreater(a, b) * a +  // returns 0 when a = b
        isGreater(b, a) * b
To fix that, helper isZero(x) is used,
// returns 1 if x is 0, else 0
bit isZero(x):
    // x | -x will have msb 1 for a non-zero integer
    // and 0 for 0
    return toggle(msb(x | -x))
So with the fix when a = b,
// returns 1 if a == b else 0
bit isEqual(a, b):
    return isZero(a - b) // or isZero(a ^ b)

int max(a, b):
    return
        isGreater(a, b) * a +  // a > b, so a
        isGreater(b, a) * b +  // b > a, so b
        isEqual(a, b) * a      // a = b, so a (or b)
That said, if isPositive(0) returns 1 then max(5, 5) will return 10 instead of 5. So a correct isPositive(x) implementation will be,
// returns 1 if x > 0, 0 if x <= 0
bit isPositive(x):
    return isNotZero(x) & toggle(isNegative(x))

// returns 1 if x != 0, else 0
bit isNotZero(x):
    return msb(x | -x)
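For concreteness, the pseudocode might be translated to Java along these lines (a sketch; the helper names mirror the pseudocode, the class name is my own):

```java
public class BitMax {
    static int msb(int x)        { return x >>> 31; }        // sign bit of x
    static int isNegative(int x) { return msb(x); }
    static int toggle(int x)     { return (~x) & 1; }        // flips the lsb
    static int isDiffSigns(int a, int b) { return msb(a ^ b); }
    static int isSameSigns(int a, int b) { return toggle(isDiffSigns(a, b)); }
    static int isZero(int x)     { return toggle(msb(x | -x)); }
    static int isEqual(int a, int b) { return isZero(a - b); }

    // 1 if a > b, else 0; the sign split avoids the overflow in b - a
    static int isGreater(int a, int b) {
        return (isSameSigns(a, b) & isNegative(b - a))
             | (isDiffSigns(a, b) & isNegative(b));
    }

    static int max(int a, int b) {
        return isGreater(a, b) * a + isGreater(b, a) * b + isEqual(a, b) * a;
    }

    public static void main(String[] args) {
        System.out.println(max(5, 5));   // 5
        System.out.println(max(-3, 7));  // 7
        System.out.println(max(0, -5));  // 0
    }
}
```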
Using binary two’s complement notation
int findMax(int x, int y)
{
    int z = x - y;
    int i = (z >> 31) & 0x1;
    int max = x - i * z;
    return max;
}
Reference: Here
a ^ b = c // XOR the inputs
// If a equals b, c is zero. Else c is some other value
c[0] | c[1] ... | c[n] = d // OR the bits
// If c equals zero, d equals zero. Else d equals 1
Note: a,b,c are n-bit integers and d is a bit
A solution in Java without using a comparison operator:
int a = 10;
int b = 12;
boolean[] bol = new boolean[] { true };
try {
    boolean c = bol[a ^ b];
    System.out.println("The two integers are equal!");
} catch (java.lang.ArrayIndexOutOfBoundsException e) {
    System.out.println("The two integers are not equal!");
    int z = a - b;
    int i = (z >> 31) & 0x1;
    System.out.println("The bigger integer is " + (a - i * z));
}
I am going to assume you need an integer (0 or 1), because you will need a comparison to get a boolean from an integer in Java.
Here is a simple solution that doesn't use comparison but uses subtraction, which can actually be done using bitwise operations (but is not recommended, because it takes a lot of cycles in software).
// For equality,
// 1. Perform r = a^b.
//    If they are equal you get all bits 0. Otherwise some bits are 1.
// 2. Cast it to a larger datatype to have an extra bit for sign.
//    You will need to clear the high bits because of signed casting.
//    You can split it into two parts if you can't cast.
// 3. Perform -r.
//    If all bits are 0, you will get 0.
//    If some bits are not 0, then you get a negative number.
// 4. Shift right to extract MSB.
//    This will give -1 (because of sign extension) for not equal and 0 for equal.
//    You can easily convert it to 0 and 1 by adding 1 (I didn't include it in below function).
int equality(int a, int b) {
    long r = ((long)(a ^ b)) & 0xffffffffL; // & (not ^) clears the sign-extended high bits
    return (int)((-r) >> 63);
}
// For greater_than,
// 1. Cast a and b to larger datatype to get more bits.
//    You can split it into two parts if you can't cast.
// 2. Perform b-a.
//    If a>b, then negative number (MSB is 1)
//    If a<=b, then positive number or zero (MSB is 0)
// 3. Shift right to extract MSB.
//    This will give -1 (because of sign extension) for greater than and 0 for not.
//    You can easily convert it to 0 and 1 by negating it (I didn't include it in below function).
int greater_than(int a, int b) {
    long r = (long)b - (long)a;
    return (int)(r >> 63);
}
Less than is similar to greater but you swap a and b.
Trivia: such comparison functions are actually used in security (cryptography), because branching on a CPU comparison is not constant-time, i.e. not secure against timing attacks.
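For reference, a self-contained runnable version of this idea (in the equality step, the high bits left by the signed cast are cleared with an & mask; class and method names are my own):

```java
public class WideCompare {
    static int equality(int a, int b) {
        long r = ((long) (a ^ b)) & 0xffffffffL; // clear sign-extended high bits
        return (int) ((-r) >> 63);               // 0 if equal, -1 if not
    }

    static int greaterThan(int a, int b) {
        long r = (long) b - (long) a;            // negative exactly when a > b
        return (int) (r >> 63);                  // -1 if a > b, 0 otherwise
    }

    public static void main(String[] args) {
        System.out.println(equality(3, 3));                     // 0
        System.out.println(equality(3, 4));                     // -1
        System.out.println(greaterThan(Integer.MAX_VALUE, -1)); // -1
        System.out.println(greaterThan(-1, Integer.MAX_VALUE)); // 0
    }
}
```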

Java binary method for GCD infinite loop

I'm using the binary method to calculate the GCD of two fractions. The method works perfectly fine, except when I subtract certain numbers from each other.
I'm assuming it's because, for instance, when I subtract 2/15 from 1/6, the GCD has a repeating number or something like that, though I could be wrong.
// The following lines calculate the GCD using the binary method
if (holderNum == 0)
{
    gcd = holderDem;
}
else if (holderDem == 0)
{
    gcd = holderNum;
}
else if (holderNum == holderDem)
{
    gcd = holderNum;
}

// Make "a" and "b" odd, keeping track of common power of 2.
final int aTwos = Integer.numberOfTrailingZeros(holderNum);
holderNum >>= aTwos;
final int bTwos = Integer.numberOfTrailingZeros(holderDem);
holderDem >>= bTwos;
final int shift = Math.min(aTwos, bTwos);

// "a" and "b" are positive.
// If a > b then "gcd(a, b)" is equal to "gcd(a - b, b)".
// If a < b then "gcd(a, b)" is equal to "gcd(b - a, a)".
// Hence, in the successive iterations:
//  "a" becomes the absolute difference of the current values,
//  "b" becomes the minimum of the current values.
if (holderNum != gcd)
{
    while (holderNum != holderDem)
    {
        // debugging
        String debugv3 = "Beginning GCD binary method";
        System.out.println(debugv3);
        // debugging

        final int delta = holderNum - holderDem;
        holderNum = Math.min(holderNum, holderDem);
        holderDem = Math.abs(delta);

        // Remove any power of 2 in "a" ("b" is guaranteed to be odd).
        holderNum >>= Integer.numberOfTrailingZeros(holderNum);
        gcd = holderDem;
    }
}

// Recover the common power of 2.
gcd <<= shift;
That is the code that I'm using to complete this operation, the debugging message prints out forever.
Is there a way to cheat out of this when it gets stuck, or maybe set up an exception?
The problem is with negative values: when one of them is negative, holderNum will always take on the negative value (being the min); holderDem will become positive, so delta, being a negative minus a positive, becomes an even smaller negative. Then holderDem = abs(delta) is a larger positive and keeps increasing. You should take the absolute value of both of them before entering the loop.
E.g.:
holderNum = -1 and holderDem = 6
Iteration 1:
delta = holderNum - holderDem = -1 - 6 = -7
holderNum = Math.min(holderNum, holderDem) = Math.min(-1, 6) = -1
holderDem = Math.abs(delta) = Math.abs(-7) = 7
Iteration 2:
delta = holderNum - holderDem = -1 - 7 = -8
holderNum = Math.min(holderNum, holderDem) = Math.min(-1, 7) = -1
holderDem = Math.abs(delta) = Math.abs(-8) = 8
etc., etc., etc.
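Applying the suggested fix (take absolute values before the loop), a self-contained sketch of the binary GCD might look like this (names simplified; the Integer.MIN_VALUE edge case, where Math.abs overflows, is ignored here):

```java
public class BinaryGcd {
    static int gcd(int a, int b) {
        // The fix: work on absolute values from the start.
        a = Math.abs(a);
        b = Math.abs(b);
        if (a == 0) return b;
        if (b == 0) return a;

        // Make both odd, keeping track of the common power of 2.
        int aTwos = Integer.numberOfTrailingZeros(a);
        a >>= aTwos;
        int bTwos = Integer.numberOfTrailingZeros(b);
        b >>= bTwos;
        int shift = Math.min(aTwos, bTwos);

        // Replace (a, b) by (|a - b|, min(a, b)) until they meet.
        while (a != b) {
            int delta = a - b;
            b = Math.min(a, b);
            a = Math.abs(delta);
            a >>= Integer.numberOfTrailingZeros(a);
        }
        return a << shift;
    }

    public static void main(String[] args) {
        System.out.println(gcd(12, 18)); // 6
        System.out.println(gcd(-4, 6));  // 2
        System.out.println(gcd(0, 7));   // 7
    }
}
```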

Generate a random binary number with a variable proportion of '1' bits

I need a function to generate random integers. (assume Java long type for now, but this will be extended to BigInteger or BitSet later.)
The tricky part is there is a parameter P that specifies the (independent) probability of any bit in the result being 1.
If P = 0.5 then we can just use the standard random number generator. Some other values of P are also easy to implement. Here's an incomplete example:
Random random = new Random();
// ...
long nextLong(float p) {
    if (p == 0.0f) return 0L;
    else if (p == 1.0f) return -1L;
    else if (p == 0.5f) return random.nextLong();
    else if (p == 0.25f) return nextLong(0.5f) & nextLong(0.5f);
    else if (p == 0.75f) return nextLong(0.5f) | nextLong(0.5f);
    else if (p == 0.375f) return nextLong(0.5f) & nextLong(0.75f); // etc
    else {
        // What goes here??
        String message = String.format("P=%f not implemented yet!", p);
        throw new IllegalArgumentException(message);
    }
}
Is there a way to generalise this for any value of P between 0.0 and 1.0?
First, a little ugly math that you're already using in your code.
Define x and y as bits, with probabilities of being 1 of X = p(x=1) and Y = p(y=1) respectively.
Then we have that
p( x & y = 1) = X Y
p( x | y = 1) = 1 - (1-X) (1-Y)
p( x ^ y = 1) = X (1 - Y) + Y (1 - X)
Now if we let Y = 1/2 we get
P( x & y ) = X/2
P( x | y ) = (X+1)/2
Now set the RHS to the probability we want and we have two cases that we can solve for X
X = 2 p // if we use &
X = 2 p - 1 // if we use |
Next we assume we can use this again to obtain X in terms of another variable Z...
And then we keep iterating until we've done "enough".
That's a bit unclear, but consider p = 0.375:
0.375 * 2 = 0.75 < 1.0 so our first operation is &
0.75 * 2 = 1.5 > 1.0 so our second operation is |
0.5 is something we know so we stop.
Thus we can get a variable with p=0.375 by X1 & (X2 | X3 )
The problem is that for most probabilities this will not terminate, e.g.
0.333 *2 = 0.666 < 1.0 so our first operation is &
0.666 *2 = 1.333 > 1.0 so our second operation is |
0.333 *2 = 0.666 < 1.0 so our third operation is &
etc...
so p=0.333 can be generated by
X1 & ( X2 | (X3 & (X4 | ( ... ) ) ) )
Now I suspect that taking enough terms in the series will give you enough accuracy, and this can be written as a recursive function. However, there might be a better way than that too... I think the order of the operations is related to the binary representation of p; I'm just not sure exactly how, and I don't have time to think about it more deeply.
Anyway, here's some untested C++ code that does this. You should be able to Java-ify it easily.
uint bitsWithProbability( float p )
{
    return bitsWithProbabilityHelper( p, 0.001, 0, 10 );
}

uint bitsWithProbabilityHelper( float p, float tol, int cur_depth, int max_depth )
{
    uint X = randbits();
    if( cur_depth >= max_depth ) return X;
    if( p < 0.5 - tol )
    {
        return X & bitsWithProbabilityHelper( 2*p, tol, cur_depth+1, max_depth );
    }
    if( p > 0.5 + tol )
    {
        return X | bitsWithProbabilityHelper( 2*p - 1, tol, cur_depth+1, max_depth );
    }
    return X;
}
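A Java port of that recursive sketch might look like this (randbits() becomes Random.nextLong(); class and method names are my own):

```java
import java.util.Random;

public class ProbBits {
    static final Random RND = new Random();

    // Each recursion level halves (via &) or "un-halves" (via |) the target
    // probability, combining fresh random words until p is close to 0.5.
    static long bitsWithProbability(double p) {
        return helper(p, 0.001, 0, 10);
    }

    static long helper(double p, double tol, int depth, int maxDepth) {
        long x = RND.nextLong();
        if (depth >= maxDepth) return x;
        if (p < 0.5 - tol) return x & helper(2 * p, tol, depth + 1, maxDepth);
        if (p > 0.5 + tol) return x | helper(2 * p - 1, tol, depth + 1, maxDepth);
        return x;
    }

    public static void main(String[] args) {
        // Each bit of the result is 1 with probability ~0.375
        System.out.println(Long.toBinaryString(bitsWithProbability(0.375)));
    }
}
```

For a dyadic rational like 0.25 or 0.375 the recursion bottoms out exactly; other values are approximated to within the tolerance/depth limits.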
Distribute a proportional number of bits throughout the number.
Pseudocode:
long generateNumber( double probability ){
    int bitCount = 64 * probability;
    byte[] data = new byte[64]; // 0-filled
    long indexes = getRandomLong();
    for 0 to bitCount-1 {
        int index;
        do {
            // find a position that is still 0 for this bit
            index = indexes & 63;
            indexes >>= 6;
            if( indexes == 0 ) indexes = getRandomLong();
        } while ( data[index] == 1 );
        data[index] = 1;
    }
    return bytesToLong( data );
}
I hope you get what I mean. Perhaps the byte[] could be replaced with a long and bit operations to make it faster.
Here's how I solved it in the end.
Generate an integer N between 0..16, following the binomial distribution. This gives the number of '1' bits in the 16-bit partial result.
Randomly generate an index into a lookup table that contains 16-bit integers containing the desired number of '1' bits.
Repeat 4 times to get four 16-bit integers.
Splice these four 16-bit integers together to get a 64-bit integer.
This was partly inspired by Ondra Žižka's answer.
The benefit is that it reduces the number of calls to Random.nextLong() to 8 calls per 64 bits of output.
For comparison, rolling for each individual bit would require 64 calls; bitwise AND/OR uses between 2 and 32 calls, depending on the value of P.
Of course calculating binomial probabilities is just as expensive, so those go in another lookup table.
It's a lot of code, but it's paying off in terms of performance.
Update - merged this with the bitwise AND/OR solution. It now uses that method if it guesses it will be more efficient (in terms of calls to Random.next().)
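The steps above could be sketched in Java as follows. This is my own hypothetical reconstruction, not the answer's actual code: it groups all 16-bit values by popcount, samples the bit count N from the binomial distribution by inverting the CDF (computing the probabilities directly rather than from a lookup table), and splices four 16-bit pieces into a long:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TableBits {
    // Lookup table: all 16-bit values, grouped by number of '1' bits.
    static final List<List<Integer>> BY_POPCOUNT = new ArrayList<>();
    static {
        for (int i = 0; i <= 16; i++) BY_POPCOUNT.add(new ArrayList<>());
        for (int v = 0; v < 1 << 16; v++) BY_POPCOUNT.get(Integer.bitCount(v)).add(v);
    }
    static final Random RND = new Random();

    static int next16(double p) {
        // Sample N ~ Binomial(16, p) by inverting the CDF.
        double u = RND.nextDouble(), cdf = 0;
        int n = 0;
        for (; n <= 16; n++) {
            cdf += binomialPmf(16, n, p);
            if (u < cdf) break;
        }
        if (n > 16) n = 16; // guard against floating-point round-off
        List<Integer> bucket = BY_POPCOUNT.get(n);
        return bucket.get(RND.nextInt(bucket.size()));
    }

    static long nextLong(double p) {
        long r = 0;
        for (int i = 0; i < 4; i++) r = (r << 16) | next16(p);
        return r;
    }

    static double binomialPmf(int n, int k, double p) {
        double c = 1; // binomial coefficient C(n, k)
        for (int i = 0; i < k; i++) c = c * (n - i) / (i + 1);
        return c * Math.pow(p, k) * Math.pow(1 - p, n - k);
    }
}
```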
Use a random generator that generates a uniform float r between 0 and 1. If r > p, set the bit to 0; otherwise set it to 1.
If you're looking to apply some distribution where with probability P you get a 1 and with probability 1-P you get a 0 at any particular bit your best bet is simply to generate each bit independently with probability P of being a 1 (that sounds like a recursive definition, I know).
Here's a solution; I'll walk through it below:
public class MyRandomBitGenerator
{
    Random pgen = new Random();

    // assumed p is well conditioned (0 < p < 1)
    public boolean nextBitIsOne(double p){
        return pgen.nextDouble() < p;
    }

    // assumed p is well conditioned (0 < p < 1)
    public long nextLong(double p){
        long nxt = 0;
        for(int i = 0; i < 64; i++){
            if(nextBitIsOne(p)){
                nxt += 1L << i; // 1L, not 1: an int shift would overflow past bit 31
            }
        }
        return nxt;
    }
}
Basically, we first determine how to generate a value of 1 with probability P: pgen.nextDouble() generates a number between 0 and 1 with uniform probability; by asking if it's less than p, we're sampling this distribution such that we expect a proportion p of 1s as we call this function repeatedly.
Here's another variant of Michael Anderson's answer
To avoid recursion, we process the bits of P iteratively from right-to-left instead of recursively from left-to-right. This would be tricky in floating-point representation so we extract the exponent/mantissa fields from the binary representation instead.
class BitsWithProbabilityHelper {
    public BitsWithProbabilityHelper(float prob, Random rnd) {
        if (Float.isNaN(prob)) throw new IllegalArgumentException();
        this.rnd = rnd;
        if (prob <= 0f) {
            zero = true;
            return;
        }

        // Decode IEEE float
        int probBits = Float.floatToIntBits(prob);
        mantissa = probBits & 0x7FFFFF;
        exponent = probBits >>> 23;

        // Restore the implicit leading 1 (except for denormals)
        if (exponent > 0) mantissa |= 0x800000;
        exponent -= 150;

        // Force mantissa to be odd
        int ntz = Integer.numberOfTrailingZeros(mantissa);
        mantissa >>= ntz;
        exponent += ntz;
    }

    /** Determine how many random words we need from the system RNG to
     *  generate one output word with probability P.
     **/
    public int iterationCount() {
        return -exponent;
    }

    /** Generate a random number with the desired probability */
    public long nextLong() {
        if (zero) return 0L;
        long acc = -1L;
        int shiftReg = mantissa - 1;
        for (int bit = exponent; bit < 0; ++bit) {
            if ((shiftReg & 1) == 0) {
                acc &= rnd.nextLong();
            } else {
                acc |= rnd.nextLong();
            }
            shiftReg >>= 1;
        }
        return acc;
    }

    /** Value of <code>prob</code>, represented as m * 2**e where m is always odd. */
    private int exponent;
    private int mantissa;

    /** Random data source */
    private final Random rnd;

    /** Zero flag (special case) */
    private boolean zero;
}
Suppose the size of the bit array is L. If L=1, the chance that the 1st bit is 1 will be P, and that it is 0 will be 1-P. For L=2, the probability of getting 00 is (1-P)^2, 01 or 10 is P(1-P) each, and 11 is P^2. Extending this logic, we can first determine the first bit by comparing a random number with P, then scale the random number such that we again get anything between 0 and 1. A sample JavaScript code:
function getRandomBitArray(maxBits, probabilityOf1) {
    var randomSeed = Math.random();
    var bitArray = new Array();
    for (var currentBit = 0; currentBit < maxBits; currentBit++) {
        if (randomSeed < probabilityOf1) {
            // fill 1 at the current bit
            bitArray.push(1);
            // rescale the sample space from [0, probabilityOf1) back to [0, 1)
            randomSeed = randomSeed / probabilityOf1;
        } else {
            // fill 0 at the current bit
            bitArray.push(0);
            // rescale the sample space from [probabilityOf1, 1) back to [0, 1)
            randomSeed = (randomSeed - probabilityOf1) / (1 - probabilityOf1);
        }
    }
    return bitArray;
}
EDIT:
This code does generate completely random bits. I will try to explain the algorithm better.
Each bit string has a certain probability of occurring. Suppose a string has a probability of occurrence p; we want to choose that string if our random number falls in some interval of length p. The starting point of the interval must be fixed, but its value will not make much difference. Suppose we have chosen up to k bits correctly. Then, for the next bit, we divide the interval corresponding to this k-length bit string into two parts whose sizes are in the ratio P : 1-P (here P is the probability of getting a 1). We say that the next bit will be 1 if the random number is in the first part, and 0 if it is in the second part. This ensures that the probabilities of strings of length k+1 also remain correct.
Java code:
public ArrayList<Boolean> getRandomBitArray(int maxBits, double probabilityOf1) {
    double randomSeed = Math.random();
    ArrayList<Boolean> bitArray = new ArrayList<Boolean>();
    for (int currentBit = 0; currentBit < maxBits; currentBit++) {
        if (randomSeed < probabilityOf1) {
            // fill 1 at the current bit
            bitArray.add(true);
            // rescale the sample space from [0, probabilityOf1) back to [0, 1)
            randomSeed = randomSeed / probabilityOf1;
        } else {
            // fill 0 at the current bit
            bitArray.add(false);
            // rescale the sample space from [probabilityOf1, 1) back to [0, 1)
            randomSeed = (randomSeed - probabilityOf1) / (1 - probabilityOf1);
        }
    }
    return bitArray;
}
