math question about random (x) and random() % x - Java [duplicate] - java

This question already has answers here:
Why do people say there is modulo bias when using a random number generator?
(10 answers)
Closed 2 years ago.
So my question is about Java, but it could apply to any programming language.
there is this declaration :
Random rnd = new Random();
We want to get a random number in the range 0 to x.
I want to know if there is any mathematical difference between the following:
rnd.nextInt() % x;
and
rnd.nextInt(x)
The main question is: is one of these solutions more random than the other? Is one more appropriate or "correct" than the other? If they are equivalent, I would be happy to see a mathematical proof of it.

Welcome to "mathematical insight" with "MS Paint".
So, from a statistical standpoint, it would depend on the distribution of the numbers being generated. First of all, we'll treat the probability of any one number coming up as an independent event (i.e., discarding the seed, which RNG is used, etc.). Following that, a modulus simply takes a range of numbers (e.g. a from N, where 0<=a<N) and subdivides them based on the divisor (the x in a % x). While the numbers are technically from a discrete population (integers), the range of integers for a probability mass function would be so large that it'd end up looking like a continuous graph anyhow. So let's consider a graph of the probability distribution function for a range of numbers:
If your random number generator doesn't generate with a uniform distribution across the range of numbers (i.e., any number is as likely to come up as any other), then modulo would (potentially) be breaking up the results of a non-uniform distribution. When you consider the individual integers in those ranges as discrete (and individual) outcomes, the probability of any number i (0 <= i < x) being the result is the sum of the probabilities of the individual numbers (i_1 + i_2 + ... + i_(N/x)) that map to it. To think of it another way, if we overlaid the subdivisions of the ranges, it's plain to see that for non-symmetric distributions, it's much more likely that the modulo would not result in equally likely outcomes:
Remember, the likelihood of an outcome i in the graph above is obtained by summing the likelihoods of the individual numbers (i_1, ..., i_(N/x)) in the range N that could result in i. For further clarity, if your range N doesn't divide evenly by the modular divisor x, there will always be some amount of numbers, N % x of them, that have 1 additional integer that could produce their result. This means that most modulus divisors that aren't a power of 2 (and similarly, ranges that are not a multiple of their divisor) will be skewed towards their lower results, even when the underlying distribution is uniform:
So to summarize the point, Random#nextInt(int bound) takes all of these things (and more!) into consideration, and will consistently produce an outcome with uniform probability across the range of bound. Random#nextInt() % bound is only a halfway step that works in some specific scenarios. To your teacher's point, I would argue it's more likely you'll see some specific subset of numbers when using the modulus approach, not less.
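To make that skew concrete, here is a tiny counting sketch of my own (not from the answer above); the values N = 16 and x = 10 are made up purely for illustration:
public class ModuloCountDemo {
    public static void main(String[] args) {
        int N = 16;  // pretend the generator has only 16 equally likely outputs: 0..15
        int x = 10;  // the modulo divisor
        int[] ways = new int[x];
        for (int a = 0; a < N; a++) {
            ways[a % x]++;  // count how many generator outputs map to each result
        }
        for (int i = 0; i < x; i++) {
            System.out.println(i + " can be produced in " + ways[i] + " way(s)");
        }
        // Results 0..5 can each be produced in 2 ways, 6..9 in only 1 way,
        // so the lower results are twice as likely under this toy model.
    }
}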

new Random(x) just creates the Random object with the given seed; it does not itself yield a random value.
I presume you are asking what the difference is between nextInt() % x and nextInt(x).
The difference is as follows.
nextInt(x)
nextInt(x) yields a random number n where 0 ≤ n < x, evenly distributed.
nextInt() % x
nextInt() % x yields a random number in the full integer range¹, and then applies modulo x. The full integer range includes negative numbers, so the result could also be a negative number. In other words, the range is −x < n < x.
Furthermore, in most cases the distribution is not even. nextInt() has 2³² possibilities but, for simplicity's sake, let's assume it has 2⁴ = 16 possibilities, and that we choose x to be less than 16. Let's assume that x is 10.
All possibilities are 0, 1, 2, …, 14, 15. After applying the modulo 10, the results are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5. That means that some numbers have a greater likelihood of occurring than others. It also means that the chance of some numbers occurring twice has increased.
As we see, nextInt() % x has two problems:
Range is not as required.
Uneven distribution.
So you should definitely use nextInt(int bound) here. If the requirement is to get only unique numbers, you must exclude the numbers already drawn from the number generator. See also Generating Unique Random Numbers in Java.
¹ According to the Javadoc.
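If you want to see the range problem empirically, here is a quick sketch (my own, not part of the answer); it only demonstrates the sign issue, not the bias:
import java.util.Random;

public class RangeCheck {
    public static void main(String[] args) {
        Random rnd = new Random();
        int x = 10;
        int minMod = Integer.MAX_VALUE, maxMod = Integer.MIN_VALUE;
        for (int i = 0; i < 1_000_000; i++) {
            int m = rnd.nextInt() % x;  // can be negative, since nextInt() covers the full int range
            minMod = Math.min(minMod, m);
            maxMod = Math.max(maxMod, m);
        }
        System.out.println("nextInt() % x  observed range: [" + minMod + ", " + maxMod + "]");
        System.out.println("nextInt(x)     guaranteed range: [0, " + (x - 1) + "]");
    }
}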

Related

Smart algorithm to randomize a Double in range but with odds

I use the following function to generate a random double in a specific range:
nextDouble(1.50, 7.00)
However, I've been trying to come up with an algorithm to make the randomization have a higher probability of generating a double that is close to 1.50 rather than to 7.00, yet I don't even know where to start. Does anything come to mind?
Java is also welcome.
You should start by discovering what probability distribution you need. Based on your requirements, and assuming that random number generations are independent, perhaps the Poisson distribution is what you are looking for:
a call center receives an average of 180 calls per hour, 24 hours a day. The calls are independent; receiving one does not change the probability of when the next one will arrive. The number of calls received during any minute has a Poisson probability distribution with mean 3: the most likely numbers are 2 and 3 but 1 and 4 are also likely and there is a small probability of it being as low as zero and a very small probability it could be 10.
The usual probability distributions are already implemented in libraries e.g. org.apache.commons.math3.distribution.PoissonDistribution in Apache Commons Math3.
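If you want to experiment with that suggestion, a minimal sketch using Commons Math3 could look like the following; the mean of 1.0 and the mapping into [1.5, 7] are arbitrary choices of mine, not something prescribed by the library or the answer:
import org.apache.commons.math3.distribution.PoissonDistribution;

public class PoissonSketch {
    public static void main(String[] args) {
        PoissonDistribution poisson = new PoissonDistribution(1.0);  // arbitrary mean; small values are most likely
        for (int i = 0; i < 5; i++) {
            int k = poisson.sample();                     // a non-negative integer drawn from the distribution
            double value = Math.min(1.5 + k * 0.5, 7.0);  // crude mapping into [1.5, 7], clamping the long tail
            System.out.println(k + " -> " + value);
        }
    }
}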
I suggest not thinking about this problem in terms of generating a random number with an irregular probability. Instead, think about generating a random number normally in some range, and then mapping that range onto another one in a non-linear way.
Let's split our algorithm into 3 steps:
Generate a random number in [0, 1) range linearly (so using a standard random generator).
Map it into another [0, 1) range in non-linear way.
Map the resulting [0, 1) into [1.5, 7) linearly.
Steps 1. and 3. are easy; the core of our algorithm is step 2. We need a way to map [0, 1) onto another [0, 1), but non-linearly, so e.g. 0.7 does not have to produce 0.7. Classic math helps here; we just need to look at the visual representations of algebraic functions.
In your case you expect that while the input number increases from 0 to 1, the result first grows very slowly (to stay near 1.5 for a longer time) and then speeds up. This is exactly how e.g. the function y = x² behaves. Your resulting code could be something like:
fun generateDouble(): Double {
    val step1 = Random.nextDouble()    // uniform in [0, 1)
    val step2 = step1.pow(2.0)         // non-linear map, still in [0, 1)
    val step3 = step2 * 5.5 + 1.5      // linear map into [1.5, 7)
    return step3
}
or just:
fun generateDouble() = Random.nextDouble().pow(2.0) * 5.5 + 1.5
By changing the exponent to bigger numbers, the curve becomes more aggressive, so it will favor 1.5 more. By making the exponent closer to 1 (e.g. 1.4), the result will be closer to linear, but it will still favor 1.5. Making the exponent smaller than 1 will start to favor 7.
You can also look at other algebraic functions with this shape, e.g. y = 2 ^ x - 1.
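Since the question says Java is also welcome, here is a rough Java equivalent of the idea above (a sketch of mine; ThreadLocalRandom and the exponent value are my own choices):
import java.util.concurrent.ThreadLocalRandom;

public class SkewedDouble {
    // Maps a uniform [0, 1) value through x^exponent, then scales it into [1.5, 7).
    // An exponent > 1 favors values near 1.5; an exponent < 1 would favor values near 7.
    static double generateDouble(double exponent) {
        double u = ThreadLocalRandom.current().nextDouble();  // uniform in [0, 1)
        double skewed = Math.pow(u, exponent);                // still in [0, 1), but no longer uniform
        return 1.5 + skewed * 5.5;                            // linear map into [1.5, 7)
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(generateDouble(2.0));
        }
    }
}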
What you could do is 'correct' the random value with a factor in the direction of 1.5. You would create some sort of bias factor. Like this:
@Test
void DoubleTest() {
    double origin = 1.50;
    final double fairRandom = new Random().nextDouble(origin, 7);  // uniform in [1.50, 7)
    System.out.println(fairRandom);
    double biasFactor = 0.9;
    final double biasedDiff = (fairRandom - origin) * biasFactor;  // shrink the distance from 1.50
    double biasedRandom = origin + biasedDiff;
    System.out.println(biasedRandom);
}
The lower you set the bias factor (it must be > 0 and <= 1), the stronger your bias towards 1.50.
You can take a straightforward approach. Since you said you want a higher probability of getting a value closer to 1.5 than to 7.00, you can even set that probability yourself. The midpoint of the range is (1.5+7)/2 = 4.25.
So let's say I want a 70% probability that the random value will be closer to 1.5 and a 30% probability closer to 7.
double finalResult;
double mid = (1.5 + 7) / 2;
double p = nextDouble(0, 100);
if (p <= 70) finalResult = nextDouble(1.5, mid);
else finalResult = nextDouble(mid, 7);
Here, the final result has a 70% chance of being closer to 1.5 than to 7.
As you did not specify the 70% probability, you can even make it random: just generate nextDouble(50, 100), which will give you a value greater than or equal to 50 and less than 100, and use that as the probability in the calculation above. Thanks.
I missed that I am using the same solution strategy as in the reply by Nafiul Alam Fuji. But since I have already formulated my answer, I post it anyway.
One way is to split the range into two subranges, say nextDouble(1.50, 4.25) and nextDouble(4.25, 7.0). You select one of the subranges by generating a random number between 0.0 and 1.0 using nextDouble() and comparing it to a threshold K. If the random number is less than K, you do nextDouble(1.50, 4.25). Otherwise nextDouble(4.25, 7.0).
Now if K=0.5, it is like doing nextDouble(1.50, 7). But by increasing K, you will do nextDouble(1.50, 4.25) more often and favor it over nextDouble(4.25, 7.0). It is like flipping an unfair coin where K determines the extent of the cheating.
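A sketch of that idea in Java, assuming ThreadLocalRandom and K = 0.7 (both my own choices, not part of the answer):
import java.util.concurrent.ThreadLocalRandom;

public class SubrangeSplit {
    // k in (0, 1] is the probability of drawing from the lower subrange [1.5, 4.25).
    // k = 0.5 behaves like a plain nextDouble(1.5, 7); larger k favors the lower half.
    static double biasedTowardsLow(double k) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        double mid = (1.5 + 7.0) / 2;         // 4.25
        return rnd.nextDouble() < k
                ? rnd.nextDouble(1.5, mid)    // lower subrange, chosen with probability k
                : rnd.nextDouble(mid, 7.0);   // upper subrange, chosen with probability 1 - k
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(biasedTowardsLow(0.7));  // ~70% of the values land below 4.25
        }
    }
}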

Random Shuffling in Java (or any language) Probabilities [duplicate]

This question already has answers here:
What distribution do you get from this broken random shuffle?
(10 answers)
Closed 7 years ago.
So, I'm watching Robert Sedgewick's videos on Coursera and I am currently at the Shuffling one. He's showing a "poorly written" shuffling code for online poker (it has a few other bugs, which I've removed because they are not related to my question). This is how the algorithm works:
for (int i = 0; i < N; i++) {
    int r = new Random().nextInt(53);
    swap(cardArray, i, r);
}
It iterates all the cards once. At each iteration a random number is generated and the i-th card is swapped with the r-th card. Simple, right?
While I understand the algorithm, I didn't understand his probability calculation. He said that because Random uses a 32-bit seed (or 64, it doesn't seem to matter), this is constrained to only 2^32 different permutations.
He also said that Knuth's algorithm is better (same for loop, but choose a number between 1 and i) because it gives you N! permutations.
I can agree with Knuth's algorithm calculations. But I think that on the first one (which is supposed to be the faulty one) there should be N^N different permutations.
Is Sedgewick wrong or am I missing a fact?
Sedgewick's way of explaining it seems very strange and obtuse to me.
Imagine you had a deck of only 3 cards and applied the algorithm shown.
After the first card was swapped there would be 3 possible outcomes. After the second, 9. And after the 3rd swap, 27. Thus, we know that using the swapping algorithm we will have 27 different possible outcomes, some of which will be duplicate outcomes to the others.
Now, we know for a fact that there are 3 * 2 * 1 = 6 possible arrangements of a 3-card deck. However, 27 is NOT divisible by 6. Therefore, we know for a fact that some arrangements will be more common than others, even without computing what they are. Therefore, the swapping algorithm will not result in an equal probability among the 6 possibilities, i.e., it will be biased towards certain arrangements.
The same exact logic extends to the case of 52 cards.
We can investigate which arrangements are preferred by looking at the distribution of outcomes in the three-card case, which are:
1 2 3 5 occurrences
1 3 2 5 occurrences
2 1 3 4 occurrences
2 3 1 4 occurrences
3 1 2 4 occurrences
3 2 1 5 occurrences
Total 27
Examining these, we notice that the arrangements 1 3 2, 2 1 3 and 2 3 1 each occur 5 times, while 1 2 3, 3 1 2 and 3 2 1 each occur only 4 times. So even in this tiny example, some arrangements are noticeably more likely than others.
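If you want to verify those counts yourself, here is a small brute-force enumeration of my own (not from the answer) covering all 27 possible index sequences for the 3-card case:
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class NaiveShuffleCount {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        // Enumerate every sequence of three random indices the naive shuffle could draw.
        for (int r0 = 0; r0 < 3; r0++)
            for (int r1 = 0; r1 < 3; r1++)
                for (int r2 = 0; r2 < 3; r2++) {
                    int[] deck = {1, 2, 3};
                    int[] rs = {r0, r1, r2};
                    for (int i = 0; i < 3; i++) {  // the "swap i with a random card" loop
                        int tmp = deck[i];
                        deck[i] = deck[rs[i]];
                        deck[rs[i]] = tmp;
                    }
                    counts.merge(Arrays.toString(deck), 1, Integer::sum);
                }
        counts.forEach((perm, n) -> System.out.println(perm + " occurs " + n + " times"));
    }
}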
Since the sequence of numbers generated by a random number generator is uniquely determined by the seed, the argument is right - but it applies to Knuth's algorithm as well, and to any other shuffling algorithm: If N! > 2^M (where N is the number of cards and M is the number of bits in the seed), some permutations will never be generated. But even if the seed is big enough, the actual difference between the algorithms lies in the probability distribution: the first algorithm does not produce a uniform probability for the different permutations, while Knuth's does (assuming that the random generator is "random" enough). Note that Knuth's algorithm is also called the Fisher-Yates shuffle.
Sedgewick is right, of course. To get a truly random order of cards, you must first use an algorithm that selects equally among the N! possible permutations, which means one that selects one of N, one of N-1, one of N-2, etc., and produces a different result for each combination, such as the Fisher-Yates algorithm.
Secondly, it is necessary to have a PRNG with an internal state of greater than log2(N!) bits, or else it will repeat itself before reaching all combinations. For 52 cards, that's 226 bits. 32 isn't even close.
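A quick way to check the 226-bit figure (a throwaway sketch, not part of the answer):
public class SeedBits {
    public static void main(String[] args) {
        double bits = 0;
        for (int k = 2; k <= 52; k++) {
            bits += Math.log(k) / Math.log(2);  // log2(52!) is the sum of log2(k) for k = 2..52
        }
        System.out.println("log2(52!) = " + bits);  // about 225.58, so 226 bits are needed
    }
}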
I'm sorry, but I have to disagree with the answers of Aasmund and Lee Daniel. Every permutation of N elements can be expressed as 3(N-1) transpositions between 1 and some index i between 1 and N (which is easy to prove by induction on N; see below). Therefore, in order to generate a random permutation it is enough to generate 3(N-1) random integers between 1 and N. In other words, your random generator only needs to be able to generate 3(N-1) different integers.
Theorem
Every permutation of {1, ..., N} can be expressed as the composition of N-1 transpositions
Proof (by induction on N)
CASE N = 1.
The only permutation of {1} is (1) which can be written as the composition of 0 transpositions (the composition of 0 elements is the identity)
CASE N = 2. (Only for those who weren't convinced by the case N = 1 above.)
There are two permutations of 2 elements (1,2) and (2,1). Permutation (1,2) is the transposition of 1 with 1. Permutation (2,1) is the transposition of 1 and 2.
INDUCTIVE STEP: N -> N + 1.
Take any permutation s of {1, ..., N, N+1}. If s doesn't move N+1, then s is actually a permutation of {1, ..., N} and can be written as the composition of N-1 transpositions between indexes i,j with 1<=i,j<=N.
So let's assume that s moves N+1 to K. Let t be the transposition of N+1 and K. Then ts doesn't move N+1 (N+1 -> K -> N+1), so ts is effectively a permutation of {1, ..., N} and, by the induction hypothesis, can be written as the composition of N-1 transpositions, i.e.,
ts = t1...t(N-1).
Hence, s = t1...t(N-1)t,
which consists of N transpositions, i.e., (N+1)-1, as required.
Corollary
Every permutation of {1, ..., N} can be written as the composition of (at most) 3(N-1) transpositions between 1 and i, where 1<=i<=N.
Proof
In view of the Theorem it is enough to show that any transposition between two indexes i and j can be written as the composition of 3 transpositions between 1 and some index. But
swap(i,j) = swap(1,i)swap(1,j)swap(1,i)
where the concatenation of swaps is the composition of these transpositions.
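As a quick sanity check of that identity, here is a sketch of mine using 0-based array indices in place of index 1:
import java.util.Arrays;

public class ThreeSwapCheck {
    static void swap(int[] a, int i, int j) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }

    public static void main(String[] args) {
        int[] direct = {10, 20, 30, 40, 50};
        int[] viaFirst = direct.clone();
        int i = 1, j = 3;
        swap(direct, i, j);    // the transposition we want
        swap(viaFirst, 0, i);  // swap(1, i)
        swap(viaFirst, 0, j);  // swap(1, j)
        swap(viaFirst, 0, i);  // swap(1, i) again
        System.out.println(Arrays.equals(direct, viaFirst));  // prints true
    }
}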

What is the randomness of Java.nextFloat()

Specifically, if used in the form of:
Random.nextFloat() * N;
can I expect a highly randomized distribution of values from 0 to N?
Would it be better to do something like this?
Random.nextInt(N) * Random.nextFloat();
A single random number from a good generator--and java.util.Random is a good one--will be evenly distributed across the range... it will have a mean and median value of 0.5*N. 1/4 of the numbers will be less than 0.25*N and 1/4 of the numbers will be larger than 0.75*N, etc.
If you then multiply this by another random number generator (whose mean value is 0.5), you will end up with a random number with a mean value of 0.25*N and a median value of 0.187*N... So half your numbers are less than 0.187*N! 1/4 of the numbers will be under .0677*N! And only 1/4 of the numbers will be over 0.382*N. (Numbers obtained experimentally by looking at 1,000,000 random numbers generated as the product of two other random numbers, and analyzing them.)
This is probably not what you want.
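Here is a sketch (mine, not from the answer) that reproduces those figures:
import java.util.Arrays;
import java.util.Random;

public class ProductStats {
    public static void main(String[] args) {
        Random rnd = new Random();
        int n = 1_000_000;
        double[] products = new double[n];
        for (int i = 0; i < n; i++) {
            products[i] = rnd.nextFloat() * rnd.nextFloat();  // product of two uniform [0, 1) values
        }
        Arrays.sort(products);
        System.out.println("mean   ~ " + Arrays.stream(products).average().getAsDouble());  // about 0.25
        System.out.println("median ~ " + products[n / 2]);                                  // about 0.187
    }
}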
First of all, Random in Java doesn't have a rand() method. See the docs. I think you meant the Random.next() method.
As for your question, the documentation shows that nextFloat() is implemented like this:
public float nextFloat() {
    return next(24) / ((float)(1 << 24));
}
So you don't need to use anything else.
Random#nextFloat() will give you an evenly distributed number between 0 and 1.
If you take an even distribution and multiply it by N, you scale the distribution up evenly. So you get a random number between 0 and N evenly distributed.
If you multiply this instead by a random number between 0 and N, you'll get an uneven distribution. If multiplying by N gives you an even distribution between 0 and N, then multiplying by a number that is itself between 0 and N must give you a result that is less than or equal to what you would get by just multiplying by N. So your numbers are, on average, smaller.

Random.nextInt(int) is [slightly] biased

Namely, it will never generate more than 16 even numbers in a row with some specific upperBound parameters:
Random random = new Random();
int c = 0;
int max = 17;
int upperBound = 18;
while (c <= max) {
    int nextInt = random.nextInt(upperBound);
    boolean even = nextInt % 2 == 0;
    if (even) {
        c++;
    } else {
        c = 0;
    }
}
In this example the code will loop forever, while when upperBound is, for example, 16, it terminates quickly.
What can be the reason for this behavior? There are some notes in the method's Javadoc, but I failed to understand them.
UPD1: The code seems to terminate with odd upper bounds, but may get stuck with even ones.
UPD2:
I modified the code to capture the statistics of c as suggested in the comments:
Random random = new Random();
int c = 0;
long trials = 1 << 58;
int max = 20;
int[] stat = new int[max + 1];
while (trials > 0) {
    while (c <= max && trials > 0) {
        int nextInt = random.nextInt(18);
        boolean even = nextInt % 2 == 0;
        if (even) {
            c++;
        } else {
            stat[c] = stat[c] + 1;
            c = 0;
        }
        trials--;
    }
}
System.out.println(Arrays.toString(stat));
Now it tries to reach 20 evens in the row - to get better statistics, and the upperBound is still 18.
The results turned out to be more than surprising:
[16776448, 8386560, 4195328, 2104576, 1044736,
518144, 264704, 132096, 68864, 29952, 15104,
12032, 1792, 3072, 256, 512, 0, 256, 0, 0]
At first it decreases as expected by a factor of 2, but note the last line! Here it goes crazy and the captured statistics seem to be completely weird.
Here is a bar plot in log scale:
How c reached the value 17 exactly 256 times is yet another mystery.
http://docs.oracle.com/javase/6/docs/api/java/util/Random.html:
An instance of this class is used to generate a stream of
pseudorandom numbers. The class uses a 48-bit seed, which is modified
using a linear congruential formula. (See Donald Knuth, The Art of
Computer Programming, Volume 2, Section 3.2.1.)
If two instances of Random are created with the same seed, and the
same sequence of method calls is made for each, they will generate and
return identical sequences of numbers. [...]
It is a pseudo-random number generator. This means that you are not actually rolling a die but rather using a formula to calculate the next "random" value based on the current one. To create the illusion of randomisation a seed is used. The seed is the first value used with the formula to generate a random value.
Apparently Java's Random implementation (the "formula") does not generate more than 16 even numbers in a row here.
This behaviour is the reason why the seed is usually initialized with the time. Depending on when you start your program you will get different results.
The benefits of this approach are that you can generate repeatable results. If you have a game generating "random" maps, you can remember the seed to regenerate the same map if you want to play it again, for instance.
For true random numbers, some operating systems provide special devices that generate "randomness" from external events like mouse movements or network traffic. However, I do not know how to tap into those with Java.
From the Java doc for SecureRandom:
Many SecureRandom implementations are in the form of a pseudo-random
number generator (PRNG), which means they use a deterministic
algorithm to produce a pseudo-random sequence from a true random seed.
Other implementations may produce true random numbers, and yet others
may use a combination of both techniques.
Note that SecureRandom does NOT guarantee true random numbers either.
Why changing the seed does not help
Let's assume random numbers only have the range 0-7.
Now we use the following formula to generate the next "random" number:
next = (current + 3) % 8
The sequence becomes 0 3 6 1 4 7 2 5.
If you now take the seed 3, all you do is change the starting point.
In this simple implementation that only uses the previous value, every value may occur only once before the sequence wraps around and starts again. Otherwise there would be an unreachable part.
E.g. imagine the sequence went 0 3 6 1 3 6 1 ... The numbers 0, 4, 7, 2 and 5 would never be generated more than once (depending on the seed they might never be generated at all), since once the sequence reaches 3 it loops 3, 6, 1, 3, 6, 1, ...
Simplified pseudo-random number generators can be thought of as a permutation of all the numbers in the range, with the seed as a starting point. If they are more advanced, you would have to replace the permutation with a list in which the same numbers might occur multiple times.
More complex generators can have an internal state, allowing the same number to occur several times in the sequence, since the state lets the generator know where to continue.
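The toy generator above in runnable form (a sketch, using the made-up formula from this answer):
public class ToyGenerator {
    public static void main(String[] args) {
        int current = 3;  // the seed only picks the starting point in the cycle
        for (int i = 0; i < 8; i++) {
            System.out.print(current + " ");
            current = (current + 3) % 8;  // the toy "formula" from above
        }
        // Seed 0 prints 0 3 6 1 4 7 2 5; seed 3 prints the same cycle shifted: 3 6 1 4 7 2 5 0.
    }
}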
The implementation of Random uses a simple linear congruential formula. Such formulae have a natural periodicity and all sorts of non-random patterns in the sequence they generate.
What you are seeing is an artefact of one of these patterns ... nothing deliberate. It is not an example of bias. Rather it is an example of auto-correlation.
If you need better (more "random") numbers, then you need to use SecureRandom rather than Random.
And the answer to "why was it implemented that way" is ... performance. A call to Random.nextInt can be completed in tens or hundreds of clock cycles. A call to SecureRandom is likely to be at least 2 orders of magnitude slower, possibly more.
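Since SecureRandom extends Random, swapping it into the original snippet is a one-line change (a sketch; expect each call to be noticeably slower):
import java.security.SecureRandom;
import java.util.Random;

public class SecureRandomDropIn {
    public static void main(String[] args) {
        Random random = new SecureRandom();  // drop-in replacement for new Random()
        System.out.println(random.nextInt(18));
    }
}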
For portability, Java specifies that implementations must use the inferior LCG method for java.util.Random. This method is completely unacceptable for any serious use of random numbers like complex simulations or Monte Carlo methods. Use an add-on library with a better PRNG algorithm, like Marsaglia's MWC or KISS. Mersenne Twister and Lagged Fibonacci Generators are often OK as well.
I'm sure there are Java libraries for these algorithms. I have a C library with Java bindings if that will work for you: ojrandlib.

Compute the product a * b² * c³ ... efficiently

What is the most efficient way to compute the product
a * b² * c³ * d⁴ * e⁵ * ...
assuming that squaring costs about half as much as multiplication? The number of operands is less than 100.
Is there a simple algorithm also for the case that the multiplication time is proportional to the square of operand length (as with java.math.BigInteger)?
The first (and only) answer is perfect w.r.t. the number of operations.
Funnily enough, when applied to sizable BigIntegers, this part doesn't matter at all. Even computing a*b*b*c*c*c*d*d*d*d*e*e*e*e*e without any optimizations takes about the same time.
Most of the time gets spent in the final multiplication (BigInteger implements none of the smarter algorithms like Karatsuba, Toom–Cook, or FFT, so the time is quadratic). What's important is assuring that the intermediate multiplicands are about the same size, i.e., given numbers p, q, r, s of about the same size, computing (pq) (rs) is usually faster than ((pq) r) s. The speed ratio seems to be about 1:2 for some dozens of operands.
Update
In Java 8, there are both Karatsuba and Toom–Cook multiplications in BigInteger.
I absolutely don't know if this is the optimal approach (although I think it is asymptotically optimal), but you can do it all in O(N) multiplications. You group the arguments of a * b^2 * c^3 like this: c * (c*b) * (c*b*a). In pseudocode:
result = 1
accum = 1
for i in 0 .. n-1:        # n = number of arguments
    accum = accum * arg[n-1-i]
    result = result * accum
I think it is asymptotically optimal, because you have to use N-1 multiplications just to multiply N input arguments.
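Here is a runnable Java translation of that pseudocode using BigInteger (my own sketch; the sample values are arbitrary):
import java.math.BigInteger;

public class PowerProduct {
    // Computes arg[0]^1 * arg[1]^2 * ... * arg[n-1]^n by accumulating suffix products,
    // i.e. c * (c*b) * (c*b*a) for a * b^2 * c^3, using about 2(n-1) multiplications.
    static BigInteger product(BigInteger[] arg) {
        BigInteger result = BigInteger.ONE;
        BigInteger accum = BigInteger.ONE;
        for (int i = arg.length - 1; i >= 0; i--) {
            accum = accum.multiply(arg[i]);
            result = result.multiply(accum);
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger[] factors = {BigInteger.valueOf(2), BigInteger.valueOf(3), BigInteger.valueOf(5)};
        System.out.println(product(factors));  // 2 * 3^2 * 5^3 = 2250
    }
}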
As mentioned in the Oct 26 '12 edit:
With multiplication time superlinear in the size of the operands, it would be an advantage to keep the sizes of the operands for long operations similar (especially if the only Toom-Cook variant available is toom-2, i.e. Karatsuba). If not going for a full optimisation, putting the operands in a queue that allows popping them in order of increasing (significant) length looks like a decent shot from the hip.
Then again, there are special cases: 0, powers of 2, multiplications where one factor is (otherwise) "trivial" ("long-by-single-digit multiplication", linear in sum of factor lengths).
And squaring is simpler/faster than general multiplication (question suggests assuming ½), which would suggest the following strategy:
in a pre-processing step, count trailing zeroes weighted by exponent
result 0 if encountering a 0
remove trailing zeroes, discard resulting values of 1
result 1 if no values left
find and combine values occurring more than once
set up a queue allowing extraction of the "shortest" number. For each pair (number, exponent), insert the factors exponentiation by squaring would multiply
optional: combine "trivial factors" (see above) and re-insert
Not sure how to go about this. Say factors of length 12 were trivial, and the initial factors are of length 1, 2, …, 10, 11, 12, …, n. Optimally, you combine 1+10, 2+9, … for 7 trivial factors from 12. Combining shortest gives 3, 6, 9, 12 for 8 from 12.
extract the shortest pair of factors, multiply and re-insert (see the sketch after this list)
once there is just one number, the result is that with the zeroes from the first step tacked on
(If factorisation was cheap, it would have to go on pretty early to get most from cheap squaring.)
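Here is a rough sketch of the "extract the shortest pair of factors, multiply and re-insert" step from the list above, using a PriorityQueue ordered by bit length; the factor values are arbitrary placeholders of mine:
import java.math.BigInteger;
import java.util.Comparator;
import java.util.PriorityQueue;

public class ShortestFirstProduct {
    // Repeatedly multiplies the two currently shortest factors, which keeps the
    // operand sizes balanced, as suggested above.
    static BigInteger multiplyAll(PriorityQueue<BigInteger> queue) {
        while (queue.size() > 1) {
            BigInteger a = queue.poll();
            BigInteger b = queue.poll();
            queue.add(a.multiply(b));
        }
        return queue.isEmpty() ? BigInteger.ONE : queue.poll();
    }

    public static void main(String[] args) {
        PriorityQueue<BigInteger> queue =
                new PriorityQueue<>(Comparator.comparingInt(BigInteger::bitLength));
        // For a * b^2 * c^3, insert the factors that exponentiation by squaring would multiply.
        queue.add(new BigInteger("123456789"));         // a
        queue.add(new BigInteger("987654321").pow(2));  // b^2
        queue.add(new BigInteger("555555555").pow(3));  // c^3
        System.out.println(multiplyAll(queue).bitLength() + " bits in the result");
    }
}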
