Random.nextInt(int) is [slightly] biased - java

Namely, it will never generate more than 16 even numbers in a row with some specific upperBound parameters:
Random random = new Random();
int c = 0;
int max = 17;
int upperBound = 18;
while (c <= max) {
    int nextInt = random.nextInt(upperBound);
    boolean even = nextInt % 2 == 0;
    if (even) {
        c++;
    } else {
        c = 0;
    }
}
In this example the code loops forever, while with an upperBound of, for example, 16, it terminates quickly.
What can be the reason for this behavior? There are some notes in the method's javadoc, but I failed to understand them.
UPD1: The code seems to terminate with odd upper bounds, but may get stuck with even ones
UPD2:
I modified the code to capture the statistics of c as suggested in the comments:
Random random = new Random();
int c = 0;
long trials = 1 << 58; // note: int shift, so this is effectively 1 << 26
int max = 20;
int[] stat = new int[max + 1];
while (trials > 0) {
    while (c <= max && trials > 0) {
        int nextInt = random.nextInt(18);
        boolean even = nextInt % 2 == 0;
        if (even) {
            c++;
        } else {
            stat[c] = stat[c] + 1;
            c = 0;
        }
        trials--;
    }
}
System.out.println(Arrays.toString(stat));
Now it tries to reach 20 evens in a row, to get better statistics, and the upperBound is still 18.
The results turned out to be more than surprising:
[16776448, 8386560, 4195328, 2104576, 1044736,
518144, 264704, 132096, 68864, 29952, 15104,
12032, 1792, 3072, 256, 512, 0, 256, 0, 0]
At first it decreases as expected by a factor of 2, but note the last line! Here it goes crazy and the captured statistics seem completely weird.
Here is a bar plot in log scale.
How c reaches the value 17 exactly 256 times is yet another mystery.

http://docs.oracle.com/javase/6/docs/api/java/util/Random.html:
An instance of this class is used to generate a stream of
pseudorandom numbers. The class uses a 48-bit seed, which is modified
using a linear congruential formula. (See Donald Knuth, The Art of
Computer Programming, Volume 2, Section 3.2.1.)
If two instances of Random are created with the same seed, and the
same sequence of method calls is made for each, they will generate and
return identical sequences of numbers. [...]
It is a pseudo-random number generator. This means that you are not actually rolling a die but rather using a formula to calculate the next "random" value based on the current one. To create the illusion of randomness a seed is used. The seed is the first value fed into the formula.
Apparently Java's Random implementation (the "formula") does not generate more than 16 even numbers in a row with this upper bound.
This behaviour is the reason why the seed is usually initialized with the time. Depending on when you start your program you will get different results.
The benefit of this approach is that you can generate repeatable results. If you have a game generating "random" maps, you can remember the seed to regenerate the same map if you want to play it again, for instance.
For true random numbers some operating systems provide special devices that generate "randomness" from external events like mouse movements or network traffic. However, I do not know how to tap into those with Java.
From the Javadoc for SecureRandom:
Many SecureRandom implementations are in the form of a pseudo-random
number generator (PRNG), which means they use a deterministic
algorithm to produce a pseudo-random sequence from a true random seed.
Other implementations may produce true random numbers, and yet others
may use a combination of both techniques.
Note that SecureRandom does NOT guarantee true random numbers either.
Why changing the seed does not help
Let's assume random numbers could only have the range 0-7.
Now we use the following formula to generate the next "random" number:
next = (current + 3) % 8
The sequence becomes 0 3 6 1 4 7 2 5.
If you now take the seed 3, all you do is change the starting point.
In this simple implementation, which only uses the previous value, every value may occur only once before the sequence wraps around and starts again. Otherwise there would be an unreachable part.
E.g. imagine the formula mapped 1 back to 3 instead of to 4, giving a sequence like 4 7 2 5 0 3 6 1 3 6 1 ... The numbers 0, 4, 7, 2 and 5 would never be generated more than once (depending on the seed they might never be generated at all), since once the sequence reaches 3 it loops 3, 6, 1, 3, 6, 1, ...
Simplified pseudo-random number generators can be thought of as a permutation of all numbers in the range, with the seed as a starting point. More advanced ones would replace the permutation with a list in which the same numbers might occur multiple times.
More complex generators can have an internal state, allowing the same number to occur several times in the sequence, since the state lets the generator know where to continue.
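As a minimal sketch of the toy generator described above (the class name ToyGenerator is just for illustration; the constants 3 and 8 are the example values from this answer), this shows that the seed only picks the starting point within one fixed cycle:
public class ToyGenerator {
    private int current;

    ToyGenerator(int seed) {
        current = seed; // the seed is nothing more than the starting point in the cycle
    }

    int next() {
        current = (current + 3) % 8; // the "formula" from above
        return current;
    }

    public static void main(String[] args) {
        ToyGenerator g = new ToyGenerator(3); // seed 3: same cycle as seed 0, just shifted
        for (int i = 0; i < 8; i++) {
            System.out.print(g.next() + " "); // prints: 6 1 4 7 2 5 0 3
        }
    }
}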

The implementation of Random uses a simple linear congruential formula. Such formulae have a natural periodicity and all sorts of non-random patterns in the sequence they generate.
What you are seeing is an artefact of one of these patterns ... nothing deliberate. It is not an example of bias. Rather it is an example of auto-correlation.
If you need better (more "random") numbers, then you need to use SecureRandom rather than Random.
And the answer to "why was it implemented that way" is ... performance. A call to Random.nextInt can be completed in tens or hundreds of clock cycles. A call to SecureRandom is likely to be at least 2 orders of magnitude slower, possibly more.
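Since SecureRandom extends java.util.Random, swapping it in is usually a one-line change. A small sketch (the speed difference quoted above is this answer's estimate, not something measured here):
import java.security.SecureRandom;
import java.util.Random;

public class GeneratorSwap {
    public static void main(String[] args) {
        Random fast = new Random();         // LCG: fast, but predictable and auto-correlated
        Random strong = new SecureRandom(); // crypto-quality, noticeably slower

        System.out.println(fast.nextInt(18));
        System.out.println(strong.nextInt(18)); // same API, different generator underneath
    }
}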

For portability, Java specifies that implementations must use the inferior LCG method for java.util.Random. This method is completely unacceptable for any serious use of random numbers like complex simulations or Monte Carlo methods. Use an add-on library with a better PRNG algorithm, like Marsaglia's MWC or KISS. Mersenne Twister and Lagged Fibonacci Generators are often OK as well.
I'm sure there are Java libraries for these algorithms. I have a C library with Java bindings if that will work for you: ojrandlib.
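For a flavour of what such a generator looks like, here is a minimal sketch of Marsaglia's classic two-lag multiply-with-carry (MWC) generator, translated to Java from his widely circulated C snippet (the seed values are his defaults; treat this as an illustration of the technique, and prefer a vetted library for real work):
public class Mwc {
    private int z = 362436069; // any non-zero seeds will do; these are Marsaglia's defaults
    private int w = 521288629;

    int nextInt() {
        z = 36969 * (z & 65535) + (z >>> 16); // multiply-with-carry on the low 16 bits of z
        w = 18000 * (w & 65535) + (w >>> 16); // and of w; the carry lives in the high 16 bits
        return (z << 16) + w;                 // combine the two lags into one 32-bit output
    }
}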

Related

Coin toss using random numbers does not appear exactly random

I wanted a random number generator to simulate a coin toss and here's what I did:
public class CoinToss
{
    public static void main(String args[])
    {
        int num = (int) (1000 * Math.random());
        if (num < 500)
            System.out.println("H");
        else
            System.out.println("T");
    }
}
The results were discouraging, as I got 16 heads and 4 tails in 20 runs.
That does not appear to be random. It's possible, but I want a general opinion on whether the program is correct. Am I missing something mathematically?
I modified your code a little and it seems to be random enough.
code:
int h = 0;
int t = 0;
for (int i = 0; i < 1000; i++) {
    int num = (int) (1000 * Math.random());
    if (num < 500) {
        h++;
    } else {
        t++;
    }
}
System.out.println("T:" + t);
System.out.println("H:" + h);
output:
T:506
H:494
I guess this is the thing with randomness ^^
20 runs is not a big enough sample size to assess how random it is. Think of it this way: if you did 4 runs and got 4 heads, you'd think, "Wow, that's not random at all." But in fact if you took 4 coins, and flipped them 16 times, you'd expect to get all 4 heads at least once. So if you do a small number of runs, and you get results that aren't equally divided between heads and tails, that doesn't mean it's not random.
Or look at it this way: if you wrote some code that just printed "Heads" then "Tails" then "Heads" and so on, you'd get exactly half heads and half tails. But that's not random at all! It's just a repeating pattern.
So the moral of the story is not to be surprised when random results look uneven over short runs. Try re-writing your code so that it counts how many heads and how many tails, and let it flip about a million or so, and see if you don't get about 500,000 each. It should be a little more or a little less, because random doesn't give you exact, but it should be closer.
Your code seems to be correct, although you could implement it more simply:
Random r = new Random();
int num = r.nextInt(2);
if (num == 0)
    System.out.println("H");
else
    System.out.println("T");
Random#nextInt(int i) returns a random integer between 0 and i-1, inclusive.
A pseudorandom number generator (PRNG), also known as a deterministic random bit generator (DRBG), is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers.
The PRNG-generated sequence is not truly random, because it is completely determined by a relatively small set of initial values, called the PRNG's seed (although sequences that are closer to truly random can be generated using hardware random number generators).
"Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." (John von Neumann)
You need way more runs in order to get roughly equal numbers of each. With such a small number of outputs, sometimes you will get counts close to each other and sometimes one side will "show" way more than the other. Actually, the probability of getting exactly 4 tails and 16 heads is 0.462%, which is "realistic" enough to happen... Try playing with higher numbers of runs and see how it behaves.
And by the way, think about this input:
6 6 6 6 6 6 6 6 6 6
It doesn't seem random, right? But it exists in the decimal expansion of π at some point, so it is part of a random-looking series. It's just a question of sample size, so you must think that way when working with random numbers. Think more about the random generator than about the results. You are using a correct function, since it is seeded from System.nanoTime(), so the generator is fine, but your sample is too small.
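The 0.462% figure can be checked directly: the probability of exactly 4 tails in 20 fair flips is C(20,4) / 2^20. A quick sketch:
public class CoinOdds {
    public static void main(String[] args) {
        // C(20,4): number of ways to place the 4 tails among the 20 flips
        long combinations = 1;
        for (int i = 0; i < 4; i++) {
            combinations = combinations * (20 - i) / (i + 1);
        }
        double probability = combinations / Math.pow(2, 20);
        System.out.printf("%d / 2^20 = %.3f%%%n", combinations, probability * 100);
        // prints: 4845 / 2^20 = 0.462%
    }
}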

Consistent random numbers across versions and platforms

I need/want to get random (well, not entirely) numbers to use for password generation.
What I do: Currently I am generating them with SecureRandom.
I am obtaining the object with
SecureRandom sec = SecureRandom.getInstance("SHA1PRNG", "SUN");
and then seeding it like this
sec.setSeed(seed);
Target: A (preferably fast) way to create random numbers which are cryptographically at least as safe as the SHA1PRNG SecureRandom implementation. These need to be the same on different versions of the JRE and Android.
EDIT: The seed is generated from user input.
Problem: With SecureRandom.getInstance("SHA1PRNG", "SUN"); it fails like this:
java.security.NoSuchProviderException: SUN. Omitting the , "SUN" argument produces random numbers, but those are different from the default (JRE 7) numbers.
Question: How can I achieve my Target?
You don't want it to be predictable: I do want that, because I need the predictability so that the same preconditions result in the same output. If they are not the same, it is hard to do what the user expects from the application.
EDIT: By predictable I mean that, knowing a single byte (or a hundred), you should not be able to predict the next, but knowing the seed, you should be able to predict the first (and all the others). Maybe another word is reproducible.
If anyone knows of a more intuitive way, please tell me!
I ended up isolating the Sha1Prng from the Sun sources, which guarantees reproducibility on all versions of Java and Android. I needed to drop some important methods to ensure compatibility with Android, as Android does not have access to the nio classes...
I recommend using UUID.randomUUID(), then splitting it into longs using getLeastSignificantBits() and getMostSignificantBits()
If you want predictable, they aren't random. That breaks your "Target" requirement of being "safe" and devolves into a simple shared secret between two servers.
You can get something that looks sort of random but is predictable by using the characteristics of prime integers: you build a set of integers by starting with I (some specific integer), adding the first prime number, and taking the result modulo the 2nd prime number. (In truth the first and second numbers only have to be relatively prime, meaning they have no common prime factors other than 1, in case you call that a factor.)
If you repeat the process of adding and taking the modulo, you get a set of numbers that you can reproduce repeatably, and they are ordered in the sense that taking any member of the set, adding the first prime and taking the modulo by the 2nd prime always gives the same result.
Finally, if the 1st prime number is large relative to the second one, the sequence is not easily predictable by humans and seems sort of random.
For example, 1st prime = 7, 2nd prime = 5 (Note that this shows how it works but is not useful in real life)
Start with 2. Add 7 to get 9. Modulo 5 to get 4.
4 plus 7 = 11. Modulo 5 = 1.
The sequence is 2, 4, 1, 3, 0 and then it repeats.
Now for real life generation of numbers that seem random. The relatively prime numbers are 91193 and 65536. (I chose the 2nd one because it is a power of 2 so all modulo-ed values can fit in 16 bits.)
int first = 91193;
int modByLogicalAnd = 0xFFFF;
int nonRandomNumber = 2345; // Use something else
for (int i = 0; i < 1000; ++i) {
    nonRandomNumber += first;
    nonRandomNumber &= modByLogicalAnd; // same as modulo 65536
    // print it here
}
Each iteration generates 2 bytes of sort of random numbers. You can pack several of them into a buffer if you need larger random "strings".
And they are repeatable. Your user can pick the starting point and you can use any prime you want (or, in fact, any number without 2 as a factor).
BTW - Using a power of 2 as the 2nd number makes it more predictable.
Ignoring RNGs that use some physical input (random clock bits, electrical noise, etc.), all software RNGs are predictable, given the same starting conditions. They are, after all, (hopefully) deterministic computer programs.
There are some algorithms that intentionally include physical input (by, e.g., sampling the computer clock occasionally) in an attempt to prevent predictability, but those are (to my knowledge) the exception.
So any "conventional" RNG, given the same seed and implemented to the same specification, should produce the same sequence of "random" numbers. (This is why a computer RNG is more properly called a "pseudo-random number generator".)
The fact that an RNG can be seeded with a previously-used seed and reproduce a "known" sequence of numbers does not make the RNG any less secure than one where you are somehow prevented from seeding it (though it may be less secure than the fancy algorithms that reseed themselves at intervals). And the ability to do this, to reproduce the same sequence again and again, is not only extraordinarily useful in testing, it has some "real life" applications in encryption and other security applications. (In fact, an encryption algorithm is, in essence, simply a reproducible random number generator.)

Why are initial random numbers similar when using similar seeds?

I discovered something strange with the generation of random numbers using Java's Random class.
Basically, if you create multiple Random objects using close seeds (for example between 1 and 1000), the first value generated by each generator will be almost the same, but the following values look fine (I didn't check further).
Here are the first two generated doubles with seeds from 0 to 9:
0 0.730967787376657 0.24053641567148587
1 0.7308781907032909 0.41008081149220166
2 0.7311469360199058 0.9014476240300544
3 0.731057369148862 0.07099203475193139
4 0.7306094602878371 0.9187140138555101
5 0.730519863614471 0.08825840967622589
6 0.7307886238322471 0.5796252073129174
7 0.7306990420600421 0.7491696031336331
8 0.7302511331990172 0.5968915822372118
9 0.7301615514268123 0.7664359929590888
And from 991 to 1000 :
991 0.7142160704801332 0.9453385235522973
992 0.7109015598097105 0.21848118381994108
993 0.7108119780375055 0.38802559454181795
994 0.7110807233541204 0.8793923921785096
995 0.7109911564830766 0.048936787999225295
996 0.7105432327208906 0.896658767102804
997 0.7104536509486856 0.0662031629235198
998 0.7107223962653005 0.5575699754613725
999 0.7106328293942568 0.7271143712820883
1000 0.7101849056320707 0.574836350385667
And here is a figure showing the first value generated with seeds from 0 to 100,000: the first random double generated, plotted against the seed.
I searched for information about this, but I didn't see anything referring to this precise problem. I know that there are many issues with LCG algorithms, but I didn't know about this one, and I was wondering if it is a known issue.
Also, do you know whether this problem affects only the first value (or the first few values), or whether it is more general and using close seeds should be avoided?
Thanks.
You'd be best served by downloading and reading the Random source, as well as some papers on pseudo-random generators, but here are some of the relevant parts of the source. To begin with, there are three constant parameters that control the algorithm:
private final static long multiplier = 0x5DEECE66DL;
private final static long addend = 0xBL;
private final static long mask = (1L << 48) - 1;
The multiplier works out to approximately 2^34 and change, the mask 2^48 - 1, and the addend is pretty close to 0 for this analysis.
When you create a Random with a seed, the constructor calls setSeed:
synchronized public void setSeed(long seed) {
seed = (seed ^ multiplier) & mask;
this.seed.set(seed);
haveNextNextGaussian = false;
}
You're providing a seed pretty close to zero, so the initial seed value that gets set is dominated by the multiplier when the two are XORed together. In all your test cases with seeds close to zero, the seed that is used internally is roughly 2^34; but it's easy to see that even if you provided very large seed numbers, similar user-provided seeds will yield similar internal seeds.
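You can see this directly by applying the same scrambling step outside of Random, using the constants quoted above (a small sketch): nearby user seeds map to internal seeds that differ only in their low bits.
long multiplier = 0x5DEECE66DL;
long mask = (1L << 48) - 1;
for (long userSeed = 0; userSeed < 10; userSeed++) {
    long internal = (userSeed ^ multiplier) & mask;
    System.out.println(userSeed + " -> " + internal); // all very close to 25214903917
}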
The final piece is the next(int) method, which actually generates a random integer of the requested length based on the current seed, and then updates the seed:
protected int next(int bits) {
    long oldseed, nextseed;
    AtomicLong seed = this.seed;
    do {
        oldseed = seed.get();
        nextseed = (oldseed * multiplier + addend) & mask;
    } while (!seed.compareAndSet(oldseed, nextseed));
    return (int)(nextseed >>> (48 - bits));
}
This is called a 'linear congruential' pseudo-random generator, meaning that it generates each successive seed by multiplying the current seed by a constant multiplier and then adding a constant addend (and then masking to take the lower 48 bits, in this case). The quality of the generator is determined by the choice of multiplier and addend, but the output from all such generators can be easily predicted based on the current input and has a set period before it repeats itself (hence the recommendation not to use them in sensitive applications).
The reason you're seeing similar initial output from nextDouble given similar seeds is that, because the computation of the next integer only involves a multiplication and addition, the magnitude of the next integer is not much affected by differences in the lower bits. Calculation of the next double involves computing a large integer based on the seed and dividing it by another (constant) large integer, and the magnitude of the result is mostly affected by the magnitude of the integer.
Repeated calculations of the next seed will magnify the differences in the lower bits of the seed because of the repeated multiplication by the constant multiplier, and because the 48-bit mask throws out the highest bits each time, until eventually you see what looks like an even spread.
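Putting the pieces together outside of Random reproduces the effect. The sketch below (the class name SimilarSeeds is just for illustration) uses the constants quoted above and the nextDouble() recipe documented in Random's javadoc, next(26) and next(27) combined and divided by 2^53; the first value for each seed comes out around 0.730..., matching the table in the question:
public class SimilarSeeds {
    private static final long MULTIPLIER = 0x5DEECE66DL;
    private static final long ADDEND = 0xBL;
    private static final long MASK = (1L << 48) - 1;

    private long seed;

    SimilarSeeds(long userSeed) {
        seed = (userSeed ^ MULTIPLIER) & MASK; // same scrambling as Random's constructor
    }

    private int next(int bits) {
        seed = (seed * MULTIPLIER + ADDEND) & MASK;
        return (int) (seed >>> (48 - bits));
    }

    double nextDouble() {
        return (((long) next(26) << 27) + next(27)) / (double) (1L << 53);
    }

    public static void main(String[] args) {
        for (long s = 0; s <= 9; s++) {
            System.out.println(s + " " + new SimilarSeeds(s).nextDouble());
        }
    }
}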
I wouldn't have called this an "issue".
And also, do you know if this problem only for the first value (or first few values), or if it is more general and using close seeds should be avoided?
Correlation patterns between successive numbers is a common problem with non-crypto PRNGs, and this is just one manifestation. The correlation (strictly auto-correlation) is inherent in the mathematics underlying the algorithm(s). If you want to understand that, you should probably start by reading the relevant part of Knuth's Art of Computer Programming Chapter 3.
If you need non-predictability you should use a (true) random seed for Random ... or let the system pick a "pretty random" one for you; e.g. using the no-args constructor. Or better still, use a real random number source or a crypto-quality PRNG instead of Random.
For the record:
The javadoc (Java 7) does not specify how Random() seeds itself.
The implementation of Random() on Java 7 for Linux is seeded from the nanosecond clock, XORed with a 'uniquifier' sequence. The 'uniquifier' sequence is an LCG which uses a different multiplier and whose state is static. This is intended to avoid auto-correlation of the seeds ...
This is a fairly typical behaviour for pseudo-random seeds - they aren't required to provide completely different random sequences, they only provide a guarantee that you can get the same sequence again if you use the same seed.
The behaviour happens because of the mathematical form of the PRNG - the Java one uses a linear congruential generator, so you are just seeing the results running the seed through one round of the linear congruential generator. This isn't enough to completely mix up all the bit patterns, hence you see similar results for similar seeds.
Your best strategy is probably just to use very different seeds - one option would be to obtain these by hashing the seed values that you are currently using.
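A sketch of that hashing idea (the class name HashedSeed is just for illustration, and SHA-256 via MessageDigest is merely one convenient choice): run the user-supplied seed through a hash and use the first 8 bytes of the digest as the actual seed, so that nearby inputs produce unrelated internal seeds.
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.util.Random;

public class HashedSeed {
    static long scramble(long seed) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha.digest(ByteBuffer.allocate(8).putLong(seed).array());
        return ByteBuffer.wrap(digest).getLong(); // first 8 bytes of the hash
    }

    public static void main(String[] args) throws Exception {
        for (long seed = 0; seed < 5; seed++) {
            Random r = new Random(scramble(seed));
            System.out.println(seed + " -> " + r.nextDouble()); // no longer clustered around 0.73
        }
    }
}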
By making random seeds (for instance, using some mathematical function of System.currentTimeMillis() or System.nanoTime() for seed generation) you can get better random results. You can also look here for more information.

Probability in Java

I was curious to know: how do I implement probability in Java? For example, if the chance of a variable showing is 1/25, how would I implement that? Or any other probability? Please point me in the general direction.
You'd use Random to generate a random number, then test it against a literal to match the probability you're trying to achieve.
So given:
boolean val = new Random().nextInt(25)==0;
val will have a 1/25 probability of being true (since nextInt(25) returns, with equal probability, any number starting at 0 and up to, but not including, 25).
You would of course have to import java.util.Random; as well.
As pointed out below, if you're getting more than one random number it'd be more efficient to reuse the Random object rather than recreating it all the time:
Random rand = new Random();
boolean val = rand.nextInt(25)==0;
..
boolean val2 = rand.nextInt(25)==0;
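If you want to sanity-check the 1/25 figure empirically, count how often the test succeeds over many trials; it should land near 4%. A quick sketch:
Random random = new Random();
int hits = 0;
int trials = 1000000;
for (int i = 0; i < trials; i++) {
    if (random.nextInt(25) == 0) {
        hits++;
    }
}
System.out.println(hits + " hits out of " + trials); // roughly 40,000, i.e. about 4%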
Generally you use a random number generator. Most of those return a number in the interval [0,1[ so you would then check whether that number is < 0.04 or not.
if (new Random().nextDouble() < 0.04) { // you might want to cache the Random instance
    // we hit the 1/25 (4%) case
}
Or
if (Math.random() < 0.04) {
    // we hit the 1/25 (4%) case
}
Note that there are multiple random number generators that have different properties, but for simple applications the Random class should be sufficient.
Edit: I changed the condition from <= to < because the upper boundary of the random number is exclusive, i.e. the largest returned value will still be < 1.0. Hence x <= 0.04 would actually give slightly more than a 4% chance, while x < 0.04 is accurate (or as accurate as floating point math can be).
Since 1.7 it's better to use (in a concurrent environment, at least):
ThreadLocalRandom.current().nextInt(25) == 0
Javadoc
A random number generator isolated to the current thread. Like the global Random generator used by the Math class, a ThreadLocalRandom is initialized with an internally generated seed that may not otherwise be modified. When applicable, use of ThreadLocalRandom rather than shared Random objects in concurrent programs will typically encounter much less overhead and contention. Use of ThreadLocalRandom is particularly appropriate when multiple tasks (for example, each a ForkJoinTask) use random numbers in parallel in thread pools.
Usages of this class should typically be of the form: ThreadLocalRandom.current().nextX(...) (where X is Int, Long, etc). When all usages are of this form, it is never possible to accidentally share a ThreadLocalRandom across multiple threads.
This class also provides additional commonly used bounded random generation methods.
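Applied to the 1/25 example from the question, the recommended usage form looks like this (a sketch):
import java.util.concurrent.ThreadLocalRandom;

// inside any method, possibly running on several threads:
if (ThreadLocalRandom.current().nextInt(25) == 0) {
    // we hit the 1/25 (4%) case
}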
Java has a class called java.util.Random which can generate random numbers. If you want something to happen with probability 1/25, simply generate a random number between 1 and 25 (or 0 and 24 inclusive) and check whether that number is equal to 1.
if (new java.util.Random().nextInt(25) == 0) {
    // Do something.
}
Maybe you can implement this by generating random numbers.
Random rn = new Random();
double d = rn.nextDouble(); // random value in range 0.0 (inclusive) to 1.0 (exclusive)
if (d < 0.04) {
    doSomeThing();
}

Generate random integers in java

How to generate random integers but making sure that they don't ever repeat?
For now I use :
Random randomGenerator = new Random();
randomGenerator.nextInt(100);
EDIT I
I'm looking for the most efficient way, or the least bad
EDIT II
Range is not important
ArrayList<Integer> list = new ArrayList<Integer>(100);
for (int i = 0; i < 100; i++)
{
    list.add(i);
}
Collections.shuffle(list);
Now, list contains the numbers 0 through 99, but in a random order.
If what you want is a pseudo-random non-repeating sequence of numbers, then you should look at a linear feedback shift register (LFSR). It will produce every non-zero number below a given power of 2 exactly once before repeating (an LFSR never produces 0). You can easily limit it to N by picking the nearest larger power of 2 and discarding all results over N. It doesn't have the memory constraints the other collection-based solutions here have.
You can find Java implementations here.
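As an illustration of the idea (this is not taken from the linked implementations), here is a 16-bit maximal-length Galois LFSR; with this tap mask it visits every value from 1 to 65535 exactly once before repeating:
public class Lfsr16 {
    public static void main(String[] args) {
        int start = 0xACE1; // any non-zero 16-bit start value ("seed")
        int lfsr = start;
        int count = 0;
        do {
            int lsb = lfsr & 1;
            lfsr >>>= 1;
            if (lsb != 0) {
                lfsr ^= 0xB400; // taps 16, 14, 13, 11: a maximal-length polynomial
            }
            count++;
        } while (lfsr != start);
        System.out.println(count); // 65535: every non-zero 16-bit value appeared once
    }
}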
How to generate random integers but making sure that they don't ever repeat?
First, I'd just like to point out that the constraint that the numbers don't repeat makes them non-random by definition.
I think that what you really need is a randomly generated permutation of the numbers in some range; e.g. 0 to 99. Even then, once you have used all numbers in the range, a repeat is unavoidable.
Obviously, you can increase the size of your range so that you can get a larger number without any repeats. But when you do this you run into the problem that your generator needs to remember all previously generated numbers. For large N that takes a lot of memory.
The alternative to remembering lots of numbers is to use a pseudo-random number generator with a long cycle length, and return the entire state of the generator as the "random" number. That guarantees no repeated numbers ... until the generator cycles.
(This answer is probably way beyond what the OP is interested in ... but someone might find it useful.)
If you have a very large range of integers (>>100), then you could put the generated integers into a hash table. When generating new random numbers, keep generating until you get a number which isn't in your hash table.
Depending on the application, you could also generate a strictly increasing sequence, i.e. start with a seed and add a random number within a range to it, then re-use that result as the seed for the next number. You can set how guessable it is by adjusting the range, balancing this with how many numbers you will need (if you made incremental steps of up to e.g., 1,000, you're not going to exhaust a 64-bit unsigned integer very quickly, for example).
Of course, this is pretty bad if you're trying to create some kind of unguessable number in the cryptographic sense, however having a non-repeating sequence would probably provide a reasonably effective attack on any cypher based on it, so I'm hoping you're not employing this in any kind of security context.
That said, this solution is not prone to timing attacks, which some of the others suggested are.
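A sketch of that strictly increasing approach (the step bound of 1,000 is the example figure from above; the starting value is arbitrary):
Random rng = new Random();
long current = 42L; // arbitrary starting point, i.e. the "seed"
for (int i = 0; i < 10; i++) {
    current += 1 + rng.nextInt(1000); // each step is at least 1, so a value can never repeat
    System.out.println(current);
}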
Matthew Flaschen has the solution that will work for small numbers. If your range is really big, it could be better to keep track of used numbers using some sort of Set:
Set<Integer> usedNumbers = new HashSet<Integer>();
Random randomGenerator = new Random();
int currentNumber;
while (IStillWantMoreNumbers) {
    do {
        currentNumber = randomGenerator.nextInt(100000);
    } while (usedNumbers.contains(currentNumber));
    usedNumbers.add(currentNumber); // remember it, otherwise the check above never rejects anything
}
You'll have to be careful with this though, because as the proportion of "used" numbers increases, the time this loop takes will increase rapidly. It's really only a good idea if your range is much larger than the number of values you need to generate.
Since I can't comment on the earlier answers above due to not having enough reputation (which seems backwards... shouldn't I be able to comment on others' answers, but not provide my own answers?... anyway...), I'd like to mention that there is a major flaw with relying on Collections.shuffle() which has little to do with the memory constraints of your collection:
Collections.shuffle() uses a Random object, which in Java uses a 48-bit seed. This means there are 281,474,976,710,656 possible seed values. That seems like a lot. But consider if you want to use this method to shuffle a 52-card deck. A 52-card deck has 52! (over 8*10^67) possible configurations. Since you'll always get the same shuffled result from the same seed, you can see that the set of configurations of a 52-card deck that Collections.shuffle() can produce is but a small fraction of all the possible configurations.
In fact, Collections.shuffle() is not a good solution for shuffling any collection over 16 elements. A 17-element collection has 17! or 355,687,428,096,000 configurations, meaning 74,212,451,385,344 configurations will never be the outcome of Collections.shuffle() for a 17-element list.
Depending on your needs, this can be extremely important. Poor choice of shuffle/randomization techniques can leave your software vulnerable to attack. For instance, if you used Collections.shuffle() or a similar algorithm to implement a commercial poker server, your shuffling would be biased and a savvy computer-assisted player could use that knowledge to their benefit, as it skews the odds.
If you want 256 random numbers between 0 and 255, generate one random byte, then XOR a counter with it.
Random rng = new Random();
byte randomSeed = (byte) rng.nextInt(256); // one random byte
for (int i = 0; i < 256; i++) {
    byte randomResult = (byte) (randomSeed ^ i);
    // << Do something with randomResult >>
}
Works for any power of 2.
If the Range of values is not finite, then you can create an object which uses a List to keep track of the Ranges of used Integers. Each time a new random integer is needed, one would be generated and checked against the used ranges. If the integer is unused, then it would add that integer as a new used Range, add it to an existing used Range, or merge two Ranges as appropriate.
But you probably really want Matthew Flaschen's solution.
A linear congruential generator can be used to produce a full cycle of distinct values: a full-period LCG visits every value in its range exactly once before repeating.
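As a toy illustration of such a full cycle, take m = 16, a = 5, c = 3; these satisfy the Hull-Dobell conditions (c is coprime to m, and a - 1 is divisible by every prime factor of m and by 4), so every value 0..15 appears exactly once per cycle:
int m = 16, a = 5, c = 3;
int x = 0;
for (int i = 0; i < 16; i++) {
    x = (a * x + c) % m;
    System.out.print(x + " "); // 3 2 13 4 7 6 1 8 11 10 5 12 15 14 9 0 - all 16 values, no repeats
}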
