I need/want to get random (well, not entirely random) numbers to use for password generation.
What I do: Currently I am generating them with SecureRandom.
I am obtaining the object with
SecureRandom sec = SecureRandom.getInstance("SHA1PRNG", "SUN");
and then seeding it like this
sec.setSeed(seed);
Target: A (preferably fast) way to create random numbers that are cryptographically at least as safe as the SHA1PRNG SecureRandom implementation. These need to be the same on different versions of the JRE and Android.
EDIT: The seed is generated from user input.
Problem: With SecureRandom.getInstance("SHA1PRNG", "SUN"); it fails like this:
java.security.NoSuchProviderException: SUN. Omitting the , "SUN" argument produces random numbers, but those differ from the default (JRE 7) numbers.
Question: How can I achieve my Target?
You don't want it to be predictable: I do want that, because I need the predictability so that the same preconditions produce the same output. If they don't, it's hard to do what the user expects from the application.
EDIT: By predictable I mean that, knowing a single byte (or a hundred), you should not be able to predict the next, but knowing the seed, you should be able to predict the first (and all the others). Maybe a better word is reproducible.
If anyone knows of a more intuitive way, please tell me!
I ended up isolating the Sha1Prng from the Sun sources, which guarantees reproducibility on all versions of Java and Android. I needed to drop some important methods to ensure compatibility with Android, as Android does not have access to the NIO classes...
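For reference, here is a minimal sketch (not the isolated Sun code, and not a vetted DRBG) of one way to get seed-reproducible output on any JRE and on Android, using only MessageDigest with SHA-256, which every Java platform is required to support. Block i is SHA-256(seed || i), so the whole stream is reproducible from the seed, while the outputs do not reveal the seed thanks to the hash's preimage resistance. The class name is mine:

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ReproducibleRng {
    private final MessageDigest md;
    private final byte[] seed;   // kept private; never emitted directly
    private long counter;

    public ReproducibleRng(byte[] seed) throws NoSuchAlgorithmException {
        this.md = MessageDigest.getInstance("SHA-256"); // mandatory on all JREs and Android
        this.seed = seed.clone();
    }

    // Returns the next 32 pseudo-random bytes: block i = SHA-256(seed || i).
    public byte[] nextBlock() {
        md.update(seed);
        for (int shift = 56; shift >= 0; shift -= 8) {
            md.update((byte) (counter >>> shift)); // big-endian counter
        }
        counter++;
        return md.digest(); // digest() also resets the MessageDigest
    }
}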
I recommend using UUID.randomUUID(), then splitting it into longs using getLeastSignificantBits() and getMostSignificantBits()
If you want predictable, they aren't random. That breaks your "Target" requirement of being "safe" and devolves into a simple shared secret between two servers.
You can get something that looks sort of random but is predictable by using the characteristics of prime integers: build a set of integers by starting with I (some specific integer), adding the first prime number, and then taking the result modulo the 2nd prime number. (In truth, the two numbers only have to be relatively prime, meaning they have no common prime factors other than 1.)
If you repeat the process of adding and taking the modulo, you will get a set of numbers that you can reproduce repeatably, and it is ordered in the sense that, for any member of the set, adding the first prime and taking the modulo by the 2nd prime always gives the same next member.
Finally, if the 1st prime number is large relative to the second one, the sequence is not easily predictable by humans and seems sort of random.
For example, 1st prime = 7, 2nd prime = 5 (Note that this shows how it works but is not useful in real life)
Start with 2. Add 7 to get 9. Modulo 5 to get 4.
4 plus 7 = 11. Modulo 5 = 1.
Sequence is 2, 4, 1, 3, 0 and then it repeats.
Now for real life generation of numbers that seem random. The relatively prime numbers are 91193 and 65536. (I chose the 2nd one because it is a power of 2 so all modulo-ed values can fit in 16 bits.)
int first = 91193;
int modByLogicalAnd = 0xFFFF;            // AND with 0xFFFF == modulo 65536
int nonRandomNumber = 2345;              // Use something else
for (int i = 0; i < 1000; i++) {
    nonRandomNumber += first;
    nonRandomNumber &= modByLogicalAnd;
    System.out.println(nonRandomNumber); // print it here
}
Each iteration generates 2 bytes of sort of random numbers. You can pack several of them into a buffer if you need larger random "strings".
And they are repeatable. Your user can pick the starting point and you can use any prime you want (or, in fact, any number without 2 as a factor).
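For example, a sketch of packing the 2-byte outputs into a byte buffer (the buffer size and starting value here are just illustrative):

int first = 91193;
int state = 2345;               // user-chosen starting point
byte[] buf = new byte[32];      // 16 iterations x 2 bytes each
for (int i = 0; i < buf.length; i += 2) {
    state = (state + first) & 0xFFFF;  // same add-and-mask step as above
    buf[i]     = (byte) (state >>> 8); // high byte
    buf[i + 1] = (byte) state;         // low byte
}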
BTW - Using a power of 2 as the 2nd number makes it more predictable.
Ignoring RNGs that use some physical input (random clock bits, electrical noise, etc), all software RNGs are predictable, given the same starting conditions. They are, after all, (hopefully) deterministic computer programs.
There are some algorithms that intentionally include physical input (by, e.g., sampling the computer clock occasionally) in an attempt to prevent predictability, but those are (to my knowledge) the exception.
So any "conventional" RNG, given the same seed and implemented to the same specification, should produce the same sequence of "random" numbers. (This is why a computer RNG is more properly called a "pseudo-random number generator".)
The fact that an RNG can be seeded with a previously-used seed and reproduce a "known" sequence of numbers does not make the RNG any less secure than one where you are somehow prevented from seeding it (though it may be less secure than the fancy algorithms that reseed themselves at intervals). And this ability to reproduce the same sequence again and again is not only extraordinarily useful in testing, it has some "real life" applications in encryption and other security applications. (In fact, an encryption algorithm is, in essence, simply a reproducible random number generator.)
Related
I wrote a program that simulates a dice roll:
Random r = new Random();
int result = r.nextInt(6); // note: returns 0..5, so add 1 if you want faces 1..6
System.out.println(result);
I want to know if there is a way to "predict" next generated number and how JVM determines what number to generate next?
Will my code output numbers close to real random at any JVM and OS?
They're pseudorandom numbers, meaning that for most intents and purposes, they're random enough. However, they are deterministic and entirely dependent on the seed. The following code will print out the same 10 numbers twice.
Random rnd = new Random(1234);
for (int i = 0; i < 10; i++) {
    System.out.println(rnd.nextInt(100));
}
rnd = new Random(1234);
for (int i = 0; i < 10; i++) {
    System.out.println(rnd.nextInt(100));
}
If you can choose the seed, you can precalculate the numbers first, then reset the generator with the same seed and you'll know in advance what numbers come out.
I want to know if there is a way to "predict" next generated number and how JVM determines what number to generate next?
Absolutely. The Random class is implemented as a linear congruential number generator (LCNG). The general formula for a linear congruential generator is:
new_state = (old_state * C1 + C2) modulo N
The precise algorithm used by Random is specified in the javadocs. If you know the current state of the generator¹, the next state is completely predictable.
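For instance, here is a minimal sketch that replays the documented LCG step by hand; given the internal 48-bit state, it predicts every nextInt() exactly (the constants come from the javadocs and the JDK source quoted in another answer below):

import java.util.Random;

public class PredictNext {
    static final long MULTIPLIER = 0x5DEECE66DL;
    static final long ADDEND = 0xBL;
    static final long MASK = (1L << 48) - 1;

    public static void main(String[] args) {
        long state = (1234L ^ MULTIPLIER) & MASK;         // what new Random(1234) stores internally
        Random r = new Random(1234L);
        for (int i = 0; i < 5; i++) {
            state = (state * MULTIPLIER + ADDEND) & MASK; // advance the LCG
            int predicted = (int) (state >>> (48 - 32));  // next(32)
            System.out.println(predicted == r.nextInt()); // prints true, five times
        }
    }
}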
Will my code output numbers close to real random at any JVM and OS?
If you use Random, then No. Not for any JVM on any OS.
The sequence produced by an LCNG is definitely not random, and has statistical properties that are significantly different from a true random sequence. (The sequence will be strongly auto-correlated, and this will show up if you plot the results of successive calls to Random.nextInt().)
Is this a problem? Well, it depends on what your application needs. If you need "random" numbers that are hard to predict (e.g. for an algorithm that is security related), then Random is clearly unsuitable. And if the numbers are going to be used for a Monte Carlo simulation, the innate auto-correlation of an LCNG can distort the simulation. But if you are just building a solitaire card game ... it maybe doesn't matter.
¹ To be clear, the state of a Random object consists of the values of its instance variables; see the source code. You can examine them using a debugger. At a pinch you could access them and even update them using Java reflection, but I would not advise doing that. The "previous" state is not recorded.
Yes, it is possible to predict what number a random number generator will produce next. I've seen this called cracking, breaking, or attacking the RNG. Searching for any of those terms along with "random number generator" should turn up a lot of results.
Read How We Learned to Cheat at Online Poker: A Study in Software Security for an excellent first-hand account of how a random number generator can be attacked. To summarize, the authors figured out what RNG was being used based on a faulty shuffling algorithm employed by an online poker site. They then figured out the RNG seed by sampling hands that were dealt. Once they had the algorithm and the seed, they knew exactly how the deck would be arranged after later shuffles.
Check How does java.util.Random work and how good is it?:
In other words, we begin with some start or "seed" number which ideally is "genuinely unpredictable", and which in practice is "unpredictable enough". For example, the number of milliseconds (or even nanoseconds) since the computer was switched on is available on most systems. Then, each time we want a random number, we multiply the current seed by some fixed number, a, add another fixed number, c, then take the result modulo another fixed number, m. The number a is generally large. This method of random number generation goes back pretty much to the dawn of computing. Pretty much every "casual" random number generator you can think of, from those of scientific calculators to 1980s home computers to current-day C and Visual Basic library functions, uses some variant of the above formula to generate its random numbers.
And also Predicting the next Math.random() in Java
Let's assume I have a reliably truly random source of random numbers, but it is very slow. It only gives me a few hundred numbers every couple of hours.
Since I need way more than that, I was thinking of using those few precious truly random numbers I can get as seeds for java.util.Random (or scala.util.Random). I will also always pick a new one to generate the next random number.
So I guess my questions are:
Can the numbers I generate from those Random instances in Java be considered truly random, since the seed is truly random?
Is there still a condition that is not met for true randomness?
If I keep on adding levels, at what point will randomness be lost?
Or (as I thought when I came up with it) is it truly random as long as the stream of seeds is?
I am assuming that nobody has intercepted the stream of seeds, but I do not plan to use those numbers for security purposes.
For a pseudo-random generator like java.util.Random, the next generated number in the sequence becomes predictable given only a few numbers from the sequence, so you will lose your "true randomness" very fast. Better to use one of the generators provided by java.security.SecureRandom; these are all strong random generators with a VERY long sequence length, which should be pretty hard to predict.
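A minimal usage sketch (the no-args constructor self-seeds from the platform's entropy source, so there is no predictable seed to recover):

import java.security.SecureRandom;

public class StrongRandomDemo {
    public static void main(String[] args) {
        SecureRandom sr = new SecureRandom(); // self-seeded from OS entropy
        byte[] buf = new byte[16];
        sr.nextBytes(buf);                    // 16 cryptographically strong bytes
        for (byte b : buf) {
            System.out.printf("%02x", b);
        }
        System.out.println();
    }
}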
Java's Random gives uniformly spread random numbers. That is not true randomness, which may well yield the same number five times in a row.
Furthermore, for every specific seed the same sequence is generated (intentionally). With 2^64 possible seeds this is in general irrelevant. (Note that hackers could store the first ten numbers of every sequence, thereby rapidly catching up.)
So if you use a truly random number as the seed at large intervals, you will get a uniform distribution during each interval. In effect, not very different from not using the true randomizers at all.
Now, combining random sequences might reduce the randomness. Maybe translating the true random number to bytes and XOR-ing every new random number with another byte might give a wilder variance.
Please do not take my word only - I cannot guarantee the mathematical correctness of the above. A math/algorithmic forum might give more info.
When you take out more bits than you have put in, they are for sure no longer truly random. The break point may occur even earlier if the random number generator is bad. This can be seen by considering the entropy of the sequences. The seed value determines the sequence completely, so there are at most as many sequences as seed values. If they are all distinct, the entropy is the same as that of the seeds (which is essentially the number of seed bits, assuming the seed is truly random).
However, if different seeds lead to the same pseudo random sequence the entropy of the sequences will be lower than that of the seeds. If we cut off the sequences after n bits, the entropy may be even lower.
But why care if you don't use it for security purposes? Are you sure the pseudo random numbers are not good enough for your application?
I'm using Java 6's java.util.Random (on 64-bit Linux) to randomly decide between serving one version of a page or a second one (normal A/B testing). Technically, I initialize the class once with the default empty constructor, and it is injected into a bean (Spring) as a property.
Most of the time the copies of the pages are within 8% (plus or minus) of each other, but from time to time I see deviations of up to 20 percent, e.g.:
I now have two copies that split 680 / 570. Is that considered normal?
Is there a better/faster alternative to Java's Random?
Thanks
A deviation of 20% does seem rather large, but you would need to talk to a trained statistician to find out if it is statistically anomalous.
UPDATE - and the answer is that it is not necessarily anomalous. The statistics predict that you would get an outlier like this roughly 0.3% of the time.
It is certainly plausible for a result like this to be caused by the random number generator. The Random class uses a simple "linear congruential" algorithm, and this class of algorithms is strongly auto-correlated. Depending on how you use the random numbers, this could lead to anomalies at the application level.
If this is the cause of your problem, then you could try replacing it with a crypto-strength random number generator. See the javadocs for SecureRandom. SecureRandom is more expensive than Random, but it is unlikely that this will make any difference in your use-case.
On the other hand, if these outliers are actually happening at roughly the rate predicted by the theory, changing the random number generator shouldn't make any difference.
If these outliers are really troublesome, then you need to take a different approach. Instead of generating N random choices, generate a list of false / true with exactly the required ratio, and then shuffle the list; e.g. using Collections.shuffle.
I believe this is fairly normal, as the class is meant to generate random sequences. If you want repeated patterns after a certain interval, you may want to pass a specific seed value to the constructor and reset the generator with the same seed after that interval.
e.g. after every 100/500/n calls to Random.next..., reset the seed to the old value using the Random.setSeed(long seed) method.
java.util.Random.nextBoolean() follows a binomial distribution, which has standard deviation sqrt(n*p*(1-p)), with p = 0.5.
So if you do 900 iterations, the standard deviation is sqrt(900 * 0.5 * 0.5) = 15, so most of the time the count would be in the range 435 - 465.
However, it is pseudo-random, and has a limited cycle of numbers it will go through before starting over. So if you have enough iterations, the actual deviation will be much smaller than the theoretical one. Java uses the formula seed = (seed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1). You could write a different formula with smaller numbers to purposely obtain a smaller deviation, which would make it a worse random number generator, but better fitted for your purpose.
You could, for example, create a list with 5 trues and 5 falses in it, and use Collections.shuffle to randomize the list. Then you iterate over it sequentially. After 10 iterations, you re-shuffle the list and start from the beginning. That way you'll never deviate by more than 5.
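A minimal sketch of that idea (the class name is mine; the block of 10 matches the example above):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BalancedCoin {
    private final List<Boolean> block = new ArrayList<>();

    boolean next() {
        if (block.isEmpty()) {           // refill and re-shuffle every 10 draws
            for (int i = 0; i < 5; i++) {
                block.add(true);
                block.add(false);
            }
            Collections.shuffle(block);  // random order, exact 5/5 split
        }
        return block.remove(block.size() - 1);
    }
}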
See http://en.wikipedia.org/wiki/Linear_congruential_generator for the mathematics.
I discovered something strange with the generation of random numbers using Java's Random class.
Basically, if you create multiple Random objects using close seeds (for example between 1 and 1000), the first value generated by each generator will be almost the same, but the next values look fine (I didn't search further).
Here are the first two generated doubles with seeds from 0 to 9:
0 0.730967787376657 0.24053641567148587
1 0.7308781907032909 0.41008081149220166
2 0.7311469360199058 0.9014476240300544
3 0.731057369148862 0.07099203475193139
4 0.7306094602878371 0.9187140138555101
5 0.730519863614471 0.08825840967622589
6 0.7307886238322471 0.5796252073129174
7 0.7306990420600421 0.7491696031336331
8 0.7302511331990172 0.5968915822372118
9 0.7301615514268123 0.7664359929590888
And from 991 to 1000 :
991 0.7142160704801332 0.9453385235522973
992 0.7109015598097105 0.21848118381994108
993 0.7108119780375055 0.38802559454181795
994 0.7110807233541204 0.8793923921785096
995 0.7109911564830766 0.048936787999225295
996 0.7105432327208906 0.896658767102804
997 0.7104536509486856 0.0662031629235198
998 0.7107223962653005 0.5575699754613725
999 0.7106328293942568 0.7271143712820883
1000 0.7101849056320707 0.574836350385667
And here is a figure showing the first value generated with seeds from 0 to 100,000:
[Figure: first random double generated, plotted against the seed]
I searched for information about this, but I didn't see anything referring to this precise problem. I know that there are many issues with LCG algorithms, but I didn't know about this one, and I was wondering if this is a known issue.
And also, do you know if this problem affects only the first value (or the first few values), or if it is more general and using close seeds should be avoided?
Thanks.
You'd be best served by downloading and reading the Random source, as well as some papers on pseudo-random generators, but here are some of the relevant parts of the source. To begin with, there are three constant parameters that control the algorithm:
private final static long multiplier = 0x5DEECE66DL;
private final static long addend = 0xBL;
private final static long mask = (1L << 48) - 1;
The multiplier works out to approximately 2^34 and change, the mask 2^48 - 1, and the addend is pretty close to 0 for this analysis.
When you create a Random with a seed, the constructor calls setSeed:
synchronized public void setSeed(long seed) {
seed = (seed ^ multiplier) & mask;
this.seed.set(seed);
haveNextNextGaussian = false;
}
You're providing a seed pretty close to zero, so the initial seed value that gets set is dominated by the multiplier when the two are XOR'ed together. In all your test cases with seeds close to zero, the seed that is used internally is roughly 2^34; but it's easy to see that even if you provided very large seed numbers, similar user-provided seeds will yield similar internal seeds.
The final piece is the next(int) method, which actually generates a random integer of the requested length based on the current seed, and then updates the seed:
protected int next(int bits) {
long oldseed, nextseed;
AtomicLong seed = this.seed;
do {
oldseed = seed.get();
nextseed = (oldseed * multiplier + addend) & mask;
} while (!seed.compareAndSet(oldseed, nextseed));
return (int)(nextseed >>> (48 - bits));
}
This is called a 'linear congruential' pseudo-random generator, meaning that it generates each successive seed by multiplying the current seed by a constant multiplier and then adding a constant addend (and then masking to take the lower 48 bits, in this case). The quality of the generator is determined by the choice of multiplier and addend, but the output from all such generators can be easily predicted based on the current input, and each has a set period before it repeats itself (hence the recommendation not to use them in sensitive applications).
The reason you're seeing similar initial output from nextDouble given similar seeds is that, because the computation of the next integer only involves a multiplication and addition, the magnitude of the next integer is not much affected by differences in the lower bits. Calculation of the next double involves computing a large integer based on the seed and dividing it by another (constant) large integer, and the magnitude of the result is mostly affected by the magnitude of the integer.
Repeated calculations of the next seed will magnify the differences in the lower bits of the seed because of the repeated multiplication by the constant multiplier, and because the 48-bit mask throws out the highest bits each time, until eventually you see what looks like an even spread.
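To see this concretely, here is a minimal sketch that reproduces the first nextDouble() by hand from the documented algorithm (nextDouble() combines next(26) and next(27)); its output matches the first column of the tables in the question:

import java.util.Random;

public class FirstDouble {
    static final long MULT = 0x5DEECE66DL, ADD = 0xBL, MASK = (1L << 48) - 1;

    static double firstDouble(long userSeed) {
        long s = (userSeed ^ MULT) & MASK;  // the setSeed scrambling
        s = (s * MULT + ADD) & MASK;
        int hi = (int) (s >>> (48 - 26));   // next(26)
        s = (s * MULT + ADD) & MASK;
        int lo = (int) (s >>> (48 - 27));   // next(27)
        return (((long) hi << 27) + lo) * 0x1.0p-53;
    }

    public static void main(String[] args) {
        for (long seed = 0; seed < 10; seed++) {
            System.out.println(seed + " " + firstDouble(seed)
                    + " " + new Random(seed).nextDouble()); // the two values agree
        }
    }
}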
I wouldn't have called this an "issue".
And also, do you know if this problem only for the first value (or first few values), or if it is more general and using close seeds should be avoided?
Correlation patterns between successive numbers are a common problem with non-crypto PRNGs, and this is just one manifestation. The correlation (strictly, auto-correlation) is inherent in the mathematics underlying the algorithm(s). If you want to understand that, you should probably start by reading the relevant part of Knuth's Art of Computer Programming, Chapter 3.
If you need non-predictability you should use a (true) random seed for Random ... or let the system pick a "pretty random" one for you; e.g. using the no-args constructor. Or better still, use a real random number source or a crypto-quality PRNG instead of Random.
For the record:
The javadoc (Java 7) does not specify how Random() seeds itself.
The implementation of Random() in Java 7 on Linux is seeded from the nanosecond clock, XORed with a 'uniquifier' sequence. The 'uniquifier' sequence is an LCG which uses a different multiplier, and whose state is static. This is intended to avoid auto-correlation of the seeds ...
This is a fairly typical behaviour for pseudo-random seeds - they aren't required to provide completely different random sequences, they only provide a guarantee that you can get the same sequence again if you use the same seed.
The behaviour happens because of the mathematical form of the PRNG: the Java one uses a linear congruential generator, so you are just seeing the results of running the seed through one round of the linear congruential generator. This isn't enough to completely mix up all the bit patterns, hence you see similar results for similar seeds.
Your best strategy is probably just to use very different seeds; one option would be to obtain these by hashing the seed values that you are currently using, as in the sketch below.
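A minimal sketch of that strategy, using a well-known 64-bit mixing function (MurmurHash3's fmix64) so that nearby seeds land far apart:

import java.util.Random;

public class MixedSeed {
    // MurmurHash3 fmix64: a bijective 64-bit mixer
    static long mix(long z) {
        z = (z ^ (z >>> 33)) * 0xff51afd7ed558ccdL;
        z = (z ^ (z >>> 33)) * 0xc4ceb9fe1a85ec53L;
        return z ^ (z >>> 33);
    }

    public static void main(String[] args) {
        for (long seed = 0; seed < 10; seed++) {
            // the first doubles no longer cluster around one value
            System.out.println(seed + " " + new Random(mix(seed)).nextDouble());
        }
    }
}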
You can get better random results by deriving the seed mathematically from System.currentTimeMillis() or System.nanoTime().
How to generate random integers but making sure that they don't ever repeat?
For now I use :
Random randomGenerator = new Random();
randomGenerator.nextInt(100);
EDIT I
I'm looking for the most efficient way, or the least bad one
EDIT II
Range is not important
ArrayList<Integer> list = new ArrayList<Integer>(100);
for(int i = 0; i < 100; i++)
{
list.add(i);
}
Collections.shuffle(list);
Now, list contains the numbers 0 through 99, but in a random order.
If what you want is a pseudo-random non-repeating sequence of numbers, then you should look at a linear feedback shift register (LFSR). It will produce every nonzero number below a given power of 2 without ever repeating. You can easily limit it to N by picking the nearest larger power of 2 and discarding all results over N. It doesn't have the memory constraints the other collection-based solutions here have.
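A minimal sketch of a 16-bit maximal-length Galois LFSR (taps 16, 14, 13, 11, a standard maximal-length choice); it visits every value from 1 to 65535 exactly once before repeating (zero never occurs):

public class LfsrDemo {
    public static void main(String[] args) {
        final int start = 0xACE1;   // any nonzero 16-bit start state
        int lfsr = start;
        int count = 0;
        do {
            int lsb = lfsr & 1;
            lfsr >>>= 1;
            if (lsb != 0) {
                lfsr ^= 0xB400;     // apply the taps
            }
            count++;                // lfsr now holds the next non-repeating value
        } while (lfsr != start);
        System.out.println("Period: " + count); // prints 65535
    }
}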
How to generate random integers but making sure that they don't ever repeat?
First, I'd just like to point out that the constraint that the numbers don't repeat makes them non-random by definition.
I think that what you really need is a randomly generated permutation of the numbers in some range; e.g. 0 to 99. Even then, once you have used all numbers in the range, a repeat is unavoidable.
Obviously, you can increase the size of your range so that you can get a larger number without any repeats. But when you do this you run into the problem that your generator needs to remember all previously generated numbers. For large N that takes a lot of memory.
The alternative to remembering lots of numbers is to use a pseudo-random number generator with a long cycle length, and return the entire state of the generator as the "random" number. That guarantees no repeated numbers ... until the generator cycles.
(This answer is probably way beyond what the OP is interested in ... but someone might find it useful.)
If you have a very large range of integers (>>100), then you could put the generated integers into a hash table. When generating new random numbers, keep generating until you get a number which isn't in your hash table.
Depending on the application, you could also generate a strictly increasing sequence: start with a seed, add a random number within a range to it, then re-use that result as the seed for the next number. You can tune how guessable it is by adjusting the range, balancing that against how many numbers you will need (with incremental steps of up to, say, 1,000, you're not going to exhaust a 64-bit unsigned integer very quickly).
Of course, this is pretty bad if you're trying to create some kind of unguessable number in the cryptographic sense, however having a non-repeating sequence would probably provide a reasonably effective attack on any cypher based on it, so I'm hoping you're not employing this in any kind of security context.
That said, this solution is not prone to timing attacks, which some of the others suggested are.
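A minimal sketch of that approach (the step bound of 1,000 is the example figure from above; the class name is mine):

import java.util.Random;

public class IncreasingIds {
    private final Random rng = new Random();
    private long current;

    IncreasingIds(long start) {
        current = start;
    }

    long next() {
        current += 1 + rng.nextInt(1000); // advance by 1..1000, so values never repeat
        return current;
    }
}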
Matthew Flaschen has the solution that will work for small numbers. If your range is really big, it could be better to keep track of used numbers using some sort of Set:
Set<Integer> usedNumbers = new HashSet<>();
Random randomGenerator = new Random();
int currentNumber;
while (IStillWantMoreNumbers) {
    do {
        currentNumber = randomGenerator.nextInt(100000);
    } while (usedNumbers.contains(currentNumber));
    usedNumbers.add(currentNumber); // remember it, so it can never be produced again
}
You'll have to be careful with this though, because as the proportion of "used" numbers increases, the time this loop takes will grow sharply. It's really only a good idea if your range is much larger than the quantity of numbers you need to generate.
Since I can't comment on the earlier answers due to not having enough reputation, I'd like to mention that there is a major flaw in relying on Collections.shuffle(), and it has little to do with the memory constraints of your collection:
Collections.shuffle() uses a Random object, which in Java uses a 48-bit seed. This means there are 281,474,976,710,656 possible seed values. That seems like a lot. But consider if you want to use this method to shuffle a 52-card deck. A 52-card deck has 52! (over 8 * 10^67) possible configurations. Since you'll always get the same shuffled result if you use the same seed, the set of configurations of a 52-card deck that Collections.shuffle() can produce is but a tiny fraction of all the possible configurations.
In fact, Collections.shuffle() is not a good solution for shuffling any collection over 16 elements. A 17-element collection has 17! or 355,687,428,096,000 configurations, meaning 74,212,451,385,344 configurations will never be the outcome of Collections.shuffle() for a 17-element list.
Depending on your needs, this can be extremely important. Poor choice of shuffle/randomization techniques can leave your software vulnerable to attack. For instance, if you used Collections.shuffle() or a similar algorithm to implement a commercial poker server, your shuffling would be biased and a savvy computer-assisted player could use that knowledge to their benefit, as it skews the odds.
If you want 256 non-repeating, random-looking numbers between 0 and 255, generate one random byte, then XOR a counter with it.
Random rng = new Random();
byte randomSeed = (byte) rng.nextInt(256); // one random byte
for (int i = 0; i < 256; i++) {
    byte randomResult = (byte) (randomSeed ^ (byte) i); // visits each byte value exactly once
    // << Do something with randomResult >>
}
Works for any power of 2.
If the range of values is unbounded, then you can create an object which uses a List to keep track of the ranges of used integers. Each time a new random integer is needed, one would be generated and checked against the used ranges. If the integer is unused, then it would be added as a new used range, added to an existing used range, or used to merge two ranges, as appropriate.
But you probably really want Matthew Flaschen's solution.
A linear congruential generator with a full period (a "full-cycle" LCG) can be used to generate a cycle in which every value in the range appears exactly once, as sketched below.
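A minimal sketch, with constants chosen to satisfy the Hull-Dobell conditions for a power-of-two modulus (c odd, a - 1 divisible by 4), which guarantee the full period:

public class FullCycleLcg {
    public static void main(String[] args) {
        final int a = 77, c = 1, m = 1 << 16; // full period of 65536
        int x = 0, count = 0;
        do {
            x = (a * x + c) % m; // each of 0..65535 appears exactly once per cycle
            count++;
        } while (x != 0);
        System.out.println("Period: " + count); // prints 65536
    }
}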