Exponential distribution in Java not right - values too small?

I am trying to generate exponentially distributed arrival and service times for processes. In C++, the example I have works fine: it generates pseudo-random numbers in the range [0, inf), and some are larger than 1, as expected. In Java, it does not work. The numbers are orders of magnitude smaller than their C++ equivalents, and I NEVER get any values > 0.99 even though I am using the same formula. In C++ I get 1.xx, 2.xx, etc., but never in Java.
lambda is the average rate of arrival and gets varied from 1 to 30.
I know that rand.nextDouble() gives a value between 0 and 1, and from the formula given and answers here on this site, this seems to be a needed component.
I should mention that multiplying my distribution values by 10 gets me much closer to where they need to be and they behave as expected.
In Java:
Random rand = new Random();
// if I multiply x by 10, I get much closer to the distribution I need
// I just don't know why it's off by a factor of 10?!
x = (Math.log(1-rand.nextDouble())/(-lambda));
I have also tried:
x = 0;
while (x == 0)
{
    x = (-1 / lambda) * Math.log(rand.nextDouble());
}
The C++ code I was given:
// returns a random number between 0 and 1
float urand()
{
    return( (float) rand()/RAND_MAX );
}

// returns a random number that follows an exp distribution
float genexp(float lambda)
{
    float u,x;
    x = 0;
    while (x == 0)
    {
        u = urand();
        x = (-1/lambda)*log(u);
    }
    return(x);
}
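For reference, a line-for-line Java port of that genexp would be a sketch like this (using java.util.Random; the mean of the values it returns is 1/lambda):

import java.util.Random;

// Sketch: same algorithm as the C++ genexp above, with Math.log and
// rand.nextDouble() in place of log() and urand().
static double genexp(double lambda, Random rand) {
    double x = 0;
    while (x == 0) {
        double u = rand.nextDouble();        // uniform in [0, 1)
        x = (-1.0 / lambda) * Math.log(u);   // exponential with rate lambda
    }
    return x;
}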

Related

Formula to make higher numbers harder to get in a random

I'm looking for a formula or a method that makes higher numbers harder to obtain when generating a random number. For instance, if I were picking a number out of 1000, getting 1000 would be much harder than getting a lower number such as 1-250.
One easy way is to use a square root, which skews the result toward higher numbers. We then subtract from 1,000 to make it easier to get lower numbers instead.
If the lowest value you want is zero:
1000 - (int) Math.sqrt(rand.nextInt(1001*1001))
If the lowest value you want is one:
1000 - (int) Math.sqrt(rand.nextInt(1000*1000))
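Wrapped up as a method, a minimal sketch (the helper name is just illustrative):

import java.util.Random;

// Sketch: the square root of a uniform draw over [0, 1000*1000) is biased
// toward large values, so 1000 minus it is biased toward small ones.
// Result range is 1..1000.
static int biasedRoll(Random rand) {
    return 1000 - (int) Math.sqrt(rand.nextInt(1000 * 1000));
}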
Well, a Poisson distribution with lambda less than or equal to 1 would fit your requirements:
public static int getPoisson(double lambda) {
    double L = Math.exp(-lambda);
    double p = 1.0;
    int k = 0;
    do {
        k++;
        p *= Math.random();
    } while (p > L);
    return k - 1;
}
Call it with 1 and see if it is what you want.
Use a Rand for the high number, as in
highNum = Rand(1,4) *250;
randNum = Rand(1, highNum);
With this formula, numbers between 1 and 250 are about 8.3 times as likely as numbers between 750 and 1000.
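Rand here is pseudocode; with java.util.Random the same idea could look like this sketch (method name illustrative, both ranges inclusive as in the pseudocode):

import java.util.Random;

// Sketch: pick an upper bound from {250, 500, 750, 1000}, then draw uniformly
// from 1..bound. Small numbers can appear in every case, large numbers only
// when the bound is large, so they come up less often.
static int skewedRand(Random rand) {
    int highNum = (rand.nextInt(4) + 1) * 250;   // 250, 500, 750 or 1000
    return rand.nextInt(highNum) + 1;            // 1..highNum inclusive
}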

Generating random integer between 1 and infinity

I would like to create an integer value between 1 and infinity. I want to have a probability distribution where the smaller the number is, the higher the chance it is generated.
I generate a random value R between 0 and 2.
Take the series 1, 1/2, 1/4, 1/8, ...
I want to know the smallest m for which the sum of the first m terms is bigger than R.
I need a fast way to determine m. This would be pretty straightforward if I had R in binary, since m would be equal to the number of consecutive 1s at the most significant end of the number, plus one.
There is an upper limit on the integer this method can generate: integer values have an upper limit, and double precision can also only reach so high within the [0, 2) interval. This is irrelevant, however, since it only depends on the accuracy of the data representation.
What would be the fastest way to determine m?
Set up the inequality:
R <= 2 - 2^-m
Isolate the term with m:
2^-m <= 2 - R
-m <= log2(2 - R)
m >= -log2(2 - R)
So it looks like you want ceiling(-log2(2 - R)). This is basically an exponential distribution with discretization -- the algorithm for an exponential is -ln(1 - U)/rate, where U is Uniform(0, 1) and 1/rate is the desired mean.
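A sketch of that closed form in Java (the clamp to 1 is my addition, since the raw formula gives 0 or less whenever R < 1 and the question wants values starting at 1):

import java.util.Random;

// Sketch: m = ceiling(-log2(2 - R)), with log2 computed via Math.log.
static int sampleM(Random rand) {
    double r = 2 * rand.nextDouble();                           // R uniform in [0, 2)
    int m = (int) Math.ceil(-(Math.log(2 - r) / Math.log(2)));  // ceiling(-log2(2 - R))
    return Math.max(m, 1);
}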
I think a straightforward solution will be OK here, as this series converges really fast:
if (r >= 2)
    throw new IllegalArgumentException();
double exp2M = 1 / (2 - r);
int x = (int) exp2M;
int ans = 0;
while (x > 0) {   // count the bits of exp2M, i.e. roughly log2(1 / (2 - r))
    ++ans;
    x >>= 1;
}
return ans;

With some known probability, set the value of a variable

EDIT: If anyone downvotes this question, kindly leave a comment explaining why.
I am implementing a certain algorithm in Java called the Biased Voter Model which models the opinion dynamics of social network users.
Here there is a particular step which requires me to:
With probability pi, set x = q1; else with probability pi, set x = q2, and so on.
If the above step did not set x, then:
– With probability αi, set x = q0; and
– With probability 1 − αi, set x = q, where
q ∈ [q0, q∗] is chosen uniformly at random.
Here pi and αi are randomly chosen and are constant throughout. q1, q2, ..., qn are known values. 'x' is what I need to set. Also, q* is the q(i) that has the smallest distance to (is closest to) q0. [Note: q0 is not part of the array and is also known.] But what I'm not sure of is what the phrase "With probability pi set x=q1" means.
I have tried implementing it this way:
pi = (double) Math.round(Math.random() * 10) / 10; // sets a random number rounded to one decimal place
while (index < n) {
    double j = (double) Math.round(Math.random() * 10) / 10;
    if (j > pi) {
        index++;
    } else {
        x = q[index];
        break;
    }
}
However this is for the 1st part only. q[] contains q1,q2...qn and for each index I'm generating a random number j and if it's greater than pi I ignore that index and move on to the next.
For the second part I compare j with αi. (The following is just pseudocode and I haven't written everything here.)
double j = (double) Math.round(Math.random() * 10) / 10;
if (j < αi)
{
    temp1 = q0;
}
j = (double) Math.round(Math.random() * 10) / 10;
// randomly generate 'k' which takes on a value of either 0 or 1
if (j < (1 - αi)) {
    if (k == 0)
        temp2 = q0;
    else if (k == 1)
        temp2 = q*;
}
if (temp1 == temp2)
    x = q0;
else
    x = q*;
I know this implementation is not completely correct. Where am I logically going wrong? And what does the phrase "With probability pi set x=q1" actually mean?
For further reference, check this (page 7, section 5).
I think you may be overthinking it.
At the start of the program, choose pi at random between 0 and 1, because pi is a probability.
Random random = new Random();
double pi = random.nextDouble(); // Choose a number between 0 and 1.
"With probability pi set x=q1" means: choose another random number between 0 and 1. If that number is less than pi, then set x = q1. Otherwise, set x = q. This is correct because the probability that a number chosen uniformly between 0 and 1 is less than pi is exactly pi.
double t = random.nextDouble();
if (t < pi) x = q1;
else x = q;
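If it helps, here is how the whole step from the question could be put together, as a sketch only (the array p[] of per-candidate probabilities and the names alpha, q0 and qStar are my reading of the question's pi, αi, q0 and q*):

import java.util.Random;

// Sketch: try each candidate q[i] in turn; it "wins" with probability p[i].
// If none of them sets x, fall back to the alpha step from the question.
static double chooseX(double[] q, double[] p, double alpha,
                      double q0, double qStar, Random random) {
    for (int i = 0; i < q.length; i++) {
        if (random.nextDouble() < p[i]) {            // with probability p[i], x = q[i]
            return q[i];
        }
    }
    if (random.nextDouble() < alpha) {               // with probability alpha, x = q0
        return q0;
    }
    return q0 + random.nextDouble() * (qStar - q0);  // else x uniform in [q0, qStar]
}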

Gradually increase the probability of mutation

I am implementing something very similar to a Genetic Algorithm. You go through multiple generations of a population, and at the end of a generation you create a new population in three different ways: 'randomly', 'mutation' and 'crossover'.
Currently the probabilities are static, but I need to make it so that the probability of mutation gradually increases. I appreciate any direction as I'm a little stuck.
This is what I have:
int random = generator.nextInt(10);
if (random < 1)
    randomlyCreate();
else if (random > 1 && random < 9)
    crossover();
else
    mutate();
Thank you.
In your if statement, replace the hard coded numbers with variables and update them at the start of each generation.
Your if statement effectively divides the interval 0 to 10 into three bins. The probability of calling mutate() vs crossover() vs randomlyCreate() depends on the size of each bin. You can adjust the mutation rate by gradually moving the boundaries of the bins.
In your code, mutate() is called 20% of the time (when random is 9 or 1), randomlyCreate() is called 10% of the time (when random is 0), and crossover() is called the other 70% of the time.
The code below starts out with these same ratios at generation 0, but the mutation rate increases by 1% each generation. So for generation 1 the mutation rate is 21%, for generation 2 it is 22%, and so on. randomlyCreate() is called 1 / 7 as often as crossover(), regardless of the mutation rate.
You could make the increase in mutation rate quadratic, exponential, or whatever form you choose by altering getMutationBoundary().
I've used floats in the code below. Doubles would work just as well.
If the mutation rate is what you're most interested in, it might be more intuitive to move the mutation bin so that it's at [0, 2] initially, and then increase its upper boundary from there (2.1, 2.2, etc). Then you can read off the mutation rate easily, (21%, 22%, etc).
void mainLoop() {
    // make lots of generations
    for (int generation = 0; generation < MAX_GEN; generation++) {
        float mutationBoundary = getMutationBoundary(generation);
        float creationBoundary = getCreationBoundary(mutationBoundary);
        createNewGeneration(mutationBoundary, creationBoundary);
        // Do some stuff with this generation, e.g. measure fitness
    }
}

void createNewGeneration(float mutationBoundary, float creationBoundary) {
    // create each member of this generation
    for (int i = 0; i < MAX_POP; i++) {
        createNewMember(mutationBoundary, creationBoundary);
    }
}

void createNewMember(float mutationBoundary, float creationBoundary) {
    float random = 10 * generator.nextFloat();
    if (random > mutationBoundary) {
        mutate();
    }
    else {
        if (random < creationBoundary) {
            randomlyCreate();
        }
        else {
            crossover();
        }
    }
}

float getMutationBoundary(int generation) {
    // Mutation bin is initially [8, 10].
    // Its lower bound slides down linearly, so it becomes [7.9, 10], [7.8, 10], etc.
    // Subtracting 0.1 each generation makes the bin grow in size.
    // Initially the bin is 10 - 8 = 2.0 units wide, then 10 - 7.9 = 2.1 units wide,
    // and so on. So the probability of mutation grows from 2 / 10 = 20%
    // to 2.1 / 10 = 21% and so on.
    float boundary = 8 - 0.1f * generation;
    if (boundary < 0) {
        boundary = 0;
    }
    return boundary;
}

float getCreationBoundary(float mutationBoundary) {
    // fixed ratio: the creation bin is always an eighth of the
    // non-mutation region [0, mutationBoundary], so randomlyCreate()
    // stays at 1/7 the frequency of crossover()
    return mutationBoundary / 8;
}
Use a variable where you currently use the 9, and (for example) multiply it by 0.9 every iteration, unless mutate() happens, in which case you multiply it by 3. That way the chance of mutation grows slowly but exponentially (yes, that is possible) until a mutation actually occurs, at which point the chance of another mutation drops like a brick and the process starts all over again.
These values are completely arbitrary and not based on any knowledge about mutation whatsoever; I am just showing how you can make the value change over time. Also: if you use this, make sure the variable is set back to 10 if it ever goes over 10.
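A sketch of what that could look like plugged into the question's code (the 0.9, the factor of 3 and the cap at 10 are the illustrative values from above; randomlyCreate(), crossover() and mutate() are the question's methods):

import java.util.Random;

// Sketch: mutationThreshold replaces the hard-coded 9. It shrinks by 10%
// per call, so mutation becomes steadily more likely, and jumps back up
// (times 3, capped at 10) whenever a mutation actually happens.
double mutationThreshold = 9.0;
Random generator = new Random();

void createNext() {
    double random = 10 * generator.nextDouble();
    if (random < 1) {
        randomlyCreate();
        mutationThreshold *= 0.9;
    } else if (random < mutationThreshold) {
        crossover();
        mutationThreshold *= 0.9;
    } else {
        mutate();
        mutationThreshold = Math.min(mutationThreshold * 3, 10.0);
    }
}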
Any choice of genetic probabilities for the operators is arbitrary (this is also true if you use some function to increase or decrease the probabilities over time). It is better to encode the operators inside the chromosome itself. For example, you can add a number of bits to the chromosome that encode all the operators you use. When generating children, you look at these bits across all members of the population and apply each operator with a probability equal to its current share in the whole population, considered globally.
For example:
void adaptive_probabilities(GA *ga, long chromosome_length) {
    register int i, mut = 1, xover = 1, uxover = 1, ixover = 1, pop;
    char bit1, bit2;
    for (i = 0; i < ga->npop; i++) {
        bit1 = ga->pop[i]->chromosome[chromosome_length - 2];
        bit2 = ga->pop[i]->chromosome[chromosome_length - 1];
        if (bit1 == '0' && bit2 == '0') {
            mut++;
        } else if (bit1 == '0' && bit2 == '1') {
            xover++;
        } else if (bit1 == '1' && bit2 == '0') {
            uxover++;
        } else if (bit1 == '1' && bit2 == '1') {
            ixover++;
        }
    }
    pop = ga->npop + 4;
    ga->prob[0] = mut / (float)pop;
    ga->prob[1] = xover / (float)pop;
    ga->prob[2] = uxover / (float)pop;
    ga->prob[3] = ixover / (float)pop;
}
In my case I use two bits, because my chromosomes encode four operators (three types of crossover plus mutation). The operator bits are located at the end of the chromosome. All probabilities are > 0 (the counters for the operators start from 1), which is why I normalize with
pop = ga->npop + 4;
Then I generate a random number to choose the operator according to the calculated probabilities saved in the array ga->prob. The last bits of the new children are changed to reflect the operator that was used.
This mechanism makes the GA perform a double search: in the error space (as usual) and in the operator space. The probabilities change automatically and are optimized, because children are generated with higher probability by the operators that are working best at any given moment of the run.
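The "generate a random number to choose the operator" step is ordinary roulette-wheel selection over ga->prob. A sketch of that step in Java (the method name is illustrative, and it assumes prob[] sums to 1, as the code above guarantees):

import java.util.Random;

// Sketch: pick an operator index with probability proportional to prob[i].
static int chooseOperator(float[] prob, Random random) {
    float r = random.nextFloat();
    float cumulative = 0f;
    for (int i = 0; i < prob.length; i++) {
        cumulative += prob[i];
        if (r < cumulative) {
            return i;            // e.g. 0 = mutation, 1-3 = the crossover types
        }
    }
    return prob.length - 1;      // guard against floating-point round-off
}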

Random number with nonuniform distribution [duplicate]

Possible Duplicate:
Generate random number with non-uniform density
I am trying to identify or create a function (in Java) that gives me a nonuniformly distributed sequence of numbers.
If I have a function f(x), with x > 0, it should give me a random number from 0 to x.
The function must work with any given x; the example below only illustrates what I want.
Say x = 100: then f(x) should return nonuniformly distributed values. For example, I want
0 to 20 to be approximately 20% of all cases,
21 to 50 to be approximately 50% of all cases,
51 to 70 to be approximately 20% of all cases,
71 to 100 to be approximately 10% of all cases.
In short, something that gives me a number roughly like a normal distribution, peaking at 30-40 when x is 100.
http://en.wikipedia.org/wiki/Normal_distribution
(I can use a uniform random generator as a source if needed; I only need a function that transforms the uniform result into a non-uniform one.)
EDIT
My final solution for this problem is:
/**
 * Returns a value from [0, 1] with a mean around 0.3. About 10% of the
 * values are lower than 0.1, 5% are higher than 0.8, and 30% are in the
 * range 0.25 to 0.45.
 *
 * @return
 */
public double nextMyGaussian() {
    double d = -1000;
    while (d < -1.5) {
        // RANDOM is Java's normal Random class.
        // nextGaussian() normally gives a value from about -5 to +5.
        d = RANDOM.nextGaussian() * 1.5;
    }
    if (d > 3.5d) {
        return 1;
    }
    return (d + 1.5) / 5;
}
A simple solution would be to generate a first random number between 0 and 9.
0 means the first 10 percent, 1 the next 10 percent, and so on.
So if you get 0 or 1, you generate a second random number between 0 and 20. If you get 2, 3, 4, 5 or 6, you generate a second random number between 21 and 50, etc.
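As a sketch of that two-stage idea, using the bands from the question (java.util.Random; the method name is illustrative):

import java.util.Random;

// Sketch: first pick a band (20% / 50% / 20% / 10%), then pick uniformly
// inside that band.
static int nonUniform(Random rand) {
    int band = rand.nextInt(10);                   // 0..9, each worth 10%
    if (band < 2) return rand.nextInt(21);         // 0..20   (20% of cases)
    if (band < 7) return 21 + rand.nextInt(30);    // 21..50  (50% of cases)
    if (band < 9) return 51 + rand.nextInt(20);    // 51..70  (20% of cases)
    return 71 + rand.nextInt(30);                  // 71..100 (10% of cases)
}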
Could you just write a function that sums a number of random numbers in the 1-X range and takes the average? This will tend to the normal distribution as n increases.
See:
Generate random numbers following a normal distribution in C/C++
I hacked something like the below:
class CrudeDistribution {

    final int TRIALS = 20;

    public int getAverageFromDistribution(int upperLimit) {
        return getAverageOfRandomTrials(TRIALS, upperLimit);
    }

    private int getAverageOfRandomTrials(int trials, int upperLimit) {
        double d = 0.0;
        for (int i = 0; i < trials; i++) {
            d += getRandom(upperLimit);
        }
        return (int) (d / trials);
    }

    private int getRandom(int upperLimit) {
        return (int) (Math.random() * upperLimit) + 1;
    }
}
There are libraries in Commons Math that can generate distributions based on a mean and a standard deviation (which measures the spread), and the link below describes some algorithms that do this.
Probably a fun hour or so of hunting to find the relevant two-liner:
https://commons.apache.org/math/userguide/distribution.html
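If it helps, with commons-math3 the two-liner is probably along these lines (the mean of 30 and standard deviation of 15 are made-up values chosen to match the question's peak around 30-40; out-of-range values would still need to be clamped or rejected):

import org.apache.commons.math3.distribution.NormalDistribution;

// Sketch: a normal distribution peaking at 30; sample() draws one value.
NormalDistribution dist = new NormalDistribution(30, 15);
double value = dist.sample();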
One solution would be to generate a random number between 1 and 100 and, based on the result, generate another random number in the appropriate range.
1-20 -> 0-20
21-70 -> 21-50
71-90 -> 51-70
91-100 -> 71-100
Hope that makes sense.
You need to create the f(x) first.
Assuming the input values x are uniformly distributed over 0 to 100, your f(x) is
double f(double x) {
    if (x <= 20) {
        return x;                        // 20% of inputs map to 0-20
    } else if (x <= 70) {
        return (x - 20) / 50 * 30 + 20;  // 50% of inputs map to 20-50
    } else if (x <= 90) {
        return (x - 70) / 20 * 20 + 50;  // 20% of inputs map to 50-70
    } else {
        return (x - 90) / 10 * 30 + 70;  // 10% of inputs map to 70-100
    }
}
Just generate a bunch of uniform random numbers, say at least 30, between 0 and x, and take their mean. By the central limit theorem, the mean will be approximately a random number from a normal distribution centered around x/2.
