Hello, I am trying to write a method that generates a random number within a range,
where it can take a bias that makes the number more likely to be higher or lower depending on that bias.
To do this I am currently using:
public int randIntWeightedLow(int max, int min, int rolls) {
    int rValue = Integer.MAX_VALUE; // track the lowest roll seen so far
    for (int i = 0; i < rolls; i++) {
        int rand = randInt(min, max);
        if (rand < rValue) {
            rValue = rand;
        }
    }
    return rValue;
}
This works okay, giving me a number in the range, and the more rolls I add the more likely the number will be low. However, the problem I am running into is that there is a big difference between having 3 rolls and 4 rolls.
I am looking to have something like:
public int randomIntWithBias(int min, int max, float bias){
}
where giving a negative bias would make the number be low more often, and
a positive bias would make the number be higher more often, while still keeping the number within the range of min and max.
Currently, to generate a random number I am using:
public int randInt(final int n1, final int n2) {
    if (n1 == n2) {
        return n1;
    }
    final int min = n1 > n2 ? n2 : n1;
    final int max = n1 > n2 ? n1 : n2;
    return rand.nextInt(max - min + 1) + min;
}
I am new to Java and coding in general, so any help would be greatly appreciated.
OK, here is a quick sketch of how it could be done.
First, I propose using the Apache Commons Math library; it already implements sampling of integers
with different probabilities. We need EnumeratedIntegerDistribution.
Second, two parameters make the distribution linear: p0 and delta.
For the kth value the relative probability is p0 + k*delta. With positive delta
larger numbers are more probable, with negative delta smaller numbers are
more probable, and delta = 0 is equivalent to uniform sampling.
Code (my Java is rusty, please bear with me):
import org.apache.commons.math3.distribution.EnumeratedIntegerDistribution;

public int randomIntWithBias(int min, int max, double p0, double delta) {
    if (p0 < 0.0)
        throw new IllegalArgumentException("Negative initial probability");
    int N = max - min + 1;          // total number of items to sample
    double[] p = new double[N];     // relative probabilities
    int[] items = new int[N];       // items
    double sum = 0.0;               // total of the relative probabilities
    for (int k = 0; k != N; ++k) {  // fill the arrays
        p[k] = p0 + k * delta;
        sum += p[k];
        items[k] = min + k;
    }
    if (delta < 0.0) {              // negative delta could produce negative probabilities
        if (p[N - 1] < 0.0)         // the last probability is the smallest, so check only it
            throw new IllegalArgumentException("Negative probability");
    }
    for (int k = 0; k != N; ++k) {  // normalize probabilities
        p[k] /= sum;
    }
    EnumeratedIntegerDistribution rng = new EnumeratedIntegerDistribution(items, p);
    return rng.sample();
}
That's the gist of the idea; the code could (and should) be optimized and cleaned up.
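For illustration, a hypothetical call biasing a 1-10 roll toward high values might look like:

// relative probability of the kth value (k = 0..9) is 1.0 + 0.5*k,
// so 10 is 5.5 times as likely as 1
int high = randomIntWithBias(1, 10, 1.0, 0.5);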
UPDATE
Of course, instead of a linear bias function you could use, say, a quadratic one.
A general quadratic function has three parameters: pass them in, fill the array of probabilities in a similar way, normalize, and sample. A sketch follows.
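A minimal sketch of that quadratic variant, assuming the relative probability of the kth value is p0 + p1*k + p2*k*k (the parameter names are mine, and the same EnumeratedIntegerDistribution import as above is assumed):

public int randomIntWithQuadraticBias(int min, int max, double p0, double p1, double p2) {
    int N = max - min + 1;
    double[] p = new double[N];
    int[] items = new int[N];
    double sum = 0.0;
    for (int k = 0; k != N; ++k) {
        p[k] = p0 + p1 * k + p2 * k * k; // general quadratic bias
        if (p[k] < 0.0) // a quadratic can dip below zero anywhere, so check every value
            throw new IllegalArgumentException("Negative probability at k=" + k);
        sum += p[k];
        items[k] = min + k;
    }
    for (int k = 0; k != N; ++k) {
        p[k] /= sum; // normalize
    }
    return new EnumeratedIntegerDistribution(items, p).sample();
}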
Related
How do I calculate the sum of all the even numbers up to a certain number entered by the user using Java?
The naive solution would be to start from 0 and keep adding even numbers like this:
public static int square(int x)
{
    int sum = 0;
    for (int i = 0; i <= x; i += 2) sum += i;
    return sum;
}
but you don't have to do this. This is a simple arithmetic sequence, and to calculate the sum you can use the formula sum = n*(a1 + an)/2, where a1 is the first term, an is the last term, and n is the total number of terms in the sequence.
For you, a1 is 2, an is the parameter (rounded down to the closest even number), and n is that even number divided by 2.
This way your function will be:
public static int square(int x)
{
    // you can do error checking if you want; x has to be non-negative
    if ((x % 2) != 0) x--;
    // x is guaranteed to be even at this point, so x/2 is also an int
    int sum = x / 2 * (1 + x / 2);
    return sum;
}
The trick to this question is "even numbers". By using % (the modulus operator) you can find these numbers easily. If you are curious about mod, check this link: https://msdn.microsoft.com/en-us/library/h6zfzfy7(v=vs.90).aspx
Using the square method you currently have and making a few modifications, you can achieve the solution.
static int square(int x)
{
    int result = 0;
    for (int i = 0; i <= x; i++) {
        if (i % 2 == 0) {
            result += i;
        }
    }
    return result;
}
Given an array with x elements, I must find four numbers that, when summed, equal zero. I also need to determine how many such sums exist.
The cubic-time solution involves three nested loops, and we just have to look up the fourth number (with binary search).
Instead, by using the Cartesian product (the same array for X and Y), we can store all pairs and their sums in a secondary list. Then for each sum d we just have to look for -d.
This should look something like the following, for (close to) quadratic time:
public static int quad(Double[] S) {
    ArrayList<Double> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(d + di);
        }
    }
    Collections.sort(pairs);
    for (Double d : pairs) {
        int index = Collections.binarySearch(pairs, -d);
        if (index > 0) count++; // -d was found so increment
    }
    return count;
}
With x being 353 (for our specific array input), the solution should be 528, but instead I only find 257 using this solution. For our cubic time we are able to find all 528 4-sums:
public static int count(Double[] a) {
    Arrays.sort(a);
    int N = a.length;
    int count = 0;
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            for (int k = 0; k < N; k++) {
                int l = Arrays.binarySearch(a, -(a[i] + a[j] + a[k]));
                if (l > 0) count++;
            }
        }
    }
    return count;
}
Is the precision of double lost by any chance?
EDIT: Using BigDecimal instead of double was discussed, but we were afraid it would have an impact on performance. We are only dealing with 353 elements in our array, so would this mean anything to us?
EDITEDIT: I apologize if I am using BigDecimal incorrectly; I have never dealt with the library before. After multiple suggestions I tried using BigDecimal instead:
public static int quad(Double[] S) {
    ArrayList<BigDecimal> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(new BigDecimal(d + di));
        }
    }
    Collections.sort(pairs);
    for (BigDecimal d : pairs) {
        int index = Collections.binarySearch(pairs, d.negate());
        if (index >= 0) count++;
    }
    return count;
}
So instead of 257 it was able to find 261 solutions. This might indicate there is a problem with double and that I am in fact losing precision. However, 261 is still far from 528, and I am unable to locate the cause.
LASTEDIT: I believe this is horrible and ugly code, but it seems to be working nonetheless. We had already experimented with while loops, but with BigDecimal we are now able to get all 528 matches.
I am not sure if it's close enough to quadratic time or not; time will tell.
I present you the monster:
public static int quad(Double[] S) {
    ArrayList<BigDecimal> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(new BigDecimal(d + di));
        }
    }
    Collections.sort(pairs);
    for (BigDecimal d : pairs) {
        BigDecimal negation = d.negate();
        int index = Collections.binarySearch(pairs, negation);
        // walk left to just before the first occurrence of negation
        while (index >= 0 && negation.equals(pairs.get(index))) {
            index--;
        }
        index++;
        // count every occurrence of negation
        while (index >= 0 && index < pairs.size() && negation.equals(pairs.get(index))) {
            count++;
            index++;
        }
    }
    return count;
}
You should use the BigDecimal class instead of double here, since exact precision of the floating point numbers in your array adding up to 0 is a must for your solution. If one of your decimal values was .1, you're in trouble. That binary fraction cannot be precisely represented with a double. Take the following code as an example:
double counter = 0.0;
while (counter != 1.0)
{
    System.out.println("Counter = " + counter);
    counter = counter + 0.1;
}
You would expect this to execute 10 times, but it is an infinite loop since counter will never be precisely 1.0.
Example output:
Counter = 0.0
Counter = 0.1
Counter = 0.2
Counter = 0.30000000000000004
Counter = 0.4
Counter = 0.5
Counter = 0.6
Counter = 0.7
Counter = 0.7999999999999999
Counter = 0.8999999999999999
Counter = 0.9999999999999999
Counter = 1.0999999999999999
Counter = 1.2
Counter = 1.3
Counter = 1.4000000000000001
Counter = 1.5000000000000002
Counter = 1.6000000000000003
When you search for either pairs or an individual element, you need to count with multiplicity. That is, if you find the element -d in your array of either singletons or pairs, you need to increase the count by the number of matches found, not just by 1. This is probably why you're not getting the full number of results when you search over pairs, and it could mean that 528 is not the true full count when you search over singletons. In general, you should not use double-precision arithmetic for exact arithmetic; use an arbitrary-precision rational number package instead.
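As a minimal sketch of counting with multiplicity (my own illustration, not the OP's code), one could store each pair sum's frequency in a map and multiply the frequencies of d and -d:

import java.util.HashMap;
import java.util.Map;

public static long quadWithMultiplicity(Double[] S) {
    Map<Double, Integer> freq = new HashMap<>();
    for (Double d : S) {
        for (Double di : S) {
            double key = d + di;
            if (key == 0.0) key = 0.0; // normalize -0.0 to 0.0 so map lookups match
            freq.merge(key, 1, Integer::sum);
        }
    }
    long count = 0;
    for (Map.Entry<Double, Integer> e : freq.entrySet()) {
        double neg = -e.getKey();
        if (neg == 0.0) neg = 0.0;
        Integer other = freq.get(neg);
        if (other != null) count += (long) e.getValue() * other; // every pair of pairs
    }
    return count;
}

This still suffers from double rounding; the same multiplicity idea applies unchanged with BigDecimal keys.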
I have a range [min-max]; min and max are of type double. I want to divide this interval into n equal intervals (n is an integer). How can I achieve this in Java?
For example:
say I have the range [10-50] and n = 4.
The output should be a list of ranges like [10-20] [20-30] [30-40] [40-50].
So what you need here is a formula for the limits of the smaller ranges. First, let's start off by computing the length of each small range:
// let range be [start, end]
// let the number of smaller ranges be n
double total_length = end - start;
double subrange_length = total_length/n;
After that, run a simple loop over the smaller ranges, moving the left end of the current range by the value computed above on each step:
double current_start = start;
for (int i = 0; i < n; ++i) {
    System.out.println("Smaller range: [" + current_start + ", " + (current_start + subrange_length) + "]");
    current_start += subrange_length;
}
If you have the range given in the form of an array with two elements (min and max):
double[] range = new double[] {min, max};
int n = 4;
you could try it this way. What you get from divideRange is a two-dimensional array with the subranges of the given range, each of them having the wanted length.
public double[][] divideRange(double[] range, int n) {
    double[][] ranges = new double[n][2];
    double length = (range[1] - range[0]) / n;
    ranges[0][0] = range[0];
    ranges[0][1] = range[0] + length;
    for (int i = 1; i < n; i++) {
        ranges[i][0] = ranges[i - 1][1];
        ranges[i][1] = ranges[i - 1][1] + length;
    }
    return ranges;
}
What you can do is use what #Achintya used, double dist = (double)(max - min)/n; then, starting from min, add dist to it and that is the max of your first interval.
So it'd be something like:
[min, min + dist], [min + dist, min + 2*dist], ... until min + n*dist >= max.
int counter = 0;
while (true) {
    CreateInterval(min + counter * dist, min + (counter + 1) * dist);
    if (min + (counter + 1) * dist >= max) {
        // if we have reached the max, we are done
        break;
    }
    counter++; // advance to the next interval
}
I need to find a median value of an array of doubles (in Java) without modifying it (so selection is out) or allocating a lot of new memory. I also don't care to find the exact median; within 10% is fine (so if the "median" splits the sorted array 40%-60%, it's fine).
How can I achieve this efficiently?
Taking into account suggestions from rfreak, ILMTitan and Peter, I wrote this code:
public static double median(double[] array) {
    final int smallArraySize = 5000;
    final int bigArraySize = 100000;
    if (array.length < smallArraySize + 2) { // small size, so can just sort
        double[] arr = array.clone();
        Arrays.sort(arr);
        return arr[arr.length / 2];
    } else if (array.length > bigArraySize) { // large size, don't want to make passes
        double[] arr = new double[smallArraySize + 1];
        int factor = array.length / arr.length;
        for (int i = 0; i < arr.length; i++)
            arr[i] = array[i * factor];
        return median(arr);
    } else { // average size, can sacrifice time for accuracy
        final int buckets = 1000;
        final double desiredPrecision = .005; // in percent
        final int maxNumberOfPasses = 10;
        int[] histogram = new int[buckets + 1];
        int acceptableMin, acceptableMax;
        double min, max, range, scale,
                medianMin = -Double.MAX_VALUE, medianMax = Double.MAX_VALUE;
        int sum, numbers, bin, neighborhood = (int) (array.length * 2 * desiredPrecision);
        for (int r = 0; r < maxNumberOfPasses; r++) { // search for numbers around the median
            max = -Double.MAX_VALUE; min = Double.MAX_VALUE;
            numbers = 0;
            for (int i = 0; i < array.length; i++)
                if (array[i] > medianMin && array[i] < medianMax) {
                    if (array[i] > max) max = array[i];
                    if (array[i] < min) min = array[i];
                    numbers++;
                }
            if (min == max) return min;
            if (numbers <= neighborhood) return (medianMin + medianMax) / 2;
            acceptableMin = (int) (numbers * (50d - desiredPrecision) / 100);
            acceptableMax = (int) (numbers * (50d + desiredPrecision) / 100);
            range = max - min;
            scale = range / buckets;
            for (int i = 0; i < array.length; i++)
                if (array[i] >= min && array[i] <= max) // only bin values inside the current search window
                    histogram[(int) ((array[i] - min) / scale)]++;
            sum = 0;
            for (bin = 0; bin <= buckets; bin++) {
                sum += histogram[bin];
                if (sum > acceptableMin && sum < acceptableMax)
                    return ((.5d + bin) * scale) + min;
                if (sum > acceptableMax) break; // one bin has too many values
            }
            medianMin = ((bin - 1) * scale) + min;
            medianMax = (bin * scale) + min;
            for (int i = 0; i < histogram.length; i++)
                histogram[i] = 0;
        }
        return .5d * medianMin + .5d * medianMax;
    }
}
Here I take into account the size of the array. If it's small, then just sort and get the true median. If it's very large, sample it and get the median of the samples, and otherwise iteratively bin the values and see if the median can be narrowed down to an acceptable range.
I don't have any problems with this code. If someone sees something wrong with it, please let me know.
Thank you.
Assuming you mean median and not average, and assuming you are working with a fairly large double[]; otherwise memory wouldn't be an issue for sorting a copy and computing an exact median. ...
With minimal additional memory overhead you could probably run an O(n) algorithm that would get in the ballpark. I'd try this and see how accurate it is.
Two passes.
On the first pass, find the min and max, and create a set of buckets that represent evenly spaced number ranges between them. On the second pass, count how many numbers fall in each bin. You should then be able to make a reasonable estimate of the median (a sketch follows below). Using 1000 buckets would only cost 4 KB if you use an int[] to store them, and the math should be fast.
The only question is accuracy, and I think you should be able to tune the number of buckets to get within the error range for your data sets.
I'm sure someone with a better math/stats background than I could provide a precise bucket count to get the error range you are looking for.
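A minimal sketch of that two-pass estimate (the method name and bucket handling are my assumptions, not the answerer's code):

static double approxMedian(double[] a, int buckets) {
    // pass 1: find min and max
    double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
    for (double v : a) {
        if (v < min) min = v;
        if (v > max) max = v;
    }
    if (min == max) return min;
    // pass 2: count how many values fall into each evenly spaced bucket
    int[] hist = new int[buckets];
    double scale = (max - min) / buckets;
    for (double v : a) {
        int bin = (int) ((v - min) / scale);
        if (bin == buckets) bin--; // v == max lands in the last bucket
        hist[bin]++;
    }
    // walk the histogram until half the values have been seen
    int half = (a.length + 1) / 2, seen = 0;
    for (int bin = 0; bin < buckets; bin++) {
        seen += hist[bin];
        if (seen >= half) return min + (bin + 0.5) * scale; // bucket midpoint
    }
    return max; // not reached
}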
Pick a small number of array elements at random, and find the median of those.
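For instance, a sketch of that idea (the sample size 101 is an arbitrary odd constant of mine):

static double sampleMedian(double[] a, java.util.Random rnd) {
    double[] s = new double[101]; // small random sample
    for (int i = 0; i < s.length; i++)
        s[i] = a[rnd.nextInt(a.length)];
    java.util.Arrays.sort(s);
    return s[s.length / 2];
}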
Following on from the OP's question about how to extract N values from a much larger array:
The following code shows how long it takes to find the median of a large array, and then how long it takes to find the median of a fixed-size selection of values. The fixed-size selection has a fixed cost, but becomes increasingly inaccurate as the size of the original array grows.
The following prints:
Avg time 17345 us. median=0.5009231700563378
Avg time 24 us. median=0.5146687617507585
The code:
double[] nums = new double[100 * 1000 + 1];
for (int i = 0; i < nums.length; i++) nums[i] = Math.random();
{
    int runs = 200;
    double median = 0;
    long start = System.nanoTime();
    for (int r = 0; r < runs; r++) {
        double[] arr = nums.clone();
        Arrays.sort(arr);
        median = arr[arr.length / 2];
    }
    long time = System.nanoTime() - start;
    System.out.println("Avg time " + time / 1000 / runs + " us. median=" + median);
}
{
    int runs = 20000;
    double median = 0;
    long start = System.nanoTime();
    for (int r = 0; r < runs; r++) {
        double[] arr = new double[301]; // fixed size to sample
        int factor = nums.length / arr.length; // take every nth value
        for (int i = 0; i < arr.length; i++)
            arr[i] = nums[i * factor];
        Arrays.sort(arr);
        median = arr[arr.length / 2];
    }
    long time = System.nanoTime() - start;
    System.out.println("Avg time " + time / 1000 / runs + " us. median=" + median);
}
To meet your requirement of not creating objects, I would put the fixed-size array in a ThreadLocal so there is no ongoing object creation. You can adjust the size of the array to suit how fast you want the function to be.
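A sketch of that ThreadLocal idea (the names are mine; it assumes the every-nth-value sampling from the benchmark above):

private static final ThreadLocal<double[]> SAMPLE =
        ThreadLocal.withInitial(() -> new double[301]); // reused per thread, no ongoing allocation

static double fastMedian(double[] nums) {
    double[] arr = SAMPLE.get();
    int factor = Math.max(1, nums.length / arr.length); // take every nth value
    for (int i = 0; i < arr.length; i++)
        arr[i] = nums[(i * factor) % nums.length];
    Arrays.sort(arr);
    return arr[arr.length / 2];
}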
1) How much is "a lot" of new memory? Does it preclude a sorted copy of the data, or of references to the data?
2) Is your data repetitive (are there few distinct values)? If yes, then your answer to (1) is less likely to cause problems, because you may be able to do something with a lookup map and an array: e.g. a Map, an array of short, and a suitably tweaked comparison object.
3) The typical case for your "close to the mean" approximation is more likely to be O(n log n). Most sort algorithms only degrade to O(n^2) on pathological data. Additionally, the exact median is only going to be (typically) O(n log n), assuming you can afford a sorted copy.
4) Random sampling (à la dan04) is more likely to be accurate than choosing values near the mean, unless your distribution is badly behaved. For example, the Poisson and log-normal distributions both have medians that differ from their means.
Given an array of size n, I want to generate random probabilities for each index such that a[0] + a[1] + ... + a[n-1] = 1.
One possible result might be:
Index:  0     1    2     3     4
Value:  0.15  0.2  0.18  0.22  0.25
Another perfectly legal result can be:
Index:  0     1     2     3     4
Value:  0.01  0.01  0.96  0.01  0.01
How can I generate these easily and quickly? Answers in any language are fine, Java preferred.
Get n random numbers, calculate their sum, and normalize the sum to 1 by dividing each number by the sum.
The task you are trying to accomplish is tantamount to drawing a random point from the N-dimensional unit simplex.
http://en.wikipedia.org/wiki/Simplex#Random_sampling might help you.
A naive solution might go as follows:
public static double[] getArray(int n)
{
    double[] a = new double[n];
    double s = 0.0d;
    Random random = new Random();
    for (int i = 0; i < n; i++)
    {
        a[i] = 1.0d - random.nextDouble(); // uniform in (0, 1], avoids log(0)
        a[i] = -1 * Math.log(a[i]);        // exponentially distributed
        s += a[i];
    }
    for (int i = 0; i < n; i++)
    {
        a[i] /= s; // normalize so the entries sum to 1
    }
    return a;
}
To draw a point uniformly from the N-dimensional unit simplex, we must take a vector of exponentially distributed random variables, then normalize it by the sum of those variables. To get an exponentially distributed value, we take a negative log of uniformly distributed value.
This is relatively late, but I want to show the amendment to #Kobi's simple and straightforward answer, given in the paper pointed to by #dreeves, which makes the sampling uniform. The method (if I understand it correctly) is to:
1. Generate n-1 distinct values from the range [1, 2, ..., M-1].
2. Sort the resulting vector.
3. Add 0 and M as the first and last elements of the resulting vector.
4. Generate a new vector by computing x[i] - x[i-1] for i = 1, 2, ..., n. That is, the new vector is made up of the differences between consecutive elements of the old vector.
5. Divide each element of the new vector by M. You have your uniform distribution!
A sketch of these steps follows.
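A minimal sketch of the steps above (M, the method name, and the use of TreeSet are my assumptions; it assumes M >= n so that n-1 distinct cut points exist):

import java.util.Random;
import java.util.TreeSet;

static double[] uniformSimplex(int n, int M, Random rnd) {
    // 1. generate n-1 distinct values from [1, M-1]
    TreeSet<Integer> cuts = new TreeSet<>();
    while (cuts.size() < n - 1)
        cuts.add(1 + rnd.nextInt(M - 1));
    // 2-3. the TreeSet is already sorted; add 0 and M at the ends
    int[] x = new int[n + 1];
    x[0] = 0;
    int j = 1;
    for (int c : cuts) x[j++] = c;
    x[n] = M;
    // 4-5. consecutive differences, divided by M, sum to exactly 1
    double[] p = new double[n];
    for (int i = 1; i <= n; i++)
        p[i - 1] = (x[i] - x[i - 1]) / (double) M;
    return p;
}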
I am curious to know whether generating distinct random values and normalizing them to 1 by dividing by their sum would also produce a uniform distribution.
Get n random numbers, calculate their sum, and normalize the sum to 1
by dividing each number by the sum.
Expanding on Kobi's answer, here's a Java function that does exactly that.
public static double[] getRandDistArray(int n) {
    double[] randArray = new double[n];
    double sum = 0;
    // Generate n random numbers
    for (int i = 0; i < randArray.length; i++) {
        randArray[i] = Math.random();
        sum += randArray[i];
    }
    // Normalize sum to 1
    for (int i = 0; i < randArray.length; i++) {
        randArray[i] /= sum;
    }
    return randArray;
}
In a test run, getRandDistArray(5) returned the following
[0.1796505603694718, 0.31518724882558813, 0.15226147256596428, 0.30954417535503603, 0.043356542883939767]
If you want to generate values from a normal distribution efficiently, try the Box–Muller transformation.
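A short sketch of Box–Muller (my own illustration: two independent uniform values in (0, 1] yield one standard-normal value):

static double nextGaussianBoxMuller(java.util.Random rnd) {
    double u1 = 1.0 - rnd.nextDouble(); // in (0, 1], avoids log(0)
    double u2 = rnd.nextDouble();
    return Math.sqrt(-2.0 * Math.log(u1)) * Math.cos(2.0 * Math.PI * u2);
}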
public static double[] array(int n) {
    double[] a = new double[n];
    double flag = 0;
    for (int i = 0; i < n; i++) {
        a[i] = Math.random();
        flag += a[i];
    }
    for (int i = 0; i < n; i++) a[i] /= flag;
    return a;
}
Here, a first stores the random numbers, and flag keeps the sum of all the numbers generated, so that in the second loop each generated number is divided by flag; at the end, the array holds random numbers that form a probability distribution.