For testing a specific math function I implemented, I need to generate a lot of doubles across the full range of doubles > 0. So the randomly generated value should be between 2^−52 × 2^−1022 and (2 − 2^−52) × 2^1023. I tried using
ThreadLocalRandom.current().nextDouble(origin, bound)
but it only gives values close to 1e300.
I tested it with
public void testRandomDouble() {
    double current;
    for (int j = 0; j < 10; j++) {
        double min = Double.POSITIVE_INFINITY;
        for (int i = 0; i < 100_000_000; i++) {
            current = ThreadLocalRandom.current().nextDouble(Double.MIN_VALUE, Double.MAX_VALUE);
            if (current < min) {
                min = current;
            }
        }
        System.out.println(min);
    }
}
generating the output
1.2100736287390257E300
1.2292284466505449E300
1.4318629398915128E299
6.922983256724938E299
1.3927453080775622E300
4.8454315085367987E300
1.4899199386555683E299
3.7592835763389994E299
2.0561053862668256E300
1.6268118313101214E299
Even recompiling and rerunning the test (so the thread-local generator has a different seed) produces approximately the same output. I didn't find anything about this behavior online. What am I missing?
On the number line there are ten times as many values in the 1e300 range as in the 1e299 range, and ten times as many in the 1e299 range as in the 1e298 range. This shouldn't necessarily be surprising! You should expect 90% of your values to be in the e300 range, 99% to be e299 or e300, and so on. If you want a uniform distribution over the values that can possibly be held in a double, rather than a uniform distribution over the number line, you will need a very different algorithm. That would probably look something like:
double d;
do {
    d = Double.longBitsToDouble(random.nextLong());
} while (Double.isNaN(d) || Double.isInfinite(d) || d <= 0);
return d;
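For instance, a self-contained version of that rejection loop (the method name here is only illustrative, using ThreadLocalRandom as in the question) might look like:

import java.util.concurrent.ThreadLocalRandom;

// Returns a double drawn uniformly over the bit patterns of finite, strictly
// positive doubles, which is roughly log-uniform over the number line rather
// than uniform over it.
public static double randomPositiveFiniteDouble() {
    ThreadLocalRandom rng = ThreadLocalRandom.current();
    double d;
    do {
        d = Double.longBitsToDouble(rng.nextLong());
    } while (Double.isNaN(d) || Double.isInfinite(d) || d <= 0.0);
    return d;
}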
Related
Hello, I am trying to make a method to generate a random number within a range,
where it can take a bias that will make the number more likely to be higher or lower depending on the bias.
To do this I am currently using:
public int randIntWeightedLow(int max, int min, int rolls) {
    int rValue = 100;
    for (int i = 0; i < rolls; i++) {
        int rand = randInt(min, max);
        if (rand < rValue) {
            rValue = rand;
        }
    }
    return rValue;
}
This works okay by giving me a number in the range, and the more rolls I add, the more likely the number will be low. However, the problem I am running into is that there is a big difference between having 3 rolls and 4 rolls.
I am looking to have something like
public void randomIntWithBias(int min, int max, float bias){
}
where giving a negative bias would make the number be low more often, and
a positive bias would make the number be higher more often, while still keeping the number within the range of min and max.
Currently, to generate a random number I am using
public int randInt(final int n1, final int n2) {
    if (n1 == n2) {
        return n1;
    }
    final int min = n1 > n2 ? n2 : n1;
    final int max = n1 > n2 ? n1 : n2;
    return rand.nextInt(max - min + 1) + min;
}
I am new to Java and coding in general, so any help would be greatly appreciated.
OK, here is a quick sketch of how it could be done.
First, I propose using the Apache Commons Math Java library; it has sampling for integers
with different probabilities already implemented. We need the EnumeratedIntegerDistribution class.
Second, there are two parameters to make the distribution look linear: p0 and delta.
For the kth value the relative probability would be p0 + k*delta. For positive delta,
larger numbers will be more probable; for negative delta, smaller numbers will be
more probable; delta = 0 is equivalent to uniform sampling.
Code (my Java is rusty, please bear with me)
import org.apache.commons.math3.distribution.EnumeratedIntegerDistribution;

public int randomIntWithBias(int min, int max, double p0, double delta) {
    if (p0 < 0.0)
        throw new IllegalArgumentException("Negative initial probability");

    int N = max - min + 1;          // total number of items to sample
    double[] p = new double[N];     // probabilities
    int[] items = new int[N];       // items
    double sum = 0.0;               // total probabilities summed

    for (int k = 0; k != N; ++k) {  // fill arrays
        p[k] = p0 + k * delta;
        sum += p[k];
        items[k] = min + k;
    }

    if (delta < 0.0) {              // when delta is negative we could get negative probabilities
        if (p[N - 1] < 0.0)         // check only the last probability
            throw new IllegalArgumentException("Negative probability");
    }

    for (int k = 0; k != N; ++k) {  // normalize probabilities
        p[k] /= sum;
    }

    EnumeratedIntegerDistribution rng = new EnumeratedIntegerDistribution(items, p);
    return rng.sample();
}
That's the gist of the idea; the code could be (and should be) optimized and cleaned.
UPDATE
Of course, instead of a linear bias function you could put in, say, a quadratic one.
A general quadratic function has three parameters: pass them in, fill the array of probabilities in a similar way, normalize, and sample.
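As a rough illustration (not part of the original answer), the quadratic variant could look like this, where the relative weight of the kth value is a*k*k + b*k + c and every weight must stay non-negative:

import org.apache.commons.math3.distribution.EnumeratedIntegerDistribution;

public int randomIntWithQuadraticBias(int min, int max, double a, double b, double c) {
    int N = max - min + 1;
    int[] items = new int[N];
    double[] p = new double[N];
    double sum = 0.0;
    for (int k = 0; k != N; ++k) {
        items[k] = min + k;
        p[k] = a * k * k + b * k + c;       // quadratic relative probability
        if (p[k] < 0.0)
            throw new IllegalArgumentException("Negative probability at k=" + k);
        sum += p[k];
    }
    for (int k = 0; k != N; ++k) {          // normalize, as in the linear version
        p[k] /= sum;
    }
    return new EnumeratedIntegerDistribution(items, p).sample();
}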
The code below was my first attempt at an LCM (lowest common multiple) calculator with a user interface (UI code not shown), written months ago. I know there are simpler ways to write this, but I'd like help understanding why THIS specific code sometimes fails to find a common multiple (with most number sets it works fine).
When a user inputs almost any number set, the app spits out the correct LCM. But when the number set 1,234 / 2,345 / 5,432 / 4,321 is used, the app initially stopped when x hit 536,870,912. This was because the result of x * mult was a number that couldn't be held by the int primitive. After changing x to a double and casting result = (int) (mult * x), the code still compiles and runs, but it now seems to increment x indefinitely.
public static void compare() {
    result = 0;
    int mult = 0;
    double x = 1;
    int[] nums = UserInterface.getNums();

    // finds highest number in user-input set
    for (int i = 0; i < nums.length; i++) {
        if (nums[i] > mult) mult = nums[i];
    }

    // finds lowest common multiple
    for (int i = 0; i < nums.length;) {
        if ((mult * x) % nums[i] == 0) {
            result = (int) (mult * x);
            i++;
        } else {
            result = 0;
            x++;
            i = 0;
        }
    }
}
We know the LCM of your test set must be less than or equal to 67,920,681,416,560 (the product of the four numbers).
In Java the int datatype has a maximum value of 2^31 - 1 = 2,147,483,647, so you are obviously going to get an overflow. You can change your code to use long throughout; long has a maximum value of 2^63 - 1 = 9,223,372,036,854,775,807, so it should be sufficient for your calculation. If you need bigger values, look at the BigInteger class.
In JavaScript things are more complicated. All numbers are floating point, so you lose accuracy. This probably means the condition
if ((mult * x) % nums[i] == 0)
is never satisfied, so your loop never quits.
Your algorithm is very basic; there are much better algorithms out there. elclanrs has one above, and see https://en.wikipedia.org/wiki/Least_common_multiple for some hints.
Also, you should change the title of the question. As it stands it makes no sense, since any set of numbers must have an LCM.
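For instance, a minimal sketch (not from the original answer) that uses long throughout and the identity lcm(a, b) = a / gcd(a, b) * b could look like this; the method names are just illustrative:

public static long gcd(long a, long b) {
    // Euclidean algorithm
    while (b != 0) {
        long t = a % b;
        a = b;
        b = t;
    }
    return a;
}

public static long lcm(int[] nums) {
    long result = 1;
    for (long n : nums) {
        result = result / gcd(result, n) * n;   // divide before multiplying to limit overflow
    }
    return result;
}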
Given an array with x elements, I must find four numbers that, when summed, equal zero. I also need to determine how many such sums exist.
The cubic-time solution involves three nested loops, and we just have to look up the last number (with binary search).
Instead, by using the Cartesian product (the same array for X and Y), we can store all pairs and their sums in a secondary array. Then for each sum d we just have to look for -d.
For (close to) quadratic time, this should look something like:
public static int quad(Double[] S) {
    ArrayList<Double> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(d + di);
        }
    }
    Collections.sort(pairs);
    for (Double d : pairs) {
        int index = Collections.binarySearch(pairs, -d);
        if (index > 0) count++; // -d was found so increment
    }
    return count;
}
With x being 353 (for our specific array input), the solution should be 528, but instead I only find 257 using this solution. With our cubic-time version we are able to find all 528 4-sums:
public static int count(Double[] a) {
    Arrays.sort(a);
    int N = a.length;
    int count = 0;
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            for (int k = 0; k < N; k++) {
                int l = Arrays.binarySearch(a, -(a[i] + a[j] + a[k]));
                if (l > 0) count++;
            }
        }
    }
    return count;
}
Is the precision of double lost by any chance?
EDIT: Using BigDecimal instead of double was discussed, but we were afraid it would have an impact on performance. We are only dealing with 353 elements in our array, so would this mean anything to us?
EDIT 2: I apologize if I am using BigDecimal incorrectly; I have never dealt with the class before. After multiple suggestions I tried using BigDecimal instead:
public static int quad(Double[] S) {
    ArrayList<BigDecimal> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(new BigDecimal(d + di));
        }
    }
    Collections.sort(pairs);
    for (BigDecimal d : pairs) {
        int index = Collections.binarySearch(pairs, d.negate());
        if (index >= 0) count++;
    }
    return count;
}
So instead of 257 it was able to find 261 solutions. This might indicate there is a problem with double and that I am in fact losing precision. However, 261 is still far away from 528, and I am unable to locate the cause.
LAST EDIT: I believe this is horrible and ugly code, but it seems to be working nonetheless. We had already experimented with a while loop, but with BigDecimal we are now able to get all 528 matches.
I am not sure if it's close enough to quadratic time or not; time will tell.
I present to you the monster:
public static int quad(Double[] S) {
    ArrayList<BigDecimal> pairs = new ArrayList<>(S.length * S.length);
    int count = 0;
    for (Double d : S) {
        for (Double di : S) {
            pairs.add(new BigDecimal(d + di));
        }
    }
    Collections.sort(pairs);
    for (BigDecimal d : pairs) {
        BigDecimal negation = d.negate();
        int index = Collections.binarySearch(pairs, negation);
        while (index >= 0 && negation.equals(pairs.get(index))) {
            index--;
        }
        index++;
        while (index >= 0 && negation.equals(pairs.get(index))) {
            count++;
            index++;
        }
    }
    return count;
}
You should use the BigDecimal class instead of double here, since exact precision of the floating point numbers in your array adding up to 0 is a must for your solution. If one of your decimal values was .1, you're in trouble. That binary fraction cannot be precisely represented with a double. Take the following code as an example:
double counter = 0.0;
while (counter != 1.0)
{
    System.out.println("Counter = " + counter);
    counter = counter + 0.1;
}
You would expect this to execute 10 times, but it is an infinite loop since counter will never be precisely 1.0.
Example output:
Counter = 0.0
Counter = 0.1
Counter = 0.2
Counter = 0.30000000000000004
Counter = 0.4
Counter = 0.5
Counter = 0.6
Counter = 0.7
Counter = 0.7999999999999999
Counter = 0.8999999999999999
Counter = 0.9999999999999999
Counter = 1.0999999999999999
Counter = 1.2
Counter = 1.3
Counter = 1.4000000000000001
Counter = 1.5000000000000002
Counter = 1.6000000000000003
When you search for either pairs or an individual element, you need to count with multiplicity. I.e., if you find element -d in your array of either singletons or pairs, then you need to increase the count by the number of matches that are found, not just increase it by 1. This is probably why you're not getting the full number of results when you search over pairs, and it could also mean that 528 is not the true total when you search over singletons. In general, you should not use double-precision arithmetic for exact arithmetic; use an arbitrary-precision rational number package instead.
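As a rough sketch of the multiplicity idea (the method name is illustrative, and exact decimal arithmetic is assumed via BigDecimal.valueOf), one could count how many pairs produce each sum and multiply the counts:

import java.math.BigDecimal;
import java.util.Map;
import java.util.TreeMap;

public static int quadWithMultiplicity(Double[] s) {
    // Map each pair sum to the number of (ordered) pairs that produce it.
    // A TreeMap compares keys with compareTo, so BigDecimal scale differences don't matter.
    Map<BigDecimal, Integer> pairCounts = new TreeMap<>();
    for (Double a : s) {
        for (Double b : s) {
            BigDecimal sum = BigDecimal.valueOf(a).add(BigDecimal.valueOf(b));
            pairCounts.merge(sum, 1, Integer::sum);
        }
    }
    // Every pair summing to d combines with every pair summing to -d.
    int count = 0;
    for (Map.Entry<BigDecimal, Integer> e : pairCounts.entrySet()) {
        Integer matches = pairCounts.get(e.getKey().negate());
        if (matches != null) {
            count += e.getValue() * matches;
        }
    }
    return count;
}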
I have run into a weird issue for problem 3 of Project Euler. The program works for other numbers that are small, like 13195, but it throws this error when I try to crunch a big number like 600851475143:
Exception in thread "main" java.lang.ArithmeticException: / by zero
at euler3.Euler3.main(Euler3.java:16)
Here's my code:
//Number whose prime factors will be determined
long num = 600851475143L;

//Declaration of variables
ArrayList factorsList = new ArrayList();
ArrayList primeFactorsList = new ArrayList();

//Generates a list of factors
for (int i = 2; i < num; i++)
{
    if (num % i == 0)
    {
        factorsList.add(i);
    }
}

//If the integer(s) in the factorsList are divisible by any number between 1
//and the integer itself (non-inclusive), it gets replaced by a zero
for (int i = 0; i < factorsList.size(); i++)
{
    for (int j = 2; j < (Integer) factorsList.get(i); j++)
    {
        if ((Integer) factorsList.get(i) % j == 0)
        {
            factorsList.set(i, 0);
        }
    }
}

//Transfers all non-zero numbers into a new list called primeFactorsList
for (int i = 0; i < factorsList.size(); i++)
{
    if ((Integer) factorsList.get(i) != 0)
    {
        primeFactorsList.add(factorsList.get(i));
    }
}
Why is it only big numbers that cause this error?
Your code is just using Integer, which is a 32-bit type with a maximum value of 2147483647. It's unsurprising that it's failing when used for numbers much bigger than that. Note that your initial loop uses int as the loop variable, so it would actually loop forever if it didn't throw an exception: the value of i will wrap from 2147483647 to -2147483648 and continue.
Use BigInteger to handle arbitrarily large values, or Long if you're happy with a limited range but a larger one. (The maximum value of long / Long is 9223372036854775807L.)
However, I doubt that this is really the approach that's expected... it's going to take a long time for big numbers like that.
Not sure if it's the case as I don't know which line is which - but I notice your first loop uses an int.
//Generates a list of factors
for (int i = 2; i < num; i++)
{
    if (num % i == 0)
    {
        factorsList.add(i);
    }
}
As num is a long, it's possible that num > Integer.MAX_VALUE, so your loop wraps around to negative at Integer.MAX_VALUE and then loops up towards 0, giving you a num % 0 operation.
Why does your solution not work?
Well, numbers are discrete in hardware. Discrete means they have minimum and maximum values. Java uses two's complement to store negative values, so 2147483647 + 1 == -2147483648. This is because for type int the maximum value is 2147483647, and exceeding it is called overflow.
It seems as if you have an overflow bug: the loop variable i first becomes negative and eventually 0, and thus you get java.lang.ArithmeticException: / by zero. If your computer can loop through 10 million statements a second, this would take about 1 h 10 min to reproduce, so I leave it as an assumption and not a proof.
This is also the reason trivially simple statements like a + b can produce bugs.
How to fix it?
package margusmartseppcode.From_1_to_9;

public class Problem_3 {

    static long lpf(long nr) {
        long max = 0;
        for (long i = 2; i <= nr / i; i++)
            while (nr % i == 0) {
                max = i;
                nr = nr / i;
            }
        return nr > 1 ? nr : max;
    }

    public static void main(String[] args) {
        System.out.println(lpf(600851475143L));
    }
}
You might think: "So how does this work?"
Well, my thought process went like this:
(Dynamic programming approach) If I had a list of primes {2, 3, 5, 7, 11, 13, 17, ...} up to some value x_i > nr / 2, then finding the largest prime factor would be trivial:
I start from the largest prime and test whether the division remainder with my number is zero; if it is, that is the answer.
If, after looping over all the elements, I did not find my answer, my number must be a prime itself.
(Brute force, with filters) I assumed that:
my number's largest prime factor is small (under 10 million);
if my number is a multiple of some number, then I can reduce the loop size by that multiple.
I used the second approach here.
Note, however, that if my number were just a little off and were one of {600851475013, 600851475053, 600851475067, 600851475149, 600851475151}, then my assumptions would fail and the program would take a ridiculously long time to run. If the computer could execute 10M statements per second, it would take 6.954 days to find the right answer.
In your brute-force approach, just generating the list of factors would take even longer, assuming you did not run out of memory before that.
Is there a better way?
Sure, in Mathematica you could write it as:
P3[x_] := FactorInteger[x][[-1, 1]]
P3[600851475143]
or just FactorInteger[600851475143], and look up the largest value.
This works because in Mathematica you have arbitrary-size integers. Java also has an arbitrary-size integer class called BigInteger.
Apart from the BigInteger problem mentioned by Jon Skeet, note the following:
you only need to test factors up to sqrt(num)
each time you find a factor, divide num by that factor, and then test that factor again
there's really no need to use a collection to store the primes in advance
My solution (which was originally written in Perl) would look something like this in Java:
long n = 600851475143L;          // the original input
long s = (long) Math.sqrt(n);    // no need to test numbers larger than this
long f = 2;                      // the smallest factor to test

do {
    if (n % f == 0) {            // check we have a factor
        n /= f;                  // this is our new number to test
        s = (long) Math.sqrt(n); // and our range is smaller again
    } else {                     // find next possible divisor
        f = (f == 2) ? 3 : f + 2;
    }
} while (f < s);                 // required result is in "n"
Given an array of size n I want to generate random probabilities for each index such that Sigma(a[0]..a[n-1])=1
One possible result might be:
index: 0     1     2     3     4
value: 0.15  0.2   0.18  0.22  0.25
Another perfectly legal result can be:
index: 0     1     2     3     4
value: 0.01  0.01  0.96  0.01  0.01
How can I generate these easily and quickly? Answers in any language are fine, Java preferred.
Get n random numbers, calculate their sum, and normalize the sum to 1 by dividing each number by the sum.
The task you are trying to accomplish is tantamount to drawing a random point from the N-dimensional unit simplex.
http://en.wikipedia.org/wiki/Simplex#Random_sampling might help you.
A naive solution might go as follows:
public static double[] getArray(int n)
{
    double a[] = new double[n];
    double s = 0.0d;
    Random random = new Random();
    for (int i = 0; i < n; i++)
    {
        a[i] = 1.0d - random.nextDouble();
        a[i] = -1 * Math.log(a[i]);
        s += a[i];
    }
    for (int i = 0; i < n; i++)
    {
        a[i] /= s;
    }
    return a;
}
To draw a point uniformly from the N-dimensional unit simplex, we take a vector of exponentially distributed random variables and then normalize it by the sum of those variables. To get an exponentially distributed value, we take the negative log of a uniformly distributed value.
This is relatively late, but I want to show the amendment to @Kobi's simple and straightforward answer, given in the paper pointed to by @dreeves, which makes the sampling uniform. The method (if I understand it correctly) is to:
Generate n-1 distinct values from the range [1, 2, ..., M-1].
Sort the resulting vector.
Add 0 and M as the first and last elements of the resulting vector.
Generate a new vector by computing x_i - x_(i-1) for i = 1, 2, ..., n. That is, the new vector is made up of the differences between consecutive elements of the old vector.
Divide each element of the new vector by M. You have your uniform distribution!
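A rough Java sketch of those steps (the method name and the resolution parameter M are illustrative; M is assumed to be at least n):

import java.util.Random;
import java.util.TreeSet;

public static double[] uniformSimplexSample(int n, int M, Random rng) {
    // Step 1: draw n-1 distinct cut points from [1, M-1].
    TreeSet<Integer> cuts = new TreeSet<>();
    while (cuts.size() < n - 1) {
        cuts.add(1 + rng.nextInt(M - 1));
    }
    // Steps 2-3: the TreeSet is already sorted; add the endpoints 0 and M.
    int[] x = new int[n + 1];
    int idx = 1;
    for (int c : cuts) {
        x[idx++] = c;
    }
    x[0] = 0;
    x[n] = M;
    // Steps 4-5: consecutive differences divided by M sum to exactly 1.
    double[] p = new double[n];
    for (int i = 1; i <= n; i++) {
        p[i - 1] = (x[i] - x[i - 1]) / (double) M;
    }
    return p;
}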
I am curious to know if generating distinct random values and normalizing them to 1 by dividing by their sum will also produce a uniform distribution.
Get n random numbers, calculate their sum, and normalize the sum to 1
by dividing each number by the sum.
Expanding on Kobi's answer, here's a Java function that does exactly that.
public static double[] getRandDistArray(int n) {
    double[] randArray = new double[n];
    double sum = 0;

    // Generate n random numbers
    for (int i = 0; i < randArray.length; i++) {
        randArray[i] = Math.random();
        sum += randArray[i];
    }

    // Normalize sum to 1
    for (int i = 0; i < randArray.length; i++) {
        randArray[i] /= sum;
    }
    return randArray;
}
In a test run, getRandDistArray(5) returned the following
[0.1796505603694718, 0.31518724882558813, 0.15226147256596428, 0.30954417535503603, 0.043356542883939767]
If you want to generate values from a normal distribution efficiently, try the Box-Muller transform.
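For reference, here is a minimal sketch of the basic Box-Muller transform (not part of the original answer); it turns two independent uniform samples into two independent standard normal samples:

import java.util.Random;

public static double[] boxMuller(Random rng) {
    double u1 = 1.0 - rng.nextDouble();   // in (0, 1], avoids log(0)
    double u2 = rng.nextDouble();
    double r = Math.sqrt(-2.0 * Math.log(u1));
    return new double[] {
        r * Math.cos(2.0 * Math.PI * u2),  // first standard normal sample
        r * Math.sin(2.0 * Math.PI * u2)   // second, independent sample
    };
}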
public static double[] array(int n) {
    double[] a = new double[n];
    double flag = 0;
    for (int i = 0; i < n; i++) {
        a[i] = Math.random();
        flag += a[i];
    }
    for (int i = 0; i < n; i++) a[i] /= flag;
    return a;
}
Here, a first stores the random numbers, and flag keeps the sum of all the numbers generated, so that in the next for loop each number is divided by flag. At the end, the array holds random numbers that form a probability distribution.