How can I divide a range into n equal bins? - java

I have a range [min-max], where min and max are of type double. I want to divide this interval into n equal intervals (n is an integer). How can I achieve this in Java?
For example:
say I have a range [10-50] and n = 4.
The output should be a list of ranges like [10-20] [20-30] [30-40] [40-50].

So what you need here is a formula for the limits of the smaller ranges. First, let's start by computing the length of each small range:
// let the range be [start, end]
// let the number of smaller ranges be n
double totalLength = end - start;
double subrangeLength = totalLength / n;
After that, run a simple loop over the smaller ranges, moving the left end of the current range by the value computed above on each step:
double currentStart = start;
for (int i = 0; i < n; ++i) {
    System.out.println("Smaller range: [" + currentStart + ", " + (currentStart + subrangeLength) + "]");
    currentStart += subrangeLength;
}

If you have the range given in the form of an array with two elements (min and max)
double[] range = new double[] {min, max};
int n = 4;
you could try it this way. What you get from divideRange is a two-dimensional array with the subranges of the given range, each of them having the wanted length.
public double[][] divideRange(double[] range, int n) {
    double[][] ranges = new double[n][2];
    double length = (range[1] - range[0]) / n;
    ranges[0][0] = range[0];
    ranges[0][1] = range[0] + length;
    for (int i = 1; i < n; i++) {
        ranges[i][0] = ranges[i - 1][1];
        ranges[i][1] = ranges[i - 1][1] + length;
    }
    return ranges;
}
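For example, a quick check with the numbers from the question (assuming divideRange is in scope):
double[][] parts = divideRange(new double[] {10, 50}, 4);
for (double[] r : parts)
    System.out.println("[" + r[0] + "-" + r[1] + "]"); // prints [10.0-20.0] ... [40.0-50.0]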

What you can do is use what @Achintya used, double dist = (double)(max - min) / n; Then, starting from min, add dist to it and that is the max of your first interval.
So it'd be something like:
[min, min + dist], [min + dist, min + 2*dist]... until min + n*dist >= max.
int counter = 0;
while (true) {
    CreateInterval(min + counter * dist, min + (counter + 1) * dist);
    if (min + (counter + 1) * dist >= max) {
        // if we have reached the max, we are done
        break;
    }
    counter++; // advance to the next interval, otherwise this loops forever
}

Related

Finding the minimum sum and maximum sum of a list of integers in an array

I am currently working on a HackerRank practice question and I only pass 5 test cases, and I have no idea why. I've thought of all the edge cases I can think of, but I still fail most test cases.
Problem:
Given five positive integers, find the minimum and maximum values that can be calculated by summing exactly four of the five integers. Then print the respective minimum and maximum values as a single line of two space-separated long integers.
Example -
Given the input arr = [1, 3, 5, 7, 9], the minimum sum is 1 + 3 + 5 + 7 = 16 and the maximum sum is 3 + 5 + 7 + 9 = 24. The function prints
16 24
This is my solution so far:
public static void miniMaxSum(List<Integer> arr) {
    // Write your code here
    Collections.sort(arr);
    int max = 0;
    int min = 0;
    int sum = 0;
    int smallest = arr.get(0);
    int largest = arr.get(4);
    for (int i = 0; i < arr.size(); i++) {
        sum += arr.get(i);
    }
    min = sum - largest;
    max = sum - smallest;
    System.out.print(min + " " + max);
}
I have no idea which test cases I'm failing, since it doesn't tell me. I've tried arrays with duplicates, massive numbers, and unsorted input, and they all give me the expected answer. Please help!
Use the long datatype, because there is a possibility of integer overflow: the values can be large (up to 10^9 under the HackerRank constraints), so a sum of four of them can far exceed Integer.MAX_VALUE (about 2.1 × 10^9).
public static void miniMaxSum(List<Integer> arr) {
    // Write your code here
    Collections.sort(arr);
    long max = 0;
    long min = 0;
    long sum = 0;
    long smallest = arr.get(0);
    long largest = arr.get(4);
    for (int i = 0; i < arr.size(); i++) {
        sum += arr.get(i);
    }
    min = sum - largest;
    max = sum - smallest;
    System.out.print(min + " " + max);
}
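A quick check with the sample input from the question (assuming the usual java.util imports; Arrays.asList returns a fixed-size but sortable list):
miniMaxSum(Arrays.asList(1, 3, 5, 7, 9)); // prints: 16 24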

Get a random number within a range with a bias

Hello, I am trying to make a method to generate a random number within a range, where it can take a bias that will make the number more likely to be higher/lower depending on the bias.
To do this, currently I am using this:
public int randIntWeightedLow(int max, int min, int rolls) {
    int rValue = max; // start at the top of the range so any roll can lower it
    for (int i = 0; i < rolls; i++) {
        int rand = randInt(min, max);
        if (rand < rValue) {
            rValue = rand;
        }
    }
    return rValue;
}
This works okay by giving me a number in the range, and the more rolls I add, the more likely the number will be low. However, the problem I am running into is that there is a big difference between having 3 rolls and 4 rolls.
I am looking to have something like
public void randomIntWithBias(int min, int max, float bias){
}
Where giving a negative bias would make the number be low more often, and a positive bias would make the number be higher more often, while still keeping the number within the range of min and max.
Currently, to generate a random number I am using:
public int randInt(final int n1, final int n2) {
    if (n1 == n2) {
        return n1;
    }
    final int min = n1 > n2 ? n2 : n1;
    final int max = n1 > n2 ? n1 : n2;
    return rand.nextInt(max - min + 1) + min;
}
I am new to Java and coding in general, so any help would be greatly appreciated.
Ok, here is a quick sketch of how it could be done.
First, I propose to use the Apache Commons Math library; it has sampling of integers with different probabilities already implemented. We need EnumeratedIntegerDistribution.
Second, two parameters make the distribution linear: p0 and delta. For the kth value, the relative probability would be p0 + k*delta. For positive delta, larger numbers will be more probable; for negative delta, smaller numbers will be more probable; delta = 0 equals uniform sampling.
Code (my Java is rusty, please bear with me):
import org.apache.commons.math3.distribution.EnumeratedIntegerDistribution;

public int randomIntWithBias(int min, int max, double p0, double delta) {
    if (p0 < 0.0)
        throw new IllegalArgumentException("Negative initial probability");
    int N = max - min + 1;        // total number of items to sample
    double[] p = new double[N];   // probabilities
    int[] items = new int[N];     // items
    double sum = 0.0;             // total probabilities summed
    for (int k = 0; k != N; ++k) { // fill arrays
        p[k] = p0 + k * delta;
        sum += p[k];
        items[k] = min + k;
    }
    if (delta < 0.0) {            // when delta is negative we could get negative probabilities
        if (p[N - 1] < 0.0)       // check only the last probability
            throw new IllegalArgumentException("Negative probability");
    }
    for (int k = 0; k != N; ++k) { // normalize probabilities
        p[k] /= sum;
    }
    EnumeratedIntegerDistribution rng = new EnumeratedIntegerDistribution(items, p);
    return rng.sample();
}
That's the gist of the idea; the code could be (and should be) optimized and cleaned.
UPDATE
Of course, instead of a linear bias function you could put in, say, a quadratic one.
A general quadratic function has three parameters: pass them in, fill the array of probabilities in a similar way, normalize, and sample.
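A minimal sketch of that variation, reusing the setup from randomIntWithBias above; the coefficients a, b, and c are assumptions, to be chosen so that every probability comes out non-negative:
// relative probability of the kth value: a + b*k + c*k*k
for (int k = 0; k != N; ++k) {
    p[k] = a + b * k + c * k * k;
    if (p[k] < 0.0)
        throw new IllegalArgumentException("Negative probability at k=" + k);
    sum += p[k];
    items[k] = min + k;
}
// then normalize p and sample via EnumeratedIntegerDistribution exactly as above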

returning an arbitrary number given a value in java

So I'm trying to make a function that returns an arbitrary integer which is greater than X, not greater than 1,000,000,000, and that ends with 0. You can assume that X is between 1 and 999,999,999. For example, given X = 33, your function may return 770, and for X = 22, your function may return 920.
Here is what I got so far; I'm not sure if I'm even doing it right...
import java.util.*;
import java.io.*;

public class exerciseA {
    public static void main(String[] args) throws Exception {
        int max = 1000000000;
        int min = 0;
        int diff = max - min;
        Random arbitrary = new Random();
        int i = arbitrary.nextInt(diff + 1);
        i += min;
        System.out.print("The arbitrary Number is " + i);
    }
}
The following snippet will do the trick:
int max = 100000000; // change made here
int min = 0;
int diff = max - min;
Random arbitrary = new Random();
int i = arbitrary.nextInt(diff + 1);
i += min;
System.out.print("The arbitrary Number is " + i * 10); // change made here
Note:
Initialize max to 100000000 as we will be multiplying the arbitrary number by 10.

count the odd numbers in a specified range

How do I get the counter to count the odd numbers under 100 in this program?
public class checkpassfail {
    public static void main(String[] args) {
        int sum = 0;
        double avr;
        int lower = 1;
        int upper = 100;
        int num = lower;
        int counter = 0;
        while (num <= upper) {
            sum = sum + (num += 3);
            counter += 3;
        }
        System.out.println("the sum of these numbers is\t" + sum);
        System.out.println(counter);
        double s = (double) sum;
        avr = s / counter;
        System.out.println("the average of these numbers is \t" + avr);
    }
}
What do you actually want to do?
If I'm not wrong, you want to find the odd numbers between lower_bound and upper_bound.
int lower_bound = 0, upper_bound = 10;
ArrayList<Integer> odds = new ArrayList<Integer>();
while (lower_bound < upper_bound) {
    if (lower_bound % 2 == 1)
        odds.add(lower_bound);
    lower_bound++;
}
// Number of odd numbers found
int numberOfOddsFound = odds.size();
Welcome to StackOverflow :)
Using a simple loop, you can calculate these aggregated values with:
final int lower = 1;          // Lower bound
final int upper = 100;        // Upper bound
int sum = 0;                  // Default sum
int count = 0;                // Default count
double average = Double.NaN;  // Default average
int i = lower;                // Init the "running" variable
while (i <= upper) {          // Until the upper bound is reached, do:
    sum += i;                 // Add the number to the overall sum
    count++;                  // One more number has been used - count it
    i += 2;                   // Add 2 since you mind odd values only
}
average = (double) sum / count; // Calculate the average (cast to avoid integer division)
// And enjoy the results below
System.out.println("Count: " + count);
System.out.println("Sum: " + sum);
System.out.println("Average: " + average);
There are also other ways: formulas for these characteristics of a regular sequence of numbers, or the Stream API with IntStream.range(..), which allows calculating the aggregated values directly. However, in the beginning, stick with the loop.
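For reference, a minimal sketch of the Stream-API variant; summaryStatistics() computes the count, sum, and average in a single pass:
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

IntSummaryStatistics stats = IntStream.rangeClosed(1, 100)
        .filter(i -> i % 2 == 1)  // keep odd values only
        .summaryStatistics();
System.out.println("Count: " + stats.getCount());
System.out.println("Sum: " + stats.getSum());
System.out.println("Average: " + stats.getAverage());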
You don't have to do all of this, I guess. If you already know the highest and the lowest value and you want to count how many odd numbers lie in that range, you can code it like below. There are (n + 1) / 2 odd numbers in [1, n] (integer division), so subtracting the count of odd numbers below the lower bound gives:
int odd_count = (upper + 1) / 2 - lower / 2;
public class checkpassfail {
    public static void main(String[] args) {
        int lower = 1;
        int upper = 100;
        int odd_count = (upper + 1) / 2 - lower / 2;
        System.out.println("Odd numbers count = " + odd_count);
    }
}
This will print Odd numbers count = 50.

Approximate median of an immutable array

I need to find the median value of an array of doubles (in Java) without modifying it (so selection is out) or allocating a lot of new memory. I also don't need to find the exact median; within 10% is fine (so if the value splits the sorted array 40%-60%, it's fine).
How can I achieve this efficiently?
Taking into account suggestions from rfreak, ILMTitan, and Peter, I wrote this code:
public static double median(double[] array) {
    final int smallArraySize = 5000;
    final int bigArraySize = 100000;
    if (array.length < smallArraySize + 2) { // small size, so can just sort
        double[] arr = array.clone();
        Arrays.sort(arr);
        return arr[arr.length / 2];
    } else if (array.length > bigArraySize) { // large size, don't want to make passes
        double[] arr = new double[smallArraySize + 1];
        int factor = array.length / arr.length;
        for (int i = 0; i < arr.length; i++)
            arr[i] = array[i * factor];
        return median(arr);
    } else { // average size, can sacrifice time for accuracy
        final int buckets = 1000;
        final double desiredPrecision = .005; // in percent
        final int maxNumberOfPasses = 10;
        int[] histogram = new int[buckets + 1];
        int acceptableMin, acceptableMax;
        double min, max, range, scale,
                medianMin = -Double.MAX_VALUE, medianMax = Double.MAX_VALUE;
        int sum, numbers, bin, neighborhood = (int) (array.length * 2 * desiredPrecision);
        for (int r = 0; r < maxNumberOfPasses; r++) { // enter search for number around median
            max = -Double.MAX_VALUE;
            min = Double.MAX_VALUE;
            numbers = 0;
            for (int i = 0; i < array.length; i++)
                if (array[i] > medianMin && array[i] < medianMax) {
                    if (array[i] > max) max = array[i];
                    if (array[i] < min) min = array[i];
                    numbers++;
                }
            if (min == max) return min;
            if (numbers <= neighborhood) return (medianMin + medianMax) / 2;
            acceptableMin = (int) (numbers * (50d - desiredPrecision) / 100);
            acceptableMax = (int) (numbers * (50d + desiredPrecision) / 100);
            range = max - min;
            scale = range / buckets;
            for (int i = 0; i < array.length; i++)
                // only bin values still inside the current window; otherwise the
                // bucket index can fall outside the histogram on later passes
                if (array[i] > medianMin && array[i] < medianMax)
                    histogram[(int) ((array[i] - min) / scale)]++;
            sum = 0;
            for (bin = 0; bin <= buckets; bin++) {
                sum += histogram[bin];
                if (sum > acceptableMin && sum < acceptableMax)
                    return ((.5d + bin) * scale) + min;
                if (sum > acceptableMax) break; // one bin has too many values
            }
            // the median lies inside the bin we broke on, so narrow to its bounds
            medianMin = (bin * scale) + min;
            medianMax = ((bin + 1) * scale) + min;
            for (int i = 0; i < histogram.length; i++)
                histogram[i] = 0;
        }
        return .5d * medianMin + .5d * medianMax;
    }
}
Here I take into account the size of the array. If it's small, I just sort it and take the true median. If it's very large, I sample it and take the median of the samples; otherwise I iteratively bin the values and check whether the median can be narrowed down to an acceptable range.
I don't have any problems with this code. If someone sees something wrong with it, please let me know.
Thank you.
Assuming you mean median and not average. Also assuming you are working with a fairly large double[], or memory wouldn't be an issue for sorting a copy and computing an exact median. ...
With minimal additional memory overhead you could probably run an O(n) algorithm that would get in the ballpark. I'd try this and see how accurate it is.
Two passes.
On the first pass, find the min and max. Create a set of buckets that represent evenly spaced number ranges between the min and max. On the second pass, count how many numbers fall in each bin. You should then be able to make a reasonable estimate of the median. Using 1000 buckets would only cost 4 KB if you use an int[] to store them, and the math should be fast.
The only question is accuracy, and I think you should be able to tune the number of buckets to get within the error range for your data sets.
I'm sure someone with a better math/stats background than I could provide a precise size to get the error range you are looking for.
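A minimal sketch of the two-pass idea, using the 1000 evenly spaced buckets suggested above; the method name and the mid-of-bucket return value are assumptions:
static double approxMedian(double[] a) {
    double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
    for (double v : a) {                  // first pass: find the range
        if (v < min) min = v;
        if (v > max) max = v;
    }
    if (min == max) return min;           // all values identical
    int[] buckets = new int[1000];
    double scale = (max - min) / buckets.length;
    for (double v : a) {                  // second pass: histogram the values
        int bin = (int) ((v - min) / scale);
        if (bin == buckets.length) bin--; // v == max lands in the last bucket
        buckets[bin]++;
    }
    int seen = 0;
    for (int bin = 0; bin < buckets.length; bin++) { // walk to the middle rank
        seen += buckets[bin];
        if (seen >= a.length / 2)
            return min + (bin + 0.5) * scale; // middle of the median's bucket
    }
    return max; // not reached
}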
Pick a small number of array elements at random, and find the median of those.
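For instance, a sketch of that idea; the sample size of 301 (odd, so the middle element is well defined) is an arbitrary assumption:
import java.util.Arrays;
import java.util.Random;

static double sampledMedian(double[] a, Random rnd) {
    int k = Math.min(301, a.length);
    double[] sample = new double[k];
    for (int i = 0; i < k; i++)
        sample[i] = a[rnd.nextInt(a.length)]; // random elements, with replacement
    Arrays.sort(sample);
    return sample[k / 2]; // median of the sample
}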
Following on from the OP's question about how to extract N values from a much larger array:
The following code shows how long it takes to find the median of a large array, and then how long it takes to find the median of a fixed-size selection of values. The fixed-size selection has a fixed cost, but becomes increasingly inaccurate as the size of the original array grows.
The following prints
Avg time 17345 us. median=0.5009231700563378
Avg time 24 us. median=0.5146687617507585
The code:
double[] nums = new double[100 * 1000 + 1];
for (int i = 0; i < nums.length; i++) nums[i] = Math.random();
{
    int runs = 200;
    double median = 0;
    long start = System.nanoTime();
    for (int r = 0; r < runs; r++) {
        double[] arr = nums.clone();
        Arrays.sort(arr);
        median = arr[arr.length / 2];
    }
    long time = System.nanoTime() - start;
    System.out.println("Avg time " + time / 1000 / runs + " us. median=" + median);
}
{
    int runs = 20000;
    double median = 0;
    long start = System.nanoTime();
    for (int r = 0; r < runs; r++) {
        double[] arr = new double[301];         // fixed size to sample.
        int factor = nums.length / arr.length;  // take every nth value.
        for (int i = 0; i < arr.length; i++)
            arr[i] = nums[i * factor];
        Arrays.sort(arr);
        median = arr[arr.length / 2];
    }
    long time = System.nanoTime() - start;
    System.out.println("Avg time " + time / 1000 / runs + " us. median=" + median);
}
To meet your requirement of not creating objects, I would put the fixed-size array in a ThreadLocal so there is no ongoing object creation. You adjust the size of the array to suit how fast you want the function to be.
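A minimal sketch of that idea, reusing the fixed sample size of 301 from the benchmark above (assuming java.util.Arrays is imported); the field and method names are assumptions:
private static final ThreadLocal<double[]> SAMPLE =
        ThreadLocal.withInitial(() -> new double[301]);

public static double fastMedian(double[] nums) {
    double[] arr = SAMPLE.get();            // reused per thread - no ongoing allocation
    int factor = nums.length / arr.length;  // take every nth value
    for (int i = 0; i < arr.length; i++)
        arr[i] = nums[i * factor];
    Arrays.sort(arr);
    return arr[arr.length / 2];
}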
1) How much is "a lot" of new memory? Does it preclude a sorted copy of the data, or of references to the data?
2) Is your data repetitive (are there many distinct values)? If yes, then your answer to (1) is less likely to cause problems, because you may be able to do something with a lookup map and an array, e.g. a map of the distinct values plus an array of short indices and a suitably tweaked comparison object.
3) The typical case for your "close to the mean" approximation is more likely to be O(n log n). Most sort algorithms only degrade to O(n^2) on pathological data. Additionally, the exact median is only going to be (typically) O(n log n), assuming you can afford a sorted copy.
4) Random sampling (a la dan04) is more likely to be accurate than choosing values near the mean, unless your distribution is well behaved. For example, the Poisson and log-normal distributions both have medians that differ from their means.
