This question is an extension of Java - Math.random(): Selecting an element of a 13 by 13 triangular array. I am selecting two numbers at random (0-12 inclusive), and I wanted every combination to be equally likely.
But now, since this is a multiplication game, I want a way to bias the results so certain combinations come up more frequently (for example, if the player does worse on 12x8, I want it to come up more often). Eventually, I would like to be able to bias towards any of the 91 combinations, but once I get this working, that should not be hard.
My thoughts: add some int n to the triangular number and use random.nextInt(91 + n) to bias the results toward a combination.
private int[] triLessThan(int x, int[] bias) { // I'm thinking a 91 element array, 0 for no bias, positive for bias towards
    int i = 0;
    int last = 0;
    while (true) {
        int sum = 0;
        for (int a = 0; a < i * (i + 2) / 2; a++) {
            sum += bias[a];
        }
        int triangle = i * (i + 1) / 2;
        if (triangle + sum > x) {
            int[] toReturn = {last, i};
            return toReturn;
        }
        last = triangle;
        i++;
    }
}
At the random number roll:
int sum = sumOfArray(bias); // bias is the array
int roll = random.nextInt(91 + sum);
int[] triNum = triLessThan(roll, bias);
int num1 = triNum[1];
int num2 = roll - triNum[0]; // now split into parts and make bias[] add chances to one number
where sumOfArray just finds the sum (that formula is easy). Will this work?
Edit: Using Floris's idea:
At random number roll:
int[] bias = {1, 1, 1, ..., 1, 1, 1}; // 91 elements
int roll = random.nextInt(sumOfBias());
int num1 = roll;
int num2 = 0;
while (roll > 0){
roll -= bias[num2];
num2++;
}
num1 = (int) (Math.sqrt(8 * num2 + 1) - 1)/2;
num2 -= num1 * (num1 + 1) / 2;
You already know how to convert a number between 0 and 91 and turn it into a roll (from the answer to your previous question). I would suggest that you create an array of N elements, where N >> 91. Fill the first 91 elements with 0...90, and set a counter A to 91. Now choose a number between 0 and A, pick the corresponding element from the array, and convert to a multiplication problem. If the answer is wrong, append the number of the problem to the end of the array, and increment A by one.
This will create an array in which the frequencies of sampling will represent the number of times a problem was solved incorrectly - but it doesn't ever lower the frequency again if the problem is solved correctly the next time it is asked.
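A minimal sketch of that append-on-wrong scheme, assuming a growable list in place of the oversized array and counter (the class and method names are placeholders, not from the question); as noted in the previous paragraph, it never lowers a problem's frequency:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class AppendOnWrongSampler {
    private final List<Integer> pool = new ArrayList<>();
    private final Random random = new Random();

    AppendOnWrongSampler() {
        for (int p = 0; p < 91; p++) pool.add(p);     // start with each problem exactly once
    }

    int nextProblem() {
        return pool.get(random.nextInt(pool.size())); // uniform over the current pool
    }

    void recordWrong(int problem) {
        pool.add(problem);                            // wrong answers become more frequent
    }
}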
An alternative and better solution, one that is a little closer to yours (but distinct), creates an array of 91 frequencies, each initially set to 1, and keeps track of the sum (initially 91). Now, when you choose a random number (between 0 and sum) you traverse the array until the cumulative sum is greater than your random number; the number of that bin is the roll you choose, and you convert it with the formula derived earlier. If the answer is wrong you increment the bin and update the sum; if it is right, you decrement the bin (but never to a value less than one) and update the sum. Repeat.
This should give you exactly what you are asking: given an array of 91 numbers ("bins"), randomly select a bin in such a way that the probability of that bin is proportional to the value in it. Return the index of the bin (which can be turned into the combination of numbers using the method you had before). This function is called with the bin (frequency) array as the first parameter, and the cumulative sum as the second. You look up where the cumulative sum of the first n elements first exceeds a random number scaled by the sum of the frequencies:
private int chooseBin(float[] freq, float fsum) {
    // given an array of frequencies (probabilities) freq
    // and the sum of this array, fsum,
    // choose a random number between 0 and 90
    // such that if this function is called many times
    // the frequency with which each value is observed converges
    // on the frequencies in freq
    float x, cs = 0;   // x stores the random value, cs is the cumulative sum
    int ii = -1;       // index that increments until the random value is found
    x = (float) Math.random();
    while (cs < x * fsum && ii < 90) {
        // increment cumulative sum until it's bigger than fraction x of the total
        ii++;
        cs += freq[ii];
    }
    return ii;
}
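chooseBin only covers the sampling step; the increment/decrement bookkeeping described above is left to the caller. A minimal sketch of that update, assuming you also keep fsum in sync (updateBin is my name, not part of the original answer):
// sketch only: adjust one bin after an answer and return the updated sum
// to pass back into chooseBin on the next call
private float updateBin(float[] freq, float fsum, int bin, boolean correct) {
    if (!correct) {
        freq[bin] += 1.0f;   // wrong: this problem becomes more likely
        return fsum + 1.0f;
    }
    if (freq[bin] > 1.0f) {
        freq[bin] -= 1.0f;   // right: less likely, but never below 1
        return fsum - 1.0f;
    }
    return fsum;
}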
I confirmed that it gives me a histogram (blue bars) that looks exactly like the probability distribution that I fed it (red line):
(note - this was plotted with matlab so X goes from 1 to 91, not from 0 to 90).
Here is another idea (this is not really answering the question, but it's potentially even more interesting):
You can skew your probability of choosing a particular problem by sampling something other than a uniform distribution. For example, the square of a uniformly sampled random variate will favor smaller numbers. This gives us an interesting possibility:
First, shuffle your 91 numbers into a random order
Next, pick a number from a non-uniform distribution (one that favors smaller numbers). Since the numbers were randomly shuffled, they are in fact equally likely to be chosen. But now here's the trick: if the problem (represented by the number picked) is solved correctly, you move the problem number "to the top of the stack", where it is least likely to be chosen again. If the player gets it wrong, it is moved to the bottom of the stack, where it is most likely to be chosen again. Over time, difficult problems move to the bottom of the stack.
You can create random distributions with different degrees of skew using a variation of
roll = (int)(91 * (Math.asin(Math.random() * a) / Math.asin(a)));
As you make a closer to 1, the function tends to favor lower numbers with almost zero probability of higher numbers:
I believe the following code sections do what I described:
private int[] chooseProblem(float bias, int[] currentShuffle) {
    // if bias == 0, we choose from a uniform distribution
    // for 0 < bias <= 1, we choose from an increasingly biased distribution
    // for bias outside [0, 1], we fall back to the uniform distribution
    // array currentShuffle contains the numbers 0..90, initially in shuffled order
    // when a problem is solved correctly it is moved to the top of the pile
    // when it is wrong, it is moved to the bottom
    // the return value contains number1, number2, and the current position of the problem in the list
    int problem, problemIndex;
    if (bias < 0 || bias > 1) bias = 0;
    if (bias == 0) {
        problem = random.nextInt(91);
        problemIndex = problem;
    }
    else {
        float x = (float) (Math.asin(Math.random() * bias) / Math.asin(bias));
        problemIndex = (int) Math.floor(91 * x);
        problem = currentShuffle[problemIndex];
    }
    // now convert "problem number" into two numbers:
    int first, last;
    first = (int) ((Math.sqrt(8 * problem + 1) - 1) / 2);
    last = problem - first * (first + 1) / 2;
    // and return the result:
    return new int[]{first, last, problemIndex};
}
private void shuffleProblems(int[] currentShuffle, int upDown) {
// when upDown==0, return a randomly shuffled array
// when upDown < 0, (wrong answer) move element[-upDown] to zero
// when upDown > 0, (correct answer) move element[upDown] to last position
// note - if problem 0 is answered incorrectly, don't call this routine!
int ii, temp, swap;
if(upDown == 0) {
// first an ordered list:
for(ii=0;ii<91;ii++) {
currentShuffle[ii]=ii;
}
// now shuffle it:
for(ii=0;ii<91;ii++) {
temp = currentShuffle[ii];
swap = ii + random.nextInt(91-ii);
currentShuffle[ii]=currentShuffle[swap];
currentShuffle[swap]=temp;
}
return;
}
if(upDown < 0) {
temp = currentShuffle[-upDown];
for(ii = -upDown; ii>0; ii--) {
currentShuffle[ii]=currentShuffle[ii-1];
}
currentShuffle[0] = temp;
}
else {
temp = currentShuffle[upDown];
for(ii = upDown; ii<90; ii++) {
currentShuffle[ii]=currentShuffle[ii+1];
}
currentShuffle[90] = temp;
}
return;
}
// main problem posing loop:
int[] currentShuffle = new int[91];
int[] newProblem;
boolean keepGoing = true;
// initial shuffle:
shuffleProblems(currentShuffle, 0);
while (keepGoing) {
    newProblem = chooseProblem(bias, currentShuffle);
    // pose the problem, get the answer (sets a boolean "wrong")
    if (wrong) {
        if (newProblem[2] > 0) shuffleProblems(currentShuffle, -newProblem[2]);
    }
    else shuffleProblems(currentShuffle, newProblem[2]);
    // decide if you keep going...
}
Related
I am tasked with creating a program in java which calculates the square root of a double and goes through each step of calculating it manually. The requirements are:
1. Split the number into digit pairs around the decimal point (1234.67 -> 12 34 67) to prepare for subtraction. If the integer part has an odd number of digits, pad it with a leading zero (234.67 -> 02 34 67).
2. Print each pair (each pair is a minuend), one at a time, to the console and have the console show the subtraction. The subtrahend starts at 1 and, as long as the result >= 0, increases by 2.
3. The count of subtrahends in the first round is the first digit of the final square root output, the count of subtrahends in the second round is the second digit, and so on.
4. From the first subtraction round, take the remainder and join it to the second number pair; this is the new minuend for the second round of subtraction.
5. Calculate the second subtrahend in round two by doubling the first digit of the square root output and adding a 1 in the first (ones) digit position.
6. Repeat step 2, increasing by 2 each time.
7. Steps 5 and 6 repeat until two decimal places are reached.
My question is with the number pairs in step 1 and getting the subsequent subtrahends after step 3 as a number to calculate. We are given the following visual:
My current thought is to put the double into a string and then tell java that each number pair is a number. I have a method created which creates a string from a double, but I am still missing how to incorporate the decimal place numbers. From my C class, I remember multiplying decimals by 100 to "store" the decimal numbers before converting them back later with another division by 100. I'm sure there is a java library that is able to do this but we are specifically not allowed to use them.
I think I should be able to continue on with the rest of the problem once I get past this point of splitting the number into number pairs inclusive of the decimals.
This is also my first stack post so if you have any tips on how to better write questions for future posts that would be helpful as well.
This is my current array method to store a given double into an array:
public static void printArray(int [] a) //printer helper method
{
for(int i = 0; i < a.length; i++)
{
System.out.print(a[i]);
}
}
public static void stringDigits (double n) //begin string method
{
int a [] = new int [15];
int i = 0;
int stringLength = 0;
while(n > 1)
{
a[i] = (int) (n % 10);
n = n / 10;
i++;
}
for(int j = 0; a[j] != 0; j++)
{
System.out.print(a[j]);
if(a[j] != 0)
{
stringLength++;
}
}
System.out.println("");
System.out.println(stringLength);
int[] numbersArray = new int[stringLength];
int g = 0;
for(int k = a.length-1; g < numbersArray.length; k--)
{
if(a[k] > 0)
{
numbersArray[g] = a[k];
g++;
}
}
System.out.println("");
printArray(numbersArray);
}
At first I tried to store the value of the double in an int[] a array so that I could then select the numbers in pairs and somehow combine them back into numbers. So if the array is {1,2,3,4,5,6}, my next idea is to get Java to convert a[0] and a[1] into the number 12 to prepare for the subtraction step.
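For example, something like the following sketch is roughly what I am imagining (the multiply-by-100 scaling and the splitIntoPairs name are just placeholders, it assumes exactly two decimal places, and it assumes a basic collection like ArrayList is allowed; otherwise a fixed-size array works the same way):
// rough sketch: 1234.67 -> [12, 34, 67], 234.67 -> [2, 34, 67]
public static int[] splitIntoPairs(double n) {
    long scaled = (long) (n * 100 + 0.5);    // "store" two decimal places without a library call
    java.util.ArrayList<Integer> pairs = new java.util.ArrayList<>();
    while (scaled > 0) {
        pairs.add(0, (int) (scaled % 100));  // peel off two digits at a time from the right
        scaled /= 100;
    }
    int[] result = new int[pairs.size()];
    for (int i = 0; i < result.length; i++) {
        result[i] = pairs.get(i);
    }
    return result;                           // a leading pair like 02 simply comes back as 2
}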
This link looks close but does anyone know why the numbers are "10l" and "100l" etc? I've tested some of the answers and they don't produce the proper square root compared to the sqrt function from the Math library.
Create a program that calculates the square root of a number without using Math.sqrt
I'm wondering if there is a way to create a random number generator that generates a number between two integers, but is twice as likely to generate an even number as an odd one. So far I haven't come up with a way that's even close to 2x as likely.
Simple but should work:
store random float call (0.0f - 1.0f) (random.nextFloat())
get a random integer in desired range
if random float call was less than 0.67f, if needed decrement or increment the random integer to make it even, return value
else, if needed decrement or increment the random integer to make it odd, return value
Make sure you decrement or increment towards the right direction if random integer is a boundary value of the desired range.
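A hedged sketch of the steps above, assuming the range holds at least two values so there is room to nudge the parity (the method name is mine):
// roughly 2/3 even, 1/3 odd, following the float-threshold idea above
static int evenBiasedRandom(java.util.Random random, int min, int max) {
    boolean wantEven = random.nextFloat() < 0.67f;     // the "random float call" step
    int value = min + random.nextInt(max - min + 1);   // random integer in the desired range
    if ((value % 2 == 0) != wantEven) {
        // nudge towards the inside of the range so we never step out of bounds
        value += (value >= max) ? -1 : 1;
    }
    return value;
}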
There are many ways you could do this. One would be to generate two integers: one between the user's bounds, and one between 0 and 2, inclusive. Replace the last bit of the first number with the last bit of the second number to get a result that is even twice as often as it is odd.
You do need to watch out for the possibility that the bit-twiddling last step puts the result out of bounds; in that event, you should re-draw from the beginning.
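A rough sketch of that two-draw idea (the helper name and the re-draw loop are my own phrasing of the step above):
// replace the last bit of a uniform draw with a bit that is 0 twice as often as 1,
// and re-draw whenever the twiddle pushes the result outside the bounds
static int evenTwiceAsLikely(java.util.Random random, int min, int max) {
    while (true) {
        int value = min + random.nextInt(max - min + 1); // first draw: within the user's bounds
        int lastBit = random.nextInt(3) & 1;             // second draw: 0, 1 or 2, so last bit is 0 twice as often
        int candidate = (value & ~1) | lastBit;          // replace the last bit
        if (candidate >= min && candidate <= max) {
            return candidate;
        }
    }
}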
Implementing @SteveKuo's suggestion in the comments:
import java.util.Scanner;
class Main {
public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);
System.out.print("Please enter the minimum number that can be generated: ");
int min = scanner.nextInt();
System.out.print("Please enter the maximum number that can be generated: ");
int max = scanner.nextInt();
int evenOrOdd = 0 + (int)(Math.random() * ((2 - 0) + 1));
int random = 0;
if(evenOrOdd == 2) { // generate random odd number
if(max % 2 == 0) { --max; }
if(min % 2 == 0) { ++min; }
random = min + 2*(int)(Math.random() * ((max - min)/2+1));
} else { //get random number between [(min+1)/2, max/2] and multiply by 2 to get random even number between min and max
random = ((min+1)/2 + (int)(Math.random() * ((max/2 - (min+1)/2) + 1))) * 2;
}
System.out.printf("The generated random number is: %d", random);
}
}
I was going through this problem in one of my exam papers and found a solution in the answer book. I am not able to understand the algorithm behind it. Can anyone explain how this algorithm works?
Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it is able to trap after raining.
For example, Given the input
[0,1,0,2,1,0,1,3,2,1,2,1]
the return value would be
6
The solution as per the answer book is this:
public class Solution {
public int trap(int[] height) {
if (height.length <=2 )
return 0;
int h = 0, sum = 0, i = 0, j = height.length - 1;
while(i < j)
{
if ( height[i] < height[j] )
{
h = Math.max(h,height[i]);
sum += h - height[i];
i++;
}
else
{
h = Math.max(h,height[j]);
sum += h - height[j];
j--;
}
}
return sum;
}
}
Thanks
WoDoSc was nice enough to draw a diagram of the elevations and trapped water. The water can only be trapped between two higher elevations.
What I did was run the code and output the results so you can see how the trapped water is calculated. The code starts at both ends of the "mountain" range. Whichever end is lower is moved closer to the center.
In the case where the two ends are the same height, the right end is moved closer to the center. You could move the left end closer to the center instead.
The first column is the height and index of the elevations on the left. The second column is the height and index of the elevations on the right.
The third column is the smaller of the two maximum heights seen so far from the left and from the right. This number is important because it determines the local water level.
The fourth column is the sum.
Follow along with the diagram and you can see how the algorithm works.
0,0 1,11 0 0
1,1 1,11 1 0
1,1 2,10 1 0
0,2 2,10 1 1
2,3 2,10 2 1
2,3 1,9 2 2
2,3 2,8 2 2
2,3 3,7 2 2
1,4 3,7 2 3
0,5 3,7 2 5
1,6 3,7 2 6
6
And here's the code. Putting print and println statements in appropriate places can help you understand what the code is doing.
package com.ggl.testing;
public class RainWater implements Runnable {
public static void main(String[] args) {
new RainWater().run();
}
@Override
public void run() {
int[] height = { 0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1 };
System.out.println(trap(height));
}
public int trap(int[] height) {
if (height.length <= 2) {
return 0;
}
int h = 0, sum = 0, i = 0, j = height.length - 1;
while (i < j) {
System.out.print(height[i] + "," + i + " " + height[j] + "," + j
+ " ");
if (height[i] < height[j]) {
h = Math.max(h, height[i]);
sum += h - height[i];
i++;
} else {
h = Math.max(h, height[j]);
sum += h - height[j];
j--;
}
System.out.println(h + " " + sum);
}
return sum;
}
}
I know that probably it's not the best way to represent it graphically, but you can imagine the situation as the following figure:
Where the red bars are the terrain (with elevations according to the array of your example), and the blue bars are the water that can be "trapped" into the "valleys" of the terrain.
Simplifying, the algorithm walks the bars left-to-right (if the left side is smaller) or right-to-left (if the right side is smaller). The variable h stores the maximum height found so far on the side being advanced, because the water can not rise higher than that maximum. To know how much water can be trapped, it sums the differences between the water height (the maximum h) and the elevation of the terrain at each point, which gives the actual quantity of water.
The algorithm works by processing the land from the left (i) and the right (j).
i and j are counters that work towards each other approaching the middle of the land.
h is a variable that tracks the max height found thus far considering the lower side.
The land is processed by letting i and j work toward each other. When I read the code, I pictured two imaginary walls squeezing the water toward the middle, where the lower wall moves toward the higher wall. The algorithm keeps summing up the volume of water. It uses h - height[x] because water can only be contained up to the lower of the two walls. So essentially it sums the volume of water from the left and right and subtracts out any water displaced by higher elevation blocks.
Maybe better variable names would have been:
leftWall instead of i
rightWall instead of j
waterMaxHeight instead of h
I think the above solution is difficult to understand. I have a simple solution which takes O(n) extra space and O(n) time.
Steps of the algorithm:
1. Maintain an array that contains, for each position, the maximum of all elements to its right.
2. Maintain a variable "max from left" that contains the maximum of all elements to the left of the current element.
3. Find the minimum of the max from the left and the max from the right (already stored in the array).
4. If that minimum is greater than the current value in the array, add the difference to the answer, add the difference to the current value, and update the max from the left.
import java.util.*;
import java.lang.*;
import java.io.*;
class Solution
{
public static void main (String[] args) throws java.lang.Exception
{
int[] array= {0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1 };
int[] arrayofmax=new int[array.length];
int max=0;
arrayofmax[array.length-1]=0;
for(int x=array.length-1;x>0;x--){
if(max<array[x]){
max=array[x];
}
arrayofmax[x-1]=max;
}
int ans=0;
int maxfromleft=0;
for(int i=0;i<array.length-1;i++){
if(maxfromleft<array[i]){
maxfromleft=array[i];
}
int min=maxfromleft>arrayofmax[i+1]?arrayofmax[i+1]:maxfromleft;
if(min>array[i+1]){
ans+=min-array[i+1];
array[i+1]=min;
}
}
System.out.println(ans);
}
}
Maybe my algorithm is the same as above, but I think this implementation is easier to understand.
Trapping Rain Water problem solved in Java.
class Store
{
static int arr[] = new int[]{0, 1, 0, 2, 2};
// Method for maximum amount of water
static int StoreWater(int n)
{
int max = 0;
int f = 0;
for (int i = 1; i < n; i++)
{
max = Math.max(arr[i], max);
f += Math.max(arr[i], max) - arr[i];
}
return f;
}
public static void main(String[] args)
{
System.out.println("Maximum water that can be accumulated is " +
findWater(arr.length));
}
}
Here is a different and easy approach to the water trapping problem, with O(1) space and O(N) time complexity.
Logic:
-> Let’s loop from 0 index to the end of the input values.
-> If we find a wall greater than or equal to the previous wall
-> make note of the index of that wall in a var called prev_index
-> keep adding previous wall’s height minus current (ith) wall to the variable water.
-> have a temp variable that also stores the same value as water.
-> Loop till the end; if you don't find any wall greater than or equal to the previous wall, then quit.
-> If the above point is true (i.e., if prev_index < size of the input array), then subtract the temp variable from water, and loop from the end of the input array back to prev_index, looking for a wall greater than or equal to the previous wall (in this case, the last wall from the back)
The concept here is if there is a larger wall to the right you can retain water with height equal to the smaller wall on the left.
If there are no larger walls to the right, then start from left. There must be a larger wall to your left now.
You're essentially looping twice, so O(2N), but asymptotically O(N), and of course O(1) space.
JAVA Code Here:
class WaterTrap
{
public static void waterTrappingO1SpaceOnTime(){
int arr[] = {1,2,3,2,1,0}; // this array traps no water, so the answer is 0
int size = arr.length-1;
int prev = arr[0]; //Let first element be stored as previous, we shall loop from index 1
int prev_index = 0; //We need to store previous wall's index
int water = 0;
int temp = 0; //temp will store water until a larger wall is found. If there are no larger walls, we shall delete temp value from water
for(int i=1; i<= size; i++){
if(arr[i] >= prev){ // If current wall is taller than the previous wall, make current wall the previous wall, and its index the previous wall's index for the subsequent loops
prev = arr[i];
prev_index = i;
temp = 0; //because larger or same height wall is found
} else {
water += prev - arr[i]; //Since current wall is shorter than the previous one, we subtract the current wall height from the previous wall height and add the difference to water
temp += prev - arr[i]; // Store same value in temp as well, if we dont find larger wall, we will subtract temp from water
}
}
// If the last wall was larger than or equal to the previous wall, then prev_index would be equal to size of the array (last element)
// If we didn't find a wall greater than or equal to the previous wall from the left, then prev_index must be less than index of last element
if(prev_index < size){
water -= temp; //Temp would've stored the water collected from previous largest wall till the end of array if no larger wall was found. So it has excess water. Delete that from 'water' var
prev = arr[size]; // We start from the end of the array, so previous should be assigned to the last element.
for(int i=size; i>= prev_index; i--){ //Loop from end of array up to the 'previous index' which would contain the "largest wall from the left"
if(arr[i] >= prev){ //Right end wall will be definitely smaller than the 'previous index' wall
prev = arr[i];
} else {
water += prev - arr[i];
}
}
}
System.out.println("MAX WATER === " + water);
}
public static void main(String[] args) {
waterTrappingO1SpaceOnTime();
}
}
Algorithm:
1. Create two arrays left and right of size n. Create a variable max_ = INT_MIN.
2. Run one loop from start to end. In each iteration update max_ as max_ = max(max_, arr[i]) and also assign left[i] = max_.
3. Update max_ = INT_MIN.
4. Run another loop from end to start. In each iteration update max_ as max_ = max(max_, arr[i]) and also assign right[i] = max_.
5. Traverse the array from start to end.
6. The amount of water that will be stored in this column is min(a, b) - arr[i] (where a = left[i] and b = right[i]); add this value to the total amount of water stored.
7. Print the total amount of water stored.
Code:
/*** Theta(n) Time Complexity ***/
static int trappingRainWater(int ar[],int n)
{
int res=0;
int lmaxArray[]=new int[n];
int rmaxArray[]=new int[n];
lmaxArray[0]=ar[0];
for(int j=1;j<n;j++)
{
lmaxArray[j]=Math.max(lmaxArray[j-1], ar[j]);
}
rmaxArray[n-1]=ar[n-1];
for(int j=n-2;j>=0;j--)
{
rmaxArray[j]=Math.max(rmaxArray[j+1], ar[j]);
}
for(int i=1;i<n-1;i++)
{
res=res+(Math.min(lmaxArray[i], rmaxArray[i])-ar[i]);
}
return res;
}
Python code:
from typing import List

class Solution:
    def trap(self, h: List[int]) -> int:
        i = 0
        j = len(h) - 1
        ml = -1
        mr = -1
        left = []
        right = []
        while i < len(h):
            if ml < h[i]:
                ml = h[i]
            left.append(ml)
            if mr < h[j]:
                mr = h[j]
            right.insert(0, mr)
            i = i + 1
            j = j - 1
        s = 0
        for i in range(len(h)):
            s = s + min(left[i], right[i]) - h[i]
        return s
Can anyone tell me the complexity (Big O notation preferred) of this code? It finds the least number of "coins" needed to make a target sum.
To do this it calculates the least number of coins for each number up to the target, starting from 1. Each number is worked out based on the possible pairs of numbers that could sum to it, and the pair with the smallest cost is used. An example hopefully makes this clearer:
If the "coins" are {1, 3, 4} and the target is 13, then it iterates from 1 to 13, where the cost of 2 is the minimum over the pairs (0+2, 1+1), c(5) is the smallest cost among (c(0)+c(5), c(1)+c(4), c(2)+c(3)), and so on up to c(13).
This is a version of the knapsack problem and I'm wondering how to define its complexity?
Code:
import java.util.*;
public class coinSumMinimalistic {
public static final int TARGET = 12003;
public static int[] validCoins = {1, 3, 5, 6, 7, 10, 12};
public static void main(String[] args) {
Arrays.sort(validCoins);
sack();
}
public static void sack() {
Map<Integer, Integer> coins = new TreeMap<Integer, Integer>();
coins.put(0, 0);
int a = 0;
for(int i = 1; i <= TARGET; i++) {
if(a < validCoins.length && i == validCoins[a]) {
coins.put(i, 1);
a++;
} else coins.put(i, -1);
}
for(int x = 2; x <= TARGET; x++) {
if(x % 5000 == 0) System.out.println("AT: " + x);
ArrayList<Integer> list = new ArrayList<Integer>();
for(int i = 0; i <= x / 2; i++) {
int j = x - i;
list.add(i);
list.add(j);
}
coins.put(x, min(list, coins));
}
System.out.println("It takes " + coins.get(TARGET) + " coins to reach the target of " + TARGET);
}
public static int min(ArrayList<Integer> combos, Map<Integer, Integer> coins) {
int min = Integer.MAX_VALUE;
int total = 0;
for(int i = 0; i < combos.size() - 1; i += 2) {
int x = coins.get(combos.get(i));
int y = coins.get(combos.get(i + 1));
if(x < 0 || y < 0) continue;
else {
total = x + y;
if(total > 0 && total < min) {
min = total;
}
}
}
int t = (min == Integer.MAX_VALUE || min < 0) ? -1:min;
return t;
}
}
EDIT: Upon research I think that the complexity is O(k*n^2) where n is the target, and k is the number of coins supplied, is this correct?
I think the code you provided is kind of chaotic. So this post is more about the conceptual algorithm than the real algorithm. This can differ a bit since, for instance, insertion in an ArrayList<T> is not O(1), but I'm confident that you can use good data structures (for instance LinkedList<T>s) to let all operations run in constant time.
What your algorithm basically does is the following:
It starts with a map that maps all the given coins to one: it requires one coin to achieve the value on the coin.
For each iteration, it mixes all already achieved values with all already achieved values. The result is the sum of the two values, and its cost is the sum of their coin counts, unless the value was already present in the collection.
This step you forgot: kick out values strictly larger than the requested value. Since all coins are strictly positive, you will never be able to bring such a composition back down to the requested value.
You keep doing this until you have constructed the requested coin value.
If at iteration i all new values added to the set are strictly larger than the requested value, you can stop: the requested value can't be constructed.
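To make the object of the analysis concrete, here is a rough sketch of that conceptual algorithm in Java (my own illustration, not the code from the question):
// illustration only: combine already-reachable values each round, discard anything
// above the target, stop when the target is reached or nothing new appears
static int minCoinsSketch(int[] coins, int target) {
    java.util.Map<Integer, Integer> best = new java.util.HashMap<>();
    for (int c : coins) {
        if (c <= target) best.put(c, 1);             // each coin is reachable with one coin
    }
    while (!best.containsKey(target)) {
        java.util.Map<Integer, Integer> next = new java.util.HashMap<>(best);
        boolean progress = false;
        for (java.util.Map.Entry<Integer, Integer> x : best.entrySet()) {
            for (java.util.Map.Entry<Integer, Integer> y : best.entrySet()) {
                int value = x.getKey() + y.getKey();
                if (value > target || next.containsKey(value)) continue;
                next.put(value, x.getValue() + y.getValue());
                progress = true;
            }
        }
        if (!progress) return -1;                     // target can never be constructed
        best = next;
    }
    return best.get(target);
}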
The parameters are:
n: the number of coins.
r: the requested value.
A first observation is that each step of (2.) requires O(s^2) time with s the number of elements in the set at the start of the iteration: this is because you match every value with every value.
A second observation is that you can never have more elements in the set than the requested value. This means that s is bounded by O(r) (we assume all coins are integers, thus the set can contain at most all integer values from 0 to r-1). Step (2.) has thus a maximum time complexity of O(r^2).
And furthermore the set evolves progressively: at each iteration, you will always construct a new value that is at least one larger than the maximum thus far. As a consequence, the algorithm will perform maximum O(r) iterations.
This implies that the algorithm has a time-complexity of O(r^3): r times O(r^2).
Why is the behavior exponential and thus at least NP-hard?
A first argument is that it comes down to how you represent the input: in many cases, numbers are represented using a system with a radix greater than or equal to 2. This means that with k characters, you can represent a value that scales with O(g^k), with g the radix. Thus exponential. In other words, if you use a 32-bit number, worst case r = O(2^32). So if you take this as input, there is an exponential part. If you would encode the target using unary notation, the algorithm is in P. But of course that's a bit like the padding argument: given you provide enough useless input data (exponential or even super-exponential), all algorithms are in P, but you don't buy much with this.
A second argument is that if you leave the requested value out of the input, you can only state that you start with n coins. You know that the number of iterations is fixed: you see the target value as an unknown constant. Each iteration, the total number of values in the Map<Integer,Integer> potentially squares. This thus means that the computational effort is:
n + n^2 + n^4 + n^6 + ... + n^(log r)
where the first term corresponds to the initial insertion, the second term to the first iteration, and the last term to the end of the algorithm.
It is clear that this behavior is exponential in n.
I need to generate n random numbers between a and b, but any two numbers cannot have a difference of less than c. All variables except n are floats (n is an int).
Solutions are preferred in java, but C/C++ is okay too.
Here is the code I have so far:
static float getRandomNumberInRange(float min, float max) {
return (float) (min + (Math.random() * (max - min)));
}
static float[] randomNums(float a, float b, float c, int n) {
float minDistance = c;
float maxDistance = (b - a) - (n - 1) * c;
float[] randomNumArray = new float[n];
float random = getRandomNumberInRange(minDistance, maxDistance);
randomNumArray[0] = a + random;
for (int x = 1; x < n; x++) {
maxDistance = (b - a) - (randomNumArray[x - 1]) - (n - x - 1) * c;
random = getRandomNumberInRange(minDistance, maxDistance);
randomNumArray[x] = randomNumArray[x - 1] + random;
}
return randomNumArray;
}
If I run the function as such (10 times), I get the following output:
Input: randomNums(-1f, 1f, 0.1f, 10)
[-0.88, 0.85, 1.23, 1.3784, 1.49, 1.59, 1.69, 1.79, 1.89, 1.99]
[-0.73, -0.40, 0.17, 0.98, 1.47, 1.58, 1.69, 1.79, 1.89, 1.99]
[-0.49, 0.29, 0.54, 0.77, 1.09, 1.56, 1.69, 1.79, 1.89, 1.99]
I think a reasonable approach can be the following:
Total "space" is (b - a)
Remove the minimum required space (n-1)*c to obtain the remaining space
Shoot (n-1) random numbers between 0 and 1 and scale them so that their sum is this just computed "optional space". Each of them will be a "slice" of space to be used.
First number is a
For each other number add c and the next "slice" to the previous number. Last number will be b.
If you don't want first and last to match a and b exactly then just create n+1 slices instead of n-1 and start with a+slice[0] instead of a.
The main idea is that once you remove the required spacing between the points (totalling (n-1)*c) the problem is just to find n-1 values so that the sum is the prescribed "optional space". To do this with a uniform distribution just shoot n-1 numbers, compute the sum and uniformly scale those numbers so that the sum is instead what you want by multiplying each of them by the constant factor k = wanted_sum / current_sum.
To obtain the final result you just use as spacing between a value and the previous one the sum of the mandatory part c and one of the randomly sampled variable parts.
An example in Python of the code needed for the computation is the following
import random

space = b - a
slack = space - (n - 1) * c
slice = [random.random() for i in range(n - 1)]  # Pick (n-1) random numbers 0..1
k = slack / sum(slice)                           # Compute needed scaling
slice = [x * k for x in slice]                   # Scale to get slice sizes
result = [a]
for i in range(n - 1):
    result.append(result[-1] + slice[i] + c)
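Since the question prefers Java, a rough translation of the same sketch might look like this (the method name spacedRandoms is mine; like the Python version above, it pins the first value to a):
// sketch only: distribute the slack uniformly between the mandatory gaps of size c
static float[] spacedRandoms(float a, float b, float c, int n) {
    java.util.Random random = new java.util.Random();
    float slack = (b - a) - (n - 1) * c;        // space left after the mandatory gaps
    float[] slice = new float[n - 1];
    float sum = 0;
    for (int i = 0; i < n - 1; i++) {
        slice[i] = random.nextFloat();
        sum += slice[i];
    }
    float k = slack / sum;                      // scale so the slices fill the slack exactly
    float[] result = new float[n];
    result[0] = a;
    for (int i = 1; i < n; i++) {
        result[i] = result[i - 1] + slice[i - 1] * k + c;
    }
    return result;
}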
If you have a random number X and you want another random number Y which is at least A away from X and at most B away from X, why not write that directly in your code?
float nextRandom(float base, float minDist, float maxDist) {
return base + minDist + (((float)Math.random()) * (maxDist - minDist));
}
by trying to keep the base out of the next number routine, you add a lot of complexity to your algorithm.
Though this does not exactly do what you need and does not incorporate the technique being described in this thread, I believe that this code will prove to be useful as it will do what it seems like you want.
static float getRandomNumberInRange(float min, float max)
{
return (float) (min + (Math.random() * ((max - min))));
}
static float[] randomNums(float a, float b, float c, int n)
{
float averageDifference=(b-a)/n;
float[] randomNumArray = new float[n];
float random;
randomNumArray[0]=a+averageDifference/2;
for (int x = 1; x < n; x++)
randomNumArray[x]=randomNumArray[x-1]+averageDifference;
for (int x = 0; x < n; x++)
{
random = getRandomNumberInRange(-averageDifference/2, averageDifference/2);
randomNumArray[x]+=random;
}
return randomNumArray;
}
I need to generate n random numbers between a and b, but any two numbers cannot have a difference of less than c. All variables except n are floats (n is an int).
Solutions are preferred in java, but C/C++ is okay too.
First, what distribution? I'm going to assume a uniform distribution, but with that caveat that "any two numbers cannot have a difference of less than c". What you want is called "rejection sampling". There's a wikipedia article on the subject, plus a whole lot of other references on the 'net and in books (e.g. http://www.columbia.edu/~ks20/4703-Sigman/4703-07-Notes-ARM.pdf). Pseudocode, using some function random_uniform() that returns a random number drawn from U[0,1], and assuming a 1-based array (many languages use a 0-based array):
function generate_numbers (a, b, c, n, result)
    result[1] = a + (b-a)*random_uniform()
    for index from 2 to n
        rejected = true
        while (rejected)
            result[index] = a + (b-a)*random_uniform()
            rejected = abs(result[index] - result[index-1]) < c
        end
    end
Your solution was almost correct, here is the fix:
maxDistance = b - (randomNumArray[x - 1]) - (n - x - 1) * c;
I would do this by just generating n random numbers between a and b. Then I would sort them and get the first order differences, kicking out any numbers that generate a difference less than c, leaving me with m numbers. If m < n, I would just do it again, this time for n - m numbers, add those numbers to my original results, sort again, generate differences...and so on until I have n numbers.
Note, first order differences means x[1] - x[0], x[2] - x[1] and so on.
I don't have time to write this out in C but in R, it's pretty easy:
getRands<-function(n,a,b,c){
r<-c()
while(length(r) < n){
r<-sort(c(r,runif(n,a,b)))
r<-r[-(which(diff(r) <= c) + 1 )]
}
r
}
Note that if you are too aggressive with c relative to a and b, this kind of solution might take a long time to converge, or not converge at all if n * c > b - a.
Also note, I don't mean for this R code to be a fully formed, production ready piece of code, just an illustration of the algorithm (for those who can follow R).
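For those who don't follow R, a rough Java sketch of the same sort-and-reject idea (only an illustration, with the same convergence caveats as above):
// sketch: repeatedly add uniform draws, sort, and drop values that land
// within c of their left neighbour, until at least n survivors remain
static float[] getRands(int n, float a, float b, float c) {
    java.util.Random random = new java.util.Random();
    java.util.List<Float> kept = new java.util.ArrayList<>();
    while (kept.size() < n) {
        for (int i = 0; i < n; i++) {
            kept.add(a + random.nextFloat() * (b - a));
        }
        java.util.Collections.sort(kept);
        for (int i = kept.size() - 1; i > 0; i--) {
            if (kept.get(i) - kept.get(i - 1) <= c) {
                kept.remove(i);   // drop the right member of each too-close pair
            }
        }
    }
    float[] result = new float[n];
    for (int i = 0; i < n; i++) result[i] = kept.get(i);
    return result;
}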
How about using a shifting range as you generate numbers to ensure that they don't appear too close?
static float[] randomNums(float min, float max, float separation, int n) {
float rangePerNumber = (max - min) / n;
// Check separation and range are consistent.
assert (rangePerNumber >= separation) : "You have a problem.";
float[] randomNumArray = new float[n];
// Set range for first random number
float lo = min;
float hi = lo + rangePerNumber;
for (int i = 0; i < n; ++i) {
float random = getRandomNumberInRange(lo, hi);
// Shift range for next random number.
lo = random + separation;
hi = lo + rangePerNumber;
randomNumArray[i] = random;
}
return randomNumArray;
}
I know you already accepted an answer, but I like this problem. I hope it's unique, I haven't gone through everyone's answers in detail just yet, and I need to run, so I'll just post this and hope it helps.
Think of it this way: Once you pick your first number, you have a chunk +/- c that you can no longer pick in.
So your first number is
range1=b-a
x=Random()*range1+a
At this point, x is somewhere between a and b (assuming Random() returns in 0 to 1). Now, we mark out the space we can no longer pick in
excludedMin=x-c
excludedMax=x+c
If x is close to either end, then it's easy, we just pick in the remaining space
if (excludedMin<=a)
{
range2=b-excludedMax
y=Random()*range2+excludedMax
}
Here, x is so close to a, that you won't get y between a and x, so you just pick between x+c and b. Likewise:
else if (excludedMax>=b)
{
range2=excludedMin-a
y=Random()*range2+a
}
Now if x is somewhere in the middle, we have to do a little magic
else
{
range2=b-a-2*c
y=Random()*range2+a
if (y>excludedMin) y+=2*c
}
What's going on here? Well, we know that the range y can lie in is 2*c smaller than the whole space, so we pick a number somewhere in that smaller space. Now, if y is less than excludedMin, we know y "is to the left" of x-c, and we're all ok. However, if y > excludedMin, we add 2*c (the total excluded space) to it, to ensure that it's greater than x+c; it'll still be less than b because our range was reduced.
Now, it's easy to expand to n numbers: each time you just reduce the range by the excluded space around any of the other points. You continue until the excluded space equals the original range (b-a).
I know it's bad form to do a second answer, but I just thought of one...use a recursive search of the space:
Assume a global list of points: points
FillRandom(a,b,c)
{
range=b-a;
if (range>0)
{
x=Random()*range+a
points.Append(x)
FillRandom(a,x-c,c)
FillRandom(x+c,b,c)
}
}
I'll let you follow the recursion, but at the end, you'll have a list in points that fills the space with density 1/c
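A rough Java rendering of that recursion might look like this (purely illustrative; the list parameter stands in for the global points):
// recursively pick a point in the current gap, then fill the space on either
// side of its exclusion zone of width c
static void fillRandom(java.util.List<Float> points, float a, float b, float c) {
    float range = b - a;
    if (range > 0) {
        float x = a + (float) Math.random() * range;
        points.add(x);
        fillRandom(points, a, x - c, c);   // left of the exclusion zone
        fillRandom(points, x + c, b, c);   // right of the exclusion zone
    }
}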