How can sound be generated from known points? - java

I have 10 lists of points (each point is a time-amplitude pair), where each list belongs to a known frequency.
So I have a class InputValue with two fields, sampleDate (long) and sampleValue (double), and 10 lists: List samples800Hz, samples400Hz and so on.
The 800Hz list contains about 1600 points per second (not a fixed value, because the data sampler can have unpredictable delays), the 400Hz list contains about 800 points per second, and so on.
How can I:
Generate sound from a list of points?
Mix several or all lists into one sound?
If I got it right, I need to resample each list to one sample rate (can a Java AudioFormat take custom sample rates like 1600, or should I use standard ones, where the lowest is 8000?) and then fill the sample buffer like this:
AudioFormat af = new AudioFormat(1600f, 8, 1, true, false);
SourceDataLine sdl = AudioSystem.getSourceDataLine(af);
sdl.open();
sdl.start();
byte[] buf = new byte[1];
for (int i = 0; i < 1600; i++) {
    buf[0] = ???
    sdl.write(buf, 0, 1);
}
sdl.drain();
sdl.stop();
But how can I tell sdl that my amplitude value belongs to some frequency? And how can I mix different frequencies?
BTW, can I, instead of resampling each list, create 10 AudioFormats with different sample rates (1600 for 800Hz, 800 for 400Hz and so on) and later mix the 10 SourceDataLines into one?

It sounds like you're trying to use a wavetable for your sound generation. If you're simply recreating an 800 Hz tone, this is easy:
static int sample = 0;

for (int i = 0; i < 1600; i++) {
    buf[i] = samples800Hz[sample];
    sample = (sample + 1) % SAMPLES_800HZ_SIZE;
}
Let's say you want to combine an 800 Hz and a 1600 Hz tone... just add them together (you might have to scale the values so they don't clip):
static int sample1 = 0, sample2 = 0;

for (int i = 0; i < 1600; i++) {
    // Multiply each sample by 0.5; this gives us a 50% mix between the two
    buf[i] = (samples800Hz[sample1] * 0.5) + (samples1600Hz[sample2] * 0.5);
    sample1 = (sample1 + 1) % SAMPLES_800HZ_SIZE;
    sample2 = (sample2 + 1) % SAMPLES_1600HZ_SIZE;
}
Now, my answer doesn't consider how many times/how many frames your system runs its callback; you'll have to figure that out on your own. Also, if you want multiple-tone generation instead of endlessly making lists, I would urge you to look up wavetable oscillators. A wavetable is basically one array containing a single cycle of a tone; you then adjust the speed/phase at which you read the table to generate a desired frequency.
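To make the wavetable idea concrete, here is a minimal oscillator sketch (my own illustration, not part of the original answer; the class and field names are made up). It stores one cycle of a sine wave and advances a fractional phase through the table, so a single table can produce any frequency at a fixed output sample rate:

// Minimal wavetable oscillator sketch (hypothetical names; assumes a fixed output sample rate).
public class WavetableOscillator {
    private final double[] table;     // one cycle of the waveform
    private final double sampleRate;  // output sample rate in Hz
    private double phase;             // current read position in the table

    public WavetableOscillator(int tableSize, double sampleRate) {
        this.table = new double[tableSize];
        this.sampleRate = sampleRate;
        for (int i = 0; i < tableSize; i++) {
            table[i] = Math.sin(2 * Math.PI * i / tableSize); // one sine cycle
        }
    }

    /** Returns the next sample of a tone at the given frequency, in [-1, 1]. */
    public double next(double frequencyHz) {
        double sample = table[(int) phase];
        phase += frequencyHz * table.length / sampleRate; // the phase increment sets the pitch
        phase %= table.length;                            // wrap around the table
        return sample;
    }
}

Mixing several oscillators is then just scaling and summing their next() values, exactly like the 50% mix above.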
Good luck!

Related

How to interpret output from FFT on Noise library

I'm trying to get the most representative frequency (or first harmonic) from an audio file using the Noise FFT library (https://github.com/paramsen/noise). I have an input array of size x, and the output array's size is x+2. I'm not familiar with the Fourier transform, so maybe I'm missing something, but from my understanding I should have something that represents the frequencies and stores the magnitude (or in this case a complex number from which to calculate it) of each one.
The thing is: since each position in the array should be a frequency, how can I know the range of the output frequencies and what frequency each position corresponds to?
Edit: This is part of the code I'm using
float[] mono = new float[size];
// I fill the array with the appropriate values
Noise noise = Noise.real(size);
float[] dst = new float[size + 2];
float[] fft = noise.fft(mono, dst);
// The result array has the pairs of real+imaginary floats in a one dimensional array; even indices
// are real, odd indices are imaginary. DC bin is located at index 0, 1, nyquist at index n-2, n-1
double greatest = 0;
int greatestIdx = 0;
for (int i = 0; i < fft.length / 2; i++) {
    float real = fft[i * 2];
    float imaginary = fft[i * 2 + 1];
    double magnitude = Math.sqrt(real * real + imaginary * imaginary);
    if (magnitude > greatest) {
        greatest = magnitude;
        greatestIdx = i;
    }
    System.out.printf("index: %d, real: %.5f, imaginary: %.5f\n", i, real, imaginary);
}
I just noticed something I had overlooked. The comment just before the for loop (which is from the sample code provided on GitHub) says that the Nyquist bin is located at the last pair of values in the array. From what I searched, Nyquist is 22050Hz, so... to know the frequency corresponding to greatestIdx, should I map the range [0, size+2] to the range [0, 22050] and calculate the new value? That seems like a pretty imprecise measure.
Taking the prior things into account, maybe I should use another library for more precision? If that is the case, what would be one that lets me specify the output frequency range, or that gives me approximately the human hearing range by default?
I believe the answer to your question, if I understand it correctly, is here: https://stackoverflow.com/a/4371627/9834835
To determine the frequency of each FFT bin you can use the formula
F = i * sampleRate / nFft
where:
i = the FFT bin index
sampleRate = the sample rate of the input signal
nFft = your FFT size
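Applied to the code above, a hedged sketch of that mapping (assuming the PCM data fed to noise.fft() was decoded at a known sample rate such as 44100 Hz; that value is not stated in the question):

// Assumption: the audio passed to noise.fft() was sampled at this rate.
float sampleRate = 44100f;
int nFft = size; // the size passed to Noise.real(size)

// greatestIdx is the bin with the largest magnitude from the loop above.
double peakFrequencyHz = greatestIdx * sampleRate / nFft;
System.out.println("Dominant frequency ~ " + peakFrequencyHz + " Hz");

The usable bins run from 0 (DC) up to nFft/2 (the Nyquist frequency, sampleRate/2; that equals 22050 Hz only when the sample rate is 44100 Hz), so the frequency resolution is sampleRate/nFft per bin.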

Working out a point using curve fitting in java

The following code produces a curve that should fit the points:
1, 1
150, 250
10000, 500
100000, 750
1000000, 1000
I built this code based on the documentation here; however, I am not entirely sure how to use the resulting data correctly in further calculations, or whether PolynomialCurveFitter.create(3) will affect the answers in those calculations.
For example, how would I use the output data to calculate the x value when the y value is 200, and how would the result differ if I used PolynomialCurveFitter.create(2) instead of PolynomialCurveFitter.create(3)?
import java.util.ArrayList;
import java.util.Arrays;
import org.apache.commons.math3.fitting.PolynomialCurveFitter;
import org.apache.commons.math3.fitting.WeightedObservedPoints;

public class MyFuncFitter {
    public static void main(String[] args) {
        ArrayList<Integer> keyPoints = new ArrayList<Integer>();
        keyPoints.add(1);
        keyPoints.add(150);
        keyPoints.add(10000);
        keyPoints.add(100000);
        keyPoints.add(1000000);

        WeightedObservedPoints obs = new WeightedObservedPoints();
        if (keyPoints != null && keyPoints.size() != 1) {
            int size = keyPoints.size();
            int sectionSize = (int) (1000 / (size - 1));
            for (int i = 0; i < size; i++) {
                if (i != 0)
                    obs.add(keyPoints.get(i), i * sectionSize);
                else
                    obs.add(keyPoints.get(0), 1);
            }
        } else if (keyPoints.size() == 1 && keyPoints.get(0) >= 1) {
            obs.add(1, 1);
            obs.add(keyPoints.get(0), 1000);
        }

        PolynomialCurveFitter fitter = PolynomialCurveFitter.create(3);
        fitter.withStartPoint(new double[] {keyPoints.get(0), 1});
        double[] coeff = fitter.fit(obs.toList());
        System.out.println(Arrays.toString(coeff));
    }
}
About the consequences of changing the degree d for your function
PolynomialCurveFitter.create takes the degree of the polynomial as a parameter.
Very (very) roughly speaking, the polynomial degree describes the "complexity" of the curve you want to fit. A low degree will produce simple curves (just a parabola for d=2), whereas higher degrees will produce more intricate curves, with lots of peaks and valleys of highly varying sizes, and therefore more able to perfectly "fit" all your data points, at the expense of not necessarily being a good "prediction" of all other values.
Like the blue curve on this graphic:
You can see how the straight line would be a better "approximation", while not fitting the data points exactly.
How to compute x for any y value of the computed function
You "simply" need to solve the polynomial ! Using the very same library. Add the inverted y value to your coefficents list, and find its root.
Let's say you chose a degree of 2.
Your coefficients array coeffs will contains 3 factors {a0, a1, a2} which describes the equation as such:
If you want to solve this for a particular value, like y= 600, you need to solve :
So, basically,
So, just substract 600 to a0:
coeffs[0] -= 600;
and find the root of the polynomial using the dedicated function:
PolynomialFunction polynomial = new PolynomialFunction(coeffs);
LaguerreSolver laguerreSolver = new LaguerreSolver();
double x = laguerreSolver.solve(100, polynomial, 0, 1000000);
System.out.println("For y = 600, we found x = " + x);

XOR Neural Network (FF) converges to 0.5

I've created a program that allows me to create flexible neural networks of any size/length; however, I'm testing it using the simple structure of an XOR setup (feed-forward, sigmoid activation, backpropagation, no batching).
EDIT: The following is a completely new approach to my original question which didn't supply enough information
EDIT 2: I started my weights between -2.5 and 2.5 and fixed a problem in my code where I forgot some negatives. Now it either converges to 0 for all cases or to 1 for all cases, instead of 0.5.
Everything works exactly the way that I THINK it should; however, it converges toward 0.5 instead of settling on outputs of 0 and 1. I've completely gone through and hand-calculated an entire setup of feeding forward, calculating delta errors, backpropagation, etc., and it matched what I got from the program. I have also tried optimizing it by changing the learning rate/momentum, as well as increasing the complexity of the network (more neurons/layers).
Because of this, I assume that either one of my equations is wrong, or I have some other sort of misunderstanding in my Neural Network. The following is the logic with equations that I follow for each step:
I have an input layer with two inputs and a bias, a hidden layer with 2 neurons and a bias, and an output layer with 1 neuron.
Take the input from each of the two input neurons and the bias neuron, then multiply them by their respective weights, and then add them together as the input for each of the two neurons in the hidden layer.
Take the input of each hidden neuron, pass it through the Sigmoid activation function (Reference 1) and use that as the neuron's output.
Take the outputs of each neuron in hidden layer (1 for the bias), multiply them by their respective weights, and add those values to the output neuron's input.
Pass the output neuron's input through the Sigmoid activation function, and use that as the output for the whole network.
Calculate the Delta Error(Reference 2) for the output neuron
Calculate the Delta Error(Reference 3) for each of the 2 hidden neurons
Calculate the Gradient(Reference 4) for each weight (starting from the end and working back)
Calculate the Delta Weight(Reference 5) for each weight, and add that to its value.
Start the process over by changing the inputs and expected output (Reference 6).
Here are the specifics of the referenced equations/processes (this is probably where my problem is!):
Reference 1 (sigmoid), where x is the input of the neuron: (1/(1 + Math.pow(Math.E, (-1 * x))))
Reference 2 (output delta error): -1*(actualOutput - expectedOutput)*(Sigmoid(x) * (1 - Sigmoid(x))) // same sigmoid as in Reference 1
Reference 3 (hidden delta error): SigmoidDerivative(Neuron.input)*(the sum of (Neuron.Weights * the deltaError of the neuron they connect to))
Reference 4 (gradient): ParentNeuron.output * NeuronItConnectsTo.deltaError
Reference 5 (delta weight): learningRate*(weight.gradient) + momentum*(previous delta weight)
Reference 6: I have an ArrayList with the values 0, 1, 1, 0 in that order. It takes the first pair (0, 1) and then expects a 1. The second time through, it takes the second pair (1, 1) and expects a 0. It just keeps iterating through the list for each new set. Perhaps training it in this systematic way causes the problem?
Like I said before, the reason I don't think it's a code problem is that it matched exactly what I had calculated with paper and pencil (which wouldn't have happened if there were a coding error).
Also when I initialize my weights the first time, I give them a random double value between 0 and 1. This article suggests that that may lead to a problem: Neural Network with backpropogation not converging
Could that be it? I used the n^(-1/2) rule but that did not fix it.
If I can be more specific or you want other code let me know, thanks!
This is wrong:
SigmoidDerivative(Neuron.input)*(the sum of (Neuron.Weights * the deltaError of the neuron they connect to))
The first function below is the sigmoid activation (g); the second is the derivative of the sigmoid activation (gD). Note that gD takes the neuron's activation g(z) as its argument, not the raw input z:
private double g(double z) {
    return 1 / (1 + Math.exp(-z)); // e^(-z); equivalent to Math.pow(Math.E, -z)
}

private double gD(double gZ) {
    return gZ * (1 - gZ); // gZ is the activation g(z), not the raw input
}
Unrelated note: your notation of (-1*x) is really strange; just use -x.
From how you phrase the steps of your ANN, your implementation seems poor. Try to focus on implementing ForwardPropagation/BackPropagation and then an UpdateWeights method.
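To make that structure concrete, here is a hedged, self-contained 2-2-1 XOR sketch in Java (my own illustration, not the asker's code; the names and training schedule are made up). It folds the forward pass, the delta computation, and a plain gradient-descent update into one trainStep for brevity:

import java.util.Random;

public class XorNet {
    static final Random RNG = new Random();
    // wHidden[j][k]: weight from input k (2 inputs + bias) to hidden neuron j
    static double[][] wHidden = randomMatrix(2, 3);
    // wOut[k]: weight from hidden unit k (2 hidden + bias) to the output neuron
    static double[] wOut = randomMatrix(1, 3)[0];

    static double[][] randomMatrix(int rows, int cols) {
        double[][] m = new double[rows][cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                m[i][j] = RNG.nextDouble() * 5 - 2.5; // weights in [-2.5, 2.5]
        return m;
    }

    static double g(double z)  { return 1 / (1 + Math.exp(-z)); } // sigmoid
    static double gD(double a) { return a * (1 - a); }            // takes the activation, not the raw input

    static double trainStep(double x1, double x2, double target, double lr) {
        double[] in = {x1, x2, 1};   // inputs plus bias
        double[] hidden = {0, 0, 1}; // hidden activations plus bias
        for (int j = 0; j < 2; j++) {
            double z = 0;
            for (int k = 0; k < 3; k++) z += wHidden[j][k] * in[k];
            hidden[j] = g(z);
        }
        double zOut = 0;
        for (int k = 0; k < 3; k++) zOut += wOut[k] * hidden[k];
        double out = g(zOut);

        // Backpropagation: output delta, then hidden deltas.
        double deltaOut = (out - target) * gD(out);
        double[] deltaHidden = new double[2];
        for (int j = 0; j < 2; j++) deltaHidden[j] = gD(hidden[j]) * wOut[j] * deltaOut;

        // Weight update (plain gradient descent; no momentum, to keep the sketch short).
        for (int k = 0; k < 3; k++) wOut[k] -= lr * deltaOut * hidden[k];
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 3; k++) wHidden[j][k] -= lr * deltaHidden[j] * in[k];
        return out;
    }

    public static void main(String[] args) {
        double[][] data = {{0, 0, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 0}};
        for (int epoch = 0; epoch < 20000; epoch++)
            for (double[] d : data) trainStep(d[0], d[1], d[2], 0.5);
        for (double[] d : data) // lr = 0 means forward pass only, no update
            System.out.printf("%d XOR %d -> %.3f%n", (int) d[0], (int) d[1], trainStep(d[0], d[1], d[2], 0));
    }
}

With an unlucky initialization this can still land in a local minimum, but most runs print values close to 0 and 1, which is the behavior the question is after.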
Creating a matrix class
This is my Java implementation; it's very simple and somewhat rough. I use a Matrix class to make the math behind it appear very simple in code.
If you can code in C++ you can overload operators, which makes it even easier to write comprehensible code.
https://github.com/josephjaspers/ArtificalNetwork/blob/master/src/artificalnetwork/ArtificalNetwork.java
Here are the algorithms (C++)
All of this code can be found on my GitHub (the neural nets are simple and functional).
Each layer includes the bias nodes, which is why there are offsets.
void NeuralNet::forwardPropagation(std::vector<double> data) {
    setBiasPropogation(); // sets every bias node's activation to 1
    a(0).set(1, Matrix(data)); // set(1, ...) offsets past the bias unit (A = X)
    for (int i = 1; i < layers; ++i) {
        z(i).set(1, w(i - 1) * a(i - 1)); // set(1, ...) offsets past the bias unit
        a(i) = g(z(i)); // g(z) is the sigmoid function
    }
}
void NeuralNet::setBiasPropogation() {
    for (int i = 0; i < activation.size(); ++i) {
        a(i).set(0, 0, 1);
    }
}
Output layer: D = A - Y (Y is the expected output data)
Hidden layers: d^l = (w^l(T) * d^(l+1)) *: gD(a^l)
d = delta (error) vector
W = weights matrix (length = connections, width = features)
a = activation matrix
gD = derivative function
^l = NOT a power (it just means "at layer l")
* = dot product
*: = element-wise multiply (multiply each element "through")
cpy(n) returns a copy of the matrix offset by n (ignores the first n rows)
void NeuralNet::backwardPropagation(std::vector<double> output) {
    d(layers - 1) = a(layers - 1) - Matrix(output);
    for (int i = layers - 2; i > -1; --i) {
        d(i) = (w(i).T() * d(i + 1).cpy(1)).x(gD(a(i)));
    }
}
Explaining this code may be confusing without images, so I'm sending this link, which I think is a good source; it also contains an explanation of backpropagation which may be better than my own.
http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html
void NeuralNet::updateWeights() {
    // the operator () (int l, int w) returns a double reference at that position in the matrix
    // the operator [] (int n) returns the nth double (reference) in the matrix (useful for vectors)
    for (int l = 0; l < layers - 1; ++l) {
        for (int i = 1; i < d(l + 1).length(); ++i) {
            for (int j = 0; j < a(l).length(); ++j) {
                w(l)(i - 1, j) -= (d(l + 1)[i] * a(l)[j]) * learningRate + m(l)(i - 1, j);
                m(l)(i - 1, j) = (d(l + 1)[i] * a(l)[j]) * learningRate * momentumRate;
            }
        }
    }
}

Is this a "good enough" random algorithm; why isn't it used if it's faster?

I made a class called QuickRandom, and its job is to produce random numbers quickly. It's really simple: just take the old value, multiply by a double, and take the decimal part.
Here is my QuickRandom class in its entirety:
public class QuickRandom {
    private double prevNum;
    private double magicNumber;

    public QuickRandom(double seed1, double seed2) {
        if (seed1 >= 1 || seed1 < 0) throw new IllegalArgumentException("Seed 1 must be >= 0 and < 1, not " + seed1);
        prevNum = seed1;
        if (seed2 <= 1 || seed2 > 10) throw new IllegalArgumentException("Seed 2 must be > 1 and <= 10, not " + seed2);
        magicNumber = seed2;
    }

    public QuickRandom() {
        this(Math.random(), Math.random() * 10);
    }

    public double random() {
        return prevNum = (prevNum * magicNumber) % 1;
    }
}
And here is the code I wrote to test it:
public static void main(String[] args) {
    QuickRandom qr = new QuickRandom();
    /*for (int i = 0; i < 20; i++) {
        System.out.println(qr.random());
    }*/
    // Warm up
    for (int i = 0; i < 10000000; i++) {
        Math.random();
        qr.random();
        System.nanoTime();
    }

    long oldTime;

    oldTime = System.nanoTime();
    for (int i = 0; i < 100000000; i++) {
        Math.random();
    }
    System.out.println(System.nanoTime() - oldTime);

    oldTime = System.nanoTime();
    for (int i = 0; i < 100000000; i++) {
        qr.random();
    }
    System.out.println(System.nanoTime() - oldTime);
}
It is a very simple algorithm that simply multiplies the previous double by a "magic number" double. I threw it together pretty quickly, so I could probably make it better, but strangely, it seems to be working fine.
This is sample output of the commented-out lines in the main method:
0.612201846732229
0.5823974655091941
0.31062451498865684
0.8324473610354004
0.5907187526770246
0.38650264675748947
0.5243464344127049
0.7812828761272188
0.12417247811074805
0.1322738256858378
0.20614642573072284
0.8797579436677381
0.022122999476108518
0.2017298328387873
0.8394849894162446
0.6548917685640614
0.971667953190428
0.8602096647696964
0.8438709031160894
0.694884972852229
Hm. Pretty random. In fact, that would work for a random number generator in a game.
Here is sample output of the non-commented out part:
5456313909
1427223941
Wow! It performs almost 4 times faster than Math.random.
I remember reading somewhere that Math.random used System.nanoTime() and tons of crazy modulus and division stuff. Is that really necessary? My algorithm performs a lot faster and it seems pretty random.
I have two questions:
Is my algorithm "good enough" (for, say, a game, where really random numbers aren't too important)?
Why does Math.random do so much when it seems just simple multiplication and cutting out the decimal will suffice?
Your QuickRandom implementation doesn't really have a uniform distribution. The frequencies are generally higher at the lower values, while Math.random() has a more uniform distribution. Here's an SSCCE which shows that:
package com.stackoverflow.q14491966;

import java.util.Arrays;

public class Test {

    public static void main(String[] args) throws Exception {
        QuickRandom qr = new QuickRandom();
        int[] frequencies = new int[10];
        for (int i = 0; i < 100000; i++) {
            frequencies[(int) (qr.random() * 10)]++;
        }
        printDistribution("QR", frequencies);

        frequencies = new int[10];
        for (int i = 0; i < 100000; i++) {
            frequencies[(int) (Math.random() * 10)]++;
        }
        printDistribution("MR", frequencies);
    }

    public static void printDistribution(String name, int[] frequencies) {
        System.out.printf("%n%s distribution |8000 |9000 |10000 |11000 |12000%n", name);
        for (int i = 0; i < 10; i++) {
            char[] bar = new char[50];
            Arrays.fill(bar, ' '); // 50 chars, initially all spaces
            Arrays.fill(bar, 0, Math.max(0, Math.min(50, frequencies[i] / 100 - 80)), '#');
            System.out.printf("0.%dxxx: %6d :%s%n", i, frequencies[i], new String(bar));
        }
    }
}
The average result looks like this:
QR distribution |8000 |9000 |10000 |11000 |12000
0.0xxx: 11376 :#################################
0.1xxx: 11178 :###############################
0.2xxx: 11312 :#################################
0.3xxx: 10809 :############################
0.4xxx: 10242 :######################
0.5xxx: 8860 :########
0.6xxx: 9004 :##########
0.7xxx: 8987 :#########
0.8xxx: 9075 :##########
0.9xxx: 9157 :###########
MR distribution |8000 |9000 |10000 |11000 |12000
0.0xxx: 10097 :####################
0.1xxx: 9901 :###################
0.2xxx: 10018 :####################
0.3xxx: 9956 :###################
0.4xxx: 9974 :###################
0.5xxx: 10007 :####################
0.6xxx: 10136 :#####################
0.7xxx: 9937 :###################
0.8xxx: 10029 :####################
0.9xxx: 9945 :###################
If you repeat the test, you'll see that the QR distribution varies heavily depending on the initial seeds, while the MR distribution is stable. Sometimes it reaches the desired uniform distribution, but more often than not it doesn't. Here's one of the more extreme examples; it's even beyond the borders of the graph:
QR distribution |8000 |9000 |10000 |11000 |12000
0.0xxx: 41788 :##################################################
0.1xxx: 17495 :##################################################
0.2xxx: 10285 :######################
0.3xxx: 7273 :
0.4xxx: 5643 :
0.5xxx: 4608 :
0.6xxx: 3907 :
0.7xxx: 3350 :
0.8xxx: 2999 :
0.9xxx: 2652 :
What you are describing is a type of random generator called a linear congruential generator. The generator works as follows:
Start with a seed value and multiplier.
To generate a random number:
Multiply the seed by the multiplier.
Set the seed equal to this value.
Return this value.
This generator has many nice properties, but has significant problems as a good random source. The Wikipedia article linked above describes some of the strengths and weaknesses. In short, if you need good random values, this is probably not a very good approach.
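For comparison, the classic integer form of such a generator takes only a few lines. This is a generic sketch (the constants are the commonly cited Numerical Recipes values; they are not what QuickRandom or java.util.Random uses):

// Minimal linear congruential generator sketch: state = (state * a + c) mod m.
public class Lcg {
    private long state;

    public Lcg(long seed) {
        this.state = seed;
    }

    /** Returns a pseudo-random double in [0, 1). */
    public double next() {
        state = (state * 1664525L + 1013904223L) & 0xFFFFFFFFL; // modulus 2^32
        return state / (double) (1L << 32);
    }
}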
Your random number function is poor, as it has too little internal state -- the number output by the function at any given step is entirely dependent on the previous number. For instance, if we assume that magicNumber is 2 (by way of example), then the sequence:
0.10 -> 0.20
is strongly mirrored by similar sequences:
0.09 -> 0.18
0.11 -> 0.22
In many cases, this will generate noticeable correlations in your game -- for instance, if you make successive calls to your function to generate X and Y coordinates for objects, the objects will form clear diagonal patterns.
Unless you have good reason to believe that the random number generator is slowing your application down (and this is VERY unlikely), there is no good reason to try and write your own.
The real problem with this is that its output histogram depends far too much on the initial seed: much of the time it will end up with near-uniform output, but a lot of the time it will have distinctly non-uniform output.
Inspired by this article about how bad PHP's rand() function is, I made some random matrix images using QuickRandom and System.Random. This run shows how sometimes the seed can have a bad effect (in this case favouring lower numbers), whereas System.Random is pretty uniform.
QuickRandom
System.Random
Even Worse
If we initialise QuickRandom as new QuickRandom(0.01, 1.03) we get this image:
The Code
using System;
using System.Drawing;
using System.Drawing.Imaging;

namespace QuickRandomTest
{
    public class QuickRandom
    {
        private double prevNum;
        private readonly double magicNumber;

        private static readonly Random rand = new Random();

        public QuickRandom(double seed1, double seed2)
        {
            if (seed1 >= 1 || seed1 < 0) throw new ArgumentException("Seed 1 must be >= 0 and < 1, not " + seed1);
            prevNum = seed1;
            if (seed2 <= 1 || seed2 > 10) throw new ArgumentException("Seed 2 must be > 1 and <= 10, not " + seed2);
            magicNumber = seed2;
        }

        public QuickRandom()
            : this(rand.NextDouble(), rand.NextDouble() * 10)
        {
        }

        public double Random()
        {
            return prevNum = (prevNum * magicNumber) % 1;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var rand = new Random();
            var qrand = new QuickRandom();
            int w = 600;
            int h = 600;
            CreateMatrix(w, h, rand.NextDouble).Save("System.Random.png", ImageFormat.Png);
            CreateMatrix(w, h, qrand.Random).Save("QuickRandom.png", ImageFormat.Png);
        }

        private static Image CreateMatrix(int width, int height, Func<double> f)
        {
            var bitmap = new Bitmap(width, height);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    var c = (int) (f() * 255);
                    bitmap.SetPixel(x, y, Color.FromArgb(c, c, c));
                }
            }
            return bitmap;
        }
    }
}
One problem with your random number generator is that there is no 'hidden state' - if I know what random number you returned on the last call, I know every single random number you will send until the end of time, since there is only one possible next result, and so on and so on.
Another thing to consider is the 'period' of your random number generator. Obviously with a finite state size, equal to the mantissa portion of a double, it will only be able to return at most 2^52 values before looping. But that's in the best case - can you prove that there are no loops of period 1, 2, 3, 4...? If there are, your RNG will have awful, degenerate behavior in those cases.
In addition, will your random number generation have a uniform distribution for all starting points? If it does not, then your RNG will be biased - or worse, biased in different ways depending on the starting seed.
If you can answer all of these questions, awesome. If you can't, then you know why most people do not re-invent the wheel and use a proven random number generator ;)
(By the way, a good adage is: The fastest code is code that does not run. You could make the fastest random() in the world, but it's no good if it is not very random)
One common test I always did when developing PRNGs was to:
Convert the output to char values
Write the char values to a file
Compress the file
This let me quickly iterate on ideas that were "good enough" PRNGs for sequences of around 1 to 20 megabytes. It also gave a better top-down picture than just inspecting it by eye, as any "good enough" PRNG with half a word of state could quickly exceed your eyes' ability to see the cycle point.
If I was really picky, I might take the good algorithms and run the DIEHARD/NIST tests on them, to get more of an insight, and then go back and tweak some more.
The advantage of the compression test over a frequency analysis is that it is trivially easy to construct a good distribution: simply output a 256-byte block containing every char value from 0 to 255, and do this 100,000 times. But this sequence has a cycle of length 256.
A skewed distribution, even by a small margin, should be picked up by a compression algorithm, particularly if you give it enough of the sequence (say 1 megabyte) to work with. If some characters, bigrams, or n-grams occur more frequently, a compression algorithm can encode this distribution skew as codes that favor the frequent occurrences with shorter code words, and you get a delta of compression.
Since most compression algorithms are fast and require no implementation (OSs have them just lying around), the compression test is a very useful one for quickly rating pass/fail for a PRNG you might be developing.
Good luck with your experiments!
Oh, I performed this test on the RNG you have above, using the following small mod of your code:
import java.io.*;

public class QuickRandom {
    private double prevNum;
    private double magicNumber;

    public QuickRandom(double seed1, double seed2) {
        if (seed1 >= 1 || seed1 < 0) throw new IllegalArgumentException("Seed 1 must be >= 0 and < 1, not " + seed1);
        prevNum = seed1;
        if (seed2 <= 1 || seed2 > 10) throw new IllegalArgumentException("Seed 2 must be > 1 and <= 10, not " + seed2);
        magicNumber = seed2;
    }

    public QuickRandom() {
        this(Math.random(), Math.random() * 10);
    }

    public double random() {
        return prevNum = (prevNum * magicNumber) % 1;
    }

    public static void main(String[] args) throws Exception {
        QuickRandom qr = new QuickRandom();
        FileOutputStream fout = new FileOutputStream("qr20M.bin");
        for (int i = 0; i < 20000000; i++) {
            fout.write((char) (qr.random() * 256));
        }
    }
}
The results were :
Cris-Mac-Book-2:rt cris$ zip -9 qr20M.zip qr20M.bin2
adding: qr20M.bin2 (deflated 16%)
Cris-Mac-Book-2:rt cris$ ls -al
total 104400
drwxr-xr-x 8 cris staff 272 Jan 25 05:09 .
drwxr-xr-x+ 48 cris staff 1632 Jan 25 05:04 ..
-rw-r--r-- 1 cris staff 1243 Jan 25 04:54 QuickRandom.class
-rw-r--r-- 1 cris staff 883 Jan 25 05:04 QuickRandom.java
-rw-r--r-- 1 cris staff 16717260 Jan 25 04:55 qr20M.bin.gz
-rw-r--r-- 1 cris staff 20000000 Jan 25 05:07 qr20M.bin2
-rw-r--r-- 1 cris staff 16717402 Jan 25 05:09 qr20M.zip
I would consider a PRNG good if the output file could not be compressed at all.
To be honest, I did not think your PRNG would do so well; only 16% on ~20 megabytes is pretty impressive for such a simple construction. But I still consider it a fail.
The fastest random generator you could implement is this:
XD, jokes aside, besides everything said here, I'd like to contribute by citing that testing random sequences "is a hard task" [1], and there are several tests that check certain properties of pseudo-random numbers. You can find a lot of them here: http://www.random.org/analysis/#2005
One simple way to evaluate the "quality" of a random generator is the old chi-square test.
static double chisquare(int numberCount, int maxRandomNumber) {
    long[] f = new long[maxRandomNumber];
    for (long i = 0; i < numberCount; i++) {
        f[randomint(maxRandomNumber)]++;
    }
    long t = 0;
    for (int i = 0; i < maxRandomNumber; i++) {
        t += f[i] * f[i];
    }
    return (((double) maxRandomNumber * t) / numberCount) - (double) (numberCount);
}
Citing [1]:
The idea of the χ² test is to check whether or not the numbers produced are spread out reasonably. If we generate N positive numbers less than r, then we'd expect to get about N / r numbers of each value. But (and this is the essence of the matter) the frequencies of occurrence of all the values should not be exactly the same: that wouldn't be random!
We simply calculate the sum of the squares of the frequencies of occurrence of each value, scaled by the expected frequency, and then subtract off the size of the sequence. This number, the "χ² statistic," may be expressed mathematically as
χ² = (r * Σ f_i^2) / N - N, where f_i is the frequency of occurrence of value i.
If the χ² statistic is close to r, then the numbers are random; if it is too far away, then they are not. The notions of "close" and "far away" can be more precisely defined: tables exist that tell exactly how to relate the statistic to properties of random sequences. For the simple test that we're performing, the statistic should be within 2√r.
Using this theory and the following code:
abstract class RandomFunction {
    public abstract int randomint(int range);
}

public class test {
    static QuickRandom qr = new QuickRandom();

    static double chisquare(int numberCount, int maxRandomNumber, RandomFunction function) {
        long[] f = new long[maxRandomNumber];
        for (long i = 0; i < numberCount; i++) {
            f[function.randomint(maxRandomNumber)]++;
        }
        long t = 0;
        for (int i = 0; i < maxRandomNumber; i++) {
            t += f[i] * f[i];
        }
        return (((double) maxRandomNumber * t) / numberCount) - (double) (numberCount);
    }

    public static void main(String[] args) {
        final int ITERATION_COUNT = 1000;
        final int N = 5000000;
        final int R = 100000;

        double total = 0.0;
        RandomFunction qrRandomInt = new RandomFunction() {
            @Override
            public int randomint(int range) {
                return (int) (qr.random() * range);
            }
        };
        for (int i = 0; i < ITERATION_COUNT; i++) {
            total += chisquare(N, R, qrRandomInt);
        }
        System.out.printf("Ave Chi2 for QR: %f \n", total / ITERATION_COUNT);

        total = 0.0;
        RandomFunction mathRandomInt = new RandomFunction() {
            @Override
            public int randomint(int range) {
                return (int) (Math.random() * range);
            }
        };
        for (int i = 0; i < ITERATION_COUNT; i++) {
            total += chisquare(N, R, mathRandomInt);
        }
        System.out.printf("Ave Chi2 for Math.random: %f \n", total / ITERATION_COUNT);
    }
}
I got the following result:
Ave Chi2 for QR: 108965,078640
Ave Chi2 for Math.random: 99988,629040
Which, for QuickRandom, is far away from r (outside r ± 2 * sqrt(r)).
That being said, QuickRandom may be fast, but (as stated in other answers) it is not good as a random number generator.
[1] SEDGEWICK, Robert, Algorithms in C, Addison-Wesley Publishing Company, 1990, pages 516 to 518
I put together a quick mock-up of your algorithm in JavaScript to evaluate the results. It generates 100,000 random integers from 0 to 99 and tracks the number of instances of each integer.
The first thing I notice is that you are more likely to get a low number than a high number. You see this the most when seed1 is high and seed2 is low. In a couple of instances, I got only 3 numbers.
At best, your algorithm needs some refining.
If the Math.Random() function calls the operating system to get the time of day, then you cannot compare it to your function. Your function is a PRNG, whereas that function is striving for real random numbers. Apples and oranges.
Your PRNG may be fast, but it does not have enough state information to achieve a long period before it repeats (and its logic is not sophisticated enough to even achieve the periods that are possible with that much state information).
Period is the length of the sequence before your PRNG begins to repeat itself. This happens as soon as the PRNG machine makes a state transition to a state which is identical to some past state. From there, it will repeat the transitions which began in that state. Another problem with PRNGs can be a low number of unique sequences, as well as degenerate convergence on a particular sequence which repeats. There can also be undesirable patterns. For instance, suppose that a PRNG looks fairly random when the numbers are printed in decimal, but an inspection of the values in binary shows that bit 4 is simply toggling between 0 and 1 on each call. Oops!
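As a rough empirical check (my own sketch, not from this answer; it assumes the QuickRandom class from the question is on the classpath), you can hunt for short cycles by remembering previously seen values. Because QuickRandom's entire state is the one double it returns, a repeated value means the sequence has entered a loop:

import java.util.HashMap;
import java.util.Map;

public class CycleCheck {
    public static void main(String[] args) {
        QuickRandom qr = new QuickRandom();
        Map<Double, Integer> seen = new HashMap<>();
        for (int i = 0; i < 1000000; i++) {
            double v = qr.random();
            Integer first = seen.put(v, i);
            if (first != null) {
                // The state repeated, so everything after 'first' repeats forever.
                System.out.println("Cycle of length " + (i - first) + " entered after step " + first);
                return;
            }
        }
        System.out.println("No repeat within 1,000,000 draws");
    }
}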
Take a look at the Mersenne Twister and other algorithms. There are ways to strike a balance between the period length and CPU cycles. One basic approach (used in the Mersenne Twister) is to cycle around in the state vector. That is to say, when a number is being generated, it is not based on the entire state, just on a few words from the state array subject to a few bit operations. But at each step, the algorithm also moves around in the array, scrambling the contents a little bit at a time.
There are many, many pseudo-random number generators out there, for example Knuth's ran_array, the Mersenne Twister, or LFSR generators. Knuth's monumental "Seminumerical Algorithms" analyzes the area and proposes some linear congruential generators (simple to implement, fast).
But I'd suggest you just stick to java.util.Random or Math.random; they're fast and at least OK for occasional use (i.e., games and such). If you are just paranoid about the distribution (some Monte Carlo program, or a genetic algorithm), check out their implementation (the source is available somewhere), and seed them with some truly random number, either from your operating system or from random.org. If this is required for some application where security is critical, you'll have to dig in yourself. And since in that case you shouldn't believe what some colored square with missing bits spouts here, I'll shut up now.
It is very unlikely that random number generation performance would be an issue for any use case you come up with, unless you are accessing a single Random instance from multiple threads (because Random is synchronized).
However, if that really is the case and you need lots of random numbers fast, your solution is far too unreliable. Sometimes it gives good results, sometimes it gives horrible results (based on the initial settings).
If you want the same numbers that the Random class gives you, only faster, you could get rid of the synchronization in there:
public class QuickRandom {
    private long seed;

    private static final long MULTIPLIER = 0x5DEECE66DL;
    private static final long ADDEND = 0xBL;
    private static final long MASK = (1L << 48) - 1;

    public QuickRandom() {
        this((8682522807148012L * 181783497276652981L) ^ System.nanoTime());
    }

    public QuickRandom(long seed) {
        this.seed = (seed ^ MULTIPLIER) & MASK;
    }

    public double nextDouble() {
        return (((long) (next(26)) << 27) + next(27)) / (double) (1L << 53);
    }

    private int next(int bits) {
        seed = (seed * MULTIPLIER + ADDEND) & MASK;
        return (int) (seed >>> (48 - bits));
    }
}
I simply took the java.util.Random code and removed the synchronization, which results in twice the performance compared to the original on my Oracle HotSpot JVM 7u9. It is still slower than your QuickRandom, but it gives much more consistent results. To be precise, for the same seed values and single-threaded applications, it gives the same pseudo-random numbers as the original Random class would.
This code is based on the current java.util.Random in OpenJDK 7u which is licensed under GNU GPL v2.
EDIT 10 months later:
I just discovered that you don't even have to use my code above to get an unsynchronized Random instance. There's one in the JDK, too!
Look at Java 7's ThreadLocalRandom class. The code inside it is almost identical to my code above. The class is simply a thread-local, isolated Random version suitable for generating random numbers quickly. The only downside I can think of is that you can't set its seed manually.
Example usage:
Random random = ThreadLocalRandom.current();
'Random' is about more than just getting numbers... what you have is pseudo-random.
If pseudo-random is good enough for your purposes, then sure, it's way faster (and XOR+Bitshift will be faster than what you have)
Rolf
Edit:
OK, after being too hasty in this answer, let me answer the real reason why your code is faster:
From the JavaDoc for Math.random():
This method is properly synchronized to allow correct use by more than one thread. However, if many threads need to generate pseudorandom numbers at a great rate, it may reduce contention for each thread to have its own pseudorandom-number generator.
This is likely why your code is faster.
java.util.Random is not much different; it is a basic LCG described by Knuth. However, it has 2 main advantages/differences:
thread safe - each update is a CAS, which is more expensive than a simple write and needs a branch (even if perfectly predicted when single-threaded). Depending on the CPU it could be a significant difference.
undisclosed internal state - this is very important for anything non-trivial. You want the random numbers not to be predictable.
Below is the main routine generating 'random' integers in java.util.Random:
protected int next(int bits) {
    long oldseed, nextseed;
    AtomicLong seed = this.seed;
    do {
        oldseed = seed.get();
        nextseed = (oldseed * multiplier + addend) & mask;
    } while (!seed.compareAndSet(oldseed, nextseed));
    return (int) (nextseed >>> (48 - bits));
}
If you remove the AtomicLong and the undisclosed state (i.e. use all the bits of the long), you'd get more performance than the double multiplication/modulo.
Last note: Math.random should not be used for anything but simple tests; it's prone to contention, and if you have even a couple of threads calling it concurrently, the performance degrades. One little-known historical feature of it is the introduction of CAS in Java - to beat an infamous benchmark (first by IBM via intrinsics, and then Sun made "CAS from Java").
This is the random function I use for my games. It's pretty fast, and has good (enough) distribution.
public class FastRandom {
    public static int randSeed;

    public static final int random() {
        // this makes a 'nod' to being potentially called from multiple threads
        int seed = randSeed;
        seed *= 1103515245;
        seed += 12345;
        randSeed = seed;
        return seed;
    }

    public static final int random(int range) {
        return ((random() >>> 15) * range) >>> 17;
    }

    public static final boolean randomBoolean() {
        return random() > 0;
    }

    public static final float randomFloat() {
        return (random() >>> 8) * (1.f / (1 << 24));
    }

    public static final double randomDouble() {
        return (random() >>> 8) * (1.0 / (1 << 24));
    }
}
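For context, a small usage sketch of the class above (my own illustration; the seed choice and the ranges are arbitrary). randSeed starts at 0 unless you set it, so seeding it once at startup is a reasonable default:

public class FastRandomDemo {
    public static void main(String[] args) {
        FastRandom.randSeed = (int) System.nanoTime(); // seed once, e.g. at game start

        int roll = FastRandom.random(6) + 1;         // die roll in 1..6
        float x = FastRandom.randomFloat() * 800f;   // random x coordinate in [0, 800)
        boolean heads = FastRandom.randomBoolean();  // coin flip
        System.out.println(roll + " " + x + " " + heads);
    }
}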

Generating correlated numbers

Here is a fun one: I need to generate random x/y pairs that are correlated at a given value of the Pearson product-moment correlation coefficient, or Pearson r. You can imagine this as two arrays, array X and array Y, where the values of array X and array Y must be re-generated, re-ordered or transformed until they are correlated with each other at a given level of Pearson r. Here is the kicker: array X and array Y must be uniform distributions.
I can do this with a normal distribution, but transforming the values without skewing the distribution has me stumped. I tried re-ordering the values in the arrays to increase the correlation, but I will never get arrays correlated at 1.00 or -1.00 just by sorting.
Any ideas?
--
Here is the AS3 code for random correlated Gaussians, to get the wheels turning:
public static function nextCorrelatedGaussians(r:Number):Array {
    var d1:Number;
    var d2:Number;
    var n1:Number;
    var n2:Number;
    var lambda:Number;
    var arr:Array = new Array();
    var isNeg:Boolean;

    if (r < 0) {
        r *= -1;
        isNeg = true;
    }
    lambda = ((r * r) - Math.sqrt((r * r) - (r * r * r * r))) / ((2 * r * r) - 1);
    n1 = nextGaussian();
    n2 = nextGaussian();
    d1 = n1;
    d2 = ((lambda * n1) + ((1 - lambda) * n2)) / Math.sqrt((lambda * lambda) + (1 - lambda) * (1 - lambda));
    if (isNeg) { d2 *= -1; }
    arr.push(d1);
    arr.push(d2);
    return arr;
}
I ended up writing a short paper on this
It doesn't include your sorting method (although in practice I think it's similar to my first method, in a roundabout way), but does describe two ways that don't require iteration.
Here is an implementation of twolfe18's algorithm written in ActionScript 3:
for (var j:int = 0; j < size; j++) {
    xValues[j] = Math.random();
}
var varX:Number = Util.variance(xValues);
var varianceE:Number = 1 / (r * varX) - varX;
for (var i:int = 0; i < size; i++) {
    yValues[i] = xValues[i] + boxMuller(0, Math.sqrt(varianceE));
}
boxMuller is just a method that generates a random Gaussian with the arguments (mean, stdDev).
size is the size of the distribution.
Sample output
Target p: 0.8
Generated p: 0.04846346291280387
variance of x distribution: 0.0707786253165176
varianceE: 17.589920412141158
As you can see I'm still a ways off. Any suggestions?
This apparently simple question has been messing with my mind since yesterday evening! I looked into the topic of simulating distributions with a dependency, and the best I found is this: simulate dependent random variables. The gist of it is: you can easily simulate 2 normals with a given correlation, and the article outlines a method to transform these non-independent normals, but this won't preserve the correlation. The correlation of the transform will be correlated, so to speak, but not identical. See the paragraph "Rank correlation coefficients".
Edit: from what I gather from the second part of the article, the copula method would allow you to simulate / generate random variables with rank correlation.
Start with the model y = x + e, where e is the error (a normal random variable). e should have a mean of 0 and variance k.
Long story short, you can write a formula for the expected value of the Pearson in terms of k, and solve for k. Note: you cannot randomly generate data with the Pearson exactly equal to a specific value, only with the expected Pearson equal to a specific value.
I'll try to come back and edit this post to include a closed-form solution when I have access to some paper.
EDIT: OK, I have a hand-wavy solution that is probably correct (but will require testing to confirm). For now, assume the desired Pearson = p > 0 (you can figure out the p < 0 case). Like I mentioned earlier, set your model to Y = X + E (X is uniform, E is normal).
Sample to get your x's
Compute var(x)
The variance of E should be: (1 / (r * sd(x)))^2 - var(x)
Generate your y's based on your x's and a sample from your normal random variable E
For p < 0, set Y = -X + E and proceed accordingly.
Basically, this follows from the definition of Pearson: cov(x,y) / (var(x) * var(y)). When you add noise to the x's (Y = X + E), the expected covariance cov(x,y) should not change from that with no noise. var(x) does not change. var(y) is the sum of var(x) and var(e), hence my solution.
SECOND EDIT: OK, I need to read definitions better. The definition of Pearson is cov(x, y) / (sd(x) * sd(y)). From that, I think the true value of var(E) should be (1 / (r * sd(x)))^2 - var(x). See if that works.
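Working that definition through for Y = X + E with independent, zero-mean noise E: cov(X, Y) = var(X) and var(Y) = var(X) + var(E), so r = sd(X) / sqrt(var(X) + var(E)), which rearranges to var(E) = var(X) * (1/r^2 - 1). A hedged Java sketch of that recipe (my own illustration, not the answerer's code; note that Y is then no longer exactly uniform, which the question also asks for):

import java.util.Random;

// Generate X uniform on [0,1] and Y = X + E so that the *expected* Pearson r is targetR.
public class CorrelatedPairs {
    public static void main(String[] args) {
        int n = 100000;
        double targetR = 0.8;
        Random rng = new Random();

        double varX = 1.0 / 12.0; // variance of a uniform [0,1] variable
        double sdE = Math.sqrt(varX * (1 / (targetR * targetR) - 1)); // var(E) = var(X) * (1/r^2 - 1)

        double[] x = new double[n];
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            x[i] = rng.nextDouble();
            y[i] = x[i] + rng.nextGaussian() * sdE; // add zero-mean normal noise
        }
        System.out.println("target r = " + targetR + ", sample r = " + pearson(x, y));
    }

    static double pearson(double[] a, double[] b) {
        int n = a.length;
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n;
        mb /= n;
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < n; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va += (a[i] - ma) * (a[i] - ma);
            vb += (b[i] - mb) * (b[i] - mb);
        }
        return cov / Math.sqrt(va * vb);
    }
}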
To get a correlation of 1, both X and Y should be the same, so copy X to Y and you have a correlation of 1. To get a -1 correlation, make Y = 1 - X (assuming X values are in [0, 1]).
A strange problem demands a strange solution -- here is how I solved it.
- Generate array X
- Clone array X to create array Y
- Sort array X (you can use whatever method you want to sort array X: quicksort, heapsort, anything stable.)
- Measure the starting level of Pearson's r with array X sorted and array Y unsorted.
WHILE the correlation is outside of the range you are hoping for
    IF the correlation is too low
        run one iteration of CombSort11 on array Y, then recheck the correlation
    ELSE IF the correlation is too high
        randomly swap two values and recheck the correlation
And that's it! Comb sort is the real key; it has the effect of increasing the correlation slowly and steadily. Check out Jason Harrison's demo to see what I mean. To get a negative correlation you can invert the sort or invert one of the arrays after the whole process is complete.
Here is my implementation in AS3:
public static function nextReliableCorrelatedUniforms(r:Number, size:int, error:Number):Array {
    var yValues:Array = new Array;
    var xValues:Array = new Array;
    var coVar:Number = 0;
    for (var e:int = 0; e < size; e++) { // create x values
        xValues.push(Math.random());
    }
    yValues = xValues.concat();
    if (r != 1.0) {
        xValues.sort(Array.NUMERIC);
    }
    var trueR:Number = Util.getPearson(xValues, yValues);
    while (Math.abs(trueR - r) > error) {
        if (trueR < r - error) { // combsort11 for y
            var gap:int = yValues.length;
            var swapped:Boolean = true;
            while (trueR <= r - error) {
                if (gap > 1) {
                    gap = Math.round(gap / 1.3);
                }
                var i:int = 0;
                swapped = false;
                while (i + gap < yValues.length && trueR <= r - error) {
                    if (yValues[i] > yValues[i + gap]) {
                        var t:Number = yValues[i];
                        yValues[i] = yValues[i + gap];
                        yValues[i + gap] = t;
                        trueR = Util.getPearson(xValues, yValues);
                        swapped = true;
                    }
                    i++;
                }
            }
        } else { // decorrelate
            while (trueR >= r + error) {
                var a:int = Random.randomUniformIntegerBetween(0, size - 1);
                var b:int = Random.randomUniformIntegerBetween(0, size - 1);
                var temp:Number = yValues[a];
                yValues[a] = yValues[b];
                yValues[b] = temp;
                trueR = Util.getPearson(xValues, yValues);
            }
        }
    }
    var correlates:Array = new Array;
    for (var h:int = 0; h < size; h++) {
        var pair:Array = new Array(xValues[h], yValues[h]);
        correlates.push(pair);
    }
    return correlates;
}
