Apache Commons Math3: Multiply row with column vector - java

I want to multiply two vectors a^T = (1, 2, 3) and b = (4, 5, 6). With pen and paper, I get
c = 1*4 + 2*5 + 3*6 = 4 + 10 + 18 = 32
With apache commons math3 I do
ArrayRealVector a = new ArrayRealVector(new double []{1, 2, 3});
ArrayRealVector b = new ArrayRealVector(new double []{4, 5, 6});
to get a representation of the vectors. And to get the result I want to do something like
double c = a.transpose().multiply(b);
but I can't find the right methods for it (neither transpose nor multiply).

This is the dot product, which you can compute with double c = a.dotProduct(b);
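Here is a complete, runnable sketch (the class name is mine, for illustration only):

import org.apache.commons.math3.linear.ArrayRealVector;

public class DotProductExample {
    public static void main(String[] args) {
        ArrayRealVector a = new ArrayRealVector(new double[]{1, 2, 3});
        ArrayRealVector b = new ArrayRealVector(new double[]{4, 5, 6});
        // a^T * b collapses to a scalar, so no explicit transpose is needed
        double c = a.dotProduct(b);
        System.out.println(c); // prints 32.0
    }
}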

Related

What is the range of improved Perlin noise?

I'm trying to find the theoretical output range of improved Perlin noise for 1, 2 and 3 dimensions. I'm aware of existing answers to this question, but they don't seem to accord with my practical findings.
If n is the number of dimensions then according to [1] it should be [-sqrt(n/4), sqrt(n/4)]. According to [2] (which refers to [3]) it should be [-0.5·sqrt(n), 0.5·sqrt(n)] (which amounts to the same thing).
This means that the ranges should be approximately:
Dimensions   Range
1            [-0.5, 0.5]
2            [-0.707, 0.707]
3            [-0.866, 0.866]
However, when I run the following code (which uses Ken Perlin's own reference implementation of improved noise from his website), I get higher values for 2 and 3 dimensions, namely approximately:

Dimensions   Range
1            [-0.5, 0.5]
2            [-0.891, 0.999]
3            [-0.997, 0.999]
With different permutations I even sometimes get values slightly over 1.0 for 3 dimensions, and for some strange reason one of the bounds for 2 dimensions always seems to be about 0.89 while the other is about 1.00.
I can't figure out whether this is due to a bug in my code (I don't see how since this is Ken Perlin's own code) or due to those discussions not being correct or not being applicable somehow, in which case I would like to know what the theoretical ranges are for improved Perlin noise.
Can you replicate this? Are the results wrong, or can you point me to a discussion of the theoretical values that accords with this outcome?
The code:
import java.security.SecureRandom;
import java.util.Random;

public class PerlinTest {
    public static void main(String[] args) {
        double lowest1DValue = Double.MAX_VALUE, highest1DValue = -Double.MAX_VALUE;
        double lowest2DValue = Double.MAX_VALUE, highest2DValue = -Double.MAX_VALUE;
        double lowest3DValue = Double.MAX_VALUE, highest3DValue = -Double.MAX_VALUE;
        final Random random = new SecureRandom();
        for (int i = 0; i < 10000000; i++) {
            double value = noise(random.nextDouble() * 256.0, 0.0, 0.0);
            if (value < lowest1DValue) {
                lowest1DValue = value;
            }
            if (value > highest1DValue) {
                highest1DValue = value;
            }
            value = noise(random.nextDouble() * 256.0, random.nextDouble() * 256.0, 0.0);
            if (value < lowest2DValue) {
                lowest2DValue = value;
            }
            if (value > highest2DValue) {
                highest2DValue = value;
            }
            value = noise(random.nextDouble() * 256.0, random.nextDouble() * 256.0, random.nextDouble() * 256.0);
            if (value < lowest3DValue) {
                lowest3DValue = value;
            }
            if (value > highest3DValue) {
                highest3DValue = value;
            }
        }
        System.out.println("Lowest 1D value: " + lowest1DValue);
        System.out.println("Highest 1D value: " + highest1DValue);
        System.out.println("Lowest 2D value: " + lowest2DValue);
        System.out.println("Highest 2D value: " + highest2DValue);
        System.out.println("Lowest 3D value: " + lowest3DValue);
        System.out.println("Highest 3D value: " + highest3DValue);
    }

    static public double noise(double x, double y, double z) {
        int X = (int)Math.floor(x) & 255,             // FIND UNIT CUBE THAT
            Y = (int)Math.floor(y) & 255,             // CONTAINS POINT.
            Z = (int)Math.floor(z) & 255;
        x -= Math.floor(x);                           // FIND RELATIVE X,Y,Z
        y -= Math.floor(y);                           // OF POINT IN CUBE.
        z -= Math.floor(z);
        double u = fade(x),                           // COMPUTE FADE CURVES
               v = fade(y),                           // FOR EACH OF X,Y,Z.
               w = fade(z);
        int A = p[X  ]+Y, AA = p[A]+Z, AB = p[A+1]+Z, // HASH COORDINATES OF
            B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z; // THE 8 CUBE CORNERS,

        return lerp(w, lerp(v, lerp(u, grad(p[AA  ], x  , y  , z   ),  // AND ADD
                                       grad(p[BA  ], x-1, y  , z   )), // BLENDED
                               lerp(u, grad(p[AB  ], x  , y-1, z   ),  // RESULTS
                                       grad(p[BB  ], x-1, y-1, z   ))),// FROM 8
                       lerp(v, lerp(u, grad(p[AA+1], x  , y  , z-1 ),  // CORNERS
                                       grad(p[BA+1], x-1, y  , z-1 )), // OF CUBE
                               lerp(u, grad(p[AB+1], x  , y-1, z-1 ),
                                       grad(p[BB+1], x-1, y-1, z-1 ))));
    }

    static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }

    static double lerp(double t, double a, double b) { return a + t * (b - a); }

    static double grad(int hash, double x, double y, double z) {
        int h = hash & 15;                       // CONVERT LO 4 BITS OF HASH CODE
        double u = h < 8 ? x : y,                // INTO 12 GRADIENT DIRECTIONS.
               v = h < 4 ? y : h == 12 || h == 14 ? x : z;
        return ((h & 1) == 0 ? u : -u) + ((h & 2) == 0 ? v : -v);
    }

    static final int p[] = new int[512], permutation[] = { 151,160,137,91,90,15,
        131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
        190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
        88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
        77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
        102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
        135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
        5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
        223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
        129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
        251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
        49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
        138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
    };

    static { for (int i = 0; i < 256; i++) p[256 + i] = p[i] = permutation[i]; }
}
Ken’s not using unit vectors. As [1] says, with my emphasis:
Third, there are many different ways to select the random vectors at the grid cell corners. In Improved Perlin noise, instead of selecting any random vector, one of 12 vectors pointing to the edges of a cube are used instead. Here, I will talk strictly about a continuous range of angles since it is easier – however, the range of value of an implementation of Perlin noise using a restricted set of vectors will never be larger. Finally, the script in this repository assumes the vectors are of unit length. If they are not, the range of value should be scaled according to the maximum vector length. Note that the vectors in Improved Perlin noise are not unit length.
For Ken’s improved noise, the maximum vector length is 1 in 1D and √2 in 2D, so the theoretical bounds are [−0.5, 0.5] in 1D and [−1, 1] in 2D. I don’t know why you’re not seeing the full range in 2D; if you shuffled the permutation I bet you would sometimes.
For 3D, the maximum vector length is still √2, but the extreme case identified by [1] isn’t a possible output, so the theoretical range of [−√(3/2), √(3/2)] is an overestimate. These folks tried to work it out exactly, and yes, the maximum absolute value does seem to be strictly greater than 1.
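To make the scaling explicit, here is a small sketch of the arithmetic (my addition, not the answerer's): the continuous-range bound from [1] is ±0.5·√n·L, where L is the maximum gradient length.

public class NoiseBounds {
    public static void main(String[] args) {
        // Improved Perlin noise gradients point to the edges of a cube,
        // so the maximum gradient length L is sqrt(2) for n >= 2; in 1D
        // the effective gradient is ±1, so L = 1.
        for (int n = 1; n <= 3; n++) {
            double maxLen = (n == 1) ? 1.0 : Math.sqrt(2.0);
            System.out.println(n + "D bound: ±" + 0.5 * Math.sqrt(n) * maxLen);
        }
    }
}

This prints ±0.5, ±1.0 and about ±1.2247, i.e. the 1D and 2D bounds above and the 3D overestimate √(3/2).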

Using Apache math for linear regression with weights

I've been using Apache math for a while to do a multiple linear regression using OLSMultipleLinearRegression. Now I need to extend my solution to include a weighting factor for each data point.
I'm trying to replicate the MATLAB function fitlm.
I have a MATLAB call like:
table_data = table(points_scored, height, weight, age);
model = fitlm( table_data, 'points_scored ~ -1, height, weight, age', 'Weights', data_weights)
From 'model' I get the regression coefficients for height, weight, age.
In Java the code I have now is (roughly):
double[][] variables = new double[grades.length][3];
// Fill in variables for height, weight, age,
...
OLSMultipleLinearRegression regression = new OLSMultipleLinearRegression();
regression.setNoIntercept(true);
regression.newSampleData(points_scored, variables);
There does not appear to be a way to add weightings to OLSMultipleLinearRegression. There does appear to be a way to add weights to LeastSquaresBuilder, but I'm having trouble figuring out exactly how to use it. My biggest problem (I think) is creating the Jacobians that are expected.
Here is most of what I tried:
double[] points_scored = //fill in points scored
double[] height = //fill in
double[] weight = //fill in
double[] age = // fill in
MultivariateJacobianFunction distToResidual = coeffs -> {
RealVector value = new ArrayRealVector(points_scored.length);
RealMatrix jacobian = new Array2DRowRealMatrix(points_scored.length, 3);
for (int i = 0; i < points_scored.length; ++i) {
double residual = points_scored[i];
residual -= coeffs.getEntry(0) * height[i];
residual -= coeffs.getEntry(1) * weight[i];
residual -= coeffs.getEntry(2) * age[i];
value.setEntry(i, residual);
//No idea how to set up the jacobian here
}
return new Pair<RealVector, RealMatrix>(value, jacobian);
};
double[] prescribedDistancesToLine = new double[points_scored.length];
Arrays.fill(prescribedDistancesToLine, 0);
double[] starts = new double[] {1, 1, 1};
LeastSquaresProblem problem = new LeastSquaresBuilder().
start(starts).
model(distToResidual).
target(prescribedDistancesToLine).
lazyEvaluation(false).
maxEvaluations(1000).
maxIterations(1000).
build();
LeastSquaresOptimizer.Optimum optimum = new LevenbergMarquardtOptimizer().optimize(problem);
Since I don't know how to compute the Jacobian values I've just been stabbing in the dark and getting coefficients nowhere near the MATLAB answers. Once I get this part working, I know that adding the weights should be a pretty straightforward extra line in the LeastSquaresBuilder.
Thanks for any help in advance!
You can use the class GLSMultipleLinearRegression from Apache math.
For example, let's find the linear regression for three plane data points
(0, 0), (1, 2), (2, 0) with weights 1, 2, 1:
import org.apache.commons.math3.stat.regression.GLSMultipleLinearRegression;

public class Main {
    public static void main(String[] args) {
        GLSMultipleLinearRegression regr = new GLSMultipleLinearRegression();
        regr.setNoIntercept(false);
        double[] y = new double[]{0.0, 2.0, 0.0};
        double[][] x = new double[3][];
        x[0] = new double[]{0.0};
        x[1] = new double[]{1.0};
        x[2] = new double[]{2.0};
        // covariance matrix: diagonal entries are the reciprocals of the weights
        double[][] omega = new double[3][];
        omega[0] = new double[]{1.0, 0.0, 0.0};
        omega[1] = new double[]{0.0, 0.5, 0.0};
        omega[2] = new double[]{0.0, 0.0, 1.0};
        regr.newSampleData(y, x, omega);
        double[] params = regr.estimateRegressionParameters();
        System.out.println("Slope: " + params[1] + ", intercept: " + params[0]);
    }
}
Note that the omega matrix is diagonal, and its diagonal elements are the reciprocals of the weights.
See the GLSMultipleLinearRegression documentation for the multi-variable case.
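If you would rather stay with the LeastSquaresBuilder route from the question, here is a sketch of how the missing Jacobian and the weights could look (this is my reading of the question's intent, not part of the original answer; points_scored, height, weight, age and data_weights are the question's arrays). For a linear model the residual is r_i = y_i - (c0*height_i + c1*weight_i + c2*age_i), so row i of the Jacobian is simply (-height_i, -weight_i, -age_i):

import org.apache.commons.math3.fitting.leastsquares.*;
import org.apache.commons.math3.linear.*;
import org.apache.commons.math3.util.Pair;

MultivariateJacobianFunction model = coeffs -> {
    RealVector value = new ArrayRealVector(points_scored.length);
    RealMatrix jacobian = new Array2DRowRealMatrix(points_scored.length, 3);
    for (int i = 0; i < points_scored.length; ++i) {
        value.setEntry(i, points_scored[i]
                - coeffs.getEntry(0) * height[i]
                - coeffs.getEntry(1) * weight[i]
                - coeffs.getEntry(2) * age[i]);
        // derivative of the residual with respect to each coefficient
        jacobian.setEntry(i, 0, -height[i]);
        jacobian.setEntry(i, 1, -weight[i]);
        jacobian.setEntry(i, 2, -age[i]);
    }
    return new Pair<>(value, jacobian);
};

LeastSquaresProblem problem = new LeastSquaresBuilder()
        .start(new double[]{1, 1, 1})
        .model(model)
        .target(new double[points_scored.length])    // drive residuals to zero
        .weight(new DiagonalMatrix(data_weights))    // per-point weights, as in fitlm
        .maxEvaluations(1000)
        .maxIterations(1000)
        .build();
LeastSquaresOptimizer.Optimum optimum = new LevenbergMarquardtOptimizer().optimize(problem);
double[] coefficients = optimum.getPoint().toArray();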

Choco Solver setObjective maximize polynomial equation

I'm currently trying out Choco Solver (4.0.8) and I'm trying to solve this problem:
Maximize 8x1 + 11x2 + 6x3 + 4x4
subject to 5x1 + 7x2 + 4x3 + 3x4 ≤ 14, with x1, x2, x3, x4 binary.
I'm stuck on maximising the objective. I guess I just need a hint as to which subtype of Variable EQUATION should be.
Model model = new Model("my first problem");
BoolVar x1 = model.boolVar("x1");
BoolVar x2 = model.boolVar("x2");
BoolVar x3 = model.boolVar("x3");
BoolVar x4 = model.boolVar("x4");
BoolVar[] bools = {x1, x2, x3, x4};
int[] c = {5, 7, 4, 3};
int[] c2 = {8, 11, 6, 4};
Variable EQUATION = new Variable();
model.scalar(bools, c, "<=", 14).post(); // 5x1 + 7x2 + 4x3 + 3x4 ≤ 14
model.setObjective(Model.MAXIMIZE, EQUATION); // 8x1 + 11x2 + 6x3 + 4x4
model.getSolver().solve();
System.out.println(x1);
System.out.println(x2);
System.out.println(x3);
System.out.println(x4);
It seems I have found a solution like this:
Variable EQUATION = new ScaleView(x1, 8)
.add(new ScaleView(x2, 11),
new ScaleView(x3, 6),
new ScaleView(x4, 4)).intVar();
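An alternative that avoids views (a sketch of mine, assuming the same model, bools and c2 arrays from the question; IntVar is org.chocosolver.solver.variables.IntVar) is to post a second scalar constraint that binds the objective expression to an IntVar, then maximise that variable. Note that solve() returns one solution at a time, so looping lets the solver keep improving the objective until it proves optimality:

IntVar obj = model.intVar("obj", 0, 8 + 11 + 6 + 4);  // bounds of the objective
model.scalar(bools, c2, "=", obj).post();             // obj = 8x1 + 11x2 + 6x3 + 4x4
model.setObjective(Model.MAXIMIZE, obj);
while (model.getSolver().solve()) {
    System.out.println(x1 + " " + x2 + " " + x3 + " " + x4 + " -> " + obj);
}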

Java - making a path more dense?

I have an ArrayList of points in Java.
I want to add more points so the path is more dense.
How should I try to do this?
I made this image to explain better
Let's consider two points from your 2D polyline, A (x1, y1) and B (x2, y2).
We can build the straight-line equation through these two points A and B.
The common form of the equation is y = k*x + b, where k and b are constants.
Using the coordinates of A and B we build an equation system:
k*x1 + b = y1
k*x2 + b = y2
Solving this system gives the constants k and b, and therefore the line equation.
After that, substituting X coordinates into this equation yields the corresponding Y coordinates.
So, to find points between A and B, substitute x1 + 1 for X to get the corresponding Y coordinate, then x1 + 2, and so on, until you reach the x2 coordinate of point B.
Consider the following example.
We have A (2, 2) and B (5, 3)
Building the equation system:
2 * k + b = 2
5 * k + b = 3
b = 2 - 2 * k
5 * k + 2 - 2 * k = 3
3 * k + 2 = 3
3 * k = 1
k = 1/3
b = 2 - 2/3
b = 4/3
and our A-B line equation is:
y = x/3 + 4/3
let's find the points between A and B.
We increase the x coordinate of point A by 1 and find the Y coordinate of that point:
x = 3
y = 3/3 + 4/3 = 7/3
Now the next point, using x = 4:
x = 4
y = 4/3 + 4/3 = 8/3
Point B has x coordinate 5, just to check:
x = 5
y = 5/3 + 4/3 = 3
Correct!
That's it.
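One caveat with the slope-intercept approach above (my addition, not the answerer's): y = k*x + b is undefined for vertical segments (x1 == x2), and stepping X by 1 gives uneven spacing on steep segments. Interpolating parametrically between consecutive points sidesteps both issues. A minimal sketch, with class and method names of my own choosing:

import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.List;

public class PathDensifier {
    // Returns a new path with `extra` evenly spaced points inserted
    // between every consecutive pair of the original points.
    public static List<Point2D.Double> densify(List<Point2D.Double> path, int extra) {
        if (path.size() < 2) {
            return new ArrayList<>(path);
        }
        List<Point2D.Double> result = new ArrayList<>();
        for (int i = 0; i < path.size() - 1; i++) {
            Point2D.Double a = path.get(i), b = path.get(i + 1);
            result.add(a);
            for (int j = 1; j <= extra; j++) {
                double t = (double) j / (extra + 1);  // parameter in (0, 1)
                // linear interpolation works for vertical segments too
                result.add(new Point2D.Double(a.x + t * (b.x - a.x),
                                              a.y + t * (b.y - a.y)));
            }
        }
        result.add(path.get(path.size() - 1));  // keep the final endpoint
        return result;
    }
}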

Using least square method with Commons Math and fitting

I was trying to use commons math to figure out the constants in a polynomial. It looks like the routine exists but I got this error. Does anyone see the issue?
I was trying to convert this question to commons-math:
https://math.stackexchange.com/questions/121212/how-to-find-curve-equation-from-data
From plotting your data (Wolfram|Alpha link), it does not look linear, so it is better fit by a polynomial. I assume you want to fit the data:
X   Y
1   4
2   8
3   13
4   18
5   24
...
using a quadratic polynomial y = a*x^2 + b*x + c.
Wolfram|Alpha provides a great utility for this; I wish I could get the same answers from commons-math as from Wolfram.
http://www.wolframalpha.com/input/?i=fit+4%2C+8%2C+13%2C
E.g. by entering that data, I get: 4.5x - 0.666667 (linear)
Here is the code and error:
import org.apache.commons.math3.stat.regression.OLSMultipleLinearRegression;
import org.apache.commons.math3.stat.regression.SimpleRegression;
final OLSMultipleLinearRegression regression2 = new OLSMultipleLinearRegression();
double[] y = {
4.0,
8,
13,
};
double[][] x2 =
{
{ 1.0, 1, 1 },
{ 1.0, 2, 4 },
{ 0.0, 3, 9 },
};
regression2.newSampleData(y, x2);
regression2.setNoIntercept(true);
regression2.newSampleData(y, x2);
double[] beta = regression2.estimateRegressionParameters();
for (double d : beta) {
System.out.println("D: " + d);
}
Exception in thread "main" org.apache.commons.math3.exception.MathIllegalArgumentException: not enough data (3 rows) for this many predictors (3 predictors)
at org.apache.commons.math3.stat.regression.AbstractMultipleLinearRegression.validateSampleData(AbstractMultipleLinearRegression.java:236)
at org.apache.commons.math3.stat.regression.OLSMultipleLinearRegression.newSampleData(OLSMultipleLinearRegression.java:70)
at org.berlin.bot.algo.BruteForceSort.main(BruteForceSort.java:108)
The javadoc for validateSampleData() states that the two-dimensional array must have at least one more row than it has columns.
http://commons.apache.org/proper/commons-math/javadocs/api-3.3/org/apache/commons/math3/stat/regression/AbstractMultipleLinearRegression.html
Rcook was right. I provided an additional row (test case) and that generated the same answer as Wolfram|Alpha:
D: 0.24999999999999822
D: 3.4500000000000033
D: 0.24999999999999914
That is, y = 0.25x^2 + 3.45x + 0.25.
final OLSMultipleLinearRegression regression2 = new OLSMultipleLinearRegression();
double[] y = {
4,
8,
13,
18
};
double[][] x2 =
{
{ 1, 1, 1 },
{ 1, 2, 4 },
{ 1, 3, 9 },
{ 1, 4, 16 },
};
regression2.newSampleData(y, x2);
regression2.setNoIntercept(true);
regression2.newSampleData(y, x2);
double[] beta = regression2.estimateRegressionParameters();
for (double d : beta) {
System.out.println("D: " + d);
}
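As a side note (my addition, not part of the original exchange): commons-math3 also provides PolynomialCurveFitter, which fits a polynomial directly from (x, y) observations and avoids building the design matrix by hand. A minimal sketch fitting the same data:

import org.apache.commons.math3.fitting.PolynomialCurveFitter;
import org.apache.commons.math3.fitting.WeightedObservedPoints;

public class PolyFitExample {
    public static void main(String[] args) {
        WeightedObservedPoints obs = new WeightedObservedPoints();
        obs.add(1, 4);
        obs.add(2, 8);
        obs.add(3, 13);
        obs.add(4, 18);
        // Degree-2 fit; coefficients are returned in ascending order,
        // i.e. {c, b, a} for y = a*x^2 + b*x + c.
        double[] coeff = PolynomialCurveFitter.create(2).fit(obs.toList());
        System.out.println("c=" + coeff[0] + ", b=" + coeff[1] + ", a=" + coeff[2]);
    }
}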
