I've been using Apache math for a while to do a multiple linear regression using OLSMultipleLinearRegression. Now I need to extend my solution to include a weighting factor for each data point.
I'm trying to replicate the MATLAB function fitlm.
I have a MATLAB call like:
table_data = table(points_scored, height, weight, age);
model = fitlm( table_data, 'points_scored ~ -1 + height + weight + age', 'Weights', data_weights)
From 'model' I get the regression coefficients for height, weight, age.
In Java the code I have now is (roughly):
double[][] variables = new double[points_scored.length][3];
// Fill in variables for height, weight, age,
...
OLSMultipleLinearRegression regression = new OLSMultipleLinearRegression();
regression.setNoIntercept(true);
regression.newSampleData(points_scored, variables);
There does not appear to be a way to add weightings to OLSMultipleLinearRegression. There does appear to be a way to add weights to the LeastSquaresBuilder. However I'm having trouble figuring out exactly how to use this. My biggest problem (I think) is creating the jacobians that are expected.
Here is most of what I tried:
double[] points_scored = //fill in points scored
double[] height = //fill in
double[] weight = //fill in
double[] age = // fill in
MultivariateJacobianFunction distToResidual= coeffs -> {
RealVector value = new ArrayRealVector(points_scored.length);
RealMatrix jacobian = new Array2DRowRealMatrix(points_scored.length, 3);
for (int i = 0; i < points_scored.length; ++i) {
double residual = points_scored[i];
residual -= coeffs.getEntry(0) * height[i];
residual -= coeffs.getEntry(1) * weight[i];
residual -= coeffs.getEntry(2) * age[i];
value.setEntry(i, residual);
//No idea how to set up the jacobian here
}
return new Pair<RealVector, RealMatrix>(value, jacobian);
};
double[] prescribedDistancesToLine = new double[points_scored.length];
Arrays.fill(prescribedDistancesToLine, 0);
double[] starts = new double[] {1, 1, 1};
LeastSquaresProblem problem = new LeastSquaresBuilder().
start(starts).
model(distToResidual).
target(prescribedDistancesToLine).
lazyEvaluation(false).
maxEvaluations(1000).
maxIterations(1000).
build();
LeastSquaresOptimizer.Optimum optimum = new LevenbergMarquardtOptimizer().optimize(problem);
Since I don't know how to make the jacobian values I've just been stabbing in the dark and getting coefficients nowhere near the MATLAB answers. Once I get this part working I know that adding the weights should be a pretty straightforward extra line in the LeastSquaresBuilder.
Thanks for any help in advance!
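For completeness: because the model in the attempt above is linear in the coefficients, each Jacobian row is just the negated regressor values, and per-point weights can be passed to the builder as a diagonal matrix. A hedged sketch, reusing the variable names from the question (DiagonalMatrix is org.apache.commons.math3.linear.DiagonalMatrix; data_weights is assumed to hold one weight per observation):
// Hedged sketch: residuals and Jacobian for the linear model y = b0*height + b1*weight + b2*age
MultivariateJacobianFunction model = coeffs -> {
    RealVector value = new ArrayRealVector(points_scored.length);
    RealMatrix jacobian = new Array2DRowRealMatrix(points_scored.length, 3);
    for (int i = 0; i < points_scored.length; ++i) {
        double residual = points_scored[i]
                - coeffs.getEntry(0) * height[i]
                - coeffs.getEntry(1) * weight[i]
                - coeffs.getEntry(2) * age[i];
        value.setEntry(i, residual);
        // d(residual_i)/d(coeff_j) is just the negated regressor value
        jacobian.setEntry(i, 0, -height[i]);
        jacobian.setEntry(i, 1, -weight[i]);
        jacobian.setEntry(i, 2, -age[i]);
    }
    return new Pair<>(value, jacobian);
};
// per-observation weights would then go on the builder, e.g.:
// new LeastSquaresBuilder()...weight(new DiagonalMatrix(data_weights))...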
You can use the GLSMultipleLinearRegression class from Apache math.
For example, let us find the linear regression for three plane data points
(0, 0), (1, 2), (2, 0) with weights 1, 2, 1:
import org.apache.commons.math3.stat.regression.GLSMultipleLinearRegression;
public class Main {
public static void main(String[] args) {
GLSMultipleLinearRegression regr = new GLSMultipleLinearRegression();
regr.setNoIntercept(false);
double[] y = new double[]{0.0, 2.0, 0.0};
double[][] x = new double[3][];
x[0] = new double[]{0.0};
x[1] = new double[]{1.0};
x[2] = new double[]{2.0};
double[][] omega = new double[3][];
omega[0] = new double[]{1.0, 0.0, 0.0};
omega[1] = new double[]{0.0, 0.5, 0.0};
omega[2] = new double[]{0.0, 0.0, 1.0};
regr.newSampleData(y, x, omega);
double[] params = regr.estimateRegressionParameters();
System.out.println("Slope: " + params[1] + ", intercept: " + params[0]);
}
}
Note that the omega matrix is diagonal, and its diagonal elements are reciprocal weights.
View the documentation for the multi-variable case.
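Applied to the question's data, a hedged sketch of the weighted, no-intercept, three-regressor case might look like this (variable names taken from the question; data_weights is assumed to hold one weight per observation):
// Hedged sketch: weighted, no-intercept fit of points_scored ~ height + weight + age
GLSMultipleLinearRegression weighted = new GLSMultipleLinearRegression();
weighted.setNoIntercept(true);
int n = points_scored.length;
double[][] x = new double[n][3];
double[][] omega = new double[n][n]; // diagonal covariance matrix
for (int i = 0; i < n; i++) {
    x[i][0] = height[i];
    x[i][1] = weight[i];
    x[i][2] = age[i];
    omega[i][i] = 1.0 / data_weights[i]; // reciprocal weight on the diagonal
}
weighted.newSampleData(points_scored, x, omega);
double[] coefficients = weighted.estimateRegressionParameters(); // [height, weight, age]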
This first function is stolen directly from the nom-tam-fits site: https://nom-tam-fits.github.io/nom-tam-fits/intro.html
private static void dataTableToBinaryFitsDummy() throws Exception {
BufferedFile bf = new BufferedFile("table.fits", "rw");
BasicHDU.getDummyHDU().write(bf); // Write an initial null HDU
double[] ra = {1.};
double[] dec = {2.};
String[] name = {" "}; // maximum length will be 10 characters
Object[] row = {ra, dec, name};
long rowSize = ArrayFuncs.computeLSize(row);
BinaryTable table = new BinaryTable();
table.addRow(row);
Header header = new Header();
table.fillHeader(header);
BinaryTableHDU bhdu = new BinaryTableHDU(header, table);
bhdu.setColumnName(0, "ra", null);
bhdu.setColumnName(1, "dec", null);
bhdu.setColumnName(2, "name", null);
bhdu.getHeader().setNaxis(2, 1000); // set the header to the actual number of rows we write
bhdu.getHeader().write(bf);
ByteBuffer buffer = ByteBuffer.allocate((int) rowSize);
for (int event = 0; event < 1000; event ++){
buffer.clear();
// update ra, dec and name here
buffer.putDouble(event);
buffer.putDouble(dec[0]);
buffer.put(("event "+event).getBytes());
buffer.flip();
bf.write(buffer.array());
}
FitsUtil.pad(bf, rowSize * 1000);
bf.close();
}
As a starting point, I want to be able to pass in an arbitrary collection of primitive double arrays. This was my attempt, which IMO shouldn't be particularly different from the original.
private static void dataTableToBinaryFitsDummyWithArgsDoublesOnlyWrapper() {
int nRows = 100;
double [] rowA = new double[nRows];
double [] rowB = new double[nRows];
double [] rowC = new double[nRows];
double [] rowD = new double[nRows];
Random random = new Random();
for (int i=0;i<nRows;i++) {
rowA[i] = i;
rowB[i] = 2*i;
rowC[i] = random.nextGaussian();
rowD[i] = rowC[i] * -1;
}
ImmutableList<double[]> columns = ImmutableList.of(rowA, rowB, rowC, rowD);
ImmutableList<String> names = ImmutableList.of("ints", "double ints", "randos", "neg randos");
try {
dataTableToBinaryFitsDummyWithArgsDoublesOnly(columns, names, 4, nRows);
} catch (Exception e) {
e.printStackTrace();
}
}
private static void dataTableToBinaryFitsDummyWithArgsDoublesOnly(ImmutableList<double[]> columns,
ImmutableList<String> names, int nCols, int nRows) throws Exception {
BufferedFile bf = new BufferedFile("tableWithArgs.fits", "rw");
BasicHDU.getDummyHDU().write(bf); // Write an initial null HDU
Object[] row = new Object[nCols];
for (int i=0;i<nCols;i++) {
row[i] = columns.get(i);
}
long rowSize = ArrayFuncs.computeLSize(row);
BinaryTable table = new BinaryTable();
table.addRow(row);
Header header = new Header();
table.fillHeader(header);
BinaryTableHDU bhdu = new BinaryTableHDU(header, table);
for (int i=0;i<nCols;i++) {
bhdu.setColumnName(i, names.get(i), null);
}
bhdu.getHeader().setNaxis(2, nRows);
bhdu.getHeader().write(bf);
ByteBuffer buffer = ByteBuffer.allocate((int) rowSize);
for (int i = 0; i < nRows; i ++){
buffer.clear();
// update ra, dec and name here
for (double[] column : columns) {
buffer.putDouble(column[i]);
}
buffer.flip();
bf.write(buffer.array());
}
FitsUtil.pad(bf, rowSize * nRows);
bf.close();
}
However, what I get looks very different. I want the rows to look like
ints, double ints, randos, neg randos
0.0, 0.0, 0.66938, -0.66938
1.0, 2.0, 0.53482, -0.53482
2.0, 4.0, 0.66825, -0.66825
...
But instead I get
ints, double ints, randos, neg randos
(0.0, 0.0, 0.66938, -0.66938, ...), (0.0, 0.0, 0.0, 0.0, ...), (0.0, 0.0, 0.0, 0.0, ...), (0.0, 0.0, 0.0, 0.0, ...)
(1.0, 2.0, 0.53482, -0.53482, ...), (0.0, 0.0, 0.0, 0.0, ...), (0.0, 0.0, 0.0, 0.0, ...), (0.0, 0.0, 0.0, 0.0, ...)
It looks like instead of getting 4 columns and 100 rows of numbers, I'm getting 4 columns and 100 rows of 100-number lists. I followed the canonical example as best I could, so why did this happen?
While I'm at it, I also want this to be an AsciiTable and AsciiTableHDU, but the only examples I'm seeing are with BinaryTables, BinaryTableHDUs and ByteBuffers.
I'm not super familiar with nom.tam.fits, but according to the docs BinaryTable.addRow takes an array of arrays of primitives (this is because the data in a single cell of a FITS binary table can itself be a multi-dimensional array).
That's why the original example you cited creates a row from an array of 1-element arrays:
double[] ra = {1.};
double[] dec = {2.};
String[] name = {" "}; // maximum length will be 10 characters
Object[] row = {ra, dec, name};
In your code you seem to be confusing rows and columns:
double [] rowA = new double[nRows];
double [] rowB = new double[nRows];
double [] rowC = new double[nRows];
double [] rowD = new double[nRows];
Here, each of these arrays represents a full column of data (which is why their lengths are nRows).
Then here
Object[] row = new Object[nCols];
for (int i=0;i<nCols;i++) {
row[i] = columns.get(i);
}
you create a single "row" whose cell values are the entire columns. You are missing an outer for-loop over nRows; a sketch follows below.
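A hedged sketch of the per-row restructuring (each cell becomes a 1-element array, and addRow is called once per data row):
// Hypothetical fix: one addRow call per data row, each cell a 1-element array
for (int r = 0; r < nRows; r++) {
    Object[] rowData = new Object[nCols];
    for (int c = 0; c < nCols; c++) {
        rowData[c] = new double[] { columns.get(c)[r] };
    }
    table.addRow(rowData);
}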
There is also a BinaryTable.addColumn, which should probably be more efficient for adding entire columns at once (especially if you pre-specify the number and types of columns).
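For example (hedged; addColumn(Object) is assumed to take a full column array with one element per row):
// Hypothetical: add each full column in one call instead of assembling rows
for (int c = 0; c < nCols; c++) {
    table.addColumn(columns.get(c)); // a double[nRows] becomes one scalar column
}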
The main use case of the original example, where one row is written at a time, might be a long-running observation or event-logging process where rows are appended to the table in a streaming fashion.
I try to merge multiple meshes with a transformation matrix into a single mesh.
Each mesh has 4 data sets.
Vertices
Indices
Texture Coordinates
Normals
The way I'm trying to do it is supposed to be lazy and not cost that much CPU.
It is a 3 step process.
Multiply each vertex and normal with the transformation matrix.
Merge the Vertices, Texture Coordinates and Normals of each mesh into 3 big arrays.
Merge the Indices of each mesh into a single array but use the sum of the previous meshes as an offset. For example: If mesh 1 has 800 indices then 800 has to be added to all of the indices from mesh 2.
This method has two big problems.
Duplicate vertices are not shared
Parts that are invisible due to clipping are not removed
But that is OK as this is supposed to be a lazy method with not much CPU usage. It is already optimal for creating meshes for grass and bushes.
I have attempted an implementation of this method which looks like this:
public static final MeshData mergeLazy(List<MeshData> meshes, List<Matrix4f> transformations) {
int lengthVertices = 0;
int lengthNormals = 0;
int lengthTexCoords = 0;
int lengthIndices = 0;
ArrayList<Integer> indexLengths = new ArrayList<>();
for(MeshData mesh : meshes) {
lengthVertices += mesh.getVertices().length;
lengthNormals += mesh.getNormals().length;
lengthTexCoords += mesh.getTextureCoordinates().length;
int length = mesh.getIndices().length;
lengthIndices += length;
indexLengths.add(length);
}
float[] vertices = new float[lengthVertices];
float[] texCoords = new float[lengthTexCoords];
float[] normals = new float[lengthNormals];
int[] indices = new int[lengthIndices];
int iv = 0;
int ivt = 0;
int ivn = 0;
int i = 0;
int indexLength = 0;
for(int im = 0; im < meshes.size(); im++) {
MeshData mesh = meshes.get(im);
float[] mVertices = mesh.getVertices();
float[] mTexCoords = mesh.getTextureCoordinates();
float[] mNormals = mesh.getNormals();
int[] mIndices = mesh.getIndices();
Matrix4f transformation = transformations.get(im);
for(int index = 0; index < mVertices.length; index += 3) {
Vector3f vertex = MatrixUtil.multiply(transformation, mVertices[index], mVertices[index + 1], mVertices[index + 2]);
vertices[iv++] = vertex.x;
vertices[iv++] = vertex.y;
vertices[iv++] = vertex.z;
Vector3f normal = MatrixUtil.multiply(transformation, mNormals[index], mNormals[index + 1], mNormals[index + 2]);
normals[ivn++] = normal.x;
normals[ivn++] = normal.y;
normals[ivn++] = normal.z;
}
for(int index = 0; index < mTexCoords.length; index++) {
texCoords[ivt++] = mTexCoords[index];
}
for(int index = 0; index < mIndices.length; index++) {
indices[i++] = indexLength + mIndices[index];
}
indexLength += indexLengths.get(im);
}
MeshData data = new MeshData();
data.setIndices(indices);
data.setNormals(normals);
data.setTextureCoordinates(texCoords);
data.setVertices(vertices);
return data;
}
In the end I actually have a single mesh, and the multiplication with the transformation also works... for rotation and scaling. But here come the problems.
The multiplication with the transformation does NOT work for the translation.
My method for multiplying a matrix with a vector looks like this:
public static final Vector3f multiply(Matrix4f matrix, float x, float y, float z) {
Vector3f result = new Vector3f();
result.x = x * matrix.m00 + y * matrix.m01 + z * matrix.m02;
result.y = x * matrix.m10 + y * matrix.m11 + z * matrix.m12;
result.z = x * matrix.m20 + y * matrix.m21 + z * matrix.m22;
return result;
}
And the second problem is that the textures of the second mesh are somewhat off.
Here is a picture:
As you can see the second mesh only has about 1/4 of the actual texture.
The code I used to generate this mesh looks like this:
Material grassMaterial = new Material();
grassMaterial.setMinBrightness(0.1F);
grassMaterial.setColorMap(new Texture(new XImgTextureReader().read(new FileInputStream("res/textures/grass2.ximg"))));
grassMaterial.setAffectedByLight(true);
grassMaterial.setTransparent(true);
grassMaterial.setUpwardsNormals(true);
grassMaterial.setFog(fog);
MeshData quad = Quad.generateMeshData(
new Vector3f(0.0F, 1F, 0.0F),
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(1F, 0.0F, 0.0F),
new Vector3f(1F, 1F, 0.0F)
);
StaticMesh grassMesh = new StaticMesh(MeshUtil.mergeLazy(Arrays.asList(quad, quad), Arrays.asList(
MatrixUtil.createTransformationMatrx(
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(0.0F, 0.0F, 0.0F),
new Vector3f(1.0F, 1.0F, 1.0F)
),
MatrixUtil.createTransformationMatrx(
new Vector3f(0F, 0.0F, -0F),
new Vector3f(0.0F, 90.0F, 0.0F),
new Vector3f(1.0F, 1.0F, 1.0F)
)
)));
grassMesh.setCullMode(StaticMesh.CULLING_DISABLED);
Entity grass = new Entity();
grass.setShaderPipeline(shaderPipeline);
grass.setMaterial(grassMaterial);
grass.setMesh(grassMesh);
grass.setTranslation(0, 0, 1);
My question now is: What did I do wrong? Why is the texture so weird and why does the multiplication with the transformation not work for the translation?
If you need more of the code, I have a GitHub Repo with the Eclipse Project here: https://github.com/RalleYTN/Heroica-Fabulis
Thanks to #Rabbid76 I came closer to my answer and now I have finally found the problem.
The first problem with the translation not working was fixed by multiplying the transformation vertically instead of horizontally. Thanks again #Rabbid76.
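A hedged sketch of what a point transform that also applies the translation terms could look like. It assumes the translation lives in m30/m31/m32 (consistent with the "vertical" multiplication described above); transpose the terms if your Matrix4f is laid out the other way:
// Sketch only: transforms (x, y, z, 1), so the translation is applied.
public static Vector3f multiplyPoint(Matrix4f m, float x, float y, float z) {
    Vector3f result = new Vector3f();
    result.x = x * m.m00 + y * m.m10 + z * m.m20 + m.m30;
    result.y = x * m.m01 + y * m.m11 + z * m.m21 + m.m31;
    result.z = x * m.m02 + y * m.m12 + z * m.m22 + m.m32;
    return result;
}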
And the reason why the textures were so weird is that I merged the indices incorrectly. I should not have used the running total of the previous meshes' indices as the offset, but the running total of their vertices.
Here is now the working method:
public static final MeshData mergeLazy(List<MeshData> meshes, List<Matrix4f> transformations) {
ArrayList<Float> vertices = new ArrayList<>();
ArrayList<Float> texCoords = new ArrayList<>();
ArrayList<Float> normals = new ArrayList<>();
ArrayList<Integer> indices = new ArrayList<>();
int offset = 0;
int m = 0;
for(MeshData mesh : meshes) {
Matrix4f transformation = transformations.get(m);
float[] mVertices = mesh.getVertices();
float[] mNormals = mesh.getNormals();
for(int index = 0; index < mesh.getVertices().length; index += 3) {
Vector3f vertex = MatrixUtil.multiply(transformation, mVertices[index], mVertices[index + 1], mVertices[index + 2]);
vertices.add(vertex.x);
vertices.add(vertex.y);
vertices.add(vertex.z);
Vector3f normal = MatrixUtil.multiply(transformation, mNormals[index], mNormals[index + 1], mNormals[index + 2]);
normals.add(normal.x);
normals.add(normal.y);
normals.add(normal.z);
}
ListUtil.addFloatArray(texCoords, mesh.getTextureCoordinates());
int[] mIndices = mesh.getIndices();
for(int index : mIndices) {
indices.add(index + offset);
}
offset += mVertices.length / 3;
m++;
}
MeshData mesh = new MeshData();
mesh.setIndices(ListUtil.toPrimitiveIntArray(indices));
mesh.setNormals(ListUtil.toPrimitiveFloatArray(normals));
mesh.setTextureCoordinates(ListUtil.toPrimitiveFloatArray(texCoords));
mesh.setVertices(ListUtil.toPrimitiveFloatArray(vertices));
return mesh;
}
I'm new to GeoTools. I created two geometries (two polygons, for example) and I want to compute the percentage of the intersection area over one of the geometries.
//First polygon
GeometryFactory geometryFactory1 = JTSFactoryFinder.getGeometryFactory();
Coordinate[] coords1 =
new Coordinate[] {new Coordinate(4, 0), new Coordinate(2, 2),
new Coordinate(4, 4), new Coordinate(6, 2), new Coordinate(4, 0) };
LinearRing ring1 = geometryFactory1.createLinearRing(coords1);
LinearRing[] holes = null;
Polygon polygon1 = geometryFactory1.createPolygon(ring1, holes);
// Second polygon
GeometryFactory geometryFactory2 = JTSFactoryFinder.getGeometryFactory();
Coordinate[] coords2 =
new Coordinate[] {new Coordinate(2, 0), new Coordinate(2, 2),
new Coordinate(1, 1), new Coordinate(4, 2), new Coordinate(2, 0) };
LinearRing ring2 = geometryFactory2.createLinearRing(coords2);
Polygon polygon2 = geometryFactory2.createPolygon(ring2, holes);
// test if polygon2 is inside polygon1
boolean test = polygon1.contains(polygon2);
Does someone know how to calculate the percentage of polygon2 inside polygon1 (or a circle)? Is there any algorithm to calculate the area of intersection between geometries?
You will need to compute the intersection, then its area, and finally compute the ratio:
Geometry intersect = polygon1.intersection(polygon2);
double areaRatio = 100.0*intersect.getArea() / polygon2.getArea();
System.out.println("ratio: "+areaRatio + "%");
That being said, you will want to ensure the geometries are valid before computing the intersection, using polygon1.isValid() and polygon2.isValid().
The sample data for polygon2 is self-intersecting, so the intersection operation fails with
com.vividsolutions.jts.geom.TopologyException: found non-noded
intersection between LINESTRING ( 2.0 0.0, 2.0 2.0 ) and LINESTRING (
1.0 1.0, 2.5 1.5 ) [ (2.0, 1.3333333333333333, NaN) ]
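A common JTS workaround (hedged, and only appropriate when the self-intersection is accidental) is to repair the geometry with a zero-width buffer before intersecting:
// Repair invalid (e.g. self-intersecting) polygons before computing the intersection
Geometry fixed1 = polygon1.isValid() ? polygon1 : polygon1.buffer(0);
Geometry fixed2 = polygon2.isValid() ? polygon2 : polygon2.buffer(0);
Geometry overlap = fixed1.intersection(fixed2);
double ratio = 100.0 * overlap.getArea() / fixed2.getArea();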
I've been able to use Apache Math's interpolation using the LinearInterpolator().interpolate(x1, y1). Unfortunately, I could not find a way to extrapolate.
How can I do linear extrapolation in java?
x1 = [1, 2, 3, 4, 5];
y1 = [2, 4, 8, 16, 32];
I would like to know the values of any x2 not just the one in the range of the x1.
If I try to evaluate the function at 6 I get an OutOfRangeException: "if v is outside of the domain of the spline function (smaller than the smallest knot point or larger than the largest knot point)".
Edit: Here is my simple interpolate function. I would like an option to enable extrapolation, just like in MATLAB (interp2). Using the x1 and y1 arrays as input for that function, I get Apache's OutOfRangeException because the value 6 is not contained in the x1 array.
public static List<Double> interpolateLinear(double[] x1, double[] y1, Double[] x2) {
List<Double> resultList;
final PolynomialSplineFunction function = new LinearInterpolator().interpolate(x1, y1);
resultList = Arrays.stream(x2).map(aDouble -> function.value(aDouble)).collect(Collectors.toList());
return resultList;
}
Edit2: Had to read a little bit on the .value method of the PolynomialSplineFunction object to get it right, but here it goes (all the credit goes to user Joni). Thanks man:
public static double[] interpolateLinear(double[] x1, double[] y1, double[] x2) {
final PolynomialSplineFunction function = new LinearInterpolator().interpolate(x1, y1);
final PolynomialFunction[] splines = function.getPolynomials();
final PolynomialFunction firstFunction = splines[0];
final PolynomialFunction lastFunction = splines[splines.length - 1];
final double[] knots = function.getKnots();
final double firstKnot = knots[0];
final double lastKnot = knots[knots.length - 1];
double[] resultList = Arrays.stream(x2).map(aDouble -> {
if (aDouble > lastKnot) {
return lastFunction.value(aDouble - knots[knots.length - 2]);
} else if (aDouble < firstKnot)
return firstFunction.value(aDouble - knots[0]);
return function.value(aDouble);
}).toArray();
return resultList;
}
You can get the first and last polynomial splines from the interpolator, and use those to extrapolate.
PolynomialSplineFunction function = new LinearInterpolator().interpolate(x1, y1);
PolynomialFunction[] splines = function.getPolynomials();
PolynomialFunction first = splines[0];
PolynomialFunction last = splines[splines.length-1];
// use first and last to extrapolate
You won't get 64 from 6 though; you should expect 48 from a linear extrapolation, which goes to show that extrapolation is bound to give you wrong answers.
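(For the sample data the last spline segment runs from (4, 16) to (5, 32), so its slope is 16 and linear extrapolation at x = 6 gives 32 + 16 * (6 - 5) = 48, whereas the underlying 2^x pattern would give 64.)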
I had a similar problem: the interpolation part is a cubic spline function, and math3.analysis.polynomials.PolynomialSplineFunction does not support extrapolation.
In the end, I decided to write the linear extrapolation based on the leftmost (or rightmost) two points (i.e. x1, x2 and y1, y2). I need the extrapolation part so that the function does not fail or produce very irregular values in the extrapolation region. In my example, I hard-coded the constraint that the extrapolated value should stay in [0.5 * y1, 2 * y1] (left side) or [0.5 * yn, 2 * yn] (right side).
As mentioned by Joni, extrapolation is dangerous and can lead to unexpected results. Be careful. The linear extrapolation can be replaced by any other kind of extrapolation, depending on how you write the code (e.g. using the derivative at the right/left point and inferring a quadratic function for extrapolation).
public static double getValue(PolynomialSplineFunction InterpolationFunction, double v) {
try {
return InterpolationFunction.value(v);
} catch (OutOfRangeException e) {
// add the extrapolation function: we use linear extrapolation based on the slope of the two points on the left or right
double[] InterpolationKnots = InterpolationFunction.getKnots();
int n = InterpolationKnots.length;
double first, second, firstValue, secondValue;
if (v < InterpolationKnots[0])
{ // extrapolation on the left side: linear extrapolation based on the two leftmost points
first = InterpolationKnots[0]; // the leftmost point
second = InterpolationKnots[1]; // the second leftmost point
}
else { // extrapolation on the right side: linear extrapolation based on the two rightmost points
first = InterpolationKnots[n - 1]; // the rightmost point
second = InterpolationKnots[n - 2]; // the second rightmost point
}
firstValue = InterpolationFunction.value(first);
secondValue = InterpolationFunction.value(second);
double extrapolatedValue = (firstValue - secondValue) / (first - second) * (v - first) + firstValue;
// add a boundary to the extrapolated value so that it is within [0.5, 2] * firstValue
if (extrapolatedValue > 2 * firstValue){ extrapolatedValue = 2 * firstValue;}
if (extrapolatedValue < 0.5 * firstValue) {extrapolatedValue = 0.5* firstValue;}
return extrapolatedValue;
}
}
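A hedged usage example with the data from the earlier question (a linear spline, so the extrapolated values happen to stay within the hard-coded [0.5, 2] boundary-value band):
// Hedged usage example
double[] x1 = {1, 2, 3, 4, 5};
double[] y1 = {2, 4, 8, 16, 32};
PolynomialSplineFunction spline = new LinearInterpolator().interpolate(x1, y1);
System.out.println(getValue(spline, 6));   // 48.0 (within [16, 64], so not clamped)
System.out.println(getValue(spline, 0.5)); // 1.0  (within [1, 4], so not clamped)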
Just sharing a complete example based on the answer provided by Joni:
import java.util.Arrays;
import org.apache.commons.math3.analysis.interpolation.LinearInterpolator;
import org.apache.commons.math3.analysis.polynomials.PolynomialFunction;
import org.apache.commons.math3.analysis.polynomials.PolynomialSplineFunction;
public class App {
public static void main(String[] args) {
double[] x1 = { 1, 2, 3, 4, 5 };
double[] y1 = { 2, 4, 8, 16, 32 };
double[] x2 = { 6, 7 };
double[] res = interpolateLinear(x1, y1, x2);
for (int i = 0; i < res.length; i++) {
System.out.println("Value: " + x2[i] + " => extrapolation: " + res[i]);
}
}
public static double[] interpolateLinear(double[] x1, double[] y1, double[] x2) {
final PolynomialSplineFunction function = new LinearInterpolator().interpolate(x1, y1);
final PolynomialFunction[] splines = function.getPolynomials();
final PolynomialFunction firstFunction = splines[0];
final PolynomialFunction lastFunction = splines[splines.length - 1];
final double[] knots = function.getKnots();
final double firstKnot = knots[0];
final double lastKnot = knots[knots.length - 1];
double[] resultList = Arrays.stream(x2).map(aDouble -> {
if (aDouble > lastKnot) {
return lastFunction.value(aDouble - knots[knots.length - 2]);
} else if (aDouble < firstKnot)
return firstFunction.value(aDouble - knots[0]);
return function.value(aDouble);
}).toArray();
return resultList;
}
}
I'm working on an Android application and I need to estimate the camera rotation in 3D online, using images from the camera and the OpenCV library. I would like to calculate Euler angles.
I have read this and this page and I can estimate homography matrix like here.
My first question is: do I really need to know the camera intrinsic matrix from camera calibration, or is the homography matrix (camera extrinsics) enough to estimate the Euler angles (pitch, roll, yaw)?
If the homography matrix is enough, how can I do it exactly?
Sorry, I am really a beginner with OpenCV and cannot decompose the "Mat" of the homography into a rotation matrix and a translation matrix as described here. How can I compute the Euler angles on Android?
You can see my code below, using solvePnPRansac() and decomposeProjectionMatrix to calculate the Euler angles.
But it returns just a null vector as double[] eulerArray = {0,0,0}! Can somebody help me? What is wrong there?
Thank you very much for any response!
public double[] findEulerAngles(MatOfKeyPoint keypoints1, MatOfKeyPoint keypoints2, MatOfDMatch matches){
KeyPoint[] k1 = keypoints1.toArray();
KeyPoint[] k2 = keypoints2.toArray();
List<DMatch> matchesList = matches.toList();
List<KeyPoint> referenceKeypointsList = keypoints2.toList();
List<KeyPoint> sceneKeypointsList = keypoints1.toList();
// Calculate the max and min distances between keypoints.
double maxDist = 0.0;
double minDist = Double.MAX_VALUE;
for(DMatch match : matchesList) {
double dist = match.distance;
if (dist < minDist) {
minDist = dist;
}
if (dist > maxDist) {
maxDist = dist;
}
}
// Identify "good" keypoints based on match distance.
List<Point3> goodReferencePointsList = new ArrayList<Point3>();
ArrayList<Point> goodScenePointsList = new ArrayList<Point>();
double maxGoodMatchDist = 1.75 * minDist;
for(DMatch match : matchesList) {
if (match.distance < maxGoodMatchDist) {
Point kk2 = k2[match.queryIdx].pt;
Point kk1 = k1[match.trainIdx].pt;
Point3 point3 = new Point3(kk1.x, kk1.y, 0.0);
goodReferencePointsList.add(point3);
goodScenePointsList.add(kk2);
}
}
if (goodReferencePointsList.size() < 4 || goodScenePointsList.size() < 4) {
// There are too few good points to find the pose.
return new double[] {0, 0, 0};
}
MatOfPoint3f goodReferencePoints = new MatOfPoint3f();
goodReferencePoints.fromList(goodReferencePointsList);
MatOfPoint2f goodScenePoints = new MatOfPoint2f();
goodScenePoints.fromList(goodScenePointsList);
MatOfDouble mRMat = new MatOfDouble(3, 3, CvType.CV_32F);
MatOfDouble mTVec = new MatOfDouble(3, 1, CvType.CV_32F);
//TODO: solve camera intrinsic matrix
Mat intrinsics = Mat.eye(3, 3, CvType.CV_32F); // dummy camera matrix
intrinsics.put(0, 0, 400);
intrinsics.put(1, 1, 400);
intrinsics.put(0, 2, 640 / 2);
intrinsics.put(1, 2, 480 / 2);
Calib3d.solvePnPRansac(goodReferencePoints, goodScenePoints, intrinsics, new MatOfDouble(), mRMat, mTVec);
MatOfDouble rotCameraMatrix1 = new MatOfDouble(3, 1, CvType.CV_32F);
double[] rVecArray = mRMat.toArray();
// Calib3d.Rodrigues(mRMat, rotCameraMatrix1);
double[] tVecArray = mTVec.toArray();
MatOfDouble projMatrix = new MatOfDouble(3, 4, CvType.CV_32F); //projMatrix 3x4 input projection matrix P.
projMatrix.put(0, 0, rVecArray[0]);
projMatrix.put(0, 1, rVecArray[1]);
projMatrix.put(0, 2, rVecArray[2]);
projMatrix.put(0, 3, 0);
projMatrix.put(1, 0, rVecArray[3]);
projMatrix.put(1, 1, rVecArray[4]);
projMatrix.put(1, 2, rVecArray[5]);
projMatrix.put(1, 3, 0);
projMatrix.put(2, 0, rVecArray[6]);
projMatrix.put(2, 1, rVecArray[7]);
projMatrix.put(2, 2, rVecArray[8]);
projMatrix.put(2, 3, 0);
MatOfDouble cameraMatrix = new MatOfDouble(3, 3, CvType.CV_32F); //cameraMatrix Output 3x3 camera matrix K.
MatOfDouble rotMatrix = new MatOfDouble(3, 3, CvType.CV_32F); //rotMatrix Output 3x3 external rotation matrix R.
MatOfDouble transVect = new MatOfDouble(4, 1, CvType.CV_32F); //transVect Output 4x1 translation vector T.
MatOfDouble rotMatrixX = new MatOfDouble(3, 3, CvType.CV_32F); //rotMatrixX a rotMatrixX
MatOfDouble rotMatrixY = new MatOfDouble(3, 3, CvType.CV_32F); //rotMatrixY a rotMatrixY
MatOfDouble rotMatrixZ = new MatOfDouble(3, 3, CvType.CV_32F); //rotMatrixZ a rotMatrixZ
MatOfDouble eulerAngles = new MatOfDouble(3, 1, CvType.CV_32F); //eulerAngles Optional three-element vector containing three Euler angles of rotation in degrees.
Calib3d.decomposeProjectionMatrix( projMatrix,
cameraMatrix,
rotMatrix,
transVect,
rotMatrixX,
rotMatrixY,
rotMatrixZ,
eulerAngles);
double[] eulerArray = eulerAngles.toArray();
return eulerArray;
}
Homography relates images of the same planar surface, so it works only if there is a dominant plane in the image and you can find enough feature points lying on the plane in both images and successfully match them. The minimum number of matches is four, and the math will work under the assumption that the matches are 100% correct. With the help of robust estimation like RANSAC, you can get the result even if some elements in your set of feature point matches are obvious mismatches or are not placed on a plane.
For the more general case of a set of matched features without the planarity assumption, you will need to find an essential matrix. The exact definition of the matrix can be found here. In short, it works more or less like homography - it relates corresponding points in two images. The minimum number of matches required to compute the essential matrix is five. To get the result from such a minimal sample, you need to make sure that the established matches are 100% correct. Again, robust estimation can help if there are outliers in your correspondence set -- and with automatic feature detection and matching there usually are.
OpenCV 3.0 has a function for essential matrix computation, conveniently integrated with RANSAC robust estimation. The essential matrix can be decomposed to the rotation matrix and translation vector as shown in the Wikipedia article I linked before. OpenCV 3.0 has a readily available function for this, too.
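A rough sketch of that pipeline in the Java bindings (hedged: the exact overloads of findEssentialMat and recoverPose assumed here may differ between OpenCV versions; pts1/pts2 stand for the matched point lists, and focal/pp reuse the dummy intrinsics from the question):
// Sketch: relative pose from the essential matrix (OpenCV 3.x Java bindings assumed)
MatOfPoint2f pts1 = new MatOfPoint2f(lp1.toArray(new Point[0])); // matched points, image 1
MatOfPoint2f pts2 = new MatOfPoint2f(lp2.toArray(new Point[0])); // matched points, image 2
double focal = 400;
Point pp = new Point(640 / 2, 480 / 2);
Mat essential = Calib3d.findEssentialMat(pts1, pts2, focal, pp, Calib3d.RANSAC, 0.999, 1.0);
Mat R = new Mat();
Mat t = new Mat();
Calib3d.recoverPose(essential, pts1, pts2, R, t, focal, pp); // R, t: relative rotation and translation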
Now the following code works for me, and I have decomposed the Euler angles from the homography matrix! I get values for pitch, roll and yaw, but I don't know whether they are correct. Does anybody have an idea how I can test this?
private static MatOfDMatch filterMatchesByHomography(MatOfKeyPoint keypoints1, MatOfKeyPoint keypoints2, MatOfDMatch matches){
List<Point> lp1 = new ArrayList<Point>(500);
List<Point> lp2 = new ArrayList<Point>(500);
KeyPoint[] k1 = keypoints1.toArray();
KeyPoint[] k2 = keypoints2.toArray();
List<DMatch> matchesList = matches.toList();
if (matchesList.size() < 4){
MatOfDMatch mat = new MatOfDMatch();
return mat;
}
// Add matches keypoints to new list to apply homography
for(DMatch match : matchesList){
Point kk1 = k1[match.queryIdx].pt;
Point kk2 = k2[match.trainIdx].pt;
lp1.add(kk1);
lp2.add(kk2);
}
MatOfPoint2f srcPoints = new MatOfPoint2f(lp1.toArray(new Point[0]));
MatOfPoint2f dstPoints = new MatOfPoint2f(lp2.toArray(new Point[0]));
//---------------------------------------
Mat mask = new Mat();
Mat homography = Calib3d.findHomography(srcPoints, dstPoints, Calib3d.RANSAC, 0.2, mask); // Finds a perspective transformation between two planes. ---Calib3d.LMEDS
Mat pose = cameraPoseFromHomography(homography);
//Decomposing a rotation matrix to eulerangle
pitch = Math.atan2(pose.get(2, 1)[0], pose.get(2, 2)[0]); // arctan2(r32, r33)
roll = Math.atan2(-1*pose.get(2, 0)[0], Math.sqrt( Math.pow(pose.get(2, 1)[0], 2) + Math.pow(pose.get(2, 2)[0], 2)) ); // arctan2(-r31, sqrt(r32^2 + r33^2))
yaw = Math.atan2(pose.get(1, 0)[0], pose.get(0, 0)[0]); // arctan2(r21, r11)
List<DMatch> matches_homo = new ArrayList<DMatch>();
int size = (int) mask.size().height;
for(int i = 0; i < size; i++){
if ( mask.get(i, 0)[0] == 1){
DMatch d = matchesList.get(i);
matches_homo.add(d);
}
}
MatOfDMatch mat = new MatOfDMatch();
mat.fromList(matches_homo);
return mat;
}
This is my camera pose from homography matrix (see this page too):
private static Mat cameraPoseFromHomography(Mat h) {
//Log.d("DEBUG", "cameraPoseFromHomography: homography " + matToString(h));
Mat pose = Mat.eye(3, 4, CvType.CV_32FC1); // 3x4 matrix, the camera pose
float norm1 = (float) Core.norm(h.col(0));
float norm2 = (float) Core.norm(h.col(1));
float tnorm = (norm1 + norm2) / 2.0f; // Normalization value
Mat normalizedTemp = new Mat();
Core.normalize(h.col(0), normalizedTemp);
normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
normalizedTemp.copyTo(pose.col(0)); // Normalize the rotation, and copies the column to pose
Core.normalize(h.col(1), normalizedTemp);
normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
normalizedTemp.copyTo(pose.col(1));// Normalize the rotation and copies the column to pose
Mat p3 = pose.col(0).cross(pose.col(1)); // Computes the cross-product of p1 and p2
p3.copyTo(pose.col(2));// Third column is the crossproduct of columns one and two
Mat temp = h.col(2);
double[] buffer = new double[3];
h.col(2).get(0, 0, buffer);
pose.put(0, 3, buffer[0] / tnorm); //vector t [R|t] is the last column of pose
pose.put(1, 3, buffer[1] / tnorm);
pose.put(2, 3, buffer[2] / tnorm);
return pose;
}