I am new to Mahout, and have been lately transforming a lot of my previous machine learning code to this framework. In many places, I am using cosine similarity between vectors for clustering, classification, etc. Investigating Mahout's distance method, however, gave me quite a surprise. In the following code snippet, the dimension and the float values are taken from an actual output of one of my programs (not that it matters here):
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.common.distance.CosineDistanceMeasure;

public static void main(String[] args) {
    RandomAccessSparseVector u = new RandomAccessSparseVector(373);
    RandomAccessSparseVector v = new RandomAccessSparseVector(373);

    u.set(24, 0.4526985183337534);
    u.set(55, 0.5333219834564495);
    u.set(54, 0.5333219834564495);
    u.set(53, 0.4756042214095471);

    v.set(57, 0.6653016370845252);
    v.set(56, 0.6653016370845252);
    v.set(11, 0.3387439495921685);

    CosineDistanceMeasure cosineDistanceMeasure = new CosineDistanceMeasure();
    System.out.println(cosineDistanceMeasure.distance(u, v));
}
The output is 1.0. Shouldn't it be 0.0?
Comparing this with the output of cosineDistanceMeasure.distance(u, u), I realize that what I am looking for is 1 - cosineDistanceMeasure.distance(u, v). But this reversal just doesn't make sense to me. Any idea why it was implemented this way? Or am I missing something very obvious?
When two points are "close", the angle they form when viewed as vectors from the origin is small, near zero. The cosine of angles near zero is near 1, and the cosine decreases as the angle goes towards 90 and then 180 degrees.
So the cosine decreases as the distance increases. This is why the cosine of the angle between two vectors can't itself serve as a distance measure. The 'canonical' way to turn it into one is 1 - cosine, which is exactly what CosineDistanceMeasure returns. (Strictly speaking, 1 - cosine can violate the triangle inequality, so it is not a true metric, but it is the standard convention.)
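As a quick sanity check, here is a small plain-Java sketch (no Mahout dependency, values copied from the question) that computes the cosine of the angle directly. Because the two vectors have no overlapping non-zero entries, the similarity is 0.0, so a distance defined as 1 - cosine comes out as 1.0, matching the output above:

public class CosineSanityCheck {
    public static void main(String[] args) {
        // Dense copies of the two sparse vectors above (all other entries are zero).
        double[] u = new double[373];
        double[] v = new double[373];
        u[24] = 0.4526985183337534; u[55] = 0.5333219834564495;
        u[54] = 0.5333219834564495; u[53] = 0.4756042214095471;
        v[57] = 0.6653016370845252; v[56] = 0.6653016370845252;
        v[11] = 0.3387439495921685;

        double dot = 0, normU = 0, normV = 0;
        for (int i = 0; i < u.length; i++) {
            dot += u[i] * v[i];
            normU += u[i] * u[i];
            normV += v[i] * v[i];
        }
        double similarity = dot / (Math.sqrt(normU) * Math.sqrt(normV)); // 0.0: no overlapping entries
        double distance = 1.0 - similarity;                              // 1.0: what CosineDistanceMeasure reports
        System.out.println("similarity = " + similarity + ", distance = " + distance);
    }
}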
I'm using the non-linear least squares Levenberg-Marquardt algorithm in Java to fit a number of exponential curves (A + B*exp(C*x)). Although the data is quite clean and fits the model well, the algorithm is not able to fit the majority of them even with an excessive number of iterations (5000-6000). For the curves it can fit, it does so in about 150 iterations.
LeastSquaresProblem problem = new LeastSquaresBuilder()
        .start(start)
        .model(jac)
        .target(dTarget)
        .lazyEvaluation(false)
        .maxEvaluations(5000)
        .maxIterations(6000)
        .build();
LevenbergMarquardtOptimizer optimizer = new LevenbergMarquardtOptimizer();
LeastSquaresOptimizer.Optimum optimum = optimizer.optimize(problem);
My question is: how would I define a convergence criterion in Apache Commons so that the fit stops before hitting the maximum number of iterations?
I don't believe Java is your problem. Let's address the mathematics.
This problem is easier to solve if you change your function.
Your assumed equation is:
y = A + B*exp(C*x)
It'd be easier if you could do this:
y-A = B*exp(C*x)
Now A is just a constant that can be zero or whatever value you need to shift the curve up or down. Let's call the quantity y - A a new variable z:
z = B*exp(C*x)
Taking the natural log of both sides:
ln(z) = ln(B*exp(C*x))
We can simplify that right hand side to get the final result:
ln(z) = ln(B) + C*x
Transform your (x, y) data to (x, z) and you can use least squares fitting of a straight line where C is the slope in (x, z) space and ln(B) is the intercept. Lots of software available to do that.
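As an illustration, here is a minimal sketch of that log-linear fit using SimpleRegression from Apache Commons Math. The data values, and the assumption that A is known (zero here), are made up for illustration:

import org.apache.commons.math3.stat.regression.SimpleRegression;

public class ExpFitSketch {
    public static void main(String[] args) {
        // Hypothetical data assumed to follow y = A + B*exp(C*x), with A taken as known (0 here).
        double A = 0.0;
        double[] xs = {0, 1, 2, 3, 4};
        double[] ys = {2.0, 2.7, 3.7, 5.0, 6.7};   // roughly 2*exp(0.3*x)

        SimpleRegression fit = new SimpleRegression();  // fits ln(z) = intercept + slope*x
        for (int i = 0; i < xs.length; i++) {
            double z = ys[i] - A;                   // z = B*exp(C*x)
            fit.addData(xs[i], Math.log(z));        // ln(z) = ln(B) + C*x
        }
        double C = fit.getSlope();
        double B = Math.exp(fit.getIntercept());
        System.out.println("B = " + B + ", C = " + C);
    }
}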
Ok, so on the internet I have seen equations for solving this, but they require the normal of the plane and involve more advanced math than I know.
Basically, if I have an x,y,z position (as well as x,y,z rotations) for my ray, and x,y,z for three points that represent my plane, How would I solve for the point of collision?
I have done 2D collisions before, but I am clueless on how this would work in 3D. Also, I work in java, though I understand C# well enough.
Thanks to the answer below, I was able to find the normal of my face. This then allowed me to, through trial and error and http://geomalgorithms.com/a05-_intersect-1.html, come up with the following code (hand made vector math excluded):
Vertice Vertice1 = faces.get(f).getV1();
Vertice Vertice2 = faces.get(f).getV2();
Vertice Vertice3 = faces.get(f).getV3();
Vector v1 = vt.subtractVertices(Vertice2, Vertice1);
Vector v2 = vt.subtractVertices(Vertice3, Vertice1);
Vector normal = vt.crossProduct(v1, v2); // cross product v1 x v2 gives the plane normal; a dot product would only give a scalar (crossProduct assumed to exist in the hand-made vector class)
// formula: t = -(a*x0 + b*y0 + c*z0 + d) / (n . u), where a,b,c = normal(x,y,z), (x0,y0,z0) = the ray origin camX,camY,camZ,
// u = the direction vector of the ray built from the rotations localRotX,localRotY,localRotZ,
// and d = -(normal . Vertice1) is the plane offset (omitted below, which assumes the plane passes through the origin)
double Collision =
    -(normal.x*camX + normal.y*camY + normal.z*camZ) / vt.dotProduct(normal, vt.subtractVertices(camX, camY, camZ,
        camX + Math.sin(localRotY)*Math.cos(localRotX), camY + Math.cos(localRotY)*Math.cos(localRotX), camZ + Math.sin(localRotX)));
This code should work mathematically, but I have yet to properly test it. Though I will continue working on this, I consider this topic finished. Thank you.
It would be very helpful to post one of the equations that you think would work for your situation. Without more information, I can only suggest using basic linear algebra to get the normal vector for the plane from the data you have.
In R3 (a.k.a. 3d math), the cross product of two vectors will yield a vector that is perpendicular to the two vectors. A plane normal vector is a vector that is perpendicular to the plane.
You can get two vectors that lie in your plane from the three points you mentioned. Let's call them A, B, and C.
v1 = B - A
v2 = C - A
normal = v1 x v2
Stack Overflow doesn't have MathJax formatting so that's a little ugly, but you should get the idea: construct two vectors from your three points in the plane, take the cross product of your two vectors, and then you have a normal vector. You should then be closer to adapting the equation to your needs.
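Here is a minimal self-contained sketch of that recipe in Java; the helper names and the sample points are made up for illustration:

public class PlaneNormalSketch {
    // Minimal 3D vector helpers; names are illustrative, not from the poster's vt class.
    static double[] subtract(double[] p, double[] q) {
        return new double[]{p[0] - q[0], p[1] - q[1], p[2] - q[2]};
    }
    static double[] cross(double[] a, double[] b) {
        return new double[]{
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }
    public static void main(String[] args) {
        double[] A = {0, 0, 0}, B = {1, 0, 0}, C = {0, 1, 0};  // three points in the plane
        double[] v1 = subtract(B, A);
        double[] v2 = subtract(C, A);
        double[] normal = cross(v1, v2);   // perpendicular to the plane: (0, 0, 1) here
        System.out.println(normal[0] + ", " + normal[1] + ", " + normal[2]);
    }
}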
I have a Java program that moves an object according to a sine function a*sin(b*x).
The object moves at a certain speed by incrementing the x parameter by the time interval times the speed. However, this only moves the object at a constant speed along the x axis.
What I want to do is move the object at a constant tangential speed along the curve of the function. Could someone please help me? Thanks.
Let's say you have a function f(x) that describes a curve in the xy plane. The problem consists in moving a point along this curve at a constant speed S (i.e., at a constant tangential speed, as you put it.)
So, let's start at an instant t and a position x. The point has coordinates (x, f(x)). An instant later, say, at t + dt the point has moved to (x + dx, f(x + dx)).
The distance between these two locations is:
dist = sqrt((x + dx - x)^2 + (f(x+dx) - f(x))^2) = sqrt(dx^2 + (f(x+dx) - f(x))^2)
Now, let's factor out dx to the right. We get:
dist = sqrt(1 + f'(x)^2) dx
where f'(x) is the derivative (f(x+dx) - f(x)) / dx.
If we now divide by the time elapsed dt we get
dist/dt = sqrt(1 + f'(x)^2) dx/dt.
But dist/dt is the speed at which the point moves along the curve, so it is the constant S. Then
S = sqrt(1 + f'(x)^2) dx/dt
Solving for dx
dx = S / sqrt(1 + f'(x)^2) dt
which gives you how much you have to move the x-coordinate of the point after dt units of time.
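A minimal sketch of how that update rule could be applied in Java for f(x) = a*sin(b*x); the curve parameters, speed, and step size are made up for illustration:

public class ConstantSpeedAlongSine {
    public static void main(String[] args) {
        double a = 1.0, b = 2.0;      // curve f(x) = a*sin(b*x); illustrative values
        double S = 1.5;               // desired constant tangential speed
        double dt = 0.001;            // small time step
        double x = 0.0;

        for (int step = 0; step < 1000; step++) {
            double fPrime = a * b * Math.cos(b * x);            // f'(x)
            double dx = S / Math.sqrt(1 + fPrime * fPrime) * dt; // dx = S / sqrt(1 + f'(x)^2) dt
            x += dx;                                             // advance x so the arc-length speed stays ~S
        }
        System.out.println("x after 1 second: " + x + ", y = " + a * Math.sin(b * x));
    }
}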
The arc length on a sine curve as a function of x is an elliptic integral of the second kind. To determine the x coordinate after you move a particular distance (or a particular time with a given speed) you would need to invert this elliptic integral. This is not an elementary function.
There are ways to approximate the inverse of the elliptic integral that are much simpler than you might expect. You could combine a good numerical integration algorithm such as Simpson's rule with either Newton's method or binary search to find a numerical root of arc length(x) = kt. Whether this is too computationally expensive depends on how accurate you need it to be and how often you need to update it. The error will decrease dramatically if you estimate the length of one period once, and then reduce t mod the arc length on one period.
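For illustration, here is a rough sketch of that idea in Java: composite Simpson's rule for the arc length of a*sin(b*x), inverted by binary search (bisection). All constants, step counts, and tolerances are arbitrary assumptions:

public class ArcLengthInversion {
    static double a = 1.0, b = 2.0;                 // f(x) = a*sin(b*x); illustrative values

    // integrand of the arc length: sqrt(1 + f'(x)^2)
    static double speedFactor(double x) {
        double fp = a * b * Math.cos(b * x);
        return Math.sqrt(1 + fp * fp);
    }

    // arc length from 0 to x via composite Simpson's rule with n (even) subintervals
    static double arcLength(double x, int n) {
        double h = x / n, sum = speedFactor(0) + speedFactor(x);
        for (int i = 1; i < n; i++) {
            sum += (i % 2 == 1 ? 4 : 2) * speedFactor(i * h);
        }
        return sum * h / 3;
    }

    // find x such that arcLength(x) = target, by bisection
    static double invertArcLength(double target) {
        double lo = 0, hi = 1;
        while (arcLength(hi, 1000) < target) hi *= 2;   // bracket the root
        for (int i = 0; i < 60; i++) {
            double mid = 0.5 * (lo + hi);
            if (arcLength(mid, 1000) < target) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        double speed = 1.5, t = 2.0;
        double x = invertArcLength(speed * t);  // x position after travelling speed*t along the curve
        System.out.println("x = " + x + ", y = " + a * Math.sin(b * x));
    }
}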
Another approach you might try is to use a different curve than a sine curve with a more tractable arc length parametrization. Unfortunately, there are few of those, which is why the arc length exercises in calculus books repeat the same types of curves over and over. Another possibility is to accept a speed that isn't constant but doesn't go too far above or below a specified constant, which you can get with some Fourier analysis.
Another approach is to recognize the arc length parametrization as a solution to a 2-dimensional ordinary differential equation. A first order numerical approximation (Euler's method) might suffice, and I think that's what Leandro Caniglia's answer suggests. If you find that the round off errors are too large, you can use a higher order method such as Runge-Kutta.
I need to calculate the distance from a point [x y z] to the surface of an object (at this stage a simple rectoid, but later an arbitrary shape) along [0 0 1].
I could do this by defining the surfaces as planes using unit vectors and then doing a linear algebra calculation to find the distance to each plane along [0 0 1]. However, as someone fairly new to coding and Java, I wanted to see if there is a library or a more efficient way of doing this, since in the long term I may have complex convex objects and need to stick to standard practices (so I can use something else to generate the planes!).
Thanks,
If you are using Point3D to represent your points, then you have a distance method you can use to calculate the distance. So the question is: which point on the surface do you want? If you just need any point on the surface, you could pick one of the corner points and use that to calculate the distance.
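For example, assuming the JavaFX Point3D class is the one in use, a minimal sketch might look like this (the coordinates are made up):

import javafx.geometry.Point3D;

public class SurfaceDistanceSketch {
    public static void main(String[] args) {
        Point3D query = new Point3D(1.0, 2.0, 3.0);   // the point [x y z]
        Point3D corner = new Point3D(0.0, 0.0, 0.0);  // a chosen corner of the box
        System.out.println("Distance to corner: " + query.distance(corner));
    }
}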
Does anyone know of a scientific/mathematical library in Java that has a straightforward implementation of weighted linear regression? Something along the lines of a function that takes 3 arguments and returns the corresponding coefficients:
linearRegression(x,y,weights)
This seems fairly straightforward, so I imagine it exists somewhere.
PS) I've tried Flanagan's library: http://www.ee.ucl.ac.uk/~mflanaga/java/Regression.html. It has the right idea, but it seems to crash sporadically and complain about my degrees of freedom.
Not a library, but the code is posted: http://www.codeproject.com/KB/recipes/LinReg.aspx
(and includes the mathematical explanation for the code, which is a huge plus).
Also, it seems that there is another implementation of the same algorithm here: http://sin-memories.blogspot.com/2009/04/weighted-linear-regression-in-java-and.html
Finally, there is a lib from a University in New Zealand that seems to have it implemented: http://www.cs.waikato.ac.nz/~ml/weka/ (pretty decent javadocs). The specific method is described here:
http://weka.sourceforge.net/doc/weka/classifiers/functions/LinearRegression.html
I was also searching for this, but I couldn't find anything. The reason might be that you can simplify the problem to the standard regression as follows:
The weighted linear regression without residual can be represented as
diag(sqrt(weights)) y = diag(sqrt(weights)) X b, where left-multiplying a matrix T by diag(sqrt(weights)) simply multiplies each row of T by the corresponding square-rooted weight. Therefore, the translation between weighted and unweighted regression without residual is trivial.
To translate a regression with residual y=Xb+u into a regression without residual y=Xb, you add an additional column to X - a new column with only ones.
Now that you know how to simplify the problem, you can use any library to solve the standard linear regression.
Here's an example, using Apache Commons Math:
import org.apache.commons.math3.stat.regression.OLSMultipleLinearRegression;

void linearRegression(double[] xUnweighted, double[] yUnweighted, double[] weights) {
    double[] y = new double[yUnweighted.length];
    double[][] x = new double[xUnweighted.length][2];
    for (int i = 0; i < y.length; i++) {
        // Scale each observation by sqrt(weight); the second column of x plays the role of the intercept.
        y[i] = Math.sqrt(weights[i]) * yUnweighted[i];
        x[i][0] = Math.sqrt(weights[i]) * xUnweighted[i];
        x[i][1] = Math.sqrt(weights[i]);
    }
    OLSMultipleLinearRegression regression = new OLSMultipleLinearRegression();
    regression.setNoIntercept(true); // the intercept is already encoded in the second column of x
    regression.newSampleData(y, x);
    double[] regressionParameters = regression.estimateRegressionParameters();
    double slope = regressionParameters[0];
    double intercept = regressionParameters[1];
    System.out.println("y = " + slope + "*x + " + intercept);
}
This can be explained intuitively by the fact that in linear regression with u = 0, if you take any point (x, y) and scale it to (C*x, C*y), the error for the new point also gets multiplied by C. In other words, linear regression already gives higher weight to points with larger x. We are minimizing the squared error, which is why we take the square roots of the weights.
I personally used the org.apache.commons.math.stat.regression.SimpleRegression class from the Apache Commons Math library.
I also found a more lightweight class from Princeton university but didn't test it:
http://introcs.cs.princeton.edu/java/97data/LinearRegression.java.html
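For reference, a minimal (unweighted) sketch of that class in use; note that in Commons Math 3 the package is org.apache.commons.math3.stat.regression, and the data values are made up:

import org.apache.commons.math3.stat.regression.SimpleRegression;

public class SimpleRegressionSketch {
    public static void main(String[] args) {
        SimpleRegression regression = new SimpleRegression();  // fits y = intercept + slope*x
        regression.addData(1.0, 2.1);
        regression.addData(2.0, 3.9);
        regression.addData(3.0, 6.2);
        System.out.println("slope = " + regression.getSlope()
                + ", intercept = " + regression.getIntercept());
    }
}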
Here's a direct Java port of the C# code for weighted linear regression from the first link in Aleadam's answer:
https://github.com/lukehutch/WeightedLinearRegression.java