How to compare two curves (array of points) - java

I have a problem finding a method to compare two trajectories (curves).
The first, original one contains points (x,y).
The second one can be offset, at a smaller or larger scale, and rotated; it is also an array of points (x,y).
My first method was to find the smallest distance between each point and the other curve, repeat this for every point, sum the distances and divide by the number of points; the result tells me the average error per point:
http://www.mathopenref.com/coorddist.html
I also found this method:
https://help.scilab.org/docs/6.0.0/en_US/fminsearch.html
But I can't figure out how to use it.
I would like to compare both trajectories, but my result has to account for rotation, or at least for an offset at the beginning.
My current method calculates the error (distance) per point:

1. Get a coordinate (x,y) from the second trajectory.
2. In a loop, find the minimum distance between that (x,y) and the points of the original trajectory.
3. Add the smallest distance found in step 2 to a running sum.
4. Divide the sum of the smallest distances by the number of points in the second trajectory.

The result describes the average error (distance) per point compared with the original trajectory.
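In code, the metric I currently use looks roughly like this (a sketch; Point is a hypothetical class with public x and y fields):

import java.util.List;

static double averageNearestDistance(List<Point> original, List<Point> candidate) {
    double sum = 0.0;
    for (Point c : candidate) {
        double best = Double.MAX_VALUE;
        // step 2: smallest distance from this point to the original trajectory
        for (Point o : original) {
            double d = Math.hypot(c.x - o.x, c.y - o.y);
            if (d < best) best = d;
        }
        sum += best;                   // step 3: accumulate the smallest distances
    }
    return sum / candidate.size();     // step 4: average error per point
}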
But I cannot figure out how to handle a trajectory that is rotated, scaled or shifted.
Please look at my example trajectories:
http://pokazywarka.pl/trajectory/
http://pokazywarka.pl/trajectory2/

So you need to compare the shape of two curves invariant to rotation, translation and scale.
Solution
Let's assume two sine waves for testing, both rotated and scaled (but with the same aspect ratio) and one with added noise. I generated them in C++ like this:
struct _pnt2D
{
    double x, y;
    // inline
    _pnt2D() {}
    _pnt2D(const _pnt2D& a) { *this = a; }
    ~_pnt2D() {}
    _pnt2D& operator = (const _pnt2D& a) { x = a.x; y = a.y; return *this; }
};
List<_pnt2D> curve0,curve1; // curves points
_pnt2D p0,u0,v0,p1,u1,v1; // curves OBBs
const double deg=M_PI/180.0;
const double rad=180.0/M_PI;
void rotate2D(double alfa, double x0, double y0, double &x, double &y)
{
    double a = x - x0, b = y - y0, c, s;
    c = cos(alfa);
    s = sin(alfa);
    x = x0 + a*c - b*s;
    y = y0 + a*s + b*c;
}
// this code is the init stuff:
int i;
double x, y, a;
_pnt2D p, *pp;
Randomize();
for (x = 0; x < 2.0*M_PI; x += 0.01)
{
    y = sin(x);
    p.x =  50.0 + (100.0*x);
    p.y = 180.0 - ( 50.0*y);
    rotate2D(+15.0*deg, 200, 180, p.x, p.y);
    curve0.add(p);
    p.x = 150.0 + ( 50.0*x);
    p.y = 200.0 - ( 25.0*y) + 5.0*Random();
    rotate2D(-25.0*deg, 250, 100, p.x, p.y);
    curve1.add(p);
}
OBB (oriented bounding box)
Compute the OBB of each curve; this gives you the rotation angle and position of both curves, so you can rotate and move one of them so that they start at the same position and have the same orientation.
If the OBB sizes are too different then the curves are different.
For the above example it yields this result:
Each OBB is defined by a start point P and basis vectors U,V where |U| >= |V| and the z coordinate of U x V is positive. That ensures the same winding for all OBBs. It can be done in OBBox_compute by adding this to the end:
// |U| >= |V|
if ((u.x*u.x) + (u.y*u.y) < (v.x*v.x) + (v.y*v.y)) { _pnt2D p; p = u; u = v; v = p; }
// (U x V).z > 0
if ((u.x*v.y) - (u.y*v.x) < 0.0)
{
    p0.x += v.x;
    p0.y += v.y;
    v.x = -v.x;
    v.y = -v.y;
}
So curve0 has p0,u0,v0 and curve1 has p1,u1,v1.
Now we want to rescale, translate and rotate curve1 to match curve0. It can be done like this:
// compute OBBs
OBBox_compute(p0, u0, v0, curve0.dat, curve0.num);
OBBox_compute(p1, u1, v1, curve1.dat, curve1.num);
// difference angle = -acos((U0.U1)/(|U0|.|U1|))
a = -acos(((u0.x*u1.x) + (u0.y*u1.y)) / (sqrt((u0.x*u0.x) + (u0.y*u0.y)) * sqrt((u1.x*u1.x) + (u1.y*u1.y))));
// rotate curve1
for (pp = curve1.dat, i = 0; i < curve1.num; i++, pp++)
    rotate2D(a, p1.x, p1.y, pp->x, pp->y);
// rotate OBB1
rotate2D(a, 0.0, 0.0, u1.x, u1.y);
rotate2D(a, 0.0, 0.0, v1.x, v1.y);
// translation difference = P0 - P1
x = p0.x - p1.x;
y = p0.y - p1.y;
// translate curve1
for (pp = curve1.dat, i = 0; i < curve1.num; i++, pp++)
{
    pp->x += x;
    pp->y += y;
}
// translate OBB1
p1.x += x;
p1.y += y;
// scale difference = |U0|/|U1|
x = sqrt((u0.x*u0.x) + (u0.y*u0.y)) / sqrt((u1.x*u1.x) + (u1.y*u1.y));
// scale curve1
for (pp = curve1.dat, i = 0; i < curve1.num; i++, pp++)
{
    pp->x = ((pp->x - p0.x) * x) + p0.x;
    pp->y = ((pp->y - p0.y) * x) + p0.y;
}
// scale OBB1
u1.x *= x;
u1.y *= x;
v1.x *= x;
v1.y *= x;
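Incidentally, the three steps above (rotate about P1, translate P1 onto P0, scale about P0) collapse into the single map q = P0 + s*R(a)*(p - P1). Since the question is tagged Java, here is a rough one-step sketch (names are illustrative, not part of the code above):

// apply q = P0 + s * R(a) * (p - P1) to one point p = {x, y}
static void alignPoint(double[] p, double p0x, double p0y,
                       double p1x, double p1y, double a, double s) {
    double dx = p[0] - p1x, dy = p[1] - p1y;   // point relative to OBB1 origin
    double c = Math.cos(a), sn = Math.sin(a);
    p[0] = p0x + s * (dx * c - dy * sn);       // rotate by a, scale by s, re-anchor at P0
    p[1] = p0y + s * (dx * sn + dy * c);
}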
You can use Understanding 4x4 homogeneous transform matrices to do all this in one step. Here is the result:
sampling
In case of non-uniform or very different point density between the curves (or between any parts of them), you should re-sample the curves to a common point density. You can use linear or polynomial interpolation for this. You also do not need to store the new sampling in memory; instead you can build a function that returns the point of each curve parametrized by arc length from its start:
point curve0(double distance);
point curve1(double distance);
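Since the question is tagged Java, a rough sketch of such a function using linear interpolation could look like this (Point is a hypothetical class with public x and y fields and an (x, y) constructor):

import java.util.List;

static Point pointAtArcLength(List<Point> curve, double distance) {
    double walked = 0.0;
    for (int i = 1; i < curve.size(); i++) {
        Point a = curve.get(i - 1), b = curve.get(i);
        double seg = Math.hypot(b.x - a.x, b.y - a.y);      // length of this segment
        if (seg > 0.0 && walked + seg >= distance) {
            double t = (distance - walked) / seg;           // fraction along the segment
            return new Point(a.x + t * (b.x - a.x), a.y + t * (b.y - a.y));
        }
        walked += seg;
    }
    return curve.get(curve.size() - 1);                     // clamp past the end
}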
comparison
Now you can subtract the two curves and sum up the absolute differences (point distances). Then divide by the curve length and threshold the result:
double sum = 0.0;
for (double l = 0.0; l <= bigger_curve_length; l += step)
    sum += distance(curve0(l), curve1(l)); // distance between the two points at the same arc length
sum /= bigger_curve_length;
if (sum>threshold) curves are different
else curves match
You should also try this with an extra +180deg rotation, because the orientation difference obtained from the OBBs covers only half of the true range.
Here are a few related QAs:
compare shapes
How can i produce multi point linear interpolation?

Related

How to calculate distance between two points in n dimensions with Java?

I'd like to write a function that can calculate the Euclidean distance between two points, no matter how many coordinates the points have (assuming both points have the same number of coordinates).
For two dimensions it's of course:
public static Double getDistance(Point2D p, Point2D ref) {
    double dXSquared = Math.pow(p.getX() - ref.getX(), 2);
    double dYSquared = Math.pow(p.getY() - ref.getY(), 2);
    return Double.valueOf(Math.sqrt(dXSquared + dYSquared));
}
Is there an elegant way of doing this without having to write workarounds to figure out how many coordinates a point has? Something like direct vector operations, as in numpy, would be nice.
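One way to do this without fixing the dimension is to represent a point as a plain double[] and loop over the coordinates; a sketch, not tied to any particular Point class:

public static double distance(double[] p, double[] ref) {
    double sum = 0.0;
    for (int i = 0; i < p.length; i++) {   // assumes p.length == ref.length
        double d = p[i] - ref[i];
        sum += d * d;
    }
    return Math.sqrt(sum);
}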

Closest Point - Flaw in approach

I am solving the Closest Point Problem from here.
Problem Statement:
We are given an array of n points in the plane, and the problem is to find the closest pair of points in the array.
INPUT: two arrays X and Y, where X[] stores the x coordinates and Y[] stores the y coordinates.
OUTPUT: the smallest distance.
My Algorithm:
Note: the approach works only for positive coordinates.

1. Find the distance of every coordinate from (0,0) and store it in a distance array.
2. Sort the distance array calculated in the previous step.
3. Find the smallest distance by calculating the difference between consecutive values in the distance array.

Code:
import java.util.Arrays;

public class ClosestPoint {
    int x[] = {2, 12, 40, 5, 12, 3}, y[] = {3, 30, 50, 1, 10, 4}; // x and y coordinates
    float distance[] = {0, 0, 0, 0, 0, 0};                        // distance from (0,0)

    void calculateDis() {
        for (int i = 0; i < x.length; i++) {
            int dis = (x[i] * x[i] + y[i] * y[i]);
            distance[i] = (float) Math.sqrt(dis);
        }
    }

    float findClosest() {
        float closest = Float.MAX_VALUE;
        for (int i = 0; i < distance.length - 1; i++) {
            float pairDis = distance[i + 1] - distance[i];
            if (closest > pairDis) {
                closest = pairDis;
            }
        }
        return closest;
    }

    public static void main(String arg[]) {
        ClosestPoint p = new ClosestPoint();
        p.calculateDis();        // calculate distance from (0,0)
        Arrays.sort(p.distance);
        System.out.println(p.findClosest());
    }
}
Correct answer: 1.4
My answer: 0.099
I am not getting the correct answer. Can someone point out the flaw in my approach?
Thanks.
The actual problem is in the logic. You are calculating the distances from the origin and comparing those, which can lead to a wrong answer.
Consider the points (3,4) and (4,3). Both are at distance 5 from the origin, so according to your logic, after sorting, the array would contain 5.0, 5.0 and your algorithm would return 0, but the actual distance between the two points is sqrt(2), about 1.41.
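For comparison, a plain brute-force check of all pairs (a sketch reusing the x[]/y[] arrays from the question) returns the expected value of about 1.41 for the sample data:

static float bruteForceClosest(int[] x, int[] y) {
    float closest = Float.MAX_VALUE;
    for (int i = 0; i < x.length; i++)
        for (int j = i + 1; j < x.length; j++) {
            // real distance between point i and point j
            float d = (float) Math.hypot(x[i] - x[j], y[i] - y[j]);
            if (d < closest) closest = d;
        }
    return closest;
}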

algorithm to calculate perimeter of unioned rectangles

I'm trying to calculate the perimeter of the union of n rectangles, for which I have the bottom-left and top-right points. Every rectangle sits on the x axis (the bottom-left corner of every rectangle is (x, 0)). I've been looking into different ways of doing this and it seems like the sweep-line algorithm is the best approach. I've looked at Graham scan as well. I'm aiming for an O(n log n) algorithm. Honestly, though, I am lost on how to proceed, and I'm hoping someone here can dumb it down for me and help me understand exactly how to accomplish this.
Some things I've gathered from the research I've done:
We'll need to sort the points (I'm not sure by which criterion we should sort them).
We will be dividing and conquering something (to achieve the O(log n) factor).
We'll need to calculate intersections (What's the best way to do this?)
We'll need some sort of data structure to hold the points (Binary tree perhaps?)
I'll ultimately be implementing this algorithm in Java.
The algorithm is a lot of fiddly case analysis. Not super complicated, but difficult to get completely correct.
Say all the rectangles are stored in an array A by lower-left and upper-right corner (x0, y0, x1, y1). So we can represent any edge of a rectangle as a pair (e, i) where e ∈ {L, R, T, B} for left, right, top, and bottom edge and i denotes A[i]. Put all pairs (L, i) in a start list S and sort it on A[i].x0.
We'll also need a scan line C, which is a BST of triples (T, i, d) for top edges and (B, i, d) for bottom. Here i is a rectangle index, and d is an integer depth, described below. The key for the BST is the edges' y coordinates. Initially it's empty.
Note that at any time you can traverse C in order and determine which portions of the sweep line are hidden by a rectangle and which are not. Do this by keeping a depth counter, initially zero. From least y to greatest, when you encounter a bottom edge, add 1 to the counter; when you see a top edge, subtract 1. Where the counter is zero, the scan line is visible; elsewhere it is hidden by a rectangle.
Now you never actually do that entire traversal. Rather you can be efficient by maintaining the depths incrementally. The d element of each triple in C is the depth of the region above it. (The region below the first edge in C is always of depth 0.)
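A rough sketch of that visibility walk (illustrative only; it assumes the scan-line entries are available as a map from y coordinate to +1 for a bottom edge and -1 for a top edge, with distinct keys):

import java.util.*;

static List<double[]> visibleIntervals(NavigableMap<Double, Integer> edgesByY,
                                       double yMin, double yMax) {
    List<double[]> visible = new ArrayList<>();
    int depth = 0;                 // number of rectangles covering the current region
    double start = yMin;
    for (Map.Entry<Double, Integer> e : edgesByY.entrySet()) {
        if (depth == 0) visible.add(new double[]{start, e.getKey()}); // uncovered gap
        depth += e.getValue();     // +1 entering a rectangle, -1 leaving one
        start = e.getKey();
    }
    if (depth == 0) visible.add(new double[]{start, yMax});           // tail gap
    return visible;
}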
Finally we need an output register P. It stores a set of polylines (doubly linked lists of edges are convenient for this) and allows queries of the form "Give me all the polylines whose ends' y coordinates fall in the range [y0..y1]." It's a property of the algorithm that these polylines always have two horizontal edges crossing the scan line as their ends, and all other edges are left of the scan line. Also, no two polylines intersect. They're segments of the output polygon "under construction." Note the output polygon may be non-simple, consisting of multiple "loops" and "holes." Another BST will do for P. It is also initially empty.
Now the algorithm looks roughly like this. I'm not going to steal all the fun of figuring out the details.
while there are still edges in S
    Let V = leftmost vertical edge taken from S
    Determine Vv, the intersection of V with the visible parts of C
    if V is of the form (L, i)   // a left edge
        Update P with Vv (polylines may be added or joined)
        add (R, i) to S
        add (T, i) and (B, i) to C, incrementing depths as needed
    else                         // V is of the form (R, i), a right edge
        Update P with Vv (polylines may be removed or joined)
        remove (T, i) and (B, i) from C, decrementing depths as needed
As P is updated, you'll generate the complex polygon. The rightmost edge should close the last loop.
Finally, be aware that coincident edges can create some tricky special cases. When you run into those, post again, and we can discuss.
The run time for the sort is of course O(n log n), but the cost of updating the scan line depends on how many rectangles can overlap: it can reach O(n) per update in degenerate cases, i.e. O(n^2) for the whole computation.
Good luck. I've implemented this algorithm (years ago) and a few others similar. They're tremendous exercises in rigorous logical case analysis. Extremely frustrating, but also rewarding when you win through.
The trick is to first find the max height at every segment along the x axis (see the picture above). Once you know this, then the perimeter is easy:
NOTE: I haven't tested the code so there might be typos.
// Calculate perimeter given the maxY at each line segment.
double calcPerimeter(List<Double> X, List<Double> maxY) {
    double perimeter = 0;
    for (int i = 1; i < X.size(); i++) {
        // add the vertical side at this x, maxY[0] == 0
        perimeter += Math.abs(maxY.get(i) - maxY.get(i - 1));
        // add the top of the rect
        perimeter += X.get(i) - X.get(i - 1);
    }
    // add the right side and return the total perimeter
    return perimeter + maxY.get(maxY.size() - 1);
}
Putting it all together, you will need to first calculate X and maxY. The full code will look something like this:
double calcUnionPerimeter(Set<Rect> rects) {
    // list of x event points, each with a reference to its Rect (uses java.util.*)
    List<Map.Entry<Double, Rect>> orderedList = new ArrayList<>();
    // create the list of all x points (left and right edge of every rect)
    for (Rect rect : rects) {
        orderedList.add(new AbstractMap.SimpleEntry<>(rect.getX(), rect));
        orderedList.add(new AbstractMap.SimpleEntry<>(rect.getX() + rect.getW(), rect));
    }
    // sort the list by x
    Collections.sort(orderedList, new Comparator<Map.Entry<Double, Rect>>() {
        @Override
        public int compare(Map.Entry<Double, Rect> p1, Map.Entry<Double, Rect> p2) {
            return Double.compare(p1.getKey(), p2.getKey());
        }
    });
    // max priority queue based on Rect height (tallest at the head)
    Queue<Rect> maxQ = new PriorityQueue<>(new Comparator<Rect>() {
        @Override
        public int compare(Rect r1, Rect r2) {
            return Double.compare(r2.getH(), r1.getH());
        }
    });
    List<Double> X = new ArrayList<>();
    List<Double> maxY = new ArrayList<>();
    // loop through the events, building up X and maxY
    for (Map.Entry<Double, Rect> e : orderedList) {
        double x = e.getKey();
        Rect rect = e.getValue();
        boolean isRightEdge = (x == rect.getX() + rect.getW());
        X.add(x);
        maxY.add(maxQ.isEmpty() ? 0.0 : maxQ.peek().getH()); // max height just left of this x
        if (isRightEdge) {
            maxQ.remove(rect);   // rect ends here
        } else {
            maxQ.add(rect);      // rect starts here
        }
    }
    return calcPerimeter(X, maxY);
}
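Rect is not defined in the snippet above; a minimal version of what it seems to assume (bottom-left x, width and height, with every rectangle sitting on the x axis) could look like this:

class Rect {
    private final double x, w, h;
    Rect(double x, double w, double h) { this.x = x; this.w = w; this.h = h; }
    double getX() { return x; }   // bottom-left x
    double getW() { return w; }   // width
    double getH() { return h; }   // height (also the top y, since the bottom is on the x axis)
}

// example call:
// double p = calcUnionPerimeter(new HashSet<>(Arrays.asList(new Rect(0, 4, 2), new Rect(3, 5, 3))));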

Smooth rotation with quaternions

A quaternion can describe not only a rotation, but also an orientation, i.e. a rotation from the initial (zero) position.
I want to model a smooth rotation from one orientation to another. I calculated the start orientation startOrientation and the end orientation endOrientation, and I wanted to describe the intermediate orientations as startOrientation*(1-argument) + endOrientation*argument while argument changes from 0 to 1.
The code for the jMonkeyEngine update function follows:
@Override
public void simpleUpdate(float tpf) {
    if (endOrientation != null) {
        if (!started) {
            started = true;
        } else {
            fraction += tpf * speed;
            argument = (float) ((1 - Math.cos(fraction * Math.PI)) / 2);
            orientation = startOrientation.mult(1 - argument).add(endOrientation.mult(argument));
            //orientation = startOrientation.mult(1-fraction).add(endOrientation.mult(fraction));
            log.debug("tpf = {}, fraction = {}, argument = {}", tpf, fraction, argument);
            //log.debug("orientation = {}", orientation);
            rootNode.setLocalRotation(orientation);
            if (fraction >= 1) {
                rootNode.setLocalRotation(endOrientation);
                log.debug("Stopped rotating");
                startOrientation = endOrientation = null;
                fraction = 0;
                started = false;
            }
        }
    }
}
The cosine formula was expected to model smooth acceleration at the beginning and deceleration at the end.
The code works, but not as expected: the smooth rotation starts and finishes long before the fraction and argument values reach 1, and I don't understand why.
Why does the orientation value reach endOrientation so fast?
You have stated that in your case startOrientation was being modified. However, the following remains true.
Interpolating between quaternions
The method slerp is included within the Quaternion class for this purpose: interpolating between two rotations.
Assuming we have two quaternions startOrientation and endOrientation and we want the interpolation between them at a given point, we interpolate using the following code:
float interpolation=0.2f;
Quaternion result=new Quaternion();
result.slerp(startOrientation, endOrientation, interpolation);
Why your approach may be dangerous
Quaternions are somewhat complex internally and follow somewhat different mathematical rules than, say, vectors. You have called the mult(float scalar) method on the quaternion. Internally this looks like this:
public QuaternionD mult(float scalar) {
return new QuaternionD(scalar * x, scalar * y, scalar * z, scalar * w);
}
So it just does a simple multiplication of all the elements. This explicitly does not return a rotation that is scalar times the size. In fact such a quaternion no longer represents a valid rotation at all, since it's no longer a unit quaternion. If you called normalise on this quaternion it would immediately undo the scaling. I'm sure Quaternion#mult(float scalar) has some uses, but I have yet to find them.
It is also the case that "adding" quaternions does not combine them. In fact you multiply them. So combining q1 then q2 then q3 would be achieved as follows:
Quaternion q0 = q1.mult(q2).mult(q3);
The cheat sheet is incredibly useful for this
Formula vs slerp comparison
In your case your formula for interpolation is nearly, but not quite, correct. This shows a graph of yaw for interpolation between two quaternions using both methods.
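For completeness, here is a rough, untested sketch of how the simpleUpdate method from the question could use slerp instead of the weighted sum (it reuses the fields from the original code):

@Override
public void simpleUpdate(float tpf) {
    if (endOrientation == null) return;
    fraction += tpf * speed;
    // same ease-in/ease-out shaping as before, clamped so it never overshoots 1
    argument = (float) ((1 - Math.cos(Math.min(fraction, 1f) * Math.PI)) / 2);
    orientation = new Quaternion();
    orientation.slerp(startOrientation, endOrientation, argument); // interpolate on the unit sphere
    rootNode.setLocalRotation(orientation);
    if (fraction >= 1) {
        rootNode.setLocalRotation(endOrientation);
        startOrientation = endOrientation = null;
        fraction = 0;
        started = false;
    }
}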

Computing circle intersections in O( (n+s) log n)

I'm trying to figure out how to design an algorithm that can complete this task with O((n+s) log n) complexity, s being the number of intersections. I've tried searching on the internet, yet couldn't really find anything.
Anyway, I realise having a good data structure is key here. I am using a Red-Black Tree implementation in Java: TreeMap. I also use the famous(?) sweep-line algorithm to help me deal with my problem.
Let me explain my setup first.
I have a Scheduler. This is a PriorityQueue with my circles ordered (ascending) by their leftmost coordinate. scheduler.next() basically polls the PriorityQueue, returning the next leftmost circle.
public Circle next()
{ return this.pq.poll(); }
I also have an array with 4n event points. Every circle has 2 event points: the leftmost x and the rightmost x. The scheduler has a method sweepline() to get the next event point.
public Double sweepline()
{ return this.schedule[pointer++]; }
I also have a Status. The sweep-line status to be more precise. According to the theory, the status contains the circles that are eligible to be compared to each other. The point of having the sweep line in this whole story is that you're able to rule out a lot of candidates because they simply are not within the radius of current circles.
I implemented the Status with a TreeMap<Double, Circle>. Double being the circle.getMostLeftCoord().
This TreeMap guarantees O(log n) for inserting/removing/finding.
The algorithm itself is implemented like so:
Double sweepLine = scheduler.sweepline();
Circle c = null;
while (notDone) {
    while ((!scheduler.isEmpty()) && (c = scheduler.next()).getMostLeftCoord() >= sweepLine)
        status.add(c);
    /*
     * Delete the oldest circles that the sweepline has left behind
     */
    while (status.oldestCircle().getMostRightCoord() < sweepLine)
        status.deleteOldest();
    Circle otherCircle;
    for (Map.Entry<Double, Circle> entry : status.keys()) {
        otherCircle = entry.getValue();
        if (!c.equals(otherCircle)) {
            Intersection[] is = Solver.findIntersection(c, otherCircle);
            if (is != null)
                for (Intersection intersection : is)
                    intersections.add(intersection);
        }
    }
    sweepLine = scheduler.sweepline();
}
EDIT: Solver.findIntersection(c, otherCircle) returns at most 2 intersection points. Overlapping circles are not considered to have any intersections.
The code of the SweepLineStatus
public class BetterSweepLineStatus {
    TreeMap<Double, Circle> status = new TreeMap<Double, Circle>();

    public void add(Circle c) { this.status.put(c.getMostLeftCoord(), c); }

    public void deleteOldest() { this.status.remove(status.firstKey()); }

    public TreeMap<Double, Circle> circles() { return this.status; }

    public Set<Entry<Double, Circle>> keys() { return this.status.entrySet(); }

    public Circle oldestCircle() { return this.status.get(this.status.firstKey()); }
}
I tested my program, and I clearly had O(n^2) complexity.
What am I missing here? Any input you guys might be able to provide is more than welcome.
Thanks in advance!
You cannot find all intersection points of n circles in the plane in O(n log n) time, because every pair of circles can have up to two distinct intersection points; therefore n circles can have up to n² - n distinct intersection points, and hence they cannot be enumerated in O(n log n) time.
One way to obtain the maximum number of n² - n intersection points is to place the centers of n circles of equal radius r at mutually different points of a line segment of length l < 2r.
N circles with the same radius and nearly the same centre will have N(N-1)/2 pairs of intersecting circles, while by using circles large enough that their boundaries are almost straight lines you can draw a grid with N/2 lines intersecting each of N/2 lines, which is again on the order of N^2. I would look at how many entries are typically present in your map when you add a new circle.
You might try using bounding squares for your circles and keeping an index on the pending squares, so that you can find only the squares whose y coordinates intersect your query square (assuming the sweep line is parallel to the y axis). This would mean that, if your data is friendly, you could hold a lot of pending squares and only check a few of them for possible intersections of the circles within the squares. Data unfriendly enough to cause a genuinely quadratic number of intersections is always going to be a problem.
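An illustrative sketch of that idea (the Circle accessors getCenterY and getRadius are made up here; the index maps the low y of each pending bounding square to the circles stored at that key):

import java.util.*;

TreeMap<Double, List<Circle>> byLowY = new TreeMap<>();

List<Circle> candidatesFor(Circle query) {
    double qLow  = query.getCenterY() - query.getRadius();
    double qHigh = query.getCenterY() + query.getRadius();
    List<Circle> result = new ArrayList<>();
    // only circles whose bounding square starts below the query's high y can overlap in y
    for (List<Circle> bucket : byLowY.headMap(qHigh, true).values())
        for (Circle c : bucket)
            if (c.getCenterY() + c.getRadius() >= qLow)   // and ends above the query's low y
                result.add(c);
    return result;
}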
How large are the circles compared to the entire area? If the ratio is small enough I would consider putting them into buckets of some sort. It'll make the complexity a little more complicated than O(n log n) but should be faster.
