I am generating 2D arcs using the following code.
final Arc2D.Double arcPath = new Arc2D.Double();
arcPath.setArcByCenter(centerPoint.getX(), centerPoint.getY(), radius, fDXFArc.getStartAngle(), fDXFArc.getTotalAngle(), Arc2D.OPEN);
The arcs are rendered perfectly on my Canvas, but I do not know whether they are clockwise or counterclockwise. Can someone share an algorithm to detect an arc's orientation?
I see two hints that it is always counterclockwise (80% sure):
First, java.awt.geom.Arc2D itself says in its class description:
* The angles are specified relative to the non-square
* framing rectangle such that 45 degrees always falls on the line from
* the center of the ellipse to the upper right corner of the framing
* rectangle.
This can only be true if 0 degrees is at 12 o'clock and angles are measured clockwise, or 0 degrees is at 3 o'clock and angles are measured counterclockwise.
Second, public void setAngles() in the same class measures angles counterclockwise:
* The arc will always be non-empty and extend counterclockwise
* from the first point around to the second point.
Following that, it would make sense for all methods of that class to follow the same pattern.
If you need to be sure, ask the author of that class:
* @author Jim Graham
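If you want to verify the convention empirically, a quick probe (my addition, not from the docs) is to build a small arc and inspect the start and end points that Arc2D exposes:

import java.awt.geom.Arc2D;
import java.awt.geom.Point2D;

public class ArcOrientationProbe {
    public static void main(String[] args) {
        // A 90-degree arc starting at angle 0, centered at the origin, radius 1.
        Arc2D.Double arc = new Arc2D.Double();
        arc.setArcByCenter(0, 0, 1, 0, 90, Arc2D.OPEN);
        // Expect start near (1, 0) and end near (0, -1): in Java2D's y-down
        // coordinates the sweep heads toward the upper right corner, i.e.
        // counterclockwise as seen on screen.
        Point2D start = arc.getStartPoint();
        Point2D end = arc.getEndPoint();
        System.out.println("start = " + start + ", end = " + end);
    }
}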
I actually managed to determine my arcs' direction. I split every arc larger than 180 degrees into two smaller arcs, and I use the following code:
Point startPoint = arc.getBorderPoint(EBorderPoint.StartPoint);
Point endPoint = arc.getBorderPoint(EBorderPoint.EndPoint);
Point centerPoint = arc.getBorderPoint(EBorderPoint.CenterPoint);
// 2D cross product of (start -> end) and (start -> center):
// its sign tells on which side of the chord the center lies.
final double result = (endPoint.getX() - startPoint.getX()) * (centerPoint.getY() - startPoint.getY())
        - (endPoint.getY() - startPoint.getY()) * (centerPoint.getX() - startPoint.getX());
boolean isClockWise = result <= 0;
if (result == 0)
{
    // result is 0 only when start, end and center are collinear.
    // Since I split the large arcs into two smaller arcs,
    // the code will never reach this branch.
}
I wrote this code to create a plane in front of our character, which we then use to check the positions of enemies and know whether they are inside the plane or not.
The code is in Java and runs on our server; we use it in our game built with Unity.
In the following code, AreaWidth is the width of the plane we want to check and AreaLength is its length.
message holds data about our player, such as position and quaternion.
bwNode holds data about the current target that we are checking.
float AreaWidth = message.getWidth();
float AreaLength = message.getLength();
double Fi, cs, sn;
Fi = message.quaternion.toEulerAngles().getY();
cs = Math.cos(Fi);
sn = Math.sin(Fi);
int ptx, pty;
ptx = (int)(bwNode.getLoc().getX() - message.loc.getX());
pty = (int)(bwNode.getLoc().getZ() - message.loc.getZ());
double perplen, alonglen;
perplen = Math.abs(ptx * sn - pty * cs);
alonglen = ptx * cs + pty * sn;
if (perplen <= AreaWidth / 2)
{
if (alonglen >= 0 && alonglen <= AreaLength)
{
//the target is inside the area
}
}
So when debugging or actually using this code in game, it works with one issue only: the plane always faces either the right or left side of our player instead of always facing the player's forward direction.
Here is an image representing it for 0, 90, 180 and 270 degrees of our player's Y rotation. (In this image the white cube is the player, the triangle shows where the player is facing, the sphere is the enemy, the red transparent box shows where our plane should be, and the colored lines form the plane that is actually created.)
I need help figuring out what is causing it in my code.
Well, it is not a question about Unity as far as I understand, since it relates to your backend only.
Anyway, you are really mixing up target position and player position; note that in your screenshots the plane always faces the target, no matter where the player looks.
The way I would do it is to express the target in the coordinate system of the player, something like this:
float AreaWidth = message.getWidth();
float AreaLength = message.getLength();
double Fi = message.quaternion.toEulerAngles().getY();
double cs = Math.cos(Fi);
double sn = Math.sin(Fi);
// First, move the target from the world origin to the player's origin
float tx = (bwNode.getLoc().getX() - message.loc.getX());
float tz = (bwNode.getLoc().getZ() - message.loc.getZ());
// Now rotate the target around its new origin to apply the player's rotation
float ptx = cs * tx - sn * tz;
float ptz = sn * tx + cs * tz;
// Now ptx and ptz are in the player's coordinates; simply test them against the area
if (-AreaWidth / 2 <= ptx && ptx <= AreaWidth / 2 && 0 <= ptz && ptz <= AreaLength) {
    // It is inside
} else {
    // It is outside
}
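Putting it together, here is a minimal compilable sketch of the same test (my wording, not the original server code; the yaw sign convention depends on how the client encodes the quaternion, so Fi may need to be negated):

// Is the target inside a rectangle of areaWidth x areaLength anchored at
// the player and extending along the player's facing direction?
static boolean isTargetInArea(double playerX, double playerZ, double yaw,
                              double targetX, double targetZ,
                              double areaWidth, double areaLength) {
    double cs = Math.cos(yaw);
    double sn = Math.sin(yaw);
    // Translate the target into the player's origin...
    double tx = targetX - playerX;
    double tz = targetZ - playerZ;
    // ...then rotate so the player's facing direction becomes +z.
    double ptx = cs * tx - sn * tz;
    double ptz = sn * tx + cs * tz;
    return Math.abs(ptx) <= areaWidth / 2 && 0 <= ptz && ptz <= areaLength;
}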
P.S.: It is probably much easier (and definitely more efficient) to pass the player's full transform matrix to the server instead of handling the translation and rotation components separately.
In my 3D world implementation I use Direction-Vectors (unit vector) to decide the orientation of my 3D-objects.
Each 3D-object has its own Direction-Vector which by default has the orientation V3(1, 0, 0) with Origin at V3(0,0,0).
This is how I apply the directional Rotation-Matrix "D" (the matrix "R" is used to rotate 3D-objects around their Direction-Vector as an axis; that part seems to work fine):
Model3D model = actor.model;
// These matrices do not depend on the edge, so compute them once.
M3 D = directionalRotationMatrix(actor);
M3 R = rotationMatrix(actor);
// Loops through all the edges in the model
for (int i = 0; i < model.edges.length; i++) {
// Draws a line based on each edge in the model.
// Each line consists of two points a and b.
// The matrix R rotates the points around a given axis.
// The D matrix rotates the points towards a given axis - not around it.
S.drawLine(g,
D.mul(R.mul(model.points[model.edges[i].a])).scale(actor.scale),
D.mul(R.mul(model.points[model.edges[i].b])).scale(actor.scale)
);
}
This is how I calculate my current directional Rotation-Matrix "D":
public M3 directionalRotationMatrix(c_Actor3D actor) {
double x = Math.atan2(actor.direction.z, actor.direction.y);
double y = Math.atan2(actor.direction.x, actor.direction.z);
double z = Math.atan2(actor.direction.y, actor.direction.x);
double sin_x = Math.sin(x), sin_y = Math.sin(y), sin_z = Math.sin(z);
double cos_x = Math.cos(x), cos_y = Math.cos(y), cos_z = Math.cos(z);
return new M3(
        cos_x * cos_y, (cos_x * sin_y * sin_z) - (sin_x * cos_z), (cos_x * sin_y * cos_z) + (sin_x * sin_z),
        sin_x * cos_y, (sin_x * sin_y * sin_z) + (cos_x * cos_z), (sin_x * sin_y * cos_z) - (cos_x * sin_z),
        -sin_y, cos_y * sin_z, cos_y * cos_z);
}
My problem is creating the correct directional Rotation-Matrix, one that rotates the 3D-objects toward their respective Direction-Vectors.
I'm not sure at all what I'm doing wrong... My idea is to first rotate the cube towards a direction, then rotate the cube around the axis of that direction. After all that comes the position transformation, etc.
Thank you for your help guys!
Sounds like you are trying to move a 3D object in the direction of its forward-facing vector. To do this you will need the position of the object (x, y, z) and three vectors (forward, up, and right). You can rotate the three vectors using pitch-, yaw- and roll-based vector math (see the link below). For forward movement you then add the object's position to the speed multiplied by the forward vector, i.e.: position += speed * forward
Use the complete example code posted at http://cs.lmu.edu/~ray/notes/flightsimulator/ to figure out how to implement your own version.
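As a concrete illustration of the three-vector idea (a sketch of mine, not code from the linked notes), you can also build the orientation matrix directly from the Direction-Vector with cross products, which avoids the atan2-based Euler extraction entirely; plain double[] vectors keep the snippet self-contained:

static double[] cross(double[] a, double[] b) {
    return new double[] { a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0] };
}

static double[] normalize(double[] v) {
    double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new double[] { v[0] / len, v[1] / len, v[2] / len };
}

// Returns a row-major 3x3 rotation matrix whose columns are right, up and
// forward; worldUp must not be parallel to the direction vector.
static double[][] basisFromDirection(double[] direction, double[] worldUp) {
    double[] f = normalize(direction);
    double[] r = normalize(cross(worldUp, f)); // right = worldUp x forward
    double[] u = cross(f, r);                  // recomputed, orthonormal up
    return new double[][] {
        { r[0], u[0], f[0] },
        { r[1], u[1], f[1] },
        { r[2], u[2], f[2] }
    };
}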
I am trying to set up a layer using WorldWind Java and I want to render icons on the map at their specific geo locations. I have that working, but I want to be able to zoom to where all the icons are. Is there an easy way to do that? I'm not really sure where to start. Are there existing methods for zooming in on a group of points?
First you need to calculate the Sector containing all of your points, e.g.
Sector boundingSector = Sector.boundingSector(points);
//public static Sector boundingSector(Iterable<? extends LatLon> itrbl)
Now here's some code taken from the ScankortDenmark example to calculate the zoom you need to fit the whole sector on screen:
// From ScankortDenmark example
public static double computeZoomForExtent(Sector sector)
{
Angle delta = sector.getDeltaLat();
if (sector.getDeltaLon().compareTo(delta) > 0)
delta = sector.getDeltaLon();
double arcLength = delta.radians * Earth.WGS84_EQUATORIAL_RADIUS;
// AVKey.FOV is stored in degrees; convert before calling Math.tan
double fieldOfView = Configuration.getDoubleValue(AVKey.FOV, 45.0);
return arcLength / (2 * Math.tan(Math.toRadians(fieldOfView) / 2.0));
}
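A hypothetical call site (the animator and accessor names are from the WorldWind Java API as I recall them, so verify against your version):

Sector boundingSector = Sector.boundingSector(points);
double zoom = computeZoomForExtent(boundingSector);
BasicOrbitView view = (BasicOrbitView) wwd.getView();
// Fly to the centroid of the bounding sector at the computed zoom.
view.addPanToAnimator(new Position(boundingSector.getCentroid(), 0),
        view.getHeading(), view.getPitch(), zoom);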
182Much's answer does work under some conditions. However, a better solution must take into account that the horizontal FOV (field of view) is not always fixed at 45.0 degrees. It also needs to take the vertical FOV into account, and even how the positions end up clustering: do the positions spread out more east-west or north-south? Is the user's view of the globe (the WorldWindow) actually narrower than it is tall? All of these factors come into play when calculating the zoom level needed to view all positions. I created the static method below to account for all of the factors listed above. As a side note, you can get slightly better precision by calculating the actual mean radius of the Earth for where your positions tend to cluster instead of taking Earth.WGS84_EQUATORIAL_RADIUS, but this is almost negligible, so I leave that part out here.
/**
 * Calculates the altitude in meters needed to view all of the given points.
 * This method is safe for any window sizing configuration. If the
 * WorldWindow arg is null, a static max altitude value of 10,667,999
 * meters is returned. If the WorldWindow is good but the list of positions
 * is null or empty, the current zoom level of the WorldWindow is
 * returned. If the positions cannot all be seen on the globe because some
 * are on the other side of the globe, a static max altitude value of
 * 10,667,999 meters is returned.
 *
 * @param positions a list of the positions wanted in view
 * @param wwd the WorldWindow whose view is used
 * @return the altitude in meters needed to view all of the given points.
 */
public static double getZoomAltitude(List<Position> positions, WorldWindow wwd) {
double zoom = 10667999;
if (wwd != null) {
// Gets the current zoom as a fail safe to return
BasicOrbitView orbitView = (BasicOrbitView) wwd.getView();
zoom = orbitView.getZoom();
// zoom is in meters and limits the max zoom out to 10,667,999 meters
int MAX_ZOOM = 10667999;
if (positions != null && !positions.isEmpty()) {
Sector sector = Sector.boundingSector(positions);
if (sector != null) {
// This calculation takes into account the window sizing configuration of the map
// in order to accurately display the list of positions.
double meanRadius = Earth.WGS84_EQUATORIAL_RADIUS;
// Next we must calculate zoom levels for both the delta-latitude and the
// delta-longitude extents of the sector. A group of positions that spreads
// out more longitudinally (wider than tall) is viewed against the horizontal
// FOV, which defaults to 45.0 degrees but can be changed, so it must be read
// dynamically. A group that spreads out more latitudinally (taller than wide)
// is viewed against the vertical FOV, which changes with the user's sizing of
// the map (a skinny window affects it), so it has to be computed from the
// viewport. We take both possibilities into account and choose the larger
// zoom level of the two.
int deltaLon = new BigDecimal(sector.getDeltaLon().radians * meanRadius).intValue();
int deltaLat = new BigDecimal(sector.getDeltaLat().radians * meanRadius).intValue();
System.out.println("deltaLonAL Wider: " + deltaLon + "\tdeltaLatAL Taller: " + deltaLat);
double horizontalFOV = orbitView.getFieldOfView().getDegrees();
double verticalFOV = ViewUtil.computeVerticalFieldOfView(orbitView.getFieldOfView(),
orbitView.getViewport()).getDegrees();
double lonZoomLevel = new BigDecimal((deltaLon / 2.0)
        / Math.tan(Math.toRadians(horizontalFOV) / 2.0)).intValue();
double latZoomLevel = new BigDecimal((deltaLat / 2.0)
        / Math.tan(Math.toRadians(verticalFOV) / 2.0)).intValue();
System.out
.println("LonZoomLevel Wider: " + lonZoomLevel + "\tLatZoomLevel Taller: " + latZoomLevel);
double zoomLevel = Math.max(lonZoomLevel, latZoomLevel);
System.out.println("zoomLevel meters: " + zoomLevel + "\tfeet: "
+ new BigDecimal(zoomLevel * 3.2808));
// zoom is the altitude measured in meters to view a given area calculated to fit the viewing
// window edge to edge. A buffer is needed around the area for visual appeal. The bufferedZoom
// is a calculated linear equation (y = 1.0338x + 96177, where R² = 1). It gives the same
// buffer boundary around a group of positions depending on the calculated zoom altitude.
double bufferedZoom = 1.0338 * zoomLevel + 96177;
zoom = new BigDecimal(bufferedZoom).intValue();
if (zoom > MAX_ZOOM) {
zoom = MAX_ZOOM;
System.out.println("MAX_ZOOM applied");
}
}
} else {
System.out.println("getZoomAltitude method cannot calculate the zoom because the points passed in was null and the current zoom was returned.");
}
}
return zoom;
}
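A hypothetical call site (again, API names as I recall them from WorldWind Java):

double altitude = getZoomAltitude(positions, wwd);
BasicOrbitView view = (BasicOrbitView) wwd.getView();
// Center on the group of positions and jump straight to the computed altitude.
view.setCenterPosition(new Position(Sector.boundingSector(positions).getCentroid(), 0));
view.setZoom(altitude);
wwd.redraw();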
I wish to determine the 2D screen coordinates (x,y) of points in 3D space (x,y,z).
The points I wish to project are real-world points represented by GPS coordinates and elevation above sea level.
For example:
Point (Lat:49.291882, Long:-123.131676, Height: 14m)
The camera position and height can also be determined as an x,y,z point. I also have the heading of the camera (compass degrees), its degree of tilt (above/below the horizon) and the roll (around the z axis).
I have no experience of 3D programming; I have read around the subject of perspective projection and learnt that it requires knowledge of matrices, transformations, etc., all of which completely confuse me at present.
I have been told that OpenGL may be of use to construct a 3D model of the real-world points, set up the camera orientation and retrieve the 2D coordinates of the 3D points.
However, I am not sure whether using OpenGL is the best solution to this problem, and even if it is, I have no idea how to create models, set up cameras, etc.
Could someone suggest the best method to solve my problem? If OpenGL is a feasible solution, I'd have to use OpenGL ES, if that makes any difference. Oh, and whatever solution I choose must execute quickly.
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z - Zc = F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F / (Z - Zc))) + Xc
Y' = ((Y - Yc) * (F / (Z - Zc))) + Yc
If your camera is at the origin, then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
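In code, that pinhole projection is a one-liner per axis. A minimal sketch (camera assumed to look straight down the +Z axis; a full solution needs the rotation steps discussed in the other answers):

static double[] projectToScreen(double x, double y, double z,
                                double xc, double yc, double zc, double f) {
    // Similar triangles: scale by focal distance over depth from the camera.
    double scale = f / (z - zc);
    return new double[] { (x - xc) * scale + xc,
                          (y - yc) * scale + yc };
}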
You do indeed need a perspective projection, and matrix operations greatly simplify doing so. I assume you are already aware that your spherical coordinates must be transformed to Cartesian coordinates for these calculations.
Using OpenGL would likely save you a lot of work over rolling your own software rasterizer, so I would advise trying it first. You can prototype your system on a PC, since OpenGL ES is not too different, as long as you keep it simple.
If you just need to compute the coordinates of some points, you only need some algebra skills, not 3D programming with OpenGL.
Moreover, OpenGL does not deal with geographic coordinates.
First get some documentation about WGS84 and geodesic coordinates: you first have to convert your GPS data into a Cartesian frame (for instance the Earth-centric Cartesian frame in which the WGS84 ellipsoid is defined).
Then the computations with matrices can take place.
The chain of transformations is roughly:
WGS84
Earth-centric coordinates
some local frame
camera frame
2D projection
For the first conversion, see this.
The last step involves a projection matrix; the others are only coordinate rotations and translations.
The "some local frame" is the local Cartesian frame with origin at your camera location, tangent to the ellipsoid.
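For that first conversion, here is a sketch of the standard WGS84 geodetic-to-ECEF formula (constants from the WGS84 definition; latitude and longitude in radians, height in meters above the ellipsoid):

static double[] geodeticToEcef(double lat, double lon, double h) {
    final double a = 6378137.0;          // WGS84 semi-major axis (meters)
    final double e2 = 6.69437999014e-3;  // first eccentricity squared
    double sinLat = Math.sin(lat), cosLat = Math.cos(lat);
    // Prime vertical radius of curvature at this latitude.
    double n = a / Math.sqrt(1 - e2 * sinLat * sinLat);
    double x = (n + h) * cosLat * Math.cos(lon);
    double y = (n + h) * cosLat * Math.sin(lon);
    double z = (n * (1 - e2) + h) * sinLat;
    return new double[] { x, y, z };
}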
I'd recommend "Mathematics for 3D Game Programming and Computer Graphics" by Eric Lengyel. It covers matrices, transformations, the view frustum, perspective projection and more.
There is also a good chapter in The OpenGL Programming Guide (red book) on viewing transformations and setting up a camera (including how to use gluLookAt).
If you aren't interested in displaying the 3D scene and are limited to OpenGL ES, then it may be better to just write your own code to do the mapping from 3D to 2D window coords. As a starting point, you could download Mesa 3D, an open-source implementation of OpenGL, to see how it implements gluPerspective (to set a projection matrix), gluLookAt (to set a camera transformation) and gluProject (to project a 3D point to 2D window coords).
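For orientation, this is roughly the arithmetic gluProject performs, condensed into a sketch (my own condensation, assuming row-major 4x4 matrices and a viewport of {x, y, width, height}):

// Applies a row-major 4x4 matrix to a homogeneous point.
static double[] mul4(double[][] m, double[] v) {
    double[] r = new double[4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r[i] += m[i][j] * v[j];
    return r;
}

// Object coords -> eye -> clip -> NDC -> window coords (y up, as in OpenGL).
static double[] project(double[] obj, double[][] modelview,
                        double[][] projection, int[] viewport) {
    double[] eye = mul4(modelview, new double[] { obj[0], obj[1], obj[2], 1 });
    double[] clip = mul4(projection, eye);
    double ndcX = clip[0] / clip[3], ndcY = clip[1] / clip[3]; // perspective divide
    double winX = viewport[0] + (ndcX + 1) * viewport[2] / 2.0;
    double winY = viewport[1] + (ndcY + 1) * viewport[3] / 2.0;
    return new double[] { winX, winY };
}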
return [((fol/v[2])*v[0]+x),((fol/v[2])*v[1]+y)];
A point at [0,0,1] will be x=0 and y=0 unless you add the screen-center xy (it's not the camera xy). fol is the focal length, derived from the FOV angle and the screen width: how high the triangle is (tangent). This method will not match the three.js perspective matrix, which is why I was looking for that.
I should not have been looking for it. I matched the xy in OpenGL, perfectly, like super glue! But I cannot get it to work right in Java. That perfect match follows.
var pmat = [ 0, 0, 0, 0,
             0, 0, 0, 0,
             0, 0, (farclip + nearclip) / (nearclip - farclip), -1,
             0, 0, 2 * farclip * nearclip / (nearclip - farclip), 0 ];
void setpmat() {
double fl; // = tan(dtor(90-fovx/aspect/2)); /// UNIT focal length
fl = 1/tan(dtor(fov/Aspect/2)); /// same number
pmat[0] = fl/Aspect;
pmat[5] = fl;
}
void fovmat(double v[],double p[]) {
int cx = (int)(_Width/2),cy = (int)(_Height/2);
double pnt2[4], pnt[4] = { 0,0,0,1 } ;
COPYVECTOR(pnt,p);NORMALIZE(pnt);
popmatrix4(pnt2,pmat,pnt);
COPYVECTOR(v,pnt2);
v[0] *= -cx; v[1] *= -cy;
v[0] += cx; v[1] += cy;
}
// world-to-screen wrapper
void w2sm(int xy[], double p[]) {
double v[3]; fovmat(v,p);
xy[0] = (int)v[0];
xy[1] = (int)v[1];
}
I have one more way to match the three.js xy until I get the matrix working, with just one condition: it must run at an Aspect of 2.
function w2s(fol,v,x,y) {
var a = width / height;
var b = height/width ;
/// b = .5 // a = 2
var f = 1/Math.tan(dtor(_fov/a)) * x * b;
return [intr((f/v[2])*v[0]+x),intr((f/v[2])*v[1]+y)];
}
Use it with the inverted camera matrix; you will need invert_matrix().
v = orbital(i);
v = subv(v,campos);
v3 = popmatrix(wmatrix,v); //inverted mat
if (v3[2] > 0) {
    xy = w2s(flen, v3, cx, cy);
}
Finally, here it is (everyone ought to know by now): the no-matrix match, for any aspect.
function angle2fol(deg,centerx) {
var b = width / height;
var a = dtor(90 - (clamp(deg,0.0001,174.0) / 2));
return asa_sin(PI_5,centerx,a) / b;
}
function asa_sin(a,s,b) {
return Math.sin(b) * (s / Math.sin(PI-(a+b)));
} // ASA solve opposing side of angle2 (b)
function w2s(fol,v,x,y) {
return [intr((fol/v[2])*v[0]+x),intr((fol/v[2])*v[1]+y)];
}
Updated the image for the proof. Input _fov gets you 1.5 times that, "approximately." To see the FOV readout correctly, redo the triangle with the new focal length.
function afov(deg,centerx) {
var f = angle2fol(deg,centerx);
return rtod(2 * sss_cos(f,centerx,sas_cos(f,PI_5,centerx)));
}
function sas_cos(s,a,ss) {
return Math.sqrt((Math.pow(s,2)+Math.pow(ss,2))-(2*s*ss*Math.cos(a)));
} // Side Angle Side - solve length of missing side
function sss_cos(a,b,c) {
    return Math.acos((Math.pow(a,2)+Math.pow(c,2)-Math.pow(b,2))/(2*a*c));
} // SSS solve angle opposite side2 (b)
The star library confirmed the perspective, and then it was possible to measure the VIEW! http://innerbeing.epizy.com/cwebgl/perspective.jpg
I can explain the 90-degree correction to the Moon's north pole in one word: precession. So what is the current up vector? pnt? radec?
function ininorths() {
if (0) {
var c = ctime;
var v = LunarPos(jdm(c));
c += secday();
var vv = LunarPos(jdm(c));
vv = crossprod(v,vv);
v = eyeradec(vv);
echo(v,vv);
v = [266.86-90,65.64]; //old
}
var v = [282.6425,65.8873]; /// new.
// ...
}
I have yet to explain the TWO sets of vectors: the Three.milkyway.matrix and the 3D-to-2D drawing. They ARE:
function drawmilkyway() {
var v2 = radec2pos(dtor(192.8595), dtor(27.1283),75000000);
// gcenter 266.4168 -29.0078
var v3 = radec2pos(dtor(266.4168), dtor(-29.0078),75000000);
// ...
}
function initmwmat() {
var r,u,e;
e = radec2pos(dtor(156.35), dtor(12.7),1);
u = radec2pos(dtor(60.1533), dtor(25.5935),1);
r = normaliz(crossprod(u,e));
u = normaliz(crossprod(e,r));
e = normaliz(crossprod(r,u));
var m = MilkyWayMatrix;
m[0]=r[0];m[1]=r[1];m[2]=r[2];m[3]=0.0;
m[4]=u[0];m[5]=u[1];m[6]=u[2];m[7]=0.0;
m[8]=e[0];m[9]=e[1];m[10]=e[2];m[11]=0.0;
m[12]=0.0;m[13]=0.0;m[14]=0.0;m[15]=1.0;
}
/// the draw vectors and the matrix were the same in C!
void initmwmat(double m[16]) {
double r[3], u[3], e[3];
radec2pos(e,dtor(192.8595), dtor(27.1283),1); //up
radec2pos(u,dtor(266.4051), dtor(-28.9362),-1); //eye
}