I need to calculate linear acceleration from the accelerometer, gyroscope, and magnetometer. I found an application for Android that does exactly what I want to achieve:
https://play.google.com/store/apps/details?id=com.kircherelectronics.fusedlinearacceleration.
https://github.com/KEOpenSource/FusedLinearAcceleration
I'm trying to port it to pure Java. Because some elements of the code rely on virtual sensors (the Gravity Sensor), I would like to achieve the same result by computing the direction of gravity from the three basic sensors. I read that the force of gravity can be estimated using a low-pass filter (as Android < 4.0 did), but this method does not give very accurate results.
From Android 4.0 onwards, the force of gravity on each axis is calculated using sensor fusion. I found the code responsible for these measurements, but it is written in C++:
https://github.com/android/platform_frameworks_base/blob/ics-mr1/services/sensorservice/GravitySensor.cpp
The method used there is called getRotationMatrix. The same method in the SensorManager.java class: https://gitorious.org/android-eeepc/base/source/9cb3e09ec49351401cf19b5ae5092dd9ca90a538:core/java/android/hardware/SensorManager.java#L1034
public static boolean getRotationMatrix(float[] R, float[] I,
float[] gravity, float[] geomagnetic) {
// TODO: move this to native code for efficiency
float Ax = gravity[0];
float Ay = gravity[1];
float Az = gravity[2];
final float Ex = geomagnetic[0];
final float Ey = geomagnetic[1];
final float Ez = geomagnetic[2];
float Hx = Ey*Az - Ez*Ay;
float Hy = Ez*Ax - Ex*Az;
float Hz = Ex*Ay - Ey*Ax;
final float normH = (float)Math.sqrt(Hx*Hx + Hy*Hy + Hz*Hz);
if (normH < 0.1f) {
// device is close to free fall (or in space?), or close to
// magnetic north pole. Typical values are > 100.
return false;
}
final float invH = 1.0f / normH;
Hx *= invH;
Hy *= invH;
Hz *= invH;
final float invA = 1.0f / (float)Math.sqrt(Ax*Ax + Ay*Ay + Az*Az);
Ax *= invA;
Ay *= invA;
Az *= invA;
final float Mx = Ay*Hz - Az*Hy;
final float My = Az*Hx - Ax*Hz;
final float Mz = Ax*Hy - Ay*Hx;
if (R != null) {
if (R.length == 9) {
R[0] = Hx; R[1] = Hy; R[2] = Hz;
R[3] = Mx; R[4] = My; R[5] = Mz;
R[6] = Ax; R[7] = Ay; R[8] = Az;
} else if (R.length == 16) {
R[0] = Hx; R[1] = Hy; R[2] = Hz; R[3] = 0;
R[4] = Mx; R[5] = My; R[6] = Mz; R[7] = 0;
R[8] = Ax; R[9] = Ay; R[10] = Az; R[11] = 0;
R[12] = 0; R[13] = 0; R[14] = 0; R[15] = 1;
}
}
if (I != null) {
// compute the inclination matrix by projecting the geomagnetic
// vector onto the Z (gravity) and X (horizontal component
// of geomagnetic vector) axes.
final float invE = 1.0f / (float)Math.sqrt(Ex*Ex + Ey*Ey + Ez*Ez);
final float c = (Ex*Mx + Ey*My + Ez*Mz) * invE;
final float s = (Ex*Ax + Ey*Ay + Ez*Az) * invE;
if (I.length == 9) {
I[0] = 1; I[1] = 0; I[2] = 0;
I[3] = 0; I[4] = c; I[5] = s;
I[6] = 0; I[7] =-s; I[8] = c;
} else if (I.length == 16) {
I[0] = 1; I[1] = 0; I[2] = 0;
I[4] = 0; I[5] = c; I[6] = s;
I[8] = 0; I[9] =-s; I[10]= c;
I[3] = I[7] = I[11] = I[12] = I[13] = I[14] = 0;
I[15] = 1;
}
}
return true;
}
It takes four arguments:
float[] R, float[] I, float[] gravity, float[] geomagnetic.
One of them is just gravity... The code I'm working on is currently similar to
https://github.com/KEOpenSource/FusedLinearAcceleration/blob/master/FusedLinearAcceleration/src/com/kircherelectronics/fusedlinearacceleration/sensor/LinearAccelerationSensor.java,
with the exception of the methods that refer to SensorManager. These are copied from the Android source:
https://gitorious.org/android-eeepc/base/source/9cb3e09ec49351401cf19b5ae5092dd9ca90a538:core/java/android/hardware/SensorManager.java.
I have not found any examples of how to implement this in Java.
So my question is: how can I implement a method (in Java), based only on the three basic sensors, that returns an array with the gravity direction (x, y, z), similar to the Android one, but without using the Android API?
Gravity is a steady contribution to the accelerometer signals (x, y & z).
So, logically, to isolate the gravity values as a function of time, just low-pass filter the three accelerometer signals, at a cutoff of 2 Hz for example.
A simple FIR would do the job.
On this site I calculated the following coefficients:
[0.000381, 0.001237, 0.002634, 0.004607, 0.007100, 0.009956, 0.012928,
0.015711, 0.017987, 0.019480, 0.020000, 0.019480, 0.017987, 0.015711,
0.012928, 0.009956, 0.007100, 0.004607, 0.002634, 0.001237, 0.000381]
based on these characteristics:
Fa=0Hz, Fb=1Hz, Length=21Pts, Fs=100Hz, Att=60dB.
You will get a signal that is the three values of gravity as a function of time.
You can find here some FIR explanation and a Java implementation.
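As an illustration of that approach, here is a minimal sketch (the class name and structure are mine, not from any library) of a direct-form FIR that applies coefficients such as those above to a stream of samples from one accelerometer axis:

// Minimal sketch: a 21-tap FIR low-pass filter applied to one accelerometer axis;
// the output approximates the gravity component on that axis.
public class GravityFirFilter {
    private final double[] coeffs;   // e.g. the 21 coefficients listed above
    private final double[] history;  // circular buffer of the most recent samples
    private int index = 0;

    public GravityFirFilter(double[] coeffs) {
        this.coeffs = coeffs.clone();
        this.history = new double[coeffs.length];
    }

    /** Feed one raw accelerometer sample; returns the low-pass (gravity) estimate. */
    public double filter(double sample) {
        history[index] = sample;
        double out = 0.0;
        int j = index;
        for (double c : coeffs) {
            out += c * history[j];                     // y[n] = sum coeffs[i] * x[n - i]
            j = (j == 0) ? history.length - 1 : j - 1; // step back one sample in time
        }
        index = (index + 1) % history.length;
        return out;
    }
}

Use one instance per axis; with Fs = 100 Hz and 21 taps the output lags by about 10 samples (~100 ms), which is the usual price of a linear-phase FIR.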
What you want is the rotation matrix (SensorManager.getRotationMatrix). Its last three components (i.e. rotation[6], rotation[7], rotation[8]) are the vector that points straight up, so the direction to the center of the earth is its negative. To subtract gravity from your accelerometer reading, just multiply that vector by g (~9.8 m/s^2, though you might want to know that value more precisely).
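A minimal sketch of that idea (the method name is mine): R is the 9-element rotation matrix from a getRotationMatrix-style call and accel is the raw accelerometer vector.

// R[6..8] is the unit vector pointing straight up in device coordinates, so the
// accelerometer reading is roughly linear acceleration + g * up; subtract g * up.
static float[] linearAcceleration(float[] R, float[] accel) {
    final float g = 9.81f; // local gravity; use a more precise value if you have one
    float[] linear = new float[3];
    for (int i = 0; i < 3; i++) {
        linear[i] = accel[i] - g * R[6 + i];
    }
    return linear;
}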
Related
I currently have a PID algorithm to control my robot's turns in an autonomous state. My robot has encoders on each motor (of which there are four) and also a BNO055 IMU. Each motor is a NeveRest 40 from AndyMark, and unfortunately I am stuck with encoders that do 3 pulses. I would like to improve the accuracy of my turns, either by using a different algorithm or by improving my current one.
My Current Turning Code:
public void turn(int angle, Direction DIRECTION, double timeOut, int sleepTime, double kp, double ki, double kd) {
double targetAngle = imu.adjustAngle(imu.getHeading() + (DIRECTION.value * angle));
double acceptableError = 0.5;
double currentError = 1;
double prevError = 0;
double integral = 0;
double newPower;
double previousTime = 0;
timeoutClock.reset();
while (opModeIsActive() && (imu.adjustAngle(Math.abs(currentError)) > acceptableError)
&& !timeoutClock.elapsedTime(timeOut, MasqClock.Resolution.SECONDS)) {
double tChange = System.nanoTime() - previousTime;
previousTime = System.nanoTime();
tChange = tChange / 1e9;
double imuVAL = imu.getHeading();
currentError = imu.adjustAngle(targetAngle - imuVAL);
integral += currentError * ID;
double errorkp = currentError * kp;
double integralki = integral * ki * tChange;
double dervitive = (currentError - prevError) / tChange;
double dervitivekd = dervitive * kd;
newPower = (errorkp + integralki + dervitivekd);
newPower *= color;
if (Math.abs(newPower) > 1.0) {newPower /= newPower;}
driveTrain.setPower(newPower, -newPower);
prevError = currentError;
DashBoard.getDash().create("TargetAngle", targetAngle);
DashBoard.getDash().create("Heading", imuVAL);
DashBoard.getDash().create("AngleLeftToCover", currentError);
DashBoard.getDash().update();
}
driveTrain.setPower(0,0);
sleep(sleepTime);
}
NOTES:
When driveTrain.setPower(x,y); is called, the left parameter sets the power for the left side and the right parameter sets the power for the right side.
Direction is an enum that stores either -1 or 1 to switch between left and right turns.
DashBoard.getDash().create is solely there to log what is going on.
imu.adjustAngle does the following:
public double adjustAngle(double angle) {
while (angle > 180) angle -= 360;
while (angle <= -180) angle += 360;
return angle;
}
imu.getHeading() is self-explanatory: it gets the yaw of the robot.
My current values for the PID constants (they work pretty well):
KP_TURN = 0.005,
KI_TURN = 0.0002,
KD_TURN = 0,
ID = 1;
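For reference, here is a minimal, self-contained sketch of a single PID update of the kind used above (hypothetical names, none of them from my classes): the integral is accumulated per time step and the output is clamped to the motor range [-1, 1] without destroying its sign.

// Hypothetical stand-alone PID step, for illustration only.
final class SimpleTurnPid {
    private final double kp, ki, kd;
    private double integral = 0.0;
    private double prevError = 0.0;

    SimpleTurnPid(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    /** error: target heading minus current heading (degrees); dt: seconds since last call. */
    double update(double error, double dt) {
        integral += error * dt;                        // accumulate error over time
        double derivative = (error - prevError) / dt;  // rate of change of the error
        prevError = error;
        double power = kp * error + ki * integral + kd * derivative;
        return Math.max(-1.0, Math.min(1.0, power));   // clamp, keeping the sign
    }
}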
I'm going crazy trying to optimize the following function in Java with OpenCV:
static Mat testPossibleCentersFormula(int x, int y, Mat weight, double gx, double gy, Mat outSum){
Mat out = outSum;//new Mat(weight.rows(), weight.cols(), CvType.CV_64F);
float weight_array [] = new float [weight.rows()*weight.cols()];
weight.get(0,0,weight_array);
double out_array [] = new double [weight.rows()*weight.cols()];
out.get(0,0,out_array);
for (int cy = 0; cy < out.rows(); ++cy) {
for (int cx = 0; cx < out.cols(); ++cx) {
if (x == cx && y == cy) {
continue;
}
// create a vector from the possible center to the gradient origin
double dx = x - cx;
double dy = y - cy;
// normalize d
double magnitude = Math.sqrt((dx * dx) + (dy * dy));
dx = dx / magnitude;
dy = dy / magnitude;
double dotProduct = dx*gx + dy*gy;
dotProduct = Math.max(0.0,dotProduct);
// square and multiply by the weight
if (kEnableWeight) {
out_array[cy*out.cols()+cx] = out_array[cy*out.cols()+cx] +dotProduct * dotProduct * (weight_array[cy*out.cols()+cx]/kWeightDivisor);
} else {
out_array[cy*out.cols()+cx] = out_array[cy*out.cols()+cx] +dotProduct * dotProduct;
}
} }
out.put(0, 0, out_array);
return out;
}
The function accesses the image values pixel by pixel for each frame of a video, which makes it impossible to use in real time.
I've already converted the Mat operations into array operations, and that has made a great difference, but it is still very, very slow. Do you see any way to replace the nested for loop?
Thank you very much,
As I have alluded to in my comment above, I think that the allocation of weight_array and out_array is very suspicious: whilst the Javadoc that I can find for Mat is unhelpfully silent on what is put into an array larger than the image depth when you call mat.get(...), it feels like an abuse of the API to assume that it will return the entire image's data.
Allocating such large arrays each time you call the method is unnecessary. You can allocate a much smaller array, and just reuse that on each iteration:
float[] weight_array = new float[weight.depth()];
double[] out_array = new double[out.depth()];
for (int cy = 0; cy < out.rows(); ++cy) {
for (int cx = 0; cx < out.cols(); ++cx) {
// Use weight.get(cy, cx, weight_array)
// instead of weight_array[cy*out.cols()+cx].
// Use out.get(cy, cx, out_array) and out.put(cy, cx, out_array)
// instead of out_array[cy*out.cols()+cx] += ...
}
}
Note that this does still allocate (probably very small) arrays on each iteration. If you needed to, you could allocate the weight_array and out_array outside the method, and pass them in as parameters; but I would try as suggested here first, and optimize further when/if necessary.
I am trying to take the pitch and roll of a device when it is in portrait orientation. With the axes looking like this, that would be about the x and z axes respectively. Right now I'm using the SensorManager API to get the pitch, roll, and yaw of the device sitting flat, as is the default.
When I try to translate the rotational values from the flat device to the vertical orientation, I experience what other SO users have called gimbal lock, which is a problem inherent in the way Euler angles work. The problem is that I've tried implementing a rotation matrix, as other users have done to solve a similar problem, but even so I'm having the same gimbal lock problem. I've included my onSensorChanged method, and hopefully someone out there can help find out what's going wrong.
public void onSensorChanged(SensorEvent se) {
float degPitch = 0;
float degRoll = 0;
float degYaw = 0;
float radPitch = 0;
float radRoll = 0;
float radYaw = 0;
if (se.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
mAccelerometerResult = se.values;
Log.d("onSensorChanged", "Accelerometer: " + mAccelerometerResult.length);
}
if (se.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
mMagneticFieldResult = se.values;
Log.d("onSensorChanged", "Magnetic Field: " + mMagneticFieldResult.length);
}
if (mAccelerometerResult != null && mMagneticFieldResult != null) {
float[] rotation = new float[9];
float[] inclination = new float[9];
boolean rotationMatrixCheck = mSensorManager.getRotationMatrix(rotation, inclination, mAccelerometerResult, mMagneticFieldResult);
if (rotationMatrixCheck) {
float[] orientation = new float[3];
mSensorManager.getOrientation(rotation, orientation);
radYaw = orientation[0]; //Yaw = Z axis
radPitch = orientation[1]; //Pitch = X axis
radRoll = orientation[2]; //Roll = Y axis
degYaw = round((float) Math.toDegrees(radYaw), 2);
degPitch = round((float)Math.toDegrees(radPitch), 2);
degRoll = round((float)Math.toDegrees(radRoll), 2);
if ((counter % 10) == 0) {
//mYawTextView.setText(degYaw + "°");
mPitchTextView.setText(degPitch + "°");
mRollTextView.setText(degRoll + "°");
counter = 0;
} else {
counter++;
}
}
}
}
Further, I'm not even sure that I understand what rotational value I'm looking for, even if I can get good rotation about the portrait axes. If I want the roll of the device in portrait (about the z axis from my original image), would that still be the roll of the device lying flat (about y from the flat-axes image)?
Any insight that can be shared here would be appreciated.
I found the solution to the problem. Getting the rotation matrix as a 3x3 array returns the rotation matrix based on Euler angles, as discussed in the original question. Getting the matrix as a 4x4 array returns a quaternion-based representation of the rotation matrix (according to SO user Stochastically's answer found here).
Further, that rotation matrix's coordinate system had to be remapped in order to be used in portrait mode. The commented changes are shown below.
public void onSensorChanged(SensorEvent se) {
float degPitch = 0;
float degRoll = 0;
float degYaw = 0;
float radPitch = 0;
float radRoll = 0;
float radYaw = 0;
if (se.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
mAccelerometerResult = se.values;
Log.d("onSensorChanged", "Accelerometer: " + mAccelerometerResult.length);
}
if (se.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
mMagneticFieldResult = se.values;
Log.d("onSensorChanged", "Magnetic Field: " + mMagneticFieldResult.length);
}
if (mAccelerometerResult != null && mMagneticFieldResult != null) {
//~~~This is where the 3x3 matrix has changed to a 4x4.
float[] rotation = new float[16];
float[] inclination = new float[16];
boolean rotationMatrixCheck = mSensorManager.getRotationMatrix(rotation, inclination, mAccelerometerResult, mMagneticFieldResult);
if (rotationMatrixCheck) {
float[] orientation = new float[3];
//~~~This is where the rotational matrix is remapped to be used in portrait mode.
float[] remappedRotation = new float[16];
SensorManager.remapCoordinateSystem(rotation, SensorManager.AXIS_X, SensorManager.AXIS_Z, remappedRotation);
mSensorManager.getOrientation(remappedRotation, orientation); // use the remapped matrix
radYaw = orientation[0]; //Yaw = Z axis
radPitch = orientation[1]; //Pitch = X axis
radRoll = orientation[2]; //Roll = Y axis
degYaw = round((float) Math.toDegrees(radYaw), 2);
degPitch = round((float)Math.toDegrees(radPitch), 2);
degRoll = round((float)Math.toDegrees(radRoll), 2);
if ((counter % 10) == 0) {
//mYawTextView.setText(degYaw + "°");
mPitchTextView.setText(degPitch + "°");
mRollTextView.setText(degRoll + "°");
counter = 0;
} else {
counter++;
}
}
}
}
I am working on the problem of dividing an ellipse into equal-sized segments. This question has been asked before, but the answers suggested numerical integration, so that is what I'm attempting. This code short-circuits the sectors, so the integration itself should never cover more than 90 degrees. The integration itself is done by totaling the area of intermediate triangles. Below is the code I have tried, but it is sweeping more than 90 degrees in some cases.
public class EllipseModel {
protected double r_x;
protected double r_y;
private double a,a2;
private double b,b2;
boolean flip;
double area;
double sector_area;
double radstep;
double rot;
int xp,yp;
double deviation;
public EllipseModel(double r_x, double r_y, double deviation)
{
this.r_x = r_x;
this.r_y = r_y;
this.deviation = deviation;
if (r_x < r_y) {
flip = true;
a = r_y;
b = r_x;
xp = 1;
yp = 0;
rot = Math.PI/2d;
} else {
flip = false;
xp = 0;
yp = 1;
a = r_x;
b = r_y;
rot = 0d;
}
a2 = a * a;
b2 = b * b;
area = Math.PI * r_x * r_y;
sector_area = area / 4d;
radstep = (2d * deviation) / a;
}
public double getArea() {
return area;
}
public double[] getSweep(double sweep_area)
{
System.out.println(String.format("getSweep(%f) a = %f b = %f deviation = %f",sweep_area,a,b,deviation));
double[] ret = new double[2];
double[] next = new double[2];
double t_base, t_height, swept,x_mid,y_mid;
double t_area;
sweep_area = sweep_area % area;
if (sweep_area < 0d) {
sweep_area = area + sweep_area;
}
if (sweep_area == 0d) {
ret[0] = r_x;
ret[1] = 0d;
return ret;
}
double sector = Math.floor(sweep_area/sector_area);
double theta = Math.PI * sector/2d;
double theta_last = theta;
System.out.println(String.format("- Theta start = %f",Math.toDegrees(theta)));
ret[xp] = a * Math.cos(theta + rot);
ret[yp] = (1 + (((theta / Math.PI) % 2d) * -2d)) * Math.sqrt((1 - ( (ret[xp] * ret[xp])/a2)) * b2);
next[0] = ret[0];
next[1] = ret[1];
swept = sector * sector_area;
System.out.println(String.format("- Sweeping for %f sector_area=%f",sweep_area-swept,sector_area));
int c = 0;
while(swept < sweep_area) {
c++;
ret[0] = next[0];
ret[1] = next[1];
theta_last = theta;
theta += radstep;
// calculate next point
next[xp] = a * Math.cos(theta + rot);
next[yp] = (1 + (((theta / Math.PI) % 2d) * -2d)) * // selects +/- sqrt
Math.sqrt((1 - ( (ret[xp] * ret[xp])/a2)) * b2);
// calculate midpoint
x_mid = (ret[xp] + next[xp]) / 2d;
y_mid = (ret[yp] + next[yp]) / 2d;
// calculate triangle metrics
t_base = Math.sqrt( ( (ret[0] - next[0]) * (ret[0] - next[0]) ) + ( (ret[1] - next[1]) * (ret[1] - next[1])));
t_height = Math.sqrt((x_mid * x_mid) + (y_mid * y_mid));
// add triangle area to swept
t_area = 0.5d * t_base * t_height;
swept += t_area;
}
System.out.println(String.format("- Theta end = %f (%d)",Math.toDegrees(theta_last),c));
return ret;
}
}
In the output I see the following case where it sweeps over 116 degrees.
getSweep(40840.704497) a = 325.000000 b = 200.000000 deviation = 0.166667
- Theta start = 0.000000
- Sweeping for 40840.704497 sector_area=51050.880621
- Theta end = 116.354506 (1981)
Is there any way to fix the integration formula to create a function that returns the point on an ellipse that has swept a given area? The application that is using this code divides the total area by the number of segments needed, and then uses this code to determine the angle where each segment starts and ends. Unfortunately it doesn't work as intended.
* edit *
I believe the above integration failed because the base and height formulas aren't correct.
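For example, the area of each intermediate triangle (one vertex at the centre, the other two on the ellipse) can be computed exactly with the 2D cross product instead of the base/height estimate; just a sketch, not my code above:

// Exact area of the triangle (0,0)-(x0,y0)-(x1,y1): half the absolute 2D cross product.
// The 0.5 * base * height form silently assumes the chord is perpendicular to the
// line from the centre to its midpoint, which is not generally true on an ellipse.
static double centerTriangleArea(double x0, double y0, double x1, double y1) {
    return 0.5 * Math.abs(x0 * y1 - x1 * y0);
}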
No transformation needed; use the parametric equations for an ellipse ...
x=x0+rx*cos(a)
y=y0+ry*sin(a)
where a = < 0 , 2.0*M_PI >
if you divide the ellipse by lines from the center to the x,y from the equation above
and the angle a is evenly increased
then the segments will have the same area (the area swept between parameters a1 and a2 is rx*ry*(a2-a1)/2, so equal steps in a give equal areas)
btw. if you apply an affine transform you will get the same result (even the same equation)
This code will divide the ellipse into evenly sized chunks:
double a,da,x,y,x0=0,y0=0,rx=50,ry=20; // ellipse x0,y0,rx,ry
int i,N=32; // divided to N = segments
da=2.0*M_PI/double(N);
for (a=0.0,i=0;i<N;i++,a+=da)
{
x=x0+(rx*cos(a));
y=y0+(ry*sin(a));
// draw_line(x0,y0,x,y);
}
This is what it looks like for N=5
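Since the question's code is in Java, here is a minimal Java sketch of the same division (the method name is mine, not from the post):

// Boundary points that divide an ellipse centred at (x0, y0) with radii rx, ry
// into n equal-area segments; evenly stepping the parameter a gives equal areas.
static double[][] ellipseSegmentEdges(double x0, double y0, double rx, double ry, int n) {
    double[][] edges = new double[n][2];
    double da = 2.0 * Math.PI / n;
    for (int i = 0; i < n; i++) {
        double a = i * da;
        edges[i][0] = x0 + rx * Math.cos(a); // x of the i-th segment's start edge
        edges[i][1] = y0 + ry * Math.sin(a); // y of the i-th segment's start edge
    }
    return edges;
}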
[edit1]
I did not understand from your comment what exactly you want to achieve now;
sorry, but my English skills are horrible.
OK, I assume these two possibilities (if you need something different, please specify more closely):
0. But first, some global or member stuff is needed:
double x0,y0,rx,ry; // ellipse parameters
// [Edit2] sorry, forgot to add these constants, but I think they are straightforward
const double pi=M_PI;
const double pi2=2.0*M_PI;
// [/Edit2]
double atanxy(double x,double y) // atan2 returning < 0 , 2.0*M_PI >
{
int sx,sy;
double a;
const double _zero=1.0e-30;
sx=0; if (x<-_zero) sx=-1; if (x>+_zero) sx=+1;
sy=0; if (y<-_zero) sy=-1; if (y>+_zero) sy=+1;
if ((sy==0)&&(sx==0)) return 0;
if ((sx==0)&&(sy> 0)) return 0.5*pi;
if ((sx==0)&&(sy< 0)) return 1.5*pi;
if ((sy==0)&&(sx> 0)) return 0;
if ((sy==0)&&(sx< 0)) return pi;
a=y/x; if (a<0) a=-a;
a=atan(a);
if ((x>0)&&(y>0)) a=a;
if ((x<0)&&(y>0)) a=pi-a;
if ((x<0)&&(y<0)) a=pi+a;
if ((x>0)&&(y<0)) a=pi2-a;
return a;
}
1. Is a point inside a segment?
bool is_pnt_in_segment(double x,double y,int segment,int segments)
{
double a;
a=atanxy(x-x0,y-y0); // get sweep angle
a/=2.0*M_PI; // convert angle to a = <0,1>
if (a>=1.0) a=0.0; // handle extreme case where a was = 2 Pi
a*=segments; // convert to segment index a = <0,segments)
a-=double(segment );
// return floor(a); // this is how to change this function to return points segment id
// of course header should be slightly different: int get_pnt_segment_id(double x,double y,int segments)
if (a< 0.0) return false; // is lower than the segment
if (a>=1.0) return false; // is higher than the segment
return true;
}
2. Get the edge point of a segment area:
void get_edge_pnt(double &x,double &y,int segment,int segments)
{
double a;
a=2.0*M_PI/double(segments);
a*=double(segment); // this is segments start edge point
//a*=double(segment+1); // this is segments end edge point
x=x0+(rx*cos(a));
y=y0+(ry*sin(a));
}
For both:
x,y is the point,
segments is the number of division segments,
segment is the index of the swept area, in < 0 , segments ).
Apply an affine transformation to turn your ellipse into a circle, preferably the unit circle. Then split that into equal-sized segments, before you apply the inverse transform. The transformation scales all areas (as opposed to lengths) by the same factor, so equal areas in the circle translate to equal areas in the ellipse.
I have a problem that I can't seem to get a working algorithm for. I've been trying for days and I get so close, but yet so far.
I want to draw a triangle defined by 3 points (p0, p1, p2). This triangle can be any shape, size, and orientation. The triangle must also be filled inside.
Here are a few things I've tried and why they've failed:
1. Drawing lines along the triangle from side to side
Failed because the triangle would have holes and would not be flat due to the awkwardness of drawing lines across the angled surface with changing locations
2. Iterate over an area and test whether each point falls past the plane parallel to the triangle and past 3 other planes, projected onto the XY, ZY, and XZ planes, that cover the area of the triangle
Failed because for certain triangles (that have very close sides) there would be unpredictable results, e.g. voxels floating around not connected to anything
3. Iterate over an area along the sides of the triangle (line algorithm) and test whether a point goes past a parallel plane
Failed because drawing a line from p0 to p1 is not the same as a line from p1 to p0 and any attempt to rearrange either doesn't help, or causes more problems. Asymmetry is the problem with this one.
This is all with the intent of making polygons and flat surfaces. Number 3 has given me the most success and makes accurate triangles, but when I try to connect these together everything falls apart and I get issues with things not connecting, asymmetry, etc. I believe number 3 will work with some tweaking, but I'm just worn out from trying to make this work for so long and need help.
There are a lot of small details in my algorithms that aren't really relevant, so I left them out. For number 3 it might be a problem with my implementation and not the algorithm itself. If you want code I'll try to clean it up enough to be understandable; it will take me a few minutes though. But I'm looking for algorithms that are known to work. I can't seem to find any voxel shape-making algorithms anywhere; I've been doing everything from scratch.
EDIT:
Here's the third attempt. It's a mess, but I tried to clean it up.
// Point3i is a class I made, however the Vector3fs you'll see are from lwjgl
public void drawTriangle (Point3i r0, Point3i r1, Point3i r2)
{
// Util is a class I made with some useful stuff inside
// Starting values for iteration
int sx = (int) Util.min(r0.x, r1.x, r2.x);
int sy = (int) Util.min(r0.y, r1.y, r2.y);
int sz = (int) Util.min(r0.z, r1.z, r2.z);
// Ending values for iteration
int ex = (int) Util.max(r0.x, r1.x, r2.x);
int ey = (int) Util.max(r0.y, r1.y, r2.y);
int ez = (int) Util.max(r0.z, r1.z, r2.z);
// Side lengths
float l0 = Util.distance(r0.x, r1.x, r0.y, r1.y, r0.z, r1.z);
float l1 = Util.distance(r2.x, r1.x, r2.y, r1.y, r2.z, r1.z);
float l2 = Util.distance(r0.x, r2.x, r0.y, r2.y, r0.z, r2.z);
// Calculate the normal vector
Vector3f nn = new Vector3f(r1.x - r0.x, r1.y - r0.y, r1.z - r0.z);
Vector3f n = new Vector3f(r2.x - r0.x, r2.y - r0.y, r2.z - r0.z);
Vector3f.cross(nn, n, n);
// Determines which direction we increment for
int iz = n.z >= 0 ? 1 : -1;
int iy = n.y >= 0 ? 1 : -1;
int ix = n.x >= 0 ? 1 : -1;
// Reorganize for the direction of iteration
if (iz < 0) {
int tmp = sz;
sz = ez;
ez = tmp;
}
if (iy < 0) {
int tmp = sy;
sy = ey;
ey = tmp;
}
if (ix < 0) {
int tmp = sx;
sx = ex;
ex = tmp;
}
// Since we want to iterate over the end vars too, we shift them
// by their incrementors/decrementors
ex += ix;
ey += iy;
ez += iz;
// Maximum length
float lmax = Util.max(l0, l1, l2);
// This is a class I made which manually iterates over a line, I already
// know that this class is working
GeneratorLine3d g0, g1, g2;
// This is a vector for the longest side
Vector3f v = new Vector3f();
// make the generators
if (lmax == l0) {
v.x = r1.x - r0.x;
v.y = r1.y - r0.y;
v.z = r1.z - r0.z;
g0 = new GeneratorLine3d(r0, r1);
g1 = new GeneratorLine3d(r0, r2);
g2 = new GeneratorLine3d(r2, r1);
}
else if (lmax == l1) {
v.x = r1.x - r2.x;
v.y = r1.y - r2.y;
v.z = r1.z - r2.z;
g0 = new GeneratorLine3d(r2, r1);
g1 = new GeneratorLine3d(r2, r0);
g2 = new GeneratorLine3d(r0, r1);
}
else {
v.x = r2.x - r0.x;
v.y = r2.y - r0.y;
v.z = r2.z - r0.z;
g0 = new GeneratorLine3d(r0, r2);
g1 = new GeneratorLine3d(r0, r1);
g2 = new GeneratorLine3d(r1, r2);
}
// Absolute values for the normal
float anx = Math.abs(n.x);
float any = Math.abs(n.y);
float anz = Math.abs(n.z);
int i, o;
int si, so;
int ii, io;
int ei, eo;
boolean maxx, maxy, maxz,
midy, midz, midx,
minx, miny, minz;
maxx = maxy = maxz =
midy = midz = midx =
minx = miny = minz = false;
// Absolute values for the longest side vector
float rnx = Math.abs(v.x);
float rny = Math.abs(v.y);
float rnz = Math.abs(v.z);
int rmid = Util.max(rnx, rny, rnz);
if (rmid == rnz) midz = true;
else if (rmid == rny) midy = true;
midx = !midz && !midy;
// Determine the inner and outer loop directions
if (midz) {
if (any > anx)
{
maxy = true;
si = sy;
ii = iy;
ei = ey;
}
else {
maxx = true;
si = sx;
ii = ix;
ei = ex;
}
}
else {
if (anz > anx) {
maxz = true;
si = sz;
ii = iz;
ei = ez;
}
else {
maxx = true;
si = sx;
ii = ix;
ei = ex;
}
}
if (!midz && !maxz) {
minz = true;
so = sz;
eo = ez;
}
else if (!midy && !maxy) {
miny = true;
so = sy;
eo = ey;
}
else {
minx = true;
so = sx;
eo = ex;
}
// GeneratorLine3d is iterable
Point3i p1;
for (Point3i p0 : g0) {
// Make sure the two 'mid' coordinates correspond for the area inside the triangle
if (midz)
do p1 = g1.hasNext() ? g1.next() : g2.next();
while (p1.z != p0.z);
else if (midy)
do p1 = g1.hasNext() ? g1.next() : g2.next();
while (p1.y != p0.y);
else
do p1 = g1.hasNext() ? g1.next() : g2.next();
while (p1.x != p0.x);
eo = (minx ? p0.x : miny ? p0.y : p0.z);
so = (minx ? p1.x : miny ? p1.y : p1.z);
io = eo - so >= 0 ? 1 : -1;
for (o = so; o != eo; o += io) {
for (i = si; i != ei; i += ii) {
int x = maxx ? i : midx ? p0.x : o;
int y = maxy ? i : midy ? p0.y : o;
int z = maxz ? i : midz ? p0.z : o;
// isPassing tests to see if a point goes past a plane
// I know it's working, so no code
// voxels is a member that is an arraylist of Point3i
if (isPassing(x, y, z, r0, n.x, n.y, n.z)) {
voxels.add(new Point3i(x, y, z));
break;
}
}
}
}
}
You could use something like Bresenham's line algorithm, but extended into three dimensions. The two main ideas we want to take from it are:
Rotate the initial line so its slope isn't too steep.
For any given x value, find the integer value that is closest to the ideal y value.
Just as Bresenham's algorithm prevents gaps by performing an initial rotation, we'll avoid holes by performing two initial rotations.
Get the normal vector and point that represent the plane your triangle lies on. Hint: use the cross product of (line from p0 to p1) and (line from p0 to p2) for the vector, and use any of your corner points for the point.
You want the plane to be sufficiently not-steep, to avoid holes. You must satisfy these conditions:
-1 <= norm.x / norm.y <= 1
-1 <= norm.z / norm.y <= 1
Rotate your normal vector and initial points 90 degrees about the x axis and 90 degrees about the z axis until these conditions are satisfied. I'm not sure how to do this in the fewest number of rotations, but I'm fairly sure you can satisfy these conditions for any plane.
Create a function f(x,z) which represents the plane your rotated triangle now lies on. It should return the Y value of any pair of X and Z values.
Project your triangle onto the XZ plane (i.e., set all the y values to 0), and use your favorite 2d triangle drawing algorithm to get a collection of x-and-z coordinates.
For each pixel value from step 4, pass the x and z values into your function f(x,z) from step 3. Round the result to the nearest integer, and store the x, y, and z values as a voxel somewhere.
If you performed any rotations in step 2, perform the opposite of those rotations in reverse order on your voxel collection.
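Here is a rough Java sketch of steps 3 and 5 (the names are mine; it assumes the rotations from step 2 already made the y component of the normal dominant and non-zero):

// Plane through p0 with normal n: nx*(x-p0x) + ny*(y-p0y) + nz*(z-p0z) = 0, solved for y.
static double planeY(double x, double z,
                     double p0x, double p0y, double p0z,
                     double nx, double ny, double nz) {
    return p0y - (nx * (x - p0x) + nz * (z - p0z)) / ny;
}

// For each (x, z) produced by the 2D fill of the projected triangle (step 4),
// the voxel is (x, Math.round(planeY(x, z, ...)), z); then undo the rotations (step 6).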
Start with a function that checks for triangle/voxel intersection. Now you can scan a volume and find the voxels that intersect the triangle - these are the ones you're interested in. This is a lousy algorithm but it is also a regression test for anything else you try. This test is easy to implement using SAT (the separating axis theorem), considering the triangle a degenerate volume (1 face, 3 edges) and considering the voxel's symmetry (only 3 face normals).
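For illustration, here is a self-contained Java sketch of such a brute-force SAT test (my own helper, not production code): it projects the triangle's three vertices and the voxel's eight corners onto the 13 candidate axes (the 3 box face normals, the triangle normal, and the 9 edge/axis cross products) and looks for a separating axis.

final class TriBoxSat {

    /** tri: three vertices as {x, y, z}; boxMin/boxMax: opposite corners of the voxel. */
    static boolean overlaps(double[][] tri, double[] boxMin, double[] boxMax) {
        double[] e0 = sub(tri[1], tri[0]);
        double[] e1 = sub(tri[2], tri[1]);
        double[] e2 = sub(tri[0], tri[2]);
        double[][] boxAxes = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};

        // Collect the 13 candidate separating axes.
        double[][] axes = new double[13][];
        axes[0] = boxAxes[0];
        axes[1] = boxAxes[1];
        axes[2] = boxAxes[2];
        axes[3] = cross(e0, e1); // triangle face normal
        int k = 4;
        for (double[] e : new double[][]{e0, e1, e2}) {
            for (double[] a : boxAxes) {
                axes[k++] = cross(e, a);
            }
        }

        // The 8 corners of the voxel.
        double[][] corners = new double[8][3];
        for (int i = 0; i < 8; i++) {
            corners[i][0] = (i & 1) == 0 ? boxMin[0] : boxMax[0];
            corners[i][1] = (i & 2) == 0 ? boxMin[1] : boxMax[1];
            corners[i][2] = (i & 4) == 0 ? boxMin[2] : boxMax[2];
        }

        // If the projections onto any axis do not overlap, the shapes are disjoint.
        for (double[] axis : axes) {
            if (dot(axis, axis) < 1e-12) continue; // degenerate (parallel) axis, skip
            double tMin = Double.POSITIVE_INFINITY, tMax = Double.NEGATIVE_INFINITY;
            for (double[] v : tri) {
                double p = dot(axis, v);
                tMin = Math.min(tMin, p);
                tMax = Math.max(tMax, p);
            }
            double bMin = Double.POSITIVE_INFINITY, bMax = Double.NEGATIVE_INFINITY;
            for (double[] v : corners) {
                double p = dot(axis, v);
                bMin = Math.min(bMin, p);
                bMax = Math.max(bMax, p);
            }
            if (tMax < bMin || bMax < tMin) return false; // separating axis found
        }
        return true; // no separating axis: triangle and voxel intersect
    }

    private static double[] sub(double[] a, double[] b) {
        return new double[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }

    private static double[] cross(double[] a, double[] b) {
        return new double[]{a[1] * b[2] - a[2] * b[1],
                            a[2] * b[0] - a[0] * b[2],
                            a[0] * b[1] - a[1] * b[0]};
    }

    private static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
}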
I use octrees, so my preferred method is to test a triangle against a large voxel and figure out which of the 8 child octants it intersects. Then use recursion on the intersected children until the desired level of subdivision is attained. Hint: at most 6 of the children can be intersected by the triangle, and often fewer than that. This is tricky but will produce the same results as the first method, only much more quickly.
Rasterization in 3D is probably fastest, but IMHO it is even harder to guarantee no holes in all cases. Again, use the first method for comparison.