BlackBerry drawTexturedPath Rotate Move Anchor to Center of Image - java

I know how to rotate a BlackBerry Bitmap image by an arbitrary angle with drawTexturedPath, but the rotation anchor is at the top-left of the image. How do I move the anchor to the center of the image?
This code uses Graphics.drawTexturedPath to rotate around top-left corner:
int[] x = new int[] {0, width, width, 0};
int[] y = new int[] {0, 0, height, height};
int angle32 = Fixed32.toFP(angleDegrees);
int dux = Fixed32.cosd(angle32);
int dvx = -Fixed32.sind(angle32);
int duy = Fixed32.sind(angle32);
int dvy = Fixed32.cosd(angle32);
graphics.drawTexturedPath(x, y, null, null, 0, 0, dvx, dux, dvy, duy, bitmapImage);
How do I modify this code to rotate around the center of the image with drawTexturedPath (http://www.blackberry.com/developers/docs/5.0.0api/net/rim/device/api/ui/Graphics.html#drawTexturedPath)?
FYI, a similar post describes other 2D affine transformations with drawTexturedPath, including skew and some 3D effects: "BlackBerry - image 3D transform".
-Thanks in advance, David Pixelmonks.com

To rotate around the center, you need to displace your bitmap before the rotation:
instead of
int[] x = new int[] {0, width, width, 0};
int[] y = new int[] {0, 0, height, height};
you should use
int[] x = new int[] {-width / 2, width / 2, width / 2, -width / 2};
int[] y = new int[] {-height / 2, -height / 2, height / 2, height / 2};
then apply the rotation, and finally add width / 2 back to all your x-values and height / 2 back to all your y-values (or add whatever screen position you want the center to land on). A minimal sketch of this recipe follows.
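Here is a minimal sketch of that translate-rotate-translate step for the four path corners. The helper name and the use of plain double math instead of Fixed32 are my own choices, not from the original answer; the texture-step parameters (dux, dvx, duy, dvy) are assumed to stay as in the question.
// Hypothetical helper: returns the four destination corners of the image,
// rotated by angleDegrees around the image center and placed at (centerX, centerY).
private static int[][] cornersAroundCenter(int width, int height,
                                           double angleDegrees,
                                           int centerX, int centerY) {
    // corners relative to the image center (same arrays as in the answer above)
    int[] cx = { -width / 2,  width / 2, width / 2, -width / 2 };
    int[] cy = { -height / 2, -height / 2, height / 2, height / 2 };
    double rad = Math.toRadians(angleDegrees);
    double cos = Math.cos(rad), sin = Math.sin(rad);
    int[] x = new int[4];
    int[] y = new int[4];
    for (int i = 0; i < 4; i++) {
        // rotate around the origin, then translate so the anchor sits at (centerX, centerY)
        x[i] = (int) Math.round(cx[i] * cos - cy[i] * sin) + centerX;
        y[i] = (int) Math.round(cx[i] * sin + cy[i] * cos) + centerY;
    }
    return new int[][] { x, y };
}
The two returned arrays would then be passed as the x and y path arguments to graphics.drawTexturedPath, with the same dux/dvx/duy/dvy values as in the question.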

Related

TriangleMesh - Backside faces are visible

Good day! I have the following issue: the graphic model is not displayed correctly. Some backside faces of the model that should be hidden by the frontside faces remain visible. Here are some examples to clarify: (isometry)
(issue)
This issue becomes especially noticeable when applying light and material. So the question is: how can this be solved in JavaFX?
UPD:
import javafx.application.Application;
import javafx.event.EventHandler;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.input.MouseEvent;
import javafx.scene.paint.Color;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.DrawMode;
import javafx.scene.shape.MeshView;
import javafx.scene.shape.TriangleMesh;
import javafx.scene.transform.Rotate;
import javafx.scene.transform.Scale;
import javafx.scene.transform.Translate;
import javafx.stage.Stage;

public class VertexTest extends Application {
PerspectiveCamera camera;
Cam cam = new Cam();
double mouseOldX, mouseOldY, mousePosX, mousePosY, mouseDeltaX, mouseDeltaY;
public static void main(String[] args) {
launch(args);
}
@Override
public void start(Stage primaryStage) throws Exception {
TriangleMesh mesh = new Shape3DRectangle(100, 100, 100);
MeshView view = new MeshView(mesh);
view.setDrawMode(DrawMode.LINE);
view.setMaterial(new PhongMaterial(Color.RED));
cam.getChildren().add(view);
Scene scene = new Scene(cam, 1000, 1000, true);
addEvents(view, scene);
camera = new PerspectiveCamera();
camera.setTranslateX(-500);
camera.setTranslateY(-500);
camera.setTranslateZ(1000);
scene.setCamera(camera);
primaryStage.setScene(scene);
primaryStage.show();
}
private void addEvents(MeshView view, Scene s) {
s.setOnMouseDragged(new EventHandler<MouseEvent>() {
public void handle(MouseEvent me) {
mouseOldX = mousePosX;
mouseOldY = mousePosY;
mousePosX = me.getX();
mousePosY = me.getY();
mouseDeltaX = mousePosX - mouseOldX;
mouseDeltaY = mousePosY - mouseOldY;
cam.ry.setAngle(cam.ry.getAngle() - mouseDeltaX);
cam.rx.setAngle(cam.rx.getAngle() + mouseDeltaY);
}
});
}
class Cam extends Group {
Translate t = new Translate();
Translate p = new Translate();
Translate ip = new Translate();
Rotate rx = new Rotate();
{
rx.setAxis(Rotate.X_AXIS);
}
Rotate ry = new Rotate();
{
ry.setAxis(Rotate.Y_AXIS);
}
Rotate rz = new Rotate();
{
rz.setAxis(Rotate.Z_AXIS);
}
Scale s = new Scale();
public Cam() {
super();
getTransforms().addAll(t, p, rx, rz, ry, s, ip);
}
}
public class Shape3DRectangle extends TriangleMesh {
public Shape3DRectangle(float Width, float Height, float deep) {
this.getPoints().setAll(-Width / 2, Height / 2, deep / 2, // idx p0
Width / 2, Height / 2, deep / 2, // idx p1
-Width / 2, -Height / 2, deep / 2, // idx p2
Width / 2, -Height / 2, deep / 2, // idx p3
-Width / 2, Height / 2, -deep / 2, // idx p4
Width / 2, Height / 2, -deep / 2, // idx p5
-Width / 2, -Height / 2, -deep / 2, // idx p6
Width, -Height / 2, -deep / 2 // idx p7
);
this.getTexCoords().addAll(0.0f, 0.0f);
this.getFaces().addAll(5, 0, 4, 0, 0, 0 // P5,T1 ,P4,T0 ,P0,T3
, 5, 0, 0, 0, 1, 0 // P5,T1 ,P0,T3 ,P1,T4
, 0, 0, 4, 0, 6, 0 // P0,T3 ,P4,T2 ,P6,T7
, 0, 0, 6, 0, 2, 0 // P0,T3 ,P6,T7 ,P2,T8
, 1, 0, 0, 0, 2, 0 // P1,T4 ,P0,T3 ,P2,T8
, 1, 0, 2, 0, 3, 0 // P1,T4 ,P2,T8 ,P3,T9
, 5, 0, 1, 0, 3, 0 // P5,T5 ,P1,T4 ,P3,T9
, 5, 0, 3, 0, 7, 0 // P5,T5 ,P3,T9 ,P7,T10
, 4, 0, 5, 0, 7, 0 // P4,T6 ,P5,T5 ,P7,T10
, 4, 0, 7, 0, 6, 0 // P4,T6 ,P7,T10 ,P6,T11
, 3, 0, 2, 0, 6, 0 // P3,T9 ,P2,T8 ,P6,T12
, 3, 0, 6, 0, 7, 0 // P3,T9 ,P6,T12 ,P7,T13
);
}
}
}
I've been playing around with your sample, and I think I've found the reason for your issues.
First, I checked the winding of the faces. All of them are counter-clockwise, so all their normals point outwards, as they should.
Then I modified other vertices instead of the last one. In some cases there were no issues; in others, the issue was still there.
Basically, the issue happens when there are "concave" surfaces, meaning that two faces have normals that will cross. It doesn't happen when all the surfaces are "convex", meaning that their normals point outwards and won't cross.
This is a clear image of both types of meshes:
Back to your sample, you are defining a concave mesh:
But if, instead of modifying vertex #7, we make vertex #5 larger, we have a convex mesh with no rendering issues:
Obviously, while this fixes the rendering problem, it changes your initial shape.
If you want to keep your initial geometry, the other possible solution is to change the faces so you don't have any concave areas.
Let's have a look at the faces 5-1-3 and 5-3-7, and let's say we now want to move vertex #1.
If we keep your triangles, faces 5-1-3 and 5-3-7 will define a concave surface to be rendered (their normals will cross), while if we change those triangles to 5-1-7 and 1-3-7, the surface will be convex (their normals won't cross):
Back to your initial shape, this change in those two faces will solve the rendering issues.
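In the getFaces() call from the question, that change means replacing the two entries for faces 5-1-3 and 5-3-7 with 5-1-7 and 1-3-7 (the texture index stays 0 everywhere):
, 5, 0, 1, 0, 7, 0 // P5 ,P1 ,P7   (was 5-1-3)
, 1, 0, 3, 0, 7, 0 // P1 ,P3 ,P7   (was 5-3-7)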
While the vertices are the same, the geometry is a little bit different, so it requires some refinement (more elements). Adding those elements should be done keeping this convexity idea in mind. The problem is not trivial, though, as you can see here.
Nice analysis by Jose but it looks to me as if the OP has just forgotten to divide the Width by 2 in this line of his code.
Width, -Height / 2, -deep / 2 // idx p7
should be
Width / 2, -Height / 2, -deep / 2 // idx p7
The class is called Shape3DRectangle, but with this mistake the geometry is not rectangular anymore.
You can set the cullFaceProperty for every Shape3D. I guess that is what you need but I am not sure whether I understood your question precisely.
Shape3D#cullFaceProperty
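For example, on the MeshView from the question this would be (standard JavaFX API; note that CullFace.BACK is already the default for Shape3D):
// cull faces whose winding points away from the camera (this is the default)
view.setCullFace(javafx.scene.shape.CullFace.BACK);
// or disable culling entirely, so both sides of every face are rendered
// view.setCullFace(javafx.scene.shape.CullFace.NONE);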

Calculate World Coordinates from Normalized Device Coordinates

I'm currently trying to register touches on the screen in world space.
I first convert them to normalized device coordinates, and then multiply a point on the near plane of the NDC cube (z = -1) and a point on the far plane (z = 1) by the inverted projection-view matrix to get a line between them.
My approach so far:
//Calculate ProjectionViewMatrix
Matrix.multiplyMM(projectionViewMatrix,0,perspectiveProjectionMatrix,0,viewMatrix,0);
//Calculate Inverse
Matrix.invertM(invertedProjectionViewMatrix,0,projectionViewMatrix,0);
float[] nearPoint = {x, y, -1, 1};
float[] farPoint = {x, y, 1, 1};
float[] nearPointWorldSpace = new float[4];
float[] farPointWorldSpace = new float[4];
Matrix.multiplyMV(nearPointWorldSpace,0, invertedProjectionViewMatrix,0, nearPoint,0);
Matrix.multiplyMV(farPointWorldSpace,0, invertedProjectionViewMatrix,0, farPoint,0);
perspectiveDevide(nearPointWorldSpace);
perspectiveDevide(farPointWorldSpace);
Where perspectiveDevide is defined as:
private static void perspectiveDevide(float[] vector) {
vector[0] /= vector[3];
vector[1] /= vector[3];
vector[2] /= vector[3];
}
Now what I should get is a near and a far point with the same (or very similar) X/Y coordinates, because my camera sits directly above the look-at point with no angle.
However what I do get is this:
NearPointWorld:
[0] -0.002805814
[1] 0.046295937
[2] 1.9
[3] 9.999999
FarPointWorld:
[0] -2.8057494
[1] 46.294872
[2] -97.99771
[3] 0.010000229
Any Ideas what might be wrong?
EDIT:
Here's my code for the View and Projection Matrix:
Projection:
Matrix.perspectiveM(perspectiveProjectionMatrix,0, 60, (float) width / (float) height, 0.1f, 100f);
View:
Matrix.setLookAtM(viewMatrix,0,
0,0,2,
0,0,0,
0,1,0);
As Nico Schertler pointed out, these results are actually reasonable.
To get the correct X/Y coordinates I had to unproject the screen center.
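For reference, here is a minimal sketch of the full touch-to-ray path using the matrices and the perspectiveDevide helper from the question. The screen-to-NDC formulas are the standard ones; touchX, touchY, viewportWidth and viewportHeight are assumed names, not from the original post.
// Convert a touch position (in pixels, origin at the top-left) to normalized device coordinates.
float xNdc = 2f * touchX / viewportWidth - 1f;
float yNdc = 1f - 2f * touchY / viewportHeight;   // screen y grows downward, NDC y grows upward

float[] nearPoint = { xNdc, yNdc, -1f, 1f };
float[] farPoint  = { xNdc, yNdc,  1f, 1f };
float[] nearWorld = new float[4];
float[] farWorld  = new float[4];
Matrix.multiplyMV(nearWorld, 0, invertedProjectionViewMatrix, 0, nearPoint, 0);
Matrix.multiplyMV(farWorld,  0, invertedProjectionViewMatrix, 0, farPoint,  0);
perspectiveDevide(nearWorld);
perspectiveDevide(farWorld);

// The pick ray starts at the near point and points toward the far point.
float[] rayDirection = {
    farWorld[0] - nearWorld[0],
    farWorld[1] - nearWorld[1],
    farWorld[2] - nearWorld[2]
};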

How to draw a triangle at the position which the user clicks on

I'm trying to draw a triangle at the position which the user clicks on.
This is what I've done so far:
int[] xPoints = {(xPosition / 2), xPosition, (xPosition + (xPosition / 2))};
int[] yPoints = {(yPosition + yPosition), yPosition, (yPosition + yPosition)};
g.drawPolygon(xPoints, yPoints, 3);
The problem is that the size of the triangle varies depending on the xPosition and yPosition (these are taken from mouse coordinates).
Any ideas how I can just place a fixed size triangle at the specified X and Y coordinates?
Instead of using xPosition / 2 and yPosition for the first and third points, use a fixed offset from the xPosition like so:
//use whatever size you want
//this will make a triangle with the top at the clicked point
int halfWidth = 50, height = 100;
int[] xPoints = { xPosition - halfWidth, xPosition, xPosition + halfWidth };
int[] yPoints = { yPosition + height, yPosition, yPosition + height };
You can play around with the sizes, but if you want it to be equilateral, then height should be Math.sqrt(3) * halfWidth.
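For completeness, here is a small self-contained Swing sketch that repaints a fixed-size triangle with its apex at the last click. The class and constant names are illustrative, not from the original post.
import java.awt.Graphics;
import java.awt.Point;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JFrame;
import javax.swing.JPanel;

class TrianglePanel extends JPanel {
    private static final int HALF_WIDTH = 50;
    private static final int HEIGHT = (int) Math.round(Math.sqrt(3) * HALF_WIDTH); // equilateral
    private Point click; // last clicked point, used as the apex of the triangle

    TrianglePanel() {
        addMouseListener(new MouseAdapter() {
            @Override
            public void mousePressed(MouseEvent e) {
                click = e.getPoint();
                repaint();
            }
        });
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (click == null) return;
        int[] xPoints = { click.x - HALF_WIDTH, click.x, click.x + HALF_WIDTH };
        int[] yPoints = { click.y + HEIGHT, click.y, click.y + HEIGHT };
        g.drawPolygon(xPoints, yPoints, 3);
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Triangle at click");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new TrianglePanel());
        frame.setSize(400, 400);
        frame.setVisible(true);
    }
}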
Pick a size and call it SIZE:
int[] xPoints = {xPosition, xPosition, xPosition + SIZE};
int[] yPoints = {yPosition, yPosition + SIZE, yPosition};
This will draw a triangle which doesn't change size at different points. However, if you want a certain kind of triangle which points a certain direction, you will need to use some geometry and perhaps trigonometry to do the calculations.
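As a sketch of that trigonometric approach (my own illustration, not from the answer above; g, xPosition and yPosition are taken from the question's context): the vertices of an equilateral triangle of circumradius R centered on the click can be computed by stepping the angle in thirds of a full turn.
// Equilateral triangle of circumradius R centered on (xPosition, yPosition).
// 'angle' controls which way the first vertex points; -PI/2 points it straight up on screen.
int n = 3;
double R = 60;
double angle = -Math.PI / 2;
int[] xPoints = new int[n];
int[] yPoints = new int[n];
for (int i = 0; i < n; i++) {
    double a = angle + 2 * Math.PI * i / n;
    xPoints[i] = xPosition + (int) Math.round(R * Math.cos(a));
    yPoints[i] = yPosition + (int) Math.round(R * Math.sin(a));
}
g.drawPolygon(xPoints, yPoints, n);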

Detecting Hough circles android

I am trying to detect circles using Android. I managed to implement the line-detection algorithm, but nothing gets displayed when trying the Hough circles algorithm.
Here is my code:
Mat thresholdImage = new Mat(getFrameHeight() + getFrameHeight() / 2, getFrameWidth(), CvType.CV_8UC1);
mYuv.put(0, 0, data);
Imgproc.cvtColor(mYuv, destination, Imgproc.COLOR_YUV420sp2RGB, 4);
Imgproc.cvtColor(destination, thresholdImage, Imgproc.COLOR_RGB2GRAY, 4);
Imgproc.GaussianBlur(thresholdImage, thresholdImage, new Size(9, 9), 2, 2 );
Mat circles = new Mat();
Imgproc.HoughCircles(thresholdImage, circles, Imgproc.CV_HOUGH_GRADIENT, 1d, (double)thresholdImage.height()/70, 200d, 100d);
Log.w("circles", circles.cols()+"");
for (int x = 0; x < circles.cols(); x++)
{
double vCircle[]=circles.get(0,x);
Point center=new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
int radius = (int)Math.round(vCircle[2]);
// draw the circle center
Core.circle(destination, center, 3,new Scalar(0,255,0), -1, 8, 0 );
// draw the circle outline
Core.circle( destination, center, radius, new Scalar(0,0,255), 3, 8, 0 );
}
You may have got this sorted by now, but a few things: I'd check that your circles Mat actually has some results (sometimes vCircle seems to come back null), and try one of the other versions of HoughCircles:
int iCannyUpperThreshold = 100;
int iMinRadius = 20;
int iMaxRadius = 400;
int iAccumulator = 300;
int iLineThickness = 3; // used when drawing below; value assumed, pick what suits you
Imgproc.HoughCircles(thresholdImage, circles, Imgproc.CV_HOUGH_GRADIENT,
2.0, thresholdImage.rows() / 8, iCannyUpperThreshold, iAccumulator,
iMinRadius, iMaxRadius);
if (circles.cols() > 0)
for (int x = 0; x < circles.cols(); x++)
{
double vCircle[] = circles.get(0,x);
if (vCircle == null)
break;
Point pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
int radius = (int)Math.round(vCircle[2]);
// draw the found circle
Core.circle(destination, pt, radius, new Scalar(0,255,0), iLineThickness);
Core.circle(destination, pt, 3, new Scalar(0,0,255), iLineThickness);
}
(I swapped your code into mine, renamed some stuff and swapped it back, I think I've got it back so it works...)
B.

OpenGL/Android -- Setting up a 2D OpenGL orthogonal coordinate system that matches the screen pixels

I am trying to get some circles drawn on screen using OpenGL ES 1.5 for Android. They draw, but I want to be able to input x=300, y=500 and have the circle drawn centered at that coordinate (e.g. at the (300, 500) pixel on the screen). Currently I draw and translate the circles, but it's not precise and I don't know how to get it exactly where I want it. Here's some broken code from my last attempt:
//doesn't take w/h ratio into consideration, not sure how to implement that
gl.glViewport(0, 0, windowWidth, windowHeight);
gl.glOrthof(0,windowWidth, 0, windowHeight, 1, 2);
GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 1, 0);
//And for drawing a circle, with the desired x and y coordinates:
for (int j = 0; j < number_Triangles; j++) {
x = Math.cos(theta) + xCoor;
y = Math.sin(theta) + yCoor;
z = 1;
theta += 2 * Math.PI / (number_Triangles);
}
If you are doing 2D graphics, I'd recommend gluOrtho2D(left,right,bottom,top). That way you have exact control over what coordinates will map to each edge of your screen.
So, for example, you could have:
gl.glViewport(0,0,windowWidth,windowHeight);
GLU.gluOrtho2D(gl, -2.0f, 2.0f, -2.0f, 2.0f);
for (int j = .....
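If the goal from the question is for x=300, y=500 to land exactly on the (300, 500) screen pixel, a pixel-matched setup would look roughly like this. This is a sketch assuming a GL ES 1.x GL10 object named gl; the bottom/top arguments are swapped so that y grows downward, matching screen coordinates.
gl.glViewport(0, 0, windowWidth, windowHeight);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
// one world unit == one pixel, origin at the top-left corner of the screen
GLU.gluOrtho2D(gl, 0, windowWidth, windowHeight, 0);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
With this projection, a circle built around (300, 500) in model coordinates appears centered at that pixel, and no gluLookAt call is needed.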
