I've got 2 questions. One's a specific question about why my code isn't working as intended and the other is a design question.
1 (Specific Question): I'm trying to map screen coordinates to world coordinates in a 2D tile-based engine (uses the x/y axes; z = 0). I've used a port of one of NeHe's tutorials on how to achieve this, but the results I get aren't as expected.
I have a class called MouseController. Whenever a mouse event is triggered (via Swing's MouseListener), I pass the MouseEvent into my MouseController.
In my GLCanvas's draw function, I call MouseController.processClick(GL) so I can pass the current GL context into processClick and grab the modelview matrix, projection matrix, and viewport.
When I click on a block rendered on the screen, the world coordinates that are given back to me make little to no sense. For one, I would expect the z value to be 0, but it's 9 (which is how high my camera is set), and my x and y values are always really close to 0 (occasionally jumping up to 1-9 with very slight movements, then back to a number very close to 0).
Anyone have any idea why this might be the case? The processClick function is below:
public void processClick(GL gl) {
    // e is the MouseEvent stored when the MouseListener fired
    int x = e.getX();
    int y = e.getY();
    if (e.getButton() == MouseEvent.BUTTON1) {
        // grab the current viewport, modelview and projection matrices
        gl.glGetIntegerv(GL.GL_VIEWPORT, viewPort, 0);
        gl.glGetDoublev(GL.GL_MODELVIEW_MATRIX, mvMatrix, 0);
        gl.glGetDoublev(GL.GL_PROJECTION_MATRIX, prMatrix, 0);
        // flip y: OpenGL window coordinates start at the bottom left
        int realy = viewPort[3] - y;
        glu.gluUnProject((double) x, (double) realy, 0, mvMatrix, 0, prMatrix, 0, viewPort, 0, wCoord, 0);
        System.out.println(x + " " + y);
        System.out.println(x + " " + realy);
        System.out.println(wCoord[0] + " " + wCoord[1] + " " + wCoord[2]);
    }
    e = null;
}
Sample output I get from the above function when I click on the screen where I rendered a square at world coordinates (4,5,0):
878 56
878 636
0.0445182388817236 0.055475957454737095 8.900000001489369
Thanks!
EDIT: Reading the depth buffer with glReadPixels and using that as the z (which returns 1) gets me results that are roughly right, but too big by a factor of 20.
EDIT2: If I set the far clipping plane to the same value as the height of the camera, it seems to work (but this isn't really a fix).
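For what it's worth, the winZ value passed to gluUnProject is a depth in [0, 1] between the near and far planes, so winZ = 0 lands on the near plane just in front of the camera (presumably zNear = 0.1 with the camera at height 9, hence the 8.9 above), which also explains why EDIT2 only "works" when the frustum happens to end at z = 0. A common fix is to unproject twice and intersect the resulting ray with the z = 0 tile plane, making the result independent of the clipping planes. A minimal sketch, using the same fields and locals as processClick above:
// Sketch: build a pick ray from the near and far planes, then intersect
// it with the z = 0 plane the tiles live on. Uses the same fields
// (glu, mvMatrix, prMatrix, viewPort) as processClick above.
double[] near = new double[3];
double[] far = new double[3];
glu.gluUnProject(x, realy, 0.0, mvMatrix, 0, prMatrix, 0, viewPort, 0, near, 0);
glu.gluUnProject(x, realy, 1.0, mvMatrix, 0, prMatrix, 0, viewPort, 0, far, 0);

// solve near.z + t * (far.z - near.z) = 0 for t, then walk the ray
double t = -near[2] / (far[2] - near[2]);
double worldX = near[0] + t * (far[0] - near[0]);
double worldY = near[1] + t * (far[1] - near[1]);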
2 (Design Question): I feel as if it doesn't make sense to process clicks in the OpenGL canvas's draw function, but I seem to require the GL from that function to retrieve the matrices gluUnProject needs. How would you design this? Ideally this would run completely separately from the draw function: store a reference to the GL object in my MouseController and process the click as soon as the MouseListener picks up on it.
Input about how you would/do handle it would be much appreciated too!
I decided just to use gluOrtho2D to set up my projection to simplify solving this problem.
The only "tricky" thing was passing the actual screen resolution to gluOrtho2D, which could be retrieved (and stored) with:
getContentPane().getSize()
from the JFrame I used.
Setting the preferred size on the panel then packing the frame achieved close results, but the panel itself was still slightly off.
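For completeness, a sketch of that projection setup (JOGL-style; panelSize here is assumed to be the Dimension stored from getContentPane().getSize()):
// Sketch: one world unit per pixel, origin at the bottom left.
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluOrtho2D(0f, (float) panelSize.getWidth(), 0f, (float) panelSize.getHeight());
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glLoadIdentity();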
I'm making a 2D platformer game. I have created a texture for the platform that is meant to be repeated over and over to fill the entire platform without going over. My first attempt was to draw all the pixels from the bitmap manually, but this caused the background to flicker through while moving the platform (the movement and drawing threads are separate, so the movement can run at a specific speed while the FPS doesn't need to suffer). I found this technique worked better:
// Init
bitmap = new BitmapDrawable(res, Texture.PLATFORM.getBitmap());
bitmap.setTileModeXY(Shader.TileMode.REPEAT, Shader.TileMode.REPEAT);
// Drawing loop
int x = getX() + (isStill() ? 0 : (int)MainActivity.offsetX);
int y = getY() + (isStill() ? 0 : (int)MainActivity.offsetY);
bitmap.setBounds(x, y, x + getWidth(), y + getHeight());
bitmap.draw(canvas);
However, the bitmap appears to stay static while the platform acts as a "view hole" through which the bitmap is seen. The only workaround I can think of is to somehow "offset" the static bitmap:
bitmap.offset(x, y);
Obviously, that isn't a function. I couldn't find one that would do what I want when looking through the docs.
To sum things up, the BitmapDrawable is causing the background not to move with the platform, making it look super weird.
Thanks in advance!
Try these tips in your code (I assumed the game moves forward in the horizontal direction):
The GUY should only move up and down (with the appropriate touch input) and not forward and backward, as you want the focus (or, alternatively, the camera) solely on the GUY. I noticed in your video that the WALL was moving up when the GUY moved from an initially higher position on the wall to a slightly lower position later; rectify this, because it is the GUY who should move down (try to implement a gravity effect).
The WALL should only move forward (mostly) and backward (less often, I guess). The WALL shouldn't normally move up and down, and the gravity effect should not apply to it. You can create at least 2 BitmapDrawable instances of the WALL per screen. They can then be reused sequentially (e.g. if the first one goes completely off screen, re-show it in the desired position using the setBounds() method), and the same goes for the others throughout the whole game.
The current BLUE BACKGROUND, if it is part of a larger map, needs to be offset appropriately.
One of the obstacles I can think of while writing this is to move the WALL down until it goes off screen, which results in the death of the GUY.
In the places where I have used the word move, you need to use the setBounds(a, b, c, d) method to make the necessary position changes, as I didn't find any other way to update the position of a BitmapDrawable instance. I think you'd need a game framework like libGdx to get a luxury method like setOffset(x, y) or something similar.
Sorry that I could only present you the ideas without specific code, as I don't have past experience working on a project like this. Hope it helps you in any way possible.
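That said, one generic trick may get the texture to move with the platform without a framework (a sketch, not verified against this particular game): translate the Canvas before drawing, since the REPEAT shader's tile pattern is anchored to canvas coordinates rather than to the drawable's bounds:
// Sketch: translating the canvas shifts the shader's tiling origin, so the
// pattern follows the platform instead of acting as a static "view hole".
// x, y, getWidth() and getHeight() are assumed to come from the platform.
canvas.save();
canvas.translate(x, y);
bitmap.setBounds(0, 0, getWidth(), getHeight()); // bounds are now canvas-relative
bitmap.draw(canvas);
canvas.restore();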
I am very new to this ARCore and I have been looking at the HelloAR Java Android Studio project provided in the SDK.
Everything works OK and is pretty cool; however, I want to place/drop an object when I touch the screen, even when no planes have been detected. Let me explain a little better...
As I understand ARCore, it will detect horizontal planes, and ONLY on those horizontal planes can I place 3D objects to be motion tracked.
Is there any way (perhaps using PointCloud information) to be able to place an object in the scene even if there are no horizontal planes detected? Sort of like these examples?
https://experiments.withgoogle.com/ar/flight-paths
https://experiments.withgoogle.com/ar/arcore-drawing
I know they are using Unity and openFrameworks, but could that be done in Java?
Also, I have looked at
How to put an object in the air?
and
how to check ray intersection with object in ARCore
but I don't think I'm understanding the concept of an Anchor (I managed to drop the object into the scene, but it either disappears immediately or it is just a regular OpenGL object with no knowledge of the real world).
What I want to understand is:
- Is it possible (and if so, how) to create a custom/user-defined plane, that is, a plane that is NOT automatically detected by ARCore?
- How can I create an Anchor (the sample does it in the PlaneAttachment class, I think) that is NOT linked to any plane, OR that is linked to some PointCloud point?
- How do I draw the object and place it at the previously created Anchor?
I think this is too much to ask, but looking at the API documentation has not helped me at all.
Thank you!
Edit:
Here is the code that I added to HelloArActivity.java (everything is the same as the original file except for the lines after the // ***** markers; elided sections are shown as ...):
@Override
public void onDrawFrame(GL10 gl) {
    ...
    MotionEvent tap = mQueuedSingleTaps.poll();
    // I added this to use the screenPointToWorldRay function from the second link I posted... I am probably using this wrong
    float[] worldXY = new float[6];
    ...
    if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
        // ***** I added this to use the screenPointToWorldRay function
        worldXY = screenPointToWorldRay(tap.getX(), tap.getY(), frame);
        ...
    }
    ...
    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (PlaneAttachment planeAttachment : mTouches) {
        ...
    }
    // ***** This places the object momentarily in the scene (it disappears immediately)
    frame.getPose().compose(Pose.makeTranslation(worldXY[3], worldXY[4], worldXY[5])).toMatrix(mAnchorMatrix, 0);
    // ***** This places the object in the middle of the scene, but since it is not attached to anything there is no tracking; it always stays in the middle of the screen (pretty much expected behaviour)
    // frame.getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).toMatrix(mAnchorMatrix, 0);
    // ***** I duplicated this code, which normally gets executed ONLY when touching a detected plane/surface.
    mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
    mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    ...
}
You would first have to perform a hit test via Frame.hitTest and iterate over the HitResult objects until you hit a Point type Trackable. You could then retrieve a pose for that hit result via HitResult.getHitPose, or attach an anchor to that point and get the pose from that via ArAnchor.getPose (best approach).
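A minimal sketch of that first approach, assuming the released ARCore Java API (the preview SDK the HelloAR sample above uses names things slightly differently):
// Sketch: anchor the model to a feature point hit by the tap.
// "anchors" is a hypothetical list that the render loop draws each frame.
for (HitResult hit : frame.hitTest(tap)) {
    if (hit.getTrackable() instanceof Point) {
        Anchor anchor = hit.createAnchor(); // ARCore tracks this pose from now on
        anchors.add(anchor);
        break;
    }
}
// Then, per frame and per anchor:
// anchor.getPose().toMatrix(mAnchorMatrix, 0);
// mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
// mVirtualObject.draw(viewmtx, projmtx, lightIntensity);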
However, if you want to do this yourself from an arbitrary point retrieved with ArPointCloud.getPoints, it will take a little more work. In this approach, the question effectively reduces to "How can I derive a pose / coordinate basis from a point?".
When working from a plane it is relatively easy to derive a pose, as you can use the plane normal as the up (y) vector for your model and pick the x and z vectors to configure which way you want the model to "face" on that plane (where each vector is perpendicular to the other two).
When trying to derive a basis from a point, you have to pick all three vectors (x, y and z) relative to the origin point you have. You can derive the up vector by transforming the vector (0,1,0) through the camera view matrix (assuming you want the top of the model to face the top of your screen) using ArCamera.getViewMatrix. Then you can pick the x and z vectors as any two mutually perpendicular vectors that orient the model in your desired direction.
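As an illustration of picking those vectors (plain vector math, not a specific ARCore call; all names below are made up for the sketch):
// Sketch: build three mutually perpendicular axes around a point, given a
// desired up direction and the camera's forward direction.
static float[][] basisFromPoint(float[] up, float[] cameraForward) {
    float[] y = normalize(up);
    float[] x = normalize(cross(y, cameraForward)); // perpendicular to both
    float[] z = cross(x, y);                        // completes the right-handed basis
    return new float[][] { x, y, z };
}

static float[] cross(float[] a, float[] b) {
    return new float[] {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    };
}

static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new float[] { v[0] / len, v[1] / len, v[2] / len };
}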
// c, x, y, boxWidth, boxHeight, count and iterations are fields
g.setColor(c);
double changeFactor = (2 << count); // 2^(count + 1)
g.fillRect(x + 1, y + 1 + (int) (boxHeight * (1 / changeFactor)),
           boxWidth - 2, boxHeight - 2 - (int) (boxHeight * (1 / changeFactor)));
count++;
if (count == iterations)
{
    count = 0;
}
Above is the code for a graphical halving mechanism that I've been trying to make. In short, it's a simple graphics exercise that starts with a filled bar and halves the bar's remaining height every 250 ms, producing an exponential decline in height. This is the current output:
Link for those of you who are having trouble viewing the image
This is the exact opposite of the output that I want: the white region in the bar is a flipped image of what I want the red region to do. Can you help me figure out why the code is not behaving correctly?
Sorry for the newbie question, by the way.
EDIT:
here's the answer:
g.fillRect(x+1, y+1+(int)(boxHeight*(double)((changeFactor-1)/changeFactor)), boxWidth-2, boxHeight-2-(int)(boxHeight*(double)((changeFactor-1)/changeFactor)));
Long story short, it's because I miscalculated the numerator for what I would want the height/y-coordinate to be.
According to the fillRect definition, the second parameter is the location of the top of the box. (In computer graphics, numbers often get bigger as you go downward along the Y axis, which is the opposite of how mathematicians draw graphs.) Assuming you want the top of your bar to keep heading downward (as opposed to the bottom of the bar going upward), that means the second parameter needs to keep getting larger. However, in your code, the parameter keeps getting smaller, since 1/changeFactor keeps getting smaller.
If h is the total height and y is the top of the box, then the successive values you want for this parameter are:
y
y + h/2
y + h/2 + h/4
y + h/2 + h/4 + h/8 ...
or, expressed another way:
y + h - h
y + h - h/2
y + h - h/4
y + h - h/8
Based on that, I think you can see how to make some small modifications to the code and get it to work.
Also, make sure you clear the box in between iterations. fillRect won't touch anything outside the rectangle that you're drawing. So if you want to make the filled part smaller, you have to do something to clear out the part that was previously colored that you don't want colored any more.
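Putting that together, a corrected sketch of one animation step (same fields as the original snippet; clearing with Color.WHITE is an assumption about the background color):
// Sketch: clear last frame's fill, then draw the red region with its top
// edge following y + h - h/2^n so it marches downward each step.
void drawStep(Graphics g) {
    g.setColor(Color.WHITE); // assumed background color
    g.fillRect(x + 1, y + 1, boxWidth - 2, boxHeight - 2);

    double changeFactor = 2 << count; // 2^(count + 1)
    int inset = (int) (boxHeight * ((changeFactor - 1) / changeFactor));
    g.setColor(c);
    g.fillRect(x + 1, y + 1 + inset, boxWidth - 2, boxHeight - 2 - inset);

    count++;
    if (count == iterations) {
        count = 0;
    }
}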
My question can be simplified to the following: If a 3d triangle is being projected and rendered to a 2d viewing plane, how can the z value of each pixel being rendered be calculated in order to be stored to a buffer?
I currently have a working Java program that is capable of rendering 3d triangles to the 2d view as a solid color, and the camera can be moved, rotated, etc. with no problem, working exactly how one would expect. But if I render two triangles over each other, the one closer to the camera doesn't always obscure the farther one as expected. A Z buffer seems like the best remedy: store the z value of each pixel I render to the screen, and when another pixel is about to be rendered at the same coordinate, compare its z value to that of the current pixel when deciding which one to render. The issue I'm now facing is as follows:
How do I determine the z value of each pixel I render? I've thought about it, and there seem to be a few possibilities. One option involves finding the equation of the plane (ax + by + cz + d = 0) on which the face lies, then interpolating for each pixel in the triangle being rendered (e.g. halfway x-wise on the 2d rendered triangle maps to halfway x-wise through the 3d triangle, same for the y, then solve for z using the plane's equation), though I'm not certain this would work. The other option I thought of is iterating through each point of the 3d triangle at a given quantum, rendering each point individually and using the z of that point (which I'd probably also have to find through the plane's equation).
Again, I'm currently mainly considering using interpolation, so the pseudo-code would look like this (if I have the plane's equation as "ax + by + cz + d = 0"):
xrange = (pixel.x - 2dtriangle.minX)/(2dtriangle.maxX - 2dtriangle.minX)
yrange = (pixel.y - 2dtriangle.minY)/(2dtriangle.maxY - 2dtriangle.minY)
x3d = (3dtriangle.maxX - 3dtriangle.minX) * xrange + 3dtriangle.minX
y3d = (3dtriangle.maxY - 3dtriangle.minY) * yrange + 3dtriangle.minY
z = (-d - a*x3d - b*y3d)/c
Where pixel.x is the x value of the pixel being rendered, 2dtriangle.minX and 2dtriangle.maxX are the minimum and maximum x values of the triangle being rendered (i.e. of its bounding box) after having been projected onto the 2d view, and its min/max Y variables are the same, but for its Y. 3dtriangle.minX and 3dtriangle.maxX are the minimum and maximum x values of the 3d triangle before having been projected onto the 2d view, a, b, c, and d are the coefficients of the equation of the plane on which the 3d triangle lies, and z is the corresponding z value of the pixel being rendered.
Will that method work? If there's any ambiguity please let me know in the comments before closing the question! Thank you.
The best solution would be calculating the depth for each vertex of the triangle. Then we can get the depth of each pixel the same way we do for colors when rendering a triangle with Gouraud shading. Doing that while rasterizing makes it easy to check the depth as we go.
Say we have a triangle on screen and we draw it as scanlines from the top to the bottom. We calculate the depth slopes from the top vertex to the other two, and add the correct amount of depth every time we move to the next line... and so on.
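For illustration, here is a sketch of the same idea using barycentric weights instead of scanline slopes (all names are made up; xs/ys are the projected screen coordinates of the three vertices and zs their depths):
// Sketch: interpolate the three vertex depths across the triangle, just
// like interpolating colors in Gouraud shading, and depth-test per pixel.
// Assumes counter-clockwise winding; flip the sign test for clockwise.
static void rasterizeDepth(double[] xs, double[] ys, double[] zs,
                           double[][] zBuffer, int width, int height) {
    int minX = (int) Math.max(0, Math.floor(Math.min(xs[0], Math.min(xs[1], xs[2]))));
    int maxX = (int) Math.min(width - 1, Math.ceil(Math.max(xs[0], Math.max(xs[1], xs[2]))));
    int minY = (int) Math.max(0, Math.floor(Math.min(ys[0], Math.min(ys[1], ys[2]))));
    int maxY = (int) Math.min(height - 1, Math.ceil(Math.max(ys[0], Math.max(ys[1], ys[2]))));

    double area = edge(xs[0], ys[0], xs[1], ys[1], xs[2], ys[2]);
    if (area == 0) return; // degenerate triangle

    for (int py = minY; py <= maxY; py++) {
        for (int px = minX; px <= maxX; px++) {
            // barycentric weights of this pixel w.r.t. the three vertices
            double w0 = edge(xs[1], ys[1], xs[2], ys[2], px, py) / area;
            double w1 = edge(xs[2], ys[2], xs[0], ys[0], px, py) / area;
            double w2 = edge(xs[0], ys[0], xs[1], ys[1], px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue; // outside the triangle

            double z = w0 * zs[0] + w1 * zs[1] + w2 * zs[2];
            if (z < zBuffer[py][px]) { // closer than what is stored
                zBuffer[py][px] = z;
                // plot the pixel's color here
            }
        }
    }
}

static double edge(double ax, double ay, double bx, double by, double px, double py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}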
You didn't provide your rendering method, so I can't say anything specific to it, but you should take a look at some tutorials on Gouraud shading. With some simple modifications you should be able to use the same technique for depth values.
Well, hopefully this helps!
I am currently working on a 3d visualisation project using JavaFX 8.
As having too many points is slow when rotating the camera around, I decided to hide those points (3d boxes in my case) that are not displayed on the scene.
The problem is that when I call box.localToScreen(0, 0, 0), the coordinates sometimes seem incorrect. E.g. sometimes the point is still displayed on the screen, but the coordinates returned by localToScreen(0, 0, 0) are negative. Have I missed something, or have I misused this method?
Here is some of the code I have:
// where I build these boxes from points
for (point p : mergedList) {
    Box pointBox = new Box(length, width, height);
    boxList.add(pointBox);
    pointBox.setTranslateX(p.getX());
    pointBox.setTranslateY(p.getY());
    pointBox.setTranslateZ(p.getZ());
    ...
// where I call localToScreen to get its coordinates
for (Box b : boxList) {
    Point2D p = b.localToScreen(0, 0, 0); // I have also tried b.localToScreen(b.getTranslateX(), b.getTranslateY(), b.getTranslateZ())
    double x = p.getX(), y = p.getY();
    System.out.println(x);
    System.out.println(y);
}
Thanks in advance.
I am also searching for a solution to some of the localToScreen and screenToLocal issues.
For your case: if you are using multiple monitors, only the primary monitor gives you positive coordinates. A secondary monitor will give you negative coordinates.
Have you tried localToScene instead of localToScreen?
Firstly, the localToScreen method transforms the provided point from the calling node's local coordinate space into screen coordinates.
Use getLocalToSceneTransform() instead...
This is your "world matrix", and it holds all your transformation info: rotations, scale, etc.
Your position values are {Tx, Ty, Tz}, so plug those into a Point3D and you have your position in scene space (mostly accurate).
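For instance, a small sketch of reading a Box's scene-space position that way (here b is assumed to be one of the Boxes from the question):
import javafx.geometry.Point3D;
import javafx.scene.transform.Transform;

// Sketch: the translation components of the local-to-scene transform
// give the node's position in scene space.
Transform t = b.getLocalToSceneTransform();
Point3D scenePos = new Point3D(t.getTx(), t.getTy(), t.getTz());
System.out.println(scenePos);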
Another dirty option to "hide" the Boxes is to set their CullFace to FRONT. This will reduce some of the performance issues, since they do not need to be rendered, but it leads to other potential problems with mouse picking and such.
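That option is a one-liner (again, b being one of the Boxes):
import javafx.scene.shape.CullFace;

b.setCullFace(CullFace.FRONT); // front-facing polygons are skipped by the rasterizer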
I recently posted a video of 32k+ cubes being rendered, and I noticed zero performance issues (the video encoding was not that great, so it's blurry in the beginning):
Video
Hope it helps!