Stretch 2D plane to 3D cube - java

I'm working on a Java game that has both a 2D game panel and "pseudo"-3D one.
It isn't real 3D; it's merely some 2D planes put into a 3D environment (no custom-made models).
I'm using jPCT as the render engine, and I'm currently looking into getting the walls rendered.
In the 2D view, they look like this:
In the 3D view, I'm trying to get them to look like this:
This works by stacking 10 planes on top of each other and that gives the illusion of a 3D wall.
The code to get this is:
Object3D obj = new Object3D(20);
for (int y = 0; y < 10; y++) {
    float fY = y / -10f;
    obj.addTriangle(new SimpleVector(-1, fY, 1), 0, 0,
                    new SimpleVector(-1, fY, -1), 0, 1,
                    new SimpleVector(1, fY, -1), 1, 1,
                    textureManager.getTextureID(topTexture));
    obj.addTriangle(new SimpleVector(1, fY, -1), 1, 1,
                    new SimpleVector(1, fY, 1), 1, 0,
                    new SimpleVector(-1, fY, 1), 0, 0,
                    textureManager.getTextureID(topTexture));
}
The problem is that when looking straight at it, you get this effect:
This could be reduced by increasing the number of planes and putting them closer together, but I'm looking for a more efficient way of getting the same effect.
I was thinking of rendering a cube with the 2D image as the top texture and using the outermost lines of pixels as textures for the sides, e.g. extracting a 40x1 image at (0,0) and at (0,39) from the base image and stretching these across the sides of the cubes (the original images are 40x40).
This won't work perfectly, though, because the visible part of these images is smaller than 40x40 (e.g. the top and bottom 40x9 pixels are transparent for a horizontal wall), so I should do some edge detection and start cutting there.
Any better suggestions on how to achieve the same effect?
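The strip-extraction idea above can be sketched with java.awt.image.BufferedImage; the scan for the first non-transparent row (to handle the transparent 40x9 bands) might look like this. The class and method names are illustrative, not from the actual project:

```java
import java.awt.image.BufferedImage;

public class EdgeStrip {
    // Scan down from the top for the first row containing any opaque pixel,
    // then extract that width x 1 strip to stretch across a cube side.
    // Assumes an ARGB image where fully transparent pixels have alpha == 0.
    static BufferedImage topEdgeStrip(BufferedImage tile) {
        for (int y = 0; y < tile.getHeight(); y++) {
            for (int x = 0; x < tile.getWidth(); x++) {
                int alpha = tile.getRGB(x, y) >>> 24;
                if (alpha != 0) {
                    return tile.getSubimage(0, y, tile.getWidth(), 1);
                }
            }
        }
        return null; // fully transparent tile: no visible edge to extract
    }
}
```

The same scan run from the bottom (and from the left/right for vertical walls) would give the other three side textures.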

The simplest but likely least performant solution: for any pixel in your image that is next to both a transparent pixel and a non-transparent pixel, render a tall rectangular cuboid for just that pixel.
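One way to read that rule, as a rough sketch: treat a pixel as an "edge" pixel when it is opaque but borders transparency (or the image boundary), and emit a cuboid for each such pixel. All names here are made up for illustration:

```java
public class EdgePixels {
    // Marks every opaque pixel that has at least one transparent or
    // out-of-bounds 4-neighbour; each marked pixel would get its own
    // tall cuboid in the approach described above.
    static boolean[][] edges(boolean[][] opaque) {
        int h = opaque.length, w = opaque[0].length;
        boolean[][] edge = new boolean[h][w];
        int[][] neighbours = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!opaque[y][x]) continue;
                for (int[] d : neighbours) {
                    int ny = y + d[0], nx = x + d[1];
                    if (ny < 0 || nx < 0 || ny >= h || nx >= w || !opaque[ny][nx]) {
                        edge[y][x] = true;
                        break;
                    }
                }
            }
        }
        return edge;
    }
}
```

For a 40x40 tile this caps the cuboid count at the silhouette length rather than one cuboid per opaque pixel, but it is still far more geometry than a single textured box.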

Related

I want to inscribe a rectangle into a grid. How do I get the list of grid cells that collide with the rectangle?

I'm coding a collision system for my 2D game engine in Java, but I'm having problems obtaining certain values. Let's say I have a rectangle and want to inscribe it into a grid. I want to list every grid cell that collides with the rectangle. As for the rectangle, what I know is its width, height, center point (x, y), and angle in radians. As for the cells, the coordinates of each cell are basically (n * size, m * size) where n, m = -2, -1, 0, 1, 2... (like in the image). I've been trying to find a fast solution for a long time, but with no luck. I have also created a reference image for you to better understand my problem. The pink cells are the ones I want. I hope there's someone who has had a similar problem and is willing to help me out :) Best of luck in your projects.
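Not from the question itself, but one standard approach can be sketched as follows: walk the grid cells inside the rotated rectangle's axis-aligned bounding box and keep those that pass a separating-axis (SAT) overlap test. All names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class GridOverlap {
    // Returns the (n, m) indices of grid cells overlapping a rectangle
    // given by center (cx, cy), size (w, h) and rotation angle in radians.
    static List<int[]> overlappingCells(double size, double cx, double cy,
                                        double w, double h, double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        double[][] signs = {{-1, -1}, {1, -1}, {1, 1}, {-1, 1}};
        double[][] rect = new double[4][2];
        for (int i = 0; i < 4; i++) { // rotate the four corners into world space
            double lx = signs[i][0] * w / 2, ly = signs[i][1] * h / 2;
            rect[i][0] = cx + lx * c - ly * s;
            rect[i][1] = cy + lx * s + ly * c;
        }
        double minX = rect[0][0], maxX = rect[0][0], minY = rect[0][1], maxY = rect[0][1];
        for (double[] p : rect) {
            minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
        }
        // SAT axes: the grid's axes plus the rectangle's two edge directions
        double[][] axes = {{1, 0}, {0, 1}, {c, s}, {-s, c}};
        List<int[]> cells = new ArrayList<>();
        for (int n = (int) Math.floor(minX / size); n * size < maxX; n++)
            for (int m = (int) Math.floor(minY / size); m * size < maxY; m++) {
                double[][] cell = {
                    {n * size, m * size}, {(n + 1) * size, m * size},
                    {(n + 1) * size, (m + 1) * size}, {n * size, (m + 1) * size}};
                if (overlaps(rect, cell, axes)) cells.add(new int[]{n, m});
            }
        return cells;
    }

    static boolean overlaps(double[][] a, double[][] b, double[][] axes) {
        for (double[] axis : axes) {
            double[] pa = project(a, axis), pb = project(b, axis);
            if (pa[1] <= pb[0] || pb[1] <= pa[0]) return false; // separating axis found
        }
        return true;
    }

    static double[] project(double[][] pts, double[] axis) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double[] p : pts) {
            double d = p[0] * axis[0] + p[1] * axis[1];
            min = Math.min(min, d); max = Math.max(max, d);
        }
        return new double[]{min, max};
    }
}
```

The `<=` comparison treats cells that merely touch the rectangle's edge as non-colliding; change it to `<` if touching should count.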

Android, calling glRotatef based on geomagnetic sensor

I have an app that renders a cube. I'm kind of new to using OpenGL for 3D stuff, but essentially what I want is for a corner of my cube to point north at all times, while also orienting itself according to the geomagnetic sensor.
That way, when the user has the phone parallel to the ground and faces north, the corner will point "up" on the screen, whereas if the user has the phone upright, the corner will point "away" from the user, toward the back of the phone.
I had no problem writing this in 2d on only one rotational axis so that it would point the corner north if the phone was parallel to the ground.
However, when I made this 3D, two of the axes seem to be working fine, but the axis I worked with the first time doesn't seem to behave the same way.
I use the following code to get the rotation for each:
gl.glPushMatrix();
gl.glTranslatef(0, 0, -4);
// get target angles
targetAngle1 = rotationHandler.getRotation1();
targetAngle2 = rotationHandler.getRotation2();
targetAngle3 = rotationHandler.getRotation3();
// 5 degree "dead zone" so the compass isn't shaky
if (Math.abs(getShortestAngle(targetAngle1, currentAngle1)) > 5)
    // increase or decrease the current angle to move it towards the target angle
    currentAngle1 = (getShortestAngle(currentAngle1, targetAngle1) > 0) ?
            currentAngle1 + 1f : currentAngle1 - 1f;
if (Math.abs(getShortestAngle(targetAngle2, currentAngle2)) > 5)
    currentAngle2 = (getShortestAngle(currentAngle2, targetAngle2) > 0) ?
            currentAngle2 + 1f : currentAngle2 - 1f;
if (Math.abs(getShortestAngle(targetAngle3, currentAngle3)) > 5)
    currentAngle3 = (getShortestAngle(currentAngle3, targetAngle3) > 0) ?
            currentAngle3 + 1f : currentAngle3 - 1f;
gl.glRotatef(currentAngle1, 0, 0, -4);
gl.glRotatef(currentAngle2, 0, -4, 0);
gl.glRotatef(currentAngle3, -4, 0, 0);
cube.draw(gl);
gl.glPopMatrix();
The calls to glRotatef that use currentAngle2 and currentAngle3 seem to rotate the cube on an axis relative to the cube, while the first call seems to rotate it on an axis relative to the screen. When I comment out any two of the rotation calls, the third works as intended. But I can't seem to figure out how to get them to work together.
---EDIT---
I found that I could get the cube to rotate to almost any position possible even after taking away the first call. So it seems like I'm going to have to come up with an algorithm that will calculate the rotation as appropriate. I honestly don't know if I can do it but I'll post it here if I figure it out.
This might not be the best solution, but it seems to work as long as the phone isn't completely flat.
After changing the model to a rectangular prism, I could approach the problem a little more clearly. I used only the first two axes and didn't even touch the last axis. I treated the prism like a fighter jet: in order to turn it left or right, I rotated it onto its side, then altered the pitch up or down. Then I "un-rotated" it and altered the pitch again to point it up or down as appropriate.
public void findTargetRotation(float rotationAngle, float pitchAngle, GL10 gl) {
    // first, enable rotation around a vertical axis by rotating on the frontBack axis by -90 degrees
    gl.glRotatef(-90, 0, -4, 0);
    // rotate around the leftRight axis by the angle1 amount
    gl.glRotatef(rotationAngle, 0, -4, 0);
    // then rotate on the frontBack axis by 90 degrees again, to straighten out
    gl.glRotatef(90, 0, -4, 0);
    // lastly, rotate around the leftRight axis to point it up or down
    gl.glRotatef(pitchAngle, -4, 0, 0);
}
This seems to work as long as pitchAngle isn't extremely close to or equal to 0. So I just added a little more code to treat it like a 2D object if the pitchAngle is really small.
There's probably a better solution but at least this works.

What is Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7); in OpenGL ES 20?

How do the parameters work for it, and what exactly does mProjMatrix get from the method?
Also, why is `float[] mProjMatrix = new float[16];` declared with 16? Could I have used another number instead?
float[] mProjMatrix = new float[16];
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
A view frustum is just a visual representation of the perspective projection that is used to convert a 3D point in world coordinate space to a 2D point on the screen.
There are multiple ways to define the projection matrix (at least that I have used personally):
By specifying the 6 clip planes
By specifying the aspect ratio, far and near clipping planes, and field of view angle
But in the end they all end up as a single 4x4 perspective transform matrix.
Here is a must read article.
Near/far are the clipping distances from the viewpoint.
The question is what the left/right and top/bottom values are; as far as I understand, the OpenGL docs mean the values on the near plane. So you have to calculate these to be smaller if you want to define the width/height of your view at the point you look at (which is farther away than the near plane). Does it work like this on ES 2.0, too?
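For illustration, this plain-Java sketch builds the same column-major 4x4 matrix that Matrix.frustumM fills in (following the standard OpenGL glFrustum formula). It also shows why the array must have exactly 16 elements, and makes explicit that left/right/bottom/top are measured on the near plane:

```java
public class Frustum {
    // Standard OpenGL perspective frustum matrix, column-major.
    // l/r/b/t are the extents of the view volume ON THE NEAR PLANE;
    // n/f are the near and far clipping distances.
    static float[] frustum(float l, float r, float b, float t, float n, float f) {
        float[] m = new float[16]; // 16 floats because the result is a 4x4 matrix
        m[0]  = 2 * n / (r - l);           // x scale
        m[5]  = 2 * n / (t - b);           // y scale
        m[8]  = (r + l) / (r - l);         // x offset for asymmetric frustums
        m[9]  = (t + b) / (t - b);         // y offset for asymmetric frustums
        m[10] = -(f + n) / (f - n);        // depth remapping
        m[11] = -1;                        // puts -z into w: the perspective divide
        m[14] = -2 * f * n / (f - n);
        return m;
    }
}
```

So no, another array length would not work: frustumM writes a 4x4 matrix, and every OpenGL matrix call expects those 16 floats.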

Blending problems using opengl (via lwjgl) when using a png texture

I have a (hopefully) small problem when using blending in OpenGL.
Currently I use LWJGL and Slick-Util to load the Texture.
The texture itself is a 2048x2048 PNG graphic in which I store tiles of size 128x128, so that I have 16 sprites per row/column.
Since glTexCoord2f() uses normalized coordinates, I wrote a little function to scale the whole image so that only the sprite I want is shown.
It looks like this:
private Rectangle4d getTexCoord(int x, int y) {
    double row = 0, col = 0;
    if (x > 0)
        row = x / 16d;
    if (y > 0)
        col = y / 16d;
    return new Rectangle4d(row, col, 1d / 16d, 1d / 16d);
}
(Rectangle4d is just a type to store x, y, width and height as double coords)
Now the problem is: once I use these coords, the sprite displays correctly and the transparency works correctly too, but everything else becomes significantly darker (or more correctly, it becomes transparent, I guess, but it looks darker since the ClearColor is black). The sprite itself, however, is drawn correctly. I already tried changing all the glColor3d(..) calls to glColor4d(..) and setting alpha to 1d, but that didn't change anything. The sprite is currently the only image; everything else is just colored quads.
Here is how I initialised OpenGL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And here is how I load the sprite (using slick-util):
texture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/sprites.png"));
And finally I render it like this (using the helper function getTexCoord() mentioned at the top):
texture.bind();
glColor4d(1, 1, 1, 1);
glBegin(GL_QUADS);
{
    Rectangle4d texCoord = getTexCoord(0, 0);
    glTexCoord2f((float) texCoord.getX(), (float) texCoord.getY());
    glVertex2i((Game.WIDTH - PLAYERSIZE) / 2, (Game.HEIGHT - PLAYERSIZE) / 2);
    glTexCoord2f((float) texCoord.getX() + (float) texCoord.getWidth(), (float) texCoord.getY());
    glVertex2i((Game.WIDTH + PLAYERSIZE) / 2, (Game.HEIGHT - PLAYERSIZE) / 2);
    glTexCoord2f((float) texCoord.getX() + (float) texCoord.getWidth(), (float) texCoord.getY() + (float) texCoord.getHeight());
    glVertex2i((Game.WIDTH + PLAYERSIZE) / 2, (Game.HEIGHT + PLAYERSIZE) / 2);
    glTexCoord2f((float) texCoord.getX(), (float) texCoord.getY() + (float) texCoord.getHeight());
    glVertex2i((Game.WIDTH - PLAYERSIZE) / 2, (Game.HEIGHT + PLAYERSIZE) / 2);
}
glEnd();
The result is this (sprite is drawn correctly, just everything else is darker/transparent):
Without the texture (just a gray quad), it looks like this (now everything is correctly drawn except I don't have a sprite):
Thanks for everyone who bothers to read this at all!
Edit:
Some additional info, from my attempts to find the problem:
This is how it looks when I set the ClearColor to white (using glClearColor(1, 1, 1, 1) ):
Another thing I tried was enabling blending just before drawing the player and disabling it again right after:
It's a bit better now, but it's still noticeably darker. In this case it really seems to be "darker", not "more transparent", because it looks the same when I use white as the clear color (while still only enabling blending when needed and disabling it right after), as seen here:
I read some related questions and eventually found the solution (Link). Apparently I can't/shouldn't have GL_TEXTURE_2D enabled all the time when I want to render textureless objects (colored quads in this case)!
So now, I enable it only before I render the sprite and then disable it again once the sprite is drawn. It works perfectly now! :)

Starting with OpenGL ES. Drawing using pixels

I'm starting to learn OpenGL ES on Android (GL10) using Java, and I followed some tutorials to draw squares, triangles, etc.
Now I'm starting to draw some ideas I have, but I'm really confused by the drawing vertices of the screen. When I draw something using OpenGL ES, I have to specify the part of the screen I want to draw to, and the same for the texture...
So I started making some tests, and I drew a fullscreen texture with these vertices:
(-1, -1, //top left
-1, 1, //bottom left
1, -1, //top right
1, 1); //bottom right
Why is this fullscreen? Isn't the origin of OpenGL coordinates at the top left (0, 0)? Why is the draw correct with those vertices? It seems that the center is really the center of the screen, and the width and height go from -1 to 1, but I don't really understand it because I thought the origin was at the top left...
Another question: I read a lot of C++ code where they draw using pixels. Drawing in pixels seems really necessary in video games because you need the exact positions of things, and with -1...1 I can't be really precise. How can I use pixels instead of -1...1?
Thanks, and sorry about my poor English.
Why is this fullscreen? Isn't the origin of OpenGL coordinates at the top left (0, 0)? Why is the draw correct with those vertices? It seems that the center is really the center of the screen, and the width and height go from -1 to 1, but I don't really understand it because I thought the origin was at the top left...
There are three things coming together: the so-called viewport, the so-called normalized device coordinates (NDC), and the projection from model space to eye space to clip space to NDC space.
The viewport selects the portion of your window into which the NDC range [-1…1]×[-1…1] is mapped. The function signature is glViewport(x, y, width, height). OpenGL assumes a coordinate system with rising NDC x-coordinates going to the right and rising NDC y-coordinates going up.
So if you call glViewport(0, 0, window_width, window_height), which is also the default after an OpenGL context is bound to a window for the first time, the NDC coordinate (-1, -1) will be in the lower left and the NDC coordinate (1, 1) in the upper right corner of the window.
OpenGL starts with all transformations set to identity, which means that the vertex coordinates you pass in go straight through to NDC space and are interpreted as such. However, most of the time in OpenGL you're applying two successive transformations: modelview and projection.
The modelview transformation is used to move the world around in front of the stationary eye/camera (which is always located at (0,0,0)). Placing a camera just means adding an additional transformation of the whole world (the view transformation) that is the exact opposite of how you'd place the camera in the world. Fixed-function OpenGL calls this the MODELVIEW matrix, accessed when the matrix mode has been set to GL_MODELVIEW.
The projection transformation is kind of the lens of OpenGL. You use it to set whether it's a wide or narrow angle (in the case of perspective), or the edges of a cuboid (ortho projection), or even something different. Fixed-function OpenGL calls this the PROJECTION matrix, accessed when the matrix mode has been set to GL_PROJECTION.
After the projection, primitives are clipped, and then the so-called homogeneous divide is applied, which creates the actual perspective effect if a perspective projection has been used.
At this point vertices have been transformed into NDC space, which then gets mapped to the viewport as explained in the beginning.
Regarding your problem: What you want is a projection that maps vertex coordinates 1:1 to viewport pixels. Easy enough:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if( origin_lower_left ) {
    glOrtho(0, width, 0, height, -1, 1);
} else {
    glOrtho(0, width, height, 0, -1, 1);
}
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Now vertex coordinates map to viewport pixels.
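The NDC-to-pixel mapping described at the beginning can also be written out explicitly; a minimal sketch of the viewport transform (names are illustrative):

```java
public class ViewportMap {
    // Maps an NDC coordinate in [-1..1] x [-1..1] to window pixels for a
    // viewport set with glViewport(x0, y0, width, height):
    //   x_win = x0 + (x_ndc + 1) / 2 * width   (and the same for y)
    static double[] ndcToWindow(double xNdc, double yNdc,
                                int x0, int y0, int width, int height) {
        return new double[]{
            x0 + (xNdc + 1) / 2.0 * width,
            y0 + (yNdc + 1) / 2.0 * height
        };
    }
}
```

With the default full-window viewport this is exactly why (-1, -1) lands on the lower-left pixel and (1, 1) on the upper-right.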
Update: Drawing a full viewport textured quad by triangles:
OpenGL-2.1 and OpenGL ES-1
void fullscreenquad(int width, int height, GLuint texture)
{
    GLfloat vtcs[] = {
        0, 0,
        1, 0,
        1, 1,
        0, 1
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vtcs);
    glTexCoordPointer(2, GL_FLOAT, 0, vtcs);

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
By default the OpenGL camera is placed at the origin, pointing down the negative z axis.
Whatever is projected onto the camera's near plane is what is seen on your screen.
Thus the center of the screen corresponds to (0, 0).
Depending on what you want to draw, you have the option of using the GL_POINTS, GL_LINES and GL_TRIANGLES primitives for drawing.
You can use GL_POINTS to color some pixels on the screen, but in case you need to draw objects/meshes such as a teapot, cube, etc., you should go for triangles.
You need to read a bit more to get things clearer. In OpenGL, you specify a viewport. This viewport is your view of the OpenGL space: if you set it so that the center of the view is in the middle of your screen and the view extends from -1 to 1, then this is your full view of the OpenGL perspective. Do not mix that up with the screen coordinates in Android (those have their origin at the top left corner, as you mentioned). You need to translate between these coordinate systems (for touch events, for example) to match one to the other.
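A minimal sketch of that translation, converting an Android screen coordinate (origin top left, y pointing down) into NDC (origin at the center, y pointing up); names are illustrative:

```java
public class TouchToNdc {
    // Maps an Android touch position in pixels to OpenGL normalized
    // device coordinates in [-1..1], flipping the y axis.
    static float[] touchToNdc(float px, float py, int screenW, int screenH) {
        return new float[]{
            2f * px / screenW - 1f,
            1f - 2f * py / screenH
        };
    }
}
```

The inverse mapping (NDC back to pixels) is just the same arithmetic solved for px and py.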
