Starting with OpenGL ES: drawing using pixels - Java

I'm starting to learn OpenGL ES on Android (GL10) using Java, and I followed some tutorials to draw squares, triangles, etc.
Now I'm starting to draw some ideas of my own, but I'm really confused about screen vertices. When I draw something using OpenGL ES, I have to specify the part of the screen I want to draw to, and the same for the texture...
So I ran some tests and drew a fullscreen texture with these vertices:
float[] vertices = {
    -1, -1,  // top left
    -1,  1,  // bottom left
     1, -1,  // top right
     1,  1   // bottom right
};
Why is this fullscreen? Isn't the origin of OpenGL coordinates at the top left (0, 0)? Why is the drawing correct with those vertices? It seems that the origin is really at the center of the screen, with width and height running from -1 to 1, but I don't really understand it because I thought the origin was at the top left...
Another question: I've read a lot of C++ code that draws using pixels. Drawing in pixels seems really necessary in video games, because they need the exact positions of things, and with -1...1 I can't be that precise. How can I use pixels instead of -1...1?
Thanks, and sorry about my poor English.

Why is this fullscreen? Isn't the origin of OpenGL coordinates at the top left (0, 0)? It seems that the origin is really at the center of the screen, with width and height running from -1 to 1...
There are three things coming together: the so-called viewport, the so-called normalized device coordinates (NDC), and the projection from model space to eye space to clip space to NDC space.
The viewport selects the portion of your window into which the NDC range [-1…1]×[-1…1] is mapped. The function signature is glViewport(x, y, width, height). OpenGL assumes a coordinate system in which rising NDC x-coordinates go to the right and rising NDC y-coordinates go up.
So if you call glViewport(0, 0, window_width, window_height), which is also the default after an OpenGL context is bound to a window for the first time, the NDC coordinate (-1, -1) will be in the lower-left corner and the NDC coordinate (1, 1) in the upper-right corner of the window.
OpenGL starts with all transformations set to identity, which means that the vertex coordinates you pass go straight through to NDC space and are interpreted as such. However, most of the time in OpenGL you're applying two successive transformations: modelview and projection.
The modelview transformation is used to move the world around in front of the stationary eye/camera (which is always located at (0, 0, 0)). Placing a camera just means adding an additional transformation of the whole world (the view transformation) that is the exact opposite of how you'd place the camera in the world. Fixed-function OpenGL calls this the MODELVIEW matrix, accessed when the matrix mode has been set to GL_MODELVIEW.
The projection transformation is kind of like the lens of OpenGL. You use it to set whether it's a wide or narrow angle (in the case of perspective), or the edges of a cuboid (ortho projection), or even something different. Fixed-function OpenGL calls this the PROJECTION matrix, accessed when the matrix mode has been set to GL_PROJECTION.
After the projection, primitives are clipped, and then the so-called homogeneous divide is applied, which creates the actual perspective effect if a perspective projection has been used.
At this point vertices have been transformed into NDC space, which then gets mapped to the viewport as explained in the beginning.
Regarding your problem: What you want is a projection that maps vertex coordinates 1:1 to viewport pixels. Easy enough:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if (origin_lower_left) {
    glOrtho(0, width, 0, height, -1, 1);   /* bottom = 0, top = height */
} else {
    glOrtho(0, width, height, 0, -1, 1);   /* top = 0, like window coordinates */
}
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Now vertex coordinates map to viewport pixels.
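Since the question is about Android's GL10 bindings, here is a minimal sketch of the same setup in Java; the gl handle, width, and height are assumed to come from the surrounding GLSurfaceView.Renderer callbacks:

    // A sketch assuming GL10 from javax.microedition.khronos.opengles.
    // After this, one unit in vertex coordinates equals one viewport pixel,
    // with the origin at the top left (matching Android screen coordinates).
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0, width, height, 0, -1, 1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();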
Update: drawing a full-viewport textured quad using triangles:
OpenGL-2.1 and OpenGL ES-1
void fullscreenquad(int width, int height, GLuint texture)
{
    GLfloat vtcs[] = {
        0, 0,
        1, 0,
        1, 1,
        0, 1
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vtcs);
    glTexCoordPointer(2, GL_FLOAT, 0, vtcs);

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
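For the Android/Java side of the question, a hedged GL10 translation of the same function (a sketch only; client-side arrays in GL10 must be passed as direct NIO buffers, and the texture id is assumed to have been created elsewhere):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import javax.microedition.khronos.opengles.GL10;

    void fullscreenQuad(GL10 gl, int width, int height, int texture) {
        // The same coordinates serve as both vertices and texture coordinates.
        float[] vtcs = { 0, 0, 1, 0, 1, 1, 0, 1 };
        FloatBuffer buf = ByteBuffer.allocateDirect(vtcs.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        buf.put(vtcs).position(0);

        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, buf);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, buf);

        gl.glViewport(0, 0, width, height);
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(0, 1, 0, 1, -1, 1);    // ES 1.x uses glOrthof
        gl.glMatrixMode(GL10.GL_MODELVIEW);

        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, texture);
        gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, 4);
    }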

By default, the OpenGL camera is placed at the origin, pointing down the negative z-axis.
Whatever is projected onto the camera's near plane is what is seen on your screen.
Thus the center of the screen corresponds to (0, 0).
Depending on what you want to draw, you have the option of using GL_POINTS, GL_LINES, and GL_TRIANGLES.
You can use GL_POINTS to color some pixels on the screen, but if you need to draw objects/meshes such as a teapot, a cube, etc., then you should go with triangles.

You need to read a bit more to make this clearer. In OpenGL, you specify a viewport. This viewport is your view into OpenGL space; by default its origin is in the middle of the screen and the view extends from -1 to 1 in each direction. Do not mix that up with screen coordinates in Android (those have their origin at the top-left corner, as you mentioned). You need to translate between the two coordinate systems (for touch events, for example) so they match up.
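A minimal sketch of that translation (the variable names are illustrative, assuming the view's size is known):

    // Convert an Android touch position (origin at the top left, y down)
    // to NDC (origin at the center, y up, range -1..1 on both axes).
    float ndcX = 2f * touchX / viewWidth - 1f;
    float ndcY = 1f - 2f * touchY / viewHeight;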

Related

In OpenGL, how do I make it so that my skybox does not cover any of my entities?

I am working on an OpenGL game in Java with LWJGL (following ThinMatrix's tutorials at the moment), and I just added my skybox. As you can see from the picture, however, it is clipping through the trees and covering everything behind a certain point.
Here is my rendering code for the skybox:
public void render(Camera camera, float r, float g, float b) {
    shader.start();
    shader.loadViewMatrix(camera);
    shader.loadFogColor(r, g, b);
    GL30.glBindVertexArray(cube.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    bindTextures();
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, cube.getVertexCount());
    GL30.glBindVertexArray(0);
    shader.stop();
}

private void bindTextures() {
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, texture);
    GL13.glActiveTexture(GL13.GL_TEXTURE1);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, nightTexture);
    shader.loadBlendFactor(getBlendFactor());
}
Also, if it is needed, here is the code for my master renderer:
public void render(List<Light> lights, Camera camera) {
    prepare();
    shader.start();
    shader.loadSkyColor(RED, GREEN, BLUE);
    shader.loadLights(lights);
    shader.loadViewMatrix(camera);
    renderer.render(entities);
    shader.stop();
    terrainShader.start();
    terrainShader.loadSkyColor(RED, GREEN, BLUE);
    terrainShader.loadLight(lights);
    terrainShader.loadViewMatrix(camera);
    terrainRenderer.render(terrains);
    terrainShader.stop();
    skyboxRenderer.render(camera, RED, GREEN, BLUE);
    terrains.clear();
    entities.clear();
}
There are two things you can do:
If you draw your skybox first, you can disable the depth test with glDisable(GL_DEPTH_TEST) or depth writes with glDepthMask(false). This prevents your skybox from writing depth values, so the skybox will never end up in front of anything drawn later (see the sketch below).
If you draw your skybox last, you can make it literally infinitely big by using vertex coordinates with a w-coordinate of 0. A vertex (x, y, z, 0) means a vertex infinitely far away in the direction of the vector (x, y, z). To prevent clipping, you have to enable depth clamping with glEnable(GL_DEPTH_CLAMP); this keeps OpenGL from clipping away your skybox faces, and you can be sure the skybox is always at the maximum distance and will never hide anything drawn earlier.
The advantage of the second method lies in the depth test: because depth values have already been written for your scene, the OpenGL pipeline can skip the fragment computations for skybox pixels that are already covered by the scene. But the fragment shader for a skybox is usually very trivial, so it shouldn't make much of a difference.
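For the first option, a minimal sketch in the LWJGL style used above (assuming the skybox call is moved to the front of the master renderer's render method):

    GL11.glDepthMask(false);                         // don't write skybox depth values
    skyboxRenderer.render(camera, RED, GREEN, BLUE); // skybox drawn first
    GL11.glDepthMask(true);                          // re-enable depth writes for the scene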
I am not familiar with LWJGL; are you allowed to write shaders? In plain OpenGL you don't have to worry about the size of the skybox cube; it can be {1.0, 1.0, 1.0} if you like. What you need to do is first place your camera at {0.0, 0.0, 0.0} and then make the skybox fail the depth test against everything in your scene. You can achieve that by making the skybox's z value in normalized device coordinates 1.0.
Do this in your vertex shader:
gl_Position = (mvp_mat * vec4(xyz, 1.0)).xyww;
After the perspective divide by w, z will be w / w, i.e. 1.0.
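One caveat worth adding (an assumption about the depth state, not stated in the answer above): with NDC z exactly 1.0, the default GL_LESS comparison rejects the skybox even against a freshly cleared depth buffer (which is cleared to 1.0), so the test should be relaxed while the skybox is drawn:

    GL11.glDepthFunc(GL11.GL_LEQUAL);  // let fragments at the far plane pass
    skyboxRenderer.render(camera, RED, GREEN, BLUE);
    GL11.glDepthFunc(GL11.GL_LESS);    // restore the default for the scene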
You might want to check out How can I increase distance (zfar/gluPerspective) where OpenGL stops drawing objects?
The problem in that instance was that the skybox itself was too small and intersected the geometry.
I also see that you're rendering your terrain first and then your skybox. I would try flipping that order: draw the skybox first, then the terrain.
First, you should remove the skybox and render the scene again to check whether it is the skybox that clips the trees.
If it is the skybox, simply scale it up so that it contains all the objects in the terrain.
If not, it is likely a problem with the camera (as Hanston said). You need to set the far clipping plane at least behind the skybox; that is, it should be larger than the diameter of your skybox.
If you want to scale the skybox or any other object, use the transformation matrix. The game engine uses a 4x4 matrix to control the size, location, and rotation of the model. You can see an example in TerrainRenderer.java, in the loadModelMatrix function: it creates a transformation matrix and uploads it to the shader. You should do the same thing, but change the scale parameter to what you want.
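A hedged sketch of that, assuming LWJGL 2's org.lwjgl.util.vector.Matrix4f as used in ThinMatrix's tutorials; SKYBOX_SIZE and loadTransformationMatrix are illustrative names, not from the original code:

    import org.lwjgl.util.vector.Matrix4f;
    import org.lwjgl.util.vector.Vector3f;

    // Build a transformation matrix that scales a unit skybox cube up.
    Matrix4f matrix = new Matrix4f(); // starts out as the identity matrix
    Matrix4f.scale(new Vector3f(SKYBOX_SIZE, SKYBOX_SIZE, SKYBOX_SIZE), matrix, matrix);
    shader.loadTransformationMatrix(matrix); // hypothetical helper, mirroring loadViewMatrix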

Blending problems using OpenGL (via LWJGL) when using a PNG texture

I have a (hopefully) small problem when using blending in OpenGL.
Currently I use LWJGL and Slick-Util to load the texture.
The texture itself is a 2048x2048 PNG graphic in which I store 128x128 tiles, so I have 16 sprites per row/column.
Since glTexCoord2f() uses normalized coordinates, I wrote a little function to scale the whole image so that only the sprite I want is shown.
It looks like this:
private Rectangle4d getTexCoord(int x, int y) {
    double row = 0, col = 0;
    if (x > 0)
        row = x / 16d;
    if (y > 0)
        col = y / 16d;
    return new Rectangle4d(row, col, 1d / 16d, 1d / 16d);
}
(Rectangle4d is just a type to store x, y, width and height as double coords)
Now the problem is: once I use these coords, the sprite displays correctly and the transparency works correctly too, but everything else becomes significantly darker (more precisely, it becomes transparent I guess, and since the clear color is black it shows as darker). The sprite itself, however, is drawn correctly. I already tried changing all the glColor3d(..) calls to glColor4d(..) and setting alpha to 1d, but that didn't change anything. The sprite is currently the only image; everything else is just colored quads.
Here is how I initialised OpenGL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And here is how I load the sprite (using slick-util):
texture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/sprites.png"));
And finally I render it like this (using the helper function getTexCoord() mentioned at the top):
texture.bind();
glColor4d(1, 1, 1, 1);
glBegin(GL_QUADS);
{
    Rectangle4d texCoord = getTexCoord(0, 0);
    glTexCoord2f((float) texCoord.getX(), (float) texCoord.getY());
    glVertex2i((Game.WIDTH - PLAYERSIZE) / 2, (Game.HEIGHT - PLAYERSIZE) / 2);
    glTexCoord2f((float) texCoord.getX() + (float) texCoord.getWidth(), (float) texCoord.getY());
    glVertex2i((Game.WIDTH + PLAYERSIZE) / 2, (Game.HEIGHT - PLAYERSIZE) / 2);
    glTexCoord2f((float) texCoord.getX() + (float) texCoord.getWidth(), (float) texCoord.getY() + (float) texCoord.getHeight());
    glVertex2i((Game.WIDTH + PLAYERSIZE) / 2, (Game.HEIGHT + PLAYERSIZE) / 2);
    glTexCoord2f((float) texCoord.getX(), (float) texCoord.getY() + (float) texCoord.getHeight());
    glVertex2i((Game.WIDTH - PLAYERSIZE) / 2, (Game.HEIGHT + PLAYERSIZE) / 2);
}
glEnd();
The result is this (the sprite is drawn correctly; everything else is darker/transparent):
Without the texture (just a gray quad), it looks like this (now everything is drawn correctly, except I don't have a sprite):
Thanks to everyone who bothers to read this at all!
Edit:
Some additional info from my attempts to find the problem:
This is how it looks when I set the clear color to white (using glClearColor(1, 1, 1, 1)):
Another thing I tried was enabling blending just before I draw the player and disabling it again right after I finish drawing:
It's a bit better now, but it's still noticeably darker. In this case it really seems to be "darker", not "more transparent", because it looks the same when I use white as the clear color (while still only enabling blending when needed and disabling it right after), as seen here:
I read some related questions and eventually found the solution (link): apparently I can't/shouldn't keep GL_TEXTURE_2D enabled all the time when I want to render textureless objects (colored quads in this case)!
So now I enable it only before I render the sprite and disable it again once the sprite is drawn. It works perfectly now! :)
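A minimal sketch of that fix (drawColoredQuads and drawSprite are hypothetical helpers standing in for the colored quads and the glBegin/glEnd block above):

    // Texturing stays off for plain colored quads...
    drawColoredQuads();

    // ...and is enabled only around textured draws.
    glEnable(GL_TEXTURE_2D);
    texture.bind();
    drawSprite();
    glDisable(GL_TEXTURE_2D);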

Render ellipse using libgdx

I am attempting to render an ellipse using ShapeRenderer, and have come up with the following partial solution:
void drawEllipse(float x, float y, float width, float height) {
    float r = (width / 2);
    ShapeRenderer renderer = new ShapeRenderer();
    renderer.setProjectionMatrix(/* camera matrix */);
    renderer.begin(ShapeType.Circle);
    renderer.scale(1f, (height / width), 1f);
    renderer.circle(x + r, y, r);
    renderer.identity();
    renderer.end();
}
This draws an ellipse at the specified coordinates with the correct width and height; however, it appears that the scale transformation causes the circle to be translated in the viewport, and I have not been successful in determining the mathematics behind that translation. I am using an orthographic projection with y-up, where the coordinates map to pixels on the screen. I am not very familiar with OpenGL.
How can I draw an ellipse using libgdx, and have it draw the ellipse at the exact coordinates I specify? Ideally, that would mean that the origin of the ellipse is located in the top-left corner, if the ellipse was contained in a rectangle.
The new libgdx ShapeRenderer API (current nightlies; whatever release comes after v0.9.8) contains an ellipse drawing method, so you can ignore the rest of this answer (a sketch of it is at the end). The ShapeRenderer API has changed in other ways too, though (e.g., the ShapeType is just Filled, Line, or Point now).
For folks stuck with the older API, you should be able to work around the distortion by making sure the scaling happens around the origin. This is standard OpenGL practice (so it's a bit obtuse, but they're following OpenGL's lead); see "OpenGL order of matrix transformations" and "OpenGL: scale then translate? and how?". Even better (again standard OpenGL practice), you end up listing the operations in the reverse order you want them to happen in: to make a circle, distort it into an ellipse, and then move it to a specific destination, you actually write code like:
renderer.begin(ShapeType.Circle);
renderer.translate(x, y, 0);
renderer.scale(1f, (height/width), 1f);
renderer.circle(0, 0, r);
renderer.end();
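For reference, a sketch against the newer API mentioned at the top of this answer (assuming ShapeRenderer.ellipse, where x and y are the bottom-left corner of the ellipse's bounding rectangle):

    renderer.begin(ShapeType.Line);        // new-style ShapeType: Filled, Line, or Point
    renderer.ellipse(x, y, width, height); // no manual scale/translate needed
    renderer.end();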

Stretch 2D plane to 3D cube

I'm working on a Java game that has both a 2D game panel and a "pseudo"-3D one.
It isn't real 3D; it's merely some 2D planes placed in a 3D environment (no custom-created models).
I'm using jPCT as the render engine, and I'm currently looking into getting the walls rendered.
In the 2D view, they look like this:
In the 3D view, I'm trying to get them to look like this:
This works by stacking 10 planes on top of each other, which gives the illusion of a 3D wall.
The code to get this is:
Object3D obj = new Object3D(20);
for (int y = 0; y < 10; y++) {
    float fY = y / -10f;
    obj.addTriangle(new SimpleVector(-1, fY, 1), 0, 0,
                    new SimpleVector(-1, fY, -1), 0, 1,
                    new SimpleVector(1, fY, -1), 1, 1,
                    textureManager.getTextureID(topTexture));
    obj.addTriangle(new SimpleVector(1, fY, -1), 1, 1,
                    new SimpleVector(1, fY, 1), 1, 0,
                    new SimpleVector(-1, fY, 1), 0, 0,
                    textureManager.getTextureID(topTexture));
}
The problem is that when looking straight at it, you get this effect:
This can be reduced by increasing the number of planes and putting them closer together, but I'm looking for a more efficient way of getting the same effect.
I was thinking of rendering a cube with the 2D image as the top texture, using the last lines of pixels as textures for the sides, e.g. extracting a 40x1 image at (0, 0) and at (0, 39) from the base image and stretching these across the sides of the cube (the original images are 40x40).
This won't work perfectly, though, because the visible part of these images is smaller than 40x40 (e.g. the top and bottom 40x9 pixels are transparent for a horizontal wall), so I should do some edge detection and start cutting there.
Any better suggestions for achieving the same effect?
The simplest, but likely least performant, solution: for any pixel in your image that is next to both a transparent pixel and a non-transparent pixel, render a tall rectangular cube for just that pixel, as sketched below.
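A minimal sketch of that edge test, assuming the tile is available as a java.awt.image.BufferedImage with an alpha channel (the method name is illustrative):

    import java.awt.image.BufferedImage;

    // True for opaque pixels that border at least one transparent pixel;
    // those are the pixels that would each get their own tall cube.
    static boolean isEdgePixel(BufferedImage img, int x, int y) {
        if ((img.getRGB(x, y) >>> 24) == 0) {
            return false; // transparent pixels never produce geometry
        }
        int[][] neighbors = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        for (int[] d : neighbors) {
            int nx = x + d[0], ny = y + d[1];
            if (nx < 0 || ny < 0 || nx >= img.getWidth() || ny >= img.getHeight()) {
                continue; // treat the image border as opaque
            }
            if ((img.getRGB(nx, ny) >>> 24) == 0) {
                return true; // opaque pixel touching a transparent one
            }
        }
        return false;
    }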

Texturing Vertex Buffer Objects

What I want to do is draw a (large) terrain with OpenGL. So I have a set of vertices, let's say 256 x 256, which I store in a vertex buffer object in VRAM. I properly triangulated them, so I've got an index buffer for the faces.
// vertexes
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertexBufferId);
glVertexPointer(3, GL_FLOAT, 0, 0);
// textures
glBindBufferARB(GL_ARRAY_BUFFER_ARB, texCoordBufferId);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
// indexes
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indexBufferId);
// draw it
glDrawRangeElements(GL_TRIANGLES, 0, size - 1, size, GL_UNSIGNED_INT, 0);
I also loaded a square texture which has to be applied to each triangle. So I've got a problem with the texture coords:
Each vertex is included in four triangles, which suggests it would need four texture coords. But glDrawRangeElements() requires exactly as many texture coords as vertices.
So I don't see how to do this with VBOs. Maybe there is a better concept for solving my problem, or I'm just lacking a good idea.
Thanks in advance.
If your texture should repeat (or mirror) itself in each quad, the best way is to use texture coordinates that match each vertex's (x, y) grid position in your array. E.g. for the first line of vertices, use these texture coordinates: (0.0, 0.0), (1.0, 0.0), (2.0, 0.0) ... (255.0, 0.0).
As you probably want your texture to tile seamlessly, all you need to do is compute the proper texture coordinate for each vertex and pass them just like the vertices.
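A minimal sketch of generating those coordinates in Java (the names are illustrative; the texture's wrap mode must be GL_REPEAT, or GL_MIRRORED_REPEAT for mirroring, so that coordinates above 1.0 tile):

    // One (u, v) pair per vertex of the 256 x 256 grid, equal to the vertex's
    // grid position, so the texture repeats once per grid cell.
    int n = 256;
    float[] texCoords = new float[n * n * 2];
    int i = 0;
    for (int y = 0; y < n; y++) {
        for (int x = 0; x < n; x++) {
            texCoords[i++] = x; // u
            texCoords[i++] = y; // v
        }
    }
    // Upload texCoords into texCoordBufferId the same way as the vertex data.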
