I have a (hopefully) small problem when using blending in OpenGL.
Currently I use LWJGL and Slick-Util to load the Texture.
The texture itself is a 2048x2048 png graphic, in which I store tiles of a size of 128x128 so that I have 16 sprites per row/column.
Since glTexCoord2f() uses normalized coordinates, I wrote a little function to scale the whole image so that only the sprite I want is shown.
It looks like this:
private Rectangle4d getTexCoord(int x, int y) {
    double row = 0, col = 0;
    if (x > 0)
        row = x / 16d;
    if (y > 0)
        col = y / 16d;
    return new Rectangle4d(row, col, 1d / 16d, 1d / 16d);
}
(Rectangle4d is just a type to store x, y, width and height as double coords)
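As a sanity check, the mapping can be exercised without any GL context (note that the if(x > 0) guards above are redundant, since 0/16d is already 0). A sketch, with a minimal stand-in for the Rectangle4d type:

```java
// A sanity check of the tile -> texture coordinate mapping, runnable
// without any GL context. Rectangle4d here is a minimal stand-in for
// the class described above (it just stores four doubles).
public class TexCoordDemo {
    static class Rectangle4d {
        final double x, y, width, height;
        Rectangle4d(double x, double y, double width, double height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
    }

    // Same mapping as getTexCoord(): tile (x, y) of the 16x16 grid maps
    // to a 1/16 x 1/16 region of normalized texture space.
    static Rectangle4d getTexCoord(int x, int y) {
        return new Rectangle4d(x / 16d, y / 16d, 1d / 16d, 1d / 16d);
    }

    public static void main(String[] args) {
        Rectangle4d r = getTexCoord(2, 3);
        // Tile (2, 3) starts at (0.125, 0.1875) and spans 0.0625 each way.
        System.out.println(r.x + " " + r.y + " " + r.width + " " + r.height);
    }
}
```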
Now the problem is, once I use these coords, the sprite displays correctly and the transparency works correctly too, but everything else becomes significantly darker (or more precisely, it seems to become transparent, and since the clear color is black it shows up dark). The sprite itself is drawn correctly. I already tried changing all the glColor3d(..) calls to glColor4d(..) with alpha set to 1d, but that didn't change anything. The sprite is currently the only image; everything else is just colored quads.
Here is how I initialised OpenGL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And here is how I load the sprite (using slick-util):
texture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/sprites.png"));
And finally I render it like this (using the helper function getTexCoord() mentioned at the top):
texture.bind();
glColor4d(1, 1, 1, 1);
glBegin(GL_QUADS);
{
    Rectangle4d texCoord = getTexCoord(0, 0);
    glTexCoord2f((float) texCoord.getX(), (float) texCoord.getY());
    glVertex2i((Game.WIDTH - PLAYERSIZE) / 2, (Game.HEIGHT - PLAYERSIZE) / 2);
    glTexCoord2f((float) (texCoord.getX() + texCoord.getWidth()), (float) texCoord.getY());
    glVertex2i((Game.WIDTH + PLAYERSIZE) / 2, (Game.HEIGHT - PLAYERSIZE) / 2);
    glTexCoord2f((float) (texCoord.getX() + texCoord.getWidth()), (float) (texCoord.getY() + texCoord.getHeight()));
    glVertex2i((Game.WIDTH + PLAYERSIZE) / 2, (Game.HEIGHT + PLAYERSIZE) / 2);
    glTexCoord2f((float) texCoord.getX(), (float) (texCoord.getY() + texCoord.getHeight()));
    glVertex2i((Game.WIDTH - PLAYERSIZE) / 2, (Game.HEIGHT + PLAYERSIZE) / 2);
}
glEnd();
The result is this (sprite is drawn correctly, just everything else is darker/transparent):
Without the texture (just a gray quad), it looks like this (now everything is correctly drawn except I don't have a sprite):
Thanks for everyone who bothers to read this at all!
Edit:
Some additional info, from my attempts to find the problem:
This is how it looks when I set the ClearColor to white (using glClearColor(1, 1, 1, 1) ):
Another thing I tried is enabling blending just before I draw the player and disabling it again right after I finish drawing:
It's a bit better now, but it's still noticeably darker. In this case it really seems to be "darker", not "more transparent", because it looks the same when I use white as the clear color (while still only enabling blending when needed and disabling it right after), as seen here:
I read some related questions and eventually found the solution (Link). Apparently I can't/shouldn't have GL_TEXTURE_2D enabled all the time when I want to render textureless objects (colored quads in this case)!
So now, I enable it only before I render the sprite and then disable it again once the sprite is drawn. It works perfectly now! :)
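For anyone hitting the same issue, the fix amounts to scoping the texture state around the textured draw. A minimal sketch, assuming LWJGL's static GL11 imports; drawColoredQuads() and drawSpriteQuad() are hypothetical stand-ins for your own drawing code:

```java
void renderFrame() {
    // Colored quads: GL_TEXTURE_2D must be off here, otherwise every quad
    // is modulated by whatever texel the current texture coordinates select.
    drawColoredQuads();

    // Enable texturing only around the sprite draw...
    glEnable(GL_TEXTURE_2D);
    texture.bind();
    drawSpriteQuad();
    // ...and disable it again before the next untextured draw.
    glDisable(GL_TEXTURE_2D);
}
```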
Related
So I am having a bit of a hard time understanding how orthographic cameras work in libgdx.
What I want is to have a camera that will only render things within a square, while another camera sets the bounds for my whole screen.
So here, I was able to do what I wanted on the whole screen for the game pad. But the thing you see in the top right is the background map of the game, and I want to render only the parts that fall within the red square you see here. How do I achieve that?
Are cameras supposed to do that or do I need to figure out a way to do it manually? I am really confused as to how cameras and projection matrices work.
Here on this screen, the red square and the green pad on the left are being drawn using the projection matrix of my screen camera. The map (top right) is drawn using my map cam.
The map cam has a viewport of 400x400, but as you can see, the tiles are rectangular, and that isn't the aspect ratio I want. If someone can briefly explain how cameras work, I'd greatly appreciate it.
The reason I am not posting my code here is because I feel like I need to understand how camera mechanics work to even code it properly so I want to address that issue first.
Following @Tenfour04's advice worked perfectly. In case anyone wonders what I wanted to achieve, here's a picture.
A camera alone cannot crop off part of the screen. For that you need to use glViewport. There is already a Viewport class in Libgdx that can do that for you. You will need two orthographic cameras (one for the map and one for the GUI), but the viewport can create its own.
private Viewport viewport;
//in create:
viewport = new FitViewport(400, 400);
//in resize:
viewport.update(width, height);
//in render:
viewport.getCamera().position.set(/*...move your map camera as needed*/);
viewport.apply(); //viewport cropped to your red square
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
//draw map
batch.end();
//return to full screen viewport
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(yourGUICamera.combined);
batch.begin();
//draw gui
batch.end();
What happens is that the camera will fit itself to the size of the screen. To change this, you would want to use a FrameBuffer. The frame buffer constrains the camera to the desired size, and its contents can then be drawn as a texture.
Create the frame buffer with the dimensions being in pixels.
//Initialize the buffer
FrameBuffer fbo = new FrameBuffer(Format.RGB565, width, height, false);
Render the world within the buffer.
fbo.begin();
//Draw the world here
fbo.end();
Draw the buffer to the screen with a batch.
batch.begin();
batch.draw(fbo.getColorBufferTexture(), x, y);
batch.end();
I'm working on a top down RPG game using LibGDX, and am creating an Ortho.. camera for my game. However in doing so, only my tile textures render now. This is how the render code looks:
Camera initialized as new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
(note that the camera. calls are actually in the world.update method, I just figured I would minimize the amount of code needed on here)
Updating/Rendering:
camera.position.set(getPlayer().getPosition().x, getPlayer().getPosition().y, 0);
camera.update();
batch.setProjectionMatrix(world.getCamera().combined);
batch.begin();
world.render(batch);
batch.end();
the world's render method ends up calling this:
public void render(SpriteBatch batch, float x, float y, float w, float h) {
    batch.draw(region, x, y, w, h);
}
Where region is a TextureRegion
Without the camera, it all works just fine, so I am very confused as to why only the tile textures render now (my tile textures are rendered below entities). Does anyone have any idea why this might be? In case you want to see more code, I also have this on my github: CLICK HERE
I hadn't realized this until later, but I had been commenting out rendering lines one by one to see if I could find what was wrong. It turns out that a debugging tool I made (which renders collision bounds using a ShapeRenderer) was breaking it, because apparently a ShapeRenderer cannot be used between a batch.begin and a batch.end.
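In other words, the batch and the shape renderer each need their own begin()/end() pair, kept sequential rather than nested. A sketch of the corrected ordering (shapeRenderer, camera, and bounds are assumed fields):

```java
batch.setProjectionMatrix(camera.combined);
batch.begin();
world.render(batch);
batch.end();                 // finish the sprite batch first

// Only now is it safe to use the ShapeRenderer.
shapeRenderer.setProjectionMatrix(camera.combined);
shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
shapeRenderer.rect(bounds.x, bounds.y, bounds.width, bounds.height);
shapeRenderer.end();
```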
I figured this out with the help of this badlogicgames forum post
I can't get much out of the code, but are you using this at the root of the render structure?
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
So I am developing a small Pong replica simply for some practice with LWJGL. Since there is no easy way to write customizable text in LWJGL, I am using textures for the start button, other buttons, and so on. However, when I drew the texture, it turned out to be discolored on my purple background. If I change the background to white, there is no discoloration. Help?
Also, my start button texture is something like 50x22, but I put it on a 64x64 image because Slick can only load resolutions that are a power of two. I adjusted the rectangle being drawn so that it is not warped, and the rest of the image is transparent, so it shouldn't be visible once I sort out the above problem. Are there any alternatives to my method?
This is where I initialize my OpenGL stuff:
public static void setCamera()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
And this is where I draw the texture:
public void draw()
{
    logic();
    glPushMatrix();
    glTranslated(x, y, 0);
    texture.bind();
    glBegin(GL_QUADS);
    glTexCoord2d(0, 1); glVertex2d(0, -196.875);
    glTexCoord2d(1, 1); glVertex2d(width + 18.75, -196.875);
    glTexCoord2d(1, 0); glVertex2d(width + 18.75, height);
    glTexCoord2d(0, 0); glVertex2d(0, height);
    glEnd();
    glPopMatrix();
}
Thanks :)
As discussed in the comments, your initial problem was that you had neglected to reset the "current color" before drawing your texture. GL is a glorified state machine; it will continue to use the color you set for every subsequent draw operation, so the glColor3d (...) you set when drawing your background also affects your foreground image.
Adding the following before drawing your textured quad will fix this problem:
glColor3f (1.0f, 1.0f, 1.0f);
However, you have brought up a new issue in your comments related to blending. This question boils down to a lack of a blending function. By default when you draw something in OpenGL it will merely overwrite anything else in the framebuffer.
What you need for transparency to work is enable blending and use this function:
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This effectively does the following:
NewColor = (NewColor.RGB * NewColor.A) + OldColor.RGB * (1.0 - NewColor.A)
With this, the parts of your texture that have an alpha value != 1.0 will be a mix of whatever was already in the framebuffer and what you just drew.
Remember to enable blending before you draw your transparent objects:
glEnable (GL_BLEND);
and disable it when you do not need it:
glDisable (GL_BLEND);
Lastly, you should be aware that the order you draw translucent objects in OpenGL is pretty important. Opaque objects (such as your background) need to be drawn first. In general, you need to draw things from back-to-front in order for alpha blending to function correctly. This is the opposite order you would ideally want for opaque geometry (since hardware can skip shading for obstructed objects if you draw objects in front of them first).
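Putting the pieces together, a frame might be ordered like this (a sketch; drawBackground() and drawSpritesBackToFront() are hypothetical stand-ins for your own drawing code):

```java
// Opaque geometry first, with blending off.
glDisable(GL_BLEND);
drawBackground();

// Then translucent objects, sorted back-to-front, with blending on.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor3f(1.0f, 1.0f, 1.0f);   // reset the current color first, as above
drawSpritesBackToFront();
glDisable(GL_BLEND);
```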
I'm trying to render a background image for a new game I'm creating. To do this, I thought I'd just create a simple quad and draw it first so that it stretches over the background of my game. The problem is that the quad doesn't draw at its correct size and appears at completely the wrong place on the screen. I am using LWJGL and an added slick-util library for loading textures.
background = TextureHandler.getTexture("background", "png");
This is the line of code which basically gets my background texture using a class that I wrote using slick-util. I then bind the texture to a quad and draw it using glBegin() and glEnd() like this:
// Draw the background.
background.bind();
glBegin(GL_QUADS);
{
    glTexCoord2d(0.0, 0.0);
    glVertex2d(0, 0);
    glTexCoord2d(1.0, 0.0);
    glVertex2d(Game.WIDTH, 0);
    glTexCoord2d(1.0, 1.0);
    glVertex2d(Game.WIDTH, Game.HEIGHT);
    glTexCoord2d(0.0, 1.0);
    glVertex2d(0, Game.HEIGHT);
}
glEnd();
You'd expect this block to draw the quad so that it covers the entire screen, but it actually doesn't. It draws it in the middle of the screen, like so:
http://imgur.com/Xw9Xs9Z
The large, multicolored sprite that takes up the larger portion of the screen is my background, but it isn't taking up the full space like I want it to.
A few things I've tried:
Checking, double-checking, and triple-checking to make sure that the sprite's size and the window's size are identical
Resizing the sprite so that it is both larger and smaller than my target size. Nothing seems to change when I do this.
Positioning the sprite at different intervals or messing with the parameters of the glTexCoord2d() and glVertex2d(). This is just messy, and looks unnatural.
Why won't this background sprite draw at its correct size?
If you have not created your own orthographic projection (i.e. using glOrtho()), then your vertex coordinates need to range from -1 to +1. Right now a quad from (0, 0) to (WIDTH, HEIGHT) overlaps only the upper-right quarter of that projection, thus giving you this result.
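One way to see it: with the identity projection, the values you pass to glVertex2d() are already NDC, so a quad from (0, 0) to (WIDTH, HEIGHT) mostly lands outside the visible [-1, 1] square. If you ever want to place things by pixel without calling glOrtho(), the pixel-to-NDC conversion is plain arithmetic (a sketch; the class and method names are made up for illustration):

```java
public class PixelToNdc {
    // Maps a pixel position (origin top-left, y down) to OpenGL NDC
    // (origin at the window center, x right, y up).
    static double[] toNdc(double px, double py, double width, double height) {
        double x = 2.0 * px / width - 1.0;
        double y = 1.0 - 2.0 * py / height;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // Top-left pixel maps to (-1, 1); the window center maps to (0, 0).
        double[] topLeft = toNdc(0, 0, 800, 600);
        double[] center  = toNdc(400, 300, 800, 600);
        System.out.println(topLeft[0] + "," + topLeft[1]);
        System.out.println(center[0] + "," + center[1]);
    }
}
```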
I'm starting to learn OpenGL on Android (GL10) using Java, and I followed some tutorials to draw squares, triangles, etc.
Now I'm starting to draw some ideas I have, but I'm really confused by the drawing vertices of the screen. When I draw something using OpenGL ES, I have to specify the part of the screen I want to draw to, and the same for the texture...
So I started to make some tests and I printed a fullscreen texture with this vertexs:
(-1, -1, //top left
-1, 1, //bottom left
1, -1, //top right
1, 1); //bottom right
Why is this fullscreen? Isn't the origin of OpenGL coordinates at the top left (0, 0)? Why is the draw correct with those vertices? It seems that the origin is really the center of the screen and the width and height go from -1...1, but I don't really understand it, because I thought that the origin was at the top left...
Another question... I read a lot of C++ code that draws using pixels. Using pixels seems really necessary in video games, because you need the exact position of things, and with -1...1 I can't be very precise. How can I use pixels instead of -1...1?
Really, thanks, and sorry about my poor English. Thanks
Why is this fullscreen? Isn't the origin of OpenGL coordinates at the top left (0, 0)? Why is the draw correct with those vertices? It seems that the origin is really the center of the screen and the width and height go from -1...1, but I don't really understand it, because I thought that the origin was at the top left...
There are 3 things coming together. The so called viewport, the so called normalized device coordinates (NDC), and the projection from model space to eye space to clip space to NDC space.
The viewport selects the portion of your window into which the NDC range
[-1…1]×[-1…1]
is mapped. The function signature is glViewport(x, y, width, height). OpenGL assumes a coordinate system in which NDC x-coordinates increase to the right and NDC y-coordinates increase upward.
So if you call glViewport(0, 0, window_width, window_height), which is also the default after an OpenGL context is first bound to a window, the NDC coordinate (-1, -1) will be in the lower left and the NDC coordinate (1, 1) in the upper right corner of the window.
OpenGL starts with all transformations set to identity, which means that the vertex coordinates you pass through go straight to NDC space and are interpreted as such. However, most of the time in OpenGL you apply two successive transformations:
modelview
and
projection
The modelview transformation is used to move the world around in front of the stationary eye/camera (which is always located at (0,0,0)). Placing a camera just means adding an additional transformation of the whole world (the view transformation) that is the exact opposite of how you'd place the camera in the world. Fixed-function OpenGL calls this the MODELVIEW matrix, which is accessed when the matrix mode has been set to GL_MODELVIEW.
The projection transformation is kind of the lens of OpenGL. You use it to set whether it's a wide or narrow angle (in the perspective case), or the edges of a cuboid (ortho projection), or even something different. Fixed-function OpenGL calls this the PROJECTION matrix, which is accessed when the matrix mode has been set to GL_PROJECTION.
After the projection, primitives are clipped, and then the so-called homogeneous divide is applied, which creates the actual perspective effect if a perspective projection has been used.
At this point vertices have been transformed into NDC space, which then gets mapped to the viewport as explained in the beginning.
Regarding your problem: What you want is a projection that maps vertex coordinates 1:1 to viewport pixels. Easy enough:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if (origin_lower_left) {
    glOrtho(0, width, 0, height, -1, 1);   // y = 0 at the bottom
} else {
    glOrtho(0, width, height, 0, -1, 1);   // y = 0 at the top
}
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Now vertex coordinates map to viewport pixels.
Update: Drawing a full viewport textured quad by triangles:
OpenGL-2.1 and OpenGL ES-1
void fullscreenquad(int width, int height, GLuint texture)
{
    /* Quad corners, reused as both vertex and texture coordinates. */
    GLfloat vtcs[] = {
        0, 0,
        1, 0,
        1, 1,
        0, 1
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vtcs);
    glTexCoordPointer(2, GL_FLOAT, 0, vtcs);

    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
By default the OpenGL camera is placed at the origin pointing down the negative z axis.
Whatever is projected onto the camera's near plane is what is seen on your screen.
Thus the center of the screen corresponds to (0, 0).
Depending on what you want to draw, you have the option of using GL_POINTS, GL_LINES, and GL_TRIANGLES for drawing.
You can use GL_POINTS to color individual pixels on the screen, but if you need to draw objects/meshes such as a teapot, cube, etc., then you should go for triangles.
You need to read a bit more to get things clearer. In OpenGL, you specify a viewport. This viewport is your view of the OpenGL space: if you set it so that its center is in the middle of your screen and the view extends from -1 to 1, then this is your full view of the OpenGL space. Do not mix that up with the screen coordinates in Android (those have their origin at the top left corner, as you mentioned). You need to translate between these coordinate systems (for touch events, for example) to match the other coordinate system.
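For example, translating an Android touch position (pixels, origin top-left, y down) into the -1...1 GL space discussed above is a two-line calculation (a sketch; the class and method names are made up for illustration):

```java
public class TouchToGl {
    // Converts an Android touch position (pixels, origin top-left, y down)
    // into OpenGL's normalized coordinates (origin at center, y up).
    static float glX(float touchX, float screenW) {
        return 2f * touchX / screenW - 1f;
    }

    static float glY(float touchY, float screenH) {
        return 1f - 2f * touchY / screenH;
    }

    public static void main(String[] args) {
        // A touch at the top-left corner maps to (-1, 1);
        // a touch at the screen center maps to (0, 0).
        System.out.println(glX(0, 1080) + "," + glY(0, 1920));
        System.out.println(glX(540, 1080) + "," + glY(960, 1920));
    }
}
```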