Difficulty drawing a background sprite using LWJGL - java

I'm trying to render a background image for a new game I'm creating. To do this, I thought I'd create a simple quad and draw it first so that it stretches over the background of my game. The problem is that the quad doesn't draw at its correct size and appears in completely the wrong place on the screen. I am using LWJGL with the slick-util library for loading textures.
background = TextureHandler.getTexture("background", "png");
This line fetches my background texture through a class I wrote around slick-util. I then bind the texture to a quad and draw it between glBegin() and glEnd(), like this:
// Draw the background.
background.bind();
glBegin(GL_QUADS);
{
    glTexCoord2d(0.0, 0.0);
    glVertex2d(0, 0);
    glTexCoord2d(1.0, 0.0);
    glVertex2d(Game.WIDTH, 0);
    glTexCoord2d(1.0, 1.0);
    glVertex2d(Game.WIDTH, Game.HEIGHT);
    glTexCoord2d(0.0, 1.0);
    glVertex2d(0, Game.HEIGHT);
}
glEnd();
You'd expect this block to draw the quad so that it covers the entire screen, but it doesn't. It draws it in the middle of the screen, like so:
http://imgur.com/Xw9Xs9Z
The large, multicolored sprite that takes up the larger portion of the screen is my background, but it isn't taking up the full space like I want it to.
A few things I've tried:
Checking, double-checking, and triple-checking to make sure that the sprite's size and the window's size are identical
Resizing the sprite so that it is both larger and smaller than my target size. Nothing seems to change when I do this.
Positioning the sprite at different offsets, and messing with the parameters of the glTexCoord2d() and glVertex2d() calls. This is just messy and looks unnatural.
Why won't this background sprite draw at its correct size?

If you have not created your own orthographic projection (e.g. using glOrtho()), your vertex coordinates need to range from -1 to +1. Right now you are drawing only into the quadrant between (0, 0) and (+1, +1) of that space; everything beyond +1 is clipped, which gives you this result.
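For example, a minimal fixed-function setup (a sketch, assuming LWJGL 2 static imports and the Game.WIDTH/Game.HEIGHT constants from the question) that maps vertex coordinates directly to window pixels:
// Run once during initialization, before any drawing.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Map x to [0, WIDTH] and y to [0, HEIGHT] instead of the default [-1, +1].
glOrtho(0, Game.WIDTH, 0, Game.HEIGHT, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
With this in place, the quad from the question covers the whole window without changing any of its vertex values.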

Related

Rendering and Cropping/Stretching an Image Using Slick/OpenGL (using getSubImage)

I'm trying to recreate a shadow effect for some 2D sprites in a project using Slick. To do this, I'm recolouring a copy of the sprite and stretching it with Slick's OpenGL renderer using this method:
public static void getStretched(Shape shape, Image image) {
    TextureImpl.bindNone();
    image.getTexture().bind();
    SGL GL = Renderer.get();
    GL.glEnable(SGL.GL_TEXTURE_2D);
    GL.glBegin(SGL.GL_QUADS);
    // top left
    GL.glTexCoord2f(0f, 0f);
    GL.glVertex2f(shape.getPoints()[0], shape.getPoints()[1]);
    // top right
    GL.glTexCoord2f(0.5f, 0f);
    GL.glVertex2f(shape.getPoints()[2], shape.getPoints()[3]);
    // bottom right
    GL.glTexCoord2f(1f, 1f);
    GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
    // bottom left
    GL.glTexCoord2f(0.5f, 1f);
    GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);
    GL.glEnd();
    GL.glDisable(SGL.GL_TEXTURE_2D);
    TextureImpl.bindNone();
}
This gives almost the desired effect, aside from the fact that the image is cropped a bit.
This becomes more extreme for higher distortions.
I'm new to using OpenGL, so some help with how to fix this would be great.
Furthermore, if I feed an image into the method that was obtained using getSubImage, OpenGL renders the original image, rather than the sub image.
I'm unsure as to why this happens, as the sprite itself is taken from a spritesheet using getSubImage and has no problem rendering.
Help would be greatly appreciated!
I'm recolouring a copy of the sprite and stretching it
The issue is that you stretch the texture coordinates, but the region covered by the sprite stays the same. If the shadow exceeds the region covered by the quad primitive, it is cropped.
You have to "stretch" the vertex coordinates rather than the texture coordinates. Define a rhombic geometry for the shadow and wrap the texture onto it:
float distortionX = ...;

GL.glEnable(SGL.GL_TEXTURE_2D);
GL.glBegin(SGL.GL_QUADS);
// top left
GL.glTexCoord2f(0f, 0f);
GL.glVertex2f(shape.getPoints()[0] + distortionX, shape.getPoints()[1]);
// top right
GL.glTexCoord2f(1f, 0f);
GL.glVertex2f(shape.getPoints()[2] + distortionX, shape.getPoints()[3]);
// bottom right
GL.glTexCoord2f(1f, 1f);
GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
// bottom left
GL.glTexCoord2f(0f, 1f);
GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);
GL.glEnd();
[...] if I feed an image into the method that was obtained using getSubImage, [...] the sprite itself is taken from a spritesheet [...]
The sprite covers just a small rectangular region of the entire texture. This region is defined by the texture coordinates. You have to use the same texture coordinates as when you draw the sprite itself.
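As a sketch of what that looks like in the method above (assuming Slick's Image exposes its atlas region through getTextureOffsetX()/getTextureOffsetY() and getTextureWidth()/getTextureHeight(), which return normalized coordinates):
// Normalized texture-space rectangle of the sub-image inside the atlas.
float tx = image.getTextureOffsetX();
float ty = image.getTextureOffsetY();
float tw = image.getTextureWidth();
float th = image.getTextureHeight();
// top left
GL.glTexCoord2f(tx, ty);
GL.glVertex2f(shape.getPoints()[0] + distortionX, shape.getPoints()[1]);
// top right
GL.glTexCoord2f(tx + tw, ty);
GL.glVertex2f(shape.getPoints()[2] + distortionX, shape.getPoints()[3]);
// bottom right
GL.glTexCoord2f(tx + tw, ty + th);
GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
// bottom left
GL.glTexCoord2f(tx, ty + th);
GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);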

How to create a proper orthographic camera in libGDX

So I am having a bit of a hard time understanding how orthographic cameras work in libGDX.
What I want is a camera that will only render things within a square, while another camera sets the bounds for my whole screen.
So here, I was able to do what I wanted on the whole screen for the game pad. But the thing you see on the top right is the background map of the game, and I want to render only the parts that fall within the red square you see here. How do I achieve that?
Are cameras supposed to do that, or do I need to figure out a way to do it manually? I am really confused as to how cameras and projection matrices work.
Here on this screen, the red square and the green pad on the left are drawn using the projection matrix of my screen camera. The map (top right) is drawn using my map cam.
The map cam has a 400x400 viewport, but as you can see, the tiles are rectangular, and that isn't the aspect ratio I want. If someone can briefly explain how cameras work, I'd greatly appreciate it.
The reason I am not posting my code here is that I feel I need to understand how the camera mechanics work before I can even code it properly, so I want to address that issue first.
Following #Tenfour04's advice worked perfectly. In case anyone wonders what I wanted to achieve, here's a picture.
A camera alone cannot crop off part of the screen. For that you need to use glViewport. There is already a Viewport class in libGDX that can do this for you. You will need two orthographic cameras (one for the map and one for the GUI), but the viewport can create its own.
private Viewport viewport;
//in create:
viewport = new FitViewport(400, 400);
//in resize:
viewport.update(width, height);
//in render:
viewport.getCamera().position.set(/*...move your map camera as needed*/);
viewport.apply(); //viewport cropped to your red square
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
//draw map
batch.end();
//return to full screen viewport
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(yourGUICamera.combined);
batch.begin();
//draw gui
batch.end();
What happens is that the camera fits itself to the size of the screen. To change this, you can use a FrameBuffer. The frame buffer constrains the rendering to the desired size, and the result can then be drawn as a texture.
Create the frame buffer with its dimensions given in pixels.
//Initialize the buffer
FrameBuffer fbo = new FrameBuffer(Format.RGB565, width, height, false);
Render the world within the buffer.
fbo.begin();
//Draw the world here
fbo.end();
Draw the buffer to the screen with a batch.
batch.begin();
batch.draw(fbo.getColorBufferTexture(), x, y);
batch.end();

LibGDX Scaling and Rendering Sprites from TextureAtlas

I am currently experiencing issues when drawing and scaling sprites.
I am loading my assets from a texture-atlas, which I packed no problem with the LibGDX texture packer gui tool. My texture atlas image currently looks like this.
These images are supposed to be buttons, but as you can see, the image is very small, so the loaded sprites come out at something like 34x16 pixels. When I render these buttons on a canvas of 1920x1080, they are much too small. I use sprite.scale(int scale) to scale the sprites, but when I scale them they appear blurry. What I would like is for each pixel to be scaled proportionally, keeping the pixelated look of the button, rather than getting a blurry, resized image from a really small texture.
I currently render the sprites using sprite.draw(SpriteBatch batch). Is this the proper way of rendering a sprite after loading it with atlas.createSprite(String name)? I am new to using sprites and to loading textures from a texture atlas, so I am wondering if this is the correct way of doing things.
Also, when I initialize my game, I load numerous Sprite objects from a TextureAtlas. Each sprite holds a texture that will represent a game object. However, it is my understanding that you render a sprite using sprite.draw(SpriteBatch batch), so a sprite loaded from the TextureAtlas could only serve one game object, because I would also have to set the sprite's scale and position to match that object. I am used to loading a Texture and rendering it with batch.draw() at a given position, but I don't see how this is possible with a sprite. Even if I use batch.draw(sprite, x, y), I am unable to scale the sprite properly: as mentioned before, I would like to scale it while keeping the pixelated effect, and in any case Sprite.scale() scales the Sprite object as a whole, making it impossible to reuse the sprite's texture for numerous game objects.
Any suggestions would be greatly appreciated.
The code I am currently using to render/load the sprites is as follows:
Loading from TextureAtlas:
public static TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("data/texture/pack/output/pack.pack"));
public static Sprite sprite = atlas.createSprite("buttonUp");
sprite.setScale(10);
Rendering the Sprite: GdxGame.WIDTH/HEIGHT are 1920x1080. The desktop window is scaled down from that size, but everything is rendered as if the screen were 1920x1080.
batch = new SpriteBatch();
camera = new OrthographicCamera(GdxGame.WIDTH, GdxGame.HEIGHT);
camera.position.set(GdxGame.WIDTH/2, GdxGame.HEIGHT/2, 0);
camera.setToOrtho(false, GdxGame.WIDTH, GdxGame.HEIGHT);

public void render(float delta){
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    camera.update();
    sprite.draw(batch);
    batch.end();
}
After further investigation, I discovered that I can use Sprite.set(Sprite sprite) to make one sprite a copy of another, and therefore render each sprite multiple times. However, this does not solve the scaling issue. I must emphasize that when a scaled sprite is rendered, the scaling is not done per pixel, so it comes out blurry. But when rendering a TextureRegion with batch.draw(TextureRegion, x, y, width, height), if the width and height are greater than those of the original texture, each pixel is scaled up rather than the whole image being blurred to smooth it out. The following is an image of the blurriness I am talking about:
Notice how this sprite is scaled to be blurry, even though the original image is small and pixelated.
What TextureFilter settings are you using in your code or in the texture packer? Try the "Nearest" filter. If it is set to "Linear" or similar, it will always take four texture pixels (texels) and interpolate them to get the color of each pixel drawn.
That might help against the blur, but I am not sure whether it will produce exactly the 8-bit look you are aiming for...
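As a minimal sketch (assuming the TextureAtlas from a libGDX setup like the one in the question), you can switch every page of the atlas to Nearest filtering after loading:
import com.badlogic.gdx.graphics.Texture;

// Disable interpolation so scaled sprites keep hard pixel edges.
for (Texture page : atlas.getTextures()) {
    page.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
}
Alternatively, the texture packer lets you bake the min/mag filter settings into the atlas file itself, so sprites created from it already use them.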

LWJGL: Discolored Textures

So I am developing a small Pong replica simply for some practice with LWJGL. Since there is no easy way to draw customizable text in LWJGL, I am using textures for the start button, other buttons, and so on. However, when I draw the texture, it turns out discolored on my purple background. If I change the background to white, there is no discoloration. Help?
Also, my start button texture is something like 50x22, but I put it on a 64x64 image because Slick can only load resolutions that are a power of two. I adjusted the rectangle being drawn so that it is not warped, and the rest of the image is transparent, so it shouldn't be visible once I sort out the above problem. Are there any alternatives to my method?
This is where I initialize my OpenGL stuff:
public static void setCamera()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
And this is where I draw the texture:
public void draw()
{
    logic();
    glPushMatrix();
    glTranslated(x, y, 0);
    texture.bind();
    glBegin(GL_QUADS);
    glTexCoord2d(0, 1); glVertex2d(0, -196.875);
    glTexCoord2d(1, 1); glVertex2d(width + 18.75, -196.875);
    glTexCoord2d(1, 0); glVertex2d(width + 18.75, height);
    glTexCoord2d(0, 0); glVertex2d(0, height);
    glEnd();
    glPopMatrix();
}
Thanks :)
As discussed in the comments, your initial problem was that you had neglected to reset the "current color" before drawing your texture. GL is a glorified state machine; it will keep using the color you set for every subsequent draw operation, so the glColor3d (...) call you made when drawing your background also affects your foreground image.
Adding the following before drawing your textured quad will fix this problem:
glColor3f (1.0f, 1.0f, 1.0f);
However, you have brought up a new issue in your comments related to blending. It boils down to the lack of a blending function: by default, when you draw something in OpenGL, it simply overwrites whatever is already in the framebuffer.
What you need for transparency to work is to enable blending and use this function:
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This effectively does the following:
NewColor.RGB = (NewColor.RGB * NewColor.A) + OldColor.RGB * (1.0 - NewColor.A)
With this, the parts of your texture that have an alpha value != 1.0 will be a mix of whatever was already in the framebuffer and what you just drew.
Remember to enable blending before you draw your transparent objects:
glEnable (GL_BLEND);
and disable it when you do not need it:
glDisable (GL_BLEND);
Lastly, be aware that the order in which you draw translucent objects in OpenGL matters a great deal. Opaque objects (such as your background) need to be drawn first. In general, you need to draw things from back to front for alpha blending to work correctly. This is the opposite of the order you would ideally want for opaque geometry (since hardware can skip shading for obstructed objects if you draw the objects in front of them first).
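Putting the pieces together, a sketch of one frame in that order (drawBackground() and startButton are hypothetical stand-ins for the question's own game objects):
// 1. Opaque pass: the background can be drawn without blending.
glDisable(GL_BLEND);
drawBackground(); // hypothetical helper for the purple background

// 2. Translucent pass: back to front, with the current color reset.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor3f(1.0f, 1.0f, 1.0f); // undo any glColor3d(...) left over from the background
startButton.draw();          // the textured quad from draw() above
glDisable(GL_BLEND);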

Strange issues with texture mapping

I am attempting to use texture coordinates from a pre-generated PNG file on a 3D world of quads loaded into Java with LWJGL's slick-util extension.
The texture file is 192x96 pixels and properly formatted. It's composed of 6x3 tiles of 32x32 each.
The 3D quads are 1.5f wide and long. They are spaced apart properly.
I am having issues getting the right texture coordinates. When I put 0.0f to 0.333333f as the y coordinates, I get slightly more than the top tile's height displayed. However, if I put 0.0f-0.25f, I get exactly one third, which is my tile's height. I have yet to find a magic number for the x coordinates, but maybe someone could explain to me why 1/4 of 96 is 24 according to the texture coordinates, or what I'm doing wrong? I suspect it could be a clash between my quad size and textures.
The tops of the cubes use the texture coordinates (0.0f, 0.0f), (0.0f, 0.333333f), (0.166666f, 0.333333f), (0.166666f, 0.0f), applied moving anticlockwise from the top left to the top right. Again, the main texture file is 32x32 tiles arranged to make 192x96 (96 is the height).
Notice I placed a white line at the top of one of the tiles to see its border, a black line at its bottom, and then a white line for the top of the next one below it. The texture 'bleeds' too far down. The other textures have their own, even stranger coordinates, as you can see.
Arranging texture coordinates with the assumption that the top of the image is 1.0 rather than the bottom produces odd squares with a rectangular hole in the center where quads should be.
I am using TEX_ENV GL_MODULATE.
Texture sizes are usually a power of two. I suspect something resized your 192x96 texture to 256x128 or 256x256. This doesn't really explain the values you found, however... But I think that if you resize your texture to 256x256 (increase the size, don't scale!) and calculate your texture coordinates based on that, your problem will go away.
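As a sanity check on that suspicion (a sketch assuming the loader pads, rather than scales, the 192x96 image up to a power-of-two size, with the tiles left at 32x32 in the top-left corner):
int col = 2, row = 1; // example tile indices into the 6x3 sheet
// Tile (col, row) inside a texture padded to 256x256; each tile is 32x32 texels.
float u0 = (col * 32) / 256.0f;
float v0 = (row * 32) / 256.0f;
float u1 = u0 + 32 / 256.0f; // 0.125f per tile horizontally
float v1 = v0 + 32 / 256.0f; // 0.125f per tile vertically
// If only the height were padded (to 256x128), one tile would span
// 32/128 = 0.25 of the height, matching the 0.25 y value that worked above.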
I don't know about Java, but with my image atlases in Objective-C and OpenGL ES, you need to make the textures smaller than what you are referring to when selecting them from the atlas.
Have you left a sufficient gap between the texture images to prevent 'bleeding'?
