LibGDX SpriteBatch.setProjectionMatrix makes SpriteBatch not draw - java

spriteBatch.setProjectionMatrix(cam.combined) makes the SpriteBatch not draw my blocks and my character, though camera movement works. If I don't use this line of code, everything is drawn but the camera doesn't work. Does anyone know the solution? I simply can't see it.
EDIT: sorry for the messy first post
Here is the piece of code that is troublesome:
public void render()
{
    cam.update();
    spriteBatch.setProjectionMatrix(cam.combined);
    spriteBatch.begin();
    drawBlocks();
    drawBob();
    spriteBatch.end();
    cam.position.x = world.bob.GetPosition().x;
    cam.update();
    drawCollisionBlocks();
    if (debug)
        drawDebug();
}

I found the solution, but for anyone who may run into this kind of problem in the future: the problem was in the drawing methods, where I was drawing textures like this:
spriteBatch.draw(bobFrame, bob.GetPosition().x * PPuX, bob.GetPosition().y * PPuY, Bob.SIZE * PPuX, Bob.SIZE * PPuY);
PPuX and PPuY were of type int and were used to adapt to different screen sizes. Once the projection matrix was set from the camera, those pixels-per-unit factors no longer matched the camera's world-unit coordinate system, and that was messing the SpriteBatch up.
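For illustration, a minimal sketch of the fixed draw call, assuming the camera's viewport is defined in world units (the identifiers bobFrame, bob, Bob.SIZE, and cam follow the question's code; the 10x7 viewport is an arbitrary example):
// In create(): camera viewport in world units, e.g. 10 x 7 units
cam = new OrthographicCamera(10f, 7f);

// In render(): draw in world units directly -- no PPuX/PPuY multiplication,
// because cam.combined already maps world units to screen pixels
spriteBatch.setProjectionMatrix(cam.combined);
spriteBatch.begin();
spriteBatch.draw(bobFrame, bob.GetPosition().x, bob.GetPosition().y, Bob.SIZE, Bob.SIZE);
spriteBatch.end();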

The spriteBatch.setProjectionMatrix(cam.combined) call makes the SpriteBatch use the coordinate system specified by cam instead of the default one. The two coordinate systems are different, and cam.combined does the math for you.

I believe you actually ARE drawing the sprites; you are just not seeing them because your camera viewport is not set (i.e. you are looking at the wrong coordinate area).
Adding
cam.setToOrtho(false); //true to invert y axis
"Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down."
Link to the Javadoc here.
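A minimal sketch of where that call could live, reusing the question's names (cam, world, spriteBatch):
// In create(): viewport sized to the screen, y-axis pointing up
cam = new OrthographicCamera();
cam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

// In render(): move the camera, update it, then hand its matrix to the batch
cam.position.x = world.bob.GetPosition().x;
cam.update();
spriteBatch.setProjectionMatrix(cam.combined);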

Related

How to create a proper orthographic camera in LibGDX

So I am having a hard time understanding how orthographic cameras work in LibGDX.
What I want is a camera that will only render things within a square, while another camera sets the bounds for my whole screen.
So here, I was able to do what I wanted on the whole screen for the game pad. But the thing you see at the top right is the background map of the game, and I want to render only the parts that fall within the red square you see here. How do I achieve that?
Are cameras supposed to do that, or do I need to figure out a way to do it manually? I am really confused as to how cameras and projection matrices work.
On this screen, the red square and the green pad on the left are drawn using the projection matrix of my screen camera. The map (top right) is drawn using my map cam.
The map cam has a viewport of 400x400, but as you can see, the tiles are rectangular, and that isn't the aspect ratio I want. If someone can briefly explain how cameras work, I'd greatly appreciate it.
The reason I am not posting my code here is that I feel I need to understand how camera mechanics work to even code it properly, so I want to address that issue first.
Following #Tenfour04's advice worked perfectly. In case anyone wonders what I wanted to achieve, here's a picture.
A camera alone cannot crop off part of the screen. For that you need to use glViewport. There is already a Viewport class in LibGDX that can do that for you. You will need two orthographic cameras (one for the map and one for the GUI), but the viewport can create its own.
private Viewport viewport;
//in create:
viewport = new FitViewport(400, 400);
//in resize:
viewport.update(width, height);
//in render:
viewport.getCamera().position.set(/*...move your map camera as needed*/);
viewport.apply(); //viewport cropped to your red square
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
//draw map
batch.end();
//return to full screen viewport
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(yourGUICamera.combined);
batch.begin();
//draw gui
batch.end();
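As a side note, instead of resetting glViewport by hand for the GUI, a second Viewport can do that bookkeeping. A sketch, assuming a ScreenViewport is acceptable for the GUI:
// A ScreenViewport maps 1 world unit to 1 pixel, which suits UI drawing
Viewport guiViewport = new ScreenViewport();
// in resize():
guiViewport.update(width, height, true); // true centers the camera
// in render(), after drawing the map:
guiViewport.apply();
batch.setProjectionMatrix(guiViewport.getCamera().combined);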
What happens is that the camera will fit itself to the size of the screen. In order to change this, you can use a FrameBuffer. The frame buffer constrains rendering to the desired size, and its contents can then be drawn as a texture.
Create the frame buffer with the dimensions being in pixels.
//Initialize the buffer
FrameBuffer fbo = new FrameBuffer(Format.RGB565, width, height, false);
Render the world within the buffer.
fbo.begin();
//Draw the world here
fbo.end();
Draw the buffer to the screen with a batch.
batch.begin();
batch.draw(fbo.getColorBufferTexture(), x, y);
batch.end();
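Putting those pieces together, a sketch of the frame-buffer approach. Note that in LibGDX the color buffer texture comes out y-flipped relative to normal textures, so it is commonly drawn through a flipped TextureRegion (width, height, x, and y here are placeholders):
// In create():
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGB565, width, height, false);
TextureRegion fboRegion = new TextureRegion(fbo.getColorBufferTexture());
fboRegion.flip(false, true); // FBO contents are upside down in screen space

// In render():
fbo.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
// ...draw the world here...
fbo.end();

batch.begin();
batch.draw(fboRegion, x, y);
batch.end();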

Coordinate (0, 0) is 1 pixel off with LibGDX Camera/Viewport

Hopefully the title isn't too confusing. I am new to LibGDX, and after hours of searching on Google I finally got the camera and viewport to work; it now properly scales to the screen. There's still one problem: while the Y coordinate at 0 is correct, the X coordinate at 0 seems to be a pixel or so off. I am bad at describing it, so here's a picture: Link to image
Like I said, I'm new to LibGDX and I am guessing it's a pretty obvious mistake, but here's the code I'm using:
I use these variables:
public static final int WORLD_WIDTH = 480;
public static final int WORLD_HEIGHT = 800;
OrthographicCamera cam;
Viewport viewport;
SpriteBatch batch;
Texture img;
I have this in the create():
cam = new OrthographicCamera(WORLD_WIDTH, WORLD_HEIGHT);
cam.setToOrtho(false, WORLD_WIDTH, WORLD_HEIGHT);
viewport = new FitViewport(WORLD_WIDTH, WORLD_HEIGHT);
img = new Texture("badlogic.jpg");
I have this in the render():
batch.setProjectionMatrix(cam.combined); //Important
And further down in render() I have this code:
batch.begin();
batch.draw(img, 0, 0);
batch.end();
And I have this in resize():
viewport.update(width, height);
Which gives the result from the picture.
Is there a solution to this problem? Do I need to alter part of the code, add some new lines, or is there a smarter way to set up a camera/viewport?
I'm looking forward to your answers, thanks in advance.
Cheers!
What you are experiencing is the way it is supposed to be. You are working with a FitViewport and a "virtual" resolution. A FitViewport keeps the aspect ratio of the given viewport size while scaling it up or down to fill your game window. If your window does not have the same aspect ratio as the virtual resolution, a FitViewport will leave parts of your window "empty" (that is, showing only the glClear color).
Have a look at the Viewports wiki page, where you can see this behaviour in a more extreme way.
If you don't want this, you can try the "opposite" FillViewport (no empty areas, but some parts may be cut off because they fall "outside" of your window), a ScreenViewport (good for UI), or an ExtendViewport.
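One thing worth checking in the posted code: the FitViewport is created without a camera, so cam and the viewport's internal camera are two separate objects, and only cam feeds the batch. A sketch of a more conventional setup, reusing the question's names:
// In create(): tie the camera to the viewport
cam = new OrthographicCamera();
viewport = new FitViewport(WORLD_WIDTH, WORLD_HEIGHT, cam);

// In resize(): the boolean centers the camera in the world
viewport.update(width, height, true);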
About a year ago I developed a full LibGDX game tutorial and provided code that was reusable for any game you'd want to create. It included a dynamic camera which handled the aspect ratio and resolution perfectly. You can take a look at the code for the camera here, see how it is similar to yours, and check whether there is anything you can do to improve your camera.

LibGDX Scaling and Rendering Sprites from TextureAtlas

I am currently experiencing issues when drawing and scaling sprites.
I am loading my assets from a texture atlas, which I packed without a problem using the LibGDX texture packer GUI tool. My texture atlas image currently looks like this.
These images are supposed to be buttons, but as you can see, the source image is very small, so the loaded sprites come out at, say, 34x16 pixels. When I render these buttons on a canvas of 1920x1080, they are much too small. I use sprite.scale(float scale) to scale the sprites up, but then they appear blurry. What I would like is for each pixel to be scaled up proportionally, keeping the pixelated look of the button, rather than getting a blurry resize of a really small texture. I currently render the sprites using sprite.draw(SpriteBatch batch). Is this the proper way of rendering a sprite after it is loaded using atlas.createSprite(String name)? I am new to using sprites and to loading textures from a texture atlas, so I am wondering if this is the correct way of doing things.
Also, when I initialize my game, I load numerous Sprite objects from a TextureAtlas. Each sprite holds a texture that represents a game object. However, since a sprite is rendered with sprite.draw(SpriteBatch batch), and I also have to set the sprite's scale and position, a sprite loaded from the TextureAtlas can only represent a single game object. I am used to loading a Texture and then rendering it with batch.draw() at a given position, but I don't see how this is possible with a sprite. Even if I use batch.draw(sprite, x, y), I am unable to scale the sprite properly, because, as I mentioned before, I would like to scale the sprite while maintaining the pixelated look; and in any case Sprite.scale() scales the Sprite object as a whole, making it impossible to reuse the sprite's texture for numerous game objects.
Any suggestions would be greatly appreciated.
The code I am currently using to render/load the sprites is as follows:
Loading from TextureAtlas:
public static TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("data/texture/pack/output/pack.pack"));
public static Sprite sprite = atlas.createSprite("buttonUp");
sprite.setScale(10);
Rendering the sprite (GdxGame.WIDTH/HEIGHT are 1920x1080; the desktop window is scaled down from that size, but everything is rendered as if the screen were 1920x1080):
batch = new SpriteBatch();
camera = new OrthographicCamera(GdxGame.WIDTH, GdxGame.HEIGHT);
camera.position.set(GdxGame.WIDTH/2, GdxGame.HEIGHT/2, 0);
camera.setToOrtho(false, GdxGame.WIDTH, GdxGame.HEIGHT);
public void render(float delta){
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    camera.update();
    sprite.draw(batch);
    batch.end();
}
After further investigation, I have discovered that I can use Sprite.set(Sprite sprite) to make one sprite a copy of another, and therefore render each sprite multiple times. However, this does not solve the issue of scaling the sprite. I must emphasize that when a scaled sprite is rendered, the scaling is not done per pixel, so it comes out blurry. But when rendering a TextureRegion with batch.draw(TextureRegion, x, y, width, height), if the width and height are greater than those of the original texture, each pixel is scaled up, rather than the whole image being blurred in an attempt to smooth it. The following is an image of the blurriness I am talking about:
Notice how this sprite is scaled to be blurry, even though the original image is small and pixelated.
What TextureFilter settings are you using in your code or in the TexturePacker? Try the "Nearest" filter. If it is set to "Linear" or similar, it will always take 4 texture pixels (texels) and interpolate them to get the color of each drawn pixel.
That might help against the blur, but I am not sure if it will produce exactly that 8-bit look you are aiming for...
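A sketch of how the Nearest filter could be applied in code, reusing the atlas from the question (alternatively, the TexturePacker settings filterMin and filterMag can be set to Nearest at pack time). Since a TextureRegion carries no position of its own, the sketch also shows one region backing several scaled-up draws:
// Turn off smoothing on every atlas page so scaled pixels stay crisp
for (Texture t : atlas.getTextures()) {
    t.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
}

// One region can be drawn many times at different positions and sizes,
// so a single atlas entry can back many game objects
TextureRegion button = atlas.findRegion("buttonUp");
batch.begin();
batch.draw(button, 100, 100, button.getRegionWidth() * 10, button.getRegionHeight() * 10);
batch.draw(button, 700, 100, button.getRegionWidth() * 10, button.getRegionHeight() * 10);
batch.end();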

Transparency in texture not working in libgdx

I'm trying to draw a sprite in LibGDX that has a transparent background, but the transparent background is filled in with white instead of showing what has already been rendered at that location.
The sprite (with the hat) is 64 by 64, with transparency around the edges and on the right. There should be two tiles with a 'C!' behind him, but that area is just filled in with white.
This is the code I am using to render the image.
public void draw(SpriteBatch sb, float parentAlpha){
    sb.enableBlending();
    sb.draw(texture, getX() + 32, getY());
}
If you're going to enable blending, you also need to set the blend function with setBlendFunction. This defines exactly how you want the blending to work. Presumably, you want the classic GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA blending.
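A sketch of that, using the GL20 constants (they have the same values as the GL11 ones named above):
sb.enableBlending();
sb.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
sb.draw(texture, getX() + 32, getY());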
Are all the sprites supposed to be blended? Maybe disabling blending after you have drawn could solve your problem.
Maybe something like this:
public void draw(SpriteBatch sb, float parentAlpha){
    sb.enableBlending();
    sb.draw(texture, getX() + 32, getY());
    sb.disableBlending();
}
If this doesn't work, you could try activating blending in the GL options with something like this:
Gdx.gl.glEnable(GL10.GL_BLEND);
Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
...
Your code Here
...
Gdx.gl.glDisable(GL10.GL_BLEND);
Hope it helps :)
First, make sure your PNG has transparency information in it.
Second, make sure you're drawing things in the right order (you have to draw the background boxes before the sprite).
I managed to fix the problem by changing my application to OpenGL 2.0 instead of 1.0. Still not sure what the problem was.

Android OpenGL ES: How do you select a 2D object?

I have been searching Stack Overflow for an introduction to 2D selection in OpenGL ES, but I mostly see questions about 3D.
I'm designing a 2D tile-based level editor on Android 4.0.3, using OpenGL ES. In the level editor, there is a 2D, yellow, square object placed in the center of the screen. All I want is to detect whether the object has been touched by the user.
In the level editor, there aren't any tiles overlapping. Instead, they are placed side-by-side, just like two nearby pixels in a bitmap image in MS Paint. My purpose is to individually detect a touch event for each square object in the level editor.
The object is created with a simple vertex array, and using GL_TRIANGLES to draw 2 flat right triangles. There are no manipulations and no loading from a file or anything. The only thing I know is that if a user touches any one of the yellow triangles, then both yellow triangles are to be selected.
Could anyone provide a hint as to how I need to do this? Thanks in advance.
EDIT:
This is the draw() function:
public void draw(GL10 gl) {
    gl.glPushMatrix();
    gl.glTranslatef(-(deltaX - translateX), (deltaY - translateY), 1f);
    gl.glColor4f(1f, 1f, 0f, 1f);
    //TODO: Move ClientState and MatrixStack outside of draw().
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
    gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 6);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glPopMatrix();
}
EDIT 2:
"I'm still missing some info. Are you using a camera? Or pushing other matrices before the model rendering? For example, if you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):"
I'm not using a camera, but I'm probably using an orthographic projection. Again, I do not know; I'm just using common OpenGL functions. I do push and pop matrices, because I plan on integrating many tiles (square 2D objects) with different translation matrices. No two tiles will have the same translation matrix M.
Is a perspective projection the same as an orthographic projection when it comes to 2D? I do not see any difference between the two.
Here's the initial setup when the surface is created (a class extending GLSurfaceView, and implementing GLSurfaceView.Renderer):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
}

public void onSurfaceCreated(GL10 gl, EGLConfig arg1) {
    reset();
}

public void onDrawFrame(GL10 gl) {
    clearScreen(gl);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, super.getWidth(), 0f, super.getHeight(), 1, -1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    canvas.draw(gl);
}

private void clearScreen(GL10 gl) {
    gl.glClearColor(0.5f, 1f, 1f, 1f);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
}
A basic approach would be the following:
1. Define a bounding box for each "touchable" object. This could be just a rectangle (x, y, width, height).
2. When you update a tile in the world, update its bounding box (completely in world coordinates).
3. When the user touches the screen, unproject the screen coordinates to world coordinates.
4. Check whether the unprojected point overlaps any bounding box.
Some hints on the previous items. [Edited]
1 and 2: You have to keep track of where you are rendering your tiles. Store their position and size; a rectangle is a convenient structure. In your example it could be computed like this, and it has to be recomputed when the model changes. Let's call it Rectangle r:
r.x = yourTile.position.x - (deltaX - translateX)
r.y = yourTile.position.y - (deltaY - translateY)
r.width = yourTile.width   // as there is no model scaling
r.height = yourTile.height
3: If you are using an orthographic camera, you can easily unproject your screen coordinates [x_screen, y_screen] like this (y is analogous):
x_model = ((x_screen / GL_viewport_width) - 0.5) * camera.WIDTH + camera.position.x
4: For each of your rectangles, check whether [x_model, y_model] is inside it.
[2nd Edit] Given the way you are updating your matrices, you can consider that you are using a camera positioned at surfaceView.width()/2, surfaceView.height()/2, and that you are matching 1 pixel on screen to 1 unit in the world, so you don't need to unproject anything: substituting those values into the formula above gives x_screen = x_model. (You'll need to flip the Y component of the touch event, because Y grows downwards in Java and upwards in GL.)
Final words: if the user touches point [x, y], check whether [x, screenHeight - y]* hits any of your rectangles, and you are done.
Do some debugging: log the touch points and see if they are as expected; generate your rectangles and see if they match what you see on screen. Then it is just a matter of checking whether a point is inside a rectangle.
I must tell you that you should not set the camera to screen dimensions, because your app will look dramatically different on different devices. This is a topic of its own, so I won't go any further, but consider defining your model in terms of world units, independent of screen size. This is getting off-topic, but I hope you have gotten a good glimpse of what you need to know!
*The flip I told you about.
PS: stick with the orthographic projection (perspective would be more complex to use).
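A rough sketch of steps 3 and 4 under the 1-pixel-to-1-unit assumption above (Tile, tiles, Rectangle, and screenHeight are hypothetical placeholders for whatever the level editor actually uses):
// Hypothetical hit test: flip Y, then check each tile's bounding rectangle
public Tile pickTile(float xScreen, float yScreen) {
    float xWorld = xScreen;                // 1 screen pixel == 1 world unit here
    float yWorld = screenHeight - yScreen; // touch Y grows down, GL Y grows up
    for (Tile tile : tiles) {
        Rectangle r = tile.bounds;         // kept up to date as in hints 1 and 2
        if (xWorld >= r.x && xWorld <= r.x + r.width
                && yWorld >= r.y && yWorld <= r.y + r.height) {
            return tile;                   // first tile hit; tiles don't overlap
        }
    }
    return null; // nothing was touched
}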
Please allow me to post a second answer to your question. This one is more high-level/philosophical. It may be a silly, useless answer, but I hope it will help someone new to computer graphics shift into "graphics mode".
You can't really select a triangle on the screen. That square is not 2 triangles; that square is just a bunch of yellow pixels. OpenGL takes some vertices, connects them, processes them, and colors some pixels on the screen. At one stage in the graphics pipeline even the geometric information is lost, and you only have isolated pixels. That's analogous to a letter printed by a printer on paper: you usually don't process information from the paper (OK, maybe a barcode reader does :D).
If you need to further process your drawings, you have to model them and process them yourself with auxiliary data structures. That's why I suggested you create a rectangle to model your tiles. You create your imaginary "world" of objects and then render it to the screen. The user's touch event does not belong to that world, so you have to "translate" screen coordinates into your world coordinates. Then you change something in your world (maybe the user drags her finger and you have to move an object), and again tell OpenGL to render your world to the screen.
You should operate on your model, not the view. Meshes are more of a view thing, so you shouldn't mix them with the model information; it's good practice to separate the two. (Please, experts, correct me; I'm quite a graphics hobbyist.)
Have you checked out LibGDX?
Makes life so much easier when working with OpenGL ES.
