Is it possible to draw shapes using ShapeRenderer between SpriteBatch begin and end calls?
I have tried, but with no result: only the SpriteBatch textures are drawn, and no shape appears on the scene. Sample code follows:
shapeRenderer.begin(ShapeType.FilledCircle);
shapeRenderer.setColor(0f, 1f, 0f, 1f);
shapeRenderer.filledCircle( 100, 100, 100);
shapeRenderer.end();
I have an orthographic camera created with these commands:
camera = new OrthographicCamera(1, Gdx.graphics.getHeight() / Gdx.graphics.getWidth());
camera.setToOrtho(true);
Both ShapeRenderer and SpriteBatch set OpenGL state that they expect to remain constant while they are in use, so nesting them causes problems. See this post in the badlogic forum.
This should probably be spelled out a bit more clearly in the docs.
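A minimal sketch of the usual workaround, using the same old-style API as the question (batch, texture, and camera are assumed to exist in your surrounding code): end the SpriteBatch before the ShapeRenderer begins, so the two never overlap.
batch.begin();
batch.draw(texture, 0, 0); // sprites first
batch.end(); // finish the batch before any shape rendering

shapeRenderer.setProjectionMatrix(camera.combined); // share the batch's camera
shapeRenderer.begin(ShapeType.FilledCircle);
shapeRenderer.setColor(0f, 1f, 0f, 1f);
shapeRenderer.filledCircle(100, 100, 100);
shapeRenderer.end();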
I'm trying to recreate a shadow effect on some 2D sprites in a project using Slick. To do this, I'm recolouring a copy of the sprite and stretching it with Slick's OpenGL renderer, using this method:
public static void getStretched(Shape shape, Image image) {
TextureImpl.bindNone();
image.getTexture().bind();
SGL GL = Renderer.get();
GL.glEnable(SGL.GL_TEXTURE_2D);
GL.glBegin(SGL.GL_QUADS);
//topleft
GL.glTexCoord2f(0f, 0f);
GL.glVertex2f(shape.getPoints()[0], shape.getPoints()[1]);
//topright
GL.glTexCoord2f(0.5f, 0f);
GL.glVertex2f(shape.getPoints()[2], shape.getPoints()[3]);
//bottom right
GL.glTexCoord2f(1f, 1f);
GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
//bottom left
GL.glTexCoord2f(0.5f, 1f);
GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);
GL.glEnd();
GL.glDisable(SGL.GL_TEXTURE_2D);
TextureImpl.bindNone();
}
This gives almost the desired effect, apart from the fact that the image is cropped a bit.
The cropping becomes more extreme at higher distortions.
I'm new to using OpenGL, so some help regarding how to fix this would be great.
Furthermore, if I feed the method an image that was obtained using getSubImage, OpenGL renders the original image rather than the sub-image.
I'm unsure why this happens, as the sprite itself is taken from a spritesheet using getSubImage and renders without problems.
Help would be greatly appreciated!
I'm recolouring a copy of the sprite and stretching it
The issue is that you stretch the texture coordinates, but the region covered by the sprite stays the same. If the shadow exceeds the region covered by the quad primitive, it is cropped.
You have to "stretch" the vertex coordinates rather than the texture coordinates. Define a rhombic (sheared) geometry for the shadow and wrap the texture onto it:
float distortionX = ...;
GL.glEnable(SGL.GL_TEXTURE_2D);
GL.glBegin(SGL.GL_QUADS);
//topleft
GL.glTexCoord2f(0f, 0f);
GL.glVertex2f(shape.getPoints()[0] + distortionX, shape.getPoints()[1]);
//topright
GL.glTexCoord2f(1f, 0f);
GL.glVertex2f(shape.getPoints()[2] + distortionX, shape.getPoints()[3]);
//bottom right
GL.glTexCoord2f(1f, 1f);
GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
//bottom left
GL.glTexCoord2f(0f, 1f);
GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);
GL.glEnd();
[...] if I feed an image into the method that was obtained using getSubImage, [...] the sprite itself is taken from a spritesheet [...]
The sprite covers just a small rectangular region of the entire texture. This region is defined by the texture coordinates, so you have to use the same texture coordinates as when you draw the sprite itself.
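A sketch of how those coordinates could be plugged into the quad above, assuming Slick's Image accessors getTextureOffsetX/getTextureOffsetY and getTextureWidth/getTextureHeight, which report the sub-image's position and extent in texture space:
float tx = image.getTextureOffsetX(); // left edge of the sub-image in texture space
float ty = image.getTextureOffsetY(); // top edge of the sub-image
float tw = image.getTextureWidth(); // horizontal extent of the sub-image
float th = image.getTextureHeight(); // vertical extent of the sub-image
//top left
GL.glTexCoord2f(tx, ty);
GL.glVertex2f(shape.getPoints()[0] + distortionX, shape.getPoints()[1]);
//top right
GL.glTexCoord2f(tx + tw, ty);
GL.glVertex2f(shape.getPoints()[2] + distortionX, shape.getPoints()[3]);
//bottom right
GL.glTexCoord2f(tx + tw, ty + th);
GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
//bottom left
GL.glTexCoord2f(tx, ty + th);
GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);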
So I am having a hard time understanding how orthographic cameras work in libgdx.
What I want is a camera that will only render things within a square, while another camera sets the bounds for my whole screen.
So here, I was able to do what I wanted on the whole screen for the game pad. But the thing you see on the top right is the background map of the game, and I want to render only the parts that fall within the red square you see here. How do I achieve that?
Are cameras supposed to do that, or do I need to figure out a way to do it manually? I am really confused as to how cameras and projection matrices work.
Here on this screen, the red square and the green pad on the left are being drawn using the projection matrix of my screen camera. The map (top right) is drawn using my map cam.
The map cam has a 400x400 viewport, but as you can see, the tiles are rectangular, which is not the aspect ratio I want. If someone can briefly explain how cameras work, I'd greatly appreciate it.
The reason I am not posting my code here is that I feel I need to understand how camera mechanics work before I can even code it properly, so I want to address that issue first.
Following @Tenfour04's advice worked perfectly. In case anyone wonders what I wanted to achieve, here's a picture.
A camera alone cannot crop off part of the screen. For that you need to use glViewport. There is already a Viewport class in Libgdx that can do that for you. You will need two orthographic cameras (one for the map and one for the GUI), but the viewport can create its own.
private Viewport viewport;
//in create:
viewport = new FitViewport(400, 400);
//in resize:
viewport.update(width, height);
//in render:
viewport.getCamera().position.set(/*...move your map camera as needed*/);
viewport.apply(); //viewport cropped to your red square
batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
//draw map
batch.end();
//return to full screen viewport
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(yourGUICamera.combined);
batch.begin();
//draw gui
batch.end();
What happens is that the camera fits itself to the size of the screen. To change this, you want to use a FrameBuffer. The frame buffer constrains the rendering to the desired size, and its contents can then be drawn as a texture.
Create the frame buffer with the dimensions being in pixels.
//Initialize the buffer
FrameBuffer fbo = new FrameBuffer(Format.RGB565, width, height, false);
Render the world within the buffer.
fbo.begin();
//Draw the world here
fbo.end();
Draw the buffer to the screen with a batch.
batch.begin();
batch.draw(fbo.getColorBufferTexture(), x, y);
batch.end();
So I am developing a small Pong replica simply for some practice with LWJGL. Since there is no easy way to write customizable text in LWJGL, I am using textures for the start button, other buttons, and so on. However, when I drew the texture, it turned out to be discolored on my purple background. If I change the background to white, there is no discoloration. Help?
Also, my start button texture is something like 50x22, but I put it on a 64x64 image because Slick can only load textures whose dimensions are powers of two. I adjusted the rectangle being drawn so that it is not warped, and the rest of the image is transparent, so it shouldn't be visible once I sort out the above problem. Are there any alternatives to my method?
This is where I initialize my OpenGL stuff:
public static void setCamera()
{
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,width,0,height,-1,1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
And this is where I draw the texture:
public void draw()
{
logic();
glPushMatrix();
glTranslated(x,y,0);
texture.bind();
glBegin(GL_QUADS);
glTexCoord2d(0,1); glVertex2d(0,-196.875);
glTexCoord2d(1,1); glVertex2d(width+18.75,-196.875);
glTexCoord2d(1,0); glVertex2d(width+18.75,height);
glTexCoord2d(0,0); glVertex2d(0,height);
glEnd();
glPopMatrix();
}
Thanks :)
As discussed in the comments, your initial problem was that you had neglected to reset the "current color" before drawing your texture. GL is a glorified state machine; it will keep using the color you set for every subsequent draw operation, so the glColor3d (...) call you made when drawing your background also affects your foreground image.
Adding the following before drawing your textured quad will fix this problem:
glColor3f (1.0f, 1.0f, 1.0f);
However, you have brought up a new issue in your comments related to blending. This question boils down to a lack of a blending function. By default when you draw something in OpenGL it will merely overwrite anything else in the framebuffer.
What you need for transparency to work is enable blending and use this function:
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This effectively does the following:
NewColor = (NewColor.RGB * NewColor.A) + OldColor.RGB * (1.0 - NewColor.A)
With this, the parts of your texture that have an alpha value != 1.0 will be a mix of whatever was already in the framebuffer and what you just drew.
Remember to enable blending before you draw your transparent objects:
glEnable (GL_BLEND);
and disable it when you do not need it:
glDisable (GL_BLEND);
Lastly, you should be aware that the order you draw translucent objects in OpenGL is pretty important. Opaque objects (such as your background) need to be drawn first. In general, you need to draw things from back-to-front in order for alpha blending to function correctly. This is the opposite order you would ideally want for opaque geometry (since hardware can skip shading for obstructed objects if you draw objects in front of them first).
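Putting the pieces together, a minimal sketch of the suggested draw order (drawBackground and buttonTexture are hypothetical stand-ins for your own code):
glColor3f(1.0f, 1.0f, 1.0f); // reset the current color so it does not tint the texture
drawBackground(); // opaque geometry first (hypothetical helper)

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
buttonTexture.bind(); // hypothetical Slick-loaded texture
// ... draw the textured quad here, back-to-front if there are several ...
glDisable(GL_BLEND);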
I'm creating an app that only runs in landscape mode. I'm trying to create a background using a textured quad, although I'm not going to worry about texturing yet. I've been trying to draw a quad that fills the screen from drawOverlay(GL10 gl) with GL_DEPTH_TEST disabled, but whenever I do, the quad does not completely fill the screen and I can see bars of the glClearColor at the top and bottom of the screen.
Unable to draw it using the modelview matrix I was using for all the other objects, I tried drawing it with gluOrtho2D and glOrthof, but neither worked. I don't really understand how the near and far clipping planes work with orthographic drawing. Whenever I tried to draw it using gluOrtho2D or glOrthof, the quad wasn't drawn at all (although the rest of the scene was still rendered).
Here is my attempt at drawing using an orthographic matrix
private void drawOverlay(GL10 gl) {
gl.glDisable(GL10.GL_DEPTH_TEST);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, 0f, 1f, 1f, 0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
background.draw(gl, 1.0f, 1.0f, 1.0f, 1.0f);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL10.GL_MODELVIEW);
}
I call that function from the beginning of onDrawFrame - before anything else is drawn:
public void onDrawFrame(GL10 gl) {
drawOverlay(gl);
gl.glLoadIdentity();
//...
}
Here is how "background" is created:
background = new ShapePostcard(1f, 1f);
I'm fairly certain I'm not going to be able to get the quad to cover the screen using the normal modelview matrix, but basically all I was doing was drawing "background" in onDrawFrame before everything else with depth testing disabled.
Thanks for any support!
The easiest way to draw a quad that fills the screen is to set both the projection and modelview matrices to the identity, and then draw a mesh with coordinates from [-1,-1] to [1,1]. Is this what you were drawing when you saw the borders?
I mean,
(x, y, width, height) = (-1, -1, 2, 2)
Also note that OpenGL ES does not support quads, so draw the mesh as triangles (e.g., a triangle strip):
http://www.songho.ca/opengl/gl_vertexarray.html
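A sketch under those assumptions, for the GL10 context used in the question (the buffer classes come from java.nio): a full-screen quad drawn as a triangle strip, with identity matrices so the vertex coordinates are interpreted directly as normalized device coordinates.
// Full-screen quad as a triangle strip in normalized device coordinates.
float[] verts = { -1f, -1f, 1f, -1f, -1f, 1f, 1f, 1f };
FloatBuffer vbuf = ByteBuffer.allocateDirect(verts.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vbuf.put(verts).position(0);

gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();

gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vbuf);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);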
I am attempting to render an ellipse using ShapeRenderer, and have come up with the following partial solution:
void drawEllipse(float x, float y, float width, float height) {
float r = (width / 2);
ShapeRenderer renderer = new ShapeRenderer();
renderer.setProjectionMatrix(/* camera matrix */);
renderer.begin(ShapeType.Circle);
renderer.scale(1f, (height / width), 1f);
renderer.circle(x + r, y, r);
renderer.identity();
renderer.end();
}
This draws an ellipse at the specified coordinates with the correct width and height; however, the scale transformation also appears to translate the circle in the viewport, and I have not been successful in working out the mathematics behind that translation. I am using an orthographic projection with y-up where the coordinates map to a pixel on the screen. I am not very familiar with OpenGL.
How can I draw an ellipse using libgdx, and have it draw the ellipse at the exact coordinates I specify? Ideally, that would mean that the origin of the ellipse is located in the top-left corner, if the ellipse was contained in a rectangle.
The new Libgdx ShapeRenderer API (in the current nightlies, and in whatever release comes after v0.9.8) contains an ellipse drawing method, so you can ignore the rest of this answer. Note that the ShapeRenderer API has changed in other ways too (e.g., ShapeType is just Filled, Line, or Point now).
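For example, a sketch assuming the post-0.9.8 API, where ellipse takes the bounding rectangle of the ellipse:
renderer.begin(ShapeType.Line); // or ShapeType.Filled
renderer.ellipse(x, y, width, height); // x, y: corner of the bounding rectangle
renderer.end();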
For folks stuck with the older API, you should be able to work around the distortion by making sure the scaling happens around the origin. This is standard OpenGL practice (so it's a bit obtuse, but they're following OpenGL's lead). See Opengl order of matrix transformations and OpenGL: scale then translate? and how?. Even better (again standard OpenGL practice), you end up listing the operations in the reverse of the order you want them applied: to make a circle, distort it into an ellipse, and then move it to a specific destination, you actually write code like:
renderer.begin(ShapeType.Circle);
renderer.translate(x, y, 0);
renderer.scale(1f, (height/width), 1f);
renderer.circle(0, 0, r);
renderer.end();