JOGL 2.0, render depth buffer to texture - java

I am trying to implement a simple shadow mapping technique in JOGL 2.0 and I am struggling with rendering depth values into a texture. Maybe I am doing it completely wrong, but it is weird that rendering the scene in color works properly. I also found a similar question here on Stack Overflow: Render the depth buffer into a texture using a frame buffer
where the problem is solved by calling
gl.glDrawBuffer(GL2.GL_NONE);
gl.glReadBuffer(GL2.GL_NONE);
However, this does not help in my case. When I render the scene into the texture in color, as normal, the function works properly. Here is the result:
However, after trying to render depth values, it just renders white (and something which doesn't correspond to the scene at all).
---- UPDATED code, which is working properly now:
private void initializeFBO3(GL2 gl) {
//Create frame buffer
gl.glGenFramebuffers(1, frameBufferID, 0);
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
// ------------- Depth buffer texture --------------
gl.glGenTextures(1,depthBufferID,0);
gl.glBindTexture(GL2.GL_TEXTURE_2D, depthBufferID[0]);
gl.glTexImage2D(GL2.GL_TEXTURE_2D, // target texture type
0, // mipmap LOD level
GL2.GL_DEPTH_COMPONENT, // internal pixel format
//GL2.GL_DEPTH_COMPONENT
shadowMapWidth, // width of generated image
shadowMapHeight, // height of generated image
0, // border of image
GL2.GL_DEPTH_COMPONENT, // external pixel format
GL2.GL_UNSIGNED_INT, // datatype for each value
null); // buffer to store the texture in memory
//Some parameters
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_NEAREST);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S, GL2.GL_CLAMP_TO_EDGE);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T, GL2.GL_CLAMP_TO_EDGE);
//Attach 2D texture to this FBO
gl.glFramebufferTexture2D(GL2.GL_FRAMEBUFFER,
GL2.GL_DEPTH_ATTACHMENT,
GL2.GL_TEXTURE_2D,
depthBufferID[0],0);
gl.glBindTexture(GL2.GL_TEXTURE_2D, 0);
//Disable color buffer
//https://stackoverflow.com/questions/12546368/render-the-depth-buffer-into-a-texture-using-a-frame-buffer
gl.glDrawBuffer(GL2.GL_NONE);
gl.glReadBuffer(GL2.GL_NONE);
//Allocate a buffer for reading the depth values back
//(4 bytes per pixel is generous; GL_UNSIGNED_BYTE needs only 1 byte each)
pixels = GLBuffers.newDirectByteBuffer(shadowMapWidth*shadowMapHeight*4);
//Check completeness while this FBO is still bound; checking after
//binding the default framebuffer would test the default FBO instead
//http://www.opengl.org/wiki/FBO#Completeness_Rules
int status = gl.glCheckFramebufferStatus(GL2.GL_FRAMEBUFFER);
if(status != GL2.GL_FRAMEBUFFER_COMPLETE)
{
System.err.println("Can not use FBO! Status error:" + status);
}
//Switch back to the default framebuffer
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
}
public void display(GLAutoDrawable drawable) {
GL2 gl = drawable.getGL().getGL2(); // get the OpenGL graphics context
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity(); // reset the model-view matrix
//Render scene into Frame buffer first
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
renderSmallScene(gl);
//Read pixels from buffer
gl.glBindFramebuffer(GL2.GL_READ_FRAMEBUFFER, frameBufferID[0]);
//Read pixels
gl.glReadPixels(0, 0, shadowMapWidth, shadowMapHeight, GL2.GL_DEPTH_COMPONENT , GL2.GL_UNSIGNED_BYTE, pixels);
//Switch back to default FBO
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
drawSceneObjects(gl);
//Draw the pixels; the format must have a single channel, hence GL_LUMINANCE
gl.glDrawPixels(shadowMapWidth, shadowMapHeight, GL2.GL_LUMINANCE , GL2.GL_UNSIGNED_BYTE, pixels);
}
Working result:

You need to read up on using FBOs, and on OpenGL in general.
In your code you create the FBO and its attachments every frame. That's wrong: it's a huge overhead. Construct your FBOs only once, at init time. Second, you must bind the FBO in order to draw into it (or read from it), otherwise OpenGL will draw into the default FBO. Take a look here and here.
So, once your FBO is ready, you render into it like this:
gl.glBindFramebuffer(GL2.GL_DRAW_FRAMEBUFFER, yourFbo);
drawSceneObjects(gl);
gl.glBindFramebuffer(GL2.GL_READ_FRAMEBUFFER, yourFbo);
readPixelsHere();
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0); //switch back to the default FBO
In fact, in your case, as you leave the FBO bound, you can just call
gl.glBindFramebuffer(GL2.GL_READ_FRAMEBUFFER, yourFbo);
after drawing your geometry.
Also, if you are not using shaders, there is no reason to use a texture as the FBO attachment. Create a renderbuffer instead.
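For completeness, a minimal sketch (JOGL, GL2) of what a depth renderbuffer attachment could look like in place of the depth texture; variable names follow the question's code:
// Create a renderbuffer to hold depth values (not samplable, unlike a texture)
int[] rb = new int[1];
gl.glGenRenderbuffers(1, rb, 0);
gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, rb[0]);
gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_DEPTH_COMPONENT24,
        shadowMapWidth, shadowMapHeight);
// Attach it to the currently bound FBO as its depth attachment
gl.glFramebufferRenderbuffer(GL2.GL_FRAMEBUFFER, GL2.GL_DEPTH_ATTACHMENT,
        GL2.GL_RENDERBUFFER, rb[0]);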

Related

Sprites not fitting according to different screen resolutions on Libgdx

I'm testing my sprite that shows the game title, and on my Motorola Moto G 2nd generation the dimensions of the sprite look good, but I'm also testing on my mother's phone, a Samsung GT-S5830i, and there the height of the sprite looks stretched out.
I'm also trying to understand the concept of a Viewport (I'm using StretchViewport), but I don't know if I'm doing it right. My game is designed for mobile, not desktop.
I did this in my SplashScreen:
this.gameTitle = new Sprite(new Texture(Gdx.files.internal("images/GameTitle.png")));
this.gameTitle.setSize(Configuration.DEVICE_WIDTH - 50, this.gameTitle.getHeight() * Configuration.DEVICE_HEIGHT / Configuration.DEVICE_WIDTH);
DEVICE_HEIGHT and DEVICE_WIDTH are constants holding the dimensions of the screen, and the "-50" is a margin for the sprite.
In my Viewport I used the real size of the screen for the dimensions. Or should I use virtual dimensions? And how does that work?
This is a part of my main class, what can I change?
// Create the orthographic camera
this.orthoCamera = new OrthographicCamera(Configuration.DEVICE_WIDTH, Configuration.DEVICE_HEIGHT);
this.orthoCamera.setToOrtho(false, Configuration.VIRTUAL_GAME_WIDTH, Configuration.VIRTUAL_GAME_HEIGHT);
this.orthoCamera.position.set(this.orthoCamera.viewportWidth / 2f, this.orthoCamera.viewportHeight / 2f, 0);
this.orthoCamera.update();
// Combine SpriteBatch with the camera
this.spriteBatch.setTransformMatrix(this.orthoCamera.combined);
// Create the ViewPort
this.viewPort = new ExtendViewport(Configuration.DEVICE_WIDTH, Configuration.DEVICE_HEIGHT);
I updated my viewport to the ExtendViewport as you said.
Main class render method:
public void render() {
super.render();
// Update Orthographic camera
this.orthoCamera.update();
// Combine SpriteBatch with the camera
this.spriteBatch.setTransformMatrix(this.orthoCamera.combined);
}
Screen class render method:
@Override
public void render(float delta) {
// OpenGL clear screen
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(Gdx.gl.GL_COLOR_BUFFER_BIT | Gdx.gl.GL_DEPTH_BUFFER_BIT);
// SpriteBatch begins
this.game.spriteBatch.begin();
// Display the ClimbUp logo
this.gameTitle.draw(this.game.spriteBatch);
this.character.draw(this.game.spriteBatch);
// SpriteBatch ends
this.game.spriteBatch.end();
}
If you don't want stuff to look distorted on some devices and you don't want black bars (which none of your customers will like), you need to use an ExtendViewport instead of StretchViewport. And the dimensions you give it should be virtual dimensions based on whatever units you would like to work with.
For example, assuming a landscape orientation game, you could use 800 x 480 as virtual dimensions, and then you know that anything within that area (in world units) will be shown on the screen and you can design your game for that. On narrower devices (4:3 ratio) there will be more than 480 vertical units shown, and on wider devices (16:9 ratio) there will be more than 800 horizontal units shown.
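A minimal sketch of creating such a viewport with those example dimensions (the names are illustrative, not from the question):
// Minimum visible world area; wider or taller screens will simply see more.
private static final float MIN_WORLD_WIDTH  = 800f;
private static final float MIN_WORLD_HEIGHT = 480f;

private final ExtendViewport viewPort =
        new ExtendViewport(MIN_WORLD_WIDTH, MIN_WORLD_HEIGHT);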
There's one other option that avoids black bars and distortion, and that's FillViewport. But I think in general that's not a good option because you have no easy way to predict how much of your virtual dimensions are going to get cropped off.
Based on your edited question, here's what I would change in your code:
//No need to create your own camera. ExtendViewport creates its own.
// Pointless to call this now before resize method is called. Call this in render
//XXX this.spriteBatch.setTransformMatrix(this.orthoCamera.combined);
//This is where you give the viewport its minimum virtual dimensions
this.viewPort = new ExtendViewport(Configuration.VIRTUAL_GAME_WIDTH, Configuration.VIRTUAL_GAME_HEIGHT);
//Get reference to the viewport's camera for use with your sprite batch:
this.orthoCamera = (OrthographicCamera) this.viewPort.getCamera();
Then in the resize method:
orthoCamera.position.set(/*wherever you want it*/);
viewPort.update(width, height, false); //width and height are the actual device dimensions provided by the resize method parameters
You might want to position your camera in relation to the size calculated by the viewport. In that case, omit the position.set line above and instead calculate the position after calling viewPort.update. For example, if you want 0,0 in the bottom left of the screen:
viewPort.update(width, height, false);
orthoCamera.position.set(orthoCamera.viewportWidth / 2f, orthoCamera.viewportHeight / 2f, 0);
In your render method you can put this before spriteBatch.begin():
orthoCamera.update(); //good idea to call this right before applying to SpriteBatch, in case you've moved it.
spriteBatch.setProjectionMatrix(orthoCamera.combined);
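Putting the pieces together, a minimal sketch of the resulting resize and render methods (field names taken from the question; treat it as an outline rather than a drop-in replacement):
@Override
public void resize(int width, int height) {
    // width and height are the real device dimensions supplied by libGDX
    viewPort.update(width, height, false);
    // Put 0,0 at the bottom-left of the world area computed by the viewport
    orthoCamera.position.set(orthoCamera.viewportWidth / 2f,
            orthoCamera.viewportHeight / 2f, 0);
}

@Override
public void render(float delta) {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    orthoCamera.update(); // update right before applying to the SpriteBatch
    spriteBatch.setProjectionMatrix(orthoCamera.combined);
    spriteBatch.begin();
    gameTitle.draw(spriteBatch);
    spriteBatch.end();
}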

Box2dlights - Layering lights

How do you make the box2dlights ignore textures and sprites in ambient lighting? For example I have a stage that has the ambient lighting set to dark. I want my lights to brighten up a platform directly underneath the light, but the background image behind the light should remain dark and not lit up. Currently the lights are the top rendered layer and everything underneath the light is lit up.
The right way to achieve this is the following:
Update your physics and cameras.
Render the light map so you can later fetch the texture from the RayHandler's FrameBuffer.
Render your top layers to a transparent FrameBuffer object, in the desired order, but don't render the light map into it. Do not render your HUD here, or whatever top-most layers you don't want to be affected by your lighting.
Finish rendering to your FBO and begin rendering to your screen.
Render the background which is not affected by lights.
Bind your top layers' FBO texture and the light map to texture units 0 and 1 (this matches the example code below).
Begin the shader you will use to blend the light map with your FBO texture. The mixing is quite simple and happens in the fragment shader: gl_FragColor.rgb = tex0.rgb * tex1.rgb, keeping the FBO texture's alpha untouched (in the example below, tex0 is the FBO texture and tex1 is the light map). The RayHandler's ambient light is lost with this rendering method, so you can pass the ambient light colour to the shader and add it to the light map channels; a sketch of such a shader follows the example code.
Bind the texture units to the shader and perform the rendering. This rendering must be done with alpha blending enabled (SRC_ALPHA, ONE_MINUS_SRC_ALPHA).
Bind the default texture unit (TEXTURE0) again so the remaining rendering is done properly: render any remaining top-most layers and the HUD, if any.
Some example code:
@Override
public void render(float delta) {
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
tweenManager.update(delta);
worldUpdate(delta);
/* We have three cameras (foreground + double parallax background) */
moveForegroundCamera(player.getPosition().x, player.getPosition().y);
moveBackground0Camera(player.getPosition().x, player.getPosition().y);
moveBackground1Camera(player.getPosition().x, player.getPosition().y);
cameraMatrixCopy.set(foregroundCamera.combined);
rayHandler.setCombinedMatrix(cameraMatrixCopy.scale(Globals.BOX_TO_WORLD, Globals.BOX_TO_WORLD, 1.0f), foregroundCamera.position.x,
foregroundCamera.position.y, foregroundCamera.viewportWidth * foregroundCamera.zoom * Globals.BOX_TO_WORLD,
foregroundCamera.viewportHeight * foregroundCamera.zoom * Globals.BOX_TO_WORLD);
rayHandler.update();
rayHandler.render();
lightMap = rayHandler.getLightMapTexture();
fbo.begin();
{
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
/* Draw the second background (affected by lights), the player, the enemies and all the objects */
batch.enableBlending();
batch.setProjectionMatrix(background1Camera.combined);
batch.begin();
background1.draw(batch);
batch.end();
batch.setProjectionMatrix(foregroundCamera.combined);
batch.begin();
// Draw stuff...
batch.end();
}
fbo.end();
/* Now let's pile things up: draw the bottom-most layer */
batch.setProjectionMatrix(background0Camera.combined);
batch.disableBlending();
batch.begin();
background0.draw(batch);
batch.end();
/* Blend the frame buffer's texture with the light map in a fancy way */
Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE0);
fboRegion.getTexture().bind(); // fboRegion = new TextureRegion(fbo.getColorBufferTexture());
Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE1);
lightMap.bind();
Gdx.gl20.glEnable(GL20.GL_BLEND);
Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
lightShader.begin();
lightShader.setUniformf("ambient_color", level.getAmbientLightColor());
lightShader.setUniformi("u_texture0", 0);
lightShader.setUniformi("u_texture1", 1);
fullScreenQuad.render(lightShader, GL20.GL_TRIANGLE_FAN, 0, 4);
lightShader.end();
Gdx.gl20.glDisable(Gdx.gl20.GL_BLEND);
Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE0); // Bind again the default texture unit
/* Draw any top-most layers you might have */
hud.draw();
}
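The fragment shader behind lightShader is not shown in the answer. Here is a minimal sketch of what it could look like, kept as a Java string so it is self-contained; the uniform names (u_texture0, u_texture1, ambient_color) come from the code above, while the varying v_texCoords is an assumption matching libGDX's default vertex attributes:
// Hypothetical fragment shader for the light map blend described above.
// Unit 0 (u_texture0) holds the scene FBO texture, unit 1 (u_texture1) the light map.
static final String LIGHT_FRAGMENT_SHADER =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_texture0;\n"
    + "uniform sampler2D u_texture1;\n"
    + "uniform vec4 ambient_color;\n"
    + "void main() {\n"
    + "  vec4 scene = texture2D(u_texture0, v_texCoords);\n"
    + "  vec4 light = texture2D(u_texture1, v_texCoords);\n"
       // Multiply the scene by the light map, re-adding the ambient term
       // that RayHandler's own render pass would otherwise contribute:
    + "  gl_FragColor.rgb = scene.rgb * (light.rgb + ambient_color.rgb);\n"
       // Keep the FBO texture's alpha untouched so the alpha blending
       // against the already-drawn background works as described:
    + "  gl_FragColor.a = scene.a;\n"
    + "}\n";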

Call glReadPixels Within display() JOGL

I am trying to implement object picking. I have code to render the objects as solid, unlit colors, then I read the pixels in the screen buffer. I interpret the readings from glReadPixels() to determine which object the cursor is currently on. Finally, I re-render everything lit, textured, and colored.
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL_MODELVIEW);
gl.glLoadIdentity();
gl.glPushMatrix();
//Picking render goes here...
gl.glPopMatrix();
//The following code reads the current pixel color at the center of the screen
FloatBuffer buffer = FloatBuffer.allocate(4);
gl.glReadBuffer(GL_FRONT);
gl.glReadPixels(drawable.getWidth() / 2, drawable.getHeight() / 2, 1, 1, GL_RGBA, GL_FLOAT, buffer);
float[] pixels = new float[3];
pixels = buffer.array();
float red = pixels[0];
float green = pixels[1];
float blue = pixels[2];
System.out.println(red + ", " + green + ", " + blue);
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL_MODELVIEW);
gl.glLoadIdentity();
gl.glPushMatrix();
//Final render goes here...
gl.glPopMatrix();
The problem is that the call to glReadPixels() returns the pixel color of the FINAL render, rather than the picking render. I would appreciate an explanation of how to read the picking render's pixels.
I am aware that I can use the OpenGL name stack, but frankly I find it inconvenient and ridiculous.
The problem is that you render to the back buffer, but your glReadPixels() call reads from the front buffer. If you render to a double buffered surface (which you almost always want when using OpenGL), your rendering goes to the back buffer by default. You have this call in your code:
gl.glReadBuffer(GL_FRONT);
This tells the following glReadPixels() call to read from the front buffer, which is the buffer you are not drawing to.
You should be able to simply remove the glReadBuffer() call, as GL_BACK is the default read buffer for a double buffered default rendering surface.
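A minimal sketch of the corrected read-back, based on the question's code; it relies on GL_BACK being the default read buffer of a double-buffered surface:
// The picking render has just been drawn into the (default) back buffer.
// Read the center pixel directly, without any glReadBuffer(GL_FRONT) call:
FloatBuffer buffer = FloatBuffer.allocate(4);
gl.glReadPixels(drawable.getWidth() / 2, drawable.getHeight() / 2, 1, 1,
        GL_RGBA, GL_FLOAT, buffer);
float red   = buffer.get(0);
float green = buffer.get(1);
float blue  = buffer.get(2);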

Insufficient buffer size when trying to create a texture with openGL

I'm currently trying to generate a textured polygonal surface using JOGL, and I'm getting an error message I don't understand. Eclipse tells me "java.lang.IndexOutOfBoundsException: Required 430233 remaining bytes in buffer, only had 428349". As far as I can see, the buffered image being generated by the readTexture method is not of sufficient size to use with the glTexImage2D() method. However, I'm not sure how to go about resolving the issue. The relevant sections of code are below, and any help would be much appreciated.
public void init(GLAutoDrawable drawable)
{
final GL2 gl = drawable.getGL().getGL2();
GLU glu = GLU.createGLU();
//Create the GLU object which allows access to the GLU library
gl.glShadeModel(GL2.GL_SMOOTH); // Enable Smooth Shading
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
gl.glClearDepth(1.0f); // Depth Buffer Setup
gl.glEnable(GL.GL_DEPTH_TEST); // Enables Depth Testing
gl.glDepthFunc(GL.GL_LEQUAL); // The Type Of Depth Testing To Do
gl.glEnable(GL.GL_TEXTURE_2D);
texture = genTexture(gl);
gl.glBindTexture(GL.GL_TEXTURE_2D, texture);
TextureReader.Texture texture = null;
try {
texture = TextureReader.readTexture ("/C:/Users/Alex/Desktop/boy_reaching_up_for_goalpost_stencil.png");
} catch (IOException e) {
e.printStackTrace();
throw new RuntimeException(e);
}
makeRGBTexture(gl, glu, texture, GL.GL_TEXTURE_2D, false);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
}
private void makeRGBTexture(GL gl, GLU glu, TextureReader.Texture img,
int target, boolean mipmapped) {
if (mipmapped) {
glu.gluBuild2DMipmaps(target, GL.GL_RGB8, img.getWidth(),
img.getHeight(), GL.GL_RGB, GL.GL_UNSIGNED_BYTE, img.getPixels());
} else {
gl.glTexImage2D(target, 0, GL.GL_RGB, img.getWidth(),
img.getHeight(), 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, img.getPixels());
}
}
private int genTexture(GL gl) {
final int[] tmp = new int[1];
gl.glGenTextures(1, tmp, 0);
return tmp[0];
}
//Within the TextureReader class
public static Texture readTexture(String filename, boolean storeAlphaChannel)
throws IOException {
BufferedImage bufferedImage;
if (filename.endsWith(".bmp")) {
bufferedImage = BitmapLoader.loadBitmap(filename);
} else {
bufferedImage = readImage(filename);
}
return readPixels(bufferedImage, storeAlphaChannel);
}
The error is being generated by the call to glTexImage2D() inside the makeRGBTexture() method.
By default, the GL expects that each line of an image starts at a memory address divisible by 4 (4-byte alignment). With RGBA images, this is always the case (as long as the first pixel is correctly aligned). But with RGB images, this is only the case when the width is also divisible by 4. Note that this is totally unrelated to the "power of two" requirement of very old GPUs.
With your particular image resolution of 227x629, you get 681 bytes per line, so the GL expects 3 additional padding bytes per line. For 629 lines, this makes 1887 extra bytes. If you look at those numbers, you can see that the buffer is just 1884 bytes too small. The difference of 3 comes from the fact that the 3 padding bytes at the end of the last line are not needed: there is no next line to be started, and the GL won't read beyond the end of the data.
So you have two options here: align the image data the way the GL expects it (that is, pad every line with some extra bytes), or, the simpler approach from the user's point of view, just tell the GL that your data is tightly packed (1-byte alignment) by calling glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before you specify the image data.
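A minimal sketch of the second fix applied to the question's makeRGBTexture() method:
private void makeRGBTexture(GL gl, GLU glu, TextureReader.Texture img,
        int target, boolean mipmapped) {
    // The rows coming from TextureReader are tightly packed, so disable
    // the default 4-byte row alignment before uploading the RGB data:
    gl.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);
    if (mipmapped) {
        glu.gluBuild2DMipmaps(target, GL.GL_RGB8, img.getWidth(),
                img.getHeight(), GL.GL_RGB, GL.GL_UNSIGNED_BYTE, img.getPixels());
    } else {
        gl.glTexImage2D(target, 0, GL.GL_RGB, img.getWidth(),
                img.getHeight(), 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, img.getPixels());
    }
}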

Problems rendering to texture - scaling issues(?)

I'm trying to get my render-to-texture working. So far, it does all the necessary GL gubbins to draw on the texture and everything; the only problem is that it gets the scaling all off.
I figured I'd want to set the viewport to the size of the texture, and the gluOrtho2D (the way I'm going to be drawing onto the texture) as -halfWidth, halfWidth, -halfHeight, halfHeight. This means that when drawing, position 0,0 should be in the center, a position of halfWidth, halfHeight should be in the top right corner, etc.
I'm getting really weird effects though; it seems that it's not drawing on the texture at the right scale, so everything gets skewed. Can anyone suggest what I might be doing wrong?
Thanks
public void renderToTexture(GLRenderer glRenderer, GL10 gl)
{
boolean checkIfContextSupportsExtension = checkIfContextSupportsExtension(gl, "GL_OES_framebuffer_object");
if(checkIfContextSupportsExtension)
{
GL11ExtensionPack gl11ep = (GL11ExtensionPack) gl;
int mFrameBuffer = createFrameBuffer(gl,texture.getWidth(), texture.getHeight(), texture.getGLID());
if (mFrameBuffer == -1)
{
return;
}
gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, mFrameBuffer);
int halfWidth = texture.getWidth()/2;//width/2;
int halfHeight = texture.getHeight()/2;//height/2;
gl.glViewport(0,0,texture.getWidth(), texture.getHeight());
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, -halfWidth, halfWidth , -halfHeight, halfHeight);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glClearColor(0f, 1f, 0f, 1f);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
//draw the old version of the texture to framebuffer:
Quad quad = new Quad(texture.getWidth(), texture.getHeight());
quad.setTexture(texture);
SpriteRenderable sr = new SpriteRenderable(quad);
sr.renderTo(glRenderer, gl, 1);
//draw the new gl objects etc to framebuffer
for (Renderable renderable : renderThese)
{
if (renderable.isVisible()) {
renderable.renderTo(glRenderer, gl, 1);
}
}
//default to the old framebuffer
gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
}
}
images:
This one is the game prior to any texture rendering.
image1
This is after the "blood splats" (currently pigs!) are rendered on the "arena" background texture shown in picture 1. Notice that the original texture has shrunk too small to see in the middle (it's a few pixels) and the pig "blood splat" jumps in a zig-zag, flipping over the center of the texture and becoming smaller...
image2
(sorry, I don't have enough rep to post images in the post!)
Just a speculative guess, but do you remember to set the matrix mode to GL_PROJECTION prior to entering the renderToTexture function? It's not set inside the function, where it seems like it should be. Also, don't forget to restore the projection matrix and the viewport at the end of the function.
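A minimal sketch of that fix around the question's FBO rendering (GL10-style; screenWidth and screenHeight are assumed fields holding the on-screen viewport size):
// Select and save the projection matrix before setting up the ortho projection
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, -halfWidth, halfWidth, -halfHeight, halfHeight);
gl.glViewport(0, 0, texture.getWidth(), texture.getHeight());
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();

// ... render the quad and the blood splats into the FBO here ...

// Restore the projection matrix and the on-screen viewport afterwards
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glViewport(0, 0, screenWidth, screenHeight);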
