I have a depth texture defined as the following:
//shadow FBO and texture
depthFBO = new FrameBufferObject().create().bind();
depthTexture = new Texture2D().create().bind()
.storage2D(12, GL_DEPTH_COMPONENT32F, 4096, 4096)
.minFilter(GL_LINEAR)
.magFilter(GL_LINEAR)
.compareMode(GL_COMPARE_REF_TO_TEXTURE)
.compareFunc(GL_LEQUAL);
depthFBO.texture2D(GL_DEPTH_ATTACHMENT, GL11.GL_TEXTURE_2D, depthTexture, 0)
.checkStatus().unbind();
depthTexture.unbind();
It's written in Java with LWJGL and my own small framework, but the idea should be clear.
Then I use a fragment shader at some point to visualize the data in it (fragment's depth):
#version 430 core
layout(location = 7) uniform int screen_width;
layout(location = 8) uniform int screen_height;
layout(binding = 0) uniform sampler2D shadow_tex;
out vec4 color;
void main(void) {
vec2 tex_coords = vec2(
(gl_FragCoord.x - 100) / (screen_width / 5),
(gl_FragCoord.y - (screen_height - (screen_height / 5) - 100)) / (screen_height / 5)
);
float red_channel = texture(shadow_tex, tex_coords).r;
if (red_channel < 0.999) {
red_channel = red_channel / 2.0;
}
color = vec4(vec3(red_channel), 1.0);
}
My tex_coords and shadow_tex are correct, but I need some more clarification on reading out of a GL_DEPTH_COMPONENT32F format.
I want to read the depth, and I assume it is stored between 0.0 and 1.0 as a 4-byte float value.
So I thought I could use the red channel of what texture() gives me back, but I cannot see any difference in depth, other than it not being exactly 1.0 but somewhat lower. If I do not divide by 2.0, everything appears white.
Note that the floor is black at some points, but that is due to failed shadow mapping, which is why I am visualizing it in the first place. However, it is temporarily set to use the normal view MVP instead of the light's, to also make sure the depth information is saved correctly.
UPDATE: The colorization of the depth value is now working correctly with the following:
#version 430 core
layout(location = 7) uniform int screen_width;
layout(location = 8) uniform int screen_height;
layout(binding = 0) uniform sampler2D shadow_tex;
out vec4 color;
float linearize_depth(float original_depth) {
float near = 0.1;
float far = 1000.0;
return (2.0 * near) / (far + near - original_depth * (far - near));
}
void main(void) {
//calculate texture coordinates, based on current dimensions and positions of the viewport
vec2 tex_coords = vec2(
(gl_FragCoord.x - 100) / (screen_width / 5),
(gl_FragCoord.y - (screen_height - (screen_height / 5) - 100)) / (screen_height / 5)
);
//retrieve depth value from the red channel
float red_channel = texture(shadow_tex, tex_coords).r;
//colorize depth value, only if there actually is an object
if (red_channel < 0.999) {
red_channel = linearize_depth(red_channel) * 4.0;
}
color = vec4(vec3(red_channel), 1.0);
}
I would still like clarification, though: is accessing the red component the correct way to retrieve the depth value?
In GLSL 4.30?
No.
If this is truly a depth texture (internal format = GL_DEPTH_COMPONENT[...]), then GLSL automatically samples it as vec4(r, r, r, 1.0). Older versions behaved differently depending on the "depth texture mode" (which was removed in GL 3.1 / GLSL 1.30).
Now, if it is a depth texture with comparison enabled, as your code implies, then sampling it through a sampler2D is undefined. If you use a sampler2DShadow instead, sampling with texture(...) returns a single float, unlike all the other texture(...) overloads (which return a vec4).
Hopefully the comparison setup is an oversight in the Java code you pasted, because as it stands your shader should be producing undefined results.
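For reference, here is a minimal sketch of what sampling the same texture through a sampler2DShadow looks like (the tex_coords and reference_depth inputs are placeholders, not from the posted shader); the alternative is to set the compare mode to GL_NONE on the Java side, which makes the existing sampler2D visualization well-defined:
#version 430 core
// Sketch only: with GL_COMPARE_REF_TO_TEXTURE enabled, the texture must be
// accessed through a shadow sampler. The third coordinate is the reference
// depth that the hardware compares against each texel.
layout(binding = 0) uniform sampler2DShadow shadow_tex;
in vec2 tex_coords;       // placeholder input
in float reference_depth; // placeholder: fragment depth in the light's [0,1] range
out vec4 color;
void main(void) {
    // texture() on a sampler2DShadow returns a single float: the comparison
    // result (with GL_LINEAR filters, typically a 2x2 PCF average).
    float lit = texture(shadow_tex, vec3(tex_coords, reference_depth));
    color = vec4(vec3(lit), 1.0);
}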
Currently, I need the fragment shader to write to a texture (which it does), but rather than overwriting, it blends. Here is the fragment shader itself:
#version 400 core
in vec2 pass_textureCoordinates;
out vec4 out_Color;
layout(location = 1) out vec4 out_location0;
uniform sampler2D modelTexture;
uniform sampler2D bumpTexture;
uniform sampler2D overlayTexture;
uniform sampler2D scratchLevels;
void main(void)
{
vec2 txt = pass_textureCoordinates;
vec4 base = texture(overlayTexture,txt);
vec4 over = texture(modelTexture,txt);
float baseA = base[3] * (1.0f - over[3]);
float overA = over[3];
float finalA = base[3] + (1.0f - base[3]) * overA;
if(finalA == 0)
{
out_Color[0] = 0;
out_Color[1] = 0;
out_Color[2] = 0;
}
else
{
out_Color[0] = (base[0] * baseA + over[0] * overA) / finalA;
out_Color[1] = (base[1] * baseA + over[1] * overA) / finalA;
out_Color[2] = (base[2] * baseA + over[2] * overA) / finalA;
}
out_Color[3] = finalA;
out_location0 = out_Color;
}
How can I write to the texture without blending?
Edit: I need to overwrite the alpha channel as well
Blending depends on the blend function and can be disabled (glDisable(GL_BLEND)).
If you're using the traditional alpha blending function (glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) or premultiplied alpha blending (glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)), you can treat the texture as opaque by setting the output alpha channel to 1:
out_Color[3] = finalA; // before
out_Color[3] = 1.0;    // after: full opacity, so the destination no longer contributes
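For completeness, a minimal sketch of the first approach, disabling blending just for this pass (LWJGL-style GL calls; the draw method is a placeholder):
// Sketch: fragments now overwrite the attachment, alpha channel included.
glDisable(GL_BLEND);
renderToTexturePass();   // placeholder for whatever issues the draw call
// Restore blending for passes that do need it.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);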
Well, you have several solutions to choose from:
The easiest is to disable blending before the render call with glDisable(GL_BLEND) and enable it again afterwards.
You could clear the data of the texture before rendering (if that is what you want) with OpenGL calls, e.g. glClearColor(r, g, b, a) followed by glClear(GL_COLOR_BUFFER_BIT).
Change the alpha in your fragment shader, e.g. to
if (out_Color.a > 0) out_Color = vec4(vec3(0.0) * (1.0 - out_Color.a) + out_Color.rgb * out_Color.a, 1.0); this just manually composites against a black background, but you could also discard the fragment if out_Color.a == 0 (just discard;, which would, at least without a glClear call, leave the old data visible at that pixel).
I hope I was able to help you :)
There is a particle system for an explosion:
Vertex shader:
#version 300 es
uniform float u_lastTimeExplosion; // time elapsed since the explosion
// explosion center (particle coordinates are set relative to this center)
uniform vec3 u_centerPosition;
uniform float u_sizeSprite;
layout(location = 0) in float a_lifeTime; // particle lifetime in seconds
// initial position of the particle at the time of the explosion
layout(location = 1) in vec3 a_startPosition;
layout(location = 2) in vec3 a_endPosition; // final position of the particle
out float v_lifeTime; // remaining particle lifetime
void main()
{
gl_Position.xyz = a_startPosition + (u_lastTimeExplosion * a_endPosition);
gl_Position.xyz += u_centerPosition;
gl_Position.w = 1.0;
// calculate the remaining particle lifetime
v_lifeTime = 1.0 - (u_lastTimeExplosion / a_lifeTime);
v_lifeTime = clamp(v_lifeTime, 0.0, 1.0);
// calculate sprite size based on remaining life time
gl_PointSize = pow(v_lifeTime, 5.0) * u_sizeSprite;
}
Fragment shader:
#version 300 es
precision lowp float;
in float v_lifeTime;
uniform vec4 u_color;
out vec4 fragColor;
uniform sampler2D s_texture;
void main()
{
vec4 texColor = texture(s_texture, gl_PointCoord);
fragColor = u_color * texColor;
fragColor.a *= v_lifeTime; // increase sprite transparency
}
If the size of the sprite is less than 10, then everything is fine:
GLES20.glUniform1f(sizeSpriteLink, 10f);
If the size of the sprite increases, then there is a slowdown in rendering (FPS reduction):
GLES20.glUniform1f(sizeSpriteLink, 150f);
Strangely, the number of sprites affects performance less than their size does.
Question: Why does sprite size affect performance? I would be grateful for any answer/comment.
Note: mipmaps are used for the particle texture.
I'm not sure, but it seems that increasing the size of the sprite greatly increases the consumption of graphics memory resources. Maybe someone knows more about this?
It is also better to use this type of filtering to increase performance:
// one value is taken from the nearest pyramid level
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_NEAREST_MIPMAP_NEAREST);
I am trying to implement MVP matrices in my engine.
My model matrix works fine, but my view and projection matrices do not.
Here is the creation for both:
public void calculateProjectionMatrix() {
final float aspect = Display.getDisplayWidth() / Display.getDisplayHeight();
final float y_scale = (float) ((1f / Math.tan(Math.toRadians(FOV / 2f))) * aspect);
final float x_scale = y_scale / aspect;
final float frustum_length = FAR_PLANE - NEAR_PLANE;
proj.identity();
proj._m00(x_scale);
proj._m11(y_scale);
proj._m22(-((FAR_PLANE + NEAR_PLANE) / frustum_length));
proj._m23(-1);
proj._m32(-((2 * NEAR_PLANE * FAR_PLANE) / frustum_length));
proj._m33(0);
}
public void calculateViewMatrix() {
view.identity();
view.rotate((float) Math.toRadians(rot.x), Mathf.xRot);
view.rotate((float) Math.toRadians(rot.y), Mathf.yRot);
view.rotate((float) Math.toRadians(rot.z), Mathf.zRot);
view.translate(new Vector3f(pos).mul(-1));
System.out.println(view);
}
The vertices I'm trying to render are:
-0.5f, 0.5f, -1.0f,
-0.5f, -0.5f, -1.0f,
0.5f, -0.5f, -1.0f,
0.5f, 0.5f, -1.0f
I tested the view matrix before uploading it to the shader and it is correct.
This is how I render:
ss.bind();
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
ss.loadViewProjection(cam);
ss.loadModelMatrix(Mathf.transformation(new Vector3f(0, 0, 0), new Vector3f(), new Vector3f(1)));
ss.connectTextureUnits();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
ss.unbind();
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 i_position;
layout(location = 1) in vec2 i_texCoord;
layout(location = 2) in vec3 i_normal;
out vec2 p_texCoord;
uniform mat4 u_proj;
uniform mat4 u_view;
uniform mat4 u_model;
void main() {
gl_Position = u_proj * u_view * u_model * vec4(i_position, 1.0);
p_texCoord = i_texCoord;
}
While the answer of @Rabbid76 is of course totally right in general, the actual issue in this case is a combination of bad API design (in the case of JOML) and not reading the JavaDocs of the methods used.
In particular, the Matrix4f._mNN() methods only set the respective matrix field but omit re-evaluating the matrix properties stored internally. Those properties are used to accelerate/route further matrix operations (most notably multiplication) to more optimized methods when the properties of a matrix are known, such as "is it identity", "does it only represent a translation", "is it a perspective projection", "is it affine", "is it orthonormal", etc.
This is an optimization that JOML applies to, in most cases, very significantly improve the performance of matrix multiplication.
As for bad API design: those methods are only public so that the class org.joml.internal.MemUtil can access them to set matrix elements read from NIO buffers. Since Java does not (yet) have friend classes, those _mNN() methods had to be public for this reason. They are, however, not meant for public/client usage.
I've changed this now, and the next JOML version, 1.9.21, will not expose them anymore.
In order to still set matrix fields explicitly (not necessary in most cases, like these here), one can use the mNN() methods, which do re-evaluate/weaken the matrix properties so that all further operations remain correct, albeit likely non-optimal.
So the issue in this case is actually: JOML still thinks that the manually created perspective projection matrix is the identity matrix and will short-cut further matrix multiplications based on that assumption.
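For illustration, a sketch of the same projection setup that keeps JOML's property tracking intact, either via the property-aware mNN() setters or by letting JOML build the matrix (field names are taken from the question; this is a sketch, not the poster's code):
// Sketch: mNN() (without the leading underscore) re-evaluates the internal
// matrix properties, so JOML no longer assumes proj is the identity.
proj.identity()
    .m00(x_scale)
    .m11(y_scale)
    .m22(-((FAR_PLANE + NEAR_PLANE) / frustum_length))
    .m23(-1.0f)
    .m32(-((2.0f * NEAR_PLANE * FAR_PLANE) / frustum_length))
    .m33(0.0f);
// Or simply let JOML construct the perspective matrix (FOV in degrees, as in the question):
proj.setPerspective((float) Math.toRadians(FOV), aspect, NEAR_PLANE, FAR_PLANE);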
The geometry has to be placed in the viewing frustum. All geometry which is outside of the viewing frustum is clipped and not "visible".
The geometry has to be positioned between the NEAR_PLANE and the FAR_PLANE. Note that for a perspective projection, NEAR_PLANE and FAR_PLANE have to be greater than 0:
0 < NEAR_PLANE < FAR_PLANE
Note that in view space the z axis points out of the viewport. An initial view matrix that looks at the object can be defined by:
pos = new Vector3f(0, 0, (NEAR_PLANE + FAR_PLANE)/2f );
rot = new Vector3f(0, 0, 0);
Note that if the distance to the FAR_PLANE is very large, then the object is possibly very small and almost invisible. In this case, change the initial values:
pos = new Vector3f(0, 0, NEAR_PLANE * 0.99f + FAR_PLANE * 0.01f );
rot = new Vector3f(0, 0, 0);
So I tried everything and found out that you should not initialise the uniform locations to 0.
In the class that extends the ShaderProgram class, I had:
private int l_TextureSampler = 0;
private int l_ProjectionMatrix = 0;
private int l_ViewMatrix = 0;
private int l_ModelMatrix = 0;
Changing it to this:
private int l_TextureSampler;
private int l_ProjectionMatrix;
private int l_ViewMatrix;
private int l_ModelMatrix;
Worked for me.
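The likely reason (this is an assumption about the surrounding code, which isn't shown): if the base ShaderProgram class looks up and stores the uniform locations in its constructor, the subclass field initializers (= 0) run after that super constructor and silently reset the locations back to 0; without the explicit initializers, the values assigned during construction survive. For context, a hedged sketch of how such locations are typically queried after linking (the programID field and the sampler uniform name are placeholders; the matrix uniform names come from the posted vertex shader):
// Sketch: fetch the locations from the linked program instead of relying on defaults.
l_TextureSampler   = glGetUniformLocation(programID, "u_texture"); // hypothetical sampler name
l_ProjectionMatrix = glGetUniformLocation(programID, "u_proj");
l_ViewMatrix       = glGetUniformLocation(programID, "u_view");
l_ModelMatrix      = glGetUniformLocation(programID, "u_model");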
I am trying to make a lighting system. The program changes the texture (block) brightness depending on the light it gets, and it does this for every block (texture) that is visible to the player.
The lighting system itself works perfectly, but when it comes to rendering with shaders everything gets destroyed.
This code is in the render loop -
float light = Lighting.checkLight(mapPosY, mapPosX, this); // Returns the light the current block gets
map[mapPos].light = light; // Applies the light to the block
Shaders.block.setUniformf("lightBlock", light); // Sets the light value to the shader's uniform, to change the brightness of the current block / texture.
batch.draw(map[mapPos].TEXTURE, (mapPosX * Block.WIDTH), (mapPosY * Block.HEIGHT), Block.WIDTH, Block.HEIGHT); // Renders the block / texture to the screen.
The result is pretty random.
As I said, the first two lines work perfectly; the problem is probably in the third line or in the shader itself.
The shader:
Vertex shader -
attribute vec4 a_color;
attribute vec3 a_position;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
varying vec4 vColor;
varying vec2 vTexCoord;
void main() {
vColor = a_color;
vTexCoord = a_texCoord0;
gl_Position = u_projTrans * vec4(a_position, 1.0f);
}
Fragment shader -
varying vec4 vColor;
varying vec2 vTexCoord;
uniform vec2 screenSize;
uniform sampler2D tex;
uniform float lightBlock;
const float outerRadius = .65, innerRadius = .4, intensity = .6;
const float SOFTNESS = 0.6;
void main() {
vec4 texColor = texture2D(tex, vTexCoord) * vColor;
vec2 relativePosition = gl_FragCoord.xy / screenSize - .5;
float len = length(relativePosition);
float vignette = smoothstep(outerRadius, innerRadius, len);
texColor.rgb = mix(texColor.rgb, texColor.rgb * vignette * lightBlock, intensity);
gl_FragColor = texColor;
}
I fixed the problem, but I don't have any idea why this fixed it.
Thanks to Pedro, I focused more on the render loop instead of the shader itself.
Before the loop I added these two lines:
List<Integer> lBlocks = new ArrayList<Integer>();
Shaders.block.setUniformf("lightBlock", 0.3f);
Basically, I created a list to store the bright blocks for later.
I set the shader's uniform to 0.3f, which means pretty dark; the value should be between 0f and 1f.
Now in the render loop, inside the for loop:
float light = Lighting.checkLight(mapPosY, mapPosX, this);
map[mapPos].light = light;
if( light == 1.0f) {
lBlocks.add(mapPos);
} else {
batch.draw(map[mapPos].TEXTURE, (mapPosX * Block.WIDTH), (mapPosY * Block.HEIGHT), Block.WIDTH, Block.HEIGHT);
}
As you can see, I add the bright blocks to the list and render the dark ones; I set the uniform to 0.3f before the render loop, as shown in the first code sample.
After the render loop I loop again over the bright blocks, because we didn't render them yet:
Shaders.block.setUniformf("lightBlock", 1.0f);
for(int i = 0; i < lBlocks.size(); i++) {
batch.draw(map[lBlocks.get(i)].TEXTURE, ((lBlocks.get(i) % width) * Block.WIDTH), ((lBlocks.get(i) / width) * Block.HEIGHT), Block.WIDTH, Block.HEIGHT);
}
Now I render the bright blocks as well, and it works; the result is good.
But I don't have any idea why it behaves like that. It's like cutting the render loop in two: one pass for dark blocks and one for the bright ones.
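A probable explanation (this is an assumption based on how libGDX's SpriteBatch works, not something verified against this project): SpriteBatch queues draw() calls and only submits them to the GPU when it flushes, so a uniform changed between draw() calls applies to everything still pending in the batch rather than to a single block. Splitting the blocks into two passes means each group happens to be flushed with the right uniform value. Flushing explicitly before changing the uniform should achieve the same thing inside a single loop:
// Sketch: push the pending sprites out before changing the per-block uniform,
// so the new value only affects blocks drawn after this point.
batch.flush();
Shaders.block.setUniformf("lightBlock", light);
batch.draw(map[mapPos].TEXTURE, mapPosX * Block.WIDTH, mapPosY * Block.HEIGHT,
        Block.WIDTH, Block.HEIGHT);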
Thanks :)
I'm currently having an issue with directional light shadow maps from a moving (sun-like) light source.
When I initially implemented this, the light's projection matrix was computed as a 3D (perspective) one, and the shadow map appears beautifully. I then learned that for what I'm trying to do an orthographic projection would work better, but I'm having a hard time substituting the proper projection matrix.
Each tick, the sun moves a certain amount along a circle, as one would expect. I use a homegrown "lookAt" method to determine the proper viewing matrix. So, for instance, daylight occurs from 6AM to 6PM. When the sun is at the 9AM position (45 degrees), it should look at the origin and render the shadow map to the framebuffer. What appears to be happening with the orthographic projection is that it doesn't "tilt down" toward the origin; it simply keeps looking straight down the Z axis instead. Things look fine at 6AM and 6PM, but 12 noon, for instance, shows absolutely nothing.
Here's how I'm setting things up:
Original 3D projection matrix:
Matrix4f projectionMatrix = new Matrix4f();
float aspectRatio = (float) width / (float) height;
float y_scale = (float) (1 / cos(toRadians(fov / 2f)));
float x_scale = y_scale / aspectRatio;
float frustum_length = far_z - near_z;
projectionMatrix.m00 = x_scale;
projectionMatrix.m11 = y_scale;
projectionMatrix.m22 = (far_z + near_z) / (near_z - far_z);
projectionMatrix.m23 = -1;
projectionMatrix.m32 = -((2 * near_z * far_z) / frustum_length);
LookAt method:
public Matrix4f lookAt( float x, float y, float z,
float center_x, float center_y, float center_z ) {
Vector3f forward = new Vector3f( center_x - x, center_y - y, center_z - z );
Vector3f up = new Vector3f( 0, 1, 0 );
if ( center_x == x && center_z == z && center_y != y ) {
up.y = 0;
up.z = 1;
}
Vector3f side = new Vector3f();
forward.normalise();
Vector3f.cross(forward, up, side );
side.normalise();
Vector3f.cross(side, forward, up);
up.normalise();
Matrix4f multMatrix = new Matrix4f();
multMatrix.m00 = side.x;
multMatrix.m10 = side.y;
multMatrix.m20 = side.z;
multMatrix.m01 = up.x;
multMatrix.m11 = up.y;
multMatrix.m21 = up.z;
multMatrix.m02 = -forward.x;
multMatrix.m12 = -forward.y;
multMatrix.m22 = -forward.z;
Matrix4f translation = new Matrix4f();
translation.m30 = -x;
translation.m31 = -y;
translation.m32 = -z;
Matrix4f result = new Matrix4f();
Matrix4f.mul( multMatrix, translation, result );
return result;
}
Orthographic projection (using width 100, height 75, near 1.0, far 100). I've tried this with many different values:
Matrix4f projectionMatrix = new Matrix4f();
float r = width * 1.0f;
float l = -width;
float t = height * 1.0f;
float b = -height;
projectionMatrix.m00 = 2.0f / ( r - l );
projectionMatrix.m11 = 2.0f / ( t - b );
projectionMatrix.m22 = 2.0f / (far_z - near_z);
projectionMatrix.m30 = - ( r + l ) / ( r - l );
projectionMatrix.m31 = - ( t + b ) / ( t - b );
projectionMatrix.m32 = -(far_z + near_z) / (far_z - near_z);
projectionMatrix.m33 = 1;
Shadow map vertex shader:
#version 150 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec4 in_Position;
out float pass_Position;
void main(void) {
gl_Position = projectionMatrix * viewMatrix * modelMatrix * in_Position;
pass_Position = gl_Position.z;
}
Shadow map fragment shader:
#version 150 core
in vec4 pass_Color;
in float pass_Position;
layout(location=0) out float fragmentdepth;
out vec4 out_Color;
void main(void) {
fragmentdepth = gl_FragCoord.z;
}
I feel that I'm missing something very simple here. As I said, this works fine with a 3D projection matrix, but I want the shadows to stay constant as the user travels across the world, which makes sense for directional lighting and thus for an orthographic projection.
Actually, who told you that using an orthographic projection matrix would be a good idea for shadow maps? This might work for things like the sun, which is effectively infinitely far away, but for local lights perspective is very relevant. You have to be careful with perspective projection and shadow maps though, because the sample frequency varies with distance: you wind up getting a lot of precision at some distances and not enough at others unless you use things like cascaded shadow maps or perspective warping in general; this is probably more than you should be thinking about at the moment though :)
Also, orthographic projection matrices are no more and no less 3D than perspective ones, insofar as they work by projecting a 3D "image" onto a 2D viewing plane... the only difference from perspective is that parallel lines remain parallel. Put another way, (x, y, near) and (x, y, far) ideally project to the same position on screen in an orthographic projection.
Your use of gl_FragCoord.z in the fragment shader is unusual. Since this is the value that is written to the depth buffer anyway, you might as well write nothing in your fragment shader and re-use the depth buffer. Unless your implementation does not support a floating-point depth buffer, you are wasting memory bandwidth by writing the depth to two places. A depth-only pass with glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE) will usually get you much higher throughput when constructing shadow maps.
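As an illustration, a minimal sketch of such a depth-only pass (standard GL calls in Java/LWJGL form; the FBO variable and the scene-rendering method are placeholders):
// Sketch: render the shadow map with color writes disabled; only depth is written.
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);      // FBO with a depth texture attached
glColorMask(false, false, false, false);           // suppress all color output
glClear(GL_DEPTH_BUFFER_BIT);
renderSceneFromLight();                            // placeholder: draw occluders with the light's view/projection
glColorMask(true, true, true, true);               // restore color writes for the main pass
glBindFramebuffer(GL_FRAMEBUFFER, 0);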
If you actually used the value of pass_Position (which is your non-perspective-corrected Z coordinate in clip space), I could see using a separate color attachment to write it, but currently you are writing the perspective-correct, depth-range-adjusted depth (gl_FragDepth).
In any case, when the sun is directly overhead and you are using an orthographic projection, it is to be expected that no shadows are cast. This goes back to the property I explained earlier, where parallel lines remain parallel: since the distance of an object from the sun has no effect on where the object is projected (orthographically), you will not see any shadows when the sun is directly overhead. Try tracking the sun's position along a sphere instead of a circle to minimize this.
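As a hedged illustration of that last suggestion, here is one way to move the sun along a tilted circle (effectively a great circle of a sphere) so that at noon it still looks at the origin at an angle rather than straight down; the angle convention, radius, and tilt values are made up:
// Sketch: 0 degrees = 6AM, 90 = noon, 180 = 6PM (assumed convention).
float t    = (float) Math.toRadians(sunAngleDegrees);  // placeholder variable
float r    = 80.0f;                                    // arbitrary orbit radius
float tilt = (float) Math.toRadians(30.0);             // arbitrary tilt out of the vertical plane
float sunX = r * (float) Math.cos(t);
float sunY = r * (float) Math.sin(t) * (float) Math.cos(tilt);
float sunZ = r * (float) Math.sin(t) * (float) Math.sin(tilt);
// Aim the light at the origin using the lookAt() method from the question.
Matrix4f lightView = lookAt(sunX, sunY, sunZ, 0f, 0f, 0f);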