View & Projection Matrices are not working - java

I am trying to implement MVP Matrices into my engine.
My Model matrix is working fine but my View and Projection Matrices do not work.
Here is the creation for both:
public void calculateProjectionMatrix() {
final float aspect = (float) Display.getDisplayWidth() / Display.getDisplayHeight(); // cast avoids integer division
final float y_scale = (float) ((1f / Math.tan(Math.toRadians(FOV / 2f))) * aspect);
final float x_scale = y_scale / aspect;
final float frustum_length = FAR_PLANE - NEAR_PLANE;
proj.identity();
proj._m00(x_scale);
proj._m11(y_scale);
proj._m22(-((FAR_PLANE + NEAR_PLANE) / frustum_length));
proj._m23(-1);
proj._m32(-((2 * NEAR_PLANE * FAR_PLANE) / frustum_length));
proj._m33(0);
}
public void calculateViewMatrix() {
view.identity();
view.rotate((float) Math.toRadians(rot.x), Mathf.xRot);
view.rotate((float) Math.toRadians(rot.y), Mathf.yRot);
view.rotate((float) Math.toRadians(rot.z), Mathf.zRot);
view.translate(new Vector3f(pos).mul(-1));
System.out.println(view);
}
The vertices I'm trying to render are:
-0.5f, 0.5f, -1.0f,
-0.5f, -0.5f, -1.0f,
0.5f, -0.5f, -1.0f,
0.5f, 0.5f, -1.0f
I tested the view matrix before uploading it to the shader and it is correct.
This is how I render:
ss.bind();
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
ss.loadViewProjection(cam);
ss.loadModelMatrix(Mathf.transformation(new Vector3f(0, 0, 0), new Vector3f(), new Vector3f(1)));
ss.connectTextureUnits();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
ss.unbind();
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 i_position;
layout(location = 1) in vec2 i_texCoord;
layout(location = 2) in vec3 i_normal;
out vec2 p_texCoord;
uniform mat4 u_proj;
uniform mat4 u_view;
uniform mat4 u_model;
void main() {
gl_Position = u_proj * u_view * u_model * vec4(i_position, 1.0);
p_texCoord = i_texCoord;
}

While the answer of @Rabbid76 is of course totally right in general, the actual issue in this case is a combination of bad API design (in the case of JOML) and not reading the JavaDocs of the used methods.
In particular, the Matrix4f._mNN() methods only set the respective matrix field but omit reevaluating the matrix properties stored internally. Those properties are used to accelerate/route further matrix operations (most notably multiplication) to more optimized methods when the properties of a matrix are known, such as "is it the identity", "does it only represent a translation", "is it a perspective projection", "is it affine", "is it orthonormal", etc.
This is an optimization that JOML applies in order to, in most cases, very significantly improve the performance of matrix multiplications.
As for the bad API design: those methods are only public in order for the class org.joml.internal.MemUtil to access them to set matrix elements read from NIO buffers. Since Java does not (yet) have friend classes, the _mNN() methods had to be public for this reason. They are, however, not meant for public/client usage.
I've changed this now and the next JOML version 1.9.21 will not expose them anymore.
In order to still set matrix fields explicitly (not necessary in most cases, like here), one can use the mNN() methods, which do reevaluate/weaken the matrix properties so that all further operations remain correct, albeit likely non-optimal.
So the issue in this case is actually: JOML still thinks that the manually created perspective projection matrix is the identity matrix and will short-cut further matrix multiplications based on that assumption.
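A minimal sketch of the fix, assuming proj is an org.joml.Matrix4f and FOV is the vertical field of view in degrees (as Math.toRadians(FOV / 2f) in the question implies): either let JOML build the matrix via setPerspective(), which keeps the internal matrix properties consistent, or keep the manual math and replace every _mNN(...) call with the corresponding public mNN(...) setter.
public void calculateProjectionMatrix() {
    final float aspect = (float) Display.getDisplayWidth() / Display.getDisplayHeight();
    // setPerspective expects the vertical FOV in radians and overwrites the
    // whole matrix, updating the internal matrix properties along the way
    proj.setPerspective((float) Math.toRadians(FOV), aspect, NEAR_PLANE, FAR_PLANE);
}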

The geometry has to be placed in the viewing frustum. All geometry that is out of the viewing frustum is clipped and not "visible".
The geometry has to be positioned between the NEAR_PLANE and the FAR_PLANE. Note, with perspective projection NEAR_PLANE and FAR_PLANE have to be greater than 0:
0 < NEAR_PLANE < FAR_PLANE
Note, in view space the z axis points out of the viewport. An initial view matrix that looks at the object can be defined by:
pos = new Vector3f(0, 0, (NEAR_PLANE + FAR_PLANE)/2f );
rot = new Vector3f(0, 0, 0);
Note, if the distance to the FAR_PLANE is very large, then the object is possibly very small and almost invisible. In this case, change the initial values:
pos = new Vector3f(0, 0, NEAR_PLANE * 0.99f + FAR_PLANE * 0.01f );
rot = new Vector3f(0, 0, 0);
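To make this concrete with assumed values: for NEAR_PLANE = 0.1f and FAR_PLANE = 1000f, the first suggestion places the camera at z = (0.1 + 1000) / 2 ≈ 500.05, so the unit-sized quad at z = -1 is roughly 501 units away and covers only a few pixels. The second suggestion places the camera at z = 0.1 * 0.99 + 1000 * 0.01 ≈ 10.1, about 11 units from the quad, close enough to see it clearly while still keeping it inside the frustum.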

So I tried everything and found out that you should not explicitly initialise the uniform location fields to 0.
In the class that extends the ShaderProgram class, I had:
private int l_TextureSampler = 0;
private int l_ProjectionMatrix = 0;
private int l_ViewMatrix = 0;
private int l_ModelMatrix = 0;
Changing it to this:
private int l_TextureSampler;
private int l_ProjectionMatrix;
private int l_ViewMatrix;
private int l_ModelMatrix;
Worked for me.
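This is most likely an initialization-order issue rather than anything about the value 0 itself: in Java, a subclass's field initializers run only after the superclass constructor has finished. If the ShaderProgram constructor looks up the uniform locations (a common pattern in such engines; the names below are assumptions for illustration), the explicit = 0 initializers then overwrite the looked-up values. A hypothetical sketch of the pitfall:
public abstract class ShaderProgram {
    protected int programId; // assumed field, set while compiling/linking
    public ShaderProgram() {
        // ... compile and link the shaders, assign programId ...
        getAllUniformLocations(); // the subclass assigns its location fields here
    }
    protected abstract void getAllUniformLocations();
    protected int getUniformLocation(String name) {
        return GL20.glGetUniformLocation(programId, name);
    }
}
public class StaticShader extends ShaderProgram {
    // this initializer runs AFTER super(), clobbering the looked-up location
    private int l_ProjectionMatrix = 0;
    @Override
    protected void getAllUniformLocations() {
        l_ProjectionMatrix = getUniformLocation("u_proj");
    }
}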

Related

How to anchor an object using Cardboard Java SDK

I'm currently working on a project with the Cardboard SDK, and I'm rather stuck right now.
I want to display a cross in the center of the sight, like in an FPS, and keep it in the center of the sight when the user moves his head.
I know that in this code:
public void onNewFrame(HeadTransform headTransform) {
float[] headView = new float[16];
headTransform.getHeadView(headView, 0);
}
the headView param will contain the transformation matrix (rotation + translation) of the head (thanks to this SO question: Android VR Toolkit - HeadTransform getHeadView matrix representation).
I tried to do this :
private float[] mHeadView = new float[16];
public void onNewFrame(HeadTransform headTransform) {
headTransform.getHeadView(mHeadView, 0);
}
public void onDrawEye(Eye eye) {
float[] mvpMatrix = new float[16];
float[] modelMatrix = new float[16];
float[] mvMatrix = new float[16];
float[] camera = new float[16];
Matrix.setLookAtM(camera, 0, 0, 0, -2, 0, 0, -1, 0, 1, 0);
Matrix.multiplyMM(modelMatrix, 0, mHeadView, 0, camera, 0);
Matrix.multiplyMM(mvMatrix, 0, eye.getEyeView(), 0, modelMatrix, 0);
Matrix.multiplyMM(mvpMatrix, 0, eye.getEyePerspective(0.1f, 100f), 0, mvMatrix, 0);
// Pass the mvpMatrix and vertices buffer to the vertex shader.
}
And here is my vertex shader :
uniform mat4 uMatrix;
attribute vec4 vPosition;
attribute vec4 vColors;
varying vec4 color;
void main() {
color = vColors;
gl_Position = uMatrix * vPosition;
}
But the cross is still anchored to its initial position and doesn't follow the head.
Am I missing something?
How can I make my cross follow my head and stay in the center of the sight ?
Thanks in advance for your answers :)
(PS: I don't want to use Unity because this project must only use the Java SDK.)
tl;dr: For a head-locked crosshair, skip the multiplication by the mHeadView matrix.
If you want a head-locked object, you need to define it in head space, not in world space. Your current code defines the crosshair in world space. The mHeadView transform maps from world space to the current head space, accounting for the current head rotation. You don't need to multiply by it; that's only required for world-locked objects.
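A hedged sketch of that change, keeping the method names used in the question (this is a sketch of the suggested adjustment, not a verified fix): drop the mHeadView multiplication so the quad is defined relative to the head rather than the world.
public void onDrawEye(Eye eye) {
    float[] mvpMatrix = new float[16];
    float[] mvMatrix = new float[16];
    float[] camera = new float[16];
    Matrix.setLookAtM(camera, 0, 0, 0, -2, 0, 0, -1, 0, 1, 0);
    // no mHeadView factor here: the crosshair stays fixed in head space
    Matrix.multiplyMM(mvMatrix, 0, eye.getEyeView(), 0, camera, 0);
    Matrix.multiplyMM(mvpMatrix, 0, eye.getEyePerspective(0.1f, 100f), 0, mvMatrix, 0);
    // pass mvpMatrix and the vertices buffer to the vertex shader as before
}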

How do you display a 3D object using indices in jogl 2 with OpenGL 3.3

I'm trying to write a script to display basic 3D objects/polygon triangles using JOGL 2 with OpenGL 3.3. However, when the code compiles I receive no errors but get a blank window where the object should appear. So my question is: is there anything specific I'm missing to make the object appear? My code is as follows...
public void init(GL3 gl)
{
gl.glGenVertexArrays(1, IntBuffer.wrap(temp));
//create vertex buffers
int vao = temp[0];
gl.glBindVertexArray(vao);
gl.glGenBuffers(1, IntBuffer.wrap(temp));
int[] temp2 = new int[]{1,1};
gl.glGenBuffers(2, IntBuffer.wrap(temp2));
vbo = temp2[0];
ebo = temp2[1];
//creates vertex array
float vertices[] = {
-0.5f, 0.5f, 0.0f,//1,0,0, // Top-left
0.5f, 0.5f, 0.0f,//0,1,0, // Top-right
0.5f, -0.5f, 0.0f,//0,0,1, // Bottom-right
-0.5f, -0.5f, 0.0f//1,1,0 // Bottom-left
};
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
gl.glBufferData(GL.GL_ARRAY_BUFFER, vertices.length * 4,
FloatBuffer.wrap(vertices), GL.GL_STATIC_DRAW);
//creates element array
int elements[] = {
0,1,2,
2,3,0
};
gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, ebo);
gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, elements.length * 4,
IntBuffer.wrap(elements), GL.GL_STATIC_DRAW);
gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 3*4, 0* 4);
gl.glEnableVertexAttribArray(0);
}
public void draw(GL3 gl)
{
gl.glBindVertexArray(vao);
gl.glDrawElements(GL.GL_TRIANGLES, 2, GL.GL_UNSIGNED_INT, 0);
}
As for where my shaders being initiated, its in a different class, which is as follows..
//Matrix4 view = new Matrix4(MatrixFactory.perspective(scene.camera.getHeightAngle(),scene.camera.getAspectRatio(),scene.camera.getPosition());
projection = MatrixFactory.perspective(scene.camera.getHeightAngle(), scene.camera.getAspectRatio(), 0.01f, 100f);
view = MatrixFactory.lookInDirection(scene.camera.getPosition(), scene.camera.getDirection(), scene.camera.getUp());
try {
shader = new Shader(new File("shaders/Transform.vert"), new File("shaders/Transform.frag"));
shader.compile(gl);
shader.enable(gl);
shader.setUniform("projection", projection, gl);
shader.setUniform("view", view, gl);
}
catch (Exception e) {
System.out.println("message " + e.getMessage());
}
for (Shape s : scene.shapes) {
s.init(gl);
}
And finally, my shader files
#version 330
out vec4 fragColour;
//in vec3 outColour;
void main() {
fragColour = vec4(1,0,0,1);
}
#version 330
uniform mat4 projection;
uniform mat4 view;
layout(location=0) in vec3 pos;
//layout(location=2) in vec2 texCoord;
//layout(location=1) in vec3 colours;
out vec2 fragTex;
out vec3 outColour;
vec4 newPos;
void main() {
newPos = vec4(pos,1.0);
gl_Position = projection * view * newPos;
//fragTex = texCoord;
//outColour = colours;
}
I am unsure where I am going wrong, whether it is the shader files or the actual code itself...
I am not experienced in JOGL; I am used to C++ GL. However, there are several problems. First, as Reto Koradi stated, you are using the same array for ebo, vbo and vao. It should be something like:
gl.glGenVertexArrays(1, IntBuffer.wrap(tempV));
int vao = tempV[0];
gl.glGenBuffers(2, IntBuffer.wrap(tempB));
int vbo = tempB[0];
int ebo = tempB[1];
Lastly, your draw seems a bit problematic; you seem to skip a step: "bind the array you want to draw", then draw.
gl.glBindVertexArray (vao);
gl.glDrawElements(GL.GL_TRIANGLES, 2, GL.GL_UNSIGNED_INT, 0);
I hope these help.
OK, after many frustrating hours, someone helped me with the solution. The issue wasn't making separate buffers, but rather not clearing them each time, meaning I needed to do
gl.glGenVertexArrays(1, IntBuffer.wrap(temp));
//create vertex buffers
vao = temp[0];
gl.glGenBuffers(1, IntBuffer.wrap(temp));
vbo = temp[0];
gl.glGenBuffers(1, IntBuffer.wrap(temp));
ebo = temp[0];
which is similar to Hakes' suggestion; however, I didn't need a separate temp array, I just needed to clear the buffer each time. One other thing I needed to do was to also put
gl.glBindVertexArray(vao);
in the init as well as the draw.
(edit)
I'm actually not too sure gl.glBindVertexArray(vao); needed to be in the draw method.
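Putting the two answers together, a minimal sketch of the resulting init/draw pair (field names as in the question). One extra note, offered as an observation rather than part of the accepted fix: the count parameter of glDrawElements is the number of indices to draw, so with six indices in the element array it should presumably be 6 rather than 2.
private int vao, vbo, ebo;
private final int[] temp = new int[1];
public void init(GL3 gl) {
    gl.glGenVertexArrays(1, IntBuffer.wrap(temp));
    vao = temp[0];
    gl.glBindVertexArray(vao); // bind before recording buffer/attribute state
    gl.glGenBuffers(1, IntBuffer.wrap(temp));
    vbo = temp[0];
    gl.glGenBuffers(1, IntBuffer.wrap(temp));
    ebo = temp[0];
    // ... glBindBuffer / glBufferData / glVertexAttribPointer as in the question ...
}
public void draw(GL3 gl) {
    gl.glBindVertexArray(vao);
    gl.glDrawElements(GL.GL_TRIANGLES, 6, GL.GL_UNSIGNED_INT, 0);
}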

Reading from a depth texture in a fragment shader

I have a depth texture defined as the following:
//shadow FBO and texture
depthFBO = new FrameBufferObject().create().bind();
depthTexture = new Texture2D().create().bind()
.storage2D(12, GL_DEPTH_COMPONENT32F, 4096, 4096)
.minFilter(GL_LINEAR)
.magFilter(GL_LINEAR)
.compareMode(GL_COMPARE_REF_TO_TEXTURE)
.compareFunc(GL_LEQUAL);
depthFBO.texture2D(GL_DEPTH_ATTACHMENT, GL11.GL_TEXTURE_2D, depthTexture, 0)
.checkStatus().unbind();
depthTexture.unbind();
It's written in Java/LWJGL/my own small framework, but the idea should be clear.
Then I use a fragment shader at some point to visualize the data in it (fragment's depth):
#version 430 core
layout(location = 7) uniform int screen_width;
layout(location = 8) uniform int screen_height;
layout(binding = 0) uniform sampler2D shadow_tex;
out vec4 color;
void main(void) {
vec2 tex_coords = vec2(
(gl_FragCoord.x - 100) / (screen_width / 5),
(gl_FragCoord.y - (screen_height - (screen_height / 5) - 100)) / (screen_height / 5)
);
float red_channel = texture(shadow_tex, tex_coords).r;
if (red_channel < 0.999) {
red_channel = red_channel / 2.0;
}
color = vec4(vec3(red_channel), 1.0);
}
My tex_coords and shadow_tex are correct, but I need some more clarification on reading out of a GL_DEPTH_COMPONENT32F format.
I want to read the depth and I assume it is being stored between 0.0 and 1.0 as a 4-byte float value.
So I was thinking I could use the red channel of what texture() gives me back. However, I cannot see a difference in depth, other than it not being exactly 1.0, but somewhat lower. If I do not divide by 2.0, then everything appears white.
Note that the floor is black at some points, but that is due to failed shadow mapping, hence why I am visualizing it; however, it is temporarily set to use the normal view MVP instead of the light's, to also make sure the depth information is saved correctly.
UPDATE: The colorization of the depth value is now working correctly with the following:
#version 430 core
layout(location = 7) uniform int screen_width;
layout(location = 8) uniform int screen_height;
layout(binding = 0) uniform sampler2D shadow_tex;
out vec4 color;
float linearize_depth(float original_depth) {
float near = 0.1;
float far = 1000.0;
return (2.0 * near) / (far + near - original_depth * (far - near));
}
void main(void) {
//calculate texture coordinates, based on current dimensions and positions of the viewport
vec2 tex_coords = vec2(
(gl_FragCoord.x - 100) / (screen_width / 5),
(gl_FragCoord.y - (screen_height - (screen_height / 5) - 100)) / (screen_height / 5)
);
//retrieve depth value from the red channel
float red_channel = texture(shadow_tex, tex_coords).r;
//colorize depth value, only if there actually is an object
if (red_channel < 0.999) {
red_channel = linearize_depth(red_channel) * 4.0;
}
color = vec4(vec3(red_channel), 1.0);
}
I would still like clarification, though, on whether accessing the red component is the correct way to retrieve the depth value.
In GLSL 4.30? No.
If this is truly a depth texture (internal format = GL_DEPTH_COMPONENT[...]), then GLSL automatically samples it this way: vec4 (r, r, r, 1.0). Older versions would behave differently depending on the "depth texture mode" (which was removed from GL 3.1 / GLSL 1.30).
Now, if it is a depth texture with comparison as your code implies, then sampling it using sampler2D should be undefined. If you use sampler2DShadow though, sampling with texture (...) will return a single float unlike all other texture (...) overloads (which all return vec4).
Hopefully this is an oversight in the Java code you pasted, because your shader should be producing undefined results as it stands right now.
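If the comparison mode really is unintended, a minimal Java-side sketch using the question's own builder methods (GL_NONE disables depth comparison, so a plain sampler2D then returns the raw depth in the red channel); the alternative is to keep GL_COMPARE_REF_TO_TEXTURE and declare the uniform as sampler2DShadow instead:
depthTexture = new Texture2D().create().bind()
    .storage2D(12, GL_DEPTH_COMPONENT32F, 4096, 4096)
    .minFilter(GL_LINEAR)
    .magFilter(GL_LINEAR)
    .compareMode(GL_NONE); // no comparison: sample with a regular sampler2D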

Vertex world position in glsl, JOGL

So I've been trying to implement bump mapping for some time and I have it working in some way: it renders the texture and shadowing correctly, but the shading does not change as the light source moves around. I determined that it treats the light source as moving around the origin (0,0) and not where the light source actually is in the world. How do I determine the world position of the fragment in the shader? I am a bit stuck at the moment; any help would be appreciated.
--vertex shader
void main()
{
gl_TexCoord[0] = gl_MultiTexCoord0;
// Set the position of the current vertex
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
}
--fragment shader
uniform sampler2D color_texture;
uniform sampler2D normal_texture;
uniform vec4 lightColor;
uniform vec3 falloff;
uniform vec3 lightPos;
uniform vec2 resolution;
uniform float ambience;
//uniform float lightDirection;
void main()
{
// Extract the normal from the normal map
vec3 normal = normalize(texture2D(normal_texture, gl_TexCoord[0].st).rgb * 2.0 - 1.0);
// Determine where the light is positioned
vec3 light_pos = normalize(lightPos);
//vec3 light_pos = normalize(vec3(1.0, 1.0, 0.5));
// Calculate the lighting diffuse value, the ambience is the darkness due to no light
float diffuse = max(dot(normal, light_pos), 0.0);
//direction
float lightDir = length(vec3(lightPos.xy - (gl_FragCoord.xy / resolution.xy), lightPos.z));
//calculate attenuation
float attenuation = 1.0 / ( falloff.x + (falloff.y*lightDir) + (falloff.z*lightDir*lightDir) );
//calculate the final color
vec3 color = diffuse * texture2D(color_texture, gl_TexCoord[0].st).rgb;
// Set the output color of our current pixel
gl_FragColor = vec4(color, 1.0);
}
--jogl, java code hooking up the shader
int shaderProgram = ShaderControl.enableShader(gl, shaderName);
//apply vars
int diffuseTextureVariableLocation = gl.getGL2().glGetUniformLocation(shaderProgram, "color_texture");
int normalColorVariableLocation = gl.getGL2().glGetUniformLocation(shaderProgram, "normal_texture");
int lightPositionVariableLocation = gl.getGL2().glGetUniformLocation(shaderProgram, "lightPos");
int lightColorVariableLocation = gl.getGL2().glGetUniformLocation(shaderProgram, "lightColor");
int falloffVariableLocation = gl.getGL2().glGetUniformLocation(shaderProgram, "falloff");
int resolutionVariableLocation = gl.getGL2().glGetUniformLocation(shaderProgram, "resolution");
int ambienceVariableLocation = gl.getGL2().glGetUniformLocation(shaderProgram, "ambience");
gl.getGL2().glUniform1i(diffuseTextureVariableLocation, 0);
gl.getGL2().glUniform1i(normalColorVariableLocation, 1);
gl.getGL2().glUniform3f(lightPositionVariableLocation, positionLight.x, positionLight.y, 1.5f);
gl.getGL2().glUniform4f(lightColorVariableLocation, 1f, 1.0f, 1.0f, 1f);
gl.getGL2().glUniform3f(falloffVariableLocation,.4f, 3f, 20f);
gl.getGL2().glUniform2f(resolutionVariableLocation, Game._viewPortDimension.width, Game._viewPortDimension.height);
gl.getGL2().glUniform1f(ambienceVariableLocation, 0.93f);
gl.getGL2().glActiveTexture(GL2.GL_TEXTURE1);
normalTexture.bind(gl);
//bind diffuse color to texture unit 0
gl.getGL2().glActiveTexture(GL2.GL_TEXTURE0);
texture.bind(gl);
//draw the texture and apply the bump mapping shader
drawTexture(gl, worldOffsetX, worldOffsetY, x, y, depth, rotation, percentageToDraw, width, height, texture);
ShaderControl.disableShader(gl);
Kind regards
Johandre
First, make sure you really need that. Once you are sure, you can create a varying vec3 in your fragment shader that gets interpolated from the vertex shader and holds the world position. In order to do that, make sure you have separate modelview and projection matrices. (I prefer having only a projection matrix for the games I have made so far.) Use the output of the modelview matrix for your varying vec3.
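A sketch of what that could look like in the question's compatibility-profile GLSL, assuming (as suggested above) that the modelview matrix carries only the model transform, so its output can stand in for the world position:
--vertex shader
varying vec3 v_worldPos;
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    vec4 worldPos = gl_ModelViewMatrix * gl_Vertex; // model transform only
    v_worldPos = worldPos.xyz;
    gl_Position = gl_ProjectionMatrix * worldPos;
}
The fragment shader then declares the same varying vec3 v_worldPos and can base the light direction on lightPos - v_worldPos instead of deriving it from gl_FragCoord.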

OpenGL 3+ with orthographic projection of directional light

I'm currently having an issue with directional light shadow maps from a moving (sun-like) light source.
When I initially implemented it, the light projection matrix was computed as 3D, and the shadow map appeared beautifully. I then learned that for what I'm trying to do, an orthographic projection would work better, but I'm having a hard time substituting the proper projection matrix.
Each tick, the sun moves a certain amount along a circle, as one would expect. I use a homegrown "lookAt" method to determine the proper viewing matrix. So, for instance, daylight occurs from 6AM to 6PM. When the sun is at the 9AM position (45 degrees), it should look at the origin and render the shadow map to the framebuffer. What appears to be happening with the orthographic projection is that it doesn't "tilt down" toward the origin; it simply keeps looking straight down the Z axis instead. Things look fine at 6AM and 6PM, but 12 noon, for instance, shows absolutely nothing.
Here's how I'm setting things up:
Original 3D projection matrix:
Matrix4f projectionMatrix = new Matrix4f();
float aspectRatio = (float) width / (float) height;
float y_scale = (float) (1 / cos(toRadians(fov / 2f)));
float x_scale = y_scale / aspectRatio;
float frustum_length = far_z - near_z;
projectionMatrix.m00 = x_scale;
projectionMatrix.m11 = y_scale;
projectionMatrix.m22 = (far_z + near_z) / (near_z - far_z);
projectionMatrix.m23 = -1;
projectionMatrix.m32 = -((2 * near_z * far_z) / frustum_length);
LookAt method:
public Matrix4f lookAt( float x, float y, float z,
float center_x, float center_y, float center_z ) {
Vector3f forward = new Vector3f( center_x - x, center_y - y, center_z - z );
Vector3f up = new Vector3f( 0, 1, 0 );
if ( center_x == x && center_z == z && center_y != y ) {
up.y = 0;
up.z = 1;
}
Vector3f side = new Vector3f();
forward.normalise();
Vector3f.cross(forward, up, side );
side.normalise();
Vector3f.cross(side, forward, up);
up.normalise();
Matrix4f multMatrix = new Matrix4f();
multMatrix.m00 = side.x;
multMatrix.m10 = side.y;
multMatrix.m20 = side.z;
multMatrix.m01 = up.x;
multMatrix.m11 = up.y;
multMatrix.m21 = up.z;
multMatrix.m02 = -forward.x;
multMatrix.m12 = -forward.y;
multMatrix.m22 = -forward.z;
Matrix4f translation = new Matrix4f();
translation.m30 = -x;
translation.m31 = -y;
translation.m32 = -z;
Matrix4f result = new Matrix4f();
Matrix4f.mul( multMatrix, translation, result );
return result;
}
Orthographic projection (using width 100, height 75, near 1.0, far 100). I've tried this with many, many different values:
Matrix4f projectionMatrix = new Matrix4f();
float r = width * 1.0f;
float l = -width;
float t = height * 1.0f;
float b = -height;
projectionMatrix.m00 = 2.0f / ( r - l );
projectionMatrix.m11 = 2.0f / ( t - b );
projectionMatrix.m22 = 2.0f / (far_z - near_z);
projectionMatrix.m30 = - ( r + l ) / ( r - l );
projectionMatrix.m31 = - ( t + b ) / ( t - b );
projectionMatrix.m32 = -(far_z + near_z) / (far_z - near_z);
projectionMatrix.m33 = 1;
Shadow map vertex shader:
#version 150 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec4 in_Position;
out float pass_Position;
void main(void) {
gl_Position = projectionMatrix * viewMatrix * modelMatrix * in_Position;
pass_Position = gl_Position.z;
}
Shadow map fragment shader:
#version 150 core
in vec4 pass_Color;
in float pass_Position;
layout(location=0) out float fragmentdepth;
out vec4 out_Color;
void main(void) {
fragmentdepth = gl_FragCoord.z;
}
I feel that I'm missing something very simple here. As I said, this works fine with a 3D projection matrix, but I want the shadows to stay constant as the user travels across the world, which makes sense for directional lighting and thus orthographic projection.
Actually, who told you that using an orthographic projection matrix would be a good idea for shadow maps? This might work for things like the sun, which are effectively infinitely far away, but for local lights perspective is very relevant. You have to be careful with perspective projection and shadow maps though, because the sample frequency varies with distance and you wind up getting a lot of precision at some distances and not enough at others unless you use things like cascading or perspective warping in general; this is probably more than you should be thinking about at the moment though :)
Also, orthographic projection matrices are no more or no less 3D than perspective, insofar as they work by projecting a 3D "image" onto a 2D viewing plane... the only difference between them and perspective is that parallel lines remain parallel. Put another way, (x,y,near) and (x,y,far) ideally project to the same position on screen in an orthographic projection.
Your use of gl_FragCoord.z in the fragment shader is unusual. Since this is the value that is written to the depth buffer, you might as well write NOTHING in your fragment shader and re-use the depth buffer. Unless your implementation does not support a floating-point depth buffer you are wasting memory bandwidth by writing the depth to two places. A depth-only pass with glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE) will usually get you much higher throughput when constructing shadow maps.
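For illustration, a depth-only shadow pass could look like this in LWJGL-style code (shadowFbo and renderShadowCasters() are hypothetical placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo); // hypothetical FBO handle
glColorMask(false, false, false, false);      // disable all color writes
glClear(GL_DEPTH_BUFFER_BIT);
renderShadowCasters();                        // placeholder: draw the occluders
glColorMask(true, true, true, true);
glBindFramebuffer(GL_FRAMEBUFFER, 0);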
If you actually used the value of pass_Position (which is your non-perspective-corrected Z coordinate in clip space), I could see using a separate color attachment to write this, but you're currently writing the perspective-correct, depth-range-adjusted depth (gl_FragDepth).
In any case, when the sun is directly overhead and you are using an orthographic projection, it is to be expected that no shadows are cast. This goes back to the property I explained earlier, where parallel lines remain parallel. Since the distance of an object from the sun has no effect on where the object is projected (orthographically), if it is directly overhead you will not see any shadows. Try tracking the sun's position along a sphere instead of a circle to minimize this.
