OpenGL ES Android Strange Projection - java

I'm writing an application for Android using OpenGL ES.
In my test scene I have a cube and a sphere. The sphere can move. If the sphere is in the center of the display, it renders fine (see here).
But if I move the sphere to an edge of the screen, the sphere is distorted (see here). I don't know where this comes from.
Here is my projection matrix:
public void onSurfaceChanged(GL10 unused, int width, int height) {
    // ...
    float ratio = (float) width / height;
    Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 1, 100);
    // ...
}
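For what it's worth, distortion of spheres near the screen borders is the classic symptom of a very wide field of view: with a symmetric frustum, the vertical FOV is 2·atan(top/near). A quick sketch in plain Python (not tied to the Android API) showing what the frustum parameters above imply:

```python
import math

def vertical_fov_degrees(top, near):
    """Vertical field of view implied by a symmetric frustum."""
    return math.degrees(2 * math.atan(top / near))

# The projection above: top = 1, near = 1  ->  90 degrees, very wide,
# so objects near the screen edges get visibly stretched
print(vertical_fov_degrees(1.0, 1.0))  # 90.0

# A narrower frustum, e.g. top = 0.5, near = 2 (the values in the edit
# below), gives roughly a 28 degree vertical FOV
print(vertical_fov_degrees(0.5, 2.0))
```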
My calculations for the view matrix and the transformation matrix:
public static Matrix4f createTransformationMatrix(Vector3f translation,
                                                  float rx, float ry, float rz, float scale) {
    Matrix4f matrix4f = new Matrix4f();
    matrix4f.setIdentity();
    Matrix4f.translate(translation, matrix4f, matrix4f);
    Matrix4f.rotate((float) Math.toRadians(rx), new Vector3f(1, 0, 0), matrix4f, matrix4f);
    Matrix4f.rotate((float) Math.toRadians(ry), new Vector3f(0, 1, 0), matrix4f, matrix4f);
    Matrix4f.rotate((float) Math.toRadians(rz), new Vector3f(0, 0, 1), matrix4f, matrix4f);
    Matrix4f.scale(new Vector3f(scale, scale, scale), matrix4f, matrix4f);
    return matrix4f;
}
public static Matrix4f createViewMatrix(Camera camera) {
    Matrix4f viewMatrix = new Matrix4f();
    viewMatrix.setIdentity();
    Matrix4f.rotate((float) Math.toRadians(camera.getPitch()), new Vector3f(1, 0, 0), viewMatrix, viewMatrix);
    Matrix4f.rotate((float) Math.toRadians(camera.getYaw()), new Vector3f(0, 1, 0), viewMatrix, viewMatrix);
    Matrix4f.rotate((float) Math.toRadians(camera.getRoll()), new Vector3f(0, 0, 1), viewMatrix, viewMatrix);
    Vector3f cameraPos = camera.getPosition();
    Vector3f negativeCameraPos = new Vector3f(-cameraPos.x, -cameraPos.y, -cameraPos.z);
    Matrix4f.translate(negativeCameraPos, viewMatrix, viewMatrix);
    return viewMatrix;
}
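A view matrix built this way (rotate first, then translate by the negated camera position) is the inverse of the camera's own transform, so a quick sanity check is that it maps the camera position to the eye-space origin. A minimal plain-Python sketch of that check, with hand-rolled 4x4 helpers standing in for the Matrix4f class:

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def rotation_x(deg):
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# view = R(pitch) * T(-cameraPos), mirroring createViewMatrix above
cx, cy, cz = 3.0, 2.0, 5.0
view = mat_mul(rotation_x(30.0), translation(-cx, -cy, -cz))

# The camera position must end up at the origin in eye space
eye = mat_vec(view, [cx, cy, cz, 1.0])
print(eye)
```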
And the vertex shader code:
#version 300 es
// Vertex shader
in vec3 position;
in vec2 textureCoords;
in vec3 normal;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
out vec3 toCameraVector;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform vec3 lightPosition;
void main(void) {
    vec4 worldPosition = transformationMatrix * vec4(position.x, position.y, position.z, 1.0);
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
    pass_textureCoords = textureCoords;
    surfaceNormal = (transformationMatrix * vec4(normal, 0.0)).xyz;
    toLightVector = lightPosition - worldPosition.xyz;
    toCameraVector = (inverse(viewMatrix) * vec4(0.0, 0.0, 0.0, 1.0)).xyz - worldPosition.xyz;
}
Does anyone know where my problem comes from and how to solve it?
Please let me know if you need more information. Here are two more pictures of the problem: pic3 and pic4
EDIT
Matrix.frustumM(mProjectionMatrix, 0, -ratio/2f, ratio/2f, -1f/2f, 1f/2f, 2, 100);
But now I have this problem: Sphere out of Grid

Related

This "single-pass wireframe shader" worked on my old machine with AMD integrated graphics, but it does not work on my new NVIDIA PC

I created this shader by following this tutorial on single-pass wireframe rendering: http://codeflow.org/entries/2012/aug/02/easy-wireframe-display-with-barycentric-coordinates/
Fragment:
#version 450
layout (location = 0) out vec4 outColor;
in vec3 vBC;
const float lineWidth = 0.5;
const vec3 color = vec3(0.7, 0.7, 0.7);
float edgeFactor() {
    vec3 d = fwidth(vBC);
    vec3 a3 = smoothstep(vec3(0.0), d * 1.5, vBC);
    return min(min(a3.x, a3.y), a3.z);
}
void main() {
    outColor = vec4(min(vec3(edgeFactor()), color), 1.0);
}
Vertex:
#version 450
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 baryCentric;
out vec3 vBC;
uniform mat4 T_MVP;
void main() {
    //texCoord0 = texCoord;
    gl_Position = T_MVP * vec4(position, 1.0);
    vBC = baryCentric;
}
And here is the gl prep before rendering:
wir.bind();
wir.updateUniforms(super.getTransform(), mat, engine);
GL45.glEnable(GL45.GL_SAMPLE_ALPHA_TO_COVERAGE);
GL45.glEnable(GL45.GL_BLEND);
GL45.glBlendFunc(GL45.GL_SRC_ALPHA, GL45.GL_ONE_MINUS_SRC_ALPHA);
mesh.draw("baryCentric", GL15.GL_TRIANGLES);
And here is how I bind the vertex attributes:
The shader worked perfectly fine on my old AMD integrated graphics card, but it doesn't on my RTX 2060 Super.
Shader and Gl version
on old: OpenGL version: 4.5.13399 Compatibility Profile Context 15.200.1062.1004
on new: 4.6.0 NVIDIA 445.87
First of all, I don't know what causes this, but I think it's the model files.
How I solved it: instead of preprocessing the barycentric coords, I calculate them, or rather assign them, in a geometry shader like so:
vBC = vec3(1, 0, 0);
gl_Position = gl_in[0].gl_Position;
EmitVertex();
vBC = vec3(0, 1, 0);
gl_Position = gl_in[1].gl_Position;
EmitVertex();
vBC = vec3(0, 0, 1);
gl_Position = gl_in[2].gl_Position;
EmitVertex();
and nothing else; just pass them on to the fragment shader, and it does the rest:
#version 400
precision mediump float;
layout (location = 0) out vec4 outColor;
in vec3 vBC;
const float lineWidth = 0.5;
const vec3 lineColor = vec3(0.7, 0.7, 0.7);
float edgeFactor() {
    vec3 d = fwidth(vBC);
    vec3 f = step(d * lineWidth, vBC);
    return min(min(f.x, f.y), f.z);
}
void main() {
    // note: RGB output components are clamped to [0, 1],
    // so vec4(255, 191, 0.0, ...) renders the same as (1.0, 1.0, 0.0)
    outColor = vec4(255, 191, 0.0, (1.0 - edgeFactor()) * 0.95);
}
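The step-based edge factor can be reasoned about without a GPU: fwidth(vBC) approximates how much each barycentric coordinate changes per pixel, so step(d * lineWidth, vBC) is 0 inside a band of roughly lineWidth pixels around each edge and 1 elsewhere. A plain-Python sketch of the same logic, using an assumed per-pixel derivative since fwidth only exists inside a fragment shader:

```python
def edge_factor(bc, d, line_width=0.5):
    """0.0 near a triangle edge, 1.0 in the interior.

    bc: barycentric coords of the fragment; d: assumed fwidth per component.
    Mirrors step(d * lineWidth, vBC) followed by min over the components.
    """
    f = [0.0 if b < dd * line_width else 1.0 for b, dd in zip(bc, d)]
    return min(f)

d = (0.02, 0.02, 0.02)  # pretend each coordinate changes ~0.02 per pixel
print(edge_factor((0.001, 0.5, 0.499), d))  # on an edge -> 0.0
print(edge_factor((0.3, 0.3, 0.4), d))      # interior   -> 1.0
```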
The vertex shader only sets the position, nothing else; it's as basic as it gets.
Here is the full geometry shader if anyone needs it:
#version 400
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
out vec3 vBC;
void main()
{
    vBC = vec3(1, 0, 0);
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
    vBC = vec3(0, 1, 0);
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();
    vBC = vec3(0, 0, 1);
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();
}
Here are some pictures:
As you can see, it's working with transparency, which is done using:
Here are the articles I looked at:
https://tchayen.github.io/wireframes-with-barycentric-coordinates/
http://codeflow.org/entries/2012/aug/02/easy-wireframe-display-with-barycentric-coordinates/
And a cool book that helped me a lot:
https://people.inf.elte.hu/plisaai/pdf/David%20Wolff%20-%20OpenGL%204.0%20Shading%20Language%20Cookbook%20(2).pdf
Just in case here is the Vertex shader:
#version 400
precision mediump int;
precision mediump float;
layout (location = 0) in vec3 position;
uniform mat4 T_MVP;
void main() {
gl_Position = T_MVP * vec4(position, 1.0);
}

Changing spotlight direction in Processing

I'm trying to implement two spotlights at the top of the scene using Processing, changing their respective directions over time. I tried using the default spotLight(r, g, b, x, y, z, nx, ny, nz, angle, concentration) method to create the spotlights and tried changing the nx, ny and nz variables to change the light direction. However, the method doesn't seem to take in the three variables. This is the GLSL that I'm using:
precision mediump float;
varying vec3 normalInterp;
varying vec3 vertPos;
uniform int lightCount;
uniform vec4 lightPosition[8];
uniform vec3 lightNormal[8];
// ambient
const vec3 ambientColor = vec3(0.1, 0, 0);
// diffuse
const vec3 diffuseColor = vec3(0.5, 0.0, 0.0);
// specular
const vec3 specColor = vec3(1.0, 1.0, 1.0);
// specular reflection parameter
const float n = 30.0;
// depth cueing: not implemented
void main() {
    float lightR = 0.0;
    float lightG = 0.0;
    float lightB = 0.0;
    for (int i = 0; i < lightCount; i++)
    {
        vec3 normal = normalize(normalInterp);
        vec3 lightDir = normalize(lightPosition[i] - vertPos);
        // diffuse
        float diffuse = max(dot(lightDir, normal), 0.0);
        // specular
        float specular = 0.0;
        if (diffuse > 0.0) {
            vec3 viewDir = normalize(-vertPos);
            vec3 reflectDir = reflect(-lightDir, normal);
            float specAngle = max(dot(reflectDir, viewDir), 0.0);
            specular = pow(specAngle, n);
        }
        // note: can add in depth cueing here
        vec3 colorLinear = ambientColor +
                           diffuse * diffuseColor +
                           specular * specColor;
        lightR += colorLinear.x;
        lightG += colorLinear.y;
        lightB += colorLinear.z;
    }
    gl_FragColor = vec4(lightR, lightG, lightB, 1.0);
}
There is a simple issue in the shader program. First, there is a typo: it has to be lightPosition rather than lightPostion. But that is not the only issue.
The type of lightPosition[i] is vec4 and the type of vertPos is vec3. That causes an error when vertPos is subtracted from lightPosition[i].
Either you have to construct a vec3 from lightPosition[i]:
vec3 lightDir = normalize(vec3(lightPosition[i]) - vertPos);
Or you have to get the x, y and z components from lightPosition[i] (see Swizzling):
vec3 lightDir = normalize(lightPosition[i].xyz - vertPos);
Both solutions lead to the same result.
Of course, the light position has to be set relative to the object. Note that when spotLight() is called, the light position and direction are transformed by the current model view matrix.
See the example:
Vertex shader
uniform mat4 modelview;
uniform mat4 transform;
uniform mat3 normalMatrix;
attribute vec4 position;
attribute vec4 color;
attribute vec3 normal;
varying vec3 normalInterp;
varying vec3 vertPos;
varying vec4 vertColor;
void main() {
    gl_Position = transform * position;
    vertPos = vec3(modelview * position);
    normalInterp = normalize(normalMatrix * normal);
}
Fragment shader
precision mediump float;
varying vec3 normalInterp;
varying vec3 vertPos;
uniform int lightCount;
uniform vec4 lightPosition[8];
uniform vec3 lightNormal[8];
uniform vec3 lightDiffuse[8];
uniform vec3 lightSpecular[8];
uniform vec2 lightSpot[8];
const vec3 ambientColor = vec3(0.2);
const vec3 diffuseColor = vec3(1.0);
const vec3 specColor = vec3(1.0);
const float n = 30.0;
void main() {
    vec3 lightColor = vec3(0.0, 0.0, 0.0);
    for (int i = 0; i < lightCount; i++)
    {
        // ambient
        lightColor += lightDiffuse[i] * ambientColor;
        vec3 normal = normalize(normalInterp);
        vec3 lightDir = normalize(lightPosition[i].xyz - vertPos);
        float spot = dot(-lightNormal[i], lightDir);
        if (spot < lightSpot[i].x)
            continue;
        // diffuse
        float diffuse = max(dot(lightDir, normal), 0.0);
        lightColor += diffuse * lightDiffuse[i] * diffuseColor;
        // specular
        if (diffuse > 0.0) {
            vec3 viewDir = normalize(-vertPos);
            vec3 reflectDir = reflect(-lightDir, normal);
            float specAngle = max(dot(reflectDir, viewDir), 0.0);
            float specular = pow(specAngle, n);
            lightColor += specular * lightSpecular[i] * specColor;
        }
    }
    gl_FragColor = vec4(lightColor.rgb, 1.0);
}
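The cone test in that fragment shader compares dot(-lightNormal, lightDir) against lightSpot[i].x, which Processing fills with the cosine of the spotlight angle. A small sketch of the same test in plain Python, with hypothetical vectors just to show the cutoff:

```python
import math

def in_cone(spot_dir, to_light, cos_cutoff):
    """spot_dir: direction the spotlight points (normalized).
    to_light: fragment-to-light direction (normalized).
    Mirrors: spot = dot(-lightNormal, lightDir); inside if spot >= cutoff."""
    spot = -sum(a * b for a, b in zip(spot_dir, to_light))
    return spot >= cos_cutoff

cos_cutoff = math.cos(math.pi / 25)  # the angle used in spotLight(..., PI/25, 2)

# Fragment directly under the light: to_light points straight up, spot points down
print(in_cone((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), cos_cutoff))  # True
# Fragment far off axis
print(in_cone((0.0, -1.0, 0.0), (1.0, 0.0, 0.0), cos_cutoff))  # False
```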
Code
PShader lightShader;

void setup() {
    size(800, 600, P3D);
    lightShader = loadShader("fragment.glsl", "vertex.glsl");
}

float ry = 0.0;

void draw() {
    background(0);
    shader(lightShader);
    translate(width/2.0, height/2.0);
    spotLight(255, 0, 0, 0, 500, 500, 0, -1, -1, PI/25, 2);
    spotLight(0, 0, 255, 500, 0, 500, -1, 0, -1, PI/25, 2);
    rotateY(ry);
    rotateX(-0.5);
    ry += 0.02;
    noStroke();
    box(200);
}

trouble with entity coordinates with lwjgl

I'm having trouble moving my entities in an OpenGL context:
when I try to place an entity, the position seems correct, but when the entity starts to move, everything goes wrong and collisions don't work. I'm new to OpenGL, and I suspect my world matrix or model matrix is wrong.
Here's the code of the vertex shader:
#version 330 core
layout (location=0) in vec3 position;
out vec3 extColor;
uniform mat4 projectionMatrix;
uniform mat4 modelMatrix;
uniform vec3 inColor;
void main()
{
    gl_Position = projectionMatrix * modelMatrix * vec4(position, 1.0);
    extColor = inColor;
}
Here is the class that computes most of the Matrix:
public class Transformations {
    private Matrix4f projectionMatrix;
    private Matrix4f modelMatrix;

    public Transformations() {
        projectionMatrix = new Matrix4f();
        modelMatrix = new Matrix4f();
    }

    public final Matrix4f getOrthoMatrix(float width, float height, float zNear, float zFar) {
        projectionMatrix.identity();
        projectionMatrix.ortho(0.0f, width, 0.0f, height, zNear, zFar);
        return projectionMatrix;
    }

    public Matrix4f getModelMatrix(Vector3f offset, float angleZ, float scale) {
        // note: rotate() needs a valid axis; (0, 0, 0) is not one,
        // so a rotation about Z should use the axis (0, 0, 1)
        modelMatrix.identity().translation(offset).rotate(angleZ, 0, 0, 1).scale(scale);
        return modelMatrix;
    }
}
Here's the test for collisions:
public boolean isIn(Pos p) {
    return (p.getX() >= this.pos.getX() &&
            p.getX() <= this.pos.getX() + DIMENSION)
        && (p.getY() >= this.pos.getY() &&
            p.getY() <= this.pos.getY() + DIMENSION);
}
Also, here's a link to the GitHub project: https://github.com/ShiroUsagi-san/opengl-engine.
I'm really new to OpenGL 3, so I could have made some really big mistakes.
I'm also running i3 as my WM; I don't really know if that could lead to this kind of issue.
I fixed the issue after thinking about how OpenGL and VBOs work: I was baking each entity's world position into its mesh, so I had to change the line
Mesh fourmiMesh = MeshBuilder.buildRect(this.position.getX(), this.position.getY(), 10, 10);
to
Mesh fourmiMesh = MeshBuilder.buildRect(0, 0, 10, 10);
It was a confusion between the vertex positions in a VBO and the positions in my world.
I hope this misunderstanding helps other people understand.
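The underlying point: VBO vertices should be in local (model) space, and the model matrix moves them into world space each frame. Baking the world position into the mesh and then translating by it again offsets the geometry twice. A minimal numeric sketch in plain Python, with hypothetical values:

```python
def translate_point(p, offset):
    """Apply a model-matrix translation to a 2D point."""
    return tuple(a + b for a, b in zip(p, offset))

entity_pos = (100.0, 50.0)

# Wrong: vertex baked at the entity position, then translated again
baked_vertex = (100.0, 50.0)
print(translate_point(baked_vertex, entity_pos))  # (200.0, 100.0): double offset

# Right: vertex at the local origin; the model matrix supplies the position
local_vertex = (0.0, 0.0)
print(translate_point(local_vertex, entity_pos))  # (100.0, 50.0): as intended
```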

GLSL Matrix Translation Leaves Blank Screen?

I have a Matrix4f that I'm passing from my ShaderProgram class into my vertex shader using a uniform variable. This matrix is supposed to act as a translation for the vertices. The following is what the matrix looks like:
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
When I multiply that variable (called "test") by the vertex position (called gl_Vertex), nothing is visible; it just leaves a blank screen. This only happens when I multiply by the uniform variable "test"; if I multiply by a new mat4 with the same values, it works normally. If I use vector uniform variables instead of matrices, it works as expected.
Am I passing the variable into the GLSL vertex shader correctly? And if so, why is my quad not showing up on the screen?
Here is my vertex shader
#version 400 core
uniform vec4 translation;
uniform vec4 size;
uniform vec4 rotation;
uniform mat4 test;
in vec2 textureCoords;
in vec3 position;
out vec2 pass_textureCoords;
void main(void) {
    // pass texture coords
    pass_textureCoords = textureCoords;
    // This works: multiplying by a literal identity matrix
    //gl_Position = mat4(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1) * gl_Vertex;
    // This works: passing vec4s instead of a mat4
    /*gl_Position = vec4(((gl_Vertex.x + translation.x) * size.x),
                       ((gl_Vertex.y + translation.y) * size.y),
                       ((gl_Vertex.z + translation.z) * size.z),
                       ((gl_Vertex.w + translation.w) * size.w)
    );*/
    // This leaves a blank window
    gl_Position = test * gl_Vertex;
}
This is how I declare the uniform variable locations:
translationLocation = GL20.glGetUniformLocation(programID, "translation");
sizeLocation = GL20.glGetUniformLocation(programID, "size");
rotationLocation = GL20.glGetUniformLocation(programID, "rotation");
textureLocation = GL20.glGetUniformLocation(programID, "textureSampler");
testMat = GL20.glGetUniformLocation(programID, "test");
This is how I render the uniform variables
public void start() {
    GL20.glUseProgram(programID);
    Vector4f translation = offset.getTranslation();
    Vector4f size = offset.getSize();
    Vector4f rotation = offset.getRotation();
    GL20.glUniform4f(translationLocation, translation.x, translation.y, translation.z, translation.w);
    GL20.glUniform4f(sizeLocation, size.x, size.y, size.z, size.w);
    GL20.glUniform4f(rotationLocation, rotation.x, rotation.y, rotation.z, rotation.w);
    FloatBuffer buff = BufferUtils.createFloatBuffer(16);
    offset.getTestTranslation().storeTranspose(buff);
    GL20.glUniformMatrix4(testMat, false, buff);
    GL20.glUniform1i(textureLocation, 0);
}
And this is how I declare my variables before passing it into GLSL
Vector4f translation;
Vector4f size;
Vector4f rotation;
Matrix4f testTranslation;

public Offset() {
    translation = new Vector4f(0, 0, 0, 0);
    size = new Vector4f(1, 1, 1, 1);
    rotation = new Vector4f(0, 0, 0, 0);
    testTranslation = new Matrix4f();
    testTranslation.translate(new Vector3f(0, 0, 0));
}
Well, it turns out that I was using the following method to convert the Matrix4f into a FloatBuffer:
matrix4f.storeTranspose(buff)
Apparently that doesn't properly store the matrix into a float buffer. I'm now using this method to send the matrix to the vertex shader while rendering the shader program:
public void setMatrixArray(boolean transposed, Matrix4f[] matrices) {
    FloatBuffer matrixBuffer = BufferUtils.createFloatBuffer(16 * matrices.length);
    for (int i = 0; i < matrices.length; i++) {
        matrices[i].store(matrixBuffer);
    }
    matrixBuffer.flip();
    GL20.glUniformMatrix4(testMat, transposed, matrixBuffer);
}
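The detail that matters here is the memory order glUniformMatrix4 expects: with transpose = false, OpenGL reads the 16 floats column by column, so a translation matrix must have its (tx, ty, tz) in elements 12 to 14 of the buffer. A small sketch in plain Python, with lists standing in for the FloatBuffer:

```python
def translation_rows(tx, ty, tz):
    """Translation matrix as rows (the way it is usually written on paper)."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def store_column_major(m):
    """The layout glUniformMatrix4(..., transpose=false, ...) expects."""
    return [m[row][col] for col in range(4) for row in range(4)]

buf = store_column_major(translation_rows(2.0, 3.0, 4.0))
print(buf[12:15])  # [2.0, 3.0, 4.0]: translation lands in the last column
```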

Texturing two triangles (rectangle) with a depth texture gives all red colors

I am implementing shadow mapping in my program, and as an intermediate step I want to be able to see the depth texture that has been generated after the first light pass. However, all I see on the screen is red. Relevant code:
@Override
protected void render(final double msDelta) {
    super.render(msDelta);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    // light pass
    glViewport(0, 0, 4096, 4096);
    ...
    // draw pass
    glViewport(0, 0, screenWidth, screenHeight);
    ...
    // visualize depth texture
    if (visualDepthTexture) {
        glViewport(100, screenHeight - (screenHeight / 5) - 100, screenWidth / 5, screenHeight / 5);
        glClearDepthf(1f);
        glClear(GL_DEPTH_BUFFER_BIT);
        glActiveTexture(GL_TEXTURE0);
        depthTexture.bind();
        glDrawBuffer(GL_BACK);
        depthBox.draw(0);
        depthTexture.unbind();
    }
}
where depthTexture:
// shadow FBO and texture
depthFBO = new FrameBufferObject().create().bind();
depthTexture = new Texture2D().create().bind()
        .storage2D(11, GL_DEPTH_COMPONENT32F, 4096, 4096) // 11 is log2(4096)
        .minFilter(GL_LINEAR)
        .magFilter(GL_LINEAR)
        .compareMode(GL_COMPARE_REF_TO_TEXTURE)
        .compareFunc(GL_LEQUAL);
depthFBO.texture2D(GL_DEPTH_ATTACHMENT, GL11.GL_TEXTURE_2D, depthTexture, 0)
        .unbind();
depthTexture.unbind();
And depthBox.draw(0) will draw 6 vertices in GL_TRIANGLES mode, using the following shaders:
depth-box.vs.glsl
#version 430 core
void main(void) {
    const vec4 vertices[6] = vec4[](
        vec4(-1.0, -1.0, 1.0, 1.0),
        vec4(-1.0, 1.0, 1.0, 1.0),
        vec4(1.0, 1.0, 1.0, 1.0),
        vec4(1.0, 1.0, 1.0, 1.0),
        vec4(1.0, -1.0, 1.0, 1.0),
        vec4(-1.0, -1.0, 1.0, 1.0)
    );
    gl_Position = vertices[gl_VertexID];
}
depth-box.fs.glsl
#version 430 core
layout(binding = 0) uniform sampler2D shadow_tex;
out vec4 color;
void main(void) {
    color = vec4(texture2D(shadow_tex, gl_FragCoord.xy));
}
What I am trying to achieve is that the depth texture gets converted to grayscale (to see where shadow should occur), and that the two triangles in the small viewport get textured with that result.
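One detail worth double-checking in the fragment shader above: gl_FragCoord.xy is in window pixels, while sampler2D lookups expect coordinates in [0, 1], so the lookup needs dividing by the viewport size (also, a GL_DEPTH_COMPONENT texture returns its value in the red channel only, which is consistent with an all-red quad). A small sketch of the coordinate mapping in plain Python, with an assumed viewport size:

```python
def frag_coord_to_texcoord(frag_xy, viewport_wh):
    """Map window-space gl_FragCoord.xy to [0, 1] texture coordinates."""
    return tuple(c / s for c, s in zip(frag_xy, viewport_wh))

viewport = (160.0, 120.0)  # hypothetical small viewport
print(frag_coord_to_texcoord((80.5, 60.5), viewport))  # roughly (0.5, 0.5)
# Without the division, (80.5, 60.5) would sample far outside [0, 1]
```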
