After literally three days of figuring out how to do shadow maps without everything going mental, I finally reached a stage where it's actually a visible shadow map. But now I have one last problem: strange appearances of shadows in places where they couldn't possibly be (at least in normal life).
The problem I have is this:
I draw 3 test objects in the scene: a sphere, a weird blocky guy, and a smaller version of that blocky guy.
The sphere is basically the closest object to the light, so you wouldn't expect the shadow of the big guy to appear on it.
The small guy is hovering in the air.
The big guy's shadow is in the correct position (he's also at 0,0,0 in world coordinates), but he has the sphere's shadow all over him in places that don't make sense.
This is the first image:
And the second one, where I moved the big guy further away:
As you can see in the second image, the shadow is also no longer on the sphere.
The order in which I draw them is Sphere -> Big guy -> Small guy.
I also send the current ModelView matrix to the shader just before I draw each object:
Sphere s = new Sphere();

// Sphere (drawn first, closest to the light)
glPushMatrix();
glTranslatef(100, 150, -100);
s.draw(50, 36, 36);
glPopMatrix();

// Big guy (drawn second)
glPushMatrix();
glTranslatef(-50, 0, 50);
glScalef(0.1f, 0.1f, 0.1f);
glColor3f(1f, 195f/255f, 0f);
drawBot();
glPopMatrix();

// Small guy (drawn third)
glPushMatrix();
glTranslatef(0, 0, 100);
glRotatef(180, 0, 1, 0);
glScalef(0.01f, 0.01f, 0.01f);
glColor3f(0f, 114f/255f, 1f);
drawBot();
glPopMatrix();
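For completeness, a minimal sketch of what that ModelView upload might look like with LWJGL 2's bindings (the cached curMVLocation handle is hypothetical, not part of my code above):

FloatBuffer mvBuffer = BufferUtils.createFloatBuffer(16);
// Read back the current fixed-function ModelView matrix...
glGetFloat(GL_MODELVIEW_MATRIX, mvBuffer);
// ...and hand it to the shader's curMV uniform just before the draw call.
glUniformMatrix4(curMVLocation, false, mvBuffer);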
This is my vertex shader:
#version 330 core

uniform mat4 lightP, lightMV, camP, camMV, bias, curMV;

out vec4 shadowCoord;

void main()
{
    gl_Position = (camP * curMV) * gl_Vertex;
    gl_Position.z -= 0.01;
    shadowCoord = (bias * (lightP * lightMV)) * gl_Vertex;
}
This is my fragment shader:
#version 330 core

uniform sampler2D shadowMapTexture;

in vec4 shadowCoord;
out vec4 oColor;

float shadowMapping() {
    float visibility = 0.0;
    float bias = 0.01;
    vec3 shadowPos = shadowCoord.xyz / shadowCoord.w;
    if (texture2D(shadowMapTexture, shadowPos.xy).z < shadowPos.z - bias) {
        visibility = 0.5;
    }
    return visibility;
}

void main() {
    float shade = shadowMapping();
    oColor = vec4(0, 0, 0, shade);
}
Is there anyone who understands this riddle?
When calculating diffuse lighting for a moving object, I have to move the luminous source along with the object itself:
@Override
public void draw() { // draw frame
    ...
    // Move object
    GLES20.glVertexAttribPointer(aPositionLink, 3, GLES20.GL_FLOAT,
            false, 0, object3D.getVertices());
    // The luminous source moves alongside the object, so the
    // object is always illuminated from one side
    GLES20.glUniform3f(lightPositionLink, object3D.getX(),
            object3D.getY(), object3D.getZ() + 2.0f);
    ...
}
Snippet of the vertex shader:
#version 300 es

uniform mat4 u_mvMatrix;      // model-view matrix of the object
uniform vec3 u_lightPosition; // position of the luminous source

in vec4 a_position; // vertex data is loaded here
in vec3 a_normal;   // normal data is loaded here

struct DiffuseLight {
    vec3 color;
    float intensity;
};

uniform DiffuseLight u_diffuseLight;
...

void main() {
    ...
    vec3 modelViewNormal = vec3(u_mvMatrix * vec4(a_normal, 0.0));
    vec3 modelViewVertex = vec3(u_mvMatrix * a_position);
    // calculate the light vector by subtracting the
    // vertex position from the light position
    vec3 lightVector = normalize(u_lightPosition - modelViewVertex);
    float diffuse = max(dot(modelViewNormal, lightVector), 0.1);
    float distance = length(u_lightPosition - modelViewVertex);
    diffuse = diffuse * (1.0 / (1.0 + pow(distance, 2.0)));
    // calculate the final color for diffuse lighting
    lowp vec3 diffuseColor = diffuse * u_diffuseLight.color * u_diffuseLight.intensity;
    v_commonLight = vec4((ambientColor + diffuseColor), 1.0);
    ...
}
Is this the right approach? Or is there a more rational option with a stationary luminous source, so as not to spend resources computing the light source's position every frame? Note: increasing the distance does not help. Thanks in advance.
SOLUTION:
On the advice of Rabbid76 I applied directional light as described here.
I have to move the luminous source along with the object itself
Why does the light source move with the object?
If the light is a point light source in the world, and the object moves, then the illumination of the object changes (in the "real" world).
In your case, the lighting is computed in view space. If the light source is a point in the world, then you have to transform its position by the view matrix (the view matrix transforms from world space to view space), e.g.:
uniform mat4 u_viewMatrix;

void main()
{
    // [...]
    vec3 lightPosView = vec3(u_viewMatrix * vec4(u_lightPosition.xyz, 1.0));
    vec3 lightVector = normalize(lightPosView - modelViewVertex);
    // [...]
}
Anyway, if the object moves and the light source is somehow anchored to the object, then you have to apply the transformations that are applied to the vertices of the object to the light source, too.
In that case u_lightPosition has to be a position in the model space of the object, which means it is relative to the object (u_lightModelPosition). Then you can do:
uniform vec3 u_lightModelPosition;

void main()
{
    mat3 normalMat = inverse(transpose(mat3(u_mvMatrix)));
    vec3 modelViewNormal = normalMat * a_normal;
    vec3 modelViewVertex = vec3(u_mvMatrix * a_position);
    vec3 modelViewLight = vec3(u_mvMatrix * vec4(u_lightModelPosition, 1.0));
    vec3 lightVector = normalize(modelViewLight - modelViewVertex);
    // [...]
}
If you want a light that doesn't depend on a position, then you have to use a directional light. In that case the light source is not a point in the world, it is a direction only, e.g.:
vec3 lightVector = -u_lightRayDirection;
u_lightRayDirection has to be in the space of the light calculations. Since the lighting is computed in view space, u_lightRayDirection has to be a direction in view space, too. If u_lightRayDirection is a vector in world space, then it has to be transformed by mat3(u_viewMatrix).
A directional light has no distance (or a constant distance).
If the light source is anchored to the camera, no transformations are required at all (because the light calculations are done in view space).
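On the Java side, a directional light reduces to uploading one constant direction instead of recalculating a position every frame; a minimal sketch, assuming the GLES naming used above (program and the chosen direction are placeholders):

// Upload a fixed, normalized world-space ray direction once;
// the shader transforms it with mat3(u_viewMatrix) as described above.
int lightDirLink = GLES20.glGetUniformLocation(program, "u_lightRayDirection");
GLES20.glUniform3f(lightDirLink, 0.0f, 0.0f, -1.0f);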
In particular, I am using a Processing Java example that makes use of a GLSL shader (it's called InfiniteTiles). The original sketch just moves a tiled image.
I have a uniform variable called time that I set from Java:
tileShader.set("time", millis() / 1000.0);
Now in the fragment shader there is a code section
vec2 pos = gl_FragCoord.xy - vec2(TILES_COUNT_X * time);
vec2 p = (resolution - TILES_COUNT_X * pos) / resolution.x;
vec3 col = texture2D (tileImage, p).xyz;
What I attempted in the Java code is to set the time variable such that I can increase and decrease the speed at which the image scrolls.
I wrote this:
float t = millis() / 1000.0;
float pctX = map(mouseX, 0, width, 0, 1);
tileShader.set("time", t*pctX);
What happens is that when I move the mouse, the entire image moves rapidly either left or right depending on where I'm moving, as if it's 'scrubbing' the image. When I stop moving the mouse, it moves at the desired speed.
I would like to avoid this 'scrubbing' effect and have the image scrolling speed transition smoothly with the mouse movement.
Normally I could accomplish such a thing by just drawing an image in Java and scrolling it, but I think I'm not understanding something fundamental about the way GLSL works to achieve the same effect on the graphics card.
Any help appreciated.
Full Processing code from the example:
//-------------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//-------------------------------------------------------------
PImage tileTexture;
PShader tileShader;

void setup() {
  size(640, 480, P2D);
  textureWrap(REPEAT);
  tileTexture = loadImage("penrose.jpg");
  loadTileShader();
}

void loadTileShader() {
  tileShader = loadShader("scroller.glsl");
  tileShader.set("resolution", float(width), float(height));
  tileShader.set("tileImage", tileTexture);
}

void draw() {
  tileShader.set("time", millis() / 1000.0);
  shader(tileShader);
  rect(0, 0, width, height);
}
Full shader code:
//---------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//---------------------------------------------------------
uniform float time;
uniform vec2 resolution;
uniform sampler2D tileImage;

#define TILES_COUNT_X 4.0

void main() {
  vec2 pos = gl_FragCoord.xy - vec2(TILES_COUNT_X * time);
  vec2 p = (resolution - TILES_COUNT_X * pos) / resolution.x;
  vec3 col = texture2D(tileImage, p).xyz;
  gl_FragColor = vec4(col, 1.0);
}
Sigh... it was a bit simpler than I thought. The answer was provided by JeremyDouglass here:
https://forum.processing.org/two/discussion/comment/90488
Solution:
"This problem isn't specific to shaders -- you would have the same problem if you were doing this with img(). You can't do clock math in this way. Multiplying anything by millis() will always create a scaling effect -- which in this case will always create what you call "scrubbing." For example, if you change the multiplier, 10 seconds suddenly becomes 15.
Instead, in order to change the speed at which the clock changes in the future but not to change how far it has advanced up-to-now, keep your own clock variable separate from millis(), and change the step amount (use addition, not multiplication) each draw frame. Now the speed at which the clock advances will change, but the base offset (the last clock time) won't jump around, because the original value isn't being scaled (multiplied)."
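Applied to the draw() loop above, a minimal sketch of that idea (the clock field and the per-frame step are my paraphrase of the advice, not the forum's exact code):

float clock = 0; // our own clock, advanced by a variable step each frame

void draw() {
  float pctX = map(mouseX, 0, width, 0, 1);
  // Add a scaled step instead of scaling the total elapsed time;
  // the accumulated offset never jumps when pctX changes.
  clock += (1.0 / frameRate) * pctX;
  tileShader.set("time", clock);
  shader(tileShader);
  rect(0, 0, width, height);
}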
I'm looking for a specific shader or an idea for another approach to get the desired result.
A picture shows the desired result (left side: input, right side: output):
I already experimented with modifying a simple vignette shader:
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform vec2 u_resolution;
uniform sampler2D u_sampler2D;
const float outerRadius = .65, innerRadius = .4, intensity = .6;
void main() {
    vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
    vec2 relativePosition = gl_FragCoord.xy / u_resolution - .5;
    float len = length(relativePosition);
    float vignette = smoothstep(outerRadius, innerRadius, len);
    color.rgb = mix(color.rgb, color.rgb * vignette, intensity);
    gl_FragColor = color;
}
I think it would be more confusing than helpful to show my modified code. I tried to imitate the concept of the vignette shader: I used the bounding box of the island, transformed x, y, width, height into screen coordinates, and took the position of the fragment relative to the island's center (a normal vignette uses the screen resolution instead of the island's 'resolution'). Then I wanted to invert the vignette effect (dark inside, fading out towards the outside).
Unfortunately it doesn't work, and I think the whole approach should be changed.
Second idea is to place a dark light in all islands on my map. (with Box2DLights)
But this might be a little expensive?
Any other ideas?
(I am using the LibGDX framework, which is basically just LWJGL (Java) with OpenGL for rendering.)
Hi, I'm trying to render a laser beam. So far I've got this effect:
It's just a rectangle, and the whole effect is done in the fragment shader.
However, as it is a laser beam, I want the rectangle to face the camera so that the player always sees this red transparent "line". And this is driving me crazy. I tried some billboarding stuff, but what I want isn't really billboarding. I just want to rotate it on the Z axis so that the player always sees the whole line, that's all. No X and Y rotations.
As you can see, that's what I want. And it's not billboarding at all.
If it was billboarding, it would look like this:
I also tried drawing a cylinder with the effect based on gl_FragCoord, which worked fine, but the coords varied (sometimes the UVs were 0 and 1, sometimes 0 and 0.7) and it didn't sample the whole texture, so the effect was broken.
Thus I don't even know what to do now.
I would really appreciate any help. Thanks in advance.
Here's the vertex shader code:
attribute vec3 a_position;
attribute vec2 a_texCoord0;

uniform mat4 u_worldTrans; // model matrix
uniform mat4 u_view;       // view matrix
uniform mat4 u_proj;       // projection matrix

varying vec2 v_texCoord0;

void main() {
    v_texCoord0 = a_texCoord0;
    vec4 worldTrans = u_worldTrans * vec4(a_position, 1.0);
    gl_Position = u_proj * u_view * worldTrans;
}
and here's the fragment shader code:
#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord0;

// Texture I apply the red color onto; it's how I get the smooth (transparent) edges.
uniform sampler2D tex;

void main() {
    vec4 texelColor = texture2D(tex, v_texCoord0); // sample the texture
    vec4 color = vec4(10.0, 0.0, 0.0, 1.0);        // the red color
    // I want the whole texture to be red: where there is less transparency it
    // should be more red, and at the (more transparent) edges less red.
    float r = 0.15;
    if (texelColor.a > 0.5) r = 0.1;
    // Mix the two colors into one, depending on texelColor's alpha and r.
    gl_FragColor = vec4(mix(color.rgb, texelColor.rgb, texelColor.a * r), texelColor.a);
}
The texture is just a white line, opaque in the middle but transparent at the edges of the texture (a smooth transition).
If you use DecalBatch to draw your laser, you can do it this way. It's called axial billboarding or cylindrical billboarding, as opposed to the spherical billboarding you described.
The basic idea is that you calculate the direction the sprite would be oriented for spherical billboarding, and then you do a couple of cross products to get the component of that direction that is perpendicular to the axis.
Let's assume your laser sprite is aligned to point up and down. You would do this series of calculations on every frame that the camera or laser moves.
// reusable calculation vectors
final Vector3 axis = new Vector3();
final Vector3 look = new Vector3();
final Vector3 tmp = new Vector3();

void orientLaserDecal (Decal decal, float beamWidth, Vector3 endA, Vector3 endB, Camera camera) {
    axis.set(endB).sub(endA); // the axis direction
    decal.setDimensions(beamWidth, axis.len());
    axis.scl(0.5f);
    tmp.set(endA).add(axis); // the center point of the laser
    decal.setPosition(tmp);
    // Laser center to camera. This is the look vector you'd use if doing
    // spherical billboarding, so it needs to be adjusted.
    look.set(camera.position).sub(tmp);
    // Axis cross look gives you the right vector, the direction the right edge
    // of the sprite should point. Same for spherical or cylindrical billboarding.
    tmp.set(axis).crs(look);
    // Right cross axis gives you an adjusted look vector that is perpendicular
    // to the axis, i.e. cylindrical billboarding.
    look.set(tmp).crs(axis);
    // Note that setRotation requires the direction vector to be normalized beforehand.
    decal.setRotation(look.nor(), axis);
}
I didn't check to make sure the direction doesn't get flipped, because I draw it with back face culling turned off. So if you have culling on and don't see the sprite, that last cross product step might need to have its order reversed so the look vector points in the opposite direction.
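For reference, a hypothetical per-frame usage (laserDecal, beamStart, beamEnd, and decalBatch are assumptions, not part of the snippet above):

// Re-orient the decal whenever the camera or the beam endpoints move,
// then submit it to the DecalBatch as usual.
orientLaserDecal(laserDecal, 0.5f, beamStart, beamEnd, camera);
decalBatch.add(laserDecal);
decalBatch.flush();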
I have an Android app using OpenGL ES 2.0. I need to draw 10 lines from an array, each described by a start point and an end point, so there are 10 lines = 20 points = 60 float values. None of the points are connected, so each pair of points in the array is unrelated to the others; hence I draw with GL_LINES.
I draw them by putting the values into a float buffer and calling some helper code like this:
public void drawLines(FloatBuffer vertexBuffer, float lineWidth,
        int numPoints, float colour[]) {
    GLES20.glLineWidth(lineWidth);
    drawShape(vertexBuffer, GLES20.GL_LINES, numPoints, colour);
}

protected void drawShape(FloatBuffer vertexBuffer, int drawType,
        int numPoints, float colour[]) {
    // ... set shader ...
    GLES20.glDrawArrays(drawType, 0, numPoints);
}
The drawLines method takes the float buffer (60 floats), a line width, the number of points (20), and a 4-float colour array. I haven't shown the shader setting code, but it basically exposes the colour variable to the uColour uniform.
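The omitted setting code would look roughly like this (a hedged sketch; program and the location lookup are hypothetical):

// Look up the uColour uniform and upload the per-draw colour array.
int uColourLink = GLES20.glGetUniformLocation(program, "uColour");
GLES20.glUniform4fv(uColourLink, 1, colour, 0);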
The fragment shader that picks up uColour just plugs it straight into the output.
/* Fragment */
precision mediump float;
uniform vec4 uColour;
uniform float uTime;
void main() {
    gl_FragColor = uColour;
}
The vertex shader:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
    gl_Position = uMVPMatrix * vPosition;
}
But now I want to do something different. I want every line in my buffer to have a different colour, where the colours are a function of the line's position in the array. I want to shade the first line white, the last dark grey, and the lines in between a gradation between the two, e.g. #ffffff, #eeeeee, #dddddd, etc.
I could obviously just draw each line individually, plugging a new value into uColour each time, but that is inefficient. I don't want to call GL 10 times when I could call it once and modify a value in the shader each time around.
Perhaps I could declare a uniform value called uVertexCount in my vertex shader? Prior to the draw I would set uVertexCount to 0, and each time the vertex shader runs it would increment the value. The fragment shader could then determine the line index by looking at uVertexCount and interpolate a colour between some start and end value. But this depends on whether every line or point is considered its own primitive, or the whole array of lines is a single primitive.
Is this feasible? I don't know how many times the vertex shader is called relative to the fragment shader. Are the calls interleaved in a way that makes this viable, i.e. vertex 0, vertex 1, x * fragment, vertex 2, vertex 3, x * fragment, etc.?
Does anyone know of some reasonable sample code that might demonstrate the concept or point me to some other way of doing something similar?
Add colour information to your vertex buffer (FloatBuffer) and use the attribute in your shader.
Example vertex shader:
uniform mat4 uMVPMatrix;

attribute vec4 vPosition;
attribute vec3 vColor;

varying vec3 color;

void main() {
    gl_Position = uMVPMatrix * vPosition;
    color = vColor;
}
Example fragment shader:
precision mediump float;

varying vec3 color;

void main() {
    // color is a vec3, so expand it to a vec4 for gl_FragColor
    gl_FragColor = vec4(color, 1.0);
}
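On the Java side, a minimal sketch of feeding that vColor attribute from its own FloatBuffer (the handle names and the grey ramp are assumptions):

// 10 lines * 2 vertices * 3 floats: one grey level per line,
// fading from white (#ffffff) down to a dark grey.
float[] colours = new float[10 * 2 * 3];
for (int line = 0; line < 10; line++) {
    float grey = 1.0f - line * (0.5f / 9.0f); // 1.0 for line 0 down to 0.5 for line 9
    for (int v = 0; v < 2; v++) {             // both vertices of a line share its grey
        int base = (line * 2 + v) * 3;
        colours[base] = colours[base + 1] = colours[base + 2] = grey;
    }
}
FloatBuffer colourBuffer = ByteBuffer.allocateDirect(colours.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
colourBuffer.put(colours).position(0);

// Bind the buffer to the vColor attribute and draw all 20 points in one call.
int vColorLink = GLES20.glGetAttribLocation(program, "vColor");
GLES20.glEnableVertexAttribArray(vColorLink);
GLES20.glVertexAttribPointer(vColorLink, 3, GLES20.GL_FLOAT, false, 0, colourBuffer);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, 20);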