I'm pretty new to shaders and I have trouble understanding what can and cannot be done. For example, I have a 3D voxel terrain. Each vertex carries three pieces of information: its position, its color, and its normal. The position is pretty much unique to each vertex, but there are only 6 possible normals (left, right, up, down, forward, backward) and 256 different colors in my game. So I tried to create a const array in my vertex shader holding the 6 normals, and then, instead of using 3 bytes per vertex to store the normal, use only 1 byte to store the index to look up. However, this is not working, seemingly because array indices can only be constant. I also tried testing whether the normal equals 0 and then using normals[0], etc., but that didn't work either.
My question is: How can I store recurring data, and then retrieve it by storing indices in my buffers instead of the data itself?
EDIT:
I forgot to mention what behavior I have:
when passing a "in" variable as an index for the array this is automatically converted to the 0 index, and when testing the value of the "in" and assigning the correct index, I have strange artifacts everywhere.
#version 400
const vec4 normals[6] = vec4[6](vec4(1,0,0,0), ....);
in int normal;
void main(void){
normals[normal]; // ---> always returns the first element of the array
}
EDIT 2:
So after changing glVertexAttribPointer to glVertexAttribIPointer, I still get big artifacts, so here are the code and the result:
Method called to create the normals vbo:
private void storeDataInAttributeList(int attributeNumber, int coordsSize,byte[] data) {
int vboID = GL15.glGenBuffers();
vbos.add(vboID);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER,vboID);
ByteBuffer buffer = storeDataInByteBuffer(data);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buffer, GL15.GL_STATIC_DRAW);
GL30.glVertexAttribIPointer(attributeNumber, coordsSize, GL11.GL_BYTE, 0,0);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
}
Vertex shader:
#version 400 core
const vec4 normals[] = vec4[6](vec4(1,0,0,0),vec4(0,1,0,0),vec4(0,0,1,0),vec4(-1,0,0,0),vec4(0,-1,0,0),vec4(0,0,-1,0));
in vec3 position;
in vec3 color;
in int normal;
out vec4 color_out;
out vec3 unitNormal;
out vec3 unitLightVector;
out vec3 unitToCameraVector;
out float visibility;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform vec3 lightPosition;
uniform float fogDensity;
uniform float fogGradient;
void main(void){
vec4 worldPosition = transformationMatrix * vec4(position,1.0);
vec4 positionRelativeToCam = viewMatrix * worldPosition;
gl_Position = projectionMatrix * positionRelativeToCam;
color_out = vec4(color,0.0);
unitNormal = normalize((transformationMatrix * normals[normal]).xyz);
unitLightVector = normalize(lightPosition - worldPosition.xyz);
unitToCameraVector = normalize((inverse(viewMatrix) * vec4(0.0,0.0,0.0,1.0)).xyz - worldPosition.xyz);
visibility = clamp(exp(-pow(length(positionRelativeToCam.xyz)*fogDensity,fogGradient)),0.0,1.0);
}
Result:
Final Edit: I forgot to change the size of the vertex attribute from 3 to 1; after fixing that, everything works fine.
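That is, the call that sets up the normal VBO has to pass 1 as the size, e.g. something like the following (the attribute index 2 and the name normalIndices are only illustrative):
// Size 1: a single GL_BYTE normal index per vertex (it was mistakenly 3 before)
storeDataInAttributeList(2, 1, normalIndices);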
For a vertex attribute with an integral data type you have to use glVertexAttribIPointer (note the I) rather than glVertexAttribPointer. See glVertexAttribPointer.
The type of the attribute in the shader is not determined by the type argument; that argument only specifies the type of the source data in the buffer. Attribute data specified with glVertexAttribPointer is converted to floating point, whereas glVertexAttribIPointer keeps the values integral.
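For illustration, a minimal sketch of the two calls side by side, using the attribute index and byte data from the question's loader code:
// glVertexAttribPointer: the GL_BYTE source data is converted to floating point,
// so it matches an 'in float' / 'in vec*' attribute in the shader.
GL20.glVertexAttribPointer(attributeNumber, 1, GL11.GL_BYTE, false, 0, 0);

// glVertexAttribIPointer: the values stay integral, matching the shader's 'in int normal'.
GL30.glVertexAttribIPointer(attributeNumber, 1, GL11.GL_BYTE, 0, 0);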
Related
I have recently started rewriting my game in LWJGL 3, and it is so different.
I tried looking for tutorials, and found one where the author uses a slick-util build compatible with LWJGL version 3.
I followed the tutorial, watched the 30-minute video almost three times, and couldn't figure out what was wrong. I checked its GitHub repository and still couldn't see why my texture wasn't loading properly.
Here is how my texture looks rendered with my code
Here is his GitHub repository for the tutorial
Here is the tutorial video
Skip to timestamp: 32:02
I really need help; I've tried all day and I can't get this working.
Let me know if sharing code is necessary; maybe one of you has run into this bug before and already knows the solution.
Thank you
The vertex specification does not match the attribute indices of the shader program:
pbo = storeData(positionBuffer, 0, 3);
// [...]
cbo = storeData(colorBuffer, 1, 3);
// [...]
tbo = storeData(textureBuffer, 2, 2);
Your vertex shader:
#version 460 core
in vec3 position;
in vec3 color;
in vec2 textureCoord;
out vec3 passColor;
out vec2 passTextureCoord;
void main() {
gl_Position = vec4(position, 1.0);
passColor = color;
passTextureCoord = textureCoord;
}
Your fragment shader:
#version 330 core
in vec3 passColor;
in vec2 passTextureCoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texture(tex, passTextureCoord);
}
The attribute indices are not automatically enumerated (0, 1, 2, ...). The attribute color doesn't even get an attribute index, because it is not an active program resource: color is written to the interface variable passColor, but that variable is not used in the fragment shader. Hence this shader program has only 2 active attributes, and the indices of these attributes are not specified. Possibly the indices are 0 for position and 1 for textureCoord (that is what most hardware drivers will do), but you cannot rely on that.
Use a Layout Qualifier (GLSL) to specify the vertex shader attribute indices:
#version 460 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
layout(location = 2) in vec2 textureCoord;
// [...]
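Alternatively, as a sketch, the attribute indices can be assigned from the Java side (assuming programID is the shader program handle); glBindAttribLocation has to be called before the program is linked:
// Bind attribute names to explicit indices; this only takes effect at link time
GL20.glBindAttribLocation(programID, 0, "position");
GL20.glBindAttribLocation(programID, 1, "color");
GL20.glBindAttribLocation(programID, 2, "textureCoord");
GL20.glLinkProgram(programID);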
I have been following the tutorial on LWJGL's website (http://wiki.lwjgl.org/wiki/GLSL_Tutorial:_Communicating_with_Shaders.html) to try to send an array of random floats to my fragment shader, but every value in the shader was 0. I tried copy-pasting the "arrayOfInts" example to isolate the problem, changing everything to ints, but I still only get zeros shader-side. According to the documentation (https://javadoc.lwjgl.org/org/lwjgl/opengl/GL20.html#glUniform1iv(int,int%5B%5D)), the function glUniform1iv exists and does what I need, but when I try it Eclipse tells me it doesn't exist in the class GL20, whether I pass an int[] or an IntBuffer; the same goes for glUniform1fv. The location of the uniform variable is 0, so it is found correctly. The values stored in the generated Java arrays are correct.
Vertex Shader:
#version 400 core
in vec3 position;
out vec2 uvPosition;
void main(void){
gl_Position = vec4(position, 1.0);
uvPosition = (position.xy+vec2(1.0, 1.0))/2;
}
Fragment shader:
#version 400 core
in vec2 uvPosition;
out vec4 outColor;
uniform int random[50];
int randomIterator = 0;
float randf(){
return random[randomIterator++];
}
void main(void){
outColor = vec4(random[0]/10, 1.0, 1.0, 1.0);
}
Java code where I load the uniform:
final int RAND_AMOUNT = 50;
IntBuffer buffer = BufferUtils.createIntBuffer(RAND_AMOUNT);
int[] array = new int[RAND_AMOUNT];
for(int i = 0; i < RAND_AMOUNT; i++) {
buffer.put((int)(Math.random()*9.999f));
array[i] = (int)(Math.random()*9.999f);
}
buffer.rewind();
System.out.println(buffer.get(0));
GL20.glUniform1(glGetUniformLocation(programID, "random"), buffer); // In this line, using GL20.glUniform1iv tells me it doesn't exist. Same with glUniform1fv.
Nothing throws errors, and the display is cyan, meaning the red component is 0. Any help with getting the random ints across, or with sending random floats directly, would be appreciated. Feel free to ask any questions.
glUniform* sets the value of a uniform in the default uniform block of the currently installed program. Thus the program has to be installed with glUseProgram beforehand:
int random_loc = GL20.glGetUniformLocation(programID, "random");
GL20.glUseProgram(programID);
GL20.glUniform1(random_loc, buffer);
I'm looking for a specific shader or an idea for another approach to get the desired result.
A picture shows the desired result (left-side input, right-side output):
I already experimented with modifying a simple vignette shader:
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform vec2 u_resolution;
uniform sampler2D u_sampler2D;
const float outerRadius = .65, innerRadius = .4, intensity = .6;
void main() {
vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
vec2 relativePosition = gl_FragCoord.xy / u_resolution - .5;
float len = length(relativePosition);
float vignette = smoothstep(outerRadius, innerRadius, len);
color.rgb = mix(color.rgb, color.rgb * vignette, intensity);
gl_FragColor = color;
}
I think it would be more confusing than helpful to show my modified code. I tried to imitate the concept of the vignette shader: I used the bounding box of the island, transformed its x, y, width, and height into screen coordinates, and computed the position of the fragment relative to the island's center (a normal vignette would use the screen resolution instead of the island's "resolution"). Then I wanted to invert the vignette effect (dark inside, fading out outside).
Unfortunately it doesn't work, and I think the whole approach should be changed.
A second idea is to place a dark light on every island on my map (with Box2DLights), but this might be a little expensive?
Any other ideas?
So I'm working with Java/LibGDX and I'm trying to set up the very basics of deferred rendering, namely rendering the actual game art to one color buffer of an FBO and the corresponding normals to another color buffer of the FBO.
(So essentially, I want to create this and this by using multiple render targets.)
My problem is my end output is blank, as if nothing is being rendered or is being rendered incorrectly.
My shader (vertex and fragment)
I'm fairly sure this works, since if I just render some sprites normally to the screen (not to an FBO) with it enabled, they do render.
#version 150
in vec4 a_position;
in vec4 a_color;
in vec2 a_texCoord0;
uniform mat4 u_projTrans;
out vec4 v_color;
out vec2 v_texCoords;
void main()
{
v_color = a_color;
v_color.a = v_color.a * (255.0/254.0);
v_texCoords = a_texCoord0;
gl_Position = u_projTrans * a_position;
}
#version 150
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
in LOWP vec4 v_color;
in vec2 v_texCoords;
uniform sampler2D u_texture;
uniform sampler2D u_normal;
out vec4 fragColor;
void main()
{
gl_FragData[0] = v_color * texture(u_texture, v_texCoords);
gl_FragData[1] = texture(u_normal, v_texCoords);
}
The following code is in the render loop.
The basic idea is that I bind the two color buffers of the FBO with glDrawBuffers. I then bind two texture units, the game art and its normal-mapped counterpart. My shader above is supposed to take these and output the game art to one color buffer and the corresponding normal art to the other.
// Position the camera.
_cameraRef.position.set(_stage.getWidth() * 0.5f, _stage.getHeight() * 0.5f, 0);
// Update the camera, SpriteBatch, and map renderer.
_cameraRef.update();
_spriteBatch.setProjectionMatrix(_cameraRef.combined);
_mapRenderer.setView(_cameraRef);
// Bind the color texture units of the FBO (multiple-render-targets).
Gdx.gl30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, _gBufferFBOHandle);
IntBuffer intBuffer = BufferUtils.newIntBuffer(2);
intBuffer.put(GL30.GL_COLOR_ATTACHMENT0);
intBuffer.put(GL30.GL_COLOR_ATTACHMENT1);
Gdx.gl30.glDrawBuffers(2, intBuffer);
// Draw!
Gdx.gl30.glClearColor(0, 0, 0, 1);
Gdx.gl30.glClear(GL30.GL_COLOR_BUFFER_BIT);
_spriteBatch.setShader(_shaderGBuffer);
_spriteBatch.begin();
_tilesAtlasNormal.bind(1); // Using multiple texture units to draw art and normals at same time to the two color buffers of the FBO.
_tilesAtlas.bind(0);
_mapRenderer.renderTileLayer(_mapLayerBackground);
_spriteBatch.end();
_spriteBatch.setShader(null);
// Bind the default FBO.
Gdx.gl30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
// Draw the contents of the FBO onto the screen to see what it looks like.
Gdx.gl30.glClearColor(0, 0, 0, 1);
Gdx.gl30.glClear(GL30.GL_COLOR_BUFFER_BIT);
_spriteBatch.begin();
_spriteBatch.draw(_gBufferTexture1, 0.0f, 0.0f); // <-- This texture is blank? It's the texture that was just rendered to above.
_spriteBatch.end();
So like I said, the end output is blank. I'm sure the FBO I create is valid since I check with glCheckFramebufferStatus.
It's just when I take the color texture(s) from that FBO and draw them to the screen, they're blank. I don't know where I'm going wrong.
Appreciate any input.
Last I checked, gl30 wasn't really ready for prime time in LibGDX.
That said, I've achieved exactly what you want, but in gl20. Even if you have other reasons to use gl30, perhaps you will find my implementation useful (source, video).
Don't forget to rewind your IntBuffer: glDrawBuffers reads from the buffer's current position, and after the two put calls that position is at the end of the buffer, so the attachments are never actually passed.
IntBuffer intBuffer = BufferUtils.newIntBuffer(2);
intBuffer.put(GL30.GL_COLOR_ATTACHMENT0);
intBuffer.put(GL30.GL_COLOR_ATTACHMENT1);
intBuffer.rewind();
I have an Android app using OpenGL ES 2.0. I need to draw 10 lines from an array, each of which is described by a start point and an end point. So there are 10 lines = 20 points = 60 float values. None of the points are connected, so each pair of points in the array is unrelated to the others; hence I draw with GL_LINES.
I draw them by putting the values into a float buffer and calling some helper code like this:
public void drawLines(FloatBuffer vertexBuffer, float lineWidth,
int numPoints, float colour[]) {
GLES20.glLineWidth(lineWidth);
drawShape(vertexBuffer, GLES20.GL_LINES, numPoints, colour);
}
protected void drawShape(FloatBuffer vertexBuffer, int drawType,
int numPoints, float colour[]) {
// ... set shader ...
GLES20.glDrawArrays(drawType, 0, numPoints);
}
drawLines takes the float buffer (60 floats), a line width, the number of points (20), and a 4-float colour array. I haven't shown the shader setup code, but it basically passes the colour array to the uniform uColour.
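Roughly, the upload inside drawShape looks like this (a sketch; mProgram is an illustrative name for the linked program):
// Inside drawShape, roughly where "// ... set shader ..." is:
GLES20.glUseProgram(mProgram);
int colourHandle = GLES20.glGetUniformLocation(mProgram, "uColour");
GLES20.glUniform4fv(colourHandle, 1, colour, 0);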
The fragment shader that picks up uColour just plugs it straight into the output.
/* Fragment */
precision mediump float;
uniform vec4 uColour;
uniform float uTime;
void main() {
gl_FragColor = uColour;
}
The vertex shader:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
gl_Position = uMVPMatrix * vPosition;
}
But now I want to do something different. I want every line in my buffer to have a different colour. The colours are a function of the line's position in the array. I want to shade the first line white, the last dark gray, and the lines in between a gradation between the two, e.g. #ffffff, #eeeeee, #dddddd, etc.
I could obviously just draw each line individually, plugging a new value into uColour each time, but that is inefficient. I don't want to issue 10 GL calls when I could issue one and vary the value in the shader each time around.
Perhaps I could declare a uniform called uVertexCount in my vertex shader? Prior to the draw I would set uVertexCount to 0, and each time the vertex shader is called I would increment it. The fragment shader could determine the line index by looking at uVertexCount and then interpolate the colour between some start and end value, or by some other means. But this depends on whether every line or point is considered a primitive, or the whole array of lines is a single primitive.
Is this feasible? I don't know how many times the vertex shader is called per fragment shader invocation. Are the calls interleaved in a way that would make this viable, i.e. vertex 0, vertex 1, x * fragment, vertex 2, vertex 3, x * fragment, etc.?
Does anyone know of some reasonable sample code that might demonstrate the concept or point me to some other way of doing something similar?
Add color information to your vertex buffer (FloatBuffer) and use the attribute in your shader.
Example vertex shader:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec3 vColor;
varying vec3 color;
void main() {
gl_Position = uMVPMatrix * vPosition;
color = vColor;
}
Example fragment shader:
precision mediump float;
varying vec3 color;
void main() {
gl_FragColor = vec4(color, 1.0);
}
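To produce the white-to-dark-gray gradient described in the question, the colour data can be built on the Java side before the draw, for example like this (a sketch; the end shade of 0.2, the variable program, and the buffer/handle names are illustrative; uses java.nio.ByteBuffer/ByteOrder/FloatBuffer and android.opengl.GLES20):
// Both endpoints of line i share one shade, interpolated from white (first line)
// towards dark gray (last line).
int numLines = 10;
FloatBuffer colourBuffer = ByteBuffer
        .allocateDirect(numLines * 2 * 3 * 4) // 2 vertices per line, 3 floats each, 4 bytes per float
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
for (int i = 0; i < numLines; i++) {
    float t = (float) i / (numLines - 1); // 0.0 for the first line, 1.0 for the last
    float shade = 1.0f - 0.8f * t;        // 1.0 (white) down to 0.2 (dark gray)
    colourBuffer.put(shade).put(shade).put(shade); // start point of the line
    colourBuffer.put(shade).put(shade).put(shade); // end point of the line
}
colourBuffer.rewind();

// Point the vColor attribute at this buffer before glDrawArrays:
int colorHandle = GLES20.glGetAttribLocation(program, "vColor");
GLES20.glEnableVertexAttribArray(colorHandle);
GLES20.glVertexAttribPointer(colorHandle, 3, GLES20.GL_FLOAT, false, 0, colourBuffer);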