How to pass an array of floats to a shader in LWJGL - Java

I have been following the tutorial on LWJGL's website (http://wiki.lwjgl.org/wiki/GLSL_Tutorial:_Communicating_with_Shaders.html) to try to send an array of random floats to my fragment shader, but every value in the shader was 0. I tried copy-pasting the "arrayOfInts" example to isolate the problem, changing everything to ints, but I still only get zeros shader-side. According to the documentation (https://javadoc.lwjgl.org/org/lwjgl/opengl/GL20.html#glUniform1iv(int,int%5B%5D)), the function glUniform1iv exists and does what I need, but when I try it, Eclipse tells me it doesn't exist in the class GL20, whether I pass an int[] or an IntBuffer; the same goes for glUniform1fv. The location of the uniform variable is 0, so it is being looked up correctly. The values stored in the generated Java arrays are correct.
Vertex Shader:
#version 400 core

in vec3 position;
out vec2 uvPosition;

void main(void){
    gl_Position = vec4(position, 1.0);
    uvPosition = (position.xy + vec2(1.0, 1.0)) / 2;
}
Fragment shader:
#version 400 core

in vec2 uvPosition;
out vec4 outColor;

uniform int random[50];

int randomIterator = 0;

float randf(){
    return random[randomIterator++];
}

void main(void){
    // Note: random[0]/10 is integer division, so values 0..9 always yield 0.
    outColor = vec4(random[0]/10, 1.0, 1.0, 1.0);
}
Java code where I load the uniform:
final int RAND_AMOUNT = 50;
IntBuffer buffer = BufferUtils.createIntBuffer(RAND_AMOUNT);
int[] array = new int[RAND_AMOUNT];
for (int i = 0; i < RAND_AMOUNT; i++) {
    buffer.put((int) (Math.random() * 9.999f));
    array[i] = (int) (Math.random() * 9.999f);
}
buffer.rewind();
System.out.println(buffer.get(0));
// On this line, using GL20.glUniform1iv tells me it doesn't exist. Same with glUniform1fv.
GL20.glUniform1(glGetUniformLocation(programID, "random"), buffer);
Nothing throws errors and the display is cyan, meaning the red component is 0. Any help in making the random ints work, or in sending random floats directly, would be appreciated. Feel free to ask any questions.

glUniform* sets the value of a uniform in the default uniform block of the currently installed program, so the program has to be installed with glUseProgram first:
int random_loc = GL20.glGetUniformLocation(programID, "random");
GL20.glUseProgram(programID);
GL20.glUniform1(random_loc, buffer);
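For reference, a complete upload sequence might look like the sketch below. It assumes LWJGL 2, where the buffer overload is plainly named glUniform1; the glUniform1iv/glUniform1fv spellings in the linked javadoc belong to the LWJGL 3 API, which is why Eclipse reports them missing when the project is built against LWJGL 2.
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL20;

public final class UniformArrayUpload {
    // Sketch: fill and upload `uniform int random[50]` on an already linked program.
    public static void uploadRandomInts(int programID) {
        final int RAND_AMOUNT = 50;
        IntBuffer buffer = BufferUtils.createIntBuffer(RAND_AMOUNT);
        for (int i = 0; i < RAND_AMOUNT; i++) {
            buffer.put((int) (Math.random() * 10.0)); // random int in 0..9
        }
        buffer.flip(); // prepare the buffer for reading

        GL20.glUseProgram(programID); // install the program before any glUniform* call
        int loc = GL20.glGetUniformLocation(programID, "random");
        GL20.glUniform1(loc, buffer); // LWJGL 3 spelling: GL20.glUniform1iv(loc, buffer)
    }
}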

Related

GLSL store recurrent data in constant arrays

I'm pretty new to shaders and I have trouble understanding what can and cannot be done. For example, I have a 3D voxel terrain. Each vertex has 3 pieces of info: its position, its color, and its normal. The position is pretty much unique to each vertex, but there are only 6 possible normals (left, right, up, down, forward, backward) and 256 different colors in my game. So I tried to create a const array in my vertex shader, put the 6 normals in there, and then, instead of using 3 bytes per vertex to store the normal, use only 1 byte to store the index to look up. However, this is not working, because array indices seem to behave as if they could only be constants. I also tried testing whether normal == 0 and then using normals[0], etc., but it didn't work either.
My question is: how can I store recurrent data and then retrieve it by storing indices in my buffers instead of the data itself?
EDIT:
I forgot to mention the behavior I see:
when passing an "in" variable as an index into the array, it is always treated as index 0; and when testing the value of the "in" and assigning the corresponding index, I get strange artifacts everywhere.
#version 400

const vec4 normals[6](vec4(1,0,0,0),....)

in int normal;

void main(void){
    normals[normal]; // ---> always returns the first element of the array
}
EDIT 2:
So after changing glVertexAttribPointer to glVertexAttribIPointer, I still have big artifacts so I'll post the code and the result:
Method called to create the normals vbo:
private void storeDataInAttributeList(int attributeNumber, int coordsSize, byte[] data) {
    int vboID = GL15.glGenBuffers();
    vbos.add(vboID);
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboID);
    ByteBuffer buffer = storeDataInByteBuffer(data);
    GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buffer, GL15.GL_STATIC_DRAW);
    GL30.glVertexAttribIPointer(attributeNumber, coordsSize, GL11.GL_BYTE, 0, 0);
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
}
Vertex shader:
#version 400 core

const vec4 normals[] = vec4[6](vec4(1,0,0,0), vec4(0,1,0,0), vec4(0,0,1,0), vec4(-1,0,0,0), vec4(0,-1,0,0), vec4(0,0,-1,0));

in vec3 position;
in vec3 color;
in int normal;

out vec4 color_out;
out vec3 unitNormal;
out vec3 unitLightVector;
out vec3 unitToCameraVector;
out float visibility;

uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform vec3 lightPosition;
uniform float fogDensity;
uniform float fogGradient;

void main(void){
    vec4 worldPosition = transformationMatrix * vec4(position, 1.0);
    vec4 positionRelativeToCam = viewMatrix * worldPosition;
    gl_Position = projectionMatrix * positionRelativeToCam;
    color_out = vec4(color, 0.0);
    unitNormal = normalize((transformationMatrix * normals[normal]).xyz);
    unitLightVector = normalize(lightPosition - worldPosition.xyz);
    unitToCameraVector = normalize((inverse(viewMatrix) * vec4(0.0,0.0,0.0,1.0)).xyz - worldPosition.xyz);
    visibility = clamp(exp(-pow(length(positionRelativeToCam.xyz) * fogDensity, fogGradient)), 0.0, 1.0);
}
Result:
Final Edit: I forgot to change the size of the vertex attribute from 3 to 1; now everything works fine.
For a vertex attribute with an integral data type you have to use glVertexAttribIPointer (focus on the I) rather than glVertexAttribPointer. See glVertexAttribPointer.
The type of the attribute in the shader is not specified by the type argument; that argument only specifies the type of the source data array. Attribute data specified by glVertexAttribPointer is converted to floating point.
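Putting both fixes together (the I variant plus a size of 1), a minimal sketch of the normal-index upload might look like this; BufferUtils is used here in place of the question's storeDataInByteBuffer helper:
import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL15;
import org.lwjgl.opengl.GL30;

// Sketch: one byte per vertex holding the index into the const normals[] array.
static void storeNormalIndices(int attributeNumber, byte[] data) {
    int vboID = GL15.glGenBuffers();
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboID);
    ByteBuffer buffer = BufferUtils.createByteBuffer(data.length);
    buffer.put(data).flip();
    GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buffer, GL15.GL_STATIC_DRAW);
    // size 1 (a single index per vertex) and the I variant, so `in int normal;`
    // receives the data as integers instead of converted floats
    GL30.glVertexAttribIPointer(attributeNumber, 1, GL11.GL_BYTE, 0, 0);
    GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
}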

LWJGL 3 failing to render textures properly with a slick-util version compatible with LWJGL 3

I have recently started rewriting my game in LWJGL 3, and it is so different.
I tried looking for tutorials, and found one whose author used a slick-util build compatible with LWJGL 3.
I followed the 30-minute tutorial almost three times and couldn't figure out what was wrong. I checked its GitHub repository and still couldn't figure out why my texture wasn't loading properly.
Here is how my texture looks rendered with my code
Here is his github repository for the tutorial
Here is the tutorial video
Skip to timestamp: 32:02
I really need help; I tried all day and I can't get this working!
Let me know if sharing code is necessary, because maybe one of you has had this bug already and knows the solution.
Thank you
The vertex specification does not match the attribute indices of the shader program:
pbo = storeData(positionBuffer, 0, 3);
// [...]
cbo = storeData(colorBuffer, 1, 3);
// [...]
tbo = storeData(textureBuffer, 2, 2);
Your vertex shader:
#version 460 core

in vec3 position;
in vec3 color;
in vec2 textureCoord;

out vec3 passColor;
out vec2 passTextureCoord;

void main() {
    gl_Position = vec4(position, 1.0);
    passColor = color;
    passTextureCoord = textureCoord;
}
Your fragment shader:
#version 330 core

in vec3 passColor;
in vec2 passTextureCoord;

out vec4 outColor;

uniform sampler2D tex;

void main() {
    outColor = texture(tex, passTextureCoord);
}
The attribute indices are not automatically enumerated (0, 1, 2, ...). The attribute color doesn't even get an attribute index, because it is not an active program resource: color is written to the interface variable passColor, but that variable is never used in the fragment shader. Hence this shader program has only 2 active attributes, and the indices of these attributes are unspecified. Possibly the indices are 0 for position and 1 for textureCoord (that is what most hardware drivers will do), but you cannot be sure about that.
Use a Layout Qualifier (GLSL) to set the vertex shader attribute indices:
#version 460 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
layout(location = 2) in vec2 textureCoord;
// [...]
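Alternatively, if you prefer to keep the shader unchanged, the indices can be bound on the Java side with glBindAttribLocation; note this only takes effect when the program is (re)linked. A sketch, assuming programID is the shader program from the question:
import org.lwjgl.opengl.GL20;

// Bind the attribute names to the indices used by the storeData calls,
// then link so the bindings take effect.
GL20.glBindAttribLocation(programID, 0, "position");
GL20.glBindAttribLocation(programID, 1, "color");
GL20.glBindAttribLocation(programID, 2, "textureCoord");
GL20.glLinkProgram(programID);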

GLSL renders textures wrong

I am trying to make a lighting system: the program changes a texture's (block's) brightness depending on the light it receives, and it does this for every block (texture) visible to the player.
However, while the lighting system itself works perfectly, when it comes to rendering with shaders everything falls apart.
This code is in the render loop -
float light = Lighting.checkLight(mapPosY, mapPosX, this); // Returns the light the current block gets
map[mapPos].light = light; // Applies the light to the block
Shaders.block.setUniformf("lightBlock", light); // Sets the light value to the shader's uniform, to change the brightness of the current block / texture.
batch.draw(map[mapPos].TEXTURE, (mapPosX * Block.WIDTH), (mapPosY * Block.HEIGHT), Block.WIDTH, Block.HEIGHT); // Renders the block / texture to the screen.
The result is pretty random.
As I said, the first two lines work perfectly; the problem is probably in the third line or in the shader itself.
The shader:
Vertex shader -
attribute vec4 a_color;
attribute vec3 a_position;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 vColor;
varying vec2 vTexCoord;

void main() {
    vColor = a_color;
    vTexCoord = a_texCoord0;
    gl_Position = u_projTrans * vec4(a_position, 1.0f);
}
Fragment shader -
varying vec4 vColor;
varying vec2 vTexCoord;

uniform vec2 screenSize;
uniform sampler2D tex;
uniform float lightBlock;

const float outerRadius = .65, innerRadius = .4, intensity = .6;
const float SOFTNESS = 0.6;

void main() {
    vec4 texColor = texture2D(tex, vTexCoord) * vColor;
    vec2 relativePosition = gl_FragCoord.xy / screenSize - .5;
    float len = length(relativePosition);
    float vignette = smoothstep(outerRadius, innerRadius, len);
    texColor.rgb = mix(texColor.rgb, texColor.rgb * vignette * lightBlock, intensity);
    gl_FragColor = texColor;
}
I fixed the problem, but I have no idea why this fixed it.
Thanks to Pedro, I focused more on the render loop instead of the shader itself.
Before the loop I added these two lines:
List<Integer> lBlocks = new ArrayList<Integer>();
Shaders.block.setUniformf("lightBlock", 0.3f);
Basically, I created a list to store the bright blocks for later.
I set the shader's uniform to 0.3f, which means pretty dark; the value should be in the range 0-1f.
Now in the render loop, inside the for -
float light = Lighting.checkLight(mapPosY, mapPosX, this);
map[mapPos].light = light;

if (light == 1.0f) {
    lBlocks.add(mapPos);
} else {
    batch.draw(map[mapPos].TEXTURE, (mapPosX * Block.WIDTH), (mapPosY * Block.HEIGHT), Block.WIDTH, Block.HEIGHT);
}
As you can see, I add the bright blocks to the list and render the dark ones, with the uniform set to 0.3f before the render loop, as shown in the first code sample.
After the render loop I loop again, this time through the bright blocks, because we didn't render them yet.
Shaders.block.setUniformf("lightBlock", 1.0f);
for (int i = 0; i < lBlocks.size(); i++) {
    batch.draw(map[lBlocks.get(i)].TEXTURE, ((lBlocks.get(i) % width) * Block.WIDTH), ((lBlocks.get(i) / width) * Block.HEIGHT), Block.WIDTH, Block.HEIGHT);
}
Now the bright blocks are rendered too, and the result looks right.
But I have no idea why it works like that; it's like cutting the render loop in two, one pass for the dark blocks and one for the bright ones.
Thanks :)
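A likely explanation, assuming the batch here is libGDX's SpriteBatch: the batch does not draw immediately. It queues sprites and submits them in one GL draw call when it flushes, and a uniform has only one value per draw call, so setting lightBlock once per block leaves only the last value in effect for everything queued. Splitting the rendering into one pass per uniform value works for exactly that reason. Forcing a flush before each uniform change would also work, at the cost of one draw call per block:
// Sketch (assumes libGDX): submit the queued sprites with the old uniform
// value before changing it, so each change actually takes effect.
batch.flush();
Shaders.block.setUniformf("lightBlock", light);
batch.draw(map[mapPos].TEXTURE, mapPosX * Block.WIDTH, mapPosY * Block.HEIGHT,
        Block.WIDTH, Block.HEIGHT);
Since the fragment shader already multiplies by vColor, an even cheaper route would be to encode the light value per sprite with batch.setColor(light, light, light, 1f) and drop the uniform entirely.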

GLSL shaders, gl_ModelViewMatrix not correct?

So I was trying to make a shader that changes the color of my crystal a little over time, and it all went fine until I noticed that it didn't get darker the further away it was from the light source (default OpenGL lights for now!). So I tried to tone down the color values by the vertex's distance from the light, but that didn't work. Later on I discovered (by setting the color to red if the x position of the vertex in world coordinates was greater than a certain value) that the vertex.x value was around 0, even though it should be about 87.0.
void main()
{
    vec3 vertexPosition = vec3(gl_ModelViewMatrix * vertexPos);
    vec3 surfaceNormal = (gl_NormalMatrix * normals).xyz;
    vec3 lightDirection = normalize(gl_LightSource[0].position.xyz - vertexPosition);
    float diffuseLI = max(0.0, dot(surfaceNormal, lightDirection));
    vec4 texture = texture2D(textureSample, gl_TexCoord[0].st); // texture2D takes a vec2
    if(vertexPosition.x > 0.0) gl_FragColor.rgba = vec4(1, 0, 0, 1);
    /* And so on.... */
}
As far as I know, gl_ModelViewMatrix * gl_Vertex should give the world coordinates of the vertex. Am I just stupid or what?
(I also tried the same if statement with the light position, which was correct!)
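For what it's worth, gl_ModelViewMatrix * gl_Vertex does not give world coordinates: the model-view matrix is the model transform already combined with the viewing transform, so the product is in eye (view) space, where a vertex in front of the camera has an x near 0. The comparison against gl_LightSource[0].position looked correct because fixed-function light positions are also stored in eye space (they are transformed by the current model-view matrix at the time glLight* is called).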

How do I use a shader to colour lines drawn with GL_LINES and OpenGL ES 2.0

I have an Android app using OpenGL ES 2.0. I need to draw 10 lines from an array, each described by a start point and an end point, so there are 10 lines = 20 points = 60 float values. None of the points are connected, so each pair of points in the array is unrelated to the others; hence I draw with GL_LINES.
I draw them by putting the values into a float buffer and calling some helper code like this:
public void drawLines(FloatBuffer vertexBuffer, float lineWidth,
        int numPoints, float colour[]) {
    GLES20.glLineWidth(lineWidth);
    drawShape(vertexBuffer, GLES20.GL_LINES, numPoints, colour);
}

protected void drawShape(FloatBuffer vertexBuffer, int drawType,
        int numPoints, float colour[]) {
    // ... set shader ...
    GLES20.glDrawArrays(drawType, 0, numPoints);
}
drawLines takes the float buffer (60 floats), a line width, the number of points (20), and a 4-float colour array. I haven't shown the shader-setting code, but it basically exposes the colour variable as the uniform uColour.
The fragment shader that picks up uColour just plugs it straight into the output.
/* Fragment */
precision mediump float;

uniform vec4 uColour;
uniform float uTime;

void main() {
    gl_FragColor = uColour;
}
The vertex shader:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;

void main() {
    gl_Position = uMVPMatrix * vPosition;
}
But now I want to do something different: I want every line in my buffer to have a different colour, as a function of the line's position in the array. I want to shade the first line white, the last dark grey, and the lines between them as a gradation between the two, e.g. #ffffff, #eeeeee, #dddddd, etc.
I could obviously just draw each line individually, plugging a new value into uColour each time, but that is inefficient. I don't want to call GL 10 times when I could call it once and have the shader vary the value each time around.
Perhaps I could declare a uniform called uVertexCount in my vertex shader? Prior to the draw I would set uVertexCount to 0, and each time the vertex shader is called I would increment this value. The fragment shader could then determine the line index by looking at uVertexCount, and interpolate a colour between some start and end value, or by some other means. But this depends on whether every line or point is considered a primitive, or the whole array of lines is a single primitive.
Is this feasible? I don't know how many times the vertex shader is called relative to the fragment shader. Are the calls interleaved in a way that makes this viable, i.e. vertex 0, vertex 1, n fragments, vertex 2, vertex 3, n fragments, etc.?
Does anyone know of some reasonable sample code that might demonstrate the concept or point me to some other way of doing something similar?
Add colour information to your vertex buffer (FloatBuffer) and use the attribute in your shader.
Example vertex shader:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec3 vColor;
varying vec3 color;

void main() {
    gl_Position = uMVPMatrix * vPosition;
    color = vColor;
}
Example fragment shader:
precision mediump float;
varying vec3 color;

void main() {
    gl_FragColor = vec4(color, 1.0); // gl_FragColor is vec4; expand the vec3 colour
}
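On the Java side this means interleaving a colour with every point before uploading. A sketch under the question's assumptions (10 lines, 20 points; posLoc and colLoc are hypothetical handles obtained from glGetAttribLocation):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES20;

// Sketch: interleave x,y,z,r,g,b per vertex; both endpoints of line i share
// one greyscale shade stepping #ffffff, #eeeeee, #dddddd, ... per line.
FloatBuffer buildLineBuffer(float[] positions /* 60 floats, 20 points */) {
    FloatBuffer vb = ByteBuffer.allocateDirect(20 * 6 * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    for (int line = 0; line < 10; line++) {
        float shade = (0xFF - 0x11 * line) / 255f; // 1.0, 0.933, ... per line
        for (int p = 0; p < 2; p++) {              // both endpoints of the line
            vb.put(positions, (line * 2 + p) * 3, 3); // x, y, z
            vb.put(shade).put(shade).put(shade);      // r, g, b
        }
    }
    vb.rewind();
    return vb;
}

// Usage: point both attributes into the same interleaved buffer (stride = 6 floats).
int stride = 6 * 4;
vb.position(0);
GLES20.glVertexAttribPointer(posLoc, 3, GLES20.GL_FLOAT, false, stride, vb);
GLES20.glEnableVertexAttribArray(posLoc);
vb.position(3);
GLES20.glVertexAttribPointer(colLoc, 3, GLES20.GL_FLOAT, false, stride, vb);
GLES20.glEnableVertexAttribArray(colLoc);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, 20);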
