Weird output when specifying GL_RGBA32F in glTexImage2D() - java

I'm working on a Java (JOGL) program to conduct some computation with shader programs, and I'm experiencing weird output values from a fragment shader.
More specifically, it seems that glGetTexImage() returns scaled values.
Now, here is a simple fragment shader.
#version 330
layout(location=0) out vec4 fs_out_color;
void main(){
    fs_out_color = vec4(-1.0, 0.5, 0.4, 2.0);
}
I create a floating-point texture to get the output from the shader.
TEXTURE_WIDTH and TEXTURE_HEIGHT are both 2.
glBindTexture(GL4.GL_TEXTURE_2D, texture_id);
glTexImage2D(
GL4.GL_TEXTURE_2D, 0, GL4.GL_RGBA32F,
TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL4.GL_RGBA, GL4.GL_FLOAT, null);
glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MAG_FILTER, GL4.GL_NEAREST);
glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MIN_FILTER, GL4.GL_NEAREST);
glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_WRAP_S, GL4.GL_CLAMP_TO_EDGE);
glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_WRAP_T, GL4.GL_CLAMP_TO_EDGE);
glBindTexture(GL4.GL_TEXTURE_2D, 0);
GL_RGBA32F is passed as the third argument of glTexImage2D() so that the values are not clamped to [0, 1].
Then, the results are fetched with the following code:
int size=TEXTURE_WIDTH*TEXTURE_HEIGHT*4;
FloatBuffer buf=Buffers.newDirectFloatBuffer(size);
glBindTexture(GL4.GL_TEXTURE_2D, texture_id);
glGetTexImage(GL4.GL_TEXTURE_2D, 0, GL4.GL_RGBA, GL4.GL_FLOAT, buf);
glBindTexture(GL4.GL_TEXTURE_2D, 0);
And finally, they are output to the console.
for(int i = 0; i < size; i += 4) {
    float r = buf.get();
    float g = buf.get();
    float b = buf.get();
    float a = buf.get();
    String str = "(" + r + "," + g + "," + b + "," + a + ")";
    System.out.println(str);
}
What I expect here is something like (-1.0,0.5,0.4,2.0).
However, the actual output is
(-2.0,1.0,0.8,4.0)
(-2.0,1.0,0.8,4.0)
(-2.0,1.0,0.8,4.0)
(-2.0,1.0,0.8,4.0)
I tried another output value, vec4(-10.0,0.5,0.4,20.0), in the shader, and I got
(-200.0,10.0,8.0,400.0)
(-200.0,10.0,8.0,400.0)
(-200.0,10.0,8.0,400.0)
(-200.0,10.0,8.0,400.0)
Is there anything I'm doing wrong? How can I get the original output of the fragment shader?
Any advice would be appreciated.
Note:
Fragment shader
fs_out_color=vec4(-1.0,0.5,0.4,2.0);
Java code
glTexImage2D(
GL4.GL_TEXTURE_2D, 0, GL4.GL_RGBA,
TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL4.GL_RGBA, GL4.GL_FLOAT, null);
and the output is
(0.0,0.5019608,0.4,1.0)
(0.0,0.5019608,0.4,1.0)
(0.0,0.5019608,0.4,1.0)
(0.0,0.5019608,0.4,1.0)
Seems to work well with GL_RGBA.

OK, let's do some more or less educated guesswork:
Input: -1.0,0.5,0.4,2.0 -> Output: -2.0,1.0,0.8,4.0
Input: -10.0,0.5,0.4,20.0 -> Output: -200.0,10.0,8.0,400.0
Input: -1.0,0.5,0.4,2.0 -> Output: 0.0,0.5019608,0.4,1.0 (using UNORM target)
Hypothesis: you have blending enabled, and you have specifically set glBlendFunc(GL_SRC_ALPHA, ...), so your fragment shader's output (which is the source operand for the blending stage) gets multiplied by the alpha value you provide. Note that when using a UNORM render target like GL_RGBA, the fragment shader's output is clamped to [0,1] before blending happens, so alpha arrives as 1.0 and the multiplication has no effect.
If my hypothesis is correct, you would also get "wrong", scaled results in the UNORM GL_RGBA format case if you tried an input alpha of 0.5.
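If that is the case, a minimal fix is to make sure blending is off while rendering into the floating-point texture (a sketch in the question's JOGL style):
glDisable(GL4.GL_BLEND);
// ... render the pass that writes into the GL_RGBA32F texture ...
// Alternatively, keep blending enabled but with factors that leave the
// fragment output unscaled:
// glBlendFunc(GL4.GL_ONE, GL4.GL_ZERO);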

Related

GL_INVALID_OPERATION setting image2D in computeShader

I am using OpenGL ES 3.2 and the NVIDIA driver 361.00 on a Pixel C tablet with Tegra X1 GPU. I would like to use a compute shader to write data to a colour map and then later I will use some graphics shaders to display the image.
I already have this concept working using desktop GL, and now I want to port it to mobile. I am implementing the GL in Java rather than in native code. I extend GLSurfaceView and GLSurfaceView.Renderer, and during the onSurfaceCreated callback I initialise the shader programs, textures, etc.
The compute shader compiles just fine without any errors:
#version 310 es
layout(binding = 0, rgba32f) uniform highp image2D colourMap;
layout(local_size_x = 128, local_size_y = 1, local_size_z = 1) in;
void main()
{
    imageStore(colourMap, ivec2(gl_GlobalInvocationID.xy), vec4(1.0f, 0.0f, 0.0f, 1.0f));
}
And I initialise a texture
// Generate a 2D texture
GLES31.glGenTextures(1, colourMap, 0);
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, colourMap[0]);
// Set interpolation to linear
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MAG_FILTER, GLES31.GL_LINEAR);
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MIN_FILTER, GLES31.GL_LINEAR);
// Create some dummy texture to begin with so we can see if it changes
float texData[] = new float[texWidth * texHeight * 4];
for (int j = 0; j < texHeight; j++)
{
    for (int i = 0; i < texWidth; i++)
    {
        // Set a few pixels in here...
    }
}
Buffer texDataBuffer = FloatBuffer.wrap(texData);
GLES31.glTexImage2D(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_RGBA32F, texWidth, texHeight, 0, GLES31.GL_RGBA, GLES31.GL_FLOAT, texDataBuffer);
After this I set the image unit in the shader here. EDIT: I don't do this now, but just assume it will be assigned automatically when the shader program is created, as per solidpixel's answer.
GLES31.glUseProgram(idComputeShaderProgram);
int loc = GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap");
if (loc == -1) Log.e("Error", "Cannot locate variable");
GLES31.glUniform1i(loc, 0);
After every call to GL I check for errors using GLES31.glGetError() -- left out here for clarity.
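Such a check can be as small as this (an assumed helper, not the asker's actual code):
static void checkGlError(String where) {
    int err = GLES31.glGetError();
    if (err != GLES31.GL_NO_ERROR)
        Log.e("GLError", where + ": 0x" + Integer.toHexString(err));
}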
EDIT: When I dispatch compute I bind the image texture but first query the unit assignment:
GLES31.glUseProgram(idComputeShaderProgram);
int[] unit = new int[1];
GLES31.glGetUniformiv(idComputeShaderProgram, GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap"), unit, 0);
GLES31.glBindImageTexture(unit[0], velocityMap[0], 0, false, 0, GLES31.GL_WRITE_ONLY, GLES31.GL_RGBA32F);
This final line is the one which errors now. The error code translates to GL_INVALID_OPERATION. The shader compiles correctly, and the program object is valid and active. The location of the variable is also valid. I have even used glGetActiveUniform() to get the type of the variable, and it returns a type of 36941, which translates to GL_IMAGE_2D, which I believe is an integer type.
I still think I'm misunderstanding something here but not sure what.
You can't assign your own unit identities for images. See OpenGL ES 3.2 specification section 7.6.
An INVALID_OPERATION error is generated if any of the following
conditions occur:
an image uniform is loaded with any of the Uniform* commands.
You need to query the automatic unit assignment using glGetUniformiv(prog, loc, &unit) to get the unit name.
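In Java/GLES31 terms, the query-then-bind flow looks like this (a sketch mirroring the EDIT in the question; note the question's EDIT binds velocityMap[0], while the texture created above is colourMap[0]):
GLES31.glUseProgram(idComputeShaderProgram);
int loc = GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap");
int[] unit = new int[1];
GLES31.glGetUniformiv(idComputeShaderProgram, loc, unit, 0);  // driver-assigned unit
GLES31.glBindImageTexture(unit[0], colourMap[0], 0, false, 0,
        GLES31.GL_WRITE_ONLY, GLES31.GL_RGBA32F);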

Texture renders incorrectly (OpenGL)

I've been trying to render an 8x8 texture. I've used code from 2 tutorials, but the texture doesn't render correctly. For now I have this initialization code:
int shaderProgram,fragmentShader,vertexShader,texture,elementBuffer,vertexBuffer, vertexArray;
public Texture2D(String texturePath_, String vertexShader_,String fragmentShader_)
{
vertexArray=GL30.glGenVertexArrays();
GL30.glBindVertexArray(vertexArray);
String[] vertexshader=Utilities.loadShaderFile(vertexShader_,getClass());
String[] fragmentshader=Utilities.loadShaderFile(fragmentShader_,getClass());
if(vertexshader==null)
throw new NullPointerException("The vertex shader is null");
if(fragmentshader==null)
throw new NullPointerException("The fragment shader is null");
vertexShader=GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
GL20.glShaderSource(vertexShader,vertexshader);
GL20.glCompileShader(vertexShader);
Utilities.showShaderCompileLog(vertexShader);
fragmentShader=GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
GL20.glShaderSource(fragmentShader,fragmentshader);
GL20.glCompileShader(fragmentShader);
Utilities.showShaderCompileLog(fragmentShader);
shaderProgram= GL20.glCreateProgram();
GL20.glAttachShader(shaderProgram,fragmentShader);
GL20.glAttachShader(shaderProgram,vertexShader);
GL30.glBindFragDataLocation(shaderProgram,0,"fragcolor");
GL20.glLinkProgram(shaderProgram);
GL20.glUseProgram(shaderProgram);
texture= GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D,texture);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_WRAP_S, GL13.GL_CLAMP_TO_BORDER);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_WRAP_T,GL13.GL_CLAMP_TO_BORDER);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_MIN_FILTER,GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D,GL11.GL_TEXTURE_MAG_FILTER,GL11.GL_LINEAR);
ByteBuffer image;
FloatBuffer verteces;
IntBuffer imagewidth,imageheight, positions,imagechannels;
try(MemoryStack memoryStack=MemoryStack.stackPush())
{
    imageheight=memoryStack.mallocInt(1);
    imagewidth=memoryStack.mallocInt(1);
    positions=memoryStack.mallocInt(6);
    imagechannels=memoryStack.mallocInt(1);
    image= STBImage.stbi_load(texturePath_,imagewidth,imageheight,imagechannels,0);
    if(image==null) throw new NullPointerException("Failed to load image");
    verteces=memoryStack.mallocFloat(28);
}
positions.put(0).put(1).put(2).put(2).put(3).put(0).flip();
int width=imagewidth.get();
int height=imageheight.get();
GL11.glTexImage2D(GL11.GL_TEXTURE_2D,0,GL11.GL_RGBA,width,height,0,GL11.GL_RGBA,GL11.GL_UNSIGNED_BYTE,image);
elementBuffer=GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER,elementBuffer);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER,positions,GL15.GL_STATIC_DRAW);
float x1=0f, x2=1f;
float y1=1f,y2=-1f;
verteces.put(x1).put(y1).put(1).put(1).put(1).put(0).put(0);
verteces.put(x1).put(y2).put(1).put(1).put(1).put(1).put(0);
verteces.put(x2).put(y2).put(1).put(1).put(1).put(1).put(1);
verteces.put(x2).put(y1).put(1).put(1).put(1).put(0).put(1).flip();
vertexBuffer=GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER,vertexBuffer);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER,verteces,GL15.GL_STATIC_DRAW);
int uniform=GL20.glGetUniformLocation(shaderProgram,"texture_image");
GL20.glUniform1i(uniform,0);
int position=GL20.glGetAttribLocation(shaderProgram,"position");
GL20.glEnableVertexAttribArray(position);
GL20.glVertexAttribPointer(position,2,GL11.GL_FLOAT,false,0,0);
int color=GL20.glGetAttribLocation(shaderProgram,"color");
GL20.glEnableVertexAttribArray(color);
GL20.glVertexAttribPointer(color,3,GL11.GL_FLOAT,false,7*Float.BYTES, 2 * Float.BYTES);
int textureST=GL20.glGetAttribLocation(shaderProgram,"textureCoord");
GL20.glEnableVertexAttribArray(textureST);
GL20.glVertexAttribPointer(textureST,3,GL11.GL_FLOAT,false,7*Float.BYTES, 5 * Float.BYTES);
Utilities.showErrors(1);
}
The result is:
But I'd like the texture to occupy the whole area. The shaders compile fine, and there are no GL errors.
If I change values to the ones from the tutorial:
verteces.put(-1f).put(1f).put(1).put(1).put(1).put(0).put(0);
verteces.put(1f).put(1f).put(1).put(1).put(1).put(1).put(0);
verteces.put(1f).put(-1f).put(1).put(1).put(1).put(1).put(1);
verteces.put(-1f).put(-1f).put(1).put(1).put(1).put(0).put(1).flip();
I get:
The tutorials: https://open.gl/textures and https://github.com/SilverTiger/lwjgl3-tutorial/wiki/Textures
I'm using an OpenGL 3.0 profile with shader version 300 ES. The texture is a PNG.
The vertex attribute layout:
GL20.glVertexAttribPointer(position,2,GL11.GL_FLOAT,false,0,0);
GL20.glVertexAttribPointer(color,3,GL11.GL_FLOAT,false,7*Float.BYTES, 2 * Float.BYTES);
GL20.glVertexAttribPointer(textureST,3,GL11.GL_FLOAT,false,7*Float.BYTES, 5 * Float.BYTES);
doesn't look correct. There are multiple problems with it:
The texture coordinates try to read 3 floats from the array. In combination with the stride, your last vertex will read outside the VBO. Most probably the texture coordinates should only read 2 floats.
The total number of floats used (2+3+3=8) does not fit the data, where only 7 floats per vertex are given. This is solved when the texture coordinates read only two floats.
The stride of the positions looks wrong. 0 means that all positions are tightly packed. Basically, the positions use the first 8 floats in the VBO. If you look at them: {-1, 1, 1, 1, 1, 0, 0, 1}, then this is exactly the geometry you see. It was just luck that it worked in the first place. Solution: change the position layout to:
GL20.glVertexAttribPointer(position,2,GL11.GL_FLOAT,false,7*Float.BYTES,0);
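Putting both fixes together (two floats for the texture coordinates, a consistent stride of 7 floats for all three attributes), the layout would be (a sketch using the question's variable names):
int stride = 7 * Float.BYTES;
GL20.glVertexAttribPointer(position, 2, GL11.GL_FLOAT, false, stride, 0);
GL20.glVertexAttribPointer(color, 3, GL11.GL_FLOAT, false, stride, 2 * Float.BYTES);
GL20.glVertexAttribPointer(textureST, 2, GL11.GL_FLOAT, false, stride, 5 * Float.BYTES);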

Wrong Display of a simple Cube in Android with OpenGL ES [duplicate]

I'm working on a 2D engine. It already works quite well, but I keep getting pixel errors.
For example, my window is 960x540 pixels, I draw a line from (0, 0) to (959, 0). I would expect that every pixel on scan-line 0 will be set to a color, but no: the right-most pixel is not drawn. Same problem when I draw vertically to pixel 539. I really need to draw to (960, 0) or (0, 540) to have it drawn.
As I was born in the pixel era, I am convinced that this is not the correct result. When my screen was 320x200 pixels, I could draw from 0 to 319 and from 0 to 199, and my screen would be full. Now I end up with a screen whose right/bottom pixel is not drawn.
This can be due to different things:
where I expect the OpenGL line primitive to be drawn from one pixel to another pixel inclusively, that last pixel is actually exclusive? Is that it?
my projection matrix is incorrect?
I am under the false assumption that when I have a backbuffer of 960x540, it actually has one pixel more?
Something else?
Can someone please help me? I have been looking into this problem for a long time now, and every time I thought it was OK, I saw after a while that it actually wasn't.
Here is some of my code; I tried to strip it down as much as possible. When I call my line function, every coordinate is offset by (0.375, 0.375) to make it correct on both ATI and NVIDIA adapters.
int width = resX();
int height = resY();
for (int i = 0; i < height; i += 2)
    rm->line(0, i, width - 1, i, vec4f(1, 0, 0, 1));
for (int i = 1; i < height; i += 2)
    rm->line(0, i, width - 1, i, vec4f(0, 1, 0, 1));
// when I do this, one pixel to the right remains undrawn
void rendermachine::line(int x1, int y1, int x2, int y2, const vec4f &color)
{
    ... some code to decide which std::vector the coordinates should be pushed into
    // m_z is a z-coordinate; I use z-buffering to preserve correct drawing order
    // vec2f(0, 0) is a texture coordinate; the line is drawn without texturing
    target->push_back(vertex(vec3f((float)x1 + 0.375f, (float)y1 + 0.375f, m_z), color, vec2f(0, 0)));
    target->push_back(vertex(vec3f((float)x2 + 0.375f, (float)y2 + 0.375f, m_z), color, vec2f(0, 0)));
}
void rendermachine::update(...)
{
    ... the render target object is queried for width and height; in my test it is just the back buffer, so the window client resolution is returned
    mat4f mP;
    mP.setOrthographic(0, (float)width, (float)height, 0, 0, 8000000);
    ... all vertices are copied to video memory
    ... drawing
    if (there are lines to draw)
        glDrawArrays(GL_LINES, (int)offset, (int)lines.size());
    ...
}
// And the (very simple) shader to draw these lines
// Vertex shader
#version 120
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 mP;
varying vec4 vColor;
void main(void) {
    gl_Position = mP * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
}
// Fragment shader
#version 120
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vColor;
void main(void) {
    gl_FragColor = vColor;
}
In OpenGL, lines are rasterized using the "Diamond Exit" rule. This is almost the same as saying that the end coordinate is exclusive, but not quite...
This is what the OpenGL spec has to say:
http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node47.html
Also have a look at the OpenGL FAQ, http://www.opengl.org/archives/resources/faq/technical/rasterization.htm, item "14.090 How do I obtain exact pixelization of lines?". It says "The OpenGL specification allows for a wide range of line rendering hardware, so exact pixelization may not be possible at all."
Many will argue that you should not use lines in OpenGL at all. Their behaviour is based on how ancient SGI hardware worked, not on what makes sense. (And lines with widths >1 are nearly impossible to use in a way that looks good!)
Note that OpenGL coordinate space has no notion of integers: everything is a float, and the "centre" of an OpenGL pixel is really at (0.5, 0.5), not at its top-left corner. Therefore, if you want a 1px wide line from (0,0) to (10,10) inclusive, you really have to draw a line from (0.5,0.5) to (10.5,10.5).
This is especially apparent if you turn on anti-aliasing: if you try to draw from (50,0) to (50,100), you may see a blurry 2px wide line, because the line falls in between two pixels.
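To make that concrete, here is a hypothetical Java helper (assumed names, not the asker's rendermachine API) that emits a GL_LINES segment with both endpoints shifted to pixel centres, following the convention described above:
void pixelLine(int x1, int y1, int x2, int y2, java.util.List<Float> out) {
    // Both endpoints sit on pixel centres, matching the half-pixel convention.
    out.add(x1 + 0.5f); out.add(y1 + 0.5f);
    out.add(x2 + 0.5f); out.add(y2 + 0.5f);
}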

Difference between glNormal3f and glNormalPointer

I'm getting these two strange results when drawing with immediate mode and with vertex arrays. In immediate mode I'm passing the normals via glNormal3f. My shader takes the normal and computes something like a shadow, but nothing serious.
Immediate Mode:
glBegin(GL_TRIANGLES);
for (Triangle tri : this.triangles) {
    Vec3d normal = Vec3d.vectorProduct(
            Vec3d.sub(tri.getB(), tri.getA()),
            Vec3d.sub(tri.getC(), tri.getA())); // calculating the normal vector
    glNormal3d(normal.x, normal.y, normal.z);
    glColor3d(1.0, 1.0, 1.0);
    glVertex3d(tri.getA().x, tri.getA().y, tri.getA().z);
    glVertex3d(tri.getB().x, tri.getB().y, tri.getB().z);
    glVertex3d(tri.getC().x, tri.getC().y, tri.getC().z);
}
glEnd();
Result:
In the vertex array variant, I'm storing the normals in a separate buffer, calculated the same way:
for(Triangle tri : triangles) {
    Vec3d normal = Vec3d.vectorProduct(
            Vec3d.sub(tri.getB(), tri.getA()),
            Vec3d.sub(tri.getC(), tri.getA()));
    normals.put((float) normal.x);
    normals.put((float) normal.y);
    normals.put((float) normal.z);
}
normals.flip();
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(4, normals);
glVertexPointer(3, 3*4, vertices);
glDrawArrays(GL_TRIANGLES, 0, (vertices.capacity() / 3));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
Result 2:
Obviously the normals somehow get mismatched, but I can't find my mistake.
And the final question: What is the difference between glNormal3f and glNormalPointer considering the values passed to the shader?
The stride of your normal array is suspect. Why are you passing 4 for the stride to glNormalPointer (...)?
You are telling GL that there are 4 bytes (1 float) worth of space between each of your normals. However, normals is built in your code with 12 bytes between successive normals. Incidentally, this is what you would call tightly packed, and therefore passing 0 for the stride implies the same thing (4 bytes per component x 3 components).
Your vertex array and normal array should actually have the same stride: 4*3 = 12, or simply 0.
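With the LWJGL overloads used in the question, the corrected calls would look like this (a sketch; both arrays hold 3 tightly packed floats per element):
glNormalPointer(0, normals);      // stride 0 = tightly packed, 3 floats per normal
glVertexPointer(3, 0, vertices);  // 3 components, stride 0 = tightly packed
glDrawArrays(GL_TRIANGLES, 0, vertices.capacity() / 3);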

Same texture displayed. Binding issue?

Today a wild bug that I have had several times before appeared again!
And I don't know how to fix it. I hope you know a trick to solve the problem, or an official way to get it to work. Some tutorials don't seem to work the way they promise.
I have some quads, displayed as pictures on my SurfaceView. I also have two textures so far: one for the tiles and walls, and one for the objects.
I draw the tiles and objects in my own isometric order (to prevent images from overlapping incorrectly).
On my smartphone, it works very well. The tiles use the texture of tiles and the objects use the texture of objects.
As you can see, the room itself is drawn with the first texture and the ovens are drawn with the second. (I use an Xperia Mini Pro.)
A friend of mine uses a Galaxy S, and there something weird is happening: all quads are using the same texture. I haven't found an obvious way to fix the bug.
Here is the image, taken with a camera:
The second texture also contains red tiles, so don't be confused by the color. The fact is that all quads suddenly use the first texture; you cannot see any parts drawn with the second one.
Here are some of my drawing functions:
Function for drawing pictures:
public void render(float posX, float posY) {
    // ======== Pass Masking Color ========
    if (this.masked) {
        GLES20.glUniform1i(ShaderCache.activeShader.masked, 1);
        GLES20.glUniform3i(ShaderCache.activeShader.maskColor, this.maskColor.r, this.maskColor.g, this.maskColor.b);
    } else {
        GLES20.glUniform1i(ShaderCache.activeShader.masked, 0);
    }
    // ======== Passing Vertex And UV Attributes ========
    if (ImageCache.lastUsedImageBuffer != this.buffer) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, this.buffer);
        GLES20.glVertexAttribPointer(ShaderCache.activeShader.attributeVertex, 2, GLES20.GL_FLOAT, false, 16, 0);
        GLES20.glVertexAttribPointer(ShaderCache.activeShader.attributeUV, 2, GLES20.GL_FLOAT, false, 16, 8);
        ImageCache.lastUsedImageBuffer = this.buffer;
    }
    // ======== Set Sampler Texture2D ========
    if (TextureCache.lastTextureUnit != this.imagetexture.unit) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, this.imagetexture.unit);
        GLES20.glUniform1i(ShaderCache.activeShader.texture, this.imagetexture.unit);
        TextureCache.lastTextureUnit = this.imagetexture.unit;
    }
    // ======== Passing Image Position ========
    GLES20.glUniform2f(ShaderCache.activeShader.position, posX, posY);
    // ======== Draw Arrays With Image Vertices ========
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, this.vertices);
}
And here is the function for generating texture units:
int[] textureID = new int[1];
GLES20.glGenTextures(1, textureID, 0);
this.unit = textureID[0];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + this.unit);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, this.unit);
Thank you so much!
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, this.imagetexture.unit);
GLES20.glUniform1i(ShaderCache.activeShader.texture, this.imagetexture.unit);
I believe this is wrong. The sampler2D uniform value that you pass to GLSL should be the index of the texture unit, not the ID of the texture object. If you bind texture ID 3 to unit 0 (glActiveTexture(GL_TEXTURE0)), then you pass 0 to the sampler2D uniform, not 3. If you bind a texture to GL_TEXTURE1, then the sampler2D value becomes 1, etc.
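Applied to the render() function above, that would look roughly like this (a sketch; this.imagetexture.id is an assumed field holding the texture object name, separate from the unit index):
int unitIndex = 0;                                                // a texture unit index, not a texture ID
GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + unitIndex);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, this.imagetexture.id); // texture object name
GLES20.glUniform1i(ShaderCache.activeShader.texture, unitIndex);  // pass the unit index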
