Java + OpenGL: Texture Z-fighting in batch rendering

The main issue:
Good afternoon, everyone. I have a question: when I use more than one texture in batch rendering — that is, when my second texture (the one loaded from disk) is in use — it starts glitching with the first texture (the white one). In theory that should not happen, because once the second texture's slot is in use, it becomes the one sampled for rendering. This happens only with a perspective projection; everything is fine with an orthographic projection (most likely because orthographic projection does not account for the distance between objects along the Z-coordinate).
Here are a couple of images of the issue:
Video
Google Drive
The problem is that I have a lot of code written, and I don't think it makes sense to dump all of it here. I will show some pieces, but I would mostly like you to suggest why this might happen. Maybe it's because I'm binding several texture slots at once. In any case, I will be grateful for any advice or answer.
Draw Sprite using batch rendering (Example)
public static void drawRectangle(GameObject rect) {
    var transform = rect.getComponent(Transform.class);
    var spriteRenderer = rect.getComponent(SpriteRenderer.class);

    // Begin a new batch if this batch has no room left.
    if (batch.indexCount >= Batch.maxIndices)
        newBatch();

    int textureIndex = 0;
    Vector2f[] textureCoords = batch.textureCoords;
    Texture texture = null;
    Vector4f color = null;

    if (spriteRenderer != null) {
        texture = spriteRenderer.texture;
        color = spriteRenderer.color;
    } else
        color = new Vector4f(1, 1, 1, 1);

    if (texture != null)
        textureCoords = texture.getTextureCoords();
    else
        texture = batch.textures[0];

    // Look for the texture among the slots already used in this batch.
    for (int i = 1; i < batch.textureIndex; i++) {
        // Compare textures by OpenGL ID.
        if (batch.textures[i].equals(texture)) {
            textureIndex = i;
            break;
        }
    }
    // Not found: register it in the next free slot, flushing first if all slots are taken.
    if (textureIndex == 0) {
        if (batch.textureIndex >= Batch.maxTextures)
            newBatch();
        textureIndex = batch.textureIndex;
        batch.textures[batch.textureIndex] = texture;
        batch.textureIndex++;
    }
    ...
    // Load the vertex data into the batch.
    ...
}
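The slot-assignment step above can be exercised without any GL context. Below is a minimal sketch under assumptions: `SlotSketch`, `slotFor`, and the plain integer texture IDs are hypothetical stand-ins, and slot 0 is assumed to hold the built-in white texture, as in the code above.

```java
// GL-free sketch of the batch texture-slot lookup (hypothetical names;
// slot 0 is reserved for the built-in 1x1 white texture).
public class SlotSketch {
    public static final int MAX_TEXTURES = 32;

    private final int[] slots = new int[MAX_TEXTURES]; // OpenGL texture IDs
    private int slotCount = 1;                         // slot 0 = white texture

    /** Returns the slot for the given texture ID, registering it if new. */
    public int slotFor(int textureId) {
        for (int i = 1; i < slotCount; i++) {
            if (slots[i] == textureId) return i; // already used in this batch
        }
        if (slotCount >= MAX_TEXTURES) {
            throw new IllegalStateException("flush the batch first");
        }
        slots[slotCount] = textureId;
        return slotCount++;
    }

    public int count() { return slotCount; }
}
```

The same texture always resolves to the same slot within one batch, which is the invariant the shader's `textureIndex` attribute relies on.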
Rendering batch after setting sprites data (Example)
private static void endBatch() {
    if (batch.indexCount == 0)
        return;

    int sizeBytes = sizeof(Vertex.class); // NOTE: The size of the Vertex class is correct here.
    int size = batch.vertexIndex * sizeBytes;
    batch.vertexBuffer.putData(batch.vertex, size, batch.vertexIndex, sizeBytes / 4);

    // Bind each texture to its slot.
    for (int i = 0; i < batch.textureIndex; i++)
        batch.textures[i].bind(i);

    RenderEngine.EnableAlphaBlending();
    RenderEngine.EnableDepthTesting();
    glBindVertexArray(batch.vertexArray.get());
    glDrawElements(GL_TRIANGLES, batch.vertexArray.getIndexBuffer().getCount(), GL_UNSIGNED_INT, 0);
    RenderEngine.DisableAlphaBlending();
    RenderEngine.DisableDepthTesting();
}
And in case anyone is interested in how my putData method works:
public void putData(Vertex[] vertex, int size, int index, int elementCount) {
    // Convert all Vertex data to a single float[] to pass as the vertex buffer data.
    float[] data = ...
    if (data != null) {
        var pData = memAlloc(size).asFloatBuffer().put(data).flip();
        glBindBuffer(GL_ARRAY_BUFFER, handle);
        nglBufferData(GL_ARRAY_BUFFER, size, memAddress(pData), getOpenGLUsage(usage));
        memFree(pData);
    }
}
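The elided `float[] data = ...` conversion can be sketched independently of LWJGL. `VertexFlattener` is a hypothetical helper; it assumes each vertex's interleaved attributes are already available as a `float[]`:

```java
// Hypothetical sketch of flattening per-vertex float arrays into one
// contiguous float[] suitable for glBufferData.
public class VertexFlattener {
    public static float[] flatten(float[][] vertices, int floatsPerVertex) {
        float[] data = new float[vertices.length * floatsPerVertex];
        int cursor = 0;
        for (float[] v : vertices) {
            // Copy this vertex's attributes into the next slot of the big array.
            System.arraycopy(v, 0, data, cursor, floatsPerVertex);
            cursor += floatsPerVertex;
        }
        return data;
    }
}
```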
Fragment Shader
#version 450 core

layout (location = 0) out vec4 out_Pixel;

in vec3 position;
in vec4 color;
in vec2 textureCoord;
in float textureIndex;

uniform sampler2D u_Textures[32];

void main() {
    vec4 texColour = vec4(1, 1, 1, 1);
    texColour *= texture(u_Textures[int(textureIndex)], textureCoord);

    vec4 finalColor = color * texColour;
    if (finalColor.a < 0.1)
        discard;

    out_Pixel = finalColor;
}

Related

Adding thickness to a 2D sprite when turning

In my game, my entities turn like a piece of paper, as shown here at half-speed: https://imgur.com/a/u2suen6
I want to give the entities a bit of thickness when they turn, making them look cardboard-thin rather than paper-thin.
I thought about using a Pixmap to detect and extend the edge pixels, giving the image some three-dimensionality. I also considered duplicating the image along the x-axis to achieve the same effect. Of the two ideas, the Pixmap holds the most promise in my mind. However, I'm wondering if there's a better solution.
I'm using a GLSL shader to give the entities highlights and shadows while turning, as you saw in the gif. I think that with the right knowledge, I could achieve what I'm going for using the same shader program.
My shader looks like this:
#ifdef GL_ES
precision mediump float;
#endif

varying vec4 v_color;
varying vec2 v_texCoords;

uniform sampler2D u_texture;
uniform vec2 u_resolution;
uniform vec3 color;

void main()
{
    vec4 col = vec4(color, 0.0);
    gl_FragColor = texture2D(u_texture, v_texCoords) * v_color + col;
}
I think one might be able to make the calculations based on the uniform vec3 color that I pass in (its values range from (0, 0, 0) to (1, 1, 1), with 1s being highlight and 0s being shadow). Unfortunately, I don't have enough understanding of shaders to do so.
If any of you have the know-how, could you steer me in the right direction? Or let me know if I should just stick to the Pixmap idea.
Edit: I'm trying to stay away from using a 3D model because I'm 6.4k lines of code deep using a 2D orthographic camera.
Edit 2: I figured that the reflection shader wouldn't look good if I tried making the sprite look 3D. I scrapped the shader, went with the Pixmap idea, and plan on implementing shadows and reflections to the pixmap without any shader. Though it looks good so far without reflections.
I ended up going with my pixmap idea. I want to share my code so that others can know how I got 2D thickness to work.
Please note that in the following code:
dir is a floating-point value in the range -1.0 to 1.0. It tells the program where the sprite is in its turn: -1 means facing fully left, 1 means fully right, and 0 means it's 'facing' the camera.
right is a boolean that tells the program which direction the entity is turning: true means the entity is turning from left to right, false means from right to left.
The Code:
private Texture getTurningImage(TextureRegion input, int thickness)
{
    if (Math.abs(dir) < 0.1)
        dir = (right ? 1 : -1) * 0.1f;

    Texture texture = input.getTexture();
    if (!texture.getTextureData().isPrepared())
    {
        texture.getTextureData().prepare();
    }
    Pixmap pixmap = texture.getTextureData().consumePixmap();
    Pixmap p = new Pixmap(64, 64, Pixmap.Format.RGBA8888);
    p.setFilter(Pixmap.Filter.NearestNeighbour);
    Pixmap texCopy = new Pixmap(input.getRegionWidth(), input.getRegionHeight(), Pixmap.Format.RGBA8888);

    // Getting a texture out of the input region. I can't use input.getTexture()
    // because it's an animated sprite sheet.
    for (int x = 0; x < input.getRegionWidth(); x++)
    {
        for (int y = 0; y < input.getRegionHeight(); y++)
        {
            int colorInt = pixmap.getPixel(input.getRegionX() + x, input.getRegionY() + y);
            Color c = new Color(colorInt);
            colorInt = Color.rgba8888(c);
            texCopy.drawPixel(x, y, colorInt);
        }
    }
    pixmap.dispose();

    float offsetVal = Math.round(thickness / 2.0) * (float) -Math.cos((dir * Math.PI) / 2);
    if (offsetVal > -1.23 / Math.pow(10, 16))
    {
        offsetVal = 0;
    }

    // Generate the pixel colors we'll use for the side view.
    Pixmap sideProfile = new Pixmap(1, 64, Pixmap.Format.RGBA8888);
    for (int y = 0; y < texCopy.getHeight(); y++)
    {
        for (int x = 0; x < texCopy.getWidth(); x++)
        {
            int colorInt = texCopy.getPixel(x, y);
            if (new Color(colorInt).a != 0 && new Color(texCopy.getPixel(x + 1, y)).a == 0)
            {
                Color c = new Color(colorInt);
                c.mul(.8f); // darken the color
                c.a = 1;
                colorInt = Color.rgba8888(c);
                sideProfile.drawPixel(0, y, colorInt);
                continue;
            }
        }
    }

    // Drawing the bottom layer.
    p.drawPixmap(texCopy, 0, 0, 64, 64, (int) (Math.round(-offsetVal) + (64 - texCopy.getWidth() * Math.abs(dir)) / 2), 0, (int) (64 * Math.abs(dir)), 64);

    // Drawing the middle (connecting) layer,
    // based on the edge pixels of the bottom layer, then translated to be in the middle.
    for (int y = 0; y < p.getHeight(); y++)
    {
        int colorInt = sideProfile.getPixel(0, y);
        for (int x = 0; x < p.getWidth(); x++)
        {
            if (new Color(p.getPixel(x, y)).a != 0 && new Color(p.getPixel(x + 1, y)).a == 0)
            {
                for (int i = 0; i <= 2 * Math.round(Math.abs(offsetVal)); i++) // fill the span between the top and bottom layers
                {
                    p.drawPixel(x + i - 2 * (int) Math.round(Math.abs(offsetVal)), y, colorInt);
                }
            }
        }
    }

    // Drawing the top layer.
    p.drawPixmap(texCopy, 0, 0, 64, 64, (int) (Math.round(offsetVal) + (64 - texCopy.getWidth() * Math.abs(dir)) / 2), 0, (int) (64 * Math.abs(dir)), 64);

    // Flip if facing left.
    if (dir < 0)
    {
        p = flipPixmap(p);
    }

    return new Texture(p);
}
My flipPixmap method looks like this (stolen from stack overflow):
private Pixmap flipPixmap(Pixmap src)
{
    final int width = src.getWidth();
    final int height = src.getHeight();
    Pixmap flipped = new Pixmap(width, height, src.getFormat());
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            flipped.drawPixel(x, y, src.getPixel(width - x - 1, y));
        }
    }
    return flipped;
}
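The same horizontal flip can be tested without libGDX on a plain 2D array of packed pixel values. `FlipSketch` is a hypothetical stand-in mirroring the flipPixmap logic above:

```java
// GL-free sketch of flipPixmap: mirror each row of packed pixel values.
public class FlipSketch {
    public static int[][] flipHorizontal(int[][] src) {
        int h = src.length, w = src[0].length;
        int[][] flipped = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Read from the mirrored column, exactly as flipPixmap does.
                flipped[y][x] = src[y][w - x - 1];
            }
        }
        return flipped;
    }
}
```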
Here's the result :D https://imgur.com/a/wGeHg9D

Occlusion query using GL_ANY_SAMPLES_PASSED returning true when fragments are occluded

I am in the process of implementing a lens glow effect for my engine.
However, attempting to use an occlusion query only returns true when the fragments in question are completely occluded.
Perhaps the problem lies in that I am manually writing to the z-value of each vertex, since I am using a logarithmic depth buffer. However, I am not sure why this would affect occlusion testing.
Here are the relevant code snippets:
public class Query implements Disposable {
    private final int id;
    private final int type;
    private boolean inUse = false;

    public Query(int type) {
        this.type = type;
        int[] arr = new int[1];
        Gdx.gl30.glGenQueries(1, arr, 0);
        id = arr[0];
    }

    public void start() {
        Gdx.gl30.glBeginQuery(type, id);
        inUse = true;
    }

    public void end() {
        Gdx.gl30.glEndQuery(type);
    }

    public boolean isResultReady() {
        IntBuffer result = BufferUtils.newIntBuffer(1);
        Gdx.gl30.glGetQueryObjectuiv(id, Gdx.gl30.GL_QUERY_RESULT_AVAILABLE, result);
        return result.get(0) == Gdx.gl.GL_TRUE;
    }

    public int getResult() {
        inUse = false;
        IntBuffer result = BufferUtils.newIntBuffer(1);
        Gdx.gl30.glGetQueryObjectuiv(id, Gdx.gl30.GL_QUERY_RESULT, result);
        return result.get(0);
    }

    public boolean isInUse() {
        return inUse;
    }

    @Override
    public void dispose() {
        Gdx.gl30.glDeleteQueries(1, new int[]{id}, 0);
    }
}
Here is the method where I do the actual test:
private void doOcclusionTest(Camera cam) {
    if (query.isResultReady()) {
        int visibleSamples = query.getResult();
        System.out.println(visibleSamples);
    }
    temp4.set(cam.getPosition());
    temp4.sub(position);
    temp4.normalize();
    temp4.mul(getSize() * 10);
    temp4.add(position);
    occlusionTestPoint.setPosition(temp4.x, temp4.y, temp4.z);

    if (!query.isInUse()) {
        query.start();
        Gdx.gl.glEnable(Gdx.gl.GL_DEPTH_TEST);
        occlusionTestPoint.render(renderer.getPointShader(), cam);
        query.end();
    }
}
My vertex shader for a point, with logarithmic depth buffer calculations included:
#version 330 core

layout (location = 0) in vec3 aPos;

uniform mat4 modelView;
uniform mat4 projection;
uniform float og_farPlaneDistance;
uniform float u_logarithmicDepthConstant;

vec4 modelToClipCoordinates(vec4 position, mat4 modelViewPerspectiveMatrix, float depthConstant, float farPlaneDistance) {
    vec4 clip = modelViewPerspectiveMatrix * position;
    clip.z = ((2.0 * log(depthConstant * clip.z + 1.0) / log(depthConstant * farPlaneDistance + 1.0)) - 1.0) * clip.w;
    return clip;
}

void main()
{
    gl_Position = modelToClipCoordinates(vec4(aPos, 1.0), projection * modelView, u_logarithmicDepthConstant, og_farPlaneDistance);
}
Fragment shader for a point:
#version 330 core

out vec4 fragColor; // gl_FragColor is not available in the core profile

uniform vec4 color;

void main() {
    fragColor = color;
}
Since I am just testing occlusion for a single point I know that the alternative would be to simply check the depth value of that pixel after everything is rendered. However, I am unsure of how I would calculate the logarithmic z-value of a point on the CPU.
I have found a solution to my problem. It is a workaround, only plausible for single points, not for entire models, but here it goes:
Firstly, you must calculate the z-value of your point and the pixel coordinate where it lies. Calculating the z-value should be straightforward; however, in my case I was using a logarithmic depth buffer, so I had to make a few extra calculations for the z-value.
Here is my method to get the coordinates in normalized device coordinates, including the z-value (temp4f can be any Vector4f):
public Vector4f worldSpaceToDeviceCoords(Vector4f pos) {
    temp4f.set(pos);
    Matrix4f projection = transformation.getProjectionMatrix(FOV, screenWidth, screenHeight, 1f, MAXVIEWDISTANCE);
    Matrix4f view = transformation.getViewMatrix(camera);
    view.transform(temp4f);       // Multiply the point vector by the view matrix
    projection.transform(temp4f); // Multiply the point vector by the projection matrix

    temp4f.x = ((temp4f.x / temp4f.w) + 1) / 2f; // Convert x coordinate to the range 0 to 1
    temp4f.y = ((temp4f.y / temp4f.w) + 1) / 2f; // Convert y coordinate to the range 0 to 1

    // Logarithmic depth buffer z-value calculation (remove this if not using a logarithmic depth buffer).
    temp4f.z = ((2.0f * (float) Math.log(LOGDEPTHCONSTANT * temp4f.z + 1.0f) /
            (float) Math.log(LOGDEPTHCONSTANT * MAXVIEWDISTANCE + 1.0f)) - 1.0f) * temp4f.w;
    temp4f.z /= temp4f.w;           // Perform perspective division on the z-value
    temp4f.z = (temp4f.z + 1) / 2f; // Transform the z coordinate into the range 0 to 1
    return temp4f;
}
And this other method is used to get the coordinates of the pixel on the screen (temp2f is any Vector2f):
public Vector2f projectPoint(Vector3f position) {
    temp4f.set(worldSpaceToDeviceCoords(temp4f.set(position.x, position.y, position.z, 1)));
    temp4f.x *= screenWidth;
    temp4f.y *= screenHeight;

    // If the point is not visible, return null.
    if (temp4f.w < 0) {
        return null;
    }
    return temp2f.set(temp4f.x, temp4f.y);
}
Finally, a method to get the stored depth value at a given pixel (outBuff is any direct FloatBuffer):
public float getFramebufferDepthComponent(int x, int y) {
    Gdx.gl.glReadPixels(x, y, 1, 1, Gdx.gl.GL_DEPTH_COMPONENT, Gdx.gl.GL_FLOAT, outBuff);
    return outBuff.get(0);
}
So with these methods, what you need to do to find out whether a certain point is occluded is this:
1. Check at which pixel the point lies (second method).
2. Retrieve the current stored z-value at that pixel (third method).
3. Get the calculated z-value of the point (first method).
4. If the calculated z-value is lower than the stored z-value, the point is visible.
Please note that you should draw everything in the scene before sampling the depth buffer, otherwise the extracted depth buffer value will not reflect all that is rendered.
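The depth math in those steps can be sketched as plain Java with no GL calls. `OcclusionCheck` and its two constants are hypothetical placeholders mirroring the LOGDEPTHCONSTANT / MAXVIEWDISTANCE values used above:

```java
// GL-free sketch of the depth comparison: map a clip-space z through the
// logarithmic remap above, then compare against a depth-buffer sample.
public class OcclusionCheck {
    static final float LOG_DEPTH_CONSTANT = 1.0f;   // placeholder value
    static final float MAX_VIEW_DISTANCE = 10000f;  // placeholder value

    /** Logarithmic depth remap, perspective division, then NDC [-1,1] -> [0,1]. */
    public static float logDepth01(float clipZ, float clipW) {
        float z = (float) ((2.0 * Math.log(LOG_DEPTH_CONSTANT * clipZ + 1.0)
                / Math.log(LOG_DEPTH_CONSTANT * MAX_VIEW_DISTANCE + 1.0)) - 1.0) * clipW;
        z /= clipW;            // perspective division
        return (z + 1f) / 2f;  // depth range [0,1]
    }

    /** Step 4: the point is visible if its depth is in front of the stored depth. */
    public static boolean visible(float pointDepth, float storedDepth) {
        return pointDepth < storedDepth;
    }
}
```

Closer geometry maps to a smaller depth value, so the comparison direction matches step 4.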

Texture UVs not sent to shaders correctly

I am rendering a mesh using GLSL shaders and a VBO. The VBO stores 4 attributes: positionXYZ, normalXYZ, textureUV, colourRGBA. Everything works except for the UVs (possibly the normals too, but I haven't got a way to test them yet).
What is happening is that the texture UV positions in the array are offset to the normal's x and y positions in the array. The array is structured as VVVNNNTTCCCC (vertex position, normal, texture, colour), by the way. I am pretty sure the problem is somewhere in sending the VBO to the shaders. I know for certain that the data in the VBO is in the correct order.
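For reference, the byte offsets that a VVVNNNTTCCCC layout implies can be computed and checked independently. `LayoutSketch` is a hypothetical name; its values mirror the constants in the Mesh class below. Note that with interleaved data, every attribute pointer must use the full vertex size as its stride:

```java
// Sketch of the interleaved VVVNNNTTCCCC layout: byte offsets and stride.
public class LayoutSketch {
    static final int FLOAT_BYTES = 4;
    static final int POS = 3, NORMAL = 3, TEX = 2, COLOUR = 4; // floats per attribute

    static final int POS_OFFSET    = 0;
    static final int NORMAL_OFFSET = POS * FLOAT_BYTES;                        // after position
    static final int TEX_OFFSET    = (POS + NORMAL) * FLOAT_BYTES;             // after normal
    static final int COLOUR_OFFSET = (POS + NORMAL + TEX) * FLOAT_BYTES;       // after texture
    static final int STRIDE        = (POS + NORMAL + TEX + COLOUR) * FLOAT_BYTES; // whole vertex
}
```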
This is my rendering code:
The VBO class
public final class Mesh
{
    public static final int FLOAT_SIZE_BYTES = 4;
    public static final int FLOATS_PER_POSITION = 3;
    public static final int FLOATS_PER_NORMAL = 3;
    public static final int FLOATS_PER_TEXTURE = 2;
    public static final int FLOATS_PER_COLOUR = 4;
    public static final int VERTEX_SIZE_FLOATS = FLOATS_PER_POSITION + FLOATS_PER_NORMAL + FLOATS_PER_TEXTURE + FLOATS_PER_COLOUR;
    public static final int VERTEX_SIZE_BYTES = VERTEX_SIZE_FLOATS * FLOAT_SIZE_BYTES;

    public static final int POSITION_OFFSET_FLOATS = 0;
    public static final int NORMAL_OFFSET_FLOATS = POSITION_OFFSET_FLOATS + FLOATS_PER_POSITION;
    public static final int TEXTURE_OFFSET_FLOATS = NORMAL_OFFSET_FLOATS + FLOATS_PER_NORMAL;
    public static final int COLOUR_OFFSET_FLOATS = TEXTURE_OFFSET_FLOATS + FLOATS_PER_TEXTURE;
    public static final int POSITION_OFFSET_BYTES = POSITION_OFFSET_FLOATS * FLOAT_SIZE_BYTES;
    public static final int NORMAL_OFFSET_BYTES = NORMAL_OFFSET_FLOATS * FLOAT_SIZE_BYTES;
    public static final int TEXTURE_OFFSET_BYTES = TEXTURE_OFFSET_FLOATS * FLOAT_SIZE_BYTES;
    public static final int COLOUR_OFFSET_BYTES = COLOUR_OFFSET_FLOATS * FLOAT_SIZE_BYTES;
    public static final int POSITION_STRIDE_BYTES = VERTEX_SIZE_BYTES;
    public static final int NORMAL_STRIDE_BYTES = VERTEX_SIZE_BYTES;
    public static final int TEXTURE_STRIDE_BYTES = VERTEX_SIZE_BYTES;
    public static final int COLOUR_STRIDE_BYTES = VERTEX_SIZE_BYTES;

    public final static int VERTICES_PER_FACE = 3;

    public static final int ATTRIBUTE_LOCATION_POSITION = 0;
    public static final int ATTRIBUTE_LOCATION_NORMAL = 1;
    public static final int ATTRIBUTE_LOCATION_TEXTURE = 2;
    public static final int ATTRIBUTE_LOCATION_COLOUR = 3;

    private int vaoID;
    private int iboID;
    private int indexCount;

    private Mesh(int vaoID, int iboID, int indexCount)
    {
        this.vaoID = vaoID;
        this.iboID = iboID;
        this.indexCount = indexCount;
    }

    public void draw(AbstractShaderProgram shader, Texture texture)
    {
        glEnable(GL_TEXTURE_2D);
        if (texture != null) texture.bind(shader);
        else Texture.MISSING_TEXTURE.bind(shader);

        glBindVertexArray(vaoID);
        glEnableVertexAttribArray(ATTRIBUTE_LOCATION_POSITION);
//        glEnableVertexAttribArray(ATTRIBUTE_LOCATION_NORMAL);
//        glEnableVertexAttribArray(ATTRIBUTE_LOCATION_TEXTURE);
//        glEnableVertexAttribArray(ATTRIBUTE_LOCATION_COLOUR);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);
        glDrawElements(GL_TRIANGLES, indexCount, GL_FLOAT, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glDisableVertexAttribArray(ATTRIBUTE_LOCATION_POSITION);
//        glDisableVertexAttribArray(ATTRIBUTE_LOCATION_NORMAL);
//        glDisableVertexAttribArray(ATTRIBUTE_LOCATION_TEXTURE);
//        glDisableVertexAttribArray(ATTRIBUTE_LOCATION_COLOUR);
        glBindVertexArray(0);
        glDisable(GL_TEXTURE_2D);
    }

    public static Mesh compile(List<Face> faces)
    {
        if (faces.size() <= 0)
            throw new RuntimeException("Failed to compile mesh. No faces were provided.");

        HashMap<Vertex, Integer> indexMap = new HashMap<>();
        ArrayList<Vertex> vertices = new ArrayList<>();
        int vertexCount = 0;
        for (Face face : faces)
        {
            for (Vertex vertex : face.getVertices())
            {
                if (!indexMap.containsKey(vertex))
                {
                    indexMap.put(vertex, vertexCount++);
                    vertices.add(vertex);
                }
            }
        }

        int indicesCount = faces.size() * VERTICES_PER_FACE;
        int dataSize = vertexCount * VERTEX_SIZE_FLOATS;
        FloatBuffer vertexData = BufferUtils.createFloatBuffer(dataSize);
        if (vertexData == null)
            System.err.println("Failed to allocate FloatBuffer with size " + dataSize);

        for (Vertex vertex : vertices)
        {
            vertexData.put(vertex.getPosition().x);
            vertexData.put(vertex.getPosition().y);
            vertexData.put(vertex.getPosition().z);
//            vertexData.put(vertex.getNormal() == null ? 1.0F : vertex.getNormal().x);
//            vertexData.put(vertex.getNormal() == null ? 1.0F : vertex.getNormal().y);
//            vertexData.put(vertex.getNormal() == null ? 1.0F : vertex.getNormal().z);
//            vertexData.put(vertex.getTexture() == null ? 0.0F : vertex.getTexture().x);
//            vertexData.put(vertex.getTexture() == null ? 0.0F : vertex.getTexture().y);
//            vertexData.put(vertex.getColour() == null ? 1.0F : vertex.getColour().getRGBA().x);
//            vertexData.put(vertex.getColour() == null ? 1.0F : vertex.getColour().getRGBA().y);
//            vertexData.put(vertex.getColour() == null ? 1.0F : vertex.getColour().getRGBA().z);
//            vertexData.put(vertex.getColour() == null ? 1.0F : vertex.getColour().getRGBA().w);
        }
        vertexData.flip();

        IntBuffer indices = BufferUtils.createIntBuffer(indicesCount);
        for (Face face : faces)
        {
            for (Vertex vertex : face.getVertices())
            {
                int index = indexMap.get(vertex);
                indices.put(index);
            }
        }
        indices.flip();

        int vaoID = glGenVertexArrays();
        glBindVertexArray(vaoID);

        int vboID = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);
        glVertexAttribPointer(ATTRIBUTE_LOCATION_POSITION, FLOATS_PER_POSITION, GL_FLOAT, false, 0, 0);
//        glVertexAttribPointer(ATTRIBUTE_LOCATION_NORMAL, FLOATS_PER_NORMAL, GL_FLOAT, false, NORMAL_STRIDE_BYTES, NORMAL_OFFSET_BYTES);
//        glVertexAttribPointer(ATTRIBUTE_LOCATION_TEXTURE, FLOATS_PER_TEXTURE, GL_FLOAT, false, TEXTURE_STRIDE_BYTES, TEXTURE_OFFSET_BYTES);
//        glVertexAttribPointer(ATTRIBUTE_LOCATION_COLOUR, FLOATS_PER_COLOUR, GL_FLOAT, false, COLOUR_STRIDE_BYTES, COLOUR_OFFSET_BYTES);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);

        int iboID = glGenBuffers();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

        return new Mesh(vaoID, iboID, indicesCount);
    }
}
The vertex shader:
#version 400 core

in vec3 vertPosition;
in vec3 vertNormal;
in vec2 vertTexture;
in vec4 vertColour;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

out vec3 pVertPosition;
out vec3 pVertNormal;
out vec2 pVertTexture;
out vec4 pVertColour;

void main()
{
    pVertPosition = vertPosition;
    pVertNormal = vertNormal;
    pVertTexture = vertTexture;
    pVertColour = vertColour;
    gl_Position = vec4(vec3(vertPosition.xy + vertTexture, vertPosition.z), 1.0);
}
The fragment shader:
#version 400 core

in vec3 ppVertPosition;
in vec3 ppVertNormal;
in vec2 ppVertTexture;
in vec4 ppVertColour;

uniform sampler2D texture;

void main()
{
    gl_FragColor = texture2D(texture, ppVertTexture);
}
There is a geometry shader in between these, but it is currently redundant and just passes the information straight to the fragment shader (that is why the out and in variable names don't match). Also, the reason the texture UV is added to the vertex position in the vertex shader was to debug what texture UV values were actually being passed. This is how I know the UVs are offset to the normal's xy. If I put the texture UVs into the normal's xy, they work perfectly fine.
If there is any extra code you would like to see, that I haven't included, I'll add it. I haven't added everything, for example, the whole VBO class, because it is too much code. I have only included what I think is relevant and where I think the problem is.
Edit #1:
The variable locations in the shader, such as vertPosition and vertNormal are bound. This is my code to bind them
glBindAttribLocation(program, Mesh.ATTRIBUTE_LOCATION_POSITION, "vertPosition");
glBindAttribLocation(program, Mesh.ATTRIBUTE_LOCATION_NORMAL, "vertNormal");
glBindAttribLocation(program, Mesh.ATTRIBUTE_LOCATION_TEXTURE, "vertTexture");
glBindAttribLocation(program, Mesh.ATTRIBUTE_LOCATION_COLOUR, "vertColour");
Changing the vertex shader to use layouts, like so, yields the exact same result:
layout(location = 0) in vec3 vertPosition;
layout(location = 1) in vec3 vertNormal;
layout(location = 2) in vec2 vertTexture;
layout(location = 3) in vec4 vertColour;
Edit #2
I decided to post the entire Mesh class, rather than just parts of it. I have also tried to implement VAOs instead of VBOs, but it isn't working.
You are mixing standard pipeline functionality with custom shader variables.
Calling glEnableClientState(GL_VERTEX_ARRAY); tells OpenGL to use a certain data bind point; all good.
Calling glVertexPointer(...) tells OpenGL where to find its vertices. Since you enabled the correct array previously, all is still good.
Then we get to the vertex shader, and you use in vec3 vertPosition; but GLSL doesn't know that you want your vertex data there. Sure, we can see the name is "vertPosition", but GLSL shouldn't have to guess which data you want based on the variable name, of course! Instead, the default pipeline behavior for GLSL is to use gl_Vertex, a pre-built GLSL variable tied to GL_VERTEX_ARRAY.
So why does it work for positions? Sheer luck, I guess, with the variables you define happening to be allocated the pre-built constants by chance.
What you should do is switch glEnableClientState to glEnableVertexAttribArray, use layout qualifiers to assign each variable a number, and call glVertexAttribPointer instead of glVertexPointer to link that number to the correct data.
Now the variables you declared, like vertPosition, point to the right data in your buffers, not by chance but because you told them to!
This is the correct way to do things in modern OpenGL; using pre-built variables like gl_Vertex and functions like glEnableClientState is considered old and bad because it's inflexible.
You can also omit the layout qualifiers (they require OpenGL/GLSL 3.3+, which not everyone has), but that requires some more work before linking the shaders.
Good luck!
More info on converting your code
(I hope I'm right with this; I can't comment to actually verify that this is the issue.)
Okay, I got it working... finally. I have no idea what the initial problem was with the VBO, but once I switched to using a VAO and stopped rendering with glClientState, it worked fine. Also, the problem I was having with the VAO not rendering anything whatsoever was that the line:
glDrawElements(GL_TRIANGLES, indexCount, GL_FLOAT, 0);
should have been
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);

OpenGL terrain texture shading

I am currently working on an OpenGL (JOGL) project in java.
The goal is to create a terrain with textures and shading.
I'm creating random simplex noise and using the values as a heightmap.
The heights are mapped to a 1D texture to simulate coloring based on height.
A material (ambient/diffuse/specular/shininess) is then used to simulate shading.
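The mapTextureToHeight function used later in the question is not shown; presumably it normalizes a world-space height into the [0,1] coordinate of the 1D colour-ramp texture. A hypothetical sketch of such a mapping (`HeightRamp` and its parameters are assumptions, not the author's code):

```java
// Hypothetical sketch of a mapTextureToHeight-style mapping: normalize a
// height into the [0,1] coordinate of a 1D colour-ramp texture.
public class HeightRamp {
    public static double mapTextureToHeight(double z, double minHeight, double maxHeight) {
        double t = (z - minHeight) / (maxHeight - minHeight);
        // Clamp so heights outside the range still land inside the ramp.
        return Math.max(0.0, Math.min(1.0, t));
    }
}
```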
However, after adding shading to the terrain, 'stripes' appear on each 'column' (Y direction) of the terrain.
The following material is then applied:
TERRAIN(
    new float[]{0.5f, 0.5f, 0.5f, 1.0f},
    new float[]{0.7f, 0.7f, 0.7f, 1.0f},
    new float[]{0.2f, 0.2f, 0.2f, 1.0f},
    new float[]{100f})
The material enum constructor:
Material(float[] ambient, float[] diffuse, float[] specular, float[] shininess) {
    this.ambient = ambient;
    this.diffuse = diffuse;
    this.specular = specular;
    this.shininess = shininess;
}
I apply the material using the following method:
public void use(GL2 gl) {
    // Set the material properties.
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GLLightingFunc.GL_AMBIENT, ambient, 0);
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GLLightingFunc.GL_DIFFUSE, diffuse, 0);
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GLLightingFunc.GL_SPECULAR, specular, 0);
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GLLightingFunc.GL_SHININESS, shininess, 0);
}
After creating a 2D 'noise array' consisting of 0-1 values, a 2D vector array is created, consisting of X*Y vectors, where each vector represents a point in the plane/terrain.
Here is the method that draws triangles between those points, where you can see I draw the plane per column (Y direction):
public void draw(GL2 gl, GLU glu, GLUT glut, Drawer drawer) {
    Material.TERRAIN.use(gl);
    texture.bind(gl);
    if (showGrid)
        gl.glPolygonMode(gl.GL_FRONT_AND_BACK, gl.GL_LINE);

    ArrayList<Vector[]> normals = new ArrayList<>();
    for (int i = 1; i < vectors.length; i++) {
        gl.glBegin(gl.GL_TRIANGLE_STRIP);
        for (int j = 0; j < vectors[i].length; j++) {
            Vector normalTopRight, normalBottomLeft;

            // Calculate the top-right normal.
            Vector v1, v2, triangleCenterTR;
            if (j < vectors[i].length - 1)
            {
                v1 = vectors[i - 1][j].subtract(vectors[i][j]);
                v2 = vectors[i][j + 1].subtract(vectors[i][j]);
                normalTopRight = v2.cross(v1).normalized();
                // Get the center: (a+b+c)*(1/3)
                triangleCenterTR = (vectors[i][j].add(vectors[i - 1][j]).add(vectors[i][j + 1])).scale(1.0 / 3);
            } else {
                v1 = vectors[i - 1][j].subtract(vectors[i][j]);
                v2 = vectors[i][j - 1].subtract(vectors[i][j]);
                normalTopRight = v1.cross(v2).normalized();
                // Get the center: (a+b+c)*(1/3)
                triangleCenterTR = (vectors[i][j].add(vectors[i - 1][j]).add(vectors[i][j - 1])).scale(1.0 / 3);
            }
            normals.add(new Vector[]{triangleCenterTR, triangleCenterTR.add(normalTopRight)});

            if (j != 0)
            {
                v1 = vectors[i][j].subtract(vectors[i - 1][j]);
                v2 = vectors[i - 1][j - 1].subtract(vectors[i - 1][j]);
                normalBottomLeft = v2.cross(v1).normalized();
                // Get the center: (a+b+c)*(1/3)
                Vector triangleCenterBL = (vectors[i - 1][j].add(vectors[i][j]).add(vectors[i - 1][j - 1])).scale(1.0 / 3);
                normals.add(new Vector[]{triangleCenterBL, triangleCenterBL.add(normalBottomLeft)});
            } else {
                normalBottomLeft = null; // If j == 0, there is no bottom-left triangle above.
            }

            /**
             * We have everything to start drawing.
             */
            // Set some color.
            if (j == 0) {
                // Initialization vertex.
                gl.glTexCoord1d(mapTextureToHeight(vectors[i][j].z));
                drawer.glVertexV(vectors[i][j]);
            } else {
                drawer.glNormalV(normalBottomLeft);
            }
            // Shift left.
            gl.glTexCoord1d(mapTextureToHeight(vectors[i - 1][j].z));
            drawer.glVertexV(vectors[i - 1][j]);
            // Right and down diagonally.
            if (j < vectors[i].length - 1) { // Skip if we have reached the end.
                gl.glTexCoord1d(mapTextureToHeight(vectors[i][j + 1].z));
                drawer.glNormalV(normalTopRight);
                drawer.glVertexV(vectors[i][j + 1]);
            }
        }
        gl.glEnd();
    }
    if (showGrid)
        gl.glPolygonMode(gl.GL_FRONT_AND_BACK, gl.GL_FILL);
    if (drawNormals) {
        for (Vector[] arrow : normals) {
            if (yellowNormals)
                Material.YELLOW.use(gl);
            else
                gl.glTexCoord1d(mapTextureToHeight(arrow[0].z));
            drawer.drawArrow(arrow[0], arrow[1], 0.05);
        }
    }
    texture.unbind(gl);
}
The most obvious reason for the stripes is that I draw the triangles per column, preventing OpenGL from smoothing the shading across adjacent polygons (GL_SMOOTH). Is there any way to fix this?
[Edit1] Copied from your comment by Spektre
I just finished calculating the average normals, I indeed have a smooth terrain now, but the lighting looks kind of dull (no depth)
Here is the new code that draws the terrain:
public void draw() {
    if (showGrid)
        gl.glPolygonMode(gl.GL_FRONT_AND_BACK, gl.GL_LINE);
    texture.bind(gl);
    Material.TERRAIN.use(gl);
    for (int i = 1; i < vectors.length; i++) {
        gl.glBegin(gl.GL_TRIANGLE_STRIP);
        for (int j = 0; j < vectors[i].length; j++) {
            // Initialization vertex.
            gl.glTexCoord1d(mapTextureToHeight(vectors[i][j].z));
            drawer.glNormalV(normals.get(vectors[i][j]));
            drawer.glVertexV(vectors[i][j]);
            // Shift left.
            gl.glTexCoord1d(mapTextureToHeight(vectors[i - 1][j].z));
            drawer.glNormalV(normals.get(vectors[i - 1][j]));
            drawer.glVertexV(vectors[i - 1][j]);
        }
        gl.glEnd();
    }
    if (showGrid)
        gl.glPolygonMode(gl.GL_FRONT_AND_BACK, gl.GL_FILL);
    if (drawNormals)
        drawFaceNormals();
    texture.unbind(gl);
}
I cleaned it up. I am sure the normals are pointing the correct way (verified with the drawNormals function), and I made sure OpenGL sees the top of the terrain as the front face (gl.GL_FRONT draws only above the terrain, not below).
Here is the complete class: PasteBin
Thanks to @Spektre for helping me out.
After properly calculating the average normal of all surrounding faces on a vertex and using this normal for glNormal, the shading was correct.
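The fix described can be sketched GL-free. `NormalAverager` is a hypothetical helper that accumulates the adjacent face normals of a vertex (plain double[3] vectors, no JOGL) and normalizes the sum:

```java
// Sketch of per-vertex normal smoothing: sum the normals of all faces
// surrounding a vertex, then normalize the sum.
public class NormalAverager {
    public static double[] average(double[][] faceNormals) {
        double[] sum = new double[3];
        for (double[] n : faceNormals) {
            sum[0] += n[0];
            sum[1] += n[1];
            sum[2] += n[2];
        }
        double len = Math.sqrt(sum[0] * sum[0] + sum[1] * sum[1] + sum[2] * sum[2]);
        return new double[]{sum[0] / len, sum[1] / len, sum[2] / len};
    }
}
```

Feeding this averaged normal to glNormal for each vertex lets GL_SMOOTH interpolate lighting continuously across the strip boundaries.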

Zooming in map with OpenGL-ES2 Android

I have created a pinch zoom with a scale detector, which in turn calls the following renderer.
It uses the projection matrix to do the zoom and then scales the eye per the zoom when panning.
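The zoom itself boils down to shrinking the symmetric frustum bounds each scale step, as the setScaleFactor method below does. `ZoomSketch` is a hypothetical, GL-free reduction of that math:

```java
// GL-free sketch of the pinch-zoom math: each scale step shrinks the
// symmetric projection bounds, so the view zooms toward its center.
public class ZoomSketch {
    double right = 2.0, top = 1.0; // current half-extents of the view volume

    void applyScale(double scaleFactor) {
        right /= scaleFactor; // scaleFactor > 1 zooms in, < 1 zooms out
        top /= scaleFactor;
    }

    double left()   { return -right; } // bounds stay symmetric about the center
    double bottom() { return -top; }
}
```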
public class vboCustomGLRenderer implements GLSurfaceView.Renderer {

    // Store the model matrix. This matrix is used to move models from object space (where each model can be thought
    // of being located at the center of the universe) to world space.
    private float[] mModelMatrix = new float[16];

    // Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
    // it positions things relative to our eye.
    private float[] mViewMatrix = new float[16];

    // Store the projection matrix. This is used to project the scene onto a 2D viewport.
    private float[] mProjectionMatrix = new float[16];

    // Allocate storage for the final combined matrix. This will be passed into the shader program.
    private float[] mMVPMatrix = new float[16];

    // This will be used to pass in the transformation matrix.
    private int mMVPMatrixHandle;

    // This will be used to pass in model position information.
    private int mPositionHandle;

    // This will be used to pass in model color information.
    private int mColorUniformLocation;

    // How many bytes per float.
    private final int mBytesPerFloat = 4;

    // Offset of the position data.
    private final int mPositionOffset = 0;

    // Size of the position data in elements.
    private final int mPositionDataSize = 3;

    // How many bytes per vertex for the position data.
    private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;

    // Position the eye behind the origin.
    public double eyeX = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX) / 2);
    public double eyeY = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY) / 2);
    //final float eyeZ = 1.5f;
    public float eyeZ = 1.5f;

    // We are looking toward the distance.
    public double lookX = eyeX;
    public double lookY = eyeY;
    public float lookZ = 0.0f;

    // Set our up vector. This is where our head would be pointing were we holding the camera.
    public float upX = 0.0f;
    public float upY = 1.0f;
    public float upZ = 0.0f;

    public double mScaleFactor = 1;
    public double mScrnVsMapScaleFactor = 0;

    public vboCustomGLRenderer() {}

    public void setEye(double x, double y) {
        eyeX -= (x / screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (y / screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the camera position (view matrix).
        Matrix.setLookAtM(mViewMatrix, 0, (float) eyeX, (float) eyeY, eyeZ, (float) lookX, (float) lookY, lookZ, upX, upY, upZ);
    }

    public void setScaleFactor(float scaleFactor, float gdx, float gdy) {
        mScaleFactor *= scaleFactor;
        mRight = mRight / scaleFactor;
        mLeft = -mRight;
        mTop = mTop / scaleFactor;
        mBottom = -mTop;

        // Need to calculate the shift in the eye when zooming on a particular spot.
        // So get the distance between the zoom point and the eye point, and figure out
        // the new eye point by taking the factor of this distance.
        double eyeXShift = (((mWidth / 2) - gdx) - (((mWidth / 2) - gdx) / scaleFactor));
        double eyeYShift = (((mHeight / 2) - gdy) - (((mHeight / 2) - gdy) / scaleFactor));
screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));
eyeX -= (eyeXShift / screen_vs_map_horz_ratio);
lookX = eyeX;
eyeY += (eyeYShift / screen_vs_map_vert_ratio);
lookY = eyeY;
// Set the scale (Projection matrix)
Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
}
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// Set the background frame color
//White
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.
+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.
+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.
+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " v_Color = a_Color; \n" // Pass the color through to the fragment shader.
// It will be interpolated across the triangle.
+ " gl_Position = u_MVPMatrix \n" // gl_Position is a special variable used to store the final position.
+ " * a_Position; \n" // Multiply the vertex by the matrix to get the final point in
+ "} \n"; // normalized screen coordinates.
final String fragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
+ "uniform vec4 u_Color; \n" // This is the color from the vertex shader interpolated across the
// triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = u_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
// Load in the vertex shader.
int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
if (vertexShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(vertexShaderHandle, vertexShader);
// Compile the shader.
GLES20.glCompileShader(vertexShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(vertexShaderHandle);
vertexShaderHandle = 0;
}
}
if (vertexShaderHandle == 0)
{
throw new RuntimeException("Error creating vertex shader.");
}
// Load in the fragment shader shader.
int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
if (fragmentShaderHandle != 0)
{
// Pass in the shader source.
GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);
// Compile the shader.
GLES20.glCompileShader(fragmentShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0)
{
GLES20.glDeleteShader(fragmentShaderHandle);
fragmentShaderHandle = 0;
}
}
if (fragmentShaderHandle == 0)
{
throw new RuntimeException("Error creating fragment shader.");
}
// Create a program object and store the handle to it.
int programHandle = GLES20.glCreateProgram();
if (programHandle != 0)
{
// Bind the vertex shader to the program.
GLES20.glAttachShader(programHandle, vertexShaderHandle);
// Bind the fragment shader to the program.
GLES20.glAttachShader(programHandle, fragmentShaderHandle);
// Bind attributes
GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
GLES20.glBindAttribLocation(programHandle, 1, "a_Color");
// Link the two shaders together into a program.
GLES20.glLinkProgram(programHandle);
// Get the link status.
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);
// If the link failed, delete the program.
if (linkStatus[0] == 0)
{
GLES20.glDeleteProgram(programHandle);
programHandle = 0;
}
}
if (programHandle == 0)
{
throw new RuntimeException("Error creating program.");
}
// Set program handles. These will later be used to pass in values to the program.
mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");
// Tell OpenGL to use this program when rendering.
GLES20.glUseProgram(programHandle);
}
static double mWidth = 0;
static double mHeight = 0;
static double mLeft = 0;
static double mRight = 0;
static double mTop = 0;
static double mBottom = 0;
static double mRatio = 0;
double screen_width_height_ratio;
double screen_height_width_ratio;
final float near = 1.5f;
final float far = 10.0f;
double screen_vs_map_horz_ratio = 0;
double screen_vs_map_vert_ratio = 0;
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
// Adjust the viewport based on geometry changes,
// such as screen rotation
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
screen_width_height_ratio = (double) width / height;
screen_height_width_ratio = (double) height / width;
//Initialize
if (mRatio == 0){
mWidth = (double) width;
mHeight = (double) height;
//map height to width ratio
double map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
double map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
double map_width_height_ratio = map_extents_width/map_extents_height;
if (screen_width_height_ratio > map_width_height_ratio){
mRight = (screen_width_height_ratio * map_extents_height)/2;
mLeft = -mRight;
mTop = map_extents_height/2;
mBottom = -mTop;
}
else{
mRight = map_extents_width/2;
mLeft = -mRight;
mTop = (screen_height_width_ratio * map_extents_width)/2;
mBottom = -mTop;
}
mRatio = screen_width_height_ratio;
}
if (screen_width_height_ratio != mRatio){
final double wRatio = width/mWidth;
final double oldWidth = mRight - mLeft;
final double newWidth = wRatio * oldWidth;
final double widthDiff = (newWidth - oldWidth)/2;
mLeft = mLeft - widthDiff;
mRight = mRight + widthDiff;
final double hRatio = height/mHeight;
final double oldHeight = mTop - mBottom;
final double newHeight = hRatio * oldHeight;
final double heightDiff = (newHeight - oldHeight)/2;
mBottom = mBottom - heightDiff;
mTop = mTop + heightDiff;
mWidth = (double) width;
mHeight = (double) height;
mRatio = screen_width_height_ratio;
}
screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));
Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
}
ListIterator<mapLayer> orgNonAssetCatLayersList_it;
ListIterator<FloatBuffer> mapLayerObjectList_it;
ListIterator<Byte> mapLayerObjectTypeList_it;
mapLayer MapLayer;
@Override
public void onDrawFrame(GL10 unused) {
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
drawPreset();
orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
while (orgNonAssetCatLayersList_it.hasNext()) {
MapLayer = orgNonAssetCatLayersList_it.next();
if (MapLayer.BatchedPointVBO != null){
}
if (MapLayer.BatchedLineVBO != null){
drawLineString(MapLayer.BatchedLineVBO, MapLayer.lineStringObjColor);
}
if (MapLayer.BatchedPolygonVBO != null){
drawPolygon(MapLayer.BatchedPolygonVBO, MapLayer.polygonObjColor);
}
}
}
private void drawPreset()
{
Matrix.setIdentityM(mModelMatrix, 0);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
}
private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
{
// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);
GLES20.glLineWidth(2.0f);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
}
private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
{
// Pass in the position information
geometryBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);
GLES20.glLineWidth(1.0f);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
}
}
This works very well until it reaches a certain zoom level, at which point panning starts jumping. After testing I found this was because a float value for the eye position cannot represent such a small shift. I keep my x and y eye values as doubles so the shifted positions keep being calculated correctly; it is only when calling setLookAtM() that I convert them to floats.
So I need to change the way the zoom works. I was thinking that instead of zooming with the projection, I could scale the model larger or smaller.
The setScaleFactor() function in my code will change: the projection and eye shifting will be removed.
There is a Matrix.scaleM(m, offset, x, y, z) function, but I am unsure how or where to implement it.
I could use some suggestions on how to accomplish this.
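The precision problem described above is easy to reproduce: a float carries only about 7 significant decimal digits, so a pan step that is small relative to the coordinate's magnitude survives in a double but vanishes when narrowed to float for setLookAtM(). A minimal sketch (the specific values are illustrative):

```java
// Sketch of why panning "jumps": a tiny shift added to a large coordinate
// survives in a double but is lost when the value is narrowed to float,
// so consecutive eye positions collapse to the same float and the camera
// sticks until the accumulated shift exceeds one float ulp.
public class FloatPrecisionDemo {
    static boolean shiftSurvivesAsDouble(double base, double shift) {
        return (base + shift) != base;
    }
    static boolean shiftSurvivesAsFloat(double base, double shift) {
        return (float) (base + shift) != (float) base;
    }
}
```

At a coordinate near 152.65 (the poster's data range), a float ulp is roughly 1.5e-5, so any pan step smaller than that is silently discarded after the cast.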
[Edit] 24/7/2013
I tried altering setScaleFactor() like so:
public void setScaleFactor(float scaleFactor, float gdx, float gdy){
mScaleFactor *= scaleFactor;
}
and in drawPreset()
private void drawPreset()
{
Matrix.setIdentityM(mModelMatrix, 0);
//*****Added scaleM
Matrix.scaleM(mModelMatrix, 0, (float)mScaleFactor, (float)mScaleFactor, 1.0f);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
}
Now as soon as I do a zoom the image disappears from the screen. Actually, I found it just off to the right-hand side; I could still pan over to it.
I am still not sure what I should be scaling to zoom: the model, the view, or the model-view?
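One likely reason the image slides off-screen after scaleM is that the scale is applied about the origin rather than about the point being zoomed. A common remedy (a sketch, not the poster's final code) is to compose translate–scale–translate so the pivot stays fixed; with android.opengl.Matrix that would be Matrix.translateM(m, 0, px, py, 0), Matrix.scaleM(m, 0, s, s, 1f), Matrix.translateM(m, 0, -px, -py, 0). The per-point arithmetic behind that composition is:

```java
// Sketch: zooming about a pivot keeps the pivot fixed on screen.
// For each point: scaled = pivot + (point - pivot) * scale, which is the
// same transform as translate(pivot) * scale * translate(-pivot).
public class PivotZoom {
    static double[] scaleAboutPivot(double px, double py,
                                    double x, double y, double s) {
        return new double[] { px + (x - px) * s, py + (y - py) * s };
    }
}
```

The pivot itself maps to itself, while every other point moves away from (or toward) it by the scale factor, which is exactly the behavior a pinch-zoom centered on the touch point needs.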
I have found that if you move the center of your model back to the origin (0,0), it extends your zoom capabilities. My x-coordinate data was between 152.6 and 152.7, so I moved it back to the origin with an offset of 152.65, which needs to be applied to the data before loading it into the FloatBuffer.
The width of the data then becomes 0.1 (0.05 on each side), leaving more precision in the trailing digits of each value.
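The recentering step can be sketched as follows; the helper name and data layout are illustrative, and the 152.65 offset is the midpoint from the example above:

```java
import java.nio.FloatBuffer;

// Sketch: subtract the data's midpoint from every coordinate before filling
// the FloatBuffer, so values sit near 0 where floats have the most relative
// precision. The method name and layout are assumptions for illustration.
public class RecenterData {
    static FloatBuffer toCenteredBuffer(double[] xs, double offsetX) {
        FloatBuffer fb = FloatBuffer.allocate(xs.length);
        for (double x : xs)
            fb.put((float) (x - offsetX)); // subtract in double, then narrow
        fb.flip();
        return fb;
    }
}
```

Note the subtraction is done in double before the cast; narrowing first and subtracting afterwards would throw away exactly the precision this step is meant to preserve.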
