How to anchor an object using Cardboard Java SDK

I'm currently working on a project with the Cardboard SDK, and I'm rather stuck right now.
I want to display a cross in the center of the sight, like in an FPS, and keep it in the center of the sight when the user moves his head.
I know that in this code:
public void onNewFrame(HeadTransform headTransform) {
float[] headView = new float[16];
headTransform.getHeadView(headView, 0);
}
the headView param will contain the transformation matrix (rotation + translation) of the head (thanks to this SO question: Android VR Toolkit - HeadTransform getHeadView matrix representation).
I tried to do this:
private float[] mHeadView = new float[16];
public void onNewFrame(HeadTransform headTransform) {
headTransform.getHeadView(mHeadView, 0);
}
public void onDrawEye(Eye eye) {
float[] mvpMatrix = new float[16];
float[] modelMatrix = new float[16];
float[] mvMatrix = new float[16];
float[] camera = new float[16];
Matrix.setLookAtM(camera, 0, 0, 0, -2, 0, 0, -1, 0, 1, 0);
Matrix.multiplyMM(modelMatrix, 0, mHeadView, 0, camera, 0);
Matrix.multiplyMM(mvMatrix, 0, eye.getEyeView(), 0, modelMatrix, 0);
Matrix.multiplyMM(mvpMatrix, 0, eye.getPerspective(0.1f, 100f), 0, mvMatrix, 0);
// Pass the mvpMatrix and vertices buffer to the vertex shader.
}
And here is my vertex shader :
uniform mat4 uMatrix;
attribute vec4 vPosition;
attribute vec4 vColors;
varying vec4 color;
void main() {
color = vColors;
gl_Position = uMatrix * vPosition;
}
But the cross is still anchored to its initial position and doesn't follow the head.
Am I missing something?
How can I make my cross follow my head and stay in the center of the sight?
Thanks in advance for your answers :)
(PS: I don't want to use Unity because this project must only use the Java SDK).

tl;dr: For a head-locked crosshair, skip the multiplication by the mHeadView matrix.
If you want a head-locked object, you need to define it in head space, not in world space. Your current code defines the crosshair in world space. The mHeadView transform maps from world space to the current head space, accounting for the current head rotation. You don't need to multiply by it; that is only required for world-locked objects.
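For illustration, here is a minimal sketch of what that can look like in onDrawEye, assuming the crosshair geometry is authored directly in head/eye space (the 2-unit distance and the 0.1/100 clip planes are illustrative values, not anything mandated by the SDK):
public void onDrawEye(Eye eye) {
    // Head-locked reticle: no camera matrix and no mHeadView multiplication.
    // The model matrix places the cross a fixed distance in front of the viewer,
    // and only the per-eye projection is applied, so the cross moves with the head.
    float[] modelMatrix = new float[16];
    Matrix.setIdentityM(modelMatrix, 0);
    Matrix.translateM(modelMatrix, 0, 0f, 0f, -2f);

    float[] mvpMatrix = new float[16];
    Matrix.multiplyMM(mvpMatrix, 0, eye.getPerspective(0.1f, 100f), 0, modelMatrix, 0);

    // Pass mvpMatrix and the cross vertices to the vertex shader as before.
}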

Related

View & Projection Matrices are not working

I am trying to implement MVP Matrices into my engine.
My Model matrix is working fine but my View and Projection Matrices do not work.
Here is the creation for both:
public void calculateProjectionMatrix() {
final float aspect = Display.getDisplayWidth() / Display.getDisplayHeight();
final float y_scale = (float) ((1f / Math.tan(Math.toRadians(FOV / 2f))) * aspect);
final float x_scale = y_scale / aspect;
final float frustum_length = FAR_PLANE - NEAR_PLANE;
proj.identity();
proj._m00(x_scale);
proj._m11(y_scale);
proj._m22(-((FAR_PLANE + NEAR_PLANE) / frustum_length));
proj._m23(-1);
proj._m32(-((2 * NEAR_PLANE * FAR_PLANE) / frustum_length));
proj._m33(0);
}
public void calculateViewMatrix() {
view.identity();
view.rotate((float) Math.toRadians(rot.x), Mathf.xRot);
view.rotate((float) Math.toRadians(rot.y), Mathf.yRot);
view.rotate((float) Math.toRadians(rot.z), Mathf.zRot);
view.translate(new Vector3f(pos).mul(-1));
System.out.println(view);
}
The vertices I'm trying to render are:
-0.5f, 0.5f, -1.0f,
-0.5f, -0.5f, -1.0f,
0.5f, -0.5f, -1.0f,
0.5f, 0.5f, -1.0f
I tested the view matrix before the upload to the shader and it is correct.
This is how I render:
ss.bind();
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
ss.loadViewProjection(cam);
ss.loadModelMatrix(Mathf.transformation(new Vector3f(0, 0, 0), new Vector3f(), new Vector3f(1)));
ss.connectTextureUnits();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
ss.unbind();
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 i_position;
layout(location = 1) in vec2 i_texCoord;
layout(location = 2) in vec3 i_normal;
out vec2 p_texCoord;
uniform mat4 u_proj;
uniform mat4 u_view;
uniform mat4 u_model;
void main() {
gl_Position = u_proj * u_view * u_model * vec4(i_position, 1.0);
p_texCoord = i_texCoord;
}
While the answer of @Rabbid76 is of course totally right in general, the actual issue in this case is a combination of bad API design (on JOML's part) and not reading the JavaDocs of the used methods.
In particular, the Matrix4f._mNN() methods only set the respective matrix field but omit reevaluating the matrix properties stored internally. Those properties are used to accelerate/route further matrix operations (most notably multiplication) to more optimized methods when the properties of a matrix are known, such as "is it the identity", "does it only represent a translation", "is it a perspective projection", "is it affine", "is it orthonormal", etc.
This is an optimization that JOML applies which in most cases very significantly improves the performance of matrix multiplications.
As for the bad API design: those methods are only public in order for the class org.joml.internal.MemUtil to access them to set matrix elements read from NIO buffers. Since Java does not (yet) have friend classes, those _mNN() methods had to be public for this reason. They are, however, not meant for public/client usage.
I've changed this now and the next JOML version 1.9.21 will not expose them anymore.
If you still want to set matrix fields explicitly (not necessary in most cases, like this one), you can use the mNN() methods, which do reevaluate/weaken the matrix properties so that all further operations remain correct, albeit likely non-optimal.
So the issue in this case is actually: JOML still thinks that the manually created perspective projection matrix is the identity matrix and short-cuts further matrix multiplications based on that assumption.
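A minimal sketch of the two safer alternatives, assuming the same proj, FOV, NEAR_PLANE and FAR_PLANE members as in the question: either let JOML build the projection itself, or use the property-aware mNN() setters instead of _mNN().
public void calculateProjectionMatrix() {
    float aspect = (float) Display.getDisplayWidth() / Display.getDisplayHeight();

    // Option 1: let JOML build the matrix, so its internal properties stay consistent.
    proj.identity()
        .perspective((float) Math.toRadians(FOV), aspect, NEAR_PLANE, FAR_PLANE);

    // Option 2: if fields really must be set by hand, use the mNN() setters,
    // e.g. proj.m23(-1).m33(0), which reevaluate the matrix properties.
}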
The geometry has to be placed in the viewing frustum. All geometry which is outside of the viewing frustum is clipped and not "visible".
The geometry has to be positioned between the NEAR_PLANE and the FAR_PLANE. Note, with a perspective projection, NEAR_PLANE and FAR_PLANE have to be greater than 0:
0 < NEAR_PLANE < FAR_PLANE
Note, in view space the z axis points out of the viewport. An initial view matrix that looks at the object can be defined by:
pos = new Vector3f(0, 0, (NEAR_PLANE + FAR_PLANE)/2f );
rot = new Vector3f(0, 0, 0);
Note, if the distance to the FAR_PLANE is very far, then the object is possibly very small and almost invisible. In this case change the initial values:
pos = new Vector3f(0, 0, NEAR_PLANE * 0.99f + FAR_PLANE * 0.01f );
rot = new Vector3f(0, 0, 0);
So I tried everything and found out that you should not initialise the uniform locations to 0.
In the class that extends the ShaderProgram class, I had:
private int l_TextureSampler = 0;
private int l_ProjectionMatrix = 0;
private int l_ViewMatrix = 0;
private int l_ModelMatrix = 0;
Changing it to this:
private int l_TextureSampler;
private int l_ProjectionMatrix;
private int l_ViewMatrix;
private int l_ModelMatrix;
Worked for me.
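For context, uniform locations normally come from glGetUniformLocation on the linked program rather than from the field initializers. A minimal LWJGL-style sketch (programId and the sampler name are illustrative; the matrix names match the shader above):
// import static org.lwjgl.opengl.GL20.glGetUniformLocation;
// Query the locations once after the program has been linked;
// -1 means the uniform does not exist or was optimized away.
l_ProjectionMatrix = glGetUniformLocation(programId, "u_proj");
l_ViewMatrix = glGetUniformLocation(programId, "u_view");
l_ModelMatrix = glGetUniformLocation(programId, "u_model");
l_TextureSampler = glGetUniformLocation(programId, "u_texture");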

libgdx, render shapes using Mesh and ShaderProgram

I just started using libgdx and want to render some 2D shapes using a Mesh and a custom ShaderProgram.
I'm experienced in OpenGL, but I don't see my mistake here, maybe someone can help me.
The shader is very basic, vertex:
attribute vec2 v;
uniform mat4 o;
void main(){
gl_Position = vec4(o*vec3(v, 1.0), 1.0);
}
fragment:
#ifdef GL_ES
precision mediumhp float;
#endif
void main(){
gl_FragColor = vec4(1, 1, 1, 1);
}
The mesh (quad 100x100px):
Mesh mesh = new Mesh(true, 4, 6, new VertexAttribute(Usage.Position, 2, "v"));
mesh.setVertices(new float[]{0, 0,
100, 0,
0, 100,
100, 100});
mesh.setIndices(new short[]{0, 1, 3, 0, 3, 2});
The render stage:
Matrix4 o = new Matrix4(); // also tried OrthographicCamera and SpriteBatch.getProjectionMatrix() here...
o.setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
shader.begin();
shader.setUniformMatrix(shader.getUniformLocation("o"), o);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
And that's it. I get no output at all, just a black screen.
Of course I clear the screen and everything; SpriteBatch (which I also use for other purposes) works just fine. But I don't get how this is done in libgdx or what's wrong here...

How do you display a 3D object using indices in jogl 2 with OpenGL 3.3

I'm trying to write a program to display basic 3D objects/polygon triangles using JOGL 2 with OpenGL 3.3. However, when it compiles I receive no errors and get a blank window where the object should appear. So my question is: is there anything specific I'm missing that would make the object appear? My code is as follows...
public void init(GL3 gl)
{
gl.glGenVertexArrays(1, IntBuffer.wrap(temp));
//create vertice buffers
int vao = temp[0];
gl.glBindVertexArray(vao);
gl.glGenBuffers(1, IntBuffer.wrap(temp));
int[] temp2 = new int[]{1,1};
gl.glGenBuffers(2, IntBuffer.wrap(temp2));
vbo = temp2[0];
ebo = temp2[1];
//creates vertex array
float vertices[] = {
-0.5f, 0.5f, 0.0f,//1,0,0, // Top-left
0.5f, 0.5f, 0.0f,//0,1,0, // Top-right
0.5f, -0.5f, 0.0f,//0,0,1, // Bottom-right
-0.5f, -0.5f, 0.0f//1,1,0 // Bottom-left
};
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);
gl.glBufferData(GL.GL_ARRAY_BUFFER, vertices.length * 4,
FloatBuffer.wrap(vertices), GL.GL_STATIC_DRAW);
//creates element array
int elements[] = {
0,1,2,
2,3,0
};
gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, ebo);
gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, elements.length * 4,
IntBuffer.wrap(elements), GL.GL_STATIC_DRAW);
gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 3*4, 0* 4);
gl.glEnableVertexAttribArray(0);
}
public void draw(GL3 gl)
{
gl.glBindVertexArray(vao);
gl.glDrawElements(GL.GL_TRIANGLES, 2, GL.GL_UNSIGNED_INT, 0);
}
As for where my shaders are initialized, it's in a different class, which is as follows:
//Matrix4 view = new Matrix4(MatrixFactory.perspective(scene.camera.getHeightAngle(),scene.camera.getAspectRatio(),scene.camera.getPosition());
projection = MatrixFactory.perspective(scene.camera.getHeightAngle(), scene.camera.getAspectRatio(), 0.01f, 100f);
view = MatrixFactory.lookInDirection(scene.camera.getPosition(), scene.camera.getDirection(), scene.camera.getUp());
try {
shader = new Shader(new File("shaders/Transform.vert"), new File("shaders/Transform.frag"));
shader.compile(gl);
shader.enable(gl);
shader.setUniform("projection", projection, gl);
shader.setUniform("view", view, gl);
}
catch (Exception e) {
System.out.println("message " + e.getMessage());
}
for (Shape s : scene.shapes) {
s.init(gl);
}
And finally, my shader files
#version 330
out vec4 fragColour;
//in vec3 outColour;
void main() {
fragColour = vec4(1,0,0,1);
}
#version 330
uniform mat4 projection;
uniform mat4 view;
layout(location=0) in vec3 pos;
//layout(location=2) in vec2 texCoord;
//layout(location=1) in vec3 colours;
out vec2 fragTex;
out vec3 outColour;
vec4 newPos;
void main() {
newPos = vec4(pos,1.0);
gl_Position = projection * view * newPos;
//fragTex = texCoord;
//outColour = colours;
}
I am unsure where I am going wrong, whether it is the shader files or the actual code itself.
I am not experienced in JOGL, I am used to C++ GL. However, there are several problems. First, as Reto Koradi stated, you are using the same value for ebo, vbo and vao. It should be like:
gl.glGenVertexArrays(1, IntBuffer.wrap(tempV));
int vao = tempV[0];
gl.glGenBuffers(2, IntBuffer.wrap(tempB));
int vbo = tempB[0];
int ebo = tempB[1];
Lastly, your draw seems a bit problematic; you seem to skip a step: "bind the array you want to draw", then draw.
gl.glBindVertexArray (vao);
gl.glDrawElements(GL.GL_TRIANGLES, 2, GL.GL_UNSIGNED_INT, 0);
I hope these help.
OK, after many frustrating hours, someone helped me with the solution. The issue wasn't making separate buffers, but rather not clearing them each time, meaning I needed to do:
gl.glGenVertexArrays(1, IntBuffer.wrap(temp));
//create vertice buffers
vao = temp[0];
gl.glGenBuffers(1, IntBuffer.wrap(temp));
vbo = temp[0];
gl.glGenBuffers(1, IntBuffer.wrap(temp));
ebo = temp[0];
which is similar to Hakes' suggestion; however, I didn't need a separate temp array, I just needed to clear the buffer each time. One other thing I needed to do was to also put
gl.glBindVertexArray(vao);
in the init as well as the draw.
(edit) I'm actually not too sure gl.glBindVertexArray(vao) needed to be in the draw method.

Drawing filled polygon with libGDX

I want to draw some (filled) polygons with libGDX. They shouldn't be filled with a graphic/texture. I have only the vertices of the polygon (a closed path) and tried to visualize it with meshes, but at some point I think this is not the best solution.
My code for a rectangle is:
private Mesh mesh;
@Override
public void create() {
if (mesh == null) {
mesh = new Mesh(
true, 4, 0,
new VertexAttribute(Usage.Position, 3, "a_position")
);
mesh.setVertices(new float[] {
-0.5f, -0.5f, 0,
0.5f, -0.5f, 0,
-0.5f, 0.5f, 0,
0.5f, 0.5f, 0
});
}
}
// ...
@Override
public void render() {
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
mesh.render(GL10.GL_TRIANGLE_STRIP, 0, 4);
}
Is there a function or something to draw filled polygons in an easier way?
Since recent updates of LibGDX, @Rus's answer uses deprecated functions. However, I give them credit for the new, updated version below:
PolygonSprite poly;
PolygonSpriteBatch polyBatch = new PolygonSpriteBatch(); // To assign at the beginning
Texture textureSolid;
// Creating the color filling (but textures would work the same way)
Pixmap pix = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pix.setColor(0xDEADBEFF); // DE is red, AD is green and BE is blue.
pix.fill();
textureSolid = new Texture(pix);
PolygonRegion polyReg = new PolygonRegion(new TextureRegion(textureSolid),
new float[] { // Four vertices
0, 0, // Vertex 0 3--2
100, 0, // Vertex 1 | /|
100, 100, // Vertex 2 |/ |
0, 100 // Vertex 3 0--1
}, new short[] {
0, 1, 2, // Two triangles using vertex indices.
0, 2, 3 // Take care of the counter-clockwise direction.
});
poly = new PolygonSprite(polyReg);
poly.setOrigin(a, b);
polyBatch = new PolygonSpriteBatch();
For good triangulation algorithms, if your polygon is not convex, see the almost-linear ear clipping algorithm from Toussaint (1991):
Efficient triangulation of simple polygons, Godfried Toussaint, 1991
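If you don't want to write the triangle indices by hand, libGDX ships an ear clipping implementation; a short sketch, reusing the textureSolid from above:
// Triangulate the polygon outline and feed the result to PolygonRegion.
float[] vertices = new float[] { 0, 0, 100, 0, 100, 100, 0, 100 };
short[] triangles = new EarClippingTriangulator().computeTriangles(vertices).toArray();
PolygonRegion polyReg = new PolygonRegion(new TextureRegion(textureSolid), vertices, triangles);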
Here is a libGDX example which draws a 2D concave polygon.
Define class members for PolygonSprite PolygonSpriteBatch
PolygonSprite poly;
PolygonSpriteBatch polyBatch;
Texture textureSolid;
Create instances; a 1x1 texture with a red pixel is used as a workaround. An array of (x, y) coordinates is used to initialize the polygon.
ctor() {
textureSolid = makeTextureBox(1, 0xFFFF0000, 0, 0);
float a = 100;
float b = 100;
PolygonRegion polyReg = new PolygonRegion(new TextureRegion(textureSolid),
new float[] {
a*0, b*0,
a*0, b*2,
a*3, b*2,
a*3, b*0,
a*2, b*0,
a*2, b*1,
a*1, b*1,
a*1, b*0,
});
poly = new PolygonSprite(polyReg);
poly.setOrigin(a, b);
polyBatch = new PolygonSpriteBatch();
}
Draw and rotate polygon
void draw() {
super.draw();
polyBatch.begin();
poly.draw(polyBatch);
polyBatch.end();
poly.rotate(1.1f);
}
I believe the ShapeRenderer class now has a polygon method for vertex-defined polygons:
ShapeRenderer.polygon()
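A short usage sketch; note that, as far as I know, polygon() is drawn with ShapeType.Line, i.e. it gives you the outline rather than a filled shape:
ShapeRenderer shapeRenderer = new ShapeRenderer();
float[] vertices = new float[] { 0, 0, 100, 0, 100, 100, 0, 100 };

shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
shapeRenderer.setColor(Color.WHITE);
shapeRenderer.polygon(vertices); // draws the closed outline of the polygon
shapeRenderer.end();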
You can use the ShapeRenderer API to draw simple, solid-color shapes with Libgdx.
The code you've given is a reasonable way to draw solid color polygons too. It's much more flexible than ShapeRenderer, but is a good bit more complicated. You'll need to use glColor4f to set the color, or add a Usage.Color attribute to each vertex. See the SubMeshColorTest example for more details on the first approach and the MeshColorTexture example for details on the second approach.
Another option to think about is using sprite textures. If you're only interested in simple solid color objects, you can use very simple 1x1 textures of a single color and let the system stretch that across the sprite. Much of Libgdx and the underlying hardware are really optimized for rendering textures, so you may find it easier to use even if you're not really taking advantage of the texture contents. (You can even use a 1x1 white texture, and then use a SpriteBatch with setColor and draw() to draw different color rectangles easily.)
You can also mix and match the various approaches, too.
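As a sketch of the 1x1 white texture idea mentioned above (sizes and colors are illustrative):
// Build a 1x1 white texture once and tint it per draw call.
Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pixmap.setColor(Color.WHITE);
pixmap.fill();
Texture whiteTexture = new Texture(pixmap);
pixmap.dispose();

SpriteBatch batch = new SpriteBatch();
batch.begin();
batch.setColor(Color.RED);
batch.draw(whiteTexture, 50, 50, 200, 100); // a 200x100 red rectangle at (50, 50)
batch.setColor(Color.WHITE); // reset the tint
batch.end();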
Use a triangulation algorithm and then draw all the triangles as GL_TRIANGLE_STRIP:
http://www.personal.psu.edu/cxc11/AERSP560/DELAUNEY/13_Two_algorithms_Delauney.pdf
Just wanted to share my related solution with you, namely for implementing and drawing a WalkZone with scene2d. I basically had to put together the different suggestions from the other posts:
1) The WalkZone:
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.PolygonRegion;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.math.EarClippingTriangulator;
import com.badlogic.gdx.math.Polygon;
import com.mygdx.game.MyGame;
public class WalkZone extends Polygon {
private PolygonRegion polygonRegion = null;
public WalkZone(float[] vertices) {
super(vertices);
if (MyGame.DEBUG) {
Pixmap pix = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pix.setColor(0x00FF00AA);
pix.fill();
polygonRegion = new PolygonRegion(new TextureRegion(new Texture(pix)),
vertices, new EarClippingTriangulator().computeTriangles(vertices).toArray());
}
}
public PolygonRegion getPolygonRegion() {
return polygonRegion;
}
}
2) The Screen:
You can then add a listener in the desired Stage:
myStage.addListener(new InputListener() {
@Override
public boolean touchDown(InputEvent event, float x, float y, int pointer, int button) {
if (walkZone.contains(x, y)) player.walkTo(x, y);
// or even directly: player.addAction(moveTo ...
return super.touchDown(event, x, y, pointer, button);
}
});
3) The implementation:
The array passed to the WalkZone constructor is a set of x,y,x,y... points. If you put them counter-clockwise, it works (I didn't check the other way, nor do I know exactly how it works); for example, this generates a 100x100 square:
yourScreen.walkZone = new WalkZone(new float[]{0, 0, 100, 0, 100, 100, 0, 100});
In my project it works like a charm, even with very intricate polygons. Hope it helps!!
Most answers suggest triangulation, which is fine, but you can also do it using the stencil buffer. It handles both convex and concave polygons. This may be a better solution if your polygon changes a lot, since otherwise you'd have to do triangulation every frame. Also, this solution properly handles self intersecting polygons, which EarClippingTriangulator does not.
FloatArray vertices = ... // The polygon x,y pairs.
Color color = ... // The color to draw the polygon.
ShapeRenderer shapes = ...
ImmediateModeRenderer renderer = shapes.getRenderer();
Gdx.gl.glClearStencil(0);
Gdx.gl.glClear(GL20.GL_STENCIL_BUFFER_BIT);
Gdx.gl.glEnable(GL20.GL_STENCIL_TEST);
Gdx.gl.glStencilFunc(GL20.GL_NEVER, 0, 1);
Gdx.gl.glStencilOp(GL20.GL_INVERT, GL20.GL_INVERT, GL20.GL_INVERT);
Gdx.gl.glColorMask(false, false, false, false);
renderer.begin(shapes.getProjectionMatrix(), GL20.GL_TRIANGLE_FAN);
renderer.vertex(vertices.get(0), vertices.get(1), 0);
for (int i = 2, n = vertices.size; i < n; i += 2)
renderer.vertex(vertices.get(i), vertices.get(i + 1), 0);
renderer.end();
Gdx.gl.glColorMask(true, true, true, true);
Gdx.gl.glStencilOp(GL20.GL_ZERO, GL20.GL_ZERO, GL20.GL_ZERO);
Gdx.gl.glStencilFunc(GL20.GL_EQUAL, 1, 1);
Gdx.gl.glEnable(GL20.GL_BLEND);
shapes.setColor(color);
shapes.begin(ShapeType.Filled);
shapes.rect(-9999999, -9999999, 9999999 * 2, 9999999 * 2);
shapes.end();
Gdx.gl.glDisable(GL20.GL_STENCIL_TEST);
To use the stencil buffer, you must specify the number of bits for the stencil buffer when your app starts. For example, here is how to do that using the LWJGL2 backend:
LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
config.stencil = 8;
new LwjglApplication(new YourApp(), config);
For more information on this technique, try one of these links:
http://commaexcess.com/articles/7/concave-polygon-triangulation-shortcut
http://glprogramming.com/red/chapter14.html#name13
http://what-when-how.com/opengl-programming-guide/drawing-filled-concave-polygons-using-the-stencil-buffer-opengl-programming/

Render image for 2D game use in OpenGL ES for android

EDIT: Solved it! I made a stupid mistake: I had a textureId I'd forgotten about when it was textureID I should have used.
Okay, I am fully aware that this is a recurring question, and that there are a lot of tutorials and open source code. But I've been trying as best as I can for quite a while here, and my screen is still blank (with whatever color I set using glClearColor()).
So, I would be grateful for some pointers to what I'm doing wrong, or even better, some working code that will render a resource image.
I'll show what I've got so far (by doing some crafty copy-pasting) in my onDrawFrame of the class that implements the Renderer. I've removed some of the jumping between methods, and will simply paste it in the order it is executed.
Feel free to disregard my current code, I'm more than happy to start over, if anyone can give me a working piece of code.
Setup:
bitmap = BitmapFactory.decodeResource(panel.getResources(),
R.drawable.test);
addGameComponent(new MeleeAttackComponent());
// Mapping coordinates for the vertices
float textureCoordinates[] = { 0.0f, 2.0f, //
2.0f, 2.0f, //
0.0f, 0.0f, //
2.0f, 0.0f, //
};
short[] indices = new short[] { 0, 1, 2, 1, 3, 2 };
float[] vertices = new float[] { -0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f };
setIndices(indices);
setVertices(vertices);
setTextureCoordinates(textureCoordinates);
protected void setVertices(float[] vertices) {
// a float is 4 bytes, therefore we multiply the number of
// vertices by 4.
ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
mVerticesBuffer = vbb.asFloatBuffer();
mVerticesBuffer.put(vertices);
mVerticesBuffer.position(0);
}
protected void setIndices(short[] indices) {
// short is 2 bytes, therefore we multiply the number of
// vertices by 2.
ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
ibb.order(ByteOrder.nativeOrder());
mIndicesBuffer = ibb.asShortBuffer();
mIndicesBuffer.put(indices);
mIndicesBuffer.position(0);
mNumOfIndices = indices.length;
}
protected void setTextureCoordinates(float[] textureCoords) {
// float is 4 bytes, therefore we multiply the number of
// vertices by 4.
ByteBuffer byteBuf = ByteBuffer
.allocateDirect(textureCoords.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mTextureBuffer = byteBuf.asFloatBuffer();
mTextureBuffer.put(textureCoords);
mTextureBuffer.position(0);
}
//The onDrawFrame(GL10 gl)
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -4);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
if(shoudlLoadTexture){
loadGLTextures(gl);
shoudlLoadTexture = false;
}
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glEnable(GL10.GL_TEXTURE_2D);
// Enable the texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Point to our buffers
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
}
gl.glTranslatef(posX, posY, 0);
// Point out the where the color buffer is.
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
if (mTextureId != -1 && mTextureBuffer != null) {
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
private void loadGLTextures(GL10 gl) {
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
}
It doesn't crash, no exceptions, simply a blank screen with color. I've printed stuff in there, so I'm pretty sure it is all executed.
I know it's not optimal to just paste code, but at the moment, I just want to be able to do what I was able to do with canvas :)
Thanks a lot
If you're getting the background colour, that means your window is properly set up. OpenGL is connected to that area of the screen.
However, OpenGL clips to the near and far clip planes, ensuring that objects don't cross or intersect the camera (which, both mathematically and logically, doesn't make sense) and that objects too far away don't appear. So if you've not set up modelview and projection correctly, it's probable that all your geometry is being clipped.
Modelview is used to map from world to eye space. Projection maps from eye space to screen space. So a typical application uses the former to position objects within the scene and to position the scene relative to the camera; the latter deals with whether the camera sees with perspective or not, how many world units make how many screen units, etc.
If you look at examples like this one, particularly onSurfaceChanged, you'll see an example of a perspective projection with a camera fixed at the origin.
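A sketch of that kind of setup (a GL10 renderer; the field of view and clip plane values here are illustrative, not taken from the linked example):
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);

    // Projection: perspective camera fixed at the origin, near plane at 0.1.
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluPerspective(gl, 45.0f, (float) width / height, 0.1f, 100.0f);

    // Modelview: start from identity; glTranslatef calls in onDrawFrame move the geometry.
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}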
Because the camera is at (0, 0, 0), leaving your geometry on z = 0 as your code does will cause it to be clipped. In that example code they've set the near clip plane to be at z = 0.1, so in your existing code you could change:
gl.glTranslatef(posX, posY, 0);
To:
gl.glTranslatef(posX, posY, -1.0f);
To push your geometry back sufficiently far to appear on screen.
