Using glArrayElement with LWJGL - java

I'm following along with the OpenGL tutorial found here. I'm on chapter 2 right now and it's going over the advantages of using glArrayElement to render objects. Currently, my code is as follows:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
double vertices[] = {100, 200, 0, 200, 100, 0, 100, 100, 0};
double colors[] = {1, .5, .8, .3, .5, .8, .3, .5, .8};
DoubleBuffer vertexBuffer = BufferUtils.createDoubleBuffer(9).put(vertices);
DoubleBuffer colorBuffer = BufferUtils.createDoubleBuffer(9).put(colors);
glVertexPointer(3, 0, vertexBuffer);
glColorPointer(3, 0, colorBuffer);
while (!Display.isCloseRequested()) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    glArrayElement(0);
    glArrayElement(1);
    glArrayElement(2);
    glVertex3d(300, 200, 0);
    glVertex3d(400, 100, 0);
    glVertex3d(300, 100, 0);
    glEnd();
    //Display.sync(60);
    Display.update();
}
The second triangle, defined explicitly by calls to glVertex3d, renders fine. But the first triangle does not render at all. Am I making a simple mistake?

While scouring for more sample code, I came across a snippet that said you had to "flip each buffer." Adding
vertexBuffer.flip();
colorBuffer.flip();
right before the while loop solved my problem!
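For context: put() advances each buffer's position, so without flipping, the pointer calls see a range that starts at the end of the written data. flip() sets the limit to the current position and resets the position to 0, making the data readable from the start. The corrected setup (same data as above) is:
DoubleBuffer vertexBuffer = BufferUtils.createDoubleBuffer(9).put(vertices);
DoubleBuffer colorBuffer = BufferUtils.createDoubleBuffer(9).put(colors);
// put() left the position at 9; flip() makes [0, 9) readable from the start.
vertexBuffer.flip();
colorBuffer.flip();
glVertexPointer(3, 0, vertexBuffer);
glColorPointer(3, 0, colorBuffer);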

Related

Display ContextAttribs and GL11 Projection Matrix Mode - Function is not supported

I am getting an error when I try to set the matrix mode of my LWJGL program to GL_PROJECTION.
glMatrixMode(GL_PROJECTION);
The error is:
Exception in thread "main" java.lang.IllegalStateException: Function is not supported
at org.lwjgl.BufferChecks.checkFunctionAddress(BufferChecks.java:58)
at org.lwjgl.opengl.GL11.glMatrixMode(GL11.java:2075)
....
I have tracked the error down to where I create my Display. When I remove my ContextAttribs, the code no longer throws the error and renders (with the code that needs the ContextAttribs commented out).
This is my code:
display code:
Display.setDisplayMode(new DisplayMode(WIDTH, HEIGHT));
ContextAttribs attribs = new ContextAttribs(3, 2).withProfileCore(true).withForwardCompatible(true);
Display.create(new PixelFormat().withDepthBits(24).withSamples(4), attribs);
Display.setTitle(TITLE);
Display.setInitialBackground(1, 1, 1);
GL11.glEnable(GL13.GL_MULTISAMPLE);
GL11.glViewport(0, 0, WIDTH, HEIGHT);
initialization method:
glMatrixMode(GL_PROJECTION);
glOrtho(0, width, height, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glClearColor(0, 1, 0, 0);
textureID = loadTexture("res/hud.png");
glEnable(GL_TEXTURE_2D);
rendering method:
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glPushMatrix();
glTranslatef(0, 0, 0);
glBindTexture(GL_TEXTURE_2D, textureID);
glBegin(GL_QUADS);
{
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    glTexCoord2f(1, 0);
    glVertex2f(width, 0);
    glTexCoord2f(1, 1);
    glVertex2f(width, height);
    glTexCoord2f(0, 1);
    glVertex2f(0, height);
}
glEnd();
glPopMatrix();
Does anyone know how I could get this code working with the ContextAttribs?
Thanks in advance!
Edit 1: I have all the functions and variables in GL11 statically imported.
First of all, drawing with glBegin/glEnd sequences has been deprecated for more than 10 years. See Vertex Specification for a state-of-the-art way of rendering.
With the line
ContextAttribs attribs = new ContextAttribs(3, 2).withProfileCore(true).withForwardCompatible(true);
an OpenGL core profile context with the forward-compatibility bit set is created.
In such a context, all the deprecated functionality, such as glBegin/glEnd sequences, the matrix stack (glMatrixMode), the standard light model and so on, is removed. This causes the error.
See also Fixed Function Pipeline and OpenGL Context
Skip the setting of the forward-compatibility bit (.withForwardCompatible(true)) to solve the issue. Strictly speaking, a core profile context drops these functions as well, so the portable fix is to request a compatibility profile (or create the Display without ContextAttribs) whenever you need the fixed-function API.
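A minimal sketch of the display setup along those lines, keeping the poster's pixel format (withProfileCompatibility is the ContextAttribs option in LWJGL 2 for requesting a compatibility profile):
Display.setDisplayMode(new DisplayMode(WIDTH, HEIGHT));
// Request a compatibility profile so the fixed-function entry points
// (glMatrixMode, glBegin/glEnd, ...) remain available.
ContextAttribs attribs = new ContextAttribs(3, 2).withProfileCompatibility(true);
Display.create(new PixelFormat().withDepthBits(24).withSamples(4), attribs);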

How to set a color filter for XWalkView?

I am currently working on my web browser Cornowser, which uses the Crosswalk engine, and I ran into a problem.
I want to implement color modes, as they are in UltimateBrowserProject.
But the color filters don't apply.
I tried setting the layer type and passing the Paint with the color filter.
Here is the source code:
// Handle color modes
public void drawWithColorMode() {
    Logging.logd("Applying web render color mode...");
    RenderColorMode.ColorMode cm = CornBrowser.getBrowserStorage().getColorMode();
    Paint paint = new Paint();
    final float[] negativeColor = {
            -1.0f, 0, 0, 0, 255, // Red
            0, -1.0f, 0, 0, 255, // Green
            0, 0, -1.0f, 0, 255, // Blue
            0, 0, 0, 1.0f, 0     // Alpha
    };
    final float[] darkColor = {
            1f, 0, 0, 0, -255,
            0, 1f, 0, 0, -255,
            0, 0, 1f, 0, -255,
            0, 0, 0, 1f, 0
    };
    final float[] invertColor = {
            -1f, 0, 0, 0, 0,
            0, -1f, 0, 0, 0,
            0, 0, -1f, 0, 0,
            0, 0, 0, 1f, 0
    };
    Logging.logd("Found color mode: " + cm.mode);
    switch (cm.mode) {
        case RenderColorMode.ColorMode.NORMAL:
            Logging.logd("Applying normal color mode");
            paint.setColorFilter(null);
            break;
        case RenderColorMode.ColorMode.DARK:
            Logging.logd("Applying dark mode");
            paint.setColorFilter(new ColorMatrixColorFilter(darkColor));
            break;
        case RenderColorMode.ColorMode.NEGATIVE:
            Logging.logd("Applying negative mode");
            paint.setColorFilter(new ColorMatrixColorFilter(negativeColor));
            break;
        case RenderColorMode.ColorMode.INVERT:
            Logging.logd("Applying inverted mode");
            paint.setColorFilter(new ColorMatrixColorFilter(invertColor));
            break;
        case RenderColorMode.ColorMode.GREYSCALE:
            Logging.logd("Applying greyscale");
            ColorMatrix m = new ColorMatrix();
            m.setSaturation(0);
            paint.setColorFilter(new ColorMatrixColorFilter(m));
            break;
        default:
            Logging.logd("Warning: Unknown color mode " + cm.mode + ".");
            break;
    }
    Logging.logd("Setting layer type...");
    setLayerType(LAYER_TYPE_HARDWARE, paint);
}
I also tried overriding draw(Canvas), but the result is the same.
Does anyone know how to set a color filter for XWalkView?
Thanks in advance!
UPDATE:
It seems that SurfaceView doesn't support color filters.
How can I do it anyway?
UPDATE 2:
This question doesn't seem to be getting much attention. I edited the source code above to show how it should work, but it still doesn't work; please check it. Logcat gives me the following output:
D/Cornowser: Applying web render color mode...
D/Cornowser: Found color mode: 2
D/Cornowser: Applying negative mode
D/Cornowser: Setting layer type...
Final update:
I actually got it working by using JavaScript.
If you want to know how I solved it, look right here.
Override the draw(Canvas) method.
Write the above source code inside it.
At the end of the draw() method, call invalidate(). (A sketch of this follows below.)
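A minimal sketch of those steps, assuming the override lives in the XWalkView subclass that owns drawWithColorMode() (names follow the code above):
@Override
public void draw(Canvas canvas) {
    // Re-apply the color-mode Paint as a hardware layer before drawing.
    drawWithColorMode();
    super.draw(canvas);
    // Request another pass so the filter stays applied, per the steps above.
    invalidate();
}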

How Do I UnProject Screen Coordinates in OpenGL ES 2.0 (Android)?

I have been working for the last few days to "unproject" our app's touch events into our Renderer's coordinate space. In pursuit of this goal, I have tried various custom unproject methods and alternative techniques (including trying to convert the coordinates using the scaling factor & transform values) to no avail. I have gotten close (my touches are slightly off), but my attempts using GLU.gluUnProject have been way off, usually placing the coordinates around the center of the view. The "closest" results were produced by Xandy's method, but even these are usually off. My primary questions are: how do I set up my viewport matrix, and am I passing GLU.gluUnProject the correct parameters? My math is based on the answer to this question. Here are the relevant excerpts of my code (showing how I set up my matrices and my current attempt):
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // Set the OpenGL viewport to fill the entire surface.
    glViewport(0, 0, width, height);
    ...
    float ratio = (float) width / height;
    Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 gl) {
    glClear(GL_COLOR_BUFFER_BIT);
    Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
    Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
    Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
    Matrix.multiplyMM(mModelMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
    ...
}
public PointF convertScreenCoords(float x, float y) {
    int[] viewPortMatrix = new int[]{0, 0, (int) mViewportWidth, (int) mViewportHeight};
    float[] outputNear = new float[4];
    float[] outputFar = new float[4];
    y = mViewportHeight - y;
    int successNear = GLU.gluUnProject(x, y, 0, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
    int successFar = GLU.gluUnProject(x, y, 1, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputFar, 0);
    if (successNear == GL_FALSE || successFar == GL_FALSE) {
        throw new RuntimeException("Cannot invert matrices!");
    }
    convert4DCoords(outputNear);
    convert4DCoords(outputFar);
    float distance = outputNear[2] / (outputFar[2] - outputNear[2]);
    float normalizedX = (outputNear[0] + (outputFar[0] - outputNear[0]) * distance);
    float normalizedY = (outputNear[1] + (outputFar[1] - outputNear[1]) * distance);
    return new PointF(normalizedX, normalizedY);
}
convert4DCoords is simply a helper function that divides each coordinate (x, y, z) of an array by w. mOffset and mScaleFactor are the translation and scaling parameters (given by a ScaleGestureDetector within our GLSurfaceView)
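For reference, a minimal version of that helper, matching the description above, might be:
// Perspective divide: bring the homogeneous output of gluUnProject
// back to 3D by dividing x, y and z by w.
private static void convert4DCoords(float[] coords) {
    coords[0] /= coords[3];
    coords[1] /= coords[3];
    coords[2] /= coords[3];
}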
Based on everything I have read, this should work; however, it is consistently wrong, and I am not sure what else to try. Any help/feedback would be greatly appreciated!
I didn't look through all of your math, but some of your matrix operations and specifications look wrong to me. That is most likely at least part of your problem.
Looking at what you call mViewMatrix first:
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
There's nothing necessarily wrong with this. But most people would probably only call the first part, the result of setLookAtM(), a View matrix; the rest then looks more like a Model matrix. Since rendering normally only uses the product of the View and Model matrices, this is just terminology. It seems fair to say that what you store in mViewMatrix is your ModelView matrix.
Then the naming gets stranger here:
Matrix.multiplyMM(mModelMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
This calculates the product of Projection and View matrix. So this would be a ViewProjection matrix. Storing this in a variable called mModelMatrix is confusing, and I think you might have set yourself a trap with this naming.
This leads us to the gluUnProject() call:
GLU.gluUnProject(x, y, 0, mModelMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
Based on the above, what you're passing for the two matrix arguments (the viewport is not really a matrix) are the ViewProjection and the Projection matrix, whereas, per the method definition, you should be passing the ModelView and the Projection matrix.
Since we established that what you call mViewMatrix corresponds to a ModelView matrix, this should start working much better if you change just that argument:
GLU.gluUnProject(x, y, 0, mViewMatrix, 0, mProjectionMatrix, 0, viewPortMatrix, 0, outputNear, 0);
Then you can get rid of the mModelMatrix that is not a Model matrix at all.
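To keep the names from setting another trap, the combined matrix could be stored under an accurate name; mViewProjectionMatrix below is a hypothetical rename of the poster's field, shown only for clarity:
// mViewMatrix already holds View * Model after the scale/translate calls,
// so it is effectively a ModelView matrix.
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, -4.5f, 0f, 0f, 0f, 0f, 1f, 0f);
Matrix.scaleM(mViewMatrix, 0, mScaleFactor, mScaleFactor, 1.0f);
Matrix.translateM(mViewMatrix, 0, -mOffset.x, mOffset.y, 0.0f);
// The Projection * ModelView product is an MVP matrix; keep it for
// rendering, but do not pass it to gluUnProject().
Matrix.multiplyMM(mViewProjectionMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);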

libgdx, render shapes using Mesh and ShaderProgram

I just started using libgdx and want to render some 2D shapes using a Mesh and a custom ShaderProgram.
I'm experienced in OpenGL, but I don't see my mistake here, maybe someone can help me.
The shader is very basic, vertex:
attribute vec2 v;
uniform mat4 o;
void main(){
gl_Position = vec4(o*vec3(v, 1.0), 1.0);
}
fragment:
#ifdef GL_ES
precision mediumhp float;
#endif
void main(){
gl_FragColor = vec4(1, 1, 1, 1);
}
The mesh (quad 100x100px):
Mesh mesh = new Mesh(true, 4, 6, new VertexAttribute(Usage.Position, 2, "v"));
mesh.setVertices(new float[]{0, 0,
100, 0,
0, 100,
100, 100});
mesh.setIndices(new short[]{0, 1, 3, 0, 3, 2});
The render stage:
Matrix4 o = new Matrix4(); // also tried OrthographicCamera and SpriteBatch.getProjectionMatrix() here...
o.setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
shader.begin();
shader.setUniformMatrix(shader.getUniformLocation("o"), o);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
And that's it. I get no output at all, just a black screen.
Of course I clear the screen and everything; SpriteBatch (which I also use for other purposes) works fine. But I don't see how this is done in libgdx or what's wrong here...
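For what it's worth, two things in the snippets above would keep the shaders from compiling at all: mediumhp is not a valid precision qualifier (mediump is), and GLSL does not allow multiplying a mat4 by a vec3. A sketch of the vertex shader with those assumed fixes (checking shader.isCompiled() and shader.getLog() after construction would confirm whether this is the cause):
attribute vec2 v;
uniform mat4 o;
void main() {
    // A mat4 must multiply a vec4; extend the 2D position with z = 0, w = 1.
    gl_Position = o * vec4(v, 0.0, 1.0);
}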

For loop for rendering OpenGL VBO

I have managed to get a cube rendered in OpenGL using a VBO. My next goal is actually creating a for loop to create multiple cubes. I'm stuck on this part, though: do I put this code:
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, vertexBufferID);
GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);
GL11.glDrawArrays(GL11.GL_QUADS, 0, 24);
GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
into a for loop? Wouldn't I have to use some sort of glPopMatrix command along with a translate function? I barely understand how to create one cube in a VBO, so sorry if it's obvious what's wrong.
You can do it the following way:
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, vertexBufferID);
GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);
for (int i = 0; i < cubeCount; i++) {
    GL11.glPushMatrix();
    // do translation/rotation for cube no. i
    GL11.glDrawArrays(GL11.GL_QUADS, 0, 24);
    GL11.glPopMatrix();
}
GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
Please note that the glPushMatrix/glPopMatrix approach is deprecated in newer OpenGL versions. For you it should work, because you are using GL11.
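For illustration, the placeholder comment in the loop could become a concrete transform; the row layout and 2-unit spacing here are made up:
// Hypothetical layout: place the cubes in a row, 2 units apart.
GL11.glTranslatef(i * 2.0f, 0.0f, 0.0f);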
