How would I auto texture map a sphere in OpenGL? - java

Which parameters, especially for s and t, would I use to texture map a sphere? I've tried various options, but there's always a small portion that's distorted no matter what values I choose for float[] s and t. I can do planes and cylinders, but I'm not sure about spheres. Any help would be appreciated.
gl.glTexGeni(GL2.GL_S, GL2.GL_TEXTURE_GEN_MODE, GL2.GL_OBJECT_LINEAR);
gl.glTexGeni(GL2.GL_T, GL2.GL_TEXTURE_GEN_MODE, GL2.GL_OBJECT_LINEAR);
float[] s = {1f, 0f, 0f, 0f};
gl.glTexGenfv(GL2.GL_S, GL2.GL_OBJECT_PLANE, s, 0);
float[] t = {0f, 1f, 0f, 0f};
gl.glTexGenfv(GL2.GL_T, GL2.GL_OBJECT_PLANE, t, 0);

You cannot texture map a sphere with a single rectangular texture without creating serious distortion somewhere; it's mathematically impossible. That said, you should not use the glTexGen functionality for this either, because a) it has been deprecated, and b) it only creates linear planar mappings, whereas texturing a sphere requires curvilinear coordinates. Use a vertex shader to generate the texture coordinates from the vertex position.
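For reference, the curvilinear mapping such a vertex shader would compute is just longitude/latitude (equirectangular). A minimal Java sketch of the math; the class and method names are illustrative, and in practice you would do this per vertex in GLSL:

```java
// Illustrative helper: equirectangular texture coordinates for a point
// on a sphere centered at the origin. Not from the question's code.
public class SphereUV {
    // Returns {s, t} in [0, 1] for a vertex (x, y, z) on the sphere.
    public static float[] map(float x, float y, float z) {
        float r = (float) Math.sqrt(x * x + y * y + z * z);
        // Longitude around the y axis maps to s, latitude to t.
        float s = (float) (Math.atan2(z, x) / (2.0 * Math.PI) + 0.5);
        float t = (float) (Math.asin(y / r) / Math.PI + 0.5);
        return new float[] { s, t };
    }
}
```

Note the unavoidable distortion at the poles: every s value collapses to a single point there, which is exactly the "little portion that's distorted" the question describes.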

Related

Rendering and Cropping/Stretching an Image Using Slick/OpenGL (using getSubImage)

I'm trying to recreate a shadow effect for some 2D sprites in a project using Slick. To do this, I recolour a copy of the sprite and stretch it using Slick's OpenGL bindings in this method:
public static void getStretched(Shape shape, Image image) {
    TextureImpl.bindNone();
    image.getTexture().bind();
    SGL GL = Renderer.get();
    GL.glEnable(SGL.GL_TEXTURE_2D);
    GL.glBegin(SGL.GL_QUADS);
    // top left
    GL.glTexCoord2f(0f, 0f);
    GL.glVertex2f(shape.getPoints()[0], shape.getPoints()[1]);
    // top right
    GL.glTexCoord2f(0.5f, 0f);
    GL.glVertex2f(shape.getPoints()[2], shape.getPoints()[3]);
    // bottom right
    GL.glTexCoord2f(1f, 1f);
    GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
    // bottom left
    GL.glTexCoord2f(0.5f, 1f);
    GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);
    GL.glEnd();
    GL.glDisable(SGL.GL_TEXTURE_2D);
    TextureImpl.bindNone();
}
This gives almost the desired effect, except that the image is cropped a bit.
This becomes more extreme for higher distortions
I'm new to using OpenGL, so some help regarding how to fix this would be great.
Furthermore, if I feed the method an image that was obtained using getSubImage, OpenGL renders the original image rather than the sub-image.
I'm unsure why this happens, as the sprite itself is taken from a spritesheet using getSubImage and renders with no problem.
Help would be greatly appreciated!
I'm recolouring a copy of the sprite and stretching it
The issue is that you stretch the texture coordinates, but the region covered by the quad stays the same. If the shadow exceeds the region covered by the quad primitive, it is cropped.
You have to "stretch" the vertex coordinates rather than the texture coordinates. Define a rhombic geometry for the shadow and wrap the texture on it:
float distortionX = ...;
GL.glEnable(SGL.GL_TEXTURE_2D);
GL.glBegin(SGL.GL_QUADS);
// top left
GL.glTexCoord2f(0f, 0f);
GL.glVertex2f(shape.getPoints()[0] + distortionX, shape.getPoints()[1]);
// top right
GL.glTexCoord2f(1f, 0f);
GL.glVertex2f(shape.getPoints()[2] + distortionX, shape.getPoints()[3]);
// bottom right
GL.glTexCoord2f(1f, 1f);
GL.glVertex2f(shape.getPoints()[4], shape.getPoints()[5]);
// bottom left
GL.glTexCoord2f(0f, 1f);
GL.glVertex2f(shape.getPoints()[6], shape.getPoints()[7]);
GL.glEnd();
[...] if I feed an image into the method that was obtained using getSubImage, [...] the sprite itself is taken from a spritesheet [...]
The sprite covers just a small rectangular region of the entire texture, and that region is defined by the texture coordinates. You have to use the same texture coordinates as when you draw the sprite itself.
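The arithmetic for those sub-image texture coordinates is just a division by the sheet size. A hypothetical sketch (the parameter names are illustrative; Slick's Image exposes the same values via getTextureOffsetX(), getTextureOffsetY(), getTextureWidth() and getTextureHeight()):

```java
// Illustrative helper: texture-space corners of a sub-image inside a
// sheet texture, given pixel coordinates. Names are made up for the sketch.
public class SubImageUV {
    // Returns {u0, v0, u1, v1}: the sub-image's corners in texture space.
    public static float[] region(int sheetW, int sheetH,
                                 int subX, int subY, int subW, int subH) {
        float u0 = (float) subX / sheetW;
        float v0 = (float) subY / sheetH;
        float u1 = (float) (subX + subW) / sheetW;
        float v1 = (float) (subY + subH) / sheetH;
        return new float[] { u0, v0, u1, v1 };
    }
}
```

Using 0f and 1f in glTexCoord2f, as the posted method does, always selects the whole sheet, which is why the original image shows up instead of the sub-image.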

In OpenGL, how do I make it so that my skybox does not cover any of my entities?

I am working on an OpenGL game in Java with LWJGL (ThinMatrix's tutorials at the moment) and I just added my skybox. As you can see from the picture, however, it is clipping through the trees and covering everything behind a certain point.
Here is my rendering code for the skybox:
public void render(Camera camera, float r, float g, float b) {
    shader.start();
    shader.loadViewMatrix(camera);
    shader.loadFogColor(r, g, b);
    GL30.glBindVertexArray(cube.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    bindTextures();
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, cube.getVertexCount());
    GL30.glBindVertexArray(0);
    shader.stop();
}

private void bindTextures() {
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, texture);
    GL13.glActiveTexture(GL13.GL_TEXTURE1);
    GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, nightTexture);
    shader.loadBlendFactor(getBlendFactor());
}
Also, if it is needed, here is the code for my master renderer:
public void render(List<Light> lights, Camera camera) {
    prepare();
    shader.start();
    shader.loadSkyColor(RED, GREEN, BLUE);
    shader.loadLights(lights);
    shader.loadViewMatrix(camera);
    renderer.render(entities);
    shader.stop();
    terrainShader.start();
    terrainShader.loadSkyColor(RED, GREEN, BLUE);
    terrainShader.loadLight(lights);
    terrainShader.loadViewMatrix(camera);
    terrainRenderer.render(terrains);
    terrainShader.stop();
    skyboxRenderer.render(camera, RED, GREEN, BLUE);
    terrains.clear();
    entities.clear();
}
There are two things you can do:
If you draw your skybox first, you can disable the depth test with glDisable(GL_DEPTH_TEST), or disable depth writes with glDepthMask(false). This prevents the skybox from writing depth values, so it can never end up in front of anything drawn later.
If you draw your skybox last, you can make it literally infinitely big by giving its vertices a w-coordinate of 0. A vertex (x, y, z, 0) is a point infinitely far away in the direction of the vector (x, y, z). To prevent clipping, you have to enable depth clamping with glEnable(GL_DEPTH_CLAMP); this stops OpenGL from clipping away the skybox faces, and you can be sure the skybox is always at the maximum distance and never hides anything drawn earlier.
The advantage of the second method is the depth test: because depth values have already been written for the scene, the OpenGL pipeline can skip the shading of skybox pixels that are already covered by the scene. But the fragment shader for a skybox is usually trivial, so it shouldn't make much of a difference.
I am not familiar with LWJGL; are you allowed to write shaders? In plain OpenGL, you don't have to worry about the size of the skybox cube; it can be {1.0, 1.0, 1.0} if you like. What you need is to first place your camera at {0.0, 0.0, 0.0}, and then make the skybox fail the depth test against everything in your scene, which you can achieve by forcing the skybox's z value in normalized device coordinates to be 1.0.
Do this in your vertex shader:
gl_Position = (mvp_mat * vec4(xyz, 1.0)).xyww;
After the perspective divide by w, z will be w / w, or 1.0.
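A quick numeric check of that swizzle, with plain Java standing in for the shader arithmetic (the helper name is made up, and the input values are arbitrary clip-space coordinates):

```java
// Illustrative helper: applies the gl_Position.xyww swizzle and the
// perspective divide. Whatever z was, NDC z comes out as w / w == 1.0.
public class Xyww {
    // Returns {x_ndc, y_ndc, z_ndc} for a clip-space vertex (x, y, z, w).
    public static float[] ndc(float x, float y, float z, float w) {
        float[] clip = { x, y, w, w }; // the .xyww swizzle discards z
        return new float[] { clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3] };
    }
}
```

Because NDC z is exactly 1.0, the far plane, the skybox fails the depth test wherever scene geometry has already been drawn (with GL_LESS), and passes only where nothing else is.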
You might want to check out How can I increase distance (zfar/gluPerspective) where openGL stops drawing objects?
The problem in that instance is that the skybox itself was too small and intersecting with the geometry.
I also see that you're rendering your terrain first, and then your skybox. I would try flipping the order there; draw the skybox first then the terrain.
First, remove the skybox and render the scene again to check whether it really is the skybox that clips the trees.
If it is the skybox, simply scale the skybox up so that it contains all the objects in the terrain.
If not, it is likely a problem with the camera (as Hanston said). You need to set the far clipping plane at least behind the skybox; that is, it should be larger than the diameter of your skybox.
If you want to scale the skybox, or any other object, use the transformation matrix. The game engine uses a 4x4 matrix to control the size, location and rotation of the model. You can see an example in TerrainRenderer.java, in the function loadModelMatrix: it creates a transform matrix and uploads it to the shader. You should do the same thing, but change the scale parameter to what you want.
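For intuition, a pure scale transform is just a diagonal 4x4 matrix. A minimal sketch of what such a model matrix does, using plain arrays in column-major layout for illustration rather than any LWJGL or engine types:

```java
// Illustrative sketch: a column-major 4x4 scale matrix and its effect
// on a point. A model matrix built by loadModelMatrix-style code would
// combine this with rotation and translation.
public class ScaleMatrix {
    public static float[] scale(float sx, float sy, float sz) {
        float[] m = new float[16];
        m[0] = sx; m[5] = sy; m[10] = sz; m[15] = 1f;
        return m;
    }
    // Transforms point (x, y, z, 1) by the matrix; returns {x', y', z'}.
    public static float[] apply(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8] * z + m[12],
            m[1] * x + m[5] * y + m[9] * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14]
        };
    }
}
```

Scaling a unit skybox cube by, say, 500 on each axis pushes every vertex out by that factor, which is all "make the skybox contain the scene" amounts to.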

Feed Shader Program with World Units instead of (0f,1f) values

I'm rendering a simple rectangle mesh using libgdx, along with other geometric elements that are similarly simple. These are going to interact with the sprites I have set up in my game. The sprites' positions and other properties are set up in world units, and before each sprite draw session I set up the camera like this:
camera.update();
batch.setProjectionMatrix(camera.combined);
It all works well, but I need to draw the meshes in world units too. How can I feed the shader program world coordinates (12.5f, 30f, etc., based on my game world data) instead of values in the (0f, 1f) range? I want to draw several textured meshes, so I need coordinates that relate to the other elements in the game.
Here is how I draw a simple rectangle mesh :
mesh = new Mesh(true, 4, 6,
        new VertexAttribute(Usage.Position, 3, "a_position"),
        new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoords"));
mesh.setVertices(new float[] {
        -1.0f, -1.0f, 0, 0, 1,
         0.0f, -1.0f, 0, 1, 1,
         0.0f,  0.0f, 0, 1, 0,
        -1.0f,  0.0f, 0, 0, 0 });
mesh.setIndices(new short[] { 0, 1, 2, 2, 3, 0 });
Gdx.graphics.getGL20().glEnable(GL20.GL_TEXTURE_2D);
Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE0);
createShader();
shader.begin();
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Is there any way I can feed world units to the mesh vertices array?
You can transform the vertices in the vertex shader. This allows you to project world coordinates onto the -1 to 1 range required for rendering. It is typically done by multiplying the position vertex attribute with a (projection) matrix. Have a look at the default SpriteBatch shader for an example of how to implement this.
You can use the camera.combined matrix to multiply these vertices in the vertex shader, just like you did when specifying the projection matrix for the SpriteBatch. You'll have to assign this matrix to the uniform you've used in your vertex shader. An example of how to do this can also be found in the default SpriteBatch implementation.
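For intuition, that matrix multiply is just a linear map from world units into NDC. A one-dimensional illustration of what an orthographic camera.combined does per axis (hypothetical helper, not libgdx API):

```java
// Illustrative helper: maps a world-space coordinate into the -1..1
// NDC range for an axis spanning [left, right] in world units. This is
// the per-axis effect of an orthographic projection matrix.
public class Ortho {
    public static float toNdc(float x, float left, float right) {
        return 2f * (x - left) / (right - left) - 1f;
    }
}
```

So with a camera viewing world x in [0, 25], a mesh vertex at 12.5f ends up at NDC 0 in the middle of the screen, which is exactly why you can write world units into the vertex array once the shader applies the matrix.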
However, you might want to reconsider your approach. Since you're using a SpriteBatch, you can profit from a performance gain by using the SpriteBatch instead of rendering manually. This also simplifies the rendering, because you don't have to mess with the shader and matrices yourself. SpriteBatch contains a method (javadoc) which allows you to specify a manually created mesh (or vertices, actually). Each vertex is expected to be 5 floats (x, y, u, v, color) in size, and a multiple of four vertices (it doesn't have to be a rectangular shape, though) must be provided (you can use Color.WHITE.toFloatBits() for the color).
But since you're trying to render a simple rectangle, you might as well use one of the more convenient methods that render a rectangle without your having to create a mesh at all (javadocs). Or, even easier, use it as it is designed by creating a Sprite for your rectangle (wiki page).
Now, if you're still certain that you want to create a mesh and shader manually, then I'd suggest learning it from e.g. a tutorial instead of just diving in. E.g. these tutorials might help you get started. The wiki also contains an article describing how to do this, including transforming vertices in the vertex shader.

Render ellipse using libgdx

I am attempting to render an ellipse using ShapeRenderer, and have come up with the following partial solution:
void drawEllipse(float x, float y, float width, float height) {
    float r = (width / 2);
    ShapeRenderer renderer = new ShapeRenderer();
    renderer.setProjectionMatrix(/* camera matrix */);
    renderer.begin(ShapeType.Circle);
    renderer.scale(1f, (height / width), 1f);
    renderer.circle(x + r, y, r);
    renderer.identity();
    renderer.end();
}
This draws an ellipse at the specified coordinates with the correct width and height; however, the scale transformation also translates the circle in the viewport, and I have not been able to work out the mathematics behind that translation. I am using an orthographic projection with y-up where the coordinates map to a pixel on the screen. I am not very familiar with OpenGL.
How can I draw an ellipse using libgdx, and have it draw the ellipse at the exact coordinates I specify? Ideally, that would mean that the origin of the ellipse is located in the top-left corner, if the ellipse was contained in a rectangle.
The new libgdx ShapeRenderer API (current nightlies, in whatever release comes after v0.9.8) contains an ellipse drawing method, so you can ignore the rest of this answer. The ShapeRenderer API has changed in other ways too, though (e.g., the ShapeType is just Filled, Line, or Point now).
For folks stuck with the older API, you should be able to work around the distortion by making sure the scaling happens around the origin. This is standard OpenGL practice (so it's a bit obtuse, but they're following OpenGL's lead). See "Opengl order of matrix transformations" and "OpenGL: scale then translate? and how?". Even better (again standard OpenGL practice), you list the operations in the reverse of the order you want them to happen: to make a circle, distort it into an ellipse, and then move it to a specific destination, you actually write code like:
renderer.begin(ShapeType.Circle);
renderer.translate(x, y, 0);
renderer.scale(1f, (height/width), 1f);
renderer.circle(0, 0, r);
renderer.end();
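A quick numeric check of why this ordering works, under the usual convention that the transform listed last acts on the vertex first (the combined matrix is T * S). This is an illustrative helper, not part of the ShapeRenderer API:

```java
// Illustrative helper: applies y-scale k, then translation (tx, ty),
// to a point (px, py). Listing translate(...) before scale(...) in
// ShapeRenderer builds exactly this T * S combination.
public class XformOrder {
    public static float[] apply(float tx, float ty, float k, float px, float py) {
        return new float[] { px + tx, py * k + ty };
    }
}
```

Drawing the circle at the origin, apply(x, y, k, 0, 0) lands its center exactly at (x, y). The original code, which scaled without translating, effectively computed apply(0, 0, k, x + r, y) = (x + r, y * k), which is precisely the unexpected translation of the center the question describes.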

OpenGL "zoom": object clipping and brightness

I have a simple OpenGL app that displays arbitrary 3D models, and I'd like to implement zoom. What I have now uses glScale and works to some degree. However, I'm having two issues.
Any amount of zooming in (+) quickly reaches the point where the edges of the object are inside the near clipping plane. Right now my zNear is something like 0.1, so it makes sense that increasing the scale of the object causes clipping. I am wondering whether there are other approaches that achieve a better effect.
As I zoom in, the object gets dimmer; zoom out and it gets brighter. I have very simple lighting positioned at (0, 0, 100), using only diffuse.
gl.glEnable(GL10.GL_LIGHTING);
gl.glEnable(GL10.GL_LIGHT0);
gl.glEnable(GL10.GL_COLOR_MATERIAL);

float[] lights;
lights = new float[] { 0f, 0f, 0f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_AMBIENT, lights, 0);
lights = new float[] { 1f, 1f, 1f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, lights, 0);
lights = new float[] { 0f, 0f, 0f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_SPECULAR, lights, 0);

float matAmbient[] = { 0f, 0f, 0f, 1f };
float matDiffuse[] = { 1f, 1f, 1f, 1f };
float matSpecular[] = { 0f, 0f, 0f, 1f };
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, matAmbient, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, matDiffuse, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, matSpecular, 0);

float lightPosition[] = { mesh.mid.vertex[X], mesh.mid.vertex[Y], 100f, 1f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPosition, 0);
I do not have attenuation settings, which I believe need to be enabled for the light to be affected by distance. Regardless, I'm not changing the distance of the object, just scaling it; the positions of the faces change, but not significantly. Anyway, I'd have thought zooming in would make it brighter, not dimmer.
This happens to be using opengl-es 1.0 on the Android platform.
Scaling will change the way your object is lit, as the normals are also scaled (which, as you pointed out in your own answer, can be corrected with a call to glEnable(GL_NORMALIZE)). Note that, depending on when they are specified, the lights themselves may not get the equivalent transformation:
http://www.opengl.org/resources/faq/technical/lights.htm
A light's position is transformed by the current ModelView matrix at the time the position is specified with a call to glLight*().
Depending on the kind of zoom effect you want, you could achieve your zoom in different ways.
If you literally want to 'zoom' the way a zoom lens on a camera does, you can change the field-of-view parameter passed to gluPerspective. This gives the flattened or exaggerated perspective you get with a real camera.
What is more commonly desired by typical applications, is to change the position of the camera in relation to the object. The simplest way to do this is with gluLookAt.
Beware of the difference between projection and modelview matrices; changing perspective should be done to projection, while positioning the camera should effect the modelview. See http://www.sjbaker.org/steve/omniv/projection_abuse.html
NB: I've just realised that the OpenGL ES version you're using might not support those exact functions, but you should be able to find out how to achieve the same results quite easily.
The partial answer to #2 is that I was not scaling my normals. In other words, the previously normalized normals have a greater (relative) magnitude when the object is scaled smaller, causing the faces to reflect more light, and vice versa.
You can set a parameter,
glEnable(GL_NORMALIZE)
and it solves the problem, at the expense of some extra calculations. The full story is here:
http://www.opengl.org/resources/features/KilgardTechniques/oglpitfall/
(see #16).
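The effect is easy to see numerically: the diffuse term is a dot product with the normal, so any scale applied to the normal scales the brightness directly, and renormalizing (what GL_NORMALIZE does) removes the effect. A small illustrative sketch, with the light assumed to point along +z:

```java
// Illustrative sketch of the normal-scaling pitfall: diffuse intensity
// is max(0, dot(N, L)). With L = (0, 0, 1), that is just max(0, nz),
// so a scaled normal scales the lit brightness unless renormalized.
public class NormalScale {
    public static float diffuse(float nx, float ny, float nz, boolean renormalize) {
        if (renormalize) {
            float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
            nx /= len; ny /= len; nz /= len;
        }
        return Math.max(0f, nz); // dot product with light direction (0, 0, 1)
    }
}
```

A normal shrunk to half length halves the diffuse term; with renormalization the brightness is independent of the model's scale, which is why enabling GL_NORMALIZE fixes the dimming.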