I am currently drawing a single transparent 3D mesh, generated via a marching cubes algorithm, with the intention of having more objects once the problem is fixed.
As it stands, I can draw 3D shapes perfectly well, but when I implement transparency (in my case by changing the opacity of the mesh's PhongMaterial) I get a weird effect where only a few triangles are rendered when they are behind another triangle.
See this example:
http://i.imgur.com/1wdmYYs.png
(sorry, I was unable to post the image directly, due to rep)
When the "stick" is behind the larger shape there seems to be a loss in triangles and I currently have no idea why.
The red is all the same mesh rendered in the same way.
I am currently using an ambient light if that makes a difference.
Some example code:
MeshView mesh = ...; // mesh data generated via a marching cubes algorithm
mesh.setCullFace(CullFace.NONE);
PhongMaterial mat = new PhongMaterial(new Color(1, 0, 0, 0.5));
AmbientLight light = new AmbientLight();
light.setColor(new Color(1, 0, 0, 0.5)); // I don't believe the alpha makes a difference
light.setOpacity(0.5);
mesh.setMaterial(mat);
group.getChildren().addAll(light, mesh);
Transparency only renders correctly when the triangle faces are sorted by distance to the camera and drawn back to front. This is an artifact of how consumer 3D cards work: they break every scene down into triangles so each one can be rendered individually, which lets hundreds of triangles be rendered at the same time when you have hundreds of cores. Older cards advertised the number of triangles per second they could render.
On more modern cards, part of the triangle rendering has been moved to the driver, which uses the vector engines on the card to compute the color of each point in software. This is still fast since you can have 1000+ vector CPUs, and it lets you write small programs that modify each vertex/pixel before it's stored in memory, which is how you get shiny surfaces and similar effects.
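In JavaFX that ordering is up to you, because the scene graph draws children in the order they appear in their parent. A minimal sketch of my own (not from the question; the group and camera are placeholders) that re-sorts a group's transparent meshes back to front whenever the camera or objects move:
import java.util.Comparator;
import javafx.geometry.Point3D;
import javafx.scene.Group;
import javafx.scene.Node;
import javafx.scene.PerspectiveCamera;

public final class TransparencySort {
    // Reorders the group's children so the farthest meshes come first
    // and are therefore drawn first (back to front).
    public static void sortBackToFront(Group group, PerspectiveCamera camera) {
        Point3D camPos = camera.localToScene(Point3D.ZERO);
        group.getChildren().sort(
                Comparator.comparingDouble(
                        (Node n) -> n.localToScene(Point3D.ZERO).distance(camPos))
                        .reversed());
    }
}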
Related
I want to reduce draw calls in my isometric map implementation, so I thought about combining multiple meshes and drawing them in one go. I am currently creating a mesh from all tiles in an 8x8 "chunk", i.e. a mesh containing floors, walls and objects. These floors, walls and objects are just quads placed along the regular axes with Y pointing up; a rotated orthographic camera provides the isometric projection. Most floors and walls fill the full quad with an opaque texture, but some of them, and most objects, have alpha transparency, like a window, a broken floor or a chair. I build the "chunk" mesh by adding the individual meshes to it in drawing order, back to front and bottom to top, otherwise the combined chunk mesh is not rendered properly at all. I also render all of the needed chunks in that same order, and this works great.
However, the drawing problems start when I need to add additional objects "inside" these chunk meshes, for example a moving player, which is also just a quad with a transparent texture. I need to be able to put him inside that chunk mesh, perhaps on a tile where he is in front of a wall and behind a window. If I render the player before the chunk, his transparent pixels do not show the chunk behind him. If I render the player after the chunk, the player is not visible through a transparent quad in the chunk. If this could be solved easily and without too much CPU/GPU cost, that would be the answer to my question. However, I consider myself new to OpenGL, so I do not know its magic very well.
I can think of a few solutions to tackle this problem that do not involve OpenGL:
Dump the chunk-mesh method and just draw back to front. I would either need a more efficient way of drawing or have to disallow zooming out as far, to keep draw calls down, since they are the bottleneck.
Get rid of the quads with transparency and make them fully 3D. I feel this should be a design choice and not a mandatory thing; besides, it would add a lot of extra work to creating all these assets. Right now I just have textures projected onto quads instead of fully UV-mapped models with individual textures.
Draw all transparent objects after the chunks in the proper order. This feels kind of hacky and error-prone, since some objects need to go into the chunk mesh and some don't.
Only combine floors into a batch mesh. The floors are the biggest bottleneck: the bottom of the map has every floor tile filled, which is about 4000 draw calls when drawn individually, and a big building uses a lot of floors for each Z level too. Walls and objects are drawn far less often, probably a couple hundred at most when fully zoomed out. So for each chunk I would draw all floors in one call and then each individual object; I would already reduce draw calls a lot just by combining the floors. When drawing walls and objects I would have to check whether there is a dynamic object to render, or simply check whether there are dynamic objects within the chunk and sort them together with all the walls and objects before drawing.
This is how I currently render.
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
shader.begin();
shader.setUniformMatrix("u_worldView", cam.combined);
// If I draw the player before the chunk, the player's transparent pixels do not show the chunk behind him.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
// Drawing the combined chunk mesh of floors, walls and objects of every tile within the chunk.
chunk.getCombinedMesh().render(shader, GL20.GL_TRIANGLES);
// If I draw the player after the chunk, the player is not drawn when he is behind a transparent mesh like a window or a tree.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
shader.end();
So what are my options here? Can I fix this by using some magic tricks out of the OpenGL hat? Or do you have another suggestion to put on the list above? I am using LibGDX for my project.
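For reference, the usual two-pass approach (a hedged sketch, not necessarily the answer to the above; the Chunk type, visibleChunks and getSortedTransparentMeshes() are hypothetical) renders the opaque part of each chunk first and then all transparent quads, player included, sorted back to front with depth writes disabled:
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glDepthMask(true);
shader.begin();
shader.setUniformMatrix("u_worldView", cam.combined);
// Pass 1: opaque floors and walls, batched per chunk.
for (Chunk chunk : visibleChunks) {
    chunk.getOpaqueMesh().render(shader, GL20.GL_TRIANGLES);
}
// Pass 2: transparent quads, sorted by distance to the camera (farthest first).
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
Gdx.gl.glDepthMask(false); // still depth-test against the opaque pass, but don't write depth
for (Mesh mesh : getSortedTransparentMeshes(visibleChunks, player, cam)) {
    mesh.render(shader, GL20.GL_TRIANGLES);
}
Gdx.gl.glDepthMask(true);
shader.end();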
I want to develop a simple 2D side scrolling game using libGDX.
My world contains many different 64x64 pixel blocks that are drawn by a SpriteBatch using a camera to fit the screen. My 640x640px resource file contains all these images. The block textures are positioned at (0, 0), (0, 64), (64, 0), ... and so on in my resource file.
When my app launches, I load the texture and create many different TextureRegions:
texture = new Texture(Gdx.files.internal("texture.png"));
block = new TextureRegion(texture, 0, 0, 64, 64);
block.flip(false, true);
// continue with the other blocks
Now, when I render my world, everything seems fine, but some blocks (about 10% of them) are drawn as if the TextureRegion's rectangle were positioned wrongly: their topmost pixel row is the bottommost pixel row of the block above them in the resource texture. Most of the blocks are rendered correctly, and I have checked multiple times that I entered the correct positions.
The odd thing is, that when I launch the game on my computer - instead of my android device - the textures are drawn correctly!
When searching for solutions, many people refer to the texture filter, but neither Linear nor Nearest works for me. :(
Hopefully I was able to explain the problem in an accessible way and you have some ideas how to fix it (i.e. how to draw only the texture region that I actually want to draw)!
Best regards
EDIT: The bug only appears at certain positions. When I draw two blocks with the same texture at different positions, one of them is drawn correctly and the other is not. I don't get it...
You should always leave empty space between your images when packing them into one texture, because with FILTER_LINEAR (which I think is the default) every output pixel is sampled from the four nearest texels. If your images have no padding of empty pixels, the edge pixels will pick up texels from the neighbouring image.
So three options to solve your issue:
Manually add space between the images in your texture file
Stop using FILTER_LINEAR (but you will get ugly results if you are not drawing at the native image dimensions, e.g. when scaling the image)
Use the libGDX TexturePacker; it has built-in functionality to do exactly that when packing your images
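For the third option, a minimal sketch of how the padding could be configured with libGDX's TexturePacker (from the gdx-tools extension; the directory names here are placeholders):
import com.badlogic.gdx.tools.texturepacker.TexturePacker;

public class PackBlocks {
    public static void main(String[] args) {
        TexturePacker.Settings settings = new TexturePacker.Settings();
        settings.paddingX = 2;            // empty pixels between packed images
        settings.paddingY = 2;
        settings.duplicatePadding = true; // repeat edge pixels into the padding
        TexturePacker.process(settings, "raw-blocks", "packed", "blocks");
    }
}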
What is the easiest way to give a 2D texture in OpenGL (LWJGL) some kind of "thickness"? Of course I could somehow get the border of the texture and add quads, oriented along the normal of the quad the texture is drawn on, in the color of the adjacent texture pixel, but there has to be an easier way to do it.
Minecraft uses LWJGL as well and has the (new) 3D items that spin on the ground without causing as much of a performance hit as if they were built from dozens of polygons. Also, when you hold an item in your hand, there is that kind of "stretched" texture with depth, which also works with high-resolution textures.
Does anyone know how that is done?
A 2D texture is always infinitely thin. If you want actual thickness (visible when you look at it edge-on), you need geometry. In Minecraft things look blocky because they have been modeled that way.
If you look at the surface at an angle and ignore the edges, you can use parallax mapping to "fake" some depth in the texture. Or you can use a depth map and a combination of tessellation and vertex shaders to implement displacement mapping, which generates geometry from the texture.
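To illustrate the geometry route, a hedged sketch of vertex data for a quad extruded into a thin slab; the (x, y, z, u, v) layout and the thickness value are arbitrary choices, and only three of the six faces are shown:
float t = 0.0625f; // half thickness, an arbitrary value
// Front face (x, y, z, u, v per vertex)
float[] front  = { 0, 0,  t, 0, 1,   1, 0,  t, 1, 1,   1, 1,  t, 1, 0,   0, 1,  t, 0, 0 };
// Back face, same texture coordinates
float[] back   = { 1, 0, -t, 0, 1,   0, 0, -t, 1, 1,   0, 1, -t, 1, 0,   1, 1, -t, 0, 0 };
// Bottom edge: reuses the bottom texel row (v = 1), so the side takes the color
// of the adjacent texture pixels -- the "stretched" look the question describes.
float[] bottom = { 0, 0, -t, 0, 1,   1, 0, -t, 1, 1,   1, 0,  t, 1, 1,   0, 0,  t, 0, 1 };
// ...top, left and right faces are built the same way.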
I've started with JOGL lately. I know how to create and draw objects on the canvas, but I couldn't find a tutorial or explanation of how to set and rotate the camera.
I only found source code, but since I'm quite new with this, it doesn't help too much.
Does anyone know of a good tutorial or place to start? I googled but couldn't find anything (only for JOGL 1.5, and I'm using 2.0).
UPDATE
As datenwolf points out my explanation is tied to the OpenGL 2 pipeline, which has been superseded. This means you have to do your own manipulation from world space into screen space if you want to eschew the deprecated methods. Sadly, this little footnote hasn't gotten around to being attached to every last bit of OpenGL sample code or commentary in the universe yet.
Of course I don't know why it's necessarily a bad thing to use the existing GL2 pipeline before picking a library to do the same or building one yourself.
ORIGINAL
I'm playing around with JOGL myself, though I have some limited prior experience with OpenGL. OpenGL uses two matrices to transform all the 3D points you pass through it from 3D model space into 2D screen space: the Projection matrix and the ModelView matrix.
The projection matrix handles the translation between the 3D world and the 2D screen, projecting a higher-dimensional space onto a lower-dimensional one. You can find lots more detail by Googling gluPerspective, a function in the GLU library for setting up that matrix.
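For example, in JOGL 2 that setup usually looks something like this (the field-of-view and clip-plane values are placeholders, and the GLU package name depends on your JOGL release):
GLU glu = new GLU(); // com.jogamp.opengl.glu.GLU in recent JOGL 2 releases
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(45.0, (double) width / height, 0.1, 100.0);
gl.glMatrixMode(GL2.GL_MODELVIEW);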
The ModelView1 matrix, on the other hand, is responsible for transforming items' 3D coordinates from scene space into view (or camera) space. How exactly this is done depends on how you're representing the camera. Three common ways of representing the camera are
A vector for the position, a vector for the target of the camera, and a vector for the 'up' direction
A vector for the position plus a quaternion for the orientation (plus perhaps a single floating point value for scale, or leave scale set to 1)
A single 4x4 matrix containing position, orientation and scale
Whichever one you use will require you to write code that translates the representation into something you can give to the OpenGL methods to set up the ModelView matrix, as well as code that translates user actions into modifications of the camera data.
There are a number of demos in JOGL-Demos and JOCL-Demos that involve this kind of manipulation. For instance, this class is designed to act as a kind of primitive camera that can zoom in and out and rotate around the origin of the scene, but cannot turn otherwise. It's therefore represented by just 3 floats: an X and a Y rotation and a Z distance. It applies its transform to the ModelView something like this2:
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, z);
gl.glRotatef(rotx, 1f, 0f, 0f);
gl.glRotatef(roty, 0f, 1.0f, 0f);
I'm currently experimenting with a Quaternion+Vector+Float based camera using the Java Vecmath library, and I apply my camera transform like this:
Quat4d orientation;
Vector3d position;
double scale;
...
public void applyMatrix(GL2 gl) {
    // Combine orientation, position and scale into a single transform.
    Matrix4d matrix = new Matrix4d(orientation, position, scale);
    // Flatten in column-major order, which is what OpenGL expects
    // (vecmath's Matrix4d fields are row-major, so this transposes it).
    double[] glmatrix = new double[] {
        matrix.m00, matrix.m10, matrix.m20, matrix.m30,
        matrix.m01, matrix.m11, matrix.m21, matrix.m31,
        matrix.m02, matrix.m12, matrix.m22, matrix.m32,
        matrix.m03, matrix.m13, matrix.m23, matrix.m33,
    };
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadMatrixd(glmatrix, 0);
}
1: The reason it's called the ModelView and not just the View matrix is because you can actually push and pop matrices on the ModelView stack (this is true of all OpenGL transformation matrices I believe). Typically you either have a full stack of matrices representing various transformations of items relative to one another in the scene graph, with the bottom one representing the camera transform, or you have a single camera transform and keep everything in the scene graph in world space coordinates (which kind of defeats the point of having a scene graph, but whatever).
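The push/pop pattern mentioned in that footnote looks roughly like this (applyCameraTransform and drawItem are hypothetical helpers):
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
applyCameraTransform(gl);              // bottom of the stack: the view transform
gl.glPushMatrix();
gl.glTranslatef(itemX, itemY, itemZ);  // per-item transform layered on top
drawItem(gl);
gl.glPopMatrix();                      // back to the plain camera transform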
2: In practice you wouldn't see the calls to gl.glMatrixMode(GL2.GL_MODELVIEW); in the code because the GL state machine is simply left in MODELVIEW mode all the time unless you're actively setting the projection matrix.
but I couldn't find tutorial or explanations on how to set and rotate the camera
Because there is none. OpenGL is not a scene graph. It's mostly a sophisticated canvas plus simple point, line and triangle drawing tools. "Placing objects" actually means applying linear transformations to place 3-dimensional vectors onto a 2D framebuffer.
So instead of placing the "camera", you just move the whole world (transformation) in the opposite way you would move the camera, which yields the very same result.
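As a hedged sketch in the same GL2 style as above, "rotating the camera" by yaw and pitch (hypothetical variables, along with camX/camY/camZ) then amounts to applying the inverse transform to the world:
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glRotatef(-pitch, 1f, 0f, 0f);      // inverse of the camera's rotation...
gl.glRotatef(-yaw,   0f, 1f, 0f);
gl.glTranslatef(-camX, -camY, -camZ);  // ...and of its position
// ...now draw the scene in world coordinates.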
I have been working on an isometric minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some lighting. It is difficult to distinguish between separate elevations because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block.
Well, it depends on how you do your lighting.
Basically, sprites are just textured quads made of two triangles.
Traditional vertex-based lighting (supported by the built-in but now deprecated functions) will just calculate the lighting at the 4 corners of that quad, and everything in between is interpolated. This is quite fast but can produce wrong lighting, especially with spot lights and big quads.
If you use directional lighting only, you might apply a normal map to your quads and thus influence lighting in a per-texel way, but that might still not be what you want.
The modern way would be to use shaders, i.e. the lighting is evaluated per-pixel. You'd then have to provide per-texel lighting information for your quad which is then used in the fragment/pixel shader.
Just to clarify, the meanings of some terms in this context:
per-texel: per pixel in the texture; those values may be interpolated
per-pixel: per output pixel, i.e. per screen pixel
Edit:
I just looked at your screenshot, and it seems you'll have to change the shade of a sprite's edges when the adjacent sprite is not on the same level. Assuming you already know which sprite edge should be shaded (i.e. there's a level change at that edge), you can simply change the shading of the vertices that form that edge.
If you don't use any lighting, you might start by setting the vertex colors to white, and to some darker color for the vertices that need shading. Then multiply your texture color by the vertex color, which should result in darker edges.
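A hedged sketch of that vertex-color trick, assuming libGDX sprites (the question doesn't name a framework), where only the two vertices along the changed edge are darkened:
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.g2d.Batch;
import com.badlogic.gdx.graphics.g2d.Sprite;

public final class EdgeShading {
    // Darkens the top edge of a block sprite; the default batch shader multiplies
    // the texture color by these per-vertex colors, so the edge fades toward 'shade'.
    public static void shadeTopEdge(Sprite sprite, float shade) {
        float dark  = new Color(shade, shade, shade, 1f).toFloatBits();
        float white = Color.WHITE.toFloatBits();
        float[] v = sprite.getVertices();  // x, y, color, u, v per vertex
        v[Batch.C1] = white; // bottom-left: unshaded
        v[Batch.C2] = dark;  // top-left: darkened
        v[Batch.C3] = dark;  // top-right: darkened
        v[Batch.C4] = white; // bottom-right: unshaded
    }
}
Note that Sprite.setColor() would overwrite these per-vertex values, so set them right before drawing.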
Alternatively, if those levels have different depths (i.e. different z values), you could use a shader for edge detection (e.g. some SSAO implementation).
Edit 2:
If you use plain old vertex lighting, applying weighted normals might help. Basically you calculate a weighted vertex normal from the normals of the triangles that share that vertex.
There are several ways of doing this; one is to weight the faces by the angle they form at that vertex: multiply each face normal by that angle, add them together and finally normalize the resulting normal.
The result of that calculation might be something like this (ASCII art):
| | /
|_______|________/
| / | |
|/______|_______|
Lines pointing up are the normals; the bottom lines would be your sprites seen from the side.
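A minimal sketch of the angle-weighting described above, using the javax.vecmath types that appear earlier in this thread ('positions' and 'indices' are assumed inputs for an indexed triangle mesh):
import javax.vecmath.Vector3f;

public final class WeightedNormals {
    // Returns one normal per vertex, each face's contribution weighted by the
    // angle that face forms at the vertex.
    public static Vector3f[] compute(Vector3f[] positions, int[] indices) {
        Vector3f[] normals = new Vector3f[positions.length];
        for (int i = 0; i < normals.length; i++) normals[i] = new Vector3f();
        for (int t = 0; t < indices.length; t += 3) {
            int a = indices[t], b = indices[t + 1], c = indices[t + 2];
            accumulate(normals, positions, a, b, c);
            accumulate(normals, positions, b, c, a);
            accumulate(normals, positions, c, a, b);
        }
        for (Vector3f n : normals) {
            if (n.lengthSquared() > 0) n.normalize();
        }
        return normals;
    }

    // Adds the face normal of triangle (i, j, k), scaled by the angle at vertex i,
    // to that vertex's accumulated normal.
    private static void accumulate(Vector3f[] normals, Vector3f[] p, int i, int j, int k) {
        Vector3f e1 = new Vector3f(p[j]); e1.sub(p[i]);
        Vector3f e2 = new Vector3f(p[k]); e2.sub(p[i]);
        Vector3f faceNormal = new Vector3f();
        faceNormal.cross(e1, e2);
        if (faceNormal.lengthSquared() == 0) return; // skip degenerate triangles
        faceNormal.normalize();
        faceNormal.scale(e1.angle(e2)); // weight by the corner angle
        normals[i].add(faceNormal);
    }
}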