I have been working on an isometric minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some lighting. It is difficult to distinguish between separate elevations because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block.
Well, it depends on how you do your lighting.
Basically, sprites are just textured quads made of two triangles.
Traditional vertex-based lighting (supported by the built-in but now deprecated fixed-function pipeline) will only calculate the lighting for the 4 corners of that quad; everything else is interpolated. This is quite fast but can produce the wrong lighting - especially with spot lights and big quads.
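To see why corner-only evaluation goes wrong, here is a small self-contained sketch (plain Java, no OpenGL; the falloff function is made up for illustration): a spot light centered on a large quad lights none of the four corners, so the interpolated result leaves the whole quad dark.

```java
// Why per-vertex lighting can miss a spot light on a large quad:
// intensity is evaluated only at the four corners and bilinearly
// interpolated across the surface.
public class VertexLightingDemo {
    // Hypothetical linear falloff: 1.0 at the light, 0.0 beyond distance 10.
    static double intensity(double x, double y, double lx, double ly) {
        double d = Math.hypot(x - lx, y - ly);
        return Math.max(0.0, 1.0 - d / 10.0);
    }

    // Bilinear interpolation of the four corner intensities of a quad
    // spanning (0,0)-(w,h), sampled at (x,y).
    static double interpolated(double x, double y, double w, double h,
                               double lx, double ly) {
        double c00 = intensity(0, 0, lx, ly), c10 = intensity(w, 0, lx, ly);
        double c01 = intensity(0, h, lx, ly), c11 = intensity(w, h, lx, ly);
        double u = x / w, v = y / h;
        return (c00 * (1 - u) + c10 * u) * (1 - v)
             + (c01 * (1 - u) + c11 * u) * v;
    }

    public static void main(String[] args) {
        // Spot light centered on a 40x40 quad: the true intensity at the
        // center is 1.0, but every corner is too far away to be lit at all,
        // so the interpolated center is 0.0 - the quad stays dark.
        double truth  = intensity(20, 20, 20, 20);
        double interp = interpolated(20, 20, 40, 40, 20, 20);
        System.out.println("true=" + truth + " interpolated=" + interp);
    }
}
```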
If you use directional lighting only, you might apply a normal map to your quads and thus influence lighting in a per-texel way, but that might still not be what you want.
The modern way would be to use shaders, i.e. the lighting is evaluated per-pixel. You'd then have to provide per-texel lighting information for your quad which is then used in the fragment/pixel shader.
Just to clarify, the meanings of some terms in this context:
per-texel: per pixel in the texture; those values may be interpolated
per-pixel: per output pixel, i.e. per screen pixel
Edit:
I just looked at your screenshot and it seems you'll have to change the shade of the sprite's edges if the adjacent sprite is not on the same level. Assuming you already know which sprite edge should be visible (i.e. there's a level change at that edge), you might just change the shading of the vertices that form that edge.
If you don't use any lighting, you might just start by setting the vertex color to white, and to some darker color for the vertices that need shading. Then multiply your texture color with the vertex color, which should result in darker edges.
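A minimal sketch of that multiply trick (plain Java, names invented, not tied to Slick2D or OpenGL): edge vertices bordering a lower tile get a darker vertex color, and the final fragment color is texture times vertex color.

```java
// The vertex-color trick: darken edge vertices where the neighbor tile
// is lower, then modulate the texel by the vertex color.
public class EdgeShadeDemo {
    // Pick a vertex color: white where levels match, darker at a drop.
    // The 0.6 shade factor is an arbitrary example value.
    static float vertexShade(int myLevel, int neighborLevel) {
        return neighborLevel < myLevel ? 0.6f : 1.0f;
    }

    // What the GPU effectively does per channel: texel * vertex color.
    static float shadedTexel(float texel, float vertexColor) {
        return texel * vertexColor;
    }

    public static void main(String[] args) {
        float shade = vertexShade(2, 1);              // neighbor one level down
        System.out.println(shadedTexel(0.9f, shade)); // darker edge texel
        System.out.println(shadedTexel(0.9f, vertexShade(2, 2))); // unchanged
    }
}
```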
Alternatively, if those levels have different depths (i.e. different z values), you could use some shader for edge detection (e.g. some SSAO implementation).
Edit 2:
If you use plain old vertex lighting, applying weighted normals might help. Basically you calculate the weighted vertex normals from the normals of those triangles that share a vertex.
There are several methods of doing this, one being to weight the faces based on the angle at that vertex: multiply each face normal by that angle, add them together, and finally normalize the resulting normal.
The result of that calculation might be something like this (ASCII art):
|        |       /
|________|______/
|       /|      |
|/_______|______|
Lines pointing up are the normals, the bottom lines would be your sprites in a side view.
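The weighted-normal calculation above can be sketched in 2D like this (plain Java; the face normals and angle weights are example values for a flat face meeting a 45-degree slope):

```java
// Angle-weighted vertex normals: each face normal is weighted by the
// angle the face subtends at the shared vertex, summed, then normalized.
public class WeightedNormals {
    static double[] weightedNormal(double[][] faceNormals, double[] angles) {
        double nx = 0, ny = 0;
        for (int i = 0; i < faceNormals.length; i++) {
            nx += faceNormals[i][0] * angles[i];
            ny += faceNormals[i][1] * angles[i];
        }
        double len = Math.hypot(nx, ny);
        return new double[] { nx / len, ny / len };
    }

    public static void main(String[] args) {
        // A flat face (normal straight up) meets a 45-degree slope.
        // Equal angle weights -> the vertex normal points halfway between,
        // i.e. tilted 22.5 degrees from vertical.
        double s = Math.sqrt(0.5);
        double[][] normals = { { 0, 1 }, { -s, s } };
        double[] angles = { Math.PI / 2, Math.PI / 2 };
        double[] n = weightedNormal(normals, angles);
        System.out.printf("(%.3f, %.3f)%n", n[0], n[1]);
    }
}
```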
Related
I'm using Slick2D to render an Animation using anti-aliasing. Without AA, the movement is choppy, as one would expect.
I turn on AA in my game's preRenderState with:
g.setAntiAlias(true);
This leads to:
Note the diagonal line in the center, presumably caused by the two triangles that render the rectangle not meeting precisely. How can I remove that while still using AA to smooth my movement? I found http://www.java-gaming.org/index.php?topic=27313.0 but the solution was "remove AA", which I am reluctant to do.
This looks suspiciously like artifacts due to GL_POLYGON_SMOOTH.
When you use that (deprecated) functionality, you are expected to draw all opaque geometry with blending enabled and the blend function: GL_SRC_ALPHA_SATURATE, GL_ONE. Failure to do so produces white outlines on most contours (aliased edges, basically).
Chapter 6 of the OpenGL Redbook states:
Now you need to blend overlapping edges appropriately. First, turn off the depth buffer so that you have control over how overlapping pixels are drawn. Then set the blending factors to GL_SRC_ALPHA_SATURATE (source) and GL_ONE (destination). With this specialized blending function, the final color is the sum of the destination color and the scaled source color; the scale factor is the smaller of either the incoming source alpha value or one minus the destination alpha value.
This means that for a pixel with a large alpha value, successive incoming pixels have little effect on the final color because one minus the destination alpha is almost zero. With this method, a pixel on the edge of a polygon might be blended eventually with the colors from another polygon that's drawn later. Finally, you need to sort all the polygons in your scene so that they're ordered from front to back before drawing them.
That is a lot of work that almost nobody does correctly when they try to enable GL_POLYGON_SMOOTH.
Every time you enable anti-aliasing in Slick2D, it should look like this.
g.setAntiAlias(true);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA_SATURATE, GL11.GL_ONE);
Basically the same as Andon's answer, but with code that should solve the problem.
Note that with this whatever you draw first will be on top, rather than what you draw last. So you may need to reverse rendering order.
I am currently drawing a single transparent 3D mesh, generated via a marching cubes algorithm, with the intention of having more objects once the problem is fixed.
As it stands, I can draw 3D shapes perfectly well, but when I implement transparency (in my case by changing the opacity of the mesh's PhongMaterial) I get a weird effect where only a few triangles are rendered when behind another triangle.
see example.
http://i.imgur.com/1wdmYYs.png
(sorry, I was unable to post the image directly, due to rep)
When the "stick" is behind the larger shape there seems to be a loss in triangles and I currently have no idea why.
The red is all the same mesh rendered in the same way.
I am currently using an ambient light if that makes a difference.
Some example code:
MeshView mesh = ...; // mesh data generated via marching cubes
mesh.setCullFace(CullFace.NONE);
PhongMaterial mat = new PhongMaterial(new Color(1, 0, 0, 0.5));
AmbientLight light = new AmbientLight();
light.setColor(new Color(1, 0, 0, 0.5)); // I don't believe the alpha makes a difference
light.setOpacity(0.5);
mesh.setMaterial(mat);
group.getChildren().addAll(light, mesh);
Transparency only works correctly when the triangle faces are sorted by their distance to the camera and rendered back to front. This is an artifact of how consumer 3D cards work: they break the scene down into triangles so each one can be rendered individually, which allows hundreds of triangles to be processed at the same time when you have hundreds of cores. Older cards advertised the number of triangles per second they could render.
On more modern cards, part of the triangle rendering has moved into the driver, which uses the vector engines on the card to calculate the color of each point in software. This is still fast since you can have 1000+ vector CPUs, and it allows you to write complex programs that modify each vertex/pixel before it's stored in memory, which is how you get shiny surfaces and similar effects.
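The usual workaround for the transparent mesh above is to sort the transparent triangles back to front each frame before submitting them. A rough sketch of that sorting step (plain Java, not JavaFX API; triangles represented as raw vertex arrays):

```java
import java.util.Arrays;
import java.util.Comparator;

// Back-to-front sorting for transparent triangles: sort by the squared
// distance from the camera to each triangle's centroid, farthest first.
public class DepthSort {
    static double[] centroid(double[][] tri) {
        return new double[] {
            (tri[0][0] + tri[1][0] + tri[2][0]) / 3,
            (tri[0][1] + tri[1][1] + tri[2][1]) / 3,
            (tri[0][2] + tri[1][2] + tri[2][2]) / 3 };
    }

    static double distSq(double[] p, double[] cam) {
        double dx = p[0] - cam[0], dy = p[1] - cam[1], dz = p[2] - cam[2];
        return dx * dx + dy * dy + dz * dz;
    }

    // Sort triangles farthest-first relative to the camera position.
    static void sortBackToFront(double[][][] tris, double[] cam) {
        Arrays.sort(tris, Comparator.comparingDouble(
            (double[][] t) -> distSq(centroid(t), cam)).reversed());
    }

    public static void main(String[] args) {
        double[] cam = { 0, 0, 0 };
        double[][][] tris = {
            { {0, 0, 1}, {1, 0, 1}, {0, 1, 1} },   // near triangle
            { {0, 0, 5}, {1, 0, 5}, {0, 1, 5} },   // far triangle
        };
        sortBackToFront(tris, cam);
        System.out.println("first z = " + tris[0][0][2]); // far one drawn first
    }
}
```

Centroid sorting is only an approximation; intersecting or very large triangles can still blend in the wrong order.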
I need to create a jigsaw puzzle game. I've already done this in the past using AndEngine, however I've only cut texture into rectangles. Now I need to cut it into proper jigsaw pieces. How can I do that?
Cut the texture into rectangles, but take extra space for every rectangle, so you end up with a lot of rectangles which overlap each other.
Then you need a set of patterns for the jigsaw edges (black-and-white images; you can call them masks) and generate a mask for every rectangle using those patterns.
The algorithm would be:
Create a mask the size of the rectangle and initialise it with white.
Then choose edge patterns based on the rectangle's neighbors if they are already initialised, or choose the edges randomly if the neighbors are not yet initialised.
After you have chosen the patterns, draw them onto the mask for every side. In the end you will have a mask in the shape of a jigsaw piece: white = visible, black = transparent.
Then apply the mask to the rectangle when you draw it.
And bear in mind that you don't place these rectangles based on their actual size; position them so that they overlap each other...
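A sketch of just the mask-application step (plain Java on raw ARGB pixels; the pattern generation and neighbor matching are left out, and the names are made up):

```java
// Apply a white/black mask (true = keep) to a piece's ARGB pixels:
// where the mask is black, the pixel's alpha is zeroed out.
public class JigsawMask {
    static int[] applyMask(int[] argb, boolean[] mask) {
        int[] out = new int[argb.length];
        for (int i = 0; i < argb.length; i++)
            out[i] = mask[i] ? argb[i] : (argb[i] & 0x00FFFFFF);
        return out;
    }

    public static void main(String[] args) {
        int red = 0xFFFF0000;                         // opaque red
        int[] piece = { red, red, red, red };         // tiny 2x2 "rectangle"
        boolean[] mask = { true, true, false, true }; // one texel masked out
        int[] result = applyMask(piece, mask);
        System.out.printf("%08X%n", result[2]);       // 00FF0000: transparent
    }
}
```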
P.S. I hope you understood what I was trying to say. Sorry, English is not my native language...
What is the easiest way to give a 2D texture in OpenGL (LWJGL) some kind of "thickness"? Of course I could somehow get the border of the texture and add quads, oriented by the normal of the quad that the texture is drawn on, in the color of the adjacent texture pixel. But there has to be an easier way to do it.
Minecraft uses LWJGL as well, and it has those (new) 3D items that spin on the ground without causing as much of a performance issue as if they were drawn from dozens of polygons. Also, when you hold an item in your hand, there is that kind of "stretched" texture in depth, which also works with high-resolution textures.
Does anyone know how that is done?
A 2D texture is always infinitely thin. If you want actual thickness (when you look edge onto it) you need geometry. In Minecraft things look blocky, because they've been modeled that way.
If you look at it from an angle and ignore the edges, you can use parallax mapping to "fake" some depth in the texture. Or you can use a depth map and a combination of tessellation and vertex shaders to implement displacement mapping, which generates geometry from the texture.
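The core of basic parallax mapping is a one-line texture-coordinate shift. Here is a CPU-side sketch of what the fragment shader would compute (plain Java, illustrative names; assumes the view direction is already in tangent space, with z pointing out of the surface):

```java
// Basic parallax mapping: shift the texture coordinate along the view
// direction, scaled by the sampled height value.
public class ParallaxDemo {
    // viewDir is a tangent-space direction; height is the heightmap sample
    // at (u, v); scale controls how strong the depth illusion is.
    static double[] parallaxUV(double u, double v, double[] viewDir,
                               double height, double scale) {
        double pu = u + viewDir[0] / viewDir[2] * height * scale;
        double pv = v + viewDir[1] / viewDir[2] * height * scale;
        return new double[] { pu, pv };
    }

    public static void main(String[] args) {
        // Looking at 45 degrees in x, full height, scale 0.05:
        // u is shifted by 0.05, v is unchanged.
        double[] uv = parallaxUV(0.5, 0.5,
                new double[] { 0.707, 0.0, 0.707 }, 1.0, 0.05);
        System.out.printf("%.3f %.3f%n", uv[0], uv[1]);
    }
}
```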
While working on Projectiles I thought that it would be a good idea to rotate the sprite as well, to make it look nicer.
I am currently using a 1-dimensional array, and the sprite's width and height can and will vary, which makes it a bit more difficult for me to figure out how to do this correctly.
I will be honest and straight out say it: I have absolutely no idea on how to do this. There have been a few searches that I have done to try to find some stuff, and there were some things out there, but the best I found was this:
DreamInCode ~ Rotating a 1-dimensional Array of Pixels
This method works fine, but only for square Sprites. I would also like to apply this for non-square (rectangular) Sprites. How could I set it up so that rectangular sprites can be rotated?
Currently, I'm attempting to make a laser, and it would look much better if it didn't only go along a vertical or horizontal axis.
You need to recalculate the coordinates of your image's points (take a look here). You have to do a matrix product of every point of your sprite (x, y) with the rotation matrix to get the new point (x', y').
You can assume that the bottom left (or the top left, depending on your coordinate system's orientation) of your sprite is at (x, y) = (0, 0).
And you should recalculate the colors too: if you have a pure red pixel surrounded by blue pixels at (x, y) = (10, 5), after rotation it can move to, say, (x, y) = (8.33, 7.1), which is not a real pixel position, since pixels don't have fractional coordinates. So the pixel at the real position (x, y) = (8, 7) will no longer be pure red, but red with a small percentage of blue... but one thing at a time.
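A sketch of the whole idea using inverse mapping: for each pixel of the destination, rotate back into the source and copy the nearest texel. This works for rectangular (non-square) sprites; plain Java, no engine API assumed, and nearest-neighbor sampling is used instead of the color blending described above.

```java
// Rotate a w-by-h 1D pixel array by inverse mapping into a dw-by-dh
// destination: each destination pixel center is rotated back into source
// space and the nearest source texel is copied (nearest neighbor).
public class RotateSprite {
    static int[] rotate(int[] src, int w, int h, double radians,
                        int dw, int dh) {
        int[] dst = new int[dw * dh];       // 0 = transparent background
        double cos = Math.cos(-radians), sin = Math.sin(-radians);
        double cx = w / 2.0, cy = h / 2.0;      // source center
        double dcx = dw / 2.0, dcy = dh / 2.0;  // destination center
        for (int y = 0; y < dh; y++) {
            for (int x = 0; x < dw; x++) {
                // Rotate the destination pixel center back into source space.
                double rx = (x + 0.5 - dcx) * cos - (y + 0.5 - dcy) * sin + cx;
                double ry = (x + 0.5 - dcx) * sin + (y + 0.5 - dcy) * cos + cy;
                int sx = (int) Math.floor(rx), sy = (int) Math.floor(ry);
                if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                    dst[y * dw + x] = src[sy * w + sx];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        // A 3x1 horizontal "laser" rotated 90 degrees into a 1x3 target
        // becomes vertical.
        int[] laser = { 1, 2, 3 };
        int[] out = rotate(laser, 3, 1, Math.PI / 2, 1, 3);
        System.out.println(java.util.Arrays.toString(out));
    }
}
```

For arbitrary angles you would size the destination to the rotated bounding box and, ideally, use bilinear sampling instead of nearest neighbor to avoid jagged edges.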
It's easier than you think: you only have to copy the original rectangular sprites, centered, into bigger square ones with a transparent background. .png files support transparency, so you can use them.