I've been wanting to write this for some time now... As a university project, a friend and I wrote a game that needed good explosion & particle effects. We encountered some problems, which we solved quite elegantly (I think), and I'd like to share the knowledge.
OK then, so we found this tutorial: Make a Particle Explosion Effect, which seemed easy enough to implement using Java with JOGL. Before I get to how exactly we implemented this tutorial, I'll explain how rendering is done:
Camera: just an orthonormal basis, which basically means it contains 3 normalized, mutually orthogonal vectors, plus a 4th vector representing the camera position. Rendering is done using gluLookAt:
glu.gluLookAt(cam.getPosition().getX(), cam.getPosition().getY(), cam.getPosition().getZ(),
cam.getZ_Vector().getX(), cam.getZ_Vector().getY(), cam.getZ_Vector().getZ(),
cam.getY_Vector().getX(), cam.getY_Vector().getY(), cam.getY_Vector().getZ());
such that the camera's z vector is actually the target, the y vector is the "up" vector, and position is, well... the position.
So (to put it in question form): how do you implement a good particle effect?
P.S.: all the code samples and in-game screenshots (in both answer & question) are taken from the game, which is hosted here: Astroid Shooter
OK then, let's look at how we first approached the implementation of the particles. We had an abstract class Sprite which represented a single particle:
protected void draw(GLAutoDrawable gLDrawable) {
    // each sprite has a different blending function.
    changeBlendingFunc(gLDrawable);
    // get the quad as an array of length 4, containing vectors
    Vector[] bb = getQuadBillboard();
    GL gl = gLDrawable.getGL();
    // bind the texture
    getTexture().bind();
    // set the color
    float[] rgba = getRGBA();
    gl.glColor4f(rgba[0], rgba[1], rgba[2], rgba[3]);
    // draw the sprite on the computed quad
    gl.glBegin(GL.GL_QUADS);
    gl.glTexCoord2f(0.0f, 0.0f); gl.glVertex3d(bb[0].x, bb[0].y, bb[0].z);
    gl.glTexCoord2f(1.0f, 0.0f); gl.glVertex3d(bb[1].x, bb[1].y, bb[1].z);
    gl.glTexCoord2f(1.0f, 1.0f); gl.glVertex3d(bb[2].x, bb[2].y, bb[2].z);
    gl.glTexCoord2f(0.0f, 1.0f); gl.glVertex3d(bb[3].x, bb[3].y, bb[3].z);
    gl.glEnd();
}
Most of the method calls here are pretty much self-explanatory, no surprises, and the rendering is quite simple. In the display method, we first draw all the opaque objects; then we take all the Sprites, sort them by squared distance from the camera, and draw the particles so that those farther from the camera are drawn first. The real thing we have to look deeper into here is the method getQuadBillboard. Each particle has to "sit" on a plane that is perpendicular to the direction from the camera to the particle, like here:
The way to compute a perpendicular plane like that is not hard (a code sketch follows the list):
Subtract the particle position from the camera position to get a vector that is perpendicular to the plane, and normalize it, so it can be used as the plane's normal. A plane is fully defined by a normal and a point, which we now have (the particle position is a point the plane goes through).
compute the "height" of the quad, by normalizing the projection of the camera's Y vector on the plane. you can get the projected vector by computing: H = cam.Y - normal * (cam.Y dot normal)
create the "width" of the quad, by computing W = H cross normal
Return the 4 points / vectors: {position+H+W, position+H-W, position-H-W, position-H+W}
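To make that concrete, here's a minimal Java sketch of those four steps. The Vector method names (subtract, normalize, scale, dot, cross, add) and the size parameter are assumptions for illustration, not our exact API:

public Vector[] getQuadBillboard(Vector camPosition, Vector camY,
                                 Vector position, double size) {
    // step 1: the plane's normal points from the particle to the camera
    Vector normal = camPosition.subtract(position).normalize();
    // step 2: project the camera's Y vector onto the plane, normalize it,
    // and scale it to half the sprite's height
    Vector h = camY.subtract(normal.scale(camY.dot(normal)))
                   .normalize().scale(size);
    // step 3: the width axis is perpendicular to both H and the normal
    Vector w = h.cross(normal);
    // step 4: the four corners of the quad
    return new Vector[] {
        position.add(h).add(w),
        position.add(h).subtract(w),
        position.subtract(h).subtract(w),
        position.subtract(h).add(w)
    };
}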
But not all sprites act like that; some are not perpendicular. For instance, the shockwave ring sprite, or the flying sparks/smoke trails:
So each sprite had to provide its own unique "billboard". BTW, the computation of the smoke trails & flying sparks was a bit of a challenge as well. We created another abstract class for them, which we called LineSprite. I'll skip the explanations here; you can see the code here: LineSprite.
Well, this first try was nice, but there was an unexpected problem. Here's a screenshot that illustrates it:
As you can see, the sprites intersect with each other, so if we look at 2 sprites that intersect, part of the 1st sprite is behind the 2nd sprite, and another part of it is in front of the 2nd sprite. This resulted in some weird rendering, where the lines of intersection are visible. Note that even if we disabled glDepthMask when rendering the particles, the result would still have the lines of intersection visible, because of the different blending that takes place in each sprite. So we had to somehow keep the sprites from intersecting. The idea we had was really cool.
You know all those really cool 3D street art pieces?
Here's an image that emphasizes the idea:
We thought the same idea could be applied in our game, so the sprites wouldn't intersect each other. Here's an image to illustrate it:
Basically, we made all the sprites lie on parallel planes, so no intersection could take place. And it did not affect the visible result, since from the camera's point of view everything stayed the same. From every other angle it would look stretched, but from the camera it still looked great. So, for the implementation:
Given the 4 vectors representing a quad billboard and the position of the particle, we need to output a new set of 4 vectors representing the projection of the original quad billboard onto the particle's plane. The idea of how to do this is explained well here: Intersection of a plane and a line. We have the "line", defined by the camera position and each of the 4 vectors. We have the plane, since we can use our camera's Z vector as the normal and the position of the particle as the point on the plane. A small change is also needed in the comparison function for sorting the sprites: it should now compare depth along the camera's Z axis, using the camera's orthonormal basis, and the computation is as easy as: cam.getZ_Vector().getX()*pos.getX() + cam.getZ_Vector().getY()*pos.getY() + cam.getZ_Vector().getZ()*pos.getZ();. One more thing to notice: if a particle is outside the camera's viewing angle, i.e. behind the camera, we don't want to see it, and especially we don't want to compute its projection (that could result in some very weird and psychedelic effects...).
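Here's a minimal sketch of that projection, with the same assumed Vector API as above. Each corner of the original billboard is pushed along the ray from the camera through that corner until it hits the particle's plane:

public Vector projectOntoPlane(Vector camPos, Vector camZ,
                               Vector particlePos, Vector corner) {
    // the ray: p(t) = camPos + t * dir
    Vector dir = corner.subtract(camPos);
    // the plane: dot(camZ, p - particlePos) = 0
    double denom = camZ.dot(dir);
    // denom near 0 means the ray runs parallel to the plane; corners behind
    // the camera must be culled before this is ever called
    double t = camZ.dot(particlePos.subtract(camPos)) / denom;
    return camPos.add(dir.scale(t));
}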
All that's left is to show the final Sprite class.
The result is quite nice:
Hope it helps. I'd love to get your comments on this "article" (or on the game :}, which you can explore, fork, and use however you want...)
Related
I'm attempting to render a hemisphere in Java. However, I want to render only the slice that is defined by 2 angles - azimuth and elevation. Since I'm defining a slice, I cannot (to my knowledge) use any built-in primitives. If the azimuth range is defined as 0-360 and the elevation range as 0-70, this will be a hemisphere with an upside-down cone-shaped hole in the top.
When rendering this inner "cone", I have chosen to do it as triangles in 5-degree increments. This means that with a 360-degree cone, there are 73 different vertices (if I did the math correctly: 360 / 5-degree slices, with the origin, or tip of the cone, shared by all sides, and all other vertices shared by adjacent triangle slices).
My question:
Is it more efficient to render these as a single polygon with many vertices, or as many triangles with only 3 vertices each? If I do a single polygon, will I still have to include all three points for each triangle, or, if a vertex is shared, would I only include it once? Sorry, my graphics rendering knowledge is limited. Also sorry for being so verbose; I'm hoping someone may spot something erroneous in my thought process which may clear things up either way.
First - Use Google to find an algorithm to create a sphere that is not a primitive.
Second - Somewhere down the chain, triangles will be used, most likely by the underlying library. But for you, it depends on whether or not you plan to chop up the created region. If you are not going to subdivide the region further, I would just make it one polygon. Actually, after thinking about it for a second - you can always divide up the polygon afterwards too. So just make it one polygon.
I thought about it some more and decided to amend this answer. There are two ways you can create a polygon in OpenGL: either as a triangular mesh or as an outline polygon. So if you were asking "Should I use a triangular mesh or an outline polygon?", I would say use the triangular mesh. It is a lot easier to break up a triangular mesh than a polygon outline, since to break the mesh all you have to do is stop at one of the points, include the last two points in the new object, and continue on down the triangular mesh. An outline polygon requires you to go both left and right around the polygon to locate the two points where the break occurs. If that is clear; if not, say so.
Update: 12:05pm
When making a polygon, you can use a triangular mesh or a polygon outline. The outline is mainly good for 2D, whereas the triangular mesh works in both 2D and 3D systems. If you have any kind of polygon bigger than just three points, it is a good idea to put all the points into an array. This lets you use the built-in routines that take an array and simply go through it to build your polygon. Putting everything into an array also makes it easier to add, remove, or adjust points: all you do is change the array entry and then call the same routine to draw everything again (which should be just a single call to a function).
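To make the shared-vertex point concrete, here's a minimal immediate-mode JOGL-style sketch (matching the code style earlier on this page; the parameters and the cone's orientation are assumptions) that draws the inner cone as a triangle fan, so the tip vertex is specified once and shared by all slices:

public void drawInnerCone(GL gl, double radius, double elevationDeg) {
    // the rim of the cone lies on the sphere at the given elevation angle
    double rim = radius * Math.cos(Math.toRadians(elevationDeg));
    double zRim = radius * Math.sin(Math.toRadians(elevationDeg));
    gl.glBegin(GL.GL_TRIANGLE_FAN);
    gl.glVertex3d(0, 0, 0); // the tip, shared by every slice
    // 360/5 = 72 slices; the last rim vertex repeats the first to close the fan,
    // giving the 73 distinct vertices counted in the question
    for (int az = 0; az <= 360; az += 5) {
        double a = Math.toRadians(az);
        gl.glVertex3d(rim * Math.cos(a), rim * Math.sin(a), zRim);
    }
    gl.glEnd();
}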
Situation: I am programming a 2D isometric styled Java game with libGDX.
Right now I have a moveable player that properly collides with tiles of solid objects.
Problem:
Now it comes to the render order of tiles. By default, the library code renders from the bottom map layer to the top layer (ground is layer 0, object layer is layer 1), which makes sense. I draw the player on top of that. This means that the player is always in front of everything, which doesn't make sense in some situations.
Goal: Since an isometric look implies a kind of 3D perspective, the player can be behind or in front of objects. So I have to come up with some code that decides whether the player is rendered behind or in front of each object. I have this fridge as an example:
I hope it is comprehensible what I mean by "logical collision". I have some glimpses of ideas of how to achieve it, but they would make a mess of the code. So I wanted to ask if anyone has experience with this, or can hand me some nice sources that could help.
Thanks for reading!
It depends HOW you render. One thing that comes to mind is to render objects/tiles from top to bottom and left to right; that way, in most cases, the objects will end up in the correct order. Depending on how you make the graphics, there could be flaws in this. If so, you could also have a few priority layers, to draw different parts of objects/tiles, or to draw some things before others. If you have tiles and objects as separate types, you could give objects the ability to draw before or after the tiles, to fulfill different needs. You could also let tiles hold objects for cases like this, but that could be a waste of processor time compared to the other methods I mentioned.
Comparing object positions to tile positions is actually not very difficult. Let's take a hypothetical situation where you have tiles that are 32 x 32 in size, and an object at 25 x 18. The object would be in front of the tiles at offsets 0 x 0 and 1 x 0, but behind those at 0 x 1 and 1 x 1 (if we imagine the tiles start from the upper corner). Therefore, we first draw the tiles at 0 x 0 and 1 x 0, then the object, then the tiles at 0 x 1 and 1 x 1. It naturally falls into its correct place, with fairly simple code logic.
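Here's a minimal Java sketch of that draw order (the tile size, the drawTile/drawObject calls, and the object fields are hypothetical, not libGDX API):

void renderScene(int mapRows, int mapCols, List<GameObject> objects) {
    final int TILE = 32; // hypothetical tile size from the example above
    for (int row = 0; row < mapRows; row++) {
        // draw this row of tiles first...
        for (int col = 0; col < mapCols; col++) {
            drawTile(col, row);
        }
        // ...then every object standing in this row, so the next row's tiles
        // are drawn over it and the object appears behind them
        for (GameObject obj : objects) {
            if ((int) (obj.y / TILE) == row) {
                drawObject(obj);
            }
        }
    }
}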
Hope you got some ideas about how to implement it :)
Professional isometric-styled games are actually 3D-based, with an orthographic camera. This way you don't have the complex problem of 2D sprite sorting in an isometric context and can rely on hardware pixel-based z-buffering.
Nevertheless, if you want to realize an isometric game without the comfort of a 3D game engine, e.g. in Java 2D, the approach of differently sorted layers doesn't work, and the same goes for the painter's algorithm, because both are actually intended for pure 2D top-down-view games.
So how to cope with this dilemma?
Well, an approach would be to imitate the z-buffering technique at a coarser granularity, i.e. instead of considering each single pixel of each object in the scene, you take tiles as the sorting base.
Like z-Buffering, pixels/tiles are rendered in the order of their individual distance to the camera:
The distance formula for a tile-based object with coordinates (x, y, z) is proportional to:
d = y - x - z
Negative values are allowed, so an object A with distance d(A) = -1 is closer to the camera and will hence be rendered after an object B with distance d(B) = 1, which is farther away...
The render steps for each render cycle would therefore be (a sketch follows the list):
Determine all objects which are visible (we don't want to render all objects in a huge world if we only see a small part of it)
Calculate the individual distances of each tile-based object in the scene with the given formula
Sort all visible objects by distance
Render all visible objects in descending order
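A minimal Java sketch of steps 2-4 (IsoObject is a hypothetical stand-in for whatever scene objects you have):

import java.util.Comparator;
import java.util.List;

class IsoObject {
    int x, y, z;
    int distance() { return y - x - z; } // the distance formula from above
}

void renderVisible(List<IsoObject> visible) {
    // descending distance: farthest objects (largest d) first, so closer
    // ones are painted over them, just like z-buffering would resolve it
    visible.sort(Comparator.comparingInt(IsoObject::distance).reversed());
    for (IsoObject obj : visible) {
        render(obj); // hypothetical draw call
    }
}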
I tested this strategy myself in JavaFX 2D, and the result looks like this, i.e. it is a simple, working technique for tile-based isometric rendering:
I am interested in creating a shader effect similar to that of the game (don't shoot me for using this example) Animal Crossing. As you move forward and backward along the terrain, the world "curves" giving the sense of being on a round surface. The thing is, I want to apply this kind of effect to a 2D side-scroller.
Imagine a game like Terraria where both ends of the screen (left and right sides) are slightly bent downward to give the illusion of curvature.
I have tried to find an article explaining how to achieve such an effect, but I haven't found much in the way of helpful direction. I know this isn't the most organized or well-put question, but any help would be appreciated.
Although I am not a fan of answering my own questions, I think I have found a way to achieve this effect and would like to share it. By setting up a basic vertex shader, I was able to shift each vertex along the y-axis depending on how far it was from the center of the screen (the origin in my case). I originally used a linear absolute-value equation to see how it worked, and I got something like this:
This is obviously a strange effect, but it is getting very close to what I want to achieve. I figured I would level the effect out by dividing the vertices' absolute distance from the origin by some scalar. I started with 32, and the result was much more reasonable:
As you can see, there is only a slight bend in the terrain.
This is all nice, but it isn't a "curve" yet; it is just an upside-down 'V' with a bit of squashing. From here, it was easy to apply a nice curve by using a parabola and flattening it out in the same fashion. The result was this:
The result was a very nice curve whose intensity I could tune however I wanted. I also tried applying the graph of a 3rd-degree equation, but it gave more of a try-hard 3D feel. I suppose I could apply the graph of a circle to get the exact curve for a planet with a specified radius, but I am satisfied for now.
The code turned out to be only a few lines long. Here is the GLSL code for the vertex shader:
#version 120
varying vec4 vertColor; // just to send the color data to the fragment shader
uniform float tx;       // passed in by the Java program so the terrain curvature
                        // stays in sync with the player's movement; this value is
                        // usually set in the "lookThroughCamera" method, where you
                        // handle the game world's translation when the player moves

void main(void) {
    // the new y-value for the vertex: take the original y and subtract the
    // square of the x-value (plus the player's x-offset) divided by 64.
    // Squaring explicitly instead of calling pow() avoids GLSL's undefined
    // result for negative bases, and the division by 64 flattens the curve
    // enough to look acceptable
    float t = (gl_Vertex.x + tx) / 64.0;
    float v0 = gl_Vertex.y - t * t;
    vec4 pos = vec4(gl_Vertex.x, v0, gl_Vertex.z, gl_Vertex.w); // the position with the bent y-value
    gl_Position = gl_ModelViewProjectionMatrix * pos; // multiply by the modelview-projection matrix
    vertColor = gl_Color.argb; // more color stuff that has nothing to do with the rest
}
EDIT: This approach does have a serious issue, though. All vertical lines remain vertical, because they are not shifted along the x-axis properly. This is shown by the following image:
This gives an extremely unnatural look, and I have yet to come up with a proper solution to this.
I'm working on creating a simple 3D rendering engine in Java. I've messed about and found a few different ways of doing perspective projection, but the only one I got partly working had weird stretching effects the further the object moved from the centre of the screen, making it look very unrealistic. Basically, I want a method (however simple or complicated it needs to be) that takes the 3D point to be projected, plus the 3D position and rotation (possibly?) of the 'camera', and returns the position on the screen where that point should be drawn. I don't care how long/short/simple/complicated this method is; I just want it to generate the same kind of perspective you see in a modern 3D first-person shooter or any other game, and I know I may have to use matrix multiplication for this. I don't really want to use OpenGL or any other libraries, because I'd quite like to do this as a learning exercise and roll my own, if possible.
Any help would be appreciated quite a lot :)
Thanks, again
- James
Update: To show what I mean by the 'stretching effects', here are some screenshots of a demo I've put together. A cube (40x40x10) centered at (-20,-20,-5) is drawn with the only projection method I've got working at all (code below). The three screens show the camera at (0,0,50) in the first screenshot, then moved along the X axis to show the effect in the other two.
Projection code I'm using:
public static Point projectPointC(Vector3 point, Vector3 camera) {
    Vector3 a = point;
    Vector3 c = camera;
    // central projection onto the z = 0 plane, along the line through
    // the camera c and the point a
    Point b = new Point();
    b.x = (int) Math.round((a.x * c.z - c.x * a.z) / (c.z - a.z));
    b.y = (int) Math.round((a.y * c.z - c.y * a.z) / (c.z - a.z));
    return b;
}
You really can't do this without getting stuck in to the maths. There are loads of resources on the web that explain matrix multiplication, homogeneous coordinates, perspective projections etc. It's a big topic and there's no point repeating any of the required calculations here.
Here is a possible starting point for your learning:
http://en.wikipedia.org/wiki/Transformation_matrix
It's almost impossible to say what's wrong with your current approach, but one possibility, based on your explanation that it looks odd as the object moves away from the centre, is that your field of view is too wide. This results in a kind of fish-eye lens distortion, where too much of the world view is squashed into the edges of the screen.
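To illustrate the field-of-view point, here's a minimal pinhole-projection sketch in Java (not the asker's method). It assumes a camera sitting at the origin looking down the negative z-axis, points with z < 0, and reuses the question's Vector3/Point types:

public static Point projectPerspective(Vector3 p, double fovDeg,
                                       int screenW, int screenH) {
    // focal length in pixels, derived from the vertical field of view;
    // a wider FOV means a shorter focal length and stronger distortion
    // toward the screen edges
    double f = (screenH / 2.0) / Math.tan(Math.toRadians(fovDeg) / 2.0);
    // perspective divide: scale x and y by focal length over depth
    Point b = new Point();
    b.x = (int) Math.round(screenW / 2.0 + f * p.x / -p.z);
    b.y = (int) Math.round(screenH / 2.0 - f * p.y / -p.z); // screen y grows downward
    return b;
}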
I'm looking for an example of how to implement the kind of 2D terrain destruction you see in games like Scorched Earth or iShoot on the iPhone.
I'm looking to implement a game that needs to do destructible terrain and render it using OpenGL (LWJGL in Java) and using OpenGL ES on the iPhone.
As I recall, Worms used two images: the "pretty" terrain with colour, and a mask terrain that is pure black and white. Hit detection is always done on the mask.
If you actually want the terrain to collapse like it did in Tank Wars, you'll need to iterate over each column of your image and have it search for gaps between the terrain and the bottom of the playing field. If any gaps are detected, shift the terrain above the gap down to the lowest point possible in your column.
A simple example of this could be done with an array, where 1 represents solid terrain and 0 represents empty space. In this case, I've set up the left side of the array as ground level, so element [0] would be on the ground:
[1,1,1,1,1,1,0,0,0,0]
Let's assume the terrain is struck from the side and a hole is made:
[1,1,0,0,1,1,0,0,0,0]
You're now left with a floating piece of terrain above another piece of terrain. To make the floating terrain collapse, iterate over the array, keeping track of the first position where you find a 0 (empty space). Then, as you continue to iterate, upon discovering a 1 (terrain), simply shift the 1 to where the 0 was. Repeat the process, iterating from the old 0 position + 1:
[1,1,1,0,0,1,0,0,0,0]
[1,1,1,1,0,0,0,0,0,0]
This is the basic approach, not the most efficient one. It would be much faster to move all indexes of terrain above the gap down at the same time, for example.
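A minimal Java sketch of the collapse for a single column, using the array convention above (index 0 is ground level):

// shift every solid cell (1) down over any gaps (0) below it, in one pass
static void collapseColumn(int[] col) {
    int write = 0; // lowest empty slot found so far
    for (int read = 0; read < col.length; read++) {
        if (col[read] == 1) {
            col[read] = 0;    // lift the solid cell out...
            col[write++] = 1; // ...and drop it onto the settled ground
        }
    }
}

Running this on [1,1,0,0,1,1,0,0,0,0] produces [1,1,1,1,0,0,0,0,0,0], the settled state from the example.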
EDIT:
As the first comment states, a sort on the list is even easier. I'm keeping my original response intact since it helps explain the actual principle behind the collapsing terrain.
Soviut's answer is great! I used a similar algorithm for the destructible terrain feature in Scorched Earth for iPhone. I decided to stay true to the original and have the terrain settle instantly; while I was considering animated terrain settling, I ran into some performance problems. You can see evidence of this in iShoot as well, since iShoot uses slowly settling, animated terrain. There are situations where the ground is still settling from one player's turn when the next player fires a weapon. This can interfere with the shot, and the interference can change depending on how quickly the next player fires. Since Scorched Earth is a turn-based game, it seems like a good idea to have the game wait until the ground has settled before switching to the next player.
To render the terrain, I used OpenGL to draw a polygon with one pair of vertices at each horizontal screen location, like this:
1 3 5 7 9
0 2 4 6 8
Points with even numbers represent the line of pixels at the bottom of the screen. Points with odd numbers represent the vertical pixel location of the terrain. This information is copied into a point array, which is rendered with glVertexPointer, glColorPointer, and glDrawArrays, as a triangle strip, like this:
// prepare vertex buffer: one bottom/top vertex pair per horizontal pixel
for (int i=0,j,k=0,K=480;k<=K;k++) {
    j = (k-(int)offsetX+480)%480; // wrap the terrain horizontally
    vGroundLevel[i++] = k;
    vGroundLevel[i++] = offsetY>0 ? 0 : offsetY; // bottom edge of the strip
    vGroundLevel[i++] = k;
    vGroundLevel[i++] = [env groundLevelAtIndex:j]+offsetY; // terrain height at this column
}
....
// render vertex buffer
glVertexPointer(2, GL_FLOAT, 0, vGroundLevel);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, cGround);
glEnableClientState(GL_COLOR_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 2*480);
The offsetX and offsetY parameters reposition the terrain relative to the screen, letting the player move around the environment interactively while maintaining the game world as a wrap-around continuum.