Engine: LWJGL
So, I am wondering: how do I draw a triangle's back face? I know there is something going on with the
glTexCoord();
method(s), but I cannot understand how I am supposed to do this.
The facing of a triangle has nothing to do with texture coords. It is solely defined by the winding of the vertices in window space.
By default, GL uses the rule that if the three vertices (in the order in which they are specified when drawing) are seen - in the final picture - in counter-clockwise order, then this is treated as the front face. This facing rule can be changed via glFrontFace(). Furthermore, the GL can be told to never draw a specific face via glEnable(GL_CULL_FACE). Which faces are culled is controlled by glCullFace(). Typically, backface culling is used for closed objects (without transparency), as in such cases the back-facing triangles are never seen and do not need to be processed.
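In LWJGL this maps directly to the GL11 calls; a minimal sketch (these just set the defaults explicitly and enable culling):

import static org.lwjgl.opengl.GL11.*;

glFrontFace(GL_CCW);    // counter-clockwise winding is the front face (the GL default)
glEnable(GL_CULL_FACE); // turn on face culling
glCullFace(GL_BACK);    // never draw back-facing triangles (also the default)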
So to control the facing of your triangles, the order in which the vertices are specified does matter. Furthermore, the transformations you use define which side you are actually seeing.
The winding should especially be consistent within and across objects. Two triangles sharing an edge, like triangles A,B,C and B,C,D, have a consistent winding if the shared edge is specified in mutually reverse order. That is, if you specify the first triangle's vertices in the order A,B,C, you must specify the vertices of the latter triangle in a way such that C,B is used, like C,B,D or D,C,B, or B,D,C.
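As an illustration in LWJGL immediate mode (the coordinates ax ... dz are placeholders, not from the question), two triangles sharing the edge B,C with consistent winding look like this:

glBegin(GL_TRIANGLES);
glVertex3f(ax, ay, az); // triangle 1: A, B, C
glVertex3f(bx, by, bz);
glVertex3f(cx, cy, cz);
glVertex3f(cx, cy, cz); // triangle 2 traverses the shared edge in reverse: C, B, D
glVertex3f(bx, by, bz);
glVertex3f(dx, dy, dz);
glEnd();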
I'm attempting to render a hemisphere in Java. However, I want to render only the slice that is defined by two angles - azimuth and elevation. Since I'm defining a slice, I cannot (to my knowledge) use any built-in primitives. If the azimuth range is defined as 0-360 and the elevation range as 0-70, this will be a hemisphere with an upside-down cone-shaped hole in the top.
When rendering this inside "cone", I have chosen to do it as triangles in 5 degree increments. This means that with a 360 degree cone there are 73 different vertices (if I did the math correctly: 360/5 = 72 slices, with the origin, or tip, of the cone shared by all slices, and all other vertices shared by adjacent triangle slices).
My question:
Is it more efficient to render these as a single polygon with many vertices, or as many triangles with only 3 vertices each? If I do a single polygon, will I still have to include all three points for each triangle, or, if it is a shared vertex, would I only include it once? Sorry, my graphics rendering knowledge is limited. Also sorry for being so verbose; I'm hoping someone may spot something erroneous in my thought process which may clear things up either way.
First - Use Google to find an algorithm to create a sphere that is not a primitive.
Second - Somewhere down the chain - triangles will be used. Most likely by the underlying library. But for you - it depends upon whether or not you plan to chop up the created region. If you are not going to subdivide the region further I would just make it one polygon. Actually, after thinking about it for a second - you can always divide up the polygon afterwards too. So just make it one polygon.
I thought about it some more and decided to amend this answer. There are two ways you can create a polygon in OpenGL: either as a triangular mesh or as an outline polygon. So if you were asking "Should I use a triangular mesh or an outline polygon?" I would say use the triangular mesh. It is a lot easier to break up the triangular mesh than a polygon outline since, to break the mesh, all you have to do is stop at one of the points, include the last two points in the new object, and continue on down the triangular mesh. An outline polygon requires you to go both left and right around the polygon to locate the two points where the break occurs. I hope that is clear; if not, say so.
Update: 12:05pm
When making a polygon you can use a triangular mesh or a polygon outline. The outline is mainly good for 2D, whereas the triangular mesh works in both 2D and 3D systems. If you have any kind of polygon at all bigger than just three points, then it is a good idea to put the points into an array. This allows you to use the built-in routines that take an array and simply go through it to build your polygon. By putting everything into an array you also make it easier on yourself to add, remove, or adjust points. All you do is change the array entry and then call the same routine to draw everything again. (Which should be just a single call to a function.)
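For the cone in the question, a triangle fan is the classic array-based routine for exactly this layout. A minimal sketch in LWJGL immediate mode, assuming a hypothetical verts array laid out x, y, z per vertex with the cone's tip first - each shared vertex is then specified only once:

glBegin(GL_TRIANGLE_FAN);
for (int i = 0; i < verts.length; i += 3) {
    glVertex3f(verts[i], verts[i + 1], verts[i + 2]); // tip first, then the rim vertices in order
}
glEnd();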
I'm making a program that needs to detect collisions between two non-axis-aligned boxes. My program only needs an indication of whether two non-axis-aligned boxes are colliding. I would like the most simple and efficient algorithm possible.
Here I visualized the problem.
So as you can see, squares 1, 2 and 3 would return true because they collide with the green squares.
4 would return false because it isn't colliding.
I do have all the boxes of both colors in separate array lists.
Does anybody know a library or algorithm for this problem? Thanks in advance.
Check out the Area class in the java.awt.geom package.
http://docs.oracle.com/javase/6/docs/api/java/awt/geom/Area.html
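A minimal sketch of the idea (sizes, positions and rotation angles are made up for illustration):

import java.awt.geom.AffineTransform;
import java.awt.geom.Area;
import java.awt.geom.Rectangle2D;

Area a = new Area(AffineTransform.getRotateInstance(0.3)
        .createTransformedShape(new Rectangle2D.Float(0, 0, 40, 40)));
Area b = new Area(AffineTransform.getRotateInstance(-0.2)
        .createTransformedShape(new Rectangle2D.Float(20, 20, 40, 40)));
a.intersect(b);                    // a now holds the intersection of both shapes
boolean colliding = !a.isEmpty();  // a non-empty intersection means they collide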
I don't know how "easy" your game really is or how many shapes you'd have to check (I'm thinking efficiency here), but if you have your different color shapes in different lists, a kind of brute-force iteration may work for you. I have no clue if it would be efficient enough for you. I use Box2D to tell me the collisions, but that sounds like it may be overkill for your case.
The brute-force method I'm thinking of would be to utilize libGDX's Intersector class (check out the API, it has lots of methods). Iterate through your rectangles, comparing each to the others. Something like intersectRectangles() gives you a boolean if two rectangles overlap (i.e., collide).
This may be too inefficient/hacky, and a physics library may be too much. So one of the other answers provided may be the sweet spot.
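For the rotated boxes in the question, the Polygon class plus Intersector.overlapConvexPolygons is probably the right pairing, since intersectRectangles only handles axis-aligned rectangles. A sketch with made-up vertex data:

import com.badlogic.gdx.math.Intersector;
import com.badlogic.gdx.math.Polygon;

Polygon red = new Polygon(new float[] {0, 0, 40, 0, 40, 40, 0, 40});
red.setRotation(30f); // degrees, around the polygon's origin; this makes it non-axis-aligned
Polygon green = new Polygon(new float[] {20, 20, 60, 20, 60, 60, 20, 60});
boolean colliding = Intersector.overlapConvexPolygons(red, green);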
A commonly-used approach involves quadtrees. There's a good write-up and tutorial here, which explains how to use quadtrees to perform collision detection in 2D space.
The general idea is that your game area keeps being partitioned into four as you add objects. Each partition is called a node, and each node maintains references to the objects that exist in the corresponding partition. Objects are placed into nodes based on where they are in the 2D space. If an object does not fit cleanly into a single partition, it is kept in the parent node. Using this method, you don't have to perform an expensive check against every other object in your 2D space, because you can be sure that objects in different nodes (at the same level; i.e., sibling nodes) will not collide. So you only have to perform your collision detection on a small subset of objects.
Note that this just tells you which objects are occupying a certain area; it's a more efficient way to home in on objects that are likely to be colliding. After that you still have to check whether the objects are actually colliding. There is another write-up here that goes over various techniques to accomplish this.
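A condensed sketch of the structure described in that write-up (objects are stored as plain {x, y, w, h} float arrays here purely for brevity; real code would use a proper rectangle type):

import java.util.ArrayList;
import java.util.List;

class Quadtree {
    static final int MAX_OBJECTS = 10, MAX_LEVELS = 5;
    final int level;
    final float x, y, w, h;                          // this node's partition
    final List<float[]> objects = new ArrayList<>(); // objects as {x, y, w, h}
    final Quadtree[] nodes = new Quadtree[4];        // children, null until subdivided

    Quadtree(int level, float x, float y, float w, float h) {
        this.level = level; this.x = x; this.y = y; this.w = w; this.h = h;
    }

    // Child quadrant that fully contains o, or -1 if o straddles a boundary.
    int indexOf(float[] o) {
        float midX = x + w / 2, midY = y + h / 2;
        boolean left = o[0] + o[2] < midX, right = o[0] > midX;
        boolean top = o[1] + o[3] < midY, bottom = o[1] > midY;
        if (top)    return left ? 0 : right ? 1 : -1;
        if (bottom) return left ? 2 : right ? 3 : -1;
        return -1; // straddles: stays in this (parent) node
    }

    void insert(float[] o) {
        if (nodes[0] != null) {
            int i = indexOf(o);
            if (i != -1) { nodes[i].insert(o); return; }
        }
        objects.add(o);
        if (objects.size() > MAX_OBJECTS && level < MAX_LEVELS && nodes[0] == null) {
            float hw = w / 2, hh = h / 2; // split into four equal quadrants
            nodes[0] = new Quadtree(level + 1, x,      y,      hw, hh);
            nodes[1] = new Quadtree(level + 1, x + hw, y,      hw, hh);
            nodes[2] = new Quadtree(level + 1, x,      y + hh, hw, hh);
            nodes[3] = new Quadtree(level + 1, x + hw, y + hh, hw, hh);
            for (int i = objects.size() - 1; i >= 0; i--) { // push objects down where possible
                int idx = indexOf(objects.get(i));
                if (idx != -1) nodes[idx].insert(objects.remove(i));
            }
        }
    }

    // Collect the candidates that might collide with o (the small subset to test).
    void retrieve(List<float[]> out, float[] o) {
        int i = (nodes[0] != null) ? indexOf(o) : -1;
        if (i != -1) nodes[i].retrieve(out, o);
        out.addAll(objects);
    }
}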
There are two algorithms/data-structures you need to consider for this problem:
A spatial data-structure to store your rotated quads, to efficiently determine which pairs of quads need to be tested against each other. Other answers have already addressed this. If the number of quads is small enough then you can just test all the red quads against all the green quads, which is O(m * n).
An algorithm to perform the actual test of one rotated quad against another. One of the simplest is the Separating Axis Theorem.
The basic idea behind the SAT is that if you can find at least one line where all the points of one convex object are on one side of it, and all the points of the other are on the other side, then the two are not colliding. The potential lines that you need to test are just the edges of both of the objects.
To implement it you need a point-line test that tells you which side of an edge a point is on. This is done by calculating the normal to the edge, and then calculating the dot product of the edge normal and a vector from a point on the edge to the point you are testing. The sign of the dot product tells you which side the point is on (positive means outside the edge, for an outward-pointing normal). Whether you count zero (exactly on the line) as outside or inside depends on whether you want objects that are just touching but not penetrating to count as a collision; if you do, then the dot product must be strictly greater than zero to count as outside.
For example, if the points on the objects are in clockwise order, and edgeA and edgeB are the two endpoints of an edge on one object, and pointC is a point on the other object, the test is done like this (not using function calls, to show the math):
boolean isOutsideEdge(PointF edgeA, PointF edgeB, PointF pointC)
{
    // Outward normal of the edge A->B (for clockwise-ordered points).
    float normalX = edgeA.y - edgeB.y;
    float normalY = edgeB.x - edgeA.x;
    // Vector from a point on the edge to the point being tested.
    float vectorX = pointC.x - edgeA.x;
    float vectorY = pointC.y - edgeA.y;
    // A positive dot product means pointC lies on the outward side of the edge.
    return (normalX * vectorX) + (normalY * vectorY) > 0.0f;
}
Then the algorithm is:
For each edge on quad A, if all the corner points on quad B are on the outward facing side of the edge, then A and B are not colliding, stop processing and return false.
For each edge on quad B, if all the corner points on quad A are on the outward facing side of the edge, then A and B are not colliding, stop processing and return false.
If all those tests have been performed and none have returned false, then A and B are colliding, so return true.
The SAT can be generalized to arbitrary convex polygons.
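Putting those steps together, a minimal sketch that reuses the isOutsideEdge function above, with each quad given as four clockwise-ordered corners:

boolean quadsCollide(PointF[] quadA, PointF[] quadB) {
    // No separating edge found on either quad means the quads overlap.
    return !hasSeparatingEdge(quadA, quadB) && !hasSeparatingEdge(quadB, quadA);
}

boolean hasSeparatingEdge(PointF[] quad, PointF[] points) {
    for (int i = 0; i < quad.length; i++) {
        PointF a = quad[i];
        PointF b = quad[(i + 1) % quad.length]; // wrap around to close the quad
        boolean allOutside = true;
        for (PointF p : points) {
            if (!isOutsideEdge(a, b, p)) { allOutside = false; break; }
        }
        if (allOutside) return true; // every corner is outside this edge: separated
    }
    return false;
}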
So I decided to go with Box2D in the end. This was the best solution: because of the different mask qualifiers the objects didn't collide physically, but it could easily be checked whether they should be colliding.
I had to make my own ContactListener that overrides the default one. In it I could react however I wanted whenever any two objects collided.
Thanks everyone for the help.
Situation: I am programming a 2D isometric styled Java game with libGDX.
Right now I have a moveable player that properly collides with tiles of solid objects.
Problem:
Now it comes to the render order of the tiles. By default the library code renders from the bottom map layer to the top layer (ground is layer 0, the object layer is layer 1), which makes sense. I draw the player on top of that. This means that the player is always on top of everything, which doesn't make sense in some situations.
Goal: Since an isometric look implies a kind of 3D perspective, the player can be behind or in front of objects. So I have to come up with some code that decides whether the player is rendered behind or in front of an object. I have this fridge as an example:
I hope it is comprehensible what I mean by "logical collision". I have some glimpses of ideas for how to achieve this, but they would make a mess of the code. So I wanted to ask if anyone has experience with this or can point me to some good sources that could help.
Thanks for reading!
It depends on HOW you render. One thing that comes to mind is to render objects/tiles from top to bottom and left to right; that way, in most cases, the objects should end up in the correct order. Now, depending on how you make the graphics, there could be flaws in this. If so, you could also have a few layers of priority, to draw different parts of objects/tiles, or to draw some things before others. If you have tiles and objects as separate types, you could give objects the ability to draw before and after the tiles, to fulfill different needs. You could also let some tiles own objects for cases like this; however, that could be a waste of processor time compared to the other methods I mentioned.
Comparing object positions to tile positions is actually not very difficult, and is probably a sufficient method. Let's take a hypothetical situation where you have tiles that are 32 x 32 in size and an object at 25 x 18. The object would be in front of the tiles at offsets 0 x 0 and 1 x 0, but behind those at 0 x 1 and 1 x 1 (if we imagine the tiles start from the upper corner). Therefore, we first draw the tiles at 0 x 0 and 1 x 0, then the object, then the tiles at 0 x 1 and 1 x 1. It should naturally fall into its correct place, with fairly simple code logic.
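A rough sketch of that logic (drawTileRow, drawObject, objectY and rowCount are hypothetical names; tile size 32 as in the example):

int objectRow = (int) (objectY / 32);    // 18 / 32 == 0: the object sits in tile row 0
for (int row = 0; row < rowCount; row++) {
    drawTileRow(row);                    // the tiles of this row go down first
    if (row == objectRow) drawObject();  // then the object, before the rows below it
}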
Hope you got some ideas about how to implement it :)
Professional isometric-styled games are actually 3D-based, with an orthographic camera. This way you don't have the complex problem of 2D sprite sorting in an isometric context and can rely on the hardware's pixel-based z-buffering.
Nevertheless, if you want to realize an isometric game without the comfort of a 3D game engine, as in Java 2D, the approach of differently sorted layers doesn't work. The same goes for the painter's algorithm, because both are actually intended for purely 2D top-down games.
So how to cope with this dilemma?
Well, an approach would be to imitate the z-buffering technique at a coarser granularity: instead of considering each single pixel of each object in the scene, you use whole tiles as the sorting basis.
Like z-buffering, pixels/tiles are rendered in the order of their individual distance to the camera.
The distance of a tile-based object with coordinates (x, y, z) is proportional to:
d = y - x - z
Negative values are allowed: an object A with distance d(A) = -1 is closer to the camera and will hence be rendered after an object B with distance d(B) = 1, which is farther away.
The render steps for each render-cycle would therefore be:
Determine all objects which are visible (we don't want to render all objects in a huge world if we only see a small part of it)
Calculate the individual distances of each tile-based object in the scene with the given formula
Sort all visible objects by distance
Render all visible objects in descending order of distance
I tested this strategy myself in JavaFX 2D, and it turned out to be a simple, working technique for tile-based isometric rendering.
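A minimal sketch of steps 2-4, assuming a hypothetical RenderObject class that holds the tile coordinates x, y, z as floats and knows how to draw itself:

// Sort by d = y - x - z in descending order: the farthest objects are drawn first.
visible.sort((a, b) -> Float.compare(b.y - b.x - b.z, a.y - a.x - a.z));
for (RenderObject o : visible) {
    o.draw(); // closer objects overdraw farther ones, painter-style
}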
In my first libGDX 3D game I have now switched from createBox to createRect, to create only the visible faces (if a wall is on the left side of another wall, its right face is not visible...). I am creating 4 models:
frontFace
backFace
rightFace
leftFace
They are drawn almost as they actually should be.
But there is one big issue: the side faces are only visible if I look in the positive z-direction.
If I look from the other side (negative z-direction), they don't draw. The front and back faces only draw if I look at them in the negative x-direction.
Does this have something to do with the normals? I have set them to:
normal.x = 0;
normal.y = 1;
normal.z = 0;
Is that the error? How should I set the normals? What do they stand for? I have some basic idea about normal mapping for lighting; is that the same?
Important note: I have disabled backface culling, but it did not make any difference. View frustum culling is turned on. If any more information is needed, please post a comment and I will add it as soon as possible. Thanks
Perhaps not directly related, but still important to note: don't use createRect or createBox for anything other than debugging/testing. Instead combine multiple shapes into a single model/node/part. Or even better, use a modeling application where possible.
You didn't specify how you disabled backface culling. But keep in mind that you should not change the OpenGL state outside the shader/rendercontext (doing so will result in unpredictable behavior). To disable backface culling you can either specify it using the material attribute IntAttribute.CullFace (see: https://github.com/libgdx/libgdx/wiki/Material-and-environment#wiki-intattribute), the DefaultShader (or default ModelBatch) Config defaultCullFace member (see http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g3d/shaders/DefaultShader.Config.html#defaultCullFace) or the (deprecated) static DefaultShader#defaultCullFace member (see http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g3d/shaders/DefaultShader.html#defaultCullFace).
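A sketch of the two non-deprecated options named above:

import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.g3d.Material;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.attributes.IntAttribute;
import com.badlogic.gdx.graphics.g3d.shaders.DefaultShader;
import com.badlogic.gdx.graphics.g3d.utils.DefaultShaderProvider;

// Per material:
Material material = new Material(IntAttribute.createCullFace(GL20.GL_NONE));

// Or for everything rendered by this batch:
DefaultShader.Config config = new DefaultShader.Config();
config.defaultCullFace = GL20.GL_NONE;
ModelBatch batch = new ModelBatch(new DefaultShaderProvider(config));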
Whether a face is front or back is based on the vertex winding. Or in other words: the order in which you provide the corners of the rectangle is used to decide which side is front and which side is back. If you use one of the rect methods, you'll notice the arguments have either the 00, 01, 10 or 11 suffix. Here, when looking at the face, 00 is lower-left, 01 upper-left, 11 is upper-right and 10 is lower-right.
For a rectangle, its normal is the perpendicular facing outwards from the rectangle. For example, if you have a rectangle on the XZ plane with its front face on top, then its normal is X=0,Y=1,Z=0. If its front face is facing the bottom, then its normal is X=0,Y=-1,Z=0. Likewise, if you have a rectangle on the XY plane, its normal is either X=0,Y=0,Z=1 or X=0,Y=0,Z=-1. Note that the normal is not used for face culling; it is most commonly used for lighting etc. Specifying an incorrect/opposite normal will not cause the face to be culled (it might cause incorrect/black lighting, though).
For your purpose I'd recommend using the Decal class. Decals are bitmap sprites that exist in a 3D scene. This article is about the use of decals in libGDX. I hope it is what you wanted.
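A minimal sketch (the region and camera variables are assumed to exist already):

import com.badlogic.gdx.graphics.g3d.decals.CameraGroupStrategy;
import com.badlogic.gdx.graphics.g3d.decals.Decal;
import com.badlogic.gdx.graphics.g3d.decals.DecalBatch;

// region is a TextureRegion, camera your scene camera
Decal decal = Decal.newDecal(1f, 1f, region, true); // width, height, region, transparency
decal.setPosition(0f, 0f, 0f);
DecalBatch decalBatch = new DecalBatch(new CameraGroupStrategy(camera));
decalBatch.add(decal);
decalBatch.flush(); // render all queued decals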
Hey, I'm currently trying to extract information from a 3D array, where each entry represents a coordinate, in order to draw something from it. The problem is that the array is ridiculously large (and there are several of them), meaning I can't actually draw all of it.
What I'm trying to accomplish, then, is to draw just a representation of the outside coordinates - a shell of the array, if you like. The array is not full: it can have large empty spaces with only a few pixels set, or large clusters of pixel data grouped together. I do not know what kind of shape to expect (it could be a simple cube, or a complex concave mesh), and I am struggling to come up with an algorithm to effectively extract the border. This array effectively stores a set of points in 3D space.
I thought of creating six 2D meshes (one for each side of the 3D array), getting the shallowest point they can find for each position, and then drawing them separately. As I said, however, this 3D shape could be concave, which creates problems with this approach. Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Then I thought of analysing the array slice by slice and creating two meshes from the slice data. I believe this should work for any type of shape; however, I'm struggling to find an algorithm which accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you will run into problems if they have any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly no continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, as it deals with similar problems to the one I have, but couldn't really find anything that I could use.
If anyone has any experience with this sort of problem, or any valuable input, could you please point me in the right direction?
P.S. I would prefer to get a closed representation of the shell (hence my earlier 2D mesh approach), but an approach that simply gives me the shell points, without any connection between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree - assuming you have some connectivity between nearby points.
Alternatively, you may then turn to more rigorous algorithms like raycasting or marching cubes.
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map its coordinates to a voxel space and increase the density (value) of each voxel for each data point. Once you have your volume fully defined, you can then use the marching cubes algorithm to produce a 3D surface mesh for a given threshold value (iso value). The resulting surface doesn't need to be continuous, but it will wrap all voxels with values > isovalue inside it. The 2D equivalent is a heat map... You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach...
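A rough sketch of the point-to-voxel mapping described above (RES, the points list, and the bounding-box values minX ... maxZ are assumptions for illustration):

float[] density = new float[RES * RES * RES];
for (float[] p : points) { // each p = {x, y, z}, inside the known bounding box
    int ix = (int) ((p[0] - minX) / (maxX - minX) * (RES - 1));
    int iy = (int) ((p[1] - minY) / (maxY - minY) * (RES - 1));
    int iz = (int) ((p[2] - minZ) / (maxZ - minZ) * (RES - 1));
    density[ix + iy * RES + iz * RES * RES] += 1f; // accumulate voxel density
}
// Hand `density` to a marching-cubes implementation with a chosen iso value.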
Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically. Your data representing a cylinder with a cone-shaped hole is just as likely as the vertices representing a cone with a disk attached to the top.
I do not know what kind of shape to expect (could be a simple cube...
Again, without further information on how the data was generated, 8 vertices arranged in the form of a cube might as well represent 2 crossed squares. If you knew that the data was generated by, for example, a rotating 3D scanner of some sort, then that would at least be a start.