Following on from my previous post, I have a separate issue that I want to check.
I want to redo my calculations for having objects orbit a sphere, each on its own unique orbit, at various heights (radii) and angles (orbital planes). My previous post explains the method I am using; however, I am getting quite a bit of unexpected behaviour, which I think is due to the mapping method I use. Objects start "turning" on their own and heading off in changing directions without being commanded to.
SETUP:
I have a flat 2D grid of 1000x1000 where I keep track of objects
I then map these to the sphere and convert to 3D coords.
However, this is probably causing issues, since a flat 2D grid cannot be wrapped onto a sphere without huge distortion, so I would need to convert it to a Mercator projection and then wrap that onto the sphere.
Before it gets too complicated, would it be far easier to just deal with Matrix4 or quaternions and represent everything by rotations instead? I still need to keep track of all objects and their positions on the sphere (for simplicity, let's say on the surface), but I need to be able to modify the objects' orbits, for example their height or direction (orbital plane).
Can someone suggest a cleaner way to represent these values locally? I can see this getting very messy otherwise.
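For what it's worth, this is roughly the kind of representation I'm imagining if I go the rotation route (just a sketch with made-up names, not code from my project):

    // Hypothetical sketch: one orbit = a radius plus an orthonormal basis of its
    // orbital plane, so a position is just a point on a tilted circle and no
    // 2D-grid-to-sphere mapping (with its distortion) is needed at all.
    public class Orbit {
        double radius;        // orbital height, measured from the sphere's centre
        double angle;         // current position along the orbit, in radians
        double angularSpeed;  // radians per second
        double[] u, v;        // orthonormal 3D vectors spanning the orbital plane

        void update(double dt) {
            angle += angularSpeed * dt;
        }

        // 3D position: a circle of the given radius lying in the plane spanned by u and v.
        double[] position() {
            double c = Math.cos(angle), s = Math.sin(angle);
            return new double[] {
                radius * (c * u[0] + s * v[0]),
                radius * (c * u[1] + s * v[1]),
                radius * (c * u[2] + s * v[2])
            };
        }
    }

Changing the height would just be changing radius, and changing the orbital plane would just be rotating u and v (with a quaternion or Matrix4), so nothing would ever have to round-trip through the 2D map. Does that sound like the right direction?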
Many thanks,
J
I'm attempting to render a hemisphere in Java. However, I want to render the slice that is defined by two angles: azimuth and elevation. Since I'm defining a slice, I cannot (to my knowledge) use any built-in primitives. If the azimuth range is defined as 0-360 and the elevation range is defined as 0-70, this will be a hemisphere with an upside-down cone-shaped hole in the top.
When rendering this inside "cone", I have chosen to do it as triangles in 5-degree increments. This means that with a 360-degree cone there are 73 different vertices (if I did the math correctly: 360/5 = 72 slices, with the origin, or tip, of the cone shared by all slices, and all other vertices shared by adjacent triangle slices).
My question:
Is it more efficient to render these as a single polygon with many vertices, or as many triangles with only 3 vertices each? If I do a single polygon, will I still have to include all three points for each triangle, or, if it is a shared vertex, would I only include it once? Sorry, my graphics rendering knowledge is limited. Also sorry for being so verbose; I'm hoping someone may spot something erroneous in my thought process which may clear things up either way.
First - Use Google to find an algorithm to create a sphere that is not a primitive.
Second - Somewhere down the chain - triangles will be used. Most likely by the underlying library. But for you - it depends upon whether or not you plan to chop up the created region. If you are not going to subdivide the region further I would just make it one polygon. Actually, after thinking about it for a second - you can always divide up the polygon afterwards too. So just make it one polygon.
I thought about it some more and decided to amend this answer. There are two ways you can create a polygon in OpenGL: you can either create it as a triangular mesh or as an outline polygon. So if you were asking "Should I use a triangular mesh or an outline polygon?", I would say use the triangular mesh. It is a lot easier to break up a triangular mesh than a polygon outline since, to break the mesh, all you have to do is stop at one of the points, include the last two points in the new object, and continue on down the triangular mesh. An outline polygon requires you to go both left and right around the polygon to locate the two points where the break occurs. If that is clear. If not, say so.
Update: 12:05pm
When making a polygon you can use a triangular mesh or a polygon outline. The outline is mainly good for 2D, whereas the triangular mesh works in both 2D and 3D systems. If you have any kind of polygon bigger than just three points, it is a good idea to put them all into an array. This allows you to use the built-in routines that take an array and simply go through it to build your polygon. By putting everything into an array you also make it easier on yourself to add, remove, or adjust points. All you do is change the array entry and then call the same routine to draw everything again (which should be just a single call to a function).
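To make that concrete, here is a rough sketch of the kind of array I mean for the cone from the question, assuming a unit-radius sphere centred at the origin, elevation measured up from the horizon, and 5-degree slices (the helper name and layout are just illustrative):

    // Build the cone's vertices (tip + one rim vertex per slice) into one array,
    // ready to hand to a triangle-fan style draw call. For 5-degree steps this
    // gives the 73 unique vertices from the question (tip + 72 rim vertices),
    // with the first rim vertex repeated once at the end to close the fan.
    float[] buildConeFan(float elevationDeg, float stepDeg) {
        int slices = (int) (360 / stepDeg);
        float[] verts = new float[(slices + 2) * 3];
        verts[0] = 0; verts[1] = 0; verts[2] = 0;   // tip of the cone at the sphere's centre
        double elev = Math.toRadians(elevationDeg);
        for (int i = 0; i <= slices; i++) {
            double az = Math.toRadians(i * stepDeg);
            int k = (i + 1) * 3;
            verts[k]     = (float) (Math.cos(elev) * Math.cos(az)); // x on the unit sphere
            verts[k + 1] = (float) (Math.cos(elev) * Math.sin(az)); // y
            verts[k + 2] = (float) (Math.sin(elev));                // z (up)
        }
        return verts;
    }

Adjusting the slice, the step size, or individual points then only means changing this array and calling your draw routine again.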
OK, so I was asked to do a 2D graph calculator as a college project. I was able to do one using Java Swing components, rendering an array of x,y values in real time. However, there are several problems with this approach:
The array has a limit to the number of values it can hold.
It's not very good in terms of performance, because it has to loop through the whole array at 60 fps or so.
My way of fixing the first problem would be to use a dynamic array list instead of a regular array, but there is still the second problem. The idea of rendering one big image and using it as a 'map' of the graph sounds like a solution, however it brings its own complications, like:
What happens when the field of view goes out of the image boundaries?
How to know what values of the graph it should render to the image?
Now, once again, I face another decision. Since we are now talking about more advanced graphics tricks, I had the idea of using LWJGL as my graphics library instead of Swing, which made sense from the word go, so now I can use the 3D camera system to render an orthographic view of the 2D graph. For the first problem, I thought of making chunks of images so that when we leave the FOV there is still an image to see. On the second problem I am stuck: because the graph works as a function of x, I don't know what my y value is until the equation has been calculated, so technically I could check whether the y value reaches the bottom of the image and whether it's lower than the top of the image (however, this is still not good for performance).
Now, say I have resolved all of the above, there is still one last problem: because I draw the graph as very little lines (two points each), how do I know how small the lines have to be in order to get a graph that is accurate yet still performs well, even when the function has some really wacky results?
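To make that concrete, this is roughly the kind of loop I mean, sampling the function once per pixel column of the visible x range (just a sketch, the names are made up):

    // Rough illustration of the "many little lines" idea: one sample per pixel
    // column of the visible x range, joined with straight line segments.
    // f is the user's function; drawLine stands in for however lines are drawn
    // (Swing, LWJGL, ...). Uses java.util.function.DoubleUnaryOperator.
    void drawGraph(DoubleUnaryOperator f, double viewLeft, double viewRight, int widthPx) {
        double step = (viewRight - viewLeft) / widthPx;
        double prevX = viewLeft, prevY = f.applyAsDouble(prevX);
        for (int i = 1; i <= widthPx; i++) {
            double x = viewLeft + i * step;
            double y = f.applyAsDouble(x);
            drawLine(prevX, prevY, x, y);
            prevX = x; prevY = y;
        }
    }

The question is whether one sample per pixel is enough when the function oscillates wildly, or overkill when it's nearly flat.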
Thanks to everyone, and I hope you can help me :)
I've been trying various ways of creating a two-dimensional tile-based game for a few months now. I have always had each tile be a separate object of a 'Tile' class, with the tile objects stored in a two-dimensional array. This has proven to be extremely impractical, mostly in terms of performance with many tiles being rendered at once. I have mitigated this by only rendering tiles within a certain distance of the player, but that isn't great either. I have also had problems with the objects throwing a NullPointerException when I try to edit a tile's values in-game; this has to do with the objects in the 2D array not being properly initialized.
Is there any other, simpler way of doing this? I can't imagine every tile-based game uses this exact approach; I must be overlooking something.
EDIT: Perhaps LWJGL just isn't the correct library to use? I am having similar problems with implementing a font system with LWJGL... typing out more than a sentence will bring down the FPS by 100 or even more.
For static objects (not going anywhere, staying where they are) 1 tile = 1 object is OK. That's how it was done in Wolf3d. For moving objects you have multiple options.
You can, if you really really want to, store object sub-parts in adjacent cells/tiles when an object isn't contained fully within just one of them and crosses one or more cell/tile boundaries. But that may not be very handy, as you'd need to split your objects into parts on the fly.
A more reasonable approach is to not store moving objects in cells/tiles at all and process them more or less independently of the static objects. But then you will need to have some code to determine object visibility. Actually, in graphics the most basic performance problems come from unnecessary calculations and rendering. Generally, you don't want to even try to render what's invisible. Likewise, if some computations (especially complex ones) can be moved outside of the innermost loops, they should be.
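For example, a visibility check can be as simple as this (a minimal sketch; the object and camera fields are made up):

    // Skip everything outside the camera's view rectangle before doing any
    // per-object work: no rendering and no further per-object math for it.
    for (GameObject obj : movingObjects) {
        boolean visible = obj.x + obj.width  >= camera.left && obj.x <= camera.right
                       && obj.y + obj.height >= camera.top  && obj.y <= camera.bottom;
        if (!visible) continue;
        obj.render();
    }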
Other than that, it's pretty hard to give any specific advice given so few details about what you're doing and how you're doing it, and without seeing the actual code. You should really try to make your questions specific.
A two-dimensional array of Tile objects should be fine. This is what most 2D games use, and you should certainly be able to get good enough performance out of OpenGL / LWJGL to render it at a good speed (100+ FPS).
Things to check:
Make sure you are clipping so that you only display the visible set of tiles (according to the screen width and height and the player's position); see the sketch after this list.
Make sure the code to draw each tile is fast... ideally you should be drawing just one textured square for each tile. In particular, you shouldn't be doing any complex operations on a per-tile basis in your rendering code.
If you're clever, you can draw multiple tiles in one OpenGL call with VBOs / clever use of texture coordinates etc. But this is probably unnecessary for a tile-based game.
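A sketch of the clipping point above (TILE_SIZE, the camera fields, and drawTile are placeholders for whatever your engine uses):

    // Only loop over the tiles that can actually appear on screen.
    int firstCol = Math.max(0, camera.x / TILE_SIZE);
    int firstRow = Math.max(0, camera.y / TILE_SIZE);
    int lastCol  = Math.min(mapWidth  - 1, (camera.x + screenWidth)  / TILE_SIZE);
    int lastRow  = Math.min(mapHeight - 1, (camera.y + screenHeight) / TILE_SIZE);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            drawTile(tiles[row][col], col * TILE_SIZE - camera.x, row * TILE_SIZE - camera.y);
        }
    }

Even on a huge map, this keeps the per-frame work proportional to what fits on screen.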
I'm making a game in Java with a few other people, but we are stuck on one part of it: the collision detection. The game is an RPG, and I know how to do the collision detection for the characters using Rectangles, but what I don't know how to do is the collision detection for the maps. What I mean by that is stopping the character from walking over trees, water, and that kind of thing; using rectangles doesn't seem like the best option here.
Well, to explain what the game maps are going to look like, here is an example: http://i980.photobucket.com/albums/ae287/gordsmash/7-8.jpg
Now, I could use rectangles to get bounds and stop the player from walking over the trees and water, but that would take a lot of them.
But is there another easier way to prevent the player from walking over the trees and obstacles besides using Rectangles?
Here's a simple way, though it uses more memory and you do the work up front: just create a background collision mask that denotes, in binary form, the areas characters are permitted to walk on. You can store that in some sort of compressed bitmap form. The lookup is then very simple and very quick.
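A minimal sketch of what that lookup could look like (the mask here is per map pixel, but per tile works the same way; all names are illustrative):

    // One boolean per map position, true where characters may walk.
    // Built once when the map loads, e.g. from a hand-painted mask image.
    boolean[][] walkable = new boolean[mapHeight][mapWidth];

    boolean canWalkTo(int x, int y) {
        if (x < 0 || y < 0 || x >= mapWidth || y >= mapHeight) return false;
        return walkable[y][x];   // constant-time lookup, no per-obstacle rectangles
    }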
Rectangle collision detection seems to make sense; however, you may alternatively try sphere-sphere collision detection, which can detect collisions much more quickly. You don't even need a square root for the distance computations, since you can compare the squared distances to see if the spheres overlap. This is a very fast method, and given the nature of your game it could work very well.
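In 2D that squared-distance test is just (a small sketch):

    // Circle-vs-circle overlap without a square root: compare the squared distance
    // between centres against the squared sum of the radii.
    boolean circlesOverlap(double x1, double y1, double r1,
                           double x2, double y2, double r2) {
        double dx = x2 - x1, dy = y2 - y1;
        double rSum = r1 + r2;
        return dx * dx + dy * dy <= rSum * rSum;
    }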
ALSO! Assuming you have numerous tiles which you are colliding against, consider some method of spatial partitioning. Let me give you an easy example: subdivide your map into several rectangles (http://www.staff.ncl.ac.uk/qiuhua.liang/Research/Pic_research/mine_grid.jpg) and then, depending on which rectangular area your player is currently residing in, check collision only against the tiles which are located within that area.
You may take it a step further: if any given area contains more tiles than a threshold you set, subdivide that area further into smaller areas within it.
The idea behind such subdivision is called a quadtree, and there is a huge quantity of papers and tutorials on the subject; you'll catch on very quickly.
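As a very small sketch of the simplest form of this (a uniform grid of cells; a quadtree just keeps subdividing the crowded cells), with CELL_SIZE and the surrounding class left to your imagination:

    // Build once: bucket every obstacle rectangle by the grid cell it falls in.
    Map<String, List<Rectangle>> cells = new HashMap<>();

    String cellKey(int x, int y) {
        return (x / CELL_SIZE) + "," + (y / CELL_SIZE);
    }

    // Per frame: only the obstacles in the player's current cell need a real test.
    List<Rectangle> candidates(Rectangle player) {
        return cells.getOrDefault(cellKey(player.x, player.y), Collections.emptyList());
    }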
Please let me know if you have any questions.
There are many solutions to this type of problem, but for what you're doing I believe the best course of action would be to use a tile engine. This would have been commonly used in similar games in the past (think any RPG on the SNES) and it provides you with a quick and easy means of both level/map design and collision detection.
The basic concept of a tile engine is that objects are stored in a 2D array and when your player (or any other moving game entity) attempts to move into a new tile you perform a simple check to see if the object in that tile is passable or not (for instance, if it's grass, the player may move; if it's a treasure chest, the player cannot move). This will greatly simplify checking for collisions (as a naive check of a list of entities will have O(n^2) performance). This picture might give you an idea of what I'm talking about. The lines have been added to illustrate a point, but of course when you're playing the game you don't actively think of everything as being composed of individual 32x32 pixel tiles.
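The per-move check is then tiny (a sketch; TILE_SIZE, the map array, and isPassable() stand in for whatever your tile class provides):

    // Before moving, look up the destination tile and ask whether it's passable.
    boolean canMoveTo(int pixelX, int pixelY) {
        if (pixelX < 0 || pixelY < 0) return false;
        int col = pixelX / TILE_SIZE;
        int row = pixelY / TILE_SIZE;
        if (row >= map.length || col >= map[0].length) return false;
        return map[row][col].isPassable();   // e.g. grass -> true, water or tree -> false
    }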
While I don't personally have any experience with tile engines in Java, it looks like Mappy supports Java, and I've heard good things about PulpCore. You're more than welcome to create your own engine, of course, but you have to decide whether your effort is better spent reinventing the wheel (which, of course, makes it your wheel, and that is rather satisfying) or spending your time making a better game.
Hey, I'm currently trying to extract information from a 3D array, where each entry represents a coordinate, in order to draw something out of it. The problem is that the array is ridiculously large (and there are several of them), meaning I can't actually draw all of it.
What I'm trying to accomplish, then, is just to draw a representation of the outside coordinates, a shell of the array if you like. This array is not full; it can have large empty spaces with only a few pixels set, or large clusters of pixel data grouped together. I do not know what kind of shape to expect (it could be a simple cube, or a complex concave mesh), and I am struggling to come up with an algorithm to effectively extract the border. This array effectively stores a set of points in 3D space.
I thought of creating six 2D meshes (one for each side of the 3D array), getting the shallowest point they can find for each position, and then drawing them separately. As I said, however, this 3D shape could be concave, which creates problems with this approach. Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Then I thought of analysing the array slice by slice and creating two meshes from the slice data. I believe this should work for any type of shape; however, I'm struggling to find an algorithm which accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you will run into problems if they have any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly not a continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, as it deals with similar problems to the one I have, but couldn't really find anything that I could use.
If anyone has any experience with this sort of problem, or any valuable input, could you please point me in the right direction?
P.S. I would prefer to get a closed representation of the shell, hence my earlier 2D mesh approach. However, an approach that simply gives me the shell points, without any connections between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree, assuming you have some connectivity between nearby points.
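In case octrees are new to you, the bare-bones idea is this (a sketch only; all names are illustrative, and real libraries handle far more):

    // Each node covers a cubic region of space and, once it holds more than a
    // few points, splits into eight children, giving you spatial locality for free.
    class OctreeNode {
        double cx, cy, cz, halfSize;              // centre and half-extent of this cube
        List<double[]> points = new ArrayList<>();
        OctreeNode[] children;                    // null until the node is split

        void insert(double[] p) {
            if (children == null) {
                points.add(p);
                if (points.size() > 8 && halfSize > 1) split();
                return;
            }
            childFor(p).insert(p);
        }

        OctreeNode childFor(double[] p) {
            int i = (p[0] >= cx ? 1 : 0) | (p[1] >= cy ? 2 : 0) | (p[2] >= cz ? 4 : 0);
            return children[i];
        }

        void split() {
            children = new OctreeNode[8];
            for (int i = 0; i < 8; i++) {
                OctreeNode c = new OctreeNode();
                c.halfSize = halfSize / 2;
                c.cx = cx + ((i & 1) != 0 ? c.halfSize : -c.halfSize);
                c.cy = cy + ((i & 2) != 0 ? c.halfSize : -c.halfSize);
                c.cz = cz + ((i & 4) != 0 ? c.halfSize : -c.halfSize);
                children[i] = c;
            }
            for (double[] p : points) childFor(p).insert(p);
            points.clear();
        }
    }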
Alternatively, you may then turn to more rigorous algorithms like raycasting or marching cubes.
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map these coordinates to a voxel space and increase the density (value) of each voxel for each data point. Once you have your volume fully defined, you can then use the marching cubes algorithm to produce a 3D surface mesh for a given threshold value (iso value). The resulting surface doesn't need to be continuous, but it will wrap all voxels with values > isovalue inside. The 2D equivalent is a heatmap... You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
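The voxelisation step is only a few lines (a sketch; the grid resolution res, the point list, and the bounding-box values are assumptions here):

    // Map each point of the cloud into the voxel grid via the bounding box and
    // bump that voxel's density. Marching cubes then runs over volume with an iso value.
    float[][][] volume = new float[res][res][res];

    for (double[] p : points) {
        int vx = (int) ((p[0] - minX) / (maxX - minX) * (res - 1));
        int vy = (int) ((p[1] - minY) / (maxY - minY) * (res - 1));
        int vz = (int) ((p[2] - minZ) / (maxZ - minZ) * (res - 1));
        volume[vx][vy][vz] += 1.0f;
    }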
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach...
"Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape."
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically. The possibility of your data representing a cylinder with a cone-shaped hole is just as likely as the vertices representing a cone with a disk attached to the top.
"I do not know what kind of shape to expect (could be a simple cube..."
Again, without further information on how the data was generated, 8 vertices arranged in the form of a cube might as well represent 2 crossed squares. If you knew that the data was generated by, for example, a rotating 3D scanner of some sort, then that would at least be a start.