How to implement a logical render order of isometric tiles? - java

Situation: I am programming a 2D isometric styled Java game with libGDX.
Right now I have a moveable player that properly collides with tiles of solid objects.
Problem:
Now it comes to the render order of the tiles. By default the library renders from the bottom map layer to the top layer (ground is layer 0, the object layer is layer 1), which makes sense. I draw the player on top of that, which means the player is always drawn over everything, and that looks wrong in some situations.
Goal: Since an isometric look implies a kind of 3D perspective, the player can be behind or in front of objects, so I need code that decides whether the player is rendered behind or in front of a given object. Take a fridge as an example: standing behind it, the player should be hidden by it; standing in front of it, the player should cover it.
I hope it is comprehensible what I mean by a "logical" render order. I have some vague ideas of how to achieve it, but they would make a mess of the code. So I wanted to ask whether anyone has experience with this, or can point me to some good sources that could help.
Thanks for reading!

It depends HOW you render. One thing that comes to mind is to render objects/tiles from top to bottom and left to right; that way, in most cases, the objects end up in the correct order. Depending on how you make the graphics, there can be flaws in this. If so, you could also have a few priority layers for drawing different parts of objects/tiles, or for drawing some things before others. If tiles and objects are separate types, you could give objects the ability to draw both before and after the tiles, to cover different needs. You could also let some tiles own objects, for cases like this, although that could be a waste of processor time compared to the other methods I mentioned.
Comparing object positions to tile positions is not actually very difficult. Let's take a hypothetical situation where the tiles are 32 x 32 in size and there is an object at 25 x 18. The object would be in front of the tiles at offsets 0 x 0 and 1 x 0, but behind 0 x 1 and 1 x 1 (if we imagine the tiles starting from the upper corner). Therefore we first draw the tiles at 0 x 0 and 1 x 0, then the object, then the tiles at 0 x 1 and 1 x 1. It naturally falls into its correct place with fairly simple code logic.
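A minimal sketch of that interleaving in Java, assuming 32 x 32 tiles; `drawTile`, `drawObject` and `GameObject` are hypothetical stand-ins for whatever rendering calls and types you already have:

```java
// Sketch: draw the map row by row (top to bottom) and slot each object
// in right after the row of tiles that lies behind it.
class RowInterleavedRenderer {
    static final int TILE_SIZE = 32;           // tile size in pixels

    void render(int rows, int cols, java.util.List<GameObject> objects) {
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                drawTile(col, row);            // all tiles of this row first
            }
            for (GameObject obj : objects) {
                // An object at y = 18 sits in row 18 / 32 = 0, so it is
                // drawn after row 0 but before row 1 and lands in between.
                if ((int) (obj.y / TILE_SIZE) == row) {
                    drawObject(obj);
                }
            }
        }
    }

    void drawTile(int col, int row) { /* real tile drawing goes here */ }
    void drawObject(GameObject obj) { /* real sprite drawing goes here */ }
}

class GameObject { float x, y; }
```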
Hope you got some ideas about how to implement it :)

Professional isometric-styled games are actually 3D based, with an orthographic camera. This way you don't have the complex problem of 2D sprite sorting in an isometric context and can rely on the hardware's pixel-based z-buffering.
Nevertheless, if you want to build an isometric game without the comfort of a 3D game engine, as in Java 2D, the approach of differently sorted layers doesn't work. The same goes for the painter's algorithm, because both are really intended for pure 2D top-down games.
So how to cope with this dilemma?
Well, one approach is to imitate the z-buffering technique at a coarser granularity: instead of considering every single pixel of every object in the scene, you use tiles as the sorting basis.
As with z-buffering, pixels/tiles are rendered in the order of their individual distance to the camera:
The distance formula of a tile-based object with coordinates (x, y, z) is proportional to:
d = y - x - z
Negative values are allowed, so an object A with distance d(A) = -1 is closer to the camera and will hence be rendered after an object B with distance d(B) = 1, which is farther away ...
The render steps for each render cycle are therefore:
1. Determine all visible objects (we don't wanna render all objects in a huge world if we only see a small part of it)
2. Calculate the individual distance of each tile-based object in the scene with the given formula
3. Sort all visible objects by distance
4. Render all visible objects in descending order (see the sketch below)
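A minimal sketch of those four steps in Java; `Renderable`, `world` and `camera` are hypothetical stand-ins for whatever scene types you already have:

```java
// Sketch of one render cycle: cull, compute d = y - x - z, sort,
// then draw from farthest to nearest (descending d, painter order).
java.util.List<Renderable> visible = new java.util.ArrayList<>();
for (Renderable r : world.allObjects()) {
    if (camera.sees(r)) visible.add(r);          // step 1: culling
}
// steps 2 + 3: sort by distance, farthest (largest d) first
visible.sort((a, b) -> Integer.compare(b.y - b.x - b.z,    // d(b)
                                       a.y - a.x - a.z));  // d(a)
for (Renderable r : visible) {
    r.render();                                  // step 4: far-to-near draw
}
```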
I tested this strategy myself in JavaFX 2D; it is a simple, working technique for tile-based isometric rendering.

Related

Creating 2D Angled Top Down Terrain Instead of Fully Flat

Similar to the game Factorio, I'm trying to create "3D" terrain, but of course in 2D. Factorio seems to do this very well, producing terrain where you can see the edges, and they are very clearly curved. In my own 2D game I've been trying to think of how to do the same thing, but every way I can think of seems slow or CPU-intensive.
Currently my terrain is simply 2D quads, textured and drawn on screen; each quad is 16x16 (except the water, which is technically a background, but that's not important now). How could I even begin to change my terrain to look more like Factorio or other "2.5D" games? Do they simply use different textures and check where a tile sits relative to other tiles, or do they take a different approach?
Thanks in advance for your help!
I am a Factorio dev but I have not done this, so I can only tell you what I know generally.
There is a basic way to do it and then there are optional improvements.
Either way you will need two things:
Set of textures for every situation you want to handle
Set of rules "local topology -> texture"
So you have your 2D tile map, and you move a window across it; whenever it matches a pattern, you apply the appropriate texture.
You probably wouldn't want to do that on the fly every tick, but rather calculate it all when you generate the map (or map segment; Factorio generates new areas when needed). A sketch of such a rule lookup follows below.
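To make the "local topology -> texture" idea concrete, here is a minimal sketch assuming a boolean land/water grid. A 4-neighbour bitmask gives 16 cases; using all 8 neighbours gives 256, which is roughly where the "100+ rules" figure below comes from. The names and the table layout are assumptions, not Factorio's actual code:

```java
// Sketch: encode the four neighbours of a tile as a bitmask and use it
// as an index into a rule/texture table, computed at map-generation time.
int shoreMask(boolean[][] land, int x, int y) {
    int mask = 0;
    if (land[y - 1][x]) mask |= 1;    // north neighbour is land
    if (land[y][x + 1]) mask |= 2;    // east
    if (land[y + 1][x]) mask |= 4;    // south
    if (land[y][x - 1]) mask |= 8;    // west
    return mask;                      // 16 possible local topologies
}

// shoreTextures is a hypothetical 16-entry table of edge-texture indices.
int pickTextureIndex(boolean[][] land, int x, int y, int[] shoreTextures) {
    return shoreTextures[shoreMask(land, x, y)];
}
```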
I will be using your picture and my imba ms paint skills to demonstrate.
This is an example of such a rule: green is land, blue is water, grey is "I don't care".
In reality you will need a lot of such rules to cover all the cases (100+, I believe).
In your case, this rule would apply at the two highlighted spots.
This is all you need to have a working generator.
There is one decision that you need to make here. As you can see, the shoreline runs inside a tile, not between tiles, so you need to choose whether it will run through the last land tile or the last water tile. The picture can therefore be a result of either of these two maps (my template example would be from the left one):
Both choices are OK. In fact, Factorio switched from the "shoreline on land" on the left to the "shoreline on water" on the right quite recently. Just keep in mind that you will need to adjust the walking/pathfinding to account for this.
Now note that the two areas matched by the same pattern in the example look different. This can be the result of two possible improvements that make the result nicer.
The first is that you can have several different textures for one case and pick one at random. You will need to keep that choice in the game save so that it looks the same after loading; one way to do that is sketched below.
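One way to get that stable random choice without storing anything per tile is to derive it from the tile coordinates and the world seed (which is already in the save). A sketch; the hashing scheme itself is an arbitrary choice:

```java
// Sketch: deterministic variant pick from tile position + world seed,
// so the same tile always shows the same variant after a save/load.
int pickVariant(int x, int y, long worldSeed, int variantCount) {
    long h = worldSeed;
    h = h * 31 + x;
    h = h * 31 + y;
    h ^= (h >>> 16);                              // cheap bit mixing
    return (int) Math.floorMod(h, (long) variantCount);
}
```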
The other is more advanced. While the basic algorithm can already give you pretty good results, there are things it can't do.
You can use larger templates and larger textures that span several tiles. That way you can draw larger compact pieces of terrain without being limited by the fact that all the tiles need to be connectable to all (valid) others.
The examples you provided are still 2D textures (technically). But since the textures themselves are drawn with a 'fancy 3D' look, they appear to be 3D/2D angled.
So your best bet would be to upgrade your textures (and add shadows to entities for extra depth).
Edit:
The edges you asked about are probably laid out by checking whether a 'tile' is an edge and, if so, adding an edge texture on top of the background, while the actual tile itself is also a flat image (just like the water). Add some shadow afterwards and the 3D illusion is complete.
I hope this answers your question; otherwise feel free to ask for clarification.

Rendering a "Slice" of a Sphere in Java - Efficiency

I'm attempting to render a hemisphere in Java. However, I want to render the slice that is defined by two angles: azimuth and elevation. Since I'm defining a slice, I cannot (to my knowledge) use any built-in primitives. If the azimuth range is defined as 0-360 and the elevation range as 0-70, this is a hemisphere with an upside-down cone-shaped hole in the top.
When rendering this inner "cone", I have chosen to do it as triangles in 5-degree increments. This means that with a 360-degree cone there are 73 distinct vertices (if I did the math correctly: 360/5 = 72 slices, with the origin or tip of the cone shared by all slices, and all other vertices shared by adjacent triangle slices).
My question:
Is it more efficient to render these as a single polygon with many vertices, or as many triangles with only 3 vertices each? If I use a single polygon, will I still have to include all three points for each triangle, or, where a vertex is shared, would I only include it once? Sorry, my graphics rendering knowledge is limited, and sorry for being so verbose; I'm hoping someone may spot something erroneous in my thought process which may clear things up either way.
First - use Google to find an algorithm to create a sphere that is not a primitive.
Second - somewhere down the chain, triangles will be used, most likely by the underlying library. But for you it depends upon whether or not you plan to chop up the created region. If you are not going to subdivide the region further, I would just make it one polygon. Actually, after thinking about it for a second: you can always divide up the polygon afterwards too. So just make it one polygon.
I thought about it some more and decided to amend this answer. There are two ways you can create a polygon in OpenGL: as a triangular mesh or as an outline polygon. So if you were asking "Should I use a triangular mesh or an outline polygon?", I would say use the triangular mesh. It is a lot easier to break up a triangular mesh than a polygon outline since, to break the mesh, all you have to do is stop at one of the points, include the last two points in the new object, and continue on down the mesh. An outline polygon requires you to go both left and right around the polygon to locate the two points where the break occurs. If that is clear. If not, say so.
Update: 12:05pm
When making a polygon you can use a triangular mesh or a polygon outline. The outline is mainly good for 2D, whereas the triangular mesh works in both 2D and 3D systems. If you have any kind of polygon bigger than just three points, it is a good idea to put all the vertices into an array. This allows you to use the built-in routines that take an array and simply go through it to build your polygon. By putting everything into an array you also make it easier on yourself to add, remove, or adjust points: all you do is change the array entry and then call the same routine to draw everything again (which should be just a single call to a function).
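For the cone in the question, the triangular-mesh route is essentially a triangle fan: one shared apex plus one ring vertex per 5-degree step. A sketch of building that vertex array; unit radius and measuring the cone half-angle from the vertical axis (90 - 70 degrees) are assumptions about the question's setup:

```java
// Sketch: vertices for the inner cone as a triangle fan.
// 360 / 5 = 72 slices -> 72 ring vertices plus the shared apex, i.e. the
// 73 distinct vertices from the question; the first ring vertex is sent
// once more at the end to close the fan.
int steps = 360 / 5;
double half = Math.toRadians(90 - 70);          // cone half-angle from vertical
float ringY = (float) Math.cos(half);           // height of the ring on the sphere
float ringR = (float) Math.sin(half);           // radius of the ring

float[] fan = new float[(steps + 2) * 3];       // apex + ring + closing vertex
fan[0] = 0f; fan[1] = 0f; fan[2] = 0f;          // apex = tip of the cone

for (int i = 0; i <= steps; i++) {              // <= repeats vertex 0 to close
    double az = Math.toRadians(i * 5);
    int j = (i + 1) * 3;
    fan[j]     = (float) (ringR * Math.cos(az));
    fan[j + 1] = ringY;
    fan[j + 2] = (float) (ringR * Math.sin(az));
}
// Drawing this array as GL_TRIANGLE_FAN renders all 72 triangles while
// sending each shared vertex only once.
```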

2-Dimensional Tile-Based Game: Each tile as an object impractical?

I've been trying various ways of creating a two-dimensional tile-based game for a few months now. I have always had each tile be a separate object of a Tile class, with the tile objects stored in a two-dimensional array. This has proven to be extremely impractical, mostly in terms of performance when many tiles are rendered at once. I have mitigated this by only rendering tiles within a certain distance of the player, but that isn't great either. I have also had problems with the objects throwing a NullPointerException when I try to edit a tile's values in-game; this has to do with the objects in the 2D array not being properly initialized.
Is there any other, simpler way of doing this? I can't imagine every tile-based game uses this exact approach; I must be overlooking something.
EDIT: Perhaps LWJGL just isn't the correct library to use? I am having similar problems with implementing a font system in LWJGL... typing out more than a sentence brings the FPS down by 100 or even more.
For static objects (not going anywhere, staying where they are), 1 tile = 1 object is OK; that's how it was done in Wolf3D. For moving objects you have multiple options.
You can, if you really really want to, store an object's sub-parts in adjacent cells/tiles when the object isn't fully contained within just one of them and crosses one or more cell/tile boundaries. But that may not be very handy, as you'd need to split your objects into parts on the fly.
A more reasonable approach is to not store moving objects in cells/tiles at all and to process them more or less independently of the static objects. But then you will need some code to determine object visibility. In graphics, the most basic performance problems come from unnecessary calculation and rendering: generally, you don't want to even try to render what's invisible. Likewise, if some computations (especially complex ones) can be moved out of the innermost loops, they should be.
Other than that, it's pretty hard to give any specific advice with so few details about what you're doing and how, and without seeing the actual code. You should really try to make your questions specific.
A two-dimensional array of Tile objects should be fine; this is what most 2D games use, and you should certainly be able to get good enough performance out of OpenGL / LWJGL to render this at a good speed (100+ FPS).
Things to check:
Make sure you are clipping to display only the visible set of tiles (according to the screen width and height and the player's position); see the sketch after this list.
Make sure the code to draw each tile is fast; ideally you should be drawing just one textured square for each tile. In particular, you shouldn't be doing any complex operations on a per-tile basis in your rendering code.
If you're clever, you can draw multiple tiles in one OpenGL call with VBOs / clever use of texture coordinates etc., but this is probably unnecessary for a tile-based game.
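A minimal sketch of the clipping from the first point above; the camera position in pixels, the tile size, and `drawTile` are assumptions about your setup:

```java
// Sketch: compute the range of tiles that can appear on screen and draw
// only those, instead of iterating the whole map every frame.
static final int TILE = 32;

void renderVisible(Tile[][] tiles, int camX, int camY, int screenW, int screenH) {
    int firstCol = Math.max(0, camX / TILE);
    int firstRow = Math.max(0, camY / TILE);
    int lastCol  = Math.min(tiles[0].length - 1, (camX + screenW) / TILE);
    int lastRow  = Math.min(tiles.length - 1, (camY + screenH) / TILE);
    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            // one textured quad per tile, positioned relative to the camera
            drawTile(tiles[row][col], col * TILE - camX, row * TILE - camY);
        }
    }
}
```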

Getting boundary information from a 3d array

Hey, I'm currently trying to extract information from a 3D array, where each entry represents a coordinate, in order to draw something out of it. The problem is that the array is ridiculously large (and there are several of them), meaning I can't actually draw all of it.
What I'm trying to accomplish, then, is just to draw a representation of the outside coordinates, a shell of the array if you like. The array is not full: it can have large empty spaces with only a few pixels set, or large clusters of pixel data grouped together. I do not know what kind of shape to expect (it could be a simple cube or a complex concave mesh), and I am struggling to come up with an algorithm to effectively extract the border. The array effectively stores a set of points in 3D space.
I thought of creating six 2D meshes (one for each side of the 3D array), getting the shallowest point they can find for each position, and then drawing them separately. As I said, however, this 3D shape could be concave, which creates problems with this approach. Imagine a cone with a circle on top (said circle bigger than the cone's base): while the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, effectively making me lose the conical shape.
Then I thought of analysing the array slice by slice and creating two meshes from the slice data. I believe this should work for any type of shape; however, I'm struggling to find an algorithm which accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you will run into problems if they have any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly no continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, since it deals with problems similar to mine, but I couldn't really find anything that I could use.
If anyone has any experience with this sort of problem, or any valuable input, could you please point me in the right direction?
P.S. I would prefer to get a closed representation of the shell, hence my earlier 2D mesh approach. However, an approach that simply gives me the shell points, without any connection between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree, assuming you have some connectivity between nearby points.
Alternatively, you may then turn to more rigorous algorithms like ray casting or marching cubes.
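For illustration, a minimal point-octree sketch in Java; the node capacity and minimum cell size are arbitrary choices, and the crust query itself would sit on top of this structure:

```java
// Sketch: leaves hold up to CAPACITY points, then split into 8 octants.
class Octree {
    static final int CAPACITY = 8;
    final double cx, cy, cz, half;                 // cube centre and half-size
    final java.util.List<double[]> pts = new java.util.ArrayList<>();
    Octree[] kids;                                 // null while still a leaf

    Octree(double cx, double cy, double cz, double half) {
        this.cx = cx; this.cy = cy; this.cz = cz; this.half = half;
    }

    void insert(double[] p) {
        if (kids != null) { kid(p).insert(p); return; }
        pts.add(p);
        if (pts.size() > CAPACITY && half > 1) {   // split, keep a minimum size
            kids = new Octree[8];
            double h = half / 2;
            for (int i = 0; i < 8; i++) {
                kids[i] = new Octree(cx + (((i & 1) == 0) ? -h : h),
                                     cy + (((i & 2) == 0) ? -h : h),
                                     cz + (((i & 4) == 0) ? -h : h), h);
            }
            for (double[] q : pts) kid(q).insert(q);  // push points down
            pts.clear();
        }
    }

    Octree kid(double[] p) {                       // octant containing p
        int i = (p[0] >= cx ? 1 : 0) | (p[1] >= cy ? 2 : 0) | (p[2] >= cz ? 4 : 0);
        return kids[i];
    }
}
```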
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map the coordinates into a voxel space and increase the density (value) of a voxel for each data point falling into it. Once your volume is fully defined, you can use the marching cubes algorithm to produce a 3D surface mesh for a given threshold value (iso value). The resulting surface doesn't need to be continuous, but it will wrap all voxels with values > iso value inside. The 2D equivalent is a heatmap. You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach...
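The voxel-density step of that pipeline is simple to sketch, assuming the bounding box (`minX`..`maxZ`), the resolution, and the point list are known; the marching-cubes pass (e.g. via toxiclibs' volumeutils) would then run over the resulting grid:

```java
// Sketch: accumulate point density into a fixed-resolution voxel grid.
int res = 64;                                     // grid resolution: an assumption
float[][][] density = new float[res][res][res];

for (double[] p : points) {
    // normalise each coordinate into the grid's index range [0, res - 1]
    int vx = (int) ((p[0] - minX) / (maxX - minX) * (res - 1));
    int vy = (int) ((p[1] - minY) / (maxY - minY) * (res - 1));
    int vz = (int) ((p[2] - minZ) / (maxZ - minZ) * (res - 1));
    density[vx][vy][vz] += 1f;                    // each point raises its voxel
}
// Marching cubes with a chosen iso value then wraps every voxel whose
// density exceeds the threshold in a closed surface mesh.
```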
Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, effectively making me lose the conical shape.
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically: your data representing a cylinder with a cone-shaped hole is just as likely as the vertices representing a cone with a disk attached to the top.
I do not know what kind of shape to expect (could be a simple cube...
Again, without further information on how the data was generated, 8 vertices arranged in the form of a cube might as well represent two crossed squares. If you knew that the data was generated by, say, a rotating 3D scanner of some sort, then that would at least be a start.

Collisions and velocity, how do I predict hits that will take place between this update and the next?

I'm doing simple collisions on moving, coloured pixels. If their velocity gets higher than 1, the pixels may pass through something in the static world I'm trying to collide them with.
How do I compensate for this?
This C++ example uses a vector-based approach to predict the paths of particles undergoing elastic collisions. This Java example is similar, rewinding to the start of a collision between particles when overlap is detected. In each, the critical element is separating the model from the view: by doing so, it's possible to iterate the model at 1 pixel/tick and update the view at a different, variable rate.
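A minimal sketch of that idea: step the model in sub-steps of at most one pixel per update, so a fast pixel can't tunnel through a wall between two frames. `Particle` and `World.solidAt` are hypothetical stand-ins for your own types, and the bounce is only a placeholder response:

```java
// Sketch: split one frame's movement into <= 1-pixel sub-steps and
// collision-check each one against the static world.
void update(Particle p, World world) {
    int steps = Math.max(1, (int) Math.ceil(
            Math.max(Math.abs(p.vx), Math.abs(p.vy))));
    double sx = p.vx / steps, sy = p.vy / steps;   // each sub-step <= 1 pixel
    for (int i = 0; i < steps; i++) {
        double nx = p.x + sx, ny = p.y + sy;
        if (world.solidAt((int) nx, (int) ny)) {   // hit found mid-frame
            p.vx = -p.vx; p.vy = -p.vy;            // placeholder bounce response
            break;
        }
        p.x = nx; p.y = ny;                        // commit the sub-step
    }
}
```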
The article 2-Dimensional Elastic Collisions without Trigonometry may also be helpful.
