Drawing an isometric map in chunks? - java

I want to reduce draw calls in my isometric map implementation, so I thought about combining multiple meshes and drawing them in one go. I am currently creating a mesh from all tiles in an 8x8 "chunk". This means one mesh containing floors, walls and objects. These floors, walls and objects are just quads placed along the regular axes, with Y up. A rotated orthographic camera provides the isometric projection. Most floors and walls fill the full quad with an opaque texture, but some of them, and most objects, have alpha transparency, like a window, a broken floor or a chair. I build the "chunk" mesh by adding the individual meshes to it in drawing order, back to front and bottom to top; otherwise the combined chunk mesh is not rendered properly at all. I then render all of the needed chunks in that same order, and this works great.
However, the drawing problems start when I need to add additional objects "inside" these mesh chunks, for example a moving player, which is also just a quad with a transparent texture. I need to be able to put him inside that chunk mesh, perhaps on a tile where he is in front of a wall and behind a window. If I render the player before the chunk, its transparent pixels do not show the chunk mesh behind it. If I render the player after the chunk, the player is not visible through a transparent quad in the chunk. If this could be solved easily, and not too expensively on the CPU/GPU, that would be the answer to my question. However, I consider myself new to OpenGL, so I do not know its magic very well.
I can think of a few solutions to tackle this problem that do not involve OpenGL:
Dump the chunk mesh method and just draw back to front. I would either need a more efficient way of drawing, or I would have to limit how far the camera can zoom out to keep draw calls down, since they are the bottleneck.
Get rid of the quads with transparency and make them fully 3D. I feel this should be a design choice and not a mandatory thing, and besides, it would add a lot of extra work to creating all these assets. Right now I just have textures projected onto quads instead of fully UV-mapped models with individual textures.
Draw all transparent objects after the chunks in proper order. This feels kinda hacky and error prone since some objects need to go into that chunk mesh and some don't.
Only combine floors into a batch mesh. The floors are the biggest bottleneck: the bottom of the map has every floor tile filled, which is about 4000 draw calls when drawn individually, and a big building uses a lot of floors on each Z level too. Walls and objects are drawn significantly less often, probably a couple of hundred at most when fully zoomed out. So for each chunk I would draw all floors in one call and then each object individually; combining just the floors would already reduce draw calls a lot. When drawing the walls and objects I would have to check whether there are dynamic objects within the chunk and sort them together with the walls and objects before drawing them (see the sketch after this list).
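A rough sketch of that last option. Chunk and Drawable are placeholders for my own scene types, not libgdx API: getCombinedFloorMesh() would return the pre-batched floor mesh, getWallsAndObjects() the chunk's static quads, and getViewDepth() a back-to-front sort key in view space.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

void renderChunk(Chunk chunk, List<Drawable> dynamics, ShaderProgram shader) {
    // One draw call for every floor tile in this 8x8 chunk.
    chunk.getCombinedFloorMesh().render(shader, GL20.GL_TRIANGLES);

    // Gather the chunk's static walls/objects plus any dynamic objects
    // (player, NPCs) currently inside the chunk's bounds.
    List<Drawable> drawables = new ArrayList<>(chunk.getWallsAndObjects());
    for (Drawable d : dynamics) {
        if (chunk.getBounds().contains(d.getPosition())) {
            drawables.add(d);
        }
    }

    // Painter's algorithm: sort back to front so alpha blending composites
    // correctly, then draw each quad with its own call.
    drawables.sort(Comparator.comparingDouble(Drawable::getViewDepth));
    for (Drawable d : drawables) {
        d.getMesh().render(shader, GL20.GL_TRIANGLES);
    }
}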
This is how I currently render.
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
shader.begin();
shader.setUniformMatrix("u_worldView", cam.combined);
// If I draw the player before the chunk, the chunk does not show
// through the player's transparent pixels.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
// The chunk mesh is combined from the floors, walls and objects
// of each tile within the chunk.
chunk.getCombinedMesh().render(shader, GL20.GL_TRIANGLES);
// If I draw the player after the chunk, the player is not drawn where
// it is behind a transparent mesh like a window or a tree.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
shader.end();
So what are my options here? Can I fix this by using some magic tricks out of the OpenGL hat? Or do you have another suggestion to put on the list above? I am using LibGDX for my project.

Related

Rendering a triangular face in 3d space

Let's say I have a triangular face in 3d space, I have the 3d coordinates of each vertex of this triangle, and I also have other information about the triangle (angles, lengths of sides, etc.). In Java, if I have the viewing screen and its information, how can I draw that face, without using libraries like LWJGL, to that image, assuming I can properly project, accounting for perspective, any 3d point to that 2d image?
Would the best course of action just be to run a loop that draws each point on the plane to a point on the image (i.e. setting the corresponding pixel), which will most likely set the same pixel multiple times? If I do this, what would be the best way to identify each point in an oblique triangle, or a triangle that doesn't line up nicely with the axes?
tl;dr: I have a triangular face in 3d space, a "camera" looking at the face, and an image in which I can set each pixel. Using no GL libraries, what's the best way to project and draw that face onto the image?
Projection:
I won't detail this, as you seem to know it already.
Drawing a line:
You can look at the Bresenham algorithm if you want to start with the basics (it is hardware-accelerated in recent graphics cards).
Filling:
You can fill between the left and right borders of the triangle while you run Bresenham on both edges (you could also use a flood-fill algorithm, starting perhaps at the projection of the center of the triangle).
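For the line-drawing step, a minimal Java version of Bresenham's algorithm; setPixel(x, y) is a placeholder for however you write to your image (e.g. BufferedImage.setRGB):
// Integer-only Bresenham line rasterizer, handling all octants.
static void drawLine(int x0, int y0, int x1, int y1) {
    int dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy; // combined error term for both axes
    while (true) {
        setPixel(x0, y0); // plot the current pixel
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } // step along x
        if (e2 <= dx) { err += dx; y0 += sy; } // step along y
    }
}
Run it down the triangle's left and right edges and draw horizontal spans between the two x values to fill the interior.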
Your best bet is to check out the g.fillPolygon() function in Java. It lets you draw polygons with as many sides as you need, and there's also g.drawPolygon() if you don't want it solid. Then you can do some simple maths for the points: each point is basically its x and y, except that if the polygon is further away its points move closer to the polygon's center, and if it is closer they move further away from the center. A small example follows.
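A minimal sketch of that idea; java.awt.Graphics.fillPolygon is real API, but the perspective divide (focalLength / z) is only illustrative and assumes the camera looks down the z axis:
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

// Project three 3D vertices {x, y, z} to screen space and fill the result.
static void fillTriangle(BufferedImage image, double[][] verts, double focalLength) {
    int[] xs = new int[3], ys = new int[3];
    for (int i = 0; i < 3; i++) {
        double scale = focalLength / verts[i][2];                   // farther away -> closer to center
        xs[i] = (int) (verts[i][0] * scale) + image.getWidth() / 2; // center on the image
        ys[i] = (int) (verts[i][1] * scale) + image.getHeight() / 2;
    }
    Graphics g = image.getGraphics();
    g.setColor(Color.RED);
    g.fillPolygon(xs, ys, 3); // g.drawPolygon(xs, ys, 3) for just the outline
    g.dispose();
}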
A second idea would be to use some sort of array to store pixels, research line-drawing algorithms and draw the triangle's edges as lines, then put all the line data in another array and use some sort of flood fill (a sketch follows below). While the pixels are in that array you could also manipulate them if you wanted textures or other effects.
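A sketch of such a flood fill over a raw pixel array; it is iterative because a recursive fill overflows the call stack on large areas:
import java.util.ArrayDeque;
import java.util.Deque;

// Fills everything connected to (x, y) that shares the starting color,
// stopping at the previously drawn triangle edges.
static void floodFill(int[] pixels, int width, int height, int x, int y, int fill) {
    int target = pixels[y * width + x];
    if (target == fill) return; // already filled
    Deque<int[]> stack = new ArrayDeque<>();
    stack.push(new int[]{x, y});
    while (!stack.isEmpty()) {
        int[] p = stack.pop();
        int px = p[0], py = p[1];
        if (px < 0 || py < 0 || px >= width || py >= height) continue;
        if (pixels[py * width + px] != target) continue; // edge or already filled
        pixels[py * width + px] = fill;
        stack.push(new int[]{px + 1, py});
        stack.push(new int[]{px - 1, py});
        stack.push(new int[]{px, py + 1});
        stack.push(new int[]{px, py - 1});
    }
}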

Thickness of OpenGL 2D Textures

What is the easiest way to give a 2D texture in OpenGL (LWJGL) some kind of "thickness"? Of course I could get the border of the texture somehow and add quads, oriented by the normal of the quad the texture is drawn on, in the color of the adjacent texture pixel. But there has to be an easier way to do it.
Minecraft uses LWJGL as well, and it has the (new) 3D items that spin on the ground without causing as much of a performance issue as they would if they were built from dozens of polygons. Likewise, when you hold an item in your hand, there is that kind of "stretched" texture with depth, which also works with high-resolution textures.
Does anyone know how that is done?
A 2D texture is always infinitely thin. If you want actual thickness (when you look edge onto it) you need geometry. In Minecraft things look blocky, because they've been modeled that way.
If you look at it from an angle and ignore the edges, you can use parallax mapping to "fake" some depth in the texture (a sketch of the core idea follows). Or you can use a depth map and a combination of tessellation and vertex shaders to implement a displacement map that generates geometry from the texture.
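The core of parallax mapping is just offsetting the texture lookup along the view direction by a height sample. A sketch of that lookup, written as GLSL inside a Java string the way LWJGL/libgdx shaders are usually supplied; u_heightMap, u_heightScale and the tangent-space view direction are assumptions about the surrounding shader, not fixed names:
// Fragment-shader helper: shift the UV toward the viewer in proportion to
// the sampled height, so flat geometry appears to have depth.
static final String PARALLAX_CORE =
      "vec2 parallax(vec2 uv, vec3 viewDirTS) {\n"
    + "    float height = texture2D(u_heightMap, uv).r;\n"
    + "    vec2 offset = (viewDirTS.xy / viewDirTS.z) * height * u_heightScale;\n"
    + "    return uv - offset;\n"
    + "}\n";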

libgdx: sprite breaking like glass

I have a sprite which is a ball; let's say it represents a glass ball.
I am rendering the graphics with SpriteBatch.
Is it possible in libgdx to have a breaking-glass effect for the ball? Meaning, I want to split the sprite into several pieces with irregular (non-rectangular) borders and then draw them flying off in different directions.
Use a PolygonSprite to represent the non-rectangular chunks of your sprite.
To generate the chunks, I suggest picking a random spot near the center of your sprite, and then creating several triangles from that point to the corners and 2 or 3 points on each side of the square sprite. You should be able to define a PolygonRegion for each shard, and use that to build PolygonSprite instances.
I haven't actually used the PolygonRegion API before (and it looks a bit obtuse), so you might want to check the examples.
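A rough, untested sketch of that generation, mainly to show the PolygonRegion/PolygonSprite plumbing; the vertex coordinates are in pixels, relative to the TextureRegion:
import java.util.ArrayList;
import java.util.List;
import com.badlogic.gdx.graphics.g2d.PolygonRegion;
import com.badlogic.gdx.graphics.g2d.PolygonSprite;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.math.MathUtils;

// Fan triangles from a random point near the center out to the corners and
// edge midpoints; each triangle becomes one glass shard.
static List<PolygonSprite> shatter(TextureRegion region) {
    float w = region.getRegionWidth(), h = region.getRegionHeight();
    float cx = w / 2 + MathUtils.random(-w / 8, w / 8); // break point,
    float cy = h / 2 + MathUtils.random(-h / 8, h / 8); // roughly central
    float[][] border = { {0, 0}, {w / 2, 0}, {w, 0}, {w, h / 2},
                         {w, h}, {w / 2, h}, {0, h}, {0, h / 2} };
    List<PolygonSprite> shards = new ArrayList<>();
    for (int i = 0; i < border.length; i++) {
        float[] a = border[i];
        float[] b = border[(i + 1) % border.length];
        float[] verts = { cx, cy, a[0], a[1], b[0], b[1] };
        shards.add(new PolygonSprite(new PolygonRegion(region, verts, new short[]{0, 1, 2})));
    }
    return shards;
}
Each shard can then be given its own velocity and rotation and drawn with a PolygonSpriteBatch.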

Generating Very Large Images at Runtime with OpenGL and libgdx

I am generating very large hex grids (up to 120k hexes in total; at 32px-wide hexes that results in images over 12k pixels wide) and I'm trying to find an efficient way to bind these to OpenGL textures in libgdx. I was thinking of using multiple FBOs and breaking the grid up into tiles as necessary, but I'm not sure how to ensure continuity between the FBOs. I can't start with one massive FBO, because an FBO is backed by a texture, so it would fail when loaded into video memory. I can't use a standard bitmap on the heap, because I need the drawing functionality of an OpenGL surface.
So what I was thinking was I would need to overdraw on the FBOs and somehow pick up on the next FBO exactly where the previous left off. However I'm not sure how to go about this. I'm drawing the hex grid with a series of hexagonal meshes, FYI.
Of course, there's probably some other much simpler and more efficient way to do this that I'm not even thinking of, which is why I pose this question to you fine people!
You have to draw it in pieces. You need to be able to draw your hex grid from an arbitrary position. This means being able to compute which hexes to draw based on a rectangle overlaid over the map. This isn't a hard problem, and I wouldn't worry too much about drawing extra stuff off-screen. You should master this ability to view the hexmap from any position before moving on.
Once you've mastered that, it's really simple.
Draw the top-left corner and store the pixel data. Then move the area you're drawing over exactly one image width. Draw and store that. Move the area over one image width. Draw and store it. Keep doing that until you've covered the entire width.
Move down one image height and repeat the process. Once you've run out of width and height, you're done. Save your mega-huge image.
You don't need FBOs for this; you could draw to the screen if you wanted. Though if you want maximum performance, I would suggest using FBOs, double-buffering them, and using glReadPixels through a pixel buffer object. That should cut down a lot on latency. A sketch of the simple single-FBO version follows.
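A sketch of that tiling loop in libgdx; tileW, tileH, tilesX, tilesY and renderHexGrid() are placeholders for your own setup:
// Render the mega-image one tile at a time into a reusable FBO, reading
// each tile's pixels back before moving the camera to the next tile.
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, tileW, tileH, false);
OrthographicCamera camera = new OrthographicCamera(tileW, tileH);
for (int ty = 0; ty < tilesY; ty++) {
    for (int tx = 0; tx < tilesX; tx++) {
        // Center the camera on this tile of the final image.
        camera.position.set(tx * tileW + tileW / 2f, ty * tileH + tileH / 2f, 0);
        camera.update();
        fbo.begin();
        Gdx.gl.glClearColor(0, 0, 0, 0);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        renderHexGrid(camera); // draw only the hexes overlapping this tile
        Pixmap tile = ScreenUtils.getFrameBufferPixmap(0, 0, tileW, tileH);
        fbo.end();
        PixmapIO.writePNG(Gdx.files.local("tile_" + tx + "_" + ty + ".png"), tile);
        tile.dispose();
    }
}
fbo.dispose();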

Using a canvas much larger than the screen

I'm trying to draw a 2D contour plot of some data on Android and I'm wondering what the best approach would be. The whole dataset can be relatively large (2k * 2k points), and zooming and panning inside the plot should be very fast. Most of the time only a small part of the data will be drawn, as the user will have zoomed in on it.
My idea now would be to draw the whole plot onto a large canvas, but clip it to the portion visible on the screen, so that only that part would be really drawn in the end. I find the 2D drawing API of Android somewhat confusing and I'm not sure if this is really a feasible approach and how I would then go about executing it.
So my questions are:
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
How would I create a larger canvas and how would I select which parts should be drawn?
You should start the other way around. Instead of creating a huge canvas you should detect what part of your plot you need to draw and draw only that.
So basically you need some navigation/scrolling and you need to keep the offset from the starting point in memory to calculate where you are. Using the offset you can easily zoom in and out because you just need to scale the plot to the screen.
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
A better question might be: do you have any other options? Some might argue that this is a bad idea, since you're going to keep memory in use that isn't relevant to what's happening on the UI. However, from my experience with the Canvas, I think you'll find this should work out just fine. Now, if you are trying to keep "5 square miles" of canvas in memory, you're definitely going to have to find a better way to manage it.
How would I create a larger canvas and how would I select which parts should be drawn?
I would expect that you will be creating your own "scrolling" method when the user touches the screen, by overriding the onTouchEvent method. Basically you're going to need to keep track of a starting X and Y and track those values as you move the Canvas on screen. To move the Canvas there are a number of built-ins like translate and scale that you can use, both to move the Canvas in X and Y and to scale it when the user zooms in or out. A minimal sketch follows.
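A minimal sketch of that approach; drawPlot() is a stand-in for your own contour drawing:
import android.content.Context;
import android.graphics.Canvas;
import android.view.MotionEvent;
import android.view.View;

public class PlotView extends View {
    private float offsetX, offsetY, zoom = 1f;
    private float lastX, lastY;

    public PlotView(Context context) { super(context); }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                lastX = event.getX(); lastY = event.getY();
                return true;
            case MotionEvent.ACTION_MOVE:
                offsetX += event.getX() - lastX; // accumulate the pan offset
                offsetY += event.getY() - lastY;
                lastX = event.getX(); lastY = event.getY();
                invalidate(); // trigger a redraw with the new offset
                return true;
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.translate(offsetX, offsetY); // move the plot under the viewport
        canvas.scale(zoom, zoom);           // zoom around the current origin
        drawPlot(canvas);                   // draw only the visible portion
        canvas.restore();
    }

    private void drawPlot(Canvas canvas) {
        // Your contour-plot drawing goes here.
    }
}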
I don't think it is a good idea to draw your 2D contour plot onto a big bitmap, because you need vector-type graphics to zoom in and out while keeping it sharp. Pictures scale down acceptably, but graphs lose thin lines or come out deformed when scaled down as bitmaps.
The proper way is to do it all mathematically and calculate which part of the graph should be drawn for the required position and zoom. Using an anti-aliased Paint for lines and text, the graph will always come out sharp and good...
When the user zooms out, some items should not be drawn at all, as they could not fit on the screen or would clutter it. That way the graph is always optimised for the zoom level...
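For the anti-aliasing point, a minimal example inside onDraw():
// Paint.ANTI_ALIAS_FLAG keeps lines and text sharp at any zoom level.
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(2f);
canvas.drawLine(0f, 0f, 100f, 50f, paint); // placeholder coordinates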
