First, I would like to know if it's possible to rotate a Mesh without drawing it.
If it is possible, how could I get the new vertices of the rotated mesh?
I need this to verify whether a certain Mesh is still inside a rectangle after rotation; I only want to draw it if it's still inside.
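A minimal sketch of one way to do this on the CPU in LibGDX, assuming the vertex layout begins with a 3-component position; the rotation axis/angle and the rectangle variable are placeholders:
// Rotate a copy of the vertices, test them against the rectangle, and only
// bake the rotation into the mesh (and draw it) if the test passes.
Matrix4 rotation = new Matrix4().setToRotation(Vector3.Z, 45f); // example axis/angle
int floatsPerVertex = mesh.getVertexSize() / 4; // vertex size is in bytes
float[] verts = new float[mesh.getNumVertices() * floatsPerVertex];
mesh.getVertices(verts);
boolean inside = true;
Vector3 p = new Vector3();
for (int i = 0; i < verts.length; i += floatsPerVertex) {
    p.set(verts[i], verts[i + 1], verts[i + 2]).mul(rotation);
    if (!rectangle.contains(p.x, p.y)) { // 'rectangle' is a hypothetical Rectangle
        inside = false;
        break;
    }
}
if (inside) {
    mesh.transform(rotation); // permanently applies the rotation to the vertices
    // ...now draw the mesh as usual
}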
I want to reduce draw calls in my isometric map implementation, so I thought about combining multiple meshes and drawing them in one go. I am currently creating a mesh from all tiles in an 8x8 "chunk", i.e. a mesh containing floors, walls and objects. These floors, walls and objects are just quads placed along the regular axes where Y is up; a rotated orthographic camera provides the isometric projection. Most floors and walls fill the full quad with an opaque texture, but some of them, and most objects, have alpha transparency, like a window, a broken floor or a chair. I build the "chunk" mesh by adding the individual meshes to it in drawing order, back to front and bottom to top, because otherwise the combined chunk mesh is not rendered properly at all. I render all of the needed chunks in the same order, and this works great.
However, the drawing problems start when I need to add additional objects "inside" these chunk meshes, for example a moving player, which is also just a quad with a transparent texture. I need to be able to put him inside that chunk mesh, perhaps on a tile where he is in front of a wall and behind a window. If I render the player before the chunk, its transparent pixels hide the chunk mesh behind them. If I render the player after the chunk, the player is not visible through a transparent quad in the chunk. If this could be solved easily and not too expensively on the CPU/GPU, that would be the answer to my question. However, I consider myself new to OpenGL, so I do not know its magic very well.
I can think of a few solutions to tackle this problem that do not involve OpenGL:
Dump the chunk mesh method and just draw back to front. I would then either need a more efficient way of drawing, or I would have to limit how far the player can zoom out, since draw calls are the bottleneck.
Get rid of the quads with transparency and make them fully 3D. I feel this should be a design choice and not a mandatory thing; besides, it would add a lot of additional work to creating all these assets. Right now I just have a texture projected onto a quad instead of fully UV-mapped models with individual textures.
Draw all transparent objects after the chunks in the proper order. This feels somewhat hacky and error prone, since some objects need to go into the chunk mesh and some don't.
Only combine the floors in a batch mesh. The floors are the biggest bottleneck: the bottom layer of the map has every floor tile filled, which is about 4000 draw calls when drawn individually, and a big building uses a lot of floors on each Z level too. Walls and objects are drawn significantly less often, probably a couple hundred at most when fully zoomed out. So for each chunk I'd draw all floors in one call and then each individual object; I'd reduce draw calls a lot by combining just the floors. When drawing walls and objects I would have to check whether there is a dynamic object to render, or simply check whether there are dynamic objects within the chunk and sort them together with all the walls and objects before drawing.
This is how I currently render:
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
shader.begin();
shader.setUniformMatrix("u_worldView", cam.combined);
// Option A: draw the player before the chunk. Its transparent pixels still
// write depth, so they punch invisible holes into the chunk drawn after it.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
// Draw the chunk mesh, which combines the floors, walls and objects of each tile.
chunk.getCombinedMesh().render(shader, GL20.GL_TRIANGLES);
// Option B: draw the player after the chunk. The player then fails the depth
// test behind transparent quads such as a window or a tree, and is not drawn.
player.getMesh().render(shader, GL20.GL_TRIANGLES);
shader.end();
So what are my options here? Can I fix this with some magic tricks out of the OpenGL hat, or do you have another suggestion to add to the list above? I am using LibGDX for my project.
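For what it's worth, one standard OpenGL trick for this exact situation, when transparency is effectively binary (each pixel is either fully opaque or fully transparent, as with windows or foliage cutouts), is alpha testing: discard transparent fragments in the fragment shader so they never write to the depth buffer, which makes draw order irrelevant for those pixels. A minimal sketch of such a fragment shader, written as a LibGDX ShaderProgram source string (the varying/uniform names follow LibGDX's usual conventions but are illustrative):
String fragmentShader =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_texture;\n"
    + "void main() {\n"
    + "    vec4 color = texture2D(u_texture, v_texCoords);\n"
    + "    // Alpha test: drop (nearly) transparent fragments so they neither\n"
    + "    // color the screen nor write depth that hides geometry behind them.\n"
    + "    if (color.a < 0.5) discard;\n"
    + "    gl_FragColor = color;\n"
    + "}";
Note that this only works cleanly for hard-edged transparency; truly semi-transparent pixels (anti-aliased edges, tinted glass) would still need back-to-front blending.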
I am currently implementing a lighting system in my game.
My light class calculates a polygon within which the light should be visible.
All I need to do now is cut my light texture into this shape.
I am trying to achieve this by creating a PolygonRegion and drawing it with a PolygonSpriteBatch.
This is what I do at the moment:
polygonRegion = new PolygonRegion(new TextureRegion(light), vertices,
new EarClippingTriangulator().computeTriangles(vertices).toArray());
and in the drawing step:
polygonSpriteBatch.setProjectionMatrix(lightHandler.tiledTest.cameraHandler.camera.combined);
polygonSpriteBatch.begin();
polygonSpriteBatch.draw(polygonRegion, 0, 0);
polygonSpriteBatch.end();
The polygon itself works fine and as intended, but the problem is that the light texture doesn't show up at my light's coordinates in the center of the polygon; it appears in the lower left corner of the map instead. So my light polygon is actually black until I place it in that very corner.
Here is a screenshot with some light sources placed on the map. Please ignore the green and yellow lines; those are for debugging the polygon.
https://1drv.ms/i/s!AgucvuUdePpwhZg7rqn6pk7cYktf1A
This is how I want my light polygon to actually look:
https://1drv.ms/i/s!AgucvuUdePpwhZg6Y8Y2JiKplGm6ig
I hope you can help me with this problem!
Apologies for any mistakes in this post; it's my first one here!
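A note on why this probably happens: PolygonRegion derives its texture coordinates by dividing each vertex position by the region's size, so vertices given in world coordinates sample far outside the texture; they only line up near the world origin, which matches the lower-left-corner symptom. A sketch of one possible fix, where lightX/lightY are hypothetical names for the light's world position:
// Make the polygon's vertices local to the light texture so PolygonRegion
// computes sensible texture coordinates, then translate back in the draw call.
float originX = lightX - light.getWidth() / 2f;
float originY = lightY - light.getHeight() / 2f;
float[] localVertices = new float[vertices.length];
for (int i = 0; i < vertices.length; i += 2) {
    localVertices[i] = vertices[i] - originX;
    localVertices[i + 1] = vertices[i + 1] - originY;
}
polygonRegion = new PolygonRegion(new TextureRegion(light), localVertices,
        new EarClippingTriangulator().computeTriangles(localVertices).toArray());
// ...and in the drawing step, offset the region back to world space:
polygonSpriteBatch.draw(polygonRegion, originX, originY);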
I am currently working on a game in which I need pixel-perfect collision with objects. The only obstacle I've run into, however, is actually getting a mask of the object. I thought about rendering a sprite of an object (properly scaled and transformed) to a FrameBuffer, which I'd then convert to a Pixmap, but I had no luck doing it (I tried many methods, including this).
So, my question is: is there a way of rendering a single sprite to a Pixmap in LibGDX?
Sounds like putting the cart before the horse. A Sprite is a region of a Texture, which in turn is an image loaded from file, a.k.a. a Pixmap. If you just want to load the image from file, you can do: new Pixmap(Gdx.files.internal("yourfile.png"));. You can also scale and transform your coordinates without rendering to an FBO first.
That said, getting the Pixmap of a FrameBuffer is as "simple" as taking a screenshot while the FBO is bound, because it is exactly that:
fbo.begin();
Gdx.gl.glClear(...);
... //render to the FBO
Pixmap pixmap = ScreenUtils.getFrameBufferPixmap(0, 0, fbo.getWidth(), fbo.getHeight());
fbo.end();
Note that the result will look upside down.
But you are probably only interested in the pixel data, so you might as well skip the Pixmap step in that case and use the ScreenUtils.getFrameBufferPixels method.
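For illustration, a small sketch of that variant, reusing the same FBO setup; the coordinates and the mask test at the end are just an assumed example:
fbo.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
// ...render the sprite to the FBO...
// Raw RGBA bytes, 4 per pixel; flipY = true undoes the vertical flip.
byte[] pixels = ScreenUtils.getFrameBufferPixels(0, 0, fbo.getWidth(), fbo.getHeight(), true);
fbo.end();
int x = 10, y = 20; // some pixel to test (example coordinates)
int alpha = pixels[(y * fbo.getWidth() + x) * 4 + 3] & 0xFF;
boolean solid = alpha > 0; // e.g. treat any non-transparent pixel as part of the mask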
However, using the pixel data for collision detection is most likely not the best solution for whatever it is you are trying to achieve. Instead, I'd advise looking into other options; for example, have a look at this tool, which can be used to convert your images into a collision shape.
What is the easiest way to give a 2D texture in OpenGL (LWJGL) some kind of "thickness"? Of course, I could somehow get the border of the texture and add quads, oriented by the normal of the quad that the texture is drawn on, in the color of the adjacent texture pixel, but there has to be an easier way to do it.
Minecraft uses LWJGL as well, and it has the (new) 3D items that spin on the ground without causing as much of a performance hit as if they were made of dozens of polygons. Likewise, when you hold an item in your hand, there is that kind of "stretched" texture with depth, which also works with high-resolution textures.
Does anyone know how that is done?
A 2D texture is always infinitely thin. If you want actual thickness (when you look at it edge-on), you need geometry. In Minecraft, things look blocky because they have been modeled that way.
If you look at the texture at an angle and ignore the edges, you can use parallax mapping to "fake" some depth in it. Or you can use a depth map with a combination of tessellation and vertex shaders to implement displacement mapping, which generates geometry from the texture.
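To illustrate the parallax idea, a minimal fragment-shader sketch, written as a Java source string to match the rest of this page; the height map, scale uniform and tangent-space view direction are assumptions the application would have to supply:
String parallaxFragment =
      "uniform sampler2D u_texture;\n"
    + "uniform sampler2D u_heightMap; // assumed per-texel height, 0..1\n"
    + "uniform float u_heightScale;   // assumed strength of the effect\n"
    + "varying vec2 v_texCoords;\n"
    + "varying vec3 v_viewDirTS;      // view direction in tangent space\n"
    + "void main() {\n"
    + "    vec3 viewDir = normalize(v_viewDirTS);\n"
    + "    float height = texture2D(u_heightMap, v_texCoords).r;\n"
    + "    // Shift the lookup along the view direction: high texels appear\n"
    + "    // to stick out, faking depth without any extra geometry.\n"
    + "    vec2 uv = v_texCoords - viewDir.xy / viewDir.z * height * u_heightScale;\n"
    + "    gl_FragColor = texture2D(u_texture, uv);\n"
    + "}";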
I'm trying to render a set of polygons; I have a set of points and I'm not doing any triangulation.
If I render my VBO in GL_LINE_LOOP mode, the lines connect the right vertices, but when I try to render filled polygons from the same buffer using GL_POLYGON, I get wrong vertices; it's as if some points just disappear.
I tried disabling OpenGL's polygon smoothing, but the result is the same.
Any tips?
This image shows the lines and the polygon, which I expected to look the same.
GL_POLYGON is only for convex, coplanar polygons.
Make sure the points in your VBO form one.
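If the polygons can be concave, the usual workaround is to triangulate them on the CPU and render GL_TRIANGLES instead. A sketch, borrowing LibGDX's EarClippingTriangulator purely as an example; any ear-clipping implementation would do:
// 'points' holds the polygon outline as x,y pairs, in order.
FloatArray points = new FloatArray(new float[] { /* x0, y0, x1, y1, ... */ });
ShortArray triangles = new EarClippingTriangulator().computeTriangles(points);
// 'triangles' now holds indices into 'points', three per triangle.
// Upload them as an index buffer and replace the GL_POLYGON call with:
// glDrawElements(GL_TRIANGLES, triangles.size, GL_UNSIGNED_SHORT, 0);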