Libgdx ModelBuilder.createRect only visible from one side - java

In my first libGDX 3D game I have now switched from createBox to createRect, to create only the visible faces (if a wall is on the left side of another wall, its right face is not visible...). I am creating 4 models:
frontFace
backFace
rightFace
leftFace
They are drawn almost as they should be.
But there is one big issue: the side faces are only visible if I look in the positive z-direction.
If I look the other way (negative z-direction), they don't draw. The front and back faces only draw if I look at them in the negative x-direction.
Does this have something to do with the normals? I have set them to:
normal.x = 0;
normal.y = 1;
normal.z = 0;
Is that the error? How should I set the normals? What do they stand for? I have some basic idea about normal mapping for lighting; is that the same?
Important note: I have disabled backface culling, but it did not make any difference. View frustum culling is turned on. If any more information is needed, please post a comment and I will add it as soon as possible. Thanks

Perhaps not directly related, but still important to note: don't use createRect or createBox for anything other than debugging/testing. Instead combine multiple shapes into a single model/node/part. Or even better, use a modeling application where possible.
You didn't specify how you disabled backface culling. But keep in mind that you should not change the OpenGL state outside the shader/rendercontext (doing so will result in unpredictable behavior). To disable backface culling you can either specify it using the material attribute IntAttribute.CullFace (see: https://github.com/libgdx/libgdx/wiki/Material-and-environment#wiki-intattribute), the DefaultShader (or default ModelBatch) Config defaultCullFace member (see http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g3d/shaders/DefaultShader.Config.html#defaultCullFace), or the (deprecated) static DefaultShader#defaultCullFace member (see http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g3d/shaders/DefaultShader.html#defaultCullFace).
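For example, a minimal sketch of the two non-deprecated options (class and member names as in the libGDX API linked above; the diffuse color is just a placeholder):

import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.g3d.Material;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.attributes.IntAttribute;
import com.badlogic.gdx.graphics.g3d.shaders.DefaultShader;
import com.badlogic.gdx.graphics.g3d.utils.DefaultShaderProvider;

// Option 1: disable culling for a single material.
Material material = new Material(
        ColorAttribute.createDiffuse(Color.WHITE),
        IntAttribute.createCullFace(GL20.GL_NONE)); // no culling for this material

// Option 2: change the default cull face for everything this batch renders.
DefaultShader.Config config = new DefaultShader.Config();
config.defaultCullFace = GL20.GL_NONE;
ModelBatch modelBatch = new ModelBatch(new DefaultShaderProvider(config));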
Whether a face is front or back is based on the vertex winding. Or in other words: the order in which you provide the corners of the rectangle is used to decide which side is front and which side is back. If you use one of the rect methods, you'll notice the arguments have either the 00, 01, 10 or 11 suffix. Here, when looking at the face, 00 is lower-left, 01 upper-left, 11 is upper-right and 10 is lower-right.
A rectangle's normal is the vector perpendicular to it, facing outward from its front face. For example, if you have a rectangle on the XZ plane with its front face on top, then its normal is X=0,Y=1,Z=0. If its front face is facing the bottom, then its normal is X=0,Y=-1,Z=0. Likewise, if you have a rectangle on the XY plane, its normal is either X=0,Y=0,Z=1 or X=0,Y=0,Z=-1. Note that the normal is not used for face culling; it is most commonly used for lighting etc. Specifying an incorrect/opposite normal will not cause the face to be culled (it might cause incorrect/black lighting, though).
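To make the winding and normal concrete, here is a hedged sketch of one wall face built with ModelBuilder (corner order 00, 10, 11, 01 per the createRect signature; modelBuilder and material are assumed to exist, and the size is made up):

import com.badlogic.gdx.graphics.VertexAttributes;
import com.badlogic.gdx.graphics.g3d.Model;

// A 2x2 face on the XY plane at z = 0. The corners run counter-clockwise
// as seen from +Z, so +Z is the front side, and the normal matches it.
Model frontFace = modelBuilder.createRect(
        -1f, -1f, 0f,   // corner00: lower-left
         1f, -1f, 0f,   // corner10: lower-right
         1f,  1f, 0f,   // corner11: upper-right
        -1f,  1f, 0f,   // corner01: upper-left
         0f,  0f, 1f,   // normal: +Z (used for lighting, not culling)
        material,
        VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);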

For your purpose I'd recommend using the Decal class. Decals are bitmap sprites that exist in a 3D scene. This article is about the use of decals in LibGDX. I hope it is what you wanted.
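A minimal sketch of that approach (wallRegion and cam are assumed to already exist):

import com.badlogic.gdx.graphics.g3d.decals.CameraGroupStrategy;
import com.badlogic.gdx.graphics.g3d.decals.Decal;
import com.badlogic.gdx.graphics.g3d.decals.DecalBatch;

// Created once, e.g. in create():
DecalBatch decalBatch = new DecalBatch(new CameraGroupStrategy(cam));
Decal wall = Decal.newDecal(2f, 2f, wallRegion); // width, height, TextureRegion
wall.setPosition(0f, 1f, -5f);
wall.setRotationY(90f); // orient the flat sprite within the 3D scene

// Each frame, after updating the camera:
decalBatch.add(wall);
decalBatch.flush();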

Related

Anti Aliasing based on colors (not textures)

I was searching for an anti-aliasing algorithm for my OpenGL program (so I searched for a good shader). The thing is, all the shaders want to do something with textures, but I don't use textures, only colors. I looked at FXAA most of the time, so is there an anti-aliasing algorithm that works with just colors? The game this is for looks blocky like Minecraft, but uses only colors and cubes of different sizes.
I hope someone can help me.
Greetings
Anti-aliasing has nothing specifically to do with either textures or colors.
Proper anti-aliasing is about sample rate, which while highly technical can be thought of as doing extra work to make a better educated guess at some value that cannot be directly looked up (e.g. a pixel that is only partially covered by a triangle).
Multisample Anti-Aliasing (MSAA) will work nicely for you; it only anti-aliases polygon edges and does nothing for texture aliasing on the interior of a polygon. Since you are not using textures, you do not need to worry about aliasing inside a polygon.
Incidentally, FXAA is not proper anti-aliasing. FXAA is basically a shader-based edge detection and blur image processing filter. FXAA will blur any part of the scene with sharp edges, whether it is a polygon edge or an edge due to a mapped texture. It indiscriminately blurs anything it thinks is an aliased edge and gets this wrong often, resulting in blurry textures.
To use MSAA, you need:
1. A framebuffer with at least 2 samples
2. Multisample rasterization enabled
Satisfying (1) is going to depend on what you used to create your window (in this case LWJGL). Most frameworks let you select the sample count as one of the parameters at the time of creation.
Framebuffer Objects can also be used to do this without messing with your window's parameters, but they are more complicated than necessary for this discussion.
(2) is as simple as calling glEnable(GL_MULTISAMPLE).
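Putting both steps together for LWJGL 2 (a sketch; 4 is the per-pixel sample count, and any value of 2 or more that your hardware supports will do):

import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.PixelFormat;
import static org.lwjgl.opengl.GL11.glEnable;
import static org.lwjgl.opengl.GL13.GL_MULTISAMPLE;

Display.setDisplayMode(new DisplayMode(800, 600));
Display.create(new PixelFormat().withSamples(4)); // (1) multisampled default framebuffer
glEnable(GL_MULTISAMPLE);                         // (2) enable multisample rasterization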

How do I improve LibGDX 3D rendering performance?

I'm working on rendering a tiled sphere with LibGDX, aiming to produce a game for desktop. Here are some images of what I've got so far: http://imgur.com/GoYvEYZ,xf52D6I#0. I'm rendering 10,000 or so ModelInstances, all of which are generated from code using their own ModelBuilders. They each contain 3 or 4 triangular parts, and every ModelInstance corresponds to its own Model. Here's the exact rendering code I'm using:
modelBatch.begin(cam);
// Render all visible tiles
visibleCount = 0;
for (Tile t : tiles) {
    if (isVisible(cam, t)) {
        // t.rendered is a ModelInstance produced earlier by code.
        // The Model corresponding to the instance is unique to this tile.
        modelBatch.render(t.rendered, environment);
        visibleCount++;
    }
}
modelBatch.end();
The ModelInstances are not produced from code each frame, just drawn; I only update them when I need to. The isVisible check is just some very simple frustum culling, which I followed from this tutorial: https://xoppa.github.io/blog/3d-frustum-culling-with-libgdx/. As you can tell from my diagnostic information, my FPS is terrible. I'm aiming for at least 60 FPS rendering what I hope is a fairly simple scene of tons of polygons. I just know I'm doing this in a very inefficient way.
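For reference, the check in the spirit of that tutorial looks roughly like this (assuming each Tile stores a world-space center and bounding radius; those field names are made up):

private boolean isVisible(Camera cam, Tile t) {
    // A cheap test of the tile's bounding sphere against the six frustum planes.
    return cam.frustum.sphereInFrustum(t.center, t.radius);
}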
I've done some research on how people might typically solve this issue, but am stuck trying to apply the solutions to my project. For example, dividing the scene into chunks is recommended, but I don't know how I could make use of that when the player is able to rotate the sphere and view all sides. I read about occlusion culling, so that I might only render ModelInstances on the side of the sphere facing the camera, but am at a loss as to how to implement that in LibGDX.
Additionally, how bad is it that every ModelInstance uses its own Model? Would speed be improved if only one shared Model object were used? If anyone could point me to more resources or give me any good recommendations on how I can improve the performance here, I'd be thankful.
If the tiles are eventually intended to be solid, one improvement you can make is to turn on back-face culling. This causes faces that are wound away from the camera not to be rendered (i.e. one side of each face becomes invisible). For a sphere, that means the GPU only needs to render about half the faces.
Combining the objects into a single Model may also have a large impact. It may be the difference between 10,000 draw calls and 1 (it depends on how smart that modelBatch object is, as it might do the combining behind the scenes). If the user will sometimes be zoomed in pretty close, a chunking approach might help so that you can continue doing frustum culling.
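In newer libGDX versions, the easy way to combine draw calls without rebuilding your geometry is ModelCache; a hedged sketch, assuming the instances change rarely:

import com.badlogic.gdx.graphics.g3d.ModelCache;

// Built once (or whenever the tiles actually change), not every frame:
ModelCache cache = new ModelCache();
cache.begin();
for (Tile t : tiles)
    cache.add(t.rendered); // merges compatible parts into shared meshes
cache.end();

// Each frame: one render call instead of ~10,000.
modelBatch.begin(cam);
modelBatch.render(cache, environment);
modelBatch.end();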

Java - Describing the correct faces (Winding)

Engine: LWJGL
So, I am wondering: how do I draw a triangle's back face? I know there is something going on with the
glTexCoord();
method(s), but I cannot understand in which way I need to do this.
The facing of a triangle has nothing to do with texture coords. It is solely defined by the winding of the vertices in window space.
By default, GL uses the rule that if the three vertices (in the order in which they are specified when drawing) are seen in the final picture in counter-clockwise order, then this is treated as the front face. This facing rule can be changed via glFrontFace(). Furthermore, the GL can be told to never draw a specific face via glEnable(GL_CULL_FACE). Which faces are culled is controlled by glCullFace(). Typically, backface culling is used for closed objects (without transparency), as in such cases the back-facing triangles are never seen and do not need to be processed.
So to control the facing of your triangles, the order in which the vertices are specified does matter. Furthermore, the transformations you use define which side you are actually seeing.
The winding should especially be consistent within and across objects. Two triangles sharing an edge, like triangles A,B,C and B,C,D, have a consistent winding if the shared edge is specified in mutually reverse order. That is, if you specify the first triangle's vertices in the order A,B,C, you must specify the vertices of the latter triangle in a way such that C,B is used, like C,B,D or D,C,B or B,D,C.
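In code, the relevant legacy-GL state setup looks like this (a sketch using LWJGL's GL11 bindings; the vertex values are made up):

import static org.lwjgl.opengl.GL11.*;

glEnable(GL_CULL_FACE); // skip one side of every triangle
glCullFace(GL_BACK);    // cull the back side (the default)
glFrontFace(GL_CCW);    // counter-clockwise winding is the front face (the default)

// Counter-clockwise as seen by the default camera, so front-facing and drawn:
glBegin(GL_TRIANGLES);
glVertex3f(-0.5f, -0.5f, 0f);
glVertex3f( 0.5f, -0.5f, 0f);
glVertex3f( 0.0f,  0.5f, 0f);
glEnd();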

Box2d helicopter physics

I am developing a Java game using Box2D for my physics, and I have got a helicopter.
I reduced gravity by setting:
body.setGravityScale(0.03f);
So it acts a bit realistically (it is affected by gravity only a little bit, floating in the air).
To move it up/down and left/right I have a controller; that's how I control my helicopter:
body.applyLinearImpulse(new Vector2(pValueX * 3, pValueY * 3), mainBody.getWorldCenter());
where pValueX and pValueY are 1 or -1 (directions up/down, left/right).
It works well, but now I am trying to achieve a more realistic effect: when moving the helicopter left/right I want to tilt it a little bit so it behaves like a real helicopter, but I could not find a proper way to do it. I have tried applying force to different parts of the body, but that makes my helicopter rotate 360 degrees if I keep pressing left or right.
This question is old, but in case it's still relevant: I created a helicopter using JBox2D (which pretty much maps directly to Box2D). For tilting left/right (i.e. forwards/backwards relative to the pilot):
heli.applyTorque(TURN_TORQUE);
or
heli.applyTorque(-TURN_TORQUE);
This rotates the heli, and then if the player wants lift:
Vec2 force = new Vec2();
force.y = (float) Math.cos(heli.getAngle()) * -1;
force.x = (float) Math.sin(heli.getAngle());
force.mulLocal(ROTOR_FORCE);
heli.applyForceToCenter(force);
What you can do is define two constants, maxForceLeft and maxForceRight. When you press left, apply some force on the cockpit part of the helicopter and keep comparing it with maxForceLeft; once it reaches that value, stop applying the force. Do the same for the right button by applying the force on the tail rotor part of the helicopter. In this way you can avoid rotating it 360 degrees. Depending on the kind of effect you want for your helicopter, you can apply the forces in either the upward or downward direction.
What you need is to rotate the body to a desired angle. This is a great tutorial for achieving that: http://www.iforce2d.net/b2dtut/rotate-to-angle
I hope this helps.
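Following that tutorial, a hedged sketch of the proportional-torque version (the constants are made up and need tuning; movingLeft/movingRight are assumed input flags, and applyTorque is the single-argument JBox2D form used above):

float maxTilt = 0.3f;              // radians of tilt while strafing
float gain = 50f, damping = 10f;   // tune for your body's mass and inertia

// Desired tilt: lean into the movement direction, level out otherwise.
float desiredAngle = movingLeft ? maxTilt : (movingRight ? -maxTilt : 0f);

// Torque proportional to the angle error, damped by the angular velocity,
// so the body settles at desiredAngle instead of spinning 360 degrees.
float torque = gain * (desiredAngle - heli.getAngle())
             - damping * heli.getAngularVelocity();
heli.applyTorque(torque);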

3D Shadow implementation idea

Let's assume your eye is at a surface point P1 on an object A, there is a target object B, and there is a point light source behind object B.
Question: am I right if I look toward the light source and say "I am in a shadow" whenever I cannot see the light because of object B? Then I flag that point of object A as "one of the shadow points of B on A".
If this is true, can we build a "shadow geometry" (black-colored) object on the surface of A and then change it constantly as the light, B, and A move, in real time? Let's say a sphere (A) has 1000 vertices and the other sphere (B) has 1000 vertices too; does that mean 1 million comparisons? (Is shadowing O(N^2) in time?) I am not sure about the complexity, because moving P1 (the eye) also changes which point of B is seen (between P1 and the light source point). What about second-order shadows and higher (such as light reflecting between two objects many times)?
I am using Java 3D now, but it doesn't have shadow capabilities, so I am thinking of moving to other Java-compatible libraries.
Thanks.
Edit: I need to disable the "camera" when moving the camera to build that shadow. How can I do this? Does it decrease performance badly?
New idea: Java 3D has built-in collision detection. I will create invisible lines from the light to the target polygon's vertices, then check for a collision with another object. If a collision occurs, I add that vertex coordinate to the shadow list. But this would work only for point lights :(.
Anyone who supplies a real shadow library for Java 3D will be very helpful.
A very small sample Geomlib shadow/ray-tracing example in Java 3D would be best.
A ray-tracing example, maybe?
I know this is a little hard, but it has surely been tried by at least a hundred people.
Thanks.
Shadows are probably the most complex topic in 3D graphics programming, and there are many approaches, but the best option should be identified according to the task requirements. The algorithm you are talking about is the simplest way to implement shadows from a spot light source onto a plane. It should not be done on the CPU, as you already use the GPU for 3D rendering.
Basically the approach is to render the same object twice: once from the camera's viewpoint, and once from the light source's position. You will need to prepare model-view matrices to convert between these two views. When you render the object from the light's position, you get a depth map, in which each value records the point closest to the light source. Then, for each pixel of the normal rendering, you convert its 3D coordinates into the light's view and check them against the corresponding depth value. This essentially gives you a way to tell which pixels are covered by shadow.
The performance impact comes from rendering the same object twice. If your task doesn't require a highly scalable shadow-casting solution, then this might be the way to go.
A number of relevant questions:
How Do I Create Cheap Shadows In OpenGL?
Is there an easy way to get shadows in OpenGL?
What is the simplest method for rendering shadows on a scene in OpenGL?
Your approach can be summarised like this:
foreach (point p to be shaded) {
    foreach (light) {
        if (light is visible from p)
            // p is lit by that light
        else
            // p is in shadow
    }
}
The funny fact is that's how real-time shadows are done today on the GPU.
However, it's not trivial to make this work efficiently. Rendering the scene is a streamlined process, triangle by triangle. It would be very cumbersome if, for every single point (pixel, fragment) in every single triangle, you had to consider all other triangles in order to check for ray intersection.
So how to do that efficiently? Answer: Reverse the process.
There are usually far fewer lights than pixels in the scene. Let's take advantage of this fact and do some preprocessing:
// preprocess
foreach (light) {
    // find all pixels p on the scene reachable from the light
}

// then render the whole scene...
foreach (point p to be shaded) {
    foreach (light) {
        // simply look up what was calculated before...
        if (p is visible by the light)
            // p is lit
        else
            // p is in shadow
    }
}
That seems a lot faster... But two problems remain:
how to find all pixels visible by the light?
how to make them accessible quickly for lookup during rendering?
Here's the tricky part:
In order to find all points visible from a light, place a camera there and render the whole scene! The depth test will reject the occluded points.
To make this result accessible later, save it as a texture and use that texture for lookup during the actual rendering stage.
This technique is called Shadow Mapping, and the texture with pixels visible from a light is called a Shadow Map. For a more detailed explanation, see for example the Wikipedia article.
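To make the lookup concrete, here is a hedged CPU-side sketch of the per-point test (libGDX math classes are used for brevity; a real implementation does this in a fragment shader, and all names here are illustrative):

import com.badlogic.gdx.math.Matrix4;
import com.badlogic.gdx.math.Vector3;

/** Returns true if worldPoint is in shadow according to a depth map
 *  rendered from the light's point of view. */
static boolean inShadow(Vector3 worldPoint, Matrix4 lightViewProj,
                        float[][] depthMap, int mapSize, float bias) {
    // Project into the light's clip space; x, y, z end up in [-1, 1].
    Vector3 p = new Vector3(worldPoint).prj(lightViewProj);
    // Map x/y to texel coordinates in the shadow map.
    int x = (int) ((p.x * 0.5f + 0.5f) * (mapSize - 1));
    int y = (int) ((p.y * 0.5f + 0.5f) * (mapSize - 1));
    if (x < 0 || x >= mapSize || y < 0 || y >= mapSize)
        return false; // outside the light's frustum: treat as lit
    float nearestToLight = depthMap[y][x];     // closest depth the light "sees" there
    float pointDepth = p.z * 0.5f + 0.5f;      // this point's depth from the light
    return pointDepth > nearestToLight + bias; // something nearer blocks the light
}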
Basically yes, your approach will produce shadows. But doing it point by point is not feasible performance-wise (for real time), unless it's done on the GPU. I'm not familiar with what the APIs offer today, but I'm sure any recent engine will offer some shadows out of the box.
Your "New idea" is how shadows were implemented back in the days when rendering was still done on the CPU. If the number of polygons isn't too big (or you can efficiently reject entire bunches by using grouping volumes, etc.) it can be done with fairly little CPU power.
3D shadow rendering in vanilla Java is never going to be efficient. You'd best use graphics libraries written to utilize the full range of capabilities of the graphics card, such as OpenGL or DirectX. As you are using Canvas (per the screenshot you provided), you can even paint that Canvas from native code using JNI. So you could use all the technology from graphics libraries and, with just a little fiddling, paint your Canvas directly from the native code. There would be very little work involved to make it work, compared to writing your own 3D engine.
Wiki link about AWT native access: http://en.wikipedia.org/wiki/Java_AWT_Native_Interface
Documentation: http://docs.oracle.com/javase/7/docs/technotes/guides/awt/AWT_Native_Interface.html
