libGDX antialiasing on desktop seems ineffective - java

I am working on a desktop game with libGDX and want to reduce the aliasing, which is very strong.
There is documentation about this: the magic is supposed to happen in the DesktopLauncher class, with the line config.samples = samplingNumber;
I tried sample counts of 2, 4, 8, and 16, but I am unable to see any difference.
Here is my DesktopLauncher class.
public class DesktopLauncher {
    public static void main (String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.title = "AA Test";
        config.width = 1280;
        config.height = 720;
        config.samples = 8;
        new LwjglApplication(new MyGdxGame(), config);
    }
}
And here is an image showing the difference between no AA and MSAA 16x. The same result is observed for MSAA 2x, 4x and 8x.
Am I missing something to apply MSAA to my libGDX project?

I found a solution to this problem.
texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
The samples field, as you experienced, has no effect, but setting this filter on the Textures did.
If you are using TextureAtlas then you can do the following to your TextureAtlas object.
atlas.getTextures().forEach(t -> t.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear));

MSAA only affects the edges of polygons, which are not usually visible in a 2D scene because sprites typically do not bleed all the way to their rectangular edges. (Exceptions are opaque rectangular sprites, and shapes drawn with ShapeRenderer.)
Your image quality looks to me like you are not using a mip mapping filter. Load your texture with the useMipMaps parameter true, and use a min filter of MipMapLinearLinear or MipMapLinearNearest (the first looks better, costs more). Note: a MipMap filter does nothing if you didn't load your Texture with useMipMaps true.
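To make the mip mapping suggestion concrete: a mip chain stores progressively halved copies of the image down to 1x1, and the min filter blends between them. In libGDX that chain is built when the Texture is constructed with useMipMaps set to true, e.g. new Texture(Gdx.files.internal("sprite.png"), true). The level count itself is plain arithmetic (a self-contained check, no libGDX needed):

```java
public class MipLevels {
    // Count mip levels: the full-size image plus one level per halving down to 1x1.
    static int mipLevels(int size) {
        int levels = 1;
        while (size > 1) {
            size /= 2;
            levels++;
        }
        return levels;
    }

    public static void main(String[] args) {
        // A 256x256 texture carries levels 256, 128, 64, 32, 16, 8, 4, 2, 1
        System.out.println(mipLevels(256)); // prints 9
    }
}
```

MipMapLinearLinear then interpolates both within and between those levels (trilinear filtering).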
There are AA techniques that do process all pixels of the screen, but they are more expensive than simply using mip mapping and trilinear filtering. One example is FXAA, which is done not with a configuration setting, but by drawing your scene to a frame buffer object, and then drawing the FBO's texture to screen with a special shader.

Your GPU may not support MSAA; in that case it should still support CSAA (coverage sampling).
To make it work you need to replace
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
with
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT
        | GL20.GL_DEPTH_BUFFER_BIT
        | (Gdx.graphics.getBufferFormat().coverageSampling ? GL20.GL_COVERAGE_BUFFER_BIT_NV : 0));
in your render method. Then set config.samples for Desktop or config.numSamples for Android to the desired value.
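For reference, the OR in that glClear call just combines bit flags. A minimal sketch with the literal constant values (mirroring GL20 and the NV_coverage_sample extension, so it runs without libGDX):

```java
public class ClearMask {
    // Standard OpenGL ES bit values (mirrored from GL20 / the NV_coverage_sample extension)
    static final int GL_DEPTH_BUFFER_BIT = 0x00000100;
    static final int GL_COLOR_BUFFER_BIT = 0x00004000;
    static final int GL_COVERAGE_BUFFER_BIT_NV = 0x00008000;

    // Build the clear mask, adding the coverage bit only when CSAA is active.
    static int clearMask(boolean coverageSampling) {
        return GL_COLOR_BUFFER_BIT
                | GL_DEPTH_BUFFER_BIT
                | (coverageSampling ? GL_COVERAGE_BUFFER_BIT_NV : 0);
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(clearMask(true)));  // c100
        System.out.println(Integer.toHexString(clearMask(false))); // 4100
    }
}
```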
Read more about LibGDX AA here

Related

LibGDX: Can't add a Texture in my Table

I have a problem with my texture. The table works perfectly, but when I try to add a texture, Android Studio doesn't agree with me.
private Sprite tiger2;

...

batch.begin();
batch.setColor(tiger2.getColor());
batch.draw(
        tiger2,
        Gdx.graphics.getWidth() / 2f - tiger2.getRegionWidth() / 2f,
        Gdx.graphics.getHeight() / 2f - tiger2.getRegionHeight() / 2f,
        tiger2.getRegionWidth(),
        tiger2.getRegionHeight()
);
batch.end();

...

tiger2 = new Sprite(new Texture("tiger2.png"));

...
I don't know if this is how I should write it:
table.add(tiger2);
I get this error:
Cannot resolve method 'add(com.badlogic.gdx.graphics.g2d.Sprite)'
table.add(tiger2);
You are trying to add a Sprite to the table, not a Texture, and this is the core reason why you are having problems. It is basically the same as doing int i = "Hello World", which would yield an error like incompatible types. In other words, an int container cannot hold a String.
If you hover over the line you will see that table.add expects an Actor. If you go to the definition of Sprite you will notice that it does not extend Actor in any way. So doing Actor a = new Sprite() results in the same error as above: you cannot put a Sprite object in an Actor container. These are core fundamentals of programming.
So what are compatible Actors? Everything that inherits from Actor, such as Label, TextButton, Table, ScrollPane, Window, Dialog, etc. You can extend Actor yourself too and create your very own Actor, but since you are struggling with this you should hold off on that.
The easiest solution is to use scene2d.ui.Image. If you inspect this class you will see it extends Widget, and if you inspect Widget you will see it extends Actor; thus Image is an Actor, just as you are a descendant of your grandmother through your mother.
If you inspect what the Image constructor takes, you will notice the easiest thing to do is to create a SpriteDrawable, since Image does not take a Sprite, just as table.add does not take a Sprite either. SpriteDrawable takes a Sprite, is a Drawable, and is therefore compatible with the constructor of Image.
Image image = new Image(new SpriteDrawable(mySprite));
table.add(image);
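The is-a reasoning above can be sketched with stand-in classes (hypothetical names mirroring the scene2d hierarchy, not the real libGDX types):

```java
public class HierarchyDemo {
    // Stand-ins for the real classes: Image extends Widget extends Actor,
    // while Sprite sits outside that tree entirely.
    static class Actor {}
    static class Widget extends Actor {}
    static class Image extends Widget {}
    static class Sprite {} // no relation to Actor

    public static void main(String[] args) {
        Actor a = new Image();     // fine: Image is-an Actor through Widget
        // Actor b = new Sprite(); // does not compile: incompatible types
        System.out.println(a instanceof Actor); // true
    }
}
```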
Image takes a Texture too, so if you are not doing fancy Sprite stuff you can just create the Image with a Texture as well. You can see this in your IDE by typing new Image( and letting IntelliSense show you the compatible constructors.
Try to understand these types, and use your IDE and the documentation to find out what you can use. In Android Studio you can right-click a class and do Go To -> Declaration to see the class in question. Other IDEs have equivalent features, unless of course you are using something like Notepad.
Assuming the table is a Scene2D Table:
A texture is not an actor and so cannot be added to a table. To fix this, wrap the texture in an Image (an actor that displays a texture). It would look something like this:
Image tiger2Image = new Image(new Texture("tiger2.png"));
table.add(tiger2Image);

Drawing 3D boxes on chessboard - OpenCV, LibGdx and java

I'm new to OpenCV. I'm working with it in Java, which is a pain, since most of the examples and resources on the internet are in C++.
Currently my project involves recognizing a chessboard and then being able to draw on specific parts of the board.
I've gotten as far as getting the corners through the Calib3d part of the library, but this is where I get stuck. My question is: how do I convert the corner info I got (the corners' placement in the 2D image) into something I can use in 3D space to draw with libGDX?
Following is my code (in snippets):
public class ChessboardDrawer implements ApplicationListener {
    ... // Fields are here

    MatOfPoint2f corners = new MatOfPoint2f();
    MatOfPoint3f objectPoints = new MatOfPoint3f();

    public void create() {
        webcam = new VideoCapture(0);
        ... // Program sleeps to make sure the camera is ready
    }

    public void render() {
        // Fetch webcam image
        webcam.read(webcamImage);
        // Grayscale the image
        Imgproc.cvtColor(webcamImage, greyImage, Imgproc.COLOR_BGR2GRAY);
        // Check if the image contains a chessboard with 9x6 inner corners
        boolean foundCorners = Calib3d.findChessboardCorners(greyImage,
                new Size(9, 6),
                corners, Calib3d.CALIB_CB_FAST_CHECK | Calib3d.CALIB_CB_ADAPTIVE_THRESH);
        if (foundCorners) {
            for (int i = 0; i < corners.rows(); i++) {
                // This is where I have to convert the corners
                // to something I can use in libGDX to draw boxes,
                // and insert them into the objectPoints variable
            }
        }
        // Show the corners on the webcamImage
        Calib3d.drawChessboardCorners(webcamImage, new Size(9, 6), corners, true);
        // Helper library to show the webcamImage
        UtilAR.imDrawBackground(webcamImage);
    }
}
Any help?
You actually need to localize the (physical) camera using those coordinates.
Fortunately, it is really easy in case of a chessboard.
Camera pose estimation
Note:
The current implementation in OpenCV may not satisfy you in terms of accuracy (at least for a monocular camera); a good AR experience demands high accuracy.
(Optional) Use a noise-filtering/estimation algorithm, preferably a Kalman filter, to stabilize the pose estimate across time/frames.
This would reduce jerks and wobbling.
Control the pose (position + orientation) of a PerspectiveCamera using the aforementioned pose estimate.
Draw your 3D content using scales and an initial orientation consistent with the objPoints that you provided to the camera calibration method.
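To make the objPoints part concrete: for a chessboard they are simply the inner-corner positions in the board's own frame, a flat grid spaced by the physical square size with z = 0. A self-contained sketch (plain arrays here; in OpenCV you would wrap the result in a MatOfPoint3f, and the 25 mm square size is an assumption):

```java
public class ChessboardPoints {
    // Generate the 3D model points for a cols x rows inner-corner chessboard.
    // All points lie in the z = 0 plane of the board's own coordinate frame.
    static float[][] boardPoints(int cols, int rows, float squareSize) {
        float[][] pts = new float[cols * rows][3];
        for (int i = 0; i < cols * rows; i++) {
            pts[i][0] = (i % cols) * squareSize; // x along the board
            pts[i][1] = (i / cols) * squareSize; // y along the board
            pts[i][2] = 0f;                      // board is flat
        }
        return pts;
    }

    public static void main(String[] args) {
        float[][] pts = boardPoints(9, 6, 25f);             // 25 mm squares (assumed)
        System.out.println(pts.length);                     // 54 corners
        System.out.println(pts[53][0] + ", " + pts[53][1]); // 200.0, 125.0
    }
}
```

These model points, together with the detected 2D corners and the camera intrinsics, are what Calib3d.solvePnP needs to recover the camera pose.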
You can follow this nice blog post to do it.
All 3D models that you render now would be in the chessboard's frame of reference.
Hope this helps.
Good luck.

How to set different images as faces/sides of a 3D cube in Java 3D API?

I've created a small piece of code which draws a 3D cube in my SWT application and allows the user to rotate it.
Now I want to change each face/side of the cube and draw a different image on it, but I can't find out how to do that (or at least an easy way, if one exists).
I was able to change the complete texture of the cube to an image, but that changes all the faces, and I want to set a different image on each face. Is this possible? Any code example?
Thanks
OK, based on the previous answer and some other forums, I arrived at the following code, which lets you set a different texture on each face of a cube.
Basically, the line that makes this possible is the following:
((Shape3D) textureCube.getChild(POSITION)).setAppearance(APPEARANCE);
Taking into account that:
textureCube:
Box textureCube = new Box(0.4f, 0.4f, 0.4f, Box.GENERATE_TEXTURE_COORDS,
        defaultAppearance);
(defaultAppearance is just a basic Appearance object: Appearance defaultAppearance = new Appearance();)
The POSITION is given, as vembutech pointed out, by the TextureCubeMap class and its values for each face: POSITIVE_X, POSITIVE_Y, POSITIVE_Z, NEGATIVE_X, NEGATIVE_Y, NEGATIVE_Z.
And APPEARANCE is just an Appearance object. I created my Appearance objects with this method:
private Appearance getAppearance(String f) throws Exception {
    Appearance app = new Appearance();
    URL texImage = new java.net.URL("file:" + f);
    Texture tex = new TextureLoader(texImage, this).getTexture();
    app.setTexture(tex);
    TextureAttributes texAttr = new TextureAttributes();
    texAttr.setTextureMode(TextureAttributes.MODULATE);
    app.setTextureAttributes(texAttr);
    return app;
}
This method creates an appearance based on an input file (f).
Cheers
Use the TextureCubeMap class, which is a subclass of Texture. Texture mapping can then be used to apply an image to each face of the cube.
You do this by specifying the cube faces via their positive and negative xyz axes (POSITIVE_X, NEGATIVE_X, and so on).
Refer to the link below for the complete documentation.

LibGDX FrameBuffer scaling

I'm working on a painting application using the LibGDX framework, and I am using their FrameBuffer class to merge what the user draws onto a solid texture, which is what they see as their drawing. That aspect is working just fine, however, the area the user can draw on isn't always going to be the same size, and I am having trouble getting it to display properly on resolutions other than that of the entire window.
I have tested this very extensively, and what seems to be happening is the FrameBuffer is creating the texture at the same resolution as the window itself, and then simply stretching or shrinking it to fit the actual area it is meant to be in, which is a very unpleasant effect for any drawing larger or smaller than the window.
I have verified, at every single step of my process, that I am never doing any of this stretching myself, and that everything is being drawn how and where it should, with the right dimensions and locations. I've also looked into the FrameBuffer class itself to try to find the answer, but strangely found nothing there either; given all the testing I've done, though, it seems to be the only possible place this issue could be introduced.
I am simply completely out of ideas, having spent a considerable amount of time trying to troubleshoot this problem.
Thank you so much Synthetik for finding the core issue. Here is the proper way to fix the situation you allude to (I think!).
The way to make the frame buffer produce a correctly scaled texture, regardless of the actual device window size, is to set the projection matrix to the size required, like so:
SpriteBatch batch = new SpriteBatch();
Matrix4 matrix = new Matrix4();
matrix.setToOrtho2D(0, 0, 480,800); // here is the actual size you want
batch.setProjectionMatrix(matrix);
I believe I've solved my problem, and I will give a very brief overview of what the cause is.
Basically, the issue lies within the SpriteBatch class. Specifically (assuming I am not using an outdated version of the class), the problem is on line 181, where the projection matrix is set:
projectionMatrix.setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
This causes everything that is drawn to be rendered at the scale of the window/screen and then stretched to fit where it needs to go afterwards. I am not sure if there is a more "proper" way to handle this, but I simply added another method to the SpriteBatch class that lets me call this method again with my own dimensions, and I call it when necessary. Note that it isn't required on every draw, only once, or whenever the dimensions change.
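For the curious, setToOrtho2D just builds a matrix that maps your chosen pixel rectangle onto OpenGL's -1..1 clip space, independent of the physical window size. The core of that mapping is two lines of arithmetic (a self-contained sketch of the math, not the actual Matrix4 code):

```java
public class Ortho2D {
    // Map a point from a virtual (width x height) pixel space into
    // normalized device coordinates, as an orthographic projection does.
    static float[] toNdc(float x, float y, float width, float height) {
        return new float[] {
            x * 2f / width  - 1f, // left edge -> -1, right edge -> +1
            y * 2f / height - 1f  // bottom edge -> -1, top edge -> +1
        };
    }

    public static void main(String[] args) {
        // With a 480x800 virtual resolution the center always lands at (0,0),
        // whether the real window is 480x800 or 1080x1920.
        float[] ndc = toNdc(240, 400, 480, 800);
        System.out.println(ndc[0] + ", " + ndc[1]); // 0.0, 0.0
    }
}
```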

getting more FPS from a SurfaceView, can i do something better?

I'm playing around with drawing jbox2d objects onto a SurfaceView, but I'm not really satisfied with the framerate I'm getting (10-13 when there are multiple objects on screen; in debug mode I'm getting about 26-30).
while (isRun)
{
    Canvas canvas = holder.lockCanvas();
    _update(canvas); // <- call to update
    holder.unlockCanvasAndPost(canvas);
}
...
canvas.drawColor(0xFF6699FF);
for (Body b = world.getBodyList(); b != null; b = b.getNext()) // <- cycle through all the world bodies
{
    rlBodyImage bi = new rlBodyImage();
    bi = (rlBodyImage) b.getUserData();
    float x = b.getPosition().x * scale + wOffset + camera_x + camera_x_temp;
    float y = b.getPosition().y * -scale + hOffset + camera_y + camera_y_temp;
    canvas.save();
    canvas.translate(x - (bi.getImage().getWidth() * bi.getCoof() * scale) / 2, y - (bi.getImage().getHeight() * bi.getCoof() * scale) / 2);
    canvas.scale(bi.getCoof() * scale, bi.getCoof() * scale);
    canvas.rotate((float) -(b.getAngle() * (180 / Math.PI)), bi.getImage().getWidth() / 2, bi.getImage().getHeight() / 2);
    canvas.drawPicture(bi.getImage()); // <- draw the image associated with the current body on the canvas
    canvas.restore(); // images are stored as "Pictures", extracted from SVGs
}
Is there a way to speed things up, other than, of course, using simpler SVGs? :)
Thanks!
EDIT:
Yep, I will have to switch to PNGs; they give a way better FPS rate.
vector Pictures = 10...13...16 FPS
PNG only = 35...40+ FPS
PNG with scaling = 32...37+ FPS
PNG with scaling & rotation = 27+ FPS
You should probably use rasterized images instead of SVGs. You can either save them as PNGs or similar before/during compilation, or have the phone convert them to appropriately sized images on (first) startup.
It also seems like you are swapping the texture to be drawn multiple times per frame. That is extremely expensive for the GPU. You should create one big sprite/image atlas containing all your images, load it onto the GPU, and then draw different regions of it to the screen.
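To illustrate the atlas idea: drawing a region of one big texture boils down to addressing a sub-rectangle by normalized coordinates instead of binding a new texture. A self-contained sketch (the 512x512 atlas and the sprite placement are assumed values):

```java
public class AtlasRegion {
    // Convert a pixel rectangle inside an atlas into normalized UV coordinates,
    // which is what "drawing a region" of one big texture boils down to.
    static float[] toUv(int x, int y, int w, int h, int atlasW, int atlasH) {
        return new float[] {
            (float) x / atlasW,        // u1 (left)
            (float) y / atlasH,        // v1 (top)
            (float) (x + w) / atlasW,  // u2 (right)
            (float) (y + h) / atlasH   // v2 (bottom)
        };
    }

    public static void main(String[] args) {
        // A 64x64 sprite at (128, 0) inside a 512x512 atlas (assumed sizes):
        float[] uv = toUv(128, 0, 64, 64, 512, 512);
        System.out.println(uv[0] + ", " + uv[2]); // 0.25, 0.375
    }
}
```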
And do not allocate lots of new objects each frame unless you are going to keep them around: the GC will freeze your game for a short period every now and then, dropping your FPS.
Edit: you should also be getting much more than ~20 fps in debug mode (unless your phone is really old). Consider optimizing your Box2D world. You could also consider using the libGDX framework: it provides a JNI wrapper for Box2D, greatly improving performance.
Edit 2:
the problem with PNGs starts when I have to rotate and scale them (when the user zooms in or out)
This is not an issue. Save the PNGs at the maximum size they will be shown on screen, and then just scale and rotate them to fit. If you use a proper minification filter, and maybe add some extra space around them (see this post for an example), you should see minimal to no loss in quality. If needed, you can create one large and one smaller version of each sprite and draw whichever fits best, or just use mipmapping in OpenGL. There is no way you should need to save a version for every size/angle of each sprite.
Libgdx Box2D
I don't know how much of a performance gain there is.
Immediately, this line strikes me as odd:
rlBodyImage bi = new rlBodyImage();
Every single frame you're making a new rlBodyImage; given its name, I suspect that's not a cheap thing to do.
However, the next line throws that object away:
bi = (rlBodyImage) b.getUserData();
Try this inside your loop instead of those two lines:
rlBodyImage bi = (rlBodyImage) b.getUserData();
Let us know if that helps :)
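One further minor cleanup while touching that loop: the manual 180/Math.PI conversion passed to canvas.rotate can lean on Math.toDegrees, which performs the same scaling (a readability sketch, not a performance fix):

```java
public class AngleConvert {
    // Convert a Box2D body angle (radians, counter-clockwise positive)
    // into the clockwise-positive degrees that Canvas.rotate expects.
    static float toCanvasDegrees(float radians) {
        return (float) -Math.toDegrees(radians);
    }

    public static void main(String[] args) {
        // A quarter turn counter-clockwise becomes roughly -90 degrees for the canvas.
        System.out.println(toCanvasDegrees((float) (Math.PI / 2)));
    }
}
```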
