I have been trying to "cut" an image for some time now; I'll explain why and what I tried.
I wanted to create an HP "bar", except it's not a bar but a heart, so I thought it would be easy: all I had to do was have two pictures, draw them on top of each other, and then cut one to make it appear as if HP was being lost. But I was not able to find a way to cut the image.
Setting the height just resizes the image, as you might have guessed.
I tried using a TextureRegion to kind of hack it, but it didn't go so well.
I found a method called clipBegin(), which also uses scissors, but for some reason that just doesn't seem to be working.
I might be using clipBegin() wrong, but I can't really find any real documentation on it. All I'm doing is:
image.clipBegin(x, y, width, height);
image.clipEnd();
I almost forgot: I'm using a scene2d Image. There might be a better way to go about this, but I'm not sure what that would be.
I would appreciate any ideas on how to do this, thank you.
You want to use the OpenGL scissor support that libGDX exposes. See the libGDX Clipping wiki and the libGDX ScissorStack documentation.
The API isn't particularly friendly (it's designed to support dynamically pushing multiple constraining rectangles, which, as far as I've seen, isn't used very often).
The important point to remember with the scissor stack is that it only applies to actual draw commands that get issued. Since most APIs try to batch up draw commands, this means actual drawing might not happen when it looks like it should happen. To ensure clipping is happening you must flush any buffered draws before pushing the scissor (otherwise the wrong thing might get clipped) and you must flush any draw calls before popping the scissor (otherwise things you want clipped might avoid the scissors).
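As a minimal sketch of that flush-then-scissor pattern (assuming a SpriteBatch, the camera you render with, and a heart TextureRegion; the names, the healthFraction value, and the clipBounds rectangle are placeholders, not anything from your code):
// imports: com.badlogic.gdx.math.Rectangle, com.badlogic.gdx.scenes.scene2d.utils.ScissorStack
// inside your render code, between batch.begin() and batch.end():
Rectangle clipBounds = new Rectangle(x, y, width, height * healthFraction); // only the part of the heart to keep
Rectangle scissors = new Rectangle();
ScissorStack.calculateScissors(camera, batch.getTransformMatrix(), clipBounds, scissors);
batch.flush(); // flush anything already queued so it isn't clipped by accident
if (ScissorStack.pushScissors(scissors)) {
    batch.draw(heartRegion, x, y, width, height); // draw the full heart; the scissor hides the rest
    batch.flush(); // force the clipped draw to happen while the scissor is active
    ScissorStack.popScissors();
}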
See libgdx ScissorStack not working as expected or libGDX - How to clip or How to draw on just a portion of the screen with SpriteBatch in libgdx? or Making a Group hide Actors outside of its bounds.
I was searching for an anti-aliasing algorithm for my OpenGL program (so I searched for a good shader). The thing is, all the shaders want to do something with textures, but I don't use textures, only colors. I looked at FXAA most of the time, so is there an anti-aliasing algorithm that works with just colors? The game this is for looks blocky like Minecraft, but it only uses colors and cubes of different sizes.
I hope someone can help me.
Greetings
Anti-aliasing has nothing specifically to do with either textures or colors.
Proper anti-aliasing is about sample rate, which, while highly technical, can be thought of as doing extra work to make a better educated guess at some value that cannot be directly looked up (e.g. a pixel that is only partially covered by a triangle).
Multisample Anti-Aliasing (MSAA) will work nicely for you; it only anti-aliases polygon edges and does nothing for texture aliasing on the interior of a polygon. Since you are not using textures, you do not need to worry about aliasing inside a polygon.
Incidentally, FXAA is not proper anti-aliasing. FXAA is basically a shader-based edge detection and blur image processing filter. FXAA will blur any part of the scene with sharp edges, whether it is a polygon edge or an edge due to a mapped texture. It indiscriminately blurs anything it thinks is an aliased edge and gets this wrong often, resulting in blurry textures.
To use MSAA, you need:
1. A framebuffer with at least 2 samples
2. Multisample rasterization enabled
Satisfying (1) is going to depend on what you used to create your window (in this case LWJGL). Most frameworks let you select the sample count as one of the parameters at the time of creation.
Framebuffer Objects can also be used to do this without messing with your window's parameters, but they are more complicated than necessary for this discussion.
(2) is as simple as calling glEnable(GL_MULTISAMPLE).
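For example, a minimal sketch using LWJGL 2's Display API (if you create your window with LWJGL 3/GLFW or some other framework, the idea is the same: request the sample count at window creation, then enable multisampling):
// (1) request a default framebuffer with 4 samples at window creation (LWJGL 2)
// note: Display.setDisplayMode and Display.create throw LWJGLException
Display.setDisplayMode(new DisplayMode(800, 600));
Display.create(new PixelFormat().withSamples(4));

// (2) enable multisample rasterization
GL11.glEnable(GL13.GL_MULTISAMPLE);
With LWJGL 3/GLFW, the equivalent of step (1) is glfwWindowHint(GLFW_SAMPLES, 4) before creating the window.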
I'm working on a painting application using the LibGDX framework, and I am using their FrameBuffer class to merge what the user draws onto a solid texture, which is what they see as their drawing. That aspect is working just fine; however, the area the user can draw on isn't always going to be the same size, and I am having trouble getting it to display properly on resolutions other than that of the entire window.
I have tested this very extensively, and what seems to be happening is the FrameBuffer is creating the texture at the same resolution as the window itself, and then simply stretching or shrinking it to fit the actual area it is meant to be in, which is a very unpleasant effect for any drawing larger or smaller than the window.
I have verified, at every single step of my process, that I am never doing any of this stretching myself, and that everything is being drawn how and where it should be, with the right dimensions and locations. I've also looked into the FrameBuffer class itself to try to find the answer, but strangely found nothing there either; given all of the testing I've done, though, it still seems to be the only place this issue could possibly be coming from.
I am simply completely out of ideas, having spent a considerable amount of time trying to troubleshoot this problem.
Thank you so much Synthetik for finding the core issue. Here is the proper way to fix the situation that you allude to. (I think!)
The way to make the frame buffer produce a correctly scaled texture, regardless of the actual device window size, is to set the projection matrix to the size required, like so:
SpriteBatch batch = new SpriteBatch();
Matrix4 matrix = new Matrix4();
matrix.setToOrtho2D(0, 0, 480,800); // here is the actual size you want
batch.setProjectionMatrix(matrix);
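For context, a rough sketch of how this fits around the FrameBuffer itself (the 480x800 size is just a placeholder for whatever your drawing area happens to be):
// create an off-screen buffer at the drawing area's size, not the window's
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, 480, 800, false);

fbo.begin();
batch.setProjectionMatrix(matrix); // project to the buffer's size, not Gdx.graphics.getWidth()/getHeight()
batch.begin();
// ... draw the user's brush strokes here ...
batch.end();
fbo.end();
The texture returned by fbo.getColorBufferTexture() then matches the drawing area instead of being stretched from the window's resolution.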
I believe I've solved my problem, and I will give a very brief overview of what the problem is.
Basically, the cause of this issue lies within the SpriteBatch class. Specifically, assuming I am not using an outdated version of the class, the problem lies on line 181, where the projection matrix is set. The line:
projectionMatrix.setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
This causes everything to be drawn at the scale of the window/screen and then stretched to fit where it needs to go afterwards. I am not sure if there is a more "proper" way to handle this, but I simply created another method within the SpriteBatch class that allows me to call this method again with my own dimensions, and I call that when necessary. Note that it isn't required on every draw or anything like that, only once, or any time the dimensions may change.
Background:
I am making a 2D side-scroller.
When the player touches the screen, the player moves forward (the camera follows the player).
I cannot find an answer to this question although it seems rather straightforward.
Question:
How can I get my parallax background to scroll only when my player moves?
(example code would make things much easier for me)
I am using AutoParallaxBackground, but it seems that it simply scrolls at the rates you pass in, with no regard to the camera. Moreover, I am not entirely sure of the difference between AutoParallaxBackground and ParallaxBackground.
Any help is appreciated!
AutoParallaxBackground extends ParallaxBackground, adding one simple feature: automatically changing mParallaxValue with time. As you may imagine, if you don't need your background constantly moving, you may use ParallaxBackground as your base class, and then use setParallaxValue(final float pParallaxValue) to manually adjust the position.
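A rough sketch of that idea (assuming an AndEngine GLES2 scene; playerSprite, backgroundSprite, and the -5f parallax factor are placeholders you would tune to your own game):
// plain ParallaxBackground: its parallax value only changes when you change it yourself
final ParallaxBackground background = new ParallaxBackground(0f, 0f, 0f);
background.attachParallaxEntity(new ParallaxBackground.ParallaxEntity(-5f, backgroundSprite));
scene.setBackground(background);

// drive the parallax value from the player's position instead of from elapsed time
scene.registerUpdateHandler(new IUpdateHandler() {
    @Override
    public void onUpdate(final float pSecondsElapsed) {
        background.setParallaxValue(playerSprite.getX());
    }

    @Override
    public void reset() { }
});
Since the value only changes when the player's x position changes, the background stays put while the player stands still.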
I'm making a game in Java, and I have it so that when the player presses the left arrow button, three images are drawn one after the other. But when the third image is drawn, you can still see the image drawn before it (I rotated the images to make it look more realistic). Is there any way to "undraw" the images? I've done some research and haven't found anything specific.
No, you can't undraw an image. It sounds like the images aren't fully opaque.
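If the goal is simply for the old frame to disappear when the new one shows, the usual pattern is to clear and redraw the scene each frame rather than "undrawing" anything. A minimal Swing sketch, assuming the three images are BufferedImages (all names here are placeholders, not your code):
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

class AnimationPanel extends JPanel {
    private BufferedImage currentFrame; // whichever of the three images is active right now

    void showFrame(BufferedImage frame) {
        currentFrame = frame;
        repaint(); // schedules a full repaint, which starts from a cleared background
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g); // clears the panel, effectively "undrawing" the previous frame
        if (currentFrame != null) {
            g.drawImage(currentFrame, 0, 0, null);
        }
    }
}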
Not to be an over-complication junkie, but how about using OpenGL to draw them on quads. That way you can hide, show, rotate, animate, accelerate etc so much easier. And it will all be tons faster than with Swing/AWT. JOGL library + NeHe Tutorials will quickly show you how to achieve this (really, you only need this and this). Or use Unity if you wanna do it even simpler and more portable.
Best reader,
I'm stuck on one of my concepts.
I'm making a program which classroom children can measure themselves with.
This is what the program includes;
- 1 webcam (only used for a simple webcam view.)
- 2 phidgets (don't mind these.)
So, this was my plan: I'll draw a rectangle on the webcam view and make it repaint itself constantly.
When the repainting is stopped by one of the phidgets, the rectangle's value will be returned in centimeters or meters.
I've already written the code of the rectangle that's repainting itself and this was my result:
(It's a roundRectangle, the lines are kind of hard to see in this image, sorry about that.)
As you can see, the background is now simply black.
I want to set the background of this JFrame as a webcam view (if possible) and then draw the rectangle over the webcam view instead of the black background.
I've already looked into JMF, FMJ, and such, but I'm getting errors even after checking my webcam path and adding the needed jar libraries, so I want to try other options.
So:
- I simply want to open my webcam and use it as the background (yes, a live stream, if possible in some way), and then draw this rectangle over it.
I'm thus wondering if this is possible, or if there are other options for me to achieve this.
Hope you understand my situation, and please ask if anything's unclear.
EDIT:
I got my camera to open now through Java. The running camera is of type "Process".
This is where I got the code for my camera to open: http://www.linglom.com/2007/06/06/how-to-run-command-line-or-execute-external-application-from-java/
I adjusted mine a little so it'll open my camera instead.
But now I'm wondering: is it possible to set a process as the background of a JFrame?
Or can I somehow add the process to a JPanel and then add that to a JFrame?
I've tried several things without any success.
My program as it is now, when I run it, opens the measuring frame and the camera view separately.
But the goal is to fuse them and make the repainting-rectangle paint over the camera view.
Help much appreciated!
I don't think it's a matter of setting a webcam stream as the background for your interface. More likely, you need to create a media player component, add it to your GUI, and then overlay your rectangles on top of that component.
As you probably know from searching for Java webcam solutions in Stack Overflow already, it's not easy, but hopefully the JMF Specs and API Guide will help you through it. The API guide is a PDF and has sections on receiving media streams, as well as sample code.
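As a rough illustration of the overlay idea (videoComponent here stands for whatever visual Component your media framework ends up giving you; the sizes and colors are placeholders):
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JFrame;
import javax.swing.JLayeredPane;
import javax.swing.JPanel;

// inside your UI setup code, once videoComponent exists:
// transparent panel that paints the measuring rectangle on top of the video
JPanel overlay = new JPanel() {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.RED);
        g.drawRoundRect(50, 50, getWidth() - 100, getHeight() - 100, 20, 20);
    }
};
overlay.setOpaque(false);

// stack the video component and the overlay in one layered pane
JLayeredPane layers = new JLayeredPane();
videoComponent.setBounds(0, 0, 640, 480); // the component your webcam/media library provides
overlay.setBounds(0, 0, 640, 480);
layers.add(videoComponent, JLayeredPane.DEFAULT_LAYER);
layers.add(overlay, JLayeredPane.PALETTE_LAYER);

JFrame frame = new JFrame("Measuring");
frame.setContentPane(layers);
frame.setSize(640, 480);
frame.setVisible(true);
The same repaint-driven rectangle you already wrote can then live in the overlay panel, drawn over the live view instead of over a black background.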