I'm using Slick2D to render an Animation with anti-aliasing. Without AA, the movement is choppy, as one would expect.
I turn on AA in my game's preRenderState with:
g.setAntiAlias(true);
This leads to a visible artifact: a diagonal line in the center, presumably caused by the two triangles that render the rectangle not meeting precisely. How can I remove that while still using AA to smooth my movement? I found http://www.java-gaming.org/index.php?topic=27313.0 but the solution there was "remove AA", which I am reluctant to do.
This looks suspiciously like artifacts due to GL_POLYGON_SMOOTH.
When you use that (deprecated) functionality, you are expected to draw all opaque geometry with blending enabled and the blend function: GL_SRC_ALPHA_SATURATE, GL_ONE. Failure to do so produces white outlines on most contours (aliased edges, basically).
Chapter 6 of the OpenGL Redbook states:
Now you need to blend overlapping edges appropriately. First, turn off the depth buffer so that you have control over how overlapping pixels are drawn. Then set the blending factors to GL_SRC_ALPHA_SATURATE (source) and GL_ONE (destination). With this specialized blending function, the final color is the sum of the destination color and the scaled source color; the scale factor is the smaller of either the incoming source alpha value or one minus the destination alpha value.
This means that for a pixel with a large alpha value, successive incoming pixels have little effect on the final color because one minus the destination alpha is almost zero. With this method, a pixel on the edge of a polygon might be blended eventually with the colors from another polygon that's drawn later. Finally, you need to sort all the polygons in your scene so that they're ordered from front to back before drawing them.
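Restated as an equation, the blending the Redbook describes computes, per color channel:

C_final = min(A_src, 1 - A_dst) * C_src + C_dst

where A_src is the incoming fragment's alpha and A_dst is the alpha already in the framebuffer.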
That is a lot of work that almost nobody does correctly when they try to enable GL_POLYGON_SMOOTH.
Every time you enable anti-aliasing in Slick2D, it should be paired with the matching blend function, like this:
g.setAntiAlias(true);
// requires: import org.lwjgl.opengl.GL11;
GL11.glBlendFunc(GL11.GL_SRC_ALPHA_SATURATE, GL11.GL_ONE);
Basically the same as Andon's answer, but with code that should solve the problem.
Note that with this blend function, whatever you draw first will be on top, rather than what you draw last, so you may need to reverse your rendering order.
Related
I was wondering if there is a way to blend objects that are in different draw calls alone.
I have a particle system that draws many points close to each other and I don't want to add their color values. However I do want to add those particles with other particles from a different particle system draw call. I know I could achieve this using a frame buffer object but it doesn't seem efficient.
It's not possible directly via blending; the only state GL has at any point in time is the current fragment, and the contents of the framebuffer.
You could imagine using a stencil mask (clear the stencil at the start of the draw, set the stencil to 1 with each triangle in the particle system, and fail the stencil test if the value is already 1). However, most particles need some level of alpha transparency to fade out each particle at the edges, so this is probably not what you actually want ...
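For illustration, a minimal LWJGL sketch of that stencil idea, assuming a GL context with a stencil buffer; drawParticleSystem() is a hypothetical placeholder for your own draw call:
// reset the mask before each particle system
GL11.glClear(GL11.GL_STENCIL_BUFFER_BIT);
GL11.glEnable(GL11.GL_STENCIL_TEST);
// pass only where nothing from this system has landed yet (stencil == 0)
GL11.glStencilFunc(GL11.GL_EQUAL, 0, 0xFF);
// on pass, mark the pixel so later overlapping fragments from the same system fail
GL11.glStencilOp(GL11.GL_KEEP, GL11.GL_KEEP, GL11.GL_INCR);
drawParticleSystem();   // particles within this system no longer add up
GL11.glDisable(GL11.GL_STENCIL_TEST);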
Actually, given the need for the "fade" region of one particle to overlap the "bright" part of a particle behind it, I'm not entirely sure that you can make this work without blending all of the particles in the particle system together.
I'm working on a project at school, which basically is: writing an application to make a drone fly autonomously, and through scanning QR-codes hung up on walls, be able to navigate through a room in order to complete a certain task.
What I am currently working on is for the drone to detect cardboard boxes (working as obstacles). These boxes are white, and have a blue circle on them. How I'm planning to solve this is by scanning the frame for colors and squares:
If the drone detects a square, check if it's white. If it's white, check if it contains a blue circle. If it does, I can say that it most likely is a cardboard box.
This is what the box looks like:
If anyone would be able to provide some pointers as to how I can start working on the color detection, I would be very happy!
PS: I haven't provided any code, since I don't really know what to provide. I would be more than happy to provide some if needed.
UPDATE: for anyone stuck at the same problem as I, a fellow student provided an excellent link for my exact situation:
http://opencv-java-tutorials.readthedocs.io/en/latest/08-object-detection.html
I would approach this from a different angle, by detecting the blue circle first.
Detect base colors
see RGB value base color name
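As a starting point, a crude per-pixel base-color classifier might look like this; the name and thresholds below are hypothetical and would need tuning for your camera and lighting:
static String baseColor(int rgb) {
    int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
    int max = Math.max(r, Math.max(g, b));
    int min = Math.min(r, Math.min(g, b));
    if (max < 60) return "black";                             // everything dark
    if (max - min < 30) return max > 180 ? "white" : "gray";  // low saturation
    if (b > r + 40 && b > g + 40) return "blue";              // blue clearly dominates
    return "other";
}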
Select all blue pixels neighboring white or gray-ish ones
As your circle has a black border, you have to select all blue pixels near white, gray, or black pixels, just to be sure. This is the result (green are the selected pixels):
Another (more robust) possibility is to select all black pixels neighboring white and blue at the same time.
Do a connected components analysis
so merge all connected pixels into polylines
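A simple 4-connected flood-fill labelling of the selected pixels could look like the sketch below; extracting the boundary polylines from each labelled component would be a further step:
static int[][] label(boolean[][] mask) {
    int h = mask.length, w = mask[0].length, next = 0;
    int[][] labels = new int[h][w];   // 0 = background / unlabelled
    java.util.ArrayDeque<int[]> queue = new java.util.ArrayDeque<>();
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            if (!mask[y][x] || labels[y][x] != 0) continue;
            labels[y][x] = ++next;    // start a new component
            queue.add(new int[]{x, y});
            while (!queue.isEmpty()) {
                int[] p = queue.poll();
                int[][] n4 = {{p[0]+1,p[1]}, {p[0]-1,p[1]}, {p[0],p[1]+1}, {p[0],p[1]-1}};
                for (int[] q : n4)
                    if (q[0] >= 0 && q[1] >= 0 && q[0] < w && q[1] < h
                            && mask[q[1]][q[0]] && labels[q[1]][q[0]] == 0) {
                        labels[q[1]][q[0]] = next;
                        queue.add(q);
                    }
            }
        }
    return labels;
}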
For each polyline decide if it is a circle/ellipse/oval
that can be done by investigating the angles between line segments. If the polyline has sharp spikes, then sharp edges are present and it is not an oval. If the circumference is too far from that of the circle/ellipse/oval computed from its bounding box, then it is not an oval but some more complicated curve.
For each oval decide if it is filled with blue
so just flood fill a mask of the oval's interior and compare how many pixels in the original image are blue against those that are not. If the ratio is close to 100% blue, then it is a filled blue oval shape....
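In code, the ratio check might be as simple as this sketch, reusing the hypothetical baseColor() classifier from earlier:
static double blueRatio(java.awt.image.BufferedImage img, boolean[][] filled) {
    int blue = 0, total = 0;
    for (int y = 0; y < img.getHeight(); y++)
        for (int x = 0; x < img.getWidth(); x++)
            if (filled[y][x]) {                 // only pixels inside the oval mask
                total++;
                if ("blue".equals(baseColor(img.getRGB(x, y)))) blue++;
            }
    return total == 0 ? 0.0 : (double) blue / total;
}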
As your marker also has some features inside, you can compute the ratio of all base colors inside it to detect the marker more accurately.
Look at Algorithms: Ellipse matching for some additional ideas.
Now you can similarly check whether the background is white/gray-ish.
There are a lot of other possible approaches, like OCR and character similarity, or approaches based on FFT/DCT or the Hough transform for circles. Also, you are not bound only to comparing geometric properties; you can compare histograms instead...
I need to create a jigsaw puzzle game. I've already done this in the past using AndEngine, however I've only cut texture into rectangles. Now I need to cut it into proper jigsaw pieces. How can I do that?
Cut the texture into rectangles, but for every rectangle take some extra space, so you have a lot of rectangles which overlap each other.
Then you need to have some set of patterns for jigsaw edges (black-and-white images; you could call them masks) and generate a mask for every rectangle using those patterns.
The algorithm would be:
Create a mask the size of the rectangle and initialise it with white.
Then choose edge patterns based on the rectangle's neighbors if they are already initialised, or choose edges randomly if the neighbors are not yet initialised.
After you have chosen the patterns, draw them on the mask for every side, so in the end you have a mask with the shape of a jigsaw piece: white = visible, black = transparent.
Then apply the mask to the rectangle when you draw it.
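If you were doing this with plain Java2D (the question mentions AndEngine, where the equivalent would be an alpha-masked sprite or shader), baking the mask into the rectangle's alpha channel might look like this sketch; piece and mask are assumed to be the same size:
static java.awt.image.BufferedImage applyMask(java.awt.image.BufferedImage piece,
                                              java.awt.image.BufferedImage mask) {
    java.awt.image.BufferedImage out = new java.awt.image.BufferedImage(
            piece.getWidth(), piece.getHeight(), java.awt.image.BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < piece.getHeight(); y++)
        for (int x = 0; x < piece.getWidth(); x++) {
            // use the mask's red channel as alpha: white (255) = visible, black (0) = transparent
            int alpha = (mask.getRGB(x, y) >> 16) & 0xFF;
            out.setRGB(x, y, (alpha << 24) | (piece.getRGB(x, y) & 0x00FFFFFF));
        }
    return out;
}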
And bear in mind that you don't stack these rectangles based on their actual size, but stack them in a way that they overlap each other...
P.S. I hope you understood what I was trying to say. Sorry, English is not my native language...
My question involves the drawImage method in Java Graphics2D (this is for a desktop app, not Android).
My BufferedImage that I'd like to draw contains high-resolution binary data; most pixels are black, but I have some sparse green pixels (the green pixels represent data points from an incoming raw data stream). The bitmap is quite large, larger than my typical panel size. I made it large so I could zoom in and out. The problem is that when I zoom out I lose some of my green pixels. As an example, if my image is 1000 pixels and my panel is 250 pixels, only 1 out of every 4 pixels in each direction (X and Y) survives. If I use nearest-neighbor interpolation when I scale, green pixels can just disappear to black. If I use something like bilinear interpolation, my green pixels get recolored to somewhere between black and green.
I understand all this behavior, but my question is: is there any way to get the behavior I want, which is to make sure that if any pixel is non-black, it is drawn at its full intensity? Perhaps something like a "max-hold" interpolation.
I realize I could probably do what I want by drawing shape primitives over a black background, and maybe this is what I'll have to do. But there is a reason I'm using bitmaps (it has to do with the fact that I'm showing the data in a falling spectrogram-type display, and it does have a mode where all the pixels could be colored, not just black and green).
Thanks,
You could look at the implementation of drawImage and override it to get your desired behaviour; however, the core of the scaling probably uses hardware acceleration, so reimplementing it in Java would be really slow.
You could look into JOGL, but my impression is that, if your pixels are really sparse, just drawing the green pixels on a black background (or over an image) would be both easy to code and very fast.
You could even have a heuristic that switches from painting the dots to scaling the image if the number of dots gets too high.
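For completeness, a minimal sketch of the "max-hold" downscale the question describes, in plain Java2D. It assumes black-vs-green data, so comparing packed RGB values is good enough to let any non-black pixel win over black:
static java.awt.image.BufferedImage maxHoldScale(java.awt.image.BufferedImage src, int dstW, int dstH) {
    java.awt.image.BufferedImage dst = new java.awt.image.BufferedImage(
            dstW, dstH, java.awt.image.BufferedImage.TYPE_INT_RGB);
    for (int dy = 0; dy < dstH; dy++) {
        int y0 = dy * src.getHeight() / dstH;
        int y1 = Math.max(y0 + 1, (dy + 1) * src.getHeight() / dstH);
        for (int dx = 0; dx < dstW; dx++) {
            int x0 = dx * src.getWidth() / dstW;
            int x1 = Math.max(x0 + 1, (dx + 1) * src.getWidth() / dstW);
            int best = 0;   // black
            for (int y = y0; y < y1; y++)
                for (int x = x0; x < x1; x++)
                    // crude: compares packed RGB, which suffices for black vs green
                    best = Math.max(best, src.getRGB(x, y) & 0xFFFFFF);
            dst.setRGB(dx, dy, best);
        }
    }
    return dst;
}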
I have been working on an isometric Minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some lighting. It is difficult to distinguish between separate elevations because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block.
Well, it depends on how you do your lighting.
Basically, sprites are just textured quads made of two triangles.
Traditional vertex based lighting (which is supported by the built-in but now deprecated functions) will just calculate the lighting for the 4 corners of that quad and everything else will be interpolated. This is quite fast but might result in the wrong lighting - especially with spot lights and big quads.
If you use directional lighting only, you might apply a normal map to your quads and thus influence lighting in a per-texel way, but that might still not be what you want.
The modern way would be to use shaders, i.e. the lighting is evaluated per-pixel. You'd then have to provide per-texel lighting information for your quad which is then used in the fragment/pixel shader.
Just to clarify, the meanings of some terms in this context:
per-texel: per pixel in the texture; those values might be interpolated
per-pixel: per output pixel, i.e. per screen pixel
Edit:
I just looked at your screenshot and it seems you'll have to change the shade of the sprite's edges if the adjacent sprite is not on the same level. Assuming you already know which sprite edge should be visible (i.e. there's a level change at that edge), you might just change the shading of the vertices that form that edge.
If you don't use any lighting, you might start by setting the vertex color to white, and to some darker color for the vertices that need shading. Then multiply your texture color by the vertex color, which should result in darker edges.
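In old-style immediate mode that could look like the sketch below; GL_MODULATE, the default texture environment, does the multiply. x, y, w, h are the quad's assumed position and size, and the bottom edge is the one assumed to need shading:
// assumes a texture is bound and GL_TEXTURE_2D is enabled
GL11.glBegin(GL11.GL_QUADS);
GL11.glColor3f(1f, 1f, 1f);       GL11.glTexCoord2f(0, 0); GL11.glVertex2f(x, y);
GL11.glColor3f(1f, 1f, 1f);       GL11.glTexCoord2f(1, 0); GL11.glVertex2f(x + w, y);
GL11.glColor3f(0.6f, 0.6f, 0.6f); GL11.glTexCoord2f(1, 1); GL11.glVertex2f(x + w, y + h); // darkened edge
GL11.glColor3f(0.6f, 0.6f, 0.6f); GL11.glTexCoord2f(0, 1); GL11.glVertex2f(x, y + h);     // darkened edge
GL11.glEnd();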
Alternatively, if those levels have different depths (i.e. different z values), you could use some shader for edge detection (e.g. some SSAO implementation).
Edit 2:
If you use plain old vertex lighting, applying weighted normals might help. Basically you calculate the weighted vertex normals from the normals of those triangles that share a vertex.
There are several methods of doing this, one being to weight the faces based on the angle at that vertex. You could multiply the normals by those angles, add them together and finally normalize the resulting normal.
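A sketch of that angle weighting, assuming LWJGL 2's org.lwjgl.util.vector.Vector3f; pos holds one position per vertex and tris holds three vertex indices per triangle:
// requires: import org.lwjgl.util.vector.Vector3f;
static Vector3f[] weightedNormals(Vector3f[] pos, int[] tris) {
    Vector3f[] normals = new Vector3f[pos.length];
    for (int i = 0; i < normals.length; i++) normals[i] = new Vector3f();
    for (int t = 0; t < tris.length; t += 3)
        for (int c = 0; c < 3; c++) {
            int a = tris[t + c], b = tris[t + (c + 1) % 3], d = tris[t + (c + 2) % 3];
            Vector3f e1 = Vector3f.sub(pos[b], pos[a], null);   // edges meeting at vertex a
            Vector3f e2 = Vector3f.sub(pos[d], pos[a], null);
            Vector3f n = Vector3f.cross(e1, e2, null);          // face normal (unnormalized)
            if (n.length() == 0) continue;                      // skip degenerate triangles
            n.normalise();
            n.scale(Vector3f.angle(e1, e2));                    // weight by the corner angle
            Vector3f.add(normals[a], n, normals[a]);
        }
    for (Vector3f n : normals) if (n.length() > 0) n.normalise();
    return normals;
}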
The result of that calculation might be something like this (ASCII art):
| | /
|_______|________/
| / | |
|/______|_______|
Lines pointing up are the normals, the bottom lines would be your sprites in a side view.