Java drawPolygon() - Parameter Explanation

I'm currently looking into the drawPolygon(int[] xPoints, int[] yPoints, int nPoints) method in Java.
If I am not mistaken, the first two parameters are arrays, indicating the x-coordinates and y-coordinates of the polygon.
My question is, how are the polygon's coordinates interpreted from the two arrays?
For instance, I want to draw a line between the points (100, 300) and (200, 400). That is, a line increasing from left to right.
However, if I put these values into their respective arrays:
xPoints = {100, 200}; //x-coordinates
yPoints = {300, 400}; //y-coordinates
I get a line decreasing from left to right, as if the points were interpreted as (100, 400) and (200, 300).
Thus, my question is: how are the array elements evaluated to make up the points of the polygon?
Thanks!

The default coordinate system has the origin in the upper left-hand corner of the canvas, and y values increase from the top of the screen downwards. You can use an affine transform if you aren't happy with this orientation.
This is an example (!) from some code I have lying around - you may have to adapt it according to your situation:
// Polygon -> PathIterator -> Path2D, and then:
Path2D path = ...;
AffineTransform at = new AffineTransform();
at.scale( 1, -1 );                                 // flip the y-axis so y increases upwards
path.transform( at );
Rectangle2D bbox = path.getBounds2D();
at = new AffineTransform();
at.translate( -bbox.getMinX(), -bbox.getMinY() );  // shift back into positive coordinates
path.transform( at );

The coordinate system has its origin in the top-left corner, with the y-axis increasing downwards.
This is why you get a downward slope when you increase the y-coordinate.
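For illustration, here is a minimal sketch (my own snippet, assuming a Swing JPanel subclass, not code from the question): element i of each array forms the point (xPoints[i], yPoints[i]), and the line simply slopes downwards on screen because the larger y value sits lower in the window.
@Override
protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    int[] xPoints = { 100, 200 }; // x-coordinates: element i pairs with yPoints[i]
    int[] yPoints = { 300, 400 }; // y-coordinates: so the points are (100, 300) and (200, 400)
    // (200, 400) is drawn lower than (100, 300) because y grows towards the bottom of the window
    g.drawPolyline(xPoints, yPoints, 2);
}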

Related

How do I draw a rectangular polygon without a triangular cutout?

I am creating a rectangular polygon that represents the box bounding the player sprite in a simple 2D game. This is the implementation I use:
float[] vertices = new float[] {player.getSprite().getX(), player.getSprite().getY(),
player.getSprite().getX() + player.getSprite().getWidth(),
player.getSprite().getY(), player.getSprite().getX(),
player.getSprite().getY() + player.getSprite().getHeight(),
player.getSprite().getX() + player.getSprite().getWidth(),
player.getSprite().getY() + player.getSprite().getHeight()};
Polygon rPoly = new Polygon(vertices);
polyBatch.draw(
new PolygonRegion(new TextureRegion(healthImage), rPoly.getTransformedVertices(),
new EarClippingTriangulator().computeTriangles(vertices).toArray()),
player.getSprite().getScaleX(), player.getSprite().getScaleY());
Where the "Sprite" is an actual LibGDX sprite object. When I try to use this implementation I get this as a result:
How do I get this polygon to be drawn without that triangular cut in it?
I will start this answer with a disclaimer: I have never used LibGDX before. Nonetheless I can see a potential problem with your code.
Let's number the corners of your rectangle as follows:
1 2
3 4
Your array of vertex coordinates includes these corners in the order 1, 2, 3, 4.
You are using a polygon object to represent this rectangle. Polygon objects will typically expect the vertex coordinates that they are given to go around the polygon in one direction or the other. For example, if you had a polygon with 10 points, how would the Polygon class know in which order to connect the points? Of course, order 1, 2, 3, 4 is not going around your rectangle in either direction.
Try swapping the order of the last two pairs of coordinates, so that your array of vertices includes the corners in the order 1, 2, 4, 3.
Bonus hint for readability: try to format your array of vertices so that it contains one pair of coordinates per line, perhaps something like the following:
Sprite sprite = player.getSprite();
float[] vertices = new float[] {
sprite.getX(), sprite.getY(),
sprite.getX() + sprite.getWidth(), sprite.getY(),
sprite.getX() + sprite.getWidth(), sprite.getY() + sprite.getHeight(),
sprite.getX(), sprite.getY() + sprite.getHeight()
};
To reduce the line length, I've created a local variable for the value of player.getSprite(). I've guessed the class name as Sprite; feel free to adjust this if necessary. You could potentially create further local variables for the values of sprite.getX(), sprite.getY() and so on.

Wrong result from Rectangle.contains() in java

It appears that the contains() method in Rectangle is not inclusive of the bottom-right corner.
For example, the following code prints "false":
Rectangle r = new Rectangle(0,0,100,100);
System.out.println(r.contains(100, 100));
As quoted from the Rectangle API (Java 8):
public Rectangle(int x, int y, int width, int height)
Constructs a new Rectangle whose upper-left corner is specified as (x,y) and whose width and height are specified by the arguments of the same name.
Using a width and height of 100 with the starting point (0,0) means the Rectangle covers the points (0,0) through (99,99): 100 pixels of width and 100 pixels of height, with the given starting pixel (0,0) always included in the Rectangle.
This means that (100,100) will indeed not be included in the constructed Rectangle. Based on the logic above, (100,100) will be contained in the following (verified using an online java compiler):
Rectangle r = new Rectangle(1,1,100,100);
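As a quick illustration of those semantics (my own snippet, not from the original answer):
Rectangle r = new Rectangle(0, 0, 100, 100);
System.out.println(r.contains(0, 0));     // true  - the starting pixel is included
System.out.println(r.contains(99, 99));   // true  - the last pixel covered by the 100x100 area
System.out.println(r.contains(100, 100)); // false - the right and bottom edges are exclusive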
References:
The Rectangle API
It seems that the API wrongly states that the "upper left corner" is (x,y) when according to the accepted answer and my own experience, (x,y) is the lower left corner.

glTranslatef and mouse clicks - gluUnProject

Please see bottom of question for the current solution I have gone for, thanks to Finlaybob, elect, gouessej
An appeal to the Elders of OpenGL.... I am having big problems with detecting the relative position of a mouse click on my textured plane.
I am making a game where I am drawing a single large square and texturing it with a large generated map texture. The view is always top down, and you can currently only move the X, Y and Z coordinates of that square.
Screenshot of the map
OpenGL init
screenRatio = (float)screenW / (float)screenH;
System.out.println("init");
glu = new GLU();
GL2 gl2 = drawable.getGL().getGL2();
gl2.glShadeModel( GL2.GL_SMOOTH );
gl2.glHint( GL2.GL_PERSPECTIVE_CORRECTION_HINT, GL2.GL_NICEST );
gl2.glClearColor( 0f, 0f, 0f, 1f );
gl2.glDepthMask(false);
gl2.glEnable(GL2.GL_DEPTH_TEST);
Set camera position
gl2.glViewport(0, 0, 1024, 768);
gl2.glMatrixMode( GL2.GL_PROJECTION );
gl2.glLoadIdentity();
glu.gluPerspective( 45, screenRatio, 1, 100 );
glu.gluLookAt( 0, 0, 3, 0, 0, 0, 0, 1, 0 );
gl2.glMatrixMode(GL2.GL_MODELVIEW);
gl2.glLoadIdentity();
Move position to start drawing the map
// typical camera coord example:
// CENTRE: 0.0f, 0.0f, 10f
// FULL ZOOM OUT AND TOP LEFT: -25f, 25f, 40f
// move position
gl2.glTranslatef( -cameraX, -cameraY, -cameraZ );
I suspect the glTranslatef z-coordinate may be the culprit, as I am drawing the square 40f (for example) away from the origin.
Map vertex information
// here are the coordinates/dimensions of my textured square ( my map )
float[] vertexArray = {
-25f, 25f,
25f, 25f,
25f, -25f,
-25f, -25f,
};
Mouse click position calculation
"Borrowed" from java-tips 1628-how-to-use-gluunproject-in-jogl.html
int x = mouse.getX(), y = mouse.getY();
int viewport[] = new int[4];
double mvmatrix[] = new double[16];
double projmatrix[] = new double[16];
int realy = 0;
double wcoord[] = new double[4];
gl2.glGetIntegerv(GL2.GL_VIEWPORT, viewport, 0);
gl2.glGetDoublev(GL2.GL_MODELVIEW_MATRIX, mvmatrix, 0);
gl2.glGetDoublev(GL2.GL_PROJECTION_MATRIX, projmatrix, 0);
realy = viewport[3] - (int) y - 1;
glu.gluUnProject(
(double) x,
(double) realy,
0.0, // I have experimented with having this as 1.0 also
mvmatrix, 0,
projmatrix, 0,
viewport, 0,
wcoord, 0
);
Experimenting with the near/far value (the 3rd parameter of gluUnProject) seems to produce a better effect, but there seems to be no sweet spot (the best I found was 0.945).
I would very much like mCX, mCY to be relative to the rendered map coordinates (-25f to 25f) regardless of the Z position:
mCX = (float)wcoord[0];
mCY = (float)wcoord[1];
Draw a rectangle at the translated coordinates
gl2.glColor3f(1.f, 0.f, 0.f);
gl2.glBegin(GL2.GL_QUADS);
gl2.glVertex2f( mCX-0.1f, mCY+0.1f );
gl2.glVertex2f( mCX+0.1f, mCY+0.1f );
gl2.glVertex2f( mCX+0.1f, mCY-0.1f );
gl2.glVertex2f( mCX-0.1f, mCY-0.1f );
gl2.glEnd();
Currently the coordinates work well in relation to x & y translation, if I click the very centre of the screen it will draw a box approximately in the correct place regardless of my glTranslatef movement. If I click away from the centre of the screen I see an exponential offset.
Demonstration of exponential offset
When I click the very dead centre of the screen it will draw this mauve square exactly around the mouse point, but with the smallest of movement it will create the following effect:
Fully zoomed in, click a couple of pixels right of centre
UPDATE AND WORKING... FOR NOW
At the time of generating the texture for my map I also generate an alternative texture which represents each "tile" as a different colour. In my initial and current attempt, the colour of a tile is a function of its X and Y coordinates (a map is made up of 100 tiles across and 100 tiles down, so the x and y coordinates range from 0 to 99).
I end up with a texture which looks like a gradient from green to red. The code below will, at the time of a mouse click, quickly render this texture (imperceptibly to the user) and read the RGB value under the mouse. We then turn that RGB value back into a world coordinate and BOOM... the relative coordinates of my map are realised.
float pX, pY;
// render a colourised version of the scene for the purposes of "picking"
// https://www.opengl.org/archives/resources/faq/technical/selection.htm
public void pick ( GL2 gl2 ) {
// DRAW PICKING SCENE
gl2.glClearBufferfv(GL2.GL_COLOR, 0, clearColor);
gl2.glClearBufferfv(GL2.GL_DEPTH, 0, clearDepth);
gl2.glTranslatef( -cameraX, -cameraY, -cameraZ );
// draw my map but use the colour gradient texture
for ( Entity e : this.entities ) {
e.drawPick( gl2 );
}
// not sure what this does #cargo-cult
gl2.glFlush();
gl2.glFinish();
gl2.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
// After rendering ask OpenGL to read the colour of the screen at the given window coordinates!
FloatBuffer buffer = FloatBuffer.allocate(4);
int realy = 0;
int viewport[] = new int[4];
gl2.glGetIntegerv(GL2.GL_VIEWPORT, viewport, 0);
realy = viewport[3] - (int) mouse.getY() - 1;
gl2.glReadPixels( mouse.getX(), realy, 1, 1, GL2.GL_RGBA, GL2.GL_FLOAT, buffer);
float[] pixels = buffer.array();
// pixels holds the rgba values read back from the framebuffer
// convert the red + green values back into x + y values
pX = (pixels[0] * 255) - 25f;
pY = -((pixels[1] * 255) - 25f);
// draw the proper texture
for ( Entity e : this.entities ) {
e.draw( gl2 );
}
}
You've almost got it. You're going to need a good value for Z in the unproject function though.
What you are trying to do is take the position of the cursor and multiply it by a matrix to give a point in "3D space". Your matrices are likely 4x4 or 4x3, so you need a 4-component vector (x, y, z, w).
When you draw your map, each vertex is multiplied by one or more matrices, including the projection matrix (e.g. -25.0f, 25.0f, 0.0f, 1.0f - really a 3D point). Once it has been multiplied by all the matrices, the GPU ends up with a value in normalised device coordinates (NDC), between -1 and 1 on all axes, for that vertex.
To do the opposite and unproject, you'll need a valid/good value for Z. The reason is that in NDC everything that is drawn has to fit into -1 to 1 on all axes, so depth gets compressed (things further away get squashed a bit). This is also how you get flickering and weirdness if you have a huge zFar distance, say > 100000: it all still has to fit into -1 to 1.
The best way to do this is to use the depth buffer: capturing the depth value under the cursor gives you a good value for the z coordinate of that pixel, which you can pass straight to the unproject call.
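A minimal sketch of that approach (my own addition, reusing the variable names from the question's unproject snippet, so treat it as an assumption rather than tested code): read the depth under the cursor and pass it as the third argument instead of a guessed constant.
// note: depth writes must be enabled (glDepthMask(true)) while the scene is drawn,
// otherwise the depth buffer stays empty
FloatBuffer winZ = FloatBuffer.allocate(1);
gl2.glReadPixels(x, realy, 1, 1, GL2.GL_DEPTH_COMPONENT, GL2.GL_FLOAT, winZ);
glu.gluUnProject(
    (double) x,
    (double) realy,
    (double) winZ.get(0), // depth read back from the depth buffer, not 0.0 or 1.0
    mvmatrix, 0,
    projmatrix, 0,
    viewport, 0,
    wcoord, 0
);
// wcoord[0], wcoord[1] should then land on the clicked point of the map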
The reason why 0.945 is the sweet spot probably depends on how far the camera is from your map, or vice versa. It's usually the case that the depth buffer has much more detail close to the near plane than the far plane - it's not linear.
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/ has a good visual near the bottom of the page and is a good introduction to matrices in general.
You can see the distortion caused by moving to NDC. This is required for viewing from a perspective viewpoint, but you need to take it into consideration when you transform backwards too.
Colour picking, as mentioned, is also viable, but it will still require some work. Because you have a single object, you'd have to render each texel of the image with a different colour, output that to a separate colour buffer, check what colour ended up on the buffer, and somehow relate that back to a point in space. It can probably be done, but I'd say colour picking is better suited to scenes with multiple objects.
From what I've read, the depth-buffer approach might be more suitable for you: it's one object, and the depth buffer will give you a Z value for every point you click on. It could still be on your far plane, but it will still give you a value.
Alternatively, as suggested by elect, use an orthographic projection.

Drawing a convex/concave quadrilateral in Java

I have four randomly sorted points, and I can't seem to draw a convex/concave Polygon with these points. This is the code I am using:
int xPoly[] = new int[4];
int yPoly[] = new int[4];
for(int i = 0; i < quad.size(); i++){
g2d.fill(quad.get(i));
xPoly[i] = (int) (quad.get(i).getX());
yPoly[i] = (int) (quad.get(i).getY());
}
Polygon poly = new Polygon(xPoly, yPoly, xPoly.length);
g2d.draw(poly);
Where quad is defined as ArrayList<Point> quad = new ArrayList<>();. Point is a simple, self-explanatory class I wrote. However, my solution keeps producing polygons like this:
The black points are part of quad. My desired result is a normal looking polygon, not an irregular one.
An example for the quad ArrayList:
(0,0), (5,3), (9,10), (6,7)
There is no specified order for these points, so xPoly and yPoly aren't necessarily ordered either.
Switching my comment to an answer:
You must order the points of a Polygon yourself - the points describe the fixed order in which the outline will be drawn. It won't infer what type of shape you want.
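One possible way to produce that ordering (a sketch of my own, not part of the original answer) is to sort the points by angle around their centroid before building the Polygon:
// compute the centroid of the four points
double cx = 0, cy = 0;
for (Point p : quad) {
    cx += p.getX();
    cy += p.getY();
}
final double mx = cx / quad.size(), my = cy / quad.size();
// sort the points by angle around the centroid so they trace the outline in order
quad.sort((a, b) -> Double.compare(
        Math.atan2(a.getY() - my, a.getX() - mx),
        Math.atan2(b.getY() - my, b.getX() - mx)));
// now fill xPoly and yPoly from the sorted list exactly as before
For a convex quadrilateral this gives the correct outline; a genuinely concave set of points may need a different ordering rule.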

What is the source of these pixel gaps in between identical vertices in OpenGL's Ortho? How can I eliminate them?

Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This isn't transferred well through screenshots, but the lines don't flicker when the scale is sustained at that point, so it logically shouldn't be an issue with drawing between scale assignment (and thread locks prevent this).
Scaled to 1x:
Scaled to A, 1x < Ax < Bx :
Scaled to B, Ax < Bx < Cx :
Scaled to C, Bx < Cx < 2x :
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512 and nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes that didn't fix the problem when cut:
#Override
public void draw(float xOffset, float yOffset, float width, float height,
int glTex, float texX, float texY, float texWidth, float texHeight) {
GL11.glLoadIdentity();
GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
GL11.glTranslatef(xOffset, yOffset, 0f);
if(glTex != lastTexture){
GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
lastTexture = glTex;
}
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(texX,texY + texHeight);
GL11.glVertex2f(-height/2, -width/2);
GL11.glTexCoord2f(texX + texWidth,texY + texHeight);
GL11.glVertex2f(-height/2, width/2);
GL11.glTexCoord2f(texX + texWidth,texY);
GL11.glVertex2f(height/2, width/2);
GL11.glTexCoord2f(texX,texY);
GL11.glVertex2f(height/2, -width/2);
GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
int xp, yp; //x and y position of individual tiles
for(int c = 0; c<width; c++){ //c as in column
xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
for(int r = 0; r<height; r++){ //r as in row
if(tiles[r*width+c] <0) continue; //skip empty tiles ('air')
yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to column y
tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
engine, //drawing context
new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
scale //scale of tiles
);
}
}
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, the origin, it has an offset of 0. OpenGL is then instructed to draw a quad, with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It is then told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws its left edge at -halfwidth, which should coordinate-wise be exactly the same as tile 1's right edge. By itself, this should work, and it does. Once a scale factor is involved, it somehow breaks.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, i.e. filling to the left of a vertex doesn't cover the pixel containing the vertex, and filling to the right of that same vertex doesn't cover it either.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than simply avoiding non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is that the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, has only vertical or only horizontal gaps, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
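For a concrete feel of the effect (a hypothetical example, not taken from the question's numbers):
float width = 32 * 1.3f;                                  // a tile width under a non-integer scale
float rightEdgeOfTile5 = 5 * width + width / 2;
float leftEdgeOfTile6  = 6 * width - width / 2;
System.out.println(rightEdgeOfTile5 == leftEdgeOfTile6);  // may print false
System.out.println(rightEdgeOfTile5 + " vs " + leftEdgeOfTile6);
Depending on how the intermediate results get rounded, the two "equal" edges can differ in the last bits, which is enough to land in different pixels.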
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of using calculations that merely produce mathematically equal results.
In the case of coordinates on a grid, you could calculate the coordinates for each grid line (tile boundary) once, and then use those values for all draw operations. Say if you have n tiles in the x-direction, you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
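Applied to the grid-drawing code above, that could look roughly like this (a sketch that assumes the names width, height, tilesize, offset and scale from the question; it is not a drop-in implementation):
// precompute every tile boundary once, so neighbouring tiles share the exact same floats
float[] xs = new float[width + 1];
float[] ys = new float[height + 1];
for (int c = 0; c <= width; c++)
    xs[c] = offset.getX() + c * tilesize.a * scale.getX();
for (int r = 0; r <= height; r++)
    ys[r] = offset.getY() + r * tilesize.b * scale.getY();
// when drawing tile (c, r), use xs[c]..xs[c+1] and ys[r]..ys[r+1] as its exact edges;
// both neighbours then read the same array element, so no rounding gap can open up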
