java, determine if circle is inside an area

Hi, I'm new to programming and I'm trying to code an algorithm in Java to determine if a circle is inside a rectangular area.
I have the radius of the circle and the point in the middle of it (the center).
|_____________________________________________________
|
|
|
| circle
|
|
|
|
|(0,0)________________________________________________
The bottom-left corner represents the coordinate (0,0).
This is what I have so far, but I know I have an error somewhere that I can't find:
if (mCenter.getmX() + mRadius > width ||
        mCenter.getmY() + mRadius > height ||
        mCenter.getmX() - mRadius < 0 ||
        mCenter.getmY() - mRadius < 0) {
    return false; // not inside area
} else {
    return true;
}
In this code, mCenter is a Point with an x and y coordinate, mRadius is the circle's radius, and width and height are the width/height of the area.
Thanks.

You didn't say what the symptom is, but your helpful diagram above uses the ordinary mathematical coordinate system, while your posted code uses java.awt.image.BufferedImage. Swing and most 2D computer graphics systems use a different coordinate system, one that's more convenient for laying out content in reading order.
Per GraphicsConfiguration#getDefaultTransform():
Coordinates in the coordinate space defined by the default AffineTransform for screen and printer devices have the origin in the upper left-hand corner of the target region of the device, with X coordinates increasing to the right and Y coordinates increasing downwards.
I think it's possible to set up a GraphicsConfiguration with a different transform (I don't know how to do it). Not so for java.awt.image.BufferedImage:
All BufferedImage objects have an upper left corner coordinate of (0, 0).
javax.swing.SwingUtilities has coordinate conversion methods.
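For instance, if the circle's center was produced in the diagram's math-style system (origin at bottom-left, y increasing upward), flipping it into the AWT system (origin at top-left, y increasing downward) is one line; a minimal sketch, assuming height is the height of the drawing area:

// Mirror the y-axis; x is unchanged between the two systems.
float awtY = height - mathY;

Note that the bounds check in your posted code is symmetric in y, so it gives the same answer in either system; the flip only matters if the center comes from one system while you draw in the other.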
P.S. Calling image.setRGB() once per pixel will be slow compared to passing the entire image into setRGB(int startX, int startY, int w, int h, int[] rgbArray, int offset, int scansize) or setData(Raster r). Usually a frame buffer is held in a 1-D array that's treated like a 2-D array, with scansize indicating the width of a scan line within that buffer.
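A rough sketch of the bulk write, with hypothetical dimensions and pixel values:

import java.awt.image.BufferedImage;

int w = 640, h = 480;                 // hypothetical frame size
int x = 3, y = 5;                     // some pixel to set
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
int[] rgb = new int[w * h];           // 1-D frame buffer treated as 2-D
rgb[y * w + x] = 0xFFFF0000;          // pixel (x, y): opaque red in ARGB
image.setRGB(0, 0, w, h, rgb, 0, w);  // one bulk write; scansize = w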

Related

Wrong result from Rectangle.contains() in java

It appears that the contains() method in Rectangle is not inclusive of the bottom-right corner.
For example, the following code prints "false":
Rectangle r = new Rectangle(0,0,100,100);
System.out.println(r.contains(100, 100));
As quoted from the Rectangle API (Java 8):
public Rectangle(int x, int y, int width, int height)
Constructs a new Rectangle whose upper-left corner is specified as (x,y) and whose width and height are specified by the arguments of the same name.
Using width and height with the starting point (0,0) means the Rectangle covers the points from (0,0) to (99,99): 100 pixels of width and 100 pixels of height, counting the given starting pixel (0,0), which is always included in the Rectangle.
This means that (100,100) will indeed not be included in the constructed Rectangle. By the same logic, (100,100) will be contained in the following (verified using an online Java compiler):
Rectangle r = new Rectangle(1,1,100,100);
References:
The Rectangle API
At first it seems that the API wrongly states that the "upper left corner" is (x,y) when, per the accepted answer and my own experience, (x,y) looks like the lower-left corner; but in AWT's coordinate system the y-axis points downward, so on screen (x,y) really is the upper-left corner.
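To make the boundary behavior concrete, here is a small check along the lines of the accepted answer (the expected output follows from the documented semantics of contains()):

import java.awt.Rectangle;

public class ContainsDemo {
    public static void main(String[] args) {
        Rectangle r = new Rectangle(0, 0, 100, 100);
        System.out.println(r.contains(0, 0));     // true: the starting pixel is included
        System.out.println(r.contains(99, 99));   // true: the last contained pixel
        System.out.println(r.contains(100, 100)); // false: one past the far edge
    }
}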

What is the source of these pixel gaps in between identical vertices in OpenGL's Ortho? How can I eliminate them?

Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet: unused tiles are magenta, and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet are fully opaque. Scaling is set from a normalized gamepad-trigger value between 1f and 2f, so I don't know the exact scale that was applied when each shot was taken, except at the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. It doesn't come across in screenshots, but the lines don't flicker while the scale is held constant, so it logically shouldn't be an issue with drawing between scale assignments (and thread locks prevent this).
Screenshots (not reproduced here) showed the grid scaled to 1x, to three increasing non-integer scales A, B, and C with 1x < Ax < Bx < Cx < 2x, and to 2x.
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
// ...
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); // orthocenter is a Vector2 with float X and Y; direct assignment
orthocenter.setY(nh/2);
// ...
For the purposes of the screenshots, nw is 512 and nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes that didn't fix the problem when cut:
@Override
public void draw(float xOffset, float yOffset, float width, float height,
        int glTex, float texX, float texY, float texWidth, float texHeight) {
    GL11.glLoadIdentity();
    GL11.glTranslatef(0.375f, 0.375f, 0f); // supposed to fix subpixel issues, but makes no difference here
    GL11.glTranslatef(xOffset, yOffset, 0f);
    if (glTex != lastTexture) {
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
        lastTexture = glTex;
    }
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(texX, texY + texHeight);
    GL11.glVertex2f(-height/2, -width/2);
    GL11.glTexCoord2f(texX + texWidth, texY + texHeight);
    GL11.glVertex2f(-height/2, width/2);
    GL11.glTexCoord2f(texX + texWidth, texY);
    GL11.glVertex2f(height/2, width/2);
    GL11.glTexCoord2f(texX, texY);
    GL11.glVertex2f(height/2, -width/2);
    GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
// Externally there is tilesize, which contains the tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale) {
    int xp, yp; // x and y position of individual tiles
    for (int c = 0; c < width; c++) { // c as in column
        xp = (int) (c * tilesize.a * scale.getX()); // distance from chunk x to column x
        for (int r = 0; r < height; r++) { // r as in row
            if (tiles[r * width + c] < 0) continue; // skip empty tiles ('air')
            yp = (int) (r * tilesize.b * scale.getY()); // distance from chunk y to row y
            tileset.getFrame(tiles[r * width + c]).draw( // pull 'tile' frame from set, render
                engine,                                  // drawing context
                new Vector2(offset.getX() + xp, offset.getY() + yp), // location of tile
                scale                                    // scale of tiles
            );
        }
    }
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, the origin, it has an offset of 0. OpenGL is then instructed to draw a quad with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It is then told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width and draws its left edge at -halfwidth, which coordinate-wise should be exactly the same as tile 1's right edge. By itself, this should work, and it does. With a constant scale applied, it somehow breaks.
When a scale is applied, it is a constant multiplier across all width/height values, and mathematically shouldn't change anything. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, i.e. filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex also doesn't fill the vertex's containing pixel space.
I'm running into float accuracy problems, where somehow X + width/2 does not equal X + width - width/2, where width = tilewidth * scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than simply avoiding non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is that the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, has only vertical or only horizontal lines in others, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of calculations that merely produce mathematically equal results.
In the case of coordinates on a grid, you can calculate the coordinate of each grid line (tile boundary) once, and then use those values for all draw operations. Say you have n tiles in the x-direction; you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
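A minimal sketch of that idea applied to the grid code from the question; drawTileQuad is a hypothetical helper that draws one quad from explicit edge coordinates instead of a center point and size:

// Precompute every grid line once, so adjacent tiles share the
// exact same float for their common edge.
float[] xs = new float[width + 1];
float[] ys = new float[height + 1];
for (int c = 0; c <= width; c++)
    xs[c] = offset.getX() + c * tilesize.a * scale.getX();
for (int r = 0; r <= height; r++)
    ys[r] = offset.getY() + r * tilesize.b * scale.getY();

for (int c = 0; c < width; c++) {
    for (int r = 0; r < height; r++) {
        if (tiles[r * width + c] < 0) continue; // skip empty tiles
        // The right edge of column c is literally the same float as the
        // left edge of column c + 1, so no gap can open between them.
        drawTileQuad(xs[c], ys[r], xs[c + 1], ys[r + 1], tiles[r * width + c]);
    }
}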

How to draw a square with the mouse

What I'm trying to do is basically the thing you can do on the desktop when you click and drag the mouse, making a square. The problem is I don't know how to make it draw "backwards", or how to clear the previous parameters when you start a new square. Here is the entire code:
public void paint(Graphics j) {
    super.paint(j);
    j.drawRect(x, y, z, w);
}

private void formMousePressed(java.awt.event.MouseEvent evt) {
    x = evt.getX();
    y = evt.getY();
    repaint();
}

private void formMouseDragged(java.awt.event.MouseEvent evt) {
    z = evt.getX();
    w = evt.getY();
    repaint();
}
The signature for drawRect is drawRect(int x, int y, int width, int height). You need to calculate the top-left corner of the square, and the width and height.
The top-left corner is (min(x, z), min(y, w)).
The width is abs(x - z) and the height is abs(y - w).
Putting this together, we get:
j.drawRect(Math.min(x, z), Math.min(y, w), Math.abs(x - z), Math.abs(y - w));
Why does this work? You're given two points, and two points determine an axis-aligned rectangle (as opposite corners). The first problem is translating the points you're given into the input Java expects. You first need the upper-left corner, but you don't know which of your points it is; in fact, it may be that neither of them is.
So what do we know about the upper-left corner? Its x value is the smallest x value in the rectangle, and at least one of the two given points lies on that same edge. So the x coordinate of the top-left corner is the smaller of the two points' x values, min(x, z). The same procedure finds the y coordinate.
Now width and height are easy. The width is the right edge minus the left edge. We don't know which point is on the right and which is on the left, but it doesn't matter: the absolute value of the difference always gives the positive distance between them, abs(x - z). The process is the same for the height.
As for resetting the square, try adding a formMouseReleased method that sets x, y, z, and w to 0.
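Putting the pieces together, a minimal self-contained sketch of the whole interaction (field names differ from the question's x/y/z/w):

import java.awt.Graphics;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JPanel;

public class RubberBandPanel extends JPanel {
    private int startX, startY, currentX, currentY;
    private boolean dragging;

    public RubberBandPanel() {
        MouseAdapter handler = new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                startX = currentX = e.getX();   // anchor the rectangle here
                startY = currentY = e.getY();
                dragging = true;
                repaint();
            }
            @Override public void mouseDragged(MouseEvent e) {
                currentX = e.getX();            // rubber-band the free corner
                currentY = e.getY();
                repaint();
            }
            @Override public void mouseReleased(MouseEvent e) {
                dragging = false;               // forget the old rectangle
                repaint();
            }
        };
        addMouseListener(handler);
        addMouseMotionListener(handler);
    }

    @Override protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (dragging) {
            g.drawRect(Math.min(startX, currentX), Math.min(startY, currentY),
                       Math.abs(startX - currentX), Math.abs(startY - currentY));
        }
    }
}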
I think you might create a method that resets the parameters, something like void modifyMouse() in your Mouse class, which sets your parameters to 0. I could give you better help if you clarified your question; for now, try that.

Drawing a rectangle inside a rectangle

I feel a little silly asking this, but I'm not able to figure it out. I'm trying to draw a rectangle inside another rectangle, and the math I'm using must be off: the inner rectangle is always one pixel too short.
b.fillRect(rectangleX + rectangleOutlineSize, rectangleY + rectangleOutlineSize, rectangleWidth - rectangleOutlineSize*2, rectangleHeight);
It's probably simple, but I've been stuck on it for an hour, and I've had trouble with it in the past.
In programming, the coordinate system is a bit weird, not exactly the usual one from math:
*---------------------------------------> X +
|
|
|
|
|
|
|
v
Y +
I guess you're having a problem with that. The * is the (0,0), which is usually the upper-left corner of your drawing area (e.g. of your screen).
Try something along these lines.
b.fillRect(x, y, width, height);
b.fillRect(x + (width - w) / 2, y + (height - h) / 2, w, h); // integer division; fillRect takes ints
width - the width of the big rectangle
height - the height of the big rectangle
x,y - upper left corner of the big rectangle
w,h - width, height of the small rectangle
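Alternatively, keeping the question's inset approach: the posted call insets the width but not the height, so here is a sketch of the symmetric version (assuming the fill color is switched between the two calls):

// Outer rectangle in the outline color, then an inner rectangle
// inset by the outline size on all four sides.
b.fillRect(rectangleX, rectangleY, rectangleWidth, rectangleHeight);
b.fillRect(rectangleX + rectangleOutlineSize,
           rectangleY + rectangleOutlineSize,
           rectangleWidth - rectangleOutlineSize * 2,
           rectangleHeight - rectangleOutlineSize * 2);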

Convert 2d game world coordinates to screen position

I have a system that generates chunks of 2d game map tiles. Chunks are 16x16 tiles, tiles are 25x25.
The chunks are given their own coordinates, like 0,0, 0,1, etc. The tiles determine their coordinates in the world based on which chunk they're in. I've verified that the chunks/tiles are all showing the proper x/y coordinates.
My problem is translating those into screen coordinates. In a previous question someone recommended using:
(worldX * tileWidth) % viewport_width
Each tile's x/y are run through this calculation and a screen x/y coordinate is returned.
This works for tiles that fit within the viewport, but it resets the screen x/y position calculation for anything off-screen.
In my map, I load chunks of tiles within a radius around the player so some of the inner tiles will be off-screen (until they move around, tile positions on the screen are moved).
I tried a test with a tile that would be off screen:
Tile's x coord: 41
41 * 25 = 1025
Game window: 1024
1025 % 1024 = 1
This means that the tile (which, if the screen's 0,0 is at map 0,0, should be at x:1025, just off the right-hand side of the screen) is actually drawn at x:1, appearing in the top-left.
I can't think of how to properly handle this. It seems like I need to take tileX * tileWidth to determine its "initial screen position" and then somehow use an offset to make it appear in the right place on screen. But what offset?
Update: I already store an x/y offset value when the player moves, so I know how to move the map. I can use those values as the current offset, and if someone saves the game I can simply store and re-use them. No equation is necessary; I just have to store the cumulative offsets.
The modulo (worldX * tileWidth % screenWidth) is what's causing the reset. Modulo (%) gives you the remainder of an integer division; so if worldX * tileWidth is greater than screenWidth, it gives you the remainder of (worldX * tileWidth) / screenWidth. If worldX * tileWidth is screenWidth + 1, the remainder is 1: it starts over at the beginning of the row.
If you eliminate the modulo, it will continue to draw tiles past the edge of the screen. If your drawing buffer is the same size as the screen, you'll need to add a check for tiles at the edge of the screen to make sure you only draw the tile portion that will be visible.
If you're trying to keep the player centered on the screen, you need to offset each tile by the player's offset from tile 0,0 in pixels, minus half the screen width:
offsetX = (playerWorldX * tileWidth) - (screenWidth / 2);
screenX = (worldX * tileWidth) - offsetX;
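In context, a sketch of that offset approach (names like playerWorldX and drawTile are illustrative, not from the question):

int tileWidth = 25, tileHeight = 25;                       // tile size from the question
int offsetX = playerWorldX * tileWidth - screenWidth / 2;  // camera offset, player centered
int offsetY = playerWorldY * tileHeight - screenHeight / 2;

int screenX = worldX * tileWidth - offsetX;                // world tile -> screen pixels
int screenY = worldY * tileHeight - offsetY;

// Skip tiles that are entirely off-screen.
if (screenX + tileWidth > 0 && screenX < screenWidth
        && screenY + tileHeight > 0 && screenY < screenHeight) {
    drawTile(worldX, worldY, screenX, screenY);            // hypothetical draw call
}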
x = ((worldX * tileWidth) > screenWidth) ? worldX * tileWidth : (worldX * tileWidth) % screenWidth;
That should work, though I recommend implementing something like an interface and letting each tile decide where it wants to be rendered. Something like this:
interface Renderable {
    void render(Graphics2D g);
}

class Tile implements Renderable {
    int x, y;
    // other stuff

    public void render(Graphics2D g) {
        if (!inScreen()) {
            return;
        }
        // ...
        // render
    }

    boolean inScreen() {
        // If the map moves with the player, you need to define the boundaries
        // of your current screenblock in terms of the global map coordinates.
        // You can store this globally in a singleton somewhere, or pass it to
        // the constructor of each tile.
        //   currentBlock.x is then player.x - screenWidth/2
        //   currentBlock.width is then player.x + screenWidth/2
        //   (similar for y)
        if (this.x < currentBlock.x || this.x > currentBlock.width)
            return false;
        if (this.y < currentBlock.y || this.y > currentBlock.height)
            return false;
        return true;

        // If the map is in blocks (think Zelda on SNES, where you go from one
        // screenblock to another), you still need to define the boundaries:
        //   currentBlock.x = (player.x / screenWidth) (integer division) * screenWidth;
        //   currentBlock.width = (player.x / screenWidth) (...) * screenWidth + screenWidth;
        //   (same for y)
        // Then perform the above tests.
    }
}
