Tile id on grid from x/y position - java

(this image is not mine, but it helps illustrate the point)
Given this grid, with each tile being 32x32 pixels, how can I calculate the tile ID where the mouse is?
In this case, the mouse is on tile 40.

Let us say the current mouse position is (x, y) and the side length of each small square is l (in the given case, 32). Then the grid x and y values are given by:
gridX = x/l; //make sure this is integer division, not float division
gridY = y/l;
Then calculate the tile id on the basis of those values:
currentTileId = (boxesEachRow * gridY) + gridX + 1;
where boxesEachRow is the number of boxes along each row (here it is 8). The plus 1 is needed if you are treating the first box's ID as 1 rather than 0.
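Put together, a minimal Java sketch of that calculation might look like this (TILE_SIZE and BOXES_EACH_ROW are just illustrative names for the values above):

static final int TILE_SIZE = 32;      //side length of one tile in pixels
static final int BOXES_EACH_ROW = 8;  //tiles per row in the grid

static int tileIdAt(int mouseX, int mouseY) {
    int gridX = mouseX / TILE_SIZE; //integer division drops the sub-tile remainder
    int gridY = mouseY / TILE_SIZE;
    return BOXES_EACH_ROW * gridY + gridX + 1; //+1 because the first tile has ID 1
}

//e.g. a mouse at (250, 130) gives gridX = 7, gridY = 4, so the ID is 8*4 + 7 + 1 = 40.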


find the minimum and maximum distance between 2 rectangles of any dimension

I have a Rectangle class that represents a d-dimensional rectangle with 2*d numbers: for every dimension I have a lower and an upper bound. Dimensionality stores the number of dimensions of the rectangle, and for the lower and upper bounds I use a double array.
I want to create 2 methods that take as input another rectangle object of the same Dimensionality and return the minimum and maximum distance between the rectangles. I'm trying to do this using the minimum/maximum distances of their projections on every axis. I also have a method that creates the projections.
//returns a 2-element array: the projection of the rectangle onto dimension x
public double[] project(int x)
{
    //x is the selected dimension
    double proj[] = new double[2];
    proj[0] = this.lb[x];
    proj[1] = this.ub[x];
    return proj;
}
You can see more clearly what I want to do in the third set of shapes here:
https://s15.postimg.org/l8aijyl1n/imageedit_2_6689786765.jpg
Find the mutual orientation of the rectangles (as the direction of the vector between their centers; it is enough to take the signs of the differences of center.x and center.y).
Depending on the orientation quadrant, get the distances from selected edges of the first rectangle to selected edges of the second. Using the signs of these distances, find the extremal distances:
For the case in the picture (direction in the 4th quadrant), one has to check the distances from the bottom-right corner of the first rectangle to the left and top edges of the second one, and from the top-left corner of the second to the right and bottom edges of the first one.
min_dx = rect2.left - rect1.right
min_dy = rect2.top  - rect1.bottom
if both values > 0 (++ case):
    min_dist = sqrt(min_dx^2 + min_dy^2)  //corner-corner case
+- case:
    min_dist = min_dx
-+ case:
    min_dist = min_dy
-- case:
    min_dist = 0  //the projections overlap on both axes
The same approach can be used in higher dimensions. The number of cases grows quickly, though, so it is worth selecting the planes to check using the signs of the components of the center-to-center direction vector.
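As a sketch of how the per-axis idea from the question can generalize to d dimensions in Java (assuming the lb/ub arrays and a dimensionality field as described in the question; this folds the sign cases into per-axis max operations rather than enumerating quadrants):

//Minimum distance: per-axis gap between the projections (0 where they overlap),
//combined with Pythagoras across all dimensions.
public double minDistance(Rectangle other) {
    double sum = 0;
    for (int d = 0; d < dimensionality; d++) {
        double gap = Math.max(0, Math.max(other.lb[d] - this.ub[d], this.lb[d] - other.ub[d]));
        sum += gap * gap;
    }
    return Math.sqrt(sum);
}

//Maximum distance: per axis, take the farthest-apart pair of projection endpoints.
public double maxDistance(Rectangle other) {
    double sum = 0;
    for (int d = 0; d < dimensionality; d++) {
        double far = Math.max(other.ub[d] - this.lb[d], this.ub[d] - other.lb[d]);
        sum += far * far;
    }
    return Math.sqrt(sum);
}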

What is the source of these pixel gaps in between identical vertices in OpenGL's Ortho? How can I eliminate them?

Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This isn't transferred well through screenshots, but the lines don't flicker when the scale is sustained at that point, so it logically shouldn't be an issue with drawing between scale assignment (and thread locks prevent this).
(Screenshots were attached showing the grid scaled to 1x, to three intermediate scales A, B, and C with 1x < A < B < C < 2x, and to 2x.)
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512 and nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes (removing them made no difference to the problem):
@Override
public void draw(float xOffset, float yOffset, float width, float height,
        int glTex, float texX, float texY, float texWidth, float texHeight) {
    GL11.glLoadIdentity();
    GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
    GL11.glTranslatef(xOffset, yOffset, 0f);
    if(glTex != lastTexture){
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
        lastTexture = glTex;
    }
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(texX, texY + texHeight);
    GL11.glVertex2f(-height/2, -width/2);
    GL11.glTexCoord2f(texX + texWidth, texY + texHeight);
    GL11.glVertex2f(-height/2, width/2);
    GL11.glTexCoord2f(texX + texWidth, texY);
    GL11.glVertex2f(height/2, width/2);
    GL11.glTexCoord2f(texX, texY);
    GL11.glVertex2f(height/2, -width/2);
    GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains the tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
    int xp, yp; //x and y position of individual tiles
    for(int c = 0; c < width; c++){ //c as in column
        xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
        for(int r = 0; r < height; r++){ //r as in row
            if(tiles[r*width+c] < 0) continue; //skip empty tiles ('air')
            yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to row y
            tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
                engine, //drawing context
                new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
                scale //scale of tiles
            );
        }
    }
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, origin, it has an offset of 0. Opengl then is instructed to draw a quad, with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It then is told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws left edge at -halfwidth, which should coordinate-wise be exactly the same as tile1's right edge. By itself, this should work, and it does. When considering a constant scale, it breaks somehow.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, ie filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex also doesn't fill the vertex's containing pixel space.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than simply avoiding non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is that the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, only has vertical or horizontal gaps but not both, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of using calculations that merely produce mathematically equal results.
In the case of coordinates on a grid, you could calculate the coordinate of each grid line (tile boundary) once, and then use those values for all draw operations. Say you have n tiles in the x-direction; then you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
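Applied to the grid code from the question, that might look roughly like the following sketch (drawQuad is a hypothetical helper that takes explicit left/top/right/bottom edges instead of a center point, so neighbouring tiles literally share the same float values):

//Precompute the shared grid-line coordinates once (per frame, or whenever scale/offset change).
float[] xs = new float[width + 1];
float[] ys = new float[height + 1];
for (int c = 0; c <= width; c++)  xs[c] = offset.getX() + c * tilesize.a * scale.getX();
for (int r = 0; r <= height; r++) ys[r] = offset.getY() + r * tilesize.b * scale.getY();

for (int c = 0; c < width; c++) {
    for (int r = 0; r < height; r++) {
        if (tiles[r * width + c] < 0) continue; //skip empty tiles ('air')
        //xs[c + 1] is both this tile's right edge and the next tile's left edge.
        drawQuad(xs[c], ys[r], xs[c + 1], ys[r + 1], tiles[r * width + c]); //hypothetical helper
    }
}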

How to make a hitbox?

I'm making a game similar to Mario, and I've got this map generated from arrays and images. But my problem is that I don't know how to make a hitbox system for all the tiles. I've tried a position-based collision system that uses your position on the map,
like this:
if(xpos > 10*mapX && xpos < 14*mapX){
    ypos -= 1;
}
But I don't want to do that for every wall or hole.
So is there a way to check in front of, below, and above the character to see if there is a hitbox there, and, if there is, prevent moving in that direction or falling?
Thank you
If it's a simple 2D game, I'd suggest dividing the map into square tiles. You could store the map in memory as a two-dimensional array and, during each frame, check the tiles adjacent to the player. Of course the player can occupy as many as 4 tiles during movement, but that still means checking only up to 12 positions, which can be done easily.
Further collision checking can be done easily using image position and dimension.
Remember that there is no need to check whether a static object (the environment) is colliding with something; you only need to check objects that have moved since the last frame, i.e. the player and sprites.
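A rough sketch of that tile lookup (tileSize, the player's bounding-box fields, and isSolid are assumed names, not from the question):

//Tile indices covered by the player's bounding box (up to 4 tiles while moving between tiles).
int leftTile   = playerX / tileSize;
int rightTile  = (playerX + playerWidth - 1) / tileSize;
int topTile    = playerY / tileSize;
int bottomTile = (playerY + playerHeight - 1) / tileSize;

//Check those tiles plus the ring around them (the "up to 12 positions" mentioned above).
//Bounds checks against the map edges are omitted for brevity.
for (int ty = topTile - 1; ty <= bottomTile + 1; ty++) {
    for (int tx = leftTile - 1; tx <= rightTile + 1; tx++) {
        if (isSolid(map[ty][tx])) { //isSolid is a placeholder for your own "is this tile a wall/floor?" test
            //a solid tile is adjacent to the player: do a precise overlap check and block movement/falling
        }
    }
}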
EDIT:
Let's say you've got the following section of map (variable map):
...
.pe
ooo
where
. = nothing
p = player
o = floor
e = enemy
You also have the pair (x, y) representing the tile indices (not the exact position) of the player. In this case you have to do something like this:
if ("o".equals(map[y + 1, x + 1]))
//floor is under
if ("e".equals(map[y, x + 1]))
//enemy is on the right
if ("o".equals(map[y - 1, x]))
//floor is above us
If any of these conditions are met, you have to check image positions and handle collisions.
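That final image-position check is just an axis-aligned overlap test; a minimal sketch, assuming each object exposes its pixel position and size:

//True if two axis-aligned boxes, given as (x, y, width, height) in pixels, overlap.
static boolean overlaps(int ax, int ay, int aw, int ah,
                        int bx, int by, int bw, int bh) {
    return ax < bx + bw && ax + aw > bx    //horizontal ranges intersect
        && ay < by + bh && ay + ah > by;   //vertical ranges intersect
}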
As Mateusz says, a 2D array is best for this type of game:
e.g. using chars:
 0123456789012
0       ==
1        * ===
2===== =======
So in this case tileMap[8][1] == '*'. You'd probably be better off using an enumeration instead of chars though, e.g. Tile.SPRING for a Sonic-style spring board.
If your map was made up of regular sized tiles you could say:
int xInFrontOfPlayer = playerX + PLAYER_WIDTH;
int xBehindPlayer = playerX - PLAYER_WIDTH;
Tile tileInFrontOfPlayer = getTileAtWorldCoord(xInFrontOfPlayer, playerY);
Tile tileBehindPlayer = getTileAtWorldCoord(xBehindPlayer, playerY);
...
public Tile getTileAtWorldCoord(int worldX, int worldY) {
    return tileMap[worldX / TILE_WIDTH][worldY / TILE_HEIGHT];
}
Where TILE_WIDTH and TILE_HEIGHT are the dimensions of your tiles in pixels. Then use similar math for yAbovePlayer and yBelowPlayer.
You might then have some logic in your game loop:
if user is pressing the "go right" key:
    if the tile to the right is Tile.SPACE:
        move player right
    else if the tile to the right is Tile.WALL:
        don't do anything
if the tile below is Tile.SPACE:
    fall
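Translated into Java, that loop body might look roughly like the sketch below (goRightPressed, PLAYER_HEIGHT, moveSpeed, and fallSpeed are assumed names layered on top of getTileAtWorldCoord above):

if (goRightPressed) {
    Tile tileToTheRight = getTileAtWorldCoord(playerX + PLAYER_WIDTH, playerY);
    if (tileToTheRight == Tile.SPACE) {
        playerX += moveSpeed;      //nothing in the way: move right
    } else if (tileToTheRight == Tile.WALL) {
        //blocked: don't move
    }
}

Tile tileBelow = getTileAtWorldCoord(playerX, playerY + PLAYER_HEIGHT);
if (tileBelow == Tile.SPACE) {
    playerY += fallSpeed;          //no floor underneath: fall
}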

Convert 2d game world coordinates to screen position

I have a system that generates chunks of 2d game map tiles. Chunks are 16x16 tiles, tiles are 25x25.
The chunks are given their own coordinates, like 0,0, 0,1, etc. The tiles determine their coordinates in the world based on which chunk they're in. I've verified that the chunks/tiles are all showing the proper x/y coordinates.
My problem is translating those into screen coordinates. In a previous question someone recommended using:
(worldX * tileWidth) % viewport_width
Each tile's x/y are run through this calculation and a screen x/y coordinate is returned.
This works for tiles that fit within the viewport, but it resets the screen x/y position calculation for anything off-screen.
In my map, I load chunks of tiles within a radius around the player, so some of the loaded tiles will be off-screen (their screen positions shift as the player moves around).
I tried a test with a tile that would be off screen:
Tile's x coord: 41
41 * 25 = 1025
Game window: 1024
1025 % 1024 = 1
This means that tile x (which, if the screen 0,0 is at map 0,0, should be at x:1025, just off the right-hand side of the screen) is actually at x:1, appearing in the top-left.
I can't think of how to properly handle this - it seems to me like I need to take tileX * tileWidth to determine its "initial screen position" and then somehow use an offset to determine how to make it appear on screen. But what offset?
Update: I already store an x/y offset value when the player moves, so I know how to move the map. I can use these values as the current offset, and if someone saves the game I can simply store those and re-use them. There's no equation necessary, I would just have to store the cumulative offsets.
The modulo (worldX*tileWidth % screenWidth) is what's causing it to reset. Modulo (%) gives you the remainder of an integer division; so, if worldX * tileWidth is greater than screenWidth, it will give you the remainder of (worldX * tileWidth) / screenWidth. If worldX * tileWidth is screenWidth+1, the remainder is 1: it starts over at the beginning of the row.
If you eliminate the modulo, it will continue to draw tiles past the edge of the screen. If your drawing buffer is the same size as the screen, you'll need to add a check for tiles at the edge of the screen to make sure you only draw the tile portion that will be visible.
If you're trying to keep the player centered on the screen, you need to offset each tile by the player's offset from tile 0,0 in pixels, minus half the screen width:
offsetX = (playerWorldX * tileWidth) - (screenWidth / 2);
screenX = (worldX * tileWidth) - offsetX;
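To see the numbers work out with the values from the question (25px tiles, a 1024px-wide window), assume for illustration that the player is standing on tile 20:

int tileWidth = 25, screenWidth = 1024;
int playerWorldX = 20, worldX = 41;

int offsetX = (playerWorldX * tileWidth) - (screenWidth / 2); //500 - 512 = -12
int screenX = (worldX * tileWidth) - offsetX;                 //1025 - (-12) = 1037
//Tile 41 ends up at x = 1037, just past the right edge of the 1024px window,
//instead of wrapping around to x = 1 as the modulo version does.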
x = ((worldX*tileWidth) > screenWidth) ? worldX*tileWidth : (worldX*tileWidth)%screenWidth;
That should work. Though I recommend implementing something like an interface and letting each tile decide where it wants to be rendered. Something like this:
interface Renderable {
    void render(Graphics2D g);
    //..
}

class Tile implements Renderable {
    int x, y;
    //other stuff

    @Override
    public void render(Graphics2D g){
        if (!inScreen()){
            return;
        }
        //...
        //render
    }

    boolean inScreen(){
        //if the map moves with the player you need to define the boundaries of your current screenblock in terms of the global map coordinates
        //What you can do is store this globally in a singleton somewhere or pass it to the constructor of each tile.
        //currentBlock.x is then player.x - screenWidth/2
        //currentBlock.width is then player.x + screenWidth/2;
        //similar for y
        if (this.x < currentBlock.x || this.x > currentBlock.width)
            return false;
        if (this.y < currentBlock.y || this.y > currentBlock.height)
            return false;
        return true;

        //If the map is in blocks (think Zelda on SNES where you go from one screenblock to another) you still need to define the boundaries
        //currentBlock.x = (player.x / screenWidth) (integer division) * screenWidth;
        //currentBlock.width = (player.x / screenWidth) (...) * screenWidth + screenWidth;
        //same for y
        //Then perform the above tests
    }
}

Where does the origin of a graphic string start?

In the Core Java book it says:
The width of the rectangle that the getStringBounds method returns is the horizontal extent of the string. The height of the rectangle is the sum of ascent, descent, and leading. The rectangle has its origin at the baseline of the string. The top y-coordinate of the rectangle is negative. Thus, you can obtain string width, height, and ascent as follows:
double stringWidth = bounds.getWidth();
double stringHeight = bounds.getHeight();
double ascent = -bounds.getY();
What does the author mean when saying that the rectangle has its origin at the baseline of the string, while top y-coordinate is the ascent?
Where does the bounding rectangle of the string start?
With a test string I got the following:
w: 291.0
h: 91.265625
x:0.0
y:-72.38671875
descent: 15.8203125
leading: 3.0585938
That means the rectangle's origin is at the leading, not the baseline. Am I correct on this?
It means that the bounds' coordinates are in a space where the zero Y coordinate is at the string's baseline and positive Y coordinates go downwards. In the following image the black dot corresponds to zero Y:
Therefore bounds.getY(), which is negative (minus the ascent), corresponds to the topmost coordinate, and the positive bounds.getHeight() + bounds.getY() (descent + leading) corresponds to the bottommost coordinate in this coordinate space.
The math works out:
72.38671875 ascent + 15.8203125 descent + 3.0585938 leading = 91.265625 total height
This tutorial on 2D Text has an image illustrating leading, descent, and ascent.
In your specific case, 72.38671875 is the ascent, measured from the baseline to the top of the tallest glyph. The leading is the space between the bottom of the descender and the top of the next line.
The bounding rectangle is relative to the baseline. The API for FontMetrics.getStringBounds states "The returned bounds is in baseline-relative coordinates", which explains your results. x will always be 0, and the height of the bounding box will be the ascent plus the descent plus the leading.
The Java graphics coordinate system has its origin at the top left of the canvas, with the Y coordinate increasing from top to bottom. This means that a rectangle's top edge (the return value of getY()) will have a smaller Y coordinate than its bottom edge (the baseline of a text string).
The return value of getStringBounds() is only somewhat consistent with this. While the direction of the coordinate system is respected, the origin of the bounding rectangle is placed at the baseline, not at the top left. This means that the top left of the rectangle will have a negative Y coordinate.
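A small sketch that makes the baseline-relative origin concrete, drawing a string and outlining its bounds (standard java.awt calls; the string and coordinates are arbitrary):

import java.awt.Graphics2D;
import java.awt.font.FontRenderContext;
import java.awt.geom.Rectangle2D;

//Call this from a paintComponent(Graphics g) override with g2 = (Graphics2D) g.
static void drawStringWithBox(Graphics2D g2, String text, int x, int y) {
    FontRenderContext frc = g2.getFontRenderContext();
    Rectangle2D bounds = g2.getFont().getStringBounds(text, frc);

    g2.drawString(text, x, y);              //(x, y) is where the baseline starts

    //bounds.getY() is negative (minus the ascent), so y + bounds.getY() is the box's top edge.
    g2.draw(new Rectangle2D.Double(
            x + bounds.getX(),               //typically x + 0
            y + bounds.getY(),               //baseline minus ascent
            bounds.getWidth(),               //horizontal extent of the string
            bounds.getHeight()));            //ascent + descent + leading
}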
