I have a system that generates chunks of 2D game map tiles. Chunks are 16x16 tiles; tiles are 25x25 pixels.
The chunks are given their own coordinates, like 0,0, 0,1, etc. The tiles determine their coordinates in the world based on which chunk they're in. I've verified that the chunks/tiles are all showing the proper x/y coordinates.
My problem is translating those into screen coordinates. In a previous question someone recommended using:
(worldX * tileWidth) % viewport_width
Each tile's x/y are run through this calculation and a screen x/y coordinate is returned.
This works for tiles that fit within the viewport, but it resets the screen x/y position calculation for anything off-screen.
In my map, I load chunks of tiles within a radius around the player, so some of the loaded tiles will be off-screen until the player moves around and the on-screen tile positions shift.
I tried a test with a tile that would be off screen:
Tile's x coord: 41
41 * 25 = 1025
Game window: 1024
1025 % 1024 = 1
This means that tile x (which, if the screen 0,0 is at map 0,0, should be at x:1025, just off the right-hand side of the screen) is actually at x:1, appearing in the top-left.
I can't think of how to properly handle this. It seems to me like I need to take tileX * tileWidth to determine its "initial screen position" and then somehow use an offset to make it appear on screen. But what offset?
Update: I already store an x/y offset value when the player moves, so I know how to move the map. I can use these values as the current offset, and if someone saves the game I can simply store those and re-use them. There's no equation necessary, I would just have to store the cumulative offsets.
The modulo (worldX*tileWidth % screenWidth) is what's causing it to reset. Modulo (%) gives you the remainder of an integer division operation; so, if worldX * tileWidth is greater than screenWidth, it will give you the remainder of (worldX * tileWidth) / screenWidth; if worldX * tileWidth is screenWidth+1, remainder is 1: it starts over at the beginning of the row.
If you eliminate the modulo, it will continue to draw tiles past the edge of the screen. If your drawing buffer is the same size as the screen, you'll need to add a check for tiles at the edge of the screen to make sure you only draw the tile portion that will be visible.
If you're trying to keep the player centered on the screen, you need to offset each tile by the player's offset from tile 0,0 in pixels, minus half the screen width:
offsetX = (playerWorldX * tileWidth) - (screenWidth / 2);
screenX = (worldX * tileWidth) - offsetX;
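A minimal sketch of those two lines with the vertical axis and a visibility check added (playerWorldY, tileHeight, screenHeight, and drawTile are assumed analogues of the names above, not anything from your code):

// Pixel offset of the camera: the player's world position in pixels,
// shifted back by half the screen so the player sits in the middle.
int offsetX = playerWorldX * tileWidth  - screenWidth  / 2;
int offsetY = playerWorldY * tileHeight - screenHeight / 2;

// Screen position of any tile; no modulo, so off-screen tiles simply get
// coordinates outside [0, screenWidth) instead of wrapping back to the start.
int screenX = worldX * tileWidth  - offsetX;
int screenY = worldY * tileHeight - offsetY;

// Only draw tiles that actually overlap the viewport.
if (screenX + tileWidth > 0 && screenX < screenWidth
        && screenY + tileHeight > 0 && screenY < screenHeight) {
    drawTile(tile, screenX, screenY); // hypothetical draw call
}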
x = ((worldX*tileWidth) > screenWidth) ? worldX*tileWidth : (worldX*tileWidth)%screenWidth;
That should work. Though I recommend implementing something like an interface and letting each tile decide where it wants to be rendered. Something like this:
interface Renderable {
    void render(Graphics2D g);
}

class Tile implements Renderable {
    int x, y;
    //other stuff

    @Override
    public void render(Graphics2D g) {
        if (!inScreen()) {
            return;
        }
        //...
        //render
    }

    boolean inScreen() {
        //If the map moves with the player, you need to define the boundaries of your current
        //screen block in terms of the global map coordinates. You can store this globally in a
        //singleton somewhere or pass it to the constructor of each tile.
        //currentBlock.x is then player.x - screenWidth/2
        //currentBlock.width is then player.x + screenWidth/2
        //similar for y
        if (this.x < currentBlock.x || this.x > currentBlock.width)
            return false;
        if (this.y < currentBlock.y || this.y > currentBlock.height)
            return false;
        return true;

        //If the map is in blocks (think Zelda on SNES, where you go from one screen block to
        //another) you still need to define the boundaries:
        //currentBlock.x = (player.x / screenWidth) (integer division) * screenWidth;
        //currentBlock.width = (player.x / screenWidth) (...) * screenWidth + screenWidth;
        //same for y
        //Then perform the same tests as above.
    }
}
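A short usage sketch under those assumptions (renderables and the Graphics2D g are placeholders you would own):

// Each frame: let every Renderable cull itself.
for (Renderable r : renderables) {
    r.render(g); // a Tile returns early from render() via inScreen() when it is off screen
}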
I have an isometric map drawn.
I take the current position of my sprite and the target position of where my sprite should be after the move:
// region is my TextureRegion.
int x1 = getIsometricX(1,1,region);
int x2 = getIsometricX(1,2,region);
int y1= getIsometricY(1,1,region);
int y2 = getIsometricY(1,2,region);
And then I draw a simple line using ShapeRenderer to see if the local/target points are correctly set, and the rectangle so you can see where the sprite rendering starts.
renderer.setProjectionMatrix(camera.combined);
renderer.begin(ShapeRenderer.ShapeType.Line);
renderer.setColor(Color.RED);
renderer.line(x1 + location.getOffsetx(), y1 + location.getOffsety(), x2 + location.getOffsetx(), y2 + location.getOffsety());
renderer.rect(x1 + location.getOffsetx(), y1 + location.getOffsety(), region.getRegionWidth(), region.getRegionHeight());
renderer.end();
Every sprite of mine has offsetX and offsetY set to adjust its location on the isometric tile, because every sprite is different.
Output looks like this:
What you can see here is the starting point where the sprite begins to draw (the offsets adjusted it so the sprite looks like it's on the 1,1 tile), and the line which starts at the sprite's starting draw point and ends at its target draw point.
Now my question is, how can I make that sprite move on that line's path, so it will look like the ship is moving forward?
So the main concept of the question is.. How can you make a sprite move in a straight line, from local point to target point?
Some functions you might need to see:
public int getIsometricX(int x, int y, TextureRegion region) {
return (x * GameTile.TILE_WIDTH / 2) - (y * GameTile.TILE_WIDTH / 2) - (region.getRegionWidth() / 2);
}
public int getIsometricY(int x, int y, TextureRegion region) {
return (x * GameTile.TILE_HEIGHT / 2) + (y * GameTile.TILE_HEIGHT / 2) - (region.getRegionHeight() / 2);
}
Tiles are drawn using the same method, just with Tile's texture.
I would like to answer this question myself, because other people might have the same issue and over-complicate it like I did.
If you want to perform any move on your isometric map, do not follow my misunderstanding and calculate it in isometric coordinates.
You have to calculate it in your flat screen-matrix coordinates, and then convert it to isometric coordinates.
For example, if I want to move up along this line, all I need to do is this:
ship.y += 0.1f; // when it reaches 1, the ship will be at 0,1
So you know that you want to be at 0,1 on your non-isometric map.
So you do this increment, and then, last, you convert it to isometric coordinates before drawing:
float x = (ship.x * GameTile.TILE_WIDTH / 2) - (ship.y * GameTile.TILE_WIDTH / 2) - (texture.getWidth() / 2);
float y = (ship.x * GameTile.TILE_HEIGHT / 2) + (ship.y * GameTile.TILE_HEIGHT / 2) - (texture.getHeight() / 2);
And that will draw it on the isometric map, exactly like on your screen-coordinates, but on an isometric format.
for (int i = 0; i < targetx - currentx; i++) { ship.setPosition(ship.getX() + 1, ship.getY()); }
And you would do the same for y.
Edit:
I guess this is wrong, since you're presumably calling this every frame.
Instead you would keep track of the distance traveled between the ship and the target, and increment the ship's position a little each frame in x and y until it has covered that distance.
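A minimal per-frame sketch of that idea, assuming ship.x/ship.y are grid coordinates as above and that targetX, targetY, speed, texture, and batch are your own (they are not from the code above):

// Move in flat grid coordinates toward the target, then convert for drawing.
float dx = targetX - ship.x;
float dy = targetY - ship.y;
float dist = (float) Math.sqrt(dx * dx + dy * dy);
float step = speed * Gdx.graphics.getDeltaTime(); // grid units this frame

if (dist <= step) {
    // Close enough: snap to the target so we never overshoot.
    ship.x = targetX;
    ship.y = targetY;
} else {
    // Advance along the normalized direction.
    ship.x += dx / dist * step;
    ship.y += dy / dist * step;
}

// Convert grid coordinates to isometric draw coordinates, exactly as above.
float drawX = (ship.x * GameTile.TILE_WIDTH / 2) - (ship.y * GameTile.TILE_WIDTH / 2) - (texture.getWidth() / 2);
float drawY = (ship.x * GameTile.TILE_HEIGHT / 2) + (ship.y * GameTile.TILE_HEIGHT / 2) - (texture.getHeight() / 2);
batch.draw(texture, drawX, drawY);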
I am trying to get my player to shoot a spell and have it travel to wherever the player clicked. I can easily accomplish this by doing the following:
if(position.x >= destination.x - 1 && position.x <= destination.x + 1)
reachedX = true;
if(position.y >= destination.y - 1 && position.y <= destination.y + 1)
reachedY = true;
However, if the player's origin is at, for example, 0,0 and I click at 10,300, then it travels right and up, but when the spell reaches an x of 10 it travels directly upwards. I want the spell to travel at an angle so that it reaches the x coordinate at the same time as the y coordinate. Here is an image showing what happens and what I want to happen.
In the first picture it looks like the spell goes 45° until it reaches the right x-coordinate. This sounds like the x and y speeds are equal, no matter where the destination point is.
You should instead have a direction and, based on that, an x and y speed.
For that you first need to get the point the player is clicking.
For this you can implement InputProcessor and its touchDown(int screenX, int screenY, int pointer, int button).
The screenX and screenY arguments are given in screen coordinates (pixels) and therefore need to be converted to your world coordinates. This can be done with the viewport's unproject(Vector2 screenCoords) (or the camera's unproject(Vector3)), which gives you back the world coordinates.
Now you need to find out the direction Vector2 between you and the clicked point. The direction Vector is calculated like this:
new Vector2(otherPos.x - myPos.x, otherPos.y - myPos.y).nor();
This returns the normalized direction Vector between the two points.
Now you only need to move the spell by dir.x * spellSpeed * delta in the x-direction and dir.y * spellSpeed * delta in the y-direction, and it should look like your second picture.
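A minimal sketch of that flow (the Viewport field, spellPos, and spellSpeed are assumptions, not names from your code; only touchDown and a per-frame update are shown):

import com.badlogic.gdx.InputAdapter;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.utils.viewport.Viewport;

public class SpellShooter extends InputAdapter {
    private final Viewport viewport;              // assumed to exist in your screen
    private final Vector2 spellPos;               // spell start position in world units
    private final Vector2 direction = new Vector2();
    private float spellSpeed = 300f;              // world units per second, pick your own

    public SpellShooter(Viewport viewport, Vector2 spellStart) {
        this.viewport = viewport;
        this.spellPos = spellStart;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        // Convert screen (pixel) coordinates to world coordinates
        Vector2 target = viewport.unproject(new Vector2(screenX, screenY));
        // Normalized direction from the spell's start to the clicked point
        direction.set(target).sub(spellPos).nor();
        return true;
    }

    public void update(float delta) {
        // Move the spell along the direction at a constant speed
        spellPos.mulAdd(direction, spellSpeed * delta);
    }
}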
I am trying to shoot an object (a spell) depending on the rotation of the player's arm. The spell is supposed to come out of the hand and shoot towards where the mouse clicked (the arm rotates and points to where the mouse is). This is how the arm rotates in game:
public boolean mouseMoved(int screenX, int screenY) {
    tmp.x = screenX;
    tmp.y = screenY;
    tmp.z = 0;
    cam.unproject(tmp);
    rot = MathUtils.radiansToDegrees * MathUtils.atan2(
            tmp.y - player.getArmSprite().getY() - player.getArmSprite().getHeight(),
            tmp.x - player.getArmSprite().getX() - player.getArmSprite().getWidth());
    if (rot < 0) rot += 360;
    //lastRight/lastLeft mean whether he's looking right or left
    if (player.lastRight)
        player.setObjectRotation(rot + 80);
    if (player.lastLeft)
        player.setObjectRotation(-rot - 80);
    return true;
}
And this is how the spell is supposed to shoot based off the rotation:
//destination is a vector of where on screen the mouse was clicked
if(position.y < destination.y){
position.y += vel.y * Gdx.graphics.getDeltaTime();
}
if(position.x < destination.x){
position.x += vel.x * Gdx.graphics.getDeltaTime();
}
However, this is very wonky and rarely reacts the way it's supposed to. It fires from the hand, and then if the y axis is equal it completely evens out and travels until it reaches the x position. I want it to fire from the hand to the clicked position, perfectly straight from point A to point B. This is clearly a rotation problem that I just can't seem to figure out how to tackle.
Here is an image of what is happening: the red indicates where I clicked. As you can see, the spell reached the x position first and is now traveling to the y position, when it should have reached the clicked x and y at the same time.
Any help with this problem is extremely appreciated!
I'm pretty bad at radians and tangents but luckily we have vectors.
Since you have the rotation of the arm in degrees, I advise using vectors for any vector-related math now.
//A vector pointing up
Vector2 direction = new Vector2(0, 1);
//Let's rotate that by the rotation of the arm
direction.rotate(rot);
Now direction is the direction the arm is pointing, provided your rotation is calculated with up = 0. You might need to rotate it 180, 90 or -90 degrees, or, in case you did something silly, some other amount.
Your spell should have a Vector2 for its position, too. Set that to the hand or wherever you want it to start from. Now all you need to do is scale that direction, since it currently has a length of 1. If you want to move 5 units each frame you can do direction.scl(5), and now it has a length of 5. But technically speaking it's no longer a direction; everybody calls it a velocity now, so let's do:
//when you need to fire
float speed = 5;
Vector2 velocity = direction.cpy().scl(speed);
//update
position.add(velocity);
draw(fireballImage, position.x, position.y);
I copied direction first, otherwise it would also be scaled. Then I just added the velocity to the position and drew using that vector.
And to show vectors are awesome you should see this badlogic-vs-mouse program I created: https://github.com/madmenyo/FollowMouse There are just a few lines of my own code. It just takes a little bit of vector knowledge and it's very readable.
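Putting those pieces together, a minimal frame-rate-independent sketch (spellSpeed, handX/handY, batch, and fireballTexture are assumptions; rot is the arm rotation from the question):

// Fire: build a unit direction from the arm rotation, then turn it into a velocity.
Vector2 direction = new Vector2(0, 1).rotate(rot);  // up, rotated by the arm angle
Vector2 velocity = direction.cpy().scl(spellSpeed);  // world units per second
Vector2 position = new Vector2(handX, handY);        // start at the hand

// Update (each frame): scale by delta time so the speed doesn't depend on frame rate.
position.mulAdd(velocity, Gdx.graphics.getDeltaTime());
batch.draw(fireballTexture, position.x, position.y);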
I am currently creating a small 2d-game with lwjgl.
I tried to figure out a way of implementing a Fog-Of-War.
I used a black background with alpha set to 0.5.
Then I added a square to set alpha to 1 for each tile that is lit, ending up with a black background with different alpha values.
Then I rendered my background using the blend function:
glBlendFunc(GL_ZERO, GL_SRC_ALPHA)
This works well, but now I have a problem with adding a second layer with transparent parts and applying the Fog-of-War to it, too.
I've read something about FrameBufferObjects, but I don't know how to use them and if they are the right choice.
Later on I want to light tiles with a texture/image to give it a smoother look, so these textures may overlap. This is the reason why I chose to render the Fog-of-War first.
Do you have an idea how to fix this problem?
Thanks to samgak.
Now I try to render a dark square on each dark tile except the lit tiles.
I divided each tile into an 8x8 grid for more detail. This is my method:
public static void drawFog() {
int width = map.getTileWidth()>>3; //Divide by 8
int height = map.getTileHeight()>>3;
int mapWidth = map.getWidth() << 3;
int mapHeight = map.getHeight() << 3;
//background_x/y is the position of the background in pixel
int mapStartX = (int) Math.floor(background_x / width);
int mapStartY = (int) Math.floor(background_y / height);
//Multiply each color component with 0.5 to get a darker look
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
glBegin(GL_QUADS);
//RENDERED_TILES_X/Y is the amount of tiles to fill the screen
for(int x = mapStartX; x < (RENDERED_TILES_X<<3) + mapStartX
&& x < mapWidth; x++){
for(int y = mapStartY; y < (RENDERED_TILES_Y<<3) + mapStartY
&& y < mapHeight; y++){
//visible is a boolean array for each subtile
if(!visible[x][y]){
float tx = (x * width) - background_x;
float ty = (y * height) - background_y;
glVertex2f(tx, ty);
glVertex2f(tx+width, ty);
glVertex2f(tx+width, ty+height);
glVertex2f(tx, ty+height);
}
}
}
glEnd();
}
I set the visible array to false except for a small square.
It will render fine, but if I move the background, the whole screen except the visible square turns black.
One approach is to render the Fog-of-War layer last, using an untextured black square rendered over the top of all the other layers after they have been rendered.
Use this blend function:
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA)
and set the Fog-of-War alpha per-vertex so that when it is 1.0 the black overlay is transparent, and when it is 0.0, it is entirely black. (If you want the alpha to have the opposite meaning, just swap the arguments).
To make it more smooth you can set the alpha per vertex at each of the corners of the square to vary smoothly across it. You could also use a texture with varying alpha values instead of a plain black square, or subdivide the square into 4 or 16 squares to allow finer control.
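A minimal sketch of that approach in the question's immediate-mode style (GL11 static imports assumed; tilesX, tilesY, tileWidth, tileHeight, and the per-corner visibility array are placeholders, with 1.0 meaning fully visible):

// Render the Fog-of-War last, over everything else.
glEnable(GL_BLEND);
// Result = overlayColor * (1 - alpha) + framebuffer * alpha,
// so alpha 1.0 leaves the scene untouched and alpha 0.0 turns it black.
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA);

glBegin(GL_QUADS);
for (int x = 0; x < tilesX; x++) {
    for (int y = 0; y < tilesY; y++) {
        float left = x * tileWidth, top = y * tileHeight;
        // Per-vertex alpha from the corners of the visibility grid gives a smooth gradient.
        glColor4f(0f, 0f, 0f, visibility[x][y]);
        glVertex2f(left, top);
        glColor4f(0f, 0f, 0f, visibility[x + 1][y]);
        glVertex2f(left + tileWidth, top);
        glColor4f(0f, 0f, 0f, visibility[x + 1][y + 1]);
        glVertex2f(left + tileWidth, top + tileHeight);
        glColor4f(0f, 0f, 0f, visibility[x][y + 1]);
        glVertex2f(left, top + tileHeight);
    }
}
glEnd();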
Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This isn't transferred well through screenshots, but the lines don't flicker when the scale is sustained at that point, so it logically shouldn't be an issue with drawing between scale assignment (and thread locks prevent this).
Scaled to 1x:
Scaled to A, where 1x < Ax < Bx:
Scaled to B, where Ax < Bx < Cx:
Scaled to C, where Bx < Cx < 2x:
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512 and nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes that didn't fix the problem when cut:
#Override
public void draw(float xOffset, float yOffset, float width, float height,
int glTex, float texX, float texY, float texWidth, float texHeight) {
GL11.glLoadIdentity();
GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
GL11.glTranslatef(xOffset, yOffset, 0f);
if(glTex != lastTexture){
GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
lastTexture = glTex;
}
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(texX,texY + texHeight);
GL11.glVertex2f(-height/2, -width/2);
GL11.glTexCoord2f(texX + texWidth,texY + texHeight);
GL11.glVertex2f(-height/2, width/2);
GL11.glTexCoord2f(texX + texWidth,texY);
GL11.glVertex2f(height/2, width/2);
GL11.glTexCoord2f(texX,texY);
GL11.glVertex2f(height/2, -width/2);
GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
int xp, yp; //x and y position of individual tiles
for(int c = 0; c<width; c++){ //c as in column
xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
for(int r = 0; r<height; r++){ //r as in row
if(tiles[r*width+c] <0) continue; //skip empty tiles ('air')
yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to column y
tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
engine, //drawing context
new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
scale //scale of tiles
);
}
}
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, origin, it has an offset of 0. Opengl then is instructed to draw a quad, with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It then is told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws left edge at -halfwidth, which should coordinate-wise be exactly the same as tile1's right edge. By itself, this should work, and it does. When considering a constant scale, it breaks somehow.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, ie filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex also doesn't fill the vertex's containing pixel space.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than to simply avoid non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is how the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, only has vertical or horizontal gaps but not both, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of using calculations that merely produce the same results mathematically.
In the case of coordinates on a grid, you could calculate the coordinates for each grid line (tile boundary) once, and then use those values for all draw operations. Say if you have n tiles in the x-direction, you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
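A minimal sketch of that in the style of the grid-drawing code above (drawTile is a hypothetical helper that draws a tile with its top-left corner at the given position and the given size; the other names come from the question's code):

// Compute each grid line's pixel coordinate exactly once per frame, so neighboring
// tiles share the very same float value for their common edge.
float[] xs = new float[width + 1];
float[] ys = new float[height + 1];
for (int c = 0; c <= width; c++)  xs[c] = offset.getX() + c * tilesize.a * scale.getX();
for (int r = 0; r <= height; r++) ys[r] = offset.getY() + r * tilesize.b * scale.getY();

for (int c = 0; c < width; c++) {
    for (int r = 0; r < height; r++) {
        if (tiles[r * width + c] < 0) continue; // skip empty tiles ('air')
        // Tile c ends exactly where tile c + 1 begins, because both use xs[c + 1].
        drawTile(tiles[r * width + c], xs[c], ys[r], xs[c + 1] - xs[c], ys[r + 1] - ys[r]);
    }
}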