I'm working on a 2D engine. It already works quite well, but I keep getting pixel errors.
For example, my window is 960x540 pixels, I draw a line from (0, 0) to (959, 0). I would expect that every pixel on scan-line 0 will be set to a color, but no: the right-most pixel is not drawn. Same problem when I draw vertically to pixel 539. I really need to draw to (960, 0) or (0, 540) to have it drawn.
As I was born in the pixel era, I am convinced that this is not the correct result. When my screen was 320x200 pixels, I could draw from 0 to 319 and from 0 to 199, and my screen would be full. Now I end up with a screen whose right-most column and bottom row of pixels are not drawn.
This can be due to different things:
I expect the OpenGL line primitive to be drawn from one pixel to another pixel inclusively, but is that last pixel actually exclusive? Is that it?
Is my projection matrix incorrect?
Am I under the false assumption that a backbuffer of 960x540 actually has one more pixel in each direction?
Something else?
Can someone please help me? I have been looking into this problem for a long time now, and every time I thought it was OK, I saw after a while that it actually wasn't.
Here is some of my code; I tried to strip it down as much as possible. When I call my line function, 0.375 is added to every coordinate to make it correct on both ATI and NVIDIA adapters.
int width = resX();
int height = resY();
for (int i = 0; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(1, 0, 0, 1));
for (int i = 1; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(0, 1, 0, 1));
// when I do this, one pixel to the right remains undrawn
void rendermachine::line(int x1, int y1, int x2, int y2, const vec4f &color)
{
... some code to decide what std::vector the coordinates should be pushed into
// m_z is a z-coordinate, I use z-buffering to preserve correct drawing orders
// vec2f(0, 0) is a texture-coordinate, the line is drawn without texturing
target->push_back(vertex(vec3f((float)x1 + 0.375f, (float)y1 + 0.375f, m_z), color, vec2f(0, 0)));
target->push_back(vertex(vec3f((float)x2 + 0.375f, (float)y2 + 0.375f, m_z), color, vec2f(0, 0)));
}
void rendermachine::update(...)
{
... render target object is queried for width and height, in my test it is just the back buffer so the window client resolution is returned
mat4f mP;
mP.setOrthographic(0, (float)width, (float)height, 0, 0, 8000000);
... all vertices are copied to video memory
... drawing
if (there are lines to draw)
glDrawArrays(GL_LINES, (int)offset, (int)lines.size());
...
}
// And the (very simple) shader to draw these lines
// Vertex shader
#version 120
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 mP;
varying vec4 vColor;
void main(void) {
gl_Position = mP * vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
}
// Fragment shader
#version 120
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vColor;
void main(void) {
gl_FragColor = vColor;
}
In OpenGL, lines are rasterized using the "Diamond Exit" rule. This is almost the same as saying that the end coordinate is exclusive, but not quite...
This is what the OpenGL spec has to say:
http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node47.html
Also have a look at the OpenGL FAQ, http://www.opengl.org/archives/resources/faq/technical/rasterization.htm, item "14.090 How do I obtain exact pixelization of lines?". It says "The OpenGL specification allows for a wide range of line rendering hardware, so exact pixelization may not be possible at all."
Many will argue that you should not use lines in OpenGL at all. Their behaviour is based on how ancient SGI hardware worked, not on what makes sense. (And lines with widths >1 are nearly impossible to use in a way that looks good!)
Note that OpenGL coordinate space has no notion of integers; everything is a float, and the "centre" of an OpenGL pixel is really at 0.5,0.5 instead of its top-left corner. Therefore, if you want a 1px wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This will be especially apparent if you turn on anti-aliasing: if you try to draw from 50,0 to 50,100, you may see a blurry 2px wide line, because the line falls in between two pixels.
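To make that concrete, here is a minimal sketch of the half-pixel convention (written against LWJGL's GL11 immediate mode, which the later examples in this thread also use; the exact API is incidental, and it assumes a pixel-exact orthographic projection such as glOrtho(0, width, height, 0, -1, 1)):
// Sketch only: one unit == one pixel, pixel centres at (x + 0.5, y + 0.5).
// With the diamond-exit rule the second endpoint behaves roughly as exclusive,
// so to fill pixels x1..x2 inclusive on row y we pass x2 + 1 and centre the
// line vertically on the row of pixel centres.
static void drawHorizontalRun(int x1, int x2, int y) {
    GL11.glBegin(GL11.GL_LINES);
    GL11.glVertex2f(x1, y + 0.5f);     // left edge of the first pixel
    GL11.glVertex2f(x2 + 1, y + 0.5f); // one past the last pixel, so its diamond is exited
    GL11.glEnd();
}
Exact pixelisation can still vary between implementations, as the FAQ quoted above warns, which is why many engines draw one-pixel lines as thin quads instead.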
Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This isn't conveyed well through screenshots, but the lines don't flicker when the scale is held at that point, so it logically shouldn't be an issue with drawing in between scale assignments (and thread locks prevent this anyway).
Scaled to 1x:
Scaled to A, 1x < Ax < Bx :
Scaled to B, Ax < Bx < Cx :
Scaled to C, Bx < Cx < 2x :
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512, nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes that didn't fix the problem when cut:
@Override
public void draw(float xOffset, float yOffset, float width, float height,
int glTex, float texX, float texY, float texWidth, float texHeight) {
GL11.glLoadIdentity();
GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
GL11.glTranslatef(xOffset, yOffset, 0f);
if(glTex != lastTexture){
GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
lastTexture = glTex;
}
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(texX,texY + texHeight);
GL11.glVertex2f(-height/2, -width/2);
GL11.glTexCoord2f(texX + texWidth,texY + texHeight);
GL11.glVertex2f(-height/2, width/2);
GL11.glTexCoord2f(texX + texWidth,texY);
GL11.glVertex2f(height/2, width/2);
GL11.glTexCoord2f(texX,texY);
GL11.glVertex2f(height/2, -width/2);
GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
int xp, yp; //x and y position of individual tiles
for(int c = 0; c<width; c++){ //c as in column
xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
for(int r = 0; r<height; r++){ //r as in row
if(tiles[r*width+c] <0) continue; //skip empty tiles ('air')
yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to column y
tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
engine, //drawing context
new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
scale //scale of tiles
);
}
}
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, origin, it has an offset of 0. Opengl then is instructed to draw a quad, with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It then is told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws left edge at -halfwidth, which should coordinate-wise be exactly the same as tile1's right edge. By itself, this should work, and it does. When considering a constant scale, it breaks somehow.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, ie filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex also doesn't fill the vertex's containing pixel space.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than to simply avoid non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is how the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, only has vertical or horizontal lines but not both, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of relying on calculations that mathematically produce the same results.
In the case of coordinates on a grid, you could calculate the coordinates for each grid line (tile boundary) once, and then use those values for all draw operations. Say if you have n tiles in the x-direction, you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
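Applied to the tile-grid code from the question, a minimal sketch of that idea; drawTileQuad(...) is a hypothetical helper, not something from the original code, and the point is only that neighbouring tiles literally share the same float values for their common edge:
// Precompute the grid-line positions once (per scale change), so the right
// edge of tile c and the left edge of tile c+1 are the exact same float.
float[] xs = new float[width + 1];
float[] ys = new float[height + 1];
for (int c = 0; c <= width; c++) xs[c] = offset.getX() + c * tilesize.a * scale.getX();
for (int r = 0; r <= height; r++) ys[r] = offset.getY() + r * tilesize.b * scale.getY();
for (int c = 0; c < width; c++) {
    for (int r = 0; r < height; r++) {
        if (tiles[r * width + c] < 0) continue; // skip empty tiles ('air')
        // Hypothetical helper: draw the quad with these exact edge coordinates
        // instead of translating to a centre and adding/subtracting half-sizes.
        drawTileQuad(xs[c], ys[r], xs[c + 1], ys[r + 1], tiles[r * width + c]);
    }
}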
I have a system that generates chunks of 2d game map tiles. Chunks are 16x16 tiles, tiles are 25x25.
The chunks are given their own coordinates, like 0,0, 0,1, etc. The tiles determine their coordinates in the world based on which chunk they're in. I've verified that the chunks/tiles are all showing the proper x/y coordinates.
My problem is translating those into screen coordinates. In a previous question someone recommended using:
(worldX * tileWidth) % viewport_width
Each tile's x/y are run through this calculation and a screen x/y coordinate is returned.
This works for tiles that fit within the viewport, but it resets the screen x/y position calculation for anything off-screen.
In my map, I load chunks of tiles within a radius around the player so some of the inner tiles will be off-screen (until they move around, tile positions on the screen are moved).
I tried a test with a tile that would be off screen:
Tile's x coord: 41
41 * 25 = 1025
Game window: 1024
1025 % 1024 = 1
This means that tile x (which, if the screen 0,0 is at map 0,0, should be at x:1025, just off the right-hand side of the screen) is actually at x:1, appearing in the top-left.
I can't think of how to properly handle this - it seems to me like I need to take the tileX * tileWidth to determine its "initial screen position" and then somehow use an offset to determine how to make it appear on screen. But what offset?
Update: I already store an x/y offset value when the player moves, so I know how to move the map. I can use these values as the current offset, and if someone saves the game I can simply store those and re-use them. There's no equation necessary, I would just have to store the cumulative offsets.
The modulo (worldX*tileWidth % screenWidth) is what's causing it to reset. Modulo (%) gives you the remainder of an integer division operation; so, if worldX * tileWidth is greater than screenWidth, it will give you the remainder of (worldX * tileWidth) / screenWidth; if worldX * tileWidth is screenWidth+1, remainder is 1: it starts over at the beginning of the row.
If you eliminate the modulo, it will continue to draw tiles past the edge of the screen. If your drawing buffer is the same size as the screen, you'll need to add a check for tiles at the edge of the screen to make sure you only draw the tile portion that will be visible.
If you're trying to keep the player centered on the screen, you need to offset each tile by the player's offset from tile 0,0 in pixels, minus half the screen width:
offsetX = (playerWorldX * tileWidth) - (screenWidth / 2);
screenX = (worldX * tileWidth) - offsetX;
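A small sketch of how those two formulas fit together in a draw loop, with the question's numbers worked through in the comments; playerWorldX/playerWorldY, visibleTiles and drawTile(...) are placeholder names, not from the original code:
// Keep the player centred: shift everything by the player's pixel position
// minus half the screen.
int offsetX = playerWorldX * tileWidth - screenWidth / 2;
int offsetY = playerWorldY * tileHeight - screenHeight / 2;
for (Tile t : visibleTiles) {
    int screenX = t.worldX * tileWidth - offsetX;
    int screenY = t.worldY * tileHeight - offsetY;
    // E.g. with tileWidth = 25 and screenWidth = 1024 as in the question, and the
    // player at worldX = 30: offsetX = 750 - 512 = 238, so the tile at worldX = 41
    // ends up at screenX = 1025 - 238 = 787 instead of wrapping back to 1.
    drawTile(t, screenX, screenY);
}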
x = ((worldX*tileWidth) > screenWidth) ? worldX*tileWidth : (worldX*tileWidth)%screenWidth;
That should work. Though I recommend implementing something like an interface and letting each tile decide where it wants to be rendered. Something like this:
interface Renderable {
    void render(Graphics2D g);
    // ...
}
class Tile implements Renderable {
    int x, y;
    //other stuff
    public void render(Graphics2D g) {
        if (!inScreen()) {
            return;
        }
        //...
        //render
    }
    boolean inScreen() {
        //if the map moves with the player you need to define the boundaries of your current screenblock in terms of the global map coordinates
        //What you can do is store this globally in a singleton somewhere or pass it to the constructor of each tile.
        //currentBlock.x is then player.x - screenWidth/2
        //currentBlock.width is then player.x + screenWidth/2
        //similar for y
        if (this.x < currentBlock.x || this.x > currentBlock.width)
            return false;
        if (this.y < currentBlock.y || this.y > currentBlock.height)
            return false;
        return true;
        //If the map is in blocks (think Zelda on SNES where you go from one screenblock to another) you still need to define the boundaries
        //currentBlock.x = (player.x / screenWidth) (integer division) * screenWidth;
        //currentBlock.width = (player.x / screenWidth) (...) * screenWidth + screenWidth;
        //same for y
        //Then perform the same tests as above
    }
}
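A short usage sketch under the same assumptions (a currentBlock describing the visible region and some collection of renderables; names are illustrative):
// Each frame, update the visible region around the player, then let every
// tile decide for itself whether to draw (via inScreen()).
currentBlock.x = player.x - screenWidth / 2;
currentBlock.width = player.x + screenWidth / 2;
currentBlock.y = player.y - screenHeight / 2;
currentBlock.height = player.y + screenHeight / 2;
for (Renderable r : renderables) {
    r.render(g);
}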
I'm having a little problem with figuring something out (Obviously).
I'm creating a 2D Top-down mmorpg, and in this game I wish the player to move around a tiled map similar to the way the game Pokemon worked, if anyone has ever played it.
If you have not, picture this: I need to load various areas, constructing them from tiles which contain an image and a location (x, y) and objects (players, items) but the player can only see a portion of it at a time, namely a 20 by 15 tile-wide area, which can be 100s of tiles tall/wide. I want the "camera" to follow the player, keeping him in the center, unless the player reaches the edge of the loaded area.
I don't need code necessarily, just a design plan. I have no idea how to go about this kind of thing.
I was thinking of possibly splitting up the entire loaded area into 10x10 tile pieces, called "Blocks" and loading them, but I'm still not sure how to load pieces off screen and only show them when the player is in range.
The picture should describe it:
Any ideas?
My solution:
The way I solved this problem was through the wonderful world of JScrollPanes and JPanels.
I added a 3x3 block of JPanels inside of a JScrollPane, added a couple scrolling and "goto" methods for centering/moving the JScrollPane around, and voila, I had my camera.
While the answer I chose was a little more generic to people wanting to do 2d camera stuff, the way I did it actually helped me visualize what I was doing a little better since I actually had a physical "Camera" (JScrollPane) to move around my "World" (3x3 Grid of JPanels)
Just thought I would post this here in case anyone was googling for an answer and this came up. :)
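For anyone trying the same approach, a rough Swing sketch of that setup (class, field and method names are illustrative, not from the original project):
import javax.swing.*;
import java.awt.*;

// Rough sketch: a 3x3 grid of "block" panels inside a JScrollPane whose
// viewport acts as the camera.
class CameraDemo {
    final JPanel world = new JPanel(new GridLayout(3, 3));
    final JScrollPane camera = new JScrollPane(world,
            JScrollPane.VERTICAL_SCROLLBAR_NEVER,
            JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);

    CameraDemo() {
        for (int i = 0; i < 9; i++) {
            world.add(new JPanel()); // each panel would paint one 10x10 block of tiles
        }
    }

    // "goto"-style centring: scroll so (x, y) in world coordinates sits in the middle.
    void centerOn(int x, int y) {
        Dimension view = camera.getViewport().getExtentSize();
        camera.getViewport().setViewPosition(
                new Point(x - view.width / 2, y - view.height / 2));
    }
}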
For a 2D game, it's quite easy to figure out which tiles fall within a view rectangle, if the tiles are rectangular. Basically, picture a "viewport" rectangle inside the larger world rectangle. By dividing the view offsets by the tile sizes you can easily determine the starting tile, and then just render the tiles that fit inside the view.
First off, you're working in three coordinate systems: view, world, and map. The view coordinates are essentially mouse offsets from the upper left corner of the view. World coordinates are pixel distances from the upper left corner of tile 0, 0. I'm assuming your world starts in the upper left corner. And map coordinates are x, y indices into the map array.
You'll need to convert between these in order to do "fancy" things like scrolling, figuring out which tile is under the mouse, and drawing world objects at the correct coordinates in the view. So, you'll need some functions to convert between these systems:
// I haven't touched Java in years, but JavaScript should be easy enough to convey the point
var TileWidth = 40,
TileHeight = 40;
function View() {
this.viewOrigin = [0, 0]; // scroll offset
this.viewSize = [600, 400];
this.map = null;
this.worldSize = [0, 0];
}
View.prototype.viewToWorld = function(v, w) {
w[0] = v[0] + this.viewOrigin[0];
w[1] = v[1] + this.viewOrigin[1];
};
View.prototype.worldToMap = function(w, m) {
m[0] = Math.floor(w[0] / TileWidth);
m[1] = Math.floor(w[1] / TileHeight);
}
View.prototype.mapToWorld = function(m, w) {
w[0] = m[0] * TileWidth;
w[1] = m[1] * TileHeight;
};
View.prototype.worldToView = function(w, v) {
v[0] = w[0] - this.viewOrigin[0];
v[1] = w[1] - this.viewOrigin[1];
}
Armed with these functions we can now render the visible portion of the map...
View.prototype.draw = function() {
var mapStartPos = [0, 0],
worldStartPos = [0, 0],
viewStartPos = [0, 0],
mx, my, // map coordinates of current tile
vx, vy; // view coordinates of current tile
this.worldToMap(this.viewOrigin, mapStartPos); // which tile is closest to the view origin?
this.mapToWorld(mapStartPos, worldStartPos); // round world position to tile corner...
this.worldToView(worldStartPos, viewStartPos); // ... and then convert to view coordinates. this allows per-pixel scrolling
mx = mapStartPos[0];
my = mapStartPos[1];
for (vy = viewStartPos[1]; vy < this.viewSize[1]; vy += TileHeight) {
for (vx = viewStartPos[0]; vx < this.viewSize[0]; vx += TileWidth) {
var tile = this.map.get(mx++, my);
this.drawTile(tile, vx, vy);
}
mx = mapStartPos[0];
my++;
}
};
That should work. I didn't have time to put together a working demo webpage, but I hope you get the idea.
By changing viewOrigin you can scroll around. To get the world and map coordinates under the mouse, use the viewToWorld and worldToMap functions.
If you're planning on an isometric view (e.g. Diablo), then things get considerably trickier.
Good luck!
The way I would do such a thing is to keep a variable called cameraPosition or something. Then, in the draw method of all objects, use cameraPosition to offset the locations of everything.
For example: A rock is at [100,50], while the camera is at [75,75]. This means the rock should be drawn at [25,-25] (the result of [100,50] - [75,75]).
You might have to tweak this a bit to make it work (for example maybe you have to compensate for window size). Note that you should also do a bit of culling - if something wants to be drawn at [2460,-830], you probably don't want to bother drawing it.
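A minimal sketch of that draw pattern (the Rock class, the field names and the 32-pixel size are made up for illustration):
import java.awt.Graphics2D;

// Every drawable subtracts the camera position when drawing itself, and skips
// drawing entirely when the result falls well outside the window.
class Rock {
    static final int SIZE = 32;    // illustrative sprite size
    int worldX = 100, worldY = 50; // the rock from the example above

    void draw(Graphics2D g, int cameraX, int cameraY, int windowWidth, int windowHeight) {
        int screenX = worldX - cameraX; // with the camera at [75, 75] this is 25
        int screenY = worldY - cameraY; // ... and this is -25
        if (screenX < -SIZE || screenX > windowWidth || screenY < -SIZE || screenY > windowHeight) {
            return; // culled, e.g. something that would land at [2460, -830]
        }
        g.fillRect(screenX, screenY, SIZE, SIZE);
    }
}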
One approach is along the lines of double buffering ( Java Double Buffering ) and blitting ( http://download.oracle.com/javase/tutorial/extra/fullscreen/doublebuf.html ). There is even a design pattern associated with it ( http://www.javalobby.org/forums/thread.jspa?threadID=16867&tstart=0 ).
The issue involves an Android Path shape. It's a triangle that I'm using as an arrow to point towards objects on a screen Canvas. This is for a 2D game: the player is in the middle of the screen, with objects around him and offscreen.
These arrows are supposed to rotate around the center of the screen, with a radius so that they rotate in a circle around the player. The arrows point towards objects that the player needs to move towards.
What I have right now is somewhat working, but the arrows are zipping around the circle at ridiculous speeds. Funnily enough, they're pointing in the right direction, but they aren't staying at the right point on the circle (if an arrow is pointing northeast, it should be on the northeast part of the circle, etc.).
I'm sure it's because of the math. I'm probably using atan2 wrong. Or canvas.translate wrong. Or maybe I shouldn't be using atan2 at all. Help! :)
Here is the code:
// set the shape of our radar blips
oBlipPath.moveTo(0, -5);
oBlipPath.lineTo(5, 0);
oBlipPath.lineTo(0, 5);
// Paint all the enemies and radar blips!
for(int i=0; i<iNumEnemies; i++){
if (oEnemies[i].draw(canvas, (int)worldX, (int)worldY)){
//calculate the degree the object is from the center of the screen.
//(user is the player. this could be done easier using iWidth and iHeight probably)
//we use a world coordinate system. worldY and worldX are subtracted
fDegrees = (float)Math.toDegrees(Math.atan2((oEnemies[i].getEnemyCenterY()-worldY)-user.getShipCenterY(), (oEnemies[i].getEnemyCenterX()-worldX)-user.getShipCenterX()));
canvas.save();
//get to the center
canvas.translate((iWidth / 2) , (iHeight / 2) );
//move a little bit depending on direction (trying to make arrows appear around a circle)
canvas.translate((float)(20 * Math.cos(fDegrees)), (float)(20* Math.sin(fDegrees)));
//rotate canvas so arrows will rotate and point in the right direction
canvas.rotate(fDegrees);
//draw arrows
canvas.drawPath(oBlipPath, oBlipPaint);
canvas.restore();
}
}
Affine transformations are not commutative. They are typically applied in an apparent last-specified-first-applied order. As an alternative, consider the rotate() variation that rotates about a point.
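A hedged sketch of what that could look like for the radar arrows: keep the atan2 result in radians for Math.cos/Math.sin (passing degrees to them is likely what makes the arrows race around the circle), convert to degrees only for Canvas.rotate(), and rotate about the arrow's own position. Variable names follow the question; the 50-pixel radius is arbitrary:
float cx = iWidth / 2f, cy = iHeight / 2f;
// Angle from the player (screen centre) to the enemy, in radians.
double angle = Math.atan2(
        (oEnemies[i].getEnemyCenterY() - worldY) - cy,
        (oEnemies[i].getEnemyCenterX() - worldX) - cx);
float radius = 50f; // arbitrary distance of the arrow from the player
float bx = cx + (float) (radius * Math.cos(angle)); // point on the circle
float by = cy + (float) (radius * Math.sin(angle));
canvas.save();
canvas.rotate((float) Math.toDegrees(angle), bx, by); // rotate about the arrow's position
canvas.translate(bx, by);                             // then move the path there
canvas.drawPath(oBlipPath, oBlipPaint);
canvas.restore();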
Well, I've got it doing what I wanted, but I don't really know how. I threw in some random numbers until things showed up on the screen the way I wanted. If anyone wants to clue me in as to a better way to do this, I'm all ears.
The code:
// set the shape of our radar blips
oBlipPath.moveTo(0, -5);
oBlipPath.lineTo(6, 0);
oBlipPath.lineTo(0, 5);
oBlipMatrix.setRotate(45, 0, 0);
oBlipPath.transform(oBlipMatrix);
// Paint all the enemies and radar blips!
for(int i=0; i<iNumEnemies; i++){
oEnemies[i].draw(canvas, (int)worldX, (int)worldY);
if (oEnemies[i].bActive){
//calculate the degree the object is from the center of the screen.
//(user is the player. this could be done easier using iWidth and iHeight probably)
//we use a world coordinate system. worldY and worldX are subtracted
fDegrees = (float)Math.toDegrees(Math.atan2((oEnemies[i].getEnemyCenterY()-worldY)-(iHeight / 2), (oEnemies[i].getEnemyCenterX()-worldX)-(iWidth / 2)));
canvas.save();
//get to the center
canvas.translate((iWidth / 2 + 50) , (iHeight / 2 + 50) );
//move a little bit depending on direction (trying to make arrows appear around a circle)
//canvas.translate((float)(20 * Math.cos(fDegrees)), (float)(20* Math.sin(fDegrees)));
//rotate canvas so arrows will rotate and point in the right direction
canvas.rotate(fDegrees-45, -50, -50);
//draw arrows
canvas.drawPath(oBlipPath, oBlipPaint);
canvas.restore();
}
}
For whatever reason, I have to subtract 45 degrees from the canvas rotation, but add 45 degrees to the matrix rotation of the path shape. It works, but why?! :)