I'm trying to center the text vertically inside a rect, but it's always off by a little bit.
The font is Helvetica at size 12; I'm adding 6 points of padding above and below the letter and setting the height of the rect to 24 points.
The code used to draw the cells is below, and the image shows the cell content not being vertically centered.
public void drawCell(PDPageContentStream owningStream, float xOffset, float yOffset) throws IOException {
    float cellHeightSpacing = fontSize / 2;
    float height = yOffset - fontSize - cellHeightSpacing;
    if (isContentLargerThanCell()) {
        if (maxLines < 2)
            return;
    } else {
        float x = xOffset + getAlignedX(" " + content + " ");
        drawContent(owningStream, " " + content + " ", x, height);
    }
    drawCellBoundaries(owningStream, xOffset, yOffset - 2 * fontSize, 2 * fontSize);
}

private void drawCellBoundaries(PDPageContentStream owniContentStream, float X, float startHeight, float sizeHeight) throws IOException {
    owniContentStream.addRect(X, startHeight, width, sizeHeight);
    owniContentStream.stroke();
}
You actually have two issues to cope with:
For a given font size fs, hardly any letter actually has a height of fs, and short sequences of letters usually don't either.
Your code assumes that it has to vertically center content of height fs, but you use capital letters without any part beneath the baseline, so their actual height is considerably less than fs.
The y coordinate you use for drawing text is the height of the baseline, not the height of the bottom of all text.
E.g. look at this letter:
If you draw this letter at some coordinates x, y, its descender will be drawn below your y coordinate, while your code assumes for centering purposes that the whole letter is located between y and y + fs.
The former problem will most likely have to remain: if you vertically center based on the exact appearance of the letters, neighboring cells might have jumping baselines, which will look worse than a certain degree of being off-center.
Your main problem is the latter one, and you can solve it by raising the y coordinate of the text drawing (or lowering the boundary drawing) by fs times the absolute value of the font's maximum descent (expressed as a fraction of the font size).
You can retrieve the font descent from the font's font descriptor (PDFontDescriptor.getDescent()) or from the font's bounding box (PDFont.getBoundingBox()).
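A minimal sketch of the adjustment, assuming PDFBox 2.x and the standard Helvetica font as in the question (cellBottomY and padding are illustrative names, not from the original code); note that the descriptor reports values in glyph space, typically 1/1000 of the font size, hence the division by 1000:

PDFont font = PDType1Font.HELVETICA;   // assumption: standard Helvetica, as in the question
float fontSize = 12f;

// getDescent() is negative and given in 1/1000 of the font size
float descent = font.getFontDescriptor().getDescent() / 1000f * fontSize;

// center the glyph box rather than the baseline: shift the text baseline up by |descent|
float textY = cellBottomY + padding + Math.abs(descent);
// then draw the content at (x, textY) instead of the uncorrected height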
double degPi = degrees * Math.PI / 180;
double a = Math.cos(degPi)*tImgCover.getScaledHeight();
double b = Math.sin(degPi)*tImgCover.getScaledWidth();
double c = -Math.sin(degPi) * tImgCover.getScaledHeight();
double d = Math.cos(degPi)* tImgCover.getScaledWidth();
double e = absX;
double f = absY;
contentByte.addImage(image, a, b, c, d, e, f); /*add image*/
How to rotate around the image center by itext?
If we have an Image image and coordinates x, y, we can draw the image without rotation with its lower left corner at the given coordinates like this
contentByte.addImage(image, image.getWidth(), 0, 0, image.getHeight(), x, y);
A bitmap image from the resources has a size of 1x1 with the coordinate origin at its lower left. Thus, this operation stretches the image to its correct size and moves it so its lower left is at the given coordinates.
If we want to draw the same image as if the one drawn above was rotated around its center by an angle rotate, we can do this by moving the 1x1 image so that the origin is in its center, stretching it to its correct size, rotating it, and then moving the origin (which still is at the center of the rotated image) to the center of the unrotated image. These operations are easier to express using AffineTransform instances (from package com.itextpdf.awt.geom) instead of number tuples. Thus:
// Draw image as if the previous image was rotated around its center
// Image starts out being 1x1 with origin in lower left
// Move origin to center of image
AffineTransform A = AffineTransform.getTranslateInstance(-0.5, -0.5);
// Stretch it to its dimensions
AffineTransform B = AffineTransform.getScaleInstance(image.getWidth(), image.getHeight());
// Rotate it
AffineTransform C = AffineTransform.getRotateInstance(rotate);
// Move it to have the same center as above
AffineTransform D = AffineTransform.getTranslateInstance(x + image.getWidth()/2, y + image.getHeight()/2);
// Concatenate
AffineTransform M = (AffineTransform) A.clone();
M.preConcatenate(B);
M.preConcatenate(C);
M.preConcatenate(D);
//Draw
contentByte.addImage(image, M);
(AddRotatedImage.java test method testAddRotatedImage)
For example drawing both images using
int x = 200;
int y = 300;
float rotate = (float) Math.PI / 3;
results in something like this:
With a Flip
The OP asked in a comment
how to add rotate and flip image?
For this you simply insert a mirroring affine transformation into the sequence of transformations above.
Unfortunately the OP did not mention whether he meant a horizontal or a vertical flip. But as changing the rotation angle accordingly transforms one into the other, that isn't really necessary, either.
// Draw image as if the previous image was flipped and rotated around its center
// Image starts out being 1x1 with origin in lower left
// Move origin to center of image
AffineTransform A = AffineTransform.getTranslateInstance(-0.5, -0.5);
// Flip it horizontally
AffineTransform B = new AffineTransform(-1, 0, 0, 1, 0, 0);
// Stretch it to its dimensions
AffineTransform C = AffineTransform.getScaleInstance(image.getWidth(), image.getHeight());
// Rotate it
AffineTransform D = AffineTransform.getRotateInstance(rotate);
// Move it to have the same center as above
AffineTransform E = AffineTransform.getTranslateInstance(x + image.getWidth()/2, y + image.getHeight()/2);
// Concatenate
AffineTransform M = (AffineTransform) A.clone();
M.preConcatenate(B);
M.preConcatenate(C);
M.preConcatenate(D);
M.preConcatenate(E);
//Draw
contentByte.addImage(image, M);
(AddRotatedImage.java test method testAddRotatedFlippedImage)
The result with the same image as above:
With Interpolation
The OP asked in yet another comment
How anti aliasing ?
The iText Image class has an Interpolation property. By setting it to true (before adding the image to the document, obviously),
image.setInterpolation(true);
low resolution images are subject to interpolation when drawn.
E.g. using a 2x2 image with differently colored pixels instead of the image of Willi, you get the following results, first without interpolation, then with interpolation:
See the AddRotatedImage.java test testAddRotatedInterpolatedImage, which adds this image:
Beware: iText Image property Interpolation effectively sets the Interpolate entry in the PDF image dictionary. The PDF specification notes in this context:
NOTE A conforming Reader may choose to not implement this feature of PDF, or may use any specific implementation of interpolation that it wishes.
Thus, on some viewers interpolation may occur differently than in your viewer, maybe even not at all. If you need a specific kind of interpolation on every viewer, upscale the image with the desired amount of interpolation / anti-aliasing before loading it into an iText Image.
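A minimal pre-scaling sketch of that idea, assuming iText 5 and plain AWT (the file name and scale factor are illustrative):

BufferedImage source = ImageIO.read(new File("tiny.png"));   // illustrative input file
int factor = 8;                                              // illustrative upscale factor
BufferedImage scaled = new BufferedImage(source.getWidth() * factor, source.getHeight() * factor, BufferedImage.TYPE_INT_ARGB);
Graphics2D g = scaled.createGraphics();
// let AWT do the smoothing instead of relying on the viewer's Interpolate support
g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g.drawImage(source, 0, 0, scaled.getWidth(), scaled.getHeight(), null);
g.dispose();
Image image = Image.getInstance(scaled, null);               // iText Image from the upscaled bitmap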
public static BufferedImage rotateClockwise90(BufferedImage inputImage) {
    int width = inputImage.getWidth();
    int height = inputImage.getHeight();
    // the rotated image swaps width and height
    BufferedImage returnImage = new BufferedImage(height, width, inputImage.getType());
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            // pixel (x, y) ends up at (height - y - 1, x) after a 90 degree clockwise turn
            returnImage.setRGB(height - y - 1, x, inputImage.getRGB(x, y));
        }
    }
    return returnImage;
}
I am currently creating a small 2d-game with lwjgl.
I tried to figure out a way of implementing a Fog-Of-War.
I used a black background with alpha set to 0.5.
Then I added a square that sets alpha to 1 for each lit tile, ending up with a black background with different alpha values.
Then I rendered my background using the blend function:
glBlendFunc(GL_ZERO, GL_SRC_ALPHA)
This works well, but now I have a problem with adding a second layer with transparent parts and applying the Fog-Of-War to it, too.
I've read something about FrameBufferObjects, but I don't know how to use them or whether they are the right choice.
Later on I want to light tiles with a texture/image to give it a smoother look, and these textures may overlap. This is the reason why I chose to render the Fog-Of-War first.
Do you have an idea how to fix this problem?
Thanks to samgak.
Now I try to render a dark square on each tile except the lit ones.
I divided each tile into an 8x8 grid for more detail. This is my method:
public static void drawFog() {
    int width = map.getTileWidth() >> 3;   // divide by 8
    int height = map.getTileHeight() >> 3;
    int mapWidth = map.getWidth() << 3;
    int mapHeight = map.getHeight() << 3;
    // background_x/y is the position of the background in pixels
    int mapStartX = (int) Math.floor(background_x / width);
    int mapStartY = (int) Math.floor(background_y / height);
    // multiply each color component with 0.5 to get a darker look
    glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
    glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
    glBegin(GL_QUADS);
    // RENDERED_TILES_X/Y is the amount of tiles needed to fill the screen
    for (int x = mapStartX; x < (RENDERED_TILES_X << 3) + mapStartX && x < mapWidth; x++) {
        for (int y = mapStartY; y < (RENDERED_TILES_Y << 3) + mapStartY && y < mapHeight; y++) {
            // visible is a boolean array for each subtile
            if (!visible[x][y]) {
                float tx = (x * width) - background_x;
                float ty = (y * height) - background_y;
                glVertex2f(tx, ty);
                glVertex2f(tx + width, ty);
                glVertex2f(tx + width, ty + height);
                glVertex2f(tx, ty + height);
            }
        }
    }
    glEnd();
}
I set the visible array to false except for a small square.
It renders fine, but if I move the background, the whole screen except the visible square turns black.
One approach is to render the Fog-of-War layer last, using an untextured black square rendered over the top of all the other layers after they have been rendered.
Use this blend function:
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA)
and set the Fog-of-War alpha per-vertex so that when it is 1.0 the black overlay is transparent, and when it is 0.0, it is entirely black. (If you want the alpha to have the opposite meaning, just swap the arguments).
To make it smoother, you can set the alpha per vertex at each of the corners of the square so that it varies smoothly across it. You could also use a texture with varying alpha values instead of a plain black square, or subdivide the square into 4 or 16 squares to allow finer control.
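As a rough immediate-mode sketch of that overlay pass (LWJGL 2 style; tx, ty, w, h and the per-corner alpha values are illustrative):

glDisable(GL_TEXTURE_2D);                 // plain black quad, no texture
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA);
glBegin(GL_QUADS);
glColor4f(0f, 0f, 0f, 1.0f); glVertex2f(tx,     ty);       // alpha 1.0: overlay fully transparent here
glColor4f(0f, 0f, 0f, 0.0f); glVertex2f(tx + w, ty);       // alpha 0.0: fully dark corner
glColor4f(0f, 0f, 0f, 0.0f); glVertex2f(tx + w, ty + h);
glColor4f(0f, 0f, 0f, 1.0f); glVertex2f(tx,     ty + h);
glEnd();
glEnable(GL_TEXTURE_2D);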
I want everyone to see the same things on their screen regardless of screen size and aspect ratio, so this is the code I am currently using. (I am also sending the coordinates of the other players over the network.)
int width = 1920, height = 1080;
public OrthographicCamera camera;
Viewport viewport;
//constructor
camera = new OrthographicCamera();
viewport = new ScalingViewport(Scaling.stretch, width, height, camera);
viewport.apply();
camera.position.set(camera.viewportWidth / 2, camera.viewportHeight / 2, 0);
camera.update();
public void resize(int width, int height) {
viewport.update(width, height);
camera.position.set(camera.viewportWidth / 2, camera.viewportHeight / 2, 0);
}
Now, for example, I wanted 10 perfect squares going across the middle of the screen, so I made them 192 pixels by 192 pixels. My system works perfectly right now, except that it is rendered internally at 1920x1080 on all devices, big and small. How would I convert my camera to units and get the size needed for 10 perfect squares to go across the screen? Is that even possible?
Here is my code to draw 10 squares across the screen
float size = 192;
for(int i = 0; i<10; i++){
walls.add(new Stuff(i*size,height/2-size/2,size,size,"middle",1,1,0,1));
}
How would I convert all this code to, say, units? Or is this an acceptable approach?
You are already using units, they just aren't very meaningful (and they certainly aren't pixels). If you want to use meaningful units (e.g. SI units), then the only thing you have to change in this code is the values. E.g. if the size of your stuff (a wall?) is, say, 2 meters, then use the value 2 instead of 192. And if you want your user's screen to be, say, 20 meters wide (10 walls) with a 16:9 aspect ratio, then use that for the viewport's worldWidth and worldHeight.
float worldWidth = 20;
float worldHeight = worldWidth * 9f / 16f;
...
viewport = new StretchViewport(worldWidth, worldHeight, camera);
Make sure to understand that these "pixels" you are talking about only exist in your imagination. See also: http://blog.xoppa.com/pixels/.
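Reusing the question's own loop, the same ten squares could then be expressed in world units (the Stuff arguments below simply mirror the original call and are otherwise assumptions):

float size = worldWidth / 10f;   // 2 world units per square for a 20-unit-wide world
for (int i = 0; i < 10; i++) {
    walls.add(new Stuff(i * size, worldHeight / 2f - size / 2f, size, size, "middle", 1, 1, 0, 1));
}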
You created your ScalingViewport with a width of 1920, so the width in world units will be 1920 on all screens, no matter what. Also, your scene will be distorted on any screen that is not 16:9, since you are stretching to fit whatever the screen is. (Because of the distortion, I personally would never use ScalingViewport with Scaling.stretch, aka StretchViewport.)
If you want your squares to look square on all screens with this type of viewport, you'll have to do some math to change their height (but their width should always be 192 if you want exactly ten to fit across the screen).
public void resize(int width, int height){
float viewportAspect = 1920f / 1080f;
float screenAspect = (float)width / (float)height; //Make sure you cast to floats
boxHeight = 192 * screenAspect / viewportAspect;
viewport.update(width, height, true);
}
The camera always shows the scene in world units, so there's no conversion to do.
Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This isn't transferred well through screenshots, but the lines don't flicker when the scale is sustained at that point, so it logically shouldn't be an issue with drawing between scale assignment (and thread locks prevent this).
Scaled to 1x:
Scaled to A, 1x < Ax < Bx :
Scaled to B, Ax < Bx < Cx :
Scaled to C, Bx < Cx < 2x :
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512 and nh is 384 (implicitly converted from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes that didn't fix the problem when cut:
@Override
public void draw(float xOffset, float yOffset, float width, float height,
        int glTex, float texX, float texY, float texWidth, float texHeight) {
    GL11.glLoadIdentity();
    GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
    GL11.glTranslatef(xOffset, yOffset, 0f);
    if(glTex != lastTexture){
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
        lastTexture = glTex;
    }
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(texX, texY + texHeight);
    GL11.glVertex2f(-height/2, -width/2);
    GL11.glTexCoord2f(texX + texWidth, texY + texHeight);
    GL11.glVertex2f(-height/2, width/2);
    GL11.glTexCoord2f(texX + texWidth, texY);
    GL11.glVertex2f(height/2, width/2);
    GL11.glTexCoord2f(texX, texY);
    GL11.glVertex2f(height/2, -width/2);
    GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains the tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
    int xp, yp; //x and y position of individual tiles
    for(int c = 0; c<width; c++){ //c as in column
        xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
        for(int r = 0; r<height; r++){ //r as in row
            if(tiles[r*width+c] < 0) continue; //skip empty tiles ('air')
            yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to row y
            tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
                engine, //drawing context
                new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
                scale //scale of tiles
            );
        }
    }
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, origin, it has an offset of 0. Opengl then is instructed to draw a quad, with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It then is told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws left edge at -halfwidth, which should coordinate-wise be exactly the same as tile1's right edge. By itself, this should work, and it does. When considering a constant scale, it breaks somehow.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, i.e. filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex also doesn't fill the vertex's containing pixel space.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than to simply avoid non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is how the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, has only vertical or only horizontal gaps but not both, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is that you actually use the same values for coordinates that must be the same, instead of using calculations that mathematically produce the same results.
In the case of coordinates on a grid, you could calculate the coordinates for each grid line (tile boundary) once, and then use those values for all draw operations. Say if you have n tiles in the x-direction, you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
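A small sketch of that idea, reusing names from the question's grid code (tilesize.a, scale, offset, width) purely as an illustration:

float scaledTileWidth = tilesize.a * scale.getX();
float[] xEdge = new float[width + 1];
for (int i = 0; i <= width; i++) {
    xEdge[i] = offset.getX() + i * scaledTileWidth;   // each boundary computed exactly once
}
// the tile in column c then spans [xEdge[c], xEdge[c + 1]], so neighbors share bit-identical edge values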
In the Core Java book it says:
The width of the rectangle that the getStringBounds method returns is the horizontal extent of the string. The height of the rectangle is the sum of ascent, descent, and leading. The rectangle has its origin at the baseline of the string. The top y-coordinate of the rectangle is negative. Thus, you can obtain string width, height, and ascent as follows:
double stringWidth = bounds.getWidth();
double stringHeight = bounds.getHeight();
double ascent = -bounds.getY();
What does the author mean when saying that the rectangle has its origin at the baseline of the string, while the top y-coordinate is the negative of the ascent?
Where does the bounding rectangle of the string start?
With a test string I got the following:
w: 291.0
h: 91.265625
x:0.0
y:-72.38671875
descent: 15.8203125
leading: 3.0585938
Does that mean the rectangle origin is at the leading, not the baseline? Am I correct on this?
It means that the bounds' coordinates are in a space where zero Y coordinate is at string's baseline and positive Y coordinates go downwards. In the following image the black dot corresponds to zero Y:
Therefore negative bounds.getY() (the ascent) corresponds to the topmost coordinate, and positive bounds.getHeight() + bounds.getY() (descent + leading) corresponds to the bottommost coordinate in this coordinate space.
The math works out:
72.38671875 ascent + 15.8203125 descent + 3.0585938 leading = 91.265625 total height
This tutorial on 2D Text has an image illustrating leading, descent, and ascent.
In your specific case, 72.38671875 is the height of the ascent. That's measured from the baseline to the top of the tallest glyph. The leading is the space between the bottom of the descender to the top of the next line.
The bounding rectangle is relative to the baseline. The API for FontMetrics.getStringBounds states "The returned bounds is in baseline-relative coordinates", which explains your results. x will always be 0, and the height of the bounding box will be the ascent plus the descent plus the leading.
The Java graphics coordinate system has its origin in the top left of the canvas, with the Y coordinate increasing from top to bottom. This means that a rectangle's top edge (the return value of getY()) will have a smaller Y coordinate than the baseline of the text string (y = 0).
The result value of getStringBounds() is only somewhat consistent with this. While the coordinate system is respected, the origin of the bounding rectangle is relative to the baseline, not at the top left. This means that the top left of the rectangle will have a negative Y coordinate.
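A small illustration of those baseline-relative bounds (g2, font, text, x and y are assumed to exist; x, y is the baseline point you pass to drawString):

FontRenderContext frc = g2.getFontRenderContext();
Rectangle2D bounds = font.getStringBounds(text, frc);
g2.setFont(font);
g2.drawString(text, x, y);
// outline the baseline-relative bounds around the drawn string
g2.draw(new Rectangle2D.Double(
        x + bounds.getX(),      // usually 0
        y + bounds.getY(),      // getY() is negative, so this moves up by the ascent
        bounds.getWidth(),
        bounds.getHeight()));   // ascent + descent + leading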