Java - Resizing font to fit area

Many times over the years, I have been faced with the problem of resizing text to fit in a certain area of a Java GUI. My solution was usually to work around the issue by:
Redesigning the interface to avoid the problem
Changing the area to fit the size of the text
Doing a binary search for the correct font size to fit the string (when I could not do either of the first two)
While working on a recent project that needed to determine the correct font size for a given region quickly, my binary search method was too slow (I suspect because of the dynamic memory allocations involved in creating and measuring a font many times in sequence) and introduced noticeable lag into my application. What I needed was a faster, easier way to calculate a font size that would allow a given string to be rendered to fit within a defined region of the GUI.

Finally it occurred to me that there was a much easier and faster way that requires only a few allocations at run time. This new method eliminates the need for any kind of search and requires only one measurement to be taken. It does, however, need to make one assumption, one that is perfectly reasonable for most applications:
The width and height of the rendered text must be proportional to the point size of the font. This holds under all but the most obscure transformations applied to a rendering context.
Using this assumption, we can compute the ratio of the font's dimensions to the point size, and linearly extrapolate to find the font size that we need for the given region. Some code that I wrote to do this is below.
Edit:
The accuracy of the initial measurement is limited by the size of the base font. Using a very small base font size can throw off the results, but the larger the base font, the more accurate the linear approximation is.
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics;
import java.awt.Shape;
import java.awt.font.GlyphVector;
import java.awt.geom.Rectangle2D;

public class FontUtilities
{
    public static Font createFontToFit(String value, double width, double height,
                                       Font base, Graphics context)
    {
        if ((value == null) || (base == null) || (context == null)
                || (width != width) || (height != height)) // x != x is true only for NaN
        {
            return null;
        }
        // Measure the size of the string at the base font size.
        double baseFontSize = base.getSize2D();
        FontMetrics ruler = context.getFontMetrics(base);
        GlyphVector vector = base.createGlyphVector(ruler.getFontRenderContext(), value);
        // Use the bounds measurement on the outline of the text, since this is the only
        // measurement method that seems to be bug-free and consistent in Java.
        // getBounds2D keeps sub-pixel precision, which matters for small base fonts.
        Shape outline = vector.getOutline(0, 0);
        Rectangle2D bounds = outline.getBounds2D();
        double measuredWidth = bounds.getWidth();
        double measuredHeight = bounds.getHeight();
        // Assume that both the width and the height of the string are proportional
        // to the font size: compute each ratio and extrapolate linearly to the needed
        // font size. This gives two candidate sizes, one matching the width and one
        // matching the height; return the smaller of the two.
        double widthBasedFontSize = (baseFontSize * width) / measuredWidth;
        double heightBasedFontSize = (baseFontSize * height) / measuredHeight;
        if (widthBasedFontSize < heightBasedFontSize)
        {
            return base.deriveFont(base.getStyle(), (float) widthBasedFontSize);
        }
        else
        {
            return base.deriveFont(base.getStyle(), (float) heightBasedFontSize);
        }
    }
}
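For illustration, here is a minimal usage sketch. The component class, the string, and the drawing coordinates are hypothetical; the idea is just to derive the fitted font from the component's current bounds each time it paints.

import java.awt.Font;
import java.awt.Graphics;
import javax.swing.JComponent;

// Hypothetical component that fits its text to its current size
// using FontUtilities.createFontToFit above.
class FittedLabel extends JComponent
{
    @Override
    protected void paintComponent(Graphics g)
    {
        super.paintComponent(g);
        Font fitted = FontUtilities.createFontToFit("Hello", getWidth(), getHeight(),
                                                    getFont(), g);
        if (fitted != null)
        {
            g.setFont(fitted);
            // Draw at the baseline of the fitted font.
            g.drawString("Hello", 0, g.getFontMetrics().getAscent());
        }
    }
}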

Related

How to calculate the height of an element?

I am generating PDF files from XML data.
I calculate the height of a paragraph element as:
float paraWidth = 0.0f;
for (Object o : el.getChunks()) {
    paraWidth += ((Chunk) o).getWidthPoint();
}
float paraHeight = paraWidth / PageSize.A4.getWidth();
But this method does not work correctly.
Can you give me an idea?
Your question is strange. According to the header of your question, you want to know the height of a string, but your code shows that you are asking for the width of a String.
Please take a look at the FoobarFilmFestival example.
If bf is a BaseFont instance, then you can use:
float ascent = bf.getAscentPoint("Some String", 12);
float descent = bf.getDescentPoint("Some String", 12);
This will return the height above the baseline and the height below the baseline, when we use a font size of 12. As you probably know, the font size is an indication of the average height. It's not the actual height. It's just a number we work with.
The total height will be:
float height = ascent - descent;
Or maybe you want to know the number of lines taken by a Paragraph and multiply that by the leading. In that case, there are different possibilities. As it's not clear from your question what you want (height of chunks, width of chunks, vertical position of the baseline, ...), you won't get any better answers than the ones already given. Please rephrase your question if the height of the glyphs in a Chunk wasn't what you expected.
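To make the ascent/descent calculation concrete, here is a minimal sketch assuming iText 5 (package com.itextpdf; older versions use com.lowagie) with Helvetica as a placeholder font:

import com.itextpdf.text.pdf.BaseFont;

public class GlyphHeight {
    public static void main(String[] args) throws Exception {
        // Hypothetical example: measure the glyph height of a string at 12pt.
        BaseFont bf = BaseFont.createFont(BaseFont.HELVETICA, BaseFont.WINANSI,
                                          BaseFont.NOT_EMBEDDED);
        float ascent = bf.getAscentPoint("Some String", 12);
        float descent = bf.getDescentPoint("Some String", 12);
        // descent is negative (below the baseline), so subtracting adds its magnitude.
        float height = ascent - descent;
        System.out.println("height in points: " + height);
    }
}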
Firstly, why are you iterating over the Chunk collection cast to Object? If all elements of this collection are Chunks, use this:
for (Chunk c : el.getChunks()) {
    paraWidth += c.getWidthPoint();
}
What do you mean by saying the method does not work correctly?

LibGDX BitmapFont won't stop shaking

I have a BitmapFont that is displaying a player's score as he moves across the screen at a constant rate. Because the player is always moving, I have to recalculate at what position I draw the font every frame. I use this code.
scoreFont.setScale(4f, 4f);
scoreFont.draw(batch, "" + scoreToShow, playerGhost.pos.x + 100f, 600f);
playerGhost.render(batch);
The problem? The font won't stop shaking. It's only a couple of pixels worth of vibration, but it's slightly noticeable. It's more noticeable when I run it on my tablet.
Is this a known bug?
How can I get it to stop shaking?
Call scoreFont.setUseIntegerPositions(false); so it won't round the font's position to the nearest integer. You will also probably want to set the font's min filter to Linear or MipMapLinearNearest, and its mag filter to Linear.
The reason for the default behavior is that the default configuration is for text that is pixel perfect, for a viewport set with units equal to the size of a pixel. If your viewport had dimensions exactly the same as the screen's pixel dimensions, this configuration would help keep text from looking slightly blurry.
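A minimal sketch of that configuration (e.g. in create(); the default BitmapFont constructor is a placeholder for however you load your font, and Linear/Linear assumes you have not generated mipmaps):

// Disable integer snapping and smooth the font texture.
BitmapFont scoreFont = new BitmapFont(); // replace with your own font
scoreFont.setUseIntegerPositions(false); // stop rounding draw positions to whole pixels
scoreFont.getRegion().getTexture().setFilter(
        Texture.TextureFilter.Linear,   // min filter
        Texture.TextureFilter.Linear);  // mag filter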
It could actually be the fact that you're scaling your font.
I had this problem and it's quite complex to understand (and also to fix).
Basically, when you scale fonts, BitmapFont changes the values inside the BitmapFontData by dividing/multiplying. If you do a lot of scaling, with a lot of different values (or an unlucky combination of values), it can introduce rounding errors which can cause flickering around the edges of the font.
The solution I implemented in the end was to write a Fontholder which stores all of the original BitmapFontData values. I then reset the font data to those original values at the beginning of every frame (i.e. start of render() method).
Here's the code...
package com.bigcustard.blurp.core;

import com.badlogic.gdx.graphics.g2d.*;

public class FontHolder {

    private BitmapFont font;
    private final float lineHeight;
    private final float spaceWidth;
    private final float xHeight;
    private final float capHeight;
    private final float ascent;
    private final float descent;
    private final float down;
    private final float scaleX;
    private final float scaleY;

    public FontHolder(BitmapFont font) {
        this.font = font;
        BitmapFont.BitmapFontData data = font.getData();
        this.lineHeight = data.lineHeight;
        this.spaceWidth = data.spaceWidth;
        this.xHeight = data.xHeight;
        this.capHeight = data.capHeight;
        this.ascent = data.ascent;
        this.descent = data.descent;
        this.down = data.down;
        this.scaleX = data.scaleX;
        this.scaleY = data.scaleY;
    }

    // Call this at the start of each frame.
    public void reset() {
        BitmapFont.BitmapFontData data = font.getData();
        data.lineHeight = this.lineHeight;
        data.spaceWidth = this.spaceWidth;
        data.xHeight = this.xHeight;
        data.capHeight = this.capHeight;
        data.ascent = this.ascent;
        data.descent = this.descent;
        data.down = this.down;
        data.scaleX = this.scaleX;
        data.scaleY = this.scaleY;
    }

    public BitmapFont getFont() {
        return font;
    }
}
I'm not wild about this, as it's slightly hacky, but it's a necessary evil, and will completely and properly solve the issue.
The correct way to handle this would be to use two different cameras and two different SpriteBatches: one for the game itself and one for the UI.
You call the update() method on both cameras, and use spriteBatch.setProjectionMatrix(camera.combined); on each batch to render them in the same frame.
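A rough sketch of that arrangement (viewport sizes, the camera movement, and the draw calls are placeholders):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

// Hypothetical two-camera setup: the game world scrolls, the UI stays fixed.
public class DualCameraExample extends ApplicationAdapter {
    private OrthographicCamera gameCamera, uiCamera;
    private SpriteBatch gameBatch, uiBatch;

    @Override
    public void create() {
        gameCamera = new OrthographicCamera(800, 480);
        uiCamera = new OrthographicCamera(800, 480);
        gameBatch = new SpriteBatch();
        uiBatch = new SpriteBatch();
    }

    @Override
    public void render() {
        gameCamera.position.x += 1; // e.g. follow the player
        gameCamera.update();
        uiCamera.update();

        gameBatch.setProjectionMatrix(gameCamera.combined);
        gameBatch.begin();
        // ... draw the world here, in world coordinates ...
        gameBatch.end();

        uiBatch.setProjectionMatrix(uiCamera.combined);
        uiBatch.begin();
        // ... draw the score font here, at fixed UI coordinates ...
        uiBatch.end();
    }
}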

2D Dynamic Lighting in Java

I am making a game that has campfire objects. What I want to do is to brighten all pixels in a circle around each campfire. However, looping through every pixel and changing those within the radius is not all that efficient and makes the game run at ~7 fps. Ideas on how to either make this process efficient or simulate light differently?
I haven't written the code for the fires yet, but this is the basic loop that checks each pixel and changes its brightness based on a number:
public static BufferedImage updateLightLevels(BufferedImage img, float light)
{
    BufferedImage brightnessBuffer = new BufferedImage(img.getWidth(), img.getHeight(),
            BufferedImage.TYPE_4BYTE_ABGR);
    brightnessBuffer.getGraphics().drawImage(img, 0, 0, null);
    for (int i = 0; i < brightnessBuffer.getWidth(); i++)
    {
        for (int a = 0; a < brightnessBuffer.getHeight(); a++)
        {
            //get the color at the pixel
            int rgb = brightnessBuffer.getRGB(i, a);
            //check to see if it is transparent
            int alpha = (rgb >> 24) & 0x000000FF;
            if (alpha != 0)
            {
                //make a new color
                Color rgbColor = new Color(rgb);
                //turn it into an hsb color
                float[] hsbCol = Color.RGBtoHSB(rgbColor.getRed(), rgbColor.getGreen(),
                        rgbColor.getBlue(), null);
                //lower it by the given amount;
                //if the pixel is already darker, push it all the way to black
                if (hsbCol[2] <= light)
                    hsbCol[2] -= (hsbCol[2]) - .01f;
                else
                    hsbCol[2] -= light;
                //turn the hsb color back into an rgb color
                int rgbNew = Color.HSBtoRGB(hsbCol[0], hsbCol[1], hsbCol[2]);
                //set the pixel to the new color
                brightnessBuffer.setRGB(i, a, rgbNew);
            }
        }
    }
    return brightnessBuffer;
}
I apologize if my code is not clean; I'm self-taught.
I can give you lots of approaches.
You're currently rendering on the CPU, and you're checking every single pixel. That's hardcore brute force, and brute force isn't what the CPU is best at. It works, but as you've seen, the performance is abysmal.
I'd point you in two directions that would massively improve your performance:
Method 1 - Culling. Does every single pixel really need to have its lighting calculated? If you could instead calculate a general "ambient light", then you could paint most of the pixels in that ambient light, and only calculate proper lighting for pixels closest to lights; each light throws a "spot" effect that fades into the ambient. That way you're only ever performing checks on a few of the pixels of the screen at a time (the circle area around each light). The code you posted looks like it paints every pixel; I'm not seeing where the "circle" falloff is even applied.
Edit:
Instead, sweep through the lights, and just loop through local offsets of the light position.
for (Light l : lights) {
    for (int x = l.getX() - LIGHT_DISTANCE; x < l.getX() + LIGHT_DISTANCE; x++) {
        for (int y = l.getY() - LIGHT_DISTANCE; y < l.getY() + LIGHT_DISTANCE; y++) {
            //calculate light (clamp x and y to the image bounds first)
            int rgb = brightnessBuffer.getRGB(x, y);
            //do stuff
        }
    }
}
You may want to add a check with that method so overlapping lights don't cause a bunch of rechecks, unless you DO want that behavior (ideally those pixels would be twice as bright)
Method 2 - Offloading the calculation to the GPU. There's a reason we have graphics cards: they're specifically built to number-crunch in exactly those situations where you really need brute force. If you can offload this process to the GPU as a shader, then it'll run lickety-split, even if you run it on every pixel several times over. This will require you to learn graphics APIs, however; but if you're working in Java, LibGDX makes it very painless to render using the GPU and pass a couple of shaders to the GPU.
I am uncertain about the way in which you are going about calculating light values, but I do know that using the BufferedImage.getRGB() and BufferedImage.setRGB() methods is very slow.
I would suggest accessing the pixels of the BufferedImage directly from an array (much faster IMO)
to do this:
BufferedImage lightImage = new BufferedImage(width,height,BufferedImage.TYPE_INT_ARGB);
Raster r = lightImage.getRaster();
int[] lightPixels = ((DataBufferInt)r.getDataBuffer()).getData();
Now, changing any pixel in this array will show on your image. Note that the values in this array are color values packed in whatever format you defined your image with.
In this case it is TYPE_INT_ARGB, meaning each int carries the alpha value in its highest byte (0xAARRGGBB), and you have to include it when setting a color.
Since this array is a 1D array, it is more difficult to access pixels using x and y coordinates. The following method is an implementation of accessing pixels from the lightPixels array more easily.
public void setLight(int x, int y, int[] array, int width, int value) {
    array[width * y + x] = value;
}
Note: width is the width of your level, or the width of the 2D array your level might exist as, if it were a 2D array.
You can also get pixels from the lightPixels array with a similar method, just excluding the value and returning the array[width*y+x].
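For completeness, a matching getter might look like this (a sketch with the same hypothetical signature, minus the value parameter):

public int getLight(int x, int y, int[] array, int width) {
    return array[width * y + x];
}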
It is up to you how you use the setLight() and getLight() methods but in the cases that I have encountered, using this method is much faster than using getRGB and setRGB.
Hope this helps

Making font size smaller than 1 Java GUI

I want to label the hash marks in the grid for my graph, but even font size 1 is way too big! Is there a way to make a font size smaller than 1? Am I missing something with how I'm coding it?
Here's the code which generates the grid and attempts to put a label on the hash.
for (double k = myStart1; k <= myEnd1; k = k + (myEnd1 - myStart1) / 8) {
    g2.setColor(Color.BLACK);
    g2.draw(new Line2D.Double(k, (max - min) / 60, k, -(max - min) / 60));
    String labelx = String.valueOf(k);
    Float xCo = Float.parseFloat(Double.toString(k));
    g2.setFont(new Font("SansSerif", Font.PLAIN, 1));
    g2.drawString(labelx, xCo, 0);
}
Here's a screenshot of the graph produced by x^2.
As I'm sure you've already noted, the Font constructor takes an int for the size parameter, effectively making it impossible to construct a font (using this method, at least) with a size between 0 and 1.
I did, however, find the deriveFont method of the Font class particularly interesting:
public Font deriveFont(float size)
Creates a new Font object by replicating the current Font object and applying a new size to it.
Parameters:
size - the size for the new Font.
The deriveFont method, which claims to construct a new Font with the given size, takes a float as the parameter; therefore, it should be possible to do something like this:
Font theFont = new Font("SansSerif", Font.PLAIN, 1);
theFont = theFont.deriveFont(0.5f);
g2.setFont(theFont);
Resulting in a font with a size of 0.5. (Note the f suffix: a plain 0.5 literal is a double and would not compile against the float overload.)
Now, I haven't tested this myself; setting up a graphics program takes time, so you're in a much better position to try it out than me. But I'm throwing it out there as a possibility.

Off-screen graphics resolution in java.awt

I have some java code that needs to programmatically render text onto an image. So I use BufferedImage, and write text on the Graphics object.
However, when configuring the font instance, one would specify the font size in points. When a piece of text is rendered onto an image, AWT will translate the points into pixels, based on the resolution of the Graphics object. I don't want to get myself involved in computing the pixel/point ratio, since it's really the task for the AWT. The image that is being produced is for a high resolution device (higher than any desktop monitors).
But, I don't seem to find a way to specify what the resolution of the Graphics is. It inherits it from the local graphics environment, which is beyond my control. I don't really want this code to be dependent on anything local, and I'm not even sure it's "sane", to use local graphics environment for determining the resolution of off screen rasters, who knows what people would want them for.
So, any way I can specify the resolution for an off screen image of any kind (preferably the one that can create Graphics object so I can use standard AWT rendering API)?
(update)
Here is a (rather long) sample program that renders a piece of text on an image, with a predefined font size in pixels (effectively, the target device DPI is 72). What bugs me is that I have to use the local screen DPI to calculate the font size in points, even though I'm not using the screen in any way, so it's not relevant, and it plainly fails on headless systems altogether. What I would have loved in this case instead is being able to create an off-screen image (graphics, raster) with a DPI of 72, which would make points, by value, equal to pixels.
Sample way to run the code:
$ java FontDisplay Monospace 150 "Cat in a bag" 1.png
This would render the message "Cat in a bag" with a font size of 150 pixels on a 150-pixel-tall image, and save the result in 1.png.
import java.awt.*;
import java.awt.image.*;
import java.awt.font.*;
import javax.imageio.*;
import javax.imageio.stream.*;
import java.io.*;
import java.util.*;

public class FontDisplay {
    public static void main(String a[]) throws Exception {
        // args: <font_name> <pixel_height> <text> <image_file>
        // image file must have a supported extension.
        int height = Integer.parseInt(a[1]);
        String text = a[2];
        BufferedImage bi = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
        int dpi = Toolkit.getDefaultToolkit().getScreenResolution();
        System.out.println("dpi : " + dpi);
        float points = (float) height * 72.0F / (float) dpi;
        System.out.println("points : " + points);
        Map<TextAttribute, Object> m = new HashMap<>();
        m.put(TextAttribute.FAMILY, a[0]);
        m.put(TextAttribute.SIZE, points);
        Font f = Font.getFont(m);
        if (f == null) {
            throw new Exception("Font " + a[0] + " not found on your system");
        }
        Graphics2D g = bi.createGraphics();
        FontMetrics fm = g.getFontMetrics(f);
        int w = fm.charsWidth(text.toCharArray(), 0, text.length());
        bi = new BufferedImage(w, height, BufferedImage.TYPE_INT_ARGB);
        g = bi.createGraphics();
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, w, height);
        g.setColor(Color.WHITE);
        g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
                RenderingHints.VALUE_TEXT_ANTIALIAS_LCD_HRGB);
        g.setFont(f);
        g.drawString(text, 0, fm.getMaxAscent());
        String fName = a[3];
        String ext = fName.substring(fName.lastIndexOf('.') + 1).toLowerCase();
        File file = new File(fName);
        ImageWriter iw = ImageIO.getImageWritersBySuffix(ext).next();
        ImageOutputStream ios = ImageIO.createImageOutputStream(file);
        iw.setOutput(ios);
        iw.write(bi);
        ios.flush();
        ios.close();
    }
}
Comparing points to pixels is like comparing kilograms to newtons: just as the acceleration determines that conversion, the device resolution determines this one. AWT lets you elect a device (screen, printer), but in your case you definitely have to determine your own ratio.
You may of course use Photoshop or Gimp and create a normative image for java.
After the elaborated question:
Ah, I think I see the misunderstanding. An image only concerns pixels, never points, mm, DPI, or whatever. (Sometimes such units appear only as meta-info added separately to the image.)
So if you know the DPI of your device and the inches you want to use, then the dots/pixels follow directly. points/dpi may shed more light.
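In code, the conversion the sample program already performs reduces to a one-line helper (a sketch; the class and method names are made up):

public class DpiMath {
    // points = pixels * 72 / dpi, so at 72 dpi one point equals one pixel.
    static float pointsForPixelHeight(int pixels, int dpi) {
        return pixels * 72.0f / dpi;
    }

    public static void main(String[] args) {
        System.out.println(pointsForPixelHeight(150, 72));  // 150.0 : points == pixels
        System.out.println(pointsForPixelHeight(150, 300)); // 36.0 on a 300 dpi device
    }
}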
