Image Processing Edge Detection in Java

Here is my situation: I need my Java program to align a scanned image so that it compensates for imperfect scanning.
Here are more details:
There is a table-like form printed on a sheet of paper, which will be scanned into an image file.
I will open the picture with Java, and I will have an OVERLAY of text boxes.
The text boxes are supposed to align correctly with the scanned image.
To align correctly, my Java program must analyze the scanned image, detect the coordinates of the edges of the table in it, and then position the image and the text boxes so that both align properly (in case of incorrect scanning).
You see, the person scanning the sheet might not place it in a perfectly correct position, so I need my program to automatically align the scanned image as it loads it. This program will be reused on many such scanned images, so I need it to be flexible in this way.
My question is one of the following:
How can I use Java to detect the y-coordinate of the upper edge of the table and the x-coordinate of its leftmost edge? The table is a regular table with many cells and a thin black border, printed on a white sheet of paper (horizontal printout).
If an easier method exists to automatically align the scanned images such that every scanned table ends up at the same x, y coordinates, then please share that method :).
If you don't know the answer to the above two questions, do tell me where I should start. I don't know much about graphics programming in Java and I have about one month to finish this program. Just assume that I am on a tight schedule and have to keep the graphics part as simple as possible.
Cheers and thank you.

Try to start from a simple scenario and then improve the approach.
1. Detect corners.
2. Find the corners at the boundaries of the form.
3. Using the form corner coordinates, calculate the rotation angle.
4. Rotate/scale the image.
5. Map the position of each field in the form relative to the form origin coordinates.
6. Match the textboxes.
The program presented at the end of this post performs steps 1 to 3. It was implemented using the Marvin Framework. The image below shows the output image with the detected corners.
The program also outputs: Rotation angle:1.6365770416167182
Source code:
import java.awt.Color;
import java.awt.Point;

import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;

public class FormCorners {

    public FormCorners() {
        // Load plug-in
        MarvinImagePlugin moravec = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.corner.moravec");
        MarvinAttributes attr = new MarvinAttributes();

        // Load image
        MarvinImage image = MarvinImageIO.loadImage("./res/printedForm.jpg");

        // Process and save output image
        moravec.setAttribute("threshold", 2000);
        moravec.process(image, null, attr);
        Point[] boundaries = boundaries(attr);
        image = showCorners(image, boundaries, 12);
        MarvinImageIO.saveImage(image, "./res/printedForm_output.jpg");

        // Print rotation angle
        double angle = (Math.atan2((boundaries[1].y * -1) - (boundaries[0].y * -1), boundaries[1].x - boundaries[0].x) * 180 / Math.PI);
        angle = angle >= 0 ? angle : angle + 360;
        System.out.println("Rotation angle:" + angle);
    }

    private Point[] boundaries(MarvinAttributes attr) {
        Point upLeft = new Point(-1, -1);
        Point upRight = new Point(-1, -1);
        Point bottomLeft = new Point(-1, -1);
        Point bottomRight = new Point(-1, -1);
        double ulDistance = 9999, blDistance = 9999, urDistance = 9999, brDistance = 9999;
        double tempDistance = -1;
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");

        // For each corner of the image, keep the detected corner point closest to it.
        for (int x = 0; x < cornernessMap.length; x++) {
            for (int y = 0; y < cornernessMap[0].length; y++) {
                if (cornernessMap[x][y] > 0) {
                    if ((tempDistance = Point.distance(x, y, 0, 0)) < ulDistance) {
                        upLeft.x = x; upLeft.y = y;
                        ulDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, 0)) < urDistance) {
                        upRight.x = x; upRight.y = y;
                        urDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, 0, cornernessMap[0].length)) < blDistance) {
                        bottomLeft.x = x; bottomLeft.y = y;
                        blDistance = tempDistance;
                    }
                    if ((tempDistance = Point.distance(x, y, cornernessMap.length, cornernessMap[0].length)) < brDistance) {
                        bottomRight.x = x; bottomRight.y = y;
                        brDistance = tempDistance;
                    }
                }
            }
        }
        return new Point[]{upLeft, upRight, bottomRight, bottomLeft};
    }

    private MarvinImage showCorners(MarvinImage image, Point[] points, int rectSize) {
        MarvinImage ret = image.clone();
        for (Point p : points) {
            ret.fillRect(p.x - (rectSize / 2), p.y - (rectSize / 2), rectSize, rectSize, Color.red);
        }
        return ret;
    }

    public static void main(String[] args) {
        new FormCorners();
    }
}
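The program above stops at step 3. As a hedged sketch of step 4 (my addition, not part of the original answer), the detected angle can be fed into standard Java 2D to deskew the scan, assuming it has been loaded as a BufferedImage:

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class Deskew {
    // Rotate the scan by -angle degrees around its center to undo the detected skew.
    // Interpolation choice and canvas clipping are left out for brevity.
    public static BufferedImage deskew(BufferedImage src, double angleDegrees) {
        double radians = Math.toRadians(-angleDegrees);
        AffineTransform tx = AffineTransform.getRotateInstance(
                radians, src.getWidth() / 2.0, src.getHeight() / 2.0);
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        return op.filter(src, null);
    }
}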

Edge detection is typically done by enhancing the contrast between neighboring pixels, so that you get an easily detectable line, which is suitable for further processing.
To do this, a "kernel" transforms each pixel according to the pixel's initial value and the values of that pixel's neighbors. A good edge detection kernel enhances the differences between neighboring pixels, and reduces the strength of a pixel whose neighbors are all similar.
I would start by looking at the Sobel operator. This might not return results that are immediately useful to you; however, it will get you far closer than you would be if you were to approach the problem with little knowledge of the field.
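As a rough illustration (my sketch, not part of the original answer), a minimal, unoptimized Sobel pass over a BufferedImage could look like this; it assumes a grayscale scan, so any single channel can serve as the intensity:

import java.awt.image.BufferedImage;

public class Sobel {
    // Horizontal and vertical Sobel kernels.
    private static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    private static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    // Returns a new image whose pixels hold the gradient magnitude, clamped to 0..255.
    public static BufferedImage apply(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        // "& 0xff" reads the blue channel; in a grayscale image all channels are equal.
                        int v = src.getRGB(x + kx, y + ky) & 0xff;
                        gx += GX[ky + 1][kx + 1] * v;
                        gy += GY[ky + 1][kx + 1] * v;
                    }
                }
                int mag = Math.min(255, (int) Math.sqrt(gx * gx + gy * gy));
                out.setRGB(x, y, (mag << 16) | (mag << 8) | mag);
            }
        }
        return out;
    }
}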
After you have some crisp, clean edges, you can use larger kernels to detect points where two lines appear to meet at a 90° bend; that might give you the pixel coordinates of the outer rectangle, which might be enough for your purposes.
With those outer coordinates, it still takes a bit of math to composite the new pixels from the average values of the old pixels, rotated and moved to "match". The results (especially if you do not know about anti-aliasing math) can be pretty bad, adding blur to the image.
Sharpening filters might be a solution, but they come with their own issues: mainly, they make the picture sharper by adding graininess. Use too much and it becomes obvious that the original image is not a high-quality scan.

I researched the libraries but in the end I found it more convenient to code up my own edge detection methods.
The class below detects the black or grayed-out edges of a scanned sheet of paper and returns the x and y coordinates of those edges, scanning inward either from the rightmost or bottom end (reverse = true) or from the left or top edge (reverse = false). The program also takes a range for the vertical cuts (rangex), measured in pixels, and one for the horizontal cuts (rangey), also in pixels. The ranges determine which of the sampled points count as outliers.
The program does 4 vertical cuts and 4 horizontal cuts at the positions given in the two arrays, and records where the first dark dot appears along each cut. It uses the ranges to eliminate outliers: sometimes a little spot on the paper causes an outlier point. The smaller the range, the fewer the outliers; however, the edge is sometimes slightly tilted, so you don't want to make the range too small either.
Have fun. It works perfectly for me.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.util.ArrayList;

public class EdgeDetection {

    public int[] horizontalCuts = {120, 220, 320, 420};
    public int[] verticalCuts = {300, 350, 375, 400};

    public void printEdgesTest(BufferedImage image, boolean reversex, boolean reversey, int rangex, int rangey) {
        int[] mx = horizontalCuts;
        int[] my = verticalCuts;

        // You are getting edge points here.
        // The "true" parameter indicates that it performs a cut starting at 0 (left edge).
        int[] xEdges = getEdges(image, mx, reversex, true);
        int edgex = getEdge(xEdges, rangex);
        for (int x = 0; x < xEdges.length; x++) {
            System.out.println("EDGE = " + xEdges[x]);
        }
        System.out.println("THE EDGE = " + edgex);

        // The "false" parameter indicates you are doing your cut starting at the end
        // (image.getHeight()) and ending at 0. If the parameter was true, it would
        // start the cuts at y = 0.
        int[] yEdges = getEdges(image, my, reversey, false);
        int edgey = getEdge(yEdges, rangey);
        for (int y = 0; y < yEdges.length; y++) {
            System.out.println("EDGE = " + yEdges[y]);
        }
        System.out.println("THE EDGE = " + edgey);
    }

    // This function takes an array of coordinates, detects outliers,
    // and computes the average of the non-outlier points.
    public int getEdge(int[] edges, int range) {
        ArrayList<Integer> result = new ArrayList<Integer>();
        boolean[] passes = new boolean[edges.length];
        int[][] differences = new int[edges.length][edges.length - 1];

        // This code segment saves the pairwise differences between the points into an array.
        for (int n = 0; n < edges.length; n++) {
            for (int m = 0; m < edges.length; m++) {
                if (m < n) {
                    differences[n][m] = edges[n] - edges[m];
                } else if (m > n) {
                    differences[n][m - 1] = edges[n] - edges[m];
                }
            }
        }

        // This array determines which points are outliers: a point passes
        // if it falls within range of at least one other point.
        for (int n = 0; n < edges.length; n++) {
            passes[n] = false;
            for (int m = 0; m < edges.length - 1; m++) {
                if (Math.abs(differences[n][m]) < range) {
                    passes[n] = true;
                    System.out.println("EDGECHECK = TRUE" + n);
                    break;
                }
            }
        }

        // Create a new list containing only the valid points.
        for (int i = 0; i < edges.length; i++) {
            if (passes[i]) {
                result.add(edges[i]);
            }
        }

        // Calculate the rounded mean. This will be the x/y coordinate of the edge;
        // whether it is an x or a y value depends on the "reverse" variable used
        // to calculate the edges array.
        int divisor = result.size();
        int addend = 0;
        double mean = 0;
        for (Integer i : result) {
            addend += i;
        }
        mean = (double) addend / (double) divisor;

        // Return the mean of the valid points: this is the x or y coordinate
        // of your calculated edge.
        System.out.println("MEAN " + mean);
        if (mean - (int) mean >= .5) {
            return (int) mean + 1;
        } else {
            return (int) mean;
        }
    }

    // This function finds "dark" points, which include light gray, to detect edges.
    // reverse - when true, starts at x = image.getWidth() or y = image.getHeight() and counts down to 0;
    //           when false, starts at 0 and counts up
    // verticalEdge - determines whether you want to detect a vertical edge or a horizontal edge
    // arr[] - the coordinates of the vertical or horizontal cuts you will do;
    //         set this array according to the graphical layout of your scanned image
    // image - the image you want to detect the black/white edges of
    public int[] getEdges(BufferedImage image, int[] arr, boolean reverse, boolean verticalEdge) {
        int red = 255;
        int green = 255;
        int blue = 255;
        int[] result = new int[arr.length];
        for (int n = 0; n < arr.length; n++) {
            for (int m = reverse ? (verticalEdge ? image.getWidth() : image.getHeight()) - 1 : 0;
                 reverse ? m >= 0 : m < (verticalEdge ? image.getWidth() : image.getHeight()); ) {
                Color c = new Color(image.getRGB(verticalEdge ? m : arr[n], verticalEdge ? arr[n] : m));
                red = c.getRed();
                green = c.getGreen();
                blue = c.getBlue();
                // Determine whether the point is considered "dark" or not.
                // Narrow the range if you want to include only really dark spots;
                // occasionally, though, the edge is blurred out, and light gray helps.
                if (red < 239 && green < 239 && blue < 239) {
                    result[n] = m;
                    break;
                }
                // Count forwards or backwards depending on the reverse variable.
                if (reverse) {
                    m--;
                } else {
                    m++;
                }
            }
        }
        return result;
    }
}
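A minimal usage sketch (my addition; the file name and parameter values are hypothetical):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class EdgeDetectionDemo {
    public static void main(String[] args) throws Exception {
        // "scan.jpg" is a placeholder; point this at your own scanned sheet.
        BufferedImage scan = ImageIO.read(new File("scan.jpg"));
        EdgeDetection detector = new EdgeDetection();
        // Scan left-to-right for the x edge and bottom-to-top for the y edge,
        // allowing 6 px of spread before a sampled point counts as an outlier.
        detector.printEdgesTest(scan, false, true, 6, 6);
    }
}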

A similar problem I solved in the past basically figured out the orientation of the form, re-aligned it, and re-scaled it, and I was all set. You can use the Hough transform to detect the angular offset of the image (i.e., how much it is rotated), but you still need to detect the boundaries of the form. Mine also had to accommodate the boundaries of the piece of paper itself.
This was a lucky break for me, because it basically showed a black and white image in the middle of a big black border.
1) Apply an aggressive 5x5 median filter to remove some noise.
2) Convert from grayscale to black and white (rescale intensity values from [0,255] to [0,1]).
3) Calculate the Principal Component Analysis (i.e., calculate the Eigenvectors of the covariance matrix for your image from the calculated Eigenvalues) (http://en.wikipedia.org/wiki/Principal_component_analysis#Derivation_of_PCA_using_the_covariance_method).
4) This gives you a basis vector. You simply use it to re-orient your image to the standard basis matrix (i.e., [1,0],[0,1]).
Your image is now aligned beautifully. I did this for normalizing the orientation of MRI scans of entire human brains.
You also know that you have a massive black border around the actual image. You simply keep deleting rows from the top and bottom, and from both sides of the image, until they are all gone. You can temporarily apply a 7x7 median or mode filter to a copy of the image at this point; it helps rule out too much border remaining in the final image from thumbprints, dirt, etc.
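As a hedged sketch of step 3 (my own illustration, not the answerer's code): treating each dark pixel as a 2D sample, the covariance matrix of the pixel coordinates yields the principal axis of the form, and the angle of that axis falls out of the standard closed form for 2x2 symmetric matrices:

import java.awt.image.BufferedImage;

public class PcaOrientation {
    // Estimate the orientation (in degrees) of the dark content of a
    // black-and-white image via PCA over the dark pixels' coordinates.
    public static double orientationDegrees(BufferedImage bw) {
        int w = bw.getWidth(), h = bw.getHeight();
        double sx = 0, sy = 0;
        long n = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if ((bw.getRGB(x, y) & 0xff) < 128) { // dark pixel
                    sx += x; sy += y; n++;
                }
        double mx = sx / n, my = sy / n;
        double cxx = 0, cyy = 0, cxy = 0; // covariance accumulators
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if ((bw.getRGB(x, y) & 0xff) < 128) {
                    cxx += (x - mx) * (x - mx);
                    cyy += (y - my) * (y - my);
                    cxy += (x - mx) * (y - my);
                }
        // Angle of the principal eigenvector of the 2x2 covariance matrix.
        return Math.toDegrees(0.5 * Math.atan2(2 * cxy, cxx - cyy));
    }
}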

Related

How to scale a pixel-drawing function in Java

In Java, I have a function like this:
public void setPixel(int x, int y, boolean on);
It sets a virtual black and white pixel, given whether it is on or not.
How can I call this function so the resulting display will be four times larger?
I tried this:
int x = 3;
int y = 3;
setPixel(x, y, true);
setPixel(x+1, y+1, true);
setPixel(x+2, y+2, true);
setPixel(x+3, y+3, true);
But naturally, it overlapped when I tried to draw something. How should I call the method?
While I'm tagging this Java, the concept could apply to any language.
Answering on these assumptions: setPixel sets a single pixel to white or black (if on is true, to black, else to white), you want to use this function to draw a B&W image four times larger, and the code you provided is wrong since it just draws a diagonal instead of a 4x4 block. Is this correct? If so:
One way to draw a four-times-larger image would be to have a getPixel(x, y) that tells you whether the pixel at (x, y) is on in the original image, and then paint somewhere else in 4x4 blocks. Whenever you move by one pixel in either the X or Y direction while reading your original image, you move by 4 in your scaled image. So what you intended to do was maybe something like this?
void setBlock(int x, int y, boolean on, int scale) {
    for (int i = 0; i < scale; i++) {
        for (int j = 0; j < scale; j++) {
            setPixel(scale * x + i, scale * y + j, on);
        }
    }
}
And then iterate over your original image's coordinates doing something like this?
setBlock(x, y, getPixel(x, y), 4);
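Spelled out fully (my sketch; srcWidth and srcHeight stand for the original image's dimensions):

// Copy the whole original image, scaled up 4x.
for (int y = 0; y < srcHeight; y++) {
    for (int x = 0; x < srcWidth; x++) {
        setBlock(x, y, getPixel(x, y), 4);
    }
}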

Java equal to, but with a buffer

I have made a simple game, and I have a simple way to detect when I have collected a coin, but it is very hard to match its position exactly.
import java.awt.image.BufferedImage;

public class Token {
    private String name;
    int x;
    int y;
    private BufferedImage image;

    public Token(String nameIn, int xIn, int yIn, BufferedImage imageIn) {
        name = nameIn;
        x = xIn;
        y = yIn;
        image = imageIn;
    }

    public boolean collected(Hero frylark) {
        if (frylark.getX() == x && frylark.getY() == y) {
            return true;
        } else {
            return false;
        }
    }
}
Is there any way I can have a buffer of, say, 10 pixels instead of matching the position of the coin exactly?
The distance between two points in a two-dimensional field is the square root of the sum of the squares of the differences between their corresponding coordinates:
public boolean collected(Hero frylark) {
    return Math.sqrt(Math.pow(frylark.getX() - x, 2) +
                     Math.pow(frylark.getY() - y, 2)) <= 10.0;
}
Based on Mureinik's answer, you can do this faster by not using Math.pow or Math.sqrt:
double dx = frylark.getX() - x;
double dy = frylark.getY() - y;
return dx * dx + dy * dy <= 10.0 * 10.0;
I have made a simple game, and I have a simple way to detect when I have collected a coin, but it is very hard to match its position exactly.
I will propose a slightly different approach. If you attempt to detect collision using only the x and y coordinates, it is very hard to detect, since you need both pixels to hit at exactly the same spot.
This problem arises especially when you check collision between images of different sizes:
Example:
With your current implementation, for the game character to hit the coin, the red pixel (top left-hand corner) has to collide, and you end up needing to add a buffer for images of different sizes to check for collision.
I advise returning a bounding box for each object and checking whether their bounding boxes intersect:
public boolean collected(Hero h) {
    Rectangle heroBox = new Rectangle(h.getX(), h.getY(), h.getWidth(), h.getHeight());
    Rectangle coinBox = new Rectangle(x, y, width, height);
    return coinBox.intersects(heroBox);
}
You will need the width and height (which is usually the width and height of your images) of your objects for creating the bounding box.
Advantage:
You no longer have to check the size of each image and set the buffer for them individually.
Is there any way I can have a buffer of, say, 10 pixels instead of matching the position of the coin exactly?
Adding a buffer:
If you still want a buffer of, say, 10 pixels, we can still apply it in this implementation:
public boolean collected(Hero h, int buffer) {
    Rectangle heroBox = new Rectangle(h.getX(), h.getY(), h.getWidth() + buffer, h.getHeight() + buffer);
    Rectangle coinBox = new Rectangle(x, y, width + buffer, height + buffer);
    return coinBox.intersects(heroBox);
}
By adding the given buffer, we enlarge the bounding boxes, hence making the check more sensitive. You can always tweak my example to add the buffer to one of the objects, to both, or only to the width or the height of either.
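Incidentally (my note, not the answerer's), java.awt.Rectangle already has a grow method that pads a box outward on every side, which reads a little more directly:

public boolean collected(Hero h, int buffer) {
    Rectangle heroBox = new Rectangle(h.getX(), h.getY(), h.getWidth(), h.getHeight());
    Rectangle coinBox = new Rectangle(x, y, width, height);
    coinBox.grow(buffer, buffer); // pad the coin's box by `buffer` px on each side
    return coinBox.intersects(heroBox);
}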

What does image gradient mean in the following paper?

I am working in digital image processing using Java. Recently I have been implementing a paper in Java, and one portion of this paper is this:
What I have understood from it is that G(i,j) is the intensity of the image at location (i,j) after applying the Sobel operator to it. Does it mean that, or something else?
I have used the following code to compute wG:
public void weightedGCalc() {
    BufferedImage sobelIm = this.getSobelImage();
    int width = sobelIm.getWidth();
    int height = sobelIm.getHeight();
    weightedG = new double[width][height];
    // Note: "row" iterates over x (width) and "col" over y (height) here.
    for (int row = 0; row < width; row++) {
        for (int col = 0; col < height; col++) {
            int imgPix = new Color(sobelIm.getRGB(row, col)).getRed();
            float val = -(float) (Math.pow(imgPix, 2) / (2 * Math.pow(SIGMA_G[5], 2)));
            weightedG[row][col] = (float) Math.exp(val);
        }
    }
}
Here this.getSobelImage() gives me the Sobel image of the given image. I am working with gray-level images, hence I consider only one plane (red). SIGMA_G[5] contains the value of sigma_G suggested by the author.
Your implementation is correct. By gradient, I think the author actually means the gradient magnitude of the image. Convolution with the Sobel operators is one way of calculating the gradient of an image: the two kernels give you Gx and Gy, and the image of vectors (Gx, Gy) is the gradient of the image. Its magnitude, G = sqrt(Gx^2 + Gy^2), is what you get from this.getSobelImage(), which is what you want.
Here gradient means the result obtained by the Sobel operator, i.e. the final outcome of the Sobel operator's processing of the given image. So after applying the Sobel function to a floating-point image, the resulting image is the gradient image.

Efficient 2D Tile based lighting system

What is the most efficient way to do lighting for a tile based engine in Java?
Would it be putting a black background behind the tiles and changing the tiles' alpha?
Or putting a black foreground and changing alpha of that? Or anything else?
This is an example of the kind of lighting I want:
There are many ways to achieve this, so take some time before making your final decision. I will briefly sum up some techniques you could choose to use and provide some code at the end.
Hard Lighting
If you want to create a hard-edge lighting effect (like your example image),
some approaches come to my mind:
Quick and dirty (as you suggested)
Use a black background
Set the tiles' alpha values according to their darkness value
A problem is that you can neither make a tile brighter than it was before (no highlights) nor change the color of the light. Both of these are aspects that usually make lighting in games look good.
A second set of tiles
Use a second set of (black/colored) tiles
Lay these over the main tiles
Set the new tiles' alpha value depending on how strong the new color should be there.
This approach has the same effect as the first one, with the advantage that you can now color the overlay tile something other than black, which allows for both colored lights and highlights. A minimal sketch follows after the example.
Example:
Even though it is easy, a problem is that this is a very inefficient way to do it (two rendered tiles per tile, constant recoloring, many render operations, etc.).
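Here is that sketch of the overlay idea (my illustration; g is the Graphics2D you already render tiles with, and tileSize and brightness are assumptions):

import java.awt.Color;
import java.awt.Graphics2D;

// Darken one tile by drawing a translucent black square over it.
// brightness is 0..1; 1 means fully lit (no overlay), 0 means fully dark.
void darkenTile(Graphics2D g, int tileX, int tileY, int tileSize, float brightness) {
    int alpha = (int) ((1f - brightness) * 255);
    g.setColor(new Color(0, 0, 0, alpha));
    g.fillRect(tileX * tileSize, tileY * tileSize, tileSize, tileSize);
}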
More Efficient Approaches (Hard and/or Soft Lighting)
When looking at your example, I imagine the light always comes from a specific source tile (character, torch, etc.)
For every type of light (big torch, small torch, character lighting) you
create an image that represents the specific lighting behaviour relative to the source tile (light mask). Maybe something like this for a torch (white being alpha):
For every tile which is a light source, you render this image at the position of the source as an overlay.
To add a bit of light color, you can use e.g. 10% opaque orange instead of full alpha.
Results
Adding soft light
Soft light is no big deal now: just use more detail in the light mask than in the tiles. By using only 15% alpha in the usually black region, you can add a low-sight effect for tiles that are not lit:
You may even easily achieve more complex lighting forms (cones etc.) just by changing the mask image.
Multiple light sources
When combining multiple light sources, this approach leads to a problem:
Drawing two masks, which intersect each other, might cancel themselves out:
What we want to have is that they add their lights instead of subtracting them.
Avoiding the problem:
Invert all light masks (with alpha being dark areas, opaque being light ones)
Render all these light masks into a temporary image which has the same dimensions as the viewport
Invert and render the new image (as if it was the only light mask) over the whole scenery.
This would result in something similar to this:
Code for the mask invert method
Assuming you render all the tiles into a BufferedImage first, I'll provide some guidance code that resembles the last shown method (grayscale support only).
Multiple light masks for e.g. a torch and a player can be combined like this:
public BufferedImage combineMasks(BufferedImage[] images)
{
    // Create the new image; the canvas size is the max of all image sizes.
    int w = 0, h = 0; // must be initialized, otherwise this won't compile
    for (BufferedImage img : images)
    {
        w = img.getWidth() > w ? img.getWidth() : w;
        h = img.getHeight() > h ? img.getHeight() : h;
    }
    BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);

    // Paint all images, preserving the alpha channels.
    Graphics g = combined.getGraphics();
    for (BufferedImage img : images)
        g.drawImage(img, 0, 0, null);

    return combined;
}
The final mask is created and applied with this method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask)
{
    int width = image.getWidth();
    int height = image.getHeight();
    int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);
    for (int i = 0; i < imagePixels.length; i++)
    {
        int color = imagePixels[i] & 0x00ffffff; // mask out the preexisting alpha
        // Take the gray value from the mask (the blue channel is enough for grayscale).
        // Be careful: an alpha mask works the other way round, so we subtract it from 255.
        int alpha = 255 - (maskPixels[i] & 0xff);
        imagePixels[i] = color | (alpha << 24); // shift the alpha into the top byte
    }
    image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
As noted, this is a primitive example. Implementing color blending might be a bit more work.
Raytracing might be the simplest approach.
You can store which tiles have been seen (used for automapping, for "remember your map while being blinded", maybe for the minimap, etc.).
You show only what you see: maybe a monster, a wall, or a hill is blocking your view; raytracing stops at that point.
Distant "glowing objects" or other light sources (torches, lava) can be seen, even if your own light source doesn't reach very far.
The length of your ray can be used to compute the amount of light (fading light).
Maybe you have a special sensor (ESP, gold/food detection) which should find objects that are not in your view? Raytracing might help there as well ^^
How is this done easily?
Draw a line from your player to every point on the border of your map, using Bresenham's algorithm (http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm).
Walk along that line (from your character outward) until your view is blocked; at that point stop your search (or maybe do one last final iteration to see what stopped you).
For each point on your line, set the lighting (maybe 100% for distance 1, 70% for distance 2, and so on) and mark the map tile as visited.
Maybe you won't walk along the whole map; maybe it's enough to limit your raytrace to a 20x20 view?
NOTE: you really only have to walk toward the borders of the viewport; it's NOT required to trace to every point.
I'm adding the line algorithm to simplify your work:
public static ArrayList<Point> getLine(Point start, Point target) {
    ArrayList<Point> ret = new ArrayList<Point>();
    int x0 = start.x;
    int y0 = start.y;
    int x1 = target.x;
    int y1 = target.y;

    int dx = Math.abs(x1 - x0);
    int sx = x0 < x1 ? 1 : -1;
    int dy = -1 * Math.abs(y1 - y0);
    int sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, e2; /* error value e_xy */

    for (;;) { /* loop */
        ret.add(new Point(x0, y0));
        if (x0 == x1 && y0 == y1) break;
        e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } /* e_xy + e_x > 0 */
        if (e2 <= dx) { err += dx; y0 += sy; } /* e_xy + e_y < 0 */
    }
    return ret;
}
I did this whole lighting stuff (and A* pathfinding) some time ago; feel free to ask further questions.
Addendum:
Maybe I should simply add the small algorithms for raytracing ^^
To get the north and south border points, just use this snippet:
for (int x = 0; x < map.WIDTH; x++) {
    Point northBorderPoint = new Point(x, 0);
    Point southBorderPoint = new Point(x, map.HEIGHT);
    rayTrace(getLine(player.getPos(), northBorderPoint), map, player.getLightRadius());
    rayTrace(getLine(player.getPos(), southBorderPoint), map, player.getLightRadius());
}
and the raytrace works like this:
private static void rayTrace(ArrayList<Point> line, WorldMap map, int radius) {
    // radius = radius of the light source
    for (Point p : line) {
        float d = distance(line.get(0), p);
        // calculate light falloff linearly, 100%..0%
        float amountLight = (radius - d) / radius;
        if (amountLight < 0) {
            amountLight = 0;
        }
        map.setLight(p, amountLight);
        if (map.isViewBlocked(p)) { // can be blocked by a wall or a monster
            break; // stop tracing once the view is blocked
        }
    }
}
I've been into indie game development for about three years right now. The way I would do this is first of all by using OpenGL so you can get all the benefits of the graphical computing power of the GPU (hopefully you are already doing that). Suppose we start off with all tiles in a VBO, entirely lit. Now, there are several options of achieving what you want. Depending on how complex your lighting system is, you can choose a different approach.
If your light is going to be circular around the player, regardless of whether obstacles would block the light in real life, you could choose a lighting algorithm implemented in the vertex shader. In the vertex shader, you can compute the distance from the vertex to the player and apply some function that defines how bright things should be as a function of that distance. Do not use alpha; just multiply the color of the texture/tile by the lighting value.
If you want to use a custom lightmap (which is more likely), I would suggest adding an extra vertex attribute that specifies the brightness of the tile. Update the VBO if needed. The same approach goes here: multiply the pixel of the texture by the light value. If you are filling light recursively with the player position as the starting point, then you would update the VBO every time the player moves.
If your lightmap depends on where the sunlight hits your level, you could combine two lighting techniques. Create one vertex attribute for the sun brightness and another vertex attribute for the light emitted by light points (like a torch held by the player). Now you can combine those two values in the vertex shader. Suppose your sun comes up and goes down in a day-and-night pattern. Say the sun brightness is sun, a value between 0 and 1, passed to the vertex shader as a uniform. The vertex attribute that represents the sunlight a tile receives is s, and the one for light emitted by light points is l. Then you could compute the total light for that tile like this:
tileBrightness = max(s * sun, l + flicker);
Where flicker (also a vertex shader uniform) is some kind of waving function that represents the little variants in the brightness of your light points.
This approach makes the scene dynamic without having to continuously recreate VBOs. I implemented this approach in a proof-of-concept project and it works great. You can check out what it looks like here: http://www.youtube.com/watch?v=jTcNitp_IIo. Note how the torchlight flickers at 0:40 in the video; that is done by what I explained here.

Slick is getting very slow but only drawing rectangles

I have been using Slick for Java for a few days and have hit a serious problem.
If I run a completely empty application (it just shows the FPS) at a resolution of 800x600, I get an FPS count between 700 and 800.
If I now draw an array with 13300 entries as a grid of green and white rectangles, the FPS drops to around 70.
With more entries in the array it becomes really slow.
For example, at a resolution of 1024x768 and an array with 21760 entries, the FPS drops to 40.
How I draw a single entry:
public void draw(Graphics graphics) {
    graphics.setColor(new Color(getColor().getRed(), getColor().getGreen(), getColor().getBlue(), getColor().getAlpha()));
    graphics.fillRect(getPosition().x, getPosition().y, getSize().x, getSize().y);
    Color_ARGB white = new Color_ARGB(Color_ARGB.ColorNames.WHITE);
    graphics.setColor(new Color(white.getRed(), white.getGreen(), white.getBlue(), white.getAlpha()));
}
And this is how I draw the complete array:
public void draw(Graphics graphics) {
    for (int ix = 0; ix < getWidth(); ix++) {
        for (int iy = 0; iy < getHeight(); iy++) {
            getGameGridAt(ix, iy).draw(graphics);
        }
    }
}
In my opinion, 21760 is not that many.
Is there anything wrong with my code, or is Slick just too slow to draw that many rectangles?
You only want to draw rectangles that are on the screen. If your screen bounds go from 0 to 1024 in the x direction and from 0 to 768 in the y direction, then you only want to loop through rectangles that are inside those bounds and then only draw those rectangles. I can't imagine you are trying to draw 21760 rectangles inside those bounds.
If you are, then try creating one static rectangle and then just try drawing that ONE in all of the different positions you need to draw it at rather than creating a new one every time. For example, in a game I am making, I might have 1000 tiles that are "grass" tiles, but all 1000 of those share the same static texture. So I only need to reference one image rather than each tile creating its own.
Each rectangle can still have a unique state. Just make your own rectangle class and have a static final Image that holds a 5*5 image. Each rectangle will use this image when it needs to be drawn. You can still have unique properties for each rectangle. For example, private Vector2f position, private boolean isAlive, etc
You're probably not going to be able to draw individual rectangles any faster than that.
Games that render millions of polygons per second do so using vertex buffer objects (VBO). For that, you'll probably need to code against the OpenGL API (LWJGL) itself, not a wrapper.
Not sure if Slick will allow it, but if this thing looks anything like a chessboard grid... you could draw just 4 rectangles, grab them, and use the resulting image as a texture for your whole image. I'm not even a Java programmer, just trying to come up with a solution.
Since you're only repeatedly using just a few colors, creating a new Color object for every single rectangle is bound to be slow. Call new only once for each distinct color, store the reusable Color objects somewhere in your class, and then pass those to the drawing functions; constantly allocating and freeing memory is very slow.
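A minimal sketch of that caching (my illustration; the color values and the isGreen() accessor are hypothetical):

// Create each color once and reuse it on every draw call.
private static final Color GREEN = new Color(0, 255, 0, 255);
private static final Color WHITE = new Color(255, 255, 255, 255);

public void draw(Graphics graphics) {
    graphics.setColor(isGreen() ? GREEN : WHITE); // isGreen() is a hypothetical accessor
    graphics.fillRect(getPosition().x, getPosition().y, getSize().x, getSize().y);
}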
And while this might not be as much of a benefit as avoiding new each time, have you considered caching the results of all those function calls and rewriting the code as
public void draw(Graphics graphics) {
    int ixmax = getWidth();
    int iymax = getHeight();
    for (int ix = 0; ix < ixmax; ix++) {
        for (int iy = 0; iy < iymax; iy++) {
            getGameGridAt(ix, iy).draw(graphics);
        }
    }
}
Or if you'd prefer not to declare new variables
public void draw(Graphics graphics) {
    for (int ix = getWidth() - 1; ix >= 0; ix--) {
        for (int iy = getHeight() - 1; iy >= 0; iy--) {
            getGameGridAt(ix, iy).draw(graphics);
        }
    }
}
Just noticed in another answer that you have an integer-sized grid (5x5). In this case the fastest way to go about it would seem to be to draw each item as a single pixel (you can do this directly in memory using a 2-dimensional array) and scale the result to 500%, or use it as a texture and draw a single rectangle with it at the final size you desire. That should be quite fast. Sorry for all the confusion caused by previous answers; you should have said what you're doing more clearly from the start.
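In plain Java 2D that scale-up could look roughly like this (my sketch; gridWidth, gridHeight, colorAt and screenGraphics are assumptions, and Slick's own image scaling would be the analogous route):

// Render the grid at one pixel per cell, then draw it scaled 5x in one call.
BufferedImage tiny = new BufferedImage(gridWidth, gridHeight, BufferedImage.TYPE_INT_RGB);
for (int x = 0; x < gridWidth; x++) {
    for (int y = 0; y < gridHeight; y++) {
        tiny.setRGB(x, y, colorAt(x, y)); // colorAt is a hypothetical accessor
    }
}
Graphics2D g2 = (Graphics2D) screenGraphics; // your target surface
g2.drawImage(tiny, 0, 0, gridWidth * 5, gridHeight * 5, null);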
If scaling and textures are not available, you can still draw in memory using something like this (written in C++; please translate it to Java yourself):
for (int x = 0; x < grid.width(); x++) {
    // Fill one image row (5 pixels per grid cell along y).
    for (int y = 0; y < grid.height(); y++) {
        image[x*5][y*5]     = grid.color[x][y];
        image[x*5][y*5 + 1] = grid.color[x][y];
        image[x*5][y*5 + 2] = grid.color[x][y];
        image[x*5][y*5 + 3] = grid.color[x][y];
        image[x*5][y*5 + 4] = grid.color[x][y];
    }
    // Duplicate the filled row into the next four rows.
    // Note the row is grid.height() * 5 pixels wide, so copy that many elements.
    memcpy(image[x*5 + 1], image[x*5], grid.height() * 5 * sizeof(image[0][0]));
    memcpy(image[x*5 + 2], image[x*5], grid.height() * 5 * sizeof(image[0][0]));
    memcpy(image[x*5 + 3], image[x*5], grid.height() * 5 * sizeof(image[0][0]));
    memcpy(image[x*5 + 4], image[x*5], grid.height() * 5 * sizeof(image[0][0]));
}
I'm not sure, but for graphics the x and y might be represented in the reverse order from what is used here; change the code accordingly if that's the case (you'll figure it out within a few iterations). Also, your data is probably structured a bit differently, but I think the idea should be clear.
