Java - Detecting collisions via array-stored masks

I am currently working on collision for my 2D game. I did some research and deduced that I should store the alpha values of an entity's image pixels in a "mask", and do the same for the other entity. Then I take both entities' x & y co-ords, as well as height and width, make a Rectangle object for each, and use the method Rectangle.intersects(Rectangle r) to check whether they do in fact collide, in order to make it more efficient than going straight through two for loops.
If they intersect, I then make a new array with the dimensions:
int maxLengthY = Math.max(thisEntity.getMask().length, e.getMask().length);
int maxLengthX = Math.max(thisEntity.getMask()[0].length, thisEntity.getMask()[0].length);
int minX = Math.min(thisEntity.getX(), e.getX());
int minY = Math.min(thisEntity.getY(), e.getY());
int[][] map = new int[maxLengthX + minX][maxLengthY + minY];
and then add the other two masks onto this one with their corresponding y & x "boundaries", like so:
for(int curX = 0; curX < maxLengthX + minX; curX++) { //only loop through the co-ords that are affected
for(int curY = 0; curY < maxLengthY + minY; curY++) {
int this_x = thisEntity.getX();
int this_width = thisEntity.getImage().getWidth();
int this_y = thisEntity.getY();
int this_height = thisEntity.getImage().getHeight();
int[][] this_mask = thisEntity.getMask();
if(curX < (this_x + this_width) && curX > this_x) {//check that the co-ords used are relevant for thisEntity's mask
if(curY < (this_y + this_height) && curY > this_y) {
map[curX][curY] = this_mask[Math.abs(curX - this_x)][Math.abs(curY - this_y)]; // store data from mask to map
}
}
int other_x = e.getX();
int other_width = e.getImage().getWidth();
int other_y = e.getY();
int other_height = e.getImage().getHeight();
int[][] other_mask = e.getMask();
if(curX < (other_x + other_width) && curX > other_x) { //check that the co-ords used are relevant for e's mask
if(curY < (other_y + other_height) && curY > other_y) {
if(map[curX][curY] == 1) { //check if this segment is already written by thisEntity
map[curX][curY] = 2; //if yes, set to 2 instead of e's value to show collision
} else {
map[curX][curY] = other_mask[curX][curY]; // the minus to nullify minX and minY "requirements"
}
}
}
}
}
resulting in the array "map" looking like so:
(excuse my 1337 paint skills)
This is the code in all its beauty:
public Entity[] collisions(Entity thisEntity) {
ArrayList<Entity> list = new ArrayList<Entity>();
try {
for (Entity e : getLevel().getEntities()) {
System.out.println("rect contains = "+thisEntity.getRect().contains(e.getRect()));
if (!thisEntity.equals(e)) {
Rectangle r = e.getRect();
r = thisEntity.getRect();
if (thisEntity.getRect().intersects(e.getRect())) {
//get variables to create a space designated for the intersection areas involved
int maxLengthY = Math.max(thisEntity.getMask().length, e.getMask().length);
int maxLengthX = Math.max(thisEntity.getMask()[0].length, thisEntity.getMask()[0].length);
int minX = Math.min(thisEntity.getX(), e.getX());
int minY = Math.min(thisEntity.getY(), e.getY());
int[][] map = new int[maxLengthX + minX][maxLengthY + minY]; //create a matrix which merges both Entity's mask's to compare
for(int curX = 0; curX < maxLengthX + minX; curX++) { //only loop through the co-ords that are affected
for(int curY = 0; curY < maxLengthY + minY; curY++) {
int this_x = thisEntity.getX();
int this_width = thisEntity.getImage().getWidth();
int this_y = thisEntity.getY();
int this_height = thisEntity.getImage().getHeight();
int[][] this_mask = thisEntity.getMask();
if(curX < (this_x + this_width) && curX > this_x) {//check that the co-ords used are relevant for thisEntity's mask
if(curY < (this_y + this_height) && curY > this_y) {
map[curX][curY] = this_mask[Math.abs(curX - this_x)][Math.abs(curY - this_y)]; // store data from mask to map
}
}
int other_x = e.getX();
int other_width = e.getImage().getWidth();
int other_y = e.getY();
int other_height = e.getImage().getHeight();
int[][] other_mask = e.getMask();
if(curX < (other_x + other_width) && curX > other_x) { //check that the co-ords used are relevant for e's mask
if(curY < (other_y + other_height) && curY > other_y) {
if(map[curX][curY] == 1) { //check if this segment is already written by thisEntity
map[curX][curY] = 2; //if yes, set to 2 instead of e's value to show collision
} else {
map[curX][curY] = other_mask[curX][curY]; // the minus to nullify minX and minY "requirements"
}
}
}
}
}
}
}
}
} catch (Exception excp) {
excp.printStackTrace();
}
return list.toArray(new Entity[1]);
}
Also, here is the method getMask() :
public int[][] getMask() {
return mask;
}
...
private void createMask(BufferedImage image) {
final int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
final int width = image.getWidth();
final int height = image.getHeight();
final boolean hasAlphaChannel = image.getAlphaRaster() != null;
int[][] result = new int[height][width];
if (hasAlphaChannel) {
for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += 4) {
int alpha = pixels[pixel];
if(alpha != 0) {
result[row][col] = 1;
} else {
result[row][col] = 0;
}
if (col == width) {
col = 0;
row++;
}
}
}
mask = result;
}
However... this code does not work as intended, and in some cases not at all: when adding the individual masks to the map I get an IndexOutOfBoundsException even though it should work, so it's probably just me overlooking something...
So, to conclude, I need help with my code:
What is wrong with it?
How can I fix it?
Is there a more efficent way of doing this type of collision?
Do you recommend other types of collision? If so, what are they?

When you create Entities, do you create their masks from images of the exact same size? Because otherwise the entity's mask and the map would be using different coordinate systems (entity.mask[0][0] might be at its own corner, while map[0][0] is at the corner of the "world"), yet you're comparing the same indices at the line:
map[curX][curY] = other_mask[curX][curY];
(above in the code you're actually bringing those into the same coordinate system with Math.abs(a-b))
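Here is a minimal sketch of how the narrow-phase check could look once both masks are addressed in one coordinate system. It assumes the question's Entity accessors (getRect, getX, getY, getMask) and that masks are indexed [row][column], i.e. mask[y][x], which is how createMask builds them (note the posted code indexes this_mask[x][y], the other way around):
import java.awt.Rectangle;

public static boolean pixelsOverlap(Entity a, Entity b) {
    // broad phase: only scan the rectangle both entities actually share
    Rectangle overlap = a.getRect().intersection(b.getRect());
    if (overlap.isEmpty()) {
        return false;
    }
    for (int y = overlap.y; y < overlap.y + overlap.height; y++) {
        for (int x = overlap.x; x < overlap.x + overlap.width; x++) {
            // translate world coordinates into each entity's local mask coordinates
            if (a.getMask()[y - a.getY()][x - a.getX()] == 1
                    && b.getMask()[y - b.getY()][x - b.getX()] == 1) {
                return true; // a solid pixel of each entity occupies the same spot
            }
        }
    }
    return false;
}
This also sidesteps the merged map array entirely, so there is nothing to size and nothing to go out of bounds.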
As for more efficient ways to detect collisions, you can look into binary space partitioning and more on collision detection in general on Wikipedia.
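On the IndexOutOfBounds itself, two things in the posted code look suspicious: maxLengthX takes both of its arguments from thisEntity's mask (so e's width never factors in), and createMask steps through the DataBufferInt four elements at a time without ever incrementing col, even though for a TYPE_INT_ARGB image each array element is already one whole pixel, with alpha in the top byte. A hedged sketch of a mask builder that avoids the buffer-layout question altogether (getRGB is slower than reading the raster directly, but it always returns ARGB):
private void createMask(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[][] result = new int[height][width];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int alpha = (image.getRGB(x, y) >>> 24) & 0xFF; // top byte is alpha
            result[y][x] = (alpha != 0) ? 1 : 0;
        }
    }
    mask = result;
}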

Related

Calculating 'color distance' between 2 points in a 3-dimensional space

I have a homework task where I have to write a class responsible for contour detection. It is essentially an image processing operation, using the definition of Euclidean distance between 2 points in 3-dimensional space. The formula given to us to use is:
Math.sqrt(Math.pow(pix1.red - pix2.red,2) + Math.pow(pix1.green- pix2.green,2) + Math.pow(pix1.blue- pix2.blue,2));
We need to consider each entry of the two-dimensional array storing the colors of the pixels of an image, and if, for some pixel pix, the color distance between pix and any of its neighbors is more than 70, change the color of the pixel to black; else change it to white.
We are given a separate class as well, responsible for choosing an image and selecting an output, to which the method operationContouring is applied. Java syntax and conventions are very new to me, having started with Python. Conceptually, I'm struggling to understand what the difference between pix1 and pix2 is, and how to define them. This is my code so far.
Given:
import java.awt.Color;
/* Interface for ensuring all image operations invoked in same manner */
public interface operationImage {
public Color[][] operationDo(Color[][] imageArray);
}
My code:
import java.awt.Color;
public class operationContouring implements operationImage {
public Color[][] operationDo(Color[][] imageArray) {
int numberOfRows = imageArray.length;
int numberOfColumns = imageArray[0].length;
Color[][] results = new Color[numberOfRows][numberOfColumns];
for (int i = 0; i < numberOfRows; i++)
for (int j = 0; j < numberOfColumns; j++) {
int red = imageArray[i][j].getRed();
int green = imageArray[i][j].getGreen();
int blue = imageArray[i][j].getBlue();
double DistanceColor = Math.sqrt(Math.pow(pix1.red - pix2.red,2) + Math.pow(pix1.green- pix2.green,2) + Math.pow(pix1.blue- pix2.blue,2));
int LIMIT = 70;
if (DistanceColor> LIMIT ) {
results[i][j] = new Color((red=0), (green=0), (blue=0));
}
else {
results[i][j] = new Color((red=255), (green=255), (blue=255));
}
}
return results;
}
}
This is a solution I wrote that uses BufferedImages. I tested it and it should work. Try changing it such that it uses your data format (Color[][]) and it should work for you too. Note that "pix1" is nothing more than a description of the color of some pixel, and "pix2" is the description of the color of the pixel you are comparing it to (determining whether the color distance > 70).
public static boolean tooDifferent(Color c1, Color c2) {
return Math.sqrt(Math.pow(c1.getRed() - c2.getRed(),2) + Math.pow(c1.getGreen()- c2.getGreen(),2) + Math.pow(c1.getBlue()- c2.getBlue(),2)) > 70;
}
public static Color getColor(int x, int y, BufferedImage img) {
return new Color(img.getRGB(x, y));
}
public static BufferedImage operationDo(BufferedImage img) {
int numberOfRows = img.getHeight();
int numberOfColumns = img.getWidth();
BufferedImage results = new BufferedImage(numberOfColumns, numberOfRows, BufferedImage.TYPE_INT_ARGB);
for (int y = 0; y < numberOfRows; y++) {
for (int x = 0; x < numberOfColumns; x++) {
Color color = new Color(img.getRGB(x, y));
boolean aboveExists = y > 0;
boolean belowExists = y < numberOfRows - 1;
boolean leftExists = x > 0;
boolean rightExists = x < numberOfColumns - 1;
if ((aboveExists && tooDifferent(color, getColor(x, y - 1, img))) ||
(belowExists && tooDifferent(color, getColor(x, y + 1, img))) ||
(leftExists && tooDifferent(color, getColor(x - 1, y, img))) ||
(rightExists && tooDifferent(color, getColor(x + 1, y, img)))) {
results.setRGB(x, y, Color.black.getRGB());
} else {
results.setRGB(x, y, Color.white.getRGB());
}
}
}
return results;
}
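If you would rather keep the assignment's Color[][] interface, the same neighbor test drops straight into operationDo. A sketch reusing the tooDifferent helper above; here pix1 is imageArray[i][j] and pix2 is each of its four neighbors in turn:
public Color[][] operationDo(Color[][] imageArray) {
    int numberOfRows = imageArray.length;
    int numberOfColumns = imageArray[0].length;
    Color[][] results = new Color[numberOfRows][numberOfColumns];
    for (int i = 0; i < numberOfRows; i++) {
        for (int j = 0; j < numberOfColumns; j++) {
            Color pix1 = imageArray[i][j];
            boolean edge =
                (i > 0                   && tooDifferent(pix1, imageArray[i - 1][j])) ||
                (i < numberOfRows - 1    && tooDifferent(pix1, imageArray[i + 1][j])) ||
                (j > 0                   && tooDifferent(pix1, imageArray[i][j - 1])) ||
                (j < numberOfColumns - 1 && tooDifferent(pix1, imageArray[i][j + 1]));
            results[i][j] = edge ? Color.BLACK : Color.WHITE;
        }
    }
    return results;
}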

Edge detection using sobel operator

So I am trying to write a program that uses the Sobel operator to detect edges in an image. Below is my method.
/**
* Detects edges.
* @param url - filepath to the image.
*/
private void detect(String url) {
BufferedImage orgImage = readImage(url);
int width = orgImage.getWidth();
int height = orgImage.getHeight();
BufferedImage resImage = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);
WritableRaster inraster = orgImage.getRaster();
WritableRaster outraster = resImage.getRaster();
System.out.println("size: " + width + "X" + height);
// Loop through every pixel; ignore the edges, as these would throw out of bounds.
for (int i = 1; i < width-2; i++) {
for (int j = 1; j < height-2; j++) {
// Compute filter result, loops over in a
// box pattern.
int sum = 0;
for (int x = -1; x <= 1; x++) {
for (int y = -1; y <= 1; y++) {
int sum1 = i+y;
int sum2 = j+x;
int p = inraster.getSample(sum1, sum2, 0);
sum = sum + p;
}
}
int q = (int) Math.round(sum / 9.0);
if(q<150){
q = 0;
}else{
q = 255;
}
outraster.setSample(i, j, 0, q);
}
}
writeImage(resImage, "jpg", "EdgeDetection " + url);
}
This mostly just gives me a black and white image:
Before
After
I am obviously calculating the pixel value wrong somehow. I am also not sure what value to use when deciding whether the pixel should be black or white.
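For reference, the inner loop above adds all nine neighbors with equal weight, which amounts to a 3x3 box blur plus a threshold rather than the Sobel operator. Sobel convolves with two signed kernels and combines the horizontal and vertical responses. A hedged sketch, assuming the same width, height, inraster and outraster as in the question, and writing 1 for white in the TYPE_BYTE_BINARY output:
// Sobel kernels: row index is the y offset, column index is the x offset
int[][] gxKernel = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
int[][] gyKernel = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
for (int i = 1; i < width - 1; i++) {
    for (int j = 1; j < height - 1; j++) {
        int gx = 0, gy = 0;
        for (int y = -1; y <= 1; y++) {
            for (int x = -1; x <= 1; x++) {
                int p = inraster.getSample(i + x, j + y, 0);
                gx += gxKernel[y + 1][x + 1] * p;
                gy += gyKernel[y + 1][x + 1] * p;
            }
        }
        int magnitude = (int) Math.hypot(gx, gy); // gradient strength
        outraster.setSample(i, j, 0, magnitude > 150 ? 1 : 0); // 1 = white
    }
}
The 150 threshold here is only a guess; picking it by inspecting the magnitude histogram usually works better than a fixed constant.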

Why does this recursive image comparison take less time than a primitive iterative one? (Java 8)

I have run into a little situation that puzzles me. Would someone be able to explain it?
I have written an algorithm in Java that compares two images by first comparing the two middle columns of the two images, then comparing the two middle rows of the two images, and then recursively comparing the four remaining quadrants in the two images.
Now, for some reason my test tells me that my algorithm is quicker than the primitive comparison, which simply iterates over each row and each column.
I would have thought that the iterative approach would be faster, as the number of comparisons is the same, while in the recursive approach we're doing additional calculations to work out the four quadrants and the middle row and column.
I have done a separate test (which I won't include) to check that the algorithm does in fact go over every pixel in the image, and it does.
Is there something I'm missing in my implementation or is there something about the Java compiler that's producing this result?
Here's the code:
Algo:
public boolean areIdentical(BufferedImage image1, BufferedImage image2) {
int height = image1.getHeight();
int width = image2.getWidth();
for(int y : new int[]{0, height-1}){
for (int x = 0; x < width; x++) {
if(!compareColors(image1.getRGB(x,y), image2.getRGB(x,y))) return false;
}
}
for(int x : new int[]{0,width-1}){
for(int y = 0; y < height; y++){
if(!compareColors(image1.getRGB(x,y), image2.getRGB(x,y))) return false;
}
}
return areIdenticalRecursiveCross(image1, image2, new BoxCoordinates(1,1, width-2,height-2));
}
public boolean areIdenticalRecursiveCross(BufferedImage image1, BufferedImage image2, BoxCoordinates boxToCheck){
int height = boxToCheck.getHeight();
int width = boxToCheck.getWidth();
int startX= boxToCheck.getX();
int startY=boxToCheck.getY();
if(height == 1 || height == 2 || width == 1 || width == 2){
for (int x = startX; x < startX+width; x++) {
for (int y = startY; y < startY+height; y++) {
if(!compareColors(image1.getRGB(x,y),image2.getRGB(x,y))) return false;
}
}
return true;
} else{
int newWidth = width/2 - 1 +width%2;
for(int x = startX+newWidth; x <= startX+width/2; x++){
for(int y = startY; y < startY+height; y++){
if(!compareColors(image1.getRGB(x,y),image2.getRGB(x, y))) return false;
}
}
int newHeight = height/2 - 1 + height%2;
for(int y = startY + newHeight; y <= startY + height/2; y++){
for(int x = startX; x < width+startX; x++){
if(!compareColors(image1.getRGB(x,y),image2.getRGB(x, y))) return false;
}
}
return areIdenticalRecursiveCross(image1, image2,new BoxCoordinates(startX, startY, newWidth, newHeight))
&& areIdenticalRecursiveCross(image1, image2,new BoxCoordinates(startX + width/2+1, startY, newWidth, newHeight))
&& areIdenticalRecursiveCross(image1, image2,new BoxCoordinates(startX, startY + height/2+1, newWidth, newHeight))
&& areIdenticalRecursiveCross(image1, image2,new BoxCoordinates(startX+ width/2+1, startY+height/2+1, newWidth, newHeight));
}
}
public boolean compareColors(int c1, int c2){
return c1 == c2;
}
Test:
public void testAreIdentical() throws Exception {
BufferedImage screen = takeScreenshot();
long start = System.currentTimeMillis();
boolean primitive = false;
for(int i = 0; i<1;i++) {
primitive = primitive(screen, screen);
}
long end =System.currentTimeMillis();
System.out.println("Primitive took: " + (end-start)/1000.0 + " returning " + primitive);
start = System.currentTimeMillis();
boolean recursive = false;
for (int i =0; i< 1;i++) {
recursive = matcher.areIdentical(screen, screen);
}
end = System.currentTimeMillis();
System.out.println("Failgorithm took: " + (end-start)/1000.0 + " returning " + recursive);
}
public boolean primitive(BufferedImage image1, BufferedImage image2){
for (int x = 0; x < image1.getWidth(); x++) {
for (int y = 0; y < image1.getHeight(); y++) {
if(image1.getRGB(x,y)!=image2.getRGB(x,y)) return false;
}
}
return true;
}
Test Result:
Primitive took: 0.161 returning true
Failgorithm took: 0.127 returning true
Failgorithm 2 took: 0.168 returning true
Interestingly, if I change the algorithm to only check a single middle row and column on each recursive step, I get a worse result (Failgorithm 3).
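One plausible explanation, hedged: the primitive version is measured first, so it alone pays the one-time JIT warm-up cost, and its x-outer/y-inner loop walks the image column by column, which is the unfriendly direction for BufferedImage's row-major storage. A sketch of a fairer measurement, reusing screen, matcher and primitive from the test above:
// warm-up: let the JIT compile both paths before anything is timed
for (int i = 0; i < 100; i++) { // iteration count is arbitrary
    primitive(screen, screen);
    matcher.areIdentical(screen, screen);
}
long start = System.nanoTime();
boolean primitiveResult = primitive(screen, screen);
long primitiveNanos = System.nanoTime() - start;

start = System.nanoTime();
boolean recursiveResult = matcher.areIdentical(screen, screen);
long recursiveNanos = System.nanoTime() - start;

System.out.println("Primitive took: " + primitiveNanos / 1e9 + " returning " + primitiveResult);
System.out.println("Recursive took: " + recursiveNanos / 1e9 + " returning " + recursiveResult);
Swapping the primitive version's loops (y outer, x inner) is also worth trying before drawing conclusions about the algorithms themselves.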

Finding white rectangle in an image

I'm trying to find a white rectangle in an image. The rectangle size is fixed. This is what I've come up with so far:
BufferedImage bImage = bufferedImage;
int height = bufferedImage.getHeight(); //~1100px
int width = bufferedImage.getWidth(); //~1600px
int neededWidth = width / 2;
int neededHeight = 150;
int x = 0;
int y = 0;
boolean breaker = false;
boolean found = false;
int rgb = 0xFF00FF00;
int fx, fy;
fx = fy = 0;
JavaLogger.log.info("width, height: " + w + ", " + h);
while ((x != (width / 2) || y != (height - neededHeight)) && found == false) {
for (int i = y; i - y < neededHeight + 1; i++) {
for (int j = x; j - x < neededWidth + 1; j++) { // might be that +1 is needed
//JavaLogger.log.info("x,y: " + j + ", " + i);
long pixel = bImage.getRGB(j, i);
if (pixel != colorWhite && pixel != -1) {
//bImage.setRGB(j, i, rgb);
//JavaLogger.log.info("x,y: " + (j+x) + ", " + (i+y));
breaker = true;
break;
} else {
//bImage.setRGB(j, i, 0xFFFFFF00);
}
//printPixelARGB(pixel);
if ((i - y == neededHeight-10) && j - x == neededWidth-10) {
JavaLogger.log.info("width, height: " + x + ", " + y + "," + j + ", " + i);
fx = j;
fy = i;
found = true;
breaker = true;
break;
}
}
if (breaker) {
breaker = false;
break;
}
}
if (x < (width / 2)) {
x++;
} else {
if (y < (height - neededHeight)) {
y++;
x = 0;
} else {
break;
}
}
//JavaLogger.log.info("width, height: " + x + ", " + y);
}
if (found == true) {
for (int i = y; i < fy; i++) {
for (int j = x; j < fx; j++) {
bImage.setRGB(j, i, 0xFF00FF3F);
}
}
}
JavaLogger.log.info("width, height: " + w + ", " + h);
This works OK if the rectangle I need is close to the beginning (0;0), but as it gets further away, the performance degrades quite severely. I'm wondering if there's something that can be done?
For example, this search took nearly 8s, which is quite a lot.
I'm thinking that this can definitely be done more efficiently. Maybe some blob finding? I've read about it, but I've no idea how to apply it.
Also, I'm new to both Java and Image processing, so any help is appreciated.
This is very rough, but it successfully finds all the white pixels in the image; more checking can be done to ensure it is the size you want and everything is there, but the basics are there.
PS: I have not tested with your image. r and this.rc are the picture size, and p and this.px are the inner rectangle size.
public static void main(String[] args) {
JFrame frame = new JFrame();
final int r = 100;
final int p = 10;
NewJPanel pan = new NewJPanel(r, p, new A() {
@Override
public void doImage(BufferedImage i) {
int o = 0;
for (int j = 0; j < i.getWidth() - p; j++) {
for (int k = 0; k < i.getHeight() - p; k++) {
PixelGrabber pix2 = new PixelGrabber(
i, j, k, p, p, false);
try {
pix2.grabPixels();
} catch (InterruptedException ex) {}
int pixelColor = pix2.getColorModel()
.getRGB(pix2.getPixels());
Color c = new Color(pixelColor);
if (c.equals(Color.WHITE)) {
System.out.println("Found at : x:" + j + ",y:" + k);
}
}
}
}
});
frame.getContentPane().add(pan);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(500, 500);
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
private interface A {
void doImage(BufferedImage i);
}
private static class NewJPanel extends JPanel {
private static final long serialVersionUID = -5348356640373105209L;
private BufferedImage image = null;
private int px;
private int rc;
private A a;
public NewJPanel(int r, int p, A a) {
this.px = p;
this.rc = r;
this.a = a;
}
public BufferedImage getImage() {
return image;
}
@Override public void paint(Graphics g) {
super.paint(g);
image = new BufferedImage(this.rc, this.rc,
BufferedImage.TYPE_INT_ARGB);
java.awt.Graphics2D g2 = image.createGraphics();
g2.setColor(Color.BLACK);
g2.fillRect(0, 0, this.rc, this.rc);
g2.setColor(Color.WHITE);
g2.fillRect(
new Random().nextInt(this.rc - this.px),
new Random().nextInt(this.rc - this.px),
this.px, this.px);
g.drawImage(image, this.rc, this.rc, this);
this.a.doImage(this.image);
}
}
I'm no expert, but I don't think the code is the problem - you need to change your algorithm. I would start by recursively searching for a single white pixel on the 2D plane, something like:
findWhitePixel(square){
look at pixel in the middle of 'square' - if it's white return it, otherwise:
findWhitePixel(top-right-quarter of 'square')
findWhitePixel(top-left-quarter of 'square')
findWhitePixel(bottom-right-quarter of 'square')
findWhitePixel(bottom-left-quarter of 'square')
}
After you find a white pixel, try traversing up, down, left and right from it to find the borders of your shape. If it's a given that there can only be rectangles - you're done. If there might be other shapes (triangles, circles, etc.) you'll need some verification here.
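For what it's worth, a literal Java rendering of that pseudocode might look like the following sketch. It assumes opaque pure white is 0xFFFFFFFF and adds the base cases the pseudocode leaves implicit; worst case it still visits every pixel, so it only wins when the white region is large:
import java.awt.Point;
import java.awt.image.BufferedImage;

// Returns the location of some white pixel in the w x h region whose
// top-left corner is (x, y), or null if the region contains none.
static Point findWhitePixel(BufferedImage img, int x, int y, int w, int h) {
    if (w <= 0 || h <= 0) {
        return null; // empty region
    }
    int midX = x + w / 2;
    int midY = y + h / 2;
    if (img.getRGB(midX, midY) == 0xFFFFFFFF) {
        return new Point(midX, midY);
    }
    if (w == 1 && h == 1) {
        return null; // single non-white pixel
    }
    Point p; // the four quarters together cover the whole region
    if ((p = findWhitePixel(img, x, y, w / 2, h / 2)) != null) return p;
    if ((p = findWhitePixel(img, midX, y, w - w / 2, h / 2)) != null) return p;
    if ((p = findWhitePixel(img, x, midY, w / 2, h - h / 2)) != null) return p;
    return findWhitePixel(img, midX, midY, w - w / 2, h - h / 2);
}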
What you are asking can be solved by the operation known as "erosion". The erosion replaces every pixel by the darkest of all pixels in the rectangle of the requested size at that location (top-left corner). Here, darkest means that non-white supersedes white.
The output of erosion is an image with W-1 columns and H-1 rows less. Any white pixel in it corresponds to a solution.
In the lucky case of a rectangle shape, erosion is a separable operation. This means that you can erode first using a horizontal segment shape, then a vertical segment shape on the output of the first erosion. For a W x H rectangle size, this replaces W * H operations by W + H, a significant saving.
In the also lucky case of a binary image (non-white or white), erosion by a segment can be done extremely efficiently: in every row independently, find all contiguous runs of white pixels, and turn the W-1 rightmost ones to non-white. Do the same to all columns, shortening the white runs by H-1 pixels.
Example: find all 3x2 rectangles:
####....####
##.....#..##
#..######...
.....###....
After 3x1 erosion:
####..####
##...#####
#########.
...#####..
After 1x2 erosion:
####.#####
##########
#########.
This algorithm takes constant time per pixel (regardless of the rectangle size). Properly implemented, it should take a handful of milliseconds.
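A compact sketch of that run-shortening erosion, operating on a boolean mask where true means white (the method name and the rw/rh parameters are mine). After both passes, a surviving true at (y, x) marks the top-left corner of an all-white rw x rh rectangle:
// Erode horizontally by rw, then vertically by rh, in O(1) per pixel.
static boolean[][] erode(boolean[][] white, int rw, int rh) {
    int h = white.length, w = white[0].length;
    boolean[][] afterH = new boolean[h][w];
    for (int y = 0; y < h; y++) {
        int run = 0; // length of the white run starting at x, counted right to left
        for (int x = w - 1; x >= 0; x--) {
            run = white[y][x] ? run + 1 : 0;
            afterH[y][x] = run >= rw; // survives iff rw white pixels start here
        }
    }
    boolean[][] afterV = new boolean[h][w];
    for (int x = 0; x < w; x++) {
        int run = 0;
        for (int y = h - 1; y >= 0; y--) {
            run = afterH[y][x] ? run + 1 : 0;
            afterV[y][x] = run >= rh;
        }
    }
    return afterV;
}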

Crop image to smallest size by removing transparent pixels in java

I have a sprite sheet which has each image centered in a 32x32 cell. The actual images are not 32x32, but slightly smaller. What I'd like to do is take a cell and crop the transparent pixels so the image is as small as it can be.
How would I do that in Java (JDK 6)?
Here is an example of how I'm currently breaking up the tile sheet into cells:
BufferedImage tilesheet = ImageIO.read(getClass().getResourceAsStream("/sheet.png"));
for (int i = 0; i < 15; i++) {
Image img = tilesheet.getSubimage(i * 32, 0, 32, 32);
// crop here..
}
My current idea was to test each pixel from the center working my way out to see if it is transparent, but I was wondering if there would be a faster/cleaner way of doing this.
There's a trivial solution - to scan every pixel. The algorithm below runs in O(w•h).
private static BufferedImage trimImage(BufferedImage image) {
int width = image.getWidth();
int height = image.getHeight();
int top = height / 2;
int bottom = top;
int left = width / 2 ;
int right = left;
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
if (image.getRGB(x, y) != 0){
top = Math.min(top, y);
bottom = Math.max(bottom, y);
left = Math.min(left, x);
right = Math.max(right, x);
}
}
}
return image.getSubimage(left, top, right - left + 1, bottom - top + 1);
}
But this is much more effective:
private static BufferedImage trimImage(BufferedImage image) {
WritableRaster raster = image.getAlphaRaster();
int width = raster.getWidth();
int height = raster.getHeight();
int left = 0;
int top = 0;
int right = width - 1;
int bottom = height - 1;
int minRight = width - 1;
int minBottom = height - 1;
top:
for (;top <= bottom; top++){
for (int x = 0; x < width; x++){
if (raster.getSample(x, top, 0) != 0){
minRight = x;
minBottom = top;
break top;
}
}
}
left:
for (;left < minRight; left++){
for (int y = height - 1; y > top; y--){
if (raster.getSample(left, y, 0) != 0){
minBottom = y;
break left;
}
}
}
bottom:
for (;bottom > minBottom; bottom--){
for (int x = width - 1; x >= left; x--){
if (raster.getSample(x, bottom, 0) != 0){
minRight = x;
break bottom;
}
}
}
right:
for (;right > minRight; right--){
for (int y = bottom; y >= top; y--){
if (raster.getSample(right, y, 0) != 0){
break right;
}
}
}
return image.getSubimage(left, top, right - left + 1, bottom - top + 1);
}
This algorithm follows the idea from pepan's answer (see above) and is 2 to 4 times more efficient. The difference is: it never scans any pixel twice and tries to contract the search range at each stage.
The worst case of the method's performance is O(w•h – a•b).
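For the sprite-sheet case in the question, the call site might then look like this (a sketch, assuming the 32x32 cells from the question and the trimImage method above):
BufferedImage tilesheet = ImageIO.read(getClass().getResourceAsStream("/sheet.png"));
for (int i = 0; i < 15; i++) {
    BufferedImage cell = tilesheet.getSubimage(i * 32, 0, 32, 32);
    BufferedImage trimmed = trimImage(cell); // crop away the transparent border
    // use 'trimmed' for drawing / collision masks
}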
This code works for me. The algorithm is simple: it iterates from the left/top/right/bottom of the picture and finds the very first pixel in the column/row which is not transparent. It then remembers the new corner of the trimmed picture and finally returns the sub-image of the original image.
There are things which could be improved.
The algorithm expects that there is an alpha byte in the data. It will fail with an ArrayIndexOutOfBoundsException if there is not.
The algorithm expects that there is at least one non-transparent pixel in the picture. It will fail if the picture is completely transparent.
private static BufferedImage trimImage(BufferedImage img) {
final byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
int width = img.getWidth();
int height = img.getHeight();
int x0, y0, x1, y1; // the new corners of the trimmed image
int i, j; // i - horizontal iterator; j - vertical iterator
leftLoop:
for (i = 0; i < width; i++) {
for (j = 0; j < height; j++) {
if (pixels[(j*width+i)*4] != 0) { // alpha is the very first byte and then every fourth one
break leftLoop;
}
}
}
x0 = i;
topLoop:
for (j = 0; j < height; j++) {
for (i = 0; i < width; i++) {
if (pixels[(j*width+i)*4] != 0) {
break topLoop;
}
}
}
y0 = j;
rightLoop:
for (i = width-1; i >= 0; i--) {
for (j = 0; j < height; j++) {
if (pixels[(j*width+i)*4] != 0) {
break rightLoop;
}
}
}
x1 = i+1;
bottomLoop:
for (j = height-1; j >= 0; j--) {
for (i = 0; i < width; i++) {
if (pixels[(j*width+i)*4] != 0) {
break bottomLoop;
}
}
}
y1 = j+1;
return img.getSubimage(x0, y0, x1-x0, y1-y0);
}
I think this is exactly what you should do: loop through the array of pixels, check for alpha and then discard. Be aware, though, that when you have, for example, a star shape, this will not shrink the image any further than its bounding box.
A simple fix for the code above. I used the median for RGB and fixed the min() function for x and y:
private static BufferedImage trim(BufferedImage img) {
int width = img.getWidth();
int height = img.getHeight();
int top = height / 2;
int bottom = top;
int left = width / 2 ;
int right = left;
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
if (isFg(img.getRGB(x, y))){
top = Math.min(top, y);
bottom = Math.max(bottom, y);
left = Math.min(left, x);
right = Math.max(right, x);
}
}
}
return img.getSubimage(left, top, right - left, bottom - top);
}
private static boolean isFg(int v) {
Color c = new Color(v);
return(isColor((c.getRed() + c.getGreen() + c.getBlue())/2));
}
private static boolean isColor(int c) {
return c > 0 && c < 255;
}
Hi, I tried the following. In the image files, idle1.png is the image with a big transparent box, while testing.png is the same image with the minimum bounding box.
BufferedImage tempImg = ImageIO.read(new File(fileNPath));
WritableRaster tempRaster = tempImg.getAlphaRaster();
int x1 = getX1(tempRaster);
int y1 = getY1(tempRaster);
int x2 = getX2(tempRaster);
int y2 = getY2(tempRaster);
System.out.println("x1:"+x1+" y1:"+y1+" x2:"+x2+" y2:"+y2);
BufferedImage temp = tempImg.getSubimage(x1, y1, x2 - x1, y2 - y1);
//for idle1.png
String filePath = fileChooser.getCurrentDirectory() + "\\"+"testing.png";
System.out.println("filePath:"+filePath);
ImageIO.write(temp,"png",new File(filePath));
where the get functions are
public int getY1(WritableRaster raster) {
//top of character
for (int y = 0; y < raster.getHeight(); y++) {
for (int x = 0; x < raster.getWidth(); x++) {
if (raster.getSample(x, y,0) != 0) {
if(y>0) {
return y - 1;
}else{
return y;
}
}
}
}
return 0;
}
public int getY2(WritableRaster raster) {
//ground plane of character
for (int y = raster.getHeight()-1; y > 0; y--) {
for (int x = 0; x < raster.getWidth(); x++) {
if (raster.getSample(x, y,0) != 0) {
return y + 1;
}
}
}
return 0;
}
public int getX1(WritableRaster raster) {
//left side of character
for (int x = 0; x < raster.getWidth(); x++) {
for (int y = 0; y < raster.getHeight(); y++) {
if (raster.getSample(x, y,0) != 0) {
if(x > 0){
return x - 1;
}else{
return x;
}
}
}
}
return 0;
}
public int getX2(WritableRaster raster) {
//right side of character
for (int x = raster.getWidth()-1; x > 0; x--) {
for (int y = 0; y < raster.getHeight(); y++) {
if (raster.getSample(x, y,0) != 0) {
return x + 1;
}
}
}
return 0;
}
Look at idle1.png and the minimum bounding box idle = testing.png. Thank you for your help. Regards, Michael.
If your sheet already has transparent pixels, the BufferedImage returned by getSubimage() will, too. The default Graphics2D composite rule is AlphaComposite.SRC_OVER, which should suffice for drawImage().
If the sub-images have a distinct background color, use a LookupOp with a four-component LookupTable that sets the alpha component to zero for colors that match the background.
I'd traverse the pixel raster only as a last resort.
Addendum: Extra transparent pixels may interfere with collision detection, etc. Cropping them will require working with a WritableRaster directly. Rather than working from the center out, I'd start with the borders, using a pair of getPixels()/setPixels() methods that can modify a row or column at a time. If a whole row or column has zero alpha, mark it for elimination when you later get a sub-image.
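As a footnote to the LookupOp suggestion: a LookupTable sees one band at a time, so an exact "matches the background colour" test is easier with a plain loop. A sketch with a hypothetical makeTransparent helper:
import java.awt.image.BufferedImage;

// Copy src into an ARGB image, zeroing out every pixel whose RGB part
// matches the given background colour.
static BufferedImage makeTransparent(BufferedImage src, int backgroundRGB) {
    BufferedImage out = new BufferedImage(
            src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            int argb = src.getRGB(x, y);
            if ((argb & 0x00FFFFFF) == (backgroundRGB & 0x00FFFFFF)) {
                argb = 0; // fully transparent
            }
            out.setRGB(x, y, argb);
        }
    }
    return out;
}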
