How to auto-crop an image's white border in Java?

What's the easiest way to auto-crop the white border out of an image in Java? Thanks in advance...

Here's a way to crop all four sides, using the color of the very top-left pixel as the baseline, and allowing for a tolerance of color variation so that noise in the image won't make the crop useless:
public BufferedImage getCroppedImage(BufferedImage source, double tolerance) {
    // Get our top-left pixel color as our "baseline" for cropping
    int baseColor = source.getRGB(0, 0);
    int width = source.getWidth();
    int height = source.getHeight();
    int topY = Integer.MAX_VALUE, topX = Integer.MAX_VALUE;
    int bottomY = -1, bottomX = -1;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // Despite its name, colorWithinTolerance() returns true when the pixel differs
            // from the baseline by MORE than the tolerance, i.e. it is content to keep.
            if (colorWithinTolerance(baseColor, source.getRGB(x, y), tolerance)) {
                if (x < topX) topX = x;
                if (y < topY) topY = y;
                if (x > bottomX) bottomX = x;
                if (y > bottomY) bottomY = y;
            }
        }
    }
    BufferedImage destination = new BufferedImage((bottomX - topX + 1),
        (bottomY - topY + 1), BufferedImage.TYPE_INT_ARGB);
    // The +1 on the source corners keeps the copy 1:1 with the destination size
    destination.getGraphics().drawImage(source, 0, 0,
        destination.getWidth(), destination.getHeight(),
        topX, topY, bottomX + 1, bottomY + 1, null);
    return destination;
}
private boolean colorWithinTolerance(int a, int b, double tolerance) {
    int aAlpha = (a & 0xFF000000) >>> 24; // Alpha level
    int aRed   = (a & 0x00FF0000) >>> 16; // Red level
    int aGreen = (a & 0x0000FF00) >>> 8;  // Green level
    int aBlue  =  a & 0x000000FF;         // Blue level
    int bAlpha = (b & 0xFF000000) >>> 24; // Alpha level
    int bRed   = (b & 0x00FF0000) >>> 16; // Red level
    int bGreen = (b & 0x0000FF00) >>> 8;  // Green level
    int bBlue  =  b & 0x000000FF;         // Blue level
    double distance = Math.sqrt((aAlpha - bAlpha) * (aAlpha - bAlpha) +
                                (aRed - bRed) * (aRed - bRed) +
                                (aGreen - bGreen) * (aGreen - bGreen) +
                                (aBlue - bBlue) * (aBlue - bBlue));
    // 510.0 is the maximum distance between two colors
    // (0,0,0,0 -> 255,255,255,255)
    double percentAway = distance / 510.0d;
    // True when the two colors are MORE than `tolerance` (0.0 - 1.0) apart,
    // i.e. the pixel does NOT match the baseline/background color.
    return (percentAway > tolerance);
}
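For completeness, a minimal usage sketch; the file names and the 0.1 tolerance are just example values (requires javax.imageio.ImageIO and java.io.File):
BufferedImage source = ImageIO.read(new File("scan.png"));
// Pixels within ~10% of the top-left color count as border and are cropped away
BufferedImage cropped = getCroppedImage(source, 0.1);
ImageIO.write(cropped, "png", new File("scan-cropped.png"));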

If you want the white parts to be invisible, the best way is to use image filters and make the white pixels transparent. This is discussed by @PhiLho with some good samples; a minimal sketch of the idea follows.
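A sketch of that filter approach (not PhiLho's actual code; it assumes "white" means exactly 0xFFFFFF, whereas a real version would likely allow a tolerance). It uses java.awt.Toolkit, java.awt.image.RGBImageFilter and java.awt.image.FilteredImageSource:
public static Image whiteToTransparent(Image source) {
    ImageFilter filter = new RGBImageFilter() {
        @Override
        public int filterRGB(int x, int y, int rgb) {
            if ((rgb & 0x00FFFFFF) == 0x00FFFFFF) { // pure white pixel
                return rgb & 0x00FFFFFF;            // clear the alpha byte -> fully transparent
            }
            return rgb;                             // leave every other pixel untouched
        }
    };
    ImageProducer producer = new FilteredImageSource(source.getSource(), filter);
    return Toolkit.getDefaultToolkit().createImage(producer);
}
The returned Image can then be painted onto a TYPE_INT_ARGB BufferedImage if you need pixel access afterwards.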
If you want to resize your image so its borders won't contain white, you can do it with four simple loops.
This little method that I've written for you does the trick. Note that it only crops the upper part of the image; you can write the rest:
private Image getCroppedImage(String address) throws IOException {
    BufferedImage source = ImageIO.read(new File(address));
    boolean flag = false;
    int upperBorder = -1;
    do {
        upperBorder++;
        // Stop before reading out of bounds on a completely white image
        if (upperBorder >= source.getHeight()) {
            break;
        }
        for (int c1 = 0; c1 < source.getWidth(); c1++) {
            if (source.getRGB(c1, upperBorder) != Color.white.getRGB()) {
                flag = true;
                break;
            }
        }
    } while (!flag);
    BufferedImage destination = new BufferedImage(source.getWidth(),
        source.getHeight() - upperBorder, BufferedImage.TYPE_INT_ARGB);
    destination.getGraphics().drawImage(source, 0, upperBorder * -1, null);
    return destination;
}

And here is just another example:
private static BufferedImage autoCrop(BufferedImage sourceImage) {
    int left = 0;
    int right = 0;
    int top = 0;
    int bottom = 0;
    boolean firstFind = true;
    for (int x = 0; x < sourceImage.getWidth(); x++) {
        for (int y = 0; y < sourceImage.getHeight(); y++) {
            // pixel is not empty (not fully transparent black)
            if (sourceImage.getRGB(x, y) != 0) {
                // we walk from left to right, thus x can be applied as left on first finding
                if (firstFind) {
                    left = x;
                }
                // update right on each finding, because x can grow only
                right = x;
                // on first find apply y as top
                if (firstFind) {
                    top = y;
                } else {
                    // on each further find apply y to top only if a smaller one has been found
                    top = Math.min(top, y);
                }
                // on first find apply y as bottom
                if (bottom == 0) {
                    bottom = y;
                } else {
                    // on each further find apply y to bottom only if a larger one has been found
                    bottom = Math.max(bottom, y);
                }
                firstFind = false;
            }
        }
    }
    // +1 so the rightmost/bottommost non-empty pixels are included in the crop
    return sourceImage.getSubimage(left, top, right - left + 1, bottom - top + 1);
}

Here img is the original source image; this simply drops the last pixel column and row:
BufferedImage subImg = img.getSubimage(0, 0, img.getWidth() - 1, img.getHeight() - 1);

Related

Java - Tracing outline of abstract shapes and making outer pixels transparent

I want to layer two images together: a background and a foreground. The foreground is stitched together as a grid of smaller images (3x3). I have been able to make all white pixels transparent as a workaround; however, the inside of the shapes is also white, and I only want the pixels outside the shapes to be transparent.
Say, for example, the grid of images contained a circle or square in each grid location. Is there a way I can iterate over each pixel and create two arrays of pixel locations: those outside the shapes (which I make transparent) and those inside the shapes (where I can set the colour)?
import javax.imageio.ImageIO;
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.File;
// Stitches a grid of images together, scales a background image to fit and layers them.
public class Layer {
public static void layerImages() {
// Grid layout of images to stitch.
int rows = 3;
int cols = 3;
int chunks = rows * cols;
int chunckWidth, chunkHeight;
// Image files to stitch
File[] imgFiles = new File[chunks];
for(int i = 0; i < chunks; i++) {
imgFiles[i] = new File("ocarina_sprite" + (i + 1) + ".png");
}
// Read images into array.
try {
BufferedImage[] buffImages = new BufferedImage[chunks];
for (int i = 0; i < chunks; i++) {
buffImages[i] = ImageIO.read(imgFiles[i]);
}
chunckWidth = buffImages[0].getWidth();
chunkHeight = buffImages[0].getHeight();
BufferedImage finalImage = new BufferedImage(chunckWidth * cols, chunkHeight*rows, BufferedImage.TYPE_INT_ARGB);
// Calculate background width and height to cover stitched image.
int bwidth = 0;
int bheight = 0;
for(int i = 0; i < rows; i++) {
bwidth += buffImages[i].getWidth();
}
for(int i = 0; i < cols; i++) {
bheight += buffImages[i].getHeight();
}
// Background image
File dory = new File("dory.png");
BufferedImage original = ImageIO.read(dory);
// Scale background image.
BufferedImage background = scale(original, bwidth, bheight);
// Prepare final image by drawing background first.
Graphics2D g = finalImage.createGraphics();
g.drawImage(background, 0, 0, null);
// Prepare foreground image.
BufferedImage foreground = new BufferedImage(chunckWidth * cols, chunkHeight*rows, BufferedImage.TYPE_INT_ARGB);
// Stitch foreground images together
int num = 0;
for(int i = 0; i < rows; i++) {
for(int j = 0; j < rows; j++) {
foreground.createGraphics().drawImage(buffImages[num],chunckWidth * j, chunkHeight * i, null);
num++;
}
}
// Set white pixels to transparent.
for (int y = 0; y < foreground.getHeight(); ++y) {
for (int x = 0; x < foreground.getWidth(); ++x) {
int argb = foreground.getRGB(x, y);
if ((argb & 0xFFFFFF) > 0xFFFFEE) {
foreground.setRGB(x, y, 0x00FFFFFF);
}
}
}
// Draw foreground image to final image.
Graphics2D g3 = finalImage.createGraphics();
g3.drawImage(foreground, 0, 0, null);
// Output final image
ImageIO.write(finalImage, "png", new File("finalImage.png"));
}
catch (Exception e) {
System.out.println(e);
}
}
// Scale image
public static BufferedImage scale(BufferedImage imageToScale, int dWidth, int dHeight) {
BufferedImage scaledImage = null;
if (imageToScale != null) {
scaledImage = new BufferedImage(dWidth, dHeight, imageToScale.getType());
Graphics2D graphics2D = scaledImage.createGraphics();
graphics2D.drawImage(imageToScale, 0, 0, dWidth, dHeight, null);
graphics2D.dispose();
}
return scaledImage;
}
}
The floodfill solution mentioned in the comment was what I needed to solve the problem; however, the recursion over a million-plus pixels didn't work out, so I implemented the forest-fire algorithm, which is flood fill using a queue instead of recursion.
// Assumes two fields on the enclosing class: the BufferedImage 'foreground' being edited
// and a Queue<Point> 'coords' (e.g. a LinkedList<Point>) used as the work queue.
public static void forestFire(int width, int height, int x, int y) {
// Check if already set
int argb = foreground.getRGB(x, y);
if (((argb >> 24) & 0xFF) == 0) {
return;
}
coords.add(new Point(x, y));
// Set transparent pixel
foreground.setRGB(x, y, 0x00FFFFFF);
Point currentCoord = new Point();
while(!coords.isEmpty()) {
currentCoord.setLocation(coords.poll());
// Get current coordinates
x = (int)currentCoord.getX();
y = (int)currentCoord.getY();
// North
if(y != 0) {
int north = foreground.getRGB(x, y - 1);
// Check if transparent (already set) and check target colour (white)
if (((north >> 24) & 0xFF) > 0 && (north & 0xFFFFFF) > 0x111100) {
// Set transparent pixel
foreground.setRGB(x, y - 1, 0x00FFFFFF);
coords.add(new Point(x, y - 1));
}
}
// East
if(x != width - 1) {
int east = foreground.getRGB(x + 1, y);
if (((east >> 24) & 0xFF) > 0 && (east & 0xFFFFFF) > 0x111100) {
foreground.setRGB(x + 1, y, 0x00FFFFFF);
coords.add(new Point(x + 1, y));
}
}
// South
if(y != height - 1) {
int south = foreground.getRGB(x, y + 1);
if (((south >> 24) & 0xFF) > 0 && (south & 0xFFFFFF) > 0x111100) {
foreground.setRGB(x, y + 1, 0x00FFFFFF);
coords.add(new Point(x, y + 1));
}
}
// West
if(x != 0) {
int west = foreground.getRGB(x - 1, y);
if (((west >> 24) & 0xFF) > 0 && (west & 0xFFFFFF) > 0x111100) {
foreground.setRGB(x - 1, y, 0x00FFFFFF);
coords.add(new Point(x - 1, y));
}
}
}
}

Assign color to a pixel based on its neighbors

I've put all the black pixels of the image in an array and I want them to get the color of their left neighbor. I run the code without errors but the result is not really what I'm expecting.
Where do those black stripes come from? I was expecting it to be all red.
Here's my code and results.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.util.*;
public class ImageTest {
public static BufferedImage Threshold(BufferedImage img) {
int height = img.getHeight();
int width = img.getWidth();
BufferedImage finalImage = new BufferedImage(width,height,BufferedImage.TYPE_INT_RGB);
int r = 0;
int g = 0;
int b = 0;
List<Integer> blackpixels = new ArrayList<Integer>();
for (int x = 0; x < width; x++) {
try {
for (int y = 0; y < height; y++) {
//Get RGB values of pixels
int rgb = img.getRGB(x, y);
r = ImageTest.getRed(rgb);
g = ImageTest.getGreen(rgb);
b = ImageTest.getBlue(rgb);
int leftLoc = (x-1) + y*width;
if ((r < 5) && (g < 5) && (b < 5)) {
blackpixels.add(rgb);
Integer[] simpleArray = new Integer[ blackpixels.size() ];
System.out.print(simpleArray.length);
int pix = 0;
while(pix < simpleArray.length) {
r = leftLoc;
pix = pix +1;
}
}
finalImage.setRGB(x,y,ImageTest.mixColor(r, g,b));
}
}
catch (Exception e) {
e.getMessage();
}
}
return finalImage;
}
private static int mixColor(int red, int g, int b) {
return red<<16|g<<8|b;
}
public static int getRed(int rgb) {
return (rgb & 0x00ff0000) >> 16;
}
public static int getGreen(int rgb) {
return (rgb & 0x0000ff00) >> 8;
}
public static int getBlue(int rgb) {
return (rgb & 0x000000ff) >> 0;
}
}
The following might work.
The main change is that it first collects the locations of ALL dark pixels, then goes over them to assign the colour from their left neighbours.
import java.awt.image.BufferedImage;
import java.util.*;
public class BlackRedImage
{
public static BufferedImage Threshold( BufferedImage img )
{
int height = img.getHeight();
int width = img.getWidth();
BufferedImage finalImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
List<Integer> blackpixels = new ArrayList<Integer>();
for ( int x = 0; x < width; x++ )
{
for ( int y = 0; y < height; y++ )
{
int rgb = img.getRGB(x, y); // Get the pixel in question
int r = BlackRedImage.getRed(rgb);
int g = BlackRedImage.getGreen(rgb);
int b = BlackRedImage.getBlue(rgb);
if ( (r < 5) && (g < 5) && (b < 5) )
{ // record location of any "black" pixels found
blackpixels.add(x + (y * width));
}
finalImage.setRGB(x, y, rgb);
}
}
// Now loop through all "black" pixels, setting them to the colour found to their left
for ( int blackPixelLocation: blackpixels )
{
if ( blackPixelLocation % width == 0 )
{ // these pixels are on the left most edge, therefore they do not have a left neighbour!
continue;
}
int y = blackPixelLocation / width;
int x = blackPixelLocation - (width * y);
int rgb = img.getRGB(x - 1, y); // Get the pixel to the left of the "black" pixel
System.out.println("x = " + x + ", y = " + y + ", rgb = " + rgb);
finalImage.setRGB(x, y, rgb);
}
return finalImage;
}
private static int mixColor( int red, int g, int b )
{
return red << 16 | g << 8 | b;
}
public static int getRed( int rgb )
{
return (rgb & 0x00ff0000) >> 16;
}
public static int getGreen( int rgb )
{
return (rgb & 0x0000ff00) >> 8;
}
public static int getBlue( int rgb )
{
return (rgb & 0x000000ff) >> 0;
}
}
EDIT: Here is a simpler version (doesn't collect the black pixels)
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.util.*;
public class ColourMove
{
public static BufferedImage Threshold( BufferedImage img )
{
int width = img.getWidth();
int height = img.getHeight();
BufferedImage finalImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
for ( int x = 1; x < width; x++ ) // Start at 1 as the left most edge doesn't have a left neighbour
{
for ( int y = 0; y < height; y++ )
{
Color colour = new Color(img.getRGB(x, y));
int red = colour.getRed();
int green = colour.getGreen();
int blue = colour.getBlue();
if ( (red < 5) && (green < 5) && (blue < 5) )
{ // Encountered a "black" pixel, now replace it with it's left neighbour
finalImage.setRGB(x, y, img.getRGB(x - 1, y));
}
else
{ // Non-black pixel
finalImage.setRGB(x, y, colour.getRGB());
}
}
}
return finalImage;
}
}

Find color with tolerance in a BufferedImage

I'm writing a method that will attempt to find a color in a BufferedImage. At the moment, the method works by taking a screencap and then scanning the image for a specific color. Now I'd like to add some RGB tolerance, so if the user is trying to find color (1, 3, 5) with tolerance 1, any color within ±1 in R, G or B will return true.
I could solve this by first generating an ArrayList of RGB values that match, and then for each pixel going through the array and checking each value. The problem is that this would probably get VERY slow for high tolerances on large images.
Is there a more efficient or possibly a built-in way I can do this? Here is my method as it stands right now. Thank you!
public static Point findColor(Box searchArea, int color){
System.out.println("Test");
BufferedImage image = generateScreenCap(searchArea);
for (int i = 0; i < image.getWidth(); i++) {
for (int j = 0; j < image.getHeight(); j++) {
if((image.getRGB(i, j)*-1)==color){
return new Point(i + searchArea.x1, j + searchArea.y1);
}
}
}
return new Point(-1, -1);
}
Edit: I'm using int RGB values for all comparisons, so instead of Color[1, 1, 1] I use Color.getRGB(), which returns a negative int that I convert to positive for end-user simplicity.
You need to compare the individual RGB values, not the "whole" packed color, if you want a custom tolerance. Here is the code; it is not tested, but you get the idea:
public static Point findColor(Box searchArea, int r, int g, int b, int tolerance) {
// Pre-calc RGB "tolerance" values out of the loop (min is 0 and max is 255)
int minR = Math.max(r - tolerance, 0);
int minG = Math.max(g - tolerance, 0);
int minB = Math.max(b - tolerance, 0);
int maxR = Math.min(r + tolerance, 255);
int maxG = Math.min(g + tolerance, 255);
int maxB = Math.min(b + tolerance, 255);
BufferedImage image = generateScreenCap(searchArea);
for (int i = 0; i < image.getWidth(); i++) {
for (int j = 0; j < image.getHeight(); j++) {
// get single RGB pixel
int color = image.getRGB(i, j);
// get individual RGB values of that pixel
// (could use Java's Color class but this is probably a little faster)
int red = (color >> 16) & 0x000000FF;
int green = (color >> 8) & 0x000000FF;
int blue = (color) & 0x000000FF;
if ( (red >= minR && red <= maxR) &&
(green >= minG && green <= maxG) &&
(blue >= minB && blue <= maxB) )
return new Point(i + searchArea.x1, j + searchArea.y1);
}
}
return new Point(-1, -1);
}

How to effectively color pixels in a BufferedImage?

I'm using the following piece of code to iterate over all pixels in an image and draw a red 1x1 square over the pixels that are within a certain RGB-tolerance. I guess there is a more efficient way to do this? Any ideas appreciated. (bi is a BufferedImage and g2 is a Graphics2D with its color set to Color.RED).
Color targetColor = new Color(selectedRGB);
for (int x = 0; x < bi.getWidth(); x++) {
for (int y = 0; y < bi.getHeight(); y++) {
Color pixelColor = new Color(bi.getRGB(x, y));
if (withinTolerance(pixelColor, targetColor)) {
g2.drawRect(x, y, 1, 1);
}
}
}
private boolean withinTolerance(Color pixelColor, Color targetColor) {
int pixelRed = pixelColor.getRed();
int pixelGreen = pixelColor.getGreen();
int pixelBlue = pixelColor.getBlue();
int targetRed = targetColor.getRed();
int targetGreen = targetColor.getGreen();
int targetBlue = targetColor.getBlue();
return (((pixelRed >= targetRed - tolRed) && (pixelRed <= targetRed + tolRed)) &&
((pixelGreen >= targetGreen - tolGreen) && (pixelGreen <= targetGreen + tolGreen)) &&
((pixelBlue >= targetBlue - tolBlue) && (pixelBlue <= targetBlue + tolBlue)));
}
Rather than drawing a 1x1 rectangle through Graphics2D, you can set the pixel directly:
if (withinTolerance(pixelColor, targetColor)) {
    bi.setRGB(x, y, 0xFFFF0000);
}
BufferedImage's setRGB method's third parameter, as explained in the Javadoc, takes a pixel in TYPE_INT_ARGB form:
8 bits for the alpha (FF here, fully opaque)
8 bits for the red component (FF here, fully flashy red)
8 bits for the green component (0, no green)
8 bits for the blue component.
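A small illustration of that layout (the values are just an example):
int argb = (0xFF << 24) | (0xFF << 16) | (0x00 << 8) | 0x00; // opaque red, i.e. 0xFFFF0000
int alpha = (argb >>> 24) & 0xFF; // 255
int red   = (argb >>> 16) & 0xFF; // 255
int green = (argb >>> 8)  & 0xFF; // 0
int blue  =  argb         & 0xFF; // 0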

Crop image to smallest size by removing transparent pixels in java

I have a sprite sheet which has each image centered in a 32x32 cell. The actual images are not 32x32, but slightly smaller. What I'd like to do is take a cell and crop the transparent pixels so the image is as small as it can be.
How would I do that in Java (JDK 6)?
Here is an example of how I'm currently breaking up the tile sheet into cells:
BufferedImage tilesheet = ImageIO.read(getClass().getResourceAsStream("/sheet.png"));
for (int i = 0; i < 15; i++) {
Image img = tilesheet.getSubimage(i * 32, 0, 32, 32);
// crop here..
}
My current idea was to test each pixel from the center working my way out to see if it is transparent, but I was wondering if there would be a faster/cleaner way of doing this.
There's a trivial solution: scan every pixel. The algorithm below always runs in O(w·h).
private static BufferedImage trimImage(BufferedImage image) {
int width = image.getWidth();
int height = image.getHeight();
int top = height / 2;
int bottom = top;
int left = width / 2 ;
int right = left;
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
if (image.getRGB(x, y) != 0){
top = Math.min(top, y);
bottom = Math.max(bottom, y);
left = Math.min(left, x);
right = Math.max(right, x);
}
}
}
return image.getSubimage(left, top, right - left + 1, bottom - top + 1);
}
But this is much more efficient:
private static BufferedImage trimImage(BufferedImage image) {
WritableRaster raster = image.getAlphaRaster();
int width = raster.getWidth();
int height = raster.getHeight();
int left = 0;
int top = 0;
int right = width - 1;
int bottom = height - 1;
int minRight = width - 1;
int minBottom = height - 1;
top:
for (;top <= bottom; top++){
for (int x = 0; x < width; x++){
if (raster.getSample(x, top, 0) != 0){
minRight = x;
minBottom = top;
break top;
}
}
}
left:
for (;left < minRight; left++){
for (int y = height - 1; y > top; y--){
if (raster.getSample(left, y, 0) != 0){
minBottom = y;
break left;
}
}
}
bottom:
for (;bottom > minBottom; bottom--){
for (int x = width - 1; x >= left; x--){
if (raster.getSample(x, bottom, 0) != 0){
minRight = x;
break bottom;
}
}
}
right:
for (;right > minRight; right--){
for (int y = bottom; y >= top; y--){
if (raster.getSample(right, y, 0) != 0){
break right;
}
}
}
return image.getSubimage(left, top, right - left + 1, bottom - top + 1);
}
This algorithm follows the idea from pepan's answer (see above) and is 2 to 4 times more efficient. The difference is that it never scans any pixel twice and tries to contract the search range at each stage.
The worst case of the method's performance is O(w·h - a·b).
This code works for me. The algorithm is simple: it iterates from the left/top/right/bottom of the picture and finds the first pixel in each column/row that is not transparent. It then remembers the new corners of the trimmed picture and finally returns the sub-image of the original image.
There are things which could be improved.
The algorithm expects an alpha byte in the data; it will fail with an index-out-of-bounds exception if there is none.
The algorithm expects at least one non-transparent pixel in the picture; it will fail if the picture is completely transparent.
private static BufferedImage trimImage(BufferedImage img) {
final byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
int width = img.getWidth();
int height = img.getHeight();
int x0, y0, x1, y1; // the new corners of the trimmed image
int i, j; // i - horizontal iterator; j - vertical iterator
leftLoop:
for (i = 0; i < width; i++) {
for (j = 0; j < height; j++) {
if (pixels[(j*width+i)*4] != 0) { // alpha is the very first byte and then every fourth one
break leftLoop;
}
}
}
x0 = i;
topLoop:
for (j = 0; j < height; j++) {
for (i = 0; i < width; i++) {
if (pixels[(j*width+i)*4] != 0) {
break topLoop;
}
}
}
y0 = j;
rightLoop:
for (i = width-1; i >= 0; i--) {
for (j = 0; j < height; j++) {
if (pixels[(j*width+i)*4] != 0) {
break rightLoop;
}
}
}
x1 = i+1;
bottomLoop:
for (j = height-1; j >= 0; j--) {
for (i = 0; i < width; i++) {
if (pixels[(j*width+i)*4] != 0) {
break bottomLoop;
}
}
}
y1 = j+1;
return img.getSubimage(x0, y0, x1-x0, y1-y0);
}
I think this is exactly what you should do: loop through the array of pixels, check the alpha, and discard fully transparent rows and columns. Be aware, though, that for a concave shape such as a star the crop can only shrink to the shape's bounding box, not follow its outline.
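As a tiny sketch of that per-pixel alpha check (the helper name is just illustrative):
// True if the pixel at (x, y) is not fully transparent.
static boolean hasAlpha(BufferedImage img, int x, int y) {
    return ((img.getRGB(x, y) >>> 24) & 0xFF) != 0;
}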
A simple fix for the code above. I used the median of the RGB channels and fixed the min() handling of x and y:
private static BufferedImage trim(BufferedImage img) {
int width = img.getWidth();
int height = img.getHeight();
int top = height / 2;
int bottom = top;
int left = width / 2 ;
int right = left;
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
if (isFg(img.getRGB(x, y))){
top = Math.min(top, y);
bottom = Math.max(bottom, y);
left = Math.min(left, x);
right = Math.max(right, x);
}
}
}
return img.getSubimage(left, top, right - left, bottom - top);
}
private static boolean isFg(int v) {
Color c = new Color(v);
return(isColor((c.getRed() + c.getGreen() + c.getBlue())/2));
}
private static boolean isColor(int c) {
return c > 0 && c < 255;
}
Hi, I tried the following. In the image files, idle1.png is the image with a big transparent box while testing.png is the same image with the minimum bounding box.
BufferedImage tempImg = ImageIO.read(new File(fileNPath));
WritableRaster tempRaster = tempImg.getAlphaRaster();
int x1 = getX1(tempRaster);
int y1 = getY1(tempRaster);
int x2 = getX2(tempRaster);
int y2 = getY2(tempRaster);
System.out.println("x1:"+x1+" y1:"+y1+" x2:"+x2+" y2:"+y2);
BufferedImage temp = tempImg.getSubimage(x1, y1, x2 - x1, y2 - y1);
//for idle1.png
String filePath = fileChooser.getCurrentDirectory() + "\\"+"testing.png";
System.out.println("filePath:"+filePath);
ImageIO.write(temp,"png",new File(filePath));
where the get functions are
public int getY1(WritableRaster raster) {
//top of character
for (int y = 0; y < raster.getHeight(); y++) {
for (int x = 0; x < raster.getWidth(); x++) {
if (raster.getSample(x, y,0) != 0) {
if(y>0) {
return y - 1;
}else{
return y;
}
}
}
}
return 0;
}
public int getY2(WritableRaster raster) {
//ground plane of character
for (int y = raster.getHeight()-1; y > 0; y--) {
for (int x = 0; x < raster.getWidth(); x++) {
if (raster.getSample(x, y,0) != 0) {
return y + 1;
}
}
}
return 0;
}
public int getX1(WritableRaster raster) {
//left side of character
for (int x = 0; x < raster.getWidth(); x++) {
for (int y = 0; y < raster.getHeight(); y++) {
if (raster.getSample(x, y,0) != 0) {
if(x > 0){
return x - 1;
}else{
return x;
}
}
}
}
return 0;
}
public int getX2(WritableRaster raster) {
//right side of character
for (int x = raster.getWidth()-1; x > 0; x--) {
for (int y = 0; y < raster.getHeight(); y++) {
if (raster.getSample(x, y,0) != 0) {
return x + 1;
}
}
}
return 0;
}
Look at Idle1.png and the minimum bounding box idle = testing.png. Thank you for your help. Regards, Michael.
If your sheet already has transparent pixels, the BufferedImage returned by getSubimage() will, too. The default Graphics2D composite rule is AlphaComposite.SRC_OVER, which should suffice for drawImage().
If the sub-images have a distinct background color, use a LookupOp with a four-component LookupTable that sets the alpha component to zero for colors that match the background.
I'd traverse the pixel raster only as a last resort.
Addendum: Extra transparent pixels may interfere with collision detection, etc. Cropping them will require working with a WritableRaster directly. Rather than working from the center out, I'd start with the borders, using a pair of getPixels()/setPixels() methods that can modify a row or column at a time. If a whole row or column has zero alpha, mark it for elimination when you later get a sub-image.
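A rough sketch of that row-at-a-time idea, assuming the image actually has an alpha channel (image.getAlphaRaster() would return null otherwise); the method name is just illustrative:
// Returns true when an entire row of the alpha raster is fully transparent,
// reading the whole row with one getPixels() call instead of per-pixel getRGB().
static boolean rowIsTransparent(java.awt.image.Raster alphaRaster, int y) {
    int width = alphaRaster.getWidth();
    int[] alphas = alphaRaster.getPixels(0, y, width, 1, (int[]) null);
    for (int a : alphas) {
        if (a != 0) {
            return false;
        }
    }
    return true;
}
A fully transparent row found this way can be marked for elimination, and the final getSubimage() call then skips it.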
