crop image and merge it back without quality loss - java

I cropped an image into small pieces using .getSubimage():
int width = image.getWidth();
int height = image.getHeight();
int c = 4;
int r = 4;
int pWidth = width / c;
int pHeight = height / r;
int x = 0;
int y = 0;
for (int i = 0; i < c; i++) {
    y = 0;
    for (int j = 0; j < r; j++) {
        if ((r - j) == 1 && ((c - i) == 1)) {
            BufferedImage SubImage = image.getSubimage(x, y, width - x, height - y);
            File outfile = new File(imageName + "/" + "subPic" + i + " " + j + " " + "jpeg");
            ImageIO.write(SubImage, "jpeg", outfile);
            y += pHeight;
        } else if ((r - j) == 1) {
            BufferedImage SubImage = image.getSubimage(x, y, pWidth, height - y);
            File outfile = new File(imageName + "/" + "subPic" + i + " " + j + " " + "jpeg");
            ImageIO.write(SubImage, "jpeg", outfile);
            y += pHeight;
        } else if ((c - i) == 1) {
            BufferedImage SubImage = image.getSubimage(x, y, width - x, pHeight);
            File outfile = new File(imageName + "/" + "subPic" + i + " " + j + " " + "jpeg");
            ImageIO.write(SubImage, "jpeg", outfile);
            y += pHeight;
        } else {
            BufferedImage SubImage = image.getSubimage(x, y, pWidth, pHeight);
            y += pHeight;
            File outfile = new File(imageName + "/" + "subPic" + i + " " + j + " " + "jpeg");
            ImageIO.write(SubImage, "jpeg", outfile);
        }
    }
    x += pWidth;
}
and then merged them all back using g2d.drawImage():
BufferedImage combinedIm = new BufferedImage(275, 183, BufferedImage.TYPE_3BYTE_BGR);
Graphics2D g2d = combinedIm.createGraphics();
int currWidth = 0;
int currHeight = 0;
int iter = -1;
for (int i = 0; i < x; i++) {
    currHeight = 0;
    for (int j = 0; j < y; j++) {
        iter += 1;
        g2d.drawImage(imagePieces[iter], currWidth, currHeight, null);
        currHeight += imagePieces[iter].getHeight();
    }
    currWidth += imagePieces[iter].getWidth();
}
g2d.dispose();
where imagePieces is an array holding the sub-images of the main image. But the merged image has worse quality, so my quality check always returns false:
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        if (mainPic.getRGB(x, y) != subPic.getRGB(x, y)) {
            return false;
(The original image and the merged image were shown here for comparison.)
What else can I use to cut and merge the image so it will pass the equality check? Or are there better ways to check whether the images are the same?

I see that you are compressing the pieces as .jpg, so you need to decode them back in order to obtain the RGB data. I suppose the test fails because of the data loss from compressing and decoding the pieces: JPEG is a lossy format.
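For example, a minimal sketch of writing one of the interior pieces losslessly as PNG instead, reusing the image, x, y, pWidth, pHeight, imageName, i and j variables from the question's loop (the underscore and the ".png" extension are just illustrative naming choices):

// Writing the sub-image in a lossless format (PNG) keeps every pixel value intact.
BufferedImage piece = image.getSubimage(x, y, pWidth, pHeight);
File outfile = new File(imageName + "/" + "subPic" + i + "_" + j + ".png");
ImageIO.write(piece, "png", outfile);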
Try not to compress the pieces: store them in a lossless format such as .bmp or .png, as in the sketch above. Or you can modify your test and replace the strict comparison with an inequality, so you can tolerate the aforementioned quality loss, e.g.:
final int max_diff = 3;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        if (Math.abs(mainPic.getRGB(x, y) - subPic.getRGB(x, y)) > max_diff) {
            return false;
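Note that subtracting two packed ARGB ints does not give a meaningful per-channel difference, so a tolerance check is better done channel by channel. A minimal sketch of that comparison, assuming the same mainPic, subPic, width and height as above:

final int maxDiff = 3;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int a = mainPic.getRGB(x, y);
        int b = subPic.getRGB(x, y);
        // compare each 8-bit channel separately
        int dr = Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF));
        int dg = Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF));
        int db = Math.abs((a & 0xFF) - (b & 0xFF));
        if (dr > maxDiff || dg > maxDiff || db > maxDiff) {
            return false;
        }
    }
}
return true;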

Related

Edge detection using Sobel operator

So I am trying to write a program that uses the Sobel operator to detect edges in an image. Below is my method.
/**
 * Detects edges.
 * @param url - filepath to the image.
 */
private void detect(String url) {
    BufferedImage orgImage = readImage(url);
    int width = orgImage.getWidth();
    int height = orgImage.getHeight();
    BufferedImage resImage = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);
    WritableRaster inraster = orgImage.getRaster();
    WritableRaster outraster = resImage.getRaster();
    System.out.println("size: " + width + "X" + height);
    // Loop through every pixel, ignores the edges as these will throw out of
    // bounds.
    for (int i = 1; i < width - 2; i++) {
        for (int j = 1; j < height - 2; j++) {
            // Compute filter result, loops over in a
            // box pattern.
            int sum = 0;
            for (int x = -1; x <= 1; x++) {
                for (int y = -1; y <= 1; y++) {
                    int sum1 = i + y;
                    int sum2 = j + x;
                    int p = inraster.getSample(sum1, sum2, 0);
                    sum = sum + p;
                }
            }
            int q = (int) Math.round(sum / 9.0);
            if (q < 150) {
                q = 0;
            } else {
                q = 255;
            }
            outraster.setSample(i, j, 0, q);
        }
    }
    writeImage(resImage, "jpg", "EdgeDetection " + url);
}
This mostly just gives me a black and white image:
(Before and after images were shown here.)
I am obviously calculating the pixel value wrong somehow. I am also not sure what value to use when deciding whether the pixel should be black or white.
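For reference, the loop above averages a 3x3 box rather than applying the Sobel kernels, so it behaves like a blur plus threshold. A minimal sketch of the actual Sobel computation for one pixel, reusing i, j, inraster and outraster from the method above (the kernels and gradient magnitude are the standard definition; the 150 threshold is kept from the original code and is only a guess):

// Standard Sobel kernels for the horizontal and vertical gradients
int[][] gxKernel = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
int[][] gyKernel = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
int gx = 0;
int gy = 0;
for (int ky = -1; ky <= 1; ky++) {
    for (int kx = -1; kx <= 1; kx++) {
        int p = inraster.getSample(i + kx, j + ky, 0);
        gx += gxKernel[ky + 1][kx + 1] * p;
        gy += gyKernel[ky + 1][kx + 1] * p;
    }
}
// Gradient magnitude, clamped to 0-255, then thresholded to black or white
int magnitude = (int) Math.min(255, Math.round(Math.sqrt(gx * gx + gy * gy)));
outraster.setSample(i, j, 0, magnitude > 150 ? 1 : 0); // 1 = white in a TYPE_BYTE_BINARY raster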

Java: implementation of Gaussian Blur

I need to implement Gaussian Blur in Java for 3x3, 5x5 and 7x7 matrices. Can you correct me if I'm wrong?
I have a 3x3 matrix (M) (the middle value is M(0, 0)):
1 2 1
2 4 2
1 2 1
I take one pixel (P) from the image and, for each of its nearest pixels, compute:
s = M(-1, -1) * P(-1, -1) + M(-1, 0) * P(-1, 0) + ... + M(1, 1) * P(1, 1)
And then divide it by the total value of the matrix:
P'(i, j) = s / (M(-1, -1) + M(-1, 0) + ... + M(1, 1))
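For the 3x3 kernel above that sum is 1 + 2 + 1 + 2 + 4 + 2 + 1 + 2 + 1 = 16, so this reduces to P'(i, j) = s / 16.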
That's all my program does. I leave the edge pixels unchanged.
My program:
for (int i = 1; i < height - 1; i++) {
    for (int j = 1; j < width - 1; j++) {
        int sum = 0, l = 0;
        for (int m = -1; m <= 1; m++) {
            for (int n = -1; n <= 1; n++) {
                try {
                    System.out.print(l + " ");
                    sum += mask3[l++] * Byte.toUnsignedInt((byte) source[(i + m) * height + j + n]);
                } catch (ArrayIndexOutOfBoundsException e) {
                    int ii = (i + m) * height, jj = j + n;
                    System.out.println("Pixels[" + ii + "][" + jj + "] " + i + ", " + j);
                    System.exit(0);
                }
            }
            System.out.println();
        }
        System.out.println();
        output[i * width + j] = sum / maskSum[0];
    }
}
I get source from a BufferedImage like this:
int[] source = image.getRGB(0, 0, width, height, null, 0, width);
(The input image and the resulting distorted output were shown here.)
Can you tell me what is wrong with my program?
First of all, your formula for calculating the index in the source array is wrong. The image data is stored in the array one pixel row after the other. Therefore the index given x and y is calculated like this:
index = x + y * width
Furthermore, the color channels are stored in different bits of the int, so you cannot simply do the calculations on the whole int, since that allows channels to influence other channels.
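For illustration only (the complete solution below does the same thing inside its loops), this is how a single pixel would be indexed and split into channels, assuming the pixel array input was filled with getRGB(0, 0, width, height, null, 0, width):

int argb = input[x + y * width];   // row-major index into the pixel array
int r = (argb >>> 16) & 0xFF;      // red
int g = (argb >>> 8) & 0xFF;       // green
int b = argb & 0xFF;               // blue
// ... filter each channel separately, then repack with full opacity:
int packed = 0xFF000000 | (r << 16) | (g << 8) | b;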
The following solution should work (even though it just leaves the pixels at the bounds transparent):
// Requires java.util.stream.IntStream to be imported.
public static BufferedImage blur(BufferedImage image, int[] filter, int filterWidth) {
    if (filter.length % filterWidth != 0) {
        throw new IllegalArgumentException("filter contains an incomplete row");
    }
    final int width = image.getWidth();
    final int height = image.getHeight();
    final int sum = IntStream.of(filter).sum();
    int[] input = image.getRGB(0, 0, width, height, null, 0, width);
    int[] output = new int[input.length];
    final int pixelIndexOffset = width - filterWidth;
    final int centerOffsetX = filterWidth / 2;
    final int centerOffsetY = filter.length / filterWidth / 2;
    // apply filter
    for (int h = height - filter.length / filterWidth + 1, w = width - filterWidth + 1, y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int r = 0;
            int g = 0;
            int b = 0;
            for (int filterIndex = 0, pixelIndex = y * width + x;
                    filterIndex < filter.length;
                    pixelIndex += pixelIndexOffset) {
                for (int fx = 0; fx < filterWidth; fx++, pixelIndex++, filterIndex++) {
                    int col = input[pixelIndex];
                    int factor = filter[filterIndex];
                    // sum up color channels separately
                    r += ((col >>> 16) & 0xFF) * factor;
                    g += ((col >>> 8) & 0xFF) * factor;
                    b += (col & 0xFF) * factor;
                }
            }
            r /= sum;
            g /= sum;
            b /= sum;
            // combine channels with full opacity
            output[x + centerOffsetX + (y + centerOffsetY) * width] = (r << 16) | (g << 8) | b | 0xFF000000;
        }
    }
    BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    result.setRGB(0, 0, width, height, output, 0, width);
    return result;
}
int[] filter = {1, 2, 1, 2, 4, 2, 1, 2, 1};
int filterWidth = 3;
BufferedImage blurred = blur(img, filter, filterWidth);

Numerical image recognition in Java

I want to recognize the numbers in Pic1.
I did some work on it, and it produces Pic2.
Here is my code:
package captchadecproj;

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

/**
 * @author Mr__Hamid
 */
public class NewClass {

    public static void main(String args[]) throws IOException {
        int width = 110;
        int heigth = 40;
        BufferedImage image1 = new BufferedImage(width, heigth, BufferedImage.TYPE_INT_RGB);
        BufferedImage num1 = new BufferedImage(width, heigth, BufferedImage.TYPE_INT_RGB);
        BufferedImage image = null;
        File f = null;
        try {
            f = new File("E:\\Desktop 2\\Captcha Project\\CaptchaDecoder\\captchaDecProj\\167.png");
            image = new BufferedImage(width, heigth, BufferedImage.TYPE_INT_ARGB);
            image = ImageIO.read(f);
            System.out.println("Read!");
        } catch (IOException e) {
            System.out.println("Error" + e);
        }
        int[] pixel = null;
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                pixel = image.getRaster().getPixel(x, y, new int[3]);
                if (pixel[0] < 30 & pixel[1] > 130 & pixel[2] < 110 & pixel[2] > 60) {
                    image1.setRGB(x, y, Integer.parseInt("ffffff".trim(), 16));
                    System.out.println(pixel[0] + " - " + pixel[1] + " - " + pixel[2] + " - " + (image.getWidth() * y + x));
                } else {
                    image1.setRGB(x, y, 1);
                    System.out.println(pixel[0] + " - " + pixel[1] + " - " + pixel[2] + " - " + (image.getWidth() * y + x));
                }
            }
        }
        try {
            f = new File("D:\\Original.jpg");
            ImageIO.write(image, "jpg", f);
            f = new File("D:\\black&White.jpg");
            ImageIO.write(image1, "jpg", f);
            System.out.println("Writed");
        } catch (IOException e) {
            System.out.println("Error" + e);
        }
    }
}
I have two questions:
How can I split these numbers?
How can I recognize which one is my number?
For example in the uploaded pic: 7, 1, 6
This is an answer to the first question, which is how to split the numbers.
I recommend converting your image to a two-dimensional array first; all operations will then be performed much more quickly than when you go through BufferedImage.
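For reference, a minimal sketch of that conversion (the splitting code below reads pixels directly with getRGB instead):

int w = image.getWidth();
int h = image.getHeight();
int[] flat = image.getRGB(0, 0, w, h, null, 0, w); // packed ARGB, one row after another
int[][] pixels = new int[h][w];
for (int row = 0; row < h; row++) {
    // copy one row of the flat array into one row of the 2D array
    System.arraycopy(flat, row * w, pixels[row], 0, w);
}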
BufferedImage image = ImageIO.read(new URL("http://i.stack.imgur.com/QaTj5.jpg"));
int startPos = 0, lastValue = 0;
Set<Integer> colours = new HashSet<>();
for (int x = 0; x < image.getWidth(); x++) {
    int histValue = 0;
    for (int y = 0; y < image.getHeight(); y++) {
        colours.add(image.getRGB(x, y));
        if (image.getRGB(x, y) == 0xffffFFFF) {
            histValue++;
        }
    }
    if (histValue == 0 && lastValue == 0) {
        startPos = x;
    } else if (histValue == 0 && lastValue != 0) {
        BufferedImage segment = image.getSubimage(startPos, 0, x - startPos, image.getHeight());
        ImageIO.write(segment, "jpg", new File("Segment" + startPos + ".jpg"));
    }
    lastValue = histValue;
}
if (lastValue != 0) {
    BufferedImage segment = image.getSubimage(startPos, 0, image.getWidth() - startPos, image.getHeight());
    ImageIO.write(segment, "jpg", new File("Segment" + startPos + ".jpg"));
}
Now all you need to do is find a decent algorithm for OCR.

Finding white rectangle in an image

I'm trying to find a white rectangle in an image. The rectangle size is fixed. This is what I've come up with so far:
BufferedImage bImage = bufferedImage;
int height = bufferedImage.getHeight(); // ~1100px
int width = bufferedImage.getWidth(); // ~1600px
int neededWidth = width / 2;
int neededHeight = 150;
int x = 0;
int y = 0;
boolean breaker = false;
boolean found = false;
int rgb = 0xFF00FF00;
int fx, fy;
fx = fy = 0;
JavaLogger.log.info("width, height: " + w + ", " + h);
while ((x != (width / 2) || y != (height - neededHeight)) && found == false) {
    for (int i = y; i - y < neededHeight + 1; i++) {
        for (int j = x; j - x < neededWidth + 1; j++) { // It could be that the +1 is needed
            //JavaLogger.log.info("x,y: " + j + ", " + i);
            long pixel = bImage.getRGB(j, i);
            if (pixel != colorWhite && pixel != -1) {
                //bImage.setRGB(j, i, rgb);
                //JavaLogger.log.info("x,y: " + (j+x) + ", " + (i+y));
                breaker = true;
                break;
            } else {
                //bImage.setRGB(j, i, 0xFFFFFF00);
            }
            //printPixelARGB(pixel);
            if ((i - y == neededHeight - 10) && j - x == neededWidth - 10) {
                JavaLogger.log.info("width, height: " + x + ", " + y + "," + j + ", " + i);
                fx = j;
                fy = i;
                found = true;
                breaker = true;
                break;
            }
        }
        if (breaker) {
            breaker = false;
            break;
        }
    }
    if (x < (width / 2)) {
        x++;
    } else {
        if (y < (height - neededHeight)) {
            y++;
            x = 0;
        } else {
            break;
        }
    }
    //JavaLogger.log.info("width, height: " + x + ", " + y);
}
if (found == true) {
    for (int i = y; i < fy; i++) {
        for (int j = x; j < fx; j++) {
            bImage.setRGB(j, i, 0xFF00FF3F);
        }
    }
}
JavaLogger.log.info("width, height: " + w + ", " + h);
This works OK if the rectangle I need is close to the origin (0;0), but as it gets further away the performance degrades quite severely. I'm wondering if there's something that can be done.
For example, this search took nearly 8 s, which is quite a lot.
I'm thinking this can definitely be done more efficiently. Maybe some blob finding? I've read about it, but I have no idea how to apply it.
Also, I'm new to both Java and image processing, so any help is appreciated.
This is very rough, but it successfully finds all the white pixels in the image; more checking can be done to ensure it is the size you want and that everything is there, but the basics are there.
PS: I have not tested it with your image. r and this.rc are the picture size, and p and this.px are the inner rectangle size.
public static void main(String[] args) {
    JFrame frame = new JFrame();
    final int r = 100;
    final int p = 10;
    NewJPanel pan = new NewJPanel(r, p, new A() {
        @Override
        public void doImage(BufferedImage i) {
            int o = 0;
            for (int j = 0; j < i.getWidth() - p; j++) {
                for (int k = 0; k < i.getHeight() - p; k++) {
                    PixelGrabber pix2 = new PixelGrabber(i, j, k, p, p, false);
                    try {
                        pix2.grabPixels();
                    } catch (InterruptedException ex) {}
                    int pixelColor = pix2.getColorModel().getRGB(pix2.getPixels());
                    Color c = new Color(pixelColor);
                    if (c.equals(Color.WHITE)) {
                        System.out.println("Found at : x:" + j + ",y:" + k);
                    }
                }
            }
        }
    });
    frame.getContentPane().add(pan);
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setSize(500, 500);
    frame.setLocationRelativeTo(null);
    frame.setVisible(true);
}

private interface A {
    void doImage(BufferedImage i);
}

private static class NewJPanel extends JPanel {

    private static final long serialVersionUID = -5348356640373105209L;

    private BufferedImage image = null;
    private int px;
    private int rc;
    private A a;

    public NewJPanel(int r, int p, A a) {
        this.px = p;
        this.rc = r;
        this.a = a;
    }

    public BufferedImage getImage() {
        return image;
    }

    @Override public void paint(Graphics g) {
        super.paint(g);
        image = new BufferedImage(this.rc, this.rc, BufferedImage.TYPE_INT_ARGB);
        java.awt.Graphics2D g2 = image.createGraphics();
        g2.setColor(Color.BLACK);
        g2.fillRect(0, 0, this.rc, this.rc);
        g2.setColor(Color.WHITE);
        g2.fillRect(
                new Random().nextInt(this.rc - this.px),
                new Random().nextInt(this.rc - this.px),
                this.px, this.px);
        g.drawImage(image, this.rc, this.rc, this);
        this.a.doImage(this.image);
    }
}
I'm no expert but I don't think the code is the problem - you need to change your algorithm. I would start by recursively searching for a single white pixel on the 2d plane, something like:
findWhitePixel(square) {
    look at the pixel in the middle of 'square' - if it's white return it, otherwise:
    findWhitePixel(top-right-quarter of 'square')
    findWhitePixel(top-left-quarter of 'square')
    findWhitePixel(bottom-right-quarter of 'square')
    findWhitePixel(bottom-left-quarter of 'square')
}
After you find a white pixel, try traversing up, down, left and right from it to find the borders of your shape. If it's a given that there can only be rectangles, you're done. If there might be other shapes (triangles, circles, etc.) you'll need some verification here.
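One possible Java rendering of that idea, as a rough sketch: it assumes "white" means getRGB(x, y) == 0xFFFFFFFF and uses java.awt.Point; the quarters are chosen so that they tile the whole region, which means the centre pixel is simply examined again one level deeper.

// Returns the location of a white pixel inside the w x h region with top-left (x, y), or null.
static Point findWhitePixel(BufferedImage img, int x, int y, int w, int h) {
    if (w <= 0 || h <= 0) {
        return null;                      // empty region
    }
    int cx = x + w / 2;
    int cy = y + h / 2;
    if (img.getRGB(cx, cy) == 0xFFFFFFFF) {
        return new Point(cx, cy);         // fast path: centre pixel is white
    }
    if (w == 1 && h == 1) {
        return null;                      // only the centre pixel, already checked
    }
    int halfW = w / 2;
    int halfH = h / 2;
    Point found;
    if ((found = findWhitePixel(img, x, y, halfW, halfH)) != null) return found;             // top-left
    if ((found = findWhitePixel(img, x + halfW, y, w - halfW, halfH)) != null) return found; // top-right
    if ((found = findWhitePixel(img, x, y + halfH, halfW, h - halfH)) != null) return found; // bottom-left
    return findWhitePixel(img, x + halfW, y + halfH, w - halfW, h - halfH);                  // bottom-right
}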
What you are asking for can be solved by the operation known as "erosion". The erosion replaces every pixel by the darkest of all pixels in the rectangle of the requested size at that location (top-left corner). Here, darkest means that non-white supersedes white.
The output of the erosion is an image with W-1 fewer columns and H-1 fewer rows. Any white pixel in it corresponds to a solution.
In the lucky case of a rectangle shape, erosion is a separable operation. This means that you can erode first using a horizontal segment shape, then a vertical segment shape on the output of the first erosion. For a W x H rectangle size, this replaces W * H operations by W + H, a significant saving.
In the also lucky case of a binary image (non-white or white), erosion by a segment can be done extremely efficiently: in every row independently, find all contiguous runs of white pixels and turn the W-1 rightmost ones to non-white. Do the same to all columns, shortening the white runs by H-1 pixels. (A sketch of this two-pass approach follows the example below.)
Example: find all 3x2 rectangles:
####....####
##.....#..##
#..######...
.....###....
After 3x1 erosion:
####..####
##...#####
#########.
...#####..
After 1x2 erosion:
####.#####
##########
#########.
This algorithm takes constant time per pixel (regardless of the rectangle size). Properly implemented, it should take a handful of milliseconds.
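A minimal sketch of that separable binary erosion, assuming the image has already been thresholded into a boolean mask (true = white); after both passes, a true at (x, y) means a full rectW x rectH white rectangle has its top-left corner there:

// Erode a binary mask with a rectW x rectH rectangle in two separable passes.
static boolean[][] erode(boolean[][] white, int rectW, int rectH) {
    int h = white.length;
    int w = white[0].length;
    boolean[][] horiz = new boolean[h][w];
    // Horizontal pass: keep (x, y) only if the rectW pixels starting at x in row y are all white.
    for (int y = 0; y < h; y++) {
        int run = 0; // length of the white run starting at x
        for (int x = w - 1; x >= 0; x--) {
            run = white[y][x] ? run + 1 : 0;
            horiz[y][x] = run >= rectW;
        }
    }
    boolean[][] out = new boolean[h][w];
    // Vertical pass on that result: keep (x, y) only if the rectH entries below it are all set.
    for (int x = 0; x < w; x++) {
        int run = 0;
        for (int y = h - 1; y >= 0; y--) {
            run = horiz[y][x] ? run + 1 : 0;
            out[y][x] = run >= rectH;
        }
    }
    return out;
}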

Bitmap conversion: Creating bitmap that excludes transparent sides from transparent bitmap

I have a set of bitmaps. They are all transparent to some extent, and I don't know in advance which parts are transparent. I would like to create a new bitmap out of the original bitmap that excludes the transparent parts, but in a square. I think this image explains it:
I know how to create a bitmap out of an existing bitmap, but I don't know how to find out which part is transparent or how to use that to achieve my goal.
This is how I plan on doing this:
public Bitmap cutImage(Bitmap image) {
    Bitmap newBitmap = null;
    int width = image.getWidth();
    int height = image.getHeight();
    newBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(newBitmap);
    // This is where I need to find out the correct values of r1 and r2.
    Rect r1 = new Rect(?, ?, ?, ?);
    Rect r2 = new Rect(?, ?, ?, ?);
    canvas.drawBitmap(image, r1, r2, null);
    return newBitmap;
}
Does anyone know how to achieve this?
EDIT:
I got it to work using the following algorithm to find the left, right, top and bottom values:
private int x1;
private int x2;
private int y1;
private int y2;

private void findRectValues(Bitmap image) {
    for (int x = 0; x < image.getWidth(); x++) {
        for (int y = 0; y < image.getHeight(); y++) {
            if (image.getPixel(x, y) != Color.TRANSPARENT) {
                System.out.println("X1 is: " + x);
                x1 = x;
                break;
            }
        }
        if (x1 != 0)
            break;
    }
    for (int x = image.getWidth() - 1; x > 0; x--) {
        for (int y = 0; y < image.getHeight(); y++) {
            if (image.getPixel(x, y) != Color.TRANSPARENT) {
                System.out.println("X2 is: " + x);
                x2 = x;
                break;
            }
        }
        if (x2 != 0)
            break;
    }
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            if (image.getPixel(x, y) != Color.TRANSPARENT) {
                System.out.println("Y1 is: " + y);
                y1 = y;
                break;
            }
        }
        if (y1 != 0)
            break;
    }
    for (int y = image.getHeight() - 1; y > 0; y--) {
        for (int x = 0; x < image.getWidth(); x++) {
            if (image.getPixel(x, y) != Color.TRANSPARENT) {
                System.out.println("Y2 is: " + y);
                y2 = y;
                break;
            }
        }
        if (y2 != 0)
            break;
    }
}
I think this is a bit more efficient, and it works great for me:
public Bitmap cropBitmapToBoundingBox(Bitmap picToCrop, int unusedSpaceColor) {
    int[] pixels = new int[picToCrop.getHeight() * picToCrop.getWidth()];
    int marginTop = 0, marginBottom = 0, marginLeft = 0, marginRight = 0, i;
    picToCrop.getPixels(pixels, 0, picToCrop.getWidth(), 0, 0,
            picToCrop.getWidth(), picToCrop.getHeight());
    for (i = 0; i < pixels.length; i++) {
        if (pixels[i] != unusedSpaceColor) {
            marginTop = i / picToCrop.getWidth();
            break;
        }
    }
    outerLoop1:
    for (i = 0; i < picToCrop.getWidth(); i++) {
        for (int j = i; j < pixels.length; j += picToCrop.getWidth()) {
            if (pixels[j] != unusedSpaceColor) {
                marginLeft = j % picToCrop.getWidth();
                break outerLoop1;
            }
        }
    }
    for (i = pixels.length - 1; i >= 0; i--) {
        if (pixels[i] != unusedSpaceColor) {
            marginBottom = (pixels.length - i) / picToCrop.getWidth();
            break;
        }
    }
    outerLoop2:
    for (i = pixels.length - 1; i >= 0; i--) {
        for (int j = i; j >= 0; j -= picToCrop.getWidth()) {
            if (pixels[j] != unusedSpaceColor) {
                marginRight = picToCrop.getWidth() - (j % picToCrop.getWidth());
                break outerLoop2;
            }
        }
    }
    return Bitmap.createBitmap(picToCrop, marginLeft, marginTop,
            picToCrop.getWidth() - marginLeft - marginRight,
            picToCrop.getHeight() - marginTop - marginBottom);
}
If all the images you want to crop are more or less in the center of the original canvas, I guess you could do something like this:
Start from each border and work your way inwards, searching for non-transparent pixels.
Once you've found the top-left pixel and the bottom-right one, you'll have your desired target.
Copy the image as you please.
Now, the question that remains is what you consider a transparent pixel. Does alpha transparency count? If so, how much alpha until you decide it's transparent enough to be cut from the image?
To find the non-transparent area of your bitmap, iterate across the bitmap in x and y and find the min and max of the non-transparent region. Then crop the bitmap to those co-ordinates.
Bitmap CropBitmapTransparency(Bitmap sourceBitmap) {
    int minX = sourceBitmap.getWidth();
    int minY = sourceBitmap.getHeight();
    int maxX = -1;
    int maxY = -1;
    for (int y = 0; y < sourceBitmap.getHeight(); y++) {
        for (int x = 0; x < sourceBitmap.getWidth(); x++) {
            int alpha = (sourceBitmap.getPixel(x, y) >> 24) & 255;
            if (alpha > 0) { // pixel is not 100% transparent
                if (x < minX)
                    minX = x;
                if (x > maxX)
                    maxX = x;
                if (y < minY)
                    minY = y;
                if (y > maxY)
                    maxY = y;
            }
        }
    }
    if ((maxX < minX) || (maxY < minY))
        return null; // Bitmap is entirely transparent
    // crop bitmap to non-transparent area and return:
    return Bitmap.createBitmap(sourceBitmap, minX, minY, (maxX - minX) + 1, (maxY - minY) + 1);
}
