I'm a beginner in Java programming. I have to submit a client-server project and I am stuck on pixel comparison. According to my code, it accepts a BufferedImage and compares pixels. How do I store the pixel difference in the second image itself and return it?
Take a look at BufferedImage's getRGB(int x, int y) method. This returns the RGB value for the given (x, y) location as an int (converted to the default sRGB color model if necessary), which can then be compared to the corresponding location in the other image.
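A minimal sketch of that, assuming both images have the same dimensions (the method name and the choice to store the absolute per-channel difference are illustrative, not from the question):

import java.awt.image.BufferedImage;

// Writes the absolute per-channel difference of img1 and img2
// into img2, then returns it.
public static BufferedImage diffInto(BufferedImage img1, BufferedImage img2) {
    for (int y = 0; y < img1.getHeight(); y++) {
        for (int x = 0; x < img1.getWidth(); x++) {
            int p1 = img1.getRGB(x, y);
            int p2 = img2.getRGB(x, y);
            int r = Math.abs(((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF));
            int g = Math.abs(((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF));
            int b = Math.abs((p1 & 0xFF) - (p2 & 0xFF));
            img2.setRGB(x, y, 0xFF000000 | (r << 16) | (g << 8) | b);
        }
    }
    return img2;
}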
If you wish to perform a more detailed comparison you'll need to iterate over each image band separately, comparing the samples for that band with the corresponding band for the other image. (For example, an RGBA encoded image has four individual bands to compare, whereas a greyscale image has just one.)
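For instance, a sketch using the images' rasters (assuming both images have the same dimensions and band count):

import java.awt.image.Raster;

// Compare the two images band by band (R, G, B, and A if present).
Raster r1 = img1.getRaster();
Raster r2 = img2.getRaster();
for (int band = 0; band < r1.getNumBands(); band++) {
    for (int y = 0; y < img1.getHeight(); y++) {
        for (int x = 0; x < img1.getWidth(); x++) {
            int diff = r1.getSample(x, y, band) - r2.getSample(x, y, band);
            // ...accumulate or record the per-band difference here
        }
    }
}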
Obviously you could start by comparing the image dimensions to ensure they are equal before proceeding to the more detailed comparison.
Also, you should not expect people to paste detailed code solutions; that's not the way Stack Overflow works. People will be far more willing to help with specific problems, so you should try coding the solution yourself and post a code snippet if you get stuck.
Related
I'm looking to write a method in Java which detects and also stores "color clusters" in separate files.
For instance: a color cluster could be a green rectangle or any other section of a picture which contains very similar pixel colors within a given range:
Unfortunately, I've already thought through a thousand ways to solve this issue but nothing has worked so far. Does anyone know if there is already such a method, or how to solve this problem?
Look into the concept of a "graphics kernel" (more commonly called an image or convolution kernel).
Basically it is a relocatable array representing a pixel and its neighbors, coupled with an algorithm that computes some interesting quality about the pixel.
Kernels are evaluated against each pixel to give a value. A pseudocode example of a kernel might be:
value = sum of color_distance_between pixel and all neighbors
If the value is zero, then the pixel is exactly the same as its neighbors. If the value is not zero, then it has differing neighbors. Take care that all color distances are positive, or you might have color differences that cancel each other out.
Then your program runs through each pixel, determining whether it is similar to its neighbors. Large regions of pixels with no color distance can be grouped, and any pixel within such a group has roughly the same color.
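A sketch of that kernel in Java, evaluated over the 8-neighborhood (the neighborhood size is an illustrative choice; callers must keep x and y away from the image border):

import java.awt.image.BufferedImage;

// Sums the color distance between a pixel and its 8 neighbors.
// Returns 0 only if the pixel exactly matches every neighbor.
static int neighborDistance(BufferedImage img, int x, int y) {
    int c = img.getRGB(x, y);
    int sum = 0;
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            if (dx == 0 && dy == 0) continue;
            int p = img.getRGB(x + dx, y + dy);
            // Absolute per-channel differences keep every distance positive.
            sum += Math.abs(((c >> 16) & 0xFF) - ((p >> 16) & 0xFF))
                 + Math.abs(((c >> 8) & 0xFF) - ((p >> 8) & 0xFF))
                 + Math.abs((c & 0xFF) - (p & 0xFF));
        }
    }
    return sum;
}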
Right now I have composite code that produces an image from recorded data. I am trying to figure out a way to fill the spots in the image where no data was recorded (i.e. where it reads 0.0) with a new color. I have been experimenting a little with Graphics, but am not finding a way to just fill these empty spots. I would post an image if I had enough points...
But I hope you understand what I am trying to say.
Like the comment suggests, we really need more info.
If you use a BufferedImage then you can simply set a single pixel's color using the method setRGB(int x, int y, int rgb).
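For example, a sketch that repaints the empty spots (assuming "no data" ended up as black pixels; adjust the test to match how your 0.0 values were rendered):

import java.awt.Color;

int fill = Color.RED.getRGB(); // the replacement color (illustrative)
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        // Assumption: spots with no recorded data are pure black.
        if ((image.getRGB(x, y) & 0xFFFFFF) == 0) {
            image.setRGB(x, y, fill);
        }
    }
}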
I'm trying to figure out a good method for comparing two images in terms of their color. One idea I had was to take the average color of both images and subtract one average from the other to get a "color distance." Whichever two images have the smallest color distance would be a match. Does this seem like a viable option for identifying an image from a database of images?
Ideally I would like to use this to identify playing cards put through an image scanner.
For example if I were to scan a real version of this card onto my computer I would want to be able to compare that with all the images in my database to find the closest one.
Update:
I forgot to mention the challenges involved in my specific problem.
The scanned image of the card and the original image of the card are most likely going to be different sizes (in terms of width and height).
I need to make this as efficient as possible. I plan on using this to scan/identify hundreds of cards at a time. I figured that finding (and storing) a single average color value for each image would be far more efficient than comparing the individual pixels of each image in the database (the database has well over 10,000 images) for each scanned card that needed to be identified. The reason why I was asking about this was to see if anyone had tried to compare average color values before as a means of image recognition. I have a feeling it might not work as I envision due to issues with both color value precision and accuracy.
Update 2:
Here's an example of what I was envisioning.
Image to be identified = A
Images in database = { D1, D2 }
average color of image A = avg(A) = #8ba489
average color of images in database = { #58727a, #8ba489 }
D2 matches image A because the distance between #8ba489 and #8ba489 (zero) is less than the distance between #8ba489 and #58727a.
Of course the test image would not be an exact match with any of those images because it would be scanned in; however, I'm trying to find the closest match.
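A sketch of the averaging step described above (channel-wise means packed back into a single RGB int; the helper is illustrative):

import java.awt.image.BufferedImage;

// Averages the R, G and B channels separately and packs the result.
static int averageColor(BufferedImage img) {
    long r = 0, g = 0, b = 0;
    long n = (long) img.getWidth() * img.getHeight();
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int p = img.getRGB(x, y);
            r += (p >> 16) & 0xFF;
            g += (p >> 8) & 0xFF;
            b += p & 0xFF;
        }
    }
    return (int) (((r / n) << 16) | ((g / n) << 8) | (b / n));
}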
Content-based image retrieval (CBIR) can do the trick for you. There's LIRE, a Java library for that. You can even first try several approaches, using different color-based image features, with the demo. See https://code.google.com/p/lire/ for downloads & source. There's also the "Simple Application" which gets you started with indexing and search really fast.
Based on my experience I'd recommend using either the ColorLayout feature (if the images are not rotated), the OpponentHistogram, or the AutoColorCorrelogram. The CEDD feature might also yield good results, and it's the smallest, at roughly 60 bytes of data per image.
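As a rough sketch of comparing two images directly with one of those features (this assumes the classic LIRE API, where features implement LireFeature with extract(...) and getDistance(...); verify against the version you download):

import java.awt.image.BufferedImage;
import net.semanticmetadata.lire.imageanalysis.ColorLayout;

// Extract the ColorLayout feature from both images and compare them.
ColorLayout scanned = new ColorLayout();
ColorLayout candidate = new ColorLayout();
scanned.extract(scannedCard);      // BufferedImage of the scanned card
candidate.extract(databaseImage);  // BufferedImage from the database
float distance = scanned.getDistance(candidate); // smaller = more similar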
If you want to check color difference like this:
http://en.wikipedia.org/wiki/Color_difference
You can use Catalano Framework,
http://code.google.com/p/catalano-framework/
It works in Java and Android.
Example using Color Difference:
import Catalano.Imaging.Tools.ColorConverter;  // package paths per the Catalano source; verify against your version
import Catalano.Imaging.Tools.ColorDifference;
// Convert both RGB colors to CIE-Lab (D65 illuminant), then take the Delta C difference.
float[] lab = ColorConverter.RGBtoLAB(100, 120, 150, ColorConverter.CIE2_D65);
float[] lab2 = ColorConverter.RGBtoLAB(50, 80, 140, ColorConverter.CIE2_D65);
double diff = ColorDifference.DeltaC(lab, lab2);
I think your idea is not good enough for the task.
Your method would judge very different images to be the same whenever their average colors coincide; for example, a black-and-white checkerboard and a uniform 50% gray image both have an average color of 128.
Your color averaging approach would most likely fail, as @Heejin already explained.
You can try it a different way: shrink all images to some arbitrary size, then subtract the unknown image from all known images; the one with the smallest difference is the one you are looking for. It's a really simple method and it wouldn't be slower than averaging.
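A sketch of that shrink-and-subtract comparison (the 32x32 size and the sum-of-absolute-differences score are my illustrative choices):

import java.awt.Image;
import java.awt.image.BufferedImage;

// Shrinks an image to a fixed size so images of different
// dimensions become directly comparable.
static BufferedImage shrink(BufferedImage src) {
    BufferedImage small = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
    small.getGraphics().drawImage(
            src.getScaledInstance(32, 32, Image.SCALE_SMOOTH), 0, 0, null);
    return small;
}

// Sum of absolute per-channel differences; smaller = more similar.
static long difference(BufferedImage a, BufferedImage b) {
    long sum = 0;
    for (int y = 0; y < 32; y++) {
        for (int x = 0; x < 32; x++) {
            int p = a.getRGB(x, y), q = b.getRGB(x, y);
            sum += Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF))
                 + Math.abs(((p >> 8) & 0xFF) - ((q >> 8) & 0xFF))
                 + Math.abs((p & 0xFF) - (q & 0xFF));
        }
    }
    return sum;
}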
Another option is to use some smarter algorithm:
http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
I have used this method in the past and the results are okay-ish. It works great for finding identical images, not so well for finding similar images.
I'm pretty new to manually manipulating images, so please bear with me.
I have an image that I'm allowing the user to shrink/grow and move around.
The basic behavior works perfectly. However, I need to be able to grab whatever is in the "viewport" (visible clipping region rectangle) and save it out as a separate bitmap.
Before I can do this, I need to get a fix on WHERE the image actually is and what is being displayed. This is proving more tricky than I would have imagined.
My problem is that the Matrix documentation is absurdly vague, and I'm lost as to how I can measure the coordinates and dimensions of my transformed image. As I see it, the X,Y of the image remain constant even as the user shrinks/grows it. So, even though it reports being at 0,0 it's displayed at (say) 100,100. And the only way I can get those coordinates is to do a fairly ugly computation (again... I'm probably not doing it the most elegant way, since geometry is not my forte).
I'm kind of hoping that I'm missing something and that there's some way to pull the object's auto translated coordinates and dimensions.
In an ideal world I would be able to call (pseudo) myImg.getDisplayedWidth() and myImg.getDisplayedX().
Oh, and I should add that this may all be a problem that I'm causing myself by using the center of the image as the point from which to grow/shrink. If I left the default 0,0 coordinate as the non-changing point, I think the location would be correct no matter what its size was. So... maybe the answer to all this is to simply figure out my center offset and apply that to my translations?
All help greatly appreciated (and people not arbitrarily messing with my question's title even more so!).
The Matrix method mapPoints(float[] dst, float[] src) can be used to get a series of translated points by applying the Matrix translation. Or in (slightly) more layman's terms, an instance of the Matrix class contains not only the translation instruction but also convenience methods to apply the Matrix translation to a series of points.
So in your case, you just need the corners of your untranslated Bitmap (x, y, width, height) and pass the corner points into that method to get the translated points.
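A sketch of that (Android; bitmap and matrix are your existing objects):

import android.graphics.Matrix;

// Corner points of the untranslated bitmap, as (x, y) pairs:
// top-left, top-right, bottom-right, bottom-left.
float[] corners = {
        0, 0,
        bitmap.getWidth(), 0,
        bitmap.getWidth(), bitmap.getHeight(),
        0, bitmap.getHeight()
};
float[] mapped = new float[corners.length];
matrix.mapPoints(mapped, corners); // mapped now holds the displayed corners

If you only need the bounding box rather than the four corners, Matrix also provides mapRect(RectF rect), which transforms a rectangle in place.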
I currently have a Java program that will get the RGB values for each of the pixels in an image. I also have a method to calculate a Haar wavelet on a 2D matrix of values. However, I don't know which values I should give to my method that calculates the Haar wavelet. Should I average each pixel's RGB values and compute a Haar wavelet on that? Or maybe just use one of R, G, B?
I am trying to create a unique fingerprint for an image. I read elsewhere that this was a good method as I can take the dot product of 2 wavelets to see how similar the images are to each other.
Please let me know of what values I should be computing a Haar wavelet on.
Thanks
Jess
You should regard the R/G/B components as different images: create one matrix for R, G and B each, then apply the wavelet to each of them independently.
You then reconstruct the R/G/B-images from the 3 wavelet-compressed channels and finally combine those to a 3-channel bitmap.
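A sketch of the channel split (plain Java; img is your BufferedImage):

// Build one matrix per channel.
int w = img.getWidth(), h = img.getHeight();
double[][] red = new double[h][w], green = new double[h][w], blue = new double[h][w];
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int p = img.getRGB(x, y);
        red[y][x]   = (p >> 16) & 0xFF;
        green[y][x] = (p >> 8) & 0xFF;
        blue[y][x]  = p & 0xFF;
    }
}
// Apply your Haar wavelet to red, green and blue independently.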
Since eznme didn't answer your question (you want fingerprints; he explains compression and reconstruction), here's a method you'll often come across:
You separate color and brightness information (chrominance and luma), and weigh them differently. Sometimes you'll even throw away the chrominance and just use the luma part. This reduces the size of your fingerprint significantly (~factor three) and takes into account how we perceive an image - mainly by local brightness, not by absolute color. As a bonus you gain some robustness concerning color manipulation of the image.
The separation can be done in different ways, e.g. transforming your RGB image to YUV or YIQ color space. If you only want to keep the luma component, these two color spaces are equivalent. However, they encode the chrominance differently.
Here's the linear transformation for the luma Y from RGB:
Y = 0.299*R + 0.587*G + 0.114*B
When you take a look at the mathematics, you notice that we're doing nothing other than creating a grayscale image, taking into account that we perceive green brighter than red, and red brighter than blue, when they are all numerically equal.
In case you want to keep a bit of chrominance information in order to keep your fingerprint as concise as possible, you could reduce the resolution of the two U, V components (each actually 8 bits). You could join them both into one 8-bit value by reducing their information to 4 bits each and combining them with the shift operator (the sketch below shows one way in Java). The chrominance should weigh less than the luma in the final fingerprint-distance calculation (the dot product you mentioned).
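A sketch of both steps in Java (the luma weights are the BT.601 coefficients from the formula above; the 4-bit packing is one possible choice):

// Luma of one pixel, using the weights from the formula above.
static double luma(int rgb) {
    int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Packs two 8-bit chrominance values into a single byte by keeping
// the top 4 bits of each and combining them with shifts.
static int packChroma(int u, int v) {
    return (u & 0xF0) | (v >> 4);
}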