This question already has answers here:
Merging two images
(4 answers)
Closed 9 years ago.
So I am basically trying to blend two pictures together by taking the weighted average of each pixel. However, in order for the transformation to apply, the two pictures must be exactly the same size. If not, then the first picture remains unchanged.
http://www.seas.upenn.edu/~cis120/current/hw/hw07/javadoc/Manipulate.html#alphaBlend(double,%20NewPic,%20NewPic)
This is basically what I have to do, but I need clarification: I don't understand what I have to do with the colors.
Firstly, as this is blatantly homework, you should probably check that you're allowed to get code from the internet. If not, be careful about what you take from the answers you get, lest you be accused of plagiarism and/or cheating.
Assuming this is fine, however, you will want to interpret each NewPic as a bitmap. Then pair each pixel in the first bitmap with the corresponding pixel in the second bitmap. Take the weighted average of each colour channel and use the result to create a pixel in a third bitmap. Once every pixel in the third bitmap has been created, return it.
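As a minimal sketch of the per-pixel weighted average (assuming the pictures are available as java.awt.image.BufferedImage rather than the assignment's NewPic type, and ignoring the alpha channel):

```java
import java.awt.image.BufferedImage;

public class AlphaBlend {
    // Blend two equally sized images; alpha weights the first image.
    // If the dimensions differ, return the first image unchanged.
    static BufferedImage alphaBlend(double alpha, BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return a;
        }
        BufferedImage out = new BufferedImage(a.getWidth(), a.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int pa = a.getRGB(x, y);
                int pb = b.getRGB(x, y);
                // Weighted average per channel: alpha * a + (1 - alpha) * b
                int r  = (int) (alpha * ((pa >> 16) & 0xFF) + (1 - alpha) * ((pb >> 16) & 0xFF));
                int g  = (int) (alpha * ((pa >> 8)  & 0xFF) + (1 - alpha) * ((pb >> 8)  & 0xFF));
                int bl = (int) (alpha * (pa & 0xFF)         + (1 - alpha) * (pb & 0xFF));
                out.setRGB(x, y, (r << 16) | (g << 8) | bl);
            }
        }
        return out;
    }
}
```

The point is that the weighted average is taken on each colour channel separately, not on the packed RGB int.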
This question already has answers here:
How to compare two colors for similarity/difference
(19 answers)
Closed 5 years ago.
I have a Java program that checks, for each pixel of an image, whether its color is similar to a target color. Getting each pixel is no problem, but things get hard when I need to check, with an error margin, whether the pixel's color is similar to the target.
I created a formula to solve this, but it is very inefficient. Here it is: target + (errorMargin * 10) for the maximum of the range and target - (errorMargin * 10) for the minimum. These formulas don't work well: if I search for the color (117, 132, 93) in RGB (a dark green with a little blueness), the code shows me brown matches. So, is there a more effective formula for determining whether one color is similar to another, within an error margin?
Assuming a universe in which no image processing book has been written yet and no MATLAB software exists, I would solve this problem with the following algorithm:
Subtract the target color from each pixel.
Take the RMS value of the difference.
Compare it with errorMargin.
A small tip for improvement:
Try to derive the error margin dynamically from the image's average colors.
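The steps above can be sketched as a per-pixel distance check (method names are illustrative):

```java
public class ColorDistance {
    // RMS distance between two RGB colors: subtract per channel,
    // square, average over the 3 channels, take the square root.
    static double distance(int r1, int g1, int b1, int r2, int g2, int b2) {
        int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        return Math.sqrt((dr * dr + dg * dg + db * db) / 3.0);
    }

    // A pixel matches when its RMS distance to the target is within the margin.
    static boolean similar(int r1, int g1, int b1, int r2, int g2, int b2,
                           double errorMargin) {
        return distance(r1, g1, b1, r2, g2, b2) <= errorMargin;
    }
}
```

Unlike the per-channel min/max range, this treats the three channels jointly, so a brown pixel that happens to fall inside each individual channel range no longer matches a dark green target.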
This question already has answers here:
How do I get a Raster from an Image in java?
(2 answers)
Closed 8 years ago.
I have been searching Google for things like "Java Bitmap", "Create Java Bitmap", etc. and cannot seem to find much information. From all of the code examples it looks like the bitmap libraries are third party or for Android.
What I want to do is very simple. I want to create a small bitmap, maybe 10x80, and be able to color each pixel at (x, y). I want to make a small color bar, I guess, that will show the position of items in a queue by color.
Are there any built-in libraries to do this?
There's the java.awt.image.BufferedImage class. This has pixel-specific get/set methods.
http://docs.oracle.com/javase/7/docs/api/java/awt/image/BufferedImage.html
Here is an example of creating and writing to pixels with BufferedImage:
BufferedImage image = new BufferedImage(10, 80, BufferedImage.TYPE_4BYTE_ABGR);
image.setRGB(5, 20, Color.BLUE.getRGB());
The third parameter to the constructor is the image type, which may vary based on your use, but for basic solid colors the one shown is fine. setRGB takes an int representation of the color, so you can't pass a Color constant directly; call getRGB() on it as shown.
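Putting it together for the 10x80 queue bar described in the question, a small sketch (the slot-to-band layout is invented for illustration):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class QueueBar {
    // Paint a 10x80 bar: each queue slot becomes a horizontal band of its color.
    static BufferedImage paintBar(Color[] slots) {
        BufferedImage bar = new BufferedImage(10, 80, BufferedImage.TYPE_4BYTE_ABGR);
        int bandHeight = 80 / slots.length;
        for (int i = 0; i < slots.length; i++) {
            int rgb = slots[i].getRGB();
            for (int y = i * bandHeight; y < (i + 1) * bandHeight; y++) {
                for (int x = 0; x < 10; x++) {
                    bar.setRGB(x, y, rgb);
                }
            }
        }
        return bar;
    }

    public static void main(String[] args) throws Exception {
        Color[] slots = { Color.RED, Color.YELLOW, Color.GREEN, Color.BLUE };
        // Write the bar out so you can look at it; ImageIO is also built in.
        ImageIO.write(paintBar(slots), "png", new File("queue-bar.png"));
    }
}
```

For a bar this small, setting pixels one by one is perfectly fast; for bigger images you would draw rectangles via createGraphics() instead.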
You probably don't want to use VolatileImage because the benefits are dubious in current Java implementations. See this stackoverflow question for why it may not help.
Look at the Oracle image tutorial for help with this. It explains your options and how to interact with them.
I have one method that combines 3 greyscale images into one colour image, done by using getRed(), getGreen() and getBlue() in Java for each input image and then applying the colour to the output image, which works quite well. I'm looking for other methods of doing this, however.
It doesn't have to be accurate in terms of the sea being blue, etc., but it needs to be coloured in a way that lets different areas of the 'map' be differentiated.
I've been looking into ways of doing this but unfortunately haven't managed to find an alternative; I'm looking to use something apart from the getRGB() values.
I'm not looking for anyone to code this for me, just for some pointers on what to look for.
Thanks!
Your comment here is critical: "but it needs to be coloured in a way that different areas of the 'map' can be differentiated. I've been looking into ways of doing this but unfortunately haven't managed to find an alternative way of doing it."
What did you do with the other 4 images, or "channels"? Usually, when doing color space mapping, one has 3 channels and converts to another 3-channel color space. In your case you have 7 channels, and you want to put all that information into 3 channels. It all depends on what you are viewing. Hyperspectral imagery would be a good place to start for containers that store image data with more than 3 channels.
You can convert to a different colorspace, as others have suggested, or perform any other transformation. It sounds, though, like you will need some post-processing in order to differentiate the different parts of the image; that will depend on your transformation.
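As one sketch of the colorspace idea: instead of mapping the three greyscale intensities to red, green and blue, map them to hue, saturation and brightness via java.awt.Color.getHSBColor. The constants below are arbitrary, chosen only so that changes in each channel stay visible; any mapping that keeps the channels distinguishable would do.

```java
import java.awt.Color;

public class HsbCombine {
    // Map three greyscale intensities (0-255) to hue, saturation, brightness.
    static int combine(int grey1, int grey2, int grey3) {
        float hue = grey1 / 255f;            // full hue circle
        float sat = 0.5f + grey2 / 510f;     // keep saturation in [0.5, 1] so hues stay visible
        float bri = 0.25f + grey3 / 340f;    // keep brightness away from black
        return Color.getHSBColor(hue, sat, bri).getRGB();
    }
}
```

Because hue differences are easy for the eye to separate, areas that differ mainly in the first channel come out as clearly different colours on the 'map'.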
I'm trying to figure out a good method for comparing two images in terms of their color. One idea I had was to take the average color of both images and subtract one average from the other to get a "color distance." Whichever two images have the smallest color distance would be a match. Does this seem like a viable option for identifying an image from a database of images?
Ideally I would like to use this to identify playing cards put through an image scanner.
For example if I were to scan a real version of this card onto my computer I would want to be able to compare that with all the images in my database to find the closest one.
Update:
I forgot to mention the challenges involved in my specific problem.
The scanned image of the card and the original image of the card are most likely going to be different sizes (in terms of width and height).
I need to make this as efficient as possible. I plan on using this to scan/identify hundreds of cards at a time. I figured that finding (and storing) a single average color value for each image would be far more efficient than comparing the individual pixels of each image in the database (the database has well over 10,000 images) for each scanned card that needed to be identified. The reason why I was asking about this was to see if anyone had tried to compare average color values before as a means of image recognition. I have a feeling it might not work as I envision due to issues with both color value precision and accuracy.
Update 2:
Here's an example of what I was envisioning.
Image to be identified = A
Images in database = { D1, D2 }
average color of image A = avg(A) = #8ba489
average color of images in database = { #58727a, #8ba489 }
D2 matches image A because the difference between #8ba489 and #8ba489 is smaller than the difference between #8ba489 and #58727a.
Of course the test image would not be an exact match with any of those images because it would be scanned in; however, I'm trying to find the closest match.
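As a sketch of the averaging step being envisioned (assuming the images are BufferedImages): average each channel independently, since subtracting packed hex values directly would let the green and blue channels bleed into the red comparison.

```java
import java.awt.image.BufferedImage;

public class AverageColor {
    // Average each RGB channel independently over every pixel,
    // then repack the three averages into one RGB int.
    static int averageRGB(BufferedImage img) {
        long r = 0, g = 0, b = 0;
        long n = (long) img.getWidth() * img.getHeight();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int p = img.getRGB(x, y);
                r += (p >> 16) & 0xFF;
                g += (p >> 8) & 0xFF;
                b += p & 0xFF;
            }
        }
        return (int) ((r / n) << 16 | (g / n) << 8 | (b / n));
    }
}
```

This gives a single precomputable value per database image, which is what makes the approach attractive; whether one average per card is discriminative enough is exactly the concern raised in the updates.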
Content based image retrieval (CBIR) can do the trick for you. There's LIRE, a java library for that. You can even first try several approaches using different color based image features with the demo. See https://code.google.com/p/lire/ for downloads & source. There's also the "Simple Application" which gets you started with indexing and search really fast.
Based on my experience I'd recommend using either the ColorLayout feature (if the images are not rotated), the OpponentHistogram, or the AutoColorCorrelogram. The CEDD feature might also yield good results, and it's the smallest at ~60 bytes of data per image.
If you want to check color difference like this:
http://en.wikipedia.org/wiki/Color_difference
You can use Catalano Framework,
http://code.google.com/p/catalano-framework/
It works in Java and Android.
Example using Color Difference:
float[] lab = ColorConverter.RGBtoLAB(100, 120, 150, ColorConverter.CIE2_D65);
float[] lab2 = ColorConverter.RGBtoLAB(50, 80, 140, ColorConverter.CIE2_D65);
double diff = ColorDifference.DeltaC(lab, lab2);
I think your idea is not good enough to do the task.
Your method will say that two completely different images are the same whenever their average colors happen to match (for example, any images whose average color is 128 in every channel).
Your color averaging approach would most likely fail, as #Heejin already explained.
You can try it a different way. Shrink all images to some arbitrary size, then subtract the unknown image from each known image; the one with the smallest difference is the one you are looking for. It's a really simple method, and it wouldn't be slower than averaging.
Another option is to use some smarter algorithm:
http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
I have used this method in the past and the results are okay-ish. It works great for finding identical images, not so well for finding similar images.
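The shrink-and-subtract idea can be sketched like this (the thumbnail size is arbitrary; Graphics2D.drawImage does the scaling):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ShrinkCompare {
    // Shrink an image to a fixed thumbnail size.
    static BufferedImage shrink(BufferedImage src, int w, int h) {
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return out;
    }

    // Sum of per-channel absolute differences between two equally sized images;
    // the database image with the smallest sum is the best match.
    static long difference(BufferedImage a, BufferedImage b) {
        long total = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int pa = a.getRGB(x, y), pb = b.getRGB(x, y);
                total += Math.abs(((pa >> 16) & 0xFF) - ((pb >> 16) & 0xFF))
                       + Math.abs(((pa >> 8)  & 0xFF) - ((pb >> 8)  & 0xFF))
                       + Math.abs((pa & 0xFF)         - (pb & 0xFF));
            }
        }
        return total;
    }
}
```

Shrinking to a common size also sidesteps the width/height mismatch between the scan and the stored card, and the thumbnails can be precomputed once for the whole database.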
This question already has an answer here:
Closed 10 years ago.
Possible Duplicate:
MouseListener needs to interact with many object Java
I have a program with many images drawn onto the screen, as well as other things. I have a mouse listener, but what is the most efficient way of detecting what has been clicked on the screen? If I have a 100px x 50px image starting at point (500, 300),
I can't write if (x > 500 && x < 600) etc. for EVERY image on screen, can I?
Thank you for any help
One way to tackle this kind of problem efficiently is to use a QuadTree, which is a data structure that recursively subdivides the screen. This enables you to only check those images that are in roughly the right part of the screen.
Or a simpler approach would be to simply subdivide the screen into quarters or 16ths, and 'register' each image with the portions of the screen that it covers. This may be less effective if you have any large images, relative to the screen size.
This would probably only be effective if most of your images are static, since the quadtree needs recalculating when images move.
You may find that simply checking every single image is actually fast enough - you didn't say how many images you have, or how long it currently takes to check them all...
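For reference, the "just check every image" baseline looks like this when each image's bounds are kept as a java.awt.Rectangle (the list and method names are illustrative); only if profiling shows this is too slow is a quadtree or screen grid worth the extra bookkeeping:

```java
import java.awt.Rectangle;
import java.util.List;

public class HitTest {
    // Linear scan: return the index of the topmost image whose bounds
    // contain the click, or -1 if nothing was hit.
    static int findClicked(List<Rectangle> bounds, int x, int y) {
        for (int i = bounds.size() - 1; i >= 0; i--) {  // last drawn = topmost
            if (bounds.get(i).contains(x, y)) {
                return i;
            }
        }
        return -1;
    }
}
```

Rectangle.contains replaces the hand-written x/y range checks, and iterating back to front resolves overlaps in favour of the image drawn last.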