How can I calculate the percentage of an image (JPEG) that has been edited, using Java? For example, when an image is uploaded, the user wants to know what percentage of that image has been edited, so if it is 0% it shows that the picture is original.
If the original image and the edited image are the same size, this is trivial: compare every pixel in the original to the corresponding pixel in the edited image. Pixels which are not the same may have been edited. The amount of change is ChangedPixels / TotalPixels.
Some image formats are lossy (JPEG, for instance). Because the format is lossy, saving an image with no edits and comparing it against the original copy will likely measure some change even though no human edits were made, because the encoder distorted some content.
Images with different sizes are easy too. Just compare as many pixels as possible (i.e., the minimum of the widths and the minimum of the heights). The amount of change then is:
int maxPixels = Math.max(width1, width2) * Math.max(height1, height2); // denominator covers the larger extent in each dimension
double amountDifferent = totalChangedPixels / (double) maxPixels;
Any pixels which were added or removed are considered "changed" pixels.
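As a rough sketch of the whole comparison (assuming both images were loaded with ImageIO; the method name is illustrative):

import java.awt.image.BufferedImage;

static double fractionChanged(BufferedImage a, BufferedImage b) {
    int overlapW = Math.min(a.getWidth(), b.getWidth());
    int overlapH = Math.min(a.getHeight(), b.getHeight());
    int maxPixels = Math.max(a.getWidth(), b.getWidth())
                  * Math.max(a.getHeight(), b.getHeight());
    // Pixels present in only one of the two images count as "added or removed".
    int changed = a.getWidth() * a.getHeight()
                + b.getWidth() * b.getHeight()
                - 2 * overlapW * overlapH;
    // Compare the overlapping region pixel by pixel.
    for (int y = 0; y < overlapH; y++) {
        for (int x = 0; x < overlapW; x++) {
            if (a.getRGB(x, y) != b.getRGB(x, y)) {
                changed++;
            }
        }
    }
    return changed / (double) maxPixels;
}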
I have a neural network (made in Java) which classifies handwritten digits, trained using the MNIST data set.
I have a GUI where the user draws a number (shown on the left). When the user hits the "guess" button, the drawing is converted into a 400 by 470 image, down-scaled to a 20 by 20 image, and then centered on a 28 by 28 image to feed into the network, whose output is shown on the right.
Here is what the GUI looks like:
My problem, however, is that when a number doesn't take up the majority of the panel (such as the 3 in the image above), the down-scaled image used as input for the network is too small, which causes the network to guess incorrectly.
Here is the final input image when the number is drawn small:
Here is the final input image when the number is drawn large:
What I'm asking is: is there any way to make the number that is drawn small the same size as the number drawn large while still keeping the size of the image as 28 by 28?
You can either use another object-detection network just to find the bounding box, or simply calculate where the leftmost, rightmost, topmost, and bottommost drawn pixels are. If you fear there will be outliers (there should not be, unless the user purposely clicks an area far from the figure), you can remove them fairly easily. There are a number of ways, but one method is to compute the distance of each drawn pixel from the center of the image, fit the distances to a distribution (normal might be good enough), determine which are outliers, and discard them (or compute the distance beyond which pixels become outliers, and crop the box to fit). Then you scale the rectangle up to the correct size.
This is just a general method. As for specifics, I do not know how exactly your images are represented, but you can iterate over every pixel and note the positions of the drawn ones (the number of iterations is not overly expensive).
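Not knowing your exact representation, here is a minimal sketch of the bounding-box approach, assuming a grayscale drawing with dark strokes on a white background (the threshold and method name are illustrative):

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

static BufferedImage centerAndScale(BufferedImage drawing) {
    // Find the bounding box of the drawn (dark) pixels.
    int minX = drawing.getWidth(), minY = drawing.getHeight(), maxX = -1, maxY = -1;
    for (int y = 0; y < drawing.getHeight(); y++) {
        for (int x = 0; x < drawing.getWidth(); x++) {
            if ((drawing.getRGB(x, y) & 0xFF) < 128) { // "drawn" if dark enough
                minX = Math.min(minX, x); maxX = Math.max(maxX, x);
                minY = Math.min(minY, y); maxY = Math.max(maxY, y);
            }
        }
    }
    BufferedImage out = new BufferedImage(28, 28, BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D g = out.createGraphics();
    g.setColor(Color.WHITE);
    g.fillRect(0, 0, 28, 28);
    if (maxX >= 0) { // something was drawn
        BufferedImage box = drawing.getSubimage(minX, minY, maxX - minX + 1, maxY - minY + 1);
        g.drawImage(box, 4, 4, 20, 20, null); // scale the box to 20x20, centered with a 4px margin
    }
    g.dispose();
    return out;
}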
I am working on image interpolation, using bicubic interpolation to double the resolution of an image in Java with AffineTransformOp. I used a BufferedImage of TYPE_4BYTE_ABGR while up-scaling. When I tried to save the up-scaled image using ImageIO.write, I found that OpenJDK does not support JPEG encoding for TYPE_4BYTE_ABGR, so I converted the up-scaled image from TYPE_4BYTE_ABGR to TYPE_3BYTE_BGR. When I saved it to a folder, I found that the up-scaled image takes far less space (about half) than the original image.
So I assume that the original (input) image is represented by four channels (ARGB) while the up-scaled (output) image uses three channels (RGB), and that's why it takes less space.
Now my question is: should I use this conversion?
Is there some information that is getting lost?
Does the quality of the image remain the same?
P.S.: I've read in the ImageIO documentation that when we convert ARGB to RGB, the alpha value gets premultiplied into the RGB values, and I think it should not affect the quality of the image.
I solved my problem and would like to share my answer. The type of my original image was grayscale, and its color space was gray (meaning only one channel with 8 bits) with a quality of 90. The problem arose when I used TYPE_4BYTE_ABGR for the up-scaling instead of TYPE_BYTE_GRAY. Secondly, when you save an image to a file in JPEG format, ImageIO.write uses a compression quality of 75% by default, so the file size gets smaller. You should use the compression factor that suits you, or save the image in PNG format. You can view information about your image (color space, image type, quality, etc.) with identify -verbose image.jpg on Linux. You can check this post to see how to set the compression quality manually in ImageIO.
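For reference, here is a small sketch of setting the JPEG compression quality explicitly through ImageIO's writer API (the helper name is illustrative):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

static void writeJpeg(BufferedImage img, File out, float quality) throws IOException {
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(quality); // 0.0f to 1.0f, e.g. 0.9f instead of the default
    try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
        writer.setOutput(ios);
        writer.write(null, new IIOImage(img, null, null), param);
    } finally {
        writer.dispose();
    }
}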
I am trying to parse an HTML page to find the most prominent image. After parsing the HTML page to extract all img tags, I am trying to find the largest image by comparing the images' dimensions.
Is it right to compare the images by calculating the area as (width * height)?
That depends entirely on your definition of 'largest'. width * height is certainly a valid approach, but it has the flaw that a 1x1000 image is 'larger' than a 30x30 one even though the latter could very well be more noticeable. It also has the problem that a large image that's mostly the same as the background color will rank as more prominent than a medium image that isn't, even though the medium image may actually be more noticeable.
In order to decide how to find the 'largest' image, you need to specify why you want it.
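If the simple width * height definition is good enough, a minimal sketch might look like this (the URL list and method name are assumptions; a real crawler would also want to honor the tags' width/height attributes and handle failures):

import java.awt.image.BufferedImage;
import java.net.URL;
import java.util.List;
import javax.imageio.ImageIO;

static URL largestByArea(List<URL> imageUrls) {
    URL best = null;
    long bestArea = -1;
    for (URL url : imageUrls) {
        try {
            BufferedImage img = ImageIO.read(url);
            if (img == null) continue; // unsupported format
            long area = (long) img.getWidth() * img.getHeight();
            if (area > bestArea) { bestArea = area; best = url; }
        } catch (Exception e) {
            // skip unreadable images in this sketch
        }
    }
    return best;
}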
Is it possible to get the current quality of an existing image?
I want to load a JPEG and save it again without any change in quality and DPI. (I need to do some pixel manipulation before saving)
JPEG is a lossy format.
The direct way to do this (read the image, do what you need to do, re-encode the image) will result in the image deteriorating slightly.
That said, if that is acceptable, you need to know that the quality setting in JPEG encoding determines how much contrast information to keep. The lower the quality, the less sharp a transition can be. In other words, this is not a single setting stored in the JPEG file, but a setting that determines how much image data is saved.
What you can do is require that the final image be around the same file size as the original. You can then encode the result at different quality settings and choose the one that gives the file size you want.
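As a rough illustration of that idea (the quality steps and method name are made up; the encoding uses the standard ImageIO writer API):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

static byte[] encodeNearSize(BufferedImage img, long targetBytes) throws IOException {
    byte[] best = null;
    for (float q = 0.5f; q <= 1.0f; q += 0.05f) { // candidate quality settings
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(q);
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(baos)) {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(img, null, null), param);
        } finally {
            writer.dispose();
        }
        byte[] candidate = baos.toByteArray();
        // Keep the encoding whose size is closest to the original file's size.
        if (best == null || Math.abs(candidate.length - targetBytes) < Math.abs(best.length - targetBytes)) {
            best = candidate;
        }
    }
    return best;
}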
I have a folder with many images with different backgrounds. I have a requirement to sort these images on the basis of background color.
Can I make a Java program to read the folder and each image file in it, and determine the background color of each image? Please share options.
Yes, it is possible. You can load images with ImageIO.
BufferedImage img = ImageIO.read(imageFile); // javax.imageio.ImageIO
int rgb = img.getRGB(x, y);                  // packed ARGB value at (x, y)
Color color = new Color(rgb);                // java.awt.Color, for easy access to the channels
But you have to create an algorithm that finds out which color is the background color. It depends on the kind of images.
So, not knowing what your images really look like, you may want to average as much of the background as you can to come up with a good representation of the background color.
I would consider a couple of things:
* Read in the pixels along each of the four edges. If there's little variance in the pixel color, you may be done; just take the average (a sketch of this follows below).
* Do the same, but also read in lines from the edge toward the middle until you hit a pixel whose color differs markedly from your running average. Do this for all edges.
Those would be the cheapest things that I can think of to cover variances in background color. Depending on the images you're working with, you may have to get fancier.
A BufferedImage should get you your image data.
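As a rough sketch of the first approach, here is how you might average the border pixels (the variance check is left out, and the method name is illustrative):

import java.awt.Color;
import java.awt.image.BufferedImage;

static Color averageEdgeColor(BufferedImage img) {
    long r = 0, g = 0, b = 0, n = 0;
    int w = img.getWidth(), h = img.getHeight();
    for (int x = 0; x < w; x++) {              // top and bottom rows
        for (int y : new int[] { 0, h - 1 }) {
            Color c = new Color(img.getRGB(x, y));
            r += c.getRed(); g += c.getGreen(); b += c.getBlue(); n++;
        }
    }
    for (int y = 1; y < h - 1; y++) {          // left and right columns (corners already counted)
        for (int x : new int[] { 0, w - 1 }) {
            Color c = new Color(img.getRGB(x, y));
            r += c.getRed(); g += c.getGreen(); b += c.getBlue(); n++;
        }
    }
    return new Color((int) (r / n), (int) (g / n), (int) (b / n));
}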
Mark