Converting Pixel Array to Image in Java

I have used this code to get an image into an array of pixels:
convertTo2DWithoutUsingGetRGB method for reading the image into a pixel array
writeToFile method for writing the pixel array back to an image
Now I would like to convert the array of pixels back to an image, but when I do, I lose image data.
Initial image size: 80 KB JPG
Duplicate image size: 71 KB JPG
I can clearly notice some difference between the two images; the Java-produced image has some sort of white noise.
I would like to reproduce the image without losing a single pixel of data. How do I achieve this in Java?

The JPEG file format uses a lossy compression algorithm, which means the files it generates will differ slightly from the original. You can change the quality setting to compress more or less, but you cannot save the file without any modification to the original data.
This is why JPEG isn't recommended for image editing. Use a lossless format instead, like PNG.
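To illustrate, a minimal sketch (the file names are placeholders) of decoding an image with ImageIO and writing it back out as PNG, which stores the decoded pixels without further loss:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PngRoundTrip {
    public static void main(String[] args) throws Exception {
        // Decode the source image (JPEG, PNG, ...) into a BufferedImage.
        BufferedImage img = ImageIO.read(new File("input.jpg"));

        // ... any pixel manipulation on img would go here ...

        // PNG is lossless, so the decoded pixels are written out exactly as they are.
        ImageIO.write(img, "png", new File("output.png"));
    }
}

Note that this cannot recover detail the original JPEG compression already discarded; it only avoids losing anything further.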

Related

Should I convert BufferedImage.TYPE_4BYTE_ABGR to BufferedImage.TYPE_3BYTE_BGR?

I am working on image interpolation, for which I am using bi-cubic interpolation to double the resolution of an image in Java using AffineTransformOp. I used a BufferedImage of TYPE_4BYTE_ABGR while doing the up-scaling. When I tried to save the upscaled image with ImageIO.write, I found that OpenJDK does not support JPEG encoding for TYPE_4BYTE_ABGR, so I converted the upscaled image from TYPE_4BYTE_ABGR to TYPE_3BYTE_BGR. When I then saved it, I found that the upscaled image takes far less space (about half) than the original image.
So I assume that the original (input) image is represented by four channels (ARGB) while the upscaled (output) image uses three channels (RGB), and that's why it takes less space.
Now my question is: should I use this conversion?
Is there some information that is getting lost?
Does the quality of the image remain the same?
P.S.: I've read in the ImageIO documentation that when ARGB is converted to RGB the alpha value gets premultiplied into the RGB values, and I think that should not affect the quality of the image.
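As an aside, the conversion the question describes is typically done by redrawing the 4-channel image into a 3-channel one; a sketch under that assumption (the method and variable names are illustrative, not from the original post):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class DropAlpha {
    // Redraw the 4-channel ABGR image into a 3-channel BGR image.
    // Any transparency is composited against the default (black) background,
    // so images with real alpha will look different afterwards.
    static BufferedImage toThreeChannel(BufferedImage abgr) {
        BufferedImage bgr = new BufferedImage(
                abgr.getWidth(), abgr.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
        Graphics2D g = bgr.createGraphics();
        g.drawImage(abgr, 0, 0, null);
        g.dispose();
        return bgr;
    }
}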
I solved my problem and would like to share my answer. The type of my original image was grayscale: its color space was gray (a single channel with 8 bits) at a quality of 90. The problem arose when I used TYPE_4BYTE_ABGR for the upscaling instead of TYPE_BYTE_GRAY. Secondly, when you save the image to a file in JPEG format, ImageIO.write uses a compression quality of 75 by default, so the image size gets smaller. You should use the compression factor that suits you, or save the image in PNG format instead. You can view information about your image (color space, image type, quality, etc.) with identify -verbose image.jpg on Linux. You can check this post to see how to set the compression quality manually in ImageIO.
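The linked post is not reproduced here, but the usual approach looks roughly like this (a sketch; the file names and the 0.9 quality value are placeholders):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class JpegQuality {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("input.jpg"));

        // Use an explicit JPEG ImageWriter instead of the one-line ImageIO.write(),
        // so the compression quality can be set rather than taking the default.
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(0.9f);   // 0.0f (smallest) .. 1.0f (best quality)

        try (ImageOutputStream out = ImageIO.createImageOutputStream(new File("output.jpg"))) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(img, null, null), param);
        } finally {
            writer.dispose();
        }
    }
}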

How to save jpeg image after replacing LSB in DCT with the Jsteg method?

I am using the Jsteg method, but I have a confusion, something I don't quite understand. The steps are:
Get 8x8 pixel block
Discrete cosine transform
Quantization
Replace the Least Significant Bit
What I don't understand is: when I open the image in Java using BufferedImage and ImageIO and do these steps, how do I save the changes? If I write:
ImageIO.write(img, "jpg", new_img);
does Java recompress the image so that the hidden text is lost, or does the compression change nothing since I already did it manually?
Or should I save it in another way?
Bottom line: after replacing the LSB, how do I save the encoded image?

Working with JPEG images in Java

I am using the BufferedImage class to read an image in as pixels, then use bit shifting to split the components into separate int arrays. This works OK.
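A sketch of the kind of unpacking described (the method and array names here are illustrative, not the poster's actual code):

import java.awt.image.BufferedImage;

public class SplitChannels {
    // Unpack each packed ARGB int from getRGB() into separate component arrays.
    static void split(BufferedImage img, int[][] r, int[][] g, int[][] b) {
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int argb = img.getRGB(x, y);
                r[y][x] = (argb >> 16) & 0xFF;  // red:   bits 16..23
                g[y][x] = (argb >> 8) & 0xFF;   // green: bits 8..15
                b[y][x] = argb & 0xFF;          // blue:  bits 0..7
            }
        }
    }
}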
I have used this reference site to manually perform DCT functions with the pixel arrays.
Methods used: forwardDCT(), quantizeMatrix(), dequantizeMatrix(), inverseDCT(),
whose results are fed back into a resultant image array to reconstruct the image, which I then write back out as pixel data using ImageIO's write() method.
This works perfectly, and I can view the image. (Even better, the compression value I use works visually.)
My question is: is there a way to write the quantized coefficients as the compressed values of a JPEG?
Because ImageIO's write() method takes pixel data rather than coefficient data?
Hope this is clear.
Thanks
Ultimately the DCT calculation is just one step in the whole JPEG encoding process. A complete implementation also has to deal with quantization, Huffman encoding, and conforming with the JPEG standard.
Java effectively just gives you an interface to a JPEG encoder that lets you do useful things like save images.
The ImageWriter that ImageIO.write() uses for JPEG images depends on your system. The default ImageWriter for JPEGs will only let you change some settings that affect the quantization and encoding using the JPEGImageWriteParam class (http://docs.oracle.com/javase/6/docs/api/javax/imageio/ImageWriteParam.html).
Getting your hand-crafted DCT coefficients into a JPEG file could potentially involve writing an entire JPEG library. If you don't want to do all that work, then you could modify the source of an existing library so that it uses your DCT coefficients.
Before the DCT . . .
While JPEG has no knowledge of colors, it is normal for JPEG file formats to use the YCbCr color space. If you are thinking about writing a JPEG file, you would need to do this conversion first.
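For reference, the conversion normally used with JPEG (the JFIF variant of BT.601) looks roughly like this; a sketch, not the exact code of any particular encoder:

public class RgbToYCbCr {
    // Standard JFIF conversion for 8-bit samples; Cb and Cr are centred on 128.
    static double[] toYCbCr(int r, int g, int b) {
        double y  =  0.299    * r + 0.587    * g + 0.114    * b;
        double cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128;
        double cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128;
        return new double[] { y, cb, cr };
    }
}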
After the Quantization . . .
The coefficients are run length encoded. That's a step you'd have to add. That's the most complex part of JPEG encoding.
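A hypothetical sketch of that step, zigzag-scanning one quantized 8x8 block and collecting (zero-run, value) pairs for the AC coefficients. Real JPEG entropy coding additionally codes the DC coefficient as a difference from the previous block, caps runs at 16, and Huffman-codes the resulting symbols; none of that is shown here:

import java.util.ArrayList;
import java.util.List;

public class ZigzagRle {
    static List<int[]> runLengthEncode(int[][] block) {
        // Zigzag order: walk the anti-diagonals of the 8x8 block, alternating direction.
        int[] zz = new int[64];
        int idx = 0;
        for (int s = 0; s <= 14; s++) {
            if (s % 2 == 0) {
                for (int i = Math.min(s, 7); i >= Math.max(0, s - 7); i--) {
                    zz[idx++] = block[i][s - i];
                }
            } else {
                for (int i = Math.max(0, s - 7); i <= Math.min(s, 7); i++) {
                    zz[idx++] = block[i][s - i];
                }
            }
        }
        // Collect (zeroRun, value) pairs for the AC coefficients (indices 1..63).
        List<int[]> pairs = new ArrayList<>();
        int run = 0;
        for (int k = 1; k < 64; k++) {
            if (zz[k] == 0) {
                run++;
            } else {
                pairs.add(new int[] { run, zz[k] });
                run = 0;
            }
        }
        // A trailing run of zeros would be signalled with an end-of-block symbol in real JPEG.
        return pairs;
    }
}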

How to get a better understanding of a PNG representation in Java?

I want to dive into the low-level details of how a PNG file is represented in memory in Java, so that I can iterate over its pixels, change them, create a modified PNG file from an existing one, etc.
Where do I begin?
You could begin by reading it into a BufferedImage with ImageIO.read(file).
The getRGB(...) methods can help you to obtain information about the individual pixels, and the corresponding setRGB(...) methods help you to change them.
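For example, a minimal sketch (the file names and the particular manipulation, inverting each pixel, are placeholders) of reading a PNG, touching every pixel, and writing the result back out:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PngPixels {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("in.png"));

        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int argb = img.getRGB(x, y);           // packed 0xAARRGGBB
                int inverted = (argb & 0xFF000000)     // keep the alpha channel
                        | (~argb & 0x00FFFFFF);        // invert the colour channels
                img.setRGB(x, y, inverted);
            }
        }

        ImageIO.write(img, "png", new File("out.png"));
    }
}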
The representation of an image in memory in Java is essentially unrelated to the format of the file: PNG, JPEG, GIF and the rest are standards for encoding an image as a (language-independent) stream of bytes. But when you are manipulating the pixels of an image in memory, you have already decoded it, so you've "forgotten" which format (PNG, JPEG...) it came from.
The most common way of manipulating an image in Java is using the BufferedImage class, in the java.awt.image package. But that's not a requirement. For instance, I've worked on a low-level PNG encoder/decoder (PNGJ) that does not use BufferedImage, but instead gives you each image line as an int[] array.

Java Image Quality (JPEG)

Is it possible to get the current quality of an existing image?
I want to load a JPEG and save it again without any change in quality and DPI. (I need to do some pixel manipulation before saving)
JPEG is a lossy format.
The direct way to do this (read the image, do what you need to do, re-encode the image) will result in the image being slightly degraded.
That said, if that is acceptable, you need to know how quality works in JPEG encoding: it determines how much information about contrast to keep. The lower the quality, the less sharp a transition the file can represent. In other words, quality is not a single setting stored inside the JPEG file, but a setting that determines how much image data is saved.
What you can do is aim for a final image around the same size as the original: encode the result at different quality settings and choose the one that gives the file size you want.
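A sketch of that idea (the file names, the step size, and the quality range are placeholders), encoding into memory at decreasing quality until the output is no larger than the original file:

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.nio.file.Files;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class MatchJpegSize {
    // Encode img as JPEG at the given quality (0.0 - 1.0) and return the bytes.
    static byte[] encode(BufferedImage img, float quality) throws Exception {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ImageOutputStream out = ImageIO.createImageOutputStream(buffer)) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(img, null, null), param);
        } finally {
            writer.dispose();
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        File original = new File("input.jpg");
        long targetSize = original.length();
        BufferedImage img = ImageIO.read(original);

        // Walk down from maximum quality; keep the first result that is no bigger
        // than the original (the last attempt is kept if none fits).
        byte[] best = null;
        for (float q = 1.0f; q >= 0.05f; q -= 0.05f) {
            best = encode(img, q);
            if (best.length <= targetSize) {
                break;
            }
        }
        Files.write(new File("output.jpg").toPath(), best);
    }
}

Matching the file size does not make the result identical to the original, of course; it only keeps the amount of retained data comparable.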
