The title says enough I think.
I have a full-quality BufferedImage and I want to send it through an OutputStream at a low bit depth. I don't want an algorithm that degrades the quality pixel by pixel, so the image stays at full quality.
So the goal is to write the image (at its full resolution and size) through the OutputStream while using only a very small number of bytes.
Thanks,
Martijn
You need to encode the image data into the format that has the right characteristics for your image. If it's 24-bit color and has a lot of colors and you want to lose no quality, you are probably stuck with PNG, but look into lossless JPEG 2000.
If you can lose some quality, then try:
- lossy JPEG 2000 -- much smaller than JPEG for the same quality loss
- reducing the number of colors and using a color-mapped format
If the image has only gray or black and white data, make sure that you are encoding it as such (8-bit gray or 1-bit black and white). Then, make sure you use an encoder that is tuned for that kind of format (for example TIFF with Group 4 or JBIG2).
Another good option is to remove all of the unwanted meta-data from the image (or make sure that your encoder doesn't put any in).
If you want to stick with what's in Java, you probably have to use TIFF, PNG or JPEG -- there are third-party image encoders (for example, my company, Atalasoft, makes a .NET version of these advanced encoders -- there are Java vendors out there as well)
FYI: reducing bit depth usually means reducing quality (unless the reduction is meaningless). For example if I have a 24-bit color image, but all of the colors are gray (where R==G==B), then reducing to 8-bit gray does not lose quality. This is also true if your image has any collection of 256 different colors and you switch to a color mapped (indexed or paletted) format. You reduce the number of bytes and don't reduce quality.
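For instance, a minimal sketch (assuming the source image really contains only gray values, and using an illustrative GrayConverter class) of re-encoding a 24-bit image as 8-bit gray before writing it as PNG:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class GrayConverter {
    // Re-render a visually gray 24-bit image into an 8-bit grayscale buffer
    // and encode it as PNG; fewer bytes, no visible quality loss.
    public static void writeAsGray(BufferedImage rgb, File out) throws IOException {
        BufferedImage gray = new BufferedImage(
                rgb.getWidth(), rgb.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = gray.createGraphics();
        g.drawImage(rgb, 0, 0, null); // drawImage performs the color conversion
        g.dispose();
        ImageIO.write(gray, "png", out);
    }
}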
Have a look at the java.util.zip package for all about compression:
http://java.sun.com/developer/technicalArticles/Programming/compression/
Though JPEG and PNG are already compressed formats, you can use the Deflater class (Inflater is its decompression counterpart) to try to reduce the byte count while keeping the exact same image quality. Compressing an already compressed format generally does not gain much. If you group several images together, though, deflating them can be significant, since the deflate algorithm recognizes common patterns in the data, which reduces repetition and thus the byte count.
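If you do want to try it, here is a minimal sketch using Deflater (the ImageDeflater class name is just for illustration; expect only small gains on a single JPEG or PNG):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class ImageDeflater {
    // Deflate already-encoded image bytes; worthwhile mainly when
    // several images (or other repetitive data) are bundled together.
    public static byte[] deflate(byte[] imageBytes) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DeflaterOutputStream out =
                 new DeflaterOutputStream(buffer, new Deflater(Deflater.BEST_COMPRESSION))) {
            out.write(imageBytes);
        }
        return buffer.toByteArray();
    }
}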
Related
I am working on image interpolation, for which I am using bi-cubic interpolation to double the resolution of an image in Java using AffineTransformOp. I used a BufferedImage of TYPE_4BYTE_ABGR while doing the up-scaling. When I tried to save the upscaled image back with ImageIO.write, I found that OpenJDK does not support JPEG encoding for TYPE_4BYTE_ABGR, so I converted the upscaled image from TYPE_4BYTE_ABGR to TYPE_3BYTE_BGR. When I saved it to a folder, I found that the upscaled image takes far less space (about half) than the original image.
So I assume that the original (input) image is represented by four channels (ARGB) while the upscaled (output) image uses three channels (RGB), and that's why it takes less space.
Now my question is: should I use this conversion?
Is there some information that is getting lost?
Does quality of image remains same?
P.S.: I've read in the ImageIO documentation that when we convert ARGB to RGB the alpha value gets premultiplied into the RGB values, and I think it should not affect the quality of the image.
I solved my problem and want to share my answer. The type of my original image was grayscale: its color space was gray (only one channel, 8 bits) with a quality of 90. The problem arose when I used TYPE_4BYTE_ABGR for the upscaling instead of TYPE_BYTE_GRAY. Secondly, when you save the image to a JPEG file, ImageIO.write uses a compression quality of 75 by default, so the file gets smaller. You should either use the compression factor that suits you or save it in PNG format. You can view information about your image with identify -verbose image.jpg on Linux to see the color space, image type, quality, etc. You can check this post to see how to set the compression quality manually in ImageIO.
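For reference, a minimal sketch of setting the JPEG compression quality explicitly through ImageIO (the JpegQualityWriter class and writeJpeg method are just illustrative names):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class JpegQualityWriter {
    // Write a BufferedImage as JPEG with an explicit quality (0.0f - 1.0f)
    // instead of relying on the writer's default of about 0.75.
    // Note: the image must not have an alpha channel for the default JPEG writer.
    public static void writeJpeg(BufferedImage image, File file, float quality)
            throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality); // e.g. 0.9f for quality 90
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(file)) {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            writer.dispose();
        }
    }
}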
I have made an app that uses the camera to capture images. The images are passed back to my application as a Bitmap. How can I modify my code to save the Bitmap in JPEG format at its full resolution?
FileOutputStream out = new FileOutputStream(file);
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
out.flush();
out.close();
I think the Bitmap is being compressed to a smaller size!
Compression refers to the reduction of physical disk space required to save the image. It doesn't automatically mean that the image quality or resolution is reduced as well.
JPEG belongs to a group of file formats that (mostly) use lossy compression algorithms. In other words, some minor image detail and quality is sacrificed to reduce the file size, but the resolution of the image is not reduced.
If you want to reduce the file size of the image but don't want to lose any image quality, you need to use a file format which supports lossless compression. You can, for example, use Bitmap.CompressFormat.PNG.
WEBP supports both lossy and lossless compression (and is even smaller than PNG and JPG in file size). But support for WEBP was only added in API level 14 so there might be some backwards compatibility problems. Just use WEBP if possible, otherwise PNG if you care about image quality.
In any case let's look at the compress() method:
public boolean compress (Bitmap.CompressFormat format, int quality, OutputStream stream)
As you can see, you can choose the CompressFormat, pass in an OutputStream and pick a quality. The number you pass in as quality can be between 0 and 100 and it controls how aggressively the lossy formats throw away detail. Passing 100 asks the encoder to preserve as much quality as possible, but it does not turn JPEG into a lossless format; if you genuinely cannot afford any loss, pick a lossless CompressFormat.
As an aside: Since PNG only supports lossless compression it will ignore the quality parameter completely and always save the image without reducing its quality!
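A minimal sketch of saving the full-resolution Bitmap losslessly (the BitmapSaver class and savePng method are illustrative names):

import android.graphics.Bitmap;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class BitmapSaver {
    // Save the full-resolution Bitmap losslessly; PNG ignores the quality value.
    public static void savePng(Bitmap bitmap, File file) throws IOException {
        try (FileOutputStream out = new FileOutputStream(file)) {
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
            // or Bitmap.CompressFormat.WEBP on API level 14+ for smaller files
        }
    }
}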
I am using the BufferedImage class to read in an image as pixels, and I then use bit shifting to extract the color components into separate int arrays. This works OK.
I have used this reference site to manually perform DCT functions with the pixel arrays.
Methods used: forwardDCT(), quantitizeMatrix(), dequantitizeMatrix(), inverseDCT(),
the results of which are fed back into a resultant image array to reconstruct the image, which I then write back out as an image file using ImageIO's write() method.
This works perfectly, and I can view the image. (Even better, the value I use for compression works visually.)
My question is: is there a way to write the quantized coefficients out as the compressed data of a JPEG file?
After all, ImageIO's write() method takes pixel data rather than coefficient data.
Hope this is clear.
Thanks
Ultimately the DCT calculation is just one step in the whole JPEG encoding process. A complete implementation also has to deal with quantization, Huffman encoding, and conforming with the JPEG standard.
Java effectively just gives you an interface to a JPEG encoder that lets you do useful things like save images.
The ImageWriter that ImageIO.write() uses for JPEG images depends on your system. The default ImageWriter for JPEGs will only let you change some settings that affect the quantization and encoding using the JPEGImageWriteParam class (http://docs.oracle.com/javase/6/docs/api/javax/imageio/ImageWriteParam.html).
Getting your hand-crafted DCT coefficients into a JPEG file could potentially involve writing an entire JPEG library. If you don't want to do all that work, then you could modify the source of an existing library so that it uses your DCT coefficients.
Before the DCT . . .
While the JPEG compression process itself has no knowledge of color, JPEG files normally store image data in the YCbCr color space. If you are thinking about writing a JPEG file, you would need to do this conversion first.
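For illustration, the standard JFIF conversion a JPEG encoder applies per pixel looks roughly like this (a sketch; real encoders usually also subsample the chroma channels):

public class ColorConvert {
    // Standard JFIF RGB -> YCbCr conversion (inputs and outputs in 0..255).
    public static double[] rgbToYCbCr(int r, int g, int b) {
        double y  =  0.299    * r + 0.587    * g + 0.114    * b;
        double cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128;
        double cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128;
        return new double[] { y, cb, cr };
    }
}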
After the Quantization . . .
The coefficients are run length encoded. That's a step you'd have to add. That's the most complex part of JPEG encoding.
I'm working on a project, a client-server application named 'remote desktop control'. What I need to do is take a screen capture of the client computer and send this screen capture to the server computer. I would probably need to send 3 to 5 images per second. But considering that sending a BufferedImage directly would be too costly, I need to reduce the size of the images. The image quality does not need to be lossless.
How can I reduce the byte size of the image? Any suggestions?
You can compress it with GZIP very easily by wrapping the socket's OutputStream in a GZIPOutputStream on the sending side and the InputStream in a GZIPInputStream on the other end of the socket.
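A minimal sketch of the sending side (the FrameSender class is illustrative, and PNG is assumed as the frame format; gzip pays off most for raw or delta data, since PNG and JPEG payloads barely shrink further):

import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.zip.GZIPOutputStream;
import javax.imageio.ImageIO;

public class FrameSender {
    // Encode the capture and gzip it on its way out over the socket;
    // the receiver wraps its InputStream in a GZIPInputStream.
    public static void sendFrame(BufferedImage capture, Socket socket) throws IOException {
        OutputStream raw = socket.getOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(raw);
        ImageIO.write(capture, "png", gzip);
        gzip.finish(); // flush the gzip trailer without closing the socket
        raw.flush();
    }
}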
Edit:
Also note that you can create delta images for transmission. You can use a "transparent color", for example magic pink (#FF00FF), to indicate that no change was made to that part of the screen. On the other side you can draw the new image over the last one, ignoring these magic pixels.
Note that if the picture already contains this color, you can change the real pink pixels to #FF00FE, for example. This is unnoticeable.
Another option is to transmit a 1-bit mask with every image (after painting the unchanged pixels an arbitrary color). For this you can choose the color that is used most in the picture, to get the best compression ratio (optimal Huffman coding).
Vbence's solution of using a GZIPInputStream is a good suggestion. The way this is done in most commercial software - Windows Remote Desktop, VNC, etc. is that only changes to the screen-buffer are sent. So you keep a copy on the server of what the client 'sees', and with each consecutive capture you calculate what is different in terms of screen areas. Then you only send these screen areas to the client along with their top-left coords, width, height. And update the server copy of the client 'view' with just these new areas.
That will MASSIVELY reduce the amount of network data you use. While I have been typing this answer, only 400 or so pixels (20x20) have changed with each keystroke; on a 1920x1080 screen that is only around 1/5,000th of the screen, so it is clearly worth thinking about.
The only expensive part is how you calculate the 'difference' between one frame and the next. There are plenty of libraries out there to do that, most of them very mathematical (discrete cosine transform type stuff, way over my head), but it can be done relatively cheaply.
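As a starting point, a brute-force sketch that just finds the bounding rectangle of the changed pixels between two captures (per-pixel getRGB is slow; production code would read the raster directly):

import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class FrameDiff {
    // Returns the bounding box of all pixels that differ between the
    // previous and current capture, or null if nothing changed. Only
    // that sub-image (plus its x/y offset) needs to be transmitted.
    public static Rectangle changedRegion(BufferedImage previous, BufferedImage current) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (int y = 0; y < current.getHeight(); y++) {
            for (int x = 0; x < current.getWidth(); x++) {
                if (previous.getRGB(x, y) != current.getRGB(x, y)) {
                    if (x < minX) minX = x;
                    if (y < minY) minY = y;
                    if (x > maxX) maxX = x;
                    if (y > maxY) maxY = y;
                }
            }
        }
        if (maxX < 0) return null; // nothing changed
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}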
See this thread for how to encode to JPG with controllable compression/quality. The slider on the left is used to control the level.
Ultimately it would be better to encode the images directly to a video codec that can be streamed, but I am a little hazy on the details.
One way would be to use ImageIO API
ImageIO.write(buffimg, "jpg", new File("buffimg.jpg"));
As for the quality and other parameters, I'm not sure, but it should be possible; just dig deeper.
Is it possible to get the current quality of an existing image?
I want to load a JPEG and save it again without any change in quality and DPI. (I need to do some pixel manipulation before saving)
JPEG is a lossy format.
The direct way to do this (read the image, do what you need to do, re-encode the image) will result in the image deteriorating slightly.
That said, if that is acceptable, then you need to know that the quality setting in JPEG encoding determines how much information about contrast is kept. The lower the quality, the less sharp a transition can be represented. In other words, this is not a single setting stored in the JPEG file, but a setting that determines how much of the image data is saved.
What you can do is require that the final image be around the same size as the original. You can then encode the result at different quality settings and choose the one giving the image size you want.
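A rough sketch of that idea (the encodeNearSize method and the 0.05 quality step are arbitrary illustrative choices):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class SizeMatcher {
    // Re-encode at decreasing quality settings until the output is no
    // larger than the target size (e.g. the original file's size).
    public static byte[] encodeNearSize(BufferedImage image, long targetBytes) throws IOException {
        for (float quality = 1.0f; quality >= 0.05f; quality -= 0.05f) {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
            ImageWriteParam param = writer.getDefaultWriteParam();
            param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
            param.setCompressionQuality(quality);
            try (ImageOutputStream ios = ImageIO.createImageOutputStream(buffer)) {
                writer.setOutput(ios);
                writer.write(null, new IIOImage(image, null, null), param);
            } finally {
                writer.dispose();
            }
            if (buffer.size() <= targetBytes) {
                return buffer.toByteArray();
            }
        }
        return null; // even the lowest setting produced a larger file
    }
}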