So I have an image, called square.png, that is about 3.7 kB in size. I read it into a BufferedImage like so:
BufferedImage img = ImageIO.read(new File("square.png"));
At this point I tried writing several different objects to the disk.
ObjectOutputStream stream = new ObjectOutputStream(new FileOutputStream("square_out.data"));
I tried the image wrapped in an ImageIcon: stream.writeObject(new ImageIcon(img));
I tried a 2D array of the same size as the pixel dimension of the image (800x600).
I tried wrapping that array in a custom class that implements Serializable and writing that.
All the above techniques resulted in a square_out.data with a size of about 1.9 MB. That's huge considering the original image was only a handful of kilobytes. It's also odd because the size was exactly the same for each. Is there any reasonable explanation for this/is there a way around it? I'd like to store lots of these, so the absurd file size is bothersome.
Because the BufferedImage stores the image in uncompressed form internally, and that raw pixel data is what gets serialized. For an 800x600 image at 4 bytes per pixel, that's 800 × 600 × 4 ≈ 1.9 MB every time, no matter what the pixels contain, which is why all your attempts came out the same size.
Use ImageIO.write to save the image in compressed format.
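For example, here's a minimal self-contained sketch comparing the two approaches, using a blank 800x600 image and serializing the pixel array (as in your second attempt):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import javax.imageio.ImageIO;

public class SizeDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);

        // Serializing the pixel array writes every pixel uncompressed:
        // 800 * 600 ints * 4 bytes each ~ 1.9 MB, regardless of content.
        int[] pixels = img.getRGB(0, 0, 800, 600, null, 0, 800);
        ByteArrayOutputStream serialized = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(serialized)) {
            oos.writeObject(pixels);
        }

        // ImageIO.write runs the image through a real encoder instead.
        ByteArrayOutputStream png = new ByteArrayOutputStream();
        ImageIO.write(img, "png", png);

        System.out.println("serialized: " + serialized.size() + " bytes");
        System.out.println("png:        " + png.size() + " bytes");
    }
}
```

To write straight to disk, ImageIO.write(img, "png", new File("square_out.png")) does the same job without the intermediate byte array.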
Related
This is related to How to extract image bytes out of PDF efficiently, but I'll try to restate the problem differently so it's less about PDF parsing and more about image processing.
I'm using PDFBox to extract images out of PDF files. There's a class PDImageXObject that represents the image inside the PDF; it contains image metadata (height, width, etc.) and exposes two APIs to pull out the image: BufferedImage getImage() and BufferedImage getImage(Rectangle rect, int subsampling).
The current code is straightforward:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
BufferedImage image = pdImage.getImage();
ImageIO.write(image, "jpg", baos);
However, for a large image, I'm having an issue with memory usage, as BufferedImage is storing uncompressed image data in memory, which is a lot bigger than the compressed result.
Is there a way to avoid loading the whole image into memory by breaking it up into tiles (e.g. 1024x1024) and iterating over them using the getImage signature that takes Rectangle? I'm seeing some promising information about JAI being able to use Tiles to output a compressed image without loading the uncompressed content into memory at once, but I don't understand how to tie it together with what I have from PDImageXObject. Or is there another way to do it? Is JAI still an active project?
By the way, the purpose of extracting the image is to feed it into the next component in the pipeline, which can handle multiple image formats. So if some format other than jpg is better suited for tiled processing, that should be OK.
I'm aware of one possibility using something like BigBufferedImage. But I was thinking processing a Tile at a time looked promising.
OK, I found a library: Commons Imaging. The Imaging class may help you.
I think you can try the createInputStream() method to find out the size of the real data (the byte length).
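The tile-by-tile idea from the question can be sketched in plain javax.imageio terms: iterate a 1024-pixel grid over the image, fetch each tile as its own small BufferedImage, compress it, and hand it downstream. In this sketch a getSubimage crop stands in for PDFBox's getImage(rect, subsampling) call; with PDFBox, that call is the part that would limit decoding to one tile's worth of pixels at a time:

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class TileLoop {
    // Stand-in for pdImage.getImage(rect, 1) (hypothetical here; a plain crop).
    static BufferedImage fetchTile(BufferedImage source, Rectangle r) {
        return source.getSubimage(r.x, r.y, r.width, r.height);
    }

    public static void main(String[] args) throws Exception {
        BufferedImage source = new BufferedImage(2500, 2000, BufferedImage.TYPE_INT_RGB);
        int tileSize = 1024;
        int tiles = 0;
        for (int y = 0; y < source.getHeight(); y += tileSize) {
            for (int x = 0; x < source.getWidth(); x += tileSize) {
                Rectangle r = new Rectangle(x, y,
                        Math.min(tileSize, source.getWidth() - x),
                        Math.min(tileSize, source.getHeight() - y));
                BufferedImage tile = fetchTile(source, r);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ImageIO.write(tile, "jpg", out);  // compress this tile only
                tiles++;                          // hand out.toByteArray() downstream
            }
        }
        System.out.println("tiles written: " + tiles);
    }
}
```

Note this only helps memory-wise if the downstream component can consume per-tile images; otherwise you'd need a tiled container format (tiled TIFF, for instance) to stitch the tiles back into one file.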
I have an application that need to show a very large image (from a png on disk) with only a small part of the image is visible on screen at any time. However the visible section can move quickly around the large image.
Loading the whole image into a BufferedImage at once is not a good idea as it can be 10 000 - 100 000 pixels wide, but the size on disk is not very large (a few MB perhaps) so it is a question of loading only the relevant sections to be displayed.
I've tried creating an ImageReader like this:
FileImageInputStream is = new FileImageInputStream(imageFile);
ImageReader imageReader = ImageIO.getImageReaders(is).next();
imageReader.setInput(is, false, true);
ImageReadParam readParameters = imageReader.getDefaultReadParam();
And then a method for getting a subimage something like this:
private BufferedImage loadFrame(int x, int y, int w, int h) {
    readParameters.setSourceRegion(new Rectangle(x, y, w, h));
    try {
        return imageReader.read(0, readParameters);
    } catch (IOException ex) {
        return null;
    }
}
This works in principle, but it is far too slow. So when moving fast around the image it lags way too much.
I also tried splitting up the source image beforehand so I had a bunch of smaller images on disk that I then loaded as needed using ImageIO.read(getImageFile(x,y)) where getImageFile(x,y) would return the appropriate File for that location. This method was actually much faster and fully usable.
So I guess I have a way to make this work, but it just seems a bit awkward to do it this way. Besides needing some preparation of the source image, it also requires a lot of disk access (although I guess this is probably buffered somewhere).
So my question is: What would be the best way to do this? (And why is it faster to load an image from disk than to load a part of an image from an ImageReader?)
PNG is a compressed format; you can't just open the file and seek to a specific offset to start reading a region (as you can with an uncompressed bitmap, after reading the file header of course). The compressed data has to be decoded sequentially, scanline by scanline, before you can extract regions from it. (http://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header)
If you want to sacrifice disk space to improve memory (RAM) usage and performance...
You can divide the image up and load only those grid chunks that you need to build the view for the user.
E.g. 1x1.png, 1x2.png, 2x1.png, 2x2.png; if the user is looking at the top-left corner you only need to load 1x1.png, and so on.
Alternatively, you can convert the image to an uncompressed bitmap (BMP). The file will be much larger on disk, but you'll be able to extract specific regions of it without having to process the whole file.
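A minimal sketch of the grid approach (file naming and tile size here are arbitrary choices, not anything standard):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class TileCache {
    static final int TILE = 512;
    static File dir;

    // One-time preparation: split the big image into TILE x TILE PNGs on disk.
    static void split(BufferedImage big) throws Exception {
        for (int ty = 0; ty * TILE < big.getHeight(); ty++) {
            for (int tx = 0; tx * TILE < big.getWidth(); tx++) {
                int w = Math.min(TILE, big.getWidth() - tx * TILE);
                int h = Math.min(TILE, big.getHeight() - ty * TILE);
                BufferedImage tile = big.getSubimage(tx * TILE, ty * TILE, w, h);
                ImageIO.write(tile, "png", tileFile(tx, ty));
            }
        }
    }

    static File tileFile(int tx, int ty) {
        return new File(dir, tx + "x" + ty + ".png");
    }

    // At display time: load only the tile covering pixel (x, y).
    static BufferedImage tileAt(int x, int y) throws Exception {
        return ImageIO.read(tileFile(x / TILE, y / TILE));
    }

    public static void main(String[] args) throws Exception {
        dir = new File(System.getProperty("java.io.tmpdir"), "tiles");
        dir.mkdirs();
        split(new BufferedImage(1200, 900, BufferedImage.TYPE_INT_RGB));
        BufferedImage t = tileAt(600, 450);  // loads only 1x0.png
        System.out.println(t.getWidth() + "x" + t.getHeight());
    }
}
```

In practice you'd also keep a small in-memory cache of recently used tiles so fast panning doesn't hit the disk for every frame.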
I have used this code to get an image into an array of pixels:
the convertTo2DWithoutUsingGetRGB method for reading the image into a pixel array
the writeToFile method for writing the pixel array back out as an image
Now I would like to convert the array of pixels back to an image. But when I convert it, I am losing image data.
Initial Image size: 80Kb JPG
Duplicate Image size: 71Kb JPG
I can clearly notice some difference between the two images; the Java-produced image has some sort of white noise.
I would like to reproduce the image without losing a single pixel of data. How do I achieve this in Java?
The jpg file format uses a lossy compression algorithm, which means the files it generates will have slight differences from the original. You can change the quality setting to compress more or less, but you can't re-save a jpg without introducing some modification to the data.
This is why jpg isn't recommended for image editing. Use a lossless format instead, like PNG.
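A quick way to convince yourself that PNG is lossless: round-trip a synthetic image through an in-memory PNG and compare every pixel. A small self-contained check:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import javax.imageio.ImageIO;

public class LosslessRoundTrip {
    public static void main(String[] args) throws Exception {
        // Fill an image with a deterministic pattern.
        BufferedImage original = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 100; y++)
            for (int x = 0; x < 100; x++)
                original.setRGB(x, y, (x * 31 + y * 17) & 0xFFFFFF);

        // Write as PNG (lossless) and read back.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(original, "png", out);
        BufferedImage copy = ImageIO.read(new ByteArrayInputStream(out.toByteArray()));

        // Compare every pixel.
        int[] a = original.getRGB(0, 0, 100, 100, null, 0, 100);
        int[] b = copy.getRGB(0, 0, 100, 100, null, 0, 100);
        System.out.println("identical: " + Arrays.equals(a, b));
    }
}
```

Swap "png" for "jpg" in the write call and the same comparison fails, which is exactly the white noise you're seeing.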
I have a specific object with image data. And I want read/write images (in BMP, JPEG and PNG formats) to this object.
Of course, I can read/write the image into/from a BufferedImage and then copy/paste pixels into/from my object. But I want to do it faster, without intermediate objects!
How can I do this with the standard Java library, or through some other Java library?
P.S.:
I know of the pngj library, which allows reading PNG images line by line. Maybe you know of similar libraries for BMP and JPEG (or for all of them: BMP, JPEG, PNG)?
Why don't you just keep the BufferedImage in your object? BufferedImage isn't all that bad. Otherwise, you will still need something like a BufferedImage to decode the image file into pixels, and then read the BufferedImage into something like an int[] array of pixels. That still puts a BufferedImage in the middle, and it would slow things down by adding an extra loop to read over and store the pixels.
Is there a reason why BufferedImage isn't suitable for your purpose?
If you want to do all the reading by yourself, you can just open the file with a stream and read it into an array, but you would need to work your way through the whole structure of the file (headers etc.), and if you are working with compressed formats you'd have to handle the decompression too.
Best would be to use ImageIO to load/write your files; I don't believe there is a lot of overhead in its reads and writes.
Also if you expect to do some heavy work on the images, know that BufferedImages can be hardware accelerated which will improve performance a lot.
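If the worry is per-pixel copy overhead, note that for the common TYPE_INT_* image types you can reach the backing int[] of the BufferedImage directly, with no copying at all. A small sketch (this assumes an int-backed image; byte-backed types use DataBufferByte instead):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class DirectPixels {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);

        // For TYPE_INT_* images the raster is backed by a single int[];
        // getData() here returns the live array, not a copy.
        int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();

        // Writing to the array writes straight into the image.
        pixels[0] = 0xFF0000;  // top-left pixel -> red
        System.out.println(Integer.toHexString(img.getRGB(0, 0)));
    }
}
```

The trade-off: once you grab the backing array this way, the image is marked untrackable and loses the hardware acceleration mentioned above, so only do it when raw array access is actually what you need.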
I have a servlet based application that is serving images from files stored locally. I have added logic that will allow the application to load the image file to a BufferedImage and then resize the image, add watermark text over the top of the image, or both.
I would like to set the content length before writing out the image. Apart from writing the image to a temporary file or byte array, is there a way to find the size of the BufferedImage?
All files are being written as jpg if that helps in calculating the size.
BufferedImage img = new BufferedImage(500, 300, BufferedImage.TYPE_INT_RGB);
ByteArrayOutputStream tmp = new ByteArrayOutputStream();
ImageIO.write(img, "png", tmp);
tmp.close();
Integer contentLength = tmp.size();
response.setContentType("image/png");
response.setHeader("Content-Length",contentLength.toString());
OutputStream out = response.getOutputStream();
out.write(tmp.toByteArray());
out.close();
No, you must write the file in memory or to a temporary file.
The reason is that it's impossible to predict how the JPEG encoding will affect file size.
Also, it's not good enough to "guess" at the file size; the Content-Length header has to be spot-on.
Well, the BufferedImage doesn't know that it's being written as a JPEG - as far as it's concerned, it could be PNG or GIF or TGA or TIFF or BMP... and all of those have different file sizes. So I don't believe there's any way for the BufferedImage to give you a file size directly. You'll just have to write it out and count the bytes.
You can calculate the size of a BufferedImage in memory very easily. This is because it is a wrapper for a WritableRaster that uses a DataBuffer for its backing. If you want to calculate its size in memory, you can get a copy of the image's raster using getData() and then measure the size of the data buffer in the raster.
DataBuffer dataBuffer = bufImg.getData().getDataBuffer();
// Each bank element in the data buffer is a 32-bit integer (true for TYPE_INT_* images)
long sizeBytes = ((long) dataBuffer.getSize()) * 4L;
long sizeMB = sizeBytes / (1024L * 1024L);
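Since the files are ultimately written as jpg, it's also worth noting that the encoded size (and thus the Content-Length) depends on the compression quality, which you can control explicitly through the standard ImageWriter API. A sketch, assuming the image content is already prepared:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;

public class JpegQuality {
    // Encode img as JPEG at the given quality (0.0f..1.0f) and return the bytes.
    static byte[] encode(BufferedImage img, float quality) throws Exception {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        MemoryCacheImageOutputStream mos = new MemoryCacheImageOutputStream(out);
        writer.setOutput(mos);
        writer.write(null, new IIOImage(img, null, null), param);
        mos.close();
        writer.dispose();
        return out.toByteArray();  // its length is the Content-Length for this quality
    }

    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(500, 300, BufferedImage.TYPE_INT_RGB);
        // Fill with a busy pattern so the quality setting visibly affects size.
        for (int y = 0; y < 300; y++)
            for (int x = 0; x < 500; x++)
                img.setRGB(x, y, (x * 7919 + y * 104729) & 0xFFFFFF);

        byte[] lo = encode(img, 0.3f);
        byte[] hi = encode(img, 0.9f);
        System.out.println("quality 0.3: " + lo.length + " bytes");
        System.out.println("quality 0.9: " + hi.length + " bytes");
    }
}
```

This is still "write it out and count the bytes", but it lets you trade bandwidth against quality deliberately rather than taking whatever default the writer picks.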
Unless it is a very small image file, prefer to use chunked encoding over specifying a content length.
It was noted in one or two recent stackoverflow podcasts that HTTP proxies often report that they only support HTTP/1.0, which may be an issue.
Before you load the image file as a BufferedImage, get a reference to it via a File object.
File imgObj = new File("your Image file path");
int imgLength = (int) imgObj.length();
imgLength would be your approximate image size, though it may vary after resizing and any other operations you perform on it.