Load parts of a large image fast in Java

I have an application that needs to show a very large image (from a PNG on disk), with only a small part of the image visible on screen at any time. However, the visible section can move quickly around the large image.
Loading the whole image into a BufferedImage at once is not a good idea, as it can be 10,000 - 100,000 pixels wide. The size on disk is not very large (a few MB, perhaps), so it is a question of loading only the relevant sections for display.
I've tried creating an ImageReader like this:
FileImageInputStream is = new FileImageInputStream(imageFile);
ImageReader imageReader = ImageIO.getImageReaders(is).next();
imageReader.setInput(is, false, true);
ImageReadParam readParameters = imageReader.getDefaultReadParam();
And then a method for getting a subimage, something like this:
private BufferedImage loadFrame(int x, int y, int w, int h) {
    readParameters.setSourceRegion(new Rectangle(x, y, w, h));
    try {
        return imageReader.read(0, readParameters);
    } catch (IOException ex) {
        return null;
    }
}
This works in principle, but it is far too slow: when moving quickly around the image, it lags way too much.
I also tried splitting up the source image beforehand so I had a bunch of smaller images on disk that I then loaded as needed using ImageIO.read(getImageFile(x,y)) where getImageFile(x,y) would return the appropriate File for that location. This method was actually much faster and fully usable.
So I guess I have a way to make this work, but it just seems a bit awkward to do it this way. Besides needing some preparation of the source image, it also requires a lot of disk access (although I guess this is probably buffered somewhere).
So my question is: What would be the best way to do this? (And why is it faster to load an image from disk than to load a part of an image from an ImageReader?)

PNG is a compressed format; you can't just open the file and seek to a specific location to start reading a region, as you can with a bitmap (after reading the file header, of course). The whole PNG needs to be parsed and decompressed before you can start extracting regions of it. (http://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header)
If you want to sacrifice disk space for improved memory (RAM) usage and performance...
You can divide the image up and load only those grid chunks that you need to build the view for the user.
1x1.png, 1x2.png, 2x1.png, 2x2.png: if the user is looking at the top-left corner, you only need to load 1x1.png, and so on.
Alternatively, you can convert the image to a bitmap (BMP). The file will be much larger on disk, but you'll be able to extract specific regions of it without having to process the whole file.
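For the grid approach, a minimal sketch of the one-off splitting step (tile size and file naming are illustrative; for the very largest sources you would read regions via an ImageReader here instead of one big read):
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

// One-off preparation step: split the big PNG into fixed-size tiles on disk.
public static void writeTiles(File source, File outDir, int tileSize) throws Exception {
    BufferedImage big = ImageIO.read(source); // acceptable offline, not at display time
    for (int ty = 0; ty * tileSize < big.getHeight(); ty++) {
        for (int tx = 0; tx * tileSize < big.getWidth(); tx++) {
            int w = Math.min(tileSize, big.getWidth() - tx * tileSize);
            int h = Math.min(tileSize, big.getHeight() - ty * tileSize);
            // getSubimage shares the backing raster, so no large copy is made
            BufferedImage tile = big.getSubimage(tx * tileSize, ty * tileSize, w, h);
            ImageIO.write(tile, "png", new File(outDir, tx + "x" + ty + ".png"));
        }
    }
}
At display time you then load only the handful of tiles that intersect the viewport, exactly as your getImageFile(x, y) scheme already does.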

Related

Extract image into a file from PDImageXObject without loading it into memory

This is related to How to extract image bytes out of PDF efficiently, but I'll try to restate the problem differently so it's less about PDF parsing and more about image processing.
I'm using PDFBox to extract images from PDF files. There's a class PDImageXObject that represents the image inside the PDF; it contains image metadata (height, width, etc.) and exposes two APIs to pull out the image: BufferedImage getImage() and BufferedImage getImage(Rectangle rect, int subsampling).
The current code is straightforward:
BufferedImage image = pdImage.getImage();
ImageIO.write(image, "jpg", baos);
However, for a large image, I'm having an issue with memory usage, as BufferedImage is storing uncompressed image data in memory, which is a lot bigger than the compressed result.
Is there a way to avoid loading the whole image into memory by breaking it up into tiles (e.g. 1024x1024) and iterating over them using the getImage signature that takes a Rectangle? I'm seeing some promising information about JAI being able to use tiles to output a compressed image without loading the uncompressed content into memory all at once, but I don't understand how to tie it together with what I have from PDImageXObject. Or is there another way to do it? Is JAI still an active project?
By the way, the purpose of extracting the image is to feed it into the next component in the pipeline, which can handle multiple image formats. So if some format other than jpg is more suited to tiled processing, that should be OK.
I'm aware of one possibility using something like BigBufferedImage, but I was thinking processing a tile at a time looked promising.
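A rough sketch of the tile idea I have in mind, assuming getImage(Rectangle, int) decodes only the requested region (which I haven't verified; tile size and file naming are illustrative):
import org.apache.pdfbox.pdmodel.graphics.image.PDImageXObject;
import javax.imageio.ImageIO;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;

static void writeTiles(PDImageXObject pdImage, File outDir) throws Exception {
    int tile = 1024;
    int w = pdImage.getWidth(), h = pdImage.getHeight();
    for (int y = 0; y < h; y += tile) {
        for (int x = 0; x < w; x += tile) {
            Rectangle region = new Rectangle(x, y, Math.min(tile, w - x), Math.min(tile, h - y));
            BufferedImage part = pdImage.getImage(region, 1); // subsampling = 1: full resolution
            ImageIO.write(part, "jpg", new File(outDir, "tile_" + x + "_" + y + ".jpg"));
        }
    }
}
Whether this actually bounds memory depends on whether PDFBox decodes lazily per region; if it decompresses the whole stream internally, the tiles only help on the output side.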
OK, I found a library: Commons Imaging. The Imaging class may be able to help you.
I think you can try the createInputStream() method to find out the size of the real data (byte length).

Merged two images --> 4 times the size! How do I reduce the file size?

I merge two images using the code below. One base image without transparency, one overlay image with transparency.
The file size of the images on their own is 20kb and 5kb, respectively.
Once I merge the two images, the resulting file size is > 100kb, at least 4 times the combined size of 25kb. I expected a file size of less than 25kb.
public static void mergeTwoImages(BufferedImage base, BufferedImage overlay, String destPath, String imageName) {
    // create the new image, canvas size is the max. of both image sizes
    int w = Math.max(base.getWidth(), overlay.getWidth());
    int h = Math.max(base.getHeight(), overlay.getHeight());
    BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);

    // paint both images, preserving the alpha channels
    Graphics2D g2 = combined.createGraphics();
    g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
    g2.drawImage(base, 0, 0, null);
    g2.drawImage(overlay, 0, 0, null);
    g2.dispose();

    // Save as new image
    saveImage(combined, destPath + "/" + imageName + "_merged.png");
}
My application has to perform very well, so can anyone explain why this effect happens and how I can reduce the resulting file size?
Thanks a lot!
EDIT:
Thanks a lot for your answers. The saveImage code is:
public static void saveImage(BufferedImage src, String file) {
    try {
        File outputfile = new File(file);
        ImageIO.write(src, "png", outputfile);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Because PNG is a lossless format, there are only two major factors that are likely to impact the file size:
How many pixels are in the file, and
How well the format can be compressed.
Since it sounds like you're doing an overlay, I'm guessing #1 is not changing. Compare the pixel dimensions of the input and output files to double-check this.
Most likely you're seeing issues because your merged image is more complicated, so the PNG filtering algorithms have a harder time compressing the file. There's not much you can do about this, short of changing the images or switching to a lossy file format.
To explain just a bit further, let's say you have one all-white image and one all-red image. Both are 100x100 pixels. These images would be really easy to compress because you'd just need to encode: repeat red 10000 times. Now, say you merge these images in a way that every other pixel comes from a different image. Now it's checkered. If you have a good encoding mechanism set up, you might still be able to encode this well by saying: repeat [red,white] 10000 times. But you'll notice even with this ideal encoding algorithm, I've increased the size of my encoded message by quite a bit. And if you don't have an encoding format that's perfectly ideal for this sort of thing, it all goes downhill from there.
In general, the more varied and random-seeming the pixels of your image are, compared to one another, the larger the resulting file will be.
Save the image as JPEG with a higher compression ratio/lower quality. For further details see:
How to decrease image thumbnail size in java
This answer to "Java Text on Image" for an SSCCE.
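A minimal sketch of explicit JPEG quality control via ImageWriteParam (JPEG has no alpha channel, so the ARGB merge result is first flattened onto an opaque RGB image; the quality value is illustrative):
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;

public static void saveJpeg(BufferedImage src, File file, float quality) throws Exception {
    // JPEG cannot store alpha: flatten onto a white background first
    BufferedImage rgb = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g = rgb.createGraphics();
    g.drawImage(src, 0, 0, Color.WHITE, null);
    g.dispose();

    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(quality); // e.g. 0.7f; lower = smaller file
    try (ImageOutputStream out = ImageIO.createImageOutputStream(file)) {
        writer.setOutput(out);
        writer.write(null, new IIOImage(rgb, null, null), param);
    } finally {
        writer.dispose();
    }
}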

Fastest way to load and display a jpeg on a SurfaceView?

This is a bit of a follow-up to my last question: Canvas is drawing too slowly
Now that I can draw images more quickly, the problem I am faced with is that the actual loading of the images takes far too long.
In the app I am working on, the user is able to play back video frames (jpegs) in succession, as though he is viewing the video in real time. I have been using BitmapFactory.decodeFile() to load each jpeg into a Bitmap. I'm unable to load all the images at once, since there are about 240 of them and that would use up all of my heap space. What I have been doing is preloading up to 6 at a time into an array, by way of a separate thread, to cut down on the time it takes for each image to display.
Unfortunately, it takes somewhere between 50 and 90ms to load an image, and I need to show an image every 42ms. Is there a faster way to load images possibly?
For clarification, these images are in a folder on the SD card, and they are all 720x480 jpegs. I am sampling them at half that size to cut down on memory usage.
I ended up doing this quite a bit differently than I had originally envisioned. There was quite a bit to it, but here's the gist of how I achieved my goal:
All images are stored on SD card and written to one file (each image takes up X bytes in the file)
Use native code to read from and write to the image file
When requesting an image, I pass the index of the image in the list and a bitmap object (RGB_565) to the native code using a JNI wrapper
The native code locks the bitmap surface, writes pixel data (as a uint8_t**) directly to the bitmap, then unlocks it
The image is rendered to the screen
By doing it this way, I only needed to store one image in memory at a time, and I was able to bypass garbage collection (since the bitmap was only created once and then repopulated natively). I hope someone else might find this strategy useful.
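On the pure-Java side, a hedged sketch of the same keep-one-buffer idea using BitmapFactory options (inBitmap and inMutable need API 11+, and before Android 4.4 the reused bitmap must exactly match the decoded size):
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 2;                          // 720x480 -> 360x240, a quarter of the memory
opts.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes/pixel instead of 4
opts.inMutable = true;                          // required for inBitmap reuse

Bitmap frame = null;
for (String path : framePaths) {                // framePaths: the ~240 jpeg files
    opts.inBitmap = frame;                      // decode into the previous buffer if possible
    frame = BitmapFactory.decodeFile(path, opts);
    // lock the SurfaceView canvas and draw 'frame' here
}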
I guess you have already tried all the methods in this tutorial http://www.higherpass.com/Android/Tutorials/Working-With-Images-In-Android/2/ and chosen the fastest. Maybe tweaking the resizing can decrease the loading time.
Best of all would of course be if you didn't have to resize the images at all. If you have full control over the images, maybe you could try to pack them as sprites; see this article: http://www.droidnova.com/2d-sprite-animation-in-android,471.html

Resize Image files

I have a lot of images taken by my digital camera at a very high resolution (3000 x 4000), and they take up a lot of hard-disk space. I used Photoshop to open each image and resize it to a smaller resolution, but that takes a lot of time and effort.
I think I can write a simple program that opens the folder of images, reads each file, gets its width and height, and, if they are very high, resizes the image and overwrites it with the smaller one.
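A rough sketch of the loop I have in mind, using only ImageIO and Graphics2D (the width threshold and the in-place overwrite are illustrative; keep backups before overwriting):
import javax.imageio.ImageIO;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;

public static void shrinkFolder(File dir, int maxWidth) throws Exception {
    for (File f : dir.listFiles((d, n) -> n.toLowerCase().endsWith(".jpg"))) {
        BufferedImage src = ImageIO.read(f);
        if (src == null || src.getWidth() <= maxWidth) continue; // skip small images
        int h = src.getHeight() * maxWidth / src.getWidth();     // keep proportions
        BufferedImage dst = new BufferedImage(maxWidth, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, maxWidth, h, null);
        g.dispose();
        ImageIO.write(dst, "jpg", f); // overwrite in place
    }
}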
Here is some code I use in a Java EE project (it should work in a normal application too):
int rw = 800; // the width I needed
BufferedImage image = ImageIO.read(new File(filename));
ResampleOp resampleOp = new ResampleOp(rw, (rw * image.getHeight()) / image.getWidth());
resampleOp.setFilter(ResampleFilters.getLanczos3Filter());
image = resampleOp.filter(image, null);
File tmpFile = new File(tmpName);
ImageIO.write(image, "jpg", tmpFile);
The resample filter comes from the java-image-scaling library. It also contains BSpline and bicubic filters, among others, if you don't like Lanczos3. If the images are not in the sRGB color space, Java silently converts them to sRGB (which happened to be what I needed).
Also, Java loses all EXIF data, though it does provide some (very hard to use) methods to retrieve it. For color-correct rendering you may wish to at least add an sRGB flag to the file. For that, see here.
+1 to what some of the other folks said about not specifically needing Java for this, but I imagine you must have known this and were maybe asking because you either wanted to write such a utility or thought it would be fun?
Either way, getting the image file listing from a directory is straightforward; resizing them correctly can take a bit more legwork, as you'll notice from Googling for best practices and seeing about 9 different ways to actually resize the files.
I wrote imgscalr to address this exact issue; it's a dead-simple API (single class, bunch of static methods) and has some good adoption in webapps and other tools utilizing it.
Steps to resize would look like this (roughly):
File[] files = dir.listFiles(); // get the file list
BufferedImage image = ImageIO.read(files[i]);
image = Scalr.resize(image, width);
ImageIO.write(image, "jpg", new File("resized-" + files[i].getName()));
There are a multitude of "resize" methods to call on the Scalr class, and all of them honor the image's original proportions. So if you scale only using a targetWidth (say 1024 pixels) the height will be calculated for you to make sure the image still looks exactly right.
If you scale with width and height, but they would violate the proportions of the image and make it look "Stretched", then based on the orientation of the image (portrait or landscape) one dimension will be used as the anchor and the other incorrect dimension will be recalculated for you transparently.
There are also a multitude of different Quality settings and FIT-TO scaling modes you can use, but the library was designed to "do the right thing" always, so using it is very easy.
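For example, a quality-biased fit-to-width call (assuming the 4.x Method and Mode enums):
import org.imgscalr.Scalr;

// Height is derived from the original proportions automatically.
BufferedImage scaled = Scalr.resize(image, Scalr.Method.QUALITY,
                                    Scalr.Mode.FIT_TO_WIDTH, 1024);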
You can dig through the source, it is all Apache 2 licensed. You can see that it implements the Java2D team's best-practices for scaling images in Java and pedantically cleans up after itself so no memory gets leaked.
Hope that helps.
You do not need Java to do this; it's a waste of time and resources. If you have Photoshop, you can do it by recording actions: batch resize using actions
AffineTransformOp offers the additional flexibility of choosing the interpolation type, as shown here.
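A minimal sketch, with src being the loaded BufferedImage (target width and interpolation type are illustrative):
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

double scale = 1024.0 / src.getWidth(); // shrink to 1024px wide, keeping proportions
AffineTransformOp op = new AffineTransformOp(
        AffineTransform.getScaleInstance(scale, scale),
        AffineTransformOp.TYPE_BILINEAR); // also TYPE_BICUBIC, TYPE_NEAREST_NEIGHBOR
BufferedImage dst = op.filter(src, null);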
You can individually or batch resize with our desktop image-resizing application called Sizester. There's a fully functioning 15-day free trial on our site (www.sizester.com).

Loading thumbnails on a canvas takes too long, with SWT

Hi, I have an application that loads all the images in a folder onto a canvas, vertically, like thumbnails. These folders usually have more than 20 images, around 1MB each, sometimes even 2MB.
I created a class called Index that extends Canvas.
I managed to load all the images and resize them to the proper size (the original size is around 1280x1985; yeah, they are quite big). But it takes too long, and I think I know why, but I don't know how to fix it or do it better.
public void loadImages() {
    System.out.println("Loading Images");
    List<String> imageList = listDirImages(this.strDir);
    int listSize = imageList.size();
    for (int i = 0; i < listSize; i++) {
        System.out.println(imageList.get(i));
        Image sourceImage;
        System.out.println(strDir.concat("/".concat(imageList.get(i))));
        try {
            sourceImage = new Image(getDisplay(), strDir.concat("/".concat(imageList.get(i))));
            //sourceImage[i] = new Image(getDisplay(), strDir.concat("/".concat(lsImagenes.get(i))));
        } catch (Exception e) {
            System.out.println(e);
            //band=1;
        }
    }
}
This function takes the directory, then calls a function that lists all the images.
The original code was different, but I trimmed it down to see where it was taking so long. Originally, sourceImage was an array of images (I don't know if that is better), and I resized the images by creating new ones, but it took even longer to create them.
With 25 images it takes almost 45 seconds to load this part. I know the problem is that I am loading the full images, and they are quite heavy. Is there a way to load them directly as thumbnails?
Some folders have around 80 pages; that is like 2 minutes. (For one thing, I think I have to do this in a thread, so the whole program can do other things while it is loading the index.)
You need the thumbnails to be pre-built on the server and just load them with JavaScript. Even if there were a way to load images directly as thumbnails, the client would still download the full-size image to create the thumbnail, and that is not good for network bandwidth.
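For a desktop SWT app like this one, a hedged sketch of the decode-then-shrink route, inside your Index canvas (ImageData.scaledTo does a cheap nearest-neighbour scale, so quality is modest, but only the thumbnail stays in memory):
import org.eclipse.swt.graphics.Image;
import org.eclipse.swt.graphics.ImageData;

// Decode once, keep only the thumbnail; the full-size ImageData is
// garbage-collectable as soon as this method returns.
private Image loadThumbnail(String path, int thumbWidth) {
    ImageData full = new ImageData(path);                     // decodes the file
    int thumbHeight = full.height * thumbWidth / full.width;  // keep proportions
    return new Image(getDisplay(), full.scaledTo(thumbWidth, thumbHeight));
}
Decoding still costs time per file, so running this in a background thread, as the question suggests, is still worthwhile.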
