We have an application that serves images. To speed up response times, we cache the BufferedImage objects directly in memory.
class Provider {
    @Override
    public IData render(String coordinate, String... layers) {
        int rwidth = 256, rheight = 256;
        ArrayList<BufferedImage> result = new ArrayList<BufferedImage>();
        for (String layer : layers) {
            String lkey = layer + "-" + coordinate;
            BufferedImage imageData = cacher.get(lkey);
            if (imageData == null) {
                try {
                    imageData = generateImage(layer, coordinate, rwidth, rheight);
                    cacher.put(lkey, imageData);
                } catch (IOException e) {
                    e.printStackTrace();
                    continue;
                }
            }
            if (imageData != null) {
                result.add(imageData);
            }
        }
        return new Data(rwidth, rheight, rwidth, result);
    }

    private BufferedImage generateImage(String layer, String coordinate, int rwidth, int rheight) throws IOException {
        BufferedImage image = new BufferedImage(rwidth, rheight, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = image.createGraphics();
        g.setColor(Color.RED);
        g.drawString(layer + "-" + coordinate, new Random().nextInt(rwidth), new Random().nextInt(rheight));
        g.dispose();
        return image;
    }
}
class Data implements IData {
    private BufferedImage imageResult;

    public Data(int imageWidth, int imageHeight, int originalWidth, ArrayList<BufferedImage> images) {
        this.imageResult = new BufferedImage(imageWidth, imageHeight, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = imageResult.createGraphics();
        for (BufferedImage imgData : images) {
            g.drawImage(imgData, 0, 0, null);
        }
        g.dispose();
        imageResult.flush();
        images.clear();
    }

    @Override
    public void save(OutputStream out, String format) throws IOException {
        ImageIO.write(this.imageResult, format, out);
        out.flush();
        this.imageResult = null;
    }
}
usage:
class ImageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // the "coordinate" request parameter is assumed here; the original snippet omitted it
        IData data = provider.render(req.getParameter("coordinate"), req.getParameter("layers").split(","));
        OutputStream out = res.getOutputStream();
        data.save(out, "png");
        out.flush();
    }
}
Note: the provider field is a single (shared) instance.
However, there seems to be a possible memory leak, because I get an OutOfMemoryError after the application has been running for about two minutes.
Then I used VisualVM to check the memory usage:
Even if I perform GC manually, the memory cannot be released.
And although there are only 300+ BufferedImage instances cached and 20 MB+ of memory is used, 1.3 GB+ of memory is retained. In fact, through Firebug I can confirm that a generated image is less than 1 KB. So I think the memory usage is not healthy.
Once I stop using the cache (by commenting out the following line):
//cacher.put(lkey, imageData);
the memory usage looks good:
So it seems that the cached BufferedImage objects cause the memory leak.
Then I tried converting each BufferedImage to a byte[] and caching the byte[] instead of the object itself, and the memory usage stayed normal. However, I found that serializing and deserializing the BufferedImage costs too much time.
So I wonder whether you have any experience with image caching?
Update:
Since so many people have said that there is no memory leak, just a cacher that uses too much memory: I am not sure, but I have tried caching byte[] instead of the BufferedImage directly, and the memory usage looks good. I cannot imagine that 322 images would take up 1.5 GB+ of memory; even as @BrettOkken said, the total size should be (256 * 256 * 4 bytes) * 322 / 1024 / 1024 = 80 MB, far less than 1 GB.
And just now I changed the code to cache the bytes and monitored the memory again; the code changes look like this:
BufferedImage ig = generateImage(layer, coordinate, rwidth, rheight);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ImageIO.write(ig, "png", bos);
imageData = bos.toByteArray();
tileCacher.put(lkey, imageData);
And the memory usage:
Same code, same operations.
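As an aside: once the cache holds already-encoded PNG bytes, a single-layer request does not need to be decoded back into a BufferedImage at all; the cached bytes can be streamed straight to the response. A sketch under that assumption (the servlet variables are reused from the code above, the rest is illustrative):

// Sketch: serve a cached, already-encoded PNG directly (single-layer case assumed).
byte[] cached = tileCacher.get(lkey);
if (cached != null) {
    res.setContentType("image/png");
    res.getOutputStream().write(cached);
    res.getOutputStream().flush();
} else {
    // generate the tile, ImageIO.write(...) it into a ByteArrayOutputStream as above,
    // cache the resulting bytes, then write them to the response
}

If several layers have to be composited per request, the cached bytes still need to be decoded with ImageIO.read(new ByteArrayInputStream(cached)) before drawing, which is the decoding cost mentioned earlier.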
Note from both VisualVM screenshots that the 97.5% of memory consumed by 4,313 instances of int[] (which I assume back the cached BufferedImages) is not consumed in the non-cached version.
Although the PNG you see is less than 1 KB (it is compressed, as per the PNG format), that single image is generated from multiple BufferedImage instances, which are not compressed. Hence you cannot directly correlate the image size seen in the browser with the memory occupied on the server. So the issue here is not a memory leak but the amount of memory required to cache these uncompressed layers as BufferedImages.
The strategy to resolve this is to tweak your caching mechanism:
- If possible, cache a compressed version of each layer instead of the raw image.
- Ensure that you can never run out of memory by limiting the cache size, either by number of entries or by amount of memory used, with an LRU or LIRS eviction policy (see the sketch after this list).
- Use a custom key object with coordinate and layer as two separate fields, overriding equals/hashCode, as the cache key.
- Observe the behavior: if you see too many cache misses, you will need a better caching strategy, or the cache may be unnecessary overhead.
- I believe you are caching layers because you expect many combinations of layer and coordinate and therefore cannot cache the final images; but depending on the request pattern you expect, you may want to consider caching the final composited images if possible.
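A minimal sketch of a size-bounded LRU cache using only the JDK (a LinkedHashMap in access order); the class name and capacity are illustrative, and a library cache (Guava, Ehcache, ...) would normally be preferable:

import java.awt.image.BufferedImage;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: evicts the least-recently-used tile once maxEntries is exceeded.
class LruImageCache {
    private final Map<String, BufferedImage> map;

    LruImageCache(final int maxEntries) {
        this.map = Collections.synchronizedMap(
                new LinkedHashMap<String, BufferedImage>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, BufferedImage> eldest) {
                        return size() > maxEntries;
                    }
                });
    }

    BufferedImage get(String key) {
        return map.get(key);
    }

    void put(String key, BufferedImage image) {
        map.put(key, image);
    }
}

With 256x256 TYPE_INT_ARGB tiles at roughly 256 KB each, even a few hundred entries add up to tens of megabytes, so the bound matters as much as the eviction policy.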
I'm not sure which caching API you are using or what the actual values in your requests are. However, based on the VisualVM output it looks to me as if String objects are leaking. Also, as you mentioned, if you turn off caching the problem is resolved.
Consider the following extract from your code.
String lkey = layer + "-" + coordinate;
BufferedImage imageData = cacher.get(lkey);
Now here are a few things to consider about this code:
- You are probably getting new String objects for lkey each time.
- Your cache has no upper limit and no eviction policy (e.g. LRU).
- If the cacher compares keys with == instead of String.equals(), then because these are new String objects they will never match, causing a new entry each time (see the key-class sketch below).
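If the cache is backed by a hash map, a small composite key with proper equals/hashCode avoids both the repeated string concatenation and any identity-comparison surprises. A sketch (the class and field names are mine, not from the question):

import java.util.Objects;

// Sketch: composite cache key so lookups rely on equals/hashCode rather than object identity.
final class TileKey {
    private final String layer;
    private final String coordinate;

    TileKey(String layer, String coordinate) {
        this.layer = layer;
        this.coordinate = coordinate;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TileKey)) return false;
        TileKey other = (TileKey) o;
        return layer.equals(other.layer) && coordinate.equals(other.coordinate);
    }

    @Override
    public int hashCode() {
        return Objects.hash(layer, coordinate);
    }
}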
VisualVM is a start but it doesn't give the complete picture.
You need to trigger a heap dump while the application is using a high amount of memory.
You can trigger a heap dump from VisualVM. It can also be done automatically on an OOME if you add this VM argument to the java process:
-XX:+HeapDumpOnOutOfMemoryError
Use Memory Analyzer Tool to open and inspect the heap dump.
The tool is quite capable and can help you walk the object references to discover:
1. What is actually using your memory.
2. Why the objects from #1 aren't being garbage collected.
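If you want to capture a dump at a precise moment from inside the application (for example right after the cache has filled up), the HotSpot-specific diagnostic MBean can do it programmatically; this is only a sketch and assumes an Oracle/OpenJDK HotSpot JVM:

import java.io.IOException;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Sketch: trigger a heap dump programmatically (HotSpot JVMs only).
public final class HeapDumper {
    public static void dump(String file) throws IOException {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(file, true); // true = dump only live (reachable) objects
    }
}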
Related
For a project I am working on, I was tasked with creating a way to convert an image into a non-cryptographic hash so it could easily be compared with similar images. However, I ran into an issue where the JVM begins to recklessly consume memory, even though the Java Monitoring & Management Console does not report any increase in memory consumption.
When I first ran the application, Task Manager reported values like this:
However, after only about 30 seconds those values had doubled or tripled.
I used the JMMC to create a dump of the process, but it only reported around 1.3 MB of usage:
The strangest part to me is that the application performs an operation which lasts about 15 seconds and then waits for 100 seconds (for debugging), and it is during those 100 seconds of the thread sleeping that the memory usage doubles.
Here are my two classes:
ImageHashGenerator.java
package com.arkazex.srcbot;

import java.awt.Color;
import java.awt.Image;
import java.awt.image.BufferedImage;

public class ImageHashGenerator {

    public static byte[] generateHash(Image image, int resolution) {
        //Resize the image
        Image rscaled = image.getScaledInstance(resolution, resolution, Image.SCALE_SMOOTH);
        //Convert the scaled image into a buffered image
        BufferedImage scaled = convert(rscaled);
        //Create the hash array
        byte[] hash = new byte[resolution * resolution * 3];
        //Variables
        Color color;
        int index = 0;
        //Generate the hash
        for (int x = 0; x < resolution; x++) {
            for (int y = 0; y < resolution; y++) {
                //Get the color
                color = new Color(scaled.getRGB(x, y));
                //Save the colors
                hash[index++] = (byte) color.getRed();
                hash[index++] = (byte) color.getGreen();
                hash[index++] = (byte) color.getBlue();
            }
        }
        //Return the generated hash
        return hash;
    }

    //Convert Image to BufferedImage
    private static BufferedImage convert(Image img) {
        //Create a new bufferedImage
        BufferedImage image = new BufferedImage(img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_3BYTE_BGR);
        //Get the graphics and draw the source image into it
        image.getGraphics().drawImage(img, 0, 0, null);
        //Return the image
        return image;
    }
}
Test.java
package com.arkazex.srcbot;

import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Test {
    public static void main(String[] args) throws IOException {
        //Create a hash
        byte[] hash = ImageHashGenerator.generateHash(ImageIO.read(new File("img1.JPG")), 8); //Memory grows to around 150MB here
        System.out.println(new String(hash));
        try { Thread.sleep(100000); } catch (Exception e) {} //Memory grows to around 300MB here
    }
}
EDIT: The program stopped growing to 300 MB after a few seconds for no apparent reason. I had not changed anything in the code; it just stopped doing it.
I think what you are missing here is that some of the image classes use off-heap memory. This is (presumably) invisible to the JMMC because it only gets told about on-heap usage. The OS-level memory monitoring sees it ... because it is looking at the total resource consumption of the JVM running your application.
The problem is that the off-heap memory blocks are only reclaimed when the corresponding on-heap image objects are finalized. That only happens when they are garbage collected.
The program stopped growing to 300MB after a few seconds for no apparent reason. I had not changed anything in the code, it just stopped doing it.
I expect that the JVM decided it was time to do a full GC (or something like that) and that caused it to free up lots of space in the off-heap memory pool. That meant the JVM no longer needed to keep growing the pool.
(I am being deliberately vague because I don't actually know how off-heap memory allocation works under the covers in a modern JVM. But if you want to investigate, the JVM source code can be downloaded ...)
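One mitigation worth trying, under the assumption that it is the intermediate images that pin the native memory: release their resources explicitly as soon as the pixels have been copied, instead of waiting for finalization. A sketch that mirrors the convert() method above:

import java.awt.Graphics;
import java.awt.Image;
import java.awt.image.BufferedImage;

// Sketch: eagerly release image resources instead of waiting for the finalizer.
public class EagerConvert {
    static BufferedImage convertAndRelease(Image img) {
        BufferedImage image = new BufferedImage(
                img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_3BYTE_BGR);
        Graphics g = image.getGraphics();
        g.drawImage(img, 0, 0, null);
        g.dispose();   // do not leave the Graphics object for the finalizer
        img.flush();   // release native/backing resources of the (scaled) source image
        return image;
    }
}

Whether this actually helps depends on which resources the scaled instance holds, but it costs nothing to try.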
See the explanation in the /** comments */
public class Test {
    public static void main(String[] args) throws IOException {
        //Create a hash
        /** Here it allocates (3 * resolution^2) bytes of memory to a byte array */
        byte[] hash = ImageHashGenerator.generateHash(ImageIO.read(new File("img1.JPG")), 8); //Memory grows to around 150MB here
        /** And here it again allocates the same memory to a String.
            Why print a String of 150 million chars? */
        System.out.println(new String(hash));
        try { Thread.sleep(100000); } catch (Exception e) {} //Memory grows to around 300MB here
    }
}
I'm receiving a Bitmap as a byte array through a socket. I read it and then want to set os.toByteArray() as the content of an ImageView in my application. The code I use is:
try {
    //bmp = BitmapFactory.decodeByteArray(result, 0, result.length);
    bitmap_tmp = Bitmap.createBitmap(540, 719, Bitmap.Config.ARGB_8888);
    ByteBuffer buffer = ByteBuffer.wrap(os.toByteArray());
    bitmap_tmp.copyPixelsFromBuffer(buffer);
    Log.d("Server", result + " Length:" + result.length);
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            imageView.setImageBitmap(bitmap_tmp);
        }
    });
    return bmp;
} finally {
}
When I run my application and start receiving the byte[], I expect the ImageView to change, but it doesn't.
LogCat says:
java.lang.RuntimeException: Buffer not large enough for pixels at
android.graphics.Bitmap.copyPixelsFromBuffer
I searched in similar questions but couldn't find a solution to my problem.
Take a look at the source (version 2.3.4_r1, the last time Bitmap was updated on Grepcode prior to 4.4) for Bitmap::copyPixelsFromBuffer().
The wording of the error is a bit unclear, but the code clarifies it: it means that your buffer is calculated as not having enough data to fill the pixels of your bitmap.
This is (possibly) because they use the buffer's remaining() method to figure out the capacity of the buffer, which takes into account the current value of its position attribute. If you call rewind() on your buffer before you invoke copyPixelsFromBuffer(), you might see the runtime exception disappear. I say 'might' because the ByteBuffer::wrap() method should set the position attribute to zero, removing the need to call rewind(), but judging by similar questions and my own experience, resetting the position explicitly may do the trick.
Try
ByteBuffer buffer = ByteBuffer.wrap(os.toByteArray());
buffer.rewind();
bitmap_tmp.copyPixelsFromBuffer(buffer);
The buffer size should be exactly 1,553,040 bytes (540 × 719 pixels × 4 bytes per pixel for ARGB_8888).
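To check quickly whether it is the amount of data rather than the buffer position that is the problem, you can compare what the socket actually delivered with what the bitmap needs before copying. A sketch, assuming the same 540x719 ARGB_8888 bitmap (getByteCount() needs API 12+; on older versions compute width * height * 4 yourself):

// Sketch: verify the received data is large enough before copying it into the bitmap.
Bitmap bitmapTmp = Bitmap.createBitmap(540, 719, Bitmap.Config.ARGB_8888);
byte[] received = os.toByteArray();

if (received.length < bitmapTmp.getByteCount()) {   // 540 * 719 * 4 = 1,553,040 bytes
    Log.e("Server", "Expected " + bitmapTmp.getByteCount()
            + " bytes but received only " + received.length);
} else {
    ByteBuffer buffer = ByteBuffer.wrap(received);
    buffer.rewind();
    bitmapTmp.copyPixelsFromBuffer(buffer);
}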
I have some code which uses the ImageReader class to read in a large number of TIF images. The imageReader object is final and created in the constructor.
synchronized (imageReader) {
    LOG.debug(file);
    FileInputStream fin = new FileInputStream(file);
    ImageInputStream iis = ImageIO.createImageInputStream(fin);
    imageReader.setInput(iis, false);
    int sourceXSubSampling = targetSize == null ?
            1 : Math.max(1, imageReader.getWidth(0) / targetSize.width);
    int sourceYSubSampling = targetSize == null ?
            1 : Math.max(1, imageReader.getHeight(0) / targetSize.height);
    ImageReadParam subSamplingParam = new ImageReadParam();
    subSamplingParam.setSourceSubsampling(sourceXSubSampling, sourceYSubSampling, 0, 0);
    return imageReader.read(0, subSamplingParam);
}
About one time in four, the ImageReader gets "stuck" on the first image it loaded and keeps returning that same image over and over again, even though it is provided with different ImageInputStreams. This is evidenced by the output to the logger.
How do I solve this? I was thinking about taking a "fingerprint" of the image and getting a different ImageReader from the iterator when this occurs, but that seems like overkill. Does anyone know how to solve this problem?
As @MadProgrammer says in the comment section, the typical pattern for reading multiple images is to obtain a new ImageReader for each image and dispose() of it afterwards. The time and memory spent on creating a reader instance are very small compared to actually reading an image, so any performance penalty should be negligible.
In theory it should, however, be sufficient to invoke reset() on the ImageReader before/after each read.
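A sketch of that per-image pattern, adapted to the code above (the subsampling ImageReadParam is left out for brevity; the try/finally is the important part):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

// Sketch: obtain a fresh ImageReader per file and dispose of it afterwards.
class TiffLoader {
    BufferedImage read(File file) throws IOException {
        ImageInputStream iis = ImageIO.createImageInputStream(file);
        try {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(iis);
            if (!readers.hasNext()) {
                throw new IOException("No ImageReader found for " + file);
            }
            ImageReader reader = readers.next();
            try {
                reader.setInput(iis, false);
                return reader.read(0);    // pass an ImageReadParam here if subsampling is needed
            } finally {
                reader.dispose();         // release the reader's resources every time
            }
        } finally {
            iis.close();
        }
    }
}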
I have a problem saving large (e.g. 12,000 x 9,000) images.
I'm developing graphical editing software (something like a simple Photoshop), and the user obviously has to have the ability to save the image.
Let's say I would like to save the image as .png.
Does Java always need to use a BufferedImage for saving drawn content?
I know the equation for the size of the image is:
Xsize * Ysize * 4 (red, green, blue, alpha)
so in this case we get over 400 MB.
I know I could save the image in parts (tiles), but the user would have to merge them somehow anyway.
Is there any other way to save such a large image without using a BufferedImage?
Code for saving the image:
public static void SavePanel() {
    BufferedImage image = new BufferedImage(
            (int) (Main.scale * sizeX),
            (int) (Main.scale * sizeY),
            BufferedImage.TYPE_INT_RGB);
    g2 = image.createGraphics();
    panel.paint(g2);
    try {
        ImageIO.write(image, "png", new File(FactoryDialog.ProjectNameTxt.getText() + ".png"));
    } catch (IOException e) {
    }
}
Thank you in advance!
The ImageIO.write(..) methods accept a RenderedImage, not just a BufferedImage. I successfully exploited this fact some time ago to write out really large images. Generally, the writer implementations write the image out sequentially and ask the RenderedImage only for the pieces they currently need.
From looking at your code, I think it should be possible to hack together a RenderedImage implementation which takes your panel in its constructor and can be passed to ImageIO for writing. During the process, ImageIO will request data from your image, and you can then use the panel to create the requested pieces (Raster contents) on the fly. This way, the whole image never has to be stored in memory. A starting point for this approach is:
public class PanelImage implements RenderedImage {

    private final Panel panel;

    public PanelImage(Panel panel) {
        this.panel = panel;
    }

    /* implement all the missing methods; don't be afraid, most are trivial */
}
Obviously, you should also check that your panel doesn't suffer from the same problem as the BufferedImage. Depending on the nature of your application, you'll have to hold the image in memory at least once anyway (modulo using tiles). But this way you can at least avoid the duplication.
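Passing it to the writer would then look roughly like this (a sketch; PanelImage is the skeleton above and the file name is illustrative):

// Sketch: ImageIO.write() accepts any RenderedImage, so the custom implementation can be passed in directly.
RenderedImage image = new PanelImage(panel);
ImageIO.write(image, "png", new File("large-output.png"));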
Alternatively, use a native image tool like ImageMagick instead.
I'm opening a bunch of files using JFileChooser, and for each image I create a BufferedImage using image = ImageIO.read(path);, where image is declared as a static field.
Now I've got 30 files of 1 MB each, and after running read() 60 times my memory usage (checked in the OS process manager) grows by about 70 MB.
Because my image variable is static, it can't be the case that every image's content is kept somewhere. So my question is: why am I losing so much memory?
I'm writing an app that needs to load tons of pictures into memory; is there a leak somewhere? Is it the garbage collector's task to clean up unused data?
Here is my code to read this data:
public class FileListElement {

    private static final long serialVersionUID = -274112974687308718L;
    private static final int IMAGE_WIDTH = 1280;

    // private BufferedImage thumb;
    private static BufferedImage image;
    private String name;

    public FileListElement(File f) throws IllegalImageException {
        // BufferedImage image = null;
        try {
            image = ImageIO.read(f);
            // if (image.getWidth() != 1280 || image.getHeight() != 720) {
            //     throw new IllegalImageException();
            // }
        } catch (IOException e) {
            e.printStackTrace();
        }
        image.flush();
        //
        // thumb = scale(image, 128, 72);
        // image = null;
        name = "aa";
    }
}
What's wrong with it?
Maybe I'm doing something wrong? I need the raw pixels of tons of images, or the compressed images loaded into RAM, so that I have fast access to any pixel of any image.
It's odd that loading a 1 MB picture takes much more than 1 MB.
You can't count on the current memory usage being the amount of memory that is actually needed; garbage collection does not run constantly, especially if you are far from your maximum memory usage. Try loading more images; you might find there is no issue.
It's odd that loading a 1 MB picture takes much more than 1 MB.
Well, I would expect the format stored on disk to be compressed and therefore smaller than a BufferedImage in memory. So I don't think this is odd.
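To put a rough number on it: the decoded in-memory size depends only on the pixel dimensions, not on the compressed file size. A sketch (the path is illustrative, and the 4-bytes-per-pixel factor assumes an int-backed RGB/ARGB raster; JPEGs often decode to 3 bytes per pixel):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Sketch: compare the compressed on-disk size with the rough uncompressed in-memory size.
public class DecodedSizeEstimate {
    public static void main(String[] args) throws IOException {
        File file = new File("some-picture.jpg");                        // illustrative path
        BufferedImage img = ImageIO.read(file);
        long onDisk = file.length();
        long inMemory = (long) img.getWidth() * img.getHeight() * 4;     // ~4 bytes per pixel
        System.out.printf("On disk: %d KB, decoded: ~%d KB%n",
                onDisk / 1024, inMemory / 1024);
    }
}

A 1 MB JPEG at, say, 3000x2000 pixels needs roughly 24 MB once decoded, so a single retained image plus garbage that has not yet been collected can easily account for the growth you see.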