I am using javax.imageio.ImageIO.read(), which takes almost 9 seconds to read a 5 MB image located in the Windows temp directory (please find the JProfiler screenshot below). I want a more efficient way that can bring the time down to at least 2-3 seconds.
The file arrives as an org.springframework.web.multipart.MultipartFile through a REST endpoint and is then copied to the Windows temp location for further processing. The complete code block:
String fileName = StringUtils.cleanPath(multipartFile.getOriginalFilename());
Path destinationPath = Paths.get(System.getProperty("java.io.tmpdir"));
String tempPath = destinationPath.resolve(fileName).toString();
File uploadedImageFile = new File(tempPath);
multipartFile.transferTo(uploadedImageFile); // transferTo() returns void
BufferedImage originalImage = ImageIO.read(uploadedImageFile);
While googling I found a third-party library, JDeli, which claims to be more efficient than javax.imageio.ImageIO, but it is a paid one: https://files.idrsolutions.com/maven/site/jdeli/apidocs/com/idrsolutions/image/JDeli.html#read(java.io.File).
Can anyone suggest a better, more efficient way to read the image file?
Your time of 9 seconds looks bad; are you sure you have isolated the timing to just the ImageIO.read(File) call, and not included other code?
That said, I have found ImageIO.read(File) quite slow on non-SSD drives (i.e. 500 ms to 1 second). When testing NAS and HDD drives on my PC, some reads of 4-6 MB images called InputStream.read() up to 1000 times. I found that the elapsed time for loads was reduced if the entire image was read into memory before calling ImageIO.read():
static BufferedImage load(File f) throws IOException
{
    // Read the whole file into memory in one go, then decode from RAM;
    // this avoids thousands of small reads against a slow disk or network share.
    byte[] bytes = Files.readAllBytes(f.toPath());
    try (InputStream is = new ByteArrayInputStream(bytes))
    {
        return ImageIO.read(is);
    }
}
This solution can slow down access a little on SSD drives (which was not an issue for me at ~20 ms), but the gain on NAS/network drives was 50-250 ms on my PC.
Obviously, the timings you observe will depend on your hardware, image file sizes and formats, but it may be worth trying in your particular case.
If you need to load several images, you can split the file I/O and the ImageIO calls across separate threads for a small extra gain, at the cost of extra complexity and memory footprint; see the sketch below.
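A minimal sketch of that two-stage pipeline, under the assumption that a single I/O thread feeds a small pool of decoder threads (the method name and pool sizes are illustrative, not from the original answer):

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.imageio.ImageIO;

static List<BufferedImage> loadAll(List<File> files) throws Exception {
    ExecutorService ioPool = Executors.newSingleThreadExecutor();  // serial disk reads
    ExecutorService decodePool = Executors.newFixedThreadPool(2);  // parallel decoding
    try {
        List<Future<BufferedImage>> pending = new ArrayList<>();
        for (File f : files) {
            // Stage 1: read the raw bytes on the I/O thread.
            Future<byte[]> bytes = ioPool.submit(() -> Files.readAllBytes(f.toPath()));
            // Stage 2: decode on a decoder thread once the bytes are available.
            pending.add(decodePool.submit(
                    () -> ImageIO.read(new ByteArrayInputStream(bytes.get()))));
        }
        List<BufferedImage> images = new ArrayList<>();
        for (Future<BufferedImage> image : pending) {
            images.add(image.get());
        }
        return images;
    } finally {
        ioPool.shutdown();
        decodePool.shutdown();
    }
}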
First of all, I have to say that 9 seconds for a 5 MB image seems too much for modern hardware. Sure, complex images or sophisticated formats (e.g. multi-page TIFF) can take longer, but my low-end PC needs 1.3 seconds for a 5 MB JPG. I would definitely try to identify hardware and/or OS level issues first.
Having said that, here is my proposed solution for faster image loading, since the use case is to generate thumbnails. The key point is that we can instruct the image reader to read only part of the image from disk, because a thumbnail needs much less image data. This technique (sub-sampling) can be applied to JPG as follows:
Iterator<ImageReader> imageReadersByFormatName = ImageIO.getImageReadersByFormatName("jpg");
ImageReader ir = imageReadersByFormatName.next(); //There should be at least the default com.sun.imageio.plugins.jpeg.JPEGImageReader installed
ImageReadParam defaultReadParam = ir.getDefaultReadParam();
defaultReadParam.setSourceSubsampling(8, 8, 0, 0); //Configure the reader to read every 8th row / 8th column of the image
ir.setInput(ImageIO.createImageInputStream(new FileInputStream(IMAGE_PATH)), true);
BufferedImage image = ir.read(0, defaultReadParam);
You will have to determine the optimal values for the X-axis and Y-axis sub-sampling (the first two parameters of setSourceSubsampling()); this is a speed/image quality trade-off, and one way to derive the factors is sketched below.
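For instance, one way to derive the sub-sampling factors from a desired thumbnail size (my own sketch; the 200 px target is an arbitrary assumption, and ir must already have its input set):

int srcWidth = ir.getWidth(0);   // source dimensions, available once setInput() was called
int srcHeight = ir.getHeight(0);
int targetWidth = 200;           // illustrative thumbnail size
int targetHeight = 200;
// Skip just enough rows/columns that the decoded image is still at least the target size.
int xSub = Math.max(1, srcWidth / targetWidth);
int ySub = Math.max(1, srcHeight / targetHeight);
defaultReadParam.setSourceSubsampling(xSub, ySub, 0, 0);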
There are more configuration options available on ImageReadParam.
Bonus material: since you are using AWS, you might consider delegating this task to a Lambda function (e.g. if it can be done asynchronously).
I would recommend trying the classes from the javax.imageio.stream package.
If you want to keep your solution with the temp file:
FileInputStream fileInputStream = new FileInputStream(uploadedImageFile);
// The second argument is the cache directory (null means the system temp dir),
// not the path of the image itself.
FileCacheImageInputStream fileCache = new FileCacheImageInputStream(fileInputStream, null);
BufferedImage bufferedImage = ImageIO.read(fileCache);
If you have enough memory, you can use the solution with the memory cache:
FileInputStream fileInputStream = new FileInputStream(uploadedImageFile);
// Buffers the underlying stream in memory instead of in a temp file
MemoryCacheImageInputStream memoryCache = new MemoryCacheImageInputStream(fileInputStream);
BufferedImage bufferedImage = ImageIO.read(memoryCache);
From PageSpeed I am getting only the image link and the possible optimizations in bytes and percentage, like:
Compressing and resizing https://example.com/…ts/xyz.jpg?036861 could save 212KiB (51% reduction).
Compressing https://example.com/…xyz.png?303584508 could save 4.4KiB (21% reduction).
For example, I have an image of size 300 KB, and for this image PageSpeed displays 100 KB and a 30% reduction.
This is only for one image, but I am sure I will have lots of images to compress.
So how can I compress an image in Java by passing bytes or percentage as a parameter (or using any other calculation, via an API or an image-processing tool), so that I get a compressed version of the image as suggested by Google?
Thanks in advance.
You can use the Java ImageIO package to do the compression for many image formats. Here is an example:
import java.awt.image.BufferedImage;
import java.io.*;
import java.util.Iterator;
import javax.imageio.*;
import javax.imageio.stream.*;

public class Compression {

    public static void main(String[] args) throws IOException {
        File input = new File("original_image.jpg");
        BufferedImage image = ImageIO.read(input);

        File compressedImageFile = new File("compressed_image.jpg");
        OutputStream os = new FileOutputStream(compressedImageFile);

        Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("jpg");
        ImageWriter writer = writers.next();

        ImageOutputStream ios = ImageIO.createImageOutputStream(os);
        writer.setOutput(ios);

        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(0.05f); // change to the quality value you prefer
        writer.write(null, new IIOImage(image, null, null), param);

        ios.close();
        os.close();
        writer.dispose();
    }
}
You can find more details about it here
Also, there are some third-party tools, like these:
https://collicalex.github.io/JPEGOptimizer/
https://github.com/depsypher/pngtastic
EDIT: If you want to use Google PageSpeed in your application, it is available as a web server module for either Apache or Nginx; you can find out how to configure it for your website here:
https://developers.google.com/speed/pagespeed/module/
But if you want to integrate the PageSpeed C++ library into your application, you can find build instructions for it here:
https://developers.google.com/speed/pagespeed/psol
It also has a Java client here:
https://github.com/googleapis/google-api-java-client-services/tree/main/clients/google-api-services-pagespeedonline/v5
There is colour compression ("compression quality") and there is resolution compression ("resizing"). Fujy's answer deals with compression quality, but that is not where the main savings come from: the main savings come from resizing down to a smaller size. E.g. I got a 4 MB photo down to 207 KB using maximum compression with Fujy's answer, and it looked awful, but I got it down to 12 KB using a reasonable quality and a smaller size.
So the above code should be used for "compression quality", but this is my recommendation for resizing:
https://github.com/rkalla/imgscalr/blob/master/src/main/java/org/imgscalr/Scalr.java
I wish resizing were part of the standard Java libraries, but it seems it's not (or there are image quality problems with the standard methods?). Riyad's library is really small; it's just one class. I simply copied that class into my project, because I never learned how to use Maven, and it works great.
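For reference, a minimal usage sketch of imgscalr (the file names are illustrative; the Scalr.resize() overloads are documented in the class linked above):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.imgscalr.Scalr;

BufferedImage src = ImageIO.read(new File("photo.jpg"));
// Resize so the longest side is 640 px, favouring quality over speed
BufferedImage thumb = Scalr.resize(src, Scalr.Method.QUALITY, 640);
ImageIO.write(thumb, "jpg", new File("photo-640.jpg"));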
One-liner Java solution: Thumbnailator.
Maven dependency:
<!-- https://mvnrepository.com/artifact/net.coobird/thumbnailator -->
<dependency>
<groupId>net.coobird</groupId>
<artifactId>thumbnailator</artifactId>
<version>0.4.17</version>
</dependency>
The one-liner:
Thumbnails.of(inputImagePathString).scale(scalingFactorFloat).outputQuality(qualityFactorFloat).toFile(outputImagePathString);
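If you prefer absolute pixel bounds over a scaling factor, the same fluent API also offers size() (a sketch with illustrative file names and values):

Thumbnails.of(new File("original.jpg"))
          .size(640, 480)        // fit within 640x480, keeping the aspect ratio
          .outputQuality(0.8)    // JPEG quality factor in [0.0, 1.0]
          .toFile(new File("thumbnail.jpg"));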
As a solution for this problem I can recommend the API of TinyPNG.
You can use it for compressing as well as resizing the image.
Documentation: tinypng.com/developers/reference/java
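A short sketch based on that documentation (the API key is a placeholder, and the tinify client library is assumed to be on the classpath):

import com.tinify.Source;
import com.tinify.Tinify;

Tinify.setKey("YOUR_API_KEY");                 // placeholder; get a key from tinypng.com
Source source = Tinify.fromFile("large.png");  // uploads and compresses the image
source.toFile("optimized.png");                // downloads the compressed result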
I need to open an .hdr file and work on it, but ImageIO doesn't support that format.
The problem is that I need to keep the information loss as small as possible: 32 bpc is perfect, 16 is fine, and less than 16 won't work.
There are three possible solutions I have come up with:
Find a plugin that allows me to open .HDR files. I've been searching for one a lot, but without luck.
Find a way to convert the .HDR file to a format I can find a plugin for (TIFF, maybe?). Tried this too, but still no luck.
Reduce the dynamic range from 32 bpc to 16 bpc and then convert it to PNG. This is tricky, because once I have a PNG file I win, but it's not that easy to cut the range without killing the image.
What would you recommend I do? Do you know a way to make one of these three options work? Or do you have a better idea?
You can now read .HDR using ImageIO. :-)
This is a first version, so it might be a little rough around the edges, but it should work for standard (default settings) Radiance RGBE .HDR files.
The returned image will be a custom BufferedImage with a DataBufferFloat backing (i.e., the samples will be in 3-sample, 32-bit float, interleaved RGB format).
By default, a simple global tone-mapping is applied, and all RGB values are normalized to the range [0...1] (this allows anyone to just use ImageIO.read(hdrFile) and get an image that looks somewhat reasonable, in a very reasonable time).
It is also possible to pass an HDRImageReadParam to the ImageReader instance with a NullToneMapper. This is even faster, but the float values will be unnormalized and might exceed the maximum value. It allows you to do custom, more sophisticated tone-mapping on the image data before converting it to something more displayable.
Something like:
// Create input stream
ImageInputStream input = ImageIO.createImageInputStream(hdrFile);
BufferedImage image; // declared here so it is still in scope after the try blocks

try {
    // Get the reader
    Iterator<ImageReader> readers = ImageIO.getImageReaders(input);

    if (!readers.hasNext()) {
        throw new IllegalArgumentException("No reader for: " + hdrFile);
    }

    ImageReader reader = readers.next();

    try {
        reader.setInput(input);

        // Disable default tone mapping
        HDRImageReadParam param = (HDRImageReadParam) reader.getDefaultReadParam();
        param.setToneMapper(new NullToneMapper());

        // Read the image, using settings from param
        image = reader.read(0, param);
    }
    finally {
        // Dispose reader in finally block to avoid memory leaks
        reader.dispose();
    }
}
finally {
    // Close stream in finally block to avoid resource leaks
    input.close();
}

// Get the float data
float[] rgb = ((DataBufferFloat) image.getRaster().getDataBuffer()).getData();

// TODO: Custom tone mapping on the float RGB data

// Convert the image to something easily displayable
BufferedImage converted = new ColorConvertOp(null).filter(
        image,
        new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB));

// Optionally write as JPEG or another format
ImageIO.write(converted, "JPEG", new File(...));
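As an illustration of the tone-mapping TODO above (my own sketch, not part of the original answer), the simplest global operator just compresses every unnormalized value x into [0, 1) via x / (1 + x):

// Simple global tone mapping applied in place to the raw float samples
for (int i = 0; i < rgb.length; i++) {
    rgb[i] = rgb[i] / (1 + rgb[i]);
}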
I have used a standard Java file stream to upload a file. When I tried to upload a 25 MB zip file, it took almost 11 minutes, but when I uploaded the same file to yousendit.com, a file-uploading site, it took just 25 seconds. Following is my code:
File file = new File(destination + fileName);
FileOutputStream fileOutputStream = new FileOutputStream(file);
byte[] buffer = new byte[1024];
InputStream in = dataHandler.getDataSource().getInputStream();

int len;
while ((len = in.read(buffer)) != -1) {
    fileOutputStream.write(buffer, 0, len);
}
fileOutputStream.flush();
fileOutputStream.close();
I don't have any ideas on how to speed up the upload. Is there any third-party API, or do you have any other suggestions?
You can split the file into chunks and upload each one in a separate thread. As far as I remember, the HTTP standard defines special headers (e.g. Content-Range) that help the server join the chunks back together.
Start by taking a look at FileUpload from Apache.
You could use a Flash or HTML5 plugin to upload the file to your server, and then process the file once it is already on the server; that will be much faster, I think.
There is something terribly wrong if a software stack cannot achieve 40 KB per second throughput on an upload.
I suggest that you increase the size of the buffer. Make it 10 times bigger and see if you get a speedup.
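For instance (the exact size is a tuning knob rather than a magic number; this is just the 10x suggestion above):

byte[] buffer = new byte[10 * 1024]; // fewer read()/write() round trips per file than with 1 KB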
If that doesn't help, I suggest you profile your system to try to identify where the bottleneck is. The code you've written should not be CPU-intensive; if it is, it would be instructive to understand why.
My guess is either that you've got a particularly badly written filter "upstream" of that code, or that the problem is not in the application at all, despite what the network team thinks. Perhaps it is a problem with virtualization / virtual networking.
We are using Java2D to resize photos uploaded to our website, but we have run into an issue (a seemingly old one; cf. http://forums.sun.com/thread.jspa?threadID=5425569): a few particular JPEGs raise a CMMException when we try to ImageIO.read() an InputStream containing their binary data:
java.awt.color.CMMException: Invalid image format
at sun.awt.color.CMM.checkStatus(CMM.java:131)
at sun.awt.color.ICC_Transform.<init>(ICC_Transform.java:89)
at java.awt.image.ColorConvertOp.filter(ColorConvertOp.java:516)
at com.sun.imageio.plugins.jpeg.JPEGImageReader.acceptPixels(JPEGImageReader.java:1114)
at com.sun.imageio.plugins.jpeg.JPEGImageReader.readImage(Native Method)
at com.sun.imageio.plugins.jpeg.JPEGImageReader.readInternal(JPEGImageReader.java:1082)
at com.sun.imageio.plugins.jpeg.JPEGImageReader.read(JPEGImageReader.java:897)
at javax.imageio.ImageIO.read(ImageIO.java:1422)
at javax.imageio.ImageIO.read(ImageIO.java:1326)
...
(snipped the remainder of the stack trace, which is our ImageIO.read() call, servlet code and such)
We narrowed it down to photos taken on specific cameras, and I selected a photo that triggers this error: http://img214.imageshack.us/img214/5121/estacaosp.jpg.
We noticed that this only happens with Sun's JVM (on Linux and Mac; just tested it on 1.6.0_20); a test machine with OpenJDK reads the same photos without a hitch, possibly due to a different implementation of the JPEG reader.
Unfortunately, we are unable to switch JVMs in production, nor to use native-dependent solutions such as ImageMagick ( http://www.imagemagick.org/ ).
Considering that, my question is: does a replacement for ImageIO's JPEG reader exist which can handle photos such as the linked one? If not, is there another 100% pure Java photo-resizing solution that we can use?
Thank you!
One possibly useful library for you could be the Java Advanced Imaging library (JAI).
Using this library can be quite a bit more complicated than using ImageIO, but in a quick test I just ran, it did open and display the problem image file you linked:
public static void main(String[] args) {
    RenderedImage image = JAI.create("fileload", "estacaosp.jpg");

    float scale = 0.5f;
    ParameterBlock pb = new ParameterBlock();
    pb.addSource(image);
    pb.add(scale);                      // x scale factor
    pb.add(scale);                      // y scale factor
    pb.add(1.0F);                       // x translation
    pb.add(1.0F);                       // y translation
    pb.add(new InterpolationNearest()); // or InterpolationBilinear()
    image = JAI.create("scale", pb);

    // Create an instance of DisplayJAI and show the image in a scroll pane
    DisplayJAI srcdj = new DisplayJAI(image);
    JScrollPane srcScrollPaneImage = new JScrollPane(srcdj);

    JFrame frame = new JFrame();
    frame.getContentPane().add(srcScrollPaneImage, BorderLayout.CENTER);
    frame.pack();
    frame.setVisible(true);
}
After running this code, the image seems to load fine. It is then resized by 50% using the ParameterBlock.
And finally, if you wish to save the file, you can just call:

String filename2 = "tofile.jpg";
String format = "JPEG";
RenderedOp op = JAI.create("filestore", image, filename2, format);
I hope this helps you out. Best of luck.
Old post, but for future reference:
Inspired by this question and the links found here, I've written a JPEGImageReader plugin for ImageIO that supports JPEG images with this kind of "bad" ICC color profile (the issue is that the rendering intent in the ICC profile is incompatible with Java's ColorConvertOp). It's plain Java and does not require JAI.
The source code and linked binary builds are freely available from the TwelveMonkeys project on GitHub.
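For what it's worth, once the plugin jar (the imageio-jpeg module of the TwelveMonkeys project) is on the classpath, no code changes should be needed: ImageIO discovers readers through its service provider mechanism, so the existing call stays exactly as it was:

BufferedImage image = ImageIO.read(file); // picked up automatically via the ImageIO SPI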
I faced the same issue. I was reluctant to use JAI, as it is outdated, but it looks like it's the shortest solution.
This code converts an InputStream to a BufferedImage using Sun's ImageIO (fast) or, in the few cases where this problem occurs, using JAI:
public static BufferedImage read(InputStream is) throws IOException {
    try {
        // We try it with ImageIO first
        return ImageIO.read(ImageIO.createImageInputStream(is));
    } catch (CMMException ex) {
        // If we failed, reset the input stream (start from the beginning);
        // note this requires a stream that supports mark/reset, e.g. a
        // BufferedInputStream that was mark()ed before the first read.
        is.reset();
        // And use JAI
        return JAI.create("stream", SeekableStream.wrapInputStream(is, true)).getAsBufferedImage();
    }
}
EDIT:
Got the directory working. Now there's another issue in sight: the files in the storage are stored with their DB id as a prefix to their file names. Of course I don't want the users to see those. Is there a way to combine the response.redirect with setting the headers for filename and size?
best,
A
Hi again,
New approach: is it possible to create an IIS-like virtual directory within Tomcat in order to avoid streaming and only make use of a header redirect? I played around with contexts but couldn't get it going...
Any ideas?
thx
A
Hi %,
I'm facing a weird issue with the Java heap space which is close to bringing me to the ropes.
The short version is: I've written a ContentManagementSystem which also needs to handle huge files (>600 MB). Tomcat heap settings:
-Xmx700m
-Xms400m
The issue is that uploading huge files works, even though it's slow. Downloading files, however, results in a Java heap space exception: trying to download a 370 MB file makes Tomcat jump to a 500 MB heap (which should be OK) and end in a Java heap space exception.
I don't get it; why does upload work and download not?
Here's my download code:
byte[] byt = new byte[1024 * 1024 * 2];
response.setHeader("Content-Disposition", "attachment;filename=\"" + fileName + "\"");

FileInputStream fis = new FileInputStream(new File(filePath));
OutputStream os = response.getOutputStream();
BufferedInputStream buffRead = new BufferedInputStream(fis);

int read; // was undeclared in the original snippet
while ((read = buffRead.read(byt)) > 0) {
    os.write(byt, 0, read);
    os.flush();
}
buffRead.close();
os.close();
If I'm getting it right, the buffered reader should take care of any memory issue, right?
Any help would be highly appreciated, since I've run out of ideas.
Best regards,
W
"If I'm getting it right, the buffered reader should take care of any memory issue, right?"
No, that has nothing to do with memory issues; it's actually unnecessary, since you're already using a buffer to read the file. Your problem is with writing, not with reading.
I can't see anything immediately wrong with your code. It looks as though Tomcat is buffering the entire response instead of streaming it; I'm not sure what could cause that.
What does response.getBufferSize() return? And you should try setting response.setContentLength() to the file's size; I vaguely remember that a web container under certain circumstances buffers the entire response in order to determine the content length, so maybe that's what's happening. It's good practice to set it anyway, since it enables clients to display the download size and give an ETA for the download.
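For instance, a minimal sketch (filePath and response come from the question's code; note that setContentLength() takes an int, which is fine for a 370 MB file):

File downloadFile = new File(filePath);
// Declare the exact size up front, so nothing has to buffer
// the whole body just to compute the Content-Length.
response.setContentLength((int) downloadFile.length());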
Try using the setBufferSize and flushBuffer methods of the ServletResponse.
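A rough sketch (the 8 KB figure is an arbitrary assumption to tune for your setup, and setBufferSize() must be called before any content is written):

response.setBufferSize(8 * 1024); // keep the container's response buffer small
// ... write the file to response.getOutputStream() in chunks ...
response.flushBuffer();           // force the buffered headers/content out to the client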
You'd better use java.nio for that, so you can read resources partially and free resources that have already been streamed.
Otherwise you can end up with memory problems despite the settings you've applied to the JVM environment.
My suggestions:
The quick-n-easy fix: use a smaller array! Yes, it loops more, but this will not be a problem. 5 kilobytes is just fine; you'll know within minutes whether this works adequately for you:
byte[] byt = new byte[1024*5];
A little bit harder: if you have access to sendfile (like in Tomcat with the Http11NioProtocol; documentation here), then use it.
A little bit harder, again: switch your code to Java NIO's FileChannel. I have very, very similar code running on equally large files with hundreds of concurrent connections and similar memory settings, with no problem. NIO is faster than plain old Java streams in these situations: it can use DMA (Direct Memory Access) so the data goes from disk to NIC without being copied through user space by the CPU. Here is a snippet from my own code base; I've ripped out most of it to show the basics. FileChannel.transferTo() is not guaranteed to send every byte, so it is called in a loop:
WritableByteChannel destination = Channels.newChannel(response.getOutputStream());
FileChannel source = file.getFileInputStream().getChannel();

long start = 0;               // offset of the first byte to send
long length = source.size();  // number of bytes to send
long total = 0;               // bytes sent so far
while (total < length) {
    long sent = source.transferTo(start + total, length - total, destination);
    total += sent;
}
The following code is able to stream data to the client while allocating only a small buffer (BUFFER_SIZE; this is a soft point, and you may want to adjust it):
private static final int OUTPUT_SIZE = 1024 * 1024 * 50; // 50 MB
private static final int BUFFER_SIZE = 4096;

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String fileName = "42.txt";

    // build response headers
    response.setStatus(200);
    response.setContentLength(OUTPUT_SIZE);
    response.setContentType("text/plain");
    response.setHeader("Content-Disposition",
            "attachment;filename=\"" + fileName + "\"");
    response.flushBuffer(); // write HTTP headers to the client

    // streaming result
    InputStream fileInputStream = new InputStream() { // fake input stream
        int i = 0;

        @Override
        public int read() throws IOException {
            if (i++ < OUTPUT_SIZE) {
                return 42;
            } else {
                return -1;
            }
        }
    };

    ReadableByteChannel input = Channels.newChannel(fileInputStream);
    WritableByteChannel output = Channels.newChannel(response.getOutputStream());
    ByteBuffer buffer = ByteBuffer.allocate(BUFFER_SIZE);
    while (input.read(buffer) != -1) {
        buffer.flip();
        output.write(buffer);
        buffer.clear();
    }
    input.close();
    output.close();
}
Are you required to serve files using Tomcat? For this kind of task we have used a separate download mechanism. We chained Apache -> Tomcat -> storage and then added rewrite rules for downloads. That way you bypass Tomcat, and Apache serves the file to the client (Apache -> storage). This only works if the files are stored as files, though; if you read from a DB or another type of non-file storage, this solution cannot be used. The overall scenario is that you generate download links for files as e.g. domain/binaries/xyz... and write a redirect rule for domain/files using Apache mod_rewrite.
Do you have any filters in the application, or do you use the tcnative library? You could try profiling it with jvisualvm.
Edit: small remark: note that you have an HTTP response splitting attack possibility in the setHeader call if you do not sanitize fileName.
Why don't you use Tomcat's own FileServlet?
It can surely serve files much better than you can possibly imagine.
A 2 MB buffer is way too large! A few KB should be ample. Megabyte-sized objects are a real issue for the garbage collector, since they often need to be treated separately from "normal" objects (normal == much smaller than a heap generation). To optimize I/O, your buffer only needs to be slightly larger than your I/O buffer size, i.e. at least as large as a disk block or network packet.