I have a problem when reading and writing a PNG file. I read it with ImageIO into a BufferedImage and then write that image back out to a byte array, again using ImageIO, but the file size increases significantly. How can this happen?
public BufferedImage toBufferedImage(InputStream inputstream) {
    try {
        return ImageIO.read(inputstream);
    } catch (Exception e) {
        throw new IllegalStateException("Can't convert to buffered image", e);
    }
}

public byte[] toByteArray(BufferedImage bufferedImage, String filetype) {
    ByteArrayOutputStream output = new ByteArrayOutputStream();
    try {
        ImageIO.write(bufferedImage, filetype, output);
        return output.toByteArray();
    } catch (Exception e) {
        throw new IllegalStateException(e);
    }
}
Follow up: is there any library to support compressed PNGs that is written in Java and does not need any native code?
This is most likely due to the compression settings being different between Java's PNG encoder and whatever created the original PNG.
The documentation says it's decoding the input file, so it's not being held in memory as a PNG:
Returns a BufferedImage as the result of decoding a supplied File with an ImageReader chosen automatically from among those currently registered. The File is wrapped in an ImageInputStream. If no registered ImageReader claims to be able to read the resulting stream, null is returned.
When it writes it back, it has to re-encode the PNG file, and Java's PNG encoding doesn't seem to be as efficient as whatever created your original file.
The PNG writer supplied with the JDK does not let you control the compression (PNG output is always DEFLATE-compressed, but the JDK writer exposes no compression settings). You can quickly check this with:
ImageWriter w = ImageIO.getImageWritersByFormatName("png").next();
ImageWriteParam p = w.getDefaultWriteParam();
System.out.println("Can compress? " + p.canWriteCompressed());
// Can compress? false
It's possible that imageio-ext or jai-imageio include a PNG writer with configurable compression: http://java.net/projects/imageio/
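If such a plugin is registered, the compression setting could be applied through the standard ImageWriteParam API. A minimal sketch, assuming a third-party PNG writer with compression control is on the classpath (the JDK writer will simply be skipped):

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public static void writePngCompressed(BufferedImage image, File out) throws Exception {
    Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("png");
    while (writers.hasNext()) {
        ImageWriter writer = writers.next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        if (!param.canWriteCompressed()) {
            continue; // e.g. the default JDK writer: no compression control exposed
        }
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        String[] types = param.getCompressionTypes();
        if (types != null && types.length > 0) {
            param.setCompressionType(types[0]);
        }
        param.setCompressionQuality(0.0f); // by convention 0.0f favours smallest size
        ImageOutputStream ios = ImageIO.createImageOutputStream(out);
        try {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            ios.close();
            writer.dispose();
        }
        return;
    }
    throw new IllegalStateException("No PNG writer with compression control is registered");
}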
I used the following function to create a base64 encoded string of my Gravatar image (https://www.gravatar.com/avatar/cd5415f97afbe0177ba35ae31fbfd0db):
final BASE64Encoder encoder = new BASE64Encoder();
String encoded = encoder.encode(inputStreamToByteArray(is));
encoded = encoded.replaceAll("\r?\n", "");
return encoded;
I ran the method a couple of days ago and got the following base64 encoded string:
/9j/4AAQSkZJRgABAQAAAQABAAD//gA7Q1JFQVRPUjogZ2QtanBlZyB2MS4wICh1c2luZyBJSkcgSlBFRyB2ODApLCBxdWFsaXR5ID0gOTAK/9sAQwADAgIDAgIDAwMDBAMDBAUIBQUEBAUKBwcGCAwKDAwLCgsLDQ4SEA0OEQ4LCxAWEBETFBUVFQwPFxgWFBgSFBUU/9sAQwEDBAQFBAUJBQUJFA0LDRQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQU/8AAEQgAUABQAwEiAAIRAQMRAf/EAB8AAAEFAQEBAQEBAAAAAAAAAAABAgMEBQYHCAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+v/EAB8BAAMBAQEBAQEBAQEAAAAAAAABAgMEBQYHCAkKC//EALURAAIBAgQEAwQHBQQEAAECdwABAgMRBAUhMQYSQVEHYXETIjKBCBRCkaGxwQkjM1LwFWJy0QoWJDThJfEXGBkaJicoKSo1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoKDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uLj5OXm5+jp6vLz9PX29/j5+v/aAAwDAQACEQMRAD8A7j4ZeIl1PRljvrcCMgorw3Qdk78qBnb6Zzjiusv4o7nS45w3lzwuY2heQSMV/hbOBXjNtfXPhLxe088l3dWwlVpYIJdxEWCR8gGQAfavdPCGseH/AB3oyjTLiKSaVNyzs3mOrDu2R6jGCO1fk2Hr1cqzOFap8F+WXktv6+89NxVSnZHj/wATvjP4V+EotR4hvGhmulZooYkLsVHUn0H168+lebw/tleC7yVRDFcmJiB5uDgc8549xXnn7Xutaz4d8Z6PofiAt420OGRb37PqreXLDlirxo0OwKrBey8e9fcf7OP7J37Ofifwjp/irw/4dj1y0v40lMGo3klyttJgFo2jLlAyng5B6V+3xqqK+FP+vU8SSbejseZeD/iBonju3MulXQkYDcYmGGA9cfjXRstdZ+1b4K+GPwWs/C2paBo1hoHiTUdVgsoYdOIhSaE/LKWiX5doDD5gAc7cnHB5VwRwRg1UnF2cNLmkHJq0uhXkWoWqw/Sq7nFZvY0K83ArNmOWNX5zyazp25NYy2NojPFZudI1nTtTsYRLdSMLfPlq2ATyDuxx616Tph0REsbhNIsP7WuEjZ57aArsY/eX5slV3cZzxmuT8RaY2qaNNDG8cUvVZJeFU+ua6n4ezWHw88HiyjbyU3pcRXZAjWUsvzcnhmcqfpkYHGa/NuIaVHDYr2s/tdF1umn0en+Z0YeTnT5ex86/Ff8AZ+8Y/tRfG3Wz4Z0+OHStIEWmT6pNMEtklSPc4LYyzb2YfKCenarul/s/eOf2VLK7udJ+LI0+5lw76dplq8y3EgB2g5HTJxkrjkntX1D8OfEx8O/CHS/LjFtNqN1d3l0I+SJpbiVn3H+8Cdp9NuKxfitb2vivwBLcwzeXdWrl4Z9wDK3v1yp9K9PDZnX9tTwVLSMUle127JdX6HJUpRUXUe58t2XgnU/iJ8QLLxR8Q/GT+IruCRJI4F34HQqnzAbVz1Cjt719CTne5cYIbn5eleD2vjxb5Uju/LM6nCSqQenrj6GvT/BfjK31m3NnMPLmAGwlcHJGfyznn39q9NYnEUsYp15Xi1ba1tf8zGDjy2RvyniqshxU0zEMQRgjjBqpK1fTN3LIJm4NZ0p5q5O3HvVKY1lM2idrGoKkMoZT1B6GvKPEnjrUvBl241jVQumvGFgjlgKQRSbyCS6nOQpyByTgdB19ZQ5HFeL/ABS1CHRNemuNb0WbVbCWSK3s7p5P3NkSC7MFKsDkryuBnOCSK8jM8FDFypTlG/K/6/Ezo1HC9j1HwH4p+1+Af7Pub5LzUVZtQDecZGkjldm3EnkHfvBB5BA9RXN+NPGl1L4I1/w9HNJZy36D7NeQgFoJBgHIPVSOo46cGsPSPjv4Q1HSYbmfV2g1ESO0apYFFuFJIkjdB91mO3BXIBReBk5f4rt0nQPGjrvAfZIpV146EHkGvi8cq2WY32kPglqnvZ9V/X6HTHlqx13PAvCV7o+m+IZdO8WXOoWjRDIGmQq0kpONpXewAU/j074zX0B4b8efDee0msdB0DXJNTGVS61K8jDRKQBvVVBztPOORyc+leVeJvCula9EttqcMgmXJiuoW2yRH1Uj+VcVLpWp+EtlxPcST28bDy7+3AEkR7GRVOQP9pfXkCvrsPmNHGwUau/4HBOi6b90+trPVrm+maC9WM3UaDEsSlfNUBfmZcna3I6Eg8EEg1LKpFcH8JviQvikLpV/NFb64YTDZXwbEF4pO4RMR6kBgCOD8y9WDegK63EKyLnawyMgg/kele3SleNuxUXdGfPmqUvNak0QOaozRYzVSTNU7HXxvjg9KzPE/jvSdN8QaNpl5pch02ZD9q86JJY5FG4ZDblbcdwyMcADBrQArkPivp2pX/hZf7Nd0khmEkuwDJi2sGGeoHIzjrjuK58WpujJ03ZoxpuzsePeL9Js/EscCpp7WFg1286G+XZEoyCoJU5RmA6bjz25OPUJvDaeJPAFpL4h1KWxurNT5etRP5UjRq3yPICSDlcAg5z14JrwTx18XNai1ia9i12dbVFMZsZ5i0ZAJyhThcDkfKB2r1H4NeP7Pxo9kPEWItXMatptqUC2oUA5MYAC+ZweTz2B7V4+BVGo5q10901pf8f8/M2lzIl8LeCNY8QTTfb8nS1YLDfXFv5Es692EW4kf8Cxnr7VvXvwYtTG3karI67SvkzJwR6d69GbNQSZxXRHJcJCXPBNPybt92qIdVtWZ8pa18IPF3gG8lv9FRNQst5d9PcEDbnOVI6EcnH4jnp714D8ZyeLtAgmvIZbbUUQCSKcgsw9cjhvTI9u+a6t2IrktSspvD9019p6O+nyuHvLKLqP+mkYHf1UD1wDyretQoqm/el8/wDP/Mwcn0N+SQiqNxLwabFfxXao0UqTRSr5kMyEFZU9R7jIyO2R1BBMc4JNdMouO41K+p3axDFO8oEEEZHoasLEKf5dZXLsfKPxZ+GVlfeMrmxFjiBvmVjIAHZlJznPB5xgjGBWD4d+Gt5oeq2l5oms2cJjuFnitLpyQcbWYg9uADgAn5RwK+ivi9o8j6db6jbxgyxN5Urs+1VQ8hm4PAOfT72OQa86svDGm3FpHcXEz3+pO8bxf6RujR1fIaRVQuUGOQOoI46Gvgsfia2Aqyp
xdo9Pn/W/kdtOKnG57mgE0Ecisrh1DBk6HPce1RPHioPCt9HqOmsFuorp43OfLcNhTyue447MAeOlackXtX2uFrrE0IVVpdf8P+JxTg4yaMiWPFVJIyDWxLD7VTlhxXXcyOB8ReHrm0El3pTMil/Ne2QZ2SdfNjHqedydHBP8RO6Tw34ii8R2sn3Uu4CFmiU5APZl9VPb8R1BrsJIPSudj8H29p4kk1i3/cyTxGOeID5XOc7vY+v+JJovpYLH/9k=
Just today I executed the method again, using the same image. But instead of retrieving the same encoded string, I received:
/9j/4AAQSkZJRgABAQAAAQABAAD//gA7Q1JFQVRPUjogZ2QtanBlZyB2MS4wICh1c2luZyBJSkcgSlBFRyB2NjIpLCBxdWFsaXR5ID0gOTAK/9sAQwADAgIDAgIDAwMDBAMDBAUIBQUEBAUKBwcGCAwKDAwLCgsLDQ4SEA0OEQ4LCxAWEBETFBUVFQwPFxgWFBgSFBUU/9sAQwEDBAQFBAUJBQUJFA0LDRQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQU/8AAEQgAUABQAwEiAAIRAQMRAf/EAB8AAAEFAQEBAQEBAAAAAAAAAAABAgMEBQYHCAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+v/EAB8BAAMBAQEBAQEBAQEAAAAAAAABAgMEBQYHCAkKC//EALURAAIBAgQEAwQHBQQEAAECdwABAgMRBAUhMQYSQVEHYXETIjKBCBRCkaGxwQkjM1LwFWJy0QoWJDThJfEXGBkaJicoKSo1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoKDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uLj5OXm5+jp6vLz9PX29/j5+v/aAAwDAQACEQMRAD8A7j4ZeIl1PRljvrcCMgorw3Qdk78qBnb6Zzjiusv4o7nS45w3lzwuY2heQOxX+Fs4FeM219c+EvF7TzyXd1bCVWlggl3ERYJHyAZAB9q908Iax4f8d6Mo0y4ikmlTcs7N5jqw6Fsj1GMEdq/JsPXq5XmcK1T4L8r8uh6biqlOyPH/AInfGjwr8JRajxDeNDNdKzRQxRl2KjqTjoPr159K83h/bK8F3koEMNyYmIHm4OBzznj3Feefte61rPh3xno+h+IC3jbQ4ZFvfs+qt5csOWKvGjQ7AqsF7Lx719x/s4/snfs5+J/COn+KvD/h2PW7S/jSU2+o3klyttJgFo2jLlAyng5B6V+3xqqK+FP+vU8SSbejseZeD/iBonju3MulXQkYDJiYYYD1x+NdGy11n7Vvgr4Y/Baz8LaloGjWGgeI9R1WCyhh04iFJoT8spaJflwAw+YAHO3JxweVcEcEYNVJxdnHqaQcmrSK8i1C1WH6VXc4rN7GhXm4FZsxyxq/OeTWdO3WsJbG0Rnis3Okazp2p2MIlupGFvny1bAJ5B3Y49a9J0w6IiWNwmkWH9rXCRs81tAV2MfvL82Sq54znjNcn4i0xtU0aaGN44peqyS8Kp9c11Pw9msPh54OFlG3kpvSeK7IEayll+bk8MzFT9MjA4zX5vxDSo4bE+0n9rout9H8v8zfDyc6fL2PnX4r/s/eMf2ovjbrZ8M6fHDpWkCLTJ9UmmCWySpHucFsZZt7MPlBPTtV3S/2fvHP7Klld3Ok/Fkafcy4d9O0y1eZbiQA7Qcjpk4yVxyT2r6g+HPiY+HfhDpflxi2m1G6u7y68vkiaW4lZ9x/vAnafTbisb4rW9r4q8AS3MM3l3Vq5eGfcAyt79cqfSvTw2Z1/bU8FS0jFJXtduy7vuctSlFRdR7ny3ZeCdT+InxAsvFHxD8Zv4iu4JEkjgXfgdCqfMF2rnqFHb3r6EnIdy4wQ3Py9K8HtfHi3ypHd+WZ1OElUg9PXH0Nen+C/GVvrNubOYeXMANhK4OSM/lnPPv7V6axNeljFKvK8WrbWsYwceWyN+U8VVkOKmmYhiCMEcYNVJWr6Zu5ZBM3BrOlPNXJ2496pTGspm0TtY1BUhlDKeoPQ15R4k8dal4Mu3GsaqF014wsEcsBSCKTeQSXU5yFOQOScDoOvrKHI4rxf4pahDomvT3Gt6LNqthLJFb2d08n7myJBdmClWByV5XAznBJFeRmeDhi5U5yjflf9fkZ0ZuF7HqPgPxSbvwD/Z9zfJeairNqAbzzI0kcrs24k8g794IPIwPUVzfjTxpdS+CNf8PRzPZy36A215CAWgkGAcg9VI6jjpwaw9I+O/hDUdJhuZtXaDURI7RqlgUW4UkiSN0H3WY7SCuQCi8DJy/xXbpOgeNHXeA+yRSrrx0IPINfF45VssxntI/BLVPez6o6Y8tWOu54F4SvdH03xDLp3iy51C0aIZA0yFWklJxtK72ACn8enfGa+gPDfjz4bz2k1joOg65LqYysd1qV5GGiUgDeqqDnaeccjk59K8q8TeFdK16JbbU4ZBMuTFdQsVkiPqpH8q4qXStT8JbLie4knt42Hl39uAJIj2Miqcgf7S+vIFfXYbMaWNharucE6Lpv3T62s9Wub6ZoL1YzcxoMSxKV81QF+ZlydrcjoSDwQSDUsqkVwfwm+JC+KQulX80VvrhhMNlfBsQXik7hExHqQGAI4PzL1YN6ArrcQrIudrDIyCD+R6V7dKV427FRdzPnzVKXmtSaIHNUZosZqpI1TsdfG+OD0rM8T+O9J03xBo2mXmlyHTZkP2rzokljkUbhkNuVtx3DIxwAMGtACuQ+K+n6lf8AhZf7Md0khmEkuwDJi2sGGeoHIzjrjuK58Wpui3TdmjGm7M8e8X6TZ+JY4FTT2sLBrt50N8uyJRkFQSpyjMB03HntyceoTeG08SeALSTxDqUtjdWany9aifypGjVvkeQEkHK4BBznrwTXgnjr4ua1FrE17Frs6WsamM2M8xaMgE5QpwuByPlA7V6j8GvH9n40eyHiPEWr+WrabalAtqFAOTGAAvmfKeTz2B7V4+CVKo5q10901pf8TaV0SeFvBGseIJpvt+TpasFhvri38iWde7CLcSP+BYz19q3734MWpjbyNVkddpXyZk4I9O9ejNmoJM4reOTYSEueCafk3+WxDqtqzPlLWvhB4u8A3kt/oqJqFlvLvp7ggbQc5UjoRycfiOenvXgPxnJ4u0CCa8hlttRVAJIpyCzD1yOG9Mj275rq3YiuS1Kym8P3TX2no76fK4e8souo/wCmkYHf1UD1wDyrevQoqm/el8/8/wDMwcmtjflkIqjcS8GmxX8V2qNFKk0Uq+ZDMhBWVPUe4yMjtkdQQTHOCTXTKLjuNSO7WIYpxiBBBGR6GrCxCn+XWVy7Hyj8WfhlZX3jK4sRY4gb5lYyAB2ZSc5zwecYIxgVg+HfhreaHqtpeaJrNnAY7hZ4rS6ckHG1mIPbgA4AJO0cCvor4vaPI+m2+o28YMsTeVK7PtVUPIZuDwDnnj72OQa86svDGm3FpHcXEz3+pO8bxf6RujR1fIaRVQuUGOQOoI46Gvgsfia2Aqypxfu9PmdtOKmrnuaATQRyKyuHUMG
Toc9x7VE8eKg8K30eo6awW6iunjc58tw2FPK57j6MAeOlackXtX2uFrrE0Y1V1RxTg4yaMiWPFVJIyK2JYfaqcsOK67mRwXiLw9c2gku9KZkUv5r2yDOyTr5sY9TzuTo4J7k7n+G/EUXiO1k+6l3AQs0SnIB7Mvqp7fiOoNdhJB6VzsfhC3tPEkmsW/7mSeIxzxAfK5znd7HPX/Ek0X0sFj//2Q==
As you may notice, the first couple of characters are the same, but if you look at the end of the strings, you can see immediately that they are completely different. Why is that? The image is still my Gravatar image (e.g., using http://codebeautify.org/base64-to-image-converter shows the same image for both strings).
Thanks for any hints or explanations! Is there anything wrong with my implementation? Is some part of the implementation time- or location-dependent? How can I get the same Base64 string for the same image?
PS: inputStreamToByteArray is implemented as follows:
public static byte[] inputStreamToByteArray(final InputStream is) {
    final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    final byte[] data = new byte[16384];
    try {
        int nRead;
        while ((nRead = is.read(data, 0, data.length)) != -1) {
            buffer.write(data, 0, nRead);
        }
        buffer.flush();
    } catch (final IOException e) {
        return null;
    } finally {
        try {
            buffer.close();
        } catch (final IOException e) {
            // ignore
        }
    }
    return buffer.toByteArray();
}
Converting both images back to .jpg, and using http://regex.info/exif.cgi, the following header comments appear:
One:
CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 90
Other:
CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), quality = 90
(can't remember which order I converted the images in, but there is a version change either way)
As you can see, they have changed the IJG JPEG library version used by the encoder (v62 vs. v80), resulting in a different JPEG encoding, or at the very least a different comment header.
So, to answer your question: there does not seem to be an issue with your Base64 converter; the two downloaded images are simply not byte-for-byte identical.
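If you want to verify that yourself, decode both strings and compare the raw bytes, for example by hashing them. A small sketch, assuming Java 8's java.util.Base64:

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Decodes a Base64 string and prints the length and MD5 digest of the raw bytes.
// Run it on both of your strings: different digests mean Gravatar really served
// different JPEG bytes, even though both render as the same picture.
public static void printDigest(String label, String base64) throws NoSuchAlgorithmException {
    byte[] raw = Base64.getDecoder().decode(base64);
    byte[] digest = MessageDigest.getInstance("MD5").digest(raw);
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
        hex.append(String.format("%02x", b));
    }
    System.out.println(label + ": " + raw.length + " bytes, MD5 = " + hex);
}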
Hi, I'm working on a project in which I need to make changes to the Base64 string of a JPEG image. At first, when I didn't make any changes, the ImageReader worked properly and my image was displayed correctly. But when I modified my Base64 string, the exception above appeared. I searched a lot and learned that im == null happens when the byte stream is not JPEG, PNG, GIF, etc. So what if I have a new type of byte stream, what should I use? Whatever my Base64 string is, I need to convert it to an image; how can I do that?
Here is my code snippet; it converts a Base64 string to an image:
public static BufferedImage decodeToImage(String imageString) throws IOException {
    BufferedImage image = null;
    byte[] imageByte;
    try {
        BASE64Decoder decoder = new BASE64Decoder();
        imageByte = decoder.decodeBuffer(imageString);
        ByteArrayInputStream bis = new ByteArrayInputStream(imageByte);
        image = ImageIO.read(bis);
        bis.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    ImageIO.write(image, "jpg", new File("d:/CopyOfTestImage.jpg"));
    return image;
}
Have a look at the Javadocs for ImageIO.read:
Returns a BufferedImage as the result of decoding a supplied InputStream with an ImageReader chosen automatically from among those currently registered. The InputStream is wrapped in an ImageInputStream. If no registered ImageReader claims to be able to read the resulting stream, null is returned. [emphasis mine]
The read method can return null, yet you are not checking for this. In fact the method is probably returning null, which is why ImageIO.write throws an exception when you pass null into it.
First things first, you need to check for error conditions and handle them appropriately (including the null return, but also including any exceptions that are thrown, which you currently catch and ignore).
Now if you're getting null back from ImageIO.read, it means the bytes you passed into the read method did not appear to be a valid image in any known format. You need to look in more detail at the modifications you're making to the base64 string, and ensure that what you're doing is valid, and results in a valid image. Alternatively, if you're getting some other exception thrown, then you need to handle that appropriately.
(As a general rule, don't throw away/skip over errors, because then when things go wrong you have no idea why!)
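For illustration only, here is a sketch of the decode method with those checks added. It uses java.util.Base64 (Java 8+) instead of the internal BASE64Decoder; the important part is the null check after ImageIO.read:

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.util.Base64;
import javax.imageio.ImageIO;

public static BufferedImage decodeToImage(String imageString) throws IOException {
    // getMimeDecoder() tolerates line breaks inside the Base64 data;
    // an IllegalArgumentException here means the string is not valid Base64 at all.
    byte[] imageBytes = Base64.getMimeDecoder().decode(imageString);

    BufferedImage image = ImageIO.read(new ByteArrayInputStream(imageBytes));
    if (image == null) {
        // No registered ImageReader recognised the decoded bytes: the
        // modifications produced something that is no longer a valid image.
        throw new IOException("Decoded bytes are not in a recognised image format");
    }
    ImageIO.write(image, "jpg", new File("d:/CopyOfTestImage.jpg"));
    return image;
}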
change this line:
ImageIO.write(image, "jpg", new File("d:/CopyOfTestImage.jpg"));
to something like this
image = ImageIO.read(getClass().getResource("/resources/CopyOfTestImage.jpg"));
I have to convert TIFF files to JPEG (although I've realized that it also fails when converting TIFF to any other format). My code works properly on my local Windows machine, but it doesn't work in my dev environment (a CentOS machine). This is my code (very simple, as you can see):
public static boolean convertTIFFToJPEG(final File in, final File out) {
    try {
        final PlanarImage image = JAI.create("ImageRead", in);
        final ParameterBlockJAI storeOperation = new ParameterBlockJAI("FileStore");
        storeOperation.addSource(image);
        storeOperation.setParameter("filename", out.getPath());
        storeOperation.setParameter("format", "jpeg");
        JAI.create("FileStore", storeOperation);
    } catch (final IllegalArgumentException e) {
        return false;
    }
    return true;
}
This code works fine on Windows; however, when I try to execute it on Linux I get this error:
Caused by: java.lang.RuntimeException: - Unable to render RenderedOp for this operation.
at javax.media.jai.RenderedOp.createInstance(RenderedOp.java:827)
at javax.media.jai.RenderedOp.createRendering(RenderedOp.java:867)
at javax.media.jai.RenderedOp.getRendering(RenderedOp.java:888)
at javax.media.jai.RenderedOp.createInstance(RenderedOp.java:799)
at javax.media.jai.RenderedOp.createRendering(RenderedOp.java:867)
at javax.media.jai.RenderedOp.getRendering(RenderedOp.java:888)
at javax.media.jai.JAI.createNS(JAI.java:1099)
at javax.media.jai.JAI.create(JAI.java:973)
at javax.media.jai.JAI.create(JAI.java:1395)
at example.TIFFUtils.convertTIFFToJPEG(TIFFUtils.java:97)
If I debug the application to try to find out more about the exception, the only cause I get is 'Unable to render RenderedOp for this operation'.
To fix it, I've tried different versions of the Oracle JDK/JRE. Currently I'm using sdk1.6_20, but I also tried the latest one and other previous distributions.
On the other hand, I've tried a lot of possible ways to do the same conversion (TIFF -> JPEG) using JAI and ImageIO. This is the code I used for ImageIO:
public static boolean convertTIFFToJPEG2(final File in, final File out) {
    try {
        ImageOutputStream ios = null;
        ImageWriter writer = null;
        // find an appropriate writer
        Iterator<ImageWriter> it = ImageIO.getImageWritersByFormatName(JPEG_FORMAT);
        if (it.hasNext()) {
            writer = (ImageWriter) it.next();
        } else {
            return false;
        }
        ios = ImageIO.createImageOutputStream(out);
        writer.setOutput(ios);
        JPEGImageWriteParam writeParam = new JPEGImageWriteParam(Locale.ENGLISH);
        BufferedImage image = ImageIO.read(in);
        IIOImage iioImage = new IIOImage(image, null, null);
        // write it!
        writer.write(null, iioImage, writeParam);
    } catch (final IllegalArgumentException e) {
        return false;
    } catch (IOException e) {
        return false;
    }
    return true;
}
And I got the same result: it only works on Windows and causes that exception on Linux. In this case the following instruction returns null, so the variable 'image' doesn't contain anything:
BufferedImage image = ImageIO.read(tiffFile);
Any new ideas?
I don't think all JRE implementations (such as OpenJDK) include a JPEG encoder that JAI can use by default. I don't know whether that's your problem, though. Regardless, you might have better success using the JAI Image I/O Tools extension:
http://java.net/projects/imageio
http://www.oracle.com/technetwork/java/current-142188.html
This provides integration that allows JAI to use ImageIO plugins for image decoding/encoding, which is generally a big improvement over the original JAI handling.
To further debug your problem, you might try temporarily outputting to an image format other than JPEG to see if your problem is specific to JPEG encoding.
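As a quick diagnostic on the CentOS box, you could also print which formats plain ImageIO can read and write there and attempt a non-JPEG output. A rough sketch (no JAI involved; file paths are placeholders):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import javax.imageio.ImageIO;

public class ImageIODiagnostics {
    public static void main(String[] args) throws IOException {
        // Which plugins are actually registered in this JRE? On some installations
        // the TIFF reader or the JPEG writer is simply not available.
        System.out.println("Readers: " + Arrays.toString(ImageIO.getReaderFormatNames()));
        System.out.println("Writers: " + Arrays.toString(ImageIO.getWriterFormatNames()));

        BufferedImage image = ImageIO.read(new File(args[0])); // path to the TIFF file
        System.out.println("Decoded TIFF: " + image); // null => no reader claimed the file

        if (image != null) {
            // ImageIO.write returns false when no writer for the format was found.
            System.out.println("PNG written:  " + ImageIO.write(image, "png", new File("out.png")));
            System.out.println("JPEG written: " + ImageIO.write(image, "jpg", new File("out.jpg")));
        }
    }
}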
I'm currently developing an application in Java/Android that allows the user to compress and decompress files. At first, I started to study file sizes, such as:
1Byte = 8Bits
1KB = 1024Byte
1MB = 1024KB
1GB = 1024MB
1TB = 1024GB
1PB = 1024TB
1EB = 1024PB
1ZB = 1024EB
1YB = 1024ZB
After I studied this, I read some articles on the net and found out there are two types of file compression (correct me if I'm wrong): lossless and lossy. Lossless compression means that a file is compressed to a smaller size without losing any data, while lossy compression means that some of the data is discarded while compressing the file.
I also read that compression (the run-length coding method) turns this:
AAABBCCDFFFFEEEEH
to this:
3A2B2CD4F4EH
which gives me an idea of how compressing/decompressing a file works.
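As a toy illustration of that run-length idea (just a sketch, and not what ZIP actually uses), the encoding step could look like this:

// Toy run-length encoder matching the example above:
// "AAABBCCDFFFFEEEEH" -> "3A2B2CD4F4EH" (a count is omitted when it is 1).
public static String runLengthEncode(String input) {
    StringBuilder out = new StringBuilder();
    int i = 0;
    while (i < input.length()) {
        char c = input.charAt(i);
        int run = 1;
        while (i + run < input.length() && input.charAt(i + run) == c) {
            run++;
        }
        if (run > 1) {
            out.append(run);
        }
        out.append(c);
        i += run;
    }
    return out.toString();
}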
I also found on the net that there is an API for compressing files in Java (also applicable on Android), which is
java.util.zip
I also tried some code for compressing and decompressing files from various helpful websites/forums/etc. (including stackoverflow.com), which gave me some experience with this topic.
I also read about the algorithms used in data compression, which are:
Huffman encoding algorithm - assigns a code to each character in a file based on how frequently that character occurs
Run-length encoding - generates a two-part value for repeated characters: the first part specifies the number of times the character is repeated, and the second part identifies the character
Lempel-Ziv algorithm - converts variable-length strings into fixed-length codes that consume less space than the original strings
Now, I need to know how to write code that compresses and decompresses a file using java.util.zip (I also don't know how to use it; the tutorials on the net aren't working for me). What algorithm do WinZip, WinRAR, Windows compressed folders, and AndroZip (the Android app) use? Could someone please walk me through, step by step (treat me as an unschooled person), how java.util.zip works and the different algorithms? Sorry for the long post, folks. Thanks for the future help and posts (if there will be any)!
// EXPECTED_COMPRESSION_RATIO, BUF_SIZE and LOG are assumed to be defined
// elsewhere in the class (e.g. as constants and a logger).

public static final byte[] unzip(byte[] in) throws IOException {
    // decompress using GZIPInputStream
    ByteArrayOutputStream outStream =
            new ByteArrayOutputStream(EXPECTED_COMPRESSION_RATIO * in.length);
    GZIPInputStream inStream =
            new GZIPInputStream(new ByteArrayInputStream(in));
    byte[] buf = new byte[BUF_SIZE];
    while (true) {
        int size = inStream.read(buf);
        if (size <= 0)
            break;
        outStream.write(buf, 0, size);
    }
    inStream.close();
    outStream.close();
    return outStream.toByteArray();
}

public static final byte[] zip(byte[] in) {
    try {
        // compress using GZIPOutputStream
        ByteArrayOutputStream byteOut =
                new ByteArrayOutputStream(in.length / EXPECTED_COMPRESSION_RATIO);
        GZIPOutputStream outStream = new GZIPOutputStream(byteOut);
        try {
            outStream.write(in);
        } catch (Exception e) {
            LOG.error("", e);
        }
        try {
            outStream.close();
        } catch (IOException e) {
            LOG.error("", e);
        }
        return byteOut.toByteArray();
    } catch (IOException e) {
        LOG.error("", e);
        return null;
    }
}
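The two methods above work on byte arrays with GZIP. For the kind of .zip archives that WinZip-style tools produce, java.util.zip also provides ZipOutputStream and ZipEntry, which use the same DEFLATE compression. A minimal sketch for putting one file into a .zip archive (file paths are up to you):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public static void zipSingleFile(String inputPath, String zipPath) throws IOException {
    try (FileInputStream in = new FileInputStream(inputPath);
         ZipOutputStream zipOut = new ZipOutputStream(new FileOutputStream(zipPath))) {
        // Each file stored inside a .zip archive is represented by a ZipEntry.
        zipOut.putNextEntry(new ZipEntry(new File(inputPath).getName()));
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            zipOut.write(buffer, 0, read);
        }
        zipOut.closeEntry();
    }
}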
Using Base64 from Apache commons
public byte[] encode(File file) throws FileNotFoundException, IOException {
    byte[] encoded;
    try (FileInputStream fin = new FileInputStream(file)) {
        byte fileContent[] = new byte[(int) file.length()];
        fin.read(fileContent);
        encoded = Base64.encodeBase64(fileContent);
    }
    return encoded;
}
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:342)
at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:657)
at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:622)
at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:604)
I'm making a small app for a mobile device.
You cannot just load the whole file into memory, like here:
byte fileContent[] = new byte[(int) file.length()];
fin.read(fileContent);
Instead, load the file chunk by chunk and encode it in parts. Base64 is a simple encoding; it is enough to load 3 bytes at a time and encode them (this produces 4 bytes after encoding). For performance reasons, consider loading multiples of 3 bytes, e.g. 3000 bytes - that should be just fine. Also consider buffering the input file.
An example:
byte fileContent[] = new byte[3000];
try (FileInputStream fin = new FileInputStream(file)) {
    while (fin.read(fileContent) >= 0) {
        Base64.encodeBase64(fileContent);
    }
}
Note that you cannot simply append the results of Base64.encodeBase64() to an encoded byte array. Also, it is not loading the file but encoding it to Base64 that causes the out-of-memory problem. This is understandable because the Base64 version is bigger (and you already have a file occupying a lot of memory).
Consider changing your method to:
public void encode(File file, OutputStream base64OutputStream)
and sending Base64-encoded data directly to the base64OutputStream rather than returning it.
UPDATE: Thanks to @StephenC I developed a much easier version:
public void encode(File file, OutputStream base64OutputStream) throws IOException {
    InputStream is = new FileInputStream(file);
    OutputStream out = new Base64OutputStream(base64OutputStream);
    IOUtils.copy(is, out);
    is.close();
    out.close();
}
It uses Base64OutputStream that translates input to Base64 on-the-fly and IOUtils class from Apache Commons IO.
Note: you have to close the FileInputStream and Base64OutputStream explicitly so that the trailing = padding is written if required, but buffering is handled by IOUtils.copy().
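Calling it is then just a matter of supplying a destination stream, for example (the file names here are hypothetical):

try (OutputStream dest = new FileOutputStream("photo.jpg.b64")) {
    encode(new File("photo.jpg"), dest);
}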
Either the file is too big, or your heap is too small, or you've got a memory leak.
If this only happens with really big files, put something into your code to check the file size and reject files that are unreasonably big.
If this happens with small files, increase your heap size by using the -Xmx command line option when you launch the JVM. (If this is in a web container or some other framework, check the documentation on how to do it.)
If the problem recurs, especially with small files, the chances are that you've got a memory leak.
The other point that should be made is that your current approach entails holding two complete copies of the file in memory. You should be able to reduce the memory usage, though you'll typically need a stream-based Base64 encoder to do this. (It depends on which flavor of the base64 encoding you are using ...)
This page describes a stream-based Base64 encoder/decoder library, and includes links to some alternatives.
Well, do not do it for the whole file at once.
Base64 works on 3 bytes at a time, so you can read your file in batches of "multiple of 3" bytes, encode them and repeat until you finish the file:
// the base64 encoding - acceptable estimation of encoded size
StringBuilder sb = new StringBuilder((int) (file.length() / 3 * 4));
FileInputStream fin = null;
try {
    fin = new FileInputStream("some.file");
    // Max size of buffer (a multiple of 3, so each chunk can be encoded independently)
    int bSize = 3 * 512;
    // Buffer
    byte[] buf = new byte[bSize];
    // Actual number of bytes read
    int len = 0;
    while ((len = fin.read(buf)) != -1) {
        // Encode only the bytes that were actually read
        byte[] encoded = Base64.encodeBase64(Arrays.copyOf(buf, len));
        // Although you might want to write the encoded bytes to another
        // stream, otherwise you'll run into the same problem again.
        sb.append(new String(encoded));
    }
} catch (IOException e) {
    // handle the error as appropriate
} finally {
    if (null != fin) {
        try {
            fin.close();
        } catch (IOException ignore) {
            // ignore
        }
    }
}
String base64EncodedFile = sb.toString();
You are not reading the whole file, just the first few kb. The read method returns how many bytes were actually read. You should call read in a loop until it returns -1 to be sure that you have read everything.
The file is too big for both it and its base64 encoding to fit in memory. Either
process the file in smaller pieces or
increase the memory available to the JVM with the -Xmx switch, e.g.
java -Xmx1024M YourProgram
This is the best way to upload a larger image:
bitmap=Bitmap.createScaledBitmap(bitmap, 100, 100, true);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream); //compress to which format you want.
byte [] byte_arr = stream.toByteArray();
String image_str = Base64.encodeBytes(byte_arr);
Well, looks like your file is too large to keep the multiple copies necessary for an in-memory Base64 encoding in the available heap memory at the same time. Given that this is for a mobile device, it's probably not possible to increase the heap, so you have two options:
Make the file smaller (much smaller), or
do it in a stream-based way: read one small part of the file at a time from an InputStream, encode it, and write it to an OutputStream, without ever keeping the entire file in memory.
In the manifest, inside the application tag, add the following:
android:largeHeap="true"
It worked for me
Java 8 added Base64 methods, so Apache Commons is no longer needed to encode large files.
public static void encodeFileToBase64(String inputFile, String outputFile) {
    try (OutputStream out = Base64.getEncoder().wrap(new FileOutputStream(outputFile))) {
        Files.copy(Paths.get(inputFile), out);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}
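The decoding direction works the same way with Base64.getDecoder().wrap() (or getMimeDecoder() if the input may contain line breaks). A matching sketch:

// Needs java.io.*, java.nio.file.* and java.util.Base64, like the method above.
public static void decodeBase64ToFile(String inputFile, String outputFile) {
    try (InputStream in = Base64.getDecoder().wrap(new FileInputStream(inputFile))) {
        Files.copy(in, Paths.get(outputFile), StandardCopyOption.REPLACE_EXISTING);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}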