As our network-based applications embrace the WebP image format, I found myself in need of a method or library that can decode it.
I have written this piece of code, but it is missing only the native decoder (though I would prefer it as a jar library):
public BufferedImage decodeWebP(byte[] encoded, int w, int h) {
    int[] width = new int[]{w};
    int[] height = new int[]{h};
    byte[] decoded = decodeRGBAnative(encoded); // here is the missing part
    if (decoded.length == 0) return null;
    BufferedImage bufferedImage = new BufferedImage(width[0], height[0], BufferedImage.TYPE_INT_RGB);
    final int BLOCK_SIZE = 4; // decoded data is RGBA, 4 bytes per pixel
    for (int r = 0; r < height[0]; r++) {
        for (int c = 0; c < width[0]; c++) {
            int index = (r * width[0] + c) * BLOCK_SIZE;
            int red = decoded[index] & 0xFF;
            int green = decoded[index + 1] & 0xFF;
            int blue = decoded[index + 2] & 0xFF;
            int rgb = (red << 16) | (green << 8) | blue;
            bufferedImage.setRGB(c, r, rgb);
        }
    }
    return bufferedImage;
}
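For what it's worth, the RGBA-to-RGB packing that the loop above attempts can be checked in isolation with plain JDK classes. This is only a sketch: the class and helper names are mine, and the 4-bytes-per-pixel RGBA layout is an assumption based on the `decodeRGBAnative` name.

```java
import java.awt.image.BufferedImage;

public class RgbaPackDemo {

    // Packs one RGBA pixel (alpha dropped) into the 0xRRGGBB int that
    // BufferedImage.setRGB expects for TYPE_INT_RGB.
    static int packRgb(byte r, byte g, byte b) {
        return ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF);
    }

    // Converts a raw RGBA byte array (4 bytes per pixel, row-major)
    // into a BufferedImage.
    static BufferedImage fromRgba(byte[] rgba, int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int i = (y * w + x) * 4; // stride is 4 (RGBA), not 3
                img.setRGB(x, y, packRgb(rgba[i], rgba[i + 1], rgba[i + 2]));
            }
        }
        return img;
    }

    public static void main(String[] args) {
        // one red pixel, fully opaque
        byte[] rgba = { (byte) 255, 0, 0, (byte) 255 };
        BufferedImage img = fromRgba(rgba, 1, 1);
        System.out.println(Integer.toHexString(img.getRGB(0, 0) & 0xFFFFFF)); // ff0000
    }
}
```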
Please use OpenCV. I use maven and org.openpnp:opencv:4.5.1-2. All it takes to encode an image that is stored in a Mat is:
public static byte[] encodeWebp(Mat image, int quality) {
    MatOfInt parameters = new MatOfInt(Imgcodecs.IMWRITE_WEBP_QUALITY, quality);
    MatOfByte output = new MatOfByte();
    if (Imgcodecs.imencode(".webp", image, output, parameters))
        return output.toArray();
    else
        throw new IllegalStateException("Failed to encode the image as webp with quality " + quality);
}
I am converting it to byte[] arrays since I store it mostly in S3 and a DB and rather seldom in the filesystem.
The bitbucket link of the original answer is not available anymore, but forks of the original repository can be found, for example: https://github.com/sejda-pdf/webp-imageio
I tried using the webp-imageio implementation from the GitHub repo mentioned above, but after 2 days of using it in production I got a segmentation fault coming from the native library that crashed the whole server.
I resorted to using the compiled tools provided by Google here: https://developers.google.com/speed/webp/download and wrote a small wrapper class to call the binaries.
In my case, I needed to convert from other image formats to WebP, so I needed the "cwebp" binary. The wrapper I wrote is:
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.util.concurrent.TimeUnit;

public class ImageWebpLibraryWrapper {

    private static final String CWEBP_BIN_PATH = "somepath/libwebp-1.1.0-linux-x86-64/bin/cwebp";

    public static boolean isWebPAvailable() {
        if (CWEBP_BIN_PATH == null) {
            return false;
        }
        return new File(CWEBP_BIN_PATH).exists();
    }

    public static boolean convertToWebP(File imageFile, File targetFile, int quality) {
        Process process;
        try {
            process = new ProcessBuilder(CWEBP_BIN_PATH, "-q", "" + quality, imageFile.getAbsolutePath(), "-o",
                    targetFile.getAbsolutePath()).start();
            if (!process.waitFor(10, TimeUnit.SECONDS)) {
                // timed out; exitValue() would throw IllegalThreadStateException
                process.destroyForcibly();
                return false;
            }
            if (process.exitValue() == 0) {
                // Success
                printProcessOutput(process.getInputStream(), System.out);
                return true;
            } else {
                printProcessOutput(process.getErrorStream(), System.err);
                return false;
            }
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    private static void printProcessOutput(InputStream inputStream, PrintStream output) throws IOException {
        try (InputStreamReader isr = new InputStreamReader(inputStream);
                BufferedReader bufferedReader = new BufferedReader(isr)) {
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                output.println(line);
            }
        }
    }
}
An implementation around ImageIO would be nicer, but I couldn't have a segmentation fault crashing the production server.
Sample usage:
public static void main(String[] args) {
    if (!isWebPAvailable()) {
        System.err.println("cwebp binary not found!");
        return;
    }
    File file = new File("/home/xxx/Downloads/image_11.jpg");
    File outputFile = new File("/home/xxx/Downloads/image_11-test.webp");
    int quality = 80;
    if (convertToWebP(file, outputFile, quality)) {
        System.out.println("SUCCESS");
    } else {
        System.err.println("FAIL");
    }
}
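One caveat with this kind of wrapper: calling waitFor() before draining the child's output can deadlock if the tool ever writes more than the OS pipe buffer holds. Below is a hedged sketch of a drain-then-wait variant; it is generic (the command in main is a placeholder, not cwebp) and all names are mine.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.concurrent.TimeUnit;

public class ProcessRunner {

    // Runs a command, draining stdout/stderr *before* waiting, to avoid
    // the pipe-buffer deadlock that can occur if waitFor() is called first.
    static int run(long timeoutSec, String... cmd) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        if (!p.waitFor(timeoutSec, TimeUnit.SECONDS)) {
            p.destroyForcibly();
            throw new IOException("timed out: " + String.join(" ", cmd));
        }
        System.out.print(out);
        return p.exitValue();
    }

    public static void main(String[] args) throws Exception {
        // placeholder command; in the wrapper above this would be cwebp
        int code = run(10, "echo", "hello");
        System.out.println("exit=" + code);
    }
}
```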
Used OpenPnP OpenCV in Kotlin:
fun encodeToWebP(image: ByteArray): ByteArray {
    val matImage = Imgcodecs.imdecode(MatOfByte(*image), Imgcodecs.IMREAD_UNCHANGED)
    val parameters = MatOfInt(Imgcodecs.IMWRITE_WEBP_QUALITY, 100)
    val output = MatOfByte()
    if (Imgcodecs.imencode(".webp", matImage, output, parameters)) {
        return output.toArray()
    } else {
        throw IllegalStateException("Failed to encode the image as webp")
    }
}
Among all the options I searched, this one was the best and easiest:
https://bitbucket.org/luciad/webp-imageio
Not a full Java implementation, but very easy compared to the others.
For Java developers coming from a search engine: I have converted @Dušan Salay's answer to Java:
private byte[] encodeToWebP(byte[] data) {
    Mat matImage = Imgcodecs.imdecode(new MatOfByte(data), Imgcodecs.IMREAD_UNCHANGED);
    MatOfInt parameters = new MatOfInt(Imgcodecs.IMWRITE_WEBP_QUALITY, 100);
    MatOfByte output = new MatOfByte();
    Imgcodecs.imencode(".webp", matImage, output, parameters);
    return output.toArray();
}
I have used Apache Commons' readFileToByteArray.
Also, you will need to load the native library first in a static block:
static {
    OpenCV.loadLocally();
}
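If you would rather not pull in Apache Commons just for readFileToByteArray, the JDK's Files.readAllBytes does the same job. A small self-contained sketch (the temp file is only there to make the example runnable; normally you would pass your image path):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class ReadBytesDemo {

    // Writes a payload to a temp file and reads it back with
    // Files.readAllBytes, the JDK counterpart of readFileToByteArray.
    static byte[] roundTrip(byte[] payload) throws Exception {
        Path tmp = Files.createTempFile("demo", ".bin");
        try {
            Files.write(tmp, payload);
            return Files.readAllBytes(tmp);
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] back = roundTrip(new byte[] {1, 2, 3});
        System.out.println(Arrays.toString(back)); // [1, 2, 3]
    }
}
```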
The goal is to read a file name from a file. The name field is a maximum of 100 bytes, and the actual name is the file name padded with null bytes.
Here is what it looks like in GNU nano,
where .PKGINFO is the valid file name and the ^# represent null bytes.
I tried with a StringBuilder:
package falken;

import java.io.*;

public class Testing {

    public Testing() {
        try {
            FileInputStream tarIn = new FileInputStream("/home/gala/falken_test/test.tar");
            final int byteOffset = 0;
            final int readBytesLength = 100;
            StringBuilder stringBuilder = new StringBuilder();
            for (int bytesRead = 1, n, total = 0; (n = tarIn.read()) != -1 && total < readBytesLength; bytesRead++) {
                if (bytesRead > byteOffset) {
                    stringBuilder.append((char) n);
                    total++;
                }
            }
            String out = stringBuilder.toString();
            System.out.println(">" + out + "<");
            System.out.println(out.length());
        } catch (Exception e) {
            /*
             * This is a pokemon catch not used in final code
             */
            e.printStackTrace();
        }
    }
}
But it gives an invalid String length of 100, while the output in IntelliJ shows the correct string within the > < signs.
>.PKGINFO<
100
Process finished with exit code 0
But when I paste it here on StackOverflow I get the correct string followed by unknown "null characters", whose size is actually 100:
>.PKGINFO <
What regex can I use to get rid of the characters after the valid file name?
The file I am reading is ASCII encoded.
I also tried ByteArrayOutputStream, with the same result:
package falken;

import java.io.*;
import java.nio.charset.StandardCharsets;

public class Testing {

    public Testing() {
        try {
            FileInputStream tarIn = new FileInputStream("/home/gala/falken_test/test.tar");
            final int byteOffset = 0;
            final int readBytesLength = 100;
            ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
            for (int bytesRead = 1, n, total = 0; (n = tarIn.read()) != -1 && total < readBytesLength; bytesRead++) {
                if (bytesRead > byteOffset) {
                    byteArrayOutputStream.write(n);
                    total++;
                }
            }
            String out = byteArrayOutputStream.toString();
            System.out.println(">" + out + "<");
            System.out.println(out.length());
        } catch (Exception e) {
            /*
             * This is a pokemon catch not used in final code
             */
            e.printStackTrace();
        }
    }
}
What could be the issue here?
Well, it seems to be reading null characters as actual characters, spaces in fact. If it's possible, see if you can read the filename and then cut out the null characters. In your case, you need a data.trim() and a data2 = data.substring(0, (data.length() - 1)).
You need to stop appending to the string buffer once you read the first null character from the file.
You seem to want to read a tar archive, have a look at the following code which should get you started.
byte[] buffer = new byte[512]; // a POSIX tar header block is 512 bytes
FileInputStream is = new FileInputStream("test.tar");
int read = is.read(buffer);
// check number of bytes read; don't bother if not at least the whole
// header has been read
if (read == buffer.length) {
    // search for first null byte; this is the end of the name
    // (the name field occupies the first 100 bytes)
    int offset = 0;
    while (offset < 100 && buffer[offset] != 0) {
        offset++;
    }
    // create string from byte buffer using ASCII as the encoding (other
    // encodings are not supported by tar)
    String name = new String(buffer, 0, offset, StandardCharsets.US_ASCII);
    System.out.println("'" + name + "'");
}
is.close();
You really shouldn't use trim() on the filename, this will break whenever you encounter a filename with leading or trailing blanks.
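The null-byte scan above can be exercised without a real tar file. The sketch below builds a synthetic 512-byte header block (layout per the POSIX tar format; class and helper names are mine):

```java
import java.nio.charset.StandardCharsets;

public class TarNameDemo {

    // Extracts the entry name from a tar header block: the name field is
    // the first 100 bytes, NUL-padded, so stop at the first NUL byte.
    static String name(byte[] header) {
        int end = 0;
        while (end < 100 && header[end] != 0) {
            end++;
        }
        return new String(header, 0, end, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        byte[] header = new byte[512]; // tar blocks are 512 bytes; all NUL here
        byte[] n = ".PKGINFO".getBytes(StandardCharsets.US_ASCII);
        System.arraycopy(n, 0, header, 0, n.length); // rest stays NUL-padded
        System.out.println("'" + name(header) + "'"); // '.PKGINFO'
    }
}
```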
I am making an application that takes an image and applies a grayscale filter to it using file I/O. The user is then asked for a threshold and another save location, and the processed image is turned into pure black and white. The issue I am having is that when the 2nd image is created and I try to open it, Windows reports the file is damaged, even though the file size is the same as the processed image, so it seems to be working correctly. Here is my code for the application. Also, I would like to continue using file I/O for this; I understand Java has a built-in function for creating a binary image.
import java.io.*;
import javax.swing.*;

public class Bitmapper
{
    public static void main(String[] args)
    {
        String threshold;
        int thresholdInt;
        JFileChooser chooser1 = new JFileChooser();
        JFileChooser chooser2 = new JFileChooser();
        JFileChooser chooser3 = new JFileChooser();
        int status1 = chooser1.showOpenDialog(null);
        int status2 = chooser2.showSaveDialog(null);
        if (status1 == JFileChooser.APPROVE_OPTION && status2 == JFileChooser.APPROVE_OPTION)
        {
            try
            {
                // Handling binary (not text) data, so use FileInputStream
                FileInputStream in = new FileInputStream(chooser1.getSelectedFile());
                FileOutputStream out = new FileOutputStream(chooser2.getSelectedFile() + "_gray.bmp");
                int i = 0;
                int counter = 0;
                while ((i = in.read()) != -1)
                {
                    if (++counter > 54) // skip past Bitmap headers
                    {
                        int b = i;
                        int g = in.read();
                        int r = in.read();
                        int gray = (b + g + r) / 3;
                        out.write(gray);
                        out.write(gray);
                        i = gray;
                    }
                    out.write(i);
                }
                out.close();
                in.close();
                threshold = JOptionPane.showInputDialog(null, "Please enter a threshold to turn the picture black and white.");
                try
                {
                    thresholdInt = Integer.parseInt(threshold);
                    int status3 = chooser3.showSaveDialog(null);
                    if (status3 == JFileChooser.APPROVE_OPTION)
                    {
                        in = new FileInputStream(chooser2.getSelectedFile() + "_gray.bmp");
                        out = new FileOutputStream(chooser3.getSelectedFile() + "_bw.bmp");
                        while ((i = in.read()) != -1)
                        {
                            if (++counter > 54) // skip past Bitmap headers
                            {
                                int b = i;
                                int g = in.read();
                                int r = in.read();
                                if (b > thresholdInt)
                                    out.write(0);
                                else
                                    out.write(255);
                                if (g > thresholdInt)
                                    out.write(0);
                                else
                                    out.write(255);
                                if (r > thresholdInt)
                                    i = 0;
                                else
                                    i = 255;
                            }
                            out.write(i);
                        }
                    }
                    else
                        JOptionPane.showMessageDialog(null, "You did not select a save location for the second image.");
                }
                catch (NumberFormatException ex) {
                    JOptionPane.showMessageDialog(null, "Issue with user input, ensure you entered an integer. Error: " + ex);
                }
            }
            catch (IOException ex)
            {
                JOptionPane.showMessageDialog(null, "Error in input/output of file:" + " '" + ex + "'");
            }
        }
        else
            JOptionPane.showMessageDialog(null, "You did not specify a file or a save location for the new file.");
    }
}
Your problem is that you are not resetting the counter variable to 0 after creating the _gray image and before creating the _bw one. Therefore you are reading/writing the headers as color bytes, thresholding them and corrupting them. Resetting it should fix it.
I need help; what I'm trying to do is to put each line of a text file into a table of objects.
Let's use an example.
I have this text file:
ATI
8
10
AMX
12
15
As we can see, this text file contains different types of motherboards.
So I have this constructor (which is in another class) for motherboard objects:
motherBoard(String type, int size, int max_size){
    this.type = type;
    this.size = size;
    this.max_size = max_size;
}
So what I'm trying to do is to build a table of objects where (in this example) the first object of the table would contain the first 3 lines of the text file, the 2nd object the next 3 lines, and so on.
All the operations I want to do are in a method.
Here is my code so far:
public static CarteMere[] chargerEnMemoire(String nomFichier) throws IOException{
    CarteMere[] newCarteMere = new CarteMere[0];
    try {
        FileReader reader = new FileReader( "liste.txt" );
        BufferedReader buffer = new BufferedReader( reader );
        int nmLines = 0;
        while ( buffer.ready() ) {
            nmLines++; // To evaluate the size of the table
        }
        buffer.close();
        newCarteMere = new CarteMere[nmLines/3];
        for(int i=0; i<(nmLines/3);i++){
        }
    } catch ( FileNotFoundException e ) {
        System.out.println( e.getMessage() );
    } catch ( IOException e ) {
        System.out.println( e.getMessage() );
    }
    return newCarteMere;
}
That's where I need a push...
Start with what you know: you have a file, it contains data in groups of three lines; one's a String, the other two are numbers.
You have a class which represents that data (it's always easier to store data in an object where possible), so you could do something like...
try (BufferedReader br = new BufferedReader(new FileReader("liste.txt"))) {
    String name = br.readLine();
    int size = Integer.parseInt(br.readLine());
    int max = Integer.parseInt(br.readLine());
    MotherBoard mb = new MotherBoard(name, size, max);
    // Add it to a List or whatever else you want to do with it...
} catch (IOException exp) {
    exp.printStackTrace();
}
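To read every record rather than just the first group of three, the same idea can be looped until readLine() returns null. A sketch, using a stand-in Board class in place of the asker's motherBoard and an in-memory reader so it runs standalone:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class BoardReader {

    // Stand-in for the asker's motherBoard class.
    static class Board {
        final String type;
        final int size;
        final int maxSize;

        Board(String type, int size, int maxSize) {
            this.type = type;
            this.size = size;
            this.maxSize = maxSize;
        }
    }

    // Reads groups of three lines (type, size, max size) until EOF.
    static List<Board> readAll(BufferedReader br) throws IOException {
        List<Board> boards = new ArrayList<>();
        String type;
        while ((type = br.readLine()) != null) {
            int size = Integer.parseInt(br.readLine());
            int max = Integer.parseInt(br.readLine());
            boards.add(new Board(type, size, max));
        }
        return boards;
    }

    public static void main(String[] args) throws IOException {
        String sample = "ATI\n8\n10\nAMX\n12\n15\n";
        List<Board> boards = readAll(new BufferedReader(new StringReader(sample)));
        System.out.println(boards.size()); // 2
        System.out.println(boards.get(1).type); // AMX
    }
}
```

With a FileReader in place of the StringReader this reads liste.txt directly, and the List sidesteps the count-the-lines-first problem entirely.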
Assuming you have getters and setters for all 3 variables in your POJO class and have overridden the toString() method in the motherBoard class:
Integer noOfLines = 0;
List<String> data = new ArrayList<>();
String sCurrentLine = null;
BufferedReader br = new BufferedReader(new FileReader("liste.txt"));
while ((sCurrentLine = br.readLine()) != null)
{
    noOfLines++;
    data.add(sCurrentLine);
}
motherBoard mb = null;
for (int x = 0; x + 2 < noOfLines; x += 3)
{
    mb = new motherBoard(data.get(x), Integer.parseInt(data.get(x + 1)), Integer.parseInt(data.get(x + 2)));
    System.out.println(mb);
}
Regarding just reading the data from a file and loading it into classes, hopefully this example will help you:
MotherBoard.java
public class MotherBoard {

    String type;
    int size;
    int max_size;

    public MotherBoard (String type, int size, int max_size) {
        this.type = type;
        this.size = size;
        this.max_size = max_size;
    }

    public String toString() {
        return "Motherboard data : type=" + type + " size=" + size + ", max_size=" + max_size;
    }
}
Solution.java:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.io.IOException;
public class Solution {

    public static void main(String[] args) {
        try {
            FileReader reader = new FileReader("liste.txt");
            BufferedReader buffer = new BufferedReader(reader);
            int numMBs = 0;
            while (buffer.ready()) {
                numMBs++;
                /* Not a good approach if your data isn't perfect - just an example for StackOverflow */
                MotherBoard mb = new MotherBoard(buffer.readLine(),
                        Integer.parseInt(buffer.readLine()),
                        Integer.parseInt(buffer.readLine()));
                System.out.println("Motherboard " + numMBs + ":");
                System.out.println(mb.toString());
            }
            /* Probably want to do this in a finally block too */
            buffer.close();
            reader.close();
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
To compile and run:
javac -cp . MotherBoard.java Solution.java
java Solution
Output:
Motherboard 1:
Motherboard data : type=ATI size=8, max_size=10
Motherboard 2:
Motherboard data : type=AMX size=12, max_size=15
So I found a way... I think I found a solution.
Here is my new code:
public static CarteMere[] chargerEnMemoire(String nomFichier) throws IOException{
    CarteMere[] newCarteMere = new CarteMere[0];
    try {
        FileReader reader = new FileReader( nomFichier );
        BufferedReader buffer = new BufferedReader( reader );
        int nmLines = 0;
        while ( buffer.readLine() != null) {
            nmLines++; // To evaluate the size of the table
        }
        buffer.close();
        reader.close();
        // RESET THE BUFFER AND READER TO GET BACK TO THE TOP OF THE FILE
        FileReader reader2 = new FileReader( nomFichier );
        BufferedReader buffer2 = new BufferedReader( reader2 );
        newCarteMere = new CarteMere[nmLines/3];
        for(int i=0; i<(nmLines/3);i++){
            int forme = CarteMere.chaineFormeVersCode(buffer2.readLine());
            int mem_inst = Integer.parseInt(buffer2.readLine());
            int mem_max = Integer.parseInt(buffer2.readLine());
            newCarteMere[i] = new CarteMere(forme, mem_inst, mem_max);
        }
        buffer2.close();
        reader2.close();
    } catch ( FileNotFoundException e ) {
        System.out.println( e.getMessage() );
    } catch ( IOException e ) {
        System.out.println( e.getMessage() );
    }
    return newCarteMere;
}
But the idea of resetting the buffer and reader like that seems ugly to me... Do you guys have an idea how to do the same thing more correctly?
Thanks in advance.
The gzip input/output streams don't operate on Java direct buffers.
Is there any compression algorithm implementation out there that operates directly on direct buffers?
This way there would be no overhead of copying a direct buffer to a Java byte array for compression.
I don't mean to detract from your question, but is this really a good optimization point in your program? Have you verified with a profiler that you indeed have a problem? Your question as stated implies you have not done any research, but are merely guessing that you will have a performance or memory problem by allocating a byte[]. Since all the answers in this thread are likely to be hacks of some sort, you should really verify that you actually have a problem before fixing it.
Back to the question: if you want to compress the data "in place" on a ByteBuffer, the answer is no, there is no capability to do that built into Java.
If you allocated your buffer like the following:
byte[] bytes = getMyData();
ByteBuffer buf = ByteBuffer.wrap(bytes);
You can filter your byte[] through a ByteBufferInputStream as the previous answer suggested.
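To make the stream route concrete, here is a self-contained round trip that gzips the remaining bytes of a heap ByteBuffer through the standard GZIP streams and inflates them back; all class and method names here are mine:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipHeapBufferDemo {

    // Gzips the remaining bytes of a heap buffer via its backing array
    // (no copy into an intermediate byte[] of our own).
    static byte[] gzip(ByteBuffer src) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(src.array(), src.arrayOffset() + src.position(), src.remaining());
        }
        return bos.toByteArray();
    }

    static byte[] gunzip(byte[] compressed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap("hello gzip".getBytes(StandardCharsets.UTF_8));
        byte[] back = gunzip(gzip(buf));
        System.out.println(new String(back, StandardCharsets.UTF_8)); // hello gzip
    }
}
```

Note this only works for heap buffers (hasArray() is true); a direct buffer has no backing array, which is exactly the gap the question is about.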
Wow, old question, but I stumbled upon this today.
Probably some libs like zip4j can handle this, but you can get the job done with no external dependencies since Java 11.
If you are interested only in compressing data, you can just do:
void compress(ByteBuffer src, ByteBuffer dst) {
    var def = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    try {
        def.setInput(src);
        def.finish();
        def.deflate(dst, Deflater.SYNC_FLUSH);
        if (src.hasRemaining()) {
            throw new RuntimeException("dst too small");
        }
    } finally {
        def.end();
    }
}
Both src and dst will change positions, so you might have to flip them after compress returns.
In order to recover compressed data:
void decompress(ByteBuffer src, ByteBuffer dst) throws DataFormatException {
    var inf = new Inflater(true);
    try {
        inf.setInput(src);
        inf.inflate(dst);
        if (src.hasRemaining()) {
            throw new RuntimeException("dst too small");
        }
    } finally {
        inf.end();
    }
}
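A quick round trip of the two methods above, assuming buffers large enough for a single pass; the compress/decompress bodies are copied unchanged so the example runs standalone, and the flip() between writing and reading the compressed buffer is the step the text warns about:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateRoundTrip {

    static void compress(ByteBuffer src, ByteBuffer dst) {
        var def = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        try {
            def.setInput(src);
            def.finish();
            def.deflate(dst, Deflater.SYNC_FLUSH);
            if (src.hasRemaining()) {
                throw new RuntimeException("dst too small");
            }
        } finally {
            def.end();
        }
    }

    static void decompress(ByteBuffer src, ByteBuffer dst) throws DataFormatException {
        var inf = new Inflater(true);
        try {
            inf.setInput(src);
            inf.inflate(dst);
            if (src.hasRemaining()) {
                throw new RuntimeException("dst too small");
            }
        } finally {
            inf.end();
        }
    }

    // Compresses a string and inflates it back; returns the restored text.
    static String roundTrip(String s) throws DataFormatException {
        ByteBuffer src = ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8));
        ByteBuffer compressed = ByteBuffer.allocate(s.length() + 64);
        compress(src, compressed);
        compressed.flip(); // switch the compressed buffer from writing to reading
        ByteBuffer restored = ByteBuffer.allocate(s.length() + 64);
        decompress(compressed, restored);
        restored.flip();
        byte[] out = new byte[restored.remaining()];
        restored.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws DataFormatException {
        System.out.println(roundTrip("hello deflate")); // hello deflate
    }
}
```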
Note that both methods expect (de-)compression to happen in a single pass; however, we could use slightly modified versions in order to stream it:
void compress(ByteBuffer src, ByteBuffer dst, Consumer<ByteBuffer> sink) {
    var def = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    try {
        def.setInput(src);
        def.finish();
        int cmp;
        do {
            cmp = def.deflate(dst, Deflater.SYNC_FLUSH);
            if (cmp > 0) {
                sink.accept(dst.flip());
                dst.clear();
            }
        } while (cmp > 0);
    } finally {
        def.end();
    }
}

void decompress(ByteBuffer src, ByteBuffer dst, Consumer<ByteBuffer> sink) throws DataFormatException {
    var inf = new Inflater(true);
    try {
        inf.setInput(src);
        int dec;
        do {
            dec = inf.inflate(dst);
            if (dec > 0) {
                sink.accept(dst.flip());
                dst.clear();
            }
        } while (dec > 0);
    } finally {
        inf.end();
    }
}
Example:
void compressLargeFile() throws IOException {
    var in = FileChannel.open(Paths.get("large"));
    var temp = ByteBuffer.allocateDirect(1024 * 1024);
    var out = FileChannel.open(Paths.get("large.zip"),
            StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    var start = 0L;
    var rem = in.size();
    while (rem > 0) {
        var mapped = Math.min(16 * 1024 * 1024, rem);
        var src = in.map(MapMode.READ_ONLY, start, mapped);
        compress(src, temp, (bb) -> {
            try {
                out.write(bb);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        start += mapped;
        rem -= mapped;
    }
}
If you want fully gzip-compliant data:
void zip(ByteBuffer src, ByteBuffer dst) {
    var u = src.remaining();
    var crc = new CRC32();
    crc.update(src.duplicate());
    writeHeader(dst);
    compress(src, dst);
    writeTrailer(crc, u, dst);
}
Where:
void writeHeader(ByteBuffer dst) {
    var header = new byte[] { (byte) 0x8b1f, (byte) (0x8b1f >> 8), Deflater.DEFLATED, 0, 0, 0, 0, 0, 0, 0 };
    dst.put(header);
}
And:
void writeTrailer(CRC32 crc, int uncompressed, ByteBuffer dst) {
    if (dst.order() == ByteOrder.LITTLE_ENDIAN) {
        dst.putInt((int) crc.getValue());
        dst.putInt(uncompressed);
    } else {
        dst.putInt(Integer.reverseBytes((int) crc.getValue()));
        dst.putInt(Integer.reverseBytes(uncompressed));
    }
}
So, gzip imposes 10 + 8 bytes of overhead.
In order to unzip a direct buffer into another, you can wrap the src buffer into an InputStream:
class ByteBufferInputStream extends InputStream {

    final ByteBuffer bb;

    public ByteBufferInputStream(ByteBuffer bb) {
        this.bb = bb;
    }

    @Override
    public int available() throws IOException {
        return bb.remaining();
    }

    @Override
    public int read() throws IOException {
        return bb.hasRemaining() ? bb.get() & 0xFF : -1;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        var rem = bb.remaining();
        if (rem == 0) {
            return -1;
        }
        len = Math.min(rem, len);
        bb.get(b, off, len);
        return len;
    }

    @Override
    public long skip(long n) throws IOException {
        var rem = bb.remaining();
        if (n > rem) {
            bb.position(bb.limit());
            n = rem;
        } else {
            bb.position((int) (bb.position() + n));
        }
        return n;
    }
}
and use:
void unzip(ByteBuffer src, ByteBuffer dst) throws IOException {
    try (var is = new ByteBufferInputStream(src); var gis = new GZIPInputStream(is)) {
        var tmp = new byte[1024];
        var r = gis.read(tmp);
        if (r > 0) {
            do {
                dst.put(tmp, 0, r);
                r = gis.read(tmp);
            } while (r > 0);
        }
    }
}
Of course, this is not cool since we are copying data to a temporary array, but nevertheless it is a sort of round-trip check proving that the NIO-based gzip encoding writes valid data that can be read by standard IO-based consumers.
So, if we just ignore CRC consistency checks, we can drop the header/footer:
void unzipNoCheck(ByteBuffer src, ByteBuffer dst) throws DataFormatException {
    src.position(src.position() + 10).limit(src.limit() - 8);
    decompress(src, dst);
}
If you are using ByteBuffers you can use some simple Input/OutputStream wrappers such as these:
public class ByteBufferInputStream extends InputStream {

    private ByteBuffer buffer = null;

    public ByteBufferInputStream(ByteBuffer b) {
        this.buffer = b;
    }

    @Override
    public int read() throws IOException {
        return (buffer.get() & 0xFF);
    }
}

public class ByteBufferOutputStream extends OutputStream {

    private ByteBuffer buffer = null;

    public ByteBufferOutputStream(ByteBuffer b) {
        this.buffer = b;
    }

    @Override
    public void write(int b) throws IOException {
        buffer.put((byte) (b & 0xFF));
    }
}
Test:
ByteBuffer buffer = ByteBuffer.allocate(1000);

ByteBufferOutputStream bufferOutput = new ByteBufferOutputStream(buffer);
GZIPOutputStream output = new GZIPOutputStream(bufferOutput);
output.write("stackexchange".getBytes());
output.close();

buffer.position(0);

byte[] result = new byte[1000];
ByteBufferInputStream bufferInput = new ByteBufferInputStream(buffer);
GZIPInputStream input = new GZIPInputStream(bufferInput);
input.read(result);

System.out.println(new String(result));
I've got a ZIP file containing a number of PNG images that I am trying to load into my Java application as ImageIcon resources directly from the archive. Here's my code:
import java.io.*;
import java.util.Enumeration;
import java.util.zip.*;
import javax.swing.ImageIcon;

public class Test {

    public static void main(String[] args)
    {
        if (args.length == 0)
        {
            System.out.println("usage: java Test.java file.zip");
            return;
        }

        File archive = new File(args[0]);
        if (!archive.exists() || !archive.canRead())
        {
            System.err.printf("Unable to find/access %s.\n", archive);
            return;
        }

        try {
            ZipFile zip = new ZipFile(archive);
            Enumeration<? extends ZipEntry> e = zip.entries();
            while (e.hasMoreElements())
            {
                ZipEntry entry = (ZipEntry) e.nextElement();
                int size = (int) entry.getSize();
                int count = (size % 1024 == 0) ? size / 1024 : (size / 1024) + 1;
                int offset = 0;
                int nread, toRead;
                byte[] buffer = new byte[size];
                for (int i = 0; i < count; i++)
                {
                    offset = 1024 * i;
                    toRead = (size - offset > 1024) ? 1024 : size - offset;
                    nread = zip.getInputStream(entry).read(buffer, offset, toRead);
                }
                ImageIcon icon = new ImageIcon(buffer); // boom -- why?
            }
            zip.close();
        } catch (IOException ex) {
            System.err.println(ex.getMessage());
        }
    }
}
The sizes reported by entry.getSize() match the uncompressed size of the PNG files, and I am able to read the data out of the archive without any exceptions, but the creation of the ImageIcon blows up. The stacktrace:
sun.awt.image.PNGImageDecoder$PNGException: crc corruption
at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699)
at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707)
at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234)
at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246)
at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172)
at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136)
Can anyone shed some light on it? Google hasn't turned up any useful information.
You must pull getInputStream() out of the inner loop instead of invoking it repeatedly for each block: every call returns a fresh stream positioned at the start of the entry, so each read() re-reads the first bytes of the entry into a later offset of your buffer.
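As a sketch of the fix: open the entry's stream once and read it to completion; readAllBytes (Java 9+) removes the block bookkeeping entirely. The demo builds a tiny archive in a temp file so it runs standalone (file and entry names are illustrative):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ZipReadDemo {

    // Builds a one-entry archive in a temp file, then reads the entry
    // back with a single InputStream opened once per entry.
    static String roundTrip() throws IOException {
        File tmp = File.createTempFile("demo", ".zip");
        try {
            try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(tmp))) {
                zos.putNextEntry(new ZipEntry("hello.txt"));
                zos.write("hello zip".getBytes());
                zos.closeEntry();
            }
            try (ZipFile zip = new ZipFile(tmp)) {
                ZipEntry entry = zip.getEntry("hello.txt");
                // One stream, read to EOF -- no per-block offset bookkeeping.
                try (InputStream in = zip.getInputStream(entry)) {
                    return new String(in.readAllBytes());
                }
            }
        } finally {
            tmp.delete();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // hello zip
    }
}
```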