Reading a GZIP file causes "Unexpected end of ZLIB input stream" in Java

I am converting a GZIP byte array to a String in Java. The file is considerably large, and the idea is to convert it to JSON.
But the exception I am getting is quite weird and does not make much sense.
Code snippet:
public static String convert(byte[] bytes) throws IOException {
    final byte[] BUFFER = new byte[16234];
    GZIPInputStream gzipInputStream = new GZIPInputStream(new ByteArrayInputStream(bytes));
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    int len;
    while ((len = gzipInputStream.read(BUFFER)) >= 0) {
        byteArrayOutputStream.write(BUFFER, 0, len);
        if (byteArrayOutputStream.size() > 60812918) {
            System.out.println("stopping here");
        }
    }
    byteArrayOutputStream.flush();
    byteArrayOutputStream.close();
    gzipInputStream.close();
    final byte[] dataPart = byteArrayOutputStream.toByteArray();
    String data = new String(dataPart, StandardCharsets.UTF_8);
    return data;
}
Exception Trace:
java.io.EOFException: Unexpected end of ZLIB input stream
at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at java.util.zip.GZIPInputStream.read(GZIPInputStream.java:117)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at com.here.rcs.discoverkernels.testS3FileReader.convert(testS3FileReader.java:84)
at com.here.rcs.discoverkernels.testS3FileReader.viewJson(testS3FileReader.java:45)
at com.here.rcs.discoverkernels.testS3FileReader.main(testS3FileReader.java:21)
From a coding point of view, I don't think there is anything wrong with this piece of code.
Any suggestions on how to move forward with this?
Adding the byte conversion (compression) part:
public static byte[] compress(final String data) throws IOException {
    final byte[] dataPart = data.getBytes(StandardCharsets.UTF_8);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    GZIPOutputStream gzipOutputStream = new GZIPOutputStream(byteArrayOutputStream);
    gzipOutputStream.write(dataPart);
    gzipOutputStream.flush();
    gzipOutputStream.close();
    byte[] bytes = byteArrayOutputStream.toByteArray();
    return bytes;
}

I tried your program with a small .gz file, and it works as expected. I guess that there are issues handling large files, or, more likely, you did not load the data from the file into the byte array correctly. How did you load it?
I followed this article: https://netjs.blogspot.com/2015/11/how-to-convert-file-to-byte-array-java.html
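If the file was loaded with a pattern like in.available() or a single read() call, the byte array can end up truncated, which produces exactly this EOFException when the GZIP trailer is missing. A minimal sketch of loading the whole file safely, using java.nio.file.Files (Java 7+); the path here is illustrative:
// Reads the entire file into memory in one call, with no partial reads.
byte[] bytes = Files.readAllBytes(Paths.get("/path/to/file.json.gz"));
String json = convert(bytes);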

Related

Reading InputStream bytes and writing to ByteArrayOutputStream

I have a code block that reads a given number of bytes from an InputStream and returns a byte[] using a ByteArrayOutputStream. When I write that byte[] array to a file, the resulting file on the filesystem seems broken. Can anyone help me find the problem in the code block below?
public byte[] readWrite(long bytes, InputStream in) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    int maxReadBufferSize = 8 * 1024; // 8KB
    long numReads = bytes / maxReadBufferSize;
    long numRemainingRead = bytes % maxReadBufferSize;
    for (int i = 0; i < numReads; i++) {
        byte bufr[] = new byte[maxReadBufferSize];
        int val = in.read(bufr, 0, bufr.length);
        if (val != -1) {
            bos.write(bufr);
        }
    }
    if (numRemainingRead > 0) {
        byte bufr[] = new byte[(int) numRemainingRead];
        int val = in.read(bufr, 0, bufr.length);
        if (val != -1) {
            bos.write(bufr);
        }
    }
    return bos.toByteArray();
}
My understanding of the problem statement
Read bytes number of bytes from the given InputStream into a ByteArrayOutputStream.
Finally, return a byte array.
Key observations
A lot of work is done to make sure bytes are read in chunks of 8KB.
Also, the last remaining chunk of odd size is read separately.
A lot of work is also done to make sure we are reading from the correct offset.
My views
Unless we are reading a very large file (>10MB), I don't see a valid reason for reading in chunks of 8KB.
Let the Java libraries do all the hard work of maintaining offsets and making sure we don't read outside the limits.
E.g., we don't have to supply an offset; simply call inputStream.read(b) over and over, and the next chunk of up to b.length bytes will be read. Similarly, we can simply write to the outputStream.
Code
public byte[] readWrite(long bytes, InputStream in) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    byte[] buffer = new byte[(int) bytes];
    in.read(buffer);
    bos.write(buffer);
    return bos.toByteArray();
}
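One caveat on the simplified version above: a single call to read(byte[]) is not guaranteed to fill the buffer, especially for network streams. A sketch that loops until the requested count is reached or the stream ends early:
public byte[] readWrite(long bytes, InputStream in) throws IOException {
    byte[] buffer = new byte[(int) bytes];
    int total = 0;
    // read(byte[], off, len) may return fewer bytes than requested, so loop
    // until the buffer is full or the stream ends
    while (total < buffer.length) {
        int read = in.read(buffer, total, buffer.length - total);
        if (read == -1) {
            break; // stream ended before 'bytes' bytes were available
        }
        total += read;
    }
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    bos.write(buffer, 0, total); // write only what was actually read
    return bos.toByteArray();
}
On Java 9+, in.readNBytes(buffer, 0, buffer.length) performs this loop for you.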
References
About InputStreams
Byte Array to Human Readable Format

Unable to decompress gzipped files after uploading input stream chunks in S3

I'd like to take my input stream and upload gzipped parts to S3 in a similar fashion to the multipart uploader.
However, I want to store the individual file parts in S3 rather than combine the parts into a single file.
To do so, I have created the following methods.
But when I try to decompress each part, gzip throws an error: gzip: file_part_2.log.gz: not in gzip format.
I'm not sure if I am compressing each part correctly?
If I re-initialise the GZIPOutputStream (gzip = new GZIPOutputStream(baos);) and call gzip.finish() after resetting the ByteArrayOutputStream (baos.reset();), then I am able to decompress each part. Not sure why I need to do this; is there a similar reset for the GZIPOutputStream?
public void upload(String bucket, String key, InputStream is, int partSize) throws Exception
{
    String row;
    BufferedReader br = new BufferedReader(new InputStreamReader(is, ENCODING));
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    GZIPOutputStream gzip = new GZIPOutputStream(baos);
    int partCounter = 0;
    int lineCounter = 0;
    while ((row = br.readLine()) != null) {
        if (baos.size() >= partSize) {
            partCounter = this.uploadChunk(bucket, key, baos, partCounter);
            baos.reset();
        } else if (!row.equals("")) {
            row += '\n';
            gzip.write(row.getBytes(ENCODING));
            lineCounter++;
        }
    }
    gzip.finish();
    br.close();
    baos.close();
    if (lineCounter == 0) {
        throw new Exception("Aborting upload, file contents is empty!");
    }
    // Final chunk
    if (baos.size() > 0) {
        this.uploadChunk(bucket, key, baos, partCounter);
    }
}
private int uploadChunk(String bucket, String key, ByteArrayOutputStream baos, int partCounter)
{
    ObjectMetadata metaData = new ObjectMetadata();
    metaData.setContentLength(baos.size());
    String[] path = key.split("/");
    String[] filename = path[path.length - 1].split("\\.");
    filename[0] = filename[0] + "_part_" + partCounter;
    path[path.length - 1] = String.join(".", filename);
    amazonS3.putObject(
        bucket,
        String.join("/", path),
        new ByteArrayInputStream(baos.toByteArray()),
        metaData
    );
    log.info("Upload chunk {}, size: {}", partCounter, baos.size());
    return partCounter + 1;
}
The problem is that you're using a single GZIPOutputStream for all chunks. So you're actually writing pieces of one GZipped file, which would have to be recombined to be useful.
Making the minimal change to your existing code:
if (baos.size() >= partSize) {
    gzip.close();
    partCounter = this.uploadChunk(bucket, key, baos, partCounter);
    baos = new ByteArrayOutputStream();
    gzip = new GZIPOutputStream(baos);
}
You need to do the same at the end of the loop. Also, you shouldn't throw an exception if the line counter is 0: it's entirely possible that the file is exactly divisible into a set number of chunks.
To improve the code, I would wrap the GZIPOutputStream in an OutputStreamWriter and a BufferedWriter, so that you don't need to do the string-bytes conversion explicitly.
And lastly, don't use ByteArrayOutputStream.reset(). It doesn't save you anything over just creating a new stream, and opens the door for errors if you ever forget to reset.
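Putting those suggestions together, a sketch of a per-chunk helper; the names startChunk and finishChunk are illustrative, and ENCODING is the same constant used in the question's code:
private ByteArrayOutputStream baos;
private Writer writer;

// Each chunk gets its own ByteArrayOutputStream and GZIPOutputStream, so every
// uploaded part is a complete, independently decompressable .gz file.
private void startChunk() throws IOException {
    baos = new ByteArrayOutputStream();
    writer = new BufferedWriter(new OutputStreamWriter(new GZIPOutputStream(baos), ENCODING));
}

// Closing the writer finishes the gzip trailer; only then are the bytes a valid .gz.
private byte[] finishChunk() throws IOException {
    writer.close();
    return baos.toByteArray();
}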

Java: File to String, problems with using a buffer, byte array not clean

Take the following static method:
public static String fileToString(String filename) throws Exception {
    FileInputStream fis = new FileInputStream(filename);
    byte[] buffer = new byte[8192];
    StringBuffer sb = new StringBuffer();
    int bytesRead; // unused? weird compiler messages...
    while ((bytesRead = fis.read(buffer)) != -1) { // InputStream.read() returns -1 at EOF
        sb.append(new String(buffer));
    }
    return new String(sb);
}
As you can see, everything looks okay, and it is perfect for small text files. But once you get to big files with thousands of lines, you encounter problems with repeating text. Based on my intuition, I thought byte[] buffer was "unclean", so to speak. So I added the following line to the method:
buffer = new byte[8192];
So that it is now:
public static String fileToString(String filename) throws Exception {
    FileInputStream fis = new FileInputStream(filename);
    byte[] buffer = new byte[8192];
    StringBuffer sb = new StringBuffer();
    int bytesRead; // unused? weird compiler messages...
    while ((bytesRead = fis.read(buffer)) != -1) { // InputStream.read() returns -1 at EOF
        sb.append(new String(buffer));
        buffer = new byte[8192]; // added new line here
    }
    return new String(sb);
}
And it's perfect, except for the fact that at the end of the String that the static method returns, I get a lot of null characters (depends on the buffer size). What's going on here?
Actually, // unused? weird compiler messages... is not weird: you never read the variable.
How could sb.append(new String(buffer)) know how many bytes were written into the buffer?
Exactly: this is where bytesRead comes into play.
So you need new String(bytes, offset, length):
public static String fileToString(String filename) throws Exception {
    FileInputStream fis = new FileInputStream(filename);
    byte[] buffer = new byte[8192];
    StringBuffer sb = new StringBuffer();
    int bytesRead;
    while ((bytesRead = fis.read(buffer)) != -1) { // InputStream.read() returns -1 at EOF
        sb.append(new String(buffer, 0, bytesRead));
        buffer = new byte[8192];
        bytesRead = 0;
    }
    return new String(sb);
}
might work
You really shouldn't be reading bytes and creating a String from the raw bytes. This is wrong because it completely ignores the encoding of the text. You might be lucky and be reading ASCII, in which case things will just work out. In all other cases this is asking for trouble.
You really should use a BufferedReader which wraps an InputStreamReader which wraps your source InputStream.
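A minimal sketch of that Reader-based approach, assuming UTF-8 input (adjust the charset to match the file):
public static String fileToString(String filename) throws IOException {
    StringBuilder sb = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(new FileInputStream(filename), StandardCharsets.UTF_8))) {
        char[] chunk = new char[8192];
        int charsRead;
        while ((charsRead = reader.read(chunk)) != -1) {
            sb.append(chunk, 0, charsRead); // append only the chars actually read
        }
    }
    return sb.toString();
}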
Don't reinvent the wheel. Unless this is school homework, use an existing library like Apache Commons IO.
http://commons.apache.org/io/apidocs/org/apache/commons/io/IOUtils.html#toString%28java.io.InputStream,%20java.nio.charset.Charset%29
For example, you can read the file into a String in just a few lines, like the following:
public static String fileToString(String filepath) throws Exception {
    return IOUtils.toString(new FileInputStream(filepath), "utf-8");
}
This will save you from a lot of hand-written custom code and will likely have far fewer bugs.

GZIP decompress string and byte conversion

I have a problem in this code:
private static String compress(String str)
{
    String str1 = null;
    ByteArrayOutputStream bos = null;
    try
    {
        bos = new ByteArrayOutputStream();
        BufferedOutputStream dest = null;
        byte b[] = str.getBytes();
        GZIPOutputStream gz = new GZIPOutputStream(bos, b.length);
        gz.write(b, 0, b.length);
        bos.close();
        gz.close();
    }
    catch (Exception e) {
        System.out.println(e);
        e.printStackTrace();
    }
    byte b1[] = bos.toByteArray();
    return new String(b1);
}
private static String deCompress(String str)
{
    String s1 = null;
    try
    {
        byte b[] = str.getBytes();
        InputStream bais = new ByteArrayInputStream(b);
        GZIPInputStream gs = new GZIPInputStream(bais);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int numBytesRead = 0;
        byte[] tempBytes = new byte[6000];
        try
        {
            while ((numBytesRead = gs.read(tempBytes, 0, tempBytes.length)) != -1)
            {
                baos.write(tempBytes, 0, numBytesRead);
            }
            s1 = new String(baos.toByteArray());
            s1 = baos.toString();
        }
        catch (ZipException e)
        {
            e.printStackTrace();
        }
    }
    catch (Exception e) {
        e.printStackTrace();
    }
    return s1;
}
public String test() throws Exception
{
    String str = "teststring";
    String cmpr = compress(str);
    String dcmpr = deCompress(cmpr);
    return dcmpr;
}
This code throws java.io.IOException: unknown format (magic number ef1f) at:
GZIPInputStream gs = new GZIPInputStream(bais);
It turns out that the bytes are "spoiled" by the round trip through new String(b1) and str.getBytes(): the output contains more bytes than went in. If you avoid the conversion to a string and work directly with the bytes, everything works. Sorry for my English.
public String unZip(String zipped) throws DataFormatException, IOException {
    byte[] bytes = zipped.getBytes("WINDOWS-1251");
    Inflater decompressed = new Inflater();
    decompressed.setInput(bytes);
    byte[] result = new byte[100];
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    int count;
    while ((count = decompressed.inflate(result)) != 0)
        buffer.write(result, 0, count); // write only the bytes actually inflated
    decompressed.end();
    return new String(buffer.toByteArray(), "WINDOWS-1251");
}
I use this function to decompress the server response. Thanks for the help.
You have two problems:
You're using the default character encoding to convert the original string into bytes. That will vary by platform. It's better to specify an encoding - UTF-8 is usually a good idea.
You're trying to represent the opaque binary data of the result of the compression as a string by just calling the String(byte[]) constructor. That constructor is only meant for data which is encoded text... which this isn't. You should use base64 for this. There's a public domain base64 library which makes this easy. (Alternatively, don't convert the compressed data to text at all - just return a byte array.)
Fundamentally, you need to understand how different text and binary data are - when you want to convert between the two, you should do so carefully. If you want to represent "non text" binary data (i.e. bytes which aren't the direct result of encoding text) in a string you should use something like base64 or hex. When you want to encode a string as binary data (e.g. to write some text to disk) you should carefully consider which encoding to use. If another program is going to read your data, you need to work out what encoding it expects - if you have full control over it yourself, I'd usually go for UTF-8.
Additionally, the exception handling in your code is poor:
You should almost never catch Exception; catch more specific exceptions
You shouldn't just catch an exception and continue as if it had never happened. If you can't really handle the exception and still complete your method successfully, you should let the exception bubble up the stack (or possibly catch it and wrap it in a more appropriate exception type for your abstraction)
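For illustration, here is a sketch of the base64 route using java.util.Base64 (built into Java 8+) rather than the third-party library mentioned above:
private static String compress(String str) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
        gz.write(str.getBytes(StandardCharsets.UTF_8)); // explicit encoding
    } // close() finishes the gzip trailer
    return Base64.getEncoder().encodeToString(bos.toByteArray()); // binary -> text safely
}

private static String deCompress(String str) throws IOException {
    byte[] compressed = Base64.getDecoder().decode(str);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (GZIPInputStream gs = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = gs.read(buf)) != -1) {
            baos.write(buf, 0, n);
        }
    }
    return new String(baos.toByteArray(), StandardCharsets.UTF_8);
}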
When you GZIP compress data, you always get binary data. This data cannot be converted into a string, as it is not valid character data (in any encoding).
So your compress method should return a byte array, and your decompress method should take a byte array as its parameter.
Furthermore, I recommend you use an explicit encoding when you convert the string into a byte array before compression and when you turn the decompressed data into a string again.
When you GZIP compress data, you always get binary data. This data cannot be converted into a string, as it is not valid character data (in any encoding).
Codo is right; thanks a lot for enlightening me. I was trying to decompress a string (converted from the binary data). What I amended was to use InflaterInputStream directly on the input stream returned by my HTTP connection. (My app was retrieving a large JSON of strings.)

Java InputStream reading problem

I have a Java class, where I'm reading data in via an InputStream
byte[] b = null;
try {
    b = new byte[in.available()];
    in.read(b);
} catch (IOException e) {
    e.printStackTrace();
}
It works perfectly when I run my app from the IDE (Eclipse).
But when I export my project and it's packed in a JAR, the read command doesn't read all the data. How could I fix it?
This problem mostly occurs when the InputStream is a File (~10kb).
Thanks!
Usually I prefer using a fixed-size buffer when reading from an input stream. As evilone pointed out, using available() as the buffer size might not be a good idea because, say, if you are reading a remote resource, you might not know the available bytes in advance. You can read the javadoc of InputStream to get more insight.
Here is the code snippet I usually use for reading input stream:
byte[] buffer = new byte[BUFFER_SIZE];
int bytesRead = 0;
while ((bytesRead = in.read(buffer)) >= 0) {
    for (int i = 0; i < bytesRead; i++) {
        // Do whatever you need with the bytes here
    }
}
The version of read() I'm using here will fill the given buffer as much as possible and return the number of bytes actually read. This means there is a chance that your buffer may contain trailing garbage data, so it is very important to use only the bytes up to bytesRead.
Note the line (bytesRead = in.read(buffer)) >= 0: there is nothing in the InputStream spec saying that read() cannot read 0 bytes. You may need to handle the case when read() reads 0 bytes as a special case, depending on your situation. For local files I have never experienced that; however, when reading remote resources, I have actually seen read() return 0 bytes constantly, sending the above code into an infinite loop. I solved the infinite-loop problem by counting the number of times I read 0 bytes; when the counter exceeded a threshold, I threw an exception. You may not encounter this problem, but just keep it in mind :)
I would probably stay away from creating a new byte array for each read, for performance reasons.
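A sketch of that zero-read guard; the threshold of 1000 is an arbitrary illustrative value:
int zeroReads = 0;
int bytesRead;
while ((bytesRead = in.read(buffer)) >= 0) {
    if (bytesRead == 0) {
        // some remote streams can return 0 repeatedly; bail out rather than spin
        if (++zeroReads > 1000) {
            throw new IOException("Too many zero-byte reads; stream appears stalled");
        }
        continue;
    }
    zeroReads = 0;
    // process buffer[0..bytesRead) here
}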
read() will return -1 when the InputStream is depleted. There is also a version of read which takes an array; this allows you to do chunked reads. It returns the number of bytes actually read, or -1 when at the end of the InputStream. Combine this with a dynamic buffer such as ByteArrayOutputStream to get the following:
InputStream in = ...
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
int read;
byte[] input = new byte[4096];
while (-1 != (read = in.read(input))) {
    buffer.write(input, 0, read);
}
input = buffer.toByteArray();
This cuts down a lot on the number of methods you have to invoke and allows the ByteArrayOutputStream to grow its internal buffer faster.
File file = new File("/path/to/file");
try {
    InputStream is = new FileInputStream(file);
    byte[] bytes = IOUtils.toByteArray(is);
    System.out.println("Byte array size: " + bytes.length);
} catch (IOException e) {
    e.printStackTrace();
}
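For reference, IOUtils here is org.apache.commons.io.IOUtils from Apache Commons IO; toByteArray(InputStream) reads the stream fully into a byte array.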
Below is a snippet of code that downloads a file (*.png, *.jpeg, *.gif, ...) and writes it to a BufferedOutputStream wrapping the HttpServletResponse output stream.
BufferedInputStream inputStream = bo.getBufferedInputStream(imageFile);
try {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    int bytesRead = 0;
    byte[] input = new byte[DefaultBufferSizeIndicator.getDefaultBufferSize()];
    while (-1 != (bytesRead = inputStream.read(input))) {
        buffer.write(input, 0, bytesRead);
    }
    input = buffer.toByteArray();
    response.reset();
    response.setBufferSize(DefaultBufferSizeIndicator.getDefaultBufferSize());
    response.setContentType(mimeType);
    // Here's the secret. Content-Length should equal the number of bytes read.
    response.setHeader("Content-Length", String.valueOf(buffer.size()));
    response.setHeader("Content-Disposition", "inline; filename=\"" + imageFile.getName() + "\"");
    BufferedOutputStream outputStream = new BufferedOutputStream(response.getOutputStream(), DefaultBufferSizeIndicator.getDefaultBufferSize());
    try {
        outputStream.write(input, 0, buffer.size());
    } finally {
        ImageBO.close(outputStream);
    }
} finally {
    ImageBO.close(inputStream);
}
Hope this helps.
