I am trying to decompress a CSV file of the form name.csv.gz. It's something like 600 MB compressed and, we'll say, somewhere in the ballpark of 7 GB when decompressed.
byte[] buffer = new byte[4096];
try {
    GZIPInputStream gzis = new GZIPInputStream(new FileInputStream("/run/media/justin/DATA/2000000033673205_53848.TEST_SCHEDULE_GCO.20180706.090850.2000000033673205.x04q13.csv.gz"));
    FileOutputStream out = new FileOutputStream("/run/media/justin/DATA/unzipped.txt");
    int len;
    while ((len = gzis.read(buffer)) > 0) {
        out.write(buffer, 0, len);
    }
    gzis.close();
    out.close();
    System.out.println("DONE!!");
} catch (IOException e) {
    e.printStackTrace();
}
This is the code I am using to decompress it, and at the end I get the error Unexpected end of ZLIB stream and I am missing several million lines at the end of the file. I haven't found anything on Google that has led me in any promising direction, so any help is greatly appreciated!
Edit: I forgot a line of code at the top (*facepalm*). Also, I have increased the buffer size from 2048 to 4096, and I am getting more lines after decompression, so would I be correct in assuming that I just didn't allocate a large enough buffer? (Or is this a naive assumption?)
This is not a problem with your buffer size; it's more a problem with the GZIPInputStream.read() method. The buffer size only determines how "often" the while loop reads and writes, because a bigger buffer means a higher transfer rate and fewer loop iterations; it does not change how much data gets read in total.
Your problem is inside the GZIPInputStream class, or it has something to do with the files being used; maybe try a smaller file first.
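To rule out anything on the Java side, here is a minimal sketch of the same copy (reusing the paths from your question) with try-with-resources and a byte counter; if it always stops at the same point well short of the expected ~7 GB, the .gz file itself is most likely truncated:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class GunzipCheck {
    public static void main(String[] args) {
        byte[] buffer = new byte[4096];
        long total = 0;
        try (GZIPInputStream gzis = new GZIPInputStream(new FileInputStream(
                     "/run/media/justin/DATA/2000000033673205_53848.TEST_SCHEDULE_GCO.20180706.090850.2000000033673205.x04q13.csv.gz"));
             FileOutputStream out = new FileOutputStream("/run/media/justin/DATA/unzipped.txt")) {
            int len;
            while ((len = gzis.read(buffer)) > 0) {
                out.write(buffer, 0, len);
                total += len;
            }
            System.out.println("DONE, wrote " + total + " bytes");
        } catch (IOException e) {
            // "Unexpected end of ZLIB stream" ends up here if the compressed data is truncated
            e.printStackTrace();
            System.out.println("Failed after writing " + total + " bytes");
        }
    }
}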
So I have created my own personal HTTP Server in Java from scratch.
So far it is working fine but with one major flaw.
When I try to pass big files to the browser I get a Java Heap Space error. I know how to fix this error through the JVM, but I am looking for a long-term solution.
// declare an integer for the byte length of the file
int length = (int) f.length();
// start the FileInputStream
FileInputStream fis = new FileInputStream(f);
// byte array with the length of the whole file
byte[] bytes = new byte[length];
// write the file until the stream is exhausted
while ((length = fis.read(bytes)) != -1) {
    write(bytes, 0, length);
}
flush();
// close the file input stream
fis.close();
This way sends the file to the browser successfully and streams it perfectly, but the issue is that I am creating a byte array with the length of the whole file, so when the file is very big I get the Heap Space error.
I have eliminated this issue by using a buffer as shown below, and I don't get Heap Space errors anymore. BUT the way shown below does not stream the files to the browser correctly. It's as if the file bytes are being shuffled and sent to the browser all together.
final int bufferSize = 4096;
byte[] buffer = new byte[bufferSize];
FileInputStream fis = new FileInputStream(f);
BufferedInputStream bis = new BufferedInputStream(fis);
while (true) {
    int length = bis.read(buffer, 0, bufferSize);
    if (length < 0) break;
    write(buffer, 0, length);
}
flush();
bis.close();
fis.close();
Note 1: All the correct response headers are being sent perfectly to the browser.
Note 2: Both ways work perfectly on a computer browser, but only the first way works on a smartphone's browser (and sometimes it gives me a Heap Space error).
If someone knows how to correctly send files to a browser and stream them correctly I would be a very very happy man.
Thank you in advance! :)
When reading from a BufferedInputStream you can allow its buffer to handle the buffering; there is no reason to read everything into a byte[] first (and certainly not into a byte[] the size of the entire File). Read one byte at a time and rely on the internal buffer of the stream. Something like:
FileInputStream fis = new FileInputStream(f);
BufferedInputStream bis = new BufferedInputStream(fis);
int abyte;
// read() returns one byte at a time (-1 at end of stream);
// the BufferedInputStream refills its internal buffer behind the scenes
while ((abyte = bis.read()) != -1) {
    write(abyte);
}
Hmm... as I see it, you are already trying to use chunks in your code.
As far as I can remember, even the Apache HttpClient + FileUpload solution has a file size limit of about 2.1 GB or so (correct me if I am wrong), so this is a bit of a hard problem...
I haven't tried this solution myself yet, but as a test you could use java.io.RandomAccessFile in combination with File(Input/Output)Stream on the client and server, so that you read and write not the whole file at once but a sequence of, say, 30 MB blocks, to avoid the annoying out-of-memory errors. An example of using RandomAccessFile can be found here: https://examples.javacodegeeks.com/core-java/io/randomaccessfile/java-randomaccessfile-example/
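Something along these lines, just as a rough sketch (the BlockCopy class, the copyBlock name and the 30 MB figure are mine, purely for illustration, not part of any existing API):
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;

class BlockCopy {
    // Hypothetical helper: copy one block of at most blockSize bytes, starting
    // at the given offset, from the file to the output stream. Returns the
    // number of bytes actually copied, or -1 at the end of the file.
    static int copyBlock(RandomAccessFile raf, OutputStream out,
                         long offset, int blockSize) throws IOException {
        byte[] block = new byte[blockSize];        // e.g. 30 * 1024 * 1024 for a 30 MB block
        raf.seek(offset);                          // jump to the requested position
        int read = raf.read(block, 0, blockSize);  // may return fewer bytes near the end of the file
        if (read > 0) {
            out.write(block, 0, read);
        }
        return read;
    }
}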
But you still give few details :( I mean, is your client supposed to be a regular Java application or not?
If you have some additional information please let me know
Good luck :)
I've written a REST resource that serves a .tar.gz file. It's working OK. I've tried requesting it, saving the data, and unpacking it (with tar xzvf [filename]), and I get the correct data.
However, I'm trying to use java.util.zip.GZIPInputStream and org.apache.tools.tar.TarInputStream to unzip and untar a .tar.gz that I'm serving in a JUnit test, to verify that it's working automatically. This is the code in my unit test with some details removed:
HttpResponse response = <make request code here>
byte[] receivedBytes = FileHelper.copyInputStreamToByteArray(response.getEntity().getContent(), true);
GZIPInputStream gzipInputStream = new GZIPInputStream(new ByteArrayInputStream(receivedBytes));
TarInputStream tarInputStream = new TarInputStream(gzipInputStream);
TarEntry tarEntry = tarInputStream.getNextEntry();
ByteArrayOutputStream byteArrayOutputStream = null;
System.out.println("Record size: " + tarInputStream.getRecordSize());
while (tarEntry != null) // It only goes in here once
{
    byteArrayOutputStream = new ByteArrayOutputStream();
    tarInputStream.copyEntryContents(byteArrayOutputStream);
    tarEntry = tarInputStream.getNextEntry();
}
byteArrayOutputStream.flush();
byteArrayOutputStream.close();
byte[] archivedBytes = byteArrayOutputStream.toByteArray();
byte[] actualBytes = <get actual bytes>
Assert.assertArrayEquals(actualBytes, archivedBytes);
The final assert fails with a difference at byte X = (n * 512) + 1, where n is the greatest natural number such that n * 512 <= l and l is the length of the data. That is, I get the biggest possible multiple of 512 bytes of data correctly, but debugging the test I can see that all the remaining bytes are zero. So, if the total amount of data is 1000 bytes, the first 512 bytes in archivedBytes are correct, but the last 488 are all zero / unset; and if the total data is 262272 bytes, I get the first 262144 (512 * 512) bytes correctly, but the remaining bytes are all zero again.
Also, the tarInputStream.getRecordSize() System out above prints Record size: 512, so I presume that this is somehow related. However, since the archive works if I download it, I guess the data must be there, and there's just something I'm missing.
Stepping into tarInputStream.copyEntryContents(byteArrayOutputStream) with the 1000-byte data, at the line
int numRead = read(buf, 0, buf.length);
numRead is 100, but looking at the buffer, only the first 512 bytes are non-zero. Maybe I shouldn't be using that method to get the data out of the TarInputStream?
If anyone knows how it's supposed to work, I'd be very grateful for any advice or help.
You can specify the output block size to be used when you create a tar archive; the size of the archive will then be a multiple of the block size. Since the archive data doesn't normally fit into a whole number of blocks, zeros are added to the last block to pad it to the right size.
It turned out that I was wrong in my original question, and the error was in the resource code: I wasn't closing the entry on the TarOutputStream when writing to it. I guess this was not causing any problems when requesting it manually from the server, maybe because the entry was closed with the connection or something, but it behaved differently when being requested from a unit test... though I must admit that doesn't make a whole lot of sense to me :P
Looking at the fragment of my writing code below, I was missing line 3.
1: tarOutputStream.putNextEntry(tarEntry);
2: tarOutputStream.write(fileRawBytes);
3: tarOutputStream.closeEntry();
4: tarOutputStream.close();
I didn't even know there was such a thing as a "closeEntry" on the TarOutputStream... I do now! :P
I am trying to read and write large files (larger than 100 MB) using BufferedInputStream and BufferedOutputStream. I am getting a heap memory issue and an OOM exception.
The code looks like:
BufferedInputStream buffIn = new BufferedInputStream(iStream);
/** iStream is the InputStream object **/
BufferedOutputStream buffOut = new BufferedOutputStream(new FileOutputStream(file));
byte[] arr = new byte[1024 * 1024];
int available = -1;
while ((available = buffIn.read(arr)) > 0) {
    buffOut.write(arr, 0, available);
}
buffOut.flush();
buffOut.close();
My question is: when we use the BufferedOutputStream, does it hold on to the memory until the full file is written out?
What is the best way to write large files using BufferedOutputStream?
There is nothing wrong with the code you have provided; your memory issues must lie elsewhere. The buffered streams have a fixed memory usage limit (by default each one holds an 8 KB internal buffer), so this copy loop uses a constant amount of memory regardless of how large the file is.
The easiest way to determine what has caused an OOME, of course, is to have the OOME generate a heap dump (for example by running the JVM with -XX:+HeapDumpOnOutOfMemoryError) and then examine that heap dump in a memory profiler.
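To make that concrete, here is the same copy written as a small helper with explicit buffer sizes (a sketch only; iStream and file are the variables from your snippet). Its memory footprint is roughly two 8 KB stream buffers plus the 1 MB transfer array, no matter how large the file is:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

class CopyExample {
    static void copy(InputStream iStream, File file) throws IOException {
        try (BufferedInputStream buffIn = new BufferedInputStream(iStream, 8 * 1024);
             BufferedOutputStream buffOut = new BufferedOutputStream(new FileOutputStream(file), 8 * 1024)) {
            byte[] arr = new byte[1024 * 1024];   // allocated once and reused for every read
            int available;
            while ((available = buffIn.read(arr)) > 0) {
                buffOut.write(arr, 0, available);
            }
        }   // close() flushes and releases both fixed-size buffers
    }
}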
I'm reading a large XML file using HttpURLConnection in Java as follows.
StringBuilder responseBuilder = new StringBuilder(1024);
char[] buffer = new char[4096];
BufferedReader br = new BufferedReader(new InputStreamReader(
        (InputStream) new DataInputStream(new GZIPInputStream(connection.getInputStream())), "UTF-8"));
int n = 0;
while (n >= 0) {
    n = br.read(buffer, 0, buffer.length);
    if (n > 0) responseBuilder.append(buffer, 0, n);
}
Is there any way to get the total number of bytes loaded into the BufferedReader before I finish reading it char by char / line by line / block by block?
It sounds like you're trying to find out the size of the BufferedReader without consuming it.
You could try using the HttpURLConnection's getContentLength() method. This may or may not work. What it certainly wouldn't do is give you the uncompressed size of the stream. If it's the latter that you're after, you're almost certainly out of luck.
If I have misunderstood your question, please clarify what it is exactly that you're after.
If the content-length header has been set, then you can access it through the connection. But if the content has been compressed, it might not be set, or it might give the compressed size, whereas I assume you are looking for the uncompressed size.
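For example (just a sketch against the connection object from your snippet; the header may be absent, and when present it describes the compressed body, not the decompressed characters):
// Content-Length, when present, describes the compressed response body.
long compressedLength = connection.getContentLengthLong();   // -1 if the header is missing
if (compressedLength >= 0) {
    System.out.println("Compressed response size: " + compressedLength + " bytes");
} else {
    System.out.println("No Content-Length header available");
}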
I'm trying to figure out why this particular snippet of code isn't working for me. I've got an applet which is supposed to read a .pdf and display it with a pdf-renderer library, but for some reason, when I read in the .pdf files that sit on my server, they end up being corrupt. I've tested this by writing the files back out again.
I've tried viewing the applet in both IE and Firefox, and the corrupt files occur in both. The funny thing is, when I try viewing the applet in Safari (for Windows), the file is actually fine! I understand the JVM might be different, but I am still lost. I've compiled in Java 1.5. The JVMs are 1.6. The snippet which reads the file is below.
public static ByteBuffer getAsByteArray(URL url) throws IOException {
    ByteArrayOutputStream tmpOut = new ByteArrayOutputStream();
    URLConnection connection = url.openConnection();
    int contentLength = connection.getContentLength();
    InputStream in = url.openStream();
    byte[] buf = new byte[512];
    int len;
    while (true) {
        len = in.read(buf);
        if (len == -1) {
            break;
        }
        tmpOut.write(buf, 0, len);
    }
    tmpOut.close();
    ByteBuffer bb = ByteBuffer.wrap(tmpOut.toByteArray(), 0, tmpOut.size());
    //Lines below used to test if file is corrupt
    //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf");
    //fos.write(tmpOut.toByteArray());
    return bb;
}
I must be missing something, and I've been banging my head trying to figure it out. Any help is greatly appreciated. Thanks.
Edit:
To further clarify my situation: the difference between the files before I read them with the snippet and after is that the ones I write out after reading are significantly smaller than the originals. When opening them, they are not recognized as .pdf files. There are no exceptions being thrown that I ignore, and I have tried flushing to no avail.
This snippet works in Safari, meaning the files are read in their entirety, with no difference in size, and can be opened with any .pdf reader. In IE and Firefox, the files always end up corrupted, consistently at the same smaller size.
I monitored the len variable (when reading a 59 KB file), hoping to see how many bytes get read in each loop. In IE and Firefox, at 18 KB, in.read(buf) returns -1 as if the file had ended. Safari does not do this.
I'll keep at it, and I appreciate all the suggestions so far.
Just in case these small changes make a difference, try this:
public static ByteBuffer getAsByteArray(URL url) throws IOException {
    URLConnection connection = url.openConnection();
    // Since you get a URLConnection, use it to get the InputStream
    InputStream in = connection.getInputStream();
    // Now that the InputStream is open, get the content length
    int contentLength = connection.getContentLength();

    // To avoid having to resize the array over and over and over as
    // bytes are written to the array, provide an accurate estimate of
    // the ultimate size of the byte array
    ByteArrayOutputStream tmpOut;
    if (contentLength != -1) {
        tmpOut = new ByteArrayOutputStream(contentLength);
    } else {
        tmpOut = new ByteArrayOutputStream(16384); // Pick some appropriate size
    }

    byte[] buf = new byte[512];
    while (true) {
        int len = in.read(buf);
        if (len == -1) {
            break;
        }
        tmpOut.write(buf, 0, len);
    }
    in.close();
    tmpOut.close(); // No effect, but good to do anyway to keep the metaphor alive

    byte[] array = tmpOut.toByteArray();

    //Lines below used to test if file is corrupt
    //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf");
    //fos.write(array);
    //fos.close();
    return ByteBuffer.wrap(array);
}
You forgot to close fos, which may result in that file being shorter if your application is still running or is abruptly terminated. Also, I added creating the ByteArrayOutputStream with an appropriate initial size. (Otherwise Java will have to repeatedly allocate a new array and copy, allocate a new array and copy, which is expensive.) Replace the value 16384 with a more appropriate value; 16k is probably small for a PDF, but I don't know what the "average" size is that you expect to download.
Since you use toByteArray() twice (even though one is in diagnostic code), I assigned that to a variable. Finally, although it shouldn't make any difference, when you are wrapping the entire array in a ByteBuffer, you only need to supply the byte array itself. Supplying the offset 0 and the length is redundant.
Note that if you are downloading large PDF files this way, then ensure that your JVM is running with a large enough heap that you have enough room for several times the largest file size you expect to read. The method you're using keeps the whole file in memory, which is OK as long as you can afford that memory. :)
I thought I had the same problem as you, but it turned out my problem was that I assumed a read always returns a full buffer until there is nothing left to read. Your code does not make that assumption, though.
The examples on the net (e.g. java2s/tutorial) use a BufferedInputStream. But that does not make any difference for me.
You could check whether you actually get the full file in your loop; if you do, the problem would be in the ByteArrayOutputStream.
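For example, something like this (purely illustrative, reusing the in, buf, tmpOut and contentLength variables from your method):
// Count how many bytes the loop actually reads and compare it with the
// Content-Length reported by the connection.
long total = 0;
int len;
while ((len = in.read(buf)) != -1) {
    tmpOut.write(buf, 0, len);
    total += len;
}
System.out.println("Read " + total + " bytes, Content-Length was " + contentLength);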
Have you tried a flush() before you close the tmpOut stream, to ensure all bytes are written out?