Using the write() method, the file gets too big - Java

I'm trying to write data that I receive from a socket to a file. I store the data in an array, but when I write it out, the file gets too big...
I think this is caused by using a big array, as I don't know the length of the data stream...
Checking the write(byte[] b) method, its documentation states that it "writes b.length bytes from the specified byte array to this file output stream", so write() uses the length of the array, and that length is 2000...
How can I know the length of the data that will actually be written?
...
byte[] Rbuffer = new byte[2000];
dis = new DataInputStream(socket.getInputStream());
dis.read(Rbuffer);
writeSDCard.writeToSDFile(Rbuffer);
...
void writeToSDFile(byte[] inputMsg) {
    File root = android.os.Environment.getExternalStorageDirectory();
    File dir = new File(root.getAbsolutePath() + "/download");
    if (!dir.exists()) {
        dir.mkdirs();
    }
    Log.d("WriteSDCard", "Start writing");
    File file = new File(dir, "myData.txt");
    try {
        FileOutputStream f = new FileOutputStream(file, true);
        f.write(inputMsg);
        f.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        Log.i(TAG, "******* File not found. Did you" +
                " add a WRITE_EXTERNAL_STORAGE permission to the manifest?");
    } catch (IOException e) {
        e.printStackTrace();
    }
}

read() returns the number of bytes that were read, or -1. You are ignoring both possibilities, and assuming that it filled the buffer. All you have to do is store the result in a variable, check for -1, and otherwise pass it to the write() method.
Actually you should pass the input stream to your method, and use a loop after creating the file:
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
Your statement in a now-deleted comment that a new input stream is created per packet is not correct.
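For illustration, here is a minimal sketch of the question's method reworked to take the stream directly (the method name, directory, and file name follow the question's code; the 8 KB buffer size is just a common choice, not a requirement):

void writeToSDFile(InputStream in) throws IOException {
    File root = android.os.Environment.getExternalStorageDirectory();
    File dir = new File(root.getAbsolutePath() + "/download");
    if (!dir.exists()) {
        dir.mkdirs();
    }
    File file = new File(dir, "myData.txt");
    // try-with-resources closes the output even if write() throws
    try (FileOutputStream out = new FileOutputStream(file, true)) {
        byte[] buffer = new byte[8192];
        int count;
        // write only as many bytes as read() actually returned
        while ((count = in.read(buffer)) > 0) {
            out.write(buffer, 0, count);
        }
    }
}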

Related

From FileInputStream to BufferedInputStream conversion

We were given a few exercises in lab, and one of them is to convert the file-transferring method from FileInputStream to BufferedInputStream. It's a client sending a GET request to a web server, which sends back the requested file.
I came up with a simple solution, and I just wanted to check whether it's correct.
Original code:
try {
    FileInputStream fis = new FileInputStream(req);
    // req, String containing file name
    byte[] data = new byte[fis.available()];
    fis.read(data);
    out.write(data); // OutputStream out = socket.getOutputStream();
} catch (FileNotFoundException e) {
    new PrintStream(out).println("404 Not Found");
}
My try:
try {
    BufferedInputStream bis = new BufferedInputStream(new FileInputStream(req));
    byte[] data = new byte[4];
    while (bis.read(data) > -1) {
        out.write(data);
        data = new byte[4];
    }
} catch (FileNotFoundException e) {
    new PrintStream(out).println("404 Not Found");
}
The file is a web page named index.html, which contains a simple html page.
I have to reallocate the array on each iteration because, on the last pass of the while loop, if the file size isn't a multiple of 4, the data array would still contain bytes from the previous pass, and those showed up in the browser.
I chose 4 as data size for debugging purposes.
Output is correct.
Is this a good solution or can I do better?
There's no need to re-create the byte array each time - just overwrite it. More importantly though, you have a conceptual mistake inside your loop. Each iteration just writes the array to the stream assuming it's all valid. If you examine BufferedInputStream#read's documentation you'll see it may not read enough data to fill the entire array, and will return the number of bytes it actually read. You should use this number to limit the amount of bytes you're writing:
int len;
while ((len = bis.read(data)) > -1) {
    out.write(data, 0, len);
}
I suggest you close your file once you are done. The BufferedInputStream uses an 8 KB buffer by default, which you are effectively reducing to a much smaller one. A simpler solution is to copy 8 KB at a time and not use the added buffer:
try (InputStream in = new FileInputStream(req)) {
    byte[] data = new byte[8 << 10];
    for (int len; (len = in.read(data)) > -1; )
        out.write(data, 0, len);
} catch (IOException e) {
    out.write("404 Not Found\n".getBytes());
}

Why do getResourceAsStream() and reading a file with FileInputStream return arrays of different lengths?

I want to read files as byte arrays and realised that the number of bytes read varies depending on the method used. Here is the relevant code:
public byte[] readResource() {
    try (InputStream is = getClass().getClassLoader().getResourceAsStream(FILE_NAME)) {
        int available = is.available();
        byte[] result = new byte[available];
        is.read(result, 0, available);
        return result;
    } catch (Exception e) {
        log.error("Failed to load resource '{}'", FILE_NAME, e);
    }
    return new byte[0];
}

public byte[] readFile() {
    File file = new File(FILE_PATH + FILE_NAME);
    try (InputStream is = new FileInputStream(file)) {
        int available = is.available();
        byte[] result = new byte[available];
        is.read(result, 0, available);
        return result;
    } catch (Exception e) {
        log.error("Failed to load file '{}'", FILE_NAME, e);
    }
    return new byte[0];
}
Calling File.length() and reading with the FileInputStream returns the correct length of 21566 bytes for the given test file, while reading the file as a resource returns 21622 bytes.
Does anyone know why I get different results and how to fix it so that readResource() returns the correct result?
Why does getResourceAsStream() and reading file with FileInputStream return arrays of different length?
Because you're misusing the available() method in a way that is specifically warned against in the Javadoc:
"It is never correct to use the return value of this method to allocate a buffer intended to hold all data in this stream."
and
Does anyone know why I get different results and how to fix it so that readResource() returns the correct result?
Read in a loop until end of stream.
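For example (a minimal sketch of the question's method; on Java 9 and later, readAllBytes() does the looping for you):

public byte[] readResource() {
    try (InputStream is = getClass().getClassLoader().getResourceAsStream(FILE_NAME)) {
        // readAllBytes() (Java 9+) reads until end of stream internally
        return is.readAllBytes();
    } catch (Exception e) {
        log.error("Failed to load resource '{}'", FILE_NAME, e);
        return new byte[0];
    }
}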
According to the API docs of InputStream, InputStream.available() does not return the size of the resource - it returns
an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking
To get the size of a resource from a stream, you need to fully read the stream, and count the bytes read.
To read the stream and return the contents as a byte array, you could do something like this:
try (InputStream is = getClass().getClassLoader().getResourceAsStream(FILE_NAME);
     ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
    byte[] buffer = new byte[4096];
    int bytesRead = 0;
    while ((bytesRead = is.read(buffer)) != -1) {
        bos.write(buffer, 0, bytesRead);
    }
    return bos.toByteArray();
}

Error while sending large files through socket

I'm trying to send large files via a socket. The program works fine for small files (such as HTML pages or PDFs), but when I send files over 3-4 MB the output is always corrupted (viewing it with a text editor, I noticed that the last few lines are always missing).
Here's the code of the server:
BufferedInputStream in = null;
FileOutputStream fout = null;
try {
    server = new ServerSocket(port);
    sock = server.accept();
    in = new BufferedInputStream(sock.getInputStream());
    setPerc(0);
    received = 0;
    int incByte = -1;
    fout = new FileOutputStream(path + name, true);
    long size = length;
    do {
        int buffSize;
        if (size >= 4096) {
            buffSize = 4096;
        } else {
            buffSize = 1;
        }
        byte[] o = new byte[buffSize];
        incByte = in.read(o, 0, buffSize);
        fout.write(o);
        received += buffSize;
        setPerc(calcPerc(received, length));
        size -= buffSize;
        //d("BYTE LETTI => " + incByte);
    } while (size > 0);
    server.close();
} catch (IOException e) {
    e("Errore nella ricezione file: " + e);
} finally {
    try {
        fout.flush();
        fout.close();
        in.close();
    } catch (IOException e) {
        e("ERRORE INCOMINGFILE");
    }
}
pr.release(port);
pr.release(port);
And here's the code of the client:
FileInputStream fin = null;
BufferedOutputStream out = null;
try {
    sock = new Socket(host, port);
    fin = new FileInputStream(file);
    out = new BufferedOutputStream(sock.getOutputStream());
    long size = file.length();
    int read = -1;
    do {
        int buffSize = 0;
        if (size >= 4096) {
            buffSize = 4096;
        } else {
            buffSize = (int) size;
        }
        byte[] o = new byte[buffSize];
        for (int i = 0; i < o.length; i++) {
            o[i] = (byte) 0;
        }
        read = fin.read(o, 0, buffSize);
        out.write(o);
        size -= buffSize;
        //d("BYTE LETTI DAL FILE => " + read);
    } while (size > 0);
} catch (UnknownHostException e) {
} catch (IOException e) {
    d("ERRORE NELL'INVIO DEL FILE: " + e);
    e.printStackTrace();
} finally {
    try {
        out.flush();
        out.close();
        fin.close();
    } catch (IOException e) {
        d("Errore nella chiusura dei socket invio");
    }
}
I think it's something related to the buffer size, but I can't figure out what's wrong here.
This is incorrect:
byte[] o = new byte[buffSize];
incByte = in.read(o, 0, buffSize);
fout.write(o);
You are reading up to buffSize bytes and then writing exactly buffSize bytes.
You are doing the same thing at the other end as well.
You may be able to get away with this when reading from a file [1], but when you read from a socket, a read is liable to give you a partially filled buffer, especially if the writing end can't always keep ahead of the reading end because you are hammering the network with a large transfer.
The right way to do it is:
incByte = in.read(o, 0, buffSize);
fout.write(o, 0, incByte);
[1] It has been observed that when you read from a local file, a read call will typically give you all of the bytes that you requested (subject to the file size, etc.). So, if you set buffSize to the length of the file, this code would probably work when reading from a local file. But doing this is a bad idea, because you are relying on behaviour that is not guaranteed by either Java or a typical operating system.
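For instance, the server's receive loop rewritten so the read count is honoured might look like this (a sketch using the question's variable names; error handling omitted):

byte[] o = new byte[4096];
int incByte;
// stop at end of stream, or once the announced length has arrived
while (received < length && (incByte = in.read(o)) != -1) {
    fout.write(o, 0, incByte); // write only what was actually read
    received += incByte;
    setPerc(calcPerc(received, length));
}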
You might have a problem, for example, here:
read = fin.read(o, 0, buffSize);
out.write(o);
Here read gives you the count of bytes you've actually just read.
On the next line you should write out only as many bytes as you've read.
In other words, you cannot expect the size of the file you're reading to be a multiple of your buffer size.
Review your server code too for the same issue.
The correct way to copy streams in Java is as follows:
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
where count is an int, and buffer is a byte[] array of length > 0, typically 8k. You don't need to allocate byte arrays inside the loop, and you don't need a byte array of a specific size. Specifically, it's a complete waste of space to allocate a buffer as large as the file; it only works up to files of Integer.MAX_VALUE bytes, and it doesn't scale.
You do need to save the count returned by 'read()' and use it in the 'write()' method as shown above.
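Put together as a self-contained method (a sketch; the 8 KB buffer size is conventional, any positive size works):

static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[8192];
    int count;
    // read() returns the number of bytes read, or -1 at end of stream
    while ((count = in.read(buffer)) > 0) {
        out.write(buffer, 0, count); // write exactly what was read
    }
}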

java extract zip Unexpected end of ZLIB input stream

I am creating a program that extracts a zip and then inserts the files into a database. Every so often I get the error
java.lang.Exception: java.io.EOFException: Unexpected end of ZLIB input stream
I cannot pinpoint the reason for this, as the extraction code is pretty much the same as all the other code you can find on the web. My code is as follows:
public void extract(String zipName, InputStream content) throws Exception {
    int BUFFER = 2048;
    // create the ZipInputStream
    ZipInputStream zis = new ZipInputStream(content);
    // get the name of the zip
    String containerName = zipName;
    // container for the zip entry
    ZipEntry entry;
    // process each entry
    while ((entry = zis.getNextEntry()) != null) {
        // get the entry file name
        String currentEntry = entry.getName();
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            // establish buffer for writing file
            byte data[] = new byte[BUFFER];
            int currentByte;
            // read and write until last byte is encountered
            while ((currentByte = zis.read(data, 0, BUFFER)) != -1) {
                baos.write(data, 0, currentByte);
            }
            baos.flush(); // flush the buffer
            // this method inserts the file into the database
            insertZipEntry(baos.toByteArray());
            baos.close();
        } catch (Exception e) {
            System.out.println("ERROR WITHIN ZIP " + containerName);
        }
    }
}
This is probably caused by this JVM bug (JDK-6519463).
I previously had about one or two errors per 1000 randomly created documents. I applied the proposed solution (catch the EOFException and do nothing with it) and I have no more errors.
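Applied to the extraction loop above, the workaround might look like this (a sketch; whether silently swallowing the EOFException is acceptable depends on your data):

try {
    while ((currentByte = zis.read(data, 0, BUFFER)) != -1) {
        baos.write(data, 0, currentByte);
    }
} catch (EOFException e) {
    // workaround: treat the premature end of the deflate stream as end of entry
}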
I would say you are occasionally being given truncated Zip files to process. Check upstream.
I had the same exception, and the problem was in the compressing method (not extraction): I did not close the entry with zos.closeEntry() after writing to the output stream. Without that, compressing worked fine, but I got an exception while extracting.
public static byte[] zip(String outputFilename, byte[] output) {
    try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
         ZipOutputStream zos = new ZipOutputStream(baos)) {
        zos.putNextEntry(new ZipEntry(outputFilename));
        zos.write(output, 0, output.length);
        zos.closeEntry(); // this line must be here
        return baos.toByteArray();
    } catch (IOException e) {
        // handle the exception; return an empty array so every path returns a value
        return new byte[0];
    }
}
Never attempt to read more bytes than the entry contains. Call ZipEntry.getSize() to get the actual size of the entry (note that it returns -1 when the size is not known), then use this value to keep track of the number of bytes remaining in the entry while reading from it. See below:
try {
    ...
    int bytesLeft = (int) entry.getSize();
    while (bytesLeft > 0 && (currentByte = zis.read(data, 0, Math.min(BUFFER, bytesLeft))) != -1) {
        ...
    }
    ...
}
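A self-contained version of that idea might look like this (a sketch; it assumes entry.getSize() is known, i.e. not -1, which is not guaranteed for every zip):

private static byte[] readEntry(ZipInputStream zis, ZipEntry entry) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] data = new byte[2048];
    long bytesLeft = entry.getSize(); // -1 if the size is unknown
    int currentByte;
    // never ask for more bytes than remain in this entry
    while (bytesLeft > 0
            && (currentByte = zis.read(data, 0, (int) Math.min(data.length, bytesLeft))) != -1) {
        baos.write(data, 0, currentByte);
        bytesLeft -= currentByte;
    }
    return baos.toByteArray();
}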

How to split file into chunks while still writing into it?

I tried to create byte-array blocks from a file while the process was still using the file for writing. Specifically, I am recording video to a file and would like to create chunks from that same file while recording.
The following method was supposed to read blocks of bytes from file:
private byte[] getBytesFromFile(File file) throws IOException {
    InputStream is = new FileInputStream(file);
    long length = file.length();
    int numRead = 0;
    byte[] bytes = new byte[(int) length - mReadOffset];
    numRead = is.read(bytes, mReadOffset, bytes.length - mReadOffset);
    if (numRead != (bytes.length - mReadOffset)) {
        throw new IOException("Could not completely read file " + file.getName());
    }
    mReadOffset += numRead;
    is.close();
    return bytes;
}
But the problem is that all the array elements are set to 0, and I guess it is because the writing process locks the file.
I would be very thankful if anyone could show another way to create file chunks while writing into the file.
Solved the problem:
private void getBytesFromFile(File file) throws IOException {
    FileInputStream is = new FileInputStream(file); // video recorder stores video to this file
    java.nio.channels.FileChannel fc = is.getChannel();
    java.nio.ByteBuffer bb = java.nio.ByteBuffer.allocate(10000);
    int chunkCount = 0;
    byte[] bytes;
    while (fc.read(bb) >= 0) {
        bb.flip();
        // save this part of the file into a chunk
        bytes = bb.array();
        storeByteArrayToFile(bytes, mRecordingFile + "." + chunkCount); // mRecordingFile is the (String) path to the file
        chunkCount++;
        bb.clear();
    }
}

private void storeByteArrayToFile(byte[] bytesToSave, String path) throws IOException {
    FileOutputStream fOut = new FileOutputStream(path);
    try {
        fOut.write(bytesToSave);
    } catch (Exception ex) {
        Log.e("ERROR", ex.getMessage());
    } finally {
        fOut.close();
    }
}
If it were me, I would have it chunked by the process/thread writing to the file. This is how Log4j seems to do it, at any rate. It should be possible to make an OutputStream which automatically starts writing to a new file every N bytes.
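A minimal sketch of that idea, assuming a hypothetical ChunkedOutputStream (the class and all its names are illustrative, not from Log4j or any library):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical: writes to basePath + "." + chunkIndex, rolling to a new file every maxChunkBytes.
class ChunkedOutputStream extends OutputStream {
    private final String basePath;
    private final int maxChunkBytes;
    private int chunkIndex = 0;
    private int written = 0;
    private OutputStream current;

    ChunkedOutputStream(String basePath, int maxChunkBytes) throws IOException {
        this.basePath = basePath;
        this.maxChunkBytes = maxChunkBytes;
        this.current = new FileOutputStream(basePath + "." + chunkIndex);
    }

    @Override
    public void write(int b) throws IOException {
        if (written >= maxChunkBytes) {
            current.close(); // finish the current chunk
            chunkIndex++;
            written = 0;
            current = new FileOutputStream(basePath + "." + chunkIndex);
        }
        current.write(b);
        written++;
    }

    @Override
    public void close() throws IOException {
        current.close();
    }
}

The recorder would then write to this stream instead of a plain FileOutputStream, so each chunk is already complete the moment the stream rolls over to the next file.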
