I don't have a good understanding of HttpURLConnection.setChunkedStreamingMode. What is the effect of this mode?
I have the following sample code:
HttpURLConnection conn = getHttpURLConnection(_url);
conn.setChunkedStreamingMode(4096); // 4 KB chunks
conn.setConnectTimeout(3000);
conn.setDoInput(true);
conn.setDoOutput(true);
conn.setRequestMethod("POST");
OutputStream out = conn.getOutputStream();
byte[] buffer = new byte[1024 * 10]; // 10 KB buffer
FileInputStream in = new FileInputStream(file); // write the content of the file to the server
int len;
while ((len = in.read(buffer)) != -1) {
    out.write(buffer, 0, len);
}
out.flush();
out.close(); // close the stream so the request body is actually completed
in.close();
Say the file size is 101 KB and I set the chunk size to 4096.
Will HttpURLConnection send 4096 bytes to the server on each write, and 1 KB the last time?
Note that I have used a 10 KB buffer to write to the output stream. Does it matter that the chunk size and the buffer size are not the same?
If I disable chunked streaming mode in my code, what is the effect compared to setting it to 4096?
Will HttpURLConnection send 4096 bytes to the server on each write, and 1 KB the last time?
Yes.
Note that I have used a 10 KB buffer to write to the output stream. Does it matter that the chunk size and the buffer size are not the same?
No.
If I disable chunked streaming mode in my code, what is the effect compared to setting it to 4096?
The effect is that the entire output is buffered until you close, so that the Content-Length header can be computed and sent first. That adds a lot of latency and memory use, and it is not recommended for large files.
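If the content length is known up front, as it is for a file, another way to avoid that buffering is fixed-length streaming mode. A minimal sketch, reusing the question's own getHttpURLConnection helper and file variable:
HttpURLConnection conn = getHttpURLConnection(_url);
conn.setDoOutput(true);
conn.setRequestMethod("POST");
// The exact body size is promised up front, so nothing is buffered
// and a Content-Length header is still sent.
conn.setFixedLengthStreamingMode(file.length()); // long overload needs Java 7+
FileInputStream in = new FileInputStream(file);
OutputStream out = conn.getOutputStream();
byte[] buffer = new byte[8192];
int len;
while ((len = in.read(buffer)) != -1) {
    out.write(buffer, 0, len);
}
out.close();
in.close();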
Related
In the server-side code, I set the buffer size and content length from File.length(), then opened the file with a FileInputStream.
Later I fetch the output stream via HttpResponse.getOutputStream() and write out the bytes read from the FileInputStream.
I am using Apache Tomcat 7.0.52 and Java 7.
On the client:
FileDownloader.java
URL url = new URL("myFileURL");
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setDoInput(true);
con.setConnectTimeout(10000);
con.setReadTimeout(10000);
con.setRequestMethod("GET");
con.setUseCaches(false);
con.setRequestProperty("User-Agent", "Mozilla/5.0");
con.connect();
FileOutputStream fos = new FileOutputStream("filename");
if (con.getResponseCode() == 200) {
    InputStream is = con.getInputStream();
    int readVal;
    while ((readVal = is.read()) != -1) fos.write(readVal); // single-byte copy
}
fos.flush();
fos.close();
The above code fails to download large files.
On the client, using Java 7.
Can you try this:
FileOutputStream outputStream = new FileOutputStream(fileName);
int bytesRead;
byte[] buffer = new byte[1024];
while ((bytesRead = inputStream.read(buffer)) != -1) {
    outputStream.write(buffer, 0, bytesRead);
}
Quoting from https://stackoverflow.com/a/45453874/4121845:
Because you only want to write data that you actually read. Consider the case where the input consists of N buffers plus one byte. Without the len parameter you would write (N+1)*1024 bytes instead of N*1024+1 bytes. Consider also the case of reading from a socket, or indeed the general case of reading: the actual contract of InputStream.read() is that it transfers at least one byte, not that it fills the buffer. Often it can't, for one reason or another.
Then flush and close everything in a finally block (note that HttpURLConnection is released with disconnect(), not close()):
    fos.flush();
} finally {
    fos.close();
    con.disconnect();
}
I want to know when and why java.net.SocketTimeoutException: Read timed out is raised.
URL obj = new URL(url);
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
con.setConnectTimeout(5000);
con.setReadTimeout(421);
int responseCode = con.getResponseCode();
InputStream is = con.getInputStream();
System.out.println("Inputstream done");
FileOutputStream fos = new FileOutputStream("D:\\tryfile\\file1.csv");
byte[] buffer = new byte[4096]; // 4 KB buffer
int len;
while ((len = is.read(buffer)) > 0) {
    fos.write(buffer, 0, len);
}
fos.close();
is.close();
Here is the question: I set the read timeout to 421 ms, and I get a java.net.SocketTimeoutException: Read timed out at this line:
while ((len = is.read(buffer)) > 0) {
So I get the input stream successfully, but when I start reading/writing it I get this exception. And I checked that 732 KB of the file was transferred before the exception.
So I am really confused about this. Please explain exactly how the read timeout works.
It is raised when no data arrives between the inception of the read() call and the expiration of the specified timeout starting at that point. In other words, no data arrived within the timeout period.
So I get the input stream successfully
Not surprising. It takes zero time and does nothing on the network, as you have already called getResponseCode() (and done nothing with it).
but when I start reading/writing it I get this exception.
So the data was slow arriving.
And I checked that 732 KB of the file was transferred before the exception.
So the end of the data was slow arriving.
NB: 421 ms is far too short for a read timeout. It should be tens of seconds.
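A minimal sketch with a saner timeout and the exception handled explicitly (the 30-second value is illustrative, not prescriptive):
HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
con.setConnectTimeout(5000);  // fail fast if the server is unreachable
con.setReadTimeout(30000);    // allow up to 30 s of silence between packets
try (InputStream is = con.getInputStream();
     FileOutputStream fos = new FileOutputStream("D:\\tryfile\\file1.csv")) {
    byte[] buffer = new byte[4096];
    int len;
    while ((len = is.read(buffer)) != -1) {
        fos.write(buffer, 0, len);
    }
} catch (java.net.SocketTimeoutException e) {
    // Raised whenever a single read() waits longer than the read timeout
    // for the next packet, no matter how much data has already arrived.
    System.err.println("Read timed out: " + e.getMessage());
}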
In Java input/output stream code there is almost always a byte array of size 1024, just like below:
URL url = new URL(src);
URLConnection connection = url.openConnection();
InputStream is = connection.getInputStream();
OutputStream os = new FileOutputStream("D:\\images" + "\\" + getName(src) + getExtension(src));
byte[] byteArray = new byte[1024];
int len = 0;
while ((len = is.read(byteArray)) != -1) {
    os.write(byteArray, 0, len);
}
is.close();
os.close();
Why initialize this array to 1024?
That is called buffering: you overwrite the contents of the buffer each time you go through the loop.
It simply reads the file in chunks, instead of allocating memory for the whole file content at once.
The reason for this is that you risk an OutOfMemoryError if the file is too large.
As for the specific question: it need not be 1024; you could even use 500. But 1024 is a common choice.
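To make the point concrete, the same loop works with any positive buffer size; a sketch with an 8 KB buffer (a larger buffer just means fewer read() calls):
byte[] byteArray = new byte[8192]; // the size is a tuning knob, not a requirement
int len;
while ((len = is.read(byteArray)) != -1) {
    os.write(byteArray, 0, len);
}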
In the following code I'm reading a file into a small buffer (len = CHUNK_SIZE), and I simply want to write this buffer to the output stream. But even though I am flushing after every chunk, I get a heap overflow (OutOfMemoryError). If I stream small files, everything is fine. But shouldn't flush() also clear all data out of the stream?
URL url = new URL(built);
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setDoOutput(true);
con.setDoInput(true);
con.setUseCaches(false);
//con.setChunkedStreamingMode(CHUNK_SIZE);
con.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + Boundary);
FileInputStream is = new FileInputStream(m_FileList.get(i));
DataOutputStream os = new DataOutputStream(con.getOutputStream());
// .....
while ((read = is.read(temp, 0, CHUNK_SIZE)) != -1) {
    bytesTotal += read;
    os.write(temp, 0, read); // heap overflow here if the file is too big
    os.flush();
}
DataOutputStream doesn't buffer at all, but HttpURLConnection's output stream buffers everything by default, so it can set the Content-Length header. Use chunked transfer mode to prevent that.
You don't actually need the DataOutputStream at all here: just write to the connection's output stream.
Don't flush() inside the loop either.
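A minimal sketch of that fix, re-enabling the commented-out chunked mode and writing straight to the connection's stream (CHUNK_SIZE, Boundary, and m_FileList are the question's own variables):
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setDoOutput(true);
con.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + Boundary);
// Send the body in chunks as it is written, instead of buffering the whole
// request just to compute a Content-Length header.
con.setChunkedStreamingMode(CHUNK_SIZE);
FileInputStream is = new FileInputStream(m_FileList.get(i));
OutputStream os = con.getOutputStream(); // no DataOutputStream needed
byte[] temp = new byte[CHUNK_SIZE];
int read;
while ((read = is.read(temp)) != -1) {
    os.write(temp, 0, read); // no flush() inside the loop
}
os.close();
is.close();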
I'm trying to send an image file from a server to a client via a socket. The socket was previously used to send some strings from the server to the client (with buffered input/output streams).
The trouble is the image file can't be received properly, with "Premature end of JPEG file" error.
The server first sends the file size to the client, the client then creates a byte[] of that size, and starts to receive the file.
Here is the code:
Server:
DataOutputStream dos = new DataOutputStream(socket.getOutputStream());
//Send file size
dos.writeInt((int) file.length());
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
byte[] fileBytes = new byte[bis.available()];
bis.read(fileBytes);
bis.close();
BufferedOutputStream bos = new BufferedOutputStream(socket.getOutputStream());
bos.write(fileBytes);
bos.flush();
Client:
DataInputStream dis = new DataInputStream(socket.getInputStream());
//Receive file size
int size = dis.readInt();
BufferedInputStream bis = new BufferedInputStream(socket.getInputStream());
byte[] fileBytes = new byte[size];
bis.read(fileBytes, 0, fileBytes.length);
More interestingly, if I let the server sleep for about 2 seconds between sending the file size and writing the byte[], the image is received properly. I wonder if there's some kind of race condition between the server and the client.
The error is most likely here:
byte[] fileBytes = new byte[bis.available()];
The method available does not return the size of the file. It might return only the size of the input buffer, which is smaller than the size of the file. See the API documentation of the method in BufferedInputStream.
Also, read in the line below is not guaranteed to read the whole file in one go. It returns the number of bytes that were actually read, which can be less than what you asked for. And in the client code, you are using read in the same way, without actually checking if it read all the data.
Please check out Commons IO's FileUtils and IOUtils; they make this kind of work a lot easier: http://commons.apache.org/io/
The correct way to copy a stream in Java is as follows:
int count;
byte[] buffer = new byte[8192]; // more if you like, but over a network it won't make much difference
while ((count = in.read(buffer)) > 0) {
    out.write(buffer, 0, count);
}
Your code fails to logically match this at several points.
Also available() is not a valid way to determine either a file size or the size of an incoming network transmission - see the Javadoc. It has few if any correct uses and these aren't two of them.
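Concretely, since the client already knows the exact size, one hedged fix is DataInputStream.readFully, which loops internally until the whole buffer is filled, paired with a server-side copy loop that ignores available() entirely:
Server:
DataOutputStream dos = new DataOutputStream(socket.getOutputStream());
dos.writeInt((int) file.length()); // the real size, not available()
FileInputStream fis = new FileInputStream(file);
byte[] buffer = new byte[8192];
int count;
while ((count = fis.read(buffer)) > 0) {
    dos.write(buffer, 0, count);
}
dos.flush();
fis.close();
Client:
DataInputStream dis = new DataInputStream(socket.getInputStream());
int size = dis.readInt(); // file size sent by the server
byte[] fileBytes = new byte[size];
// readFully() keeps reading until all 'size' bytes have arrived,
// or throws EOFException if the stream ends early.
dis.readFully(fileBytes);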