HttpUrlConnection and setReadTimeout() method - java

I want to know when and why "java.net.SocketTimeoutException: Read timed out" is raised.
URL obj = new URL(url);
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
con.setConnectTimeout(5000);
con.setReadTimeout(421);
int responseCode = con.getResponseCode();
InputStream is = con.getInputStream();
System.out.println("Inputstream done");
FileOutputStream fos = new FileOutputStream("D:\\tryfile\\file1.csv");
byte[] buffer = new byte[4096]; // declare 4KB buffer
int len;
while ((len = is.read(buffer)) > 0) {
    fos.write(buffer, 0, len);
}
fos.close();
is.close();
Here is the question: I set the read timeout value to 421 and I get a "java.net.SocketTimeoutException: Read timed out" exception at this line:
while ((len = is.read(buffer)) > 0) {
So I get the InputStream successfully, but when I start reading/writing it I get this exception. And I checked that 732 KB of the file had been transferred before the exception.
So I am really confused about this. Please explain exactly how the read timeout works.

It is raised when no data arrives between the inception of the read() call and the expiration of the specified timeout starting at that point. In other words, no data arrived within the timeout period.
So I get the InputStream successfully
Not surprising. It takes zero time and does nothing on the network, as you have already called getResponseCode() (and done nothing with it).
but when I start reading/writing it I get this exception.
So the data was slow arriving.
And I checked that 732 KB of the file had been transferred before the exception.
So the end of the data was slow arriving.
NB 421ms is far too short for a read timeout. It should be tens of seconds.
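For illustration, here is a minimal sketch of the same download with a more realistic read timeout and explicit handling of the timeout exception; the URL and output path are placeholders, not from the original question:
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/file1.csv"); // placeholder URL
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setConnectTimeout(5000);  // 5 s to establish the connection
        con.setReadTimeout(30000);    // 30 s of silence allowed between packets; not a whole-transfer limit
        try (InputStream is = con.getInputStream();
             FileOutputStream fos = new FileOutputStream("file1.csv")) { // placeholder path
            byte[] buffer = new byte[4096];
            int len;
            while ((len = is.read(buffer)) > 0) {
                fos.write(buffer, 0, len);
            }
        } catch (SocketTimeoutException e) {
            // Thrown by read() when no data arrived within the read-timeout window.
            System.err.println("Read timed out: " + e.getMessage());
        } finally {
            con.disconnect();
        }
    }
}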

Related

Download large video file getting corrupted

In the server-side code, I set the buffer size and the content length to File.length(), then open the file with a FileInputStream.
I then fetch the output stream via HttpResponse.getOutputStream() and write out the bytes read from the FileInputStream.
I am using Apache Tomcat 7.0.52 and Java 7.
On the client (FileDownloader.java):
URL url = new URL("myFileURL");
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setDoInput(true);
con.setConnectTimeout(10000);
con.setReadTimeout(10000);
con.setRequestMethod("GET");
con.setUseCaches(false);
con.setRequestProperty("User-Agent", "Mozilla/5.0");
con.connect();
FileOutputStream fos = new FileOutputStream("filename");
if (con.getResponseCode() == 200) {
    InputStream is = con.getInputStream();
    int readVal;
    while ((readVal = is.read()) != -1) fos.write(readVal);
}
fos.flush();
fos.close();
So the above code fails to download the large file correctly.
On the client I am using Java 7.
Can you try this?
FileOutputStream outputStream = new FileOutputStream(fileName);
int bytesRead;
byte[] buffer = new byte[1024];
while ((bytesRead = inputStream.read(buffer)) != -1) {
    outputStream.write(buffer, 0, bytesRead);
}
Quoting from https://stackoverflow.com/a/45453874/4121845
Because you only want to write data that you actually read. Consider the case where the input consists of N buffers plus one byte. Without the len parameter you would write (N+1)*1024 bytes instead of N*1024+1 bytes. Consider also the case of reading from a socket, or indeed the general case of reading: the actual contract of InputStream.read() is that it transfers at least one byte, not that it fills the buffer. Often it can't, for one reason or another.
    fos.flush();
} finally {
    fos.close();
    con.disconnect();
}
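For clarity, here is one way the fragments above might fit together, assuming con is the HttpURLConnection from the question and the file name is a placeholder; this is a sketch, not the original answer's complete code:
FileOutputStream fos = new FileOutputStream("filename"); // placeholder file name
try {
    InputStream is = con.getInputStream();
    byte[] buffer = new byte[1024];
    int bytesRead;
    while ((bytesRead = is.read(buffer)) != -1) {
        fos.write(buffer, 0, bytesRead); // write only the bytes actually read
    }
    fos.flush();
} finally {
    fos.close();
    con.disconnect(); // an HttpURLConnection is released with disconnect(), not close()
}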

Effect of HttpUrlConnection.setChunkedStreamingMode

I don't have a good understanding of HttpURLConnection.setChunkedStreamingMode; what is the effect of this mode?
I have following sample code:
HttpURLConnection conn = getHttpURLConnection(_url);
conn.setChunkedStreamingMode(4096); // 4 KB chunks
conn.setConnectTimeout(3000);
conn.setDoInput(true);
conn.setDoOutput(true);
conn.setRequestMethod("POST");
OutputStream out = conn.getOutputStream();
byte[] buffer = new byte[1024 * 10]; // 10 KB buffer
FileInputStream in = new FileInputStream(file); // write the content of the file to the server
int len;
while ((len = in.read(buffer)) != -1) {
    out.write(buffer, 0, len);
}
out.flush();
in.close();
Say the file size is 101 KB and I set the chunk size to 4096.
Will HttpURLConnection send 4096 bytes to the server on each write, and 1 KB the last time?
Note that I have used a 10 KB buffer to write to the output stream. Does it matter that the chunk size and the buffer size are not the same?
If I disable chunked streaming mode in my code, what is the effect compared to setting it to 4096?
Will HttpURLConnection send 4096 bytes to the server on each write, and 1 KB the last time?
Yes.
Note that I have used a 10 KB buffer to write to the output stream. Does it matter that the chunk size and the buffer size are not the same?
No.
If I disable chunked streaming mode in my code, what is the effect compared to setting it to 4096?
The effect is that the entire output is buffered until you close the output stream, so that the Content-Length header can be computed and sent first. That adds a lot of latency and memory use, and is not recommended for large files.
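To illustrate the difference, here is a rough sketch of the two streaming modes that avoid buffering the whole body; the URL and file are placeholders. setFixedLengthStreamingMode is the alternative to use when the body length is known up front, so Content-Length can still be sent without buffering:
import java.io.File;
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamingModeDemo {
    public static void main(String[] args) throws Exception {
        File file = new File("upload.bin"); // placeholder file
        URL url = new URL("http://example.com/upload"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");

        // Option 1: chunked transfer encoding; no Content-Length header,
        // the body is sent to the server in chunks of roughly this size.
        conn.setChunkedStreamingMode(4096);

        // Option 2 (use instead of option 1 when the length is known):
        // Content-Length is sent and the body is still streamed, not buffered.
        // conn.setFixedLengthStreamingMode(file.length());

        try (OutputStream out = conn.getOutputStream();
             FileInputStream in = new FileInputStream(file)) {
            byte[] buffer = new byte[8192];
            int len;
            while ((len = in.read(buffer)) != -1) {
                out.write(buffer, 0, len);
            }
        }
        System.out.println("Response code: " + conn.getResponseCode());
    }
}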

InputStream reader

I'm currently trying to read an image file from the server, but I either get incomplete data or
Exception in thread "main" java.lang.NegativeArraySizeException
Does this have something to do with the buffer size? I have tried using a static size instead of the content length. Please kindly advise.
URL myURL = new URL(url);
HttpURLConnection connection = (HttpURLConnection) myURL.openConnection();
connection.setRequestMethod("GET");
status = connection.getResponseCode();
if (status == 200)
{
    int size = connection.getContentLength() + 1024;
    byte[] bytes = new byte[size];
    InputStream input = new ByteArrayInputStream(bytes);
    FileOutputStream out = new FileOutputStream(file);
    input = connection.getInputStream();
    int data = input.read(bytes);
    while (data != -1) {
        out.write(bytes);
        data = input.read(bytes);
    }
    out.close();
    input.close();
}
Let's examine the code:
int size = connection.getContentLength() + 1024;
byte[] bytes = new byte[size];
Why do you add 1024 bytes to the size? What's the point? The buffer size should be large enough to avoid too many reads, but small enough to avoid consuming too much memory. Set it to 4096, for example.
InputStream input = new ByteArrayInputStream(bytes);
FileOutputStream out = new FileOutputStream(file);
input = connection.getInputStream();
Why do you create a ByteArrayInputStream, and then forget about it completely? You don't need a ByteArrayInputStream, since you don't read from a byte array, but from the connection's input stream.
int data = input.read(bytes);
This reads bytes from the input. The max number of bytes read is the length of the byte array. The actual number of bytes read is returned and stored in data.
while (data != -1) {
out.write(bytes);
data = input.read(bytes);
}
So you have read data bytes, but you don't write only the first data bytes of the array; you write the whole array of bytes. That is wrong. Suppose your array is of size 4096 and data is 400: instead of writing the 400 bytes that have been read, you write those 400 bytes plus the remaining 3696 bytes of the array, which could be zeros or values left over from a previous read. It should be
out.write(bytes, 0, data);
Finally:
out.close();
input.close();
If any exception occurs before that point, those two streams will never be closed. Do that a few times, and your whole OS won't have any file descriptors available anymore. Use the try-with-resources statement to be sure your streams are closed, no matter what happens.
This code can help you:
input = connection.getInputStream();
byte[] buffer = new byte[4096];
int n = -1;
OutputStream output = new FileOutputStream(file);
while ((n = input.read(buffer)) != -1)
{
    if (n > 0)
    {
        output.write(buffer, 0, n);
    }
}
output.close();
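Since the paragraph above recommends try-with-resources but the snippet does not use it, here is a minimal sketch of the same copy written that way (connection and file are assumed to be the same variables as in the question):
try (InputStream input = connection.getInputStream();
     OutputStream output = new FileOutputStream(file)) {
    // Both streams are closed automatically, even if read() or write() throws.
    byte[] buffer = new byte[4096];
    int n;
    while ((n = input.read(buffer)) != -1) {
        output.write(buffer, 0, n);
    }
}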

Why initialize byte array to 1024 while reading or writing a file?

In Java input/output stream code there is almost always a byte array of size 1024, like below:
URL url = new URL(src);
URLConnection connection = url.openConnection();
InputStream is = connection.getInputStream();
OutputStream os = new FileOutputStream("D:\\images" + "\\" + getName(src) + getExtension(src));
byte[] byteArray = new byte[1024];
int len = 0;
while ((len = is.read(byteArray)) != -1) {
    os.write(byteArray, 0, len);
}
is.close();
os.close();
Why initialize this array to 1024?
That is called buffering: you overwrite the contents of the buffer each time you go through the loop.
It simply reads the file in chunks instead of allocating memory for the whole file content at once.
The reason for doing this is that you would be a clear victim of an OutOfMemoryError if the file were too large.
As for the specific question: it need not be 1024; you could even use 500. But 1024 (or a multiple of it) is common usage.
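To make the memory argument concrete, here is a small sketch (the path is a placeholder): reading the whole file at once needs a single array as large as the file, while the loop above never holds more than the 1024-byte buffer in memory.
import java.nio.file.Files;
import java.nio.file.Paths;

public class WholeFileVsBuffer {
    public static void main(String[] args) throws Exception {
        // Whole-file approach: allocates one byte[] as large as the entire file,
        // so a very large file can end in an OutOfMemoryError.
        byte[] whole = Files.readAllBytes(Paths.get("D:\\images\\big.jpg")); // placeholder path
        System.out.println("Read " + whole.length + " bytes in a single allocation");
        // The buffered loop shown above, by contrast, uses a fixed 1024-byte
        // buffer no matter how large the file is.
    }
}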

Can a byte stream be written straight to SDCard from HTTP bypassing the HEAP?

I'm downloading video files that are larger than the memory space Android apps are given. When they're on the device, the MediaPlayer handles them quite nicely, so their overall size isn't the issue.
The problem is that if they exceed the relatively small number of megabytes that a byte[] can hold, I get the dreaded OutOfMemoryError as I download them.
My intended solution is to just write the incoming byte stream straight to the SD card. However, I'm using the Apache Commons HttpClient library, and the way I'm doing it tries to read in the entire video before it hands it back to me.
My code looks like this:
HttpClient client = new HttpClient();
PostMethod filePost = new PostMethod(URL_PATH);
client.setConnectionTimeout(timeout);
byte[] ret;
try {
    if (nvpArray != null)
        filePost.setRequestBody(nvpArray);
} catch (Exception e) {
    Log.d(TAG, "download failed: " + e.toString());
}
try {
    responseCode = client.executeMethod(filePost);
    Log.d(TAG, "statusCode>>>" + responseCode);
    ret = filePost.getResponseBody();
    ....
I'm curious what another approach would be to get the byte stream one byte at a time and just write it out to disk as it comes.
You should be able to use the getResponseBodyAsStream() method of your PostMethod object and stream it to a file. Here's an untested example....
InputStream inputStream = filePost.getResponseBodyAsStream();
FileOutputStream outputStream = new FileOutputStream(destination);
// Per your question the buffer is set to 1 byte, but you should be able to use
// a larger buffer.
byte[] buffer = new byte[1];
int bytesRead;
while ((bytesRead = inputStream.read(buffer)) != -1)
{
    outputStream.write(buffer, 0, bytesRead);
}
outputStream.close();
inputStream.close();
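One extra detail, not part of the original answer and assuming Commons HttpClient 3.x: once the response stream has been fully read, the method's connection should be released so it can be reused, for example:
try {
    int status = client.executeMethod(filePost);
    InputStream inputStream = filePost.getResponseBodyAsStream();
    // ... stream to the FileOutputStream as shown above ...
} finally {
    filePost.releaseConnection(); // returns the connection for reuse
}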
