In my Android program, I have some code that downloads a file. This works fine, but since a cell phone can lose its connection at any time, I need to change it so it reconnects and resumes the download when you are halfway through and somebody calls, you lose cell reception, etc. I cannot figure out how to detect that the InputStream has stopped working. See the code below:
InputStream in = c.getInputStream();
byte[] buffer = new byte[8024];
int len1 = 0;
while ((len1 = in.read(buffer)) > 0) {
    Log("-" + len1 + "- Downloaded.");
    f.write(buffer, 0, len1);
    Thread.sleep(50);
}
When I lose the internet connection, my log shows:
Log: -8024- Downloaded.
Log: -8024- Downloaded.
Log: -8024- Downloaded.
Log: -8024- Downloaded.
Log: -6024- Downloaded. (some lower number)
And then my program just hangs on the while ((len1 = ... line. I need to make it so that when the internet gets disconnected, I wait for it to be connected again and then resume the download.
Take a look here: http://developer.android.com/reference/java/nio/channels/SocketChannel.html
EDIT (based on comment):
http://www.jguru.com/faq/view.jsp?EID=72378
So, thoughts based on the above: you might put the reading in a thread and periodically check to see if the thread has stopped reading data (probably by updating a shared variable). If it has, kill the connection and the thread and deal with it however you need to.
Another alternative is to not use HttpURLConnection and deal with the bits you need yourself.
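For what it's worth, here is a minimal sketch (my own, not the original poster's code) of one way to combine those ideas: a read timeout so read() fails instead of hanging, plus an HTTP Range request to resume from the bytes already written. The URL, timeouts, and buffer size are placeholder choices, and it assumes the server honors Range requests (hence the check for a 206 response); the try-with-resources form needs Java 7 / a recent Android API level.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResumingDownloader {

    // Downloads 'urlString' into 'dest', resuming from dest's current length after a dropped connection.
    public static void download(String urlString, File dest) throws InterruptedException, IOException {
        URL url = new URL(urlString);   // fail fast on a bad URL instead of retrying it forever
        while (true) {
            long have = dest.exists() ? dest.length() : 0;
            try {
                HttpURLConnection c = (HttpURLConnection) url.openConnection();
                c.setConnectTimeout(15000);
                c.setReadTimeout(15000);                     // read() now fails instead of hanging forever
                if (have > 0) {
                    c.setRequestProperty("Range", "bytes=" + have + "-");
                }
                boolean resuming = have > 0
                        && c.getResponseCode() == HttpURLConnection.HTTP_PARTIAL;
                try (InputStream in = c.getInputStream();
                     OutputStream out = new FileOutputStream(dest, resuming)) {  // append only on a real resume
                    byte[] buffer = new byte[8192];
                    int len;
                    while ((len = in.read(buffer)) > 0) {
                        out.write(buffer, 0, len);
                    }
                }
                return;                                      // finished normally
            } catch (IOException e) {
                // Connection dropped or timed out; wait a bit, then retry from where we left off.
                Thread.sleep(5000);
            }
        }
    }
}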
I have this code (I used it in another old project of mine, and it worked wonderfully):
#include <winsock2.h> // note: WSAStartup() is assumed to have been called elsewhere

SOCKET Connect(char *host, int port){
    struct sockaddr_in sin = {0};
    struct hostent *entry = 0;
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if(s == INVALID_SOCKET){
        return INVALID_SOCKET;
    }
    entry = gethostbyname(host);
    if(entry == 0){
        closesocket(s);
        return INVALID_SOCKET;
    }
    sin.sin_addr = *((LPIN_ADDR)*entry->h_addr_list);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    // The process becomes deadlocked after this line
    if( connect(s,(const LPSOCKADDR)&sin,sizeof(SOCKADDR)) == SOCKET_ERROR){
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;
}
I started this morning working on a Delphi project using TTcpClient and Indy's TIdTcpClient wrappers, and I noticed the process did not make any connections; rather, it just hung after calling connect. I then switched to C/C++ and tried this code, which does the same thing. After it hangs, there's no way to kill it (except when it's being debugged, in which case I had to exit the debugger). Task Manager and Process Explorer couldn't kill it either.
There are no threads or loops that might cause it to hang; it's just this code and another function that writes to the socket after it connects.
When debugging with Visual Studio, after some time there's a message (below).
Even Wireshark doesn't show anything at all. I restarted my computer and still have the same problem.
So has anyone ever had this problem before?
Compilers used:
Visual Studio 2010
Pelles-C
Delphi 7
OS : Windows 7 64 bit, Ultimate
Winsock Version: 2.2
Update:
So I thought I would get away from it and switched to Java, only to run into the same problem after a couple of tries. What the hell is wrong here? The Java version takes around 2 minutes to connect, even on localhost. This simple code takes ~2 minutes, during which java.exe also can't be killed.
long startTime = System.currentTimeMillis(), endTime;
Socket clientSock = new Socket("localhost",80); // running Apache on localhost
endTime = System.currentTimeMillis();
Log("Connection time " + (endTime - startTime) + " ms");
clientSock.close();
run:
Connection time 125088 ms
As for Java, I did some searches and this problem was a bug in version 1 of the JDK, but the change log showed it had been patched. Then again, this happens in the underlying Winsock library. Why? This program connects instantly, and it also uses Winsock: http://flatassembler.net/examples/quetannon.zip
So now I have to rewrite 976 lines of Java in assembly just because of this? Help me out here, people.
Since you are encountering the same problem in multiple wrappers that all ultimately delegate to Winsock, it's safe to assume that this is an OS issue, not a coding issue. Something on your system has hosed your Winsock installation, or the OS is having networking problems in general, especially since a simple OS reboot did not clear the issue. Try using Windows' command-line netsh tool to reset both the TCP and Winsock subsystems (netsh int ip reset and netsh winsock reset), the command-line ipconfig tool to flush the DNS cache (ipconfig /flushdns), reboot, and see if the problem continues.
On the coding side, you should implement a timeout on the connect() to avoid further deadlocks. There are two ways to do that:
Put the socket into non-blocking mode and then call select() if connect() returns a WSAEWOULDBLOCK error. If select() times out, close the socket.
Leave the socket in blocking mode and use a separate thread to manage the timeout. Call connect() in the thread, or run your timeout logic in the thread (it does not really matter), but if the timeout elapses while connect() is still running, you can close the socket, aborting connect(). This is the approach that TIdTCPClient uses.
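Since the update shows the same hang from Java, here is a minimal sketch (my own, not part of the answer above) of roughly what option 1 translates to with Java NIO: put the channel in non-blocking mode, start the connect, and use a Selector as the select() call with a timeout. The host, port, and timeout value are placeholders.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TimedConnect {

    // Connects with an explicit timeout instead of blocking indefinitely inside connect().
    public static SocketChannel connect(String host, int port, long timeoutMs) throws IOException {
        SocketChannel channel = SocketChannel.open();
        Selector selector = Selector.open();
        try {
            channel.configureBlocking(false);                        // non-blocking, as in option 1 above
            boolean connected = channel.connect(new InetSocketAddress(host, port));
            if (!connected) {
                channel.register(selector, SelectionKey.OP_CONNECT);
                // The Java equivalent of select() with a timeout: 0 ready keys means it elapsed.
                if (selector.select(timeoutMs) == 0 || !channel.finishConnect()) {
                    throw new SocketTimeoutException("connect to " + host + ":" + port
                            + " timed out after " + timeoutMs + " ms");
                }
            }
        } catch (IOException e) {
            channel.close();
            throw e;
        } finally {
            selector.close();                                        // deregisters the channel again
        }
        channel.configureBlocking(true);                             // back to plain blocking reads/writes
        return channel;
    }
}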
OK, for the Java part at least, I solved it by using the following code, based on the answer here: Java Socket creation takes more time.
So basically the default timeout value is (possibly) huge. What I did was set a 3-second timeout; once the timeout exception is thrown, the next call works instantly.
private static final int CONNECT_TIMEOUT = 3000; // 3 seconds

private static Socket AttemptConnection(String host, int port) {
    Socket temp;
    try {
        temp = new Socket();
        temp.connect(new InetSocketAddress(host, port), CONNECT_TIMEOUT);
        return temp;
    } catch (Exception ex) {
        temp = null;
        lastException = ex.getMessage();
        return temp;
    }
}
And somewhere in your code (at least in my app)
while ((clientSock = AttemptConnection("localhost", 80)) == null) {
    Log("Attempting connection. Last exception: " + lastException);
    try { Thread.sleep(2500); } catch (Exception ex) {} /* This is necessary in my application */
}
So, looking at this, I think the fix for all the socket implementations (Java, Delphi, etc.) is to set a small timeout value and then connect again.
EDIT:
The root of the problem was found: I have a HIPS program (COMODO Firewall) running on my laptop. If COMODO's cmdagent.exe is active, it shows me an alert for an outgoing connection, which I can accept or deny. If not, it silently denies the connection, and something then becomes deadlocked at a low level. I was worried my PC was messed up.
The piece of code below downloads a file from some URL and saves it to a local file. Piece of cake. What could possibly be wrong here?
protected long download(ProgressMonitor monitor) throws Exception {
    long size = 0;
    DataInputStream dis = new DataInputStream(is);
    int read = 0;
    byte[] chunk = new byte[chunkSize];
    while ((read = dis.read(chunk)) != -1) {
        os.write(chunk, 0, read);
        size += read;
        if (monitor != null)
            monitor.worked(read);
    }
    chunk = null;
    dis.close();
    os.flush();
    os.close();
    return size;
}
The reason I am posting a question here is that it works 99.999% of the time but doesn't work as expected whenever there is an antivirus or some other protection software installed on the computer running this code. I am blindly pointing a finger that way because whenever I stop (or disable) it, the code works perfectly again. The end result of such interference is that the MD5 of the downloaded file doesn't match the expected one, and a whole new saga begins.
So, the question is: is it really possible that some smart "protection" software would alter the actual stream coming from the URL without me knowing about it? And if yes, how do you deal with this? (Verified with Kaspersky and Norton products.)
EDIT-1:
Apparently I've got a handle on the problem, and it has nothing to do with antiviruses. The download takes place from an FTP server (FileZilla in particular), and we use Apache Commons FTP on the client side. What I did was go to the FTP server and terminate the connection (kick it out) in the middle of the download. I expected that is.read(..) would throw an IOException on the client side, but this never happened. Instead, is.read(..) returned -1, meaning that there is no more data coming from the stream. This is definitely unexpected and explains why I sometimes get partial files. It doesn't explain, however, why the data sometimes gets altered as well.
Yeah, this happens to me all the time. In my case it's caused by transparent HTTP proxying by Websense on my corporate network. The worst problems are caused by the block page being returned with 200 OK.
Do you get the same or similar corruption every time? E.g., do you get some HTML explaining why the request was blocked? The best you can probably do is compare the first few bytes of the downloaded data to some text in the block page, and throw an exception in this case.
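If you do end up needing that check, here is a rough sketch of what it could look like; the marker strings and buffer size are guesses on my part, and PushbackInputStream lets you put the sniffed bytes back so the normal download code is unaffected.

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

public final class BlockPageSniffer {

    // Peeks at the first bytes of the stream and fails fast if they look like an HTML block page.
    public static InputStream rejectBlockPages(InputStream raw) throws IOException {
        PushbackInputStream in = new PushbackInputStream(raw, 512);
        byte[] head = new byte[512];
        int n = in.read(head);
        if (n > 0) {
            String start = new String(head, 0, n, "ISO-8859-1").toLowerCase();
            if (start.contains("<html") || start.contains("access denied")) {  // marker text is a guess
                throw new IOException("Download looks like a proxy block page, not the real file");
            }
            in.unread(head, 0, n);  // put the peeked bytes back for the real reader
        }
        return in;
    }
}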
Edit: based on your update, have you got the FTP client set to image/binary mode?
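Assuming the client is Apache Commons Net's FTPClient (which is usually what "apache commons ftp" refers to), a rough sketch of both of those checks might look like this: force binary mode so line-ending translation can't alter the bytes, and don't trust read() returning -1 on its own, but confirm the transfer with completePendingCommand(). The paths and buffer size are placeholders.

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public final class FtpDownload {

    // Downloads 'remotePath' to 'localPath', failing loudly on truncated transfers.
    public static void download(FTPClient ftp, String remotePath, String localPath) throws IOException {
        ftp.setFileType(FTP.BINARY_FILE_TYPE);   // image/binary mode: no CR/LF rewriting of the data
        try (InputStream in = ftp.retrieveFileStream(remotePath);
             OutputStream out = new FileOutputStream(localPath)) {
            if (in == null) {
                throw new IOException("FTP refused the transfer: " + ftp.getReplyString());
            }
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        // read() returning -1 only means the data connection ended; this confirms the
        // server actually reported a successful, complete transfer.
        if (!ftp.completePendingCommand()) {
            throw new IOException("FTP transfer did not complete: " + ftp.getReplyString());
        }
    }
}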
My question is: is there a way to perform a socket OutputStream shutdown, or is it not properly/fully implemented by Nokia? (J2ME Nokia implementation, tested on a Nokia C6-00, where the stream does not close; tested on the emulator, where it works fine.)
The main problem is that the J2SE server application does not get the end-of-stream signal: the condition read(buffer) == -1 is never true, so it tries to read from an empty stream and hangs until the client is force-killed. It works with a very, very, very ugly workaround in the server-side application:
Thread.sleep(10); // wait some time for data, else you would get stuck........
while ((count = dataInputStream.read(buffer)) != -1) {
    byteArrayOutputStream.write(buffer, 0, count);
    if (count != BUFFER_SIZE_1024 || dataInputStream.available() == 0) { // the world's worst condition ever written... but works
        break;
    }
    Thread.sleep(10); // wait for data input to get some data for dataInputStream.available() to return != 0 if the client still sends data, else you would not read all data......
}
But this solution is absolutely not acceptable (either I don't know something about Nokia Java coding and I'm missing something, or this is some sort of Nokia J2ME quirk and I should get used to it or change platform).
I can't close the client socket after sending data, because the server sends a response to the client after receiving and processing the data.
It looks like this: J2ME client -> J2SE server (hangs on read because the client does not perform an output stream shutdown) -> J2ME client.
I've tried to:
close the dataOutputStream on the J2ME client - no effect
setSocketOptions (KEEPALIVE, SNDBUF and others) - no effect or errors
nothing seems to work on the target device
Sorry, but I'm a bit furious right now after this nonsensical fight with Java ME.
I've searched for a solution, but none seems to work.
Client code:
SocketConnection socketConnection = (SocketConnection) Connector.open("socket://" + ip + ":" + port);
int count;
byte[] buffer = new byte[BUFFER_SIZE_1024];

// client -> server
DataOutputStream dataOutputStream = new DataOutputStream(socketConnection.openDataOutputStream());
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
while ((count = byteArrayInputStream.read(buffer)) != -1) {
    dataOutputStream.write(buffer, 0, count);
    dataOutputStream.flush();
}
dataOutputStream.close();
byteArrayInputStream.close();
With J2SE, my advice would be to initialize the Socket from a java.nio.channels.SocketChannel and just interrupt the blocked thread after a reasonable timeout has expired.
I'm not sure which side you are trying to fix, but it looks like with J2ME your only option would be to set a socket timeout.
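On the J2SE server side, a simpler variant of the same idea is a read timeout, so the blocking read() fails with SocketTimeoutException instead of hanging forever when the client never signals end of stream. A rough sketch (the 15-second value and buffer size are arbitrary choices of mine):

import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimedRead {

    // 'clientSocket' is the accepted server-side Socket for the J2ME client.
    public static byte[] readAll(Socket clientSocket) throws Exception {
        clientSocket.setSoTimeout(15000);   // any blocking read now fails after 15 s of silence
        DataInputStream in = new DataInputStream(clientSocket.getInputStream());
        ByteArrayOutputStream received = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int count;
        try {
            while ((count = in.read(buffer)) != -1) {
                received.write(buffer, 0, count);
            }
        } catch (SocketTimeoutException e) {
            // The client went quiet (or never shut down its output stream);
            // treat whatever arrived so far as the complete request.
        }
        return received.toByteArray();
    }
}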
EDIT
Actually, now that you've posted client code, I see the problem. If the exception is thrown from the while loop for whatever reason, the output stream is not closed.
Here is my proposed fix for that:
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
try
{
DataOutputStream dataOutputStream = new DataOutputStream(
socketConnection.openDataOutputStream()
);
try
{
while ((count = byteArrayInputStream.read(buffer)) != -1) {
dataOutputStream.write(buffer, 0, count);
dataOutputStream.flush();
}
}
finally
{
dataOutputStream.close();
}
}
finally
{
byteArrayInputStream.close();
}
Note that it is not strictly necessary to close a ByteArrayInputStream, but code has a habit of mutating, and some day that input stream may become something that needs an explicit close.
I've tried the code with the same effect: on the emulator it works like a charm, on the device it hangs. But I solved my problem as follows:
On the J2ME client, before sending each 1024-byte packet, I send its length and its state (IsNext or IsLast). After this, on the J2SE server side, in a while (true) loop, I first read the length with readShort, then the state with readByte. (I know it would be better to combine them into one short, but I didn't know whether it would work and whether the effort was worth it, and now that it works I'm not touching it; besides, it is easy to add a new state if necessary, and it works quite fast.)
After this, the server goes into a second nested loop [while (dataInputStream.available() < length) {} - I'll have to put a timeout here, but I'll worry about that later. Also note that on J2ME, dataInputStream.available() always returns 0 (!), so on the J2ME client the read at this point is a for (int i = 0; i < length... loop reading a single byte at a time].
When the while (dataInputStream.available() ... loop breaks, I read a block of data whose length I already have, and if the state is IsLast, I break out of the while (true) loop. It works perfectly and is stable.
Thanks for the advice, and I hope this info will help someone.
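For anyone landing here later, a rough sketch of that framing approach, with DataInputStream.readFully on the server standing in for the available() polling loop; the state values, class name, and chunk size are my own naming, not from the post above.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public final class Framing {
    static final byte IS_NEXT = 0;   // more packets follow
    static final byte IS_LAST = 1;   // final packet of the message
    static final int CHUNK = 1024;

    // Client side: length + state header before every chunk of 'data'.
    static void send(DataOutputStream out, byte[] data) throws IOException {
        int off = 0;
        do {
            int len = Math.min(CHUNK, data.length - off);
            out.writeShort(len);
            out.writeByte(off + len >= data.length ? IS_LAST : IS_NEXT);
            out.write(data, off, len);
            out.flush();
            off += len;
        } while (off < data.length);
    }

    // Server side: read exactly the announced number of bytes per packet.
    static void receive(DataInputStream in, OutputStream sink) throws IOException {
        while (true) {
            int len = in.readShort();
            byte state = in.readByte();
            byte[] block = new byte[len];
            in.readFully(block);          // blocks until 'len' bytes arrive; no available() polling
            sink.write(block);
            if (state == IS_LAST) {
                break;
            }
        }
    }
}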
I have an application that does a lot of work on S3, mostly downloading files from it. I am seeing a lot of these kinds of errors, and I'd like to know if this is something in my code or if the service is really this unreliable.
The code I'm using to read from the S3 object stream is as follows:
public static final void write(InputStream stream, OutputStream output) {
    byte[] buffer = new byte[1024];
    int read = -1;
    try {
        while ((read = stream.read(buffer)) != -1) {
            output.write(buffer, 0, read);
        }
        stream.close();
        output.flush();
        output.close();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
The OutputStream is a new BufferedOutputStream(new FileOutputStream(file)). I am using the latest version of the Amazon S3 Java client, and this call is retried four times before giving up. So, after trying four times, it still fails.
Any hints or tips on how I could possibly improve this are appreciated.
I just managed to overcome a very similar problem. In my case the exception I was getting was identical; it happened for larger files but not for small files, and it never happened at all while stepping through the debugger.
The root cause of the problem was that the AmazonS3Client object was getting garbage collected in the middle of the download, which caused the network connection to break. This happened because I was constructing a new AmazonS3Client object with every call to load a file, while the preferred use case is to create a long-lasting client object that survives across calls - or at least is guaranteed to be around during the entirety of the download. So, the simple remedy is to make sure a reference to the AmazonS3Client is kept around so that it doesn't get GC'd.
A link on the AWS forums that helped me is here: https://forums.aws.amazon.com/thread.jspa?threadID=83326
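To make that concrete, here is a minimal sketch of the difference, using the SDK v1-era AmazonS3Client API; the bucket, key, and credential strings are placeholders, and real code would load credentials properly rather than hard-coding them.

import java.io.InputStream;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.S3Object;

public final class S3Downloads {

    // One long-lived client for the whole application; nothing can collect it mid-download.
    private static final AmazonS3 S3 = new AmazonS3Client(
            new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));  // placeholder credentials

    public static InputStream open(String bucket, String key) {
        S3Object object = S3.getObject(bucket, key);
        return object.getObjectContent();  // stream stays valid because S3 is strongly referenced
    }

    // The problematic pattern: a client created per call can become unreachable (and, per the
    // answer above, be garbage collected and tear down its connections) while the returned
    // stream is still being read.
    public static InputStream openFragile(String bucket, String key) {
        AmazonS3 throwaway = new AmazonS3Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        return throwaway.getObject(bucket, key).getObjectContent();
    }
}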
The network is closing the connection before the client gets all the data, for one reason or another; that's what is going on.
Part of any HTTP request is the content length. Your code is getting the header, saying "hey buddy, here's data, and it's this much of it," and then the connection is dropping before the client has read all of the data, so it's bombing out with the exception.
I'd look at your OS/network/JVM connection timeout settings (though the JVM generally inherits from the OS in this situation). The key is to figure out what part of the network is causing the problem. Is it your computer-level settings saying "nope, not going to wait any longer for packets"? Is it a non-blocking read with a timeout setting in your code saying "hey, I haven't gotten any data from the server for longer than I'm supposed to wait, so I'm going to drop the connection and throw an exception"? Etc., etc.
Your best bet is to snoop the packet traffic at a low level and trace backwards to see where the connection drop is happening, or to see if you can raise the timeouts in the things you can control, like your software and the OS/JVM.
First of all, your code is operating entirely normally if (and only if) you suffer connectivity troubles between yourself and Amazon S3. As Michael Slade points out, standard connection-level debugging advice applies.
As to your actual source code, I note a few code smells you should be aware of. Annotating them directly in the source:
public static final void write(InputStream stream, OutputStream output) {
    byte[] buffer = new byte[1024]; // !! Abstract 1024 into a constant to make
                                    // this easier to configure and understand.
    int read = -1;
    try {
        while ((read = stream.read(buffer)) != -1) {
            output.write(buffer, 0, read);
        }
        stream.close();  // !! Unexpected side effects: closing of your passed in
                         // InputStream. This may have unexpected results if your
                         // stream type supports reset, and currently carries no
                         // visible documentation.
        output.flush();  // !! Violation of RAII. Refactor this into a finally block,
        output.close();  // a la Reference 1 (below).
    } catch (IOException e) {
        throw new RuntimeException(e); // !! Possibly indicative of an outer
                                       // try-catch block for RuntimeException.
                                       // Consider keeping this as IOException.
    }
}
(Reference 1)
Otherwise, the code itself seems fine. IO exceptions should be expected occurrences in situations where you're connecting to a fickle remote host, and your best course of action is to draft a sane policy to cache and reconnect in these scenarios.
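For illustration, a sketch of the finally-based refactor those comments point at might look like this; leaving the caller responsible for closing the streams it passed in is one reasonable reading of the advice, and the constant name is mine.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public final class StreamCopy {

    private static final int BUFFER_SIZE = 1024;  // pulled out into a constant per the comment above

    // Copies everything from 'stream' to 'output'; closing both is left to the caller.
    public static void write(InputStream stream, OutputStream output) throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        try {
            int read;
            while ((read = stream.read(buffer)) != -1) {
                output.write(buffer, 0, read);
            }
        } finally {
            output.flush();  // always runs, even if read() or write() throws
        }
    }
}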
Try using Wireshark to see what is happening on the wire when this happens.
Try temporarily replacing S3 with your own web server and see if the problem persists. If it does, it's your code and not S3.
The fact that it's random suggests network issues between your host and some of the S3 hosts.
Also, in my experience, S3 can close slow connections.
I would take a very close look at the network equipment nearest your client app. This problem smacks of some network device dropping packets between you and the service. Look to see if there was a starting point when the problem first occurred. Was there any change like a firmware update to a router or replacement of a switch around that time?
Verify your bandwidth usage against the amount purchased from your ISP. Are there times of the day where you're approaching that limit? Can you obtain graphs of your bandwidth usage? See if the premature terminations can be correlated with high-bandwidth usage, particularly if it approaches some known limit. Does the problem seem to pick on smaller files and on large files only when they're almost finished downloading? Purchasing more bandwidth from your ISP may fix the problem.
I am trying to write a server that accepts files and writes them to a certain directory using DataInputStream and BufferedInputStream.
The server gets 'user name (String)', 'number of files (int)', 'file name (String)', 'size of each file (long)', and 'contents of the file as uninterpreted bytes (byte[])',
and if everything is successful, I am supposed to send back a boolean value.
But the problem is that it is not receiving the files correctly.
From time to time I get a 'broken pipe' error message, or the file is corrupted after I receive it.
Fixed the problem..
One small thing which may be related to your problem. You should be decrementing your file size variable by the number of bytes actually read, instead of the number of bytes requested to be read:
while (fileSize > 0) {
    if (fileSize < byteSize)
        byteSize = (int) fileSize;
    int byteRead = din.read(b, 0, byteSize);
    if (byteRead == -1)
        break;                  // connection closed before the full file arrived
    fos.write(b, 0, byteRead);  // write only what was actually read, not the whole buffer
    fileSize -= byteRead;       // <-- See here
}
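An alternative sketch that sidesteps the partial-read bookkeeping entirely, assuming the announced file size can be trusted, is to loop with readFully (the variable names roughly follow the snippet above):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;

public final class FileReceive {

    // Reads exactly 'fileSize' bytes from 'din' into 'fos', chunk by chunk.
    static void receiveFile(DataInputStream din, OutputStream fos, long fileSize) throws IOException {
        byte[] b = new byte[8192];
        long remaining = fileSize;
        while (remaining > 0) {
            int chunk = (int) Math.min(b.length, remaining);
            din.readFully(b, 0, chunk);   // throws EOFException if the sender disconnects early
            fos.write(b, 0, chunk);
            remaining -= chunk;
        }
    }
}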
You might be getting this error if, while you are reading the input, the sender closes the connection. It probably has nothing to do with your code. The sender might have timed out, closed the connection before the transfer finished, or any number of other things.
Take a look at this related question: How to fix java.net.SocketException: Broken pipe?