I have implemented file-upload code that uses a secure socket to upload files to a server, writing the bytes with content type multipart/form-data.
Now and again I get a "bad socket id" error, and analysis in Wireshark tells me that a FIN packet is being sent from the server to the client for some reason. The identical code uploads successfully 80% of the time, so I don't think it is a bad-format error. Why would the server be closing the connection when the content type states that there is more data to be sent?
Anyway, if I can't solve the bad socket id issue, would TCP/socket connections allow for a reconnection that resumes the upload where it left off before the disconnection?
Looking forward to insights on this matter.
Thank you
Are you calling flush on your socket? Sometimes you need to explicitly flush any remaining data, otherwise "weird" behavior (e.g. not sending the last packet) can occur. Just an idea.
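For illustration only (socket and multipartBytes are placeholders, not names from your code), the suggestion amounts to something like:

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

static void writeBody(Socket socket, byte[] multipartBytes) throws IOException {
    OutputStream out = socket.getOutputStream();
    out.write(multipartBytes); // the multipart/form-data body (or its final part)
    out.flush();               // push any buffered bytes onto the wire before awaiting the response
}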
When my colleague's Java (client) TCP socket program sends a message (500 KB to 1 MB) to my Python (server) TCP socket program, the message sometimes arrives split across several reads, or stuck together with other messages in a single read. I want to know: is there a universally valid solution to this situation? And what do people usually do in this case?
I thought it might help if I could manually control the buffer requested by the socket function recv(buffer_size); if I could do that, the program would at least be free from stuck-together messages.
It's me, and I've solved this problem with a "Head-Tail Package Mode", i.e. length-prefixed framing.
Specifically, when the client sends a series of data to the server, it first sends a head package containing stamp information and the data length. When the server receives the head package, it knows how many bytes the client is about to send and can prepare a cache of exactly that length for the incoming data. When the client then sends the tail (data) package, the server can rely on the length it learned in advance to take out exactly that much data. That's my solution for now.
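A minimal Java sketch of this framing (names here are illustrative, not from the actual code; the Python server would mirror it with a read loop and struct.unpack):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sender: write the "head" (a 4-byte big-endian length), then the data.
static void sendFramed(OutputStream rawOut, byte[] payload) throws IOException {
    DataOutputStream out = new DataOutputStream(rawOut);
    out.writeInt(payload.length); // head package: announces the data length
    out.write(payload);           // tail (data) package
    out.flush();
}

// Receiver: read the head first, then exactly that many bytes,
// no matter how TCP split or merged the segments on the wire.
static byte[] receiveFramed(InputStream rawIn) throws IOException {
    DataInputStream in = new DataInputStream(rawIn);
    int length = in.readInt();    // length announced by the head package
    byte[] payload = new byte[length];
    in.readFully(payload);        // loops internally until all bytes have arrived
    return payload;
}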
-- EDIT: --
To rephrase the question.
Does HTTP know anything about the status of underlying TCP connection?
TCP is a reliable protocol: when the server sends data to the client, it expects an acknowledgment (ACK) from that client. What happens in HTTP when the underlying server-side TCP connection never receives the ACK?
-- ORIGINAL Question: --
I am trying to solve a design issue on our HTTP client/server app.
Here is the situation:
The server runs on Tomcat, and we are somewhat limited to using Jersey or Servlets for the server side implementation.
The client requests data from the server; once the data has been read, it is deleted.
Data must not be deleted if the client has not received it.
There is no confirmation from the client whether the data was received or not.
The client implementation cannot be changed in any way.
The network connection is unstable and can be interrupted often and for long periods of time (e.g. 30 sec.).
The problem: if the client makes a request and shortly afterwards loses the connection to the server, the server will not recognize this, and it will delete the data and send it to the client over the dead connection.
Ideally, we want to get an IOException when flushing the data stream to the client and handle it accordingly:
try (ServletOutputStream outputStream = httpServletResponse.getOutputStream()) {
    outputStream.write(bytes);
    outputStream.flush();
} catch (Exception e) {
    // TODO: do something ...
}
I simulated this locally by killing the client shortly after sending the request, or by setting a very low client read timeout. In both cases I got a server-side exception (with both Jersey and Servlets).
The last test was sending a request over a real network and pulling the network cable in the process.
Unfortunately, I did not get the expected result: the server streamed the data back without recognizing the interrupted connection.
So, does anyone have an idea how to force a server-side exception when the connection to the client is broken?
Any other ideas that don't involve using Sockets or confirmation calls from the client?
Thanks in advance!
Instead of deleting the file in real time, you can write a message to a queue in order to delete it later. The delete job would have to check a database where you record whether the client received the file completely.
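A sketch of the idea, with every name hypothetical and the "database" reduced to an in-memory map; a periodically scheduled sweep() deletes a file only once it has been marked as fully received:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

class DeferredDeleter {
    private final BlockingQueue<Path> pending = new LinkedBlockingQueue<>();
    private final ConcurrentHashMap<Path, Boolean> received = new ConcurrentHashMap<>();

    void markReceived(Path file) { received.put(file, Boolean.TRUE); } // e.g. fed from the database flag
    void requestDelete(Path file) { pending.add(file); }

    // Run this periodically: delete only files confirmed as fully received.
    void sweep() {
        int n = pending.size();
        for (int i = 0; i < n; i++) {
            Path file = pending.poll();
            if (file == null) break;
            if (Boolean.TRUE.equals(received.remove(file))) {
                try { Files.deleteIfExists(file); } catch (Exception ignored) {}
            } else {
                pending.add(file); // not confirmed yet: check again on the next sweep
            }
        }
    }
}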
I don't think there's a way to know for certain whether the data arrived at the client unless the client sends an acknowledgement message.
The only solution seems to be not actually deleting the data, but keeping it and setting a 'deleted' flag. But since I don't know the particular use case, I'm not sure if this helps...
TCP is a two-way protocol.
If you set up an input stream and call InputStream.read(), it should return -1 once the client has disconnected.
More detail here:
Java Sockets: check if client is able to receive message from server
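As a rough sketch of that check (clientSocket is a placeholder; note that the read consumes one byte if the client did send data):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

static boolean clientDisconnected(Socket clientSocket) throws IOException {
    InputStream in = clientSocket.getInputStream();
    int b = in.read(); // blocks until data arrives, the peer closes, or an error occurs
    return b == -1;    // -1 means the client closed its side of the connection
}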
I need some assistance on a project I am working on. It is a library that uses Jersey 1.x (1.19.1) to HTTP POST a JSON document and get the corresponding JSON response from a server.
I am facing a problem when the response from the server is "big". The JSON document posted by my client application contains several jobs that must be executed by the server, and the JSON document sent back by the server is made of the outputs of these jobs. The jobs can be considered independent of each other. The server works in streaming mode, which means it starts to process the jobs before it has received the entire JSON document posted by the client, and it starts to send the outputs of the jobs as soon as they are finished. So the server starts to reply to my client application while the application is still posting the request. Here is my problem: when the request gets big, so does the response (more jobs to do), and my application freezes and at some point terminates.
I spent a lot of time trying to figure out what is happening; here is what I found and what I inferred.
To handle HTTP communication, Jersey uses a class from the JDK (in rt.jar); I forgot the exact name and don't have access to my work right now, so let's call it HttpConnection.
In this class there is a method checkError() that is invoked and throws an IOException with only a message saying it was impossible to write to the server.
Debugging, I was able to understand that an attribute of this class named trouble was set to true because a write() method had caught an IOException earlier; checkError() throws an IOException based on that trouble boolean flag. It is not easy to see the causing IOException, because the classes of the JRE are compiled without debugging symbols, but I managed to see that it was a "connection reset by peer" problem.
Then I tried to understand why the server resets the connection. I used an HTTP proxy to capture the HTTP traffic between my client application and the server, but this gave me no further clues; it even seems that the proxy is unable to handle the connection with the server properly either!
So I tried to use Wireshark to capture the traffic and see what's wrong. Here is what I found.
On the client side, the packets corresponding to the POST of the request JSON document are sent, and the server starts to reply shortly afterwards, as explained above. The server side sends more and more packets, and I noticed that the buffer of the TCP layer on the client side (called the TCP window in Wireshark) shrinks more and more as the server sends packets, until it becomes full (size: 0 bytes). At that point the TCP layer on the server side cannot send any more data to the client side, so it fills up as well. In the end the conversation consists only of both sides retrying to send data and failing again and again. Ultimately the server decides to send a reset packet. I believe this corresponds to the causing IOException mentioned above.
My understanding is: as long as the server does not start to stream the response, everything is fine. When the server starts to send the response, the TCP buffer on the client side starts to fill up. But since the client application is not reading the response yet, the content of this buffer is never consumed. Once the server has sent enough data to fill this buffer, it cannot send any more, and the buffer of its own TCP layer fills up too, because the server keeps pushing data. As a result, the client application cannot finish sending the request JSON document. The communication is blocked on both sides, and the server decides to reset the connection.
My conclusion is: the code, as currently written, does not support such full-duplex communication, because the response from the server is not consumed as it is received. Indeed, stepping through the Jersey code executed by my library with a debugger, it is clear that the pattern is:
first: connection.getOutputStream().write()
and then: response.getInputStream().read()
In my opinion, the root cause of the problem is that the library I am working on uses Jersey in this synchronous manner, which does not fit the way the server works (streaming the response while the request is still being sent to it).
I searched the Internet a lot for a solution that keeps Jersey 1.19.1, so I can improve the library with as few impacts as possible, but I failed. This is why I am asking for help here now ;)
So basically my question is: is it possible to do what I need while keeping the Jersey client library 1.19.1, and if yes, how? If not, which HTTP client library should I use (to write a POST request and read the corresponding response at the same time)? If you could give me a basic example so I can get on track quickly, it would be much appreciated.
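To make the requirement concrete, here is the shape of the exchange I need, sketched over a raw socket with a separate writer thread (host, path, and payload are placeholders); what I am looking for is the equivalent done with an HTTP client library:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Full-duplex POST: one thread streams the request body while the main
// thread consumes the response, so the client-side TCP window never fills.
static void fullDuplexPost(byte[] jsonBody) throws Exception {
    try (Socket socket = new Socket("server.example.com", 80)) { // placeholder host/port
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();

        Thread writer = new Thread(() -> {
            try {
                String headers = "POST /jobs HTTP/1.1\r\n"       // placeholder path
                        + "Host: server.example.com\r\n"
                        + "Content-Type: application/json\r\n"
                        + "Content-Length: " + jsonBody.length + "\r\n"
                        + "Connection: close\r\n\r\n";           // so the read loop below terminates
                out.write(headers.getBytes(StandardCharsets.US_ASCII));
                out.write(jsonBody);
                out.flush();
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        writer.start();

        // Consume the streamed response while the request is still being sent.
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            System.out.write(buf, 0, n);
        }
        System.out.flush();
        writer.join();
    }
}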
One last thing: curl works just fine. I can fully post the exact same JSON document and get the response with it, so there is no problem on the server side, contrary to what I suspected at the very beginning of my investigation. And it scales fine (I tried sending huge JSON documents). Of course, I made sure the HTTP headers of the POST are the same for my library and for curl.
Thanks a lot for reading all this, and for your answers.
Best regards,
Loïc
From an FTP server perspective, when a client requests a file through the RETR command, the server creates a data connection (socket) to the client on the specified port and starts the transfer by writing to the output stream. The server is coded (in Java) in such a way that after the write to the socket is complete, the output stream is flushed and the socket is closed. After this, the code "226" is sent to the client on the control channel.
Since the connection is over a very slow network, the 226 message arrives before the actual data transfer is complete. This is a tricky situation: the client code cannot be changed, and the server has to make sure that 226 is sent only after the client has received the data.
I tried searching the internet and got a few suggestions, but I am not sure which one is the standard:
1. use the setSoLinger() method to turn on SO_LINGER and set a timeout (sketched below).
2. introduce a delay after writing each byte to the socket (performance will suffer on fast connections).
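For option 1, a rough sketch (dataSocket and fileBytes are placeholders, and the exact SO_LINGER behavior varies by platform): with SO_LINGER enabled, close() blocks until the queued data has been delivered and acknowledged, or the timeout expires.

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

static void sendFileThenClose(Socket dataSocket, byte[] fileBytes) throws IOException {
    dataSocket.setSoLinger(true, 30); // linger for up to 30 seconds on close
    OutputStream out = dataSocket.getOutputStream();
    out.write(fileBytes);
    out.flush();
    dataSocket.close();               // blocks until the data is delivered, or 30 s pass
    // Only after close() returns should "226 Transfer complete" go out on the control channel.
}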
Are there any other options besides the two above for handling this situation? Any idea what standard FTP servers on Linux/Solaris/Windows follow for when to send 226?
I could see a similar thread on Stack Overflow, "When should 226 be sent from the FTP server?", but could not find much in it related to my question.
Help here is really appreciated...Thanks
Definitely do not go with the delay. The only thing I can think of is to build a proxy layer that intercepts the acknowledgement code, checks for the file, and reroutes the code to the application, sort of like what Telerik Fiddler does as an application.
I used the same concept before with the JMS acknowledge modes when delivering messages to the server, where I had to implement the same thing.
Wish you all the luck, my friend.
I have a server implemented using Apache HTTPCore which can accept POSTs from an HttpClient implementation. I have enough of it working that I can send to the server, process the POST contents, and get the response back on the client. Everything seems to work; however, I am noticing that the server keeps the connection alive until it times out, even though the client request has completed successfully. I'm assuming that I need to close the connection on the client side after receiving the response, but I believe I am already doing that: I am using BasicResponseHandler(), which returns a String, so I can't figure out what, if anything, I actually need to close.
Any thoughts on this? I was going to try a different response handler that returns an InputStream and see whether closing that works, but I assumed BasicResponseHandler was already doing that behind the scenes, since it returns a String.
If the server hasn't read an EOS, the client hasn't closed the connection. Having a read timeout on the connection to the client is the correct strategy.
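As a generic sketch of that strategy (plain java.net rather than the HTTPCore API; clientSocket is a placeholder for the accepted connection):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

static void reapIfIdleOrClosed(Socket clientSocket) throws IOException {
    clientSocket.setSoTimeout(30_000); // read() fails after 30 s of idleness
    InputStream in = clientSocket.getInputStream();
    try {
        if (in.read() == -1) {
            clientSocket.close();      // EOS: the client closed its side, so close ours
        }
    } catch (SocketTimeoutException e) {
        clientSocket.close();          // idle too long: reclaim the connection
    }
}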