From an FTP server's perspective, when a client requests a file via the RETR command, the server creates a data connection (socket) to the client on the specified port and starts the transfer by writing to the output stream. The server is coded (in Java) so that after the write to the socket is complete, the output stream is flushed and the socket is closed. After this, reply code 226 is sent to the client on the control channel.
Since the connection is over a very slow network, the 226 message reaches the client before the actual data transfer is complete. This is a tricky situation: the client code cannot be changed, and the server has to make sure that 226 is sent only after the client has received the data.
I searched the internet and got a few suggestions, but I'm not sure which one is standard:
1. Use the setSoLinger() method to turn on SO_LINGER and set a timeout (see the sketch after this list).
2. Introduce a delay after writing each byte to the socket (performance would suffer on fast connections).
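For reference, a minimal sketch of option 1, assuming the server has direct access to the data-connection Socket; createDataConnection() and sendControlReply() are placeholders for whatever the real server uses, and the 30-second linger value is only an illustration:

import java.io.OutputStream;
import java.net.Socket;

// Sketch of option 1: with SO_LINGER enabled, close() blocks until the
// remaining data has been delivered to the peer or the linger timeout expires.
void sendFile(byte[] fileBytes) throws Exception {
    Socket dataSocket = createDataConnection();      // placeholder: open the data connection to the client
    dataSocket.setSoLinger(true, 30);                // linger up to 30 seconds on close()
    try (OutputStream out = dataSocket.getOutputStream()) {
        out.write(fileBytes);
        out.flush();
    }                                                // closing the stream closes the socket, now blocking per SO_LINGER
    sendControlReply("226 Transfer complete");       // placeholder: reply on the control channel
}

Note that SO_LINGER only delays the close at the TCP level; it does not confirm that the client application actually consumed the data.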
Are there any other options besides the above to solve this situation? Any idea what standard the Linux/Solaris/Windows FTP servers follow for sending 226?
I found a similar Stack Overflow thread, "When should 226 be sent from the FTP server?", but it didn't have much information related to my question.
Help here is really appreciated. Thanks!
Definitely do not go with the delay. The only thing I can think of is to build a proxy layer that intercepts the acknowledgment code, checks for the file, and then forwards the code on to the application, sort of what Telerik Fiddler does as an application.
I used the same concept before with JMS acknowledge mode when delivering messages to the server, and I had to implement it the same way.
Wish you all luck my friend
-- EDIT: --
To rephrase the question:
Does HTTP know anything about the status of the underlying TCP connection?
TCP is a reliable protocol: when the server sends data to the client, it expects an acknowledgment from that client. What happens in HTTP when the server-side TCP connection never receives that ACK?
-- ORIGINAL Question: --
I am trying to solve a design issue in our HTTP client/server app.
Here is the situation:
The server runs on Tomcat, and we are somewhat limited to using Jersey or Servlets for the server side implementation.
The client requests data from the server; once the data has been read, it is deleted.
Data must not be deleted if the client has not received it.
There is no confirmation from the client if the data is received or not.
The client implementation cannot be changed in any way.
The network connection is unstable and can be interrupted often and for long periods of time (e.g. 30 seconds).
The problem: if the client makes a request and shortly afterwards loses its connection to the server, the server will not recognize this; it will send the data over the dead connection and delete it anyway.
Ideally, we want to get an IOException when flushing the data stream to the client and handle it accordingly:
try (ServletOutputStream outputStream = httpServletResponse.getOutputStream()) {
outputStream.write(bytes);
outputStream.flush();
} catch (Exception e) {
// TODO: do something ...
}
I simulated this locally by killing the client shortly after sending the request, and also by setting a very low client read timeout. In both cases I got a server-side exception (with both Jersey and Servlets).
The last test was sending a request over a network and pulling the network cable in the process.
Unfortunately I did not get the expected result. The server streamed the data back without recognizing the interrupted connection.
So, does anyone have an idea how to force a server-side exception when the connection to the client is broken?
Any other ideas that don't involve using Sockets or confirmation calls from the client?
Thanks in advance!
Instead of deleting the file in real time, you can put a message on a queue and delete it later. The delete job would then check a database where you record whether the client received the file completely.
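A rough sketch of that idea; the delay value, wasFullyReceived() and deleteFile() are placeholders for your actual queue, database check and storage code:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: instead of deleting right after streaming, schedule the delete
// and let the delayed task consult a "received" flag before acting.
ScheduledExecutorService deleteQueue = Executors.newSingleThreadScheduledExecutor();

void scheduleDelete(String fileId) {
    deleteQueue.schedule(() -> {
        if (wasFullyReceived(fileId)) {   // placeholder: database lookup
            deleteFile(fileId);           // placeholder: storage call
        }
        // otherwise keep the file and let a later run retry
    }, 5, TimeUnit.MINUTES);
}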
I don't think there's a way to know for certain whether the data arrived at the client unless the client sends an acknowledgement message.
The only solution seems to be not actually deleting the data, but keeping it and setting a 'deleted' flag. But since I don't know the particular use case, I'm not sure if this helps...
TCP is a two-way protocol.
If you set up an input stream and call InputStream.read(), it should return -1 once the client has disconnected.
More detail here:
Java Sockets: check if client is able to receive message from server
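A minimal sketch of that check on a plain java.net.Socket (assuming you have access to the raw socket, which a Servlet container normally does not expose):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch: read() returns -1 on an orderly close by the peer and throws
// IOException if the connection was reset. Note that this consumes one
// byte if the client actually sent data.
boolean peerClosed(Socket socket) {
    try {
        socket.setSoTimeout(1000);            // don't block forever waiting for data
        InputStream in = socket.getInputStream();
        return in.read() == -1;
    } catch (SocketTimeoutException e) {
        return false;                         // no data yet, connection still looks open
    } catch (IOException e) {
        return true;                          // reset or otherwise broken
    }
}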
I'm writing a highly loaded client/server application. On some OSes there are cases where the connection is lost but Netty doesn't know about it (because TCP/IP has no built-in pinging). So I decided to implement connection pinging at my application level.
Then I faced the next problem: the ping from the server cannot reach the client and come back within a reasonable time when the server is sending too many messages to the client over a slow network connection (the write buffer high-water mark is rather big, several MB). In this case the server breaks the connection even though it is alive and working.
So I decided to look at I/O progress while pinging as well, so that I could treat the following situation as normal: the ping has timed out, but bytes from the server are still being processed and written to the socket.
However, it looks like it is impossible in Netty to count the bytes actually written to the socket and to measure the time of the last socket write, because NioSocketChannel.doWrite(ChannelOutboundBuffer in) doesn't have any callbacks for that. And I don't want to hack the Netty code by somehow overriding the NioSocketChannel doWrite method.
I'm using Netty 4.0.42.
Any help is appreciated!
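For context, the kind of hook I have in mind is roughly an outbound handler that watches write promises; a sketch only, not verified against 4.0.42, and the metric names are my own:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: records when each write's promise completes and how many bytes it
// carried. This observes writes completing in the pipeline, not the exact
// moment NioSocketChannel.doWrite() copies them into the OS socket buffer.
public class WriteProgressHandler extends ChannelOutboundHandlerAdapter {

    private final AtomicLong bytesWritten = new AtomicLong();
    private volatile long lastWriteCompletedNanos = System.nanoTime();

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        final int size = (msg instanceof ByteBuf) ? ((ByteBuf) msg).readableBytes() : 0;
        promise.addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                bytesWritten.addAndGet(size);
                lastWriteCompletedNanos = System.nanoTime();
            }
        });
        ctx.write(msg, promise);
    }

    public long bytesWritten() {
        return bytesWritten.get();
    }

    public long nanosSinceLastCompletedWrite() {
        return System.nanoTime() - lastWriteCompletedNanos;
    }
}

The ping-timeout check could then be relaxed while nanosSinceLastCompletedWrite() stays small, i.e. while writes are still making progress.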
I'm very curious where all the data is stored until I read the request body.
For example, a file is being uploaded to the server and a Java program receives it. It is impossible to hold the whole file content in buffers if the file is very big, say 100 GB.
Does Java stream this file from the remote computer? I mean, the remote computer sends a small part of the data, Java receives this part and waits for the next one; when the remote computer decides that the server has read the first part, it sends the second part, and so on.
Is this how Java and its HttpServer work, or is the whole file stored on disk the way Apache+PHP does it?
The mechanism you're looking for is implemented by the TCP stack of the operating system. Buffers are used both on the sending and the receiving side.
TCP essentially works by having the receiving machine reply to the sender with "OK, got it, now send the next part", also known as an ACK packet. This mechanism is also responsible for adjusting the transfer speed to the speed of your connection (instead of sending data too fast and causing packet loss).
It is a well-oiled machine, but if something goes wrong it usually manifests as a timeout. (In your example, if you wait a long time before processing the request body and never read it, the sending machine will just give up.)
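For the built-in com.sun.net.httpserver.HttpServer specifically, the request body is handed to you as an InputStream, so you can consume a very large upload in small chunks without ever buffering the whole thing; a sketch, with port and path chosen arbitrarily:

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class UploadServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/upload", exchange -> {
            long total = 0;
            byte[] chunk = new byte[8192];                       // only 8 KB in memory at a time
            try (InputStream body = exchange.getRequestBody()) {
                int n;
                while ((n = body.read(chunk)) != -1) {
                    total += n;                                  // process or persist the chunk here
                }
            }
            byte[] reply = ("Received " + total + " bytes").getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            exchange.getResponseBody().write(reply);
            exchange.close();
        });
        server.start();
    }
}

While the handler is not reading, TCP flow control throttles the sender, so only the OS and JVM buffers hold data, not the whole file.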
I have implemented file-upload code that uses a secure socket to upload files to a server, writing the bytes with the content type multipart/form-data.
Now and again I get a bad socket id error, which analysis in Wireshark tells me means a FIN packet is being sent from the server to the client for some reason. The identical code uploads successfully 80% of the time, so I don't think it is a bad-format error. Why would the server disconnect when the content type states that there is more data to be sent?
Anyway, if I can't solve the bad socket id issue, would TCP/socket connections allow a reconnection to resume the upload where it left off before the disconnection?
Looking forward to insights on this matter.
Thank you
Are you calling flush on your socket? Sometimes you need to explicitly flush any remaining data, otherwise "weird" behavior (e.g. not sending the last packet) can occur. Just an idea.
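Something along these lines, where out stands for whatever stream your multipart code writes to (the SSL socket's output stream or a wrapper around it):

import java.io.IOException;
import java.io.OutputStream;

// Sketch: make sure every buffered byte is pushed onto the wire before closing.
void finishUpload(OutputStream out, byte[] closingBoundary) throws IOException {
    out.write(closingBoundary);   // last part plus the closing multipart boundary
    out.flush();                  // force any buffered data out
    out.close();                  // only close after flushing
}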
EDIT: Learned that Webmethods actually uses NLST, not LIST, if that matters
Our business uses the webMethods Integration Server to handle most of our outbound communications, and its FTP functionality leaves something to be desired. We are having a problem that may be specific to webMethods, but if anyone can point me toward the kinds of things that might cause this, I'd appreciate it.
When polling two of our partners' FTP servers, we connect without issue, but when doing an NLST on a directory that is empty (no files and no subdirectories) the request times out. The actual error is:
com.wm.net.ftpCException: [ISC.0064.9010] java.net.SocketTimeoutException: Accept timed out
It's thrown during the invocation of the pub.client.ftp:ls service. I've logged in to the same sites with a number of FTP clients without a problem: the default FTP client in Windows, FileZilla, and lftp, all without issue. The servers themselves aren't running the same FTP server software from what I can tell; one is Microsoft FTP, the other I'm not sure about, but it's definitely not Microsoft.
Any idea what could cause an FTP client to time out while waiting for an NLST response on an empty directory? The visible responses from the FTP server appear to be the same, but is there a difference in how NLST responds for an empty directory that I'm unaware of?
This problem is consistent on these two servers. Everything works fine on directories that contain files or subdirectories, but not when they are empty.
Any thoughts or directions would be appreciated.
Thanks!
Eric Sipple
I tried this in WebMethods IS Version 6.5 Updates WmPRT_6-5-1_SP1, IS_6-5_SP3.
It worked perfectly first time.
I turned on debugging on the FTP server (Debian's default ftpd). WebMethods' NLST honours the active/passive parameter passed to it.
There's nothing special about the NLST command, nor its correct behaviour with an empty directory -- if LIST works, then so should RETR, STOR and NLST. If NLST works with a non-empty directory, it should work with an empty one.
So my guess is that either:
Your version of WM has a bug mine doesn't
Your FTP server has a bug mine doesn't
There's a wacky protocol-aware firewall in your system that doesn't like FTP data sockets with no data in them.
Firewall vendors are a bit wayward when it comes to FTP...
When testing with other clients, make sure it's from the same machine on which WebMethods Integration Server is running.
Just for the record, here's what should happen for an active NLST:
client opens a listening socket, and sends a PORT command with that socket's details
client sends NLST command
server connects to client's listening socket (this is the data socket)
server transmits listing over data socket (in this case, zero bytes)
server closes data socket
... and in passive mode
client sends PASV command
server opens a listening socket, and replies with PASV response containing its details
client connects to listening socket (this is the data socket)
client sends NLST command
server transmits listing over data socket (zero bytes again)
server closes data socket
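For comparison, this is how the passive-mode exchange looks from a plain Java client using Apache commons-net, whose FTPClient.listNames() issues NLST under the hood (host, credentials and path are placeholders):

import org.apache.commons.net.ftp.FTPClient;

public class EmptyDirListing {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");                 // placeholder host
        ftp.login("user", "password");                  // placeholder credentials
        ftp.enterLocalPassiveMode();                    // client opens the data connection (PASV)
        String[] names = ftp.listNames("/empty/dir");   // sends NLST; an empty directory should yield a zero-length array
        System.out.println(names == null ? "no listing returned" : names.length + " entries");
        ftp.logout();
        ftp.disconnect();
    }
}

If this hangs the same way when run from the Integration Server's machine, the firewall theory above becomes much more likely.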
I am not sure if it was the same problem but I had similar symptoms a while ago using another FTP client in Java (commons.net). The problem turned out to be caused by the active/passive mode of the connection.
I'm sorry I can't give you more details, that's all I can remember... hope that helps.
Guillermo Vasconcelos was correct in his answer. There are two FTP modes, active and passive, and the default is active. Active mode requires the server to connect back to the client on some TCP/IP port. This does not work well with firewalls, because chances are that the port is blocked or, if you are behind a router with NAT, not mapped.
If you use Passive (PASV) mode instead, you should not get the hang.
I'm going to run some new tests with the passive setting tomorrow when maintenance is done here, but I'm not sure that's the issue. We are able to get a directory listing if there are files or subdirectories in the directory; it only fails when the directory we're doing the NLST on is empty.
Would the active/passive difference only manifest for an empty directory, or is there another possibility?
FTP requires that both the specified port and the one above it be opened through the firewall. When I had problems with webMethods timing out, it was because the firewall did not have the return port open.
Howard