I am curious where all the data is stored before I read the request body.
For example, suppose a file is being uploaded to the server and a Java program receives it. It is impossible to keep the whole file content in buffers if the file is very big, say 100 GB.
Does Java stream this file from the remote computer? I mean: the remote computer sends a small part of the data, Java receives this part and waits for the next one. When the remote computer decides that the server has read the first part, it sends the second part, and so on.
Do Java and its HttpServer work this way, or do they store the whole file on disk as Apache+PHP does?
The mechanism you're looking for is implemented by the TCP stack of the operating system. Buffers are used both on the sending and the receiving side.
TCP essentially works like this: the receiving machine replies to the sender with "OK, got it, now send the next part" - an ACK packet. This mechanism is also responsible for adjusting the transfer speed to the speed of your connection (instead of sending data too fast and causing packet loss).
It is a well-oiled machine, but when something goes wrong it usually manifests as a timeout. (In your example, if you wait a long time before reading the request body, the sending machine will simply give up.)
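To make that concrete: with the JDK's built-in com.sun.net.httpserver.HttpServer, the request body is exposed as an InputStream backed by the socket, so you pull the upload through the OS buffers a chunk at a time and decide yourself where it goes. A minimal sketch, assuming an /upload endpoint and a temp-file target (both placeholders):

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class UploadServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/upload", exchange -> {
            // The body is a stream over the socket: reading it pulls data
            // through the OS TCP buffers chunk by chunk, so a 100 GB upload
            // never has to fit in memory.
            Path target = Files.createTempFile("upload-", ".bin");
            try (InputStream body = exchange.getRequestBody()) {
                Files.copy(body, target, StandardCopyOption.REPLACE_EXISTING);
            }
            byte[] reply = "stored\n".getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(reply);
            }
        });
        server.start();
    }
}

If you never read the body, only the OS-level buffers fill up; the TCP window then closes and the sender stalls - exactly the flow control described above.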
I am developing a NodeMCU WebSocket server with an Android client app, using Java. I successfully created the client and connected to the server through a WebSocket client service. I can detect server failure/closure when sending data, but I can't detect it at the moment it happens: if the server is powered off, I can't know until some data is sent. How can I know about a server failure at the time of the failure, using the OkHttp 4.1.0 library? Can anyone help?
How can I know about a server failure at the time of the failure, using the OkHttp 4.1.0 library?
You can't. It's not possible, but there are workarounds; see below.
Why isn't it possible? Internally, the internet is packet switched, which means data is first gathered up into packets, and then these packets are sent.
Most of the stuff you do on the web feels like a stream instead (you send one character, and one character arrives on the other side). But that's all built on protocols layered on top of the packet nature of the internet.
When you have an open connection between 2 computers via the internet, no data is actually being sent, at all. It's not like you have a line reserved. Old telephone networks did work like that: When you dialled somebody, you got a dedicated line, and once the line got interrupted, you'd hear beeps to indicate this.
That is not how the internet works. Those wires and everything in between have no idea that there is an open connection at all. That's just some bits in memory on your computer and on the server which lets them identify certain packets as part of the longer conversation those 2 machines were having, is all.
Thus we arrive at why this isn't possible: given that no packets flow whatsoever until one side actually sends data to the other, it is impossible to tell the difference between 'no data being sent right now' and 'somebody tripped over the power cable in the server park'. That's why you don't get that information until you send something. (And the reason you get it then is that, when you send something, the protocol dictates that the server send you back a confirmation of receipt. If that takes too long, your computer resends a few more times in case the packet just got lost somewhere, then eventually gives up, concludes that the server can no longer be reached, crashed, or lost power - and only then do you get the IOException.)
Workarounds
A simple one is to upgrade your own protocol: dictate that the server or client (it doesn't matter who takes the responsibility) sends a do-nothing message at least once a minute. You can then conclude, after not receiving one for 100 seconds or so, that the connection is probably dead: start a timer for 100 seconds and reset it every time you receive any data whatsoever. If the timer ever runs out, the connection is likely dead.
A variant of this idea is built into the protocol underlying connections that feel like streams of data. That protocol is TCP/IP, and the feature is called keepalive.
The problem is, you possibly don't get to dictate the TCP/IP settings for your websocket connection. If you can, turn keepalive on (for example, in Java you use Socket to make raw TCP/IP connections, and it has a .setKeepAlive(true) method). Check the API to see whether you can get at the underlying socket, or otherwise scan the docs for 'keepalive' and see if there's anything there.
I bet there won't be, which means you have to use the trick I mentioned above: update your server code to send a 'hello!' 60 seconds after any conversation, and update your client code to give up on the connection once 100 seconds have passed (the 40 extra seconds are slack; sometimes the internet gets a little backed up or servers get a little busy).
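For illustration, here is a minimal sketch of that watchdog with OkHttp 4.x in Java. The URL and the 100/60-second values are assumptions to match the numbers above; note that OkHttp's pingInterval() will also send WebSocket pings for you, and an unanswered ping surfaces as onFailure without you sending any application data:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.WebSocket;
import okhttp3.WebSocketListener;

public class WatchdogClient extends WebSocketListener {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> deadline;

    // Restart the 100-second countdown whenever anything arrives.
    private synchronized void resetDeadline(WebSocket ws) {
        if (deadline != null) deadline.cancel(false);
        deadline = timer.schedule(() -> ws.cancel(), 100, TimeUnit.SECONDS);
    }

    @Override public void onOpen(WebSocket ws, Response response) { resetDeadline(ws); }
    @Override public void onMessage(WebSocket ws, String text) { resetDeadline(ws); }

    @Override public void onFailure(WebSocket ws, Throwable t, Response response) {
        // Fires when the watchdog cancels the socket, a ping goes unanswered,
        // or a send fails - this is your "server is gone" signal.
        System.out.println("connection considered dead: " + t);
    }

    public static void main(String[] args) {
        OkHttpClient client = new OkHttpClient.Builder()
                .pingInterval(60, TimeUnit.SECONDS) // built-in do-nothing message
                .build();
        Request request = new Request.Builder().url("ws://192.168.4.1:81/").build();
        client.newWebSocket(request, new WatchdogClient());
    }
}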
I want to create a UDP OUTPUT=UPLOAD stream using Java. I will get my INPUT=SOURCE data from a named pipe or a continuously growing file opened as a FileInputStream.
My problem is, ALL of the UDP examples I can find on the internet only demonstrate console sessions - echo servers and such.
I'm trying to create a way to stream continuous content such as audio/video, and I don't care what gets lost; I'm leaving that for my user to worry about. However, my code does need to allow setting the buffer size and creating a UDP connection.
Ideally a GOOD example would show how to do both upload and download mode connections (client mode and server mode).
Can you provide some code to do this, or show me a link on the internet? The fact that I cannot find a UDP stream client/server example is ridiculous. Using UDP for console sessions tests the limits of a person's sanity! That should never even be considered optional, let alone useful. The client/server code I need must be compatible with GNU netcat (to ensure correct behavior).
I have tried this with a client:
byte[] buffer = new byte[udpPacketSize]; // e.g. 4096
int len;
// Read the source in buffer-sized chunks; each chunk becomes one datagram.
while ((len = standardInput.read(buffer)) != -1) {
    udpSocket.send(new DatagramPacket(buffer, len, host, port));
}
But when I stop sending data and disconnect, I cannot reconnect to send more data. I'm not sure whether that is what is supposed to happen, because 1) I am completely out of my element here, and 2) when I disconnect after sending the data, the remote instance of GNU netcat does not exit like it does in TCP mode.
HELP, I need a real network systems engineer to show me how to implement UDP for practical applications!
[and somebody to remove all of that garbage from the internet, but let's keep it simple]
[further: please do not respond with libraries, packages, or shell commands as a solution. I must be able to run this on any embedded device, which may not have those programs installed, and libraries do not teach me or anyone else how to do anything on their own.]
But when I stop sending data and disconnect, I cannot reconnect to send more data.
That's the way GNU Netcat's UDP mode works, at least as far as netcat-to-netcat on a single machine goes...
Your client needs to read a response from the server before disconnecting. So, while acting as the middle-man, you should not be concerned with this, as long as you can connect your network client's local client to the server's response mechanism.
In other words, you need to provide a bidirectional, half-duplex connection (1:1 communication), since you are not managing the protocol.
Alternatively, you can use a different UDP server than GNU Netcat. I have tested this one, and it works without the MUST-READ-REPLY bug, effectively meaning there is nothing wrong with my example code. The must-read-reply behavior has nothing to do with a correct UDP server implementation (unless you must connect to a GNU Netcat 0.7.1 compatible server).
It is worth noting that it isn't very useful to run GNU Netcat's UDP server mode without a driver (script/program) behind it, especially if you want a continuous process, as you could lock your remote client out of access until the process is respawned.
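For reference, here is the kind of minimal, library-free pair I mean, in plain java.net. The UdpPipe name and the 4096-byte packet size are mine; since UDP is connectionless there is no end-of-stream signal, which is also why netcat does not exit when the sender disconnects:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Usage:  java UdpPipe recv <port>           (server / download side)
//         java UdpPipe send <host> <port>    (client / upload side)
// Interoperates with: nc -u -l -p <port>  and  nc -u <host> <port>
public class UdpPipe {
    static final int PACKET_SIZE = 4096; // receive buffer must be >= sender's packet size

    public static void main(String[] args) throws IOException {
        if (args[0].equals("recv")) {
            try (DatagramSocket sock = new DatagramSocket(Integer.parseInt(args[1]))) {
                byte[] buf = new byte[PACKET_SIZE];
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                while (true) {              // UDP has no end-of-stream marker
                    sock.receive(p);        // blocks until a datagram arrives
                    System.out.write(p.getData(), 0, p.getLength());
                    System.out.flush();
                }
            }
        } else {
            InetAddress host = InetAddress.getByName(args[1]);
            int port = Integer.parseInt(args[2]);
            try (DatagramSocket sock = new DatagramSocket()) {
                byte[] buf = new byte[PACKET_SIZE];
                int len;
                while ((len = System.in.read(buf)) != -1) {
                    sock.send(new DatagramPacket(buf, len, host, port));
                }
            }
        }
    }
}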
I need some assistance on a project I am working on. It is a library that uses Jersey 1.x (1.19.1), and its job is to HTTP-post a JSON document and get the corresponding JSON response from a server.
I am facing a problem when the response from the server is big. The JSON document posted by my client application contains several jobs that must be executed by the server, and the JSON document sent back by the server is made of the outputs of these jobs. The jobs can be considered independent of each other. The server works in streaming mode, which means it starts to process the jobs before it has received the entire JSON document posted by the client, and it starts to send the outputs of the jobs as soon as they are finished. So the server starts to reply to my client application while the application is still posting the request. Here is my problem: when the request gets big, so does the response (more jobs to do), and my application freezes and at some point terminates.
I spent a lot of time trying to figure out what is happening; here is what I found and what I inferred.
For handling HTTP communication, Jersey uses a class from the JDK (in rt.jar); I forget the exact name and don't have access to my work right now, so let's call it HttpConnection. In this class a method checkError() is invoked, and it throws an IOException with only a message saying it was impossible to write to the server. By debugging I was able to see that an attribute of this class named trouble was set to true because a write() method had caught an IOException earlier; checkError() throws its IOException based on that boolean flag. The causing IOException is not easy to see because the JRE classes are compiled without debugging symbols, but I managed to determine that it was a "connection reset by peer" problem.
Then I tried to understand why the server resets the connection. I used an HTTP proxy that captures the HTTP traffic between my client application and the server, but this gave me no further clues; it even seems that the proxy is unable to properly handle the connection with the server as well!
So I tried to use Wireshark to capture the traffic and see what's wrong. Here is what I found.
On the client side, packets corresponding to the post of the request JSON document are sent, and the server starts to reply shortly after, as explained above. The server sends more and more packets, and I noticed that the TCP receive buffer on the client side (called the TCP window in Wireshark) shrinks as the server sends packets, until it becomes full (size: 0 bytes). At that point the TCP layer on the server side cannot send any more data to the client side, so the server's buffer fills up too. In the end the conversation consists only of both sides retrying to send data, failing again and again. Ultimately the server decides to send a reset packet. I believe this corresponds to the causing IOException I mentioned above.
My understanding is: as long as the server does not start to stream the response, everything is fine. When the server starts to send the response, the TCP buffer on the client side starts to fill. But since the client application is not reading the response yet, the content of this buffer is never consumed. Once the server has sent enough data to fill this buffer, it cannot send any more, and the buffer of its own TCP layer fills up too because the server keeps pushing data. As a result, the client application cannot finish sending the request JSON document. Communication is blocked in both directions, and the server decides to reset the connection.
My conclusion is: the code, as currently written, does not support such full-duplex communication, because the response from the server is not consumed as it is received. Indeed, by debugging my way through the Jersey code executed by my library, it is clear that the pattern is:
first: connection.getOutputStream().write()
and then: response.getInputStream().read()
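To make the full-duplex alternative concrete, here is the kind of pattern I believe is needed, sketched over a raw Socket with a separate reader thread (this is not actual Jersey code; the host, port, path and file name are made up):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FullDuplexPost {
    public static void main(String[] args) throws Exception {
        Path json = Paths.get("request.json");
        try (Socket socket = new Socket("server.example.com", 8080)) {
            // Reader thread: drain the response WHILE the request is still
            // being written, so the client-side TCP window never closes.
            Thread reader = new Thread(() -> {
                try (InputStream in = socket.getInputStream()) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        System.out.write(buf, 0, n);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            reader.start();

            OutputStream out = socket.getOutputStream();
            String header = "POST /jobs HTTP/1.1\r\n"
                    + "Host: server.example.com\r\n"
                    + "Content-Type: application/json\r\n"
                    + "Content-Length: " + Files.size(json) + "\r\n\r\n";
            out.write(header.getBytes(StandardCharsets.US_ASCII));
            Files.copy(json, out); // may block briefly, but the reader keeps draining
            out.flush();
            reader.join();
        }
    }
}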
In my opinion, the root cause of the problem is that the library I am working on uses Jersey in this synchronous manner, which does not fit well with the way the server works (streaming the response while the request is still being sent to it).
I searched the Internet a lot for a solution that keeps Jersey 1.19.1, so I can improve the library with as little impact as possible, but I failed. This is why I am asking for help here now ;)
So basically my question is: is it possible to do what I need while keeping the Jersey client library 1.19.1, and if yes, how? If not, what HTTP client library should I use for my library (to write a POST request and read the corresponding response at the same time)? If you could give me a basic example so I can get on track quickly, it would be much appreciated.
One last thing: curl works just fine. I can fully post the exact same JSON document and get the response with it, so there is no problem on the server side, contrary to what I suspected at the very beginning of my investigation. And it scales fine (I tried sending huge JSON documents). Of course I made sure the HTTP headers of the post are the same for my library and for curl.
Thanks a lot for reading me and for your answers.
Best regards,
Loïc
From an FTP server's perspective: when a client requests a file through the RETR command, the server creates a data connection (socket) to the client on the specified port and starts the transfer by writing to the output stream. The server is coded (in Java) in such a way that after the write to the socket is complete, the output stream is flushed and the socket is closed. After this, the code "226" is sent to the client on the control channel.
Since the connection is over a very slow network, the 226 message arrives before the actual data transfer is complete. This is a tricky situation: the client code cannot be changed, and the server has to make sure that 226 is sent only after the client has received the data.
I searched the internet and got a few suggestions, but I am not sure which one is standard:
1. use the setSoLinger() method to turn on SO_LINGER and set a timeout (sketched below).
2. introduce a delay after writing each byte to the socket (performance will be impacted for fast connections).
Are there any other options besides the above? Any idea what standard the FTP servers on Linux/Solaris/Windows follow for sending 226?
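For option 1, the sequence would look roughly like the sketch below; the 30-second linger value is my assumption. One caveat: a lingering close() only waits until the client's TCP stack has acknowledged the data, which is still not proof that the client application has read it, but it gets the 226 much closer to the truth than flushing and closing without it:

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class RetrHandler {
    // Sketch of the server side of one RETR transfer.
    static void sendFile(Socket dataConn, Socket controlConn, Path file) throws Exception {
        // With SO_LINGER on, close() blocks (up to 30 s here) until the
        // client's TCP stack has acknowledged all outstanding data.
        dataConn.setSoLinger(true, 30);
        try (OutputStream out = dataConn.getOutputStream()) {
            Files.copy(file, out);
            out.flush();
        } // the socket is closed here; the lingering close is what waits
        // Only now report completion on the control channel.
        OutputStream ctrl = controlConn.getOutputStream();
        ctrl.write("226 Transfer complete.\r\n".getBytes(StandardCharsets.US_ASCII));
        ctrl.flush();
    }
}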
I could see a similar Stack Overflow thread, "When should 226 be sent from the FTP server?", but could not find much there related to my question.
Help here is really appreciated... Thanks
Do not go with the delay, for sure. The only thing I can think of is to build a proxy layer that intercepts the acknowledge code, checks for the file, and reroutes the code to the application - sort of what Telerik Fiddler does as an application.
I used the same concept before with JMS acknowledge modes when delivering messages to the server, where I had to implement the same thing.
Wish you all the luck, my friend.
I want to implement a server/client application using Netty. As an example, suppose it needs to upload and download files and receive notifications when new files are uploaded. The problem is that the client must receive notifications even while downloading (or uploading) a file. I can see a few options:
Only send small messages over TCP containing URLs to files, download and upload over HTTP.
Open several parallel connections over TCP, using one for small messages and one for large (or one for each large message).
Write a chunking handler which automatically splits messages into chunks under 64 KB (say) and allows chunks from different messages to be interleaved (see the sketch after this list). From the documentation, it seems ChunkedWriteHandler does not do this.
What I like about option 3 is that the client only needs to authenticate once, there is no possibility of one connection breaking while another is maintained, and so on. But is it reasonable? And if yes, does such a solution already exist?
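To make option 3 concrete, the interleaving I have in mind is a trivial framing layer, something like the sketch below (not Netty code; the field layout and the 64 KB cap are arbitrary choices of mine):

import java.io.DataOutputStream;
import java.io.IOException;

// Minimal framing: [4-byte messageId][4-byte length][payload]. Frames of
// different messages may be interleaved on one connection; a zero length
// marks the end of a message. A small notification therefore never waits
// behind more than ~64 KB of file data.
public class FrameWriter {
    static final int MAX_CHUNK = 64 * 1024;
    private final DataOutputStream out;

    public FrameWriter(DataOutputStream out) {
        this.out = out;
    }

    public synchronized void writeChunk(int messageId, byte[] data, int off, int len)
            throws IOException {
        out.writeInt(messageId);
        out.writeInt(len); // caller keeps len <= MAX_CHUNK
        out.write(data, off, len);
        out.flush();
    }

    public synchronized void endMessage(int messageId) throws IOException {
        writeChunk(messageId, new byte[0], 0, 0); // zero-length chunk = end marker
    }
}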
Chunks are nothing but HTTP messages. Try using a socket client which buffers and then writes your file to Netty chunk by chunk over one single connection, then use Netty's HTTP chunk aggregator handler to decode the chunks. The client implementation is pretty simple. Most of the server-side implementation can be found under org.jboss.netty.example.http.upload.
If you have control of both client and server, use WebSockets. You are free to invent your own file transfer protocol on top of them, including notifications and whatnot. Kermit goes WebSocket ;-)