Keep TCP socket-connection alive if no data is currently available - java

I have implemented a small HTTP server which allows clients to connect via HTTP and streams audio data to them.
My problem is that when there is currently no audio data available, the connection seems to break, either because the client disconnects or for some other reason inside Android.
Here is what I am doing:
serverSocket = new ServerSocket(0);
Socket socket = serverSocket.accept();
socket.setKeepAlive(true);
BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
BufferedWriter out = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
out.write("HTTP/1.1 200 OK\r\n");
out.write("Content-Type: audio/wav\r\n");
out.write("Accept-Ranges: none\r\n");
out.write("Connection: keep-alive\r\n"); // additionally added due to answer below
out.write("\r\n");
out.flush();
..
while ((len = otherInput.read(audioBuffer)) != -1) {
    out.write(audioBuffer, 0, len);
}
Of course this is just a snippet of the real code, but it shows what I'm doing.
Now, when otherInput.read() takes a long time because there is no data available at the moment, I get a
java.net.SocketException: sendto failed: EPIPE (Broken pipe)
at libcore.io.IoBridge.maybeThrowAfterSendto(IoBridge.java:499)
at libcore.io.IoBridge.sendto(IoBridge.java:468)
at java.net.PlainSocketImpl.write(PlainSocketImpl.java:508)
at java.net.PlainSocketImpl.access$100(PlainSocketImpl.java:46)
at java.net.PlainSocketImpl$PlainSocketOutputStream.write(PlainSocketImpl.java:270)
at java.io.BufferedOutputStream.flushInternal(BufferedOutputStream.java:185)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:139)
Can anyone tell me how I can prevent the connection from breaking/closing without a manual heartbeat? Am I missing a header, or am I using something the wrong way?
Thanks for your help in advance; I have been trying and searching like crazy in the meantime.

There are at least two problems here.
1. Clients of HTTP servers are not as well-behaved as you seem to expect. Consider a browser: the user can shut it down, go back, navigate away, etc. at any time, even in the middle of a page load. If you get any error transmitting to the client, there is nothing you can do except close the connection and forget about it. The same applies to any server really, but it applies especially to HTTP servers.
2. You're not reading the entire request sent by the client. You need to read all the headers until a blank line, then read the body up to the length specified in the Content-Length header, or all the chunks, or until end of stream, as the case may be: see RFC 2616. The effect of this may be that you trigger the behaviour at (1).
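As a rough sketch of point 2 (not the poster's code; it assumes a simple GET request with ISO-8859-1 headers and the usual java.io imports), the server could consume the request before streaming the response:
BufferedReader in = new BufferedReader(
        new InputStreamReader(socket.getInputStream(), "ISO-8859-1"));
String requestLine = in.readLine();                 // e.g. "GET /stream HTTP/1.1"
String headerLine;
int contentLength = 0;
// Read headers until the blank line that terminates them.
while ((headerLine = in.readLine()) != null && !headerLine.isEmpty()) {
    if (headerLine.toLowerCase().startsWith("content-length:")) {
        contentLength = Integer.parseInt(
                headerLine.substring("content-length:".length()).trim());
    }
}
// If the client sent a body, read it as well so the whole request has been consumed.
for (int i = 0; i < contentLength && in.read() != -1; i++) {
    // discard
}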

Related

Java Server - TCP Socket detect EOF without closing the socket connection

Is there a way to detect the EOF when reading from a TCP Socket whilst the socket connection stays open?
Most of the examples I have seen are something along the lines of:
int n = 0;
while ((n = inStream.read(data)) != -1) {
    destType.write(data, 0, n);
}
However, this means that you are forced to create a new connection every time you want to receive a new piece of data.
In my case this is a constant stream of images sent across the socket as bytes, and I would like to process each image without closing the connection in between, so that I can associate all the images with a single user. It is also simply more efficient not to open and close connections at a high frequency.
So is there a way to do this or some information on a possible alternative?
No - if the connection stays open, the stream hasn't reached its end. The idea of a stream reporting EOF and then having more data later goes against the principle of a stream.
If you want to send multiple messages across a TCP stream, the simplest way is to prefix each message with its length:
HEADER BODY
HEADER BODY
HEADER BODY
Then the client will read the header, find out how long the message is, read it, then read the next header, etc.
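A minimal sketch of that framing, assuming a 4-byte length header written with DataOutputStream (the class and method names below are illustrative, not from the question):
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class LengthPrefixedFraming {

    // Sender side: HEADER (4-byte length) followed by BODY.
    static void writeMessage(OutputStream rawOut, byte[] body) throws IOException {
        DataOutputStream out = new DataOutputStream(rawOut);
        out.writeInt(body.length);
        out.write(body);
        out.flush();
    }

    // Receiver side: read the header, then exactly that many body bytes.
    // readInt() throws EOFException only when the peer really closes the connection.
    static byte[] readMessage(InputStream rawIn) throws IOException {
        DataInputStream in = new DataInputStream(rawIn);
        int length = in.readInt();
        byte[] body = new byte[length];
        in.readFully(body);
        return body;
    }
}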

Safe use of HttpURLConnection

When using HttpURLConnection does the InputStream need to be closed if we do not 'get' and use it?
i.e. is this safe?
HttpURLConnection conn = (HttpURLConnection) uri.getURI().toURL().openConnection();
conn.connect();
// check for content type I don't care about
if (conn.getContentType().equals("image/gif")) return;
// get stream and read from it
InputStream is = conn.getInputStream();
try {
// read from is
} finally {
is.close();
}
Secondly, is it safe to close an InputStream before all of its content has been fully read?
Is there a risk of leaving the underlying socket in ESTABLISHED or even CLOSE_WAIT state?
According to http://docs.oracle.com/javase/6/docs/technotes/guides/net/http-keepalive.html and the OpenJDK source code (when keepAlive == true):
If the client calls HttpURLConnection.getInputStream().close(), a later call to HttpURLConnection.disconnect() will NOT close the Socket, i.e. the Socket is reused (cached).
If the client does not call close(), calling disconnect() will close the InputStream and the Socket.
So in order to reuse the Socket, just call InputStream.close(). Do not call HttpURLConnection.disconnect().
is it safe to close an InputStream before all of its content has been read
You need to read all of the data in the input stream before you close it so that the underlying TCP connection gets cached. I have read that this should not be required in the latest Java versions, but reading the whole response has always been mandated for connection re-use.
Check this post: keep-alive in java6
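A sketch of that read-fully-then-close pattern (assuming url is a java.net.URL and the usual java.net/java.io imports; error handling omitted):
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
try (InputStream in = conn.getInputStream()) {
    byte[] buffer = new byte[8192];
    while (in.read(buffer) != -1) {
        // consume (or process) the response body fully
    }
}
// Deliberately no conn.disconnect() here, so the underlying socket can be reused.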
Here is some information regarding the keep-alive cache. All of this information pertains to Java 6, but is probably also accurate for many prior and later versions.
From what I can tell, the code boils down to:
If the remote server sends a "Keep-Alive" header with a "timeout" value that can be parsed as a positive integer, that number of seconds is used for the timeout.
If the remote server sends a "Keep-Alive" header but it doesn't have a "timeout" value that can be parsed as a positive integer and "usingProxy" is true, then the timeout is 60 seconds.
In all other cases, the timeout is 5 seconds.
This logic is split between two places: around line 725 of sun.net.www.http.HttpClient (in the "parseHTTPHeader" method), and around line 120 of sun.net.www.http.KeepAliveCache (in the "put" method).
So, there are two ways to control the timeout period:
Control the remote server and configure it to send a Keep-Alive header with the proper timeout field
Modify the JDK source code and build your own.
One would think that it would be possible to change the apparently arbitrary five-second default without recompiling internal JDK classes, but it isn't. A bug was filed in 2005 requesting this ability, but Sun refused to provide it.
If you really want to make sure that the connection is closed, you should call conn.disconnect().
The open connections you observed are because of the HTTP 1.1 keep-alive feature (also known as HTTP persistent connections).
If the server supports HTTP 1.1 and does not send Connection: close in the response header, Java does not immediately close the underlying TCP connection when you close the input stream. Instead it keeps it open and tries to reuse it for the next HTTP request to the same server.
If you don't want this behaviour at all you can set the system property http.keepAlive to false:
System.setProperty("http.keepAlive","false");
When using HttpURLConnection does the InputStream need to be closed if we do not 'get' and use it?
Yes, it always needs to be closed.
i.e. is this safe?
Not 100%; you run the risk of getting an NPE. Safer is:
InputStream is = null;
try {
is = conn.getInputStream();
// read from is
} finally {
if (is != null) {
is.close();
}
}
You also have to close the error stream if the HTTP request fails (anything but 200):
try {
...
}
catch (IOException e) {
connection.getErrorStream().close();
}
If you don't do it, all requests that don't return 200 (e.g. timeout) will leak one socket.
Since Java 7 the recommended way is
try (InputStream is = conn.getInputStream()) {
// read from is
// ...
}
as with any other class implementing Closeable/AutoCloseable. close() is called at the end of the try {...} block.
Closing the input stream also signals that you are done with reading. Otherwise the connection hangs around until the finalizer closes the stream.
The same applies to the output stream, if you are sending data.
There is no need to get and close the ErrorStream separately. Even though it is exposed as an InputStream, it uses the connection's InputStream in combination with a buffer, so closing the InputStream is sufficient.
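Putting the advice above together, a hedged sketch of the whole pattern (status handling simplified; url is assumed to be a java.net.URL):
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
int status = conn.getResponseCode();
if (status == HttpURLConnection.HTTP_OK) {
    try (InputStream in = conn.getInputStream()) {
        // read the full body here
    }
} else {
    // On errors the body, if any, is on the error stream; drain and close it.
    InputStream err = conn.getErrorStream();
    if (err != null) {
        try (InputStream e = err) {
            while (e.read() != -1) {
                // drain
            }
        }
    }
}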

java.net.SocketTimeoutException: Read timed out

I have an application with a client-server architecture. The client uses Java Web Start with Java Swing/AWT, and the server uses an HTTP server/Servlet running in Tomcat.
Communication is done through object serialization: the client creates an ObjectOutputStream, serializes the parameters into a byte array and sends it to the server, which in turn uses an ObjectInputStream to deserialize it.
The application communicates correctly up to a certain level of concurrency, at which point "SocketTimeoutException: Read timed out" errors start to appear. The error happens when the server invokes ObjectInputStream.readObject() in my servlet's doPost method.
Tomcat becomes slow and, as the errors accumulate, server response time degrades until it crashes and I have to restart the server; after the restart everything works again.
Has anyone run into this problem?
Client Code
URLConnection conn = url.openConnection();
conn.setDoOutput(true);
OutputStream os = conn.getOutputStream();
ObjectOutputStream oss = new ObjectOutputStream(os);
oss.writeUTF("protocol header sample");
oss.writeObject(_parameters);
oss.flush();
oss.close();
Server Code
ObjectInputStream input = new ObjectInputStream(_request.getInputStream());
String method = input.readUTF();
parameters = input.readObject();
input.readObject() is where the error is
You haven't given us much information to go on, especially about the client side. But my suspicion is that the client side is:
failing to set the Content-Length header (or setting it to the wrong value),
failing to flush the output stream, and/or
not closing the output side of the socket.
Mysterious.
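For what it's worth, here is a sketch of a client that avoids those three pitfalls by serializing to a byte array first, sending an exact Content-Length, flushing and closing (names are illustrative, not the poster's actual code):
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(bytes);
oos.writeUTF("protocol header sample");
oos.writeObject(_parameters);
oos.flush();                                       // everything is now in the byte array

HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setDoOutput(true);
conn.setFixedLengthStreamingMode(bytes.size());    // sends a correct Content-Length
try (OutputStream os = conn.getOutputStream()) {
    bytes.writeTo(os);                             // write and flush the serialized body
}
int status = conn.getResponseCode();               // forces the request to complete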
Based on your updated question, it looks like none of the above. Here are a couple of other possibilities:
For some reason the client side is either locking up entirely during serialization or taking a VERY LONG TIME.
There is a proxy between the client and server that is causing problems.
You are experiencing load-related network problems, or network hardware problems.
Another possible explanation is that you have a memory leak, and that the slowdown is caused by the GC taking more and more time as you run out of memory. This will show up in the GC logs if you have them enabled.
I think that during high concurrency, the socket timeout configured in Tomcat expires and the connection is closed. Tomcat's next read on that connection then happens later than the socket timeout specified on the server allows.
If you want to avoid this problem you have to increase the timeout on the server side, which is what is expiring in your case, but that is not advisable.
BTW, you did not give enough information. Did you increase the number of connection threads in Tomcat? If you did, this would surely happen.

How does getWriter() function in an HttpServletResponse?

In the method service(), we use
PrintWriter out = res.getWriter();
Please tell me how it returns a PrintWriter object, and then makes a connection to the browser and sends the data to the browser.
It doesn't make a connection to the browser - the browser has already made a connection to the server. It either buffers what you write in memory, and then transmits the data at the end of the request, or it makes sure all the headers have been written to the network connection and then returns a PrintWriter which writes data directly to that network connection.
In the buffering scenario there may be a fixed buffer size, and if you exceed that the data written so far will be "flushed" to the network connection. The big advantage of having a buffer at all is that if something goes wrong half-way through, you can change your response to an error page. If you've already started writing the response when something goes wrong, there's not a lot you can do to indicate the error cleanly.
(There's also the matter of transmitting the content length before any of the content, for keep-alive connections. If you run out of buffer before completing the response, I'm reliably informed that the response will use a chunked encoding.)
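To make the buffering concrete, here is a sketch inside an HttpServlet subclass using the Servlet API's buffer-related methods (setBufferSize, isCommitted, reset and sendError are real methods; renderRestOfPage is a hypothetical placeholder):
protected void doGet(HttpServletRequest req, HttpServletResponse res)
        throws IOException {
    res.setBufferSize(16 * 1024);        // ask the container for a 16 KB response buffer
    PrintWriter out = res.getWriter();
    try {
        out.println("...start of the page...");
        renderRestOfPage(out);           // hypothetical rendering step that might fail
    } catch (RuntimeException e) {
        if (!res.isCommitted()) {
            // Nothing has been flushed to the client yet, so we can still change the response.
            res.reset();                 // clears the buffer, the status line and the headers
            res.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
        }
        // If the buffer was already flushed, there is little left to do here.
    }
}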
One fairly simple implementation:
PrintWriter getWriter() throws java.io.IOException {
return new PrintWriter(socket.getOutputStream());
}
Also note that several open source implementations of the Servlet API are available, so you can see how it can be done.
I believe the official implementation has been open sourced too, and is included with the GlassFish server.

java.net.SocketException: Software caused connection abort: recv failed

I haven't been able to find an adequate answer to what exactly the following error means:
java.net.SocketException: Software caused connection abort: recv failed
Notes:
This error is infrequent and unpredictable; although getting this error means that all future requests for URIs will also fail.
The only solution that works (also, only occasionally) is to reboot Tomcat and/or the actual machine (Windows in this case).
The URI is definitely available (as confirmed by asking the browser to do the fetch).
Relevant code:
BufferedReader reader;
try {
URL url = new URL(URI);
reader = new BufferedReader(new InputStreamReader(url.openStream()));
} catch( MalformedURLException e ) {
throw new IOException("Expecting a well-formed URL: " + e);
}//end try: Have a stream
String buffer;
StringBuilder result = new StringBuilder();
while( null != (buffer = reader.readLine()) ) {
result.append(buffer);
}//end while: Got the contents.
reader.close();
This can also happen if your TLS client cannot be authenticated by a server that is configured to require client authentication.
This usually means that there was a network error, such as a TCP timeout. I would start by placing a sniffer (Wireshark) on the connection to see if you can spot any problems. If there is a TCP error, you should be able to see it. Also, you can check your router logs, if applicable. If wireless is involved anywhere, that is another source for these kinds of errors.
This error occurs when a connection is closed abruptly (when a TCP connection is reset while there is still data in the send buffer). The condition is very similar to a much more common 'Connection reset by peer'. It can happen sporadically when connecting over the Internet, but also systematically if the timing is right (e.g. with keep-alive connections on localhost).
An HTTP client should just re-open the connection and retry the request. It is important to understand that when a connection is in this state, there is no way out of it other than to close it. Any attempt to send or receive will produce the same error.
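If you do it by hand, a minimal retry sketch might look like this (assuming it lives in a method that returns the fetched page as a String; the retry count is arbitrary). The HttpClient approach below handles this for you:
IOException lastFailure = null;
for (int attempt = 1; attempt <= 3; attempt++) {
    HttpURLConnection conn = (HttpURLConnection) new URL(URI).openConnection();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(conn.getInputStream()))) {
        StringBuilder result = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            result.append(line);
        }
        return result.toString();        // success
    } catch (SocketException e) {
        lastFailure = e;                 // the connection is unusable: drop it and retry
        conn.disconnect();
    }
}
throw lastFailure;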
Don't use URL.openStream() directly; use Apache Commons HttpClient, which has a retry mechanism, connection pooling, keep-alive and many other features.
Sample usage:
HttpClient httpClient = HttpClients.custom()
.setConnectionTimeToLive(20, TimeUnit.SECONDS)
.setMaxConnTotal(400).setMaxConnPerRoute(400)
.setDefaultRequestConfig(RequestConfig.custom()
.setSocketTimeout(30000).setConnectTimeout(5000).build())
.setRetryHandler(new DefaultHttpRequestRetryHandler(5, true))
.build();
// the httpClient should be re-used because it is pooled and thread-safe.
HttpGet request = new HttpGet(uri);
HttpResponse response = httpClient.execute(request);
reader = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
// handle response ...
Are you accessing http data? Can you use the HttpClient library instead of the standard library? The library has more options and will provide better error messages.
http://hc.apache.org/httpclient-3.x/
The only time I've seen something like this happen is when I have a bad connection, or when somebody is closing the socket that I am using from a different thread context.
Try adding 'autoReconnect=true' to the jdbc connection string
This will happen from time to time either when a connection times out or when a remote host terminates its connection (closed application, computer shutdown, etc.). You can avoid it by managing sockets yourself and handling disconnections in your application via its communications protocol, then calling shutdownInput and shutdownOutput to clean up the session.
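A sketch of such an orderly shutdown on a socket you manage yourself (illustrative only):
socket.shutdownOutput();                 // send FIN: "I have nothing more to send"
InputStream in = socket.getInputStream();
while (in.read() != -1) {
    // keep reading until the peer has finished its side of the conversation
}
socket.close();                          // both directions are done now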
Check whether you have another service or program running on the HTTP port. It happened to me when I tried to use the port and it was taken by another program.
If you are using Netbeans to manage Tomcat, try to disable HTTP monitor in Tools - Servers
I too had this problem. My solution was:
sc.setSoLinger(true, 10);
Quoting from a website: "By using the setSoLinger() method, you can explicitly set a delay before a reset is sent, giving more time for data to be read or sent."
It may not be the answer for everybody, but it helped in my case.
