Stop Socket with timeout from waiting after data read from socket - java

I am trying to create a Java HTTP server using TCP sockets. HTTP 1.1 has a timeout value that enables a connection to stay persistent and wait a short while for possible further data from the client. I am trying to implement this timer in my program using clientSocket.setSoTimeout(). This does leave the connection open for a certain amount of time, but it seems to wait for that exact amount of time before allowing the next request to be read.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
Request 2 is read. The socket waits until the 5 seconds are up again.
This becomes a problem if my timeout is set to a large value. It should not happen at all: a request should be processed as soon as it is received, and the timeout should expire only if no data is received throughout the specified duration.
Can anyone advise me on how I could resolve this?
Edit:
For people who face a similar problem, here is my solution:
Since the client waits until the timeout before receiving all the data, I guessed that the client does not know that all the data from the server has been received. Hence, I added a Content-Length field to the HTTP response. Now my client no longer hangs after receiving the data. setSoTimeout() does indeed work as stated!
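For anyone who wants the concrete shape of that fix, here is a minimal sketch of writing such a response (the variable names and body content are illustrative, and java.io plus java.nio.charset imports are assumed):

// Build the body first so its exact length is known
byte[] body = "<html><body>Hello</body></html>".getBytes(StandardCharsets.UTF_8);
OutputStream out = clientSocket.getOutputStream();
out.write(("HTTP/1.1 200 OK\r\n"
        + "Content-Type: text/html\r\n"
        + "Content-Length: " + body.length + "\r\n"
        + "\r\n").getBytes(StandardCharsets.US_ASCII));
out.write(body);
out.flush();
// With Content-Length present, the client knows when the response is
// complete and does not block until its read timeout expires.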

OK, when you receive a connection, start a new thread like this:
class ClientService extends Thread {

    private final Socket clientSocket;

    public ClientService(Socket clientSocket) {
        this.clientSocket = clientSocket;
    }

    public void run() {
        // do your work with the Socket clientSocket here
    }
}
And this is how your server code should then look:
while (true) {
    Socket clientSocket = server.accept();
    new ClientService(clientSocket).start();
}
This allows each connection to be processed without one waiting for another to time out.
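Here is a minimal sketch of a run() body for persistent (keep-alive) connections; handleRequest() is a hypothetical method standing in for your actual request processing, and java.io plus java.net imports are assumed:

public void run() {
    try {
        // Close idle connections after 5 seconds without data;
        // reads return immediately once data actually arrives.
        clientSocket.setSoTimeout(5000);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(clientSocket.getInputStream()));
        String requestLine;
        while ((requestLine = in.readLine()) != null) {
            handleRequest(requestLine, clientSocket); // hypothetical handler
        }
    } catch (SocketTimeoutException e) {
        // No request arrived within the timeout: end the keep-alive connection
    } catch (IOException e) {
        // Log and fall through to close the socket
    } finally {
        try { clientSocket.close(); } catch (IOException ignored) { }
    }
}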

HTTP 1.1 has a timeout value that enables a connection to stay persistent and wait a short while for possible further data from the client.
Not really. It has a Connection: keep-alive setting, which is the default behaviour, and it allows endpoints to close connections that aren't in use after a period of idleness, but it doesn't have a timeout property itself.
I am trying to implement this timer in my program using clientSocket.setSoTimeout().
This has nothing whatsoever to do with HTTP. It is a socket read timeout.
This does leave the connection open for a certain amount of time, but it seems to wait for that exact amount of time before allowing the next request to be read.
No it won't. It will cause read methods to throw SocketTimeoutException if no data arrives within the timeout period. Nothing else.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
No it doesn't.
Request 2 is read. The socket waits until the 5 seconds are up again.
No it doesn't. You've made all this up. It is fantasy.
This becomes a problem if my timeout is set to a large value.
It isn't a problem with any timeout values whether large or small, because it simply does not happen.
It should not happen at all: a request should be processed as soon as it is received, and the timeout should expire only if no data is received throughout the specified duration.
That is exactly what Socket.setSoTimeout() already does.
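A minimal sketch demonstrating these semantics (assuming a connected Socket named socket):

socket.setSoTimeout(5000); // applies only to read calls
InputStream in = socket.getInputStream();
byte[] buf = new byte[8192];
try {
    int n = in.read(buf); // returns as soon as any data arrives
    // ... process the n bytes immediately; there is no 5-second wait ...
} catch (SocketTimeoutException e) {
    // thrown only if no data at all arrived within 5 seconds
}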
Your question is founded on a fallacy.

Related

Close TCP connection after a minute of inactivity

I have a server that creates a new thread (client handler) every time a client connects to it. I want the client handler to close the connection if it has not received a DataInputStream message from the client in a minute.
I have tried:
if (System.currentTimeMillis() > startTime + 60000) {
    System.out.println("Time up!!! for client: " + this.socket);
    dos.writeUTF("close");
    this.socket.close();
    break;
}
However, this only reaches the if statement after the client sends a message, and I want to close the connection automatically after 1 minute of inactivity from the client.
There are multiple scenarios:
1. The timeout after the last read should be 60 seconds: simply set the read() timeout with socket.setSoTimeout(60 * 1000). If the reading thread has not received any data for a minute, the read() call will fail with a SocketTimeoutException. Handle that exception accordingly, and you've got it.
2. The total connection timeout shall be 60 seconds, but the input is still done in blocking mode: same as (1), but after each successful read you reduce the given timeout: socket.setSoTimeout(msLeft); (see the sketch after this list).
3. The total connection timeout shall be 60 seconds, but the input can be done in non-blocking mode: this is very different from the classes you use in your example, but here is a good guide to it: Non-blocking sockets.
However, I would stick to blocking I/O if possible; it's simply better for a lot of reasons.
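A sketch of option 2 (a total 60-second budget with blocking reads; dis is the DataInputStream from the question, everything else is illustrative):

long deadline = System.currentTimeMillis() + 60_000;
while (true) {
    long msLeft = deadline - System.currentTimeMillis();
    if (msLeft <= 0) {
        socket.close(); // total budget used up
        break;
    }
    // Shrink the read timeout to whatever is left of the budget
    socket.setSoTimeout((int) msLeft);
    try {
        String msg = dis.readUTF();
        // ... handle msg ...
    } catch (SocketTimeoutException e) {
        socket.close(); // no message within the remaining budget
        break;
    }
}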

send keep alive on long asynchronous request in spring server

I have a controller in Spring that gets a POST request and handles it asynchronously (using a DeferredResult object as the return value).
The response for this request is written to the HTTP stream directly (HttpServletResponse.getWriter().print()), and when it's done writing, it sets a result on the DeferredResult object to close the connection.
I'm writing my response in stream chunks.
I have an issue with this request handling because the client closes the connection if I don't write to it for 1 minute. (I can write some chunks and then stop writing for 1 minute, so the connection gets closed in the middle of my procedure.)
I want to control the connection-closing procedure: I want to send a keep-alive when I'm not writing any data to the stream, so that the connection won't be closed until I decide to close it from the server side.
I couldn't find out how to get control of the connection from the controller on the server.
Please assist.
Thanks.
There is no such thing as a "keep alive" during an ongoing request or response in HTTP which can help with idle timeouts when receiving a request or response.
HTTP keep alive is only about keeping the TCP connection open after a response in order to process more requests on the same connection. TCP keep alive is instead used to detect connection loss without TCP shutdown and can also be used to prevent idle timeouts in stateful packet filters (as used in firewalls or NAT routers) in between client and server. It does not prevent idle timeouts at the application level though since it does not transport any data visible to the application level.
Note that the way you want to use HTTP is contrary to how HTTP was originally designed. It was designed for a client sending a full request and the server sending a full response immediately, not for the server sending some parts of the response, idling for some time and then sending some more. The proper way to implement such behaviour would be to use WebSockets. With WebSockets both client and server can send new messages at any time (i.e. there is no request-response schema), and it also supports keep-alive messages. If WebSockets are not an option, you can instead implement a polling client which regularly polls for new data from the server with a new request.
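If the idle timeout is actually enforced by a stateful packet filter in between rather than by the client, enabling TCP keep-alive on the underlying socket can help, as described above. A minimal sketch on a plain java.net.Socket (the endpoint is illustrative; whether you can reach the underlying socket from a Spring controller is a separate question):

Socket socket = new Socket("example.com", 80); // illustrative endpoint
// Ask the OS to send TCP keep-alive probes while the connection is idle.
// Probe interval and count are OS-level settings, not per-socket in java.net.
socket.setKeepAlive(true);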
I ran into a similar need just recently. The server code executes a long-running operation that can take as long as 30 minutes to return, and the client times out long before that.
The solution was to have the long-running operation send periodic "keep alive" packets of data to the client via a "callback" argument provided by the request handler method. The callback is nothing more than a function (think of a lambda in Java) that takes as parameter the "keep alive" data packet to send to the client, and then writes that packet to the client via the java.io.PrintWriter reference that you can get off of javax.servlet.http.HttpServletResponse.
The code below is the handler method that does this. I had to refactor the code in the call hierarchy to accept this new "callback" parameter until it reached the method performing the long-running operation, and inside that code I invoke the "callback" every so often, for example every time 10 records are processed. Note that the code below is Groovy (scripting code on top of Java that runs on the JVM) and the server-side framework is Spring:
...
@Autowired
DataImporter dataImporter

@PostMapping("/my/endpoint")
void importData(@RequestBody MyDto myDto, HttpServletResponse response) {
    // Callback to allow servant code deep in the call hierarchy to report
    // back to the client any arbitrary message
    Closure<Void> callback = { String str ->
        response.writer.print str
        response.writer.flush()
    }
    // This leads to the code that is performing a long running operation. Using
    // this "hook" that code has a direct connection to the client whereby
    // it can send packets of data to keep the connection from timing out.
    dataImporter.importData(myDto, callback)
}
}

Heart-beating in STOMP client

The design of my current STOMP client process is as follows:
1. Open a STOMP connection (send a CONNECT frame)
2. Subscribe to a feed (send a SUBSCRIBE frame)
3. Loop to continually receive the feed:
while (true) {
    connection.begin("txt1");
    StompFrame message = connection.receive();
    System.out.println("message get header " + message.toString());
    LOG.info(message.getBody());
    connection.ack(message, "txt1");
    connection.commit("txt1");
}
My problem with this process is that I get
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)...
and I think the cause of this is mostly that the feed I am subscribed to delivers information more slowly at certain times (I normally get this error on weekends, holidays or evenings).
I have been reading up on this here and I think this would help with my problem. However, I'm not so sure how to incorporate it with the current layout of my stomp client. Would I have to send a CONNECT header within Step 3?
I am currently using activemq to create my stomp client if that helps.
In the STOMP spec we have:
Regarding the heart-beats themselves, any new data received over the network connection is an indication that the remote end is alive. In a given direction, if heart-beats are expected every <n> milliseconds:
the sender MUST send new data over the network connection at least every <n> milliseconds
if the sender has no real STOMP frame to send, it MUST send a single newline byte (0x0A)
if, inside a time window of at least <n> milliseconds, the receiver did not receive any new data, it CAN consider the connection as dead
because of timing inaccuracies, the receiver SHOULD be tolerant and take into account an error margin
Would that mean my client would need to send a newline byte every n seconds?
The STOMP server you are connected to has timed out your connection due to inactivity.
Provided the server supports STOMP version 1.1 or newer, the easiest solution for your client is to include a heart-beat header in your CONNECT frame, such as "0,10000". This tells the server that you cannot send heart-beats, but you want it to send one every 10 seconds. This way you don't need to implement them yourself, and the server will keep the connection active by sending them to you.
Of course the server will have its own requirements of the client. In your comment it responds to your request with "1000,0". This indicates that it will send a heart-beat every 1000 millisecs, and it expects you to send one every 0 millisecs, 0 indicating none at all. So your job will be minimal.
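For reference, the resulting CONNECT frame would look something like this (the host value is illustrative; the frame is terminated by a NUL byte, shown here as ^@):

CONNECT
accept-version:1.1
host:mybroker
heart-beat:0,10000

^@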

SocketTimeoutException when ConnectTimeout and ReadTimeout is infinite? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Receiving request timeout even though connect timeout and read timeout is set to default (infinite)?
I tried to connect to a web service and received a SocketTimeoutException after approximately 20 seconds. The Tomcat server hosting the web service is down so the Exception is expected. However, I did not set the value of my ConnectTimeout and ReadTimeout. According to the documentation, the default values of these two are infinite.
One possibility for this is that the server I tried connecting to has its own timeout. But when my friend tried to connect to it using iOS, his connection timed out after approximately 1 minute and 15 seconds. If the server were the one issuing the timeout, our connections should have timed out at almost the same time. Please note that he is also using the default timeout of iOS.
Why did my socket time out so early when my connect and read timeouts are set to infinite?
Is the socket timeout different from the connect and read timeouts? If so, how is it different?
How can I know the value of my socket timeout? I am using HttpURLConnection.
Is there a way to set the socket timeout? How?
Below is a snippet of my code:
httpURLConnection = (HttpURLConnection) new URL("http://www.website.com/webservice").openConnection();
httpURLConnection.setDoInput(isDoInput);
httpURLConnection.setDoOutput(isDoOutput);
httpURLConnection.setRequestMethod(method);
try {
    OutputStreamWriter writer = new OutputStreamWriter(httpURLConnection.getOutputStream());
    writer.write("param1=value1");
    writer.flush();
} catch (Exception e) {
    // the exception is swallowed here
}
Why did my socket time out so early when my connect and read timeouts are set to infinite?
Code please.
Is the socket timeout different from the connect and read timeouts? If so, how is it different?
SocketTimeoutException is a read timeout.
How can I know the value of my socket timeout? I am using HttpURLConnection.
HttpURLConnection.getReadTimeout(); also HttpURLConnection.getConnectTimeout().
Is there a way to set the socket timeout? How?
HttpURLConnection.setReadTimeout().
You have already cited all these methods in your original post. Why are you asking about them here?
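For completeness, a minimal sketch of setting both timeouts on an HttpURLConnection (the values are illustrative):

HttpURLConnection conn =
        (HttpURLConnection) new URL("http://www.website.com/webservice").openConnection();
conn.setConnectTimeout(15_000); // max time to establish the TCP connection
conn.setReadTimeout(30_000);    // max time to wait for data once connected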
Finally, I found what was causing my timeout! It turns out that it is indeed the server causing it. I doubted this at first because I was getting a different timeout when using iOS, which is more than 1 minute.
So here it is:
The operating system hosting my Tomcat server is Windows. Windows' default number of retries for an unanswered connection attempt is 2. So when your first attempt to connect fails, you still have 2 retries left, and the retries are all done internally. I'm not sure exactly how the time for each attempt is calculated, but basically it's 3 + 6 + 12 = 21 seconds:
initial attempt = 3 seconds
1st retry = 6 seconds
2nd retry = 12 seconds
After the 2nd retry, your connection is cut off. By that time, you have already waited 21 seconds.

How do you prevent a denial of service from exhausting a thread pool on a socket server in Java?

For work I have written a specialized HTTP server which only performs 301/302/Frame redirections for web sites. Recently, some nefarious clients have been intentionally opening sockets and writing one character every 500 milliseconds in order to defeat my TCP socket timeout. Then they keep the socket open indefinitely and have multiple clients doing the same thing in a distributed denial of service. This eventually exhausts the thread pool which handles the TCP connections. How would you write your code to make it less susceptible to this sort of bad behavior? Here's my socket accept code:
while (true) {
    // Blocks while waiting for a new connection
    log.debug("Blocking while waiting for a new connection.");
    try {
        Socket server = httpServer.accept();
        // After receiving a new connection, set the SO_LINGER and SO_TIMEOUT options
        server.setReuseAddress(true);
        server.setSoTimeout(timeout);
        server.setSoLinger(true, socketTimeout);
        // Hand off the new socket connection to a worker thread
        threadPool.execute(new Worker(cache, server, requests, geoIp));
    } catch (IOException e) {
        log.error("Unable to accept socket connection.", e);
        continue;
    }
}
timeout and socketTimeout are currently set to 500 milliseconds.
Start closing sockets after a certain time has passed. If a socket has stayed open too long, just close it down. You could do this in two ways:
You could put a time limit on how long the client takes to send you a request. If they don't sustain a certain level of throughput, close them. That is pretty easy to do in your read loop while your thread is reading the request: record System.currentTimeMillis() at the start and compare against it as you loop. If it drifts past a certain limit, they are shut down and dropped (see the sketch after these paragraphs).
An alternative to that idea is to not reject them outright, but to let your thread return to the pool while putting the socket on a watch list. Let the bytes pile up, and once they reach a certain size, pass the socket to a thread in the pool to process. This is a hybrid between cutting them off and allowing for clients that aren't malicious, just slow.
Another way to handle it is to watch how long a thread has been working on a request, and if it's not finished within a time limit, close the underlying socket. The thread will then get a SocketException and can shut down and clean up.
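A sketch of the read-deadline idea (the constants and the notion of reading until a blank line are illustrative, not from the redirect server above):

private static final long MAX_REQUEST_MILLIS = 5_000; // total time allowed per request

String readRequest(Socket socket) throws IOException {
    long start = System.currentTimeMillis();
    socket.setSoTimeout(500); // per-read timeout, as in the accept loop above
    InputStream in = socket.getInputStream();
    StringBuilder request = new StringBuilder();
    byte[] buf = new byte[1024];
    int n;
    while ((n = in.read(buf)) != -1) { // may also throw SocketTimeoutException
        request.append(new String(buf, 0, n, java.nio.charset.StandardCharsets.US_ASCII));
        if (request.indexOf("\r\n\r\n") >= 0) {
            return request.toString(); // full request header received
        }
        if (System.currentTimeMillis() - start > MAX_REQUEST_MILLIS) {
            socket.close(); // drip-feeding client: cut it off
            throw new IOException("Request not completed within deadline");
        }
    }
    throw new IOException("Connection closed before request completed");
}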
Here are some other ideas that mostly involve using outside hardware like firewalls, load balancers, etc.
https://security.stackexchange.com/questions/114/what-techniques-do-advanced-firewalls-use-to-protect-againt-dos-ddos/792#792
