I have a Spring controller that handles a POST request asynchronously (using a DeferredResult object as the return value).
The response for this request is written directly to the HTTP stream (HttpServletResponse.getWriter().print()), and when the writing is done I set a result on the DeferredResult object to close the connection.
I write my response to the stream in chunks.
The issue with this request handling is that the client closes the connection if I don't write to it for one minute. (I may write some chunks and then stop writing for a minute, so the connection gets closed in the middle of my procedure.)
I want to control the closing of the connection: I want to send some kind of keep-alive while I'm not writing any data to the stream, so that the connection isn't closed until I decide to close it from the server side.
I haven't found out how to get that kind of control over the connection from the controller on the server.
Please assist.
Thanks.
There is no such thing as a "keep alive" during an ongoing request or response in HTTP that could help against idle timeouts while a request or response is being transferred.
HTTP keep alive is only about keeping the TCP connection open after a response in order to process more requests on the same connection. TCP keep alive is instead used to detect connection loss without TCP shutdown and can also be used to prevent idle timeouts in stateful packet filters (as used in firewalls or NAT routers) in between client and server. It does not prevent idle timeouts at the application level though since it does not transport any data visible to the application level.
Note that the way you want to use HTTP is contrary to how HTTP was originally designed. It was designed for a client sending a full request and the server sending a full response immediately, not for the server sending some parts of the response, idling for some time and then sending some more. The proper way to implement such behavior would be WebSockets. With WebSockets both client and server can send new messages at any time (i.e. there is no request-response scheme), and they also support keep-alive messages. If WebSockets are not an option, you can instead implement a polling client which regularly polls the server for new data with a new request.
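As a minimal sketch of that approach (assuming Spring's WebSocket support is on the classpath; the endpoint path and messages below are made up), the server can push data and low-level pings whenever it wants:

import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.PingMessage;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.config.annotation.EnableWebSocket;
import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;
import org.springframework.web.socket.handler.TextWebSocketHandler;

@Configuration
@EnableWebSocket
class PushConfig implements WebSocketConfigurer {
    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(new PushHandler(), "/push");
    }
}

class PushHandler extends TextWebSocketHandler {
    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        // The server may send data or a ping at any time, independent of any request,
        // which is exactly the "keep the connection alive while idle" behaviour asked about.
        session.sendMessage(new TextMessage("first chunk"));
        session.sendMessage(new PingMessage());
    }
}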
I ran into a similar need just recently. The server code executes a long-running operation that can take as long as 30 minutes to return, and the client times out long before that.
The solution was to have the long-running operation send periodic "keep alive" packets of data to the client via a "callback" argument provided by the request handler method. The callback is nothing more than a function (think of a lambda in Java) that takes the "keep alive" data packet as a parameter and writes it to the client via the java.io.PrintWriter reference that you can get off of javax.servlet.http.HttpServletResponse.
The code below is the handler method that does this. I had to refactor the code in the call hierarchy to accept this new "callback" parameter so that it could reach the method performing the long-running operation, and inside that code I invoke the callback every so often, for example every time 10 records are processed. Note that the code below is Groovy (scripting code on top of Java that runs on the JVM) and the server-side framework is Spring.
...
@Autowired
DataImporter dataImporter

@PostMapping("/my/endpoint")
void importData(@RequestBody MyDto myDto, HttpServletResponse response) {
    // Callback to allow servant code deep in the call hierarchy to report back to client any arbitrary message
    Closure<Void> callback = { String str ->
        response.writer.print str
        response.writer.flush()
    }

    // This leads to the code that is performing a long running operation. Using
    // this "hook" that code has a direct connection to the client whereby
    // it can send packets of data to keep the connection from timing out.
    dataImporter.importData(myDto, callback)
}
}
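If the rest of the code base is plain Java rather than Groovy, the same pattern can be expressed with a java.util.function.Consumer<String>. The sketch below is illustrative only: the DataImporter/importData names mirror the snippet above, but the record loading is simplified to a plain list.

import java.util.List;
import java.util.function.Consumer;

public class DataImporter {

    // The controller hands this method a callback; code deep inside the long-running
    // operation calls it periodically so that something is written to the client
    // before the idle timeout fires.
    public void importData(List<String> records, Consumer<String> keepAlive) {
        int processed = 0;
        for (String item : records) {
            // ... long-running per-record work happens here ...
            if (++processed % 10 == 0) {
                keepAlive.accept(".");        // small keep-alive chunk every 10 records
            }
        }
        keepAlive.accept("done\n");           // final message before the handler completes
    }
}

On the Java side the controller would pass something like str -> { writer.print(str); writer.flush(); } as the callback, where writer is the PrintWriter obtained from the HttpServletResponse.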
I have a distributed system application that uses JBoss as an application server. I have a client application that serves as a simulation engine. When a client comes up, it sends a registration message (a JMS message) to the server, and then a field is set in the database. When the server is up, it sends a message (on a topic) to all clients to check that they are alive. If the clients are alive, they read the message and send a response to the server (on a queue) saying that they are alive.
If the user closes the client normally, the client sends a message to the server saying that it will unregister, and the server unregisters it. This is done on the database side.
If the user closes the client abnormally (kills it), the client cannot send an unregistration message to the server, so the server does not know that this client is no longer alive. This causes inconsistency in my application, so I need a way to detect that a client that subscribed to a topic is no longer subscribed.
The server sends a message to the topic to check that clients are alive:
@Schedule(hour = "*", minute = "*", second = "30", persistent = false)
public void sendNodeStatusRequest() {
    Message msg = MessageFactory.createStatusRequestMessage();
    publishNodeMessage(msg);
}
After a while, the server shows the following log output. Can I catch this warning from Java?
07:17:00,698 WARN [org.hornetq.core.protocol.core.impl.RemotingConnectionImpl] Connection failure has been detected: Did not receive ping from /127.0.0.1:61888. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. The connection will now be closed. [code=3]
07:17:00,698 WARN [org.hornetq.core.server.impl.ServerSessionImpl] Client connection failed, clearing up resources for session 4e4e9dc6-153e-11e7-80fa-742b62812c29
To me the whole point of a messaging system is decoupled communication. The sender (the server in your case) sends its messages to the topic without actually knowing who will receive them. Clients come and go, and they should be able to read a message whenever it (still) resides in the topic.
From your question I understand that the server keeps track of all connected clients by receiving the responses on a dedicated queue.
So I'm asking myself whether there is something wrong with the design here.
Let me propose a slightly different implementation.
The server should not be aware of any client; at most (because your system seems to work this way) it should know that clients A, B and C are alive right now, only because those clients passed that knowledge to the server.
Why not just have the clients send a "keep-alive" message to the server queue every, say, minute (or less, depending on your needs), without any prior message from the server?
The message can include some client identifier, and probably a timestamp if one is not added by the infrastructure.
The server then just receives these messages and keeps an in-memory list of the available clients along with the last time each of them sent something.
If a client disconnects "gracefully", it can send a special message to the server like "I'm client A, consider me disconnected". Otherwise (abnormal termination, network outage, whatever) it just won't send anything, and the server will have a dedicated process that checks whether there are stale clients on the list; if it finds some, it knows that something went wrong.
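A sketch of the server-side bookkeeping this implies (class name and timings below are made up; the JMS listener for the keep-alive queue would simply call touch() or unregister()):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClientRegistry {

    private static final long STALE_AFTER_MS = 3 * 60_000;   // e.g. three missed keep-alives

    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();

    public ClientRegistry() {
        // Periodic sweep: anything not heard from recently is treated as crashed.
        sweeper.scheduleAtFixedRate(this::dropStaleClients, 1, 1, TimeUnit.MINUTES);
    }

    public void touch(String clientId)      { lastSeen.put(clientId, System.currentTimeMillis()); }

    public void unregister(String clientId) { lastSeen.remove(clientId); }

    private void dropStaleClients() {
        long cutoff = System.currentTimeMillis() - STALE_AFTER_MS;
        lastSeen.entrySet().removeIf(entry -> entry.getValue() < cutoff);
    }
}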
If you still want to stick with the JMS way of doing this, you can try sending the message synchronously, meaning the producer waits until it hears back from the consumer. More information here: http://docs.oracle.com/javaee/6/tutorial/doc/bncfa.html
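For completeness, a minimal sketch of that synchronous request/reply style using javax.jms.QueueRequestor (the JNDI names are placeholders, and note that request() blocks with no timeout, so use it with care):

import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueRequestor;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

public class AliveCheck {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // placeholder JNDI name
        Queue statusQueue = (Queue) ctx.lookup("jms/StatusQueue");                                     // placeholder JNDI name

        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            connection.start();

            // QueueRequestor creates a temporary reply queue and blocks until the consumer answers.
            QueueRequestor requestor = new QueueRequestor(session, statusQueue);
            Message reply = requestor.request(session.createTextMessage("are you alive?"));
            System.out.println("client replied: " + reply);
        } finally {
            connection.close();
        }
    }
}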
I am trying to create a Java HTTP server using TCP sockets. HTTP 1.1 has a timeout value that will enable the connection to be persistent and wait for a short while for possible data from the client. I am trying to implement this timer in my program by using clientSocket.setSoTimeout(). Even though this will help to leave the connection open for a certain amount of time, it will wait for that exact amount of time before allowing the next request to be read.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
Request 2 is read. The socket waits until the 5 seconds are up again.
This proves to be a problem if my timeout is set to large values. This should not be the case, as the request should be processed once it is received and the timeout should only expire if no data is received throughout the specified duration.
Can anyone advise me on how I could resolve this?
Edit:
For people who face a similar problem, here is my solution:
Since the client waits until the timeout before receiving all the data, I guessed that the client does not know that all the data from the server has been received. Hence, I added a Content-Length field to the HTTP response. Now my client no longer hangs after receiving the data. setSoTimeout() does indeed work as stated!
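For anyone hitting the same symptom, here is a stripped-down illustration of the fix (illustrative only; the real server builds the body dynamically):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class ResponseWriter {

    // Sending Content-Length lets the client know exactly where the response body ends,
    // so it does not have to wait for a timeout or a closed connection to detect the end.
    static void writeResponse(Socket clientSocket, String body) throws IOException {
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        String headers = "HTTP/1.1 200 OK\r\n"
                + "Content-Type: text/plain; charset=utf-8\r\n"
                + "Content-Length: " + bytes.length + "\r\n"
                + "Connection: keep-alive\r\n"
                + "\r\n";
        OutputStream out = clientSocket.getOutputStream();
        out.write(headers.getBytes(StandardCharsets.US_ASCII));
        out.write(bytes);
        out.flush();              // the socket stays open for the next request
    }
}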
OK, when you receive a connection, start a new thread like this:
class ClientService extends Thread {

    private final Socket clientSocket;

    public ClientService(Socket clientSocket) {
        this.clientSocket = clientSocket;
    }

    public void run() {
        // do your work with the Socket clientSocket here
    }
}
This is how your server code should then look:
while (true) {
    Socket clientSocket = server.accept();
    new ClientService(clientSocket).start();
}
This allows connections to be processed in parallel, so one connection does not have to wait for another to time out.
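If you expect many concurrent clients, a bounded thread pool is a common variant of the same idea (a sketch; the pool size and port are arbitrary, and ClientService can be submitted because Thread implements Runnable):

ExecutorService pool = Executors.newFixedThreadPool(50);   // arbitrary size
try (ServerSocket server = new ServerSocket(8080)) {
    while (true) {
        Socket clientSocket = server.accept();
        pool.execute(new ClientService(clientSocket));     // each connection handled on a pooled thread
    }
}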
HTTP 1.1 has a timeout value that will enable the connection to be persistent and wait for a short while for possible data from the client.
Not really. It has a Connection: keep-alive setting, which is the default behaviour; it allows endpoints to close connections that aren't in use after a period of idleness, but HTTP itself doesn't have a timeout property.
I am trying to implement this timer in my program by using clientSocket.setSoTimeout().
This has nothing whatsoever to do with HTTP. It is a socket read timeout.
Even though this will help to leave the connection open for a certain amount of time, it will wait for that exact amount of time before allowing the next request to be read.
No it won't. It will cause read methods to throw SocketTimeoutException if no data arrives within the timeout period. Nothing else.
For example:
If timeout is set to 5 seconds,
Request 1 is read. Then the socket hangs and waits until the 5 seconds are over.
No it doesn't.
Request 2 is read. The socket waits until the 5 seconds are up again.
No it doesn't. You've made all this up. It is fantasy.
This proves to be a problem if my timeout is set to large values.
It isn't a problem with any timeout values whether large or small, because it simply does not happen.
This should not be the case, as the request should be processed once it is received and the timeout should only expire if no data is received throughout the specified duration.
That is exactly what Socket.setSoTimeout() already does.
Your question is founded on a fallacy.
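To make the intended behaviour concrete, here is a sketch of a per-connection read loop (the kind of thing that would live in ClientService.run() from the earlier answer); handleRequest() is a hypothetical handler. The read returns as soon as data arrives, and the timeout only fires after 5 seconds with no data at all, which is the signal to close the idle persistent connection:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

void serve(Socket clientSocket) throws Exception {
    clientSocket.setSoTimeout(5000);          // idle limit per read, not a fixed delay
    BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
    try {
        String requestLine;
        while ((requestLine = in.readLine()) != null) {    // returns immediately when data arrives
            handleRequest(requestLine, clientSocket);      // hypothetical request handling
        }
    } catch (SocketTimeoutException idle) {
        // no data for 5 seconds: the keep-alive connection is idle, close it
    } finally {
        clientSocket.close();
    }
}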
I have a web application running behind nginx. Some pages are accessible via http, others via https. I have some "pages" which are really streams, as the application does not close the connection and feeds data as it arrives. The feed then looks like this:
TIME1 MESSAGE1
TIME2 MESSAGE2
...
TIMEn MESSAGEn
After each line I write "\n" and then call flush(). Over http this works correctly and my client can listen for new data. However, over https the client does not receive any data until the connection is closed.
ServletOutputStream stream = applicationModel.getOutputStream();
OutputStreamWriter streamWriter = new OutputStreamWriter(stream);
BufferedWriter writer = new BufferedWriter(streamWriter);
while (true) {
    wait();                    // wait until a new message is available (simplified)
    writer.write(newMessage);
    writer.flush();
}
Unless the application is tightly integrated with the web server, a flush on the writer will only flush the buffers inside your application, so that the data gets sent to the web server. Inside the web server there are more buffers, which are necessary to optimize the traffic by sending larger TCP packets and thus decrease the overhead of the data. If you use SSL there is yet another layer to watch, because your data will be encapsulated into an SSL frame which again adds overhead, so it is better not to have only a few bytes of payload inside each frame. Finally there is buffering in the OS kernel, which might defer the sending of a small TCP packet for some time if there is hope that more data will follow.
Please be aware that your wish to control the buffers goes against a fundamental design concept of HTTP. HTTP is based on the idea that the client sends a request to the server and the server sends a response back, ideally with a known content length up front. There is no notion in the original design of a response that evolves slowly, where the browser updates the display as new data arrives. The real way to get updates would be to let the client send another request and then send a new response back. Another way would be to use WebSockets.
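Since nginx sits in front of the application in your setup, one thing worth trying in addition to sending larger chunks is to ask nginx not to buffer this particular response; nginx honours the X-Accel-Buffering response header for that. A sketch, assuming the HttpServletResponse (response below) is accessible where the stream is set up:

// Ask the nginx proxy not to buffer this streaming response, and push out
// whatever the servlet container has buffered so far.
response.setContentType("text/plain; charset=utf-8");
response.setHeader("X-Accel-Buffering", "no");   // nginx-specific: disables proxy buffering for this response
response.flushBuffer();                          // flush the servlet container's own buffer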
The design of my current STOMP client process is as follows:
Open stomp connection (sending CONNECT frame)
Subscribe to a feed (send a SUBSCRIBE frame)
Loop to continually receive the feed:
while (true) {
    connection.begin("txt1");
    StompFrame message = connection.receive();
    System.out.println("message get header" + message.toString());
    LOG.info(message.getBody());
    connection.ack(message, "txt1");
    connection.commit("txt1");
}
My problem with this process is that I get
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)...
and I think the cause is mostly that the feed I am subscribed to delivers information more slowly at certain times (I normally get this error on weekends, holidays or in the evenings).
I have been reading up on this here and I think this would help with my problem. However, I'm not sure how to incorporate it into the current layout of my STOMP client. Would I have to send a CONNECT header within step 3?
I am currently using ActiveMQ to create my STOMP client, if that helps.
In the STOMP spec we have:
Regarding the heart-beats themselves, any new data received over the network connection is an indication that the remote end is alive. In a given direction, if heart-beats are expected every <n> milliseconds:
the sender MUST send new data over the network connection at least every <n> milliseconds
if the sender has no real STOMP frame to send, it MUST send a single newline byte (0x0A)
if, inside a time window of at least <n> milliseconds, the receiver did not receive any new data, it CAN consider the connection as dead
because of timing inaccuracies, the receiver SHOULD be tolerant and take into account an error margin
Would that mean my client would need to send a newline byte every n seconds?
The STOMP server you are connected to has timed out your connection due to inactivity.
Provided the server supports STOMP version 1.1 or newer, the easiest solution for your client is to include a heart-beat instruction in the header of your CONNECT frame, such as "0,10000". This tells the server that you cannot send heart-beats, but you want it to send one every 10 seconds. This way you don't need to implement them yourself, and the server will keep the connection active by sending them to you.
Of course the server will have its own requirements of the client. In your comment it responds to your request with "1000,0". This indicates that it will send a heart-beat every 1000 milliseconds, and that it expects a heart-beat from you every 0 milliseconds, 0 meaning none at all. So your job will be minimal.
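For illustration, the resulting CONNECT frame would look roughly like this (host and credentials are placeholders; the frame is terminated by a NUL byte, shown here as ^@):

CONNECT
accept-version:1.1
host:localhost
login:guest
passcode:guest
heart-beat:0,10000

^@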
I am trying to create a client socket connection: when a new request is created, a connection is established and data transfer takes place. Is there any way that, once the connection is created, it stays open all the time? If yes, how can I create it, and how can I identify which request was sent and match the response to that request?
Looking forward to your response.
You can keep a connection open for all time simply by not closing it. However, the trick is detecting when a connection has failed, e.g. because the client or server has restarted.
If you want to match requests to responses you can use a request id, but a much simpler approach is to send only one request at a time per socket; that way the response you get is for the request you just sent. You can use more than one socket in a thread if this is required.
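If you do go the request-id route, the idea is simply to tag every request with an id and have the server echo it back. A sketch with a hypothetical length-prefixed wire format (DataOutputStream/DataInputStream stand in for whatever protocol you actually use); the synchronized method also enforces the simpler one-request-at-a-time rule:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicLong;

class SimpleClient {

    private final DataOutputStream out;
    private final DataInputStream in;
    private final AtomicLong nextId = new AtomicLong(1);

    SimpleClient(Socket socket) throws Exception {
        this.out = new DataOutputStream(socket.getOutputStream());
        this.in = new DataInputStream(socket.getInputStream());
    }

    synchronized String call(String payload) throws Exception {
        long id = nextId.getAndIncrement();
        out.writeLong(id);            // request id
        out.writeUTF(payload);        // request body
        out.flush();

        long replyId = in.readLong(); // the server is expected to echo the id back
        String reply = in.readUTF();
        if (replyId != id) {
            throw new IllegalStateException("response " + replyId + " does not match request " + id);
        }
        return reply;
    }
}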