reconnect/resume websocket connection with netty - java

I am developing a chat server with Netty WebSockets. Our client side is mostly browser based.
What's happening is that when I refresh the browser, it closes the WebSocket connection, loses everything, and creates a new socket when the page is loaded again.
Is there any mechanism to reconnect to my previous WebSocket session on the server side?
I am planning to cache all user sessions. When a connection close event is received from the client side, I would keep the session information for another 30-60s instead of deleting it; if the server receives a new connection request from the same client during that window (detected through a cookie id), I would replace the old channel with the new session information.
My problem is that if I do not remove the session when the server receives the connection close event, other read/write operations through this session's channel cause problems.

What I can understand from the following is:
"My problem is that if I do not remove the session when the server receives the connection close event, other read/write operations through this session's channel cause problems."
Your chat logic is tightly coupled to the channel/channel handler object.
Move the chat logic into a separate class, which I'll call "Session", give it onEventXXX callbacks, and build the chat logic on top of it (possibly as a separate chain of classes). When there is a write operation, write the message using session.write(...); the "Session" then delegates it to the underlying channel.
When the underlying channel is closed, you can keep the "Session" object for a while and reattach it to the new channel when the client reconnects.
To do this you might need a separate Netty handler, call it "SessionHandler", that handles channel events and session creation.
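A minimal sketch of that idea, assuming Netty 4.x: the session id is pulled from the cookie during the WebSocket handshake. The sessionIdFromCookie helper and the in-memory sessions map are illustrative, not from the answer, and a detached Session would be evicted by a separate timer (not shown) after the 30-60s grace period.

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;
import io.netty.util.AttributeKey;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Session {
    private volatile Channel channel;   // current transport; replaced when the client reconnects

    void attach(Channel newChannel) { this.channel = newChannel; }
    void detach()                   { this.channel = null; }

    // Chat logic writes through the session, never through a channel directly.
    void write(String message) {
        Channel ch = channel;
        if (ch != null && ch.isActive()) {
            ch.writeAndFlush(new TextWebSocketFrame(message));
        }
        // else: buffer or drop while the client is reconnecting
    }

    void onMessage(String message) { /* chat logic callbacks (onEventXXX) live here or above */ }
}

class SessionHandler extends SimpleChannelInboundHandler<TextWebSocketFrame> {
    static final AttributeKey<String> SESSION_ID = AttributeKey.valueOf("sessionId");
    static final Map<String, Session> sessions = new ConcurrentHashMap<>();

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (evt instanceof WebSocketServerProtocolHandler.HandshakeComplete) {
            WebSocketServerProtocolHandler.HandshakeComplete handshake =
                    (WebSocketServerProtocolHandler.HandshakeComplete) evt;
            String id = sessionIdFromCookie(handshake.requestHeaders().get("Cookie"));
            ctx.channel().attr(SESSION_ID).set(id);
            // Reuse the existing Session if this client reconnected within the grace period.
            Session session = sessions.computeIfAbsent(id, k -> new Session());
            session.attach(ctx.channel());
        }
        ctx.fireUserEventTriggered(evt);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        String id = ctx.channel().attr(SESSION_ID).get();
        Session session = id == null ? null : sessions.get(id);
        if (session != null) {
            // Keep the Session itself; a scheduled task (not shown) would remove it
            // from the map if no reconnect happens within 30-60s.
            session.detach();
        }
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame) {
        String id = ctx.channel().attr(SESSION_ID).get();
        Session session = id == null ? null : sessions.get(id);
        if (session != null) {
            session.onMessage(frame.text());
        }
    }

    // Illustrative only: extract your own cookie id from the Cookie header.
    private String sessionIdFromCookie(String cookieHeader) {
        return cookieHeader;
    }
}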

Related

send keep alive on long asynchronous request in spring server

I have a controller in Spring which receives a POST request and handles it asynchronously (using a DeferredResult object as the return value).
The response for this request writes bytes to the HTTP stream directly (HttpServletResponse.getWriter().print()), and when it's done writing it sets a result on the DeferredResult object to close the connection.
I'm writing my response in stream chunks.
The issue is that the client closes the connection if I don't write to it for 1 minute. (I can write some chunks and then stop writing for 1 minute, so the connection gets closed in the middle of my procedure.)
I want to control when the connection closes: send a keep-alive whenever I'm not writing any data to the stream, so that the connection isn't closed until I decide to close it from the server side.
I couldn't find out how to get control of the connection from the controller on the server.
Please assist.
Thanks.
There is no such thing as a "keep alive" during an ongoing HTTP request or response that could help with idle timeouts.
HTTP keep alive is only about keeping the TCP connection open after a response in order to process more requests on the same connection. TCP keep alive is instead used to detect connection loss without TCP shutdown and can also be used to prevent idle timeouts in stateful packet filters (as used in firewalls or NAT routers) in between client and server. It does not prevent idle timeouts at the application level though since it does not transport any data visible to the application level.
Note that the way you want to use HTTP is contrary to how HTTP was originally designed: a client sends a full request and the server sends a full response immediately, not some parts of the response, then idling for some time, then some more. The proper way to implement such behavior would be to use WebSockets. With WebSockets both client and server can send new messages at any time (i.e. there is no request-response schema) and keep-alive messages are supported too. If WebSockets are not an option, you can instead implement a polling client which regularly polls the server for new data with a new request.
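If WebSockets aren't an option and you fall back to polling, a minimal client might look like the following sketch (the URL and the poll interval are illustrative, using java.net.http from Java 11+):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/my/endpoint/status")) // illustrative URL
                .GET()
                .build();
        while (true) {
            // Each poll is a complete request/response, so no idle timeout can hit mid-response.
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (!response.body().isEmpty()) {
                System.out.println("new data: " + response.body());
            }
            Thread.sleep(5_000);   // poll every 5 seconds (tune to your needs)
        }
    }
}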
I ran into a similar need just recently. The server code executes a long-running operation that can take as long as 30 minutes to return, and the client times out long before that. The solution was to have the long-running operation send periodic "keep alive" packets of data to the client via a "callback" argument provided by the request handler method.

The callback is nothing more than a function (think of a lambda in Java) that takes as a parameter the "keep alive" data packet to send to the client, and then writes that packet to the client via the java.io.PrintWriter reference that you can get off of javax.servlet.http.HttpServletResponse.

Below is the handler method that does this. I had to refactor the code in the call hierarchy to accept this new "callback" parameter until it could reach the method that performs the long-running operation, and inside that code I invoke the callback every so often, for example every time 10 records are processed. Note that the code below is Groovy (scripting code on top of Java that runs on the JVM) and the server-side framework is Spring:
...
@Autowired
DataImporter dataImporter

@PostMapping("/my/endpoint")
void importData(@RequestBody MyDto myDto, HttpServletResponse response) {
    // Callback to allow servant code deep in the call hierarchy to report back to the client any arbitrary message
    Closure<Void> callback = { String str ->
        response.writer.print str
        response.writer.flush()
    }

    // This leads to the code that is performing a long running operation. Using
    // this "hook" that code has a direct connection to the client whereby
    // it can send packets of data to keep the connection from timing out.
    dataImporter.importData(myDto, callback)
}
}
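The other end of that hook is just ordinary code that calls the callback every so often. A rough Java rendering of the importer side, where Consumer stands in for the Groovy Closure and MyDto, loadRecords and process are placeholders for whatever your import actually does:

import java.util.List;
import java.util.function.Consumer;

public class DataImporter {

    // Placeholder for the request body type from the question.
    public static class MyDto { }

    // 'callback' is the hook handed down from the controller; it writes straight to the HTTP response.
    public void importData(MyDto myDto, Consumer<String> callback) {
        List<String> records = loadRecords(myDto);    // illustrative: fetch whatever needs importing
        int processed = 0;
        for (String record : records) {
            process(record);                          // the actual long-running work per record
            processed++;
            if (processed % 10 == 0) {
                // Keep-alive packet: any small chunk of data keeps the client from timing out.
                callback.accept("processed " + processed + " of " + records.size() + "\n");
            }
        }
        callback.accept("done\n");
    }

    private List<String> loadRecords(MyDto dto) { return List.of(); }  // stub
    private void process(String record) { }                            // stub
}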

JMS: how to know a subscriber is not alive anymore

I have a distributed application that uses JBoss as an application server, and a client application that serves as a simulation engine. When a client is up, it sends a registration message (a JMS message) to the server, and a field is set in the database. When the server is up, it sends a message (on a topic) to all clients to check that they are alive. If a client is alive, it reads the message and sends a response to the server (on a queue) saying that it is alive.
If the user closes the client normally, the client sends a message to the server saying it will unregister, and the server unregisters it. This is done on the database side.
If the user closes the client abnormally (kills it), the client cannot send the unregistration message, so the server does not know the client is no longer alive. This causes inconsistency in my application, so I need a way to detect that a client subscribed to a topic is not subscribed anymore.
The server sends a message to the topic to check that clients are alive:
@Schedule(hour = "*", minute = "*", second = "30", persistent = false)
public void sendNodeStatusRequest() {
    Message msg = MessageFactory.createStatusRequestMessage();
    publishNodeMessage(msg);
}
After a while, the server shows the following log messages. Could I catch this warning from Java?
07:17:00,698 WARN [org.hornetq.core.protocol.core.impl.RemotingConnectionImpl] Connection failure
has been detected: Did not receive ping from /127.0.0.1:61888. It is likely
the client has exited or crashed without closing its connection, or the
network between the server and client has failed. The connection will now be closed. [code=3]
07:17:00,698 WARN [org.hornetq.core.server.impl.ServerSessionImpl] Client
connection failed, clearing up resources for session 4e4e9dc6-153e-11e7-
80fa-742b62812c29
To me the whole point of a messaging system is decoupled communication. The sender (the server in your case) sends its messages to the topic without actually knowing who will get them. The clients come and go, and they should be able to read the message as long as it (still) resides in the topic.
Now from your question I understand that the server keeps track of all the connected clients by receiving responses back on a dedicated queue.
So I'm asking myself whether there is something wrong with the design here.
Let me propose a slightly different way of implementing it.
The server should not be aware of any particular client; at most (because your system seems to work this way) it should know that clients A, B and C are alive right now, only because those clients told it so.
Why not just have the clients send a "keep-alive" message to the server queue every, say, 1 minute (or less, depending on your needs), without any prior message from the server?
The message can include a client identifier and probably a timestamp, if that is not already added by the infrastructure.
The server just receives these messages and keeps an in-memory list of available clients along with the last time each of them sent something.
If a client disconnects gracefully, it can send a special message to the server like "I'm client A, consider me disconnected". Otherwise (abnormal termination, network outage, whatever) it just won't send anything; the server runs a periodic check for stale clients on the list, and if it finds any it knows something went wrong.
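A sketch of the in-memory bookkeeping that could back this up, assuming clients send a keep-alive roughly every minute; the class name, the stale threshold and the logging are all illustrative:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClientRegistry {
    private static final long STALE_AFTER_MS = 90_000;   // ~1.5x the keep-alive interval (illustrative)

    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public ClientRegistry() {
        scheduler.scheduleAtFixedRate(this::evictStaleClients, 30, 30, TimeUnit.SECONDS);
    }

    // Called whenever a keep-alive (or any) message arrives from a client.
    public void touch(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    // Called when a client sends its "consider me disconnected" message.
    public void unregister(String clientId) {
        lastSeen.remove(clientId);
    }

    private void evictStaleClients() {
        long now = System.currentTimeMillis();
        lastSeen.entrySet().removeIf(entry -> {
            boolean stale = now - entry.getValue() > STALE_AFTER_MS;
            if (stale) {
                System.out.println("client " + entry.getKey() + " missed its keep-alives, marking as dead");
                // update the database / clean up resources here
            }
            return stale;
        });
    }
}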
If you still want to stick with the JMS way of doing this, you can try sending the message synchronously, meaning the producer waits until it hears from the consumer. More information here: http://docs.oracle.com/javaee/6/tutorial/doc/bncfa.html

Multiple SMPP Sessions

Let's say there are two receiver sessions from the same application to an SMPP server over different TCP ports.
A message is sent to the application and the reply (i.e. deliver_sm_resp) goes back via the other session.
Is this possible, or should the reply come over the same SMPP session?
No, the deliver_sm_resp should be sent back using the same session as the deliver_sm was received on.
The response is linked to the request by a sequence number that is incremented with each request on the session, so it only makes sense within the same session.

Client socket maintaining queue/pooling

I am trying to create a client socket connection: when a new request is created, a connection is established and data transfer takes place. Is there any way to keep the connection open permanently once it is created? If yes, how can I create it, and how can I identify which request was sent and match the response to that request?
Looking forward to your response.
You can keep a connection open indefinitely by simply not closing it. However, the trick is detecting when a connection has failed, e.g. when the client or server has restarted.
If you want to match requests to responses you can use a request id, but a much simpler approach is to send only one request at a time per socket; that way the response you get is for the request you just sent. You can use more than one socket in a thread if this is required.
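As a sketch of the one-request-at-a-time approach over a long-lived socket (the line-based protocol, host and port here are invented for illustration):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SimpleClient {
    public static void main(String[] args) throws Exception {
        // One long-lived connection; host and port are illustrative.
        try (Socket socket = new Socket("localhost", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {

            // One request at a time: the next line read is by definition the
            // response to the request just written, so no request id is needed.
            out.println("GET_STATUS");
            System.out.println("response: " + in.readLine());

            out.println("GET_TIME");
            System.out.println("response: " + in.readLine());
        }
    }
}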

How can I force the server socket to re-accept a request from a client?

For those who do not want to read a long question, here is a short version:
A server has an open socket for a client. The server gets a request to open a socket from
the same client IP and client port. I want to force the server not to refuse such a request but to close the old socket and open a new one. How can I do it?
And here is the long (original) question:
I have the following situation. There is an established connection between a server and a client. Then an external piece of software (Bonjour) tells my client that it does not see the server in the local network. The client does nothing about that, for the following reasons:
If Bonjour does not see the server, it does not necessarily mean that the client cannot see the server.
Even if the client trusts Bonjour and closes the socket, it does not improve the situation ("having no open socket" is worse than "having a potentially bad socket").
So the client does nothing if the server becomes invisible to Bonjour. But then the server reappears in Bonjour and Bonjour notifies the client about it. At that point the following scenarios are possible:
1. The server reappears on a new IP address, so the client needs to open a new socket to be able to communicate with the server.
2. The server reappears on the old IP address. In this case we have two subcases:
2.1. The server was restarted (switched off and then on again), so it does not remember the old socket (which is still being used by the client). The client therefore needs to close the old socket and open a new one (to the same server IP address and the same server port).
2.2. There was a temporary network problem and the server was running the whole time, so the old socket is still usable. In this case the client does not really need to close the old socket and open a new one.
But to simplify my life I decided to close and reopen the socket on the client side in every case (despite the fact that it is not really needed in the last situation described).
But I can have problems with that solution. If I close the socket on the client side and then try to reopen a socket from the same client IP and client port, the server will not accept the request for a new socket; it will think that such a socket already exists.
Can I write the server in such a way that it does not refuse such requests? For example, when the server sees that a client sends a request for a socket from the same client IP and client port, it closes the existing socket associated with that client IP and port and then opens a new one.
You can't "reopen" a socket on your server. If the socket already exists and the client is trying to reconnect then you should get an BindException (see your previous question). The scenario that may be possible:
1. The client shuts down its socket.
2. The server OS "notices" the socket is dead on the client side and shuts its side down.
3. The client reconnects on the same port, but with a "new" socket.
In this case you may consider it to be the "same" socket, but it really isn't. That said, a strategy you may wish to adopt is to have some sort of map (keyed by a hash of client IP/port) from the client to whatever mechanism you are using to service the socket, or to some kind of persistent state data, so that it can simulate a continuation of a previous socket (in the same vein as HTTP sessioning). Something along the lines of:
HashMap<Client, State> sessions = ...;

public void server() {
    ...
    while (true) {
        Socket socket = server.accept();
        Client client = new Client(socket);
        State s = sessions.get(client);
        if (s == null) {
            s = new State();
            sessions.put(client, s);
        }
        client.setState(s);
        service(client);
    }
    ...
}
and you can adjust the map lookup to define what a "session" means within your application (same client IP, same client IP & client port, some sessionid sent over the wire, etc).
If you are just trying to make it possible for the client to reconnect and force the server to "notice" that the client is disconnected, the only real way in Java is to try to read or write data; if the connection has been shut down, that will throw an exception. Therefore, as was mentioned in your other question, you could add some kind of ack/nack feature to your protocol and a check to run when you believe the client is disconnected (for example, if you haven't read any data in the last N milliseconds, send a message the client must echo within M milliseconds, otherwise it is assumed to be disconnected). You can also try isConnected, isInputShutdown and isOutputShutdown, but I have found those to be unreliable in my own code for indicating the socket state, unless you have closed the socket yourself (i.e. the one you are testing on the server).
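One rough way to wire up that probe on a plain blocking socket is with a read timeout. This is only a sketch: the PING/PONG wording and the timeout value are made up, and here the probe simply has to be answered before the next timeout expires rather than within a separate M-millisecond window.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ProbingConnectionHandler {

    public void serve(Socket socket) throws Exception {
        socket.setSoTimeout(30_000);   // N milliseconds of silence triggers the probe
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {

            boolean probed = false;
            while (true) {
                try {
                    String line = in.readLine();
                    if (line == null) break;      // orderly shutdown by the client
                    probed = false;               // any traffic (including a probe reply) counts as "alive"
                    handle(line);
                } catch (SocketTimeoutException idle) {
                    if (probed) break;            // the probe went unanswered, assume the client is gone
                    out.println("PING");          // the client is expected to echo something back
                    probed = true;
                }
            }
        }
        // closing the streams above closes the presumed-dead socket as well
    }

    private void handle(String line) { /* application logic */ }
}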
The situation you describe is impossible. You can't get a new connect request from the same remote IP:port as an existing connection. The client will not permit it to occur.
Based on the comments:
You cannot write the server in a way that it will close a socket it still thinks is connected and automatically accept the new connection, as application code does not have that kind of control over the TCP stack, nor is there a way to reopen a connection.
The chance of the port numbers being the same between your client restarts is very small.
But still, if that happens, the server will notice that you're trying to set up an already connected socket and refuse your new connection. There's not much else your client can do in this case besides close your socket, create a new one and try to connect again - and another random port will be selected.
An additional note: your server should take some form of action to detect and close dead sockets. If all your server does is read incoming data, the "dead" sockets will never be closed because they will never be detected as dead. (Enabling TCP keepalive is one cheap measure against dead sockets staying up for months, though by default it takes a couple of hours to detect them as such.)
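Enabling TCP keepalive from Java is a single socket option on each accepted connection; the idle time before probes start is an OS-level setting, which is why the couple-of-hours default applies. A minimal sketch (the port is illustrative):

import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket serverSocket = new ServerSocket(9000)) {
            while (true) {
                Socket socket = serverSocket.accept();
                // Ask the OS to probe the connection when it goes idle; a dead peer
                // will eventually surface as an IOException on read/write instead of
                // the socket silently staying "up" for months.
                socket.setKeepAlive(true);
                // hand the socket off to whatever services it (not shown)
            }
        }
    }
}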
