[Apache Curator] How to properly close curator

I'm trying to implement fallback logic for my ZooKeeper connection using Apache Curator. I have two sets of connection strings, and if my state listener receives a LOST state I try to reconnect the Curator client to the other set. I could simply put all machines in a single connection string, but I want to fall back only when every machine in the default cluster is offline.
The problem is that I can't close the previous Curator client when I switch to the fallback cluster: I keep seeing log messages saying Curator is trying to reconnect, even after I have connected to the fallback set of ZooKeeper servers. Below is an example of what I'm trying to do:
final ConnectionStateListener listener = (client1, state) -> {
    if (state == ConnectionState.LOST) {
        reconnect();
    }
};
And the reconnect method (will change the lastHost to the fallback cluster):
if (client != null) {
    client.close();
}
...
client = CuratorFrameworkFactory.newClient(
        lastHost,
        sessionTimeout,
        connectionTimeout,
        retryPolicy);
...
client.start();
I can successfully connect to the new set of connection strings (the fallback), but the previous client keeps trying to connect to the old ones.
Looking at the close() method, I saw that Curator only tears things down if the client's state is STARTED; I think that's why it keeps trying to connect to the previous cluster.
Is there a way to close() the Curator client when it is not in the STARTED state?
If not, is there another way to implement this fallback logic on Curator?
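One approach I'm considering is to hand the reconnect off to a single-threaded executor, so close() never runs on the listener's own thread. Below is a minimal sketch of that pattern; FallbackReconnect and FallbackClient are hypothetical stand-ins, not Curator API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FallbackReconnect {
    // Hypothetical stand-in for CuratorFramework, used only to illustrate the pattern.
    static class FallbackClient {
        final String connectString;
        volatile boolean closed;
        FallbackClient(String connectString) { this.connectString = connectString; }
        void close() { closed = true; }
    }

    private final ExecutorService reconnectExecutor = Executors.newSingleThreadExecutor();
    volatile FallbackClient client;

    FallbackReconnect(String primaryConnectString) {
        client = new FallbackClient(primaryConnectString);
    }

    // Called from the state listener on LOST; the actual close/recreate
    // happens on our own thread, not on the thread delivering the event.
    void onLost(String fallbackConnectString) {
        reconnectExecutor.submit(() -> {
            FallbackClient old = client;
            if (old != null) {
                old.close();
            }
            client = new FallbackClient(fallbackConnectString);
        });
    }

    void shutdown() throws InterruptedException {
        reconnectExecutor.shutdown();
        reconnectExecutor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The idea is that closing and recreating the client never competes with the event machinery that invoked the listener.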
Thanks.

Related

AWS JAVA IoT client reconnections and timeouts

I use an IoT rule on the CONNECTED/DISCONNECTED topic (from here), so that I get an email when a device is connected or disconnected. On each device I run the following code on startup (only on startup):
iotClient = new AWSIotMqttClient(Configuration.IOT_CLIENT_ENDPOINT,
        deviceId,
        keyStore,
        keystorePass);
iotClient.setKeepAliveInterval(1200000); // 20 minutes (maximum)
iotClient.connect();
But I get very strange behavior. I have 3 devices, and on each device I see this log output, but for different reasons:
[pool-8-thread-1] com.amazonaws.services.iot.client.core.AwsIotConnection.onConnectionSuccess Connection successfully established
[pool-8-thread-1] com.amazonaws.services.iot.client.core.AbstractAwsIotClient.onConnectionSuccess Client connection active: <client ID>
[pool-8-thread-1] com.amazonaws.services.iot.client.core.AwsIotConnection.onConnectionFailure Connection temporarily lost
[pool-8-thread-1] com.amazonaws.services.iot.client.core.AbstractAwsIotClient.onConnectionFailure Client connection lost: <client ID>
[pool-8-thread-1] com.amazonaws.services.iot.client.core.AwsIotConnection$1.run Connection is being retried
[pool-8-thread-1] com.amazonaws.services.iot.client.core.AwsIotConnection.onConnectionSuccess Connection successfully established
[pool-8-thread-1] com.amazonaws.services.iot.client.core.AbstractAwsIotClient.onConnectionSuccess Client connection active: <client ID>
Sometimes I get this sequence with a DUPLICATE_CLIENTID disconnect reason, and sometimes with MQTT_KEEP_ALIVE_TIMEOUT (MQTT_KEEP_ALIVE_TIMEOUT happens every 30-35 minutes, DUPLICATE_CLIENTID every 10 minutes).
So I don't understand why I have to deal with DUPLICATE_CLIENTID if each client has a unique ID, or with MQTT_KEEP_ALIVE_TIMEOUT if there is no intermittent connectivity issue (my devices send logs to my server every minute, so it isn't a WiFi/internet problem). I use the latest AWS IoT SDK from here - https://github.com/aws/aws-iot-device-sdk-java.
How can I solve these issues?
MY TRICKY SOLUTION:
I added a scheduled thread that sends empty messages to topic - ${iot:Connection.Thing.ThingName}/ping every 20 minutes:
scheduledExecutor.scheduleAtFixedRate(() -> {
    try {
        iotClient.publish(String.format(Configuration.PING_TOPIC, deviceId), AWSIotQos.QOS0, "");
    } catch (AWSIotException e) {
        LOGGER.error("Failed to send ping", e);
    }
}, Configuration.PING_INITIAL_DELAY_IN_MINUTES, Configuration.PING_PERIOD_IN_MINUTES, TimeUnit.MINUTES);
This solves the inactivity disconnects, but I would still like to find a more elegant solution...
Looking at your logs, it definitely seems like the connection is lost and then retried.
During reconnection the SDK still connects with the deviceID you are passing (even though the previous connection may not have been fully torn down on the MQTT side), so the broker sees a second connection attempt with the same ID.
Reading a bit about this, it looks like you might not actually be registering your device as a Thing in AWS.
If you were, then when you create an MQTT connection and attach that Thing, you shouldn't get the DUPLICATE_CLIENTID error even on reconnection.
AWSIotMqttClient client = new AWSIotMqttClient(...);
SomeDevice someDevice = new SomeDevice(thingName); // SomeDevice extends AWSIotDevice
client.attach(someDevice);
client.connect();
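Another low-tech safeguard against DUPLICATE_CLIENTID (my own workaround, not an SDK feature) is to make the MQTT client ID unique per process by appending a random suffix, so a stale session under the same configured deviceId can never collide with the new one:

```java
import java.util.UUID;

public class ClientIds {
    // Appends a random suffix so two processes that share the same configured
    // deviceId never present the same MQTT client ID to the broker.
    public static String uniqueClientId(String deviceId) {
        return deviceId + "-" + UUID.randomUUID();
    }
}
```

Note this changes the client ID the broker sees, so anything keyed on the exact client ID (rules, policies) may need adjusting.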
You can also experiment with iotClient.setCleanSession(true/false) to see if that helps:
/**
 * Sets whether the client and server should establish a clean session on each connection.
 * If false, the server should attempt to persist the client's state between connections.
 * This must be set before {@link #connect()} is called.
 *
 * @param cleanSession
 *            If true, the server starts a clean session with the client on each connection.
 *            If false, the server should persist the client's state between connections.
 */
@Override
public void setCleanSession(boolean cleanSession) { super.setCleanSession(cleanSession); }
https://docs.aws.amazon.com/iot/latest/developerguide/iot-thing-management.html
MQTT_KEEP_ALIVE_TIMEOUT: If there is no client-server communication
for 1.5x of the client's keep-alive time, the client is disconnected.
That means you are not sending or receiving messages; there is no way to avoid it other than keeping the connection active, which is what your scheduled ping does.

Hazelcast - Client mode - How to recover after cluster failure?

We use Hazelcast's distributed lock and cache features in our products; distributed locking is vital to our business logic.
Currently we use embedded mode (each application node is also a Hazelcast cluster member). We are going to switch to client-server mode.
The problem we have noticed with client-server mode is that once the cluster is down for a period, after several attempts the clients are destroyed and any objects (maps, sets, etc.) retrieved from that client are no longer usable.
The client instance also does not recover after the Hazelcast cluster comes back up (we receive HazelcastInstanceNotActiveException).
I know that this issue has been addressed several times and ended up as being a feature request:
issue1
issue2
issue3
My question: what should the strategy be to recover the client? Currently we are planning to enqueue a task in the client process, as below; based on a condition it will try to restart the client instance.
We check whether the client is running via clientInstance.getLifecycleService().isRunning().
Here is the task code:
private class ClientModeHazelcastInstanceReconnectorTask implements Runnable {
    @Override
    public void run() {
        try {
            HazelCastService hazelcastService = HazelCastService.getInstance();
            HazelcastInstance clientInstance = hazelcastService.getHazelcastInstance();
            boolean running = clientInstance.getLifecycleService().isRunning();
            if (!running) {
                logger.info("Current clientInstance is NOT running. Trying to start hazelcastInstance from ClientModeHazelcastInstanceReconnectorTask...");
                hazelcastService.startHazelcastInstance(HazelcastOperationMode.CLIENT);
            }
        } catch (Exception ex) {
            logger.error("Error occurred in ClientModeHazelcastInstanceReconnectorTask!", ex);
        }
    }
}
Is this approach suitable? I also tried listening for lifecycle events but could not make it work that way.
Regards
In Hazelcast 3.9 we changed the way connection and reconnection works in clients. You can read about the new behavior in the docs: http://docs.hazelcast.org/docs/3.9.1/manual/html-single/index.html#configuring-client-connection-strategy
I hope this helps.
In Hazelcast 3.10 you may increase the connection attempt limit from the default of 2 to the maximum:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(Integer.MAX_VALUE);
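Beyond raising the attempt limit, the polling task from the question can be generalized into a retry loop with exponential backoff. Here is a plain-Java sketch; retryWithBackoff and its parameters are illustrative, not Hazelcast API:

```java
import java.util.concurrent.Callable;

public class ReconnectBackoff {
    // Calls `attempt` until it returns true, doubling the delay between tries
    // up to maxDelayMs. Returns the attempt number that succeeded, or -1 if
    // all maxAttempts tries failed.
    public static int retryWithBackoff(Callable<Boolean> attempt, long initialDelayMs,
                                       long maxDelayMs, int maxAttempts) throws Exception {
        long delay = initialDelayMs;
        for (int i = 1; i <= maxAttempts; i++) {
            if (attempt.call()) {
                return i;
            }
            Thread.sleep(delay);
            delay = Math.min(delay * 2, maxDelayMs);
        }
        return -1;
    }
}
```

The `attempt` callable would wrap the client restart (e.g. the startHazelcastInstance call above) and return whether isRunning() became true.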

Java: How to properly close a socket connection using ServerSocket and Socket

I currently have a simple instant-messaging program using Java's Socket and ServerSocket classes. It works as intended, but when I close the connection it does not perform the four-way TCP teardown; instead it closes the connection abruptly with an RST packet.
The way in which I am closing the connection is sending a string from the client to the server which the server will recognize as the command to close the connection. I then use the ServerSocket.close() method on the server and the Socket.close() method on the client.
What is the correct way and/or order of events to properly close a TCP connection utilizing these classes?
Client side disconnect code:
//Disconnects from remote server
//Returns true on success, false on failure
public boolean disconnect() {
    try {
        this.clientOut.println("0x000000");
        this.clientRemoteSocket.close();
        this.isConnected = false;
        return true;
    } catch (Exception e) {
        return false;
    }
}
Server side disconnect code:
// Check to see if the client wants to close the connection
// If yes, then close the connection and break out of the while loop
if (incoming.equals("0x000000")) {
    serverLocalSocket.close();
    break;
}
EDIT:
The code works perfectly fine. I'm just trying to learn socket programming in Java and know that a proper TCP teardown process is to include a 4 way handshake. A FIN packet to the remote host, then an ACK packet from the remote host back. Then a FIN packet from the remote host, then an ACK packet to the remote host. When monitoring the traffic via Wireshark I am not getting that. Instead I am getting a FIN to the remote server, then a RST/ACK back from the server.
This image depicts a proper TCP 4 way teardown process.
So far everything I've found suggests that all one needs is a call to close(), or to let Java's try-with-resources statement handle the cleanup. I can't see Java implementing functionality that doesn't comply with the TCP specification, though; it is very possible I am calling certain lines in an incorrect order without being aware of it.
If you are resetting your own connection on close, either:
You haven't read all the pending incoming data that was sent by the peer, or
You had already written to the connection which had previously already been closed by the peer.
In both cases, an application protocol error.
The great part about TCP is that if you close your socket, your peer will find out on its next read: the stream reports end-of-file (read() returns -1), or an IOException if the connection was reset.
So all you have to do in the client is:
clientRemoteSocket.close();
And with the server, just add an error case to your normal reading of data:
try {
    // Read from the socket:
    int incoming = socketInputStream.read();
    if (incoming == -1) {
        // Client has closed its side of the connection (end of stream)
    }
    // Handle the data here
} catch (IOException e) {
    // Connection error (e.g. reset)
}
There might be a more specific exception you can catch; I'm not sure, it's been a while. But that should work. Good luck!
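To actually see the full FIN/ACK exchange in Wireshark instead of an RST, a common pattern is: the client half-closes with shutdownOutput() (sending its FIN), the server drains the stream until read() returns -1 and only then closes, and the client waits for that EOF before closing. A self-contained loopback sketch of that choreography (class name and message are arbitrary):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class GracefulClose {
    // Returns the client's final read result: -1 means it saw the server's FIN.
    public static int demo() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept()) {
                    InputStream in = s.getInputStream();
                    byte[] buf = new byte[1024];
                    while (in.read(buf) != -1) {
                        // drain until EOF: the client's FIN has arrived
                    }
                    // closing only after EOF lets TCP complete the teardown cleanly
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            serverThread.start();
            int eof;
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                OutputStream out = client.getOutputStream();
                out.write("0x000000\n".getBytes(StandardCharsets.UTF_8));
                client.shutdownOutput();              // sends our FIN; read side stays open
                eof = client.getInputStream().read(); // blocks until the server's FIN (-1)
            }
            serverThread.join();
            return eof;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

The key points are draining before close on the server and half-closing before close on the client; skipping either is what typically provokes the RST described in the question.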

Client-Server communication where Server initiates

I would like to have this setup:
Server hosting TCP socket server
Multiple clients connected over TCP (keeping connection open)
Then I would like to initiate a message from the server to a client. I can't figure out how to do this while having multiple client sessions at the same time. The techniques I've read about involve the server listening on a port; when it receives communication from a client, it launches a new thread to handle and process that client, and then goes back to listening on the port for the next request from another client.
So, then how would I tap into that and send a message to a client running on one of those threads?
My actual usage scenario if you are interested is below. Final goal is like a remote control for your file system to upload files to the server.
- Each client has a Java background application running in the system tray that connects to the server
- The server hosts the connections, and also hosts a RESTful web service to initiate communication
- A mobile device connects to the server over the RESTful web service to request information about a client's filesystem, so it can drill down, find a file, and have the file uploaded to the server
The idea is that mobile users need to upload files from their desktop to the server while away from their office on a mobile device (and this is for a custom product, so we can't use a third-party app).
PS: I've been looking at the simple Client-Server chat program here: http://way2java.com/networking/chat-program-two-way-communication/
You want to have a server listening at all times on a specified port. Once the server notices an incoming connection on that port you should create a new Thread to handle the communication between that client and the server, while the main thread keeps on listening for other incoming connections. This way you can have multiple clients connected to one server. Like so:
private void listen() throws IOException {
    serverSocket = new ServerSocket(port);
    while (GlobalFlags.listening) {
        new ServerThread(serverSocket.accept()).start();
        if (GlobalFlags.exit) {
            serverSocket.close();
            break;
        }
    }
}
Where the GlobalFlags are variables to control the listening process and are not strictly necessary; you could use while (true) and just keep listening forever.
In my project I have a main server controller which had listeners running in Threads. The controller controlled the GlobalFlags. I'm sure instead of using global flags there is a better way to do inter thread communication but for me this was the simplest at the time.
The ServerThread should be looping all the time switching between sending output to the client and receiving input from the client. Like so:
ServerThread(Socket socket) {
    super("GameServerThread");
    this.socket = socket;
    try {
        this.socket.setTcpNoDelay(true);
    } catch (SocketException e) {
        // Error handling
    }
    this.terminate = false;
}

@Override
public void run() {
    try {
        out = new PrintWriter(socket.getOutputStream(), true);
        in = new BufferedReader(
                new InputStreamReader(
                        socket.getInputStream()));
        String inputLine, outputLine;
        while ((inputLine = in.readLine()) != null) {
            outputLine = processInput(inputLine);
            out.println(outputLine);
            if (terminate) {
                break;
            }
        }
        out.close();
        in.close();
        socket.close();
    } catch (Exception e) {
        // Error handling; better to catch the specific exceptions individually.
    }
}
On the client side you have a thread running through a similar loop, receiving input from the server and then sending output to the server.
In this example processInput is the function used to process the client's input. If you want the server to initiate contact you can make the server send something to the outputstream before listening for input and make the client listen first.
I have extracted this example from one of my own projects and the this.socket.setTcpNoDelay(true) is supposed to make the process faster. Reference here: http://www.rgagnon.com/javadetails/java-0294.html
"java.net.Socket.setTcpNoDelay() is used to enable/disable TCP_NODELAY which disable/enable Nagle's algorithm.
Nagle's algorithm try to conserve bandwidth by minimizing the number of segments that are sent. When applications wish to decrease network latency and increase performance, they can disable Nagle's algorithm (that is enable TCP_NODELAY). Data will be sent earlier, at the cost of an increase in bandwidth consumption. The Nagle's algorithm is described in RFC 896.
You get the current "TCP_NODELAY" setting with java.net.Socket.getTcpNoDelay()"
So to send a message to a specific client you could put all the threads, upon creation, in an ArrayList so you can keep track of the currently connected clients. You can have the processInput method halt and poll a queue/variable until another class puts the message to be sent into it. How you gain a handle on the right thread depends on your implementation of processInput: you could give every thread an ID (which is what I did in my project) and have processInput poll an ArrayList at index=ID; to send output to that client you would set the variable at index=ID.
This method seems kind of clunky to me personally but I'm not really sure how else I would do it. You would probably use Queues and have processInput write the input to its Queue and then wait for another class to read it and put its response in the Queue. But I have personally never worked with Queues in java so you should read up on that yourself.
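The "server sends something before listening" idea can be shown end-to-end with plain sockets. In this loopback sketch (class name and greeting are arbitrary) the server writes first after accepting, and the client reads first:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ServerInitiates {
    // Returns the first line the client receives from the server.
    public static String demo() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("HELLO from server"); // server speaks first
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            serverThread.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                String line = in.readLine(); // client listens first
                serverThread.join();
                return line;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Once the connection is open, either side can write at any time; "server initiates" is purely a matter of who sends the first application message.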
To my knowledge:
1) Server hosting a TCP socket server -- possible.
2) Multiple clients connected over TCP -- possible.
3) Initiating a message from the server to the client -- not possible as a fresh connection. The client has to initiate the connection; once it exists, the server can send data packets over it. Example: you open the Facebook website in your browser; Facebook's server cannot decide to send its page to your PC on its own, because your PC does not have a static IP address, and if Facebook could hypothetically initiate a connection to your PC, then your PC would effectively be the server and Facebook the client.

How to refuse incoming connections in Netty?

I have a Netty TCP server, and I want to reject/refuse incoming connection attempts selectively (based on their remote address). I guess I have to use ServerBootstrap.setParentHandler(ChannelHandler), but what do I do in the ChannelHandler? What event am I handling? How do I refuse the connection?
As Norman said, there is no way to refuse the connection, but you can close it immediately by adding Netty's IpFilterHandler to the server pipeline as the first handler. It will also stop propagating upstream channel state events for filtered connections.
@ChannelHandler.Sharable
public class MyFilterHandler extends IpFilteringHandlerImpl {

    private final Set<InetSocketAddress> deniedRemoteAddress;

    public MyFilterHandler(Set<InetSocketAddress> deniedRemoteAddress) {
        this.deniedRemoteAddress = deniedRemoteAddress;
    }

    @Override
    protected boolean accept(ChannelHandlerContext ctx, ChannelEvent e, InetSocketAddress inetSocketAddress) throws Exception {
        return !deniedRemoteAddress.contains(inetSocketAddress);
    }
}
If you have a list of IP address patterns to block, you can use IpFilterRuleHandler:
// Example: allow only localhost:
new IpFilterRuleHandler().addAll(new IpFilterRuleList("+n:localhost, -n:*"))
If you have several network interfaces and you want to accept connections from one interface only you just need to set the local address in ServerBootstrap. This may be enough if your server is running in a machine that's connected to several networks and you want to serve only one of them. In this case any connection attempts from the other networks would be refused by the OS.
Once you have a connection in the application layer it's too late to refuse it. The best you can do is close it immediately.
This is enough if for example you want the server available only on localhost and invisible to the outside world: the loopback network 127.0.0.0/8 is served by a separate interface.
After having looked at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink in the Netty sources, I am fairly certain that Netty accepts all incoming connections, and there is no way to refuse them (but, of course, they can be closed after being accepted).
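The accept-then-close-immediately approach is not Netty-specific; the same deny check can be sketched with a plain ServerSocket (the method names and deny set here are illustrative):

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Set;

public class AddressFilter {
    // True if the remote address is on the deny list.
    public static boolean isDenied(InetSocketAddress remote, Set<String> deniedHosts) {
        return deniedHosts.contains(remote.getAddress().getHostAddress());
    }

    // Accept loop: connections from denied hosts are closed right after accept,
    // which is the earliest point the application layer can act on them.
    public static void serve(ServerSocket server, Set<String> deniedHosts) throws Exception {
        while (!server.isClosed()) {
            Socket s = server.accept();
            InetSocketAddress remote = (InetSocketAddress) s.getRemoteSocketAddress();
            if (isDenied(remote, deniedHosts)) {
                s.close(); // too late to refuse, but we drop it immediately
                continue;
            }
            // hand s off to a worker thread here
        }
    }
}
```

As with Netty, the TCP handshake has already completed by the time accept() returns; only binding to a specific interface (or an OS-level firewall) can prevent the connection from being established at all.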
