I am looking at the documentation of PoolingHttpClientConnectionManager https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html
There is a setValidateAfterInactivity API. What validateAfterInactivity does is not very clear to me. The Javadoc says: "Defines period of inactivity in milliseconds after which persistent connections must be re-validated prior to being leased to the consumer."
How exactly does it re-validate the connection? I want to understand the process. Does it send an HTTP request to the server to re-validate, or is it something else?
What criteria/mechanism does it use to revalidate the connection? How does it all work?
It does not send an HTTP request. In HttpClient 4.4+, when a persistent connection has been idle for longer than validateAfterInactivity, the pool validates it before leasing: the pool's validate hook (CPool.validate) checks conn.isStale(), which attempts to read from the socket with a 1 ms timeout. End-of-stream or an I/O error means the connection is stale, so it is closed and discarded; a mere timeout (no data waiting) means the connection is still usable. The leased entry is then activated:
final ManagedHttpClientConnection conn = poolEntry.getConnection();
if (conn != null) {
    conn.activate();
} else {
    poolEntry.assignConnection(connFactory.createConnection(null));
}
if (log.isDebugEnabled()) {
    log.debug("Connection leased: " + ConnPoolSupport.formatStats(
            poolEntry.getConnection(), route, state, pool));
}
(See the PoolingHttpClientConnectionManager source for the full lease logic.)
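To see the setting in use, a minimal configuration sketch (HttpClient 4.4+; the 5-second value is an arbitrary example):

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PoolValidationExample {
    public static void main(String[] args) throws Exception {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        // Stale-check any pooled connection that has been idle for >= 5 seconds
        // before leasing it again. The check is a socket read with a ~1 ms
        // timeout; no HTTP request is sent to the server.
        cm.setValidateAfterInactivity(5000);

        try (CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build()) {
            // ... execute requests with the client here
        }
    }
}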
I can connect to an external host successfully:
void createConnection() throws IOException {
    logger.info("Connecting to " + host_ + ":" + port_);
    sock_ = new Socket(host_, port_); // assign the field rather than shadowing it with a local
}
The connection is established successfully; however, I need to implement a reconnection mechanism that is triggered when the host is down/killed and then reconnects to a new host and port.
Is there such a mechanism in the JDK? Something like a trigger event or Observer?
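For reference, the JDK has no built-in connection-loss event for plain sockets: failure surfaces as an IOException on connect or on the next read/write, so reconnection is usually a hand-rolled retry loop. A minimal sketch, assuming a list of candidate endpoints (names and timeouts are illustrative):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

public class Reconnector {
    // Tries each candidate endpoint in turn, pausing between full passes,
    // until a connection succeeds. Call this again whenever an IOException
    // on read/write signals that the current connection has died.
    public static Socket connectWithRetry(List<InetSocketAddress> candidates,
                                          long retryDelayMillis) throws InterruptedException {
        while (true) {
            for (InetSocketAddress addr : candidates) {
                try {
                    Socket sock = new Socket();
                    sock.connect(addr, 5000); // 5-second connect timeout
                    return sock;
                } catch (IOException e) {
                    // Host down or unreachable; try the next candidate.
                }
            }
            Thread.sleep(retryDelayMillis); // back off before the next pass
        }
    }
}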
I have recently started using JedisCluster in my application. There is little to no documentation or example code for it. I tested a use case and the results are not what I expected:
import java.util.HashSet;
import java.util.Map;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPool;

public class test {

    private static JedisCluster setConnection(HashSet<HostAndPort> IP) {
        return new JedisCluster(IP, 30000, 3,
                new GenericObjectPoolConfig() {{
                    setMaxTotal(500);
                    setMinIdle(1);
                    setMaxIdle(500);
                    setBlockWhenExhausted(true);
                    setMaxWaitMillis(30000);
                }});
    }

    public static int getIdleconn(Map<String, JedisPool> nodes) {
        int i = 0;
        for (String k : nodes.keySet()) {
            i += nodes.get(k).getNumIdle();
        }
        return i;
    }

    public static void main(String[] args) {
        // host1:port1 and host2:port2 are placeholders for real cluster nodes
        HashSet<HostAndPort> IP = new HashSet<HostAndPort>() {{
            add(new HostAndPort("host1", port1));
            add(new HostAndPort("host2", port2));
        }};
        JedisCluster cluster = setConnection(IP);
        System.out.println(getIdleconn(cluster.getClusterNodes()));
        cluster.set("Dummy", "0");
        cluster.set("Dummy1", "0");
        cluster.set("Dummy3", "0");
        System.out.println(getIdleconn(cluster.getClusterNodes()));
        try {
            Thread.sleep(60000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(getIdleconn(cluster.getClusterNodes()));
    }
}
The output for this snippet is:
0
3
3
Questions:
1. I have set the timeout to 30000 in JedisCluster(IP, 30000, 3, new GenericObjectPoolConfig()). I believe this is the connection timeout, which would mean idle connections are closed after 30 seconds. That doesn't seem to be happening: after sleeping for 60 seconds, the number of idle connections is still 3. What am I doing/understanding wrong here? I want the pool to close a connection that has not been used for more than 30 seconds.
2. setMinIdle(1): does this mean that, regardless of the connection timeout, the pool will always maintain one connection?
3. I prefer availability over throughput for my app. What should the value of setMaxWaitMillis be if the connection timeout is 30 secs?
4. Though rare, the app fails with redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster. I think this is connected to question 1. How do I prevent it?
1. The 30000 (30 seconds) here is the (socket) timeout: the timeout for a single socket (read) operation. It is not related to closing idle connections.
Closing idle connections is controlled by GenericObjectPoolConfig, so check the eviction parameters there (see the sketch after this list).
2. Yes (mostly).
3. setMaxWaitMillis is the timeout for borrowing a connection object from the pool. It is not related to the 30 secs and won't really solve anything for you in terms of availability.
4. Keep your cluster nodes available. There have been changes in Jedis related to this; you can try a recent version (4.x, even better 4.2.x).
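Idle-connection eviction is driven by the evictor settings on GenericObjectPoolConfig (Apache commons-pool2). A minimal sketch of a pool config that would close connections idle for more than 30 seconds (the 10-second eviction interval is an arbitrary choice; minIdle connections are kept regardless):

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
poolConfig.setMaxTotal(500);
poolConfig.setMinIdle(1);  // the evictor never goes below this
poolConfig.setMaxIdle(500);
// Run the idle-object evictor every 10 seconds and close any connection
// that has been idle for more than 30 seconds.
poolConfig.setTimeBetweenEvictionRunsMillis(10000);
poolConfig.setMinEvictableIdleTimeMillis(30000);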
We're having some trouble trying to implement a Pool of SftpConnections for our application.
We're currently using SSHJ (Schmizz) as the transport library, and facing an issue we simply cannot simulate in our development environment (but the error keeps showing randomly in production, sometimes after three days, sometimes after just 10 minutes).
The problem is, when trying to send a file via SFTP, the thread gets locked in the init method from schmizz' TransportImpl class:
@Override
public void init(String remoteHost, int remotePort, InputStream in, OutputStream out)
        throws TransportException {
    connInfo = new ConnInfo(remoteHost, remotePort, in, out);
    try {
        if (config.isWaitForServerIdentBeforeSendingClientIdent()) {
            receiveServerIdent();
            sendClientIdent();
        } else {
            sendClientIdent();
            receiveServerIdent();
        }
        log.info("Server identity string: {}", serverID);
    } catch (IOException e) {
        throw new TransportException(e);
    }
    reader.start();
}
isWaitForServerIdentBeforeSendingClientIdent is FALSE for us, so first the client (us) sends its identification, as appears in the logs:
"Client identity String: blabla"
Then it is receiveServerIdent's turn:
private void receiveServerIdent() throws IOException {
    final Buffer.PlainBuffer buf = new Buffer.PlainBuffer();
    while ((serverID = readIdentification(buf)).isEmpty()) {
        int b = connInfo.in.read();
        if (b == -1)
            throw new TransportException("Server closed connection during identification exchange");
        buf.putByte((byte) b);
    }
}
The thread never gets control back, as the server never replies with its identity. The code seems to be stuck in this while loop. No timeouts or SSH exceptions are thrown; my client just keeps waiting forever, and the thread stays blocked.
This is the readIdentification method's impl:
private String readIdentification(Buffer.PlainBuffer buffer) throws IOException {
    String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
    if (ident.isEmpty()) {
        return ident;
    }
    if (!ident.startsWith("SSH-2.0-") && !ident.startsWith("SSH-1.99-"))
        throw new TransportException(DisconnectReason.PROTOCOL_VERSION_NOT_SUPPORTED,
                "Server does not support SSHv2, identified as: " + ident);
    return ident;
}
It seems that ConnInfo's InputStream never gets data to read, as if the server had closed the connection (even though, as said earlier, no exception is thrown).
I've tried to simulate this error by saturating the negotiation, closing sockets while connecting, using conntrack to kill established connections while the handshake is being made, but with no luck at all, so any help would be HIGHLY appreciated.
: )
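One note on the unbounded wait: the hang is a blocking socket read with no deadline, so a socket-level timeout would turn it into an exception that can be handled. With sshj this should be configurable on the SSHClient before connecting; a sketch (timeout values are arbitrary examples):

import net.schmizz.sshj.SSHClient;

SSHClient ssh = new SSHClient();
ssh.loadKnownHosts();
ssh.setConnectTimeout(10000); // fail the TCP connect after 10 s instead of hanging
ssh.setTimeout(30000);        // fail any blocking read (including the ident exchange) after 30 s
ssh.connect("sftp.example.com"); // hypothetical host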
I bet the following code creates the problem:
String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
if (ident.isEmpty()) {
    return ident;
}
If IdentificationStringParser.parseIdentificationString() returns an empty string, it is returned to the caller. The caller keeps looping on while ((serverID = readIdentification(buf)).isEmpty()) since the string is always empty. The only way to break the loop would be for int b = connInfo.in.read(); to return -1... but if the server keeps sending (or resending) data, this condition is never met.
If this is the case, I would add some kind of artificial way to detect it, like:
// The caller passes one AtomicInteger per identification exchange.
private String readIdentification(Buffer.PlainBuffer buffer, AtomicInteger numberOfAttempts)
        throws IOException {
    String ident = new IdentificationStringParser(buffer, loggerFactory).parseIdentificationString();
    numberOfAttempts.incrementAndGet();
    if (ident.isEmpty()) {
        if (numberOfAttempts.intValue() >= 1000) { // 1000 attempts is an arbitrary cap
            throw new TransportException("Too many attempts to read the server ident");
        }
        return ident;
    }
    if (!ident.startsWith("SSH-2.0-") && !ident.startsWith("SSH-1.99-"))
        throw new TransportException(DisconnectReason.PROTOCOL_VERSION_NOT_SUPPORTED,
                "Server does not support SSHv2, identified as: " + ident);
    return ident;
}
This way you would at least confirm that this is the case, and could dig further into why .parseIdentificationString() returns an empty string.
Faced a similar issue where we would see:
INFO [net.schmizz.sshj.transport.TransportImpl : pool-6-thread-2] - Client identity string: blablabla
INFO [net.schmizz.sshj.transport.TransportImpl : pool-6-thread-2] - Server identity string: blablabla
But on some occasions there was no server response.
Our service would typically wake up and transfer several files simultaneously, one file per connection / thread.
The issue was in the sshd server config: we increased MaxStartups from the default value of 10 (we noticed the problems started shortly after batch sizes increased to above 10).
Default in /etc/ssh/sshd_config:
MaxStartups 10:30:100
Changed to:
MaxStartups 30:30:100
MaxStartups
Specifies the maximum number of concurrent unauthenticated connections to the SSH daemon. Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10:30:100. Alternatively, random early drop can be enabled by specifying the three colon separated values start:rate:full (e.g. "10:30:60"). sshd will refuse connection attempts with a probability of rate/100 (30%) if there are currently start (10) unauthenticated connections. The probability increases linearly and all connection attempts are refused if the number of unauthenticated connections reaches full (60).
If you cannot control the server, you might have to find a way to limit your concurrent connection attempts in your client code instead.
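If throttling on the client side, a counting semaphore around connection setup is a simple way to keep concurrent unauthenticated connections below the server's MaxStartups threshold. A sketch (the limit of 8 and the class/method names are hypothetical):

import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

public class SftpConnectGuard {
    // Allow at most 8 simultaneous connect/authenticate attempts, staying
    // under a MaxStartups "start" value of 10.
    private final Semaphore permits = new Semaphore(8);

    public <T> T connect(Callable<T> connectAndAuth) throws Exception {
        permits.acquire(); // block until a slot is free
        try {
            return connectAndAuth.call(); // perform connect + authentication
        } finally {
            permits.release(); // MaxStartups only counts unauthenticated connections
        }
    }
}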
In a non-blocking connect on the client side, it might be the case that the server is not up and the connection cannot be established. I use a selector to wait for OP_CONNECT to figure out whether the connection can be established, in the following way:
connection = SocketChannel.open();
connection.configureBlocking(false);
// Kick off connection establishment
connection.connect(hostAddress);
connection.register(selector, SelectionKey.OP_CONNECT);
this.selector.select(2000);
// Iterate over the set of keys for which events are available
Iterator<SelectionKey> selectedKeys = this.selector.selectedKeys().iterator();
if (!selectedKeys.hasNext()) {
    throw new IllegalStateException("Could not connect to " + hostAddress);
}
SelectionKey key = selectedKeys.next();
boolean valid = key.isValid();
if (!key.isConnectable()) {
    throw new IllegalStateException("Could not connect to " + hostAddress);
}
finishConnection(key);
However, even if I do not start the server, key.isConnectable() returns true... I don't understand why that is the case, or how to make sure that I only call selector.select() again once I am actually connected.
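For context on that behavior: OP_CONNECT fires when the connection attempt finishes, whether it succeeded or failed; the outcome is only reported by SocketChannel.finishConnect(), which throws (e.g. ConnectException) when nothing is listening. A sketch of what the finishConnection(key) helper referenced above might do (the helper body is assumed, not shown in the question):

private void finishConnection(SelectionKey key) throws IOException {
    SocketChannel channel = (SocketChannel) key.channel();
    try {
        if (channel.finishConnect()) {
            // Connected: switch interest to reads (or writes) as needed.
            key.interestOps(SelectionKey.OP_READ);
        }
    } catch (IOException e) {
        key.cancel();    // the connect failed; clean up the key
        channel.close();
        throw e;         // e.g. java.net.ConnectException: Connection refused
    }
}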
I am using Google App Engine's Channel API. I am having trouble re-establishing a connection lost due to the user's network. When you lose the internet connection, the channel calls onError but does not call onClose; as far as the JavaScript object is concerned, the channel socket is still open.
How do you handle a connection lost due to an internet issue? I am thinking of 1) triggering the channel to close and re-opening it when an RPC unrelated to the channel succeeds somewhere in the application for the first time (which indicates the internet is alive again), or 2) using a timer that runs all the time and pings the server for network status (though avoiding that kind of resource consumption was the point of introducing long polling). Any other ideas would be great.
Observation:
When the internet connection is dead, onError is called at increasing intervals (10 sec, 20 sec, 40 sec), twice. Once the internet connection is back, the channel does not resume; it stops working without any indication that it is dead.
Thanks.
When you look at the JavaScript console, presumably you will see a "400 Unknown SID" error.
If so, here is my workaround for it. This is a service module for AngularJS, but please look at the onerror callback. Please try this workaround and let me know whether it works.
Added: I neglected to answer your main question, but in my opinion it is hard to determine whether you're connected to the internet unless you actually ping the "internet". So you may want to use retry logic similar to the following code, with some tweaks. In the example below I retry just 3 times, but you could do more with some backoff. However, I think the best way to handle this is: when the app exhausts the retry max count, indicate to the user that the app has lost the connection, ideally showing a button or a link to re-connect to the channel service.
And, you can also track the connection on the server side, see:
https://developers.google.com/appengine/docs/java/channel/#Java_Tracking_client_connections_and_disconnections
app.factory('channelService', ['$http', '$rootScope', '$timeout',
  function($http, $rootScope, $timeout) {
    var service = {};
    var isConnectionAlive = false;
    var callbacks = [];
    var retryCount = 0;
    var MAX_RETRY_COUNT = 3;

    service.registerCallback = function(pattern, callback) {
      callbacks.push({pattern: pattern, func: callback});
    };

    service.messageCallback = function(message) {
      for (var i = 0; i < callbacks.length; i++) {
        var callback = callbacks[i];
        if (message.data.match(callback.pattern)) {
          $rootScope.$apply(function() {
            callback.func(message);
          });
        }
      }
    };

    service.channelTokenCallback = function(channelToken) {
      var channel = new goog.appengine.Channel(channelToken);
      service.socket = channel.open();
      isConnectionAlive = false;
      service.socket.onmessage = service.messageCallback;
      service.socket.onerror = function(error) {
        console.log('Detected an error on the channel.');
        console.log('Channel Error: ' + error.description + '.');
        console.log('Http Error Code: ' + error.code);
        isConnectionAlive = false;
        if (error.description == 'Invalid+token.' || error.description == 'Token+timed+out.') {
          console.log('It should be recovered with onclose handler.');
        } else {
          // In this case, we need to manually close the socket.
          // See also: https://code.google.com/p/googleappengine/issues/detail?id=4940
          console.log('Presumably it is "Unknown SID Error". Try closing the socket manually.');
          service.socket.close();
        }
      };
      service.socket.onclose = function() {
        isConnectionAlive = false;
        console.log('Reconnecting to a new channel');
        openNewChannel();
      };
      console.log('A channel was opened successfully. Will check the ping in 20 secs.');
      $timeout(checkConnection, 20000, false);
    };

    function openNewChannel(isRetry) {
      console.log('Retrieving a clientId.');
      if (isRetry) {
        retryCount++;
      } else {
        retryCount = 0;
      }
      $http.get('/rest/channel')
        .success(service.channelTokenCallback)
        .error(function(data, status) {
          console.log('Can not retrieve a clientId');
          if (status != 403 && retryCount <= MAX_RETRY_COUNT) {
            console.log('Retrying to obtain a client id');
            openNewChannel(true);
          }
        });
    }

    function pingCallback() {
      console.log('Got a ping from the server.');
      isConnectionAlive = true;
    }

    function checkConnection() {
      if (isConnectionAlive) {
        console.log('Connection is alive.');
        return;
      }
      if (service.socket == undefined) {
        console.log('will open a new connection in 1 sec');
        $timeout(openNewChannel, 1000, false);
        return;
      }
      // Ping didn't arrive.
      // Assuming the onclose handler automatically opens a new channel.
      console.log('Not receiving a ping, closing the connection');
      service.socket.close();
    }

    service.registerCallback(/P/, pingCallback);
    openNewChannel();
    return service;
  }]);