Connection eviction strategy in HTTP connection pooling in Java

I am trying to implement HTTP connection pooling in Java for a web service. The service will receive a request and then call other HTTP services.
public final class HttpClientPool {
    private static HttpClientPool instance = null;
    private PoolingHttpClientConnectionManager manager;
    private IdleConnectionMonitorThread monitorThread;
    private final CloseableHttpClient client;

    public static HttpClientPool getInstance() {
        if (instance == null) {
            synchronized (HttpClientPool.class) {
                if (instance == null) {
                    instance = new HttpClientPool();
                }
            }
        }
        return instance;
    }

    private HttpClientPool() {
        manager = new PoolingHttpClientConnectionManager();
        client = HttpClients.custom().setConnectionManager(manager).build();
        monitorThread = new IdleConnectionMonitorThread(manager);
        monitorThread.setDaemon(true);
        monitorThread.start();
    }

    public CloseableHttpClient getClient() {
        return client;
    }
}
class IdleConnectionMonitorThread extends Thread {
    private final HttpClientConnectionManager connMgr;
    private volatile boolean shutdown;

    IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
        super();
        this.connMgr = connMgr;
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000);
                    // Close expired connections
                    connMgr.closeExpiredConnections();
                    // Optionally, close connections
                    // that have been idle longer than 60 sec
                    connMgr.closeIdleConnections(60, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }

    void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}
As mentioned in the Connection Management doc for the connection eviction strategy, instead of using an IdleConnectionMonitorThread, what if I use manager.setValidateAfterInactivity? What are the pros & cons of the two approaches above?
Is the above HTTP connection pool implementation correct?

With #setValidateAfterInactivity set to a positive value, persistent connections will get validated upon lease request. That is, stale and non-reusable connections will not get automatically evicted from the pool until an attempt is made to re-use them.
Running a dedicated thread that iterates over persistent connections at the specified time interval and removes expired or idle connections from the pool ensures proactive connection eviction at the cost of an extra thread and slightly higher pool lock contention.

In HttpClient 4.5.3, manager.setValidateAfterInactivity has a default value of 2000, which is 2 seconds. So I would suggest not using an IdleConnectionMonitorThread unless you want the application to both validate inactive connections and proactively clean up the pool.
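Inside the HttpClientPool constructor above, the validation-on-lease setup would look roughly like this (the 5-second interval and the pool limits are illustrative values, not recommendations):
manager = new PoolingHttpClientConnectionManager();
manager.setValidateAfterInactivity(5000); // validate pooled connections idle for 5+ seconds before re-use
manager.setMaxTotal(100);                 // illustrative pool limits
manager.setDefaultMaxPerRoute(20);

client = HttpClients.custom()
        .setConnectionManager(manager)
        // HttpClient 4.4+ can also run its own eviction thread, which replaces
        // a hand-rolled IdleConnectionMonitorThread:
        .evictExpiredConnections()
        .evictIdleConnections(60, TimeUnit.SECONDS)
        .build();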

Related

Java: Quit Callable instance in an Executor wrapped under CompletionService

I have a Connector class that establishes the connection and delegates tasks to two subtasks, JobManager and DataRetriever. I used the observer pattern with JobManager as the Observable; it submits an entry pair to the Connector class.
A typical connector class looks like:
class Connector implements Observable, Closeable
{
    ....

    private void submitandMonitor(List<Callable<String>> bulkTasks, List<Callable<String>> soapTasks)
            throws InterruptedException
    {
        // Bulk job submission
        bulkExecutor = Executors.newFixedThreadPool(NBULKTHREADS,
                new ThreadFactoryBuilder().setNameFormat("BulkDownloader-%d").build());
        bulkCompletionService = new ExecutorCompletionService<String>(bulkExecutor);
        bulkTasks.forEach(task -> bulkCompletionService.submit(task));
        // Status poll thread configuration
        statusPollExec = Executors.newScheduledThreadPool(0,
                new ThreadFactoryBuilder().setNameFormat("StatusPoller").build());
        statusPollExec.scheduleAtFixedRate(statusPoller, 15, 15, TimeUnit.MINUTES);
        // Wait until all the bulk jobs are completed
        shutdownLatch.await();
        bulkExecutor.shutdown();
    }

    @Override
    public void close() throws SQLException, ClientProtocolException, IOException,
            RetriesExhaustedException
    {
        try
        {
            if (bulkExecutor != null)
            {
                if (!bulkExecutor.isShutdown())
                    bulkExecutor.shutdown();
                bulkExecutor.awaitTermination(15, TimeUnit.SECONDS);
                logger.debug("Bulk executor shutdown completed");
            }
        }
        catch (InterruptedException e)
        {
            logger.warn("Auto shutdown duration exceeded, manually terminating Bulk executor!");
            bulkExecutor.shutdownNow();
            logger.warn("Manual shutdown for Bulk executor completed");
        }
        ....// Same set of try catches for executors
    }
}
The job manager consists of:
class JobManager implements Callable<String>
{
    // Method that does not bother about thread shutdown
    private void submitJobs()
    {
        // Had sObjects.parallelStream() but changed to an iterative loop suspecting it was not responding to shutdown - probably the offending method
        for (Entry<SalesforceObject, Boolean> item : sObjects.entrySet())
        {
            SalesforceObject sObject = item.getKey();
            Boolean queryAll = item.getValue();
            try
            {
                // Method to submit the values for bulk requests. No loop
                submitBulkJob(sObject, queryAll);
                // Add to jobDetailMap <,> when job successful; used in monitorJobs()
            }
            catch (Exception e)
            {
                // Set the params and send info to observer
            }
        }
    }

    private void monitorJobs() throws InterruptedException
    {
        while (jobdetailMap.size() > 0)
        {
            for (Iterator<Entry<SalesforceObject, JobDetail>> iterator = jobdetailMap.entrySet().iterator(); iterator.hasNext();)
            {
                Entry<SalesforceObject, JobDetail> entry = iterator.next();
                SalesforceObject sObject = entry.getKey();
                String sObjname = sObject.getsObjname();
                // Check for status and send info to observer
            }
            Thread.sleep(Constants.sleep5000);
        }
    }

    @Override
    public String call() throws Exception
    {
        submitJobs();
        monitorJobs();
        setsObjectstatus(null);
        return this.getClass().getSimpleName();
    }
}
submitJobs() iterates through the task list and submits each job. monitorJobs() iterates over the submitted jobs, checks their status, and does not return until all of them are complete.
Even though close() already shuts down and terminates the executors, I noticed that the JobManager still notifies the Connector, and I frequently end up with a rejected-execution exception. Does this imply that shutdownNow() does not terminate the instance? Follow-up: if I use a parallel stream to submit the jobs in JobManager, and the thread terminates, how should I handle ending the parallel stream?

How to control the okHttpClient connections size?

I am debugging an issue in my Android app. I found the root cause is that file descriptors went beyond the limit. After further investigation I found that the app has too many sockets open. I use OkHttpClient 2.5 for all of my network communication, thus I am wondering how I should limit my connection pool size. Below is my code snippet:
OkHttpClient okHttpClient = new OkHttpClient();
okHttpClient.setConnectTimeout(TIMEOUT, TimeUnit.MILLISECONDS); // time unit assumed
ConnectionPool connectionPool = new ConnectionPool(MAX_IDLE_CONNECTIONS, KEEP_ALIVE_DURATION_MS);
okHttpClient.setConnectionPool(connectionPool);
@RequiredArgsConstructor
public class HttpEngineCallable implements Callable<IHttpResponse>
{
    private final String url;

    public IHttpResponse call() throws Exception
    {
        try
        {
            Request request = new Request.Builder().url(url).build();
            Call call = okHttpClient.newCall(request);
            Response rawResponse = call.execute();
            return new OkHttpResponse(rawResponse);
        }
        catch (Exception e)
        {
            throw new IllegalStateException(e);
        }
    }
private final Function<IHttpResponse, T> httpResponseParser = new Function<IHttpResponse, T>()
{
    @Nullable
    @Override
    public T apply(@Nullable IHttpResponse httpResponse)
    {
        if (httpResponse == null)
        {
            return null;
        }
        InputStream stream = httpResponse.getBody();
        JsonParser parser = null;
        T result = null;
        try
        {
            parser = jsonFactory.createParser(stream);
            result = strategy.parseData(parser);
        }
        catch (Exception e)
        {
            log.error("Unable to convert {} with {}.", stream, strategy, e);
        }
        finally
        {
            IOUtils.closeQuietly(parser);
            IOUtils.closeQuietly(stream);
        }
        return result;
    }
};
Future<T> future = executorService.submit(new HttpEngineCallable(url));
Future<V> finalFuture = Futures.transform(future, httpResponseParser, executorService);
timeoutExecutorService.submit(new Runnable()
{
    @Override
    public void run()
    {
        try
        {
            T result = finalFuture.get(CLIENT_TIMEOUT, TIMEUNIT);
            if (result == null)
            {
                // notify onFailure listeners
            }
            else
            {
                // notify onSuccess listeners
            }
        }
        catch (Exception ex)
        {
            // notify onFailure listeners
        }
    }
});
So I have a few questions regarding this implementation:
My CLIENT_TIMEOUT is shorter than the OkHttp connect timeout. If my finalFuture.get(CLIENT_TIMEOUT, TIMEUNIT) throws a timeout exception, would my finally block in the parser Function still be executed? I am counting on it to close my connection.
How can I limit the size of my ConnectionPool? Is there a way I can auto-recycle the oldest connections if the connection count goes beyond the limit?
We had a similar issue with too many open file descriptors crashing our app.
The problem was that we created one OkHttpClient per request. By default each OkHttpClient comes with its own connection pool, which of course blows up the number of connections/threads/file handles and prevents proper reuse in the pool.
We solved the problem by manually creating a global ConnectionPool in a singleton, and then passing that to the OkHttpClient.Builder object which builds the actual OkHttpClient.
...
builder.connectionPool(GLOBAL_CONNECTION_POOL);
OkHttpClient client = builder.build();
...
This still allows for per-request configuration using the OkHttpClient.Builder and makes sure all OkHttpClient instances are still using a common connection pool.
We were then able to properly size the global connection pool.
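A minimal sketch of that approach with OkHttp 3.x (the holder class name and the pool sizing are made up for illustration):
public final class SharedHttpClientFactory {
    // One pool shared by every client built through this factory
    private static final ConnectionPool GLOBAL_CONNECTION_POOL =
            new ConnectionPool(5, 5, TimeUnit.MINUTES); // max idle connections, keep-alive

    private SharedHttpClientFactory() {}

    public static OkHttpClient newClient() {
        OkHttpClient.Builder builder = new OkHttpClient.Builder();
        // per-client configuration (timeouts, interceptors, ...) can still be applied here
        builder.connectionPool(GLOBAL_CONNECTION_POOL);
        return builder.build();
    }
}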

Timing out DriverManager.getConnection()?

I'm working with Java and MySQL for the database, and I ran into a weird problem:
One of my clients has a very unstable connection, and sometimes packet loss can be high. OK, that's not the software's fault, I know, but when I went there to test, I saw that when the program calls DriverManager.getConnection() and the network connection gets unstable, that line can lock the application (or the given thread) for several minutes. I have added some logic, of course, to use another data source for caching data locally and then saving to the network host when possible, but I often can't let the program hang for longer than 10s (and this method doesn't seem to have any timeout setting).
So, I came out with a workaround like this:
public class CFGBanco implements Serializable {
    public String driver = "com.mysql.jdbc.Driver";
    public String host;
    public String url = "";
    public String proto = "jdbc:mysql://";
    public String database;
    public String user;
    public String password;
}
private static java.sql.Connection Connect(HostConfig dataHost) throws java.sql.SQLException, ClassNotFoundException
{
    dataHost.url = dataHost.proto + dataHost.host;
    if (dataHost.database != null && !dataHost.database.equals("")) dataHost.url += "/" + dataHost.database;
    java.lang.Class.forName(dataHost.driver);

    ArrayList<Object> lh = new ArrayList<>();
    lh.add(0, null);

    Thread ConThread = new Thread(() -> {
        try {
            lh.add(0, java.sql.DriverManager.getConnection(
                    dataHost.url, dataHost.user, dataHost.password));
        } catch (Exception x) {
            System.out.println(x.getMessage());
        }
    }, "ConnThread-" + SessId);
    ConThread.start();

    Thread TimeoutThread = new Thread(() -> {
        int c = 0;
        int delay = 100;
        try {
            try {
                do {
                    try {
                        if (ConThread.isAlive())
                            Thread.sleep(delay);
                        else
                            break;
                    } catch (Exception x) {}
                } while ((c += delay) < 10000);
            } catch (Exception x) {}
        } finally {
            try {
                ConThread.stop(); // forcibly stops the connect thread after ~10s
            } catch (Exception x) {}
        }
    }, "ConTimeout-" + SessId);
    TimeoutThread.start();

    try {
        ConThread.join();
    } catch (Exception x) {}

    if (lh.get(0) == null)
        throw new SQLException();
    return (Connection) lh.get(0);
}
I call getConnection from another thread, then create a secondary "timeout" thread to watch it, and then join the calling thread to ConThread.
I have been getting results close to what I expected, but it got me wondering:
Is there a better way to do this? Does the creation of two threads eat up enough system resources to make this approach impractical?
You need connection pooling. Pool the connections and reuse them rather than recreating them every time. One such library for DB connection pooling is DBCP by Apache.
It will take care of dropped connections and so on. You can configure a validation query that is run against the DB, say, before a connection is borrowed from the pool; only once it validates successfully will your actual query be fired.
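A rough sketch with Commons DBCP2 (all values are placeholders; the connectTimeout/socketTimeout URL parameters are MySQL Connector/J options that bound how long a hung network can block a connect or a read):
BasicDataSource ds = new BasicDataSource(); // org.apache.commons.dbcp2
ds.setDriverClassName("com.mysql.jdbc.Driver");
ds.setUrl("jdbc:mysql://host/db?connectTimeout=10000&socketTimeout=30000");
ds.setUsername("user");
ds.setPassword("password");
ds.setValidationQuery("SELECT 1"); // run before a connection is handed out
ds.setTestOnBorrow(true);
ds.setMaxTotal(10);                // illustrative pool size
ds.setMaxWaitMillis(10000);        // give up after 10s if no pooled connection is free

try (Connection con = ds.getConnection()) {
    // use the connection as usual
}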

Netty Connection Retries

Retry Connection in Netty
I am building a client socket system. The requirements are:
First, attempt to connect to the remote server.
When the first attempt fails, keep on trying until the server is online.
I would like to know whether there is such a feature in Netty, or how best I can solve that.
Thank you very much
This is the code snippet I am struggling with:
protected void connect() throws Exception {
    this.bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool()));
    // Configure the event pipeline factory.
    bootstrap.setPipelineFactory(new SmpPipelineFactory());
    bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);
    bootstrap.setOption("sendBufferSize", 1048576);
    bootstrap.setOption("receiveBufferSize", 1048576);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);
    // Make a new connection.
    final ChannelFuture connectFuture = bootstrap
            .connect(new InetSocketAddress(config.getRemoteAddr(), config.getRemotePort()));
    channel = connectFuture.getChannel();
    connectFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (connectFuture.isSuccess()) {
                // Connection attempt succeeded:
                // Begin to accept incoming traffic.
                channel.setReadable(true);
            } else {
                // Close the connection if the connection attempt has failed.
                channel.close();
                logger.info("Unable to Connect to the Remote Socket server");
            }
        }
    });
}
Assuming Netty 3.x, the simplest example would be:
// Configure the client.
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

ChannelFuture future = null;
while (true)
{
    future = bootstrap.connect(new InetSocketAddress("127.0.0.1", 80));
    future.awaitUninterruptibly();
    if (future.isSuccess())
    {
        break;
    }
}
Obviously you'd want to have your own logic for the loop that sets a max number of tries, etc. Netty 4.x has a slightly different bootstrap, but the logic is the same. This is also synchronous, blocking, and ignores InterruptedException; in a real application you might register a ChannelFutureListener with the Future and be notified when the Future completes.
Added after the OP edited the question:
You have a ChannelFutureListener that is getting notified. If you want to then retry the connection you're going to have to either have that listener hold a reference to the bootstrap, or communicate back to your main thread that the connection attempt failed and have it retry the operation. If you have the listener do it (which is the simplest way) be aware that you need to limit the number of retries to prevent an infinite recursion - it's being executed in the context of the Netty worker thread. If you exhaust your retries, again, you'll need to communicate that back to your main thread; you could do that via a volatile variable, or the observer pattern could be used.
When dealing with async you really have to think concurrently. There are a number of ways to skin that particular cat.
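A minimal sketch of the listener-based retry with a cap, assuming Netty 3.x (bootstrap, remoteAddress, and the retry limit are placeholder names/values):
private final AtomicInteger attempts = new AtomicInteger();
private static final int MAX_RETRIES = 5; // illustrative limit

private final ChannelFutureListener retryListener = new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            future.getChannel().setReadable(true);
        } else if (attempts.incrementAndGet() <= MAX_RETRIES) {
            // Runs on a Netty worker thread, so keep it cheap (no sleeps);
            // re-registering this listener retries the connect.
            bootstrap.connect(remoteAddress).addListener(this);
        } else {
            // Retries exhausted: signal the main thread (volatile flag, latch, observer, ...)
        }
    }
};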
Thank you, Brian Roach. The connected variable is volatile and can be accessed outside this code for further processing.
final InetSocketAddress sockAddr = new InetSocketAddress(
        config.getRemoteAddr(), config.getRemotePort());
final ChannelFuture connectFuture = bootstrap.connect(sockAddr);
channel = connectFuture.getChannel();
connectFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // Connection attempt succeeded:
            // Begin to accept incoming traffic.
            channel.setReadable(true);
            connected = true;
        } else {
            // Close the connection if the connection attempt has failed.
            channel.close();
            if (!connected) {
                logger.debug("Attempt to connect within " + ((double) frequency / (double) 1000) + " seconds");
                try {
                    Thread.sleep(frequency);
                } catch (InterruptedException e) {
                    logger.error(e.getMessage());
                }
                bootstrap.connect(sockAddr).addListener(this);
            }
        }
    }
});

Java WeakReferences = Understanding problem (with HornetQ JMS implementation)?

The code below does NOT work:
Cause:
I think I have tracked down the cause to:
http://community.jboss.org/thread/150988
=> This article says that HornetQ uses Weak References.
My Question:
Why does the code not run? (I have this code running with a slightly different implementation, but the code below fails repeatedly.) My only guess is that the
following references:
private Connection connection = null;
private Session session = null;
private MessageProducer producer = null;
are not regarded as strong references? (And this leads to the garbage collector removing the objects... but why aren't they strong references?)
Or is there another problem with the code? (As said, the code runs fine if I copy everything into one single method, but if I use the singleton approach below, the code does not work...) Another assumption was that it might have to do with ThreadLocal, but I am only using a single thread...
The Code not working (stripped down):
public class JMSMessageSenderTest {

    private static final Logger logger = Logger.getLogger(JMSMessageSenderTest.class);

    private static JMSMessageSenderTest instance;

    private Connection connection = null;
    private Session session = null;
    private MessageProducer producer = null;

    private JMSMessageSenderTest() {
        super();
    }

    public static JMSMessageSenderTest getInstance() throws JMSException {
        if (instance == null) {
            synchronized (JMSMessageSenderTest.class) {
                if (instance == null) {
                    JMSMessageSenderTest instanceTmp = new JMSMessageSenderTest();
                    instanceTmp.initializeJMSConnectionFactory();
                    instance = instanceTmp;
                }
            }
        }
        return instance;
    }

    private void createConnectionSessionQueueProducer() throws Exception {
        try {
            Queue queue = HornetQJMSClient.createQueue("testQueue");
            connection = initializeJMSConnectionFactory();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            producer = session.createProducer(queue);
            connection.start();
        } catch (Exception e) {
            cleanupAfterError();
            throw e;
        }
    }

    private void cleanupAfterError() {
        if (connection != null) {
            try {
                connection.close();
            } catch (JMSException jmse) {
                logger.error("Closing JMS Connection Failed", jmse);
            }
        }
        session = null;
        producer = null;
    }

    public synchronized void sendRequest(String url) throws Exception {
        if (connection == null) {
            createConnectionSessionQueueProducer();
        }
        try {
            // HERE THE EXCEPTION IS THROWN, at least when debugging
            TextMessage textMessage = session.createTextMessage(url);
            producer.send(textMessage);
        } catch (Exception e) {
            cleanupAfterError();
            throw e;
        }
    }

    private Connection initializeJMSConnectionFactory() throws JMSException {
        Configuration configuration = ConfigurationFactory.getConfiguration(null, null);
        Map<String, Object> connectionParams = new HashMap<String, Object>();
        connectionParams.put(org.hornetq.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 5445);
        connectionParams.put(org.hornetq.core.remoting.impl.netty.TransportConstants.HOST_PROP_NAME, "localhost");
        TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams);
        ConnectionFactory connectionFactory = (ConnectionFactory) HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);
        // return connectionFactory.createConnection(login, password);
        return connectionFactory.createConnection();
    }

    /**
     * Orderly shutdown of all resources.
     */
    public void shutdown() {
        cleanupAfterError();
    }
}
Test code to run the code above:
JMSMessageSenderTest jmsMessageSender = JMSMessageSenderTest.getInstance();
jmsMessageSender.sendRequest("www.example.com");
jmsMessageSender.shutdown();
Gives the following error:
I'm closing a JMS connection you left open. Please make sure you close all JMS connections explicitly before letting them go out of scope!
The JMS connection you didn't close was created here:
java.lang.Exception
at org.hornetq.jms.client.HornetQConnection.<init>(HornetQConnection.java:152)
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:662)
at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:121)
Solution:
1.) You also have to keep a reference to the ConnectionFactory (see the answer from Clebert below):
private ConnectionFactory factory = null;
2.) AND this code contains a severe hidden bug (that is not so easy to spot):
I initialized the Connection in the constructor as well as in the createConnectionSessionQueueProducer() method. The second initialization therefore overrides the old value and (as it is a resource that needs to be closed) leads to a stale connection, which HornetQ will then close while throwing the error above.
Thanks very very much! Markus
HornetQ will close the connection factory when the connection factory is released.
You need to hold a reference to the connection factory.
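A minimal sketch of that change against the class above (connectionParams as in the original initializeJMSConnectionFactory()):
private ConnectionFactory connectionFactory; // strong reference, kept for the lifetime of the singleton

private Connection initializeJMSConnectionFactory() throws JMSException {
    ...
    TransportConfiguration transportConfiguration =
            new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams);
    // keep the factory in a field instead of a local variable so it cannot be garbage collected
    connectionFactory = (ConnectionFactory) HornetQJMSClient
            .createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);
    return connectionFactory.createConnection();
}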
I also have similar issues, but it is not supposed to crash. Your implementation looks good. The only thing is that you are not closing the JMS connection, which in turn is getting cleaned up by HornetQ when it is garbage collected.
One thing probably wrong with the code is that you are calling cleanupAfterError() only after an exception. You should also call the same method after you have posted a message and the JMS connection is lying idle. Since you are just opening a connection to post a message and then not closing that connection unless an exception happens, HornetQ's garbage-collection check finds that object and removes it while logging this error.
Let me know if I missed something.
