In Java LDAP connection pooling, I have noticed that the pool timeout property does not seem to work properly. Below are the LDAP connection pooling logs from testing my application.
Following are the values for the initsize and pool timeout properties:
com.sun.jndi.ldap.connect.pool.initsize = 10
com.sun.jndi.ldap.connect.pool.timeout = 300000
==================== LDAP LOGS ======================
21:38:48,480 Create com.sun.jndi.ldap.LdapClient#e2d0ca3
21:38:48,480 Create com.sun.jndi.ldap.LdapClient#4f652edf
21:38:48,495 Create com.sun.jndi.ldap.LdapClient#53bb2d84
21:38:48,558 Create com.sun.jndi.ldap.LdapClient#45766ee8
21:38:48,558 Create com.sun.jndi.ldap.LdapClient#703c62f5
21:38:48,573 Create com.sun.jndi.ldap.LdapClient#279dd5ca
21:38:48,589 Create com.sun.jndi.ldap.LdapClient#51c329b8
21:38:48,605 Create com.sun.jndi.ldap.LdapClient#7ec5afb0
21:38:48,605 Create com.sun.jndi.ldap.LdapClient#3e3659c6
21:38:48,620 Create com.sun.jndi.ldap.LdapClient#5bef29e1
21:38:48,620 Create and use com.sun.jndi.ldap.LdapClient#332bf735
21:38:50,102 Release com.sun.jndi.ldap.LdapClient#332bf735
21:41:05,661 Expired com.sun.jndi.ldap.LdapClient#e2d0ca3 expired
21:41:05,661 Expired com.sun.jndi.ldap.LdapClient#4f652edf expired
21:41:05,661 Expired com.sun.jndi.ldap.LdapClient#53bb2d84 expired
21:41:05,661 Expired com.sun.jndi.ldap.LdapClient#45766ee8 expired
21:41:05,662 Expired com.sun.jndi.ldap.LdapClient#703c62f5 expired
21:41:05,693 Expired com.sun.jndi.ldap.LdapClient#279dd5ca expired
21:41:05,693 Expired com.sun.jndi.ldap.LdapClient#51c329b8 expired
21:41:05,693 Expired com.sun.jndi.ldap.LdapClient#7ec5afb0 expired
21:41:05,693 Expired com.sun.jndi.ldap.LdapClient#3e3659c6 expired
21:41:05,709 Expired com.sun.jndi.ldap.LdapClient#5bef29e1 expired
21:46:05,724 Expired com.sun.jndi.ldap.LdapClient#332bf735 expired
When the application requested its first LDAP connection, 10 new connections were created (as initsize is set to 10) along with the requested connection. After the used connection was released back to the pool, it was not expired after 5 minutes, and the same is the case with the other 10 connections. I am not sure why the LDAP pool timeout property is not working as expected, and I see different behavior (sometimes connections expire in about a minute) when testing the same scenario at different times. Has anyone experienced this kind of behavior, and is there a resolution?
Finally, after breaking my head over this for a while, I understand that the connection pool timeout property is working as designed and I had interpreted it the wrong way.
My first LDAP connection pool was created at 16:36:04, so starting from that time the pool-cleaning job runs every 5 minutes. If new LDAP connections are created just a few seconds or minutes before this scheduled job fires, they can still be removed from the pool as part of the cleaning process. That is why all the connections shown in the log above were expired at 21:41:05 even though they were created at 21:38:48.
So don't expect an LDAP connection to be removed from the pool only after a full 5 minutes of idle time.
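In other words, as described above, the cleaner fires at fixed multiples of the timeout measured from pool creation, not from each connection's last use. A small sketch of that arithmetic (the method name is mine, not part of the JNDI API):

```java
// The JNDI LDAP pool's cleanup task runs every `timeoutMs` milliseconds,
// counted from pool creation -- not from each connection's last use.
// A connection created just before a sweep can therefore be expired
// after far less than `timeoutMs` of idle time.
static long nextSweepAfter(long poolCreatedMs, long timeoutMs, long nowMs) {
    long elapsed = nowMs - poolCreatedMs;
    return poolCreatedMs + (elapsed / timeoutMs + 1) * timeoutMs;
}
```

For example, with a 300000 ms timeout, a connection created 290000 ms after pool creation faces a sweep only 10000 ms later.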
I have a Spring Boot (v2.0.8) application which makes use of a HikariCP (v2.7.9) Pool (connecting to MariaDB) configured with:
minimumIdle: 1
maximumPoolSize: 10
leakDetectionThreshold: 30000
The issue is that our production component, once every few weeks, repeatedly throws SQLTransientConnectionException: "Connection is not available, request timed out after 30000ms...". It never recovers from this and consistently throws the exception, so a restart of the component is required.
From looking at the HikariPool source code, it would seem that this is happening because every time it is calling connectionBag.borrow(timeout, MILLISECONDS) the poolEntry is null and hence throws the timeout Exception. For it to be null, the connection pool must have no free entries i.e. all PoolEntry in the sharedList are marked IN_USE.
I am not sure why the component would not recover from this since eventually I would expect a PoolEntry to be marked NOT_IN_USE and this would break the repeated Exceptions.
Possible scenarios I can think of:
All entries are IN_USE and the DB goes down temporarily. I would expect Exceptions to be thrown for the in-flight queries. Perhaps at this point the PoolEntry status is never reset and therefore is stuck at IN_USE. In this case I would have thought that if an Exception is thrown, the status is changed so that the connection can be cleared from the pool. Can anyone confirm if this is the case?
A flood of REST requests are made to the component which in turn require DB queries to be executed. This fills the connection pool and therefore subsequent requests timeout waiting for previous requests to complete. This makes sense however I would expect the component to recover once the requests complete, which it is not.
Does anyone have an idea of what might be the issue here? I have tried configuring the various timeouts that are in the Hikari documentation but have had no luck diagnosing / resolving this issue. Any help would be appreciated.
Thanks!
Scenario 2 is most likely what is happening. I ran into the same issue when using HikariCP with Cloud Dataflow and receiving a large number of connection requests. The only solution I found was to play with the config until I found a combination that worked for my use case.
I'll leave you my code that works for 50-100 requests per second and wish you luck.
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

private static final DataSource pool = createPool();

private static DataSource createPool() {
    final HikariConfig config = new HikariConfig();
    config.setMinimumIdle(5);
    config.setMaximumPoolSize(50);
    config.setConnectionTimeout(10000); // wait up to 10 s for a free connection
    config.setIdleTimeout(600000);      // retire connections idle for 10 min
    config.setMaxLifetime(1800000);     // recycle connections after 30 min
    config.setJdbcUrl(JDBC_URL);
    config.setUsername(JDBC_USER);
    config.setPassword(JDBC_PASS);
    return new HikariDataSource(config);
}
I have a Spring Boot app with the database pool settings below. If the app runs continuously for 2 to 3 days, I get a "pool empty" error, so I have 2 questions regarding this:
I really suspect connection leaks. The number of users of our application is quite small, so 32 connections should not be exhausted for our user base. How can I find leaked connections?
If I want to tell Spring to create a few more connections when maxActive (32 in my case) is reached, what setting needs to be added?
poolProperties.setTestOnBorrow(true);
poolProperties.setTestOnConnect(true);
poolProperties.setTestWhileIdle(true);
poolProperties.setInitialSize(10);
poolProperties.setMinIdle(10);
poolProperties.setMaxIdle(10);
poolProperties.setMaxActive(32);
poolProperties.setMaxWait(5000); //5 secs
poolProperties.setLogValidationErrors(true);
poolProperties.setLogAbandoned(true);
poolProperties.setValidationQuery("SELECT 1");
o.h.e.jdbc.spi.SqlExceptionHelper.logExceptions(129) - [http-nio-6061-exec-9] Timeout: Pool empty. Unable to fetch a connection in 5 seconds, none available[size:32; busy:32; idle:0; lastwait:5000]
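Regarding the first question, Tomcat's jdbc-pool has built-in abandoned-connection tracking that can surface leaks; a sketch of the relevant settings (the timeout values here are illustrative, not from the original config):

```java
import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties poolProperties = new PoolProperties();
// Treat a connection as abandoned (leaked) if it has been checked out
// longer than removeAbandonedTimeout seconds, close it, and log the
// stack trace of the code that borrowed it.
poolProperties.setRemoveAbandoned(true);
poolProperties.setRemoveAbandonedTimeout(60);
poolProperties.setLogAbandoned(true);
// suspectTimeout only logs the suspect connection without closing it,
// which is safer for legitimately long-running queries.
poolProperties.setSuspectTimeout(30);
```

As for the second question, maxActive is the hard ceiling in this pool; there is no setting that grows the pool beyond it, so the ceiling itself has to be raised.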
WebClientTestService service = new WebClientTestService();
int connectionTimeOutInMs = 5000;
Map<String, Object> context = ((BindingProvider) service).getRequestContext();
// Property names for the JDK-bundled (internal) JAX-WS runtime:
context.put("com.sun.xml.internal.ws.connect.timeout", connectionTimeOutInMs);
context.put("com.sun.xml.internal.ws.request.timeout", connectionTimeOutInMs);
// Property names for the standalone JAX-WS RI (Metro):
context.put("com.sun.xml.ws.request.timeout", connectionTimeOutInMs);
context.put("com.sun.xml.ws.connect.timeout", connectionTimeOutInMs);
Please share the differences, mainly between connect timeout and request timeout.
I need to know the recommended values for these parameters.
What are the criteria for setting a timeout value?
Please share the differences, mainly between connect timeout and request timeout.
I need to know the recommended values for these parameters.
Connect timeout (10s-30s): How long to wait to make an initial connection e.g. if service is currently unavailable.
Socket timeout (10s-20s): How long to wait if the service stops responding after data is sent.
Request timeout (30s-300s): How long to wait for the entire request to complete.
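The first two of these are directly visible on a plain HttpURLConnection; a minimal sketch (the URL is a placeholder):

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Connect vs. read (socket) timeout on a plain HttpURLConnection.
HttpURLConnection conn =
        (HttpURLConnection) new URL("https://example.com/api").openConnection();
conn.setConnectTimeout(10_000); // max wait to establish the TCP connection
conn.setReadTimeout(20_000);    // max silence once the connection is up
// Note: HttpURLConnection has no built-in overall "request" timeout;
// that has to be enforced by the caller or a higher-level client.
```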
What are the criteria for setting a timeout value?
It depends: a web user will get impatient if nothing has happened after 1-2 minutes, whereas a back-end request could be allowed to run longer.
Also consider that server resources are not released until the request completes (or times out), so if you have too many requests and long timeouts, your server could run out of resources and be unable to service further requests.
The request timeout should be set to a value greater than the expected time for the request to complete, perhaps with some headroom to allow for occasionally slower performance under heavy load.
Connect/socket timeouts are often set lower, as they normally indicate a server problem where waiting another 10-15 s usually won't help.
I am trying to benchmark SSL handshakes per second using a variety of tools, JMeter included. I have successfully created a test plan that meets my needs except I now want to test how the SSL handshakes per second compare with and without SSL session reuse. As I understand, by default Java has an unlimited size SSL session cache and entries expire after 24 hours.
I've tried using the JMeter properties "https.use.cached.ssl.context" and "https.sessioncontext.shared", but even when these properties are false it doesn't meet my needs. When both are false, the first HTTPS request in the thread uses a new session ID, but each subsequent HTTPS request in the thread reuses it. Even if I set the undocumented Java property "javax.net.ssl.sessionCacheSize" to 1 to allow only one SSL session ID to be cached, with 10 threads making 5 HTTP requests each I see 10 new SSL sessions negotiated and 40 SSL sessions reused (verified with ssldump and STunnel logs).
Is it possible through JMeter or Java to have every HTTPS request use a new SSL session id?
This works:
https.use.cached.ssl.context=false is set in user.properties
AND use either HTTPClient 3.1 or 4 implementations for HTTP Request
EDIT (after Kaelen comment):
Setting the https.use.cached.ssl.context property to false (with HTTPClient 3.1/4) does work; the only tricky part is that the SSL session context is only reset at the end of an iteration of the Thread Group. In this test plan the Thread Group did not iterate: there was an infinite loop inside the group that ran until a certain number of requests had occurred. Because the Thread Group never iterated, the SSL context was never reset.
In this case, remove the Loop inside the Thread Group and configure the number of iterations on the Thread Group instead.
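For reference, the JSSE session cache involved here can also be inspected and tuned directly from Java via SSLSessionContext (a sketch; note that, as observed in the question, shrinking the cache alone does not force a new handshake per request):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

// The client-side SSL session cache that caused the reuse observed above.
SSLContext ctx = SSLContext.getDefault();
SSLSessionContext clientSessions = ctx.getClientSessionContext();
clientSessions.setSessionCacheSize(1); // cap the number of cached session IDs
clientSessions.setSessionTimeout(60);  // seconds before a cached session expires
```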
What is the Java App Engine default session timeout?
Will there be any bad impact if we set the session timeout to a very long time, since a Google App Engine session is just stored in the datastore by default (just like Facebook, where each time you go to the page the session still exists forever)?
The default session timeout is set to 30 minutes (you can verify this by calling the getMaxInactiveInterval method).
With that fairly limited info about your app, I don't see any impact.
Using setMaxInactiveInterval(-1) indicates that the Session should never timeout.
Keep in mind that you also need to override the JSESSIONID cookie's MaxAge to avoid losing the session when the browser is closed.
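For context, the interval is set per session through the standard servlet API; a minimal sketch, assuming a javax.servlet container (not runnable standalone):

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionTimeoutHelper {
    // Make the current session effectively non-expiring.
    public static void disableSessionTimeout(HttpServletRequest request) {
        HttpSession session = request.getSession();
        // A negative interval means the session never times out.
        session.setMaxInactiveInterval(-1);
    }
}
```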
I've just tested on my GAE webapp and the default timeout is getMaxInactiveInterval() = 86400 s = 24 hours = 1 day.