Tomcat DataSource - Max Active Connections - java

I set max active connections to 1 using the code below:

    ConnectionPool initializePool(DataSource dataSource) {
        if (!(org.apache.tomcat.jdbc.pool.DataSource.class.isInstance(dataSource))) {
            return null;
        }
        org.apache.tomcat.jdbc.pool.DataSource tomcatDataSource = (org.apache.tomcat.jdbc.pool.DataSource) dataSource;
        final String poolName = tomcatDataSource.getName();
        try {
            ConnectionPool pool = tomcatDataSource.createPool();
            pool.getPoolProperties().setMaxActive(1);
            pool.getPoolProperties().setInitialSize(1);
            pool.getPoolProperties().setTestOnBorrow(true);
            return pool;
        } catch (SQLException e) {
            logger.info(String.format(" !--! creation of pool failed for %s", poolName), e);
        }
        return null;
    }
Now, using threads, I opened a number of concurrent connections to the DB. I also printed the current number of active connections using the code below:

    System.out.println("Current Active Connections = " + ((org.apache.tomcat.jdbc.pool.DataSource) datasource).getActive());
    System.out.println("Max Active Connections = " + ((org.apache.tomcat.jdbc.pool.DataSource) datasource).getMaxActive());
I see results similar to the output below. The number of active connections shown is more than 1, yet I want to restrict the maximum to 1. Are there any other parameters that I need to set?
    Current Active Connections = 9
    Max Active Connections = 1
EDIT: However, when I try 15 or 20 as the max active value, the pool is indeed limited to 15 or 20 respectively.

Try setting maxIdle and minIdle as well:

    ConnectionPool pool = tomcatDataSource.createPool();
    pool.getPoolProperties().setMaxActive(1);
    pool.getPoolProperties().setInitialSize(1);
    pool.getPoolProperties().setMaxIdle(1);
    pool.getPoolProperties().setMinIdle(1);
    pool.getPoolProperties().setTestOnBorrow(true);
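A related sketch, in case it helps: the same settings can be applied up front on a PoolProperties object before the pool ever exists, which sidesteps any question of whether a property set after createPool() takes effect. This is only an illustration; the JDBC URL and driver class name are placeholders, not from the question.

```java
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolSketch {
    public static void main(String[] args) {
        PoolProperties props = new PoolProperties();
        props.setUrl("jdbc:h2:mem:test");          // placeholder JDBC URL
        props.setDriverClassName("org.h2.Driver"); // placeholder driver
        props.setMaxActive(1);
        props.setInitialSize(1);
        props.setMaxIdle(1);
        props.setMinIdle(1);
        props.setTestOnBorrow(true);

        // Constructing the DataSource does not open any connections yet.
        DataSource ds = new DataSource(props);
        System.out.println(ds.getPoolProperties().getMaxActive());
    }
}
```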

Related

JedisCluster configurations and how it maintains the pool of connections

I have recently started using JedisCluster for my application. There is little to no documentation or example code for it. I tested a use case and the results are not what I expected.
    public class test {
        private static JedisCluster setConnection(HashSet<HostAndPort> IP) {
            JedisCluster jediscluster = new JedisCluster(IP, 30000, 3,
                new GenericObjectPoolConfig() {{
                    setMaxTotal(500);
                    setMinIdle(1);
                    setMaxIdle(500);
                    setBlockWhenExhausted(true);
                    setMaxWaitMillis(30000);
                }});
            return jediscluster;
        }

        // Sum the idle connections across all cluster node pools.
        public static int getIdleconn(Map<String, JedisPool> nodes) {
            int i = 0;
            for (String k : nodes.keySet()) {
                i += nodes.get(k).getNumIdle();
            }
            return i;
        }

        public static void main(String[] args) {
            HashSet<HostAndPort> IP = new HashSet<HostAndPort>() {{
                add(new HostAndPort("host1", port1));
                add(new HostAndPort("host2", port2));
            }};
            JedisCluster cluster = setConnection(IP);
            System.out.println(getIdleconn(cluster.getClusterNodes()));
            cluster.set("Dummy", "0");
            cluster.set("Dummy1", "0");
            cluster.set("Dummy3", "0");
            System.out.println(getIdleconn(cluster.getClusterNodes()));
            try {
                Thread.sleep(60000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(getIdleconn(cluster.getClusterNodes()));
        }
    }
The output for this snippet is:
0
3
3
Questions:
1. I have set the timeout to 30000 in JedisCluster(IP, 30000, 3, new GenericObjectPoolConfig()). I believe this is the connection timeout, which would mean idle connections are closed after 30 seconds. That doesn't seem to be happening: after sleeping for 60 seconds, the number of idle connections is still 3. What am I doing or understanding wrong here? I want the pool to close a connection that has not been used for more than 30 seconds.
2. setMinIdle(1): does this mean that, regardless of the connection timeout, the pool will always maintain one connection?
3. I prefer availability over throughput for my app. What should the value of setMaxWaitMillis be if the connection timeout is 30 secs?
4. Though rare, the app fails with redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster. I think this is connected to question 1. How can I prevent it?
1. The 30000 (30 seconds) here is the (socket) timeout: the timeout for a single socket (read) operation. It is not related to closing idle connections. Closing idle connections is controlled by GenericObjectPoolConfig, so check the parameters there.
2. Yes (mostly).
3. setMaxWaitMillis is the timeout for getting a connection object from the connection object pool. It is not related to the 30 secs, and does not really buy you anything in terms of availability.
4. Keep your cluster nodes available. There have also been changes in Jedis related to this; you can try a recent version (4.x, even better 4.2.x).
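To make point 1 concrete: in commons-pool2, idle connections are only closed by the evictor thread, and with a plain GenericObjectPoolConfig the evictor is disabled by default (timeBetweenEvictionRunsMillis is -1), which matches the observed behavior. A sketch of a config that should close connections idle for more than 30 seconds; the specific values are illustrative, not tested against a live cluster:

```java
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

public class EvictionSketch {
    public static void main(String[] args) {
        GenericObjectPoolConfig config = new GenericObjectPoolConfig();
        config.setMaxTotal(500);
        config.setMinIdle(1);   // the evictor will keep at least this many idle connections
        config.setMaxIdle(500);
        // Run the evictor every 10 seconds and close connections idle for > 30 seconds.
        config.setTimeBetweenEvictionRunsMillis(10_000);
        config.setMinEvictableIdleTimeMillis(30_000);
        config.setTestWhileIdle(true);
        System.out.println(config.getMinEvictableIdleTimeMillis());
    }
}
```

Note that setMinIdle(1) interacts with eviction: the evictor will not shrink the pool below the minimum, which answers question 2.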

Neo4j Causal Cluster Bolt Driver Performance Too Low

We are evaluating Neo4J Enterprise Edition Causal Cluster using Bolt Driver for Java.
We have 3 node Core Cluster.
The performance we saw is too low.
We are creating just 1 node with 2 properties, 1,000,000 times. When tracked, we are getting 300 TPS (i.e. only 300 nodes are created per second).
OS is Linux, RHEL.
Each core server is running with 32 GB.
We were estimating close to 50,000 TPS for the creation of a single node, but we are only getting 300 TPS, which is far too low.
I am sure we are missing something big.
This function is called 1,000,000 times by a thread pool of 64 threads.
Code Snippet:
    @Override
    public void createNode() throws InterruptedException {
        try (Session session = RTNeo4j.getInstance().getWriteDriver().session(AccessMode.WRITE)) {
            try (final Transaction tx = session.beginTransaction()) {
                try {
                    tx.run("CREATE (a:Person {name: {name}, id: {id}})",
                            parameters("name", "king", "id", System.currentTimeMillis()));
                    tx.success();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
Appreciate quick help for evaluation.
You do not have to create a new session inside the method on every call. Move the session creation outside the method:

    Session session = RTNeo4j.getInstance().getWriteDriver().session(AccessMode.WRITE)
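Beyond reusing the session, per-node transactions are usually the bigger cost: batching many creates into a single transaction with UNWIND typically raises throughput dramatically. A hedged sketch follows; the batching helper is plain Java and runnable, while the driver call is only shown in a comment, and the batch size is illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchSketch {
    // One statement that creates a node per entry in the "rows" parameter.
    static final String CYPHER =
        "UNWIND {rows} AS row CREATE (a:Person {name: row.name, id: row.id})";

    // Split the full parameter list into batches of the given size.
    static List<List<Map<String, Object>>> toBatches(List<Map<String, Object>> rows, int size) {
        List<List<Map<String, Object>>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += size) {
            batches.add(rows.subList(i, Math.min(i + size, rows.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            Map<String, Object> row = new HashMap<>();
            row.put("name", "king");
            row.put("id", i);
            rows.add(row);
        }
        // 10,000 creates become 10 transactions instead of 10,000.
        List<List<Map<String, Object>>> batches = toBatches(rows, 1000);
        System.out.println(batches.size());
        // For each batch (driver call sketched, not verified here):
        // try (Transaction tx = session.beginTransaction()) {
        //     tx.run(CYPHER, Collections.singletonMap("rows", batch));
        //     tx.success();
        // }
    }
}
```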

How to set time limit for creating HBase connection

I'm currently using HBase v0.98.6. I would like to check the current connection status from an external Java program. Right now, I'm doing something like this to check:
    connectionSuccess = true;
    try {
        HConnection hConnection = createConnection(config);
    } catch (Exception ex) {
        connectionSuccess = false;
    }
When the connection is working, this returns fairly quickly. The problem is when the connection is not working, and it takes 20 minutes for it to finally return connectionSuccess=false. Is there a way to reduce this time limit, as I'm just interested in getting the connection status at the current time?
The reason it takes so long is that, by default, a failed connection is retried multiple times (I think 6, but don't quote me), and each connection attempt takes a while. Try a combination of these settings to limit the time per connection attempt before timeout and the number of permitted retries:
    hbase.client.retries.number = 3
    hbase.client.pause = 1000
    zookeeper.recovery.retry = 1 (i.e. no retry)
Credit to Lars from http://hadoop-hbase.blogspot.com/2012/09/hbase-client-timeouts.html
You can set the retries value to 1 to get the status of the connection at the current time.
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.client.retries.number", 1);
    conf.setInt("zookeeper.recovery.retry", 0);
Or you can use the built-in HBaseAdmin method below, which does the same thing.
    connectionSuccess = true;
    try {
        HBaseAdmin.checkHBaseAvailable(config);
    } catch (MasterNotRunningException e) {
        connectionSuccess = false;
    }
My org.apache.hadoop.conf.Configuration object contains the following key-value pairs:
    Configuration conf = HBaseConfiguration.create();
    // configuring timeout and retry parameters
    conf.set("hbase.rpc.timeout", "10000");
    conf.set("hbase.client.scanner.timeout.period", "10000");
    conf.set("hbase.cells.scanned.per.heartbeat.check", "10000");
    conf.set("zookeeper.session.timeout", "10000");
    conf.set("phoenix.query.timeoutMs", "10000");
    conf.set("phoenix.query.keepAliveMs", "10000");
    conf.set("hbase.client.retries.number", "3");
    conf.set("hbase.client.pause", "1000");
    conf.set("zookeeper.recovery.retry", "1");

Random occurrences of java.net.ConnectException

I'm experiencing java.net.ConnectException in random ways.
My servlet runs in Tomcat 6.0 (JDK 1.6).
The servlet periodically fetches data from 4-5 third-party web servers.
The servlet uses a ScheduledExecutorService to fetch the data.
Run locally, all is fine and dandy. Run on my prod server, I see semi-random failures to fetch data from 1 of the third parties (Canadian weather data).
These are the URLs that are failing (plain RSS feeds):
http://weather.gc.ca/rss/city/pe-1_e.xml
http://weather.gc.ca/rss/city/pe-2_e.xml
http://weather.gc.ca/rss/city/pe-3_e.xml
http://weather.gc.ca/rss/city/pe-4_e.xml
http://weather.gc.ca/rss/city/pe-5_e.xml
http://weather.gc.ca/rss/city/pe-6_e.xml
http://meteo.gc.ca/rss/city/pe-1_f.xml
http://meteo.gc.ca/rss/city/pe-2_f.xml
http://meteo.gc.ca/rss/city/pe-3_f.xml
http://meteo.gc.ca/rss/city/pe-4_f.xml
http://meteo.gc.ca/rss/city/pe-5_f.xml
http://meteo.gc.ca/rss/city/pe-6_f.xml
Strange: each cycle, when I periodically fetch this data, the success/fail is all over the map: some succeed, some fail, but it never seems to be the same twice. So, I'm not completely blocked, just randomly blocked.
I slowed down my fetches, by introducing a 61s pause between each one. That had no effect.
The guts of the code that does the actual fetch:
    private static final int TIMEOUT = 60*1000; // msecs

    public String fetch(String aURL, String aEncoding /*UTF-8*/) {
        String result = "";
        long start = System.currentTimeMillis();
        Scanner scanner = null;
        URLConnection connection = null;
        try {
            URL url = new URL(aURL);
            connection = url.openConnection(); // this doesn't talk to the network yet
            connection.setConnectTimeout(TIMEOUT);
            connection.setReadTimeout(TIMEOUT);
            connection.connect(); // actually connects; this shouldn't be needed here
            scanner = new Scanner(connection.getInputStream(), aEncoding);
            scanner.useDelimiter(END_OF_INPUT);
            result = scanner.next();
        }
        catch (IOException ex) {
            long end = System.currentTimeMillis();
            long time = end - start;
            fLogger.severe(
                "Problem connecting to " + aURL + " Encoding:" + aEncoding +
                ". Exception: " + ex.getMessage() + " " + ex.toString() + " Cause:" + ex.getCause() +
                " Connection Timeout: " + connection.getConnectTimeout() + "msecs. Read timeout:" +
                connection.getReadTimeout() + "msecs."
                + " Time taken to fail: " + time + " msecs."
            );
        }
        finally {
            if (scanner != null) scanner.close();
        }
        return result;
    }
Example log entry showing a failure:
    SEVERE: Problem connecting to http://weather.gc.ca/rss/city/pe-5_e.xml Encoding:UTF-8.
    Exception: Connection timed out java.net.ConnectException: Connection timed out
    Cause:null
    Connection Timeout: 60000msecs.
    Read timeout:60000msecs.
    Time taken to fail: 15028 msecs.
Note that the time to fail is always 15s + a tiny amount.
Also note that it fails to reach the configured 60s timeout for the connection.
The host-server admins (Environment Canada) state that they don't have any kind of a blacklist for the IP address of misbehaving clients.
Also important: the code had been running for several months without this happening.
Someone suggested that instead I should use curl, a bash script, and cron. I implemented that, and it works fine.
I'm not able to solve this problem using Java.
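For completeness, since the failures are intermittent rather than permanent, one pragmatic mitigation on the Java side is to retry a failed fetch a few times with a pause. This is only a sketch: the flaky operation is simulated rather than a real HTTP fetch, and it is written with lambdas, so on JDK 1.6 it would need an anonymous class instead.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetrySketch {
    // Retry a network operation up to maxAttempts times, pausing between attempts.
    static <T> T withRetry(Callable<T> op, int maxAttempts, long sleepMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (IOException ex) {
                last = ex;
                Thread.sleep(sleepMillis);
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky fetch: fails twice, then succeeds.
        final int[] calls = {0};
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new IOException("Connection timed out");
            return "<rss>...</rss>";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```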

JDBC Connection pool monitoring GlassFish

I am trying to find a setting by which connection pool monitoring information would be written to server.log whenever an error like "error allocating a connection" or "connection closed" occurs.
I found some blog entries that talk about this, but they approach it from a GUI perspective. I want a setting on the connection pool itself so that the connection pool monitoring information is periodically written to the logs.
Does anyone know of such a setting?
On Sun App Server 8.x it used to be perf-monitor.
I don't know if this might help you, but you can interrogate the connection pool monitoring information via JMX.
This code will print the max-pool-size and the number of used connections for all connection pools in the app server (there is plenty more you can pull from the MBeans):
    MBeanServerConnection conn = getMbeanServerConnection();
    // search the JMX registry for the specified beans
    Set<ObjectInstance> connectorPoolSet = conn.queryMBeans(new ObjectName("*:type=jdbc-connection-pool,*"), null);
    Map<String, ObjectName> configMap = new HashMap<String, ObjectName>();
    Map<String, ObjectName> monitorMap = new HashMap<String, ObjectName>();
    // get a map of each config & monitor object found by the search
    for (ObjectInstance oi : connectorPoolSet) {
        String name = oi.getObjectName().getKeyProperty("name");
        // if the category of the mbean is config, put it in the config map;
        // else if it is monitor, place it in the monitor map
        String category = oi.getObjectName().getKeyProperty("category");
        if ("config".equalsIgnoreCase(category)) {
            configMap.put(name, oi.getObjectName());
        } else if ("monitor".equalsIgnoreCase(category)) {
            monitorMap.put(name, oi.getObjectName());
        }
    }
    // iterate the pairs of config & monitor mbeans found
    for (String name : configMap.keySet()) {
        ObjectName configObjectName = configMap.get(name);
        ObjectName monitorObjectName = monitorMap.get(name);
        if (monitorObjectName == null) {
            // no monitor found - throw an exception or something
        }
        int maxPoolSizeVal = getAttributeValue(conn, configObjectName, "max-pool-size");
        int connectionsInUse = getAttributeValue(conn, monitorObjectName, "numconnused-current");
        System.out.println(name + " -> max-pool-size : " + maxPoolSizeVal);
        System.out.println(name + " -> connections in use : " + connectionsInUse);
    }
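The snippet above assumes two helpers that are not shown. A sketch of what they might look like follows; note that it uses the in-process platform MBean server and a standard JDK MBean as a stand-in, whereas the real code would open a remote JMX connection to the app server and query the jdbc-connection-pool MBeans.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class JmxSketch {
    // Local stand-in: the original would return a connection from a remote JMXConnector.
    static MBeanServerConnection getMbeanServerConnection() {
        return ManagementFactory.getPlatformMBeanServer();
    }

    // Fetch a single numeric attribute from an MBean.
    static int getAttributeValue(MBeanServerConnection conn, ObjectName on, String attribute)
            throws Exception {
        return ((Number) conn.getAttribute(on, attribute)).intValue();
    }

    public static void main(String[] args) throws Exception {
        MBeanServerConnection conn = getMbeanServerConnection();
        // Query a JDK MBean as a stand-in for the jdbc-connection-pool MBeans.
        ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
        int cpus = getAttributeValue(conn, os, "AvailableProcessors");
        System.out.println(cpus >= 1);
    }
}
```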
