I use Lettuce to connect to Redis in a Spring Boot environment. I've set up a LettuceConnectionFactory with a GenericObjectPoolConfig config object. In the GenericObjectPoolConfig I've set the maximum total connections, the max idle connections, the min idle connections and the maximum wait for connections.
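For reference, my setup looks roughly like the following (the host, port and pool limits are placeholder values, not my real configuration):

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;

@Configuration
public class RedisPoolConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Pool limits are placeholders; only the fact that they are set matters here.
        GenericObjectPoolConfig<?> poolConfig = new GenericObjectPoolConfig<>();
        poolConfig.setMaxTotal(20);          // maximum total connections
        poolConfig.setMaxIdle(10);           // maximum idle connections
        poolConfig.setMinIdle(2);            // minimum idle connections
        poolConfig.setMaxWaitMillis(2000);   // maximum wait for a connection

        LettucePoolingClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .poolConfig(poolConfig)
                .build();

        // Placeholder standalone host/port.
        return new LettuceConnectionFactory(new RedisStandaloneConfiguration("localhost", 6379), clientConfig);
    }
}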
I would like to monitor the connections to Redis with Micrometer. Is there any way to get the active and idle connection counts for Redis?
I've found a similar question: Spring data Redis. How to know number of Active, Idle Connection?. But it uses Jedis as the Redis client.
Related
I have a distributed system that connects to around 150 different schemas/databases at the same time. For connecting to each schema/database, the application spins up a separate connection pool.
The application has varying usage: sometimes it needs active connections to only 10 schemas, sometimes to all of them.
To manage resources better, I want each Hikari connection pool to hold 0 connections by default, grow up to a specified threshold as demand increases, and then shrink back to 0.
My configuration is as follows:
hikariConfig.setMinimumIdle(0);        // allow the pool to shrink to zero idle connections
hikariConfig.setMaximumPoolSize(10);   // cap each pool at 10 connections
hikariConfig.setIdleTimeout(180000);   // retire idle connections after 3 minutes
However, I see at least 1 active connection per pool in my MySQL database when I run:
> SHOW PROCESSLIST;
How do I ensure that the connection pool shrinks back to 0 when no connections are needed?
I am creating an ETL application where I actually need a large number of concurrent connections running long, slow queries. It is not uncommon for the number of concurrent connections to be as large as 100, depending on the machine running the application.
Let's assume it takes about 2 s to establish a connection with the database. If I don't use pooling and parallelize connection retrieval across 100 threads, all connections are still established in about 2 s. However, with HikariCP I've noticed that establishing 100 connections at application start, when there is a spike in connection requests, takes about 200 s and often results in timeouts.
This leads me to conclude that obtaining a new connection is a blocking call. It also seems that the HikariCP pool is lazily initialized, and I assume that once it has established all 100 connections it will try to keep the pool size at 100.
Is there a way to make HikariCP establish connections more concurrently? Could I at least force it to initialize the pool concurrently (establish all 100 connections up front)?
One could say that the time to initially establish all connections is irrelevant over the lifetime of the application, but I also want a connection timeout of 30 seconds, which will always result in timeout exceptions during the initial spike in demand.
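For context, the pool is configured roughly along these lines (the JDBC URL and credentials are placeholders; the pool size and 30-second timeout are the values mentioned above):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class EtlPoolConfig {

    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://db-host:3306/etl");   // placeholder JDBC URL
        config.setUsername("etl_user");                       // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(100);       // up to ~100 concurrent long-running queries
        config.setConnectionTimeout(30_000);  // the 30-second timeout mentioned above
        return new HikariDataSource(config);
    }
}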
Does it make sense to perform producer/consumer connection pooling of Kafka clients?
Does Kafka internally maintain a list of connection objects initialized and ready to use?
We'd like to minimize connection-creation time so that there is no additional overhead when sending/receiving messages.
Currently we're using the Apache commons-pool library's GenericObjectPool to keep connections around.
Any help will be appreciated.
Kafka clients maintain their own connections to the clusters.
Both the Producer and Consumer keep connections alive to the brokers they are interacting with. If they stop interacting, the connection is closed after connections.max.idle.ms. This setting also exists on the broker, so you may want to check with your admin whether they changed this value.
So in most cases, once started, Kafka clients don't create many new connections; they mostly reuse the ones created at startup.
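As a rough sketch of that reuse pattern (the broker address, topic name and serializers below are placeholders, not anything from your setup):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerReuseExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Optional: keep idle broker connections open longer than the 9-minute default.
        props.put("connections.max.idle.ms", "540000");

        // One long-lived, thread-safe producer for the whole application;
        // it manages its own broker connections internally, so no external pool is needed.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>("demo-topic", "key-" + i, "value-" + i));
            }
        }
    }
}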
I have a Java client that connects to MQ with 10 connections. These remain open for the duration that the Java client runs. For each thread we create a message, create a session, send the message and close the session. We are using the Spring CachingConnectionFactory and have a sessionCacheSize of 100. We have been told by our MQ engineering team that our queue manager has a maximum of 500 connections and that we are exceeding this. The qm.ini file has:
maxChannels=500
maxActiveChannels=256
maxHandles=256
What I have observed in MQ Explorer is that the open output count on the queue remains static at 10; however, if we load balance across 2 queues it's 10 on each, even though we still only have 10 connections. So what I'd like to know is: what do JMS connections and sessions equate to in MQ terminology?
I did think that a connection equates to an active channel and a session to a handle, so it was possibly the handles we were exceeding, as the number of sessions we open (and close) runs into hundreds or thousands, whereas we only have 10 connections. Going against this, though, the snippet below from IBM's Technote "Explanation of connection pool and session pool settings for JMS connection factories" suggests that the max channels should be greater than the max sessions; however, we never know that number in advance as it depends on the load (unless it should be greater than the sessionCacheSize?).
Each session represents a TCP/IP connection to the queue manager. With the settings mentioned here, there can be a maximum of 100 TCP/IP connections. If you are using WebSphere MQ, it is important to tune the queue manager's MaxChannels setting, located in the qm.ini file, to a value greater than the sum of the maximum possible number of sessions from each JMS connection factory that connects to the queue manager.
Any assistance on how best to configure MQ would be appreciated.
Assuming that your maximum number of conversations is set to 1 on the MQ channel (the default is 10 in MQ v7 and v7.5), a JMS Connection will result in one TCP connection (MQ channel instance) and each JMS Session will result in another TCP connection (MQ channel instance).
From your update it sounds like you have 10 JMS Connections configured and the sessionCacheSize in Spring set to 100, so 10 x 100 means up to 1000 potential MQ channel instances being created. The open output count will show how many 'active' JMS Sessions are attempting to send a message, not necessarily how many have been cached.
The conversation sharing on the MQ channel might help you here as this defines how many logical connections can be shared over one TCP connection (MQ channel instance). So the default of 10 conversations means you can have 10 JMS Sessions created that operate over just one TCP connection.
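Purely to illustrate that arithmetic on the Spring side (the wrapped factory is whatever vendor ConnectionFactory you already build, and the cache size below is a placeholder, not a recommendation):

import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

public final class MqSessionCacheConfig {

    public static CachingConnectionFactory cachingConnectionFactory(ConnectionFactory mqConnectionFactory) {
        CachingConnectionFactory cachingFactory = new CachingConnectionFactory(mqConnectionFactory);
        // With 10 JMS Connections and sessionCacheSize=100, up to 10 x 100 = 1000 sessions can be
        // cached, and each session can translate into an MQ channel instance (shared SHARECNV-ways).
        // Keeping connections x sessionCacheSize within what MaxChannels allows avoids exhausting
        // the 500-channel limit mentioned in the question.
        cachingFactory.setSessionCacheSize(25);   // placeholder value
        return cachingFactory;
    }
}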
I am using Datastax Java Driver.
There is a tutorial on how to use it.
What I do not understand is how one would close the connection to Cassandra.
There is no close method available, and I assume we do not want to shut down the Session as it is expected to be one per application.
Regards
Gaurav
tl;dr Calling shutdown on Session is the correct way to close connections.
You should be safe to keep a Session object at hand and shut it down when you're finished with Cassandra - it can be long-lived. You can get individual connections in the form of Session objects as you need them and shut them down when done, but ideally you should only create one Session object per application. A Session is a fairly heavyweight object that keeps pools of connections to the nodes in the cluster, so creating several of them is inefficient (and unnecessary) - taken almost verbatim from advice given by Sylvain Lebresne on the mailing list. If you forget to shut down your session(s), they will all be closed when you call shutdown on your Cluster instance. A really simple example:
Cluster cluster = Cluster.builder().addContactPoints(host).withPort(port).build();
Session session = cluster.connect(keyspace);
// Do something with session...
session.shutdown();   // closes this session's connection pools
cluster.shutdown();   // closes any remaining sessions and releases all resources
See here - http://www.datastax.com/drivers....
The driver uses connections in an asynchronous manner, meaning that multiple requests can be submitted on the same connection at the same time. This means that the driver only needs to maintain a relatively small number of connections to each Cassandra host. These options allow the driver to control how many connections are kept exactly.
For each host, the driver keeps a core pool of connections open at all times, determined by calling [...]. If the use of those connections reaches a configurable threshold, more connections are created up to the configurable maximum number of connections. When the pool exceeds the maximum number of connections, the connections in excess are reclaimed if the use of opened connections drops below the configured threshold.
Each of these parameters can be set separately for LOCAL and REMOTE hosts (HostDistance). For IGNORED hosts, the default for all those settings is 0 and cannot be changed.
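Building on the simple example above, the options described in that quote can be tuned through the Cluster configuration; the values here are purely illustrative (and note that newer driver versions replace shutdown() with close()):

Cluster cluster = Cluster.builder().addContactPoints(host).withPort(port).build();

// Tune the per-host pool after building the Cluster (illustrative values).
PoolingOptions poolingOptions = cluster.getConfiguration().getPoolingOptions();
poolingOptions.setCoreConnectionsPerHost(HostDistance.LOCAL, 2);   // connections kept open at all times
poolingOptions.setMaxConnectionsPerHost(HostDistance.LOCAL, 8);    // upper bound when usage grows

Session session = cluster.connect(keyspace);
// Do something with session...
session.shutdown();
cluster.shutdown();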