I have a Java client that connects to MQ with 10 connections. These remain open for the duration that the Java client runs. For each thread we create a message, create a session, send the message and close the session. We are using the Spring CachingConnectionFactory with a sessionCacheSize of 100. Our MQ engineering team has told us that our queue manager has a maximum of 500 connections and that we are exceeding it. The qm.ini file has:
maxChannels=500
maxActiveChannels=256
maxHandles=256
What I have observed in MQ Explorer is that the open output count on the queue remains static at 10; however, if we load balance across 2 queues it is 10 on each, even though we still only have 10 connections. So what I'd like to know is: what do JMS connections and sessions equate to in MQ terminology?
I did think that a connection equates to an active channel and a session to a handle, so it was possibly the handles we were exceeding, as the number of sessions we open (and close) runs into hundreds or thousands, whereas we only have 10 connections. Against this, though, the snippet below from IBM's Technote "Explanation of connection pool and session pool settings for JMS connection factories" suggests that MaxChannels should be greater than the maximum number of sessions. However, we never know that number, as it depends on load (unless it should instead be greater than the sessionCacheSize?).
Each session represents a TCP/IP connection to the queue manager.
With the settings mentioned here, there can be a maximum of 100 TCP/IP
connections. If you are using WebSphere MQ, it is important to tune
the queue manager's MaxChannels setting, located in the qm.ini file,
to a value greater than the sum of the maximum possible number of
sessions from each JMS connection factory that connects to the queue
manager.
Any assistance on how best to configure MQ would be appreciated.
Assuming that your maximum number of conversations is set to 1 on the MQ Channel (the default is 10 in MQ v7 and v7.5) then a JMS Connection will result in a TCP connection (MQ channel instance) and a JMS Session will result in another TCP connection (MQ channel instance).
From your update it sounds like you have 10 JMS Connections configured and the Spring sessionCacheSize set to 100, so 10 x 100 means up to 1000 potential MQ channel instances being created. The open output count shows how many JMS Sessions are actively attempting to send a message, not necessarily how many have been cached.
The conversation sharing on the MQ channel might help you here as this defines how many logical connections can be shared over one TCP connection (MQ channel instance). So the default of 10 conversations means you can have 10 JMS Sessions created that operate over just one TCP connection.
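To make the arithmetic concrete, here is a small sketch (pure Java; the numbers are the hypothetical ones from the question) estimating channel instances from the number of JMS Connections, the number of cached Sessions per Connection, and the channel's SHARECNV value. It also counts the Connection's own conversation, which is why the worst case comes out slightly above the 1000 mentioned above:

```java
public class ChannelInstanceEstimate {

    /**
     * Rough worst-case estimate of MQ channel instances (TCP connections).
     * Each JMS Connection and each JMS Session is one conversation;
     * SHARECNV conversations can share a single channel instance.
     */
    static int channelInstances(int jmsConnections, int sessionsPerConnection, int shareCnv) {
        int conversations = jmsConnections * (1 + sessionsPerConnection);
        // Round up: a partially filled channel instance still counts.
        return (conversations + shareCnv - 1) / shareCnv;
    }

    public static void main(String[] args) {
        // Numbers from the question: 10 connections, sessionCacheSize of 100.
        System.out.println(channelInstances(10, 100, 1));  // SHARECNV(1)
        System.out.println(channelInstances(10, 100, 10)); // SHARECNV(10)
    }
}
```

Whether the session cache is ever fully populated depends on load, so treat this as a ceiling rather than a prediction.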
Related
I developed a Spring Boot application comprising a JMS message listener that listens on a JMS queue. Before the Spring Boot application starts, the connection usage on the IBM MQ server is 24. After starting the Spring Boot application, the connection count increases to 26, i.e. 2 connections are created. But I was expecting only one connection to be created in this case. Please see below the connection details:
DEV.APP.SVRCONN,,127.0.0.1,,,,,,,NONE,IBM MQ Channel,jmslistener-1.0-SNAPSHOT.jar
DEV.APP.SVRCONN,,127.0.0.1,,,,,REQ_QUEUE_A,QUEUE,ACTIVE,IBM MQ Channel,jmslistener-1.0-SNAPSHOT.jar
It seems the first connection is created for connecting to the MQ channel. I wasn't sure whether this is the expected behaviour. Can anyone help me understand the connection creation and usage pattern in IBM MQ?
Each JMS "Connection" and each JMS "Session" correspond to a separate MQ Connection. So a simple JMS listener (usually 1 connection+1 session) is likely to result in 2 MQ connections as you've seen.
We are trying to connect to IBMMQ using CCDT file and JMS configuration.
We are able to connect to it but we have an issue here:
Since we are using Spring to set up the connection factory with the CCDT file, this is initialised once at application startup, but unfortunately it picks only one queue manager at a time, i.e. it sends all the messages to the same queue manager and does not load balance.
Though I observed that if I manually set the CCDT file before every request, it is able to load balance across the queue managers. It looks to me like the queue manager is decided whenever I set the URL to the CCDT file, which seems like wrong practice. My expectation was to initialise the connection factory with the CCDT file once and have it load balance on its own.
Can you help me with this?
This is the expected behavior. MQ does not load balance clients, it connection balances them. The connection is the single most time consuming API call and in the case of a mutually authenticated TLS connection can take seconds to complete. Therefore a good application design will attempt to connect once, then maintain that connection for the duration of the session. The JMS architecture and Spring framework both expect this pattern.
The way that MQ provides load distribution (again, not true balancing, but rather round-robin distribution) is that the client connects a hop away from a clustered destination queue. A message addressed to that clustered destination queue will round-robin among all the instances of that queue.
If it is a request-reply application, the thing listening for requests on these clustered queue instances addresses the reply message using the Reply-To QMgr and Reply-To Queue name from the requesting message. In this scenario the requestors can fail over QMgr to QMgr if they lose their connection. The systems of record listening on the clustered queues generally do not fail over across queue managers in order to ensure all queue instances are served and because of transaction recovery.
Short answer is that CCDT and MQ client in general are not where MQ load distribution occurs. The client should make a connection and hold it as long as possible. Client reconnect and CCDT are for connection balancing only.
Load distribution is a feature of the MQ cluster. It requires multiple instances of a clustered queue and these are normally a network hop away from the client app that puts the message.
I'm using the ActiveMQ client library to connect my server application to ActiveMQ. Several different consumers and producers run in individual threads. How should the relationship between ActiveMQConnectionFactory, ActiveMQConnection and ActiveMQSession be?
one connection factory per JVM
one connection to the broker per JVM or n connections, one per consumer
n sessions, one per consumer (the Javadoc seems to strongly suggest this)
Have a look at How do I use JMS efficiently?.
You should also think about using connection pooling.
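As a rough structural sketch of the "one connection per JVM, one session per consumer" layout: the Connection/Session classes below are self-contained stand-ins for the javax.jms types (so the snippet runs without a broker), but the shape — one shared long-lived connection, with each consumer thread creating and closing its own session — is the pattern the FAQ recommends, because JMS Sessions are not thread-safe:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class OneConnectionManySessions {

    // Counters record what was created, in place of real broker resources.
    static final AtomicInteger connectionsOpened = new AtomicInteger();
    static final AtomicInteger sessionsOpened = new AtomicInteger();

    // Stand-in for javax.jms.Session: single-threaded, short-lived.
    static class Session implements AutoCloseable {
        Session() { sessionsOpened.incrementAndGet(); }
        void consume() { /* receive and process a message here */ }
        @Override public void close() { }
    }

    // Stand-in for javax.jms.Connection: thread-safe, long-lived.
    static class Connection implements AutoCloseable {
        Connection() { connectionsOpened.incrementAndGet(); }
        Session createSession() { return new Session(); }
        @Override public void close() { }
    }

    public static void main(String[] args) throws InterruptedException {
        int consumers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        try (Connection connection = new Connection()) { // one connection per JVM
            for (int i = 0; i < consumers; i++) {
                pool.submit(() -> {
                    // Each consumer thread owns its own session:
                    try (Session session = connection.createSession()) {
                        session.consume();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
        System.out.println(connectionsOpened.get() + " connection, "
                + sessionsOpened.get() + " sessions");
    }
}
```

With a pooled connection factory the per-consumer sessions would come from the pool instead of being created fresh each time, but the thread-confinement of sessions stays the same.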
As we all know, a WebSocket maintains an open connection between server and client to achieve server push, unlike server pull, where the connection does not remain open. My question is: how many TCP connections can be open at one time? And what is the limitation of server push compared to server pull in this regard?
The default maximum number of WebSocket connections allowed in Firefox is 200.
Source: https://developer.mozilla.org/en/docs/WebSockets#Gecko_7.0
The comment given at line #48 of http://src.chromium.org/viewvc/chrome/trunk/src/net/socket/client_socket_pool_manager.cc?r1=128044&r2=128043&pathrev=128044 seems to indicate dynamic limits, with a minimum of 6 for the normal socket pool and 30 for the WebSocket pool.
More Info: https://groups.google.com/a/chromium.org/forum/#!topic/chromium-reviews/4sHNK-Eotn0
I am using Datastax Java Driver.
There is a tutorial to use the same.
What I do not understand is how would one close the connection to cassandra?
There is no close method available, and I assume we do not want to shut down the Session as it is expected to be one per application.
tl;dr Calling shutdown on Session is the correct way to close connections.
You should be safe to keep a Session object at hand and shut it down when you're finished with Cassandra - it can be long-lived. You can get individual connections in the form of Session objects as you need them and shut them down when done, but ideally you should create only one Session object per application. A Session is a fairly heavyweight object that keeps pools of connections to the nodes in the cluster, so creating multiple of them is inefficient (and unnecessary) - advice given by Sylvain Lebresne on the mailing list. If you forget to shut down the session(s), they will all be closed when you call shutdown on your Cluster instance. A really simple example below:
// Connect once at startup; Cluster and Session are long-lived.
Cluster cluster = Cluster.builder().addContactPoints(host).withPort(port).build();
Session session = cluster.connect(keyspace);
// Do something with session...
// Release resources when the application shuts down.
session.shutdown();
cluster.shutdown();
See here - http://www.datastax.com/drivers....
The driver uses connections in an asynchronous manner. Meaning that
multiple requests can be submitted on the same connection at the same
time. This means that the driver only needs to maintain a relatively
small number of connections to each Cassandra host. These options
allow the driver to control how many connections are kept exactly.
For each host, the driver keeps a core pool of connections open at all
times, determined by calling . If the use of those connections reaches
a configurable threshold, more connections are created, up to the
configurable maximum number of connections. When the pool exceeds the
maximum number of connections, connections in excess are reclaimed if
the use of opened connections drops below the configured threshold.
Each of these parameters can be separately set for LOCAL and REMOTE
hosts (HostDistance). For IGNORED hosts, the default for all those
settings is 0 and cannot be changed.
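As an illustrative model only (not the driver's internals), the resize rule the quoted documentation describes - grow toward the maximum when per-connection usage crosses a threshold, shrink back toward the core size when it drops - can be sketched like this. The core/max/threshold numbers are made up:

```java
public class PoolSizeModel {

    final int core;      // connections kept open at all times
    final int max;       // hard ceiling on connections
    final int threshold; // in-flight requests per connection before growing
    int open;            // currently open connections

    PoolSizeModel(int core, int max, int threshold) {
        this.core = core;
        this.max = max;
        this.threshold = threshold;
        this.open = core;
    }

    /** Adjust pool size for the current number of in-flight requests. */
    void onLoad(int inFlightRequests) {
        while (open < max && inFlightRequests > open * threshold) {
            open++; // usage above threshold: add a connection (up to max)
        }
        while (open > core && inFlightRequests <= (open - 1) * threshold) {
            open--; // usage dropped: reclaim excess connections (down to core)
        }
    }

    public static void main(String[] args) {
        PoolSizeModel pool = new PoolSizeModel(2, 8, 100); // hypothetical settings
        pool.onLoad(250);  // above 2 * 100 -> pool grows
        System.out.println(pool.open);
        pool.onLoad(900);  // heavy load -> grows to the ceiling
        System.out.println(pool.open);
        pool.onLoad(50);   // load drops -> shrinks back to the core size
        System.out.println(pool.open);
    }
}
```

In the real driver these knobs are set per HostDistance (LOCAL/REMOTE) on the pooling options rather than being a single global triple.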