Load balancing issue while connecting to IBM MQ using JMS + CCDT file - java

We are trying to connect to IBM MQ using a CCDT file and JMS configuration.
We are able to connect, but we have an issue:
Since we are using Spring to set up the connection factory with the CCDT file, the factory is initialized once at application startup. Unfortunately it picks only one queue manager at a time, i.e. it sends all the messages to the same queue manager and does not load balance.
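For context, here is roughly how the factory is set up once at startup (a minimal sketch only; the class name and CCDT path are illustrative, and it assumes the com.ibm.mq.jms client classes):

import java.net.URL;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CcdtFactoryConfig {
    // Built once by Spring at application startup.
    public MQConnectionFactory mqConnectionFactory() throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);              // client (network) connection
        cf.setCCDTURL(new URL("file:///var/mqm/ccdt/AMQCLCHL.TAB"));  // illustrative path
        // The queue manager is selected from the CCDT when a connection is created,
        // not on every send, so all messages flow to whichever QMgr was picked first.
        return cf;
    }
}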
However, I observed that if I manually set the CCDT file before every request, the requests are spread across the queue managers. It looks to me as if the queue manager is decided whenever I set the URL to the CCDT file, which seems like the wrong practice. My expectation was to initialize the connection factory with the CCDT file once and have this configuration load balance on its own.
Can you help me with this?

This is the expected behavior. MQ does not load balance clients; it connection-balances them. The connection is the single most time-consuming API call and, in the case of a mutually authenticated TLS connection, can take seconds to complete. Therefore a good application design will attempt to connect once, then maintain that connection for the duration of the session. Both the JMS architecture and the Spring framework expect this pattern.
The way that MQ provides load distribution (again, not true balancing, but rather round-robin distribution) is that the client connects a hop away from a clustered destination queue. A message addressed to that clustered destination queue will round-robin among all the instances of that queue.
If it is a request-reply application, the thing listening for requests on these clustered queue instances addresses the reply message using the Reply-To QMgr and Reply-To Queue name from the requesting message. In this scenario the requestors can fail over QMgr to QMgr if they lose their connection. The systems of record listening on the clustered queues generally do not fail over across queue managers in order to ensure all queue instances are served and because of transaction recovery.
Short answer is that CCDT and MQ client in general are not where MQ load distribution occurs. The client should make a connection and hold it as long as possible. Client reconnect and CCDT are for connection balancing only.
Load distribution is a feature of the MQ cluster. It requires multiple instances of a clustered queue and these are normally a network hop away from the client app that puts the message.

Related

CCDT file configuration for multi queue channels - IBM MQ

I am new to IBM MQ and I am doing research for a requirement. I have two MQ channels on the queue manager that I will be connecting to from a JMS client (a standalone Java application), and I need to configure them in a CCDT file.
When the primary MQ channel that I am connected to goes down, the client has to connect to the secondary standby MQ channel after waiting a certain amount of time while retrying the primary channel.
Can I configure that wait time in the CCDT file or in the JMS client Java class?
I have gone through some of the IBM documentation and I see the setCCDTURL() and setConnectionTimeout() methods on the factory object, but I am not sure whether they fulfill my requirement.
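A rough sketch of the kind of factory setup in question (the CCDT path is illustrative; the client reconnect setters shown are the MQ v7 automatic reconnection options and are an assumption here, since whether they cover the required wait time is exactly the open question):

import java.net.URL;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ReconnectingFactory {
    public MQConnectionFactory build() throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        cf.setCCDTURL(new URL("file:///var/mqm/ccdt/AMQCLCHL.TAB"));  // both channels defined in this CCDT
        // MQ v7 automatic client reconnection (assumption: these setters, not setConnectionTimeout,
        // are the relevant knobs); the timeout is the total time in seconds the client
        // keeps retrying before giving up.
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
        cf.setClientReconnectTimeout(300);
        return cf;
    }
}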

Configuring users for JMS queue

We recently set up a JMS queue on a GlassFish server so that people on remote machines can get some data from it.
We established the connection, but now we have a problem.
There will be a lot of users, and we need to use JMS. How does one create a list of users (username + password) so that only those who are authorized can read from the queue? Is this even possible?

How does JMS interact with the underlying database?

I understand JMS as depicted by the following diagram (image not reproduced here; source: techhive.com).
Is there any way for me to access the underlying database using JMS or something else? Further, regarding the JDBC connections that the JMS server maintains: can I add new connections so as to access other databases as well and do CRUD operations on them? If yes, how?
Where did you get this from?
Normally JMS is used to send messages to queues (or topics). You have message producers that push messages into the queue and message consumers that consume and process them.
In your example it seems that you have multiple queues: one for the messages that need to be processed, and one per client for retrieving the results of processing its messages.
With a JMS server you don't necessarily have a database behind it. Everything can stay in memory, or it can be written to files. You only need a database server behind it if you configure your JMS server to be persistent (to ensure that your messages won't be lost even if the server or application crashes). But even in that case you never have to interact with the database; only the JMS server does, and you interact with the JMS server by sending and consuming messages.
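A minimal, provider-agnostic sketch of that interaction (JNDI names are placeholders): the application only ever talks to the JMS server through the JMS API, and any persistence to files or a database is handled by the broker itself.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class JmsClientSketch {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();   // provider-specific jndi.properties assumed
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/WorkQueue");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(queue).send(session.createTextMessage("hello"));

            conn.start();
            Message received = session.createConsumer(queue).receive(5000);   // consume it back
            System.out.println("Got: " + received);
        } finally {
            conn.close();
        }
    }
}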

Configure HAProxy for rabbitmq

I want to use HAProxy as a load balancer and put two RabbitMQ servers behind it. Both RabbitMQ servers are on different EC2 instances. I have configured the HAProxy server by following this reference. It works, but the problem is that messages are not published in a round-robin pattern; messages are published to only one server. Is there a different configuration for my requirement?
My configuration in /etc/haproxy/haproxy.cfg:
listen rabbitmq 0.0.0.0:5672
    mode tcp
    stats enable
    balance roundrobin
    option tcplog
    no option clitcpka
    no option srvtcpka
    server rabbit01 46.XX.XX.XX:5672 check
    server rabbit02 176.XX.XX.XX:5672 check

listen web-service *:80
    mode http
    balance roundrobin
    option httpchk HEAD / HTTP/1.0
    option httpclose
    option forwardfor
    option httpchk OPTIONS /health_check.html
    stats enable
    stats refresh 10s
    stats hide-version
    stats scope .
    stats uri /lb?stats
    stats realm LB2\ Statistics
    stats auth admin:Adm1nn
Update:
I have done some research on this and found that HAProxy round-robins the connections across the RabbitMQ servers. For example, if I open 10 connections, it will round-robin those 10 connections over my 2 RabbitMQ servers and publish the messages over them.
But the problem is that I want to round-robin the messages, not the connections, and that should be managed by the HAProxy server; i.e. if I send 1000 messages at a time through HAProxy, then 500 messages should go to RabbitMQ server 1 and 500 messages should go to RabbitMQ server 2. What configuration do I have to use for that?
Update:
I have also tested leastconn balancing, but HAProxy's behavior is unexpected. I have posted that question on serverfault.com.
Messages get published to an exchange, which routes them to a queue.
You probably didn't configure your queues with {"x-ha-policy","all"}. Given that the exchange routing is working on both nodes, this is probably all you are missing.
Note: pre-RabbitMQ 3.0 you would declare a queue with the x-ha-policy argument and it would be mirrored. With RabbitMQ 3.0 you need to apply a policy (ha-mode = all) instead. You can set policies through the API or the admin tools (rabbitmqctl, the management GUI), e.g.:
rabbitmqctl set_policy -p '/' MirrorAllQueues '.+' '{"ha-mode": "all"}'
The AMQP protocol is designed to use persistent connections, meaning you won't get a new connection per AMQP message (to avoid the overhead of constantly reconnecting). This means that a load balancer such as HAProxy will not be effective in balancing out your messages - it can only help with balancing out your connections.
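To illustrate (a minimal sketch with the RabbitMQ Java client; host and queue names are placeholders): every publish below reuses the single connection that HAProxy handed to one backend when newConnection() was called, so all the messages land on that node.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PublishThroughHaproxy {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("haproxy.example.com");      // the HAProxy front end on port 5672
        Connection conn = factory.newConnection();   // HAProxy picks ONE backend here
        Channel channel = conn.createChannel();
        channel.queueDeclare("task_queue", true, false, false, null);

        for (int i = 0; i < 1000; i++) {
            // 1000 publishes, one connection: nothing here gives HAProxy a chance to rebalance
            channel.basicPublish("", "task_queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    ("message " + i).getBytes("UTF-8"));
        }
        channel.close();
        conn.close();
    }
}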
Thus you cannot achieve your stated goal. If, however, your actual goal is to distribute messages evenly to consumers of those RabbitMQ instances, then you can use clustering as Karsten describes or you can use federation.
Federation setup:
First you need to enable the federation plugins:
rabbitmq-plugins enable rabbitmq_federation
rabbitmq-plugins enable rabbitmq_federation_management
Then for each of your servers log on to the RabbitMQ Web UI as an administrator and go to Admin > "Federation Upstreams" > "Add a new upstream" and add the other server(s) as upstream(s).
Now you need to define a policy for each exchange/queue you want to be federated. I only managed to get federation working for queues mind you, so I would try that first. Go to Admin > "Policies" > "Add / update a policy" and add a policy that targets the queue(s) you want federated.
Remove the 'backup' from the server definitions.
A backup server is one that will be used when all others are down. Specifying all your servers as backup without using option allbackups will likely have untoward consequences.
Change the relevant portion of your config to the following:
listen rabbitmq *:5672
    mode tcp
    balance roundrobin
    stats enable
    option forwardfor
    option tcpka
    server web2 46.XX.XX.XXX:5672 check inter 5000
    server web1 176.XX.XX.XX:5672 check inter 5000

How to design an MQ Server?

I'm unclear as to whether there should be a 1-1 or a 1-* relationship between:
Server-Connection Channel and JMS Topic
Server-Connection Channel and Listener
Listener and Topic
Regarding the design of our application layer, there is a single MDB that, in response to a message, does some work and then publishes messages onto a variety of output topics. The service layer is listening on these output topics.
Currently I have a 1-1-1 relationship between Channel-Listener-Topic, and therefore an instance of JmsConnectionFactory for each publisher (on the app side) and each listener (on the service side).
There are a couple of different ways to look at this. From the point of view of your application one connection factory can have many sessions. Each session may have many consumers but units of work are scoped per session, not per consumer. So more than likely you want one connection factory with multiple sessions where each session has a listener on a particular topic. If you have a listener assigned to multiple consumers on a single session, any acknowledge (or COMMIT in a transacted session) commits all messages got or put in that session.
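A sketch of what that looks like on the service side (JNDI and topic names are illustrative): one connection factory and one connection, with a separate transacted session and listener per topic so each session's unit of work stays independent.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class ServiceListeners {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ServiceCF");
        Connection conn = cf.createConnection();

        for (String jndiName : new String[] {"jms/OutputTopicA", "jms/OutputTopicB"}) {
            Topic topic = (Topic) ctx.lookup(jndiName);
            // One session per topic: committing this session only affects
            // messages consumed (or produced) in this session.
            final Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(topic);
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    try {
                        // ... process the message ...
                        session.commit();   // unit of work scoped to this session
                    } catch (JMSException e) {
                        throw new RuntimeException(e);
                    }
                }
            });
        }
        conn.start();
    }
}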
From the WMQ server's point of view, one channel definition can have many running instances. So you only need the one SVRCONN channel defined per app, regardless of how many channel instances it needs to start. Try not to put different apps on the same SVRCONN though because you often want to administer or authorize the apps separately. For example, with apps on separate channels you could easily figure out which app was misbehaving if you suddenly found yourself with 3000 running channels.
For purposes of administration and debugging, I'd probably have one CF for the app side and one CF for the service side. Each would point to a different SVRCONN channel as described above. Inside the app server I'd stick with one topic per session unless it is valid for your app to consume off multiple topics in a single unit of work. In the subscription you can specify a wildcard topic to get all topics below a certain point in the topic tree with a single subscriber.
Just for best practices, I'd also set the CF to use FAILIFQUIESCE to make sure the QMgr can be stopped in an orderly fashion, and I'd use SYNCPOINTALLGETS (or a transacted session with explicit commit calls) in order to improve reliability as per the JMS 1.1 spec, section 4.4.13, which states:
If a failure occurs between the time a client commits its work on a Session and the commit method returns, the client cannot determine if the transaction was committed or rolled back. The same ambiguity exists when a failure occurs between the non-transactional send of a PERSISTENT message and the return from the sending method. It is up to a JMS application to deal with this ambiguity. In some cases, this may cause a client to produce functionally duplicate messages.
A message that is redelivered due to session recovery is not considered a duplicate message.
SYNCPOINTALLGETS (a.k.a. SPAG) ensures that messages retrieved from the queue are delivered to your app before being committed and permanently removed from the queue. Otherwise, if you lose your connection while the QMgr is trying to return a message, it's gone for good. With SPAG set you might see the same message twice as described in the JMS spec, but you'll never drop one.
For more details of the options available to the CF, queue and topic objects, see: Properties of objects in the WebSphere MQ Using Java manual.
WMQ v6.0 is end-of-life as of September 2012, so please be sure to develop against the v7 clients, even if the server is at v6. This will reduce your migration effort next year. The v7.0 and v7.1 clients are available for download from IBM.
An MDB in a container creates a pool of MDBs that concurrently process messages. If you simply process and write to the topic you will be fine. With this in mind you do not have a 1-1-1 relationship.
In your MDB just do a JNDI lookup of your TopicConnectionFactory and your Topic, and then write to it. Look here: http://middleware.its.state.nc.us/middleware/Documentation/en_US/htm/csqzaw09/csqzaw0931.htm
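A minimal sketch of that pattern (JNDI names are placeholders; the MDB deployment annotations/descriptors and proper error handling are omitted):

import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;
import javax.naming.InitialContext;

public class WorkerBean implements MessageListener {
    public void onMessage(Message request) {
        try {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory tcf = (TopicConnectionFactory) ctx.lookup("jms/MyTCF");
            Topic outputTopic = (Topic) ctx.lookup("jms/OutputTopic");

            TopicConnection conn = tcf.createTopicConnection();
            try {
                TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                TopicPublisher publisher = session.createPublisher(outputTopic);
                // Do the work for the incoming request, then publish the result.
                publisher.publish(session.createTextMessage("result of processing " + request.getJMSMessageID()));
            } finally {
                conn.close();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}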
