Using Java, how do I configure ActiveMQ to load balance and reconnect when disconnections occur?
As I understand it, this configuration should be made on the ActiveMQConnectionFactory object.
To do that, you set the brokerURL of the ActiveMQConnectionFactory to a failover transport URI:
failover:(tcp://primary:61616,tcp://secondary:61616)
If you only have one broker, the following is enough. That is fine for testing reconnects, but for production you'll most likely want more than one:
failover:(tcp://primary:61616)
Unlimited reconnection attempts is the default, but you can tweak quite a few options; see the ActiveMQ failover transport documentation.
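As a concrete sketch: the helper below (a made-up name, not from the question) assembles such a failover URI from a broker list plus the real `maxReconnectAttempts` option; by default the failover transport also picks among the listed brokers at random (`randomize=true`), which gives a rough client-side balance. With the ActiveMQ client jar on the classpath, the resulting string is what you would pass to `new ActiveMQConnectionFactory(...)`.

```java
import java.util.List;

public class FailoverUriDemo {
    // Hypothetical helper: assembles a failover transport URI from broker
    // addresses plus a reconnect option. maxReconnectAttempts is a real
    // failover option; left unset, reconnect attempts are unlimited.
    static String failoverUri(List<String> brokers, int maxReconnectAttempts) {
        return "failover:(" + String.join(",", brokers)
                + ")?maxReconnectAttempts=" + maxReconnectAttempts;
    }

    public static void main(String[] args) {
        String uri = failoverUri(
                List.of("tcp://primary:61616", "tcp://secondary:61616"), 10);
        System.out.println(uri);
        // With the ActiveMQ client on the classpath you would then do:
        //   ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(uri);
    }
}
```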
Related
We have a solace broker running in a docker container. When we create a JNDI Connection Factory there are default properties such as
Reconnect Retry Attempts
Connect Retry Attempts
Connect Retry Attempts per Host
and so on
When we establish a producer using JMS, we set properties like so:
env.put(SupportedProperty.SOLACE_JMS_JNDI_CLIENT_ID, config.getJndiClientID());
env.put(SupportedProperty.SOLACE_JMS_PROP_SENDER_ID, config.getSenderID());
env.put(SupportedProperty.SOLACE_JMS_VPN, config.getVpn());
env.put(SupportedProperty.SOLACE_JMS_JNDI_CONNECT_RETRIES, 0);
env.put(SupportedProperty.SOLACE_JMS_JNDI_RECONNECT_RETRIES, 0);
env.put(SupportedProperty.SOLACE_JMS_JNDI_CONNECT_RETRIES_PER_HOST, 0);
However, at application run-time, when the connection is being established, the properties I set on the client side seem to have no effect. Specifically, I tested this by stopping the Solace docker container and observing that the client tries to reconnect 3 times, which happens to be the default on the broker side.
Hence the question: how do I force these properties to be overridden on the client side, if that is possible at all? Under what circumstances does setting these properties on the client side take effect?
Loading a JMS ConnectionFactory over JNDI is, by definition, a two-step process: first the API connects to JNDI, then it loads whatever JMS ConnectionFactory object has been created there.
The property SOLACE_JMS_JNDI_CONNECT_RETRIES (note the JNDI) is actually a parameter for the first step! It defines the number of retries for contacting JNDI. If you want to change the definition of the loaded JMS ConnectionFactory, you need to do this on the Solace broker as an administrator, for example in the admin GUI.
When you use env.put(), you are trying to set the JMS property via the initial context.
But these properties can also be set through the JNDI properties file as well as on the command line.
If you turn on API debugging, you should be able to see which value is taken from where.
Once you are able to connect with the JNDI connection factory on the broker, the values will be taken from the broker side.
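The two steps can be seen in code. The sketch below only builds the step-1 JNDI environment and is runnable as-is (the provider URL and lookup name are made-up placeholders; the Solace initial context factory class name is an assumption from that library); the commented-out step 2 is where the broker-side ConnectionFactory definition, not the env table, takes over.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class JndiTwoStepDemo {
    // Step 1 of the two-step process: an environment describing how to
    // reach JNDI itself. Any *_JNDI_* retry properties belong in this
    // table and govern only contacting JNDI.
    static Hashtable<String, Object> jndiEnv(String providerUrl) {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.solacesystems.jndi.SolJNDIInitialContextFactory");
        env.put(Context.PROVIDER_URL, providerUrl); // placeholder address
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, Object> env = jndiEnv("smf://broker-host:55555");
        System.out.println(env.size() + " JNDI properties set");
        // Step 2 (needs the Solace JMS jar on the classpath):
        //   InitialContext ctx = new InitialContext(env);
        //   ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/jms/cf/demo");
        // cf's reconnect behaviour comes from the object configured on the
        // broker, not from the env above.
    }
}
```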
We are trying to connect to IBM MQ using a CCDT file and JMS configuration.
We are able to connect, but we have an issue:
Since we use Spring to set up the connection factory with the CCDT file, it is initialized once at application start. Unfortunately it picks only one queue manager at a time, i.e. it sends all messages to the same queue manager and does not load balance.
However, I observed that if I manually set the CCDT file before every request, it does load balance across the queue managers. It looks to me as if the queue manager is decided whenever I set the URL to the CCDT file, which seems like the wrong practice. My expectation was to initialize the connection factory with the CCDT file once and have that configuration load balance on its own.
Can you help me with this?
This is the expected behavior. MQ does not load balance clients, it connection balances them. The connection is the single most time consuming API call and in the case of a mutually authenticated TLS connection can take seconds to complete. Therefore a good application design will attempt to connect once, then maintain that connection for the duration of the session. The JMS architecture and Spring framework both expect this pattern.
The way that MQ provides load distribution (again, not true balancing, but rather round-robin distribution) is that the client connects a hop away from a clustered destination queue. A message addressed to that clustered destination queue will round-robin among all the instances of that queue.
If it is a request-reply application, the thing listening for requests on these clustered queue instances addresses the reply message using the Reply-To QMgr and Reply-To Queue name from the requesting message. In this scenario the requestors can fail over QMgr to QMgr if they lose their connection. The systems of record listening on the clustered queues generally do not fail over across queue managers in order to ensure all queue instances are served and because of transaction recovery.
Short answer is that CCDT and MQ client in general are not where MQ load distribution occurs. The client should make a connection and hold it as long as possible. Client reconnect and CCDT are for connection balancing only.
Load distribution is a feature of the MQ cluster. It requires multiple instances of a clustered queue and these are normally a network hop away from the client app that puts the message.
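The round-robin behaviour described above can be pictured with a toy sketch. This is a model of the distribution semantics only, not the MQ API: successive messages addressed to one clustered queue name rotate across its instances, which live on different queue managers.

```java
import java.util.List;

public class RoundRobinDemo {
    // Toy model of MQ cluster workload distribution: messages addressed to
    // a single clustered queue name rotate across its hosting instances.
    private final List<String> instances;
    private int next = 0;

    RoundRobinDemo(List<String> instances) {
        this.instances = instances;
    }

    // Returns the queue manager that receives the next message.
    String nextInstance() {
        String qmgr = instances.get(next);
        next = (next + 1) % instances.size();
        return qmgr;
    }

    public static void main(String[] args) {
        RoundRobinDemo cluster = new RoundRobinDemo(List.of("QM1", "QM2", "QM3"));
        for (int i = 1; i <= 4; i++) {
            System.out.println("msg" + i + " -> " + cluster.nextInstance());
        }
        // msg4 wraps back around to QM1
    }
}
```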
When my MQ server becomes unavailable, the call to QueueConnectionFactory.createQueueConnection() hangs and eventually (1-2 minutes later) "javax.transaction.TransactionRolledbackException: Transaction is ended due to timeout" is thrown.
I cannot find a JavaEE call to set a timeout for the function.
Is there a way to get this function to fail faster or throw an exception on WebSphere when the MQ server cannot be reached?
The QCF is accessed by dependency injection.
@Resource(name = "jndi-name-for-QCF")
private QueueConnectionFactory queueConnectionFactory;
…
// this line is timing out.
QueueConnection connection = queueConnectionFactory.createQueueConnection();
I think typically this would be handled administratively with config rather than programmatically in your app code. E.g. see this article for some examples.
Not sure there's much you can do with configuring the WebSphere connection pooling settings either. This seems to point to configuring the MQ provider itself (e.g. the channels).
We have a JEE6 app built on Apache TomEE v1.6.0+. There are two parts: a cloud part and a ground part. The cloud part is intended never to be restarted, since it monitors a transient source of information, but it creates JMS messages and sends them to its broker.
The ground part is intended to be restartable during the day and is where the complex processing logic lives. It too has a broker, which connects to the cloud broker.
The problem we are having is that if we take down the ground instance of TomEE for more than a few minutes and then start it up again, the cloud broker will not deliver the messages that stacked up. Furthermore, it does not deliver any new messages either, forcing us to restart it, which makes us lose our messages.
Here are the two connection URIs... What on earth are we doing wrong??
Cloud:
<Resource
id="ActiveMQResourceAdapter"
type="ActiveMQResourceAdapter">
BrokerXmlConfig = broker:(ssl://0.0.0.0:61617?needClientAuth=true&transport.keepAlive=true&transport.soTimeout=30000,vm://localhost,network:static:(failover:(ssl://ground.somedomain.com:61617?keepAlive=true&soTimeout=30000)))?persistent=true
ServerUrl = vm://localhost
DataSource = jdbc/activemq
</Resource>
Ground:
<Resource
id="ActiveMQResourceAdapter"
type="ActiveMQResourceAdapter">
BrokerXmlConfig = broker:(ssl://0.0.0.0:61617?needClientAuth=true&transport.keepAlive=true&transport.soTimeout=30000,vm://localhost,network:static:(failover:(ssl://cloud.somedomain.com:61617?keepAlive=true&soTimeout=30000)))?persistent=true
ServerUrl = vm://localhost
DataSource = jdbc/activemq
</Resource>
Any help is much appreciated. Thank you very much!!
OK, we learned a couple of things.
First, we switched to using an external instance of ActiveMQ instead of relying on the embedded one inside TomEE. You must start the broker before starting TomEE, or TomEE will create an internal broker on startup and you'll be scratching your head wondering why no messages are processing. You then connect TomEE to the broker by setting BrokerXmlConfig to empty and ServerUrl = tcp://localhost.
Next, we switched to using the ActiveMQ HTTP transport. This completely sidesteps the network disconnect problems, since HTTP is stateless. It is VERY slow relative to TCP/SSL, but message transport is not the slowest point in our system, so it doesn't matter for us. You MUST have the external broker listen on both HTTP and TCP, since TomEE connects via TCP and the remote broker connects via HTTP.
These two things fixed our problems and we have a completely solid system running now. I hope this helps someone!!
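For reference, a sketch of what the external-broker Resource definition described above would look like (assembled from the answer's own settings; the key point is the empty BrokerXmlConfig, so TomEE does not start an embedded broker):

```xml
<Resource
    id="ActiveMQResourceAdapter"
    type="ActiveMQResourceAdapter">
    BrokerXmlConfig =
    ServerUrl = tcp://localhost
</Resource>
```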
Not sure if you are using topics or queues, but the JMS spec says that only queues and durable subscribers can take advantage of store-and-forward guaranteed delivery.
For a non-durable subscriber, a non-persistent message is delivered "at most once" and will be missed if the subscriber is inactive.
Please take a look to the following URL which explains in detail how guaranteed messaging works for topics and queues in ActiveMQ:
http://www.christianposta.com/blog/?p=265
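The difference can be modelled in a few lines of plain Java (a toy model of the semantics, not the ActiveMQ implementation): a durable subscriber's messages are stored while it is offline, while a non-durable subscriber simply misses whatever is published while it is inactive.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DurableTopicDemo {
    // Toy topic with one durable and one non-durable subscriber.
    private final Deque<String> durableStore = new ArrayDeque<>();
    private final Deque<String> nonDurableBuffer = new ArrayDeque<>();
    private boolean subscriberOnline = true;

    void publish(String msg) {
        durableStore.add(msg);         // stored even while subscriber is offline
        if (subscriberOnline) {
            nonDurableBuffer.add(msg); // "at most once": delivered only if active
        }
    }

    void disconnect() { subscriberOnline = false; }
    void reconnect()  { subscriberOnline = true; }

    int durableBacklog()    { return durableStore.size(); }
    int nonDurableBacklog() { return nonDurableBuffer.size(); }

    public static void main(String[] args) {
        DurableTopicDemo topic = new DurableTopicDemo();
        topic.publish("m1");
        topic.disconnect();
        topic.publish("m2");  // durable keeps it; non-durable misses it
        topic.reconnect();
        System.out.println("durable sees " + topic.durableBacklog()
                + ", non-durable sees " + topic.nonDurableBacklog());
        // prints: durable sees 2, non-durable sees 1
    }
}
```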
My system has the following parts:
ActiveMQ broker exposed on tcp, port 61616
3 Grails/Spring WARs that live in their own Tomcat servers; they publish and consume messages via the JMS broker
n remote client systems, each with a JMS listener component to receive client-specific messages; they connect to the JMS broker through a VPN using a hostname and port 61616
So far, all has worked fine throughout the dev, test and production environments.
We've just connected a new client system in production and noticed that its logs start to report 'channel was inactive for too long' exceptions and it drops the connection.
Worryingly, the overall effect of this one client is that it stops all message consumption on the broker, bringing our whole system to a halt.
This client listener (using Spring's CachingConnectionFactory) appears to connect to the JMS broker OK and process some messages; then, after about 3 minutes, it reports the exception. I turned on DEBUG in ActiveMQ and got loads of output, but nothing suggesting a warning or error on the broker around the same time.
I believe ActiveMQ has some internal keep-alive that should maintain the connection even if it is inactive for longer than the default 30 seconds.
The infrastructure guys have monitored this client's VPN and confirm it stays up and connected the whole time.
I don't believe the code or Spring config is at fault, as we have numerous other instances of the listener in different clients and they all behave fine.
I suppose I have 2 questions really:
What is causing 'channel inactive' exceptions?
Why does this exception in a single client stop ActiveMQ from working?
EDIT - adding exception stacktrace:
2013-04-24 14:02:06,359 WARN - Encountered a JMSException - resetting the underlying JMS Connection (org.springframework.jms.connection.CachingConnectionFactory)
javax.jms.JMSException: Channel was inactive for too (>30000) long: jmsserver/xxx.xx.xx.xxx:61616
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
at org.apache.activemq.ActiveMQConnection.onAsyncException(ActiveMQConnection.java:1833)
at org.apache.activemq.ActiveMQConnection.onException(ActiveMQConnection.java:1850)
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:101)
at org.apache.activemq.transport.ResponseCorrelator.onException(ResponseCorrelator.java:126)
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:101)
at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:101)
at org.apache.activemq.transport.WireFormatNegotiator.onException(WireFormatNegotiator.java:160)
at org.apache.activemq.transport.InactivityMonitor.onException(InactivityMonitor.java:266)
at org.apache.activemq.transport.InactivityMonitor$4.run(InactivityMonitor.java:186)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:693)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:719)
at java.lang.Thread.run(Thread.java:813)
Caused by: org.apache.activemq.transport.InactivityIOException: Channel was inactive for too (>30000) long: jmsserver/xxx.xx.xx.xxx:61616
... 4 more
Have you tried the following:
Disable the InactivityMonitor with wireFormat.maxInactivityDuration=0, e.g.:
URL: tcp://localhost:61616?wireFormat.maxInactivityDuration=0
If you don't wish to disable it, have you tried setting it to a high number, e.g. URL: tcp://localhost:61616?wireFormat.maxInactivityDuration=5000000 (just an example; use your own time in ms)?
Also, ensure that the jar files are the same version for both client and server.
Hope it helps
You just need to change activemq.xml (the configuration file), in the transportConnectors section, from:
<transportConnector name="ws" uri="ws://0.0.0.0:61614"/>
to:
<transportConnector name="ws" uri="tcp://0.0.0.0:61614"/>
This works on my Windows and Linux virtual machines.