I am trying to figure out how to conveniently pause all consumers/message-listeners, while my application is in controlled maintenance mode. The application is using ActiveMQ 5.13.3 client libraries.
Some time ago I switched from a single ActiveMQConnectionFactory to a PooledConnectionFactory. It is set up like so:
ActiveMQConnectionFactory amcf = new ActiveMQConnectionFactory(config.getMessageBrokerUrl());
amcf.setTrustedPackages(Arrays.asList(
        "some.package.or.other",
        "java.lang",
        "java.util"));
connectionFactory = new PooledConnectionFactory(amcf);
connectionFactory.setCreateConnectionOnStartup(true);
Consumers and producers "create" (= fetch) a connection from the connection pool and "close" it when they are done, returning it to the pool. In the case of MessageListeners, obviously, the connection is obtained once at startup and returned at application shutdown.
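For illustration, the borrow/return cycle looks roughly like this (a sketch; the queue name is made up):
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;

Connection con = connectionFactory.createConnection(); // borrows from the pool
try {
    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageProducer producer = session.createProducer(session.createQueue("some.queue"));
    producer.send(session.createTextMessage("hello"));
} finally {
    con.close(); // returns the connection to the pool rather than closing it
}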
The Javadoc for ActiveMQConnection.stop() says it "temporarily stops a connection's delivery of incoming messages". Perfect for what I want, except that the pool obviously contains many connections, not just one.
How do you pause all connections of an ActiveMQ connection pool?
I guess you have to resort to other means of pausing message delivery when using a pooled connection factory. See this question for an example when using Spring DMLC (which may not apply to you): Start and Stop JMS Listener using Spring
You can also pause the queue from the broker side: there is a pause/resume operation on the queue's JMX MBean.
It does not answer the question about pausing the client, but may solve your problem.
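For example, the pause operation can be invoked programmatically over JMX. A minimal sketch, assuming the broker's JMX connector is reachable on localhost:1099, the broker name is "localhost", and the queue name is illustrative:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueuePauser {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=my.queue");
            mbsc.invoke(queue, "pause", null, null);  // stops delivery to consumers
            // ... perform maintenance ...
            mbsc.invoke(queue, "resume", null, null); // resumes delivery
        }
    }
}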
In my Java application I am using the failover transport to connect to a local ActiveMQ broker:
failover:(tcp://0.0.0.0:61616)
I create one single connection that I reuse in the rest of the application:
ActiveMQConnection connection = (ActiveMQConnection) connectionFactory.createConnection();
In another part of the application, when I receive an external call, I need to send a message to the broker, so to do that I create a new "Session":
Session locSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
When the broker is down my app tries to reconnect to the broker forever (this is the expected behavior I really want to have).
However, the problem is that if the broker is down and I receive a call that invokes connection.createSession(false, Session.AUTO_ACKNOWLEDGE), my app hangs forever on this line of code, waiting to reconnect successfully to the broker before creating the session.
Do you know of any way to check, before I execute createSession, whether the connection object is still trying to reconnect or is really connected? If I could tell, I could avoid creating the session when the app is not connected to the broker (only trying to reconnect) and thereby avoid hanging on connection.createSession forever (I would raise an exception instead).
I wasn't able to find any property or method on ActiveMQConnection to gather this information.
The failover: URL provides a setting, startupMaxReconnectAttempts, to prevent infinite retries when connecting to the broker for the first time.
Also note: if you want an exception to bubble up, that conflicts with the requirement of infinite retry. You would need to adjust the failover settings to match your intended behavior by setting a maximum count or maximum time for retries; the client then throws an exception and unblocks your caller.
For example, you could indicate you only want to retry for 5 minutes, then receive an exception to handle in the code to prevent the infinite blocking.
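A sketch of such a URI, using the failover transport's maxReconnectAttempts and timeout options (the values are illustrative; timeout unblocks a pending request after the given number of milliseconds):
failover:(tcp://0.0.0.0:61616)?maxReconnectAttempts=10&timeout=300000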
Thank you all for your help and suggestions. They helped me a lot in re-focusing the problem.
However, I found the answer to my question using the method getTransport().isConnected().
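For reference, a minimal sketch of that check before creating the session (the exception type is my own choice):
ActiveMQConnection amqConnection = (ActiveMQConnection) connection;
if (amqConnection.getTransport().isConnected()) {
    Session locSession = amqConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    // ... use the session ...
} else {
    // failover is still reconnecting; fail fast instead of blocking
    throw new IllegalStateException("Not connected to the broker");
}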
I am using Amazon MQ as my MQTT broker, and when around 1000 requests are received simultaneously, the broker breaks and disconnects. Can anyone tell me how to use Amazon MQ as my broker and solve this scaling problem at the same time?
I'm assuming that you have created ActiveMQ as a singleton class, right?
- For producing a message, you create an instance of PooledConnectionFactory like this:
ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory(MQTT_END_POINT);
activeMQConnectionFactory.setUserName(USERNAME); // credentials assumed to be defined elsewhere
activeMQConnectionFactory.setPassword(PASSWORD);
PooledConnectionFactory pooledConnectionFactory = getActiveMQInstance().configurePooledConnectionFactory(activeMQConnectionFactory);
This pooledConnectionFactory is used to create a connection, then a session, and then the destination (as described in the Amazon MQ documentation). You send the message using a MessageProducer object and then close the MessageProducer, session, and connection.
- For consumption, there will be an always-alive listener that is always ready for messages to arrive. The consumer part follows the same pattern: a consumer connection, then a session, and then the destination queue to listen on.
As far as I remember, this part is also covered in the Amazon MQ documentation.
One problem is that the consumer's connection to the broker is sometimes lost (since the producer reopens a connection, produces, and closes it each time, this is not observed on the producer side). Remember, you will have to re-establish the connection for the consumer, as sketched below.
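A minimal sketch of how the consumer could detect a dropped connection, using the standard JMS ExceptionListener (reconnect() is a hypothetical method that rebuilds the connection, session, and MessageConsumer):
consumerConnection.setExceptionListener(new javax.jms.ExceptionListener() {
    @Override
    public void onException(javax.jms.JMSException e) {
        // the broker connection was lost; tear everything down and rebuild
        reconnect();
    }
});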
If there is any variance from the above approach, please mention it. Also, add a picture of your Amazon MQ broker console showing the connection, queue, and active consumers.
Just out of curiosity, what is the maximum number of connections you have set for the PooledConnectionFactory?
I am using the AWS-S3 consumer to poll files at a certain location on S3 at regular intervals. After polling a certain number of times, it starts failing with exceptions as given below:
Will try again at next poll. Caused by:[com.amazonaws.AmazonClientException - Unable to execute HTTP request:
Timeout waiting for connection from pool]
com.amazonaws.AmazonClientException: Unable to execute HTTP request:Timeout waiting for connection from pool
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:376) ~[aws-java-sdk-1.5.5.jar:na]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:202) ~[aws-java-sdk-1.5.5.jar:na]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3037) ~[aws-java-sdk-1.5.5.jar:na]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3008) ~[aws-java-sdk-1.5.5.jar:na]
at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:531) ~[aws-java-sdk-1.5.5.jar:na]
at org.apache.camel.component.aws.s3.S3Consumer.poll(S3Consumer.java:69) ~[camel-aws-2.12.0.jar:2.12.0]
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:187) [camel-core-2.12.0.jar:2.12.0]
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:114) [camel-core-2.12.0.jar:2.12.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_60]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_60]
From what I understand, the reason is likely that the consumer exhausts the available connections from the pool, as it uses a new connection on every poll. What I need to know is how to release the resources after every poll, and why the component itself doesn't do it.
Camel Version: 2.12
Edit:
I modified the consumer to pick up a custom S3 client with a specific connection timeout, max connections, max error retry, and socket timeout, but to no avail; the result is the same.
S3 Client configuration:
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setMaxConnections(50);
clientConfiguration.setConnectionTimeout(6000);
clientConfiguration.setMaxErrorRetry(3);
clientConfiguration.setSocketTimeout(30000);
main.bind("s3Client", new AmazonS3Client(awsCredentials, clientConfiguration));
The AmazonS3Client object named "s3Client" is bound to the Camel context and provided to the Camel AWS-S3 component-based route. From then on, Camel manages this resource on its own.
Required solution: I am expecting a solution specific to the Camel AWS-S3 consumer and not a generic Java solution, as I am aware that a connection must be closed after its task is done for it to be released and reused. What I am confused about is why Camel isn't doing this automatically when provided with the connection pool, or whether I am missing some specific configuration.
The Camel consumer class opens a connection for each "key" and creates an exchange out of it. This exchange is forwarded on to the route for processing, but the underlying stream is never closed automatically, not even on calling "stop". As a result, the connection pool runs out of free connections. What needs to be done is to extract the S3ObjectInputStream from the exchange and close it:
S3ObjectInputStream s3InputStream = exchange.getIn().getBody(S3ObjectInputStream.class);
s3InputStream.close();
The answer is pretty much what the others suggest, that is, to close the connection. But as explained, a Camel-specific answer was expected, along with an explanation of why Camel doesn't handle this on its own.
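For reference, a minimal sketch of where that close fits in a Camel 2.12 route (the bucket name and route are illustrative, not the actual setup from the question):
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class S3PollRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("aws-s3://my-bucket?amazonS3Client=#s3Client")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    // ... handle the file content here ...
                    S3ObjectInputStream in = exchange.getIn().getBody(S3ObjectInputStream.class);
                    if (in != null) {
                        in.close(); // releases the HTTP connection back to the pool
                    }
                }
            });
    }
}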
The connection-pooling concept is the same everywhere. If you are not able to get a connection even though one is idle, you need to explicitly call close() after checking whether the connection is idle, for example:
if (con.isIdle() && !con.closed()) {
    con.close();
}
(The method names isIdle() and closed() are illustrative; the exact API depends on your client library.) Only then will connections become available again. Even though most frameworks do this, it is better to finalize this logic in our connection-factory classes.
Edit:
https://forums.aws.amazon.com/message.jspa?messageID=296676
This link will surely help you get your specific answer, since you didn't share the code of your S3Object connection class.
Edit 2:
Try this method on your ClientConfiguration:
public ClientConfiguration withConnectionMaxIdleMillis(long connectionMaxIdleMillis)
This might resolve your error, because it closes a connection automatically if it sits idle in the pool and is not reused.
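For example (a sketch, assuming an AWS SDK version that includes this method; the values are illustrative):
ClientConfiguration clientConfiguration = new ClientConfiguration()
        .withMaxConnections(50)
        .withConnectionMaxIdleMillis(60 * 1000); // close connections idle for 60 seconds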
I put JSch into commons-pool (with Spring pool support) with initial success:
http://docs.spring.io/spring/docs/3.2.4.RELEASE/spring-framework-reference/htmlsingle/#aop-ts-pool
However:
Should we pool the channels within the session instead of pooling the sessions? Each JSch session creates one thread, so pooling JSch sessions will create x threads. If we pool channels, there will really be only one JSch thread.
(commons-pool) What happens if the JSch session goes stale? How do we regenerate the session in the context of commons-pool or Spring pool support? How do we detect whether it has gone stale?
Thanks
Figured out my own question. I will share my project in the next day or two.
Pooling channels is much more effective. There is really no need to create multiple sessions (if the sessions connect to the same SFTP endpoint).
I implemented a JSch connection pool (pooling channels) with Spring pool and commons-pool. I will post it to GitHub in the next day or two. The most important question is what happens if the connection goes stale.
I found out that, based on my implementation of one session with multiple channels, if the connection goes stale, the pooled objects (in this case, the channels) also go stale. The pooled object should be invalidated and deleted from the pool. When the connection comes back up and a new application thread "borrows" from the pool, new pooled objects will be created.
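To make this concrete, here is a minimal sketch of a channel-pooling factory (written against the commons-pool2 API for illustration; the factory class itself is hypothetical). With validation on borrow enabled, a stale channel fails validation, is destroyed, and a fresh one is created once the connection is back:
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.Session;
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class SftpChannelFactory extends BasePooledObjectFactory<ChannelSftp> {
    private final Session session; // one shared JSch session, one JSch thread

    public SftpChannelFactory(Session session) {
        this.session = session;
    }

    @Override
    public ChannelSftp create() throws Exception {
        ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
        channel.connect();
        return channel;
    }

    @Override
    public PooledObject<ChannelSftp> wrap(ChannelSftp channel) {
        return new DefaultPooledObject<ChannelSftp>(channel);
    }

    @Override
    public boolean validateObject(PooledObject<ChannelSftp> p) {
        // a stale channel (e.g. after a broken TCP connection) fails here
        return p.getObject().isConnected() && !p.getObject().isClosed();
    }

    @Override
    public void destroyObject(PooledObject<ChannelSftp> p) {
        p.getObject().disconnect();
    }
}
Usage: wrap it in a GenericObjectPool and enable validation on borrow:
GenericObjectPool<ChannelSftp> pool = new GenericObjectPool<ChannelSftp>(new SftpChannelFactory(session));
pool.setTestOnBorrow(true); // invalidates stale channels instead of handing them out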
To validate my observation, my not-so-automated test:
a) Create a set (say 10) of app threads checking out channel resources from the pool.
b) Have the threads sleep for 20 seconds.
c) Create another set of app threads checking out channel resources from the pool.
At a), set a breakpoint at i == 7 and break the connection with an iptables drop rule (Linux) or pfctl -e; pfctl -f /etc/pf.conf (Mac; google how to do it!). This first set of app threads will get exceptions because the channel is broken.
At b), restore the connection.
At c), the second set of app threads will complete the operation successfully because the broken connection has been restored.
I have a JBoss 6 server with HornetQ and a single queue:
<queue name="my.queue">
<entry name="/queue/test"/>
</queue>
There are different consumers (on different machines) connected to this queue, but only a single consumer is active at a time. If I shut down this consumer, the messages are immediately processed by one of the other consumers.
Since my messages involve some time-consuming processing, I want multiple consumers to process their unique messages concurrently.
I remember a similar setup in earlier versions of JBoss where this worked without problems. In JBoss 6 the messaging system is working well, except for the issue described above. This question is similar to Are multiple client consumers possible in hornetq?, but that scenario is not the same as mine.
Update 1: If I close (Ctrl+C) one consumer, there is a short timeout (until the server recognizes the lost consumer) before the next consumer gets the message.
Update 2: Code Snippet
VoidListener ml = new VoidListener();
QueueConnectionFactory qcf = (QueueConnectionFactory)
ctx.lookup("ConnectionFactory");
QueueConnection conn = qcf.createQueueConnection();
Queue queue = (Queue) ctx.lookup(queueName);
QueueSession session = conn.createQueueSession(false,
QueueSession.AUTO_ACKNOWLEDGE);
QueueReceiver recv = session.createReceiver(queue,"");
recv.setMessageListener(ml);
conn.start();
And the MessageListener:
public class VoidListener implements MessageListener
{
    private final Logger logger = Logger.getLogger(VoidListener.class); // log4j assumed
    private int counter = 0;

    public void onMessage(Message msg)
    {
        counter++;
        logger.debug("Message (" + counter + ") received");
        // simulate time-consuming processing
        try { Thread.sleep(15 * 1000); } catch (InterruptedException e) {}
    }
}
With multiple consumers on a queue, messages are load balanced between the consumers.
As you spend some time consuming each message, you should disable client-side buffering by setting consumer-window-size to 0.
The HornetQ distribution includes an example of how to disable client buffering and give better support to slow consumers (a slow consumer is one that takes some time to process each message).
Message systems pre-fetch/read ahead messages into the client buffer to speed up processing and avoid network latency. This is not an issue if you have fast-processing queues and a single consumer.
JBoss Messaging offered a slow-consumer option on the connection factory, and HornetQ offers the consumer window size.
Most message systems provide a way to enable or disable client pre-fetching.
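For example, in hornetq-jms.xml the window size can be set to 0 on the connection factory the consumers look up (a sketch; the connector and entry names depend on your configuration):
<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty"/>
   </connectors>
   <entries>
      <entry name="ConnectionFactory"/>
   </entries>
   <!-- 0 disables client-side buffering (pre-fetch) entirely -->
   <consumer-window-size>0</consumer-window-size>
</connection-factory>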
I am sorry, but I cannot understand what exactly the problem is. We've used HornetQ in versions 2.0.0.GA and 2.2.2.Final. In both cases, queue-based load balancing works fine. If you define multiple consumers for one queue and all of them are active, messages will be distributed between them automatically: the first message to consumer A, the second to consumer B, the third to consumer C, and so on. This is how queues with multiple consumers work; it's free load balancing :) It's normal that when you shut down one consumer, the others receive more messages.