How to configure an ActiveMQ exclusive consumer with a Spring Boot app - java

I want to configure an exclusive consumer for ActiveMQ with Spring Boot.
Configuring it in plain Java is easy:
queue = new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true");
consumer = session.createConsumer(queue);
But with Spring Boot, the listener is configured as below.
@JmsListener(destination = "TEST.QUEUE", containerFactory = "myFactory")
public void receiveMessage(Object message) throws Exception {
......
}
Now, how do I make this an exclusive consumer? Does the below work?
@JmsListener(destination = "TEST.QUEUE?consumer.exclusive=true", containerFactory = "myFactory")
public void receiveMessage(Object message) throws Exception {
......
}

Yes, it works this way.
Just set a breakpoint in the org.apache.activemq.command.ActiveMQQueue constructor and run your application in debug mode.
You will see that Spring Boot is calling
new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true"), which corresponds to the official ActiveMQ documentation:
https://activemq.apache.org/exclusive-consumer
Moreover, you can go to the ActiveMQ admin console and browse the active consumers of this queue: you will now see that the exclusive flag is set to true for your consumer.
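For completeness, a minimal sketch of what the myFactory container factory referenced by the listener could look like; the bean name comes from the question's containerFactory attribute, the rest is illustrative rather than the asker's actual configuration.
@Bean
public DefaultJmsListenerContainerFactory myFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Concurrency "1-1" is only illustrative; exclusivity itself is requested via the
    // ?consumer.exclusive=true destination option and enforced by the broker.
    factory.setConcurrency("1-1");
    return factory;
}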

Related

How to disable KAFKA consumer being autostarted without having to make any code changes including setting the autoStartup = "{xyz}" in Spring Boot?

Below is my KAFKA consumer
@Service
public class Consumer {
    private static final Logger LOGGER = Logger.getLogger(Consumer.class.getName());
    public static Queue<ProductKafka> consumeQueue = new LinkedList<>();

    @KafkaListener(topics = "#{'${spring.kafka.topics}'.split('\\\\ ')}", groupId = "#{'${spring.kafka.groupId}'}")
    public void consume(ProductKafka productKafka) throws IOException {
        consumeQueue.add(productKafka);
        LOGGER.info(String.format("#### -> Logger Consumed message -> %s", productKafka.toString()));
        System.out.printf("#### -> Consumed message -> %s", productKafka.toString());
    }
}
and below is my "application.properties" file
spring.kafka.topics=Product
spring.kafka.groupId=Product-Group
My Kafka consumer is getting started automatically.
However, due to a requirement, I want to disable the Kafka consumer from being auto-started without making any changes to the existing code, including setting autoStartup = "{xyz}" in the consumer class.
I am looking for an existing property that would disable the Kafka consumer from being auto-started, something like this:
spring.kafka.consumer.enable=false
Note: I have multiple Kafka consumers, and the above property should disable all of the consumers in the project.
Do we have any existing property that would disable Kafka consumers from being auto-started without making any changes to the existing code?
There is no standard out-of-the-box property; you have to provide your own.
autoStartup="${should.start:true}"
will start the container unless the property should.start is explicitly set to false (the :true default applies when the property is not present).
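For illustration only (a sketch, since the question wants to avoid touching the consumer code), the placeholder would sit on the listener annotation itself, e.g.:
@KafkaListener(topics = "#{'${spring.kafka.topics}'.split('\\\\ ')}",
        groupId = "#{'${spring.kafka.groupId}'}",
        autoStartup = "${should.start:true}")
public void consume(ProductKafka productKafka) throws IOException {
    // the listener container is only started when should.start resolves to true
    consumeQueue.add(productKafka);
}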
EDIT
Just add something like this to your application:
@Component
class Customizer {

    Customizer(AbstractKafkaListenerContainerFactory<?, ?, ?> factory,
            @Value("${start.containers:true}") boolean start) {
        factory.setAutoStartup(start);
    }
}
Then, in application.yml:
start:
  containers: false

Unable to synchronise Kafka and MQ transactions using ChainedKafkaTransactionManager

We have a Spring Boot application which consumes messages from IBM MQ, does some transformation, and publishes the result to a Kafka topic. We use https://spring.io/projects/spring-kafka for this. I am aware that Kafka does not support XA; however, in the documentation I found some input about using a ChainedKafkaTransactionManager to chain multiple transaction managers and synchronise the transactions. The same documentation also provides an example of how to synchronise Kafka and a database while reading messages from Kafka and storing them in the database.
I followed the same example in my use case and chained the JmsTransactionManager with the KafkaTransactionManager under the umbrella of a ChainedKafkaTransactionManager. The bean definitions follow below:
#Bean({"mqListenerContainerFactory"})
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(this.connectionFactory());
factory.setTransactionManager(this.jmsTransactionManager());
return factory;
}
#Bean
public JmsTransactionManager jmsTransactionManager() {
return new JmsTransactionManager(this.connectionFactory());
}
#Bean("chainedKafkaTransactionManager")
public ChainedKafkaTransactionManager<?, ?> chainedKafkaTransactionManager(
JmsTransactionManager jmsTransactionManager, KafkaTransactionManager kafkaTransactionManager) {
return new ChainedKafkaTransactionManager<>(kafkaTransactionManager, jmsTransactionManager);
}
#Transactional(transactionManager = "chainedKafkaTransactionManager", rollbackFor = Throwable.class)
#JmsListener(destination = "${myApp.sourceQueue}", containerFactory = "mqListenerContainerFactory")
public void receiveMessage(#Headers Map<String, Object> jmsHeaders, String message) {
// Processing the message here then publishing it to Kafka using KafkaTemplate
kafkaTemplate.send(sourceTopic,transformedMessage);
// Then throw an exception just to test the transaction behaviour
throw new RuntimeException("Not good Pal!");
}
When running the application, the message keeps getting rolled back onto the MQ queue, but messages keep accumulating on the Kafka topic, which tells me the kafkaTemplate interaction does not get rolled back.
If I understand the documentation correctly, this should not be the case: "If a transaction is active, any KafkaTemplate operations performed within the scope of the transaction use the transaction’s Producer."
In our application.yaml we configured the Kafka producer to use transactions by setting spring.kafka.producer.transaction-id-prefix.
The question is: what am I missing here and how should I fix it?
Thank you in advance for your inputs.
Consumers can see uncommitted records by default; set the isolation.level consumer property to read_committed to avoid receiving records from rolled-back transactions.
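A rough sketch of how that could be set through Spring Boot configuration (verify the exact property key for your Boot version; the generic consumer property passthrough is shown as an alternative):
spring.kafka.consumer.isolation-level=read_committed
# or, via the generic consumer property passthrough:
spring.kafka.consumer.properties.isolation.level=read_committed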

Spring Boot migration from Apache ActiveMQ to Artemis

I am using Apache ActiveMQ with Spring Boot and I want to migrate to Apache Artemis to improve support for clusters and nodes.
At the moment I am mainly using the concept of VirtualTopics with JMS, like:
@JmsListener(destination = "Consumer.A.VirtualTopic.simple")
public void receiveMessage() {
    ...
}
...
public void send(JmsTemplate template) {
    template.convertAndSend("VirtualTopic.simple", "Hello world!");
}
I have read that Artemis changed its address model to addresses, queues and routing types, instead of queues, topics and virtual topics like in ActiveMQ.
I have read a lot more, but I don't think I understand how to migrate now. I tried it the same way as above, so I imported the Artemis JMS client from Maven and wanted to use it like before, but with the FQQN (Fully Qualified Queue Name) or the VirtualTopic wildcard you can read about in some sources. But somehow it does not work properly.
My Questions are:
- How can I migrate VirtualTopics? Did I get it right with FQQN and those VirtualTopic wildcards?
- How can I specify the routing types anycast and multicast for the code examples above? (In the online examples addresses and queues are hardcoded in the server's broker.xml, but I want to create them on the fly from the application.)
- How can I use it with the OpenWire protocol, and how does the application know which one it uses? Does it only depend on the Artemis port I am using? So 61616 for OpenWire?
Can anyone help in clarifying my thoughts?
UPDATE:
Some further questions.
1) I always read something like "a default 5.x consumer". Is it expected then to get mixed with Artemis? Like you keep all of those naming conventions, just combine the address and the VirtualTopic name into an FQQN, and just change the dependencies to Artemis?
2) I've already tried the "virtualTopicConsumerWildcards" with "import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;" and "import org.apache.activemq.ActiveMQConnectionFactory;", but only in the second case did it make a difference.
3) I also tried to only use OpenWire as protocol in the acceptor, but in this case (and with "import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;") I get the following error when starting my application: "2020-03-30 11:41:19,504 ERROR [org.apache.activemq.artemis.core.server] AMQ224096: Error setting up connection from /127.0.0.1:54201 to /127.0.0.1:61616; protocol CORE not found in map: [OPENWIRE]".
4) Do I put e.g. multicast://VirtualTopic.simple as the destination name in template.convertAndSend(...)?
I tried template.setPubSubDomain(true) for the multicast routing type and left it off for anycast; this works. But is it a good way?
5) Do you maybe know how I can "tell" my Spring Boot application with template.convertAndSend(...) to use OpenWire?
UPDATE2:
Shared durable subscriptions
#JmsListener(destination = "VirtualTopic.test", id = "c1", subscription = "Consumer.A.VirtualTopic.test", containerFactory = "queueConnectionFactory")
public void receive1(String m) {
}
#JmsListener(destination = "VirtualTopic.test", id = "c2", subscription = "Consumer.B.VirtualTopic.test", containerFactory = "queueConnectionFactory")
public void receive2(String m) {
}
#Bean
public DefaultJmsListenerContainerFactory queueConnectionFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setClientId("brokerClientId");
factory.setSubscriptionDurable(true);
factory.setSubscriptionShared(true);
return factory;
}
Errors:
2020-04-17 11:23:44.485 WARN 7900 --- [enerContainer-3] o.s.j.l.DefaultMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'VirtualTopic.test' - trying to recover. Cause: org.apache.activemq.ActiveMQSession.createSharedDurableConsumer(Ljavax/jms/Topic;Ljava/lang/String;Ljava/lang/String;)Ljavax/jms/MessageConsumer;
2020-04-17 11:23:44.514 ERROR 7900 --- [enerContainer-3] o.s.j.l.DefaultMessageListenerContainer : Could not refresh JMS Connection for destination 'VirtualTopic.test' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=unlimited}. Cause: Broker: d1 - Client: brokerClientId already connected from /127.0.0.1:59979
What am I doing wrong here?
The idea behind virtual topics is that producers send to a topic in the usual JMS way, and a consumer can consume from a physical queue for a logical topic subscription, allowing many consumers on many machines and threads to balance the load.
Artemis uses a queue-per-topic-subscriber model internally, and it is possible to address the subscription queue directly using its Fully Qualified Queue Name (FQQN).
For example, the default 5.x consumer destination Consumer.A.VirtualTopic.simple (subscription A on topic VirtualTopic.simple) would be replaced with an Artemis FQQN comprising the address and queue: VirtualTopic.simple::Consumer.A.VirtualTopic.simple.
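As an illustrative sketch (not part of the original answer), a listener consuming directly from that FQQN might look like this, assuming the broker auto-creates the subscription queue and the JMS client in use accepts FQQN destination names:
@JmsListener(destination = "VirtualTopic.simple::Consumer.A.VirtualTopic.simple")
public void receiveFromFqqn(String message) {
    // receives messages routed to the A subscription queue of address VirtualTopic.simple
    System.out.println("Received via FQQN: " + message);
}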
However, Artemis supports a virtual topic wildcard filter mechanism that will automatically convert the consumer destination into the corresponding FQQN. To enable the filter mechanism, the configuration string property virtualTopicConsumerWildcards can be used. It has two parts separated by a ;, i.e. the default 5.x virtual topic with a consumer prefix of Consumer.*. would require a virtualTopicConsumerWildcards filter of Consumer.*.>;2.
Artemis is configured by default to auto-create destinations requested by clients. Clients can specify a special prefix when connecting to an address to indicate which routing type to use. The prefixes are enabled by adding the configuration string properties anycastPrefix and multicastPrefix to an acceptor; you can find more details at Using Prefixes to Determine Routing Type. For example, after adding anycastPrefix=anycast://;multicastPrefix=multicast:// to the acceptor, a client that needs to send a message to only one of the ANYCAST queues should use the destination anycast://VirtualTopic.simple, while a client that needs to send a message to the MULTICAST address should use the destination multicast://VirtualTopic.simple.
Artemis acceptors support using a single port for all protocols; they automatically detect which protocol is being used (CORE, AMQP, STOMP or OPENWIRE), but it is possible to limit which protocols are supported by using the protocols parameter.
The following acceptor enables the anycast prefix anycast://, the multicast prefix multicast:// and the virtual topic consumer wildcards, disabling all protocols except OPENWIRE on the endpoint localhost:61616.
<acceptor name="artemis">tcp://localhost:61616?anycastPrefix=anycast://;multicastPrefix=multicast://;virtualTopicConsumerWildcards=Consumer.*.%3E%3B2;protocols=OPENWIRE</acceptor>
UPDATE:
The following example application connects to an Artemis instance with the previous acceptor using the OpenWire protocol.
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.core.JmsTemplate;

@SpringBootApplication
@EnableJms
public class Application {

    private final String BROKER_URL = "tcp://localhost:61616";
    private final String BROKER_USERNAME = "admin";
    private final String BROKER_PASSWORD = "admin";

    public static void main(String[] args) throws Exception {
        final ConfigurableApplicationContext context = SpringApplication.run(Application.class);
        System.out.println("********************* Sending message...");
        JmsTemplate jmsTemplate = context.getBean("jmsTemplate", JmsTemplate.class);
        JmsTemplate jmsTemplateAnycast = context.getBean("jmsTemplateAnycast", JmsTemplate.class);
        JmsTemplate jmsTemplateMulticast = context.getBean("jmsTemplateMulticast", JmsTemplate.class);
        jmsTemplateAnycast.convertAndSend("VirtualTopic.simple", "Hello world anycast!");
        jmsTemplate.convertAndSend("anycast://VirtualTopic.simple", "Hello world anycast using prefix!");
        jmsTemplateMulticast.convertAndSend("VirtualTopic.simple", "Hello world multicast!");
        jmsTemplate.convertAndSend("multicast://VirtualTopic.simple", "Hello world multicast using prefix!");
        System.out.print("Press any key to close the context");
        System.in.read();
        context.close();
    }

    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        connectionFactory.setBrokerURL(BROKER_URL);
        connectionFactory.setUserName(BROKER_USERNAME);
        connectionFactory.setPassword(BROKER_PASSWORD);
        return connectionFactory;
    }

    @Bean
    public JmsTemplate jmsTemplate() {
        JmsTemplate template = new JmsTemplate();
        template.setConnectionFactory(connectionFactory());
        return template;
    }

    @Bean
    public JmsTemplate jmsTemplateAnycast() {
        JmsTemplate template = new JmsTemplate();
        template.setPubSubDomain(false);
        template.setConnectionFactory(connectionFactory());
        return template;
    }

    @Bean
    public JmsTemplate jmsTemplateMulticast() {
        JmsTemplate template = new JmsTemplate();
        template.setPubSubDomain(true);
        template.setConnectionFactory(connectionFactory());
        return template;
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        factory.setConcurrency("1-1");
        return factory;
    }

    @JmsListener(destination = "Consumer.A.VirtualTopic.simple")
    public void receiveMessageFromA(String message) {
        System.out.println("*********************** MESSAGE RECEIVED FROM A: " + message);
    }

    @JmsListener(destination = "Consumer.B.VirtualTopic.simple")
    public void receiveMessageFromB(String message) {
        System.out.println("*********************** MESSAGE RECEIVED FROM B: " + message);
    }

    @JmsListener(destination = "VirtualTopic.simple")
    public void receiveMessageFromTopic(String message) {
        System.out.println("*********************** MESSAGE RECEIVED FROM TOPIC: " + message);
    }
}

Configure Amazon SQS queue name in Spring Boot

I'm using Amazon SQS & Spring Boot (spring-cloud-aws-messaging). I've configured a message listener to receive messages from the queue with the annotation @SqsListener.
@SqsListener(value = "indexerQueue", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void queueListener(String rawMessage) {
...
}
This is a very simple approach, but I haven't found a way to load the queue name from a configuration file, which I need because I have different environments. Any ideas in this regard?
What version of spring-cloud-aws-messaging are you using? Version 1.1 should allow you to use a placeholder as a queue name, e.g.
#SqsListener(value = "${sqs.queue.indexer}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void queueListener(String rawMessage) {
...
}
Then, in your application-{env}.properties files you can put different values. For instance, in application-dev.properties:
sqs.queue.indexer=devIndexerQueue
and in application-production.properties
sqs.queue.indexer=indexerQueue

Task executor in Spring Boot

In my Spring Boot application I'm listening to a message queue. When a message appears I need to process it synchronously (one by one) in some task executor.
I'm using Amazon SQS; this is my config:
/**
 * AWS Credentials Bean
 */
@Bean
public AWSCredentials awsCredentials() {
    return new BasicAWSCredentials(accessKey, secretAccessKey);
}

/**
 * AWS Client Bean
 */
@Bean
public AmazonSQS amazonSQSAsyncClient() {
    AmazonSQS sqsClient = new AmazonSQSClient(awsCredentials());
    sqsClient.setRegion(Region.getRegion(Regions.US_EAST_1));
    return sqsClient;
}

/**
 * AWS Connection Factory
 */
@Bean
public SQSConnectionFactory connectionFactory() {
    SQSConnectionFactory.Builder factoryBuilder = new SQSConnectionFactory.Builder(
            Region.getRegion(Regions.US_EAST_1));
    factoryBuilder.setAwsCredentialsProvider(new AWSCredentialsProvider() {
        @Override
        public AWSCredentials getCredentials() {
            return awsCredentials();
        }

        @Override
        public void refresh() {
        }
    });
    return factoryBuilder.build();
}

/**
 * Registering QueueListener for queueName
 */
@Bean
public DefaultMessageListenerContainer defaultMessageListenerContainer() {
    DefaultMessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer();
    messageListenerContainer.setConnectionFactory(connectionFactory());
    messageListenerContainer.setMessageListener(new MessageListenerAdapter(new QueueListener()));
    messageListenerContainer.setDestinationName(queueName);
    return messageListenerContainer;
}
I also need the possibility to check the status of this task executor, for example the number of scheduled tasks.
Is it a good idea to use Spring's SyncTaskExecutor for this purpose? If so, could you please show an example of how it can be used with Spring Boot?
EDIT:
Now that you have revealed your messaging technology and its Spring configuration, the simplest way for you is to configure a SyncTaskExecutor (or Executors.newFixedThreadPool(1) would also do the job) as the executor for your DefaultMessageListenerContainer, via its setTaskExecutor method.
You can register the task executor as a separate bean (via the @Bean annotation) and autowire it into the defaultMessageListenerContainer() method (just add a TaskExecutor parameter).
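A minimal sketch of that wiring, reusing the bean names from the question (illustrative only, not a drop-in implementation; the executor bean name is made up):
@Bean
public TaskExecutor queueTaskExecutor() {
    // single-threaded executor => messages are processed one by one
    return new ConcurrentTaskExecutor(Executors.newFixedThreadPool(1));
}

@Bean
public DefaultMessageListenerContainer defaultMessageListenerContainer(TaskExecutor queueTaskExecutor) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    container.setMessageListener(new MessageListenerAdapter(new QueueListener()));
    container.setDestinationName(queueName);
    container.setTaskExecutor(queueTaskExecutor);
    return container;
}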
The answer below is relevant for generic JMS messaging. It was written before the AWS SQS usage was revealed in the question:
You didn't mention which messaging technology you are using, therefore I assume JMS.
If synchronous execution is a requirement, I believe you can't use native JMS listeners (you need to avoid SimpleJmsListenerContainerFactory or SimpleMessageListenerContainer).
Instead I would suggest using the @JmsListener annotation with a DefaultJmsListenerContainerFactory (this uses long polling instead of native JMS listeners) and configuring a SyncTaskExecutor (or Executors.newFixedThreadPool(1) would also do the job) as the executor for that container factory: DefaultJmsListenerContainerFactory.setTaskExecutor().
This is a simple Spring Boot JMS example with a DefaultJmsListenerContainerFactory configured. You just need to plug in a suitable task executor.
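For illustration only, the factory part of such a setup could look roughly like this (a sketch under the assumptions above; names are placeholders, not taken from the linked example):
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // single-threaded executor so listener methods run one by one
    factory.setTaskExecutor(Executors.newFixedThreadPool(1));
    factory.setConcurrency("1-1");
    return factory;
}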
