I am using Apache ActiveMQ with Spring Boot and I want to migrate to Apache Artemis to get better support for clusters and nodes.
At the moment I mainly use the concept of Virtual Topics with JMS, like this:
@JmsListener(destination = "Consumer.A.VirtualTopic.simple")
public void receiveMessage() {
    ...
}
...
public void send(JmsTemplate template) {
    template.convertAndSend("VirtualTopic.simple", "Hello world!");
}
I have read that Artemis changed its address model to addresses, queues and routing types instead of queues, topics and virtual topics as in ActiveMQ.
I have read a lot more, but I don't think I have understood how to migrate correctly. I tried to do it the same way as above, so I imported the Artemis JMS client from Maven and wanted to use it like before, but with an FQQN (Fully Qualified Queue Name) or the virtual topic wildcard you can read about in some sources. But somehow it does not work properly.
My questions are:
- How can I migrate Virtual Topics? Did I get it right with FQQNs and those virtual topic wildcards?
- How can I specify the routing types anycast and multicast for the code examples above? (In the online examples, addresses and queues are hardcoded in the server's broker.xml, but I want to create them on the fly from the application.)
- How can I use it with the OpenWire protocol, and how does the application know which protocol it uses? Does it only depend on the Artemis port I am using, i.e. 61616 for OpenWire?
Can anyone help clarify my thoughts?
UPDATE:
Some further questions.
1) I always read something like "a default 5.x consumer". Is it expected that this gets mixed with Artemis, i.e. you keep all of those naming conventions, combine the address and the VirtualTopic name into an FQQN, and just change the dependencies to Artemis?
2) I've already tried "virtualTopicConsumerWildcards" with "import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;" and with "import org.apache.activemq.ActiveMQConnectionFactory;", but it only made a difference in the second case.
3) I also tried to allow only OpenWire as protocol in the acceptor, but in this case (and with "import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;") I get the following error when starting my application: "2020-03-30 11:41:19,504 ERROR [org.apache.activemq.artemis.core.server] AMQ224096: Error setting up connection from /127.0.0.1:54201 to /127.0.0.1:61616; protocol CORE not found in map: [OPENWIRE]".
4) Do I put e.g. multicast://VirtualTopic.simple as the destination name in template.convertAndSend(...)?
I tried template.setPubSubDomain(true) for the multicast routing type and left it unset for anycast, and this works. But is it a good way to do it?
5) Do you maybe know how I can "tell" my Spring Boot application with template.convertAndSend(...) to use OpenWire?
UPDATE2:
Shared durable subscriptions
#JmsListener(destination = "VirtualTopic.test", id = "c1", subscription = "Consumer.A.VirtualTopic.test", containerFactory = "queueConnectionFactory")
public void receive1(String m) {
}
#JmsListener(destination = "VirtualTopic.test", id = "c2", subscription = "Consumer.B.VirtualTopic.test", containerFactory = "queueConnectionFactory")
public void receive2(String m) {
}
@Bean
public DefaultJmsListenerContainerFactory queueConnectionFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setClientId("brokerClientId");
factory.setSubscriptionDurable(true);
factory.setSubscriptionShared(true);
return factory;
}
Errors:
2020-04-17 11:23:44.485 WARN 7900 --- [enerContainer-3] o.s.j.l.DefaultMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'VirtualTopic.test' - trying to recover. Cause: org.apache.activemq.ActiveMQSession.createSharedDurableConsumer(Ljavax/jms/Topic;Ljava/lang/String;Ljava/lang/String;)Ljavax/jms/MessageConsumer;
2020-04-17 11:23:44.514 ERROR 7900 --- [enerContainer-3] o.s.j.l.DefaultMessageListenerContainer : Could not refresh JMS Connection for destination 'VirtualTopic.test' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=unlimited}. Cause: Broker: d1 - Client: brokerClientId already connected from /127.0.0.1:59979
What am I doing wrong here?
The idea behind virtual topics is that producers send to a topic in the usual JMS way, while a consumer can consume from a physical queue for a logical topic subscription, allowing many consumers on many machines and threads to balance the load.
Artemis uses a queue-per-topic-subscriber model internally, and it is possible to address the subscription queue directly using its Fully Qualified Queue Name (FQQN).
For example, the default 5.x consumer destination Consumer.A.VirtualTopic.simple (subscription A on topic VirtualTopic.simple) would be replaced with an Artemis FQQN composed of the address and the queue: VirtualTopic.simple::Consumer.A.VirtualTopic.simple.
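For illustration, a consumer could then subscribe straight to that subscription queue; a minimal sketch (the method name is illustrative, and it assumes a connection factory that understands FQQN destinations, e.g. the Artemis Core JMS client):
// Sketch: consuming directly from the subscription queue via its FQQN
// (address VirtualTopic.simple, queue Consumer.A.VirtualTopic.simple).
@JmsListener(destination = "VirtualTopic.simple::Consumer.A.VirtualTopic.simple")
public void receiveFromFqqn(String message) {
    System.out.println("Received via FQQN: " + message);
}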
However, Artemis supports a virtual topic wildcard filter mechanism that automatically converts the consumer destination into the corresponding FQQN. To enable this mechanism, the configuration string property
virtualTopicConsumerWildcards can be used. It has two parts separated by a ;, i.e. the default 5.x virtual topic consumer prefix Consumer.*. would require a virtualTopicConsumerWildcards filter of Consumer.*.>;2.
Artemis is configured by default to auto-create destinations requested by clients. Clients can specify a special prefix when connecting to an address to indicate which routing type to use. These prefixes are enabled by adding the configuration string properties anycastPrefix and multicastPrefix to an acceptor; you can find more details at Using Prefixes to Determine Routing Type. For example, after adding anycastPrefix=anycast://;multicastPrefix=multicast:// to the acceptor, a client that needs to send a message to only one of the ANYCAST queues should use the destination anycast://VirtualTopic.simple, while a client that needs to send a message to the MULTICAST address should use the destination multicast://VirtualTopic.simple.
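Regarding creating everything on the fly: auto-creation can also be tuned per address via the auto-create-addresses and auto-create-queues address settings in broker.xml (both default to true). A sketch, placed inside the <address-settings> element, where the VirtualTopic.# match is only illustrative:
<address-setting match="VirtualTopic.#">
   <auto-create-addresses>true</auto-create-addresses>
   <auto-create-queues>true</auto-create-queues>
</address-setting>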
Artemis acceptors support using a single port for all protocols; they automatically detect which protocol is being used (CORE, AMQP, STOMP or OPENWIRE), but it is possible to limit the supported protocols with the protocols parameter.
The following acceptor enables the anycast prefix anycast://, the multicast prefix multicast:// and the virtual topic consumer wildcards, disabling all protocols except OPENWIRE on the endpoint localhost:61616.
<acceptor name="artemis">tcp://localhost:61616?anycastPrefix=anycast://;multicastPrefix=multicast://;virtualTopicConsumerWildcards=Consumer.*.%3E%3B2;protocols=OPENWIRE</acceptor>
UPDATE:
The following example application connects to an Artemis instance with the previous acceptor using the OpenWire protocol.
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.core.JmsTemplate;
@SpringBootApplication
@EnableJms
public class Application {
private final String BROKER_URL = "tcp://localhost:61616";
private final String BROKER_USERNAME = "admin";
private final String BROKER_PASSWORD = "admin";
public static void main(String[] args) throws Exception {
final ConfigurableApplicationContext context = SpringApplication.run(Application.class);
System.out.println("********************* Sending message...");
JmsTemplate jmsTemplate = context.getBean("jmsTemplate", JmsTemplate.class);
JmsTemplate jmsTemplateAnycast = context.getBean("jmsTemplateAnycast", JmsTemplate.class);
JmsTemplate jmsTemplateMulticast = context.getBean("jmsTemplateMulticast", JmsTemplate.class);
jmsTemplateAnycast.convertAndSend("VirtualTopic.simple", "Hello world anycast!");
jmsTemplate.convertAndSend("anycast://VirtualTopic.simple", "Hello world anycast using prefix!");
jmsTemplateMulticast.convertAndSend("VirtualTopic.simple", "Hello world multicast!");
jmsTemplate.convertAndSend("multicast://VirtualTopic.simple", "Hello world multicast using prefix!");
System.out.print("Press any key to close the context");
System.in.read();
context.close();
}
@Bean
public ActiveMQConnectionFactory connectionFactory(){
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL(BROKER_URL);
connectionFactory.setUserName(BROKER_USERNAME);
connectionFactory.setPassword(BROKER_PASSWORD);
return connectionFactory;
}
@Bean
public JmsTemplate jmsTemplate(){
JmsTemplate template = new JmsTemplate();
template.setConnectionFactory(connectionFactory());
return template;
}
@Bean
public JmsTemplate jmsTemplateAnycast(){
JmsTemplate template = new JmsTemplate();
template.setPubSubDomain(false);
template.setConnectionFactory(connectionFactory());
return template;
}
@Bean
public JmsTemplate jmsTemplateMulticast(){
JmsTemplate template = new JmsTemplate();
template.setPubSubDomain(true);
template.setConnectionFactory(connectionFactory());
return template;
}
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setConcurrency("1-1");
return factory;
}
#JmsListener(destination = "Consumer.A.VirtualTopic.simple")
public void receiveMessageFromA(String message) {
System.out.println("*********************** MESSAGE RECEIVED FROM A: " + message);
}
#JmsListener(destination = "Consumer.B.VirtualTopic.simple")
public void receiveMessageFromB(String message) {
System.out.println("*********************** MESSAGE RECEIVED FROM B: " + message);
}
#JmsListener(destination = "VirtualTopic.simple")
public void receiveMessageFromTopic(String message) {
System.out.println("*********************** MESSAGE RECEIVED FROM TOPIC: " + message);
}
}
Related
We have a Spring Boot application which consumes messages from IBM MQ, does some transformation, and publishes the result to a Kafka topic. We use https://spring.io/projects/spring-kafka for this. I am aware that Kafka does not support XA; however, in the documentation I found some inputs about using a ChainedKafkaTransactionManager to chain multiple transaction managers and synchronise the transactions. The same documentation also provides an example of how to synchronise Kafka and a database while reading messages from Kafka and storing them in the database.
I followed the same example in my use case and chained the JmsTransactionManager with the KafkaTransactionManager under the umbrella of a ChainedKafkaTransactionManager. The bean definitions follow below:
@Bean({"mqListenerContainerFactory"})
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(this.connectionFactory());
factory.setTransactionManager(this.jmsTransactionManager());
return factory;
}
@Bean
public JmsTransactionManager jmsTransactionManager() {
return new JmsTransactionManager(this.connectionFactory());
}
#Bean("chainedKafkaTransactionManager")
public ChainedKafkaTransactionManager<?, ?> chainedKafkaTransactionManager(
JmsTransactionManager jmsTransactionManager, KafkaTransactionManager kafkaTransactionManager) {
return new ChainedKafkaTransactionManager<>(kafkaTransactionManager, jmsTransactionManager);
}
@Transactional(transactionManager = "chainedKafkaTransactionManager", rollbackFor = Throwable.class)
@JmsListener(destination = "${myApp.sourceQueue}", containerFactory = "mqListenerContainerFactory")
public void receiveMessage(@Headers Map<String, Object> jmsHeaders, String message) {
// Processing the message here then publishing it to Kafka using KafkaTemplate
kafkaTemplate.send(sourceTopic,transformedMessage);
// Then throw an exception just to test the transaction behaviour
throw new RuntimeException("Not good Pal!");
}
When running the application, what happens is that the message keeps getting rolled back onto the MQ queue, but messages keep accumulating in the Kafka topic, which tells me that the kafkaTemplate interaction does not get rolled back.
If I understand the documentation correctly, this should not be the case: "If a transaction is active, any KafkaTemplate operations performed within the scope of the transaction use the transaction’s Producer."
In our application.yaml we configured the Kafka producer to use transactions by setting spring.kafka.producer.transaction-id-prefix.
The question is: what am I missing here, and how should I fix it?
Thank you in advance for your inputs.
Consumers can see uncommitted records by default; set the isolation.level consumer property to read_committed to avoid receiving records from rolled-back transactions.
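The property is set on the consumer side; a minimal sketch of a consumer factory bean to place in a @Configuration class (the bootstrap servers, group id and String deserializers are illustrative assumptions; in a Spring Boot application the equivalent is spring.kafka.consumer.properties.isolation.level=read_committed):
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Bean
public ConsumerFactory<String, String> readCommittedConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // Hide records written by transactions that were rolled back
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    return new DefaultKafkaConsumerFactory<>(props);
}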
I wanted to configure an exclusive consumer for ActiveMQ with Spring Boot.
Configuring it in plain Java is easy:
queue = new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true");
consumer = session.createConsumer(queue);
But with Spring Boot, the listener is configured as below:
#JmsListener(destination = "TEST.QUEUE", containerFactory = "myFactory")
public void receiveMessage(Object message) throws Exception {
......
}
Now, how do I make this an exclusive consumer? Does the following work?
#JmsListener(destination = "TEST.QUEUE?consumer.exclusive=true", containerFactory = "myFactory")
public void receiveMessage(Object message) throws Exception {
......
}
Yes, it works this way.
Just set a breakpoint on the org.apache.activemq.command.ActiveMQQueue constructor and run your application in debug mode.
You will see that Spring Boot is calling
new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true"), which corresponds to the official ActiveMQ documentation:
https://activemq.apache.org/exclusive-consumer
Moreover, you can go to the ActiveMQ admin console and browse the active consumers of this queue: you will now see that the exclusive flag is set to true for your consumer.
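For completeness: the containerFactory = "myFactory" referenced in the question is not shown there; a minimal sketch of such a bean, assuming the auto-configured ConnectionFactory is simply passed through, might be:
@Bean
public DefaultJmsListenerContainerFactory myFactory(javax.jms.ConnectionFactory connectionFactory) {
    // Plain container factory; the exclusive-consumer option lives in the destination string itself
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    return factory;
}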
I've been developing a project using Spring Framework 4. I'm trying to create a simple TCP client via the spring-integration-ip library. I've adjusted all the configurations:
applicationContext.xml
...
<int:channel id="tcpChannel" />
<int-ip:tcp-outbound-channel-adapter id="outboundClient"
channel="tcpChannel"
connection-factory="tcpConnectionFactory"/>
...
bean configuration:
@Configuration
public class MyConfiguration {
@Bean
public AbstractClientConnectionFactory tcpConnectionFactory() {
return new TcpNetClientConnectionFactory("localhost", 2345);
}
}
I've read all the documentation about Spring TCP here.
I guess I must use a tcp-outbound-channel-adapter or a gateway to send messages, but I wonder how to use it and what method I should invoke. I'm not supposed to receive any messages from the server.
I found the solution. I didn't need a gateway; Spring messaging gateways are designed for the request-response scenario. So the only thing I need to do is send the message via the channel. Perhaps there are better solutions.
import javax.inject.Inject;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
public class MyOwnService {
    // Inject the "tcpChannel" defined in applicationContext.xml
    @Inject
    @Qualifier("tcpChannel")
    private MessageChannel channel;
    public void someMethod(String message) {
        Message<String> m = MessageBuilder.withPayload(message).build();
        channel.send(m);
    }
}
I have an ElastiCache setup with one master and two slaves. I am still not sure how to pass a list of master/slave RedisURIs to construct a StatefulRedisMasterSlaveConnection for LettuceConnectionFactory. I only see support for standaloneConfiguration with a single host and port.
LettuceClientConfiguration configuration = LettuceTestClientConfiguration.builder().readFrom(ReadFrom.SLAVE).build();
LettuceConnectionFactory factory = new LettuceConnectionFactory(SettingsUtils.standaloneConfiguration(),configuration);
I know there is a similar question Configuring Spring Data Redis with Lettuce for Redis master/slave
But I don't think it works for an ElastiCache master/slave setup, as the above code would currently try to use MasterSlaveTopologyProvider to discover the slave IPs. However, the slave IP addresses are not reachable. So what's the right way to configure Spring Data Redis to make it compatible with a master/slave ElastiCache? It seems to me LettuceConnectionFactory needs to take in a list of endpoints and use StaticMasterSlaveTopologyProvider in order to work.
There have been further improvements in AWS and Lettuce making it easier to support Master/Slave.
One recent improvement in AWS is the launch of reader endpoints for Redis, which distribute the load among replicas: Amazon ElastiCache launches reader endpoints for Redis.
Hence the best way to connect to Redis using Spring Data Redis is to use the primary endpoint (master) and the reader endpoint (replicas) of the Redis cluster.
You can get both of them from the AWS console. Here is some sample code:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.SLAVE_PREFERRED)
            .build();
    RedisStaticMasterReplicaConfiguration redisStaticMasterReplicaConfiguration =
            new RedisStaticMasterReplicaConfiguration(REDIS_CLUSTER_PRIMARY_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.addNode(REDIS_CLUSTER_READER_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.setPassword(redisPassword);
    return new LettuceConnectionFactory(redisStaticMasterReplicaConfiguration, clientConfig);
}
Right now, static Master/Slave with provided endpoints is not supported by Spring Data Redis. I filed a ticket to add support for that.
You can implement this functionality yourself by subclassing LettuceConnectionFactory and creating your own configuration and connection provider.
You would start with something like:
public static class MyLettuceConnectionFactory extends LettuceConnectionFactory {
private final MyMasterSlaveConfiguration configuration;
public MyLettuceConnectionFactory(MyMasterSlaveConfiguration standaloneConfig,
LettuceClientConfiguration clientConfig) {
super(standaloneConfig, clientConfig);
this.configuration = standaloneConfig;
}
@Override
protected LettuceConnectionProvider doCreateConnectionProvider(AbstractRedisClient client, RedisCodec<?, ?> codec) {
return new ElasticacheConnectionProvider((RedisClient) client, codec, getClientConfiguration().getReadFrom(),
this.configuration);
}
}
static class MyMasterSlaveConfiguration extends RedisStandaloneConfiguration {
private final List<RedisURI> endpoints;
public MyMasterSlaveConfiguration(List<RedisURI> endpoints) {
this.endpoints = endpoints;
}
public List<RedisURI> getEndpoints() {
return endpoints;
}
}
You can find all the code in this gist; I'm not posting it all here as it would be a wall of code.
@Component
public class OrderItemListener {
@Autowired
private StoreService storeService;
@JmsListener(destination = "order.item.queue")
public void receiveOrder(String message) {
//processing
}
}
This is my POJO class for receiving messages. I can send messages to it through JConsole; however, what if I have another application that needs to send a message to this listener/queue? How would I identify the address? This is automatically configured through Spring Boot; I only added the ActiveMQ jar.
@Autowired
private JmsTemplate template;
...
this.template.convertAndSend("order.item.queue", "foo");
If this is running in a different JVM you will need a stand-alone broker and set spring.activemq.broker-url=tcp://somehost:61616.
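For example, in the sending application's application.properties, assuming the Spring Boot ActiveMQ starter is on the classpath (host and credentials are placeholders):
spring.activemq.broker-url=tcp://somehost:61616
spring.activemq.user=admin
spring.activemq.password=admin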