Spring JMS defining multiple destinations - java

The spring documentation says:
Destinations, like ConnectionFactory instances, are JMS administered
objects that you can store and retrieve in JNDI. When configuring a
Spring application context, you can use the JNDI JndiObjectFactoryBean
factory class or <jee:jndi-lookup> to perform dependency injection on
your object's references to JMS destinations.
However, this strategy
is often cumbersome if there are a large number of destinations in the
application or if there are advanced destination management features
unique to the JMS provider.
The question is:
How to proceed when I have a large number of destinations in my application?
Using the strategy mentioned above I have to define:
JndiTemplate
JndiDestinationResolver
JndiObjectFactoryBean
CachingConnectionFactory
JmsTemplate
For EACH destination.
So if I have 20 queues, I'll have to define 100 such beans...

The comment in the Spring documentation distinguishes between using JNDI for destination endpoints and not using it. So in your case: are your destinations stored in JNDI? If you don't have to use it, forget about it. Only load your ConnectionFactory (one object) from JNDI, or simply create it from scratch.
And then you don't have to assign one Spring bean to each destination. You could have just one Java 'consumer bean' that uses a JmsTemplate. I assume your connection factory is the same, so that's only one new JmsTemplate(connectionFactory). Then call createSession/createConsumer, etc. as needed.
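As a minimal sketch of that idea (bean and queue names below are made up, and this assumes the JmsTemplate is already wired with your single ConnectionFactory), one sender bean can serve every destination by passing the destination name per call:

```java
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class QueueSender {

    private final JmsTemplate jmsTemplate;

    public QueueSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(String queueName, String payload) {
        // One template, any number of destinations: the destination
        // name is resolved per call, so no per-queue beans are needed.
        jmsTemplate.convertAndSend(queueName, payload);
    }
}
```

Usage would then be e.g. queueSender.send("queue.orders", "hello") for any of the 20 queues.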

You can just use a single JmsTemplate, CachingConnectionFactory and JndiDestinationResolver.
The whole point of a DestinationResolver is to lazily resolve destinations for you. Use the send or convertAndSend variants that take a destination name; the destinationName is passed on to the DestinationResolver to obtain the actual Destination.
The only drawback is that you need to use the JNDI name as the destinationName.
@Bean
public JndiDestinationResolver jndiDestinationResolver() {
    return new JndiDestinationResolver();
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate jmsTemplate = new JmsTemplate();
    jmsTemplate.setDestinationResolver(jndiDestinationResolver());
    jmsTemplate.setConnectionFactory(connectionFactory());
    return jmsTemplate;
}
With this you can dynamically resolve the destination from JNDI (convertAndSend is the variant that takes a plain payload; send takes a MessageCreator):
jmsTemplate.convertAndSend("jms/queue1", "This is a message");
jmsTemplate.convertAndSend("jms/queue3", "This is another message");

Related

Spring default transactionManager not found when using JmsTransactionManager

I have a Spring Boot app where I use JMS together with a database. I'm trying to configure a JmsTransactionManager to use alongside the default TransactionManager (for JPA). I defined the bean in the @SpringBootApplication class (which means it has @Configuration and @EnableTransactionManagement):
@Bean(name = "jmsTransactionManager")
public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
    JmsTransactionManager jmsTransactionManager = new JmsTransactionManager();
    jmsTransactionManager.setConnectionFactory(connectionFactory);
    return jmsTransactionManager;
}
That's the only bean I configure myself for JMS, because Spring Boot does the rest of the configuration automatically; I just have properties in application.yaml, so I assume the ConnectionFactory will be autowired. And I want to use it like this:
@Transactional(transactionManager = "jmsTransactionManager", propagation = Propagation.REQUIRES_NEW)
void doWork() {
    sendJms();
    saveDb();
}

@Transactional // uses the default JPA transaction manager
void saveDb() {
    ...
}
So the logic is that I send to JMS first, then save something to the DB, so I need two separate transactions, but I want to close the DB transaction before the JMS transaction. Maybe it's not correct to make calls like this in such a situation, but I don't know how else to do it using declarative transaction management. The problem is that when I define the JmsTransactionManager, the default one that works with the DB stops working; without the JmsTransactionManager, DB transactions work:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'transactionManager' available: No matching TransactionManager bean found for qualifier 'transactionManager' - neither qualifier
match nor bean name match!
Am I missing something? I have spring-data-jpa in the pom, so the default transactionManager should be configured by Spring Boot, but it can't be found. Why? Unfortunately I didn't find an answer on Stack Overflow on how to do something like this.
I am presuming that you are not using two-phase-commit (XA) transactions. Essentially, in order to chain transactions between multiple transactional resources, both your JMS ConnectionFactory and DB DataSource have to be XA resource implementations, and you have to use a proper JTA TransactionManager. While it is not a particularly hard thing to do, JTA is usually skipped by the majority of Java programmers: in the traditional setup (Spring code deployed in a Java EE server), JTA "just works" in the background and is never directly accessed by the programmer. In a standalone Boot application you have to explicitly enable this functionality by providing a proper JTA transaction manager and using XA implementations of your resources.
See: https://docs.spring.io/spring-boot/docs/2.0.x/reference/html/boot-features-jta.html
In short, a JmsTransactionManager plus a separate DB transaction manager won't do; you need an instance of JtaTransactionManager.
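Note also why the default manager "disappeared": Boot's transaction manager auto-configuration backs off as soon as any PlatformTransactionManager bean is defined. If separate (non-chained) transactions are acceptable, a sketch of declaring both managers explicitly may help; the bean names and the @Primary choice here are assumptions, not from the original question:

```java
import javax.jms.ConnectionFactory;
import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class TxConfig {

    // Keeps the default name "transactionManager" and is marked @Primary,
    // so plain @Transactional methods still find the JPA manager.
    @Bean(name = "transactionManager")
    @Primary
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }

    // Selected explicitly via
    // @Transactional(transactionManager = "jmsTransactionManager").
    @Bean(name = "jmsTransactionManager")
    public PlatformTransactionManager jmsTransactionManager(ConnectionFactory cf) {
        return new JmsTransactionManager(cf);
    }
}
```

This restores the JPA manager alongside the JMS one, but unlike JTA it does not make the two resources commit atomically.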

Spring-Kafka usage of ConcurrentKafkaListenerContainerFactory for more than one @KafkaListener

I am implementing consumption of messages from Kafka topics using the Spring-Kafka framework. I am trying to understand the usage of the ConcurrentKafkaListenerContainerFactory that I am creating for my Kafka listener. The @KafkaListener works fine and as expected; however, in my scenario I have more than one independent listener, each listening to its own topic. I would like to know if I can reuse the ConcurrentKafkaListenerContainerFactory among all my listeners, or do I have to create one container factory per @KafkaListener? Is there a way of having a generic container factory that can be shared among all @KafkaListeners?
Thank you.
Yes; that's the whole point - it's a factory for listener containers; you typically only need the one factory that Boot auto-configures.
If you need different properties (e.g. deserializers) for a listener, recent versions (since spring-kafka 2.2.4) allow you to override consumer properties on the annotation.
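For illustration, a sketch of that per-listener override using the annotation's properties attribute (the topic, group id and property values here are made up):

```java
// Overrides individual consumer properties for just this listener;
// everything else still comes from the shared Boot-configured factory.
@KafkaListener(topics = "orders", groupId = "orders-group",
        properties = {
                "max.poll.records=10",
                "value.deserializer=org.apache.kafka.common.serialization.StringDeserializer"
        })
public void listenToOrders(String message) {
    System.out.println("received: " + message);
}
```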
To override other properties, e.g. container properties, for individual listeners, add a listener container customizer to the factory.
@Component
class ContainerFactoryCustomizer {

    ContainerFactoryCustomizer(AbstractKafkaListenerContainerFactory<?, ?, ?> factory) {
        factory.setContainerCustomizer(container -> {
            String groupId = container.getContainerProperties().getGroupId();
            if (groupId.equals("foo")) {
                container.getContainerProperties().set...
            }
            else {
                container.getContainerProperties().set...
            }
        });
    }
}
As you can see, you can tell which container is being created when the customizer is called by accessing the groupId container property.
You might want to use two factories if your listeners have vastly different configurations, but then you lose Boot's auto-configuration features (at least for the factory).

Spring JMS Use Point-to-point and Topic in the same application

We are currently introducing ActiveMQ into our existing application which was running on a different Queueing system. Spring JMS is used to make use of the existing integration within the Spring framework.
Most of our applications use point-to-point (queue) communication, with the exception of one. It needs to be able to listen to the topic created by another producing application while publishing to multiple queues at the same time.
This means the application needs to support both topics and queues. However, when setting the global property
spring:
  jms:
    pub-sub-domain: true
the setting is global and all queue subscribers are immediately subscribing to topics, which we can see in the ActiveMQ web interface.
Is there a way to configure the application to support both topics and queues at the same time?
The Boot property is used to configure the default container factory used by @JmsListener methods, as well as to configure the JmsTemplate.
Simply override Boot's default container factory...
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(
        DefaultJmsListenerContainerFactoryConfigurer configurer,
        ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    return factory;
}
and then add a second one
@Bean
public DefaultJmsListenerContainerFactory jmsTopicListenerContainerFactory(
        DefaultJmsListenerContainerFactoryConfigurer configurer,
        ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true); // override the Boot property
    return factory;
}
Then refer to the alternate factory in the #JmsListener for the topic.
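A sketch of what those listeners could look like (destination and method names are illustrative):

```java
// Queue listener: uses the default factory (pub-sub-domain = false).
@JmsListener(destination = "orders.queue")
public void onQueueMessage(String message) {
    System.out.println("queue: " + message);
}

// Topic listener: explicitly selects the pub/sub factory by bean name.
@JmsListener(destination = "events.topic",
        containerFactory = "jmsTopicListenerContainerFactory")
public void onTopicMessage(String message) {
    System.out.println("topic: " + message);
}
```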
Alternatively, if you don't have listeners for both types, set the property to true, but override Boot's JmsTemplate configuration.

How to set Spring Data Cassandra keyspace dynamically?

We're using Spring Boot 1.5.10 with Spring Data for Apache Cassandra and that's all working well.
We've had a new requirement coming along where we need to connect to a different keyspace while the service is up and running.
Through the use of Spring Cloud Config Server, we can easily set the value of spring.data.cassandra.keyspace-name; however, we're not certain if there's a way to dynamically switch (force) the service to use this new keyspace without having to restart it first?
Any ideas or suggestions?
Using @RefreshScope with properties/repositories doesn't work as the keyspace is bound to the Cassandra Session bean.
Using Spring Data Cassandra 1.5 with Spring Boot 1.5 you have at least two options:
Declare a @RefreshScope CassandraSessionFactoryBean, see also CassandraDataAutoConfiguration. This will interrupt all Cassandra operations upon refresh and re-create all dependent beans.
Listen to RefreshScopeRefreshedEvent and change the keyspace via USE my-new-keyspace;. This approach is less invasive and doesn't interrupt running queries. You'd basically use an event listener.
@Component
class MySessionRefresh {

    private final Session session;
    private final Environment environment;

    // constructors omitted for brevity

    @EventListener
    @Order(Ordered.LOWEST_PRECEDENCE)
    public void handle(RefreshScopeRefreshedEvent event) {
        String keyspace = environment.getProperty("spring.data.cassandra.keyspace-name");
        session.execute("USE " + keyspace + ";");
    }
}
With Spring Data Cassandra 2, we introduced the SessionFactory abstraction providing AbstractRoutingSessionFactory for code-controlled routing of CQL/session calls.
Yes, you can use the @RefreshScope annotation on the bean(s) holding the spring.data.cassandra.keyspace-name value.
After changing the config value through Spring Cloud Config Server, you have to issue a POST on the /refresh endpoint of your application.
From the Spring cloud documentation:
A Spring #Bean that is marked as #RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time someone borrows a connection from the pool he gets one with the new URL.
From the RefreshScope class javadoc:
A Scope implementation that allows for beans to be refreshed dynamically at runtime (see refresh(String) and refreshAll()). If a bean is refreshed then the next time the bean is accessed (i.e. a method is executed) a new instance is created. All lifecycle methods are applied to the bean instances, so any destruction callbacks that were registered in the bean factory are called when it is refreshed, and then the initialization callbacks are invoked as normal when the new instance is created. A new bean instance is created from the original bean definition, so any externalized content (property placeholders or expressions in string literals) is re-evaluated when it is created.
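For illustration, a minimal refreshable holder bean along those lines (the class name KeyspaceProperties is made up); after a POST to /refresh, the next access re-creates it with the new property value:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@Component
@RefreshScope
public class KeyspaceProperties {

    // Re-injected when the bean is re-created after a /refresh.
    @Value("${spring.data.cassandra.keyspace-name}")
    private String keyspaceName;

    public String getKeyspaceName() {
        return keyspaceName;
    }
}
```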

Camel RabbitMQ Default ConnectionFactory?

I'm using Camel to consume and produce messages in RabbitMQ. Also, I'm working with Spring Boot, so I have created a ConnectionFactory bean with all the configuration I want.
That works great, but I have to declare the name of the bean in every endpoint URI I create.
Is there a way to set up Camel to use this specific bean by default?
According to these source lines I don't think it is achievable.
If you name your bean rabbitConnectionFactory then you don't have to specify this in every endpoint.
For example:
@Bean
public ConnectionFactory rabbitConnectionFactory() {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    factory.setPort(5672);
    factory.setUsername("guest");
    factory.setPassword("guest");
    return factory;
}
And after that your rabbitmq URI is as simple as: public static final String RABBIT_URI = "rabbitmq:%s?queue=%s&routingKey=%s&autoDelete=false";
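To show how that format string expands (the exchange, queue and routing-key names below are made up), filling in the placeholders with String.format:

```java
public class RabbitUriDemo {

    public static final String RABBIT_URI = "rabbitmq:%s?queue=%s&routingKey=%s&autoDelete=false";

    // Builds a concrete Camel endpoint URI from the template.
    public static String endpoint(String exchange, String queue, String routingKey) {
        return String.format(RABBIT_URI, exchange, queue, routingKey);
    }

    public static void main(String[] args) {
        System.out.println(endpoint("orders.exchange", "orders.queue", "orders.key"));
        // prints rabbitmq:orders.exchange?queue=orders.queue&routingKey=orders.key&autoDelete=false
    }
}
```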
