I have the following configurations for Spring and RabbitMQ:
Spring Boot : 1.2.7
RabbitMQ : 3.5.4
I am using the following Spring beans to create a STOMP endpoint (my config class extends AbstractWebSocketMessageBrokerConfigurer):
@Bean
public TopicExchange streamingExchange(@Qualifier("admin") final RabbitAdmin rabbitAdmin) {
    TopicExchange topicExchange = new TopicExchange(exchangeName, true, false);
    topicExchange.setAdminsThatShouldDeclare(rabbitAdmin);
    return topicExchange;
}
@Override
public void configureMessageBroker(final MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/my_stream", "/test").setRelayHost(host)
            .setSystemLogin(username).setSystemPasscode(password).setClientLogin(username)
            .setClientPasscode(password);
}

@Override
public void registerStompEndpoints(final StompEndpointRegistry registry) {
    registry.addEndpoint("/test").setAllowedOrigins("*").withSockJS();
}
Now, when a client connects to this endpoint, a temporary queue gets created and response data is streamed through it. If the client disconnects, the queue gets deleted and messages are lost.
To prevent this, I want to create durable queues (these queues currently have durable set to false and auto-delete set to true), or failing that, set some expiration on these queues (e.g. 1 hour). From the RabbitMQ documentation, it seems we can pass these values in headers; however, that is only applicable from version 3.6.0 onwards, and as we have 3.5.4, it's not an option.
Is there any other way to configure this? (Another approach would be to add some kind of listener for connect requests and configure the queue parameters programmatically? I am not sure whether this is feasible as I don't know much about the Spring RabbitMQ STOMP plugin.)
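For reference, a rough sketch of the listener idea (SessionConnectEvent and StompHeaderAccessor are Spring's APIs; whether queue parameters can be influenced from here is exactly what I'm unsure about):

import org.springframework.context.ApplicationListener;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionConnectEvent;

// Hypothetical sketch: observe STOMP CONNECT frames on the server side.
@Component
public class StompConnectListener implements ApplicationListener<SessionConnectEvent> {

    @Override
    public void onApplicationEvent(SessionConnectEvent event) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(event.getMessage());
        // The session and its headers can be inspected here, but the temporary
        // queue itself is created by the broker when the relay forwards the
        // SUBSCRIBE frame, so queue arguments cannot be set from this hook.
        System.out.println("STOMP CONNECT from session " + accessor.getSessionId());
    }
}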
Wondering if you have tried declaring the queue as durable using the rabbitmqadmin tool?
rabbitmqadmin declare queue name=your-queue durable=true
The admin tool can be downloaded from https://www.rabbitmq.com/management-cli.html
Can we use Spring Integration to configure directory polling for files such that:
With 2 servers configured, polling occurs on one server and the corresponding processing gets distributed between both servers.
Also, can we switch the polling between the servers at runtime?
Edit:
Tried configuring a JDBC metadata store and running the two instances separately; able to poll and process, but intermittently getting DeadlockLoserDataAccessException.
Configuration below:
@Bean
public MessageChannel fileInputChannel() {
    return new DirectChannel();
}

@Bean(PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setMaxMessagesPerPoll(-1);
    pollerMetadata.setTrigger(new PeriodicTrigger(1000));
    return pollerMetadata;
}
@Bean
@InboundChannelAdapter(value = "fileInputChannel")
public MessageSource<File> fileReadingMessageSource(ConcurrentMetadataStore metadataStore) {
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File("Mylocalpath"));
    // The accept-once filter needs the shared metadata store (see the bean
    // sketched below) so both servers coordinate which files were processed.
    FileSystemPersistentAcceptOnceFileListFilter acceptOnce =
            new FileSystemPersistentAcceptOnceFileListFilter(metadataStore, "file-");
    ChainFileListFilter<File> chainFilter = new ChainFileListFilter<>();
    chainFilter.addFilter(new RegexPatternFileListFilter(".*\\.txt"));
    chainFilter.addFilter(acceptOnce);
    source.setFilter(chainFilter);
    source.setUseWatchService(true);
    source.setWatchEvents(FileReadingMessageSource.WatchEventType.CREATE,
            FileReadingMessageSource.WatchEventType.MODIFY);
    return source;
}
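For completeness, a sketch of the metadata store bean referenced above (JdbcMetadataStore is the Spring Integration class; the shared DataSource is assumed to be configured elsewhere):

@Bean
public ConcurrentMetadataStore metadataStore(DataSource dataSource) {
    // Backed by the shared database so that both servers can see which
    // files have already been accepted by the accept-once filter.
    return new JdbcMetadataStore(dataSource);
}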
@Bean
public IntegrationFlow processFileFlow() {
    return IntegrationFlows.from("fileInputChannel")
            .handle(service).get();
}
It is really one of the strengths of Spring Integration that it makes a distributed solution easy to implement. You just need to add messaging middleware to your cluster infrastructure and have all the nodes connect to some destination for sending and receiving. A good example is a SubscribableJmsChannel, which you can simply declare in your application context so that all the nodes of your cluster subscribe to it for round-robin consumption from a JMS queue. It then doesn't matter which node produces to this channel; see the sketch below.
See more in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/jms.html#jms-channel
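A minimal sketch of such a channel using the Spring Integration Java DSL (the queue name is a hypothetical placeholder, and the ConnectionFactory bean is assumed to exist):

@Bean
public MessageChannel distributedFileChannel(ConnectionFactory connectionFactory) {
    // Every node that declares this channel becomes a competing consumer
    // of the backing JMS queue, so processing is load-balanced across nodes.
    return Jms.channel(connectionFactory)
            .destination("distributedFileQueue") // hypothetical queue name
            .get();
}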
Other examples of similar distributed channels are AMQP, Kafka, Redis, and ZeroMQ.
You can also have a shared message store and use it in a QueueChannel definition: https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#message-store
It is not clear what you mean about switching the poller at runtime, so I would suggest you start a new SO thread with much more info.
See the rules for guidance: https://stackoverflow.com/help/how-to-ask
We have multiple consumer applications listening to the same Kafka topic, and the producer sets a message header when sending to the topic so that a specific instance can evaluate the header and process the message, e.g.:
@StreamListener(target = ITestSink.CHANNEL_NAME, condition = "headers['franchiseName'] == 'sydney'")
public void fullfillOrder(@Payload TestObj message) {
    log.info("sydney order request received message is {}", message.getName());
}
In Spring Cloud Stream 3.0.0 @StreamListener is deprecated, and I could not find an equivalent of the condition property in the functional model.
Any suggestions?
Though I was not able to find an equivalent for the functional approach either, I do have a suggestion.
The @StreamListener annotation's condition does not change the fact that the application must consume the message, read its header, and filter out specific records before passing them to the listener (fullfillOrder()). So it's safe to assume you're consuming every message that hits the topic regardless (via the event receiver that Spring Cloud implements for us under the hood); the listener only gets executed when header == sydney.
If there were a way to configure the event receiver that Spring Cloud uses (to discard messages before they hit the listener), I would suggest looking into that. If not, I would resort to filtering out any non-sydney messages before doing any processing. With Spring Cloud's functional approach, that would look something like this:
@Bean
public Consumer<Message<TestObj>> fulfillOrder() {
    return msg -> {
        // to get a header: msg.getHeaders().get(key, valueType)
        String franchise = msg.getHeaders().get("franchiseName", String.class);
        if (!"sydney".equals(franchise)) {
            return; // filter out messages meant for other instances
        }
        // process the sydney order here
    };
}
or
@Bean
public Consumer<ConsumerRecord<?, TestObj>> fulfillOrder() {
    return msg -> {
        // msg.headers().lastHeader("franchiseName").value() -> filter them out
    };
}
Other notes:
The code above assumes you're integrating the kafka-client API with Spring Cloud Stream via spring-cloud-stream-binder-kafka. Based on the tags listed, I will note that Spring Cloud Stream has two binders for Kafka: one for the Kafka client library and one for the Kafka Streams library.
Without considering Spring Cloud or other frameworks, the high-level DSL in Kafka Streams doesn't give you access to headers, but the low-level Processor API does. From the example, it seems you're leveraging the client binder and not spring-cloud-stream-binder-kafka-streams (the Kafka Streams binder). I haven't seen an implementation of Spring Cloud Stream plus the Kafka Streams binder using the low-level Processor API, so I can't tell if that was the aim.
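For what it's worth, a sketch of header access via the plain Kafka Streams Processor API (outside Spring Cloud Stream; the class name and header handling are illustrative):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class FranchiseFilterProcessor implements Processor<String, TestObj, String, TestObj> {

    private ProcessorContext<String, TestObj> context;

    @Override
    public void init(ProcessorContext<String, TestObj> context) {
        this.context = context;
    }

    @Override
    public void process(Record<String, TestObj> record) {
        Header franchise = record.headers().lastHeader("franchiseName");
        // Only forward records whose header matches; everything else is dropped.
        if (franchise != null
                && "sydney".equals(new String(franchise.value(), StandardCharsets.UTF_8))) {
            context.forward(record);
        }
    }
}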
We are developing a micro-service system that uses ActiveMQ Artemis as the communication method between services. Since the requirements ask for the ability to stop listeners at runtime, we cannot use the @JmsListener annotation provided by spring-artemis. After digging through the internet and finding out that Spring uses a MessageListenerContainer behind the scenes, we came up with the idea of maintaining a map of MessageListenerContainers ourselves.
@Bean(name = "commandJmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory commandJmsListenerContainerFactory(
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    var factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(false);
    return factory;
}
// Usage
private Map<String, DefaultMessageListenerContainer> commandQueue;

public void subscribeToCommandQueue(String queueName, CommandListener<?> command) {
    commandQueue.computeIfAbsent(queueName, key -> {
        var endPoint = new SimpleJmsListenerEndpoint();
        endPoint.setDestination(queueName);
        endPoint.setMessageListener(message -> {
            try {
                var body = message.getBody(String.class);
                command.execute(commandMessageConverter.deserialize(body));
            } catch (JMSException e) {
                throw new RuntimeException("Error while processing message for queue: " + queueName, e);
            }
        });
        var container = commandJmsListenerContainerFactory.createListenerContainer(endPoint);
        // https://stackoverflow.com/questions/44555106/defaultmessagelistenercontainer-not-reading-messages-from-ibm-mq
        // For every manually created instance of a Spring class that implements
        // InitializingBean, we need to call afterPropertiesSet() to make the object "work".
        container.afterPropertiesSet();
        container.start();
        return container;
    });
}

public void start() {
    commandQueue = new ConcurrentHashMap<>();
}

public void stop() {
    commandQueue.values().forEach(DefaultMessageListenerContainer::destroy);
    commandQueue.clear();
}
While testing, I noticed that after we destroy all the listeners by calling stop(), the queue and the address shown in the Artemis console are deleted too. That isn't the case for durable subscriptions.
@Bean(name = "eventJmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory eventJmsListenerContainerFactory(
        CachingConnectionFactory cachingConnectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    cachingConnectionFactory.setClientId(UUID.randomUUID().toString());
    var factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, cachingConnectionFactory);
    factory.setPubSubDomain(true);
    factory.setSubscriptionDurable(true);
    return factory;
}
// Usage is the same as the first code block, except we store multicast subscriptions in another map
private Map<String, DefaultMessageListenerContainer> eventTopic;
After running the unit tests and destroying all the listeners in both maps, only the test-event-topic address and its queues were kept; the test-command-queue was deleted. Why do the two queues behave differently?
Also, what is the correct behavior? We are afraid the auto-deletion will remove messages that haven't been consumed yet. On the other hand, new queues under test-event-topic keep being created if we run the test again and again. I think that is because of the line cachingConnectionFactory.setClientId(UUID.randomUUID().toString());, but for a durable subscription, not setting a client ID results in an error.
The connection factory used in the app is a CachingConnectionFactory created by spring-artemis.
By default the broker will auto-create addresses and queues as required when a message is sent or a consumer is created by the core JMS client. These resources will also be auto-deleted by default when they're no longer needed (i.e. when a queue has no consumers and no messages, or when an address no longer has any queues bound to it). This is controlled by the following settings in broker.xml, which are discussed in the documentation:
auto-create-queues
auto-delete-queues
auto-create-addresses
auto-delete-addresses
To be clear, auto-deletion should not cause any message loss by default as queues should only be deleted when they have 0 consumers and 0 messages. However, you can always set auto-deletion to false to be 100% safe.
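For example, a broker.xml sketch that disables auto-deletion (the match pattern is illustrative; scope it to your own addresses as needed):

<address-settings>
   <address-setting match="#">
      <auto-create-queues>true</auto-create-queues>
      <auto-delete-queues>false</auto-delete-queues>
      <auto-create-addresses>true</auto-create-addresses>
      <auto-delete-addresses>false</auto-delete-addresses>
   </address-setting>
</address-settings>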
Queues representing durable JMS topic subscriptions won't be deleted as they are meant to stay and gather messages while the consumer is offline. In other words, a durable topic subscription will remain if the client using the subscription is shutdown without first explicitly removing the subscription. That's the whole point of durable subscriptions - they are durable. Any client can use a durable topic subscription if it connects with the same client ID and uses the same subscription name. However, unless the durable subscription is a "shared" durable subscription then only one client at a time can be connected to it. Shared durable topic subscriptions were added in JMS 2.0.
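A minimal JMS 2.0 sketch of a shared durable subscription (the topic and subscription names are illustrative):

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;

public class SharedDurableExample {

    public static void receive(ConnectionFactory connectionFactory) {
        try (JMSContext context = connectionFactory.createContext()) {
            Topic topic = context.createTopic("test-event-topic");
            // Several clients calling this concurrently share the subscription
            // "event-sub" round-robin; no client ID is required.
            JMSConsumer consumer = context.createSharedDurableConsumer(topic, "event-sub");
            System.out.println("Got: " + consumer.receiveBody(String.class));
        }
    }
}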
I'm trying to write a basic ActiveMQ client to listen to a topic. I'm using Spring Boot with ActiveMQ. I have an implementation built from various tutorials that uses DefaultJmsListenerContainerFactory, but I am having some issues getting it working properly.
@Configuration
@EnableJms
public class JmsConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsContainerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConcurrency("3-10");
        factory.setConnectionFactory(connectionFactory);
        configurer.configure(factory, connectionFactory);
        factory.setSubscriptionDurable(true);
        factory.setClientId("someUniqueClientId");
        return factory;
    }
}
@JmsListener(destination = "someTopic", containerFactory = "jmsContainerFactory", subscription = "someUniqueSubscription")
public void onMessage(String msg) {
    ...
}
Everything works fine until I try to get a durable subscription going. When I do, I find that with the client id set on the container factory, I get an error about how the client id cannot be set on a shared connection:
Cause: setClientID call not supported on proxy for shared Connection. Set the 'clientId' property on the SingleConnectionFactory instead.
When I change the code to set the client id on the connection factory instead (it's a CachingConnectionFactory wrapping an ActiveMQConnectionFactory), the service starts up successfully, reads a couple of messages, and then starts consistently outputting this error:
Setup of JMS message listener invoker failed for destination 'someTopic' - trying to recover. Cause: Durable consumer is in use for client: someUniqueClientId and subscriptionName: someUniqueSubscription
I continue to receive messages, but this error is intermingled in the logs. It seems like a problem, but I'm really not clear on how to fix it.
I do have a naive implementation of this working without any Spring code, using ActiveMQConnectionFactory directly, and it seems happy to use a durable consumer (though it has its own, different issues). In any case, I don't think a lack of support for durable connections on the other side is the issue.
I'm hoping someone with more experience in this area can help me figure out if this error is something I can ignore, or alternatively what I need to do to address it.
Thanks!
JMS 1.1 (which is what you're using, since you're on ActiveMQ 5.x) doesn't support shared durable subscriptions. Therefore, when you use setConcurrency("3-10") and Spring tries to create more than one subscription, you receive an error. I see two main ways to solve this problem:
Use setConcurrency("1"), which will limit the number of subscribers/consumers to 1. Depending on your requirements, this could have a severe negative performance impact.
Switch to ActiveMQ Artemis, which supports JMS 2.0, and invoke setSubscriptionShared(true), as sketched below.
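A minimal sketch of the second option, assuming the broker has been switched to Artemis (this is the factory from the question with the shared-subscription flag added):

@Bean
public DefaultJmsListenerContainerFactory jmsContainerFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setConcurrency("3-10");        // safe now: the subscription is shared
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true);   // requires a JMS 2.0 provider such as Artemis
    // Note: no client ID is needed for shared durable subscriptions.
    return factory;
}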
I have a webapp with Spring and WebSockets using a message broker (ActiveMQ).
Here is my config class:
@Configuration
@EnableWebSocketMessageBroker
@EnableScheduling
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableStompBrokerRelay("/topic", "/queue/");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/hello").withSockJS();
    }
}
I have a scheduled task that constantly pushes messages to the user "DannyD":
@Scheduled(fixedDelay = 1000)
public void sendGreetings() {
    HelloMessage hello = new HelloMessage();
    hello.setContent("timeStamp:" + System.currentTimeMillis());
    String queueMapping = "/queue/greetings";
    template.convertAndSendToUser("DannyD", queueMapping, hello);
}
Currently, if the user is NOT connected to the server via a WebSocket, the messages for him are not enqueued; they are simply discarded. Whenever he connects, only fresh messages are enqueued for him.
Is it possible to "convertAndSendToUser" a message to an offline user in any way? I would like to enqueue messages with an expiration time for offline users, to be pushed when they connect again if the expiration time hasn't passed.
How can I achieve that? Obviously using a real message broker (ActiveMQ) is supposed to help achieve that, but how?
Thanks!
Indeed this feature can only be used to send messages to a (presently) connected user.
We plan to provide better ways to track failed messages (see SPR-10891). In the meantime, as a workaround, you could inject the UserSessionRegistry into your @Scheduled component and check whether the getSessionIds method returns a non-empty Set; that would indicate the user is connected, as sketched below.
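A minimal sketch of that workaround inside the scheduled component from the question (UserSessionRegistry is the Spring API named above):

@Autowired
private UserSessionRegistry userSessionRegistry;

@Scheduled(fixedDelay = 1000)
public void sendGreetings() {
    // Only send when the user has at least one active WebSocket session,
    // so the message is not silently discarded.
    if (!userSessionRegistry.getSessionIds("DannyD").isEmpty()) {
        HelloMessage hello = new HelloMessage();
        hello.setContent("timeStamp:" + System.currentTimeMillis());
        template.convertAndSendToUser("DannyD", "/queue/greetings", hello);
    }
}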
One alternative may be to use your own convention for a queue name that each user can subscribe to (probably based on their user name or something else that's unique enough) in order to receive persistent messages. The ActiveMQ STOMP page has a section on persistent messages and expiration times.
This is the default behavior for message brokers.
And you haven't set up the ActiveMQ broker as your default broker, so Spring is setting up an in-memory broker.
To achieve what you want, point your Spring WebSocket message broker at your ActiveMQ instance. Since Spring doesn't provide these settings itself, you have to do all the persistence configuration on the ActiveMQ side.
To use ActiveMQ as the Spring WebSocket message broker you also need to enable the STOMP relay, like this:
registry.enableStompBrokerRelay("/topic").setRelayHost("hostName").setRelayPort(61613);
For more information, check out:
http://docs.spring.io/spring/docs/4.0.0.RELEASE/javadoc-api/org/springframework/messaging/simp/config/StompBrokerRelayRegistration.html