Spring JMS Producer and Consumer interaction - java

How can I determine whether the consumer has already received the message from the producer, or how can I notify the producer that the message has already been consumed, in Spring JMS?

You need to use request/reply: call org.springframework.jms.core.JmsTemplate.sendAndReceive(...) on the producer side, and on the consumer side the listener method set in the container must return a value (that return value is sent back as the reply).
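A minimal sketch of that round trip, assuming a configured JmsTemplate and JMS listener infrastructure (the queue name and payloads below are illustrative, not from the question):
// Producer side: sendAndReceive(...) sends the request and blocks until a reply
// arrives on the temporary JMSReplyTo destination it creates.
Message reply = jmsTemplate.sendAndReceive("requests.q", session -> session.createTextMessage("ping"));

// Consumer side: returning a value from the listener method sends it back to JMSReplyTo.
@JmsListener(destination = "requests.q")
public String handle(String request) {
    return "processed: " + request;
}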
Or make your producer subscribe to ActiveMQ.Advisory.MessageConsumed.Queue to be notified when a message is consumed; the advisory message gives you access to properties like originalMessageId. Take a look here: http://activemq.apache.org/advisory-message.html
You can use it like this:
Destination advisoryDestination = org.apache.activemq.advisory.AdvisorySupport
        .getMessageConsumedAdvisoryTopic(session.createQueue("yourQueue"));
MessageConsumer consumer = session.createConsumer(advisoryDestination);
consumer.setMessageListener(new MessageListener() {
    @Override
    public void onMessage(Message msg) {
        System.out.println(msg);
        System.out.println(((ActiveMQMessage) msg).getMessageId());
        ActiveMQMessage aMsg = (ActiveMQMessage) ((ActiveMQMessage) msg).getDataStructure();
        System.out.println(aMsg.getMessageId());
    }
});

Related

Invoking our Custom MessageListenerContainer in JMS

I am using a JMS queue.
I am attaching my own listener, MyMessageListenerContainer, to the endpoint in the following way, but it is not getting picked up.
JmsQueueEndpoint jmsQueueEndpoint = (JmsQueueEndpoint) endpoint;
// my own listener.
MyMessageListenerContainer myMessageListenerContainer = new MyMessageListenerContainer();
// overridden below method of the JmsComponent
configureMessageListenerContainer(myMessageListenerContainer, endpoint);

// MyMessageListenerContainer implementation
class MyMessageListenerContainer extends DefaultMessageListenerContainer {
    @Override
    protected void refreshConnectionUntilSuccessful() {
        System.out.println("In My message listener **********");
        super.refreshConnectionUntilSuccessful();
    }
}
When the broker is down, it should ideally pick up this class and log In My message listener **********.
But in my case it invokes the DefaultMessageListenerContainer method instead.
Can somebody help me here?

What's the difference between message listeners and the @JmsListener annotation

So I'm diving deeper into the world of JMS.
I am writing some dummy projects right now to understand how to consume messages. I am using ActiveMQ Artemis as the message broker.
While following a tutorial, I stumbled upon something in terms of consuming messages. What exactly is the difference between using a message listener to listen for messages and using the @JmsListener annotation?
This is what I have so far:
public class Receiver {

    @JmsListener(containerFactory = "jmsListenerContainerFactory", destination = "helloworld .q")
    public void receive(String message) {
        System.out.println("received message='" + message + "'.");
    }
}
@Configuration
@EnableJms
public class ReceiverConfig {

    @Value("${artemis.broker-url}")
    private String brokerUrl;

    @Bean
    public ActiveMQConnectionFactory activeMQConnectionFactory() {
        return new ActiveMQConnectionFactory(brokerUrl);
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(activeMQConnectionFactory());
        factory.setConcurrency("3-10");
        return factory;
    }

    @Bean
    public DefaultMessageListenerContainer orderMessageListenerContainer() {
        SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
        endpoint.setMessageListener(new StatusMessageListener("DMLC"));
        endpoint.setDestination("helloworld.q"); // Try renaming this and see what happens.
        return jmsListenerContainerFactory().createListenerContainer(endpoint);
    }

    @Bean
    public Receiver receiver() {
        return new Receiver();
    }
}
public class StatusMessageListener implements MessageListener {

    public StatusMessageListener(String dmlc) {
    }

    @Override
    public void onMessage(Message message) {
        System.out.println("In the onMessage().");
        System.out.println(message);
    }
}
From what I've read, we register a message listener with the listener container, which in turn is created by the container factory. So essentially the flow is this:
DefaultJmsListenerContainerFactory -> creates -> DefaultMessageListenerContainer -> registers a message listener which is used to listen to messages from the configured endpoint.
From my research, I've gathered that MessageListeners are used to asynchronously consume messages from queues/topics, whereas the @JmsListener annotation is used to synchronously listen to messages?
Furthermore, there are a few other listener container factories out there, such as DefaultJmsListenerContainerFactory and SimpleJmsListenerContainerFactory, but I'm not sure I get the difference. I was reading https://codenotfound.com/spring-jms-listener-example.html and from what I've gathered from that, Default uses a pull model, which suggests it's async, so why would it matter whether we consume the message via a MessageListener or the annotation? I'm a bit confused and muddled up, so I would like my doubts to be cleared up. Thanks!
This is a snippet of the output when sending 100 dummy messages (I just noticed it's not outputting the even-numbered messages..):
received message='This the 95 message.'.
In the onMessage().
ActiveMQMessage[ID:006623ca-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24068, durable=true, address=helloworld.q,userID=006623ca-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=00651257-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 97 message.'.
In the onMessage().
ActiveMQMessage[ID:006ba214-d42a-11ea-a68e-648099ad9459]:PERSISTENT/ClientMessageImpl[messageID=24088, durable=true, address=helloworld.q,userID=006ba214-d42a-11ea-a68e-648099ad9459,properties=TypedProperties[__AMQ_CID=0069cd51-d42a-11ea-a68e-648099ad9459,_AMQ_ROUTING_TYPE=1]]
received message='This the 99 message.'.
The following configuration
@Configuration
@EnableJms
public class ReceiverConfig {
    // your config code here..
}
would ensure that every time a Message is received on the Destination named "helloworld .q", Receiver.receive() is called with the content of the message.
You can read more here: https://docs.spring.io/spring/docs

Spring Boot Kafka Batch Forwarding

I've configured my consumer to accept messages from a topic in batches. How do I forward them to a new topic?
I want each consumed message to be forwarded as its own message, so X messages consumed will produce X messages.
Here's my current setup:
@KafkaListener(topics = "input")
@SendTo("output")
public ConsumerRecords consume(ConsumerRecords records) {
    // Do things
    return records;
}
And here's the exception thrown:
org.springframework.kafka.KafkaException: No method found for class java.util.ArrayList
That functionality is not supported. In any case, you can't send a ConsumerRecord to a Producer.
This works, though
@KafkaListener(id = "foo", topics = "input")
@SendTo("output")
public List<String> consume(List<String> data) {
    return data;
}
(where String is the type created by your deserializer).
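Note that for the List<String> listener above to receive whole batches, the listener container factory typically has to be in batch mode. A minimal sketch of such a factory bean (the bean name and ConsumerFactory wiring are assumptions, not part of the original answer):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Deliver all records from a poll to the listener as a List instead of one at a time
    factory.setBatchListener(true);
    return factory;
}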

Async receiving value from IBM MQ with dynamic correlationId

I'm sending messages to IBM MQ with a correlationId (unique for each message). Then I want to read this concrete message with its specific correlationId from the output queue, and I want it to be non-blocking so I can use it in a Java WebFlux controller.
I'm wondering if there is a way to do this without a lot of pain. Options like jmsTemplate.receiveSelected(...) are blocking, while creating a bean implementing the MessageListener interface doesn't provide a way to select a message by a dynamic selector (i.e. the correlationId is unique for each message).
You could use a Spring MessageListener to retrieve all messages and connect it with the controller via Mono.create(...) and your own event listener which triggers the result Mono.
// Consumes the message and triggers the result Mono
public interface MyEventListener extends Consumer<MyOutputMessage> {}
A class to route incoming messages to the correct MyEventListener:
public class MyMessageProcessor {
    // You could use an in-memory cache here if you need ttl etc.
    private static final ConcurrentHashMap<String, MyEventListener> REGISTRY = new ConcurrentHashMap<>();

    public void register(String correlationId, MyEventListener listener) {
        MyEventListener oldListener = REGISTRY.putIfAbsent(correlationId, listener);
        if (oldListener != null)
            throw new IllegalStateException("Correlation ID collision!");
    }

    public void unregister(String correlationId) {
        REGISTRY.remove(correlationId);
    }

    public void accept(String correlationId, MyOutputMessage myOutputMessage) {
        Optional.ofNullable(REGISTRY.get(correlationId))
                .ifPresent(listener -> listener.accept(myOutputMessage));
    }
}
The WebFlux controller:
private final MyMessageProcessor messageProcessor;
....

@PostMapping("/process")
Mono<MyOutputMessage> process(Mono<MyInputMessage> inputMessage) {
    String correlationId = ...; // generate correlationId
    // then send message asynchronously
    return Mono.<MyOutputMessage>create(sink ->
            // create and save MyEventListener which calls MonoSink.success
            messageProcessor.register(correlationId, sink::success))
        // define timeout if you don't want to wait forever
        .timeout(...)
        // cleanup MyEventListener after success, error or cancel
        .doFinally(ignored -> messageProcessor.unregister(correlationId));
}
And in the onMessage(...) of your JMS MessageListener implementation you would call:
messageProcessor.accept(correlationId, myOutputMessage);
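A minimal sketch of such a listener (the conversion from the JMS Message to MyOutputMessage is a hypothetical helper; adapt it to your payload type):
public class ReplyListener implements MessageListener {

    private final MyMessageProcessor messageProcessor;

    public ReplyListener(MyMessageProcessor messageProcessor) {
        this.messageProcessor = messageProcessor;
    }

    @Override
    public void onMessage(Message message) {
        try {
            // Route the reply to whichever Mono registered this correlation ID
            String correlationId = message.getJMSCorrelationID();
            messageProcessor.accept(correlationId, convert(message)); // convert(...) is hypothetical
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}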
You can find a similar example for Flux in the Reactor 3 reference guide.

How to implement a round-robin queue consumer in Spring Boot

I am building a message-driven service in Spring which will run in a cluster and needs to pull messages from a RabbitMQ queue in a round-robin manner. The implementation is currently pulling messages off the queue on a first-come basis, leading to some servers getting backed up while others are idle.
The current QueueConsumerConfiguration.java looks like:
@Configuration
public class QueueConsumerConfiguration extends RabbitMqConfiguration {

    private Logger LOG = LoggerFactory.getLogger(QueueConsumerConfiguration.class);

    private static final int DEFAULT_CONSUMERS = 2;

    @Value("${eventservice.inbound}")
    protected String inboudEventQueue;

    @Value("${eventservice.consumers}")
    protected int queueConsumers;

    @Autowired
    private EventHandler eventtHandler;

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.inboudEventQueue);
        template.setQueue(this.inboudEventQueue);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public Queue inboudEventQueue() {
        return new Queue(this.inboudEventQueue);
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.setQueueNames(this.inboudEventQueue);
        container.setMessageListener(messageListenerAdapter());
        if (this.queueConsumers > 0) {
            LOG.info("Starting queue consumers:" + this.queueConsumers);
            container.setMaxConcurrentConsumers(this.queueConsumers);
            container.setConcurrentConsumers(this.queueConsumers);
        } else {
            LOG.info("Starting default queue consumers:" + DEFAULT_CONSUMERS);
            container.setMaxConcurrentConsumers(DEFAULT_CONSUMERS);
            container.setConcurrentConsumers(DEFAULT_CONSUMERS);
        }
        return container;
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(this.eventtHandler, jsonMessageConverter());
    }
}
Is it a case of just adding
container.setChannelTransacted(true);
to the configuration?
RabbitMQ treats all consumers the same - it knows no difference between multiple consumers in one container Vs. one consumer in multiple containers (e.g. on different hosts). Each is a consumer from Rabbit's perspective.
If you want more control over server affinity, you need to use multiple queues with each container listening to its own queue.
You then control the distribution on the producer side - e.g. using a topic or direct exchange and specific routing keys to route messages to a specific queue.
This tightly binds the producer to the consumers (it has to know how many there are).
Or you could have your producer use routing keys rk.0, rk.1, ..., rk.29 (repeatedly, resetting to 0 when 30 is reached).
Then you can bind the consumer queues with multiple bindings -
consumer 1 gets rk.0 to rk.9, consumer 2 gets rk.10 to rk.19, and so on (see the sketch below).
If you then decide to increase the number of consumers, just adjust the bindings accordingly to redistribute the work.
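A minimal Spring AMQP sketch of that scheme for one consumer instance (the exchange and queue names, and the ten-key slice, are illustrative assumptions, not from the original answer):
@Bean
public Declarables consumerBindings() {
    DirectExchange exchange = new DirectExchange("events.exchange");
    Queue queue = new Queue("events.consumer-1");
    List<Declarable> declarables = new ArrayList<>();
    declarables.add(exchange);
    declarables.add(queue);
    // This instance takes rk.0 .. rk.9; other instances bind rk.10 .. rk.19, rk.20 .. rk.29, etc.
    for (int i = 0; i < 10; i++) {
        declarables.add(BindingBuilder.bind(queue).to(exchange).with("rk." + i));
    }
    return new Declarables(declarables);
}
On the producer side you would cycle the routing key when publishing, e.g. rabbitTemplate.convertAndSend("events.exchange", "rk." + (counter.getAndIncrement() % 30), payload).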
The container will scale up to maxConcurrentConsumers on demand but, practically, scaling down only occurs when the entire container is idle for some time.
