I use Spring AMQP with RabbitMQ. I want to get one message without prefetch.
I configured it with:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(rabbitConnectionFactory());
container.setQueueNames(
ProjectConfigs.getInstance().get_RABBIT_TASK_QUEUE()
);
container.setMessageListener(taskListener());
container.setConcurrentConsumers(1);
container.setPrefetchCount(1);
container.setTxSize(1);
return container;
How do I disable prefetch and get only one message?
Prefetch simply controls how many messages the broker allows to be outstanding at the consumer at a time. When set to 1, the broker sends one message, waits for the ack, then sends the next.
It defaults to 1. Setting it to 0 means the broker will send unlimited messages to the consumer, regardless of acks.
If you only want one message and then stop, you shouldn't use a container; you can use one of the RabbitTemplate.receive() methods.
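For example, a minimal sketch of pulling a single message on demand, reusing the connection factory and queue name from the question (the 5-second timeout is arbitrary):
// Hedged sketch: receive(queue, timeout) blocks up to the timeout and returns null
// if nothing arrives; receive(queue) with no timeout returns immediately.
RabbitTemplate template = new RabbitTemplate(rabbitConnectionFactory());
Message message = template.receive(ProjectConfigs.getInstance().get_RABBIT_TASK_QUEUE(), 5000);
if (message != null) {
    String body = new String(message.getBody(), StandardCharsets.UTF_8);
    // hand the single message to the converter here
}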
I tried to do it with Spring AMQP:
@Bean
public MessageListener taskListener() {
return new MessageListener() {
public void onMessage(Message message) {
try {
LOGGER.info(new String(message.getBody(), "UTF-8"));
Converter converter = new Converter();
converter.startConvert(new String(message.getBody(), "UTF-8"));
} catch (Exception e) {
LOGGER.error(getStackTrace(e));
}
}
};
}
Related
I recently changed from using a standard RabbitTemplate in my Spring Boot application to using an AsyncRabbitTemplate. In the process, I switched from the standard send method to using the sendAndReceive method.
Making this change does not seem to affect the publishing of messages to RabbitMQ; however, I now see stack traces as follows when sending messages:
org.springframework.amqp.core.AmqpReplyTimeoutException: Reply timed out
at org.springframework.amqp.rabbit.AsyncRabbitTemplate$RabbitFuture$TimeoutTask.run(AsyncRabbitTemplate.java:762) [spring-rabbit-2.3.10.jar!/:2.3.10]
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) [spring-context-5.3.9.jar!/:5.3.9]
I have tried modifying various settings including the reply and receive timeouts but all that changes is the time it takes to receive the above error. I have also tried setting useDirectReplyToContainer to true as well as setting useChannelForCorrelation to true.
I have managed to recreate the issue in a main method, included below, using a RabbitMQ broker running in Docker.
public static void main(String[] args) {
com.rabbitmq.client.ConnectionFactory cf = new com.rabbitmq.client.ConnectionFactory();
cf.setHost("localhost");
cf.setPort(5672);
cf.setUsername("<my-username>");
cf.setPassword("<my-password>");
cf.setVirtualHost("<my-vhost>");
ConnectionFactory connectionFactory = new CachingConnectionFactory(cf);
RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
rabbitTemplate.setExchange("primary");
rabbitTemplate.setUseDirectReplyToContainer(true);
rabbitTemplate.setReceiveTimeout(10000);
rabbitTemplate.setReplyTimeout(10000);
rabbitTemplate.setUseChannelForCorrelation(true);
AsyncRabbitTemplate asyncRabbitTemplate = new AsyncRabbitTemplate(rabbitTemplate);
asyncRabbitTemplate.start();
System.out.printf("Async Rabbit Template Running? %b\n", asyncRabbitTemplate.isRunning());
MessageBuilderSupport<MessageProperties> props = MessagePropertiesBuilder.newInstance()
.setContentType(MessageProperties.CONTENT_TYPE_TEXT_PLAIN)
.setMessageId(UUID.randomUUID().toString())
.setHeader(PUBLISH_TIME_HEADER, Instant.now(Clock.systemUTC()).toEpochMilli())
.setDeliveryMode(MessageDeliveryMode.NON_PERSISTENT);
asyncRabbitTemplate.sendAndReceive(
"1.1.1.csv-routing-key",
new Message(
"a,test,csv".getBytes(StandardCharsets.UTF_8),
props.build()
)
).addCallback(new ListenableFutureCallback<>() {
@Override
public void onFailure(Throwable ex) {
System.out.printf("Error sending message:\n%s\n", ex.getLocalizedMessage());
}
@Override
public void onSuccess(Message result) {
System.out.println("Message successfully sent");
}
});
}
I am sure that I am just missing a configuration option, but any help would be appreciated.
Thanks. :)
asyncRabbitTemplate.sendAndReceive(..) will always expect a response from the consumer of the message, hence the timeout you are receiving.
To fire and forget, use the standard RabbitTemplate.send(...) and catch any exceptions in a try/catch block:
try {
rabbitTemplate.send("1.1.1.csv-routing-key",
new Message(
"a,test,csv".getBytes(StandardCharsets.UTF_8),
props.build()));
} catch (AmqpException ex) {
log.error("failed to send rabbit message, routing key = {}", routingKey, ex);
}
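If, on the other hand, you do want the request/reply behaviour of sendAndReceive, something on the consumer side has to send a reply to the replyTo address; with Spring AMQP that is typically just a listener method with a return value. A rough sketch (the queue name is a placeholder, not taken from the question):
// Hedged sketch: a listener whose return value is published back to the replyTo
// address, which is what AsyncRabbitTemplate.sendAndReceive() is waiting for.
// The queue name "csv-requests" is illustrative only.
@RabbitListener(queues = "csv-requests")
public String handleCsv(String body) {
    // returning a value causes Spring AMQP to send it as the reply message
    return "processed: " + body;
}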
Set the reply timeout to a bigger number and see the effect.
rabbitTemplate.setReplyTimeout(60000);
https://docs.spring.io/spring-amqp/reference/html/#reply-timeout
Problem:
Somehow the producer is sending events to "ActiveMQ.Advisory.Producer.Queue.Queue" instead of "Queue".
ActiveMQ admin console, Topics section (screenshot with the producer queue): not sure why it shows the queue with 0 consumers and number of messages enqueued = 38.
ActiveMQ admin console, Queues section (screenshot with the consumer queue): it shows consumers = 1 but number of messages enqueued = 0.
Attaching Producer, Consumer and Config code.
Producer
public void sendMessage(WorkflowRun message){
var queue = "Queue";
try{
log.info("Attempting Send message to queue: "+ queue);
jmsTemplate.convertAndSend(queue, message);
} catch(Exception e){
log.error("Recieved Exception during send Message: ", e);
}
}
Listener
@JmsListener(destination = "Queue")
public void messageListener(SystemMessage systemMessage) {
LOGGER.info("Message received! {}", systemMessage);
}
Config
@Value("${spring.active-mq.broker-url}")
private String brokerUrl;
@Bean
public ConnectionFactory connectionFactory() throws JMSException {
ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
activeMQConnectionFactory.setBrokerURL(brokerUrl);
activeMQConnectionFactory.setWatchTopicAdvisories(false);
activeMQConnectionFactory.createQueueConnection(ActiveMQConnectionFactory.DEFAULT_USER,
ActiveMQConnectionFactory.DEFAULT_PASSWORD);
return activeMQConnectionFactory;
}
When your producer starts, the ActiveMQ broker produces an 'Advisory Message' and sends it to that topic. The count indicates how many producers have been created for queue://Queue; in this case 38 producers have been created.
Since the message is not being produced, it appears that in your Spring wiring you have the connection, session and producer objects being created, but the messages are not being sent.
Additionally, if queue://ActiveMQ.Advisory.. is showing up, you probably have a bug in some other part of the app (or a monitoring tool?) that should be configured to consume from topic://ActiveMQ.Advisory.. instead of queue://.
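For illustration, if something in the application really does need to read those advisory messages, it has to subscribe to the advisory topic rather than a queue. A hedged sketch with Spring JMS; the container factory name is an assumption, and the only important part is setPubSubDomain(true):
// Hedged sketch: advisories live on topics, so a listener for them needs a
// pub/sub container factory. Names here are illustrative, not from the question.
@Bean
public DefaultJmsListenerContainerFactory topicListenerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setPubSubDomain(true); // topic:// instead of queue://
    return factory;
}

@JmsListener(destination = "ActiveMQ.Advisory.Producer.Queue.Queue",
        containerFactory = "topicListenerFactory")
public void onProducerAdvisory(javax.jms.Message advisory) {
    // advisory messages describe broker events (e.g. a producer being created);
    // they are not the application messages themselves
}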
I am currently using Spring Cloud Stream with Kafka binders with a GlobalChannelInterceptor to perform message-logging for my Spring Boot microservices.
I have:
a producer to publish messages to a SubscribableChannel
a consumer to listen from the Stream (using the @StreamListener annotation)
Throughout the process, when a message is published to the Stream by the producer and listened to by the consumer, I observed that the preSend method is triggered twice:
Once at producer side - when the message is published to the Stream
Once at consumer side - when the message is listened from the Stream
However, for my logging purposes, I only need to intercept and log the message at consumer side.
Is there any way to intercept the SCS message ONLY at one side (e.g. consumer side)?
I would appreciate any thoughts on this matter. Thank you!
Ref:
GlobalChannelInterceptor documentation - https://docs.spring.io/spring-integration/api/org/springframework/integration/config/GlobalChannelInterceptor.html
EDIT
Producer
public void sendToPushStream(PushStreamMessage message) {
try {
boolean results = streamChannel.pushStream().send(MessageBuilder.withPayload(new ObjectMapper().writeValueAsString(message)).build());
log.info("Push stream message {} sent to {}.", results ? "successfully" : "not", StreamChannel.PUSH_STREAM);
} catch (JsonProcessingException ex) {
log.error("Unable to parse push stream message.", ex);
}
}
Producer's streamChannel
public interface StreamChannel {
String PUSH_STREAM = "PushStream";
@Output(StreamChannel.PUSH_STREAM)
SubscribableChannel pushStream();
}
Consumer
@StreamListener(StreamChannel.PUSH_STREAM)
public void handle(Message<PushStreamMessage> message) {
log.info("Incoming stream message from {}, {}", streamChannel.pushStream(), message);
}
Consumer's streamChannel
public interface StreamChannel {
String PUSH_STREAM = "PushStream";
@Input(StreamChannel.PUSH_STREAM)
SubscribableChannel pushStream();
}
Interceptor (Common Library)
public class GlobalStreamInterceptor extends ChannelInterceptorAdapter {
@Override
public Message<?> preSend(Message<?> msg, MessageChannel mc) {
log.info("presend " + msg);
return msg;
}
@Override
public void postSend(Message<?> msg, MessageChannel mc, boolean sent) {
log.info("postSend " + msg);
}
}
Right, why not follow the GlobalChannelInterceptor options and apply its patterns attribute ("An array of simple patterns against which channel names will be matched")?
So, you may have something like this:
@GlobalChannelInterceptor(patterns = Processor.INPUT)
Or use the custom name of the input channel in your SCSt app.
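A minimal sketch of that idea, assuming the interceptor bean is declared only in the consumer application and scoped to the channel name from the question (the bean method name is arbitrary):
// Hedged sketch: register the common-library interceptor only for the consumer's
// input channel by matching its name.
@Bean
@GlobalChannelInterceptor(patterns = StreamChannel.PUSH_STREAM)
public ChannelInterceptor pushStreamInterceptor() {
    return new GlobalStreamInterceptor();
}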
I encountered a knotty problem when receiving a message from a WildFly JMS queue. My code is below:
Session produceSession = connectionFactory.createConnection().createSession(false, Session.CLIENT_ACKNOWLEDGE);
Session consumerSession = connectionFactory.createConnection().createSession(false, Session.CLIENT_ACKNOWLEDGE);
ApsSchedule apsSchedule = new ApsSchedule();
boolean success;
MessageProducer messageProducer = produceSession.createProducer(outQueueMaxusOrder);
success = apsSchedule.sendD90Order(produceSession,messageProducer, d90OrderAps);
if (!success) {
logger.error("Can't send APS schedule msg ");
} else {
MessageConsumer consumer = consumerSession.createConsumer(inQueueDeliveryDate);
data = apsSchedule.receiveD90Result(consumerSession,consumer);
}
Then it goes into receiveD90Result():
public DeliveryData receiveD90Result(Session session, MessageConsumer consumer) {
DeliveryData data = null;
try {
Message message = consumer.receive(10000);
if (message == null) {
return null;
}
TextMessage msg = (TextMessage) message;
String text = msg.getText();
logger.debug("Receive APS d90 result: {}", text);
ObjectMapper mapper = new ObjectMapper();
data = mapper.readValue(text, DeliveryData.class);
} catch (JMSException je) {
logger.error("Can't receive APS d90 order result: {}", je.getMessage());
} catch (Exception e) {
e.printStackTrace();
} finally {
try {
consumer.close();
} catch (JMSException e) {
e.printStackTrace();
}
}
return data;
}
But when calling consumer.receive(10000), the project can't get a message from the queue. If I use the asynchronous MDB way of listening to the queue, I can get the message. How can I resolve this?
There are multiple modes you can choose to get a message from the queue. Message queues are asynchronous in usage by default. There are, however, cases when you want to read synchronously, for example sending a message with an account number and using another queue to read the response, matching it with a message id or a message correlation id. When you do a receive, the program waits for a message to arrive within the polling interval specified in receive.
The code snippet you have, as I see it, uses the pseudo-synchronous approach. If you have to use it as an MDB, you will have to implement a message-driven bean (EJB resource) or a message listener.
The way an MDB/message listener works is more event based: instead of a poll with a timeout (like the receive), you implement a callback called onMessage() that is invoked every time there is a message. Instead of a synchronous call, this becomes asynchronous. Your application may require some design changes, roughly along the lines of the sketch below.
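For illustration, a rough sketch of that event-driven style as an MDB; the class name and destination lookup are made up, not taken from the question:
// Hedged sketch: the container calls onMessage() whenever a message arrives,
// so there is no receive(timeout) polling. Names below are illustrative only.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:/jms/queue/inQueueDeliveryDate")
})
public class D90ResultListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            String text = ((TextMessage) message).getText();
            // deserialize the delivery data and handle it here
        } catch (JMSException e) {
            // log and decide whether to let the container redeliver
        }
    }
}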
I don't see where you're calling javax.jms.Connection.start(). In fact, it doesn't look like you even have a reference to the javax.jms.Connection instance used for your javax.jms.MessageConsumer. If you don't have a reference to the javax.jms.Connection then you can't invoke start() and you can't invoke close() when you're done so you'll be leaking connections.
Furthermore, connections are "heavy" objects and are meant to be re-used. You should create a single connection for both the producer and consumer. Also, if your application is not going to use the javax.jms.Session from multiple threads then you don't need multiple sessions either.
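A rough sketch of what that could look like, reusing the queue fields from the question and trimming error handling:
// Hedged sketch: one connection for both producer and consumer, started before
// receiving and closed when finished (closing also closes sessions and consumers).
Connection connection = connectionFactory.createConnection();
try {
    connection.start(); // without start(), consumer.receive() never gets a message

    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    MessageProducer producer = session.createProducer(outQueueMaxusOrder);
    MessageConsumer consumer = session.createConsumer(inQueueDeliveryDate);

    // ... send the D90 order with the producer, then wait for the result ...
    Message reply = consumer.receive(10000);
    if (reply != null) {
        reply.acknowledge(); // CLIENT_ACKNOWLEDGE needs an explicit ack
    }
} finally {
    connection.close();
}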
I have a JMS client which is producing messages and sending over a JMS queue to its unique consumer.
What I want is more than one consumer getting those messages. The first thing that comes to my mind is converting the queue to a topic, so current and new consumers can subscribe and get the same message delivered to all of them.
This will obviously involve modifying the current clients code in both producer and consumer side of things.
I would like to also look at other options like creating a second queue, so that I don't have to modify the existing consumer. I believe there are advantages in this approach like (correct me if I am wrong) balancing the load between two different queues rather than one, which might have a positive impact on performance.
I would like to get advise on these options and cons / pros that you might see. Any feedback is highly appreciated.
You have a few options as you stated.
If you convert it to a topic to get the same effect, you will need to make the consumers persistent consumers. One thing the queue offers is persistence if your consumer isn't alive. This will depend on the MQ system you are using.
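For example, with plain JMS a "persistent consumer" is a durable subscription, roughly like this (client id, topic and subscription names are placeholders):
// Hedged sketch: a durable subscription so messages published while this consumer
// is offline are kept by the broker and delivered when it reconnects.
Connection connection = connectionFactory.createConnection();
connection.setClientID("consumer-1"); // required for durable subscriptions
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic("Orders");
MessageConsumer subscriber = session.createDurableSubscriber(topic, "consumer-1-subscription");
connection.start();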
If you want to stick with queues, you will create a queue for each consumer and a dispatcher that will listen on the original queue.
Producer -> Queue_Original <- Dispatcher -> Queue_Consumer_1 <- Consumer_1
                                         -> Queue_Consumer_2 <- Consumer_2
                                         -> Queue_Consumer_3 <- Consumer_3
Pros of Topics
Easier to dynamically add new consumers. All consumers will get new messages without any work.
You can create round-robin topics, so that Consumer_1 will get a message, then Consumer_2, then Consumer_3
Consumers can be pushed new messages, instead of having to query a queue making them reactive.
Cons of Topics
Messages are not persistent unless your broker supports this configuration. If a consumer goes offline and comes back, it is possible to have missed messages unless persistent consumers are set up.
Difficult to allow Consumer_1 and Consumer_2 to receive a message but not Consumer_3. With a Dispatcher and Queues, the Dispatcher can simply not put the message in Consumer_3's queue.
Pros of Queues
Messages are persistent until a Consumer removes them
A dispatcher can filter which consumers get which messages by not placing messages into the respective consumers queues. This can be done with topics through filters though.
Cons of Queues
Additional Queues need to be created to support multiple consumers. In a dynamic environment this wouldn't be efficient.
When developing a Messaging System I prefer topics as it gives me the most power, but seeing as you are already using Queues it would require you to change how your system works to implement Topics instead.
Design and Implementation of Queue System with multiple consumers
Producer -> Queue_Original <- Dispatcher -> Queue_Consumer_1 <- Consumer_1
                                         -> Queue_Consumer_2 <- Consumer_2
                                         -> Queue_Consumer_3 <- Consumer_3
Source
Keep in mind there are other things you'll need to take care of, such as proper exception handling, reconnecting to the connection and queues if you lose your connection, etc. This is just designed to give you an idea of how to accomplish what I described.
In a real system I probably wouldn't exit out at the first exception. I would allow the system to continue operating as best it could and log errors. As it stands in this code, if putting a message in a single consumer's queue fails, the whole dispatcher will stop.
Dispatcher.java
package stackoverflow_4615895;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
public class Dispatcher {
private static long QUEUE_WAIT_TIME = 1000;
private boolean mStop = false;
private QueueConnectionFactory mFactory;
private String mSourceQueueName;
private String[] mConsumerQueueNames;
/**
* Create a dispatcher.
* @param factory
* The QueueConnectionFactory in which new connections, sessions, and consumers
* will be created. This is needed to ensure the connection is associated
* with the correct thread.
* @param sourceQueue
* The queue to read messages from.
* @param consumerQueues
* The per-consumer queues each message will be copied into.
*/
public Dispatcher(
QueueConnectionFactory factory,
String sourceQueue,
String[] consumerQueues) {
mFactory = factory;
mSourceQueueName = sourceQueue;
mConsumerQueueNames = consumerQueues;
}
public void start() {
Thread thread = new Thread(new Runnable() {
public void run() {
Dispatcher.this.run();
}
});
thread.setName("Queue Dispatcher");
thread.start();
}
public void stop() {
mStop = true;
}
private void run() {
QueueConnection connection = null;
MessageProducer producer = null;
MessageConsumer consumer = null;
QueueSession session = null;
try {
// Setup connection and queues for receiving the messages
connection = mFactory.createQueueConnection();
session = connection.createQueueSession(false, Session.DUPS_OK_ACKNOWLEDGE);
Queue sourceQueue = session.createQueue(mSourceQueueName);
consumer = session.createConsumer(sourceQueue);
// Create a null producer allowing us to send messages
// to any queue.
producer = session.createProducer(null);
// Create the destination queues based on the consumer names we
// were given.
Queue[] destinationQueues = new Queue[mConsumerQueueNames.length];
for (int index = 0; index < mConsumerQueueNames.length; ++index) {
destinationQueues[index] = session.createQueue(mConsumerQueueNames[index]);
}
connection.start();
while (!mStop) {
// Only wait QUEUE_WAIT_TIME in order to give
// the dispatcher a chance to see if it should
// quit
Message m = consumer.receive(QUEUE_WAIT_TIME);
if (m == null) {
continue;
}
// Take the message we received and put
// it in each of the consumers destination
// queues for them to process
for (Queue q : destinationQueues) {
producer.send(q, m);
}
}
} catch (JMSException ex) {
// Do wonderful things here
} finally {
if (producer != null) {
try {
producer.close();
} catch (JMSException ex) {
}
}
if (consumer != null) {
try {
consumer.close();
} catch (JMSException ex) {
}
}
if (session != null) {
try {
session.close();
} catch (JMSException ex) {
}
}
if (connection != null) {
try {
connection.close();
} catch (JMSException ex) {
}
}
}
}
}
Main.java
QueueConnectionFactory factory = ...;
Dispatcher dispatcher =
new Dispatcher(
factory,
"Queue_Original",
new String[]{
"Consumer_Queue_1",
"Consumer_Queue_2",
"Consumer_Queue_3"});
dispatcher.start();
You may not have to modify the code; it depends on how you wrote it.
For example, if your code sends messages using MessageProducer rather than QueueSender, then it will work for topics as well as queues. Similarly if you used MessageConsumer rather than QueueReceiver.
Essentially, it is good practice in JMS applications to use non-specific interfaces to interact with the JMS system, such as MessageProducer, MessageConsumer, Destination, etc. If that's the case, it's a "mere" matter of configuration.
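A minimal sketch of that destination-agnostic style (the JNDI names are placeholders):
// Hedged sketch: code written against the generic JMS interfaces does not care
// whether the looked-up Destination is a Queue or a Topic, so switching becomes
// a configuration change. JNDI names are illustrative only.
Context ctx = new InitialContext();
ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
Destination destination = (Destination) ctx.lookup("jms/OrdersDestination"); // queue or topic

Connection connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

MessageProducer producer = session.createProducer(destination);   // not QueueSender
producer.send(session.createTextMessage("hello"));

MessageConsumer consumer = session.createConsumer(destination);   // not QueueReceiver
connection.start();
Message received = consumer.receive(5000);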