I configured an SQS listener to consume messages as a List of Messages, but I am only getting a single message at a time, along with this error: cannot convert model.StudentData to the instance of java.util.ArrayList<com.amazonaws.services.sqs.model.Message>
My code:
@SqsListener(value = "${queueName}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void receiveMessage(final StudentData studentData,
        @Header("SenderId") final String senderId, final Acknowledgment acknowledgment) {
    // business logic
    acknowledgment.acknowledge();
}
Any suggestions on how to configure the SQS listener to consume multiple messages? Any help will be appreciated.
The solution for the above issue:
final ExecutorService executorService = Executors.newSingleThreadExecutor();
executorService.execute(() -> {
    // Look up the queue URL once instead of on every poll.
    final String queueUrl = amazonSqs.getQueueUrl("enter your queue name").getQueueUrl();
    while (true) {
        // Long-poll for up to 20 seconds to avoid busy-waiting on an empty queue.
        final var receiveMessageRequest = new ReceiveMessageRequest(queueUrl)
                .withWaitTimeSeconds(20);
        List<Message> messages = amazonSqs.receiveMessage(receiveMessageRequest).getMessages();
        while (!messages.isEmpty()) {
            for (final Message queueMessage : messages) {
                try {
                    final String message = queueMessage.getBody();
                    // process the message body, then delete it from the queue
                    amazonSqs.deleteMessage(new DeleteMessageRequest(queueUrl,
                            queueMessage.getReceiptHandle()));
                } catch (Exception e) {
                    log.error("Received message with errors " + e);
                }
            }
            messages = amazonSqs.receiveMessage(new ReceiveMessageRequest(queueUrl)).getMessages();
        }
    }
});
executorService.shutdown(); // only stops new tasks; the polling task above keeps running
The SQS listener annotation provides the simplest configuration: it consumes messages one by one. This limitation comes directly from Spring's QueueMessagingTemplate.
To consume batches, you can use the AmazonSQS client directly.
@Autowired AmazonSQSAsync amazonSqs;
...
String queueUrl = amazonSqs.getQueueUrl("queueName").getQueueUrl();
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest();
receiveMessageRequest.setQueueUrl(queueUrl);
receiveMessageRequest.setWaitTimeSeconds(10); // long-poll for up to 10 seconds if no messages are available
receiveMessageRequest.setMaxNumberOfMessages(10); // SQS returns at most 10 messages per receive call
ReceiveMessageResult receiveMessageResult = amazonSqs.receiveMessage(receiveMessageRequest);
receiveMessageResult.getMessages(); // batch of messages
I'm trying to find out how to increase the rate of message consumption of a JMS Spring Boot application. In a test, it took 1.5 hours to consume and process 2000 pending messages in the queue.
In other words, the problem is that it took 1.5 hours for the Spring Boot app to empty the queue it consumes from.
public class MyMessageListener implements MessageListener {

    @Autowired
    private MyMessageService messageService;

    @Override
    public void onMessage(Message message) {
        String messageContent = null;
        try {
            if (message instanceof BytesMessage) {
                BytesMessage bytesMessage = (BytesMessage) message;
                long length = bytesMessage.getBodyLength();
                byte[] content = new byte[(int) length];
                bytesMessage.readBytes(content);
                messageContent = new String(content, StandardCharsets.UTF_8);
            } else if (message instanceof TextMessage) {
                TextMessage textMessage = (TextMessage) message;
                messageContent = textMessage.getText();
            }
            if (messageContent != null) {
                FileIOHelper.writeInboundXmlToFile(messageContent); // write inbound message to file
                String accountNumber = XmlUtil.extractAccountNumber(messageContent);
                final String xmlMessageTransformed = messageService.transformXmlMessageToOldSchema(messageContent);
                if (!xmlMessageTransformed.isEmpty()) {
                    FileIOHelper.writeTransformedXmlToFile(accountNumber, xmlMessageTransformed); // write outbound message to file
                    Map<String, String> outboundHeaderProperties = messageService.createJMSHeaderProperties(message);
                    messageService.publishMessageToOutboundTopic(xmlMessageTransformed, outboundHeaderProperties);
                } else {
                    FileIOHelper.writeUnprocessedXmlToFile(messageContent); // write unprocessed message to file
                    log.error(String.format("Failed transformation of message account# %s", accountNumber));
                }
                message.acknowledge(); // acknowledge ALL inbound messages from inbound QUEUE
            }
        } catch (Exception e) {
            log.error(e.getMessage());
        }
    }
}
As you can see, part of processing a message involves writing a copy of the inbound and outbound messages to a file. I suspect this is what's causing the slow consumption/processing rate.
In my JMS configuration class, I have the following:
@Bean
public DefaultMessageListenerContainer listenerContainer(MessageListenerAdapter messageListener,
        @Qualifier("sourceConnection") ConnectionFactory listenerConnectionFactory) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(listenerConnectionFactory);
    container.setDestinationName(jmsSourceQueue);
    container.setMessageListener(messageListener);
    container.setSessionTransacted(true);
    container.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    container.setRecoveryInterval(30000); // reconnect every 30 seconds if disconnected
    // container.setConcurrentConsumers(1); // do I need to add this line?
    // container.setMaxConcurrentConsumers(5); // Or, add this line?
    return container;
}
I searched SO and learned about setConcurrentConsumers() and setMaxConcurrentConsumers() (a sketch of how they would be applied follows below), but I'm not sure whether that's how I can solve the slow message consumption rate.
The requirement for our JMS application is to be able to consume messages within just a few minutes; in my example above, it took 1.5 hours to consume all 2000 messages.
Can you suggest a way or approach to solve this without removing the write-to-file step?
Thank you!
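For illustration, a minimal sketch of what those concurrency settings would look like on the container bean above (the consumer counts are assumptions for the example, not a tuned recommendation):
@Bean
public DefaultMessageListenerContainer listenerContainer(MessageListenerAdapter messageListener,
        @Qualifier("sourceConnection") ConnectionFactory listenerConnectionFactory) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(listenerConnectionFactory);
    container.setDestinationName(jmsSourceQueue);
    container.setMessageListener(messageListener);
    container.setSessionTransacted(true);
    container.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    container.setRecoveryInterval(30000);
    container.setConcurrentConsumers(5);     // start with 5 consumer threads (illustrative)
    container.setMaxConcurrentConsumers(10); // scale up to 10 under load (illustrative)
    return container;
}
Note that each concurrent consumer invokes onMessage() on its own thread, so the write-to-file steps would need to be thread-safe for this approach to be viable.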
Problem:
Somehow the producer is sending events to "ActiveMQ.Advisory.Producer.Queue.Queue" instead of "Queue".
ActiveMQ admin console, Topics section (screenshot): the producer queue appears here, and I'm not sure why it shows 0 consumers and 38 messages enqueued.
ActiveMQ admin console, Queues section (screenshot): the consumer queue shows 1 consumer but 0 messages enqueued.
Attaching the producer, consumer and config code.
Producer
public void sendMessage(WorkflowRun message) {
    var queue = "Queue";
    try {
        log.info("Attempting to send message to queue: " + queue);
        jmsTemplate.convertAndSend(queue, message);
    } catch (Exception e) {
        log.error("Received exception during send message: ", e);
    }
}
Listener
@JmsListener(destination = "Queue")
public void messageListener(SystemMessage systemMessage) {
    LOGGER.info("Message received! {}", systemMessage);
}
Config
#Value("${spring.active-mq.broker-url}")
private String brokerUrl;
#Bean
public ConnectionFactory connectionFactory() throws JMSException {
ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
activeMQConnectionFactory.setBrokerURL(brokerUrl);
activeMQConnectionFactory.setWatchTopicAdvisories(false);
activeMQConnectionFactory.createQueueConnection(ActiveMQConnectionFactory.DEFAULT_USER,
ActiveMQConnectionFactory.DEFAULT_PASSWORD);
return activeMQConnectionFactory;
}
When your producer starts, the ActiveMQ broker produces an 'Advisory Message' and sends it to that topic. The count indicates how many producers have been created for queue://Queue; in this case, 38 producers have been created.
Since the message is not being produced, it appears that in your Spring wiring the connection, session and producer objects are being created, but the messages are not being sent.
Additionally, if queue://ActiveMQ.Advisory.. is showing up, you probably have a bug in some other part of the app (or a monitoring tool?) that should be configured to consume from topic://ActiveMQ.Advisory.. instead of queue://.
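As a side note, if the advisory topics themselves are not wanted, advisory support can also be disabled broker-side. A minimal activemq.xml sketch (the rest of the broker configuration is elided):
<broker xmlns="http://activemq.apache.org/schema/core" advisorySupport="false">
    <!-- ... rest of the broker configuration ... -->
</broker>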
I'm using Virtual Destinations to implement a publish-subscribe model in ActiveMQ 5.15.13.
I have a virtual topic VirtualTopic with two queues bound to it. Each queue has its own redelivery policy: say Queue 1 will retry a message 2 times if an exception occurs while processing it, and Queue 2 will retry 3 times. After the retries, the message is sent to a dead-letter queue. I'm also using the individual dead-letter queue strategy so that each queue has its own dead-letter queue.
I've observed that when a message is sent to VirtualTopic, a message with the same message ID is delivered to both queues. I'm facing an issue when the consumers of both queues fail to process the message: the message destined for Queue 1 is moved to its dead-letter queue after 2 retries, but there is no dead-letter queue for Queue 2, even though the message in Queue 2 is retried 3 times.
Is this the expected behavior?
Code:
public class ActiveMQRedelivery {
    // Assumes static imports of Session.AUTO_ACKNOWLEDGE, Session.CLIENT_ACKNOWLEDGE,
    // DeliveryMode.PERSISTENT, Message.DEFAULT_PRIORITY and Message.DEFAULT_TIME_TO_LIVE.
    private final ActiveMQConnectionFactory factory;

    public ActiveMQRedelivery(String brokerUrl) {
        factory = new ActiveMQConnectionFactory(brokerUrl);
        factory.setUserName("admin");
        factory.setPassword("password");
        factory.setAlwaysSyncSend(false);
    }

    public void publish(String topicAddress, String message) {
        final String topicName = "VirtualTopic." + topicAddress;
        try {
            final Connection producerConnection = factory.createConnection();
            producerConnection.start();
            final Session producerSession = producerConnection.createSession(false, AUTO_ACKNOWLEDGE);
            final MessageProducer producer = producerSession.createProducer(null);
            final TextMessage textMessage = producerSession.createTextMessage(message);
            final Topic topic = producerSession.createTopic(topicName);
            producer.send(topic, textMessage, PERSISTENT, DEFAULT_PRIORITY, DEFAULT_TIME_TO_LIVE);
        } catch (JMSException e) {
            throw new RuntimeException("Message could not be published", e);
        }
    }

    public void initializeConsumer(String queueName, String topicAddress, int numOfRetry) throws JMSException {
        factory.getRedeliveryPolicyMap().put(new ActiveMQQueue("*." + queueName + ".>"),
                getRedeliveryPolicy(numOfRetry));
        Connection connection = factory.createConnection();
        connection.start();
        final Session consumerSession = connection.createSession(false, CLIENT_ACKNOWLEDGE);
        final Queue queue = consumerSession.createQueue("Consumer." + queueName +
                ".VirtualTopic." + topicAddress);
        final MessageConsumer consumer = consumerSession.createConsumer(queue);
        consumer.setMessageListener(message -> {
            try {
                System.out.println("in listener --- " + ((ActiveMQDestination) message.getJMSDestination()).getPhysicalName());
                // recover() forces redelivery, simulating a failed processing attempt
                consumerSession.recover();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }

    private RedeliveryPolicy getRedeliveryPolicy(int numOfRetry) {
        final RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
        redeliveryPolicy.setInitialRedeliveryDelay(0);
        redeliveryPolicy.setMaximumRedeliveries(numOfRetry);
        redeliveryPolicy.setMaximumRedeliveryDelay(-1);
        redeliveryPolicy.setRedeliveryDelay(0);
        return redeliveryPolicy;
    }
}
Test:
public class ActiveMQRedeliveryTest {
    private static final String brokerUrl = "tcp://0.0.0.0:61616";
    private ActiveMQRedelivery activeMQRedelivery;

    @Before
    public void setUp() throws Exception {
        activeMQRedelivery = new ActiveMQRedelivery(brokerUrl);
    }

    @Test
    public void testMessageRedeliveries() throws Exception {
        String topicAddress = "testTopic";
        activeMQRedelivery.initializeConsumer("queue1", topicAddress, 2);
        activeMQRedelivery.initializeConsumer("queue2", topicAddress, 3);
        activeMQRedelivery.publish(topicAddress, "TestMessage");
        Thread.sleep(3000);
    }

    @After
    public void tearDown() throws Exception {
    }
}
I recently came across this problem. To fix it, two attributes need to be added to individualDeadLetterStrategy, as below:
<deadLetterStrategy>
    <individualDeadLetterStrategy destinationPerDurableSubscriber="true" enableAudit="false"
        queuePrefix="DLQ." useQueueForQueueMessages="true"/>
</deadLetterStrategy>
Explanation of the attributes:
destinationPerDurableSubscriber - enables a separate DLQ destination per durable subscriber.
enableAudit - the dead-letter strategy has a message audit that is enabled by default, which prevents duplicate messages from being added to the configured DLQ. With the audit enabled and destinationPerDurableSubscriber set to true, a message that fails delivery to multiple subscribers of a topic is only placed on the DLQ of one of them: if two consumers fail to acknowledge the same message, it lands on one consumer's DLQ and not the other's. Disabling the audit (enableAudit="false") avoids this.
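For context, a sketch of where this snippet sits in activemq.xml: the deadLetterStrategy is nested inside a policyEntry under the broker's destinationPolicy (the catch-all queue pattern here is an illustrative choice):
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry queue=">">
                <deadLetterStrategy>
                    <individualDeadLetterStrategy destinationPerDurableSubscriber="true"
                        enableAudit="false" queuePrefix="DLQ." useQueueForQueueMessages="true"/>
                </deadLetterStrategy>
            </policyEntry>
        </policyEntries>
    </policyMap>
</destinationPolicy>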
I've used the subscriber example from the Google documentation for Google Pub/Sub. The only modification I've made is commenting out the acknowledgement of the messages.
The subscriber no longer adds messages to the queue, although messages should be resent according to the interval set in the Google Cloud console.
Why is this happening, or am I missing something?
public class SubscriberExample {
    // use the default project id
    private static final String PROJECT_ID = ServiceOptions.getDefaultProjectId();
    private static final BlockingQueue<PubsubMessage> messages = new LinkedBlockingDeque<>();

    static class MessageReceiverExample implements MessageReceiver {
        @Override
        public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
            messages.offer(message);
            // consumer.ack();
        }
    }

    /** Receive messages over a subscription. */
    public static void main(String[] args) throws Exception {
        // set subscriber id, e.g. my-sub
        String subscriptionId = args[0];
        ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(
                PROJECT_ID, subscriptionId);
        Subscriber subscriber = null;
        try {
            // create a subscriber bound to the asynchronous message receiver
            subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample()).build();
            subscriber.startAsync().awaitRunning();
            // Continue to listen to messages
            while (true) {
                PubsubMessage message = messages.take();
                System.out.println("Message Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
            }
        } finally {
            if (subscriber != null) {
                subscriber.stopAsync();
            }
        }
    }
}
When you do not acknowledge a message, the Java client library calls modifyAckDeadline on it until maxAckExtensionPeriod passes. By default, this value is one hour. Therefore, if you don't ack/nack the message or change this value, it is likely the message will not be redelivered for an hour. If you want to change the max ack extension period, set it on the builder:
subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample())
        .setMaxAckExtensionPeriod(Duration.ofSeconds(60))
        .build();
It is also worth noting that when you don't ack or nack messages, flow control may prevent the delivery of further messages. By default, the Java client library allows up to 1,000 messages to be outstanding, i.e., waiting for an ack or nack or for the max ack extension period to pass.
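For illustration, a sketch of raising that flow-control limit on the subscriber builder, assuming the FlowControlSettings class from the client library's com.google.api.gax.batching package (the count of 10,000 is an arbitrary example):
subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample())
        .setFlowControlSettings(FlowControlSettings.newBuilder()
                .setMaxOutstandingElementCount(10_000L) // allow up to 10,000 unacked messages outstanding
                .build())
        .build();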
I started using JMS a week ago. I created a JMS application using NetBeans, Maven and GlassFish.
I have one producer and one durable consumer, and I want to add another durable consumer to the same topic (not a queue). Is it possible to do so? I want all consumers to consume every message sent by the producer, whether the consumers are offline or not.
Any advice?
Thanks
public class DurableReceive {
    @Resource(lookup = "jms/myDurableConnectionFactory")
    private static ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/myNewTopic")
    private static Topic topic;

    public static void main(String[] args) {
        Destination dest = (Destination) topic;
        JMSConsumer consumer;
        boolean messageReceived = false;
        String message;
        System.out.println("Waiting for messages...");
        try (JMSContext context = connectionFactory.createContext()) {
            consumer = context.createDurableConsumer(topic, "Subscriber1");
            while (!messageReceived) {
                message = consumer.receiveBody(String.class);
                if (message != null) {
                    System.out.print("Received the following message: " + message);
                    System.out.println("(Received date: " + new Date() + ")\n");
                } else {
                    messageReceived = true;
                }
            }
        } catch (JMSRuntimeException e) {
            System.err.println("##$%RuntimeException occurred: " + e.toString());
            System.exit(1);
        }
    }
}
You can set a different clientID for each durable consumer. The JMS broker uses the combination of subscriptionName and clientID to identify a unique client (so if your subscriber has a unique clientID, it can receive its own messages). You can set the clientID on your JMSContext.
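A minimal sketch of what that could look like in the example above (the clientID and subscription name are illustrative; setClientID must be called before the durable consumer is created):
try (JMSContext context = connectionFactory.createContext()) {
    // setClientID must be the first operation on a new context
    context.setClientID("subscriber-2"); // unique per durable subscriber
    JMSConsumer consumer = context.createDurableConsumer(topic, "Subscriber2");
    String message = consumer.receiveBody(String.class);
    System.out.println("Received: " + message);
}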