Google PubSub resent messages aren't being processed - java

I've used the subscriber example from the Google documentation for Google Pub/Sub; the only modification I've made is commenting out the acknowledgement of the messages.
The subscriber no longer adds messages to the queue, even though unacknowledged messages should be resent according to the acknowledgement deadline set in the Google Cloud Console.
Why is this happening, or am I missing something?
import com.google.cloud.ServiceOptions;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

public class SubscriberExample {

    // use the default project id
    private static final String PROJECT_ID = ServiceOptions.getDefaultProjectId();

    private static final BlockingQueue<PubsubMessage> messages = new LinkedBlockingDeque<>();

    static class MessageReceiverExample implements MessageReceiver {

        @Override
        public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
            messages.offer(message);
            //consumer.ack();
        }
    }

    /** Receive messages over a subscription. */
    public static void main(String[] args) throws Exception {
        // set subscriber id, e.g. my-sub
        String subscriptionId = args[0];
        ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(
                PROJECT_ID, subscriptionId);
        Subscriber subscriber = null;
        try {
            // create a subscriber bound to the asynchronous message receiver
            subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample()).build();
            subscriber.startAsync().awaitRunning();
            // Continue to listen to messages
            while (true) {
                PubsubMessage message = messages.take();
                System.out.println("Message Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
            }
        } finally {
            if (subscriber != null) {
                subscriber.stopAsync();
            }
        }
    }
}

When you do not acknowledge a message, the Java client library calls modifyAckDeadline on the message until maxAckExtensionPeriod passes. By default, this value is one hour. Therefore, if you don't ack/nack the message or change this value, it is likely the message will not be redelivered for an hour. If you want to change the max ack extension period, set it on the builder:
subscriber = Subscriber.newBuilder(subscriptionName, new MessageReceiverExample())
    .setMaxAckExtensionPeriod(Duration.ofSeconds(60))
    .build();
It is also worth noting that when you don't ack or nack messages, flow control may prevent the delivery of more messages. By default, the Java client library allows up to 1,000 messages to be outstanding, i.e., waiting for ack or nack or for the max ack extension period to pass.
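If you hit that limit, here is a minimal sketch of adjusting it with flow control settings (assuming the same builder as above; FlowControlSettings comes from com.google.api.gax.batching, and the 100-message limit is an arbitrary example value):
import com.google.api.gax.batching.FlowControlSettings;

// Sketch: allow at most 100 messages to be outstanding (unacked) at once.
FlowControlSettings flowControlSettings =
    FlowControlSettings.newBuilder()
        .setMaxOutstandingElementCount(100L)
        .build();

subscriber =
    Subscriber.newBuilder(subscriptionName, new MessageReceiverExample())
        .setFlowControlSettings(flowControlSettings)
        .build();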

Related

How to write a JUnit test case for a GCP Pub/Sub MessageReceiver in a Spring Boot application

I have implemented the GCP Pub/Sub message receiver in a Spring Boot application using the following approach: https://cloud.google.com/pubsub/docs/samples/pubsub-subscriber-concurrency-control. How do I write JUnit test cases for the implementation below?
Attaching the implementation code:
import com.google.api.gax.core.ExecutorProvider;
import com.google.api.gax.core.InstantiatingExecutorProvider;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SubscribeWithConcurrencyControlExample {

    public static void main(String... args) throws Exception {
        // TODO(developer): Replace these variables before running the sample.
        String projectId = "your-project-id";
        String subscriptionId = "your-subscription-id";

        subscribeWithConcurrencyControlExample(projectId, subscriptionId);
    }

    public static void subscribeWithConcurrencyControlExample(
            String projectId, String subscriptionId) {
        ProjectSubscriptionName subscriptionName =
            ProjectSubscriptionName.of(projectId, subscriptionId);

        // Instantiate an asynchronous message receiver.
        MessageReceiver receiver =
            (PubsubMessage message, AckReplyConsumer consumer) -> {
                // Handle incoming message, then ack the received message.
                System.out.println("Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
                consumer.ack();
            };

        Subscriber subscriber = null;
        try {
            // Provides an executor service for processing messages. The default `executorProvider`
            // used by the subscriber has a default thread count of 5.
            ExecutorProvider executorProvider =
                InstantiatingExecutorProvider.newBuilder().setExecutorThreadCount(4).build();

            // `setParallelPullCount` determines how many StreamingPull streams the subscriber will
            // open to receive messages. It defaults to 1. `setExecutorProvider` configures an
            // executor for the subscriber to process messages. Here, the subscriber is configured
            // to open 2 streams for receiving messages; each stream creates a new executor with
            // 4 threads to help process the message callbacks. In total 2x4=8 threads are used
            // for message processing.
            subscriber =
                Subscriber.newBuilder(subscriptionName, receiver)
                    .setParallelPullCount(2)
                    .setExecutorProvider(executorProvider)
                    .build();

            // Start the subscriber.
            subscriber.startAsync().awaitRunning();
            System.out.printf("Listening for messages on %s:\n", subscriptionName.toString());
            // Allow the subscriber to run for 30s unless an unrecoverable error occurs.
            subscriber.awaitTerminated(30, TimeUnit.SECONDS);
        } catch (TimeoutException timeoutException) {
            // Shut down the subscriber after 30s. Stop receiving messages.
            subscriber.stopAsync();
        }
    }
}
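One way to start, sketched here rather than taken from an authoritative answer: the receiver is just a MessageReceiver callback, so it can be unit tested in isolation by invoking it with a message and a hand-rolled AckReplyConsumer, with no real subscription involved. The test class name and JUnit 5 usage below are assumptions:
import static org.junit.jupiter.api.Assertions.assertTrue;

import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.jupiter.api.Test;

// Hypothetical test class: exercises the receiver callback directly.
class MessageReceiverTest {

    @Test
    void receiverAcksMessage() {
        // The same callback logic as in the sample, inlined for the test.
        MessageReceiver receiver =
            (PubsubMessage message, AckReplyConsumer consumer) -> {
                System.out.println("Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
                consumer.ack();
            };

        // Fake consumer that records whether ack() was called.
        AtomicBoolean acked = new AtomicBoolean(false);
        AckReplyConsumer fakeConsumer = new AckReplyConsumer() {
            @Override
            public void ack() {
                acked.set(true);
            }

            @Override
            public void nack() {
            }
        };

        PubsubMessage message =
            PubsubMessage.newBuilder().setData(ByteString.copyFromUtf8("test")).build();
        receiver.receiveMessage(message, fakeConsumer);

        assertTrue(acked.get(), "receiver should ack the message");
    }
}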

Remove "ActiveMQ.Advisory.Producer.x" prefix

Problem:
Somehow the producer is sending events to "ActiveMQ.Advisory.Producer.Queue.Queue" instead of "Queue".
In the Active-MQ admin console, the Topics section shows the producer queue (not sure why it appears there as a queue, with 0 consumers and 38 messages enqueued).
In the Queues section, the consumer queue shows consumers = 1 but messages enqueued = 0.
Attaching Producer, Consumer and Config code.
Producer
public void sendMessage(WorkflowRun message){
    var queue = "Queue";
    try {
        log.info("Attempting to send message to queue: " + queue);
        jmsTemplate.convertAndSend(queue, message);
    } catch (Exception e) {
        log.error("Received exception during send message: ", e);
    }
}
Listener
@JmsListener(destination = "Queue")
public void messageListener(SystemMessage systemMessage) {
    LOGGER.info("Message received! {}", systemMessage);
}
Config
@Value("${spring.active-mq.broker-url}")
private String brokerUrl;

@Bean
public ConnectionFactory connectionFactory() throws JMSException {
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    activeMQConnectionFactory.setWatchTopicAdvisories(false);
    activeMQConnectionFactory.createQueueConnection(ActiveMQConnectionFactory.DEFAULT_USER,
            ActiveMQConnectionFactory.DEFAULT_PASSWORD);
    return activeMQConnectionFactory;
}
When your producer starts, the ActiveMQ broker produces an 'Advisory Message' and sends it to that topic. The count indicates how many producers have been created for queue://Queue -- in this case, 38 producers have been created.
Since the messages are not arriving on the queue, it appears that in your Spring wiring you have the connection, session and producer objects being created, but the messages are not being sent.
Additionally, if queue://ActiveMQ.Advisory.. destinations are showing up, you probably have a bug in some other part of the app (or a monitoring tool?) that should be configured to consume from topic://ActiveMQ.Advisory.. instead of queue://. A minimal consumer for the advisory topic is sketched below.
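As a hedged illustration of that last point, a minimal JMS consumer that reads from the advisory topic rather than from a queue of the same name (the broker URL and class name are assumptions, not from the question):
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AdvisoryTopicListener {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Advisory messages are published to topics, so consume from
        // topic://ActiveMQ.Advisory..., not a queue:// of the same name.
        Topic advisoryTopic = session.createTopic("ActiveMQ.Advisory.Producer.Queue.Queue");
        MessageConsumer consumer = session.createConsumer(advisoryTopic);
        consumer.setMessageListener(message -> System.out.println("Advisory: " + message));
    }
}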

Multiple queues receiving same message from virtual topic creates a deadletter entry for one queue only

I'm using Virtual Destinations to implement a publish-subscribe model in ActiveMQ 5.15.13.
I have a virtual topic VirtualTopic with two queues bound to it. Each queue has its own redelivery policy: say Queue 1 will retry a message 2 times if there is an exception while processing it, and Queue 2 will retry 3 times. After the retries are exhausted, the message is sent to a dead letter queue. I'm also using the individual dead letter queue strategy so that each queue has its own dead letter queue.
I've observed that when a message is sent to VirtualTopic, a message with the same message id is delivered to both queues. The issue I'm facing is when the consumers of both queues fail to process the message: the message destined for Queue 1 is moved to its dead letter queue after 2 retries, but there is no dead letter queue entry for Queue 2, even though the message in Queue 2 is retried 3 times.
Is this the expected behavior?
Code:
import static javax.jms.DeliveryMode.PERSISTENT;
import static javax.jms.Message.DEFAULT_PRIORITY;
import static javax.jms.Message.DEFAULT_TIME_TO_LIVE;
import static javax.jms.Session.AUTO_ACKNOWLEDGE;
import static javax.jms.Session.CLIENT_ACKNOWLEDGE;

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.command.ActiveMQQueue;

public class ActiveMQRedelivery {

    private final ActiveMQConnectionFactory factory;

    public ActiveMQRedelivery(String brokerUrl) {
        factory = new ActiveMQConnectionFactory(brokerUrl);
        factory.setUserName("admin");
        factory.setPassword("password");
        factory.setAlwaysSyncSend(false);
    }

    public void publish(String topicAddress, String message) {
        final String topicName = "VirtualTopic." + topicAddress;
        try {
            final Connection producerConnection = factory.createConnection();
            producerConnection.start();
            final Session producerSession = producerConnection.createSession(false, AUTO_ACKNOWLEDGE);
            final MessageProducer producer = producerSession.createProducer(null);
            final TextMessage textMessage = producerSession.createTextMessage(message);
            final Topic topic = producerSession.createTopic(topicName);
            producer.send(topic, textMessage, PERSISTENT, DEFAULT_PRIORITY, DEFAULT_TIME_TO_LIVE);
        } catch (JMSException e) {
            throw new RuntimeException("Message could not be published", e);
        }
    }

    public void initializeConsumer(String queueName, String topicAddress, int numOfRetry) throws JMSException {
        factory.getRedeliveryPolicyMap().put(new ActiveMQQueue("*." + queueName + ".>"),
                getRedeliveryPolicy(numOfRetry));
        Connection connection = factory.createConnection();
        connection.start();
        final Session consumerSession = connection.createSession(false, CLIENT_ACKNOWLEDGE);
        final Queue queue = consumerSession.createQueue("Consumer." + queueName +
                ".VirtualTopic." + topicAddress);
        final MessageConsumer consumer = consumerSession.createConsumer(queue);
        consumer.setMessageListener(message -> {
            try {
                System.out.println("in listener --- " + ((ActiveMQDestination) message.getJMSDestination()).getPhysicalName());
                consumerSession.recover();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }

    private RedeliveryPolicy getRedeliveryPolicy(int numOfRetry) {
        final RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
        redeliveryPolicy.setInitialRedeliveryDelay(0);
        redeliveryPolicy.setMaximumRedeliveries(numOfRetry);
        redeliveryPolicy.setMaximumRedeliveryDelay(-1);
        redeliveryPolicy.setRedeliveryDelay(0);
        return redeliveryPolicy;
    }
}
Test:
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ActiveMQRedeliveryTest {

    private static final String brokerUrl = "tcp://0.0.0.0:61616";
    private ActiveMQRedelivery activeMQRedelivery;

    @Before
    public void setUp() throws Exception {
        activeMQRedelivery = new ActiveMQRedelivery(brokerUrl);
    }

    @Test
    public void testMessageRedeliveries() throws Exception {
        String topicAddress = "testTopic";
        activeMQRedelivery.initializeConsumer("queue1", topicAddress, 2);
        activeMQRedelivery.initializeConsumer("queue2", topicAddress, 3);
        activeMQRedelivery.publish(topicAddress, "TestMessage");
        Thread.sleep(3000);
    }

    @After
    public void tearDown() throws Exception {
    }
}
I recently came across this problem. To fix it, there are 2 attributes that need to be added to the individualDeadLetterStrategy, as below:
<deadLetterStrategy>
  <individualDeadLetterStrategy destinationPerDurableSubscriber="true" enableAudit="false"
      queuePrefix="DLQ." useQueueForQueueMessages="true"/>
</deadLetterStrategy>
Explanation of attributes:
destinationPerDurableSubscriber - enables a separate DLQ destination per durable subscriber.
enableAudit - the dead letter strategy has a message audit that is enabled by default; it prevents duplicate messages from being added to the configured DLQ. With the audit enabled and destinationPerDurableSubscriber set to true, a message that fails delivery to multiple subscribers of a topic is placed on the DLQ of only one of them: if two consumers fail to acknowledge the same message, that message lands on one consumer's DLQ but not the other's. Setting enableAudit="false" avoids this.
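For context, a sketch of where that fragment typically sits in activemq.xml, assuming the consumer queues follow the Consumer.*.VirtualTopic.> naming used in the question:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- apply the individual DLQ strategy to the virtual topic consumer queues -->
      <policyEntry queue="Consumer.*.VirtualTopic.>">
        <deadLetterStrategy>
          <individualDeadLetterStrategy destinationPerDurableSubscriber="true"
              enableAudit="false" queuePrefix="DLQ." useQueueForQueueMessages="true"/>
        </deadLetterStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>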

Consuming messages in batches in Amazon SQS, Alpine SQS, Spring Boot

I had configured the SQS listener to consume messages as a List of Messages, but I am only getting a single message at a time, and getting an error: cannot convert model.StudentData to an instance of java.util.ArrayList<com.amazonaws.services.sqs.model.Message>.
My code is:
@SqsListener(value = "${queueName}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void receiveMessage(final StudentData studentData,
        @Header("SenderId") final String senderId, final Acknowledgment acknowledgment) {
    // business logic
    acknowledgment.acknowledge();
}
Any suggestion on how to configure the SQS listener to consume multiple messages? Any help will be appreciated.
The solution for the above issue is:
final ExecutorService executorService = Executors.newSingleThreadExecutor();
executorService.execute(() -> {
    while (true) {
        final String queueUrl = amazonSqs.getQueueUrl("enter your queue name").getQueueUrl();
        final var receiveMessageRequest = new ReceiveMessageRequest(queueUrl)
                .withWaitTimeSeconds(20);
        List<Message> messages = amazonSqs.receiveMessage(receiveMessageRequest).getMessages();
        while (messages.size() > 0) {
            for (final Message queueMessage : messages) {
                try {
                    String message = queueMessage.getBody();
                    amazonSqs.deleteMessage(new DeleteMessageRequest(queueUrl, queueMessage
                            .getReceiptHandle()));
                } catch (Exception e) {
                    log.error("Received message with errors " + e);
                }
            }
            messages = amazonSqs.receiveMessage(new ReceiveMessageRequest(queueUrl)).getMessages();
        }
    }
});
executorService.shutdown();
The SQS listener annotation provides the simplest configuration; it will consume messages one by one. This limitation comes directly from Spring's QueueMessagingTemplate.
To consume batches you can use the AmazonSQS client directly:
@Autowired AmazonSQSAsync amazonSqs;
...
String queueUrl = amazonSqs.getQueueUrl("queueName").getQueueUrl();
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest();
receiveMessageRequest.setQueueUrl(queueUrl);
receiveMessageRequest.setWaitTimeSeconds(10); // long polling: wait up to 10 seconds for messages
receiveMessageRequest.setMaxNumberOfMessages(10); // SQS returns at most 10 messages per request
ReceiveMessageResult receiveMessageResult = amazonSqs.receiveMessage(receiveMessageRequest);
receiveMessageResult.getMessages(); // batch of messages

Consumer is not receiving message from MQ when message is sent before consumer is listening

I am using MQs for the first time and attempting to implement a logging system with RabbitMQ. My implementation involves a 'sender':
/*
 * This class sends messages over MQ
 */
public class MQSender {

    private final static String EXCHANGE_NAME = "mm_exchange";
    private final static String[] LOG_LEVELS = {"green", "orange", "red", "black"};

    public static void main(String[] args) throws IOException, ShutdownSignalException, ConsumerCancelledException, InterruptedException {
        /*
         * Boilerplate stuff
         */
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        //declare the exchange that messages pass through, type=direct
        channel.exchangeDeclare(EXCHANGE_NAME, "direct");

        String[] levels = {"green", "orange", "red", "black"};
        for (String log_level : levels) {
            String message = "This is a " + log_level + " message";
            System.out.println("Sending " + log_level + " message");
            //publish the message with each of the bindings in levels
            channel.basicPublish(EXCHANGE_NAME, log_level, null, message.getBytes());
        }
        channel.close();
        connection.close();
    }
}
which sends one message for each of my colors to the exchange, where the colors are used as binding/routing keys. It also involves a 'receiver':
public class MQReceiver {

    private final static String EXCHANGE_NAME = "mm_exchange";
    private final static String[] LOG_LEVELS = {"green", "orange", "red", "black"};

    public static void main(String[] args) throws IOException, ShutdownSignalException, ConsumerCancelledException, InterruptedException {
        receiveMessagesFromQueue(2);
    }

    public static void receiveMessagesFromQueue(int maxLevel) throws IOException, ShutdownSignalException, ConsumerCancelledException, InterruptedException {
        /*
         * Boilerplate stuff
         */
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        //declare the exchange that messages pass through, type=direct
        channel.exchangeDeclare(EXCHANGE_NAME, "direct");
        //generate random queue
        String queueName = channel.queueDeclare().getQueue();
        //set bindings from 0 to maxLevel for the queue
        for (int level = 0; level <= maxLevel; level++) {
            channel.queueBind(queueName, EXCHANGE_NAME, LOG_LEVELS[level]);
        }

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume(queueName, true, consumer);

        while (true) {
            //waits until a message is delivered then gets that message
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String message = new String(delivery.getBody());
            String routingKey = delivery.getEnvelope().getRoutingKey();
            System.out.println(" [x] Received '" + routingKey + "':'" + message + "'");
        }
    }
}
which is given a parameter representing which color bindings I would like it to be fed from the exchange.
In my implementation, and in RabbitMQ in general, it seems like messages are stored in the exchange until a consumer asks for them, at which point they are distributed to their respective queues and then sent one at a time to the client (or 'consumer' in MQ lingo). My problem is that when I run the MQSender class before running the MQReceiver class, the messages never get delivered, but when I run the MQReceiver class first, the messages are received. From my understanding of MQ, the messages should be stored on the server until the MQReceiver class is run, and then delivered to their consumers, but this is not what happens. My main question is whether these messages can be stored in an exchange, and if not, where they should be stored so that they are delivered once a consumer (i.e. my MQReceiver class) starts.
Thanks for your help!
RabbitMQ discards messages if their routing key doesn't match any queues bound to the exchange. When you start MQSender first, no queues are bound, so the messages it sends are lost. When you start MQReceiver, it binds queues to the exchange, so RabbitMQ has a place to put the message from MQSender. When you stop MQReceiver, since you created an anonymous queue, the queue and all bindings are removed from the exchange.
If you want messages to be stored on the server while MQReceiver is not running, you need the receiver to create a named queue, and bind the routing keys to that queue. Note that creating a named queue is idempotent, and the queue won't be created if it already exists. Then you need the receiver to pull messages off the named queue.
Change your code to look something like this:
MQSender
....
String namedQueue = "logqueue";
//declare named queue and bind log level routing keys to it.
//RabbitMQ will put messages with matching routing keys in this queue
channel.queueDeclare(namedQueue, false, false, false, null);
for (int level = 0; level < LOG_LEVELS.length; level++) {
    channel.queueBind(namedQueue, EXCHANGE_NAME, LOG_LEVELS[level]);
}
...
MQReceiver
...
channel.exchangeDeclare(EXCHANGE_NAME, "direct");
QueueingConsumer consumer = new QueueingConsumer(channel);
//Consume messages off named queue instead of anonymous queue
String namedQueue = "logqueue";
channel.basicConsume(namedQueue, true, consumer);
while(true) {
...
