I'm dealing with an Azure Service Bus subscription and its Dead Letter Queue (DLQ) in a Java Spring Boot setup.
I need to process the messages in the DLQ when there is a trigger.
I have 12 messages in the DLQ; I need to read 5 messages in one go and submit them to an ExecutorService to process the individual messages.
I created an IMessageReceiver deadLetterReceiver, then did batch receiving with deadLetterReceiver.receiveBatch(5).
The constraint is that the next batch must not be read until the messages in the first batch have been processed; until then, the first batch of messages is not removed from the DLQ and remains there.
The problem is that after I process the first batch and read the second batch from the ASB, I get the same messages again instead of the next 5.
For example, if I have messages with messageId 1 to 12 in the DLQ, the first batch gives me messages with messageId 1, 2, 3, 4, 5. On reading the second batch I get 1, 2, 3, 4, 5 again instead of 6, 7, 8, 9, 10.
Here is the code:
public void processDeadLetterQueue() {
    IMessageReceiver deadLetterReceiver = getDeadLetterMessageReceiver();
    Long deadLetterMessageCount = getDeadLetterMessageCount();
    Long receivedMessageCount = 0L;
    ExecutorService executor = Executors.newFixedThreadPool(2);
    while (receivedMessageCount < deadLetterMessageCount) {
        Collection<IMessage> messageList = deadLetterReceiver.receiveBatch(5);
        receivedMessageCount += messageList.size();
        List<Callable<Void>> callableDeadLetterMessages = new ArrayList<>();
        messageList.forEach(message -> callableDeadLetterMessages.add(() -> {
            handleDeadLetterMessage(message, deadLetterReceiver);
            return null;
        }));
        try {
            List<Future<Void>> futureList = executor.invokeAll(callableDeadLetterMessages);
            for (Future<Void> future : futureList) {
                future.get();
            }
        } catch (InterruptedException | ExecutionException ex) {
            log.error("Interrupted during processing callableDeadLetterMessage: ", ex);
            Thread.currentThread().interrupt();
        }
    }
    executor.shutdown();
    deadLetterReceiver.close();
}
How can I stop it reading the same message again in the next batch and read the next available messages instead?
Note: I'm not abandoning the message from the DLQ (deadLetterReceiver.abandon(message.getLockToken());).
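(Worth noting for readers: with this SDK, a message received in PEEK_LOCK mode becomes available again once its lock expires unless it is settled, which would produce exactly the repeated batches described above. Below is a minimal sketch of a handler that settles each message after processing; the body of handleDeadLetterMessage here is hypothetical, only its signature comes from the code above.)
// Sketch: completing the message settles it and removes it from the DLQ,
// so it is not redelivered when its lock expires.
private void handleDeadLetterMessage(IMessage message, IMessageReceiver receiver) throws Exception {
    // ... process the message payload ...
    receiver.complete(message.getLockToken());
}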
Related
I have a Spring Boot app, where I receive one single message from an Azure Service Bus queue session.
The code is:
@Autowired
ServiceBusSessionReceiverAsyncClient apiMessageQueueIntegrator;
.
.
.
Mono<ServiceBusReceiverAsyncClient> receiverMono = apiMessageQueueIntegrator.acceptSession(sessionid);
Disposable subscription = Flux.usingWhen(receiverMono,
        receiver -> receiver.receiveMessages(),
        receiver -> Mono.fromRunnable(() -> receiver.close()))
    .subscribe(message -> {
        // Process message.
        logger.info(String.format("Message received from queue. Session id: %s. Contents: %s%n",
                message.getSessionId(), message.getBody()));
        receivedMessage.setReceivedMessage(message);
        timeoutCheck.countDown();
    }, error -> {
        logger.info("Queue error occurred: " + error);
    });
As I am receiving only one message from the session, I use a CountDownLatch(1) to dispose of the subscription when I have received the message.
The documentation of the library says that it is possible to use Mono.usingWhen instead of Flux.usingWhen if I only expect one message, but I cannot find any examples of this anywhere, and I have not been able to figure out how to rewrite this code to do this.
How would the pasted code look if I were to use Mono.usingWhen instead?
Thank you conniey. Posting your suggestion as an answer to help other community members.
By default receiveMessages() is a Flux because we imagine the messages from a session to be "infinitely long". In your case, you only want the first message in the stream, so we use the next() operator.
The usage of the countdown latch is probably not the best approach. In the sample, we had one there so that the program didn't end before the messages were received. .subscribe is not a blocking call; it sets up the handlers and moves on to the next line of code.
Mono<ServiceBusReceiverAsyncClient> receiverMono = sessionReceiver.acceptSession("greetings-id");
Mono<ServiceBusReceivedMessage> singleMessageMono = Mono.usingWhen(receiverMono,
        receiver -> {
            // Anything you wish to do with the receiver.
            // In this case we only want to take the first message, so we use the "next" operator.
            // This returns a Mono.
            return receiver.receiveMessages().next();
        },
        receiver -> Mono.fromRunnable(() -> receiver.close()));

try {
    // Turns this into a blocking call. .block() waits indefinitely, so we have a timeout.
    ServiceBusReceivedMessage message = singleMessageMono.block(Duration.ofSeconds(30));
    if (message != null) {
        // Process message.
    }
} catch (Exception error) {
    System.err.println("Error occurred: " + error);
}
You can refer to the GitHub issue: ServiceBusSessionReceiverAsyncClient - Mono instead of Flux.
I'm trying to implement a Java application with Redis Streams where every consumer consumes exactly one message: like a pipeline/queue where each consumer takes exactly one message, processes it, and once finished takes the next message in the stream that has not been processed so far.
What works is that every message is consumed by exactly one consumer (with xreadgroup).
I started with this tutorial from Redis Labs.
The code:
RedisClient redisClient = RedisClient.create("redis://pw@host:port");
StatefulRedisConnection<String, String> connection = redisClient.connect();
RedisCommands<String, String> syncCommands = connection.sync();

try {
    syncCommands.xgroupCreate(XReadArgs.StreamOffset.from(STREAM_KEY, "0-0"), ID_READ_GROUP);
} catch (RedisBusyException redisBusyException) {
    System.out.println(String.format("\t Group '%s' already exists", ID_READ_GROUP));
}

System.out.println("Waiting for new messages ");
while (true) {
    List<StreamMessage<String, String>> messages = syncCommands.xreadgroup(
            Consumer.from(ID_READ_GROUP, ID_WORKER), XReadArgs.StreamOffset.lastConsumed(STREAM_KEY));
    if (!messages.isEmpty()) {
        System.out.println(messages.size());
        for (StreamMessage<String, String> message : messages) {
            System.out.println(message.getId());
            Thread.sleep(5000);
            syncCommands.xack(STREAM_KEY, ID_READ_GROUP, message.getId());
        }
    }
}
My current problem is that a consumer takes more than one message from the queue; in some situations the other consumers are left waiting while one consumer processes 10 messages at once.
Thanks in advance!
Notice that XREADGROUP accepts a COUNT argument.
See the Javadoc for how to do it with the Lettuce xreadgroup, by passing XReadArgs.
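For illustration, a minimal sketch based on the Lettuce API (ID_READ_GROUP, ID_WORKER, and STREAM_KEY are the constants from the question), limiting each read to a single message:
// Ask for at most one message per XREADGROUP call by passing count(1).
List<StreamMessage<String, String>> messages = syncCommands.xreadgroup(
        Consumer.from(ID_READ_GROUP, ID_WORKER),
        XReadArgs.Builder.count(1),
        XReadArgs.StreamOffset.lastConsumed(STREAM_KEY));
With COUNT 1, each consumer claims only one pending entry at a time, so the work is spread across consumers instead of one consumer grabbing a large batch.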
I am learning about DDS using RTI (still very new to this topic). I am creating a Publisher that writes to a Subscriber, and the Subscriber outputs the message. One thing I would like to simulate is dropped packets. As an example, let's say the Publisher writes to the Subscriber 4 times a second, but the Subscriber can only read once a second (the most recent message).
As of now, I am able to create a Publisher & Subscriber without any packets being dropped.
I read through some documentation and found HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS.
Correct me if I am wrong, but I was under the impression that this would essentially keep the most recent message received from the Publisher. Instead, the Subscriber is receiving all the messages but delayed by 1 second.
I don't want to cache the messages but drop them. How can I simulate the "dropped" packets?
BTW: I don't want to change anything in the .xml file. I want to do it programmatically.
Here are some snippets of my code.
//Publisher.java
//writer = (MsgDataWriter) publisher.create_datawriter(topic, Publisher.DATAWRITER_QOS_DEFAULT, null /* listener */, StatusKind.STATUS_MASK_NONE);
writer = (MsgDataWriter) publisher.create_datawriter(topic, write, null,
        StatusKind.STATUS_MASK_ALL);
if (writer == null) {
    System.err.println("create_datawriter error\n");
    return;
}

// --- Write --- //
String[] messages = {"1", "2", "test", "3"};

/* Create data sample for writing */
Msg instance = new Msg();
InstanceHandle_t instance_handle = InstanceHandle_t.HANDLE_NIL;
/* For a data type that has a key, if the same instance is going to be
   written multiple times, initialize the key here
   and register the keyed instance prior to writing */
//instance_handle = writer.register_instance(instance);

final long sendPeriodMillis = (long) (.25 * 1000); // 4 per second

for (int count = 0;
        (sampleCount == 0) || (count < sampleCount);
        ++count) {
    if (count == 11) {
        return;
    }
    System.out.println("Writing Msg, count " + count);

    /* Modify the instance to be written here */
    instance.message = messages[count % messages.length]; // cycle through the sample texts
    instance.sender = "some user";

    /* Write data */
    writer.write(instance, instance_handle);
    try {
        Thread.sleep(sendPeriodMillis);
    } catch (InterruptedException ix) {
        System.err.println("INTERRUPTED");
        break;
    }
}
//writer.unregister_instance(instance, instance_handle);
} finally {
    // --- Shutdown --- //
    if (participant != null) {
        participant.delete_contained_entities();
        DomainParticipantFactory.TheParticipantFactory.delete_participant(participant);
    }
//Subscriber
// Customize time & QoS for receiving info
DataReaderQos readerQ = new DataReaderQos();
subscriber.get_default_datareader_qos(readerQ);
Duration_t minTime = new Duration_t(1, 0);
readerQ.time_based_filter.minimum_separation.sec = minTime.sec;
readerQ.time_based_filter.minimum_separation.nanosec = minTime.nanosec;
readerQ.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
readerQ.reliability.kind = ReliabilityQosPolicyKind.BEST_EFFORT_RELIABILITY_QOS;
reader = (MsgDataReader) subscriber.create_datareader(topic, readerQ, listener,
        StatusKind.STATUS_MASK_ALL);
if (reader == null) {
    System.err.println("create_datareader error\n");
    return;
}

// --- Wait for data --- //
final long receivePeriodSec = 1;
for (int count = 0;
        (sampleCount == 0) || (count < sampleCount);
        ++count) {
    //System.out.println("Msg subscriber sleeping for " + receivePeriodSec + " sec...");
    try {
        Thread.sleep(receivePeriodSec * 1000); // in millisec
    } catch (InterruptedException ix) {
        System.err.println("INTERRUPTED");
        break;
    }
}
} finally {
// --- Shutdown --- //
On the subscriber side, it is useful to distinguish three different types of interaction between your application and the DDS Domain: polling, Listeners and WaitSets
Polling means that the application decides when it reads available data. This is often a time-driven mechanism.
Listeners are basically callback functions that get invoked by an infrastructure thread, as soon as data becomes available, to read that data.
WaitSets implement a mechanism similar to the socket select mechanism: an application thread waits (blocks) for data to become available and after unblocking reads the new data.
Your application uses a Listener mechanism. You did not post the implementation of the callback function, but from the overall picture, it is likely that the listener implementation immediately tries to read the data at the moment that the callback is invoked. There is no time for the data to be "pushed out" or "dropped" as you called it. This reading happens in a different thread than your main thread, which is sleeping most of the time. You can find a Knowledge Base article about it here.
The only thing that is not clear is the impact of the time_based_filter QoS setting. You did not mention that in your question, but it does show up in the code. I would expect this to filter out some of your samples. That is a different mechanism than the pushing out of the history though. The behavior for the time based filter may be implemented differently for different DDS implementations. Which product do you use?
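If the goal is to keep only the newest sample and read it once per second, one option is to poll instead of using a listener. Below is a rough, untested sketch assuming RTI Connext's Java API and the generated Msg types from the question; the exact calls should be checked against your product's documentation:
// Keep only the most recent sample; older unread samples are pushed out of history.
DataReaderQos readerQ = new DataReaderQos();
subscriber.get_default_datareader_qos(readerQ);
readerQ.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
readerQ.history.depth = 1;
readerQ.reliability.kind = ReliabilityQosPolicyKind.BEST_EFFORT_RELIABILITY_QOS;

// No listener: the application polls, so samples arriving between polls are
// overwritten in the reader's history rather than handed to a callback.
MsgDataReader reader = (MsgDataReader) subscriber.create_datareader(
        topic, readerQ, null, StatusKind.STATUS_MASK_NONE);

Msg sample = new Msg();
SampleInfo info = new SampleInfo();
while (true) {
    try {
        Thread.sleep(1000); // poll once per second
        reader.take_next_sample(sample, info); // throws RETCODE_NO_DATA when empty
        if (info.valid_data) {
            System.out.println("Latest message: " + sample.message);
        }
    } catch (RETCODE_NO_DATA noData) {
        // nothing arrived during the last second
    } catch (InterruptedException ix) {
        Thread.currentThread().interrupt();
        break;
    }
}
With depth 1, the three intermediate samples written each second are "dropped" by the reader's history before the application reads, which is the behavior the question asks to simulate.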
I have a message producer on my local machine and a broker on a remote host (AWS).
After sending a message from the producer, I wait and call the console consumer on the remote host, and I see excessive logs but without the value from the producer.
The producer flushes the data after calling the send method, and everything is configured correctly.
How can I check that the broker received the message from the producer, and whether the producer received the answer?
The send method asynchronously sends the message to the topic and returns a Future of RecordMetadata:
java.util.concurrent.Future<RecordMetadata> send(ProducerRecord<K,V> record)
Asynchronously sends a record to a topic
After the flush call, check that the Future has completed by calling the isDone method (for example, Future.isDone() == true).
Invoking this method makes all buffered records immediately available to send (even if linger.ms is greater than 0) and blocks on the completion of the requests associated with these records. The post-condition of flush() is that any previously sent record will have completed (e.g. Future.isDone() == true). A request is considered completed when it is successfully acknowledged according to the acks configuration you have specified or else it results in an error.
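Put together, a minimal sketch of that check (assuming a configured producer and a record, as in the examples below):
// send() is asynchronous; flush() blocks until all buffered records complete.
Future<RecordMetadata> future = producer.send(record);
producer.flush();
System.out.println("Completed: " + future.isDone()); // true once flush() returns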
The RecordMetadata contains the offset and the partition:
public int partition()
The partition the record was sent to
public long offset()
The offset of the record, or -1 if hasOffset() returns false.
Or you can use a Callback to verify whether the message was sent to the topic or not.
Fully non-blocking usage can make use of the Callback parameter to provide a callback that will be invoked when the request is complete.
Here is a clear example from the docs:
ProducerRecord<byte[], byte[]> record = new ProducerRecord<byte[], byte[]>("the-topic", key, value);
producer.send(record,
    new Callback() {
        public void onCompletion(RecordMetadata metadata, Exception e) {
            if (e != null) {
                e.printStackTrace();
            } else {
                System.out.println("The offset of the record we just sent is: " + metadata.offset());
            }
        }
    });
You can also call get() on the Future returned by send(), which blocks until the RecordMetadata is available:
ProducerRecord<String, String> record =
        new ProducerRecord<>("SampleTopic", "SampleKey", "SampleValue");
try {
    RecordMetadata metadata = producer.send(record).get(); // blocks until acknowledged
    System.out.println("Sent to partition " + metadata.partition() + ", offset " + metadata.offset());
} catch (Exception e) {
    e.printStackTrace();
}
Use exactly-once delivery and you won't need to worry about whether your message arrived or not: https://www.baeldung.com/kafka-exactly-once, https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
I have an application with several ActiveMQ queues. I would like to list the messages in them and remove any message from any of the queues based on the id of the message.
Here is my code so far.
public void killMessage(String id) {
    try {
        ActiveMQConnection activeMqConnection = (ActiveMQConnection) connectionFactory.createConnection();
        activeMqConnection.start();
        DestinationSource destinationSource = activeMqConnection.getDestinationSource();
        Set<ActiveMQQueue> queues = destinationSource.getQueues();
        QueueSession queueSession = activeMqConnection.createQueueSession(true, Session.CLIENT_ACKNOWLEDGE);
        for (ActiveMQQueue queue : queues) {
            QueueBrowser browser = queueSession.createBrowser(queue);
            Enumeration<?> messagesInQueue = browser.getEnumeration();
            while (messagesInQueue.hasMoreElements()) {
                Message message = (Message) messagesInQueue.nextElement();
                System.out.println("Current id: " + message.getJMSMessageID());
                if (message.getJMSMessageID().equals(id)) {
                    System.out.println("-----message id found-------");
                }
            }
        }
        activeMqConnection.close();
    } catch (JMSException e) {
        e.printStackTrace();
    }
}
I iterate through all the queues, then I iterate through all messages in each queue. I even find the message I want to delete, but I cannot find a way to remove it from the queue.
Edit:
I also created a consumer, though I am not sure how the consumer should make the messages disappear from the queue. My attempt below has no effect at all: the messages remain in the queue, I get no error message, and no exception is thrown that would indicate the consumer did not match a message:
if (message.getJMSMessageID().equals(id)) {
    System.out.println("-----message id found-------");
    MessageConsumer consumer = queueSession.createConsumer(queue, "JMSMessageID='" + id + "'");
    consumer.receive();
    consumer.close();
}
If you want to use the JMS API to do this then you'll have to create a consumer and use a selector to consume the message with the ID that you want. A queue browser cannot consume messages; it can only browse them.
In the code you pasted you're creating a transacted session which means when you consume the message you'll need to commit the session otherwise the message will never be acknowledged. That said, you're probably better off creating a non-transacted session with AUTO_ACKNOWLEDGE instead.
Also, you probably want to call receive(int) (i.e. with a timeout) so that if the selector can't find the message for some reason your application doesn't just sit there forever waiting on the method to return.
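Putting those three points together, a sketch of what the consuming code could look like (non-transacted AUTO_ACKNOWLEDGE session, a JMSMessageID selector, and a receive timeout; variable names follow the question's code):
// Consume, and thereby remove, the message with the given JMS message ID.
QueueSession session = activeMqConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(queue, "JMSMessageID = '" + id + "'");
Message removed = consumer.receive(5000); // wait at most 5 seconds
if (removed == null) {
    System.out.println("No message matched id " + id);
}
consumer.close();
session.close();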