Offline messages are not consumed in Moquette with Paho Client - java

I have an issue with consuming offline MQTT messages from a Moquette server through the Eclipse Paho client.
These are the steps I followed:
Created and spun up the Moquette MQTT broker.
Created a simple MQTT consumer application using the Eclipse Paho client.
Set the consumer to consume data on topic "devices/reported/#" with QoS 1 and CleanSession: false.
Created a simple MQTT data publisher to publish data to the MQTT broker using Eclipse Paho.
Used the MQTT data publisher to publish messages to the "devices/reported/client_1" topic with QoS 1.
The above steps were successful without any issue.
Then I stopped my consumer application and sent MQTT data to the broker on the same topic using my publisher application. The server received these messages, but at that moment there was no consumer to consume them since I had stopped my consumer.
Then I started my consumer application again. It connected to the broker successfully, but it did not receive any of the messages I had sent to the broker while the consumer was shut down.
Do I need any specific configuration on my Moquette server to persist data (with clean session: false)?
Or am I missing something?
Please find my sample code below.
Moquette Server initialization
package com.gbids.mqtt.moquette.main;
import com.gbids.mqtt.moquette.server.PublishInterceptor;
import io.moquette.interception.InterceptHandler;
import io.moquette.server.Server;
import io.moquette.server.config.IConfig;
import io.moquette.server.config.MemoryConfig;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
public class ServerLauncher {
public static void main(String[] args) throws IOException {
Properties props = new Properties();
final IConfig configs = new MemoryConfig(props);
final Server mqttBroker = new Server();
final List<? extends InterceptHandler> userHandlers = Arrays.asList(new PublishInterceptor());
mqttBroker.startServer(configs, userHandlers);
System.out.println("moquette mqtt broker started, press ctrl-c to shutdown..");
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.out.println("stopping moquette mqtt broker..");
mqttBroker.stopServer();
System.out.println("moquette mqtt broker stopped");
}
});
}
}
MQTT Consumer
package com.gbids.mqtt.moquette.main;
import org.eclipse.paho.client.mqttv3.*;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
public class ConsumerLauncher implements MqttCallback {
private static final String topicPrefix = "devices/reported";
private static final String broker = "tcp://0.0.0.0:1883";
private static final String clientIdPrefix = "consumer";
public static void main(String[] args) throws MqttException {
final String clientId = "consumer_1";
MqttClient sampleClient = new MqttClient(broker, clientId, new MemoryPersistence());
MqttConnectOptions connOpts = new MqttConnectOptions();
connOpts.setCleanSession(false);
sampleClient.connect(connOpts);
sampleClient.subscribe(topicPrefix + "/#", 1);
sampleClient.setCallback(new ConsumerLauncher());
}
public void connectionLost(Throwable throwable) {
System.out.println("Consumer connection lost : " + throwable.getMessage());
}
public void messageArrived(String s, MqttMessage mqttMessage) throws Exception {
System.out.println("Message arrived from topic : " + s + " | Message : " + new String(mqttMessage.getPayload()) + " | Message ID : " +mqttMessage.getId());
}
public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
System.out.println("Delivery completed from : " + clientIdPrefix + "_1");
}
}
MQTT Publisher
package com.gbids.mqtt.moquette.main;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
public class ClientLauncher {
private static final String content = "{\"randomData\": 25}";
private static final String willContent = "Client disconnected unexpectedly";
private static final String broker = "tcp://0.0.0.0:1883";
private static final String clientIdPrefix = "client";
public static void main(String[] args) throws Exception{
sendDataWithQOSOne();
System.exit(0);
}
private static void sendDataWithQOSOne(){
try {
final String clientId = "client_1";
MqttClient sampleClient = new MqttClient(broker, clientId, new MemoryPersistence());
MqttConnectOptions connOpts = new MqttConnectOptions();
connOpts.setCleanSession(false); // for publisher - this is not needed I think
sampleClient.connect(connOpts);
MqttMessage message = new MqttMessage(content.getBytes());
message.setQos(1);
final String topic = "devices/reported/" + clientId;
sampleClient.publish(topic, message);
System.out.println("Message published from : " + clientId + " with payload of : " + content);
sampleClient.disconnect();
} catch (MqttException me) {
me.printStackTrace();
}
}
}

In your case you need to set the retained flag to true when creating the MqttMessage in your ClientLauncher (publisher). The default value is false, as noted in the documentation.
...
message.setRetained(true);
...
Setting this flag causes the message to be retained on the broker and delivered to newly connecting subscribers. Please be aware that the broker only keeps the last retained message per topic; there is no way to retain more than one message for a specific topic.
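For completeness, here is a minimal sketch of a publisher with the retained flag set, modelled on the ClientLauncher above (the broker URL is assumed to be localhost; client id, topic, and payload are just the question's example values):
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
public class RetainedPublisher {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "client_1", new MemoryPersistence());
        client.connect();
        MqttMessage message = new MqttMessage("{\"randomData\": 25}".getBytes());
        message.setQos(1);
        message.setRetained(true); // the broker keeps only the last retained message per topic
        client.publish("devices/reported/client_1", message);
        client.disconnect();
    }
}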

Related

How to write Junit testcase for GCP pubsub Message Receiver in springboot application

I have implemented the GCP Pub/Sub message receiver in a Spring Boot application using the following approach: https://cloud.google.com/pubsub/docs/samples/pubsub-subscriber-concurrency-control. How do I write JUnit test cases for the implementation below in a Spring Boot application?
Attaching the implementation code:
import com.google.api.gax.core.ExecutorProvider;
import com.google.api.gax.core.InstantiatingExecutorProvider;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
public class SubscribeWithConcurrencyControlExample {
public static void main(String... args) throws Exception {
// TODO(developer): Replace these variables before running the sample.
String projectId = "your-project-id";
String subscriptionId = "your-subscription-id";
subscribeWithConcurrencyControlExample(projectId, subscriptionId);
}
public static void subscribeWithConcurrencyControlExample(
String projectId, String subscriptionId) {
ProjectSubscriptionName subscriptionName =
ProjectSubscriptionName.of(projectId, subscriptionId);
// Instantiate an asynchronous message receiver.
MessageReceiver receiver =
(PubsubMessage message, AckReplyConsumer consumer) -> {
// Handle incoming message, then ack the received message.
System.out.println("Id: " + message.getMessageId());
System.out.println("Data: " + message.getData().toStringUtf8());
consumer.ack();
};
Subscriber subscriber = null;
try {
// Provides an executor service for processing messages. The default `executorProvider` used
// by the subscriber has a default thread count of 5.
ExecutorProvider executorProvider =
InstantiatingExecutorProvider.newBuilder().setExecutorThreadCount(4).build();
// `setParallelPullCount` determines how many StreamingPull streams the subscriber will open
// to receive messages. It defaults to 1. `setExecutorProvider` configures an executor for the
// subscriber to process messages. Here, the subscriber is configured to open 2 streams for
// receiving messages, each stream creates a new executor with 4 threads to help process the
// message callbacks. In total 2x4=8 threads are used for message processing.
subscriber =
Subscriber.newBuilder(subscriptionName, receiver)
.setParallelPullCount(2)
.setExecutorProvider(executorProvider)
.build();
// Start the subscriber.
subscriber.startAsync().awaitRunning();
System.out.printf("Listening for messages on %s:\n", subscriptionName.toString());
// Allow the subscriber to run for 30s unless an unrecoverable error occurs.
subscriber.awaitTerminated(30, TimeUnit.SECONDS);
} catch (TimeoutException timeoutException) {
// Shut down the subscriber after 30s. Stop receiving messages.
subscriber.stopAsync();
}
}
}

Thingsboard: MQTT-Subscription to internal broker failed (Java/Paho)

I have some trouble subscribing to the topic v1/devices/me/telemetry. I have no problems subscribing to v1/devices/me/attributes using the Paho Java MQTT client. On the attributes topic I can get new attributes when I post them in the UI, so my Java program seems to run fine (see bottom).
I get the following at the console:
Subscriber running
checking
Mqtt Connecting to broker: tcp://192.168.1.25:1883
Mqtt Connected
MqttException (128)
MqttException (128)
at org.eclipse.paho.client.mqttv3.MqttClient.subscribe(MqttClient.java:438)
at org.eclipse.paho.client.mqttv3.MqttClient.subscribe(MqttClient.java:406)
at Test.MqttSubscriber.subscribe(MqttSubscriber.java:57)
at Test.MqttSubscriber.main(MqttSubscriber.java:30)
I guess that error code 128 means that the subscription was rejected.
What am I doing wrong? Publishing content to ThingsBoard on that topic is no problem. Do I have to enable the broker for publishing/subscribing somehow? Does the internal broker of ThingsBoard need a special command (JSON maybe) to grant a subscription? Or do I have to do it through the IoT Gateway (I understand that TB can push data to an external broker, but here a simple subscription is needed)? What alternatives do I have to get device telemetry from ThingsBoard using MQTT?
I hope someone can help :) Thank you!
The Code is (MqttSubscriber.java):
package Test;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
public class MqttSubscriber implements MqttCallback {
private static final String brokerUrl ="tcp://192.168.1.25:1883"; //Broker
private static final String clientId = "test"; //Client-ID
private static final String topic = "v1/devices/me/telemetry"; //Topic
private static final String user = "AT2"; // Accesstoken/User from Device in TB!
private static final String pw = "test";
private static final char[] password = pw.toCharArray();
public static void main(String[] args) {
System.out.println("Subscriber running");
new MqttSubscriber().subscribe(topic);
}
public void subscribe(String topic) {
MemoryPersistence persistence = new MemoryPersistence();
try
{
MqttClient sampleClient = new MqttClient(brokerUrl, clientId, persistence);
MqttConnectOptions connOpts = new MqttConnectOptions();
connOpts.setCleanSession(true);
connOpts.setUserName(user);
connOpts.setPassword(password);
System.out.println("checking");
System.out.println("Mqtt Connecting to broker: " + brokerUrl);
sampleClient.connect(connOpts);
if (sampleClient.isConnected()==true) System.out.println("Mqtt Connected");
else System.out.println("could not connect");
sampleClient.setCallback(this);
sampleClient.subscribe(topic);
System.out.println("Subscribed");
System.out.println("Listening");
} catch (MqttException me) {
System.out.println(me);
me.printStackTrace();
}
}
//Called when the client lost the connection to the broker
public void connectionLost(Throwable arg0) {
}
//Called when a outgoing publish is complete
public void deliveryComplete(IMqttDeliveryToken arg0) {
}
public void messageArrived(String topic, MqttMessage message) throws Exception {
System.out.println("| Topic:" + topic);
System.out.println("| Message: " +message.toString());
System.out.println("-------------------------------------------------");
}
}
As far as I can see, the problem is an unsatisfied QoS level.
A subscription without a QoS parameter defaults to QoS 1. If this QoS is not granted for the requested topic, the client throws this exception.
Excerpt from the Paho client; your call to subscribe(topic) eventually delegates to this subscribe method:
public void subscribe(String[] topicFilters, int[] qos, IMqttMessageListener[] messageListeners) throws MqttException {
IMqttToken tok = aClient.subscribe(topicFilters, qos, null, null, messageListeners);
tok.waitForCompletion(getTimeToWait());
int[] grantedQos = tok.getGrantedQos();
for (int i = 0; i < grantedQos.length; ++i) {
qos[i] = grantedQos[i];
}
if (grantedQos.length == 1 && qos[0] == 0x80) {
throw new MqttException(MqttException.REASON_CODE_SUBSCRIBE_FAILED);
}
}
So you have to check which QoS level the requested topic supports and subscribe with that QoS level. Because QoS 1 is rejected, I presume the topic is published with QoS 0.
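A minimal sketch of the change in the question's subscribe() method, assuming the telemetry topic only grants QoS 0 (the overload taking an explicit QoS is part of the Paho MqttClient API):
// Request QoS 0 explicitly instead of the default QoS 1, so the granted
// QoS cannot come back as 0x80 and subscribe() will not throw:
sampleClient.subscribe(topic, 0);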

JMS Queue receive causes application to crash

I've created a very simple JMS Queue example to send and receive messages. I have it set up to receive the messages after a certain number have been sent and then do work on them. After it receives all of the messages, trying to send more messages causes the application to crash.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.jms.*;
@Startup
@Singleton
public class JMSQueue {
/** SLF4J logger. */
@SuppressWarnings("unused")
private final Logger log = LoggerFactory.getLogger(JMSQueue.class);
@Resource(mappedName = "jms/__defaultQueue")
private Queue queue;
@Resource(mappedName = "jms/__defaultQueueConnectionFactory")
private QueueConnectionFactory factory;
private int count = 0;
private QueueConnection connection;
private QueueSession session;
private MessageProducer producer;
private QueueReceiver receiver;
public void init(){
try {
connection = factory.createQueueConnection();
session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
producer = session.createProducer(queue);
receiver = session.createReceiver(queue);
connection.start();
} catch (JMSException e) {
log.error("JMS Queue Initialization failed.", e);
}
}
public void sendMessage() throws JMSException {
String messageBody = "ping" + count;
Message request = session.createTextMessage(messageBody);
request.setJMSReplyTo(queue);
producer.send(request);
count++;
if (count >= 10) {
count = 0;
Message response = receiver.receive();
while (response != null){
String responseBody = ((TextMessage) response).getText();
log.debug("jms - " + responseBody);
try {
response = receiver.receive();
} catch(JMSException e){
response = null;
}
}
}
}
}
I run init once to create the connection, producer, and receiver, and then I run sendMessage 10 times. On the tenth call it prints the output of all ten received messages. If I then hit sendMessage a couple of times after that, my application crashes. I have tried changing it to create and close the connection after each message, which didn't change anything. I'm running a GlassFish application server and trying to use the queue to be notified of every REST call that users make.
It turns out the issue was that receive() was blocking indefinitely because no timeout was given. Adding a timeout of 1 millisecond solved the issue.
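For illustration, a sketch of the adjusted receive loop in sendMessage() (only the timeout argument is new; 1 ms is the value mentioned above):
// receive(long timeout) returns null when no message arrives within the timeout,
// so the loop drains the queue and then returns instead of blocking forever
Message response = receiver.receive(1);
while (response != null) {
    String responseBody = ((TextMessage) response).getText();
    log.debug("jms - " + responseBody);
    response = receiver.receive(1);
}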

ConfirmListener.handleNack is not invoked when exchange is missing

In my application I need to determine whether a message is successfully published into AMQP exchange or some error happens. It seems like Publisher Confirms were invented to address this issue so I started experimenting with them.
For my Java application I used com.rabbitmq:amqp-client:jar:3.5.4 and I chose a very simple scenario when the exchange (where I try to publish) is missing. I expected that ConfirmListener.handleNack is going to be invoked in such case.
Here's my Java code:
package wheleph.rabbitmq_tutorial.confirmed_publishes;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.ConfirmListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.concurrent.TimeoutException;
public class ConfirmedPublisher {
private static final Logger logger = LoggerFactory.getLogger(ConfirmedPublisher.class);
private final static String EXCHANGE_NAME = "confirmed.publishes";
public static void main(String[] args) throws IOException, InterruptedException, TimeoutException {
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost("localhost");
Connection connection = connectionFactory.newConnection();
Channel channel = connection.createChannel();
channel.confirmSelect();
channel.addConfirmListener(new ConfirmListener() {
public void handleAck(long deliveryTag, boolean multiple) throws IOException {
logger.debug(String.format("Received ack for %d (multiple %b)", deliveryTag, multiple));
}
public void handleNack(long deliveryTag, boolean multiple) throws IOException {
logger.debug(String.format("Received nack for %d (multiple %b)", deliveryTag, multiple));
}
});
for (int i = 0; i < 100; i++) {
String message = "Hello world" + channel.getNextPublishSeqNo();
channel.basicPublish(EXCHANGE_NAME, "", null, message.getBytes());
logger.info(" [x] Sent '" + message + "'");
Thread.sleep(2000);
}
channel.close();
connection.close();
}
}
However, that's not the case. The log shows that no callback is executed:
17:49:34,988 [main] ConfirmedPublisher - [x] Sent 'Hello world1'
Exception in thread "main" com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'confirmed.publishes' in vhost '/', class-id=60, method-id=40)
at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:195)
at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:309)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:657)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:640)
at com.rabbitmq.client.impl.ChannelN.basicPublish(ChannelN.java:631)
at wheleph.rabbitmq_tutorial.confirmed_publishes.ConfirmedPublisher.main(ConfirmedPublisher.java:38)
What's interesting is that publisher confirms work as expected when I use the Node.js library amqp-coffee (0.1.24).
Here's my NodeJS code:
var AMQP = require('amqp-coffee');
var connection = new AMQP({host: 'localhost'});
connection.setMaxListeners(0);
console.log('Connection started')
connection.publish('node.confirm.publish', '', 'some message', {deliveryMode: 2, confirm: true}, function(err) {
if (err && err.error && err.error.replyCode === 404) {
console.log('Got 404 error')
} else if (err) {
console.log('Got some error')
} else {
console.log('Message successfully published')
}
})
Here's the output, which indicates that the callback is invoked with the proper argument:
Connection started
Got 404 error
Am I using com.rabbitmq:amqp-client incorrectly, or is there some inconsistency in that library?
It turned out that my assumption was not correct and ConfirmListener.handleNack should not be invoked in this case.
Here's a relevant portion of AMQP messages for the scenario described in the question observed for amqp-coffee library:
ch#1 -> {#method<channel.open>(out-of-band=), null, ""}
ch#1 <- {#method<channel.open-ok>(channel-id=), null, ""}
ch#1 -> {#method<confirm.select>(nowait=false), null, ""}
ch#1 <- {#method<confirm.select-ok>(), null, ""}
ch#1 -> {#method<basic.publish>(ticket=0, exchange=node.confirm.publish, routing-key=, mandatory=false, immediate=false), #contentHeader<basic>(content-type=string/utf8, content-encoding=null, headers=null, delivery-mode=2, priority=null, correlation-id=null, reply-to=null, expiration=null, message-id=null, timestamp=null, type=null, user-id=null, app-id=null, cluster-id=null), "some message"}
ch#1 <- {#method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'node.confirm.publish' in vhost '/', class-id=60, method-id=40), null, ""}
ch#2 -> {#method<channel.open>(out-of-band=), null, ""}
ch#2 <- {#method<channel.open-ok>(channel-id=), null, ""}
ch#2 -> {#method<confirm.select>(nowait=false), null, ""}
ch#2 <- {#method<confirm.select-ok>(), null, ""}
You can see that:
After an unsuccessful publish, the channel is closed by the broker via channel.close, which includes the reason.
basic.nack is not sent.
The library automatically opens another channel for subsequent operations.
This behaviour can be implemented in Java using ShutdownListener:
package wheleph.rabbitmq_tutorial.confirmed_publishes;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.ShutdownListener;
import com.rabbitmq.client.ShutdownSignalException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.concurrent.TimeoutException;
public class ConfirmedPublisher {
private static final Logger logger = LoggerFactory.getLogger(ConfirmedPublisher.class);
private final static String EXCHANGE_NAME = "confirmed.publishes";
// Beware that proper synchronization of channel is needed because current approach may lead to race conditions
private volatile static Channel channel;
public static void main(String[] args) throws IOException, InterruptedException, TimeoutException {
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost("localhost");
connectionFactory.setPort(5672);
final Connection connection = connectionFactory.newConnection();
for (int i = 0; i < 100; i++) {
if (channel == null) {
createChannel(connection);
}
String message = "Hello world" + i;
channel.basicPublish(EXCHANGE_NAME, "", null, message.getBytes());
logger.info(" [x] Sent '" + message + "'");
Thread.sleep(2000);
}
channel.close();
connection.close();
}
private static void createChannel(final Connection connection) throws IOException {
channel = connection.createChannel();
channel.confirmSelect(); // This in fact is not necessary
channel.addShutdownListener(new ShutdownListener() {
public void shutdownCompleted(ShutdownSignalException cause) {
// Beware that proper synchronization is needed here
logger.debug("Handling channel shutdown...", cause);
if (cause.isInitiatedByApplication()) {
logger.debug("Shutdown is initiated by application. Ignoring it.");
} else {
logger.error("Shutdown is NOT initiated by application. Resetting the channel.");
/* We cannot re-initialize channel here directly because ShutdownListener callbacks run in the connection's thread,
so the call to createChannel causes a deadlock since it blocks waiting for a response (whilst the connection's thread
is stuck executing the listener). */
channel = null;
}
}
});
}
}
There are a few caveats:
Publisher confirms are not strictly necessary in this case because we don't use ConfirmListener or any other functionality specific to that approach. However, publisher confirms would be useful if we wanted to track which messages were successfully sent and which were not (see the sketch after this list).
If we launch ConfirmedPublisher and create the missing exchange some time later, all subsequent messages will be published successfully. However, all the previously failed messages are lost.
Additional synchronization is needed.
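As an example of the first caveat, here is a minimal sketch (not part of the code above) of tracking outcomes with publisher confirms via Channel.waitForConfirms from amqp-client; the 5000 ms timeout is an arbitrary value:
// After basicPublish on a channel where confirmSelect() was called, block until
// the broker acks; waitForConfirms(long) returns false if any message was nack-ed
// and throws TimeoutException if no confirm arrives within the timeout.
channel.basicPublish(EXCHANGE_NAME, "", null, message.getBytes());
boolean allConfirmed = channel.waitForConfirms(5000);
logger.info("Confirmed by broker: " + allConfirmed);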

JMS multiple durable subscription to one topic

I started with JMS a week ago. I created a JMS project using NetBeans, Maven, and GlassFish.
I have one producer and one durable consumer, and I want to add another durable consumer to the same topic (not queue). Is it possible to do so?
I ask because I want all the consumers to consume all the messages sent by the producer, whether the consumers are offline or not.
Any advice?
Thanks
import java.util.Date;
import javax.annotation.Resource;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.JMSRuntimeException;
import javax.jms.Topic;
public class DurableReceive {
@Resource(lookup = "jms/myDurableConnectionFactory")
private static ConnectionFactory connectionFactory;
@Resource(lookup = "jms/myNewTopic")
private static Topic topic;
public static void main(String[] args) {
Destination dest = (Destination) topic;
JMSConsumer consumer;
boolean messageReceived = false;
String message;
System.out.println("Waiting for messages...");
try (JMSContext context = connectionFactory.createContext();) {
consumer = context.createDurableConsumer(topic, "Subscriber1");
while (!messageReceived) {
message = consumer.receiveBody(String.class);
if (message != null) {
System.out.print("Received the following message: " + message);
System.out.println("(Received date: " + new Date() + ")\n");
} else {
messageReceived = true;
}
}
} catch (JMSRuntimeException e) {
System.err.println("##$%RuntimeException occurred: " + e.toString());
System.exit(1);
}
}
}
You can set a different clientID for each durable consumer. The JMS broker uses the combination of subscription name and clientID to identify the unique subscription (so if each subscriber has a unique clientID, it can receive its own copy of the messages). You can set the clientID on your JMSContext.
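For example, a second durable subscriber to the same topic could look like this minimal sketch, assuming JMS 2.0 (the clientID and subscription name are arbitrary example values):
// A distinct clientID + subscription name pair identifies this durable subscription
try (JMSContext context = connectionFactory.createContext()) {
    context.setClientID("durable-client-2"); // must be set before the durable consumer is created
    JMSConsumer consumer = context.createDurableConsumer(topic, "Subscriber2");
    String message = consumer.receiveBody(String.class);
    System.out.println("Subscriber2 received: " + message);
}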
