Mqtt Client Paho close issue (Java)

I have:
MqttAsyncClient mq;
...
mq = new MqttAsyncClient(myServer1,"app1");
mq.connect();
...
//(1)
//doing something with mq (pub/sub)
...
mq.disconnect();
mq.close();
//(2)
Now I am using a monitoring console and I see:
In (1), 3 MQTT threads:
MQTT REC, MQTT SND and MQTT Call
In (2), 2 MQTT threads:
MQTT SND and MQTT Call
After a few more seconds, only 1 thread:
MQTT Call
The Call thread is never stopped.
How come?

Ensure the async client has been disconnected before invoking the close() method, otherwise the async thread will block forever. You can handle it like this (same idea as @Tom and @Mehmet Ince):
IMqttToken token = mq.disconnect();
int count = 0;
while (count++ < 5) {
    if (token.isComplete()) {
        mq.close();
        break;
    }
    try {
        Thread.sleep(2000L);
    } catch (Exception e) {
        // TODO
    }
}
if (count > 5) {
    mq.disconnectForcibly();
    mq.close();
}
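Alternatively (not part of the original answers), you can let the disconnect token block with a timeout instead of polling, since IMqttToken exposes waitForCompletion(long). A minimal sketch, with an arbitrary 10-second timeout:

import org.eclipse.paho.client.mqttv3.IMqttToken;
import org.eclipse.paho.client.mqttv3.MqttException;

// Sketch: wait for the graceful disconnect to complete, fall back to a
// forced disconnect, and only then close the client so MQTT Call can exit.
try {
    IMqttToken token = mq.disconnect();
    token.waitForCompletion(10_000);   // throws MqttException on timeout or failure
    mq.close();
} catch (MqttException e) {
    try {
        mq.disconnectForcibly();
        mq.close();
    } catch (MqttException ignored) {
        // nothing more to do here
    }
}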

I think you should use: MqttClient client = new MqttClient(...) and then call client.connect();
because internally it will invoke aClient.connect(options, null, null).waitForCompletion(getTimeToWait()); (you can check the source code),
so it makes sure the connection has really completed.
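For illustration, a minimal synchronous sketch (server URI, client id and topic are placeholders): with MqttClient every call blocks until the operation has finished, so close() is only reached after the disconnect is really done.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SyncMqttExample {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://localhost:1883", "app1");
        client.connect();                                          // blocks until connected
        client.publish("some/topic", new MqttMessage("hello".getBytes()));
        client.disconnect();                                       // blocks until disconnected
        client.close();                                            // safe: no background work left
    }
}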

Azure ServiceBusSessionReceiverAsyncClient - Mono instead of Flux

I have a Spring Boot app where I receive a single message from an Azure Service Bus queue session.
The code is:
@Autowired
ServiceBusSessionReceiverAsyncClient apiMessageQueueIntegrator;
.
.
.
Mono<ServiceBusReceiverAsyncClient> receiverMono = apiMessageQueueIntegrator.acceptSession(sessionid);
Disposable subscription = Flux.usingWhen(receiverMono,
        receiver -> receiver.receiveMessages(),
        receiver -> Mono.fromRunnable(() -> receiver.close()))
    .subscribe(message -> {
        // Process message.
        logger.info(String.format("Message received from queue. Session id: %s. Contents: %s%n",
                message.getSessionId(), message.getBody()));
        receivedMessage.setReceivedMessage(message);
        timeoutCheck.countDown();
    }, error -> {
        logger.info("Queue error occurred: " + error);
    });
As I am receiving only one message from the session, I use a CountDownLatch(1) to dispose of the subscription when I have received the message.
The documentation of the library says that it is possible to use Mono.usingWhen instead of Flux.usingWhen if I only expect one message, but I cannot find any examples of this anywhere, and I have not been able to figure out how to rewrite this code to do this.
How would the pasted code look if I were to use Mono.usingWhen instead?
Thank you conniey. Posting your suggestion as an answer to help other community members.
By default receiveMessages() is a Flux because we imagine the messages from a session to be "infinitely long". In your case, you only want the first message in the stream, so we use the next() operator.
The usage of the countdown latch is probably not the best approach. In the sample, we had one there so that the program didn't end before the messages were received. .subscribe is not a blocking call; it sets up the handlers and moves on to the next line of code.
Mono<ServiceBusReceiverAsyncClient> receiverMono = sessionReceiver.acceptSession("greetings-id");
Mono<ServiceBusReceivedMessage> singleMessageMono = Mono.usingWhen(receiverMono,
        receiver -> {
            // Anything you wish to do with the receiver.
            // In this case we only want to take the first message, so we use the "next" operator.
            // This returns a Mono.
            return receiver.receiveMessages().next();
        },
        receiver -> Mono.fromRunnable(() -> receiver.close()));

try {
    // Turns this into a blocking call. .block() waits indefinitely, so we have a timeout.
    ServiceBusReceivedMessage message = singleMessageMono.block(Duration.ofSeconds(30));
    if (message != null) {
        // Process message.
    }
} catch (Exception error) {
    System.err.println("Error occurred: " + error);
}
You can refer to the GitHub issue: ServiceBusSessionReceiverAsyncClient - Mono instead of Flux.

Flow hangs when IdempotentReceiverInterceptor discards the message (after the 4th message)

I have following flow:
return flow -> flow.channel(inputChannel())
        ...
        .gateway(childFlow, addMyInterceptor(str)); // by name
}

Consumer<GatewayEndpointSpec> addMyInterceptor(String objectIdHeader) {
    return endpointSpec -> endpointSpec.advice(addMyInterceptorInternal(objectIdHeader))
            .errorChannel(errorChannel());
}

default IdempotentReceiverInterceptor addMyInterceptorInternal(String header) {
    MessageProcessor<String> headerSelector = message -> headerExpression(header).apply(message);
    var interceptor = new IdempotentReceiverInterceptor(new MetadataStoreSelector(headerSelector, idempotencyStore()));
    interceptor.setDiscardChannel(idempotentDiscardChannel());
    return interceptor;
}
When IdempotentReceiverInterceptor detects that a message is a duplicate, I see that the application hangs after the 4th duplicated message. I understand that this is because the gateway expects a response (like here: PubSubInboundChannelAdapter stops to receive messages after 4th message), but I have no idea how to return a result from the interceptor.
Could you please explain it to me?
As long as all channels are direct (the default), i.e. there are no async handoffs in the flow using queue or executor channels, set the gateway's replyTimeout to 0 when the flow might not return a reply; see the sketch below.
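For example, a sketch based on your addMyInterceptor method (assuming GatewayEndpointSpec exposes replyTimeout, as the current Java DSL does):

Consumer<GatewayEndpointSpec> addMyInterceptor(String objectIdHeader) {
    return endpointSpec -> endpointSpec
            .advice(addMyInterceptorInternal(objectIdHeader))
            .replyTimeout(0L)   // don't wait for a reply that a discarded message never produces
            .errorChannel(errorChannel());
}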

How to ensure messages reach the Kafka broker?

I have a message producer on my local machine and a broker on a remote host (AWS).
After sending a message from the producer,
I wait and call the console consumer on the remote host and
see excessive logs,
but without the value from the producer.
The producer flushes the data after calling the send method.
Everything is configured correctly.
How can I check that the broker received the message from the producer, and that the producer received the acknowledgement?
The send method asynchronously sends the message to the topic and
returns a Future of RecordMetadata:
java.util.concurrent.Future<RecordMetadata> send(ProducerRecord<K,V> record)
Asynchronously sends a record to a topic.
After the flush call,
check that the Future has completed by calling the isDone method
(for example, Future.isDone() == true).
Invoking this method makes all buffered records immediately available to send (even if linger.ms is greater than 0) and blocks on the completion of the requests associated with these records. The post-condition of flush() is that any previously sent record will have completed (e.g. Future.isDone() == true). A request is considered completed when it is successfully acknowledged according to the acks configuration you have specified or else it results in an error.
The RecordMetadata contains the offset and the partition:
public int partition()
The partition the record was sent to.
public long offset()
The offset of the record, or -1 if hasOffset() returns false.
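Putting those pieces together, a minimal sketch (topic, key and value are placeholders, and a configured KafkaProducer<String, String> is assumed):

import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Sketch: send a record, flush, then inspect the completed Future for partition/offset.
static void sendAndVerify(KafkaProducer<String, String> producer) throws Exception {
    Future<RecordMetadata> future =
            producer.send(new ProducerRecord<>("the-topic", "key", "value"));
    producer.flush();                                     // blocks until all buffered sends complete
    System.out.println("Completed: " + future.isDone());  // true after flush()
    RecordMetadata metadata = future.get();               // throws if the send failed
    System.out.println("partition=" + metadata.partition() + ", offset=" + metadata.offset());
}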
Or you can also use a Callback to check whether the message was sent to the topic or not.
Fully non-blocking usage can make use of the Callback parameter to provide a callback that will be invoked when the request is complete.
Here is a clear example from the docs:
ProducerRecord<byte[],byte[]> record = new ProducerRecord<byte[],byte[]>("the-topic", key, value);
producer.send(record,
        new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    e.printStackTrace();
                } else {
                    System.out.println("The offset of the record we just sent is: " + metadata.offset());
                }
            }
        });
You can also call get() on the Future returned by send(), which blocks until the RecordMetadata is available:
ProducerRecord<String, String> record =
        new ProducerRecord<>("SampleTopic", "SampleKey", "SampleValue");
try {
    producer.send(record).get();
} catch (Exception e) {
    e.printStackTrace();
}
Use exactly-once delivery and you won't need to worry about whether your message arrived or not: https://www.baeldung.com/kafka-exactly-once, https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
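For reference, a minimal sketch of an idempotent producer configuration, the building block of exactly-once semantics (the broker address is a placeholder; full exactly-once pipelines additionally use transactions, as the linked articles explain):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch: with idempotence enabled, broker-side deduplication prevents retries from creating duplicates.
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-host:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");  // retries cannot create duplicates
props.put(ProducerConfig.ACKS_CONFIG, "all");                 // wait for all in-sync replicas
KafkaProducer<String, String> producer = new KafkaProducer<>(props);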

Vert.x multi-thread web-socket

I have a simple Vert.x app:
public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40));
        Router router = Router.router(vertx);
        long main_pid = Thread.currentThread().getId();
        Handler<ServerWebSocket> wsHandler = serverWebSocket -> {
            if (!serverWebSocket.path().equalsIgnoreCase("/ws")) {
                serverWebSocket.reject();
            } else {
                long socket_pid = Thread.currentThread().getId();
                serverWebSocket.handler(buffer -> {
                    String str = buffer.getString(0, buffer.length());
                    long handler_pid = Thread.currentThread().getId();
                    log.info("Got ws msg: " + str);
                    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    serverWebSocket.writeFinalTextFrame(res);
                });
            }
        };
        vertx
            .createHttpServer()
            .websocketHandler(wsHandler)
            .listen(8080);
    }
}
When I connect to this server with multiple clients, I see that it works in one thread. But I want to handle each client connection in parallel. How should I change this code to do that?
This:
new VertxOptions().setWorkerPoolSize(40).setInternalBlockingPoolSize(40)
looks like you're trying to create your own HTTP connection pool, which is likely not what you really want.
The idea of Vert.x and other non-blocking, event-loop based frameworks is that we don't attempt 1 thread -> 1 connection affinity; rather, when a request currently being served by the event-loop thread is waiting for IO (e.g. the response from a DB), that event-loop thread is freed to service another connection. This allows a single event-loop thread to service multiple connections in a concurrent-like fashion.
If you want to fully utilise all cores on your machine, and you're only going to be running a single verticle, then set the number of instances to the number of cores when you deploy your verticle, i.e.:
Vertx.vertx().deployVerticle("MyVerticle", new DeploymentOptions().setInstances(Runtime.getRuntime().availableProcessors()));
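For that to work, the websocket server needs to live in a verticle. A minimal sketch (the class name is made up; the API matches the Vert.x 3.x code in the question):

import io.vertx.core.AbstractVerticle;

// Sketch: each deployed instance of this verticle gets its own event loop,
// so deploying availableProcessors() instances spreads connections across all cores.
public class MyVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer()
            .websocketHandler(ws -> {
                if (!ws.path().equalsIgnoreCase("/ws")) {
                    ws.reject();
                } else {
                    ws.handler(buffer -> ws.writeFinalTextFrame(
                            "thread " + Thread.currentThread().getId() + " got: " + buffer.toString()));
                }
            })
            .listen(8080);
    }
}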
Vert.x is a reactive framework, which means that it uses a single-threaded model to handle all your application load. This model is known to scale better than the threaded model.
The key point to know is that code you put in a handler must never block (like your Thread.sleep), since it would block the main thread. If you have blocking code (say, for example, a JDBC call) you should wrap your blocking code in an executeBlocking handler, e.g.:
serverWebSocket.handler(buffer -> {
    String str = buffer.getString(0, buffer.length());
    long handler_pid = Thread.currentThread().getId();
    log.info("Got ws msg: " + str);
    String res = String.format("(req:%s)main:%d sock:%d handlr:%d", str, main_pid, socket_pid, handler_pid);
    vertx.executeBlocking(future -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        serverWebSocket.writeFinalTextFrame(res);
        future.complete();
    });
});
Now all the blocking code will be run on a thread from the worker pool that you can configure as shown in the other replies.
If you would like to avoid writing all these executeBlocking handlers and you know that you need to do several blocking calls, then you should consider using a worker verticle, since these will scale at the event-bus level.
A final note on multi-threading: if you use multiple threads your server will not be as efficient as a single thread; for example, it won't be able to handle 10 million websockets, since 10 million threads, even on a modern machine (we're in 2016), will bring your OS scheduler to its knees.

How to maintain Channel API connection when internet is offline

I am using Google App Engine's Channel API. I am having some issues restarting a connection lost due to the user's network. When you lose the internet connection, the channel calls onError but it does not call onClose. As far as the JavaScript object is concerned, the channel socket is open.
How do you handle a lost connection due to an internet issue? I am thinking of 1) triggering the channel to close and re-opening it when an RPC unrelated to the channel somewhere in the application succeeds for the first time (which indicates the internet is alive again), or 2) using a timer that runs all the time and pings the server for network status (which defeats the point of introducing long polling, namely to avoid consuming unwanted resources this way). Any other ideas would be great.
Observation:
When the internet connection is dead, onError is called at increasing intervals (10 sec, 20 sec, 40 sec), twice. Once the internet connection is back, the channel does not resume the connection. It stops working without any indication that it is dead.
Thanks.
If you look at the JavaScript console, presumably you will see a "400 Unknown SID Error".
If so, here is my workaround for this. This is a service module for AngularJS, but please look at the onerror callback. Please try this workaround and let me know whether it works or not.
Added: I neglected to answer your main question, but in my opinion it is hard to determine whether you're connected to the internet unless you actually ping the "internet". So you may want to use some retry logic similar to the following code, with some tweaks. In the following example I just retry 3 times, but you could do more with some backoff. However, I think the best way to handle this is, when the app exhausts the retry max count, to indicate to the user that the app lost the connection, ideally showing a button or a link to re-connect to the channel service.
And you can also track the connection on the server side (a rough servlet sketch follows the module below), see:
https://developers.google.com/appengine/docs/java/channel/#Java_Tracking_client_connections_and_disconnections
app.factory('channelService', ['$http', '$rootScope', '$timeout',
  function($http, $rootScope, $timeout) {
    var service = {};
    var isConnectionAlive = false;
    var callbacks = new Array();
    var retryCount = 0;
    var MAX_RETRY_COUNT = 3;

    service.registerCallback = function(pattern, callback) {
      callbacks.push({pattern: pattern, func: callback});
    };

    service.messageCallback = function(message) {
      for (var i = 0; i < callbacks.length; i++) {
        var callback = callbacks[i];
        if (message.data.match(callback.pattern)) {
          $rootScope.$apply(function() {
            callback.func(message);
          });
        }
      }
    };

    service.channelTokenCallback = function(channelToken) {
      var channel = new goog.appengine.Channel(channelToken);
      service.socket = channel.open();
      isConnectionAlive = false;
      service.socket.onmessage = service.messageCallback;
      service.socket.onerror = function(error) {
        console.log('Detected an error on the channel.');
        console.log('Channel Error: ' + error.description + '.');
        console.log('Http Error Code: ' + error.code);
        isConnectionAlive = false;
        if (error.description == 'Invalid+token.' || error.description == 'Token+timed+out.') {
          console.log('It should be recovered with onclose handler.');
        } else {
          // In this case, we need to manually close the socket.
          // See also: https://code.google.com/p/googleappengine/issues/detail?id=4940
          console.log('Presumably it is "Unknown SID Error". Try closing the socket manually.');
          service.socket.close();
        }
      };
      service.socket.onclose = function() {
        isConnectionAlive = false;
        console.log('Reconnecting to a new channel');
        openNewChannel();
      };
      console.log('A channel was opened successfully. Will check the ping in 20 secs.');
      $timeout(checkConnection, 20000, false);
    };

    function openNewChannel(isRetry) {
      console.log('Retrieving a clientId.');
      if (isRetry) {
        retryCount++;
      } else {
        retryCount = 0;
      }
      $http.get('/rest/channel')
        .success(service.channelTokenCallback)
        .error(function(data, status) {
          console.log('Can not retrieve a clientId');
          if (status != 403 && retryCount <= MAX_RETRY_COUNT) {
            console.log('Retrying to obtain a client id');
            openNewChannel(true);
          }
        });
    }

    function pingCallback() {
      console.log('Got a ping from the server.');
      isConnectionAlive = true;
    }

    function checkConnection() {
      if (isConnectionAlive) {
        console.log('Connection is alive.');
        return;
      }
      if (service.socket == undefined) {
        console.log('will open a new connection in 1 sec');
        $timeout(openNewChannel, 1000, false);
        return;
      }
      // Ping didn't arrive.
      // Assuming the onclose handler automatically opens a new channel.
      console.log('Not receiving a ping, closing the connection');
      service.socket.close();
    }

    service.registerCallback(/P/, pingCallback);
    openNewChannel();
    return service;
  }]);
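For completeness, here is a rough server-side sketch of the connection tracking mentioned above. It is an assumption-laden example against the classic com.google.appengine.api.channel API: the servlet name is made up, and it assumes channel_presence is enabled in appengine-web.xml and the servlet is mapped to /_ah/channel/connected/ and /_ah/channel/disconnected/ in web.xml.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.channel.ChannelPresence;
import com.google.appengine.api.channel.ChannelService;
import com.google.appengine.api.channel.ChannelServiceFactory;

// Sketch: App Engine POSTs to the connected/disconnected paths when a client
// opens or loses a channel; record that state so the app knows who is online.
public class ChannelPresenceServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        ChannelService channelService = ChannelServiceFactory.getChannelService();
        ChannelPresence presence = channelService.parsePresence(req);
        System.out.println("Client " + presence.clientId()
                + (presence.isConnected() ? " connected" : " disconnected"));
    }
}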
