AWS IoT Message Delivery - Java

I am looking into Amazon IoT as a transport mechanism for mobile devices that periodically measure data (usually every N minutes, with N being anywhere between 2 and 32 minutes). With MQTT, I can use Amazon's broker to publish finished measurement results to subscribers with QoS=1. Let's also assume that my sole subscriber is just another device listening on a wildcard topic (e.g. abc/#) and storing published messages in a local database.
But now it's also possible that:
the publishing mobile devices have spotty/bad/no connection to the cell network,
the subscriber dies (reboots, software failure, hardware failure, maintenance, etc.)
Assuming I use the official Java SDK: what would happen to data published during the times when either of them is offline? Will the subscriber get all the messages it has been missing once it reconnects?
Also: does this depend on the protocol in question? For testing purposes we're using WebSockets, but for later development/production we'd prefer MQTT over SSL.

What would happen to data published during these times when at least
either of them is offline? Will the subscriber get all the messages it
has been missing out on upon reconnect?
Yes, if you use MQTT with QoS level 1 or higher. MQTT uses a publish/subscribe architecture built around topics, and messages published to a topic at QoS 1 or higher are held by the MQTT broker, in memory and on disk (at least in Mosquitto), until they can be delivered to the subscriber.
WebSocket is different: it has no publish/subscribe architecture. It is a full-duplex communication channel over a single TCP connection, initiated as an HTTP connection that is then upgraded to a WebSocket connection. With plain WebSockets, it is the application's responsibility to decide what happens when a subscriber has connection problems.
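To make the moving parts concrete, here is a minimal sketch of a durable subscriber using the Eclipse Paho Java client rather than the AWS IoT Device SDK; the broker URL, client id, and topic are placeholders, and whether queued QoS 1 messages actually survive a subscriber outage ultimately depends on the broker's support for persistent sessions.

```java
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class DurableSubscriber {
    public static void main(String[] args) throws Exception {
        // Broker URL and client id are placeholders.
        MqttClient client = new MqttClient("ssl://broker.example.com:8883", "measurement-sink");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(false);      // keep the session (and queued QoS 1 messages) across disconnects
        options.setAutomaticReconnect(true); // let the client re-establish the connection itself

        client.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) {
                // With automatic reconnect the client comes back on its own; queued
                // QoS 1 messages are delivered once the session is resumed.
            }

            @Override
            public void messageArrived(String topic, MqttMessage message) {
                // Store the measurement in the local database here.
                System.out.println(topic + ": " + new String(message.getPayload()));
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
                // Only relevant when this client also publishes.
            }
        });

        client.connect(options);
        client.subscribe("abc/#", 1); // QoS 1: broker holds undelivered messages for this session
    }
}
```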

Related

MQTT broker connection management

I'm using Paho to communicate with an MQTT broker, and all the examples I found (like this one) do these 3 steps when performing an action (publish or subscribe):
connect to the broker
do action
disconnect
My question is: are there any drawbacks to holding a connection open for the whole life of the application instead of opening/closing it for each action? Isn't it faster to avoid the time spent opening the connection each time?
No, holding a connection open for the lifetime of the application is a fully expected use case; in fact, it's the only real way you'd be able to subscribe to a topic and receive messages when they are published.
The protocol has built-in ping messages to ensure the broker knows the client is still connected.
The examples tend to be relatively trivial but want to show the full life cycle of the client, which is why they connect, do something, and then disconnect.
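As a rough sketch of the connect-once pattern, again with the Paho Java client; the broker URL, topic, and publish interval are made up for illustration.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class LongLivedPublisher {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "sensor-01");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setKeepAliveInterval(60);    // the client pings the broker so it knows we are still connected
        options.setAutomaticReconnect(true);

        client.connect(options);             // connect once, at application start-up

        while (true) {                       // ...and reuse the same connection for every publish
            MqttMessage message = new MqttMessage("42".getBytes());
            message.setQos(1);
            client.publish("sensors/sensor-01/temperature", message);
            Thread.sleep(30_000);
            // client.disconnect() only when the application itself shuts down
        }
    }
}
```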

Uniform message delivery in ZeroMQ for real-time purposes

I need to implement a real-time analytics system with a server and terminals.
I use the ZeroMQ library (pub/sub mode) to send messages (~40 bytes) to clients.
If I connect with 1 client, messages arrive with a delay (sometimes more than 250 ms).
If I connect with 100 clients, many clients lose uniformity of delivery (more than 750 ms with no message at all, then a huge burst of data). This is a critical issue for me.
I have to publish to more than 6000 terminals...
I publish every 30 ms, which is about 1700 bytes to each client in the worst case (TCP).
Maybe I should use another technology to deliver messages in real time?
As I said in the comment, multicast is the way to go. The primary overriding concern is whether your terminals can join the multicast group you are publishing on, irrespective of how far away they are.
You haven't indicated how the terminals connect to your network (for example, VPN over the internet, a private line, etc.). You asked for a better technology: it's multicast.
Now there are some options if you are going to go down the TCP route:
Build a load-balancing infrastructure that sits in front of your service, meaning that your terminals don't connect to your service but to a set of load balancers, which then connect to your service. If you have 10 of these, for example, each only has to deal with 600 clients, so your problem is much smaller and you can scale this way. Don't forget to use asynchronous I/O.
Buy better hardware: for example, Solace or Tervela make hardware message brokers that can scale to very large numbers of concurrent TCP connections, but this is not cheap.
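For illustration, here is a hedged sketch of what the multicast suggestion could look like on the publisher side, assuming the native jzmq binding over a libzmq build with OpenPGM support (the pure-Java JeroMQ port does not implement pgm/epgm); the interface name and multicast group are placeholders.

```java
import org.zeromq.ZMQ;

public class MulticastPublisher {
    public static void main(String[] args) throws Exception {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket publisher = context.socket(ZMQ.PUB);

        publisher.setRate(10000);                        // PGM rate limit in kbit/s
        publisher.bind("epgm://eth0;239.192.1.1:5555");  // one send reaches every joined terminal

        byte[] payload = new byte[1700];                 // worst-case message size from the question
        while (true) {
            publisher.send(payload, 0);
            Thread.sleep(30);                            // publish every 30 ms
        }
    }
}
```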

How to choose the right protocol and network connection for using ActiveMQ?

Recently I started using ActiveMQ as the message middleware in my new project. This is the first time I have used ActiveMQ; the projects I participated in before used our previous company's internal message framework, Swallow. So before I begin implementing the system, I need to clear up some design points.
The cases in our system that will use ActiveMQ include sending mail, pushing tasks onto a queue and processing tasks from the queue, and asynchronous request/response. So what kind of protocol and network connection is the right choice for these cases? I list some protocol and network connection options here:
ActiveMQ protocols:
MQTT
WS
Openwire
Stomp
ActiveMQ Network connections:
VM
TCP
UDP
HTTP
Failover
Discovery
I will also consider HA and clustering for my system, so can anybody give me some ideas on how to choose the protocol and network connection?
Thanks a lot.
OpenWire has historically been the default protocol, and the NIO transport can give performance improvements over TCP, so if you use ActiveMQ as your only broker, use one of these. However, using AMQP means that in the future you could possibly switch to RabbitMQ, another popular message broker. There are others: STOMP and MQTT are lightweight, and VM is designed to be used when the application resides on the same machine as the broker, so it gets very high throughput.
As ActiveMQ enables all protocols by default, do some quick tests to get an idea of throughput for the specific application you are building, then consider the above points in making a decision.
Regarding UDP, TCP and HTTP, I would choose TCP. UDP is unreliable, and TCP is more than adequate for sending thousands of messages per second. HTTP could be useful if your company has awkward firewall rules.
I would wrap this in a failover transport. I have never used discovery, but I would argue it is more advanced and not required initially, since it requires a discovery agent. Its only purpose is to discover the ActiveMQ broker dynamically (although you still have to know where the discovery agent is).
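As a small illustration of wrapping the TCP transport in failover, here is a sketch using the ActiveMQ JMS client; the broker hosts, ports, and queue name are placeholders.

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProducer {
    public static void main(String[] args) throws Exception {
        // failover:(...) makes the client reconnect (and retry sends) automatically if a broker goes down.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false");

        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("mail.outbound"));

        TextMessage message = session.createTextMessage("send this mail");
        producer.send(message);

        connection.close();
    }
}
```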

Is ActiveMQ/HornetQ p2p a polling-based or push-based model?

What happens behind the scenes when receiving messages with a (Spring or EJB) message listener container in ActiveMQ/HornetQ?
Does the broker push messages to consumers? If so, how do consumers register themselves with the broker?
Or do consumers poll messages from the queue? If so, why does each queue (in the admin console) have a consumer-count field showing the number of consumers registered on the queue?
This O'Reilly book link says:
The p2p messaging model has traditionally been a pull- or
polling-based model, where messages are requested from the queue
instead of being pushed to the client automatically. (The JMS
specification does not specifically state how the p2p and pub/sub
models must be implemented. Either one may use push or pull, but at
least conceptually pub/sub is push and p2p is pull).
You are not stating the protocol; since ActiveMQ and HornetQ are multi-protocol brokers, the exact implementation may vary a bit. However, most protocols, except HTTP/REST-based ones, push messages to the client. It's not possible to achieve high throughput without a push strategy at the wire-protocol level.
The application-level API allows for "polling", i.e. JMS MessageConsumer.receive, but that's really just a "sleep until a message is pushed" mechanism.
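To make the two API styles concrete, here is a small plain-JMS sketch (so it applies to ActiveMQ or HornetQ alike); the queue name is a placeholder and the connection/session setup is assumed to happen elsewhere.

```java
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class ConsumeStyles {

    // Looks like polling, but under the hood the broker pushes messages and
    // receive() just blocks until one has arrived (or the timeout expires).
    static Message pullStyle(Session session) throws Exception {
        MessageConsumer consumer = session.createConsumer(session.createQueue("orders"));
        return consumer.receive(5000); // wait up to 5 s for a pushed message
    }

    // Explicit push model: the provider invokes the listener as messages arrive.
    static void pushStyle(Session session) throws Exception {
        MessageConsumer consumer = session.createConsumer(session.createQueue("orders"));
        consumer.setMessageListener(message -> {
            // handle the delivered message here
            System.out.println("got " + message);
        });
    }
}
```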

Using JMS, is there any way to store messages on intermittently disconnected clients and forward them to a broker when a network is available?

I am considering an architecture where I have clients that are intermittently connected to a network. I would like to store messages created on these clients in a JMS queue when the network is not available and have these forwarded to a central message broker when the clients are on the network. (The user has control over the network, e.g. dialing in, so it's not an intermittent connection like with a mobile phone.)
Are there any JMS implementations that provide this feature?
You can embed an ActiveMQ broker into your application:
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
Then, I suppose (I did not test this), you could use the ActiveMQ features that let you dispatch messages across a network of brokers, using the broker discovery feature,
http://activemq.apache.org/clustering.html
or simply by adding a queue consumer on the server side and dispatching to other brokers through this consumer.
Hope it helps.
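Here is a rough sketch of the embedded-broker idea using the plain ActiveMQ BrokerService API; the broker name and connector URI are placeholders, and the store-and-forward link to the central broker (for example a network connector) is configured separately and not shown.

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("client-broker");
        broker.setPersistent(true);                    // spool messages to disk while offline
        broker.addConnector("tcp://localhost:61617");  // optional: reachable from outside the JVM
        broker.start();

        // The application publishes to the local broker over the in-JVM vm:// transport.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://client-broker?create=false");
        // ... create connection/session/producer as usual ...
    }
}
```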
The GlassFish Open Message Queue can be embedded (or run stand-alone) as of version 4.4 (it supports the ability for a broker to run "in process" with any client). It is very lightweight, and in version 4.4 it will support client languages other than Java and C over the STOMP protocol: https://mq.dev.java.net/4.4.html
