I have a few questions on how to use Spring WebSockets and messaging. I have a program that interfaces with an external web service producer endpoint, which sends data payloads to my web service consumer endpoint. On the other end of my program, I route these data payloads to multiple WebSocket connections (STOMP and SockJS). The external producer includes a subscription ID in each data payload for every query request, so my approach is to send the payloads back to the broker with a SimpMessagingTemplate, using a unique destination per subscription (i.e. /user/{subscriptionId}/subscribe). That way I can subscribe each WebSocket client to an existing destination if a duplicate query was made, and only request a new subscription from the external producer otherwise.
How do I access my SimpMessagingTemplate from within a different component, such as my web service consumer, so that I can send the data payloads to my message broker? Do I just declare my SimpMessagingTemplate static and add a getter in the controller where the template object is stored?
How do I get a list of all known destinations, along with the number of STOMP clients subscribed to each one? The external producer sets a termination time for each subscription, so I would like to implement automatic renewal requests while a destination still has subscribers. I suppose I could track this myself with maps/caches, updating them every time a WebSocket session is opened or closed, but I would prefer to do it with Spring if possible, since that minimizes my risk and is probably less error prone; or perhaps a full-featured broker such as RabbitMQ or ActiveMQ is necessary for this.
Found the answers I needed:
All I need to do is use Spring's autowiring support, and the bean will be injected already initialized:

@Autowired
private SimpMessagingTemplate template;
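A minimal sketch of how that injected template gets used from a non-controller component. The MessageSender interface here is a stand-in for SimpMessagingTemplate's convertAndSend(String, Object) so the routing logic compiles without a Spring context; the class and method names are illustrative, and the real service would be a @Service with the template constructor-injected (comments show the wiring):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for SimpMessagingTemplate#convertAndSend(String, Object).
// In the real app, annotate the router @Service and take the actual
// SimpMessagingTemplate in the constructor:
//
//   @Service
//   public class PayloadRouter {
//       private final SimpMessagingTemplate template;
//       public PayloadRouter(SimpMessagingTemplate template) { this.template = template; }
//   }
interface MessageSender {
    void convertAndSend(String destination, Object payload);
}

public class PayloadRouter {
    private final MessageSender template;

    public PayloadRouter(MessageSender template) {
        this.template = template;
    }

    // One destination per external subscription ID, as described in the question.
    static String destinationFor(String subscriptionId) {
        return "/user/" + subscriptionId + "/subscribe";
    }

    // Called by the web service consumer when a payload arrives.
    public void route(String subscriptionId, Object payload) {
        template.convertAndSend(destinationFor(subscriptionId), payload);
    }
}
```

Constructor injection (rather than a static field plus getter) keeps the component testable and lets Spring manage the lifecycle.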
A full-featured broker is needed for this; however, for what I want to do, I decided it would be too much work and essentially unnecessary. Instead I will implement my own subscription checking against the third-party web service using Java maps/caches. I went to painstaking lengths, setting breakpoints in the compiled .class files in Eclipse with a Java decompiler plugin, and found that all of this information lives in the DefaultSubscriptionRegistry class. Although I cannot access it through the API Spring provides, I can rest assured it is being handled properly by the application: when a client subscribes or disconnects, the registry's internal maps/caches are updated accordingly. Furthermore, I can keep my own maps/caches in sync by implementing Spring's ApplicationListener interface for the events it publishes, such as SessionSubscribeEvent and SessionDisconnectEvent, which are triggered whenever a client subscribes or disconnects.
public class SubscribeEventListener implements ApplicationListener<SessionSubscribeEvent>
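A sketch of the map/cache bookkeeping such listeners would drive: a thread-safe per-destination subscriber count, which is what the auto-renewal decision needs. The Spring wiring (illustrative names) is shown in comments, since ApplicationListener and the session events need a running Spring context:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Tracks live subscriber counts per destination, mirroring what
// DefaultSubscriptionRegistry keeps internally. In the Spring app, two
// listeners would call into it, roughly:
//
//   @Component
//   public class SubscribeEventListener implements ApplicationListener<SessionSubscribeEvent> {
//       public void onApplicationEvent(SessionSubscribeEvent event) {
//           String dest = SimpMessageHeaderAccessor.wrap(event.getMessage()).getDestination();
//           tracker.subscribed(dest);
//       }
//   }
//   // ...and an ApplicationListener<SessionDisconnectEvent> calling unsubscribed(...)
public class SubscriptionTracker {
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    public void subscribed(String destination) {
        counts.computeIfAbsent(destination, d -> new AtomicInteger()).incrementAndGet();
    }

    public void unsubscribed(String destination) {
        // Remove the entry entirely once the count drops to zero.
        counts.computeIfPresent(destination, (d, c) ->
                c.decrementAndGet() <= 0 ? null : c);
    }

    // Used to decide whether to auto-renew the upstream subscription.
    public boolean hasSubscribers(String destination) {
        return counts.containsKey(destination);
    }

    public int count(String destination) {
        AtomicInteger c = counts.get(destination);
        return c == null ? 0 : c.get();
    }
}
```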
Related
The problem: I have a spring boot service running on K8s. Generally API calls can be served by any pod of my service, but for a particular use case we have a requirement to propagate the call to all instances of the service.
A bit of googling led me to https://discuss.kubernetes.io/t/how-to-broadcast-message-to-all-the-pod/10002 where they suggest using
kubectl get endpoints cache -o yaml
and proceeding from there. This is fine for a human or a CLI environment, but how do I accomplish the same from within my Java service, aside from executing the above command via Process and parsing the output?
Essentially I want a way to do what the above command is doing but in a more java-friendly way.
It seems like your Spring Boot service should be listening to a message queue: when one instance receives the specific HTTP request on the /propagateme endpoint, it publishes an event to a Propagation topic; all other instances listening to that topic receive the message and perform the specific action.
See JMS https://spring.io/guides/gs/messaging-jms/
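If you do want to stay closer to what the kubectl command shows, the same data is available from inside a pod via the API server (GET https://kubernetes.default.svc/api/v1/namespaces/{ns}/endpoints/{name}, authenticated with the mounted service-account token at /var/run/secrets/kubernetes.io/serviceaccount/token), after which you can call each pod IP directly. A real service would use the official Kubernetes Java client or fabric8 to parse the response; the stdlib-only sketch below just pulls the pod IPs out of the JSON with a crude regex to show the shape of the data (the request itself, with java.net.http, is shown in comments):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// The HTTP call from inside the pod would look roughly like:
//
//   HttpRequest req = HttpRequest.newBuilder(URI.create(
//           "https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/cache"))
//       .header("Authorization", "Bearer " + serviceAccountToken)
//       .build();
//
// The Endpoints response lists one address per ready pod. Extracting the
// IPs (use a proper JSON parser or client library in production):
public class EndpointIps {
    private static final Pattern IP = Pattern.compile("\"ip\"\\s*:\\s*\"([^\"]+)\"");

    public static List<String> extractPodIps(String endpointsJson) {
        List<String> ips = new ArrayList<>();
        Matcher m = IP.matcher(endpointsJson);
        while (m.find()) {
            ips.add(m.group(1));
        }
        return ips;
    }
}
```

Note the service account needs RBAC permission to read Endpoints for this to work.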
I am learning about Kafka, and I am curious how a Kafka client should fit into a microservices architecture. I want Kafka to keep a log of important information and enable automatic, appropriate reactions to that information.
My question is how should Kafka exist with the backend as a microservice?
Standalone client. The Kafka client (producer/consumer) exists on its own. It exposes an API for the frontend to send data via HTTP requests (POST, PUT). The data is then converted into events and produced to the Kafka cluster by the Kafka client. The consumer also lives here and makes reactive API calls to a separate backend when necessary.
A layer in the backend. When the frontend makes an HTTP request (POST, PUT) to the backend, the REST controller in the backend looks at the data sent and, if necessary, produces it to the Kafka cluster. The consumer also lives here and reacts to new events internally with other services in the backend.
Producer in the frontend, consumer in the backend. The frontend (React.js, Vue.js, etc.) has a Kafka client that produces events for important information that requires logging. The consumer Kafka client lives in the backend and reacts to events internally with other services. The backend also exposes an API for non-Kafka requests.
Maybe a combination of this? Or are these all wrong?
Confluent has been quite helpful so far, but I think I am missing something important to piece all of it together.
Scenario:
A microservice picks up a message from a RabbitMQ queue; it's converted to an object, and then the microservice makes a REST call to an external service.
It's going to deal with many thousands of these messages. Is there a way of telling my consumer not to pick up a message off the queue if we know the external REST service is down?
I know I can retry an individual message once it's picked up, but I don't want to pick it up at all if I know the service is down.
I don't want to deal with thousands of messages in the DLQ.
It also feels like the circuit breaker design pattern, but I can't find any specific examples of how to implement it with AMQP.
Extra info:
Spring Boot app, talking to RabbitMQ using Spring AMQP.
Thanks in advance
You can stop and start the message listener container.
If you are using discrete containers you can stop/start the container bean.
If you are using @RabbitListener, provide an id attribute and wire in the RabbitListenerEndpointRegistry bean.
Then registry.getMessageListenerContainer(myId).stop();.
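The circuit-breaker shape of that answer can be sketched as a small gate that polls the external service's health and stops/starts the container accordingly. The Runnable stand-ins below take the place of registry.getMessageListenerContainer(myId).stop() and .start() so the logic is runnable without Spring AMQP; the health check, listener id, and scheduling hint are assumptions:

```java
import java.util.function.BooleanSupplier;

// Stops consuming while the external REST service is down, so messages
// stay queued in RabbitMQ instead of piling into a DLQ. In Spring, the
// two Runnables would be:
//   () -> registry.getMessageListenerContainer("myListenerId").stop()
//   () -> registry.getMessageListenerContainer("myListenerId").start()
// and check() would run on a schedule, e.g. @Scheduled(fixedDelay = 5000).
public class ConsumerGate {
    private final BooleanSupplier externalServiceUp;  // e.g. an HTTP health probe
    private final Runnable stopContainer;
    private final Runnable startContainer;
    private boolean running = true;

    public ConsumerGate(BooleanSupplier up, Runnable stop, Runnable start) {
        this.externalServiceUp = up;
        this.stopContainer = stop;
        this.startContainer = start;
    }

    public void check() {
        boolean up = externalServiceUp.getAsBoolean();
        if (!up && running) {
            stopContainer.run();   // pause consumption; messages queue up in RabbitMQ
            running = false;
        } else if (up && !running) {
            startContainer.run();  // resume consuming
            running = true;
        }
    }

    public boolean isRunning() { return running; }
}
```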
Let's say a Camel context implements a route that consumes from an endpoint (direct://simpleEndpoint), and another Java program sends to this endpoint in its main method using a producer template. Will the messages be received by the consumer?
Right now, I'm not able to make this work.
Is there any other way to test by sending dummy messages to an endpoint?
direct only works within the same CamelContext.
You could use direct-vm to communicate across different CamelContexts but within the same JVM.
If you want to communicate across different JVMs you should look at jms or activemq instead.
Hi there, we are planning to integrate a WebSocket server implementation as a frontend to our RabbitMQ systems. Currently we are running some Java/Groovy/Grails-based apps that use the RabbitMQ server.
We would like to have a simple websocket server implementation that handles connections etc and that passes the request to our RabbitMQ layer.
Clients (hardware devices) would connect to a WebSocket layer that hands the request to RabbitMQ. Some other process takes on the job of handling the request and, if needed, places data back in the queue so that RabbitMQ can pass the data back to the client via WebSockets.
I am a bit lost in the land of WebSockets, so I am wondering what other people would advise using.
You can use RabbitMQ itself with the Web-STOMP plugin and SockJS for web frontends. You can expose this directly or via something like HAProxy.
http://www.rabbitmq.com/blog/2012/05/14/introducing-rabbitmq-web-stomp/
In version 3.x it is included by default; just enable the plugin.
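Concretely, enabling it is a one-liner on the broker host (the plugin ships with RabbitMQ 3.x but starts disabled):

```shell
rabbitmq-plugins enable rabbitmq_web_stomp
```

After a restart of the plugin's listeners, the Web-STOMP endpoint is served on port 15674 by default.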
For Java there are a couple of choices:
Atmosphere
Vert.x
Play 2.0
Netty directly
There are so many ways to skin the cat. Atmosphere will probably get you the furthest if you are already using Grails. You will have to write a custom Broadcaster; IIRC there is not one for RabbitMQ, but you can just copy one of the existing ones.
Also, with RabbitMQ or any queue, you're going to have to decide whether to make a queue for each user (browser using a WebSocket) or to aggregate based on some hash and then dispatch internally (i.e. make a giant map of mailboxes). Akka would be a good choice for mapping to mailboxes.
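The hash-aggregation option can be sketched as a stable mapping from user id onto a fixed pool of shared queues, with in-process dispatch to the right mailbox on arrival. The shard count and queue naming scheme here are illustrative:

```java
// Maps each user onto one of N shared RabbitMQ queues instead of creating
// a queue per websocket user. The same user always lands on the same
// shard, so a consumer of that shard can dispatch to the user's mailbox.
public class QueueSharder {
    private final int shards;

    public QueueSharder(int shards) {
        this.shards = shards;
    }

    public String queueFor(String userId) {
        // floorMod keeps the shard non-negative even for negative hash codes.
        int shard = Math.floorMod(userId.hashCode(), shards);
        return "ws.shard." + shard;  // queue name is illustrative
    }
}
```

Each shard consumer then holds a map from user id to mailbox (an Akka actor or a plain queue) and routes messages internally.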