The problem: I have a spring boot service running on K8s. Generally API calls can be served by any pod of my service, but for a particular use case we have a requirement to propagate the call to all instances of the service.
A bit of googling led me to https://discuss.kubernetes.io/t/how-to-broadcast-message-to-all-the-pod/10002 where they suggest using
kubectl get endpoints cache -o yaml
and proceeding from there. This is fine for a human or a CLI environment, but how do I accomplish the same from within my Java service, aside from executing the above command via Process and parsing the output?
Essentially I want a way to do what the above command is doing but in a more java-friendly way.
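One java-friendly option (besides a Kubernetes API client such as the official `io.kubernetes:client-java` or Fabric8) is to expose the pods behind a headless Service (`clusterIP: None`): cluster DNS then returns one A record per ready pod, so plain JDK DNS resolution is enough to enumerate them. A minimal sketch, assuming a hypothetical headless service host and endpoint path:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PodBroadcast {
    // Resolve every pod IP behind a headless Service (clusterIP: None);
    // Kubernetes DNS returns one A record per ready pod.
    static List<String> podIps(String headlessServiceHost) throws UnknownHostException {
        return Arrays.stream(InetAddress.getAllByName(headlessServiceHost))
                .map(InetAddress::getHostAddress)
                .collect(Collectors.toList());
    }

    // Build one target URL per pod; the caller then fires the same request at each.
    static List<String> targetUrls(List<String> ips, int port, String path) {
        return ips.stream()
                .map(ip -> "http://" + ip + ":" + port + path)
                .collect(Collectors.toList());
    }
}
```

The caller would loop over `targetUrls(podIps("my-service-headless.my-namespace.svc.cluster.local"), 8080, "/propagateme")` and issue the same HTTP request to each pod; the service name and path here are illustrative.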
It sounds like your Spring Boot service should be listening to a message queue. When one instance receives an HTTP request on the /propagateme endpoint, it publishes an event to a Propagation topic; all the other instances subscribed to that topic receive the message and perform the specific action.
See JMS https://spring.io/guides/gs/messaging-jms/
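The topic semantics being suggested (every subscriber gets its own copy of each message, unlike a queue) can be sketched without a broker; the in-memory bus below is purely illustrative, with a JMS or RabbitMQ topic taking its place in production:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class TopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    // Each service instance registers a handler for the topic at startup.
    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    // Topic semantics: every subscribed instance receives the message.
    void publish(String topic, String message) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(message));
    }
}
```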
Related
We are trying to migrate our legacy system to microservices.
In our PaaS environment, scheduler jobs trigger and put messages on an MQ one by one, and an MQ listener in our microservice picks up each message, builds a request, and sends it to an external party.
Here is the problem: our microservice is capable of making asynchronous calls to the external service, but the external service cannot handle asynchronous calls, so it returns the wrong data.
For example, we hit the external service with 40 to 60 requests per minute, while it can handle only 6 requests per minute.
So how can I make the MQ listener process messages more slowly?
I have tried reducing setMaxConcurrentConsumers to 1 and
used observable.toBlocking().single() to make the process run on only one thread.
We use RxJava in our microservice.
It sounds like either your microservice or the external service is not following the Request-Reply messaging pattern.
(1) Is the external service setting the Reply's Correlation ID from the Request message's Message ID?
(2) Is your microservice performing an MQGET with the match option set to get by Correlation ID?
You can blame the external service for the error, but if your microservice is actually picking up the wrong message, then it is your application's fault. i.e. does your microservice simply get the "next" message on the queue?
Read this answer: How to match MQ Server reply messages to the correct request
Here's an explanation (it looks like it's from the '90s, but it has good information): https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReplyJmsExample.html
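The matching described in (1) and (2) boils down to: accept a reply only when its Correlation ID equals the Message ID of an outstanding request, and reject everything else. A broker-free sketch of that bookkeeping (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ReplyCorrelator {
    // Outstanding request Message IDs -> reply body (null until the reply arrives).
    private final Map<String, String> pending = new HashMap<>();

    // Record the Message ID of a request we just sent.
    void requestSent(String messageId) { pending.put(messageId, null); }

    // Accept a reply only if its Correlation ID matches an outstanding request;
    // anything else is "the wrong message" and is rejected.
    boolean onReply(String correlationId, String body) {
        if (!pending.containsKey(correlationId)) {
            return false;
        }
        pending.put(correlationId, body);
        return true;
    }

    String replyFor(String messageId) { return pending.get(messageId); }
}
```

With real MQ, the rejection branch is replaced by an MQGET that selects on Correlation ID, so the wrong message is never delivered to this consumer in the first place.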
In the long term we are planning to migrate the external service as well.
In the short term I fixed it using observable.toBlocking().single(), Thread.sleep(), and setMaxConcurrentConsumers(1), so only one thread runs at a time, which avoids the asynchronous calls to the external service. The sleep time is set dynamically, based on some analysis of the external service.
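The sleep-based throttle described above can be made explicit: derive the minimum gap between sends from the external service's rate (6 per minute ≈ one request every 10 s) and have the single consumer thread wait out that gap before each send. A minimal sketch (class and method names are illustrative):

```java
public class Throttle {
    private final long intervalMillis;
    private long nextAllowedMillis = 0;

    // e.g. new Throttle(6) -> at most one send every 10,000 ms.
    Throttle(int permitsPerMinute) {
        this.intervalMillis = 60_000L / permitsPerMinute;
    }

    // Returns how long the caller must sleep before sending now;
    // also books the next allowed send time.
    synchronized long acquireDelay(long nowMillis) {
        long delay = Math.max(0, nextAllowedMillis - nowMillis);
        nextAllowedMillis = Math.max(nowMillis, nextAllowedMillis) + intervalMillis;
        return delay;
    }
}
```

The listener thread would call `Thread.sleep(throttle.acquireDelay(System.currentTimeMillis()))` before each outbound request; tuning `permitsPerMinute` replaces hand-picked sleep times.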
I am working on a small project using Spring Boot, Kafka, and Spark. So far I have been able to create a Kafka producer in one project and a Spark-Kafka direct stream as a consumer.
I am able to see messages pass through and things seem to be working as intended. However, I have a REST endpoint on the project that is running the consumer. Whenever I disable the direct stream, the endpoint works fine; when the stream is running, Postman says there is no response, and I see nothing in the server logs indicating that a request was ever received.
The Spark consumer is started by a bean at project launch. Is this keeping the normal server on localhost:8080 from being started?
Initially I was kicking off the StreamingContext from a bean. Instead, I made the application implement CommandLineRunner and, in the overridden run method, called the method that starts the StreamingContext. That allowed the embedded server to start and fixed the issue.
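A plain-Java analogue of that fix (no Spring on the classpath; the Runner interface below stands in for Spring's CommandLineRunner, which Spring invokes only after the application context, including the embedded server, is up): the blocking stream start is deferred until after startup instead of running during bean creation.

```java
import java.util.ArrayList;
import java.util.List;

public class DeferredStartDemo {
    // Stand-in for Spring's CommandLineRunner.
    interface Runner { void run(String... args); }

    static final List<String> startupOrder = new ArrayList<>();

    static void startEmbeddedServer() { startupOrder.add("server"); }

    static void startStreamingContext() { startupOrder.add("stream"); }

    public static void main(String[] args) {
        Runner streamKickoff = a -> startStreamingContext(); // was: done during @Bean creation
        startEmbeddedServer();   // context refresh: the server binds to :8080 first
        streamKickoff.run();     // run() fires afterwards, so blocking here is safe
    }
}
```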
I have two services: the first is a Spring Boot REST API service, and the second is an event processor (Java). The main logic: when a client makes an HTTP request to the Spring Boot service, it sends a message to a RabbitMQ exchange; the second service listens on a queue bound to that exchange. To test this life cycle, I added the class from the Spring Boot service that sends the message to the exchange into the test folder of the second service (the event processor). My question is: is it good practice to test things like this?
It is always up to you. In your example you have one service sending a message to the queue and another service processing the queue, which is an end-to-end flow, so the test you described is also end-to-end and depends on RabbitMQ. I would suggest the following two tests instead of one e2e:
(1st service) Emulate the client request and make sure the first service generates the expected message
(2nd service) Emulate an incoming message and make sure the service processes it as it should
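The first of those tests can avoid RabbitMQ entirely: call the code that builds the outgoing message and assert on the payload. The builder below is hypothetical, standing in for whatever the first service serializes onto the exchange:

```java
public class EventPayloadBuilder {
    // Hypothetical payload builder from the first service; a unit test
    // asserts on its output instead of round-tripping through RabbitMQ.
    static String buildPayload(String clientId, String action) {
        return "{\"clientId\":\"" + clientId + "\",\"action\":\"" + action + "\"}";
    }
}
```

The second service's test is the mirror image: hand a canned payload string straight to the listener method and assert on the side effects, no broker involved.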
I need to setup RabbitMQ in an attempt to redesign our architecture using asynchronous messaging.
Existing Application Flow:
JEE web application (via browser) creates a new thread.
This thread creates a new OS process to invoke a Perl script to do some processing.
Perl script writes its output in a file and the control comes back to the thread.
The thread then reads the output file and loads the results to the database.
The control passes to the servlet which displays the result to the UI.
All these are synchronous and time consuming and we need to convert this to asynchronous messaging communication.
Now, I am planning to break this down to the following different components but not sure if this would work with RabbitMQ:
Application Breakdown:
JEE Web Application which is the Producer for RabbitMQ
Separate the Perl script into its own application that supports RabbitMQ communication. This Perl client will consume the message, process it, and place a new message on RabbitMQ for the next step
Separate the output-file-to-database loader into its own Java application that supports RabbitMQ communication. This would consume the message from the queue corresponding to the Perl client's message from the previous step.
This way, the output would be available in the database and the asynchronous flow would be completed.
Is it possible to separate the applications this way, compatible with RabbitMQ?
Are there any better ways to do this?
Please suggest some framework components for RabbitMQ and Perl.
I appreciate your input on this.
Yes, you can do it that way. If it's not hard work, I'd include the database load in the Perl step. That probably avoids handling an intermediate file, but I don't know if it's viable in your project.
In order to use RabbitMQ from Perl, I recommend the AnyEvent::RabbitMQ CPAN module. As the documentation states, you can use AnyEvent::RabbitMQ to:
Declare and delete exchanges
Declare, delete, bind and unbind queues
Set QoS and confirm mode
Publish, consume, get, ack, recover and reject messages
Select, commit and rollback transactions
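The three-stage decomposition in the question can be sketched with in-memory queues standing in for the RabbitMQ queues. This is purely illustrative: in production each stage is a separate process and the BlockingQueues become exchanges/queues (real consumers would block on put/take rather than offer/poll).

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineSketch {
    static final BlockingQueue<String> workQueue = new ArrayBlockingQueue<>(100);   // web app -> Perl worker
    static final BlockingQueue<String> resultQueue = new ArrayBlockingQueue<>(100); // Perl worker -> DB loader

    // Stage 1: the JEE web app publishes a job instead of spawning an OS process.
    static void publishJob(String jobId) { workQueue.offer(jobId); }

    // Stage 2: the worker consumes a job, processes it, and publishes the result.
    static void worker() {
        String job = workQueue.poll();
        if (job != null) resultQueue.offer("processed:" + job);
    }

    // Stage 3: the loader consumes the result and would write it to the database.
    static String loader() { return resultQueue.poll(); }
}
```

Because each stage only touches a queue, the stages can be deployed, scaled, and restarted independently, which is exactly what the synchronous thread-and-file flow prevented.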
I have a few questions on how to use Spring WebSockets and messaging. My program interfaces with an external web service producer endpoint that sends data payloads to my web service consumer endpoint, while on the other end I route these payloads to multiple WebSocket connections (STOMP and SockJS). The external producer includes a subscription ID in each data payload for every query request, so my approach is to send the payloads back out through the broker with a SimpMessagingTemplate, each to its own unique destination (i.e. /user/{subscriptionId}/subscribe). That way I can subscribe each WebSocket client to an existing destination if a duplicate query was made, and only request a new subscription from the external producer otherwise.
How do I access my SimpMessagingTemplate from within a different component, such as my web service consumer, so that I can send the data payloads to my message broker? Do I just declare the SimpMessagingTemplate static and add a getter in the controller where the template object is stored?
How do I get a list of all known destinations, along with the number of STOMP subscribers to each one? The external producer sets a termination time for each subscription, so I would like to auto-renew requests while a destination still has subscribers. I suppose I could keep track of it myself with maps/caches updated every time a WebSocket session opens or closes, but I'd prefer to do it with Spring if possible, since that minimizes risk and is probably less error-prone; or perhaps a full-featured broker such as RabbitMQ or ActiveMQ is necessary for this.
Found the answers I needed:
All I need to do is use Spring's autowiring support, and the bean will be injected already initialized:
@Autowired
private SimpMessagingTemplate messagingTemplate;
This needs a full-featured broker; however, for what I want to do, I decided that would be too much work and essentially unnecessary, so I will implement my own subscription checking against the 3rd-party web service using Java maps/caches. I went to painstaking lengths, setting breakpoints in Eclipse inside the Java .class files with a decompiler plugin, and found that all of this information lives in the DefaultSubscriptionRegistry class. Although I cannot access it through the API Spring exposes, I can rest assured it is being properly handled by the application: when a client subscribes to or disconnects from my application, the internal maps/caches of the registry are updated accordingly. Furthermore, I can maintain my own maps/caches by listening for the events Spring publishes, such as SessionSubscribeEvent or SessionDisconnectEvent, with an ApplicationListener; these fire whenever a client subscribes or disconnects.
public class SubscribeEvent implements ApplicationListener<SessionSubscribeEvent>