Monitoring replicas with Spring Boot Admin on Kubernetes - java

I set up a Spring Boot Admin client on Kubernetes and scaled it up to 3 replicas, but when I check the instances, the Admin Server shows just one.

In order for SBA (Spring Boot Admin) to recognize the three instances of your service as distinct, you need to make sure each one is registered in SBA using its internal IP address.
Doing so lets SBA query the health of each instance independently, and results in Spring creating a unique instance-id for each pod.
Note that using the k8s service name for the registration would cause SBA's health queries to be load-balanced across the service's pods.
To do this, add the following to your application.yml:
spring:
  boot.admin.client:
    url: http://<k8s-service-name-and-port>
    instance:
      name: <service-name>
      service-base-url: http://${K8S_POD_IP}:8080
      management-base-url: http://${K8S_POD_IP}:8081
    auto-deregistration: true
Where:
K8S_POD_IP is an environment variable holding the pod's IP address, which must be reachable from SBA - this is the address SBA will use to query your service instance's health (see the Deployment fragment after this list for one way to set it)
spring.boot.admin.client.url is the URL your service uses to register with the SBA server - it should point to the SBA server's k8s service
spring.boot.admin.client.instance.management-base-url is used by SBA to monitor each instance's health; it must be unique per instance and reachable from SBA
If you don't set auto-deregistration to true, then whenever you roll out an update or scale down your service you will get notifications about unhealthy instances; with this setting, instances deregister from SBA on shutdown.
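One common way to populate K8S_POD_IP is the Kubernetes Downward API. A minimal sketch of the relevant fragment of a Deployment's pod template (container name and image are hypothetical):
containers:
  - name: my-service           # hypothetical container name
    image: my-service:latest   # hypothetical image
    env:
      - name: K8S_POD_IP       # resolved by Spring's ${K8S_POD_IP} placeholder above
        valueFrom:
          fieldRef:
            fieldPath: status.podIP   # the pod's cluster-internal IP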

You need to set this parameter in your yml file:
eureka.instance.instance-id: ${spring.cloud.client.ip-address}:${server.port}

Related

Spring Boot 3 App with Swagger cannot be deployed on AWS (Severe Issue)

I have a problem deploying my Spring Boot 3 app with Swagger to AWS.
After creating the environment in Elastic Beanstalk and selecting the project's jar file, I completed the process.
After waiting a certain amount of time, I get the status "Severe" for the instance.
I also changed server.port=5000 and defined SERVER_PORT=5000 in the environment's software configuration, but nothing changed.
Here is the health overview information shown below.
Severe
Environment health has transitioned from Ok to Severe. 100.0 % of the requests are erroring with HTTP 4xx. Insufficient request rate (12.0 requests/min) to determine application health. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
How can I fix the issue?
The local URL is http://localhost:5000/swagger-ui/index.html
Here is the link showing logs : Link
Here is the repo : Link

How to create several instances of a microservice: Spring Boot and Spring Cloud

I am new to microservices, and I came across several concepts like Service Registry and Load Balancing. I have the following questions:
How are multiple instances of a particular microservice created?
How does a Service Registry using Eureka Server help distribute the load across the several instances of a microservice?
In my scenario, I created 3 different microservices and registered them on my service registry.
Service Registry Configuration-
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    register-with-eureka: false # ensures the registry does not register itself as a client
    fetch-registry: false
Client Configuration-
eureka:
  instance:
    prefer-ip-address: true
  client:
    fetch-registry: true # true by default
    register-with-eureka: true # true by default
    service-url:
      defaultZone: http://localhost:8761/eureka
When I stop my services, I still see them as UP and running on Eureka and get a warning.
Can somebody please help me find the reason for this problem?
1. To communicate with another instance, you can use Eureka to find the address of the service you want to talk to. Eureka will give you a list of all the available instances of that service, and you can choose which one you want to communicate with.
2. The Eureka server is a microservice that keeps track of the locations of other microservices within the same system. These other microservices register themselves with the Eureka server so that they can be found and contacted by other microservices when needed. The Eureka server acts as a directory for the microservices, allowing them to find and communicate with each other. (not sure if that's what you asked).
3. In order to remove the warning:
You can set a renewal threshold limit in the Eureka server's properties file:
eureka.renewalPercentThreshold=0.85
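If you configure the registry through Spring Boot's application.yml instead (an assumption based on Spring Cloud Netflix, whose EurekaServerConfigBean exposes the same setting), the equivalent would be:
eureka:
  server:
    renewal-percent-threshold: 0.85 # minimum rate of heartbeat renewals before self-preservation kicks in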
1. To scale your microservice locally, you can run multiple instances of your Spring Boot application, each on a different port.
First, update the port of the microservice you wish to scale in its application.yml:
server:
  port: 0
This will start the application on a random port each time you run it.
If you run 2 applications now, you will see only 1 instance of your microservice on your Eureka dashboard. This is because they both have the same Eureka instance id. To fix this you need to generate a new instance id, so add the below in the same application.yml:
spring:
  application:
    name: "hotel-service"
eureka:
  instance:
    instance-id: "${spring.application.name}:${random.value}"
Finally, just run the same application more than once. You can do this in IntelliJ by right-clicking on the main class and selecting Run, then doing that again. Some extra setup may be required to run multiple instances of the same application in IntelliJ; see: How do I run the same application twice in IntelliJ?
If you are using Eclipse/STS, right-click on the project > Run As > Spring Boot App (do this twice or more).
Alternatively, if you have Maven installed, open a terminal and run mvn spring-boot:run, then open a new terminal and run the command again.
Now you should see multiple instances of your application on the Eureka dashboard.
Note: In production, scaling up a microservice is handled by the DevOps team; for example, a container orchestration platform such as Kubernetes can be used to increase the number of instances of a microservice.
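As a minimal sketch (all names here are hypothetical), a Kubernetes Deployment scales a service simply by declaring a replica count:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hotel-service              # hypothetical deployment name
spec:
  replicas: 3                      # run three instances of the microservice
  selector:
    matchLabels:
      app: hotel-service
  template:
    metadata:
      labels:
        app: hotel-service
    spec:
      containers:
        - name: hotel-service
          image: hotel-service:1.0 # hypothetical image
          ports:
            - containerPort: 8080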
2. Generally, an API gateway is used to route incoming network requests to microservices and do the load balancing, while service discovery allows microservices to find and communicate with each other.
With Eureka service discovery, the microservices register themselves on the discovery server. An API gateway will also register on the Eureka server and will do the load balancing, as in the sketch below.
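For example (a sketch assuming Spring Cloud Gateway and the hypothetical hotel-service from above, both registered in Eureka), a route that load-balances across discovered instances could look like:
spring:
  cloud:
    gateway:
      routes:
        - id: hotel-service-route
          uri: lb://hotel-service # lb:// resolves instances through the discovery client
          predicates:
            - Path=/hotels/** # forward matching requests to a hotel-service instance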
One method of load balancing is the round-robin strategy, where the load balancer rotates through the available instances in sequential order. This helps distribute the load evenly across the instances. There are also other load balancing methods, like Least Connections, Resource Based (Adaptive), etc.
3. The error you're getting is due to the self-preservation mode that comes with the Eureka server. The Eureka server expects a heartbeat from each microservice every 30 seconds by default; if it doesn't receive a heartbeat within 90 seconds, it will de-register that microservice. In a case where Eureka doesn't receive heartbeat signals from many services at once, it will de-register microservices only up to a certain limit; after that it enters self-preservation mode, will not de-register any more microservices, and will try to re-establish connections, because a network issue could be preventing the Eureka server from receiving heartbeats.
Since you are developing locally and you stopped running your microservices, you are seeing the expected behaviour of the Eureka server entering self-preservation mode, which you can ignore.
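If the warning is distracting during local development, self-preservation can be switched off (an assumption based on Spring Cloud Netflix's server properties; not advisable in production):
eureka:
  server:
    enable-self-preservation: false # de-register instances as soon as heartbeats stop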
You have already registered the microservices in the Eureka server.
Just run the same service (hotel-service, rating-service) on different ports. Eureka checks the name while registering a microservice; if it finds the same name, it registers the microservice under a different ip-address:port combination. You can also use the same approach for load balancing.

All web containers occupied by one app that depends on 2nd app

We have an application on our WebSphere Application Server that calls a web service of a second application which is deployed on the same app server.
There are 100 available web container threads.
At times when there are many active users, application 1 allocates all available web container threads. When application 1 tries to call the web service (application 2), there are no free threads, so application 1 never finishes and therefore the whole system hangs.
How can I solve this? For example, is it possible to restrict the web container thread count per application, so that application 1 may only use 50% of the available threads?
A solution would be to add code to application 1 that watches the count of requests being processed simultaneously, but I'd like to avoid that if possible because I think it is very error-prone. Earlier, we used the synchronized keyword, but that only allows 1 request at a time, which caused even bigger problems.
This could be possible by defining a separate transport chain and thread pool.
I don't have the web console in front of me, so here are the steps in rough order:
create a separate thread pool for your SOAP service app
create a separate web transport chain on a new port, e.g. 9045
associate that thread pool with the transport chain
create a new virtual host with the host alias *:9045
map your SOAP service app to that port
If you access the app via port 9045, it will use your own separate thread pool.
Concerns:
if it is only local access (from one app to the other), then you just access it via localhost:9045 and you are good to go
if your SOAP service ALSO needs to be accessible from outside, e.g. via the plugin on the default HTTPS port (443), you would need to create a different DNS hostname so it can be associated with your SOAP service app, e.g. soap-service.domain.com (and then you use that domain in the host alias instead of *). In that case the plugin should use the 9045 port for transport as well, but I don't have an environment at hand to verify that.
I hope I didn't complicate it too much. ;-)

Polling SQS message from container in EC2 instance - EC2 Instance IAM Role vs ECS Task Role

I am trying to poll SQS messages from a Spring Boot app running in a container on an EC2 instance. Both the consumer and the SQS queue are in the same AWS account.
The messages are encrypted with a KMS key, so I need to grant "kms:Decrypt" permission; otherwise, I always get this same message:
com.amazonaws.services.sqs.model.AmazonSQSException: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access
To allow decryption, I can either:
grant the permission directly in the KMS key policy -> I don't want to do this for security reasons
grant it at the EC2 instance IAM role level -> I wanted to avoid this because I have other containers running on the same EC2 instance
grant it at the ECS task role level -> preferred option
The 3rd option is already in place, but the problem is that the Spring Boot request always uses the EC2 instance IAM role (terraform-20210318145009433200000002), as seen in CloudTrail, instead of the ECS task role.
How can I make it use the ECS Task role?
You may be able to block access to the credentials supplied by the Amazon EC2 instance profile using one of these methods:
setting the environment variable ECS_AWSVPC_BLOCK_IMDS to true (see the agent config sketch after this list)
running the following command:
sudo yum install -y iptables-services; sudo iptables --insert DOCKER-USER 1 --in-interface docker+ --destination 169.254.169.254/32 --jump DROP
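For the first method, the variable goes into the ECS container agent's configuration file on the container instance (a sketch; note this setting only affects tasks using the awsvpc network mode):
# /etc/ecs/ecs.config - ECS container agent configuration
ECS_AWSVPC_BLOCK_IMDS=true # block awsvpc-mode tasks from reaching the EC2 instance metadata service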
It also depends on the ECS task definition: the task role must be specified there for the containers to receive its credentials.
Reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html#task-iam-role-considerations

Camel IP address caching

I have a URL in a Camel route which is set via a custom JVM property and remains unchanged:
<camel:to id="to-server" uri="{{serverURL}}" />
The serverURL property is set to a site load-balancer address: http://xyz:8080/Server/transactionServlet
At the network layer this URL can point to either Server 1 or Server 2; the URL should work regardless of which server we're using.
After the switch-over from Server 1 to Server 2, our WAR still tries to post to Server 1 and fails.
It appears that our WAR is caching the URL's address (whatever the site load balancer was pointing to at the time) when it starts, and does not recognize that we have switched over.
The only workaround is to restart the application WAR, at which point it picks up the Server 2 address (what the site load balancer is now pointing to) and begins posting transactions to Server 2.
Is there any way to make Camel not cache the IP address and post to whatever the server URL is pointing to?
I am using Apache Camel 2.14
Have a look at networkaddress.cache.ttl.
It is specified in java.security to indicate the caching policy for successful name lookups from the name service. The value is an integer indicating the number of seconds to cache a successful lookup.
A value of -1 indicates "cache forever". The default behavior is to cache forever when a security manager is installed, and to cache for an implementation-specific period of time when a security manager is not installed.
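Note that this is a JVM-wide setting, not a Camel one. As a sketch (assuming a Java 7/8 runtime, where the file lives under $JAVA_HOME/jre/lib/security), you would set it in java.security:
# $JAVA_HOME/jre/lib/security/java.security
# Cache successful DNS lookups for 60 seconds instead of forever,
# so the route picks up load-balancer switch-overs without a restart.
networkaddress.cache.ttl=60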
