Spring Boot 3 App with Swagger cannot be deployed to AWS (Severe Issue) - java

I have a problem deploying my Spring Boot 3 app with Swagger to AWS.
After creating the environment in Elastic Beanstalk and selecting the project's jar file, I complete the process.
After waiting a certain amount of time, the instance shows the status "Severe".
I also changed server.port=5000 and defined SERVER_PORT=5000 in the environment's software configuration, but nothing changed.
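For reference, the two settings just mentioned can be sketched as follows. This assumes the Beanstalk Java SE platform, whose nginx proxy forwards traffic to port 5000 by default; the values are the ones from the question:

```properties
# application.properties - listen on the port Elastic Beanstalk's nginx proxy expects
server.port=5000
```

The SERVER_PORT=5000 environment property set under Configuration → Software is the equivalent setting applied from the Beanstalk side.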
Here is the image shown below
Here is the health overview information shown below.
Severe
Environment health has transitioned from Ok to Severe. 100.0 % of the requests are erroring with HTTP 4xx. Insufficient request rate (12.0 requests/min) to determine application health. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
How can I fix the issue?
The local URL is http://localhost:5000/swagger-ui/index.html
Here is the link showing the logs: Link
Here is the repo: Link

Related

How to create several instances of a Microservice: SpringBoot and Spring Cloud

I am new to microservices, and I came across concepts like Service Registry and Load Balancing. I have the following questions:
How are multiple instances of a particular microservice created?
How does a Service Registry using Eureka Server help distribute the load across the several instances of a microservice?
In my scenario, I created 3 different microservices and registered them on my service registry.
Service Registry configuration:
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    register-with-eureka: false # ensure this server is not itself registered as a client
    fetch-registry: false
Client configuration:
eureka:
  instance:
    prefer-ip-address: true
  client:
    fetch-registry: true # true by default
    register-with-eureka: true # true by default
    service-url:
      defaultZone: http://localhost:8761/eureka
When I stop my services, I still see them shown as Up and Running on Eureka, and I get a warning.
Can somebody please help me find the reason for this problem?
1. To communicate with another instance, you can use Eureka to find the address of the service you want to talk to. Eureka will give you a list of all the available instances of that service, and you can choose which one you want to communicate with.
2. The Eureka server is a microservice that keeps track of the locations of other microservices within the same system. These other microservices register themselves with the Eureka server so that they can be found and contacted by other microservices when needed. The Eureka server acts as a directory for the microservices, allowing them to find and communicate with each other. (not sure if that's what you asked).
3. To remove the warning, you can set a renewal threshold limit in the Eureka server's properties file:
eureka.server.renewal-percent-threshold=0.85
1. To scale your microservice locally, you can run multiple instances of your Spring Boot application, each on a different port.
First, update the port of the microservice you wish to scale in its application.yml file to:
server:
  port: 0
This will start the application on a random port each time you run it.
If you run 2 applications now, you will see only 1 instance of your microservice on your Eureka dashboard. This is because they both have the same Eureka instance id. To fix this, you need to generate a new instance id, so add the following to the same application.yml:
spring:
  application:
    name: "hotel-service"

eureka:
  instance:
    instance-id: "${spring.application.name}:${random.value}"
Finally, just run the same application more than once. You can do this in IntelliJ by right-clicking the main class, selecting Run, and then doing that again. Some extra setup may be required to run multiple instances of the same application in IntelliJ; please see: How do I run the same application twice in IntelliJ?
If you are using Eclipse/STS, right click on project > Run as > Spring Boot App (do this twice or more).
Alternatively, if you have Maven installed, open a terminal and run mvn spring-boot:run, then open a new terminal and run the command again.
Now you should see multiple instances of your application on the Eureka dashboard.
Note: In production, scaling up a microservice is handled by the DevOps team; for example, a container orchestration platform such as Kubernetes can be used to increase the number of instances of a microservice.
2. Generally, an API gateway is used to route incoming network requests to microservices and perform load balancing, while service discovery allows microservices to find and communicate with each other.
With Eureka service discovery, the microservices will register on the discovery server. An API gateway will also register on the Eureka server and will do load balancing.
One method of load balancing is the round-robin strategy, where the load balancer will rotate through the available instances in a sequential order. This helps to distribute the load evenly across the instances. There are also other load balancing methods, like Least Connection, Resource Based (Adaptive) etc.
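As an illustration of the round-robin strategy just described (this sketch and its instance addresses are made up, not part of the original answer):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin chooser: each call returns the next instance in order,
// wrapping around at the end of the list. Thread-safe via AtomicInteger.
class RoundRobin {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobin(List<String> instances) {
        this.instances = instances;
    }

    String choose() {
        // floorMod keeps the index non-negative even if the counter overflows
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

A gateway using this strategy would call choose() once per incoming request, so with two instances each one receives every other request.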
3. The warning you're getting is due to the self-preservation mode of the Eureka server. The Eureka server expects a heartbeat from each microservice every 30 seconds by default; if it doesn't receive one within 90 seconds, it de-registers the microservice. When Eureka stops receiving heartbeats from many services at once, it de-registers microservices only up to a certain limit. After that it enters self-preservation mode, stops de-registering any more microservices, and tries to re-establish connections, because a network issue could be preventing the Eureka server from receiving heartbeats.
Since you are developing locally and stopped your microservices, you are seeing the expected behaviour of the Eureka server entering self-preservation mode, which you can ignore.
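The 30-second heartbeat and 90-second eviction window mentioned above map to these Spring Cloud Netflix client properties; this sketch just makes the defaults explicit, it is not configuration you need to add:

```yaml
eureka:
  instance:
    lease-renewal-interval-in-seconds: 30    # how often the client sends a heartbeat (default)
    lease-expiration-duration-in-seconds: 90 # how long the server waits before evicting (default)
```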
You have already registered the microservices with the Eureka server.
Just run the same service (hotel-service, rating-service) on different ports. The Eureka server checks the name while registering a microservice; if it finds the same name, it registers the instance under a different {ip-address:port}. You can also use the same approach for load balancing.

How do I configure a load balancer in an AWS Elastic Beanstalk environment?

I have an Elastic Beanstalk environment that works perfectly when I configure the capacity as a single instance. The Spring Boot app responds without problems on port 8083, for example when I make a POST request like "http://54.162.95.157:8083/login" (54.162.95.157 is the public IP of the EC2 instance). But when I change the Beanstalk environment to load balanced, the environment stops working. I now send the POST request to the DNS of the load balancer, for example "http://awseb-e-m-AWSEBLoa-VVP8D98KT5SX-219136517.us-east-1.elb.amazonaws.com:80/login", but it fails: I get a 503 Service Unavailable: Back-end server is at capacity response. My question is: how do I correctly configure a load balancer in this case, or how do I move from a single instance to a load balancer and make this work?
The load balancer configuration:
The problem occurs because the AWS load balancer only sends traffic to instances that are healthy. I therefore had to configure how the health of my instances is checked. This solved the problem.
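For illustration, one way to configure that health check is an .ebextensions option setting. This is a sketch only; the file name is arbitrary and the /health path is an assumption, not from the original answer (the poster's app exposes /login, but health checks are GET requests, so a dedicated endpoint returning HTTP 200 is the usual target):

```yaml
# .ebextensions/healthcheck.config (sketch; file name is arbitrary)
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /health  # must answer GET with HTTP 200 on a healthy instance
```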

AWS EC2 Tomcat Java Webapp - How can I manage bot http sessions

I have a Tomcat 8 Java webapp deployed in an AWS EC2 Ubuntu instance.
It seems that there are a lot of bots trying to access my app, because in my JavaMelody monitoring I can see one-request bot sessions cached by Spring Security, like:
DefaultSavedRequest[http://52.27.73.101/phpmy-admin/]
DefaultSavedRequest[http://52.27.73.101/wp-login.php]
DefaultSavedRequest[http://52.27.73.101/admin/phpmyadmin/]
Is there a way to prevent these bot requests? I don't know, maybe a Spring Security config that does not save them in the cache, or a Tomcat config that does something similar?
Lots of them don't even have an IP, a country, or a user agent.
Apart from the security concern, my JavaMelody HTTP sessions info is not trustworthy because there are so many of these.
The EC2 instance is behind an AWS load balancer, so maybe that can help in this case too.
If you have an EC2 instance, you have full control of your OS. You could use any tool, for example iptables, to control the network traffic, maybe with something like this:
iptables -I INPUT -s <bot IP source> -j DROP
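At the application layer, the Spring Security option the question hints at (not saving these one-off requests) can be sketched by disabling the saved-request cache. This is a sketch assuming a recent Spring Security version with the lambda DSL, not the poster's actual configuration; the class name is made up:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.savedrequest.NullRequestCache;

// Sketch: NullRequestCache stops Spring Security from storing a
// DefaultSavedRequest in the HTTP session for unauthenticated hits,
// so one-request bot probes no longer leave cached entries behind.
@Configuration
public class NoSavedRequestConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.requestCache(cache -> cache.requestCache(new NullRequestCache()));
        return http.build();
    }
}
```

The trade-off: after logging in, users are no longer redirected to the page they originally requested.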

How to configure Wildfly 9 to failover HTTP sessions

I'm really struggling to configure Wildfly 9 to cluster/failover its sessions...
I keep reading that WildFly in standalone-ha mode will automatically discover peers and automatically share sessions, but it's clearly not working for me.
I have set up 3 AWS EC2 servers which all have the same configuration. They all run the same versions of everything and have the same webapp .war file deployed to each of them. The webapp works fine; I can log in to the app, which maintains a simple session variable to verify that I am logged in. I've launched each server with the standalone-ha.xml configuration file, but logging in to one doesn't allow me to access the session on any of the others.
I've tried all the things I can think of, but don't know how to diagnose the issue as I don't know how the servers identify each other.
I've manually deployed the war file on each server by placing the file into .../standalone/deployments/
Each has a fully open firewall...
Oh, and I set the multicast address on the command line to 230.0.0.4 (that number came from a guide, and I have literally no understanding of it), and each server is bound (-b) to its internal IP.
Any help appreciated...
First, you must consider that multicast traffic is not allowed in AWS EC2, and thus MPING will not work.
See http://developer.jboss.org/wiki/JGroupsS3PING
An example how to implement S3Ping http://aws.typepad.com/awsaktuell/2013/10/elastic-jboss-as-7-clustering-in-aws-using-ec2-s3-elb-and-chef.html
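As a sketch of what the linked articles describe, MPING can be replaced with S3_PING in the jgroups subsystem of standalone-ha.xml; the bucket name and credentials below are placeholders:

```xml
<!-- In the jgroups subsystem's protocol stack, replace
     <protocol type="MPING"/> with S3-based discovery -->
<protocol type="S3_PING">
    <property name="location">my-cluster-bucket</property>
    <property name="access_key">YOUR_ACCESS_KEY</property>
    <property name="secret_access_key">YOUR_SECRET_KEY</property>
</protocol>
```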

Weblogic application on a cluster behavior

I have an application that deploys successfully to WebLogic Server. I've configured a cluster and wanted to start this application on the cluster, but sometimes I face the following situation:
1st node: the application runs and there aren't any errors.
2nd node: shows me a 404 error.
How is it possible that one application can work successfully on one node and show a 404 error on the second one?
Sorry for the late answer, and thanks to #Kyouma.
The problem was in the JMS server.
I had the following configuration:
1 machine with 2 servers united in a cluster.
A JMS server targeted to 1 server from the cluster.
A JMS module targeted to the same server.
The problem was that I had the *.war file deployed to both servers in the cluster.
When the first server started, everything was fine, because during Topic bean creation it could find those topics. But the second server couldn't find them, because the JMS server was not in the Running state.
So the solution is:
1. Have no *.war files in the Deployment menu.
2. Start all servers in a cluster.
3. Install *.war file in the Deployment menu.
End result: the application deploys successfully on both servers.
