I am developing a Spring Boot application.
Spring Boot builds a .jar file for the application.
I want to cluster this particular application across different servers. Let's say I build the jar file and run the project; it should then run in cluster mode across a number of defined servers and be able to serve end users' needs.
My jar will reside on only one server, but it will be clustered across a number of servers. When an end user calls a web service from my Spring Boot app, he never knows which server it is being served from.
The reason behind clustering is that if any of the servers goes down in the future, end users will still be able to access the web services from another server. But I don't know how to make it clustered.
Can anyone please give me some insight on this?
If you want to have it clustered, you just run your Spring Boot application on multiple servers (of course, the JAR must be present on those servers, otherwise you can't run it). You would then place a load balancer in front of the application servers to distribute the load.
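For illustration, a minimal nginx sketch of that setup, assuming two copies of the jar are already running on hosts app1.internal and app2.internal (hypothetical names), both on port 8080:

```nginx
# Hypothetical upstream pool: the two servers running the same Spring Boot jar.
upstream spring_boot_cluster {
    server app1.internal:8080;
    server app2.internal:8080;
}

server {
    listen 80;

    location / {
        # nginx distributes requests across the instances; if one node goes
        # down, traffic keeps flowing to the remaining one.
        proxy_pass http://spring_boot_cluster;
    }
}
```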
If all the services you are going to expose are stateless, you only need a load balancer in front of your nodes, e.g. Apache or nginx. If your services are stateful (they store any state: sessions, or data in a DB), you have to use a distributed cache or an in-memory data grid:
For sessions you can use the spring-session project, which can use Redis to store the sessions (see the sketch after this list).
For data stored in a DB, you need to cluster the DB itself, and you can use a distributed cache on top of your DB layer, like Hazelcast.
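A rough sketch of the spring-session option mentioned above, assuming the spring-session-data-redis and spring-boot-starter-data-redis dependencies and a shared Redis server (the host name below is a placeholder):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Stores HTTP sessions in Redis so any node behind the load balancer can
// serve any user; the session no longer lives inside a single JVM.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public RedisConnectionFactory redisConnectionFactory() {
        // "redis-host" is a placeholder for your shared Redis server.
        return new LettuceConnectionFactory("redis-host", 6379);
    }
}
```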
Look into Spring Cloud; it uses some of the Netflix open-source software along with Amazon's to create 12-factor apps for microservices.
Ideally you would need a load balancer and a service registry to run multiple instances of a Spring Boot application. I believe you have to add a dependency called Eureka (see the sketch below).
Check the link below:
Spring Cloud
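A minimal sketch of the Eureka side, assuming the spring-cloud-starter-netflix-eureka-client dependency and a Eureka server whose address is set in application.properties (the registry URL below is a placeholder):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

// Each running instance of the app registers itself with the Eureka registry,
// so a load balancer or other services can discover all live instances.
// application.properties (placeholder host):
//   eureka.client.serviceUrl.defaultZone=http://registry-host:8761/eureka/
@SpringBootApplication
@EnableEurekaClient
public class ClusteredApplication {
    public static void main(String[] args) {
        SpringApplication.run(ClusteredApplication.class, args);
    }
}
```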
You can deploy it to Cloud Foundry and use the autoscale function to increase the number of application instances.
Related
Currently, I have a Spring Boot jar file with a bunch of REST APIs, including calls for login, deployed on EC2. I also have a separate code base for my UI, i.e. with JS, HTML, and CSS. What is the best way to deploy this on AWS and keep it separate from the backend?
This can be done in many ways, but I will share a simple one.
Deploy your Spring Boot app on one AWS instance.
Deploy the front-end app on another AWS instance.
This is a kind of two-tier application where the server and the client app are hosted on different instances. You can restrict your REST API so that it can be accessed only by the instance hosting the front-end app. For a trial you can use a Heroku account. E.g.
Github: https://github.com/krishna28/springbootapi
Also check https://github.com/krishna28/etodo
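One possible way to do the "restrict access to your REST API" part mentioned above (besides AWS security groups) is a CORS rule that only allows browser calls from the front-end's host. A minimal sketch, where the origin URL is a hypothetical placeholder:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Only the front-end host (placeholder URL) may call the /api/** endpoints
// from a browser; other origins are rejected by the CORS policy.
@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")
                .allowedOrigins("http://www.my-frontend-app.com")
                .allowedMethods("GET", "POST", "PUT", "DELETE");
    }
}
```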
I have coded a Spring MVC Hibernate application with RabbitMQ as a messaging server and a MySQL DB. I have also used Hazelcast, an in-memory distributed cache, to centralize the state of the application, moving the local Tomcat session to a centralized session and implementing distributed locks.
The app right now is hosted on a single Tomcat server on my local system.
I want to test my application in a multi-JVM, multi-node environment, i.e. the app running on multiple Tomcat servers.
What would be the best approach to testing the app?
A few things come to my mind:
A. Install and configure a load balancer and set up a Tomcat cluster on my local system. This, I believe, is a tedious task and requires much effort.
B. Host the application on a PaaS like OpenShift or Cloud Foundry, but I am not sure whether I will be able to test my application on several nodes.
C. Any other way to simulate a clustered environment on my local Windows system?
I would suggest that you first understand your application requirements: for the real production/live environment, are you going to use Infrastructure as a Service or a PaaS?
If Infrastructure as a Service, then:
I would suggest creating a local cluster environment and using the Tomcat and Spring sticky-session concept. Persist the sessions in a Hazelcast or Redis server installed on a different node (see the sketch below). Configure a load balancer in front of the multiple nodes running Tomcat. 2-3 VMs would be suitable for testing purposes.
If the requirement is PaaS, then:
Don't think about a local environment. Test directly on OpenShift or an AWS free account, and trust me, you will be able to test on the PaaS if everything is set up correctly.
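A rough sketch of the "persist the session in Hazelcast on a different node" idea, assuming the Hazelcast Java client and hypothetical member addresses:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

// Each Tomcat node connects as a client to a Hazelcast cluster running on
// separate nodes, so session state survives the loss of any single Tomcat.
public class SessionStore {

    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        // Hypothetical addresses of the Hazelcast members on other VMs.
        config.getNetworkConfig().addAddress("192.168.1.10:5701", "192.168.1.11:5701");

        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

        // Distributed map shared by all Tomcat nodes, used here as a session store.
        Map<String, Object> sessions = client.getMap("web-sessions");
        sessions.put("session-id-123", "some session state");
    }
}
```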
I am trying to figure out an easy way to manage many Spring Boot applications on my production server. Right now I have many fat jars running in different folders, where each one has its own script to start/stop the application, and there's an external folder for the configurations (logback, properties, XML). For the record, those configurations are loaded by passing -Dloader.path on the command line to the Spring Boot execution.
So how can I avoid conflicts over the same HTTP/HTTPS port already in use in production? Does any kind of application manager exist through which system administrators could control this? One solution I found was to virtualize the Spring Boot applications with Docker, but my environment is Unix Solaris.
Is there any Java solution for this scenario?
You can have a look at Spring Cloud, which will give you better control and management when running multiple Boot applications. Not all components of Spring Cloud may be useful to you, but a few of them will help with port resolution, service routing, and property maintenance. Along with the above you can also try SBA.
You can also use nginx for UI load balancing and as a reverse proxy.
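Assuming SBA refers to Spring Boot Admin (codecentric), a minimal sketch of a standalone admin server that the individual Boot applications register with, giving administrators one place to monitor them (requires the spring-boot-admin-starter-server dependency):

```java
import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Standalone Spring Boot Admin server; each fat jar would add the admin
// client starter and point spring.boot.admin.client.url at this server.
@SpringBootApplication
@EnableAdminServer
public class AdminServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(AdminServerApplication.class, args);
    }
}
```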
We are developing an application that periodically syncs the LDAP servers of different clients with our database. This application needs to be accessed via a web portal: a web user will create, modify, or delete scheduled tasks in this application. So we have developed this application as a web service.
Now, we have to scale this application and also ensure high availability.
The application is an Axis2-based web service running on Tomcat. We have thought of an httpd + mod_jk + Tomcat combination for load balancing. The problem is that if a request for modification/deletion comes, it should land on the same Tomcat server on which the task was originally created. But since the request can come from different web users accessing the web portal from different IP addresses, we cannot rely on the same session id (sticky sessions).
Any solutions? Different architecture? Anything.
We have also thought of using the Quartz scheduler API. The site says it supports load balancing and clustering. Does anyone have experience with such a scenario with Quartz?
If you are using Quartz for your scheduling, it can be backed by a database (see JDBCJobStore). Then you could access any Tomcat server and the scheduling would be centralized. I would recommend using a key in the database that you return to the Axis service, so that the user can reference the same data between calls.
Alternatively, it is not difficult to use the database as a job scheduler, have your tasks run on Tomcat (any instance), and put the results into the database. If the results of the job (such as its status) are small, this would work fine.
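A minimal sketch of the JDBCJobStore approach with a clustered scheduler; the data source name and the delegate class are placeholders to adapt to your database:

```java
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

// Builds a clustered Quartz scheduler backed by a shared database, so any
// Tomcat node can create, modify, or delete the same jobs and triggers.
public class ClusteredSchedulerFactory {

    public static Scheduler create() throws Exception {
        Properties props = new Properties();
        props.put("org.quartz.scheduler.instanceName", "ClusteredScheduler");
        props.put("org.quartz.scheduler.instanceId", "AUTO");
        props.put("org.quartz.threadPool.threadCount", "5");
        props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.put("org.quartz.jobStore.isClustered", "true");
        props.put("org.quartz.jobStore.dataSource", "quartzDS");
        // The "quartzDS" data source (driver, URL, user, password) would be
        // defined via org.quartz.dataSource.quartzDS.* properties -- omitted here.
        return new StdSchedulerFactory(props).getScheduler();
    }
}
```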
I am looking for the best method to host multiple websites developed using Spring Boot.
I have a public IP and it points to an EC2 machine.
I am already running one web application on it, developed using Spring Boot.
Now I am looking for a way to set up my second Spring Boot application (running on a different port).
My configuration should end up like this (single public IP):
www.app1.com(x.x.x.x) => Spring Boot App1
www.app2.com(x.x.x.x) => Spring Boot App2
I found many articles on the internet dealing with the conf/server.xml file, e.g. http://tomcat.apache.org/tomcat-7.0-doc/config/host.html
Can someone help me achieve this?
The best way is probably to use a reverse proxy front end. E.g. install nginx on your EC2 box, or (probably better if you are serious about it) use an ELB, and Route 53 to register your DNS record.
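For example, a minimal nginx sketch of the reverse-proxy option, assuming App1 listens on port 8080 and App2 on port 8081 (adjust to your actual ports):

```nginx
# One public IP, two name-based virtual hosts proxying to local Spring Boot apps.
server {
    listen 80;
    server_name www.app1.com;
    location / {
        proxy_pass http://localhost:8080;   # Spring Boot App1
    }
}

server {
    listen 80;
    server_name www.app2.com;
    location / {
        proxy_pass http://localhost:8081;   # Spring Boot App2
    }
}
```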