How do I automatically restart Tomcat in Linux - java

I have created a Spring Boot microservice and hosted it inside a Tomcat server on a Linux machine.
Everything is inside the container, and the container is inside the IBM Cloud Private platform.
Now the microservice should be running continuously.
But suppose that for some reason the microservice stops or Tomcat crashes.
Is there any way we could restart the Tomcat server or the microservice automatically, without manual intervention?

Why are you deploying a Spring Boot app in your local Tomcat? By default, Spring Boot comes with an embedded Tomcat server, so if you just build and run the jar, a Tomcat is started along with the service itself. You can also configure the server type (Tomcat or Jetty) and other server properties in the application.yml file. More details here - https://www.springboottutorial.com/spring-boot-with-embedded-servers-tomcat-jetty
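For illustration, a minimal application.yml along these lines configures the embedded server (the port and context-path values are placeholders; the server.servlet.context-path key assumes Spring Boot 2.x):

server:
  port: 8080
  servlet:
    context-path: /my-service

Switching to Jetty is done in the build rather than in application.yml: you exclude spring-boot-starter-tomcat from spring-boot-starter-web and add spring-boot-starter-jetty instead.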
Coming to the second part of the question, about how to make sure that a new instance is started automatically if one service crashes: for this you might need to do some reading on container orchestrators like Docker Swarm or Kubernetes, which support auto-scaling and can take care of restarting services (pods) as and when required. They can even scale up, meaning increase the number of instances of a service if the existing containers reach a resource usage threshold, and then load-balance requests to that service through some discovery and registry client. I think this would be helpful to you - https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
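As a sketch of the Kubernetes route (all names, the image, and the probe path are placeholders; the probe path assumes Spring Boot Actuator is on the classpath), a Deployment restarts the container whenever it exits or its liveness probe fails:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      # restartPolicy defaults to Always for pods managed by a Deployment
      containers:
      - name: my-microservice
        image: registry.example.com/my-microservice:1.0  # placeholder image
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /actuator/health  # assumes Spring Boot Actuator
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10

With this in place, a crashed JVM or a hung service (one failing its liveness probe) is restarted by the kubelet without manual intervention.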

Related

Need to deploy a container to multiple KIE servers running behind a load-balancer

I have a docker swarm running multiple services. One of these services is a KIE server. I use Bamboo to deploy my services and I need to be able to do the same (automated deployment) for the kJar into my KIE server.
Also, I will have multiple replicas of this KIE server running so I need to be able to deploy to all of them. They will be running behind the Docker swarm load-balancer.
My question is how can I do this? The only methodology I've seen so far is to place the kJar into the .m2 directory of the server and then use a REST call to create the container.
This would work fine for a single server but not for many behind a load-balancer.
Has anyone else done this and, if so, how did you implement it?
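For reference, the REST call mentioned above is typically a PUT against the KIE Server container endpoint, along these lines (host, credentials, and the GAV coordinates are placeholders):

curl -u kieserver:password -X PUT \
  -H "Content-Type: application/json" \
  -d '{"container-id":"my-kjar","release-id":{"group-id":"com.example","artifact-id":"my-kjar","version":"1.0.0"}}' \
  http://kie-server-host:8080/kie-server/services/rest/server/containers/my-kjar

The catch behind the question is that through the Swarm routing mesh such a call lands on an arbitrary replica, so each replica would have to be addressed individually (for example via Docker's tasks.<service-name> DNS entries, which resolve to the individual task IPs).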

WAR based microservice registration with Eureka Discovery Server

One of our client applications has the following architecture -
Angular based front end
Spring Boot based web application to talk to the front end
Spring Boot based microservices to talk to the web application
Eureka Discovery client to enable the web app to locate the microservices
Recently we faced some issues and want to make one of the microservices installable as an application under a standalone Tomcat. Making the microservice application's main class extend SpringBootServletInitializer and changing the packaging to war helped generate a war artifact; it gets deployed on Tomcat and registers with Eureka - but it's not serviceable.
When the web application looks up the service via Eureka and invokes any API, it fails. Even invoking the service via Postman or directly in the browser fails for the registered URL. It seems the microservice, when exposed as a web application under Tomcat, does not resolve via Eureka. Any suggestions?
Configuration:
Data service - to be deployed as war
spring.application.name=data-service
server.contextPath=/data-service
server.servlet.application-display-name=Data Service
spring.main.banner-mode=log
#server.port=9090
spring.jmx.default-domain=${spring.application.name}
eureka.client.service-url.defaultZone=http://localhost:9098/eureka
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.preferSameZoneEureka=true
ribbon.eureka.enabled=true
ribbon.ReadTimeout = 60000
When deployed, it registers with Eureka Discovery under the name data-service, but the URI is not a correct one to reach the instance; it happens to be something like
GET http://data-service/query/xxxxx HTTP/1.1
It misses the Tomcat port 8080 and the Tomcat context path. Manually checked, the URI
http://localhost:8080/data-service/query/xxxxx
does work.
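A plausible fix (untested here; the property names follow standard Spring Cloud Netflix conventions) is to tell the Eureka instance explicitly which port it is reachable on, since embedded-server port detection does not apply under an external Tomcat:

eureka.instance.non-secure-port=8080
eureka.instance.status-page-url-path=/data-service/info
eureka.instance.health-check-url-path=/data-service/health

Note that a Eureka registration carries only host and port (plus metadata), not the servlet context path, so callers resolved via Eureka/Ribbon still have to prepend /data-service to the request path themselves.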

Deploy Angular 5 and Spring Boot applications on the same Tomcat at different ports

I am developing a small project where I have a Spring Boot Java application and
an Angular 5 application. I want to deploy them on one Tomcat, running each on different ports.
The application flow should be like this:
1) Some external service calls the Java application with some headers. The Spring Boot Java application should read the headers, put them in a cookie, and forward the request to the Angular application.
2) The Angular application reads the headers from the cookie and communicates with another application (hosted somewhere else) via API calls.
What I tried:
I am able to deploy the Spring Boot application on Tomcat.
For the Angular deployment I am copy-pasting the dist folder into webapp.
What the question is about: I want them to run at the same time on Tomcat on different ports, so:
external application --calls--> Java application (say, running on localhost:8080) --redirects from localhost:8080 to--> Angular application (say, running on localhost:8081).
The moment you delegate the servlet container to a provided one, all Spring properties concerning an "embedded" container are simply ignored. This is the case for the server.port property.
Maybe it's a client/company constraint, but using a Spring Boot project this way makes you lose a big part of its benefits: raising self-contained apps ready to be horizontally scaled :(
Spring Boot keeps the possibility of shipping your static resources inside the app without losing the ability to run the embedded container, by building an executable war.
Tip: To do that, just change the packaging from .jar to .war.
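A minimal sketch of that setup (the class name is a placeholder; the import path assumes Spring Boot 2.x): with war packaging, the main class extends SpringBootServletInitializer so the same artifact runs standalone via java -jar and also deploys to an external Tomcat:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

@SpringBootApplication
public class MyApplication extends SpringBootServletInitializer {

    // Used when the war is deployed to an external servlet container
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(MyApplication.class);
    }

    // Used when the war is run directly: java -jar my-app.war
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}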
Hope this was helpful :)

Can a Tomcat web application tell a load balancer its Tomcat is down?

I have two Tomcat servers deployed behind an Nginx load balancer that's using proxy_pass to route the requests. This works well, but there is now a use case in my application for which I need to pull one of the servers out of the cluster (but keep it running), have the web application on it do something, and when that's done place the Tomcat back.
Right now I'm reloading the Nginx configuration manually and marking the server down to give the application time to do its thing, but what I would like is to have the web application "trick" Nginx into believing its Tomcat server is down, do its stuff, then rejoin the cluster.
I'm thinking that I need some custom Tomcat Connector that's controlled by the web application, but everything online is about proxying with Apache or using AJP, and that's not what I need; I need this to be an HTTP proxy with Nginx.
Does anyone have pointers on how I might go about doing this?
When Tomcat goes down, your webapp goes with it - you shouldn't rely on it to do any meaningful work to delay the shutdown. Rather, have proper systems management procedures in place to first change the LB and then shut down Tomcat. That's a solution external to Tomcat - and it should be easy, as you say you already pull one of the Tomcats from your cluster.
For unplanned downtime, use the LB's detection of Tomcat being down, as @mikhailov described.
Try the max_fails and fail_timeout configuration of the Upstream module, for instance:
upstream backend {
server tomcat1.localhost max_fails=3 fail_timeout=15s;
server tomcat2.localhost max_fails=3 fail_timeout=15s;
}
UPDATE:
To solve the "mark as down on demand" task, you can put a maintenance.html file into the public directory, handle it via try_files, and produce error code 503 if the file exists. That helps you configure the balancer efficiently.
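A sketch of how those pieces could fit together (the mechanism is my reading of the answer, not something it spells out): the web application drops a maintenance.html flag file into its public directory and starts answering 503, and the load balancer treats those 503s as failed attempts so the instance falls out of rotation until the file is removed:

upstream backend {
    server tomcat1.localhost max_fails=3 fail_timeout=15s;
    server tomcat2.localhost max_fails=3 fail_timeout=15s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Retry the other upstream server on a 503, and count the 503
        # as a failed attempt towards max_fails, marking the server down
        proxy_next_upstream error timeout http_503;
    }
}

Once the web application finishes its work, it deletes the flag file and stops returning 503, and after fail_timeout elapses Nginx sends traffic to that Tomcat again.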

How to deploy an app on GlassFish without crashing the service?

I have an app that takes a long time to deploy/redeploy because it uses EJB3, JPA2, JSF, and ICEfaces.
The app is deployed on GlassFish 3 on EC2 in Amazon Web Services. Every time I redeploy the app, the service isn't available while the redeploy is running.
How can I redeploy an existing application and keep the service available until the redeploy finishes?
Thanks in advance.
Depending on your architecture, you will always lose the service for a few seconds whilst you redeploy.
The proper way to architect this would be to have a software load balancer sitting in front of two or more GlassFish server instances set up as a cluster. The load balancer will automatically route all requests to the server holding the older, still-available service. Once the new service is up and running, it will route requests there again. Using mod_jk inside Apache works well as the load balancer.
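A minimal mod_jk sketch of that setup (host names and AJP ports are placeholders; it assumes AJP listeners are enabled on both GlassFish instances):

# workers.properties
worker.list=loadbalancer

worker.node1.type=ajp13
worker.node1.host=glassfish1.example.com
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=glassfish2.example.com
worker.node2.port=8009

# Load-balancer worker that spreads requests across the two nodes
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2

In httpd.conf you would then map requests to it with JkMount /* loadbalancer, disable one node while it is being redeployed (for example through a mod_jk status worker), and re-enable it afterwards, so the other instance keeps serving traffic throughout.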
