deploy different spring boot wars on external tomcat with same port - java

How do I deploy different Spring Boot WARs on a Tomcat container?
I have 3 WARs, named:
myapp1.war
myapp2.war
myapp3.war
I have added these WARs to the Tomcat webapps folder and made some changes in server.xml under the <Host> tag.
<Context path="/apipath" docBase="myapp1" reloadable="true"></Context>
I can access the application on http://localhost:9080/apipath/mymethoduriapp1
Now if I put another Context path in the same host tag for another WAR, like
<Context path="/apipath" docBase="myapp2" reloadable="true"></Context>
the server fails to start.
Is there any way to have multiple context paths so that I can access all the applications on the same port?
example.
http://localhost:9080/apipath/mymethoduriapp1
http://localhost:9080/apipath/mymethoduriapp2
http://localhost:9080/apipath/mymethoduriapp3
Thanks in advance for the help.

No, you can't have multiple apps listening on the same port. How would
the kernel know which application to send the packets to? What you
can do is run an HTTP server such as nginx or Apache that listens
on 9080, run each app on its own port, and then proxy the requests
based on the URL to the desired app.
nginx is probably the most popular and the easiest to set up. The link below shows
the basic configuration for this case:
Nginx Reverse Proxy. Multiple Applications on One Domain
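A minimal nginx sketch of that approach, assuming each WAR runs in its own Tomcat (or connector) on ports 9081–9083; the ports and URL prefixes are illustrative, not taken from the question:

```nginx
# nginx listens on 9080 and routes by URL prefix to each app's own port.
server {
    listen 9080;

    location /app1/ {
        proxy_pass http://localhost:9081/apipath/;  # Tomcat serving myapp1
    }
    location /app2/ {
        proxy_pass http://localhost:9082/apipath/;  # Tomcat serving myapp2
    }
    location /app3/ {
        proxy_pass http://localhost:9083/apipath/;  # Tomcat serving myapp3
    }
}
```

Note that Tomcat itself can also serve several WARs on one port as long as each <Context> has a distinct path; the startup failure in the question comes from reusing /apipath for two contexts.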

Related

Multiple servlets on multiple ports in jetty

I want to serve two servlets on different ports, something like
http://localhost:8888/servlet1
http://localhost:9999/servlet2
These servlets are part of one WAR file. I found a way to do this as explained in this blog, but I am not using embedded Jetty; in my case it's a separate Jetty installation. I drop my application WAR file into Jetty's webapp directory, and the servlet mappings are configured in the web.xml inside the WAR. I have followed the Jetty documentation to create connectors and virtual hosts by configuring the XML files as described there, but I still can't see how or where to pass in the configuration saying "serve servlet1 from 8888 and servlet2 from 9999". Has anyone done this before through XML files?
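For standalone Jetty, the usual trick is to give each connector a name and then restrict a context to that connector with an "@name" virtual host. A sketch under those assumptions (Jetty 9-style XML; the connector names, ports, and paths are illustrative, and the exact elements differ across Jetty versions):

```xml
<!-- jetty.xml: a named connector per port (repeat for 9999/port9999) -->
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.ServerConnector">
      <Arg name="server"><Ref refid="Server"/></Arg>
      <Set name="port">8888</Set>
      <Set name="name">port8888</Set>
    </New>
  </Arg>
</Call>
```

```xml
<!-- contexts/servlet1.xml: bind this context to the 8888 connector only -->
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="war">/path/to/myapp.war</Set>
  <Set name="contextPath">/servlet1</Set>
  <Set name="virtualHosts">
    <Array type="String">
      <Item>@port8888</Item>
    </Array>
  </Set>
</Configure>
```

Note this pins a whole context (the WAR) to a port, not an individual servlet, so the WAR would typically be deployed twice via two context XML files, each bound to a different connector.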

Can a Tomcat web application tell a load balancer its Tomcat is down?

I have two Tomcat servers deployed behind an Nginx load balancer that uses proxy_pass to route the requests. This works well, but there is now a use case in my application for which I need to pull one of the servers out of the cluster (while keeping it running), have the web application on it do something, and when that's done put the Tomcat back.
Right now I reload the Nginx configuration manually and mark the server down to give the application time to do its thing, but what I would like is for the web application to "trick" Nginx into believing its Tomcat server is down, do its stuff, then rejoin the cluster.
I'm thinking I need some custom Tomcat connector controlled by the web application, but everything online is about proxying with Apache or using AJP, and that's not what I need; I need this to be an HTTP proxy with Nginx.
Does anyone have pointers on how I might go about doing this?
When Tomcat goes down, your webapp will go with it, so you shouldn't rely on it to do any meaningful work to delay the shutdown. Rather, have proper systems-management procedures: first change the LB, then shut down Tomcat. That's a solution external to Tomcat, and it should be easy since you say you pull one of the Tomcats from your cluster.
For unplanned downtime, use the LB's detection of Tomcat being down, as @mikhailov described.
Try the max_fails and fail_timeout settings of the upstream module, for instance:
upstream backend {
    server tomcat1.localhost max_fails=3 fail_timeout=15s;
    server tomcat2.localhost max_fails=3 fail_timeout=15s;
}
UPDATE:
To solve the "mark as down on demand" task, you can put a maintenance.html file into the public directory, handle it via try_files (or an equivalent file check), and return a 503 error code if the file exists. That lets you configure the balancer efficiently.
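A sketch of that idea in nginx (paths are assumptions; the check here uses if (-f ...), a common variant of the try_files trick):

```nginx
upstream backend {
    server tomcat1.localhost max_fails=3 fail_timeout=15s;
    server tomcat2.localhost max_fails=3 fail_timeout=15s;
}

server {
    listen 80;
    root /var/www/public;

    location / {
        # The webapp creates maintenance.html before its maintenance work
        # and deletes it afterwards; while it exists, answer 503.
        if (-f $document_root/maintenance.html) {
            return 503;
        }
        proxy_pass http://backend;
    }

    # Serve the maintenance page as the body of the 503 response.
    error_page 503 /maintenance.html;
}
```

As written this takes the whole site into maintenance; to pull just one backend, the same file check can run on each Tomcat host and nginx can skip 503 responses with proxy_next_upstream http_503.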

configure apache bridge for tomcat

I have two Java projects:
one is a web service, which is running fine on Tomcat 6;
the other is a client developed with the Play 1.2.4 framework, which must be deployed on the same Tomcat as the web service.
My problem is that when they are deployed as WAR files on Tomcat, the client's request URLs don't include the application context, so the paths cannot be found.
I read that making Apache a bridge for Tomcat would solve the problem, but I don't know how to configure it.
Please help me out.
I think what you mean is that your UI, developed with Play, is deployed under a context like /myapp.
You have two options:
Rename the WAR file to ROOT.war; Tomcat will deploy it to /.
Use Apache as a proxy and implement a redirect.
The Apache config would look like:
RedirectMatch "^/$" /myapp
ProxyPass /myapp http://tomcatserver:8080/myapp
ProxyPassReverse /myapp http://tomcatserver:8080/myapp
(ProxyPassReverse rewrites redirect headers coming back from Tomcat so they keep pointing at the proxy.)

How to deploy a web service to amazon EC2?

I've just created a web application and deployed it to Amazon EC2, but now I want to create a Java web service and deploy it to an instance on Amazon AWS so that I can use it in my application.
There are many possible configurations. This can be one of them:
Start the application server (probably Tomcat) with AJP enabled.
Use the Apache HTTP Server JK module (mod_jk) to connect the web server to the Tomcat application.
Deploy your web application on the application server and make it generate the WSDL using the external domain name or IP. For example: http://www.domain.com/application/service
Make sure that the Amazon firewall has port 80 open for that instance.
If you use a domain name, make your DNS point to that host.
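The JK wiring in the first two steps might look like this (the worker name, host, and the default AJP port 8009 are assumptions):

```properties
# conf/workers.properties
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
```

```apache
# httpd.conf additions
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkMount /application/* tomcat1
```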

glassfish load balancer configuration

I am trying to configure clustering on a GlassFish 3.1.1 server. I have created clusters, but the application is deployed on a different port and IP for each instance. I want this to be handled by the load balancer, so that the application is reachable only at the load balancer's IP; that way, if a machine goes down, the load balancer redirects the request to another machine configured in it.
How can I achieve this? Does anyone have an idea about it, or a link to a tutorial and/or blog on the subject?
http://tiainen.sertik.net/2011/03/load-balancing-with-glassfish-31-and.html
In brief, the steps to configure a GlassFish cluster with a load balancer are:
Create a cluster
Create instances for the cluster
Start the cluster
Deploy the web application
Create a network listener to listen for the requests coming from the load balancer
Install Apache Web Server and get the mod_jk module
Edit the httpd.conf file and workers.properties file in the conf directory of apache web server
Restart the cluster, glassfish domain and apache daemon
Note: You might need to keep your firewall off (or open the relevant ports) if using a Linux OS.
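The cluster-side steps above can be sketched with asadmin (cluster, instance, and WAR names are placeholders):

```shell
# Create a cluster, add two local instances, start it, and deploy the app.
asadmin create-cluster cluster1
asadmin create-local-instance --cluster cluster1 instance1
asadmin create-local-instance --cluster cluster1 instance2
asadmin start-cluster cluster1
asadmin deploy --target cluster1 myapp.war
```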
