glassfish load balancer configuration - java

I am trying to configure clustering on a GlassFish 3.1.1 server. I have created clusters, but the application is deployed on different port numbers and different IPs. I want requests to be handled by the load balancer, so that the application is reached only through the load balancer's IP; that way, if a machine goes down, the load balancer redirects the request to another machine configured behind it.
How can I achieve this? Does anyone have any idea, or a link to a tutorial and/or blog on the subject?

http://tiainen.sertik.net/2011/03/load-balancing-with-glassfish-31-and.html
In brief:
Steps to configure a GlassFish cluster with a load balancer:
Create a cluster
Create instances for the cluster
Start the cluster
Deploy the web application
Create a network listener to listen for requests coming from the load balancer
Install Apache Web Server and get the mod_jk module
Edit the httpd.conf file and workers.properties file in the conf directory of apache web server
Restart the cluster, glassfish domain and apache daemon
Note: You might need to disable the firewall (or open the relevant ports) if you are on Linux.
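A minimal sketch of the Apache side of those steps, assuming two cluster instances with AJP listeners on port 8009 (hostnames, ports, the /myapp path and worker names are all illustrative; adjust them to your cluster):

```apacheconf
# httpd.conf: load mod_jk and forward the application's URLs to the cluster
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkMount /myapp/* loadbalancer
```

```properties
# conf/workers.properties: one AJP worker per GlassFish instance,
# plus a load-balancer worker that distributes requests across them
worker.list=loadbalancer
worker.instance1.type=ajp13
worker.instance1.host=machine1.example.com
worker.instance1.port=8009
worker.instance2.type=ajp13
worker.instance2.host=machine2.example.com
worker.instance2.port=8009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=instance1,instance2
```

With this setup clients only ever talk to the Apache host; if one machine is down, the lb worker routes requests to the remaining workers, which is exactly the failover behavior the question asks about.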

Related

deploy different spring boot wars on external tomcat with same port

how do I deploy different spring boot wars on tomcat container
I have 3 wars with name
myapp1.war
myapp2.war
myapp3.war
I have added these wars to the Tomcat webapps folder and made some changes in server.xml under the <Host> tag.
<Context path="/apipath" docBase="myapp1" reloadable="true"></Context>
I can access the application on http://localhost:9080/apipath/mymethoduriapp1
Now if I add another Context with the same path in the same Host tag for another war, like
<Context path="/apipath" docBase="myapp2" reloadable="true"></Context>
the server no longer starts.
Is there any way to have multiple context paths so I can access all applications on the same port?
example.
http://localhost:9080/apipath/mymethoduriapp1
http://localhost:9080/apipath/mymethoduriapp2
http://localhost:9080/apipath/mymethoduriapp3
Thanks in advance for your help.
You can't deploy two applications at the same context path; that's why the server fails to start. Tomcat will happily serve multiple apps on the same port, as long as each war gets its own context path (e.g. /apipath1, /apipath2). If you need a single entry point that routes by URL, you can run an HTTP server such as nginx or Apache in front: have it listen on 9080, run each app on its own port, and proxy requests to the desired app based on the URL.
nginx is probably the most popular and the easiest to set up. The following covers the basic configuration for this case:
Nginx Reverse Proxy. Multiple Applications on One Domain
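A minimal sketch of that reverse-proxy setup as a fragment for nginx's http block. The backend ports 9081-9083 and the path prefixes are illustrative; the assumption is that each war runs in its own server (or Tomcat instance) on its own port:

```nginx
# nginx listens on 9080 and routes by URL prefix to the three apps
server {
    listen 9080;

    location /apipath1/ {
        proxy_pass http://127.0.0.1:9081/apipath/;
    }
    location /apipath2/ {
        proxy_pass http://127.0.0.1:9082/apipath/;
    }
    location /apipath3/ {
        proxy_pass http://127.0.0.1:9083/apipath/;
    }
}
```

Note the trailing slashes: with this form of proxy_pass, nginx replaces the matched prefix (e.g. /apipath1/) with the proxied path (/apipath/), so each backend can keep its original context path.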

Upload JavaEE application to remote tomcat inside a cluster

Our application is served by Tomcat inside a cluster that contains many Windows 2008 servers. These servers sit behind a hardware load balancer and a firewall. Usually one of the servers can be exposed outside the load balancer for system maintenance.
We commit our source code to a Git repository, and the application is then built by Jenkins, which has internet access.
Now I wonder: is it possible to push the application from the Jenkins server to the cluster?

Can a Tomcat web application tell a load balancer its Tomcat is down?

I have two Tomcat servers deployed behind an Nginx load balancer that uses proxy_pass to route requests. This works well, but my application now has a use case for which I need to pull one of the servers out of the cluster (but keep it running), have the web application on it do something, and when that's done put the Tomcat back.
Right now I'm reloading the Nginx configuration manually and marking the server down to give the application time to do its thing, but what I would like is for the web application to "trick" Nginx into thinking its Tomcat server is down, do its work, then rejoin the cluster.
I'm thinking I need some custom Tomcat Connector controlled by the web application, but everything online is about proxying with Apache or using AJP, and that's not what I need: this has to be an HTTP proxy with Nginx.
Anyone has some pointers on how I might go about doing this?
When Tomcat goes down, your webapp goes with it, so you shouldn't rely on it to do any meaningful work to delay the shutdown. Instead, have proper systems-management procedures: first change the LB, then shut down Tomcat. That's a solution external to Tomcat, and it should be easy, since you say you already pull one of the Tomcats from your cluster.
For unplanned downtime, use the LB's detection of Tomcat being down, as @mikhailov described.
Try the max_fails and fail_timeout settings of the Upstream module, for instance:
upstream backend {
server tomcat1.localhost max_fails=3 fail_timeout=15s;
server tomcat2.localhost max_fails=3 fail_timeout=15s;
}
UPDATE:
To solve the "mark as down on demand" task, you can put a maintenance.html file into the public directory, handle it via try_files, and return error code 503 if the file exists. That lets you configure the balancer efficiently.
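One way to wire that up, as a hedged sketch (the answer leaves the details open; the file path, ports and backend address here are illustrative). A small nginx in front of each Tomcat answers 503 while the maintenance file exists, and the balancer then treats that backend as failing:

```nginx
# per-backend front end: when maintenance.html exists, answer 503
# so the load balancer's failure counting skips this Tomcat
server {
    listen 8080;
    root /var/www/app;

    location / {
        if (-f $document_root/maintenance.html) {
            return 503;
        }
        proxy_pass http://127.0.0.1:8081;  # the local Tomcat
    }
}
```

On the balancer side, recent nginx versions also let you add http_503 to proxy_next_upstream, so a 503 from one backend causes the request to be retried on the next one. The webapp then "marks itself down" simply by creating the file, and rejoins by deleting it.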

How to deploy a web service to amazon EC2?

I've just created a web application and deployed it to Amazon EC2, but now I want to create a Java web service and deploy it to an instance on Amazon AWS so that I can use it from my application.
There are many possible configurations. This can be one of them:
Start the application server (probably Tomcat) with AJP enabled.
Use the Apache HTTP Server JK module (mod_jk) to connect the web server to the Tomcat application.
Deploy your web application on the application server and have it generate the WSDL using the external domain name or IP. For example: http://www.domain.com/application/service
Make sure that the Amazon firewall has port 80 open for that instance.
If you use a domain name, make your DNS point to that host.
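The AJP wiring in the first two steps might look like this (a sketch; the worker name, paths and ports are illustrative, with 8009 being the conventional AJP port):

```xml
<!-- Tomcat server.xml: enable the AJP connector alongside the HTTP one -->
<Connector protocol="AJP/1.3" port="8009" redirectPort="8443" />
```

```apacheconf
# Apache httpd.conf: hand /application/* off to Tomcat via mod_jk
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkMount /application/* tomcat1
```

```properties
# conf/workers.properties: a single AJP worker for the local Tomcat
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
```

With Apache answering on port 80, only that port needs to be open in the EC2 security group; the AJP port stays internal.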

How do I re-direct all http requests to a mount point

We currently use JBoss 5.1 as the application server, and my application is mounted at http://<host>:<port>/<myapp>. Images are rendered via the following mount point:
http://<host>:<port>/<myapp>/img?id=<image-id>
Currently the servlet rendering the images is part of the application, but I have refactored this code to run on a Tomcat server.
How should I redirect all HTTP requests for http://<host>:<port>/<myapp>/img?id=<image-id> to a Tomcat instance (e.g. http://<tomcat-host>:<tomcat-port>/img?id=<image-id>)?
Where should I put this redirection rule?
Note: Should I introduce an Apache HTTP server in front of the JBoss server to achieve this? Is there a simpler way to configure this in a dev environment?
One way I have seen this handled is to host images and other static resources at the ROOT context level on an Apache web server. That way you can host multiple web applications at various other context levels on the same server and port, and they can all share the same static resources.
Another advantage of this approach is that the Apache web server takes some of the static-content load off your production application servers.
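For the redirection itself, an Apache front end with mod_proxy is a common approach; a sketch, with illustrative hostnames and ports:

```apacheconf
# httpd.conf: forward image requests to the Tomcat instance,
# everything else continues to the JBoss application
ProxyPass        /myapp/img  http://tomcat-host:8080/img
ProxyPassReverse /myapp/img  http://tomcat-host:8080/img
ProxyPass        /myapp      http://jboss-host:8080/myapp
ProxyPassReverse /myapp      http://jboss-host:8080/myapp
```

Order matters: the more specific /myapp/img rule must come before the general /myapp rule, because mod_proxy uses the first matching ProxyPass.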
