We currently use JBoss 5.1 as the application server, and my application is mounted at http://<host>:<port>/<myapp>. Images are served via the following mount point:
http://<host>:<port>/<myapp>/img?id=<image-id>
Currently the servlet that renders images is part of the application, but I have refactored this code to run on a Tomcat server.
How should I redirect all HTTP requests for http://<host>:<port>/<myapp>/img?id=<image-id> to a Tomcat instance (e.g. http://<tomcat-host>:<tomcat-port>/img?id=<image-id>)?
Where should I put this redirection rule?
Note: Should I introduce an Apache HTTP server in front of the JBoss server to achieve this? Is there a simpler way to configure this in a dev environment?
One way I have seen these kinds of things handled is to host images and other static resources at the ROOT context level on an Apache web server. In this way you can host multiple web applications at various other context levels on the same server and port and they can all benefit from shared static resources.
Another advantage of this approach is that your Apache web server can help take some load off of your production environment.
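For instance, if you do put an Apache httpd in front, a minimal sketch of the forwarding piece might look like the following (assuming mod_proxy/mod_proxy_http are enabled; the host names and port 8080 are placeholders for your actual Tomcat and JBoss endpoints). ProxyPass carries the query string along, so ?id=<image-id> is preserved:

    # Send only the image URLs to the Tomcat instance that now hosts the image servlet
    ProxyPass        /myapp/img http://tomcat-host:8080/img
    ProxyPassReverse /myapp/img http://tomcat-host:8080/img

    # Everything else under /myapp continues to go to JBoss as before
    ProxyPass        /myapp http://jboss-host:8080/myapp
    ProxyPassReverse /myapp http://jboss-host:8080/myapp

In a dev environment this can live in a single conf file loaded by a stock Apache install; no mod_jk/AJP setup is required for plain HTTP forwarding.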
Related
We have a single Tomcat app server and a single front-end web server running Apache and mod_jk. Is it possible for us to add a second front-end web server, that points to the same app server and also runs Apache and mod_jk?
We don't want to do this for reasons of load balancing. Rather, it's for reasons of migration. The new web server will be an entirely different OS and will use a different SSO authentication module. We want to be able to stage and test it, switch a DNS entry to make it live, and decommission the old server.
The current workers.properties file looks like this:
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=10.x.x.x
worker.worker1.port=8009
Can I just duplicate that config onto a second web server?
I have no experience or background whatsoever with any of this, but have been asked to assist with a server migration and am trying my best to help.
I have been reading the documentation for mod_jk and Tomcat. I have found all sorts of documentation on pointing a single mod_jk instance to multiple Tomcat app servers, but nothing describing the opposite of that, which is what I'm trying to accomplish.
Is this possible?
Edit: I should note that we do not have any access to the Tomcat server, as it is managed by a third-party vendor. I'm sure they'd make changes if we asked them, but we don't have the ability to log into it ourselves.
Yes - duplicating will be easiest. Most important** is keeping the worker name the same.
One gotcha is making sure Tomcat has enough connections available to handle both web servers. The normal defaults are typically high enough, but if you stress test, the Tomcat server may need as many connections as the sum of the workers configured on the web servers. If you don't have enough, Tomcat does write a warning to the logs.
** Most important - OK, not that important, since you are not using sticky sessions. But it could be confusing later if you try this experiment with two Tomcats when switching between web servers.
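For reference, the Tomcat-side setting behind the connection gotcha above is the AJP connector's thread limit in server.xml (in this case only the vendor could change it). A rough sketch, where the value 400 is purely illustrative:

    <!-- server.xml: allow roughly the sum of the connection pools of all
         front-end Apache instances that point at this AJP connector -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" maxThreads="400" />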
Yes, of course you can. I have done it several times, even just to change the static files served by Apache (js, images, css) and test the Tomcat application with a different "skin".
Usually when building a high-availability system, not only the Tomcats (or any other back-end servers) get replicated; the front-end Apache, IIS, or whatever is used gets replicated too.
As you said, it should be fine just copying the workers.properties file and the mapping rules in the Apache httpd's *.conf files.
Also, check with the Tomcat management team that incoming connections to Tomcat's AJP port are not restricted by network rules or firewalls that would leave only the old Apache able to reach Tomcat.
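As a concrete illustration, the mapping rules that would be duplicated alongside workers.properties are just the JkMount lines (the /myapp paths below are hypothetical; the worker name must stay worker1 to match workers.properties):

    # httpd.conf (identical on both web servers)
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile     logs/mod_jk.log

    # Map the application URLs to the same worker defined in workers.properties
    JkMount /myapp    worker1
    JkMount /myapp/*  worker1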
Can I just duplicate that config onto a second web server?
Yes, sure. Since you want to hit the same Tomcat server, you can simply copy your workers.properties from Apache instance 1 to Apache instance 2. If you have only those four properties, there is nothing else to do; if you have properties like worker.worker1.cachesize=10 or worker.worker1.cache_timeout=600 and want to tune them, change them per instance. But the bottom line is that since you are targeting the same Tomcat instance, copying the file is enough.
To put it in more general terms: you can have more than one HTTP web server (such as Apache) accepting requests and forwarding them to the same application server. This is less common, though; the typical setup is one web server load balancing requests across multiple back-end application servers.
I have been reading the documentation for mod_jk and Tomcat. I have found all sorts of documentation on pointing a single mod_jk instance to multiple Tomcat app servers, but nothing describing the opposite of that, which is what I'm trying to accomplish.
Is this possible?
You couldn't find it in any of your reading because what you are trying is a corner case; generally people configure multiple Tomcat workers to serve servlets on behalf of a given web server, in order to achieve load balancing, virtual hosting, etc.
You mentioned that you are doing all this in order to test the Apache running on a different OS and using a different SSO module. I am assuming there is no hardware load balancer sitting in front of your web servers (Apache), so how are you going to hit your new Apache? You need to do this explicitly: your current URL points at your first Apache, so in order to hit the second/new Apache you need to give your testers/users a URL containing the endpoint (IP:port) of the second/new Apache. Even if you are doing all this locally, you still need the second Apache listening on a different port, or maybe a different IP, but that's not common.
Tomcat is used for running Java servlets, but it also has web server functionality built in, so it can run independently. However, I see several articles on how to integrate Apache Web Server with Tomcat. What's the purpose of doing this? Does it improve performance?
I am using Tomcat for serving WebServices.
Tomcat is a fine Servlet container, but there are a lot of things an Apache httpd can do better (easier and/or faster).
For example, Apache can handle security and SSL, provide load balancing, do URL rewriting, etc.
You can also split content: you can have your Apache httpd to serve static content like images, static html, js etc. and leave the dynamic content (like servlets, jsp etc.) to Tomcat. This also has the advantage that a failure in Tomcat will not render your whole web site unusable / unavailable (just the servlets/jsp pages).
You can also separate the two and thus increase security: you can run Apache httpd on one server (reachable from the internet) and direct it to another server running Tomcat, invisible from the outside.
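As a rough sketch of that split (host names and paths are hypothetical, and the example uses plain HTTP proxying rather than mod_jk/AJP, which would work equally well):

    # Apache httpd serves static files directly from disk...
    DocumentRoot "/var/www/static"

    # ...and forwards only the dynamic parts to a Tomcat that is not exposed to the internet
    ProxyPass        /app http://tomcat-internal:8080/app
    ProxyPassReverse /app http://tomcat-internal:8080/app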
Depends. Quite often a separate web server is used to distribute traffic or to add extra functionality provided by Apache Web Server modules, of which there are plenty. It can also be more performant, depending on your use case.
In short, even though Tomcat has basic web server functionality, Apache Web Server can do other things Tomcat cannot.
I am using a dedicated server. I have hosted different HTML, PHP and WordPress websites on this server, and they are working perfectly.
Now I want to deploy a Java web application on this server, so I have installed Apache Tomcat on another port. I want to know how I can route requests for a domain name directly to the Tomcat server.
Along with this, I want to know how I can deploy multiple web applications on a single Tomcat, and what configuration is needed to reach the different WAR files deployed on it.
Thank you in advance for your support.
You can use Apache as a reverse proxy with the mod_proxy module: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
That way you can handle all HTTP requests with Apache and specify which requests should be proxied to the Java web app running in Apache Tomcat (typically on port 8080).
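A minimal sketch of such a rule, assuming the WAR is deployed under the context /mywebapp and the domain is javaapp.example.com (both hypothetical), with mod_proxy and mod_proxy_http enabled:

    <VirtualHost *:80>
        ServerName javaapp.example.com
        ProxyPreserveHost On
        # Forward everything for this domain to the Tomcat context
        ProxyPass        / http://localhost:8080/mywebapp/
        ProxyPassReverse / http://localhost:8080/mywebapp/
    </VirtualHost>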
The easiest way is to set up an HTTP server (Apache, Nginx, etc.) as a reverse proxy. Then you can map different domains to different contexts, for example:
www.domain.com -> localhost:8080/main/
www.otherdomain.com -> localhost:8080/othermain/
subdomain.domain.com -> localhost:8080/anotherwar/
For example, with Apache httpd this is done with the ProxyPass directive; with Nginx you would use proxy_pass inside a server block. Other HTTP servers have their own respective mechanisms.
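For illustration, an Nginx sketch of the first two mappings above might look like this (one server block per domain; the contexts are the example ones listed):

    server {
        listen 80;
        server_name www.domain.com;
        location / {
            proxy_pass http://localhost:8080/main/;
        }
    }

    server {
        listen 80;
        server_name www.otherdomain.com;
        location / {
            proxy_pass http://localhost:8080/othermain/;
        }
    }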
I have two Tomcat servers deployed behind an Nginx load balancer that uses proxy_pass to route the requests. This works well, but there is now a use case in my application for which I need to pull one of the servers out of the cluster (but keep it running), have the web application on it do something, and when that's done, put the Tomcat back.
Right now I'm reloading the Nginx configuration manually and marking the server down to give the application time to do its thing, but what I would like is to have the web application "trick" Nginx into thinking its Tomcat server is down, do its stuff, then rejoin the cluster.
I'm thinking that I need some custom Tomcat Connector controlled by the web application, but everything I find online is about proxying with Apache or using AJP, and that's not what I need; I need this to be an HTTP proxy with Nginx.
Does anyone have pointers on how I might go about doing this?
When Tomcat goes down, your webapp goes with it - you shouldn't rely on it to do any meaningful work to delay the shutdown. Rather, have proper systems-management procedures that first change the LB and then shut down Tomcat. That's a solution external to Tomcat, and it should be easy since you say you can pull one of the Tomcats from your cluster.
For unplanned downtime, use the LB's detection of Tomcat being down, as #mikhailov described.
Try the max_fails and fail_timeout configuration of the Upstream module, for instance:
upstream backend {
    server tomcat1.localhost max_fails=3 fail_timeout=15s;
    server tomcat2.localhost max_fails=3 fail_timeout=15s;
}
UPDATE:
To solve the "mark as down on demand" task, you can put a maintenance.html file into the public directory, handle it via try_files or an existence check, and produce error code 503 if the file exists. That lets you control the balancer efficiently.
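One possible reading of that suggestion, as a sketch using an if (-f ...) existence check rather than try_files (the file path and the upstream name are hypothetical; the application, or a deploy script, creates and removes the file to take the backend out of rotation and put it back):

    location / {
        # If the flag file exists, answer 503 instead of proxying
        if (-f /usr/share/nginx/html/maintenance.html) {
            return 503;
        }
        proxy_pass http://backend;
    }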
I have been reading countless Oracle documents, blogs, etc., but I cannot wrap my mind around this concept.
I have successfully deployed an application to a GlassFish server cluster.
I would like to have load balancing and failover by using a single URL to access my application.
For example, currently to get to my application I must use http://<server-name>:28080/AppName, but I would like to use http://cluster:28080/AppName and have an available load-balancing service automatically select an instance.
Currently I have 3 GlassFish 3.1 servers with a basic default setup and GMS. Is GlassFish capable of doing the automatic load balancing and failover, or do I need a web server (like Apache or Oracle iPlanet) in front of my GlassFish cluster to distribute connections?
As Olivier states, you need to put a load balancer in front of your cluster. You can use a hardware device or software.
I've used both and each works great. You should read Configuring Web Servers for HTTP Load Balancing for a better understanding.
You need a front-end load balancer (software or hardware).
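If you go the software route without the GlassFish load balancer plugin described in that guide, a simple Apache httpd sketch using mod_proxy_balancer could look like this (instance host names are placeholders; port 28080 mirrors the question; mod_proxy, mod_proxy_http, mod_proxy_balancer and mod_lbmethod_byrequests need to be loaded):

    <Proxy balancer://glassfishcluster>
        BalancerMember http://instance1:28080
        BalancerMember http://instance2:28080
        BalancerMember http://instance3:28080
    </Proxy>

    ProxyPass        /AppName balancer://glassfishcluster/AppName
    ProxyPassReverse /AppName balancer://glassfishcluster/AppName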