Web Sockets + Tomcat/Glassfish + Cluster + Load Balancing - What are the options? - java

I'm working on a Java SE 7 / Java EE 6 application that may use either Tomcat 7 or Glassfish 3.1 (probably GlassFish, but this has not been decided yet). The application will take advantage of the new WebSockets technology that has recently achieved widespread adoption across all major browsers.
Through much research, forum reading, and mailing-list monitoring, I have determined that (currently, as far as I can tell) neither mod_jk/isapi_redirect nor mod_proxy supports WebSockets reliably, or at all. Since these are the two tried-and-tested methods for load balancing/directing traffic in a Tomcat or GlassFish cluster, this obviously presents a problem.
On the other hand, Tomcat and GlassFish both have built-in web servers that are widely touted to be just as efficient at serving static content as Apache or IIS, so it is generally not recommended to place Apache or IIS in front of one of these servers UNLESS you need redundancy/load balancing.
So, all of this leads me to these questions:
Is Apache or IIS even needed anymore to load balance in a Tomcat/GlassFish cluster? Wouldn't it be just as efficient (or more so?) to simply place a standard load balancer, like what you would use in any other scenario, in front of a cluster of Tomcat or GlassFish servers and forego the intermediary web server altogether? Or is there still some technical reason that standard load balancers won't work with TC/GF? Assuming a standard load balancer could be used, one could simply find one that supports WebSockets (like Coyote) and use it.
If a standard load balancer simply will not work with Tomcat/GlassFish, what other options are there? How can one achieve performant and reliable WebSocket support using Java EE?
Caveat: I prefer not to consider load-balancing technologies that are limited to dumb round-robin protocols (such as Round-Robin DNS). I do not consider these options reliable/redundant, as a client could easily be sent to a server that is down or already handling far more connections than another server in the cluster. Obviously I know that something like Round-Robin DNS could easily be used with WebSockets without any compatibility concerns.
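For what it's worth, a WebSocket-capable software balancer such as HAProxy can sit directly in front of the Tomcat/GlassFish instances. A minimal sketch follows; the server addresses, the health-check URL, and the timeout values are all assumptions, not part of the original setup:

```
# haproxy.cfg sketch -- addresses and health-check URL are hypothetical
defaults
    mode http
    timeout connect 5s
    timeout client  1h     # generous, for long-lived WebSocket connections
    timeout server  1h

frontend www
    bind *:80
    default_backend tomcats

backend tomcats
    balance leastconn                  # not plain round-robin
    option httpchk GET /AppName/ping   # hypothetical health-check path
    server tc1 10.0.0.1:8080 check
    server tc2 10.0.0.2:8080 check
```

`balance leastconn` addresses the caveat above: a new connection goes to the instance with the fewest active connections, and the `check` health checks keep traffic away from downed servers.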

We were going to use an approach with having our Tomcat instances directly after a standard load balancer. We use SSL heavily in our setup. To keep things simple behind our load balancers and avoid different configurations for SSL/no SSL in our web container we wanted to terminate SSL in our load balancers.
However, the SSL decrypting hardware for our load balancers was quite buggy. Hence we ended up with web servers (nginx) between our web containers and our load balancers for the sole purpose of decrypting SSL.
This is a special case that applied to us, but it is worth keeping in mind. Apart from this, I do not see a reason to keep a web server between your load balancer and your web container. The load balancer should work just fine with the web container directly. Aim for simplicity and minimizing the number of components in your setup.
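For reference, the nginx tier we used for SSL termination looked roughly like this (certificate paths and the upstream address are placeholders, not our actual values):

```
# nginx sketch: terminate SSL, forward plain HTTP to the web container tier
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        proxy_pass http://10.0.0.10:8080;          # internal web container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;  # tell the app it was HTTPS
    }
}
```

The `X-Forwarded-Proto` header lets the web container know the original request was encrypted even though it arrives as plain HTTP.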

Related

Multiple Apache mod_jk servers pointing to the same Tomcat worker?

We have a single Tomcat app server and a single front-end web server running Apache and mod_jk. Is it possible for us to add a second front-end web server, that points to the same app server and also runs Apache and mod_jk?
We don't want to do this for reasons of load balancing. Rather, it's for reasons of migration. The new web server will be an entirely different OS and will use a different SSO authentication module. We want to be able to stage and test it, switch a DNS entry to make it live, and decommission the old server.
The current workers.properties file looks like this:
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=10.x.x.x
worker.worker1.port=8009
Can I just duplicate that config onto a second web server?
I have no experience or background whatsoever with any of this, but have been asked to assist with a server migration and am trying my best to help.
I have been reading the documentation for mod_jk and Tomcat. I have found all sorts of documentation on pointing a single mod_jk instance to multiple Tomcat app servers, but nothing describing the opposite of that, which is what I'm trying to accomplish.
Is this possible?
Edit: I should note that we do not have any access to the Tomcat server, as it is managed by a third-party vendor. I'm sure they'd make changes if we asked them, but we don't have the ability to log into it ourselves.
Yes - duplicating will be easiest. Most important** is keeping the worker name the same.
One gotcha is making sure Tomcat has enough connections available to handle both web servers. The defaults are typically high enough, but under a stress test the Tomcat server may need as many connections as the sum of the worker pools on both web servers. If you don't have enough, Tomcat writes a warning to the logs.
** Most important - OK, not that important, since you are not using sticky sessions. But it could be confusing later if you try this experiment with two Tomcats when switching between web servers.
Yes, of course you can. I have done it several times, even just to change the static files served by Apache (js, images, css) and test the Tomcat application with a different "skin".
Usually when building a high-availability system, not only the Tomcats (or any other back-end servers) get replicated; the front-end Apache, IIS, or whatever is used gets replicated too.
As you said, it should be fine just copying the workers.properties file and the mapping rules in the Apache httpd's *.conf files.
Also, check with the Tomcat management team that incoming connections to Tomcat's AJP port are not restricted by network rules or firewalls that would allow only the old Apache to reach Tomcat.
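For illustration, the mapping rules mentioned above might look like this, identical on both web servers (module and file paths are hypothetical):

```
# httpd.conf sketch -- same on both front-end Apaches
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkMount /AppName/* worker1
```

As long as both Apaches mount the same URL pattern onto the same worker name, requests from either front end reach the same Tomcat.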
Can I just duplicate that config onto a second web server?
Yes, sure. Since you want to hit the same Tomcat server, you can simply copy your workers.properties from Apache instance 1 to Apache instance 2. If you have only those four properties, nothing else is needed; if you have properties like worker.worker1.cachesize=10 or worker.worker1.cache_timeout=600 and want to tune them, change them. But the bottom line is that because both web servers target the same Tomcat instance, a straight copy works.
To put it in non-Tomcat terms: you can have more than one HTTP web server (such as Apache) intercepting requests and forwarding them to the same application or web server. This is uncommon, though, because the usual pattern is the reverse: one web server load-balancing requests across multiple back-end application servers.
I have been reading the documentation for mod_jk and Tomcat. I have found all sorts of documentation on pointing a single mod_jk instance to multiple Tomcat app servers, but nothing describing the opposite of that, which is what I'm trying to accomplish.
Is this possible?
You couldn't find it in any of your reading because what you are trying is a corner case. Generally people configure multiple Tomcat workers to serve servlets on behalf of a single web server, in order to achieve load balancing, virtual hosting, etc.
You mentioned that you are doing all this in order to test Apache running on a different OS with a different SSO module. I am assuming there is no hardware load balancer in front of your web servers (Apache), so how will you reach the new Apache? You will need to do this explicitly: your current URL points to the first Apache, so to hit the second/new Apache you need to give your testers/users a URL containing the endpoint (IP:port) of the second/new Apache. Even if you are doing all this locally, the second Apache must listen on a different port, or possibly a different IP, though that is less common.

Why do some servers run both Tomcat and also Apache Web Server?

Tomcat is used for running Java servlets, but it also has web-server functionality built in, so it can run independently. However, I see several articles on how to integrate Apache Web Server with Tomcat. What's the purpose of doing this? Does it improve performance?
I am using Tomcat for serving WebServices.
Tomcat is a fine Servlet container, but there are a lot of things an Apache httpd can do better (easier and/or faster).
For example Apache can handle security, SSL, provide load balancing, URL rewriting etc.
You can also split content: you can have your Apache httpd to serve static content like images, static html, js etc. and leave the dynamic content (like servlets, jsp etc.) to Tomcat. This also has the advantage that a failure in Tomcat will not render your whole web site unusable / unavailable (just the servlets/jsp pages).
You can also separate the 2 and thus increase security: you can run Apache httpd on one server (which would be reachable on the internet) and direct it to another server running Tomcat, invisible from the outside.
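The static/dynamic split described above can be sketched in Apache httpd configuration with mod_jk. The paths and worker name here are assumptions for illustration only:

```
# httpd.conf sketch (paths and worker name are hypothetical)
# Apache serves everything under DocumentRoot itself...
DocumentRoot /var/www/html
# ...and only servlet/JSP traffic is forwarded to Tomcat over AJP.
JkMount /app/* worker1
# Carve static paths back out so Apache serves them directly.
JkUnMount /app/images/* worker1
```

If Tomcat goes down, requests outside the mounted paths keep being served by Apache, which is exactly the partial-availability benefit mentioned above.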
Depends. Quite often a separate web server is used to distribute traffic, or to provide extra functionality via Apache Web Server modules, of which there are plenty. It can also be more performant, depending on your use case.
In short, even though Tomcat has basic web-server functionality, Apache Web Server can do other things Tomcat cannot.

Application server architecture: access clustered server application through single address

I have been reading countless oracle documents, blogs, etc. but I cannot wrap my mind around this concept.
I have successfully deployed an application to a GlassFish server cluster. See screenshot:
I would like to have load balancing and fail over by using a single url address to access my application.
For example currently to get to my application I must use http://<server-name>:28080/AppName but I would like to use http://cluster:28080/AppName and have an available load balancing service automatically select it.
Currently I have 3 GlassFish 3.1 servers with a basic default setup and GMS. Is GlassFish capable of doing the automatic load balancing and fail over or do I need a web server (like Apache or Oracle IPlanet) in front of my GlassFish cluster to distribute connections?
As Olivier states you need to put a load balancer in front of your cluster. You can use a hardware device or you can use software.
I've used both and each works great. You should read Configuring Web Servers for HTTP Load Balancing for a better understanding.
You need a front-end load balancer (software or hardware).
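As a concrete illustration of the software option, Apache httpd with mod_proxy_balancer can front the GlassFish instances. The instance hostnames, routes, and context path below are assumptions based on the question, not a tested configuration:

```
# httpd.conf sketch (requires mod_proxy, mod_proxy_http, mod_proxy_balancer)
<Proxy balancer://gfcluster>
    BalancerMember http://gf1:28080 route=instance1
    BalancerMember http://gf2:28080 route=instance2
    BalancerMember http://gf3:28080 route=instance3
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass /AppName balancer://gfcluster/AppName
```

Clients then use the single Apache address for /AppName, and the balancer spreads requests across the cluster, pinning each session to one instance via the JSESSIONID cookie.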

What is the best practice when hosting more then one tomcat based web application

I do understand that using the same Tomcat instance for a number of web applications has some risks (e.g., if one web application crashes Tomcat, it takes the other web applications down with it). The benefit is of course cost effectiveness, since one server is enough, and having all the web applications in one place makes them very easy to administer.
Are there any industry guidelines on how a good setup with multiple web applications on tomcat should look like?
Pros
One JVM to monitor
Common libraries can be shared (sometimes risky)
Cons
Common HTTP thread pool all applications are using (you can, however, configure several connectors with different thread pools)
One malfunctioning application can take down the whole server
Restarting one application requires restarting all of them (if not using hot-deployment)
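The mitigation noted in the first con, separate connectors with their own thread pools, can be sketched in server.xml. The pool names, sizes, and ports here are hypothetical:

```
<!-- server.xml sketch: two connectors backed by separate thread pools -->
<Executor name="appAPool" namePrefix="appA-exec-"
          maxThreads="150" minSpareThreads="10"/>
<Executor name="appBPool" namePrefix="appB-exec-"
          maxThreads="50" minSpareThreads="5"/>

<Connector port="8080" protocol="HTTP/1.1" executor="appAPool"/>
<Connector port="8081" protocol="HTTP/1.1" executor="appBPool"/>
```

Traffic arriving on port 8080 then cannot exhaust the threads serving port 8081, so one busy application starves the other less easily.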
You are right that hosting multiple web applications on one application server / web container (Tomcat or another) has benefits.
You mentioned the robustness issue, where one application may cause the failure of another. But let's simplify: even if you have only one application, you still want 24x7 availability. To achieve this, people typically run more than one application-server instance with an identical application on each, and a load balancer at the entrance to the site. The same applies to several web applications: just run N application servers (2 at minimum) with an identical set of web applications deployed, behind a load balancer. You will probably also need some kind of watchdog that restarts a server if it fails or stops responding.
In some cases a form of clustering is required, but that is another story.

Clustering Apache Tomcat6

I've got a Java enterprise web application which uses Tomcat6 + Struts + Hibernate + MySQL. At the moment it's publicly up and running on a single server. Due to performance issues, we need to move the application to a clustered environment. I want to use Tomcat6 clustering as below:
A Load Balancing machine including a web server (Apache+mod_proxy) as front-end
Some application server machines, each one running a tomcat6 instance
A session management back-end
And finally a db server
something like this
The load balancer machine receives all the requests and, depending on the balancing algorithm, redirects them to the respective Tomcat6 machine. After the business logic runs, the response is returned to the web server and finally to the user. In this scenario the front-end machine processes all the requests and responses, so it would be a bottleneck in the application.
In Apache Tomcat clustering, is there a way to separate the load-balancing mechanism from the web servers? I mean putting a load balancer at the front end and leaving the request/response processing to multiple web servers.
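One way to avoid a single processing bottleneck is a two-tier setup: a simple TCP balancer in front of several identical Apache front ends, each of which balances over the Tomcats. A sketch of the per-Apache configuration follows; hostnames, routes, and the sticky-session cookie are assumptions:

```
# httpd.conf sketch for EACH front-end Apache (requires mod_proxy,
# mod_proxy_ajp, mod_proxy_balancer). A TCP load balancer in front
# spreads traffic across these Apaches, so no single web server
# has to process every request/response.
<Proxy balancer://tccluster>
    BalancerMember ajp://tomcat1:8009 route=tc1
    BalancerMember ajp://tomcat2:8009 route=tc2
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass / balancer://tccluster/
```

Because every Apache carries the same configuration, the front tier scales horizontally and the TCP balancer only shuffles connections, not full HTTP payloads.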
Tomcat has no load-balancing support built in. What happens is that the load balancer distributes the requests, so the various Tomcat instances don't need to know what is happening.
What you have to do is to make sure your application can handle it. For example, you must be aware that caches can be stale.
Say instance 1 has object X in its cache and X is modified by a request processed on instance 2. The cache in instance 2 will be correct, but the cache in instance 1 is now stale.
The solution is to use a cache which supports clustering, or to disable caching for data that can be modified. But that is not Tomcat's concern.
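A minimal, self-contained Java sketch of the staleness scenario, with two in-memory maps standing in for the two instances' local caches (the names and values are purely illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Two maps stand in for the local caches of two clustered Tomcat instances.
public class StaleCacheDemo {

    // Returns what instance 1 still sees for X after instance 2 updated it.
    static String valueSeenByInstance1AfterUpdate() {
        Map<String, String> cacheInstance1 = new HashMap<>();
        Map<String, String> cacheInstance2 = new HashMap<>();

        // Both instances load X from the database and cache it.
        cacheInstance1.put("X", "v1");
        cacheInstance2.put("X", "v1");

        // A request routed to instance 2 modifies X; only its cache is updated.
        cacheInstance2.put("X", "v2");

        // Instance 1 never saw the update: its cached copy is now stale.
        return cacheInstance1.get("X");
    }

    public static void main(String[] args) {
        System.out.println("instance 1 still sees X = "
                + valueSeenByInstance1AfterUpdate());
    }
}
```

A clustered cache replaces the two independent maps with a single replicated view, which is exactly what removes this failure mode.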
