I have a Java enterprise web application that uses Tomcat 6 + Struts + Hibernate + MySQL. It is currently up and running publicly on a single server. Due to performance issues, we need to move the application to a clustered environment. I want to use Tomcat 6 clustering as follows:
A Load Balancing machine including a web server (Apache+mod_proxy) as front-end
Some application server machines, each one running a tomcat6 instance
A session management back-end
And finally a db server
something like this
The load balancer machine receives all the requests and, depending on the balancing algorithm, redirects them to the respective Tomcat 6 machine. After the business logic runs, the response is returned to the web server and finally to the user. In this scenario the front-end machine processes all requests and responses, so it would be a bottleneck in the application.
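The front end described above can be sketched with Apache's mod_proxy_balancer; this is a minimal illustration only, and the host addresses, route names, and balancer name are all assumptions:

```apache
# httpd.conf on the load-balancer machine
# (requires mod_proxy, mod_proxy_balancer and mod_proxy_ajp)
<Proxy balancer://tomcatcluster>
    BalancerMember ajp://10.0.0.11:8009 route=tomcat1
    BalancerMember ajp://10.0.0.12:8009 route=tomcat2
</Proxy>

# sticky sessions keep a user on the instance that created the session
ProxyPass        / balancer://tomcatcluster/ stickysession=JSESSIONID
ProxyPassReverse / balancer://tomcatcluster/
```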
In Apache Tomcat clustering, is there a way to separate the load-balancing mechanism from the web servers? I mean putting a load balancer at the front end and leaving the request/response processing to multiple web servers.
Tomcat has no built-in support for load balancing. What happens is that the load balancer distributes the requests, so the various Tomcat instances don't need to know what is happening.
What you have to do is to make sure your application can handle it. For example, you must be aware that caches can be stale.
Say instance 1 has object X in its cache, and X is modified by a request processed on instance 2. The cache on instance 2 will be correct, but the cache on instance 1 is now stale.
The solution is to use a cache which supports clustering, or to disable caching for objects which can be modified. Either way, that is a concern for your application, not for Tomcat.
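To make the staleness concrete, here is a minimal Java sketch. The two maps simulate independent per-instance caches in front of one shared database; all names here are hypothetical, not part of any real caching library:

```java
import java.util.HashMap;
import java.util.Map;

// Two independent in-process caches, one per Tomcat instance,
// both fronting the same (simulated) database.
public class StaleCacheDemo {
    static Map<String, String> db = new HashMap<>();
    static Map<String, String> cacheInstance1 = new HashMap<>();
    static Map<String, String> cacheInstance2 = new HashMap<>();

    static String readVia(Map<String, String> cache, String key) {
        // cache-aside read: fill the local cache on a miss
        return cache.computeIfAbsent(key, db::get);
    }

    static void writeVia(Map<String, String> cache, String key, String value) {
        // write-through: updates the DB and the *local* cache only
        db.put(key, value);
        cache.put(key, value);
    }

    public static void main(String[] args) {
        db.put("X", "v1");
        readVia(cacheInstance1, "X");          // instance 1 caches v1
        writeVia(cacheInstance2, "X", "v2");   // instance 2 updates X

        System.out.println(cacheInstance1.get("X")); // v1 -- stale
        System.out.println(cacheInstance2.get("X")); // v2 -- correct
    }
}
```

A clustered cache would invalidate or replicate the entry on instance 1 after the write on instance 2, which is exactly what the two plain HashMaps above cannot do.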
Related
I'm deploying my Spring Boot web application on a local server that could be down sometimes, and I can't use server replication.
Is there a way to cache the content of my application to make it available when the server is down?
Implement a load balancer and create a copy of your server instance.
When one server is down, the traffic will be sent to the other.
AWS offers a very good solution for this: https://aws.amazon.com/es/elasticloadbalancing/?nc=sn&loc=0
Any solution would need another server. Even if you cache your information, another server is needed in order to get and serve your content.
We have a single Tomcat app server and a single front-end web server running Apache and mod_jk. Is it possible for us to add a second front-end web server, that points to the same app server and also runs Apache and mod_jk?
We don't want to do this for reasons of load balancing. Rather, it's for reasons of migration. The new web server will be an entirely different OS and will use a different SSO authentication module. We want to be able to stage and test it, switch a DNS entry to make it live, and decommission the old server.
The current workers.properties file looks like this:
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=10.x.x.x
worker.worker1.port=8009
Can I just duplicate that config onto a second web server?
I have no experience or background whatsoever with any of this, but have been asked to assist with a server migration and am trying my best to help.
I have been reading the documentation for mod_jk and Tomcat. I have found all sorts of documentation on pointing a single mod_jk instance to multiple Tomcat app servers, but nothing describing the opposite of that, which is what I'm trying to accomplish.
Is this possible?
Edit: I should note that we do not have any access to the Tomcat server, as it is managed by a third-party vendor. I'm sure they'd make changes if we asked them, but we don't have the ability to log into it ourselves.
Yes - duplicating will be easiest. Most important** is keeping the worker name the same.
One gotcha is making sure Tomcat has enough connections available to handle both web servers. The normal defaults are typically high enough, but under a stress test the Tomcat server may need as many connections as the sum of the workers configured on all the web servers. If you don't have enough, Tomcat writes a warning to the logs.
** Most important - Ok - Not that important since you are not using sticky sessions. But could be confusing later if you try this experiment with 2 tomcats when switching between web servers.
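If the vendor who manages Tomcat ever needs to raise that connection limit, it lives on the AJP connector in conf/server.xml; a sketch only, and the maxThreads value here is an assumption, not a recommendation:

```xml
<!-- conf/server.xml on the Tomcat machine -->
<!-- maxThreads should cover the combined worker connection
     pools of all web servers pointing at this connector -->
<Connector port="8009" protocol="AJP/1.3"
           maxThreads="400" redirectPort="8443" />
```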
Yes, of course you can. I have done it several times, even just to change the static files served by Apache (js, images, css) and test the Tomcat application with a different "skin".
Usually when building a high-availability system, not only the Tomcats or other back-end servers get replicated; the front-end Apache, IIS, or whatever is used gets replicated too.
As you said, it should be fine just copying the workers.properties file and the mapping rules in the Apache httpd's *.conf files.
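On the second web server that would look something like the following; the file path and the URL pattern are assumptions based on a typical mod_jk setup:

```apache
# httpd.conf on web server 2 -- same worker definition as web server 1
JkWorkersFile /etc/httpd/conf/workers.properties
JkMount /AppName/* worker1
```

Because both JkMount rules point at the same worker1, both Apaches forward to the same Tomcat over AJP.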
Also, check with the Tomcat management team that incoming connections to Tomcat's AJP port are not limited by network rules or firewalls, which would leave only the old Apache able to reach Tomcat.
Can I just duplicate that config onto a second web server?
Yes, sure. Since you want to hit the same Tomcat server, you can simply copy your workers.properties from Apache instance 1 to Apache instance 2. If you have only those four properties, there is nothing more to do; if you have additional properties such as worker.worker1.cachesize=10 or worker.worker1.cache_timeout=600 and want to tune them, change them as needed. But the bottom line is that since you want to hit the same Tomcat instance, you can simply copy the file.
To understand it in a non-Tomcat way: you can have more than one HTTP web server, such as Apache, intercepting requests and forwarding them to the same application or web server. This is just uncommon, because the usual setup is one web server load balancing requests across multiple back-end application servers.
I have been reading the documentation for mod_jk and Tomcat. I have
found all sorts of documentation on pointing a single mod_jk instance
to multiple Tomcat app servers, but nothing describing the opposite of
that, which is what I'm trying to accomplish.
Is this possible?
You couldn't find it in any of your reading because what you are trying is a corner case. Generally people configure multiple Tomcat workers to serve servlets on behalf of a single web server, in order to achieve load balancing, virtual hosting, etc.
You mentioned that you are doing all this in order to test the Apache running on a different OS and using a different SSO, and I am assuming there is no hardware load balancer sitting in front of your web servers (Apache). So how are you going to hit your new Apache? You will need to do this explicitly: your current URL points at your first Apache, so in order to hit the second/new Apache you need to give your testers/users a URL containing the endpoint (IP:port) of the second/new Apache. Even if you are doing all this locally, you still need your second Apache listening on a different port, or perhaps a different IP, though that's less common.
We are developing an application which periodically syncs the LDAP servers of different clients with our database. This application needs to be accessed via a web portal. A web user will create, modify or delete scheduled tasks on this application. So, we have developed this application as a web service.
Now, we have to scale this application and also ensure high availability.
The application is an Axis2-based web service running on Tomcat. We have thought of an httpd + mod_jk + Tomcat combination for load balancing. The problem is that if a request for modification or deletion comes in, it should land on the same Tomcat server on which the task was originally created. But since the requests can come from different web users accessing the web portal from different IP addresses, we cannot rely on a shared session ID (sticky sessions).
Any solutions? Different architecture? Anything.
We have also thought of using the Quartz scheduler API. The site says it supports load balancing and clustering. Does anyone have experience with Quartz in such a scenario?
If you are using Quartz for your scheduling, that can be backed by a database (see JDBCJobStore). Then you could access any Tomcat server and the scheduling would be centralized. I would recommend using a key in the database that you return back to the Axis service, so that the user can reference the same data between calls.
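A minimal quartz.properties sketch for a clustered JDBCJobStore; the scheduler name, data source name, table prefix, and interval are assumptions you would adapt to your own setup:

```properties
# every Tomcat node runs a scheduler instance against the same DB
org.quartz.scheduler.instanceName = LdapSyncScheduler
org.quartz.scheduler.instanceId = AUTO

org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = quartzDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```

With isClustered enabled, any node can create, modify, or delete a trigger, and whichever node's scheduler fires first runs the job, so the "same Tomcat" constraint disappears.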
Alternatively it is not difficult to use the database as a job scheduler, then have your tasks run on Tomcat (any location), and put the results into the database. If the results of the job (such as its status) are small, this would work fine.
I have been reading countless oracle documents, blogs, etc. but I cannot wrap my mind around this concept.
I have successfully deployed an application to a GlassFish server cluster. See screenshot:
I would like to have load balancing and fail over by using a single url address to access my application.
For example currently to get to my application I must use http://<server-name>:28080/AppName but I would like to use http://cluster:28080/AppName and have an available load balancing service automatically select it.
Currently I have 3 GlassFish 3.1 servers with a basic default setup and GMS. Is GlassFish capable of doing the automatic load balancing and fail over or do I need a web server (like Apache or Oracle IPlanet) in front of my GlassFish cluster to distribute connections?
As Olivier states you need to put a load balancer in front of your cluster. You can use a hardware device or you can use software.
I've used both and each works great. You should read Configuring Web Servers for HTTP Load Balancing for a better understanding.
You need a front-end load balancer (software or hardware).
I'm working on a Java SE 7 / Java EE 6 application that may use either Tomcat 7 or Glassfish 3.1 (probably GlassFish, but this has not been decided yet). The application will take advantage of the new WebSockets technology that has recently achieved widespread adoption across all major browsers.
Through much research, forum reading and mailing list monitoring I have determined that (currently, AFAICT) neither mod_jk/isapi_redirect nor mod_proxy reliably (or at all) support WebSockets. Since these are the two tried and tested methods for load balancing/directing traffic in a Tomcat or GlassFish cluster, this obviously represents a problem.
On the other hand, however, Tomcat and GlassFish both have built-in web servers that are widely touted to be just as efficient at serving static content as Apache or IIS, and therefore it is generally not recommended to place Apache or IIS in front of one of these servers UNLESS you need redundancy/load balancing.
So, all of this leads me to these questions:
Is Apache or IIS even needed anymore to load balance in a Tomcat/GlassFish cluster? Wouldn't it be just as efficient (or more so?) to simply place a standard load balancer, like what you would use in any other scenario, in front of a cluster of Tomcat or GlassFish servers and forego the intermediary web server altogether? Or is there still some technical reason that standard load balancers won't work with TC/GF? Assuming a standard load balancer could be used, one could simply find one that supports WebSockets (like Coyote) and use it.
If a standard load balancer will simply not work with Tomcat/GlassFish, what other options are there? How can one effect performant and reliable WebSocket technology using Java EE?
Caveat: I prefer not to consider load-balancing technologies that are limited to dumb round-robin protocols (such as Round-Robin DNS). I do not consider these options reliable/redundant, as one could easily be sent to a server that was down or already handling a much larger number of connections than another server in the cluster. Obviously I know that something like Round-Robin DNS could easily be used with WebSockets without any compatibility concerns.
We were going to use an approach with having our Tomcat instances directly after a standard load balancer. We use SSL heavily in our setup. To keep things simple behind our load balancers and avoid different configurations for SSL/no SSL in our web container we wanted to terminate SSL in our load balancers.
However, the SSL decrypting hardware for our load balancers was quite buggy. Hence we ended up with web servers (nginx) between our web containers and our load balancers for the sole purpose of decrypting SSL.
This is a special case that applied to us, but it is worth keeping in mind. Apart from this I do not see a reason to keep a web server between your load balancer and your web container. The load balancer should work just fine with the web container. Aim for simplicity and minimize the number of different components in your setup.
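As an example of the "standard load balancer directly in front of the web container" setup discussed above, here is a minimal HAProxy sketch. HAProxy is my own example, not something the poster mentioned, and the addresses and timeouts are assumptions:

```haproxy
# haproxy.cfg -- balances HTTP and WebSocket traffic across two Tomcat nodes
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # keep long-lived WebSocket connections open after the Upgrade handshake
    timeout tunnel  1h

frontend ft_app
    bind *:80
    default_backend bk_tomcat

backend bk_tomcat
    balance roundrobin
    server tomcat1 10.0.0.11:8080 check
    server tomcat2 10.0.0.12:8080 check
```

The health checks address the round-robin caveat in the question: a node that is down is taken out of rotation instead of receiving traffic blindly.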