I work on a very high volume public website running on Tomcat 5.5. Currently we require stickiness to a particular server in order to maintain session. I'd like to start replicating session, but have had trouble finding a good FOSS solution. I've written my own Manager (using memcached as the store) but am having trouble dealing with race conditions if more than one server is handling the requests for the same user.
Is there a solution out there I should be looking at? I'm not just looking for something that works as a fallback if stickiness fails, but for something that would keep working if user requests are regularly spread across multiple servers.
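For illustration, one way to detect this kind of race with memcached itself is check-and-set (CAS): two servers that both load, modify, and write back the same session with a plain set will silently overwrite each other, whereas a CAS write fails if the value changed in between. A rough sketch with the spymemcached client follows; the key format, stored map, and retry count are arbitrary examples, not my actual Manager code.

```java
import java.io.Serializable;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;
import net.spy.memcached.MemcachedClient;

public class CasSessionWrite {

    public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        String key = "session:ABC123";   // illustrative key format
        int ttl = 1800;                  // session timeout in seconds

        // Retry a few times: cas() only succeeds if nobody else wrote the key
        // between our gets() and our cas(), so a concurrent update from another
        // server is detected instead of silently overwritten.
        for (int attempt = 0; attempt < 3; attempt++) {
            CASValue<Object> current = client.gets(key);

            if (current == null) {
                // No session stored yet; add() fails if another node created it first.
                Map<String, Serializable> fresh = new HashMap<String, Serializable>();
                fresh.put("hits", Integer.valueOf(1));
                if (Boolean.TRUE.equals(client.add(key, ttl, fresh).get())) {
                    break;
                }
                continue; // lost the race on creation, re-read and retry
            }

            @SuppressWarnings("unchecked")
            Map<String, Serializable> session = (Map<String, Serializable>) current.getValue();
            Integer hits = (Integer) session.get("hits");
            session.put("hits", Integer.valueOf(hits == null ? 1 : hits.intValue() + 1));

            if (client.cas(key, current.getCas(), session) == CASResponse.OK) {
                break; // our write won
            }
            // EXISTS / NOT_FOUND: another server changed the session; loop and retry
        }
        client.shutdown();
    }
}
```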
This is a tough issue. In my opinion, servlet sessions in Tomcat don't work at all once you have multiple, geo-distributed servers.
Our solution is to make our servers totally stateless. All sessions are stored in the database only. We use geo-localized MySQL with the memory engine, and the performance is much better than the old approach of Tomcat session replication.
Even though the chance of a race condition is much lower, it still occurs occasionally. We added record versioning in the DB so we can detect race conditions and retry.
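For illustration, a minimal sketch of the versioning idea with plain JDBC; the table and column names are invented for the example. The UPDATE only succeeds if the version we originally read is still current, otherwise the caller reloads the session and retries.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class VersionedSessionStore {

    /**
     * Writes session data with optimistic locking: the row is only updated
     * if its version column still matches the version we read earlier.
     * Returns false when another server updated the row first, so the
     * caller can re-read the session and retry.
     */
    public boolean saveSession(Connection con, String sessionId,
                               byte[] data, long expectedVersion) throws SQLException {
        String sql = "UPDATE web_session SET data = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        PreparedStatement ps = con.prepareStatement(sql);
        try {
            ps.setBytes(1, data);
            ps.setString(2, sessionId);
            ps.setLong(3, expectedVersion);
            return ps.executeUpdate() == 1;   // 0 rows => someone else won the race
        } finally {
            ps.close();
        }
    }

    /** Reads the current payload and version in one round trip (null if absent). */
    public Object[] loadSession(Connection con, String sessionId) throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "SELECT data, version FROM web_session WHERE id = ?");
        try {
            ps.setString(1, sessionId);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? new Object[] { rs.getBytes(1), rs.getLong(2) } : null;
        } finally {
            ps.close();
        }
    }
}
```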
I have a medium-sized, medium-traffic ecommerce website, with around 200-300 visitors at a time. The webapp's features are:
Built in Java, Spring used for MVC
Using Ehcache to cache several data requests from database
Pure JDBC used for connecting to the database (using Tomcat's connection pool)
Deployed on Tomcat on an AWS EC2 instance
Using RDS as a database server
Around 100 database connections assigned to the webapp
I am using Ehcache extensively to cache most of the catalog data, since it is requested by all traffic coming to the website. But when I deploy a new version on Tomcat, the database server almost always gets stalled by the flood of queries. Ehcache is not able to help here because at that point nothing has been cached yet. In the best case, it takes around 45 minutes, during which the website remains extremely slow, until Ehcache manages to cache the important data. In the worst case, the website crashes and the application stops running.
In the development environment it works very smoothly, as there is no traffic. To quickly get around this problem, we applied a quick fix.
The fix was: in a ServletContextListener we make a dummy hit to the most crucial catalog services, the ones that were eating up the database server with excessive queries. With this change, as soon as the application is deployed we fetch all catalog data into memory and Ehcache caches it all. Only after that does the application become usable to the public. Although this change adds around 30 seconds of lag at startup when we deploy the app, we got rid of 45 minutes of a slow website.
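A stripped-down sketch of what that listener looks like; the CatalogService interface, its methods, and the bean name are placeholders for our real services, and the listener must be declared after Spring's ContextLoaderListener in web.xml so the application context is already available.

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class CacheWarmupListener implements ServletContextListener {

    /** Placeholder for the application's own catalog service bean. */
    public interface CatalogService {
        void loadCategories();
        void loadProducts();
    }

    public void contextInitialized(ServletContextEvent sce) {
        WebApplicationContext ctx =
                WebApplicationContextUtils.getRequiredWebApplicationContext(sce.getServletContext());
        CatalogService catalog = (CatalogService) ctx.getBean("catalogService");

        // Dummy hits against the most query-heavy services; Ehcache keeps the results,
        // so the first real visitors find a warm cache instead of stampeding the DB.
        catalog.loadCategories();
        catalog.loadProducts();
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}
```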
This fix did indeed solve our problem, but it doesn't feel like a good solution, because everything related to the catalog and other crucial data sits in memory whether it is going to be used or not. It is around 3.5 GB of data. Moreover, it is now a nightmare to work in the development environment because of the low memory on development machines.
Please suggest a good way to handle this problem.
Filling the cache at startup feels like a good idea. That's what I would do. If it fits in memory, I wouldn't mind loading too much stuff.
The alternative would be to have an expiry policy and to periodically ping the cache to remove expired entries. But it sounds more like a waste of time.
Distributed caching could also solve the problem but it means adding a layer of complexity to your architecture. I would do that only if necessary. And I don't think it is.
Then, to prevent loading in dev, just use a Spring profile that makes the loading active only in production (and ideally staging).
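If you are on Spring 3.1 or later, a sketch of the profile idea could look like this; the bean, profile names, and the empty warm-up body are just examples to show the wiring.

```java
import javax.annotation.PostConstruct;

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

/**
 * Warm-up bean that is only instantiated when the "production" or "staging"
 * profile is active, so development machines skip the 3.5 GB catalog preload.
 */
@Component
@Profile({"production", "staging"})
public class CatalogCacheWarmer {

    // In a real app the catalog services would be injected here via @Autowired;
    // omitted to keep the sketch short.

    @PostConstruct
    public void warmUp() {
        // fetch the crucial catalog data so Ehcache caches it at startup
    }
}
```

The active profile is then set per environment, for example with -Dspring.profiles.active=production on the production JVM, and left unset on developer machines.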
I'm using Terracotta Enterprise Ehcache along with a Java application, but at some points of the day Terracotta starts to take too long to answer put/get requests, sometimes blocking client threads and throwing exceptions.
My infrastructure is composed of a cluster of 5 JBoss 6.2.0 servers and another cluster of 4 Terracotta Enterprise Ehcache 3.7.5 servers that stores a large amount of data.
The application does around 10 million accesses to the Terracotta Ehcache per day.
Originally I used criteria queries, but when the problems started I changed everything to use ID lookups only.
I tried changing the DGC interval, making it run more often or even only once a day; it didn't get any better.
I started with the persistence mode permanent-store and tried changing to temporary-swap-only, but the problem continues.
I tried changing the Terracotta cluster to work with 2 active machines and 2 passives, or with 4 actives.
I tried configuring my caches as eternal, both true and false.
All my caches are nonstop, and I tried both exception and noop as the timeoutBehavior.
Basically none of my efforts seems to produce any significant change, and Terracotta keeps entering this state where it can't answer requests anymore.
Right now the only thing that seems to "solve" the problem is to restart all the clients.
Does anybody have a similar scenario using Terracotta, with this kind of throughput? Any ideas for where to look now?
Yes, I faced a similar issue of thread contention on a Terracotta cluster setup. The slaves' get/put requests used to take a long time, and a thread dump showed locking as the main reason. I don't remember the details, as it was more than 4-6 months ago. I had two options then:
Create my own cache server: a custom WAR that would run Ehcache underneath and expose my own put, get, delete, etc. operations as REST endpoints.
Use the cache replication that Ehcache provides.
I first tried replication using RMI and then with JGroups. The RMI-based approach worked excellently and was much more stable, so I decided to move to the RMI-based replication that Ehcache provides OOTB. My setup was to use Ehcache as the cache provider for Hibernate-based JPA, and the RMI-based solution worked very well and effectively. It is intelligent enough to detect when other servers in the cluster go down and when they come back up. Replication is asynchronous and transparent. Since the second approach worked well, I didn't try the first one.
I've heard the term "clustering" used for application servers like GlassFish, as well as with Terracotta; and I'm trying to understand what the word clustering implies when used in conjunction with application servers, and when used in conjunction with Terracotta.
My understanding is:
If a GlassFish server is clustered, then it means we have multiple physical/virtual machines, each with their own JRE/JVM running separate instances of GlassFish. However, since they are clustered, they will all communicate through their admin server ("DAS"), and have the same apps deployed to all of them. They will effectively act (to the end user) as if they are a single app server - but now with load balancing, failover/redundancy and scalability added into the mix.
Terracotta is, essentially, a product that makes multiple JVMs, running on different physical/virtual machines, act as if they are a single JVM.
Thus, if my understanding is correct, the following are implied:
You cluster app servers when you want load balancing and failover tolerance
You use Terracotta when any particular JVM is too small to contain your application and you need more "horsepower"
Thus, technically, if you have a GlassFish cluster of, say, 5 server instances, each of those 5 instances could itself be an array/cluster of Terracotta instances; meaning each GlassFish server instance actually lives across the JVMs of multiple machines.
If any of these assertions/assumptions are untrue, please correct me! If I have gone way off-base and clearly don't understand clustering and/or the very purpose of Terracotta, please point me in the right direction!
Terracotta enables you to have shared state across all your nodes (it's stateful). Basically it creates a shared memory space between different JVMs. This is useful when nodes in a cluster all need access to the same objects.
If your application is stateless and you just need load balancing and failover, you can use a solution like JGroups. In this scenario each node just handles requests and has little idea about the other nodes. Objects in memory are not shared across nodes; each JVM just runs on its own and has no idea about the other JVMs. This often works nicely for request/response type applications. A web server serving content (without sessions) does this, for example.
Dealing with a stateless cluster is often simpler than dealing with a stateful cluster, because in a stateless cluster nodes know almost nothing about each other, which leaves fewer things that can go wrong.
GlassFish sits a bit in the middle of these concepts. Objects in memory within GlassFish are visible to all nodes, but the frontend (HTTP connectors) works statelessly.
So to answer your questions:
1) Yes, those are the two most obvious reasons. However, sometimes people want only failover, or only load balancing, or both. Not all clustering solutions fix both of these problems.
2) Yes. Although, technically speaking, Terracotta only solves the shared memory part, not the CPU part. However, by solving the memory part it effectively solves the CPU part as well, since you can now just add JVMs to the shared memory space.
3) I don't know if that's practically possible, but as a thought experiment: yes.
Clustering can mean one of the following:
Multiple instances can be managed as one. Deploy an application to the cluster, it is deployed to all instances in the cluster. Make a configuration change, and that change will be pushed to all nodes in the cluster. GlassFish supports this out of the box.
Service Availability. If any one instance fails, the application is available on another instance. Without high availability enabled, any instance failure also results in session loss for any session being managed by that instance. GlassFish supports this out of the box.
High availability. If any one instance fails, the application is available on another instance, and there is no session loss because a session replica is also maintained on another instance. GlassFish supports this. You will have to choose either #2 or #3 in any one cluster.
What you are asking about IMHO is really #3, because it is the only real case where Terracotta - in the context of high availability clustering - will offer value w/GlassFish. GlassFish already offers built-in high availability, so there had better be a very good reason to add Terracotta to the solution because it will complicate the deployment architecture.
The primary reason I can think of for adding Terracotta is that you may want to offload session management to a data grid and free up GlassFish to run business logic. This may be due to more frequent garbage collection or wanting to manage more users per GlassFish instance. However, I'm not sure that Terracotta can do this seamlessly. With GlassFish's built-in HA clustering, replicating sessions is seamless (no application logic modifications). You may have to write code to put/get data from a Terracotta cache; I'll let you research that :-) Oracle GlassFish Server also integrates (seamlessly) with Coherence to solve this problem. You can separate session management into a Coherence data grid without modifying your application code.
Unless you know for a fact up front that your application must scale to a very large number of concurrent users, start with built-in HA clustering, run tests, and go from there.
Hope this helps.
We have an infrastructure set up wherein the web servers are clustered and the application servers are not. The web servers route requests to the application servers based on a round-robin policy.
In this scenario, the session data available on one application server is not available on the other application server. Is there any way by which the session data from the first application server can be made available on the second application server? The two application servers are physically separate boxes in different cells.
One approach could be to use the database - is there any other means of accomplishing this session replication?
In WebSphere there are essentially two ways to replicate session data:
Persisting to a database
Memory-To-Memory transfers
Which one is appropriate for your needs is highly dependent on your application scenario:
How important is the persistence of your session data, when all your application servers go down?
How many session objects do you have at any one time simultaneously?
In a DB you can store many sessions without much of a problem; the other option is always a question of how much memory is available.
I would go with the database if you already have one set up which all application servers use anyway.
Here is the link to the WebSphere Information Center with the necessary details.
One obvious solution is to enable clustering of your application servers. I assume from the way you worded your question you have rejected this option. Another option is to change the routing used by the web servers to use session affinity (requests for the same session go to the same app server).
Other than that, I'd second the answer by dertoni.
Maybe you can look at Terracotta. It's a caching framework which can cache sessions and runs on a separate server.
There are two options for clustering within WebSphere: session replication or the database. If you have large session objects you are best off using the database, because it allows you to offload stale sessions to disk. If they are requested again, they can be extracted from the database. If you use session replication, those sessions need to stay in memory, not just on your target server but also on the other servers in the replication group. With large sessions this can lead to an out-of-memory condition.
Database session handling is also very customisable, and it didn't noticeably hurt performance in the environments where I have used it.
Don't forget Oracle Coherence.
Tomcat (version 5 here) stores session information in memory. When clustering, this information is periodically broadcast to other servers in the cluster to keep things in sync. You can use a database store to make sessions persistent, but that information is only written periodically as well, and is really only used for failure recovery rather than actually replacing the in-memory sessions.
If you don't want to use sticky sessions (our configuration doesn't allow it unfortunately) this raises the problem of the sessions getting out of sync.
In other languages, web frameworks tend to allow you to use a database as the primary session store. Whilst this introduces a potential scaling issue, it does make session management very straightforward. I'm wondering if there's a way to get Tomcat to use a database for sessions in this way (technically this would also remove the need for any clustering configuration in the Tomcat server.xml).
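To be concrete, by "using a database as the primary session store" I mean roughly the pattern below: serialize the session attributes and write them under the session ID on every request, then read them back on the next request, whichever server handles it. The table name and the MySQL-style upsert are just an example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.Map;

public class DbSessionStore {

    /** Serialize the attribute map and upsert it under the session id. */
    public void save(Connection con, String sessionId, Map<String, Object> attributes) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(new HashMap<String, Object>(attributes));
        out.close();

        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO tomcat_session (id, data, last_access) VALUES (?, ?, NOW()) "
              + "ON DUPLICATE KEY UPDATE data = VALUES(data), last_access = NOW()");
        ps.setString(1, sessionId);
        ps.setBytes(2, buf.toByteArray());
        ps.executeUpdate();
        ps.close();
    }

    /** Load and deserialize the attribute map for a session id, or null if absent. */
    @SuppressWarnings("unchecked")
    public Map<String, Object> load(Connection con, String sessionId) throws Exception {
        PreparedStatement ps = con.prepareStatement(
                "SELECT data FROM tomcat_session WHERE id = ?");
        ps.setString(1, sessionId);
        ResultSet rs = ps.executeQuery();
        try {
            if (!rs.next()) {
                return null;
            }
            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(rs.getBytes(1)));
            return (Map<String, Object>) in.readObject();
        } finally {
            ps.close();
        }
    }
}
```

In a real deployment this logic would sit behind a Tomcat Manager/Store rather than being called from application code, but it captures the idea.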
There definitely is a way. Though I'd strongly vote for sticky sessions - saves so much load for your servers/database (unless something fails)...
http://tomcat.apache.org/tomcat-5.5-doc/config/manager.html has information about SessionManager configuration and setup for Tomcat. Depending on your exact requirements you might have to implement your own session manager, but this starting point should provide some help.
Take a look at Terracotta, I think it can address your scaling issues without a major application redesign.
I've always been a fan of the Rails sessions technique: store the sessions (zipped + encrypted + signed) in the user's cookie. That way you can load balance to your heart's content, and not have to worry about sticky sessions, hitting the database for your session data, etc. I'm just not sure you could implement that easily in a Java app without some rewriting of your session-access code. Anyway, just a thought.
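For anyone curious, the core of it in Java is just HMAC-signing the serialized session before it goes into the cookie and verifying the signature on the way back in. A bare-bones sketch (Java 8, no compression or encryption, and key management is up to you):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

/**
 * Rails-style cookie sessions, reduced to the essentials: the session payload
 * travels in the cookie, signed with a server-side secret so clients cannot
 * tamper with it.
 */
public class SignedCookieCodec {

    private final byte[] secret;

    public SignedCookieCodec(byte[] secret) {
        this.secret = secret;
    }

    /** payload -> "base64(payload).base64(hmac)" suitable for a cookie value. */
    public String encode(String payload) throws Exception {
        byte[] data = payload.getBytes(StandardCharsets.UTF_8);
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        return b64.encodeToString(data) + "." + b64.encodeToString(hmac(data));
    }

    /** Verifies the signature and returns the payload, or null if tampered with. */
    public String decode(String cookieValue) throws Exception {
        int dot = cookieValue.lastIndexOf('.');
        if (dot < 0) {
            return null;
        }
        byte[] data = Base64.getUrlDecoder().decode(cookieValue.substring(0, dot));
        byte[] sig = Base64.getUrlDecoder().decode(cookieValue.substring(dot + 1));
        // constant-time comparison to avoid timing side channels
        return MessageDigest.isEqual(hmac(data), sig)
                ? new String(data, StandardCharsets.UTF_8)
                : null;
    }

    private byte[] hmac(byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return mac.doFinal(data);
    }
}
```

Keep in mind the roughly 4 KB cookie size limit, and that signing only prevents tampering; anything sensitive still needs to be encrypted as well.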
Another alternative would be the memcached-session-manager, a memcached-based session failover and session replication solution for Tomcat 6.x / 7.x. It supports both sticky and non-sticky sessions.
I created this project to get the best of performance and reliability, and to be able to scale out by just adding more Tomcat and memcached nodes.