Using Replicated Cache vs LB sticky session - java

I need to keep some data in a cache on the server. The servers are in a cluster and a call can go to any of them. In such a scenario, is it better to use a replicated/distributed cache like EhCache, or to use session stickiness on the LB?
If the data size (in the cache) is large, won't serialization and de-serialization across all servers have a performance impact?
Also, in the case of a distributed cache, what is the optimal number of servers up to which such a cache is effective? Since data is replicated to all nodes, with say 20 nodes it is like master-to-master replication across all of them: each node will get notifications from the other 19 and will push its modifications to the other 19. Does that kind of setup scale?

As always in distributed systems, the answer depends on different things:
A load balancer with sticky sessions is for sure the simpler way for the developer, since it doesn’t make any difference if the application runs on 1, 2 or 100 servers. If this is all you care about, stick with it and you can stop reading right here.
I’m not sure how session-aware load balancers are implemented or what their general limit in terms of requests per second would be, but they have at least one big disadvantage over the distributed cache: what do you do if the machine handling the sessions is down? If you distribute your cache, any machine can serve the request and it doesn’t matter if one of them fails. The serialisation/deserialisation part is not a big problem; rather, the network could be the bottleneck if you don't run it in at least a 1 Gbit network environment, but it should be ok.
For a distributed cache you could go with Hazelcast, Infinispan or similar solutions, which would simplify the access from your own application. (Update: these implementations use a DHT to distribute the cache.)
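As a rough illustration of the distributed-cache route, here is a minimal Hazelcast sketch; the map name and the stored value are just placeholders, not anything prescribed by the question. Each application node starts an embedded member, and the map's entries are partitioned across the cluster:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class DistributedCacheSketch {
    public static void main(String[] args) {
        // Each application node starts an embedded Hazelcast member; members discover
        // each other and partition the "sessions" map across the cluster (DHT-style),
        // so no single node has to hold the whole cache.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        Map<String, Object> sessions = hz.getMap("sessions");
        sessions.put("session-42", "some session state"); // stored on whichever member owns this key's partition
        Object state = sessions.get("session-42");        // fetched transparently from the owning member
        System.out.println(state);
    }
}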
For a fully replicated cache you could use EhCache, which you mentioned, or Infinispan. The advantage over the distributed cache is much faster access, since all the data is replicated on every machine and you only need to access it locally. The disadvantage is slower writes (so it is best for read-very-often, write-very-seldom scenarios) and the fact that your cache is limited by the amount one machine is able to store. If you are running your applications on servers with 64 GB of RAM this is OK; if you want to distribute them over small Amazon instances, it is probably a bad idea. I think before you hit any problems with updating too many nodes, you will run out of memory, and that one is at least very easy to calculate: AVG_CACHE_NEEDED_PER_CLIENT * NUMBER_OF_CLIENTS < MEMORY_FOR_CACHE_AVAILABLE (on one server). If you need more cache than you have available on any single node in your EhCache cluster, full replication won't be possible any more.
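A back-of-envelope version of that check might look like the sketch below; all the numbers are illustrative assumptions, not measurements.

public class ReplicatedCacheSizing {
    public static void main(String[] args) {
        long avgCacheNeededPerClientBytes = 50 * 1024;                // ~50 KB of cached state per client (assumption)
        long numberOfClients = 200_000;                               // expected clients (assumption)
        long memoryForCacheAvailableBytes = 8L * 1024 * 1024 * 1024;  // 8 GB reserved for the cache on one server (assumption)

        long required = avgCacheNeededPerClientBytes * numberOfClients;
        System.out.printf("required=%d MB, available=%d MB, full replication fits=%b%n",
                required / (1024 * 1024),
                memoryForCacheAvailableBytes / (1024 * 1024),
                required < memoryForCacheAvailableBytes);
    }
}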
Or you could use a Redis cluster or something similar, independent from your application and the servers your application is running on. This would allow you to scale the cache at a different rate than the rest of your application; however, access to the data wouldn’t be quite as trivial.
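A minimal sketch of that option using the Jedis client follows; the host name, key and TTL are assumptions for illustration only.

import redis.clients.jedis.Jedis;

public class RedisCacheSketch {
    public static void main(String[] args) {
        // Connect to a Redis instance that lives outside the application servers,
        // so cache capacity scales independently of the app tier. Host is hypothetical.
        try (Jedis jedis = new Jedis("redis.internal.example", 6379)) {
            jedis.setex("user:42:profile", 300, "{\"name\":\"Alice\"}"); // cache with a 5-minute TTL
            String cached = jedis.get("user:42:profile");
            System.out.println(cached);
        }
    }
}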
Of course the actual decision depends on your very specific use-case and the demands you are putting on your application.
Personally I was very happy when I found out today that Azure WebPages have a load balancer with sticky session support, and I don’t need to reconfigure my application to use Redis as a session object store, and can just keep everything as it is.
But for a huge workload with hundreds of servers a simple load balancer will probably be overwhelmed, and a distributed cache or a centralized replicated cache (Redis) will be the way to go.

Related

Cache implementation

I've been researching this for a week now, but I'd like some thoughts on my particular situation...
2 physical servers:
Server A - public WAR, admin WAR
Server B - public WAR
Requirements:
Both WARs need to view the same data.
admin WAR modifies / adds data to the cache.
public WARs modify other parts of the cache / add data to it.
entire cache needs to reside in memory on each physical server (if I add something on Server A admin WAR or public WAR, it needs to show up on Server B public WAR) so in the event of a failure, we aren't waiting for half the cache to be populated
1,500 active users/server, vast majority of traffic is read, very little write
Additional hardware is out of the question.
Is there a good third party caching solution for this scenario? It seems most distributed caching systems want to leave half the data on Server A and half on Server B, which wouldn't meet our failover performance needs.
Thanks for any ideas!
You should look at Redis
http://www.gigaspaces.com/ has a solution for that: it allows you to create a "Space" that serves as a cache in replicated mode, so each node will have an exact copy of the data.
They also have solutions for fail-over and hot standby.
Edit:
GigaSpaces is far more than just a shared cache, but you can use just the caching solution. It's called an In-Memory Data Grid. They have dramatically changed their web pages, so I can't find the exact page, but if you search through the documentation you'll find it.
You can start here
http://www.gigaspaces.com/datagrid
But the technology is not free.
Take a look at the replicated options for EhCache.
Sounds like you've been searching for information on "distributed caches", which have a different definition than a "replicated cache". A distributed cache is a larger cache system spread out among many machines, so that the loss of any one machine in the cluster does not bring down the entire cache, but just a portion of it. In this scenario the total size of your cache can reach (number of machines times memory of each machine).
In a replicated cache, the cached data is replicated across each machine, limiting your total cache size to what a single machine can hold.
"It seems most distributed caching systems want to leave half the data on Server A and half on Server B, which wouldn't meet our failover performance needs."
No, you can tweak that easily. Otherwise you need sticky sessions (you have to know exactly which cache stores your data). You can choose any solution on the market: EhCache, GigaSpaces, GridGain, etc. I would recommend JBoss Cache; IMHO it is the simplest and exactly what you need.
There are many solutions in this space.
Memcached
EhCache
Infinispan
All of them can be configured as distributed caches. AFAIK Infinispan works best when left as an embedded cache in JBoss AS; last I checked it was difficult to integrate into other app servers. If you have money I would recommend BigMemory from Terracotta. It's the commercial derivative of EhCache and provides a lot of additional nice-to-have features.
We use Apache Commons JCS and have been very pleased with it. It claims to be almost twice as fast as EHCache. For the situation you have described, you would probably configure a Lateral TCP Cache.
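For reference, a minimal Commons JCS sketch might look like the following; the region name is illustrative, and the Lateral TCP replication itself would be configured in cache.ccf rather than in code.

import org.apache.commons.jcs.JCS;
import org.apache.commons.jcs.access.CacheAccess;

public class JcsSketch {
    public static void main(String[] args) throws Exception {
        // The Lateral TCP Cache (replication to the other server) is configured in
        // cache.ccf on the classpath; application code just uses the region by name.
        CacheAccess<String, Object> cache = JCS.getInstance("sharedData"); // region name is hypothetical
        cache.put("product:7", "cached product details");
        Object value = cache.get("product:7");
        System.out.println(value);
    }
}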

Shared cache between Tomcat web apps

I'm looking for a solution to share a cache between two Tomcat web apps running on different hosts. The cache is being used for data synchronization, so the cache must be guaranteed to be up-to-date at all times between the two Tomcat instances. (Sorry, I'm not 100% sure if the correct terminology for this requirement is "consistency" or something more specific like having an ACID property.) Another requirement, of course, is that it should be fast to access the cache, with about equal numbers of writes and reads. I do have access to a shared filesystem, so that is a consideration.
I've looked at something like ehcache but in order to get a shared cache between the webapps I would either need to implement on top of a Terracotta environment or using the new ehcache cache server. The former (Terracotta) seems like overkill for this, while the cache web server seems like it wouldn't provide the fast performance that I want.
Another solution I've looked at is building something simple on top of a fast key-value store like Redis or memcachedb. Redis is in-memory but can easily be configured to be a centralized cache, while memcachedb is a disk-based persistent cache which could work because I have a shared filesystem.
I'm looking for suggestions on how to best solve this problem. The solution needs to be a relatively mature technology as it will be used in a production environment.
Thanks in advance!
I'm quite sure that you don't require Terracotta or the Ehcache server if you need a distributed cache. Ehcache with one of the four replication mechanisms would do.
However, based on what you've written I guess that you're looking for more than just a cache. Memcached/Ehcache are examples of what you might call a caching layer for your application - nothing more.
If you find yourself using words like 'guaranteed', 'up-to-date' and 'ACID', you're better off using an in-memory DB like Oracle TimesTen, MySQL Cluster, or Redis with disk-based persistent storage.
You can use memcached (not memcachedb) for fast and efficient caching. Redis or memcachedb would be overkill unless you want persistent caching. Memcached can be clustered very easily and you can use the spymemcached Java client to access it. Memcached is very mature and is running on several hundred thousand, if not millions, of production servers. It can be monitored through Nagios and Munin when in production.
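A minimal spymemcached sketch is shown below; the host, port and key are assumptions, and error handling is omitted.

import net.spy.memcached.MemcachedClient;

import java.net.InetSocketAddress;

public class MemcachedSketch {
    public static void main(String[] args) throws Exception {
        // Client pointing at a single memcached daemon shared by both Tomcat instances.
        MemcachedClient client = new MemcachedClient(
                new InetSocketAddress("cache1.internal.example", 11211)); // hypothetical host

        client.set("sync:lastRun", 3600, "2014-01-01T00:00:00Z"); // cached for one hour
        Object value = client.get("sync:lastRun");
        System.out.println(value);

        client.shutdown();
    }
}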

memcached tomcat mysql on 1GB RAM

I am new to memcached and caching in general. I have a java web application running on Ubuntu + Tomcat + MySQL on a VPS Server with 1GB of memory.
Does it make sense to add a memcached layer with about 256 MB for caching? Will this be too much load on the server? Which is more appropriate: caching rendered HTML pages or database objects?
Please advise.
If you're going to cache pages, don't use memcached, use Varnish. However, there's a good chance that's not a great use of memory. Caching pages trades memory for computation and database work, but it costs quite a lot of memory per page, so it's best for cases where the computation and database work needed to produce a single page is substantial (or the pages are very small!). Also, consider that page caching won't be effective, or even possible, if you want per-user customisation on your pages (e.g. showing the number of items in a shopping cart), at least not without getting into some truly hairy shenanigans (edge-side includes, anyone?).
If you're not going to cache pages, and your app is on a single machine, then there's no point using memcached or similar. The point of cache servers like that is to make the memory on one machine work as a cache for another - like how a file server shares a disk, they're essentially memory servers. On a single machine, you might as well give all the memory to Java and cache objects on the heap.
Are you using an object-relational mapper? If so, see if it has any support for a second-level cache. The big three implementations (Hibernate, OpenJPA, and EclipseLink) all support in-memory caches. They're likely to do a much better job than you would if you did the caching yourself.
But if you're not using a mapper, you have no choice but to do the caching yourself. There are extension points in LinkedHashMap for building LRU caches, and then of course there's the people's favourite, SoftReference, in combination with a HashMap. Plus, there are probably cache implementations out there you could download and use - I'd be shocked if there weren't something in the Apache Commons libraries.
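For instance, a minimal LRU cache built on those LinkedHashMap extension points might look like this (the capacity is illustrative; wrap it with Collections.synchronizedMap if several threads will share it):

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal in-JVM LRU cache using LinkedHashMap's access-order mode and removeEldestEntry hook.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true -> iteration order is least-recently-used first
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least recently used entry once over capacity
    }

    public static void main(String[] args) {
        LruCache<Long, String> users = new LruCache<>(2);
        users.put(1L, "alice");
        users.put(2L, "bob");
        users.get(1L);          // touch 1 so it becomes most recently used
        users.put(3L, "carol"); // evicts 2, the least recently used entry
        System.out.println(users.keySet()); // prints [1, 3]
    }
}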
memcached won't add any noticeable load on your server, but it will be memory your app can't use. If you only plan to have a single app server for a while, you're better off using an in-JVM cache.
As far what to cache, the answer falls somewhere in the middle of the above. You don't want to cache exactly what's in your database and you certainly don't want to cache the final output. You have a data model representation in your application that isn't exactly what's in the DB (e.g. a User object might be made up of multiple queries from a few different tables). Cache that kind of thing as it's most reusable.
There's lots of info on the memcached site that should help you understand and get going with caching in general and memcached specifically.
It might make sense to do that; why not try a smaller size like 64 MB and see how that goes? When you use more resources for memcached, there is less for everything else. You should try it and see what gives you the best performance.

Zookeeper/Chubby -vs- MySql NDB

I have been reading the Paxos paper, the FLP theorem, etc. recently and evaluating Apache Zookeeper for a project. I have also been going through Chubby (Google's distributed locking service) and the various literature on it that is available online. My fundamental use case for Zookeeper is to implement replication and general coordination for a distributed system.
I was just wondering, though, what specific advantage Zookeeper or a Chubby-like distributed locking system brings to the table. Basically I am just wondering why I can't just use a MySQL NDB Cluster. I keep hearing that MySQL has a lot of replication issues. I was hoping someone with more experience on the subject might shed some light on it.
Thanks in advance..
A simplistic listing of my requirements :
I have a homogeneous distributed system.
I need some means of maintaining consistent state across all my nodes.
My system exposes a service, and interaction with clients will lead to some change in collective state of my system.
High availability is a goal, thus a node going down must not affect the service.
I expect the system to service at least a couple of thousand req/sec.
I expect the collective state of the system to be bounded in size (basically inserts/deletes will be transient... but in steady state, I expect lots of updates and reads).
It depends on the kind of data you are managing and the scale and fault tolerance you are going for.
I can answer from the ZooKeeper point of view. Before starting I should mention that ZooKeeper is not a Chubby clone. Specifically it does not do locks directly. It is also designed with different ordering and performance requirements in mind.
In ZooKeeper the entire copy of the system state is memory resident. Changes are replicated using an atomic broadcast protocol and synced to disk (using a change journal) by a majority of ZooKeeper servers before being processed. Because of this ZooKeeper has deterministic performance and can tolerate failures as long as a majority of servers are up. Even with a big outage, such as a power failure, as long as a majority of servers come back online, the system state will be preserved. The information stored in ZooKeeper is usually considered the ground truth of the system, so such consistency and durability guarantees are very important.
The other things that ZooKeeper gives you have to do with monitoring dynamic coordination state. Ephemeral nodes allow you to do easy failure detection and group membership. The ordering guarantees allow you to do leader election and client-side locking. Finally, watches allow you to monitor system state and quickly respond to changes.
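To make that concrete, here is a minimal sketch of ephemeral-node group membership plus a child watch using the plain ZooKeeper client. The connect string and paths are assumptions, the /members parent znode is assumed to already exist, and retries/error handling are omitted.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class MembershipSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk1.internal.example:2181", 5000, event -> {}); // hypothetical ensemble

        // Ephemeral sequential node: it disappears automatically when this client's
        // session dies, which is what gives cheap failure detection / group membership.
        zk.create("/members/node-", "host:port".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

        // Watch the group: the callback fires once when the children change,
        // so the application can react quickly to members joining or failing.
        zk.getChildren("/members", event ->
                System.out.println("membership changed: " + event));
    }
}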
So if you need to manage and respond to dynamic configuration, detect failures, elect leaders, etc. ZooKeeper is what you are looking for. If you need to store lots of data or you need a relational model for that data, MySQL is a much better option.
MySQL with Innodb provides a good general purpose solution, and will probably keep up with your performance requirements quite easily on not-too-expensive hardware. It can easily handle many thousands of updates per second on a dual quad-core box with decent disks. The built-in asynchronous replication will get you most of the way there for your availability requirements - but you might lose a few seconds' worth of data if the primary fails. Some of this lost data might be recoverable when the primary is repaired, or might be recoverable from your application logs: whether you can tolerate this is dependent on how your system works. A less lossy - but slower - alternative is to use MySQL Innodb with shared disk between Primary and Failover units: in this case, the Failover unit will take over the disk when the Primary fails with no loss of data -- as long as the Primary did not have some kind of disk catastrophe. If shared disk is not available, DRBD can be used to simulate this by synchronously copying disk blocks to the Failover unit as they are written: this might have an impact on performance.
Using Innodb and one of the replication solutions above will get your data copied to your Failover unit, which is a large part of the recovery problem solved, but extra glue is required to reconfigure your system to bring the Failover unit on-line. This is usually performed with a cluster system like RHCS or Pacemaker or Heartbeat (on Linux) or the MS Cluster stuff for Windows. These systems are toolkits, and you are left to get your hands dirty building them into a solution that will fit your environment. However, for all of these systems there is a brief outage period while the system notices that the Primary has failed, and reconfigures the system to use the Failover unit. This might be tens of seconds: trying to reduce this can make your failure detection system too sensitive, and you might find your system being failed over unnecessarily.
Moving up, MySQL NDB is intended to reduce the time to recovery, and to some extent help scale up your database for improved performance. However, MySQL NDB has a quite narrow range of applicability. The system maps a relational database on to a distributed hash table, and so for complex queries involving multiple joins across tables, there is quite a bit of traffic between the MySQL component and the storage components (the NDB nodes) making complex queries run slow. However, queries that fit well run very fast indeed. I have looked at this product a few times, but my existing databases have been too complicated to fit well and would require a lot of redesign to get good performance. However, if you are at the design stage of a new system, NDB would work well if you can bear its constraints in mind as you go. Also, you might find that you need quite a few machines to provide a good NDB solution: a couple of MySQL nodes plus 3 or more NDB nodes - although the MySQL and NDB nodes can co-exist if your performance needs are not too extreme.
Even MySQL NDB cannot cope with total site loss - fire at the data centre, admin error, etc. In this case, you usually need another replication stream running to a DR site. This will normally be done asynchronously so that connectivity blips on the inter-site link do not stall your whole database. This is provided by NDB's geographic replication option (in the paid-for telco version), but I think MySQL 5.1 and above can provide it natively.
Unfortunately, I know little about Zookeeper and Chubby. Hopefully someone else can pick up these aspects.

Persistence strategy for low latency reads and writes

I am building an application that includes a feature to bulk tag millions of records, more or less interactively. The user interaction is very similar to Gmail where users can tag individual emails, or bulk tag large amounts of emails. I also need quick read access to these tag memberships as well, and where the read pattern is more or less random.
Right now we're using Mysql and inserting one row for every tag-document pair. Writing millions of rows to Mysql takes a while (high I/O), even with bulk insertions and heavy optimization. We need this to be an interactive process, not a batch process.
For the data that we're storing and reading, consistency and availability of the data are not as important as performance and scalability. So in the event of system failure while the writes are occurring, I can deal with some data loss. However, the data definitely needs to be persisted to secondary storage at some point.
So, to sum up, here are the requirements:
Low latency bulk writes of potentially tens of millions of records
Data needs to be persisted in some way
Low latency random reads
Durable writes not required
Eventual consistency is okay
Here are some solutions I've looked at:
Write-behind caches (Terracotta, Gigaspaces, Coherence), where records are written to memory and drained to the database asynchronously. These scare me a little because they appear to add a certain amount of complexity to the app that I'd want to avoid.
Highly scalable key-value stores, like MongoDB, HBase, Tokyo Tyrant
If you have the budget to use Coherence for this, I highly recommend doing so. There is direct support for write-behind, eventual consistency behavior in Coherence and it is very survivable to both a database outage and Coherence cluster node outages (if you use >= 3 Coherence nodes on separate JVMs, preferably on separate hosts). I have implemented this for doing high-volume CRM for a Fortune 100 company's e-commerce site and it works fantastically.
One of the best aspects of this architecture is that you write your Java application code as if none of the write-behind behavior were taking place, and then plug in the Coherence topology and configuration that makes it happen. If you need to change the behavior or topology of Coherence later, no change in your application is required. I know there are probably a handful of reasonable ways to do this, but this behavior is directly supported in Coherence rather than having to invent or hand-roll a way of doing it.
To make a really fine point: your worry about adding application complexity is a good one. With Coherence, you simply write updates to the cache (or, if you're using Hibernate, it can be the L2 cache provider). Depending upon your Coherence configuration and topology, you have the option to deploy your application to use write-behind, distributed caches. So your application is no more complex (and, frankly, unaware) due to the features of the cache.
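To illustrate that the application stays unaware of write-behind, a sketch of the application-side code might be no more than this; the cache and key names are hypothetical, and the write-behind topology would live in the Coherence cache configuration, not here.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class TaggingWriteSketch {
    public static void main(String[] args) {
        // Application code only sees a plain cache put; whether this cache is
        // write-behind, replicated or distributed is decided by configuration.
        NamedCache tags = CacheFactory.getCache("document-tags");
        tags.put("doc:12345", "important,urgent"); // drained to the DB asynchronously if write-behind is configured
        Object value = tags.get("doc:12345");
        System.out.println(value);
    }
}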
Finally, I implemented the solution mentioned above from 2005-2007 when Coherence was made by Tangosol and they had the best possible support. I'm not sure how things are now under Oracle - hopefully still good.
I've worked on a large project that used asynchronous writes, although in that case it was just hand-written using background threads. You could also implement something like that by offloading the DB write process to a JMS queue.
One thing that will certainly speed up db writes is to do them in batches. JDBC batch updates can be orders of magnitude faster than individual writes, and if you're doing them asynchronously you can just write them 500 at a time.
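A sketch of that batching approach, assuming a hypothetical document_tags table and column names, could look like this:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchTagWriter {
    // Writes tag-document pairs in batches of 500 instead of one INSERT per row.
    static void writeTags(Connection conn, List<long[]> tagDocPairs) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO document_tags (tag_id, document_id) VALUES (?, ?)")) {
            int pending = 0;
            for (long[] pair : tagDocPairs) {
                ps.setLong(1, pair[0]);
                ps.setLong(2, pair[1]);
                ps.addBatch();
                if (++pending == 500) {   // flush every 500 rows, as suggested above
                    ps.executeBatch();
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();        // flush the remainder
            }
        }
        conn.commit();
    }
}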
Depending on how your data is organized, perhaps you would be able to use sharding. If the read latency isn't low enough, you can also try adding caching; memcached is one popular solution.
Berkeley DB has a very high performance disk-based hash table that supports transactions, and integrates with a Java EE environment if you need that. If you're able to model the data as key/value pairs, this can be a very scalable solution.
http://www.oracle.com/technology/products/berkeley-db/je/index.html
(Note: Oracle bought Berkeley DB about 5-10 years ago; the original product has been around for 15-20 years.)
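Following up on the Berkeley DB suggestion above, a minimal JE sketch might look like this; the path, database name and keys are assumptions, and transactions are left out for brevity.

import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

import java.io.File;

public class BdbSketch {
    public static void main(String[] args) {
        // Open (or create) an on-disk environment and a key/value database inside it.
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        Environment env = new Environment(new File("/var/data/tag-store"), envConfig); // hypothetical path

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        Database db = env.openDatabase(null, "tags", dbConfig);

        // Store and read back one key/value pair.
        db.put(null, new DatabaseEntry("doc:42".getBytes()),
                     new DatabaseEntry("important,urgent".getBytes()));

        DatabaseEntry value = new DatabaseEntry();
        db.get(null, new DatabaseEntry("doc:42".getBytes()), value, null);
        System.out.println(new String(value.getData()));

        db.close();
        env.close();
    }
}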
