Zookeeper/Chubby -vs- MySql NDB - java

I have been reading the Paxos paper, the FLP theorem etc. recently and evaluating Apache ZooKeeper for a project. I have also been going through Chubby (Google's distributed locking service) and the various literature on it that is available online. My fundamental use case for ZooKeeper is to implement replication and general coordination for a distributed system.
I was just wondering, though: what specific advantage does ZooKeeper or a Chubby-like distributed locking system bring to the table? Basically, I am just wondering why I can't just use a MySQL NDB Cluster. I keep hearing that MySQL has a lot of replication issues. I was hoping someone with more experience on the subject might shed some light on it.
Thanks in advance.
A simplistic listing of my requirements :
I have a homogeneous distributed system.
I need some means of maintaining consistent state across all my nodes.
My system exposes a service, and interaction with clients will lead to some change in collective state of my system.
High availability is a goal, thus a node going down must not affect the service.
I expect the system to service at least a couple of thousand req/sec.
I expect the collective state of the system to be bounded in size (basically inserts/deletes will be transient... but in steady state, I expect lots of updates and reads)

It depends on the kind of data you are managing and the scale and fault tolerance you are going for.
I can answer from the ZooKeeper point of view. Before starting I should mention that ZooKeeper is not a Chubby clone. Specifically it does not do locks directly. It is also designed with different ordering and performance requirements in mind.
In ZooKeeper the entire copy of system state is memory resident. Changes are replicated using an atomic broadcast protocol and synced to disk (using a change journal) by a majority of ZooKeeper servers before being processed. Because of this ZooKeeper has deterministic performance and can tolerate failures as long as a majority of servers are up. Even with a big outage, such as a power failure, as long as a majority of servers come back online, system state will be preserved. The information stored in ZooKeeper is usually considered the ground truth of the system, so such consistency and durability guarantees are very important.
The other things that ZooKeeper gives you have to do with monitoring dynamic coordination state. Ephemeral nodes allow you to do easy failure detection and group membership. The ordering guarantees allow you to do leader election and client-side locking. Finally, watches allow you to monitor system state and quickly respond to changes in it.
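To make these primitives concrete, here is a minimal sketch (my illustration, not part of the original answer) using the ZooKeeper Java client: it registers an ephemeral znode for group membership and sets a watch on a configuration znode. The connect string and paths are made up, and the parent znodes are assumed to already exist.

```java
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

public class MembershipSketch {
    public static void main(String[] args) throws Exception {
        // Connect string and paths are placeholders for illustration.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 5000,
                event -> System.out.println("session event: " + event));

        // Ephemeral node: removed automatically if this client's session dies,
        // which is what makes failure detection / group membership easy.
        // (Assumes the /members parent znode already exists.)
        zk.create("/members/node-1", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Watch a config znode; the callback fires once when it changes,
        // after which you re-read the data and re-set the watch.
        Stat stat = zk.exists("/config", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
                System.out.println("config changed, re-read it and re-set the watch");
            }
        });
        System.out.println("config exists: " + (stat != null));
    }
}
```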
So if you need to manage and respond to dynamic configuration, detect failures, elect leaders, etc. ZooKeeper is what you are looking for. If you need to store lots of data or you need a relational model for that data, MySQL is a much better option.

MySQL with Innodb provides a good general purpose solution, and will probably keep up with your performance requirements quite easily on not-too-expensive hardware. It can easily handle many thousands of updates per second on a dual quad-core box with decent disks. The built-in asynchronous replication will get you most of the way there for your availability requirements - but you might lose a few seconds' worth of data if the primary fails. Some of this lost data might be recoverable when the primary is repaired, or might be recoverable from your application logs: whether you can tolerate this is dependent on how your system works. A less lossy - but slower - alternative is to use MySQL Innodb with shared disk between Primary and Failover units: in this case, the Failover unit will take over the disk when the Primary fails with no loss of data -- as long as the Primary did not have some kind of disk catastrophe. If shared disk is not available, DRBD can be used to simulate this by synchronously copying disk blocks to the Failover unit as they are written: this might have an impact on performance.
Using Innodb and one of the replication solutions above will get your data copied to your Failover unit, which is a large part of the recovery problem solved, but extra glue is required to reconfigure your system to bring the Failover unit on-line. This is usually performed with a cluster system like RHCS or Pacemaker or Heartbeat (on Linux) or the MS Cluster stuff for Windows. These systems are toolkits, and you are left to get your hands dirty building them into a solution that will fit your environment. However, for all of these systems there is a brief outage period while the system notices that the Primary has failed, and reconfigures the system to use the Failover unit. This might be tens of seconds: trying to reduce this can make your failure detection system too sensitive, and you might find your system being failed over unnecessarily.
Moving up, MySQL NDB is intended to reduce the time to recovery, and to some extent help scale up your database for improved performance. However, MySQL NDB has a quite narrow range of applicability. The system maps a relational database on to a distributed hash table, and so for complex queries involving multiple joins across tables, there is quite a bit of traffic between the MySQL component and the storage components (the NDB nodes) making complex queries run slow. However, queries that fit well run very fast indeed. I have looked at this product a few times, but my existing databases have been too complicated to fit well and would require a lot of redesign to get good performance. However, if you are at the design stage of a new system, NDB would work well if you can bear its constraints in mind as you go. Also, you might find that you need quite a few machines to provide a good NDB solution: a couple of MySQL nodes plus 3 or more NDB nodes - although the MySQL and NDB nodes can co-exist if your performance needs are not too extreme.
Even MySQL NDB cannot cope with total site loss - fire at the data centre, admin error, etc. In this case, you usually need another replication stream running to a DR site. This will normally be done asynchronously so that connectivity blips on the inter-site link do not stall your whole database. This is provided with NDB's geographic replication option (in the paid-for telco version), but I think MySQL 5.1 and above can provide this natively.
Unfortunately, I know little about Zookeeper and Chubby. Hopefully someone else can pick up these aspects.

Related

Regarding improvement of the efficiency of a cache-heavy system

I'm about to improve the efficiency of a cache-heavy system, which has the following properties/architecture:
The system has 2 components, a single instance backend and multiple frontend instances, spread across remote data centers.
The backend generates data and writes it to a relational database that is replicated to multiple data centers.
The frontends handle client requests (common web traffic based) by reading data from the database and serving it. Data is stored in a local cache for an hour before it expires and has to be retrieved again.
(The cache’s eviction policy is LRU based).
I want to mention that there are two issues with the implementation above:
It turns out that many of the database accesses are redundant because the underlying data didn't actually change.
On the other hand, a change isn't reflected until the cache TTL elapses, causing staleness issues.
Can you advise a solution that fixes both of these problems?
Should the solution change if the data were stored in a NoSQL DB like Cassandra instead of a classic database?
Unfortunately, there is no silver bullet here. There are two obvious variants:
Keep a long TTL or cache forever, but invalidate the cached data when it is updated. This can get quite complex and error-prone.
Simply lower the TTL to get faster updates. The low-TTL approach is IMHO the KISS approach. We go as low as 27 seconds. A cache with such a low TTL does not get many hits during normal operation, but helps a lot when a flash crowd hits your application.
If your database is powerful enough and has acceptable latency, approach 2 is the simplest one.
If your database does not have acceptable latency, or your application does multiple sequential reads from the database per web request, then you can use a cache that provides refresh-ahead or background refresh. This means the cache refreshes the entries automatically and there is no additional latency except for the first read. However, this approach comes with the downside of increased database load.
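As an illustration of the refresh-ahead idea (this sketch is mine, not part of the original answer), Guava's CacheBuilder can be configured so that a stale entry keeps being served while a new value is loaded; the key format and the loadFromDatabase call are placeholders:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class RefreshingCacheSketch {
    // Placeholder for your real data access call.
    static String loadFromDatabase(String key) {
        return "value-for-" + key;
    }

    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(10_000)
                // After 30s an entry becomes eligible for refresh: the next read
                // triggers a reload, and other readers keep getting the old value
                // in the meantime.
                .refreshAfterWrite(30, TimeUnit.SECONDS)
                // Hard upper bound on staleness, like the TTL discussed above.
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return loadFromDatabase(key);
                    }
                });

        System.out.println(cache.get("user:42"));
    }
}
```

By default the reload runs in the thread that triggered it; overriding CacheLoader.reload to return a future makes the refresh fully asynchronous, hiding the latency from that reader as well.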
Cassandra may not support the same access strategies as the classic database. Changing to Cassandra will affect your caching as well, e.g. if you also cache query results. However, the high-level concept stays the same. Your data access layer may change to an asynchronous or reactive pattern, since Cassandra has support for that.
If you want to do invalidation (solution 1) with Cassandra, you can get information from the database about which data has been updated, see CASSANDRA-8844. You may get similar information from "classical" SQL databases, but that is a vendor-specific feature.

Using Replicated Cache vs LB sticky session

I need to keep some data in cache on the server. The servers are in a cluster and a call can go to any of them. In such a scenario, is it better to use a replicated/distributed cache like EhCache, or to use session stickiness on the LB?
If the data size (in cache) is big, won't serialization and deserialization across all servers have a performance impact?
Also, in the case of a distributed cache, what's the optimal number of servers up to which such a cache is effective? Since data is replicated to all nodes, and say the number of nodes is 20, it's like master-to-master replication across all nodes. By that I mean, each node will get notifications from the other 19 and will push its modifications to the other 19. Does that type of setup scale?
As always in distributed systems, the answer depends on different things:
A load balancer with sticky sessions is for sure the simpler way for the developer, since it doesn’t make any difference if the application runs on 1, 2 or 100 servers. If this is all you care about, stick with it and you can stop reading right here.
I’m not sure how session-aware load balancers are implemented or what their general limit in terms of requests per second would be, but they have at least one big disadvantage over the distributed cache: what do you do if the machine handling the sessions is down? If you distribute your cache, any machine can serve the request and it doesn’t matter if one of them fails. The serialisation/deserialisation part is not a big problem; rather, the network could be the bottleneck if you don't run it in at least a 1 Gbit network environment, but it should be ok.
For a distributed cache you could go with Hazelcast, Infinispan or similar solutions, which would simplify the access from your own application. (Update: these implementations use a DHT to distribute the cache.)
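A minimal sketch of what that simplified access can look like (my illustration, assuming Hazelcast with its default configuration; the map name and values are made up):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class DistributedCacheSketch {
    public static void main(String[] args) {
        // Each JVM that runs this joins the same cluster (using the default
        // discovery mechanism); entries are partitioned across the members.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Looks like a plain Map to the application, but is distributed.
        Map<String, String> sessions = hz.getMap("sessions");
        sessions.put("session-123", "user-data");
        System.out.println(sessions.get("session-123"));

        hz.shutdown();
    }
}
```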
For a fully replicated cache you could use Ehcache, which you mentioned, or Infinispan. Here the advantage over the distributed cache is much faster access, since you have all the data replicated on every machine and only need to access it locally. The disadvantage is slower writes (so rather use it for read-very-often, write-very-seldom scenarios) and the fact that your cache is limited by the amount one machine is able to store. If you are running your applications on servers with 64GB of RAM this is ok. If you want to distribute them over small Amazon instances, this is probably a bad idea. I think before you hit any problems with updating too many nodes, you will run out of memory, and that one is at least very easy to calculate: AVG_CACHE_NEEDED_PER_CLIENT * NUMBER_OF_CLIENTS < MEMORY_FOR_CACHE_AVAILABLE (on one server). If you need more cache than you have available on any node in your Ehcache cluster, full replication won't be possible any more.
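That sizing rule is easy to turn into a back-of-the-envelope check; the numbers below are invented purely for illustration:

```java
public class ReplicatedCacheSizing {
    public static void main(String[] args) {
        // Hypothetical numbers for illustration only.
        long avgCacheNeededPerClientBytes = 50 * 1024;                  // ~50 KB per client
        long numberOfClients = 500_000;                                  // half a million clients
        long memoryForCacheAvailableBytes = 48L * 1024 * 1024 * 1024;   // 48 GB of heap per node

        long totalNeeded = avgCacheNeededPerClientBytes * numberOfClients; // ~24 GB

        // With full replication every node must hold the whole data set,
        // so the check is against a single node's memory, not the cluster total.
        boolean fitsOnOneNode = totalNeeded < memoryForCacheAvailableBytes;
        System.out.printf("need %,d bytes, have %,d bytes per node -> fits: %b%n",
                totalNeeded, memoryForCacheAvailableBytes, fitsOnOneNode);
    }
}
```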
Or you could use a Redis cluster or something similar, independent of your application and of the servers your application is running on. This would allow you to scale the cache at a different speed than the rest of your application; however, access to the data wouldn't be quite as trivial.
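For that option, a sketch with the Jedis client (my example; host, port and keys are placeholders) showing that each access is a remote call rather than a local map lookup:

```java
import redis.clients.jedis.Jedis;

public class RedisCacheSketch {
    public static void main(String[] args) {
        // Connection details are placeholders; a real setup would use a pool
        // (JedisPool) or a Redis Cluster client instead of a single connection.
        try (Jedis jedis = new Jedis("cache.example.internal", 6379)) {
            // Cache a value with a 60-second TTL.
            jedis.setex("session:123", 60, "user-data");
            String value = jedis.get("session:123");
            System.out.println(value);
        }
    }
}
```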
Of course the actual decision depends on your very specific use-case and the demands you are putting on your application.
Personally I was very happy when I found out today that Azure WebPages have a load balancer with sticky session support, and I don’t need to reconfigure my application to use Redis as a session object store, and can just keep everything as it is.
But for a huge workload with hundreds of servers, a simple load balancer will probably be rather overwhelmed, and a distributed cache or a centralized replicated cache (Redis) will be the way to go.

High-performance storage for messages

I have been looking for a high-performance file storage solution to be used for persisting SOAP messages in a Java EE environment.
We are currently using a CLOB table on an Oracle RDBMS, but it is very expensive to scale. While Oracle works well for storing the related metadata, it doesn't perform too well with the message content. An insert on a table with a CLOB gives roughly 1000% worse performance than one without it (this was measured by comparing the performance of a VARCHAR2(4000) insert to a CLOB insert with in-row storage disabled for the CLOB).
Persisting messages on the file system is one option, but I have some serious doubts about how an average file system would perform storing millions of files per day. Considering we have to keep those files for several months, it just doesn't sound right.
I know there are several open-source key-value databases (Jackrabbit and MongoDB, to name a few) that might be up for the task, but I just can't find time to evaluate them all. I would also like to hear about the performance of open-source RDBMSes.
Considering that volume of transmitted messages is ever increasing, priority is on low latency and high performance. We do not require clustering or transactionality and (minor) data loss on system failure is acceptable.
Requirements:
Must be able to maintain a rate of at least 100 persisted messages/sec when the message size is 8 kilobytes
Must be able to store at least 100 million messages
Must support deletion of persisted messages by age
Must support persisting while deletion is in progress
Must support retrieval of message by id
Help is appreciated
Here is a nice comparison between MongoDB and SQL Server (I believe Oracle will have similar performance). You can see from the charts that Mongo can handle 20,000 inserts per second. Mongo also has a query language based on JSON which can do almost everything regular SQL can, and it has sharded clusters and replica sets which can handle all necessary backups and failover (some basic info here).
Also, if you are interested in digging a little bit deeper, 10gen has an online course starting in two weeks that awards a certificate.
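To give a feel for the write path from Java, here is a minimal sketch of batched inserts using the MongoDB Java driver (my example, using the newer MongoClients API; the database, collection and field names are made up):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class MongoInsertSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> messages =
                    client.getDatabase("archive").getCollection("soapMessages");

            // Batch inserts amortize the round trip, which is how the quoted
            // insert rates are usually achieved.
            List<Document> batch = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                batch.add(new Document("messageId", i)
                        .append("createdAt", new Date())
                        .append("payload", "<soap:Envelope>...</soap:Envelope>"));
            }
            messages.insertMany(batch);
        }
    }
}
```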
You can try the following products:
HBase
MongoDB
Cassandra
Solr 4.0 (only)
These are the ones I have experience with. There are a lot of other good products on the market that can do what you want.
Some observations: none of them have this "delete by age" feature out of the box, as far as I know. But it should be really simple to implement. Easiest in MongoDB, I assume.
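For instance, a delete-by-age pass in MongoDB could look like the sketch below (my illustration; the collection and field names are hypothetical, and newer MongoDB versions can also do this automatically with a TTL index on a date field):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.result.DeleteResult;
import org.bson.Document;
import java.util.Date;
import java.util.concurrent.TimeUnit;

public class DeleteByAgeSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> messages =
                    client.getDatabase("archive").getCollection("soapMessages");

            // Delete everything older than 90 days, based on a createdAt field
            // (which should be indexed so this stays cheap while inserts continue).
            Date cutoff = new Date(System.currentTimeMillis() - TimeUnit.DAYS.toMillis(90));
            DeleteResult result = messages.deleteMany(Filters.lt("createdAt", cutoff));
            System.out.println("deleted " + result.getDeletedCount() + " messages");
        }
    }
}
```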
If you try Solr, you should stick with versions 4.x, as these are the only ones with support for near-real-time commits, and that will affect your "delete and insert" requirement.
All of them have great performance, but I did not run a benchmark against your requirements. If I were you, I would run my own benchmarks.
Oracle 11g introduced a data deduplication feature. This feature can improve the performance of the Oracle database with CLOBs.
This is what I've discovered so far. I will try to update this answer after evaluating each product.
I started my experiments using MongoDB, which on paper looked like a viable option. Here's a summary of my findings:
Written in C++
Replication (replicaset) requires 3 nodes for high availability
One of the nodes is elected as a master - only the master can write
Scaling out is done by sharding (partitioning)
Each shard is essentially a replicaset - therefore a sharded environment requires at least 6 nodes for high availability
mongod instance consumes all available memory - virtualization should be used for resource partitioning (if you intend to run application server on same hardware)
Master re-election may take up to 1 minute
Document collections (tables) use exclusive lock during write operation
Java API is exceptionally easy to use and includes a virtual filesystem called GridFS
Single-node write performance on the test system was ~20,000 inserts/sec for a 1-kilobyte document
Single-node read performance was ~20,000 reads/sec for a 1-kilobyte document
The fact that MongoDB would require 6 nodes on a two data center configuration, made me look further for more cost-efficient solutions.
Apache Cassandra:
Written in Java
Replication requires 3 nodes for high availability
Database survives network partitioning
Replication algorithm has been designed for multiple data centers
All nodes are writable
Scaling out can be done by adding more nodes (up to a certain limit)
Cassandra may require JVM garbage collection tuning
Java API is not the easiest to work with
Single-node write performance was ~7,000 inserts/sec for a 1-kilobyte document
Single-node read performance was ~7,000 reads/sec for a 1-kilobyte document
While Cassandra was slower in a single-node configuration, write performance in a high-availability configuration would match MongoDB's. The ability to perform writes on every node (even during network partitioning) is a very welcome addition for logging.
Couchbase:
Unfortunately I was unable to test Couchbase.
For now we'll keep using Oracle SecureFiles. Should we run out of resources on Oracle, both Cassandra and MongoDB seem like viable alternatives.

Postgresql Replication solutions and their performance

I am doing a POC on PostgreSQL replication. I am using the latest version of PostgreSQL, i.e. 9.1. There are multiple replication solutions available in the market (PGCluster, Pgpool-II, Slony-I). PostgreSQL also provides built-in replication solutions (streaming replication, warm standby and hot standby). I am confused about which solution is best for the financial application for which I am doing the POC. The application will write around 160 million records with a row size of 2.5 KB to the database. My question is: for the following scenarios, which replication solution will be suitable?
If I would require replication for backup purpose only
If I would require to scale the reads
If I would require High Availability and Consistency
Also, it would be very helpful if you could share performance figures or experience with PostgreSQL replication solutions.
The short answer is "whatever your problem is, there is a solution."
Let's look at just a few of the main ones.
Slony-I is a replication solution that allows you to scale reads across part or all of your database. It is designed so that you could take part of your database and replicate it into your DMZ for, say, customer reports. On the other hand, this flexibility begets complexity, and the fact that Slony lets you replicate only part of your database cuts both ways: the same feature is also a limitation. Slony's flexibility doesn't stop there: it allows you to replicate across different versions of Pgsql, which lets you keep zero downtime for read queries during major upgrades.
Postgres-XC is really the successor in spirit to PGCluster. It offers Teradata-style clustering for PostgreSQL. If you really need to scale reads and writes, this is the solution for you but again that adds complexity too.
The built-in replication solutions are the simplest; they allow you to scale for the purposes of taking backups and serving read-only queries (the standbys cannot take writes). They ensure high availability and consistency, but a major upgrade requires downtime on all nodes.
So the thing is you need to figure out exactly what you want and then look for help in selecting the right tool for the job. I would recommend asking on the pgsql-general email lists when you get to that point.

Persistence strategy for low latency reads and writes

I am building an application that includes a feature to bulk tag millions of records, more or less interactively. The user interaction is very similar to Gmail where users can tag individual emails, or bulk tag large amounts of emails. I also need quick read access to these tag memberships as well, and where the read pattern is more or less random.
Right now we're using MySQL and inserting one row for every tag-document pair. Writing millions of rows to MySQL takes a while (high I/O), even with bulk insertions and heavy optimization. We need this to be an interactive process, not a batch process.
For the data that we're storing and reading, consistency and availability of the data are not as important as performance and scalability. So in the event of system failure while the writes are occurring, I can deal with some data loss. However, the data definitely needs to be persisted to secondary storage at some point.
So, to sum up, here are the requirements:
Low latency bulk writes of potentially tens of millions of records
Data needs to be persisted in some way
Low latency random reads
Durable writes not required
Eventual consistency is okay
Here are some solutions I've looked at:
Write behind caches (Terracotta, Gigaspaces, Coherence) where records are written to memory and drained to the database asynchronously. These scare me a little because they appear to add a certain amount of complexity to the app that I'd want to avoid.
Highly scalable key-value stores, like MongoDB, HBase, Tokyo Tyrant
If you have the budget to use Coherence for this, I highly recommend doing so. There is direct support for write-behind, eventual consistency behavior in Coherence and it is very survivable to both a database outage and Coherence cluster node outages (if you use >= 3 Coherence nodes on separate JVMs, preferably on separate hosts). I have implemented this for doing high-volume CRM for a Fortune 100 company's e-commerce site and it works fantastically.
One of the best aspects of this architecture is that you write your Java application code as if none of the write-behind behavior were taking place, and then plug in the Coherence topology and configuration that makes it happen. If you need to change the behavior or topology of Coherence later, no change in your application is required. I know there are probably a handful of reasonable ways to do this, but this behavior is directly supported in Coherence rather than having to invent or hand-roll a way of doing it.
To make a really fine point: your worry about adding application complexity is a good one. With Coherence, you simply write updates to the cache (or if you're using Hibernate, it can be the L2 cache provider). Depending on your Coherence configuration and topology, you have the option to deploy your application to use write-behind, distributed caches. So your application is no more complex (and, frankly, unaware) because of the features of the cache.
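As a rough sketch of that point (my illustration, not Oracle's documented recipe; the cache name and keys are made up): the application only touches a NamedCache, and the write-behind CacheStore and topology live entirely in the cache configuration.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceWriteBehindSketch {
    public static void main(String[] args) {
        // "tags" would be mapped in the cache configuration XML to a
        // distributed scheme with a write-behind CacheStore; none of that
        // is visible here.
        NamedCache cache = CacheFactory.getCache("tags");

        // A plain put; the configured CacheStore drains it to the database
        // asynchronously after the write-behind delay.
        cache.put("doc-42:tag-urgent", Boolean.TRUE);
        System.out.println(cache.get("doc-42:tag-urgent"));

        CacheFactory.shutdown();
    }
}
```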
Finally, I implemented the solution mentioned above from 2005-2007 when Coherence was made by Tangosol and they had the best possible support. I'm not sure how things are now under Oracle - hopefully still good.
I've worked on a large project that used asynchronous writes, although in that case it was just hand-written using background threads. You could also implement something like that by offloading the db write process to a JMS queue.
One thing that will certainly speed up db writes is to do them in batches. JDBC batch updates can be orders of magnitude faster than individual writes, and if you're doing them asynchronously you can just write them 500 at a time.
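Here is a minimal sketch of that batching idea with plain JDBC (my example; the connection details, table and column names are made up), flushing every 500 rows as suggested above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Connection details and schema are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/tags", "app", "secret")) {
            conn.setAutoCommit(false);

            String sql = "INSERT INTO document_tags (document_id, tag_id) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < 10_000; i++) {
                    ps.setLong(1, i);          // document id
                    ps.setLong(2, 42L);        // tag id
                    ps.addBatch();
                    if (i % 500 == 499) {      // flush every 500 rows
                        ps.executeBatch();
                    }
                }
                ps.executeBatch();             // flush the remainder
            }
            conn.commit();
        }
    }
}
```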
Depending on how your data is organized, perhaps you could use sharding.
If the read latency isn't low enough, you can also try adding caching; Memcached is one popular solution.
Berkeley DB has a very high performance disk-based hash table that supports transactions, and integrates with a Java EE environment if you need that. If you're able to model the data as key/value pairs, this can be a very scalable solution.
http://www.oracle.com/technology/products/berkeley-db/je/index.html
(Note: Oracle bought Berkeley DB about 5-10 years ago; the original product has been around for 15-20 years.)
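If the tag memberships can be modeled as key/value pairs, a minimal Berkeley DB JE sketch could look like this (my illustration; the directory, database name and keys are hypothetical):

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;
import java.io.File;
import java.nio.charset.StandardCharsets;

public class BdbJeSketch {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        // Non-transactional environment: faster, acceptable when minor data
        // loss on failure is tolerable. The directory must already exist.
        Environment env = new Environment(new File("/var/data/tags-env"), envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        Database db = env.openDatabase(null, "tagMemberships", dbConfig);

        DatabaseEntry key = new DatabaseEntry("doc-42:tag-urgent".getBytes(StandardCharsets.UTF_8));
        DatabaseEntry value = new DatabaseEntry(new byte[] {1});
        db.put(null, key, value);

        DatabaseEntry found = new DatabaseEntry();
        OperationStatus status = db.get(null, key, found, LockMode.DEFAULT);
        System.out.println("read status: " + status);

        db.close();
        env.close();
    }
}
```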
