Scalable RDBMS

My name is Mukesh. In my previous assignments I worked on fairly large (or medium-large) scale websites, and now I am part of LeaseWeb’s cloud team as an innovation engineer. When I say large scale, I’m talking about a website serving 300 million web pages per day (all rendered within a second), a website storing about half a billion photos and videos, a website with an active user base of ~10 million, a web application running on 3,000 servers… and so on!

We all know it takes a lot to keep sites like these running, especially if the company has decided to run them on commodity hardware. Coming from this background, I’d like to dedicate my first blog post to the subject of scalable databases.

A friend of mine, a marketing manager by profession but inspired by technology, asked me why we are using MySQL when it supposedly does not scale (or is there some special Harry Potter# magic?). What he really wanted to know was: for what reasons did we choose MySQL, and are there any plans to move to another database?

Well, the answer to the latter is easy: “No, we’re not planning to move to another database.” The former question, however, can’t be answered in a single line.

#Talking of Harry Potter, what do you think about ‘The Deathly Hallows – Part II’?

Think about Facebook – a well-recognised social networking website. Facebook handles more than 25 billion page views per day, and even they use MySQL.

The bottleneck is not MySQL (or any common database). Generally speaking, every database product on the market has the following characteristics to some extent:

  1. PERSISTENCE:  Storage and (random) retrieval of data
  2. CONCURRENCY:  The ability to support multiple users simultaneously (lock granularity is often an issue here)
  3. DISTRIBUTION:  Maintenance of relationships across multiple databases (support of locality of reference, data replication)
  4. INTEGRITY:  Methods to ensure data is not lost or corrupted (features including automatic two-phase commit, use of dual log files, roll-forward recovery)
  5. SCALABILITY:  Predictable performance as the number of users or the size of the database increases

This post deals with scalability, a term we hear quite often when we talk about large systems and big data.

Data volume can be managed if you shard it. If you split the data across different servers at the application level, the scalability of MySQL is not such a big problem. Of course, you cannot JOIN data that lives on different servers, but choosing a non-relational database doesn’t help there either. There is no evidence that even Facebook uses Cassandra (its very own creation, back in early 2008) as primary storage, and it seems the only thing it is used for there is searching incoming messages.
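To make the idea of application-level sharding concrete, here is a minimal sketch in Python. The shard names and the choice of user id as shard key are assumptions for illustration, not anything from a real deployment:

```python
# Application-level sharding sketch (illustrative only): route each
# user's rows to one of several MySQL servers by hashing the shard
# key. Server names and the user_id key are made up.
import hashlib

SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]

def shard_for(user_id: int) -> str:
    """Pick a shard deterministically from the user id."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for one user goes to the same server, which keeps
# per-user queries simple -- but JOINs across shards are impossible,
# exactly the trade-off described above.
```

In practice the mapping is often a lookup table rather than a plain hash, so shards can be rebalanced without rehashing every key.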

In reality, distributed databases such as Cassandra, MongoDB and CouchDB (or any new database, for that matter) lack scalability and stability until they have gathered some real users; I keep seeing posts from users running into issues or annoyances. For example, the guys at Twitter spent about a year trying to move from MySQL to Cassandra (great to see they have a bunch of features working).
I’m not saying these databases aren’t good; they are getting better with time. My point is that any new database needs more time, and a few large production deployments, before it is mature enough to be considered over MySQL. Of course, if someone tells me how he used one of these databases as primary storage for a billion records over a year, I’ll change my opinion.

I believe it’s a bad idea to risk your main database on new technology. It would be a disaster to lose or corrupt the database, and you may not be able to restore everything. Besides, unless you are a developer of one of these newfangled databases, or one of the few who actually run them in production, you can only pray that the developers will fix bugs and scalability issues as they surface.

In fact, you can go very far on a single MySQL server without even partitioning data at the application level. It’s easy to scale a server up with a bunch of cores and tons of RAM, and don’t forget about replication. In addition, if the server sits behind a memcached layer (which scales trivially), the only thing your database has to care about is writes. For storing large objects, you can use S3 or any other distributed hash table. Until you are sure you need to scale the database as it grows, don’t shoulder the burden of making it an order of magnitude more scalable than you need.
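The “memcached in front of MySQL” setup above is the classic cache-aside pattern; a minimal sketch follows. A plain dict stands in for memcached, and `fetch_from_db` is a hypothetical query helper, so the sketch stays self-contained:

```python
# Cache-aside sketch: reads try the cache first, writes go to the
# database and invalidate the cached copy. The dict stands in for
# memcached; fetch_from_db is a made-up placeholder for a MySQL query.
cache = {}

def fetch_from_db(user_id):
    # Placeholder for a real MySQL SELECT.
    return {"id": user_id, "name": "user-%d" % user_id}

def get_user(user_id):
    key = "user:%d" % user_id
    if key in cache:              # cache hit: MySQL is never touched
        return cache[key]
    row = fetch_from_db(user_id)  # cache miss: read from the database
    cache[key] = row              # ...and populate the cache
    return row

def update_user(user_id, row):
    # A real implementation would UPDATE the database here, then
    # drop the stale cache entry so the next read repopulates it.
    cache.pop("user:%d" % user_id, None)
```

With reads absorbed by the cache layer, the database is indeed left handling mostly writes, which is the point made above.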

Most problems arise when you try to split the data over a large number of servers, but you can use an intermediate layer between the application and the database that is responsible for partitioning, as FriendFeed does, for example.

I believe that the relational model is the correct way to structure data in most applications: content that users create. Schemas keep data in a well-defined form as new versions of the service appear; they also serve as documentation and help avoid a heap of errors. SQL allows you to process just the data you need, instead of fetching tons of raw information that then still has to be reprocessed in the application. I think once the whole hype around NoSQL is over, someone will finally develop a relational database with free semantics.
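To illustrate the point about letting SQL do the filtering server-side, here is a small runnable sketch. SQLite stands in for MySQL, and the photo table and its columns are invented for the example:

```python
# Let the database filter and aggregate, instead of pulling raw rows
# into the application. SQLite stands in for MySQL; the schema is
# made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (user_id INTEGER, size_kb INTEGER)")
conn.executemany("INSERT INTO photos VALUES (?, ?)",
                 [(1, 200), (1, 350), (2, 120)])

# One query returns exactly the aggregate we need...
total = conn.execute(
    "SELECT SUM(size_kb) FROM photos WHERE user_id = ?", (1,)
).fetchone()[0]

# ...instead of fetching every row and summing in the application.
print(total)  # 550
```

The aggregate travels over the wire as a single value, not as tons of raw rows reprocessed in application code.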


  1. Use MySQL or another classic database for important, persistent data.
  2. Use caching delivery mechanisms, or maybe even NoSQL, for fast delivery.
  3. Wait until the dust settles and the next-generation, free-semantics relational database rises up.

2 thoughts on “Scalable RDBMS”

  1. Your entire essay does not address database scalability. In fact it says: be conservative, patch, use other solutions for scaling, and don’t over-design. That could in fact work, but it isn’t a solution for designing scalable databases. Your essay misses pointers to specific number-crunching databases and to the ability to deduplicate data in a database: normalisation, but also the technical implementation of column-store databases (as opposed to row storage). What I am also missing is the smart choice of placing specific tables on different disks.

    I find the suggestion to use memcached for everything that comes from the database plainly stupid. Your database knows best which parts of your data are most frequently accessed. If your database of choice doesn’t support elementary forms of caching: why are you suggesting this database at all? There is a place for caching data, especially volatile data and data that doesn’t have to be written back, to avoid cache invalidation. I do not consider it a typical solution to just cache everything that comes out of a query. How would you know the content hasn’t changed in the meantime? More coding, please not.

    But most of all, databases get slower over time. All disks, including SSDs, get slower over time. Fragmentation is an issue most databases and filesystems do not address, and it requires manual intervention such as the VACUUM statement. Why is this such a big issue? Deleting rows is the best example: you get empty space. Now empty spaces could obviously be filled up, but what if you use variable-length data, such as strings? These will not fill up the space quite so well. So all kinds of holes arise in the database after UPDATE and DELETE statements, and the database will just append updates at the end of its store. You can create the most beautiful index, but it is not going to address the seek time required to step over these holes on a full table scan.

    But now to the scaling. What does it take to scale a database? Splitting the data and merging it back, as suggested in the article, is a nice divide-and-conquer method. It also implies that auto-increments are properly configured. But what is the actual ‘target’? Is the target a single database that can have closed transactions, or is it primarily a lookup situation? Many master-slave configurations give better read-only performance than one ‘super fast master’, but the opposite is true as well. Many writes to multiple databases can easily be combined into a single read-only aggregate to scale up the number of parallel transactions.

    If I were Leaseweb and you had a good budget, I would suggest running your database system on different machines than the ones receiving your web requests, for multiple reasons: you have already abstracted database access to a remote system, you understand that security is an issue, and you understand that your operating system loves to schedule; the fewer processes are running, the more CPU time your database gets and the fewer context switches distract it.

    Choosing good database software also starts with checking whether your database properly uses a multicore system, not only for concurrency or future scalability, but also for scheduling background processes such as the vacuum mentioned earlier.
