Setting up keepalived on Ubuntu (load balancing using HAProxy on Ubuntu part 2)

In our previous post we set up an HAProxy load balancer to spread the load of our web application over three webservers. Here’s a diagram of the situation we ended up with:

              +---------+
              |  uplink |
              +---------+
                   |
                   +
                   |
              +---------+
              | loadb01 |
              +---------+
                   |
     +-------------+-------------+
     |             |             |
+---------+   +---------+   +---------+
|  web01  |   |  web02  |   |  web03  |
+---------+   +---------+   +---------+

As we already concluded in the last post, there is still a single point of failure in this setup: if the load balancer dies for some reason, the whole site will be offline. In this post we will add a second load balancer and set up a virtual IP address shared between the load balancers. The setup will look like this:

              +---------+
              |  uplink |
              +---------+
                   |
                   +
                   |
+---------+   +---------+   +---------+
| loadb01 |---|virtualIP|---| loadb02 |
+---------+   +---------+   +---------+
                   |
     +-------------+-------------+
     |             |             |
+---------+   +---------+   +---------+
|  web01  |   |  web02  |   |  web03  |
+---------+   +---------+   +---------+

So our setup now is:
– Three webservers, web01 (192.168.0.1), web02 (192.168.0.2), and web03 (192.168.0.3), each serving the application
– The first load balancer (loadb01, IP: 192.168.0.100)
– The second load balancer (loadb02, IP: 192.168.0.101); configure this one in the same way as the first

To set up the virtual IP address we will use keepalived (as also suggested by Warren in the comments):

loadb01$ sudo apt-get install keepalived

Good, keepalived is now installed. Before we proceed with configuring keepalived itself, edit the following file:

loadb01$ sudo vi /etc/sysctl.conf

And add this line to the end of the file:

net.ipv4.ip_nonlocal_bind=1

This option allows applications (HAProxy in this case) to bind to non-local addresses, i.e. IP addresses that do not belong to an interface on the machine. To apply the setting, run the following command:

loadb01$ sudo sysctl -p
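
For example, once non-local binding is enabled, the HAProxy configuration from part 1 can bind to the virtual IP address we will configure below. Here is a minimal, illustrative fragment (the frontend and backend names are assumptions; your configuration from part 1 may use different ones):

# Fragment of /etc/haproxy/haproxy.cfg (illustrative)
frontend www
    bind 192.168.0.200:80        # the shared virtual IP; without ip_nonlocal_bind,
                                 # binding would fail on the machine that does not hold it
    default_backend webservers

Both load balancers need the sysctl setting, since only one of them holds the virtual IP at any given time.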

Now let’s add the configuration for keepalived itself. Open the file:

loadb01$ sudo vi /etc/keepalived/keepalived.conf

And add the following contents (see the comments for details on the configuration!):

# Settings for notifications
global_defs {
    notification_email {
        your@emailaddress.com			# Email address for notifications 
    }
    notification_email_from loadb01@domain.ext  # The from address for the notifications
    smtp_server 127.0.0.1			# You can specify your own SMTP server here
    smtp_connect_timeout 15
}
 
# Define the script used to check if HAProxy is still running
vrrp_script chk_haproxy {
    script "killall -0 haproxy"		# exits 0 if an haproxy process exists; no signal is actually sent
    interval 2				# run the check every 2 seconds
    weight 2				# while the check succeeds, add 2 to this node's priority
}
 
# Configuration for the virtual interface
vrrp_instance VI_1 {
    interface eth0
    state MASTER 				# set this to BACKUP on the other machine
    priority 101				# set this to 100 on the other machine
    virtual_router_id 51
 
    smtp_alert					# Activate email notifications
 
    authentication {
        auth_type AH
        auth_pass myPassw0rd			# Set this to some secret phrase
    }
 
    # The virtual ip address shared between the two loadbalancers
    virtual_ipaddress {
        192.168.0.200
    }
    
    # Use the script above to check if we should fail over
    track_script {
        chk_haproxy
    }
}

And start keepalived:

loadb01$ sudo /etc/init.d/keepalived start
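
If all went well, the virtual IP address should now be assigned to eth0 on loadb01. You can verify this with:

loadb01$ ip addr show eth0

The output should list 192.168.0.200 next to the machine’s own address.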

The next step is to install and configure keepalived on our second load balancer as well: redo the steps starting from apt-get install keepalived. In the configuration step for keepalived, be sure to change these two settings:

    state MASTER 				# set this to BACKUP on the other machine
    priority 101				# set this to 100 on the other machine

To:

    state BACKUP 			
    priority 100			

That’s it! We have now configured a virtual IP address shared between our two load balancers. Try loading the HAProxy statistics page on the virtual IP address: you should get the statistics for loadb01. Then switch off loadb01 and refresh: the virtual IP address will now be assigned to the second load balancer, and you should see the statistics page for that one instead.
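
One way to test the failover by hand (a sketch: stopping HAProxy on loadb01 makes the chk_haproxy check fail, so the virtual IP should move to loadb02 within a few seconds):

loadb01$ sudo /etc/init.d/haproxy stop
loadb02$ ip addr show eth0			# 192.168.0.200 should now appear here
loadb01$ sudo /etc/init.d/haproxy start
loadb02$ ip addr show eth0			# ...and disappear again shortly after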

In an upcoming post we will focus on adding MySQL to this setup, as requested by Miquel in the comments on the previous post in this series. If there’s anything else you’d like us to cover, or if you have any questions, please leave a comment!


Hello, world!

Welcome to LeaseWeb Labs!

This might not be the first post on here, but it still seems like a good idea to have this welcome post.
The LeaseWeb Labs blog is an initiative from LeaseWeb’s IT department to create a place where technically focused people can post about technical subjects. As (research, product and maintenance) departments, we get confronted with all the dark and murky corners of development and engineering. It seems a shame to keep all that knowledge and learning to ourselves, so we decided to start sharing it here.

Things we’ll be posting about will vary; we have engineers working on high-traffic delivery systems, developers building front-ends for our various systems, and teams working on a pretty interesting new cloud platform – so you can expect information, how-tos and insights about almost any (IT-related) subject. As we work in the hosting industry, there is definitely a bias towards hosting-related information 🙂

Some of these posts are aimed at basic system administration or development, and some will be more about complex scalability and high-performance systems. We try to have something for everyone, but do let us know if there is something specific you want to hear about – you can contact us via our Twitter account, on Facebook, or through the comment system below.

To get you started, we’ve already placed a few articles that might interest you:

High availability load balancing using HAProxy on Ubuntu: Sander gives you a practical how-to on implementing a basic high-availability setup – part #1 in a series on high availability.

Scalable RDBMS: Mukesh is our scalability guru, and used to work on extremely high-traffic (web) systems. He’s written the first post in a series on scalable databases.

Tuning Zend framework and Doctrine: Alexander works mainly in PHP, and gives you some pointers on starting a customized Zend/Doctrine project to make it an even better combination.

We hope this gives you a general feeling of what we’re planning to do – and we will be posting more. Any requests or comments are welcome!


High availability load balancing using HAProxy on Ubuntu (part 1)

In this post we will show you how to easily set up load balancing for your web application. Imagine you currently have your application on one webserver, called web01:

+---------+
| uplink  |
+---------+
     |
+---------+
|  web01  |
+---------+

But traffic has grown, and you’d like to increase your site’s capacity by adding more webservers (web02 and web03), as well as eliminate the single point of failure in your current setup (if web01 has an outage, the site will be offline).

              +---------+
              | uplink  |
              +---------+
                   |
     +-------------+-------------+
     |             |             |
+---------+   +---------+   +---------+
|  web01  |   |  web02  |   |  web03  |
+---------+   +---------+   +---------+

In order to spread traffic evenly over your three webservers, we could install an extra server to proxy all the traffic and balance it over the webservers. In this post we will use HAProxy, an open-source TCP/HTTP load balancer (see http://haproxy.1wt.eu/), to do that:

              +---------+
              |  uplink |
              +---------+
                   |
                   +
                   |
              +---------+
              | loadb01 |
              +---------+
                   |
     +-------------+-------------+
     |             |             |
+---------+   +---------+   +---------+
|  web01  |   |  web02  |   |  web03  |
+---------+   +---------+   +---------+

So our setup now is:
– Three webservers, web01 (192.168.0.1), web02 (192.168.0.2), and web03 (192.168.0.3), each serving the application
– A new server (loadb01, IP: 192.168.0.100) with Ubuntu installed

Alright, now let’s get to work:

Start by installing haproxy on your load balancing machine:

loadb01$ sudo apt-get install haproxy
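
To check that the installation succeeded, you can ask HAProxy for its version (the exact output depends on your Ubuntu release):

loadb01$ haproxy -v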



Scalable RDBMS

My name is Mukesh. In my previous assignments I worked on fairly large-scale (or medium-large) websites, and I’m now an innovation engineer in LeaseWeb’s cloud team. When I say large scale, I’m talking about a website serving 300 million webpages per day (all rendered within a second), a website storing about half a billion photos and videos, a website with an active user base of ~10 million, a web application with 3000 servers… and so on!

We all know it takes a lot to keep sites like these running, especially if the company has decided to run them on commodity hardware. Coming from this background, I’d like to dedicate my first blog post to the subject of scalable databases.

A friend of mine, a marketing manager by profession but inspired by technology, asked me why we are using MySQL when it supposedly does not scale (or is there some special Harry Potter# magic involved?). What he wanted to know was: for what reasons did we choose MySQL, and are there any plans to move to another database?

Well, the answer to the latter is easy: “No, we’re not planning to move to another database.” The former question, however, can’t be answered in a single line.

# Talking of Harry Potter, what do you think about ‘The Deathly Hallows – Part II’?

Think about Facebook – a well-recognised social networking website. Facebook handles more than 25 billion page views per day, and even they use MySQL.

The bottleneck is not MySQL (or any common database). Generally speaking, every database product on the market has the following characteristics to some extent:

  1. PERSISTENCE:  Storage and (random) retrieval of data
  2. CONCURRENCY:  The ability to support multiple users simultaneously (lock granularity is often an issue here)
  3. DISTRIBUTION:  Maintenance of relationships across multiple databases (support of locality of reference, data replication)
  4. INTEGRITY:  Methods to ensure data is not lost or corrupted (features including automatic two-phase commit, use of dual log files, roll-forward recovery)
  5. SCALABILITY:  Predictable performance as the number of users or the size of the database increase

This post deals with scalability, a term we hear quite often when we talk about large systems and big data.

Data volume can be managed if you shard it. If you partition the data over different servers at the application level, the scalability of MySQL is not such a big problem. Of course, you cannot JOIN data that lives on different servers, but choosing a non-relational database doesn’t help with that either. There is no evidence that even Facebook uses Cassandra (its very own creation, from early 2008) as primary storage, and it seems that the only thing it is needed for there is searching incoming messages.
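
To make application-level sharding concrete, here is a minimal sketch in Python (the host names and the choice of the user ID as shard key are made up for illustration):

import hashlib

# One entry per database server; in a real application these would be
# connection settings rather than bare host names.
SHARD_HOSTS = ["db01.example.com", "db02.example.com", "db03.example.com"]

def shard_for(user_id):
    """Deterministically map a user id to one database host."""
    digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
    return SHARD_HOSTS[int(digest, 16) % len(SHARD_HOSTS)]

# All data for user 42 lives on one host, so single-user queries stay simple;
# a JOIN across users on different hosts is exactly what this scheme gives up.
print(shard_for(42))

Note that hashing modulo the number of servers makes adding a server painful, since most keys suddenly map to a different shard; consistent hashing or a lookup directory is the usual way around that.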

In reality, distributed databases such as Cassandra, MongoDB and CouchDB (or any new database, for that matter) lack scalability and stability until they have had some real users (I keep seeing posts from users running into issues or annoyances). For example, the people at Twitter spent about a year trying to move from MySQL to Cassandra (it’s great to see that they have a bunch of features working).
I’m not saying these databases aren’t good; they are getting better with time. My point is that any new database needs more time, and a few large deployments to its name, before it is mature enough to be considered over MySQL. Of course, if someone tells me how they used one of these databases as primary storage for a billion cases over a year, then I’ll change my opinion.

I believe it’s a bad idea to bet your main database on new technology. It would be a disaster to lose or damage the database, and you might not be able to restore everything. Besides, unless you are a developer of one of these newfangled databases, or one of the few who actually run them in production, you can only pray that the developers will fix bugs and scalability issues as they surface.

In fact, you can go very far on a single MySQL server without even caring about partitioning data at the application level. It’s easy to scale a server up with a bunch of cores and tons of RAM, and don’t forget about replication either. In addition, if the server sits behind a memcached layer (which scales easily), the only thing your database has to care about is writes. For storing large objects, you can use S3 or any other distributed hash table. Until you are sure that you need to scale the database as it grows, don’t shoulder the burden of making it an order of magnitude more scalable than you need.
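
As a sketch of that read path (assuming the python-memcached client and a memcached instance on localhost; the MySQL query is replaced by a stand-in function to keep the example self-contained):

import memcache  # the python-memcached client

mc = memcache.Client(["127.0.0.1:11211"])

def fetch_user_from_mysql(user_id):
    # Stand-in for the real MySQL query.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = "user:%d" % user_id
    user = mc.get(key)                   # 1. try the cache first
    if user is None:                     # 2. on a miss, fall back to the database...
        user = fetch_user_from_mysql(user_id)
        mc.set(key, user, time=300)      # 3. ...and cache the result for 5 minutes
    return user

With reads served from memcached like this, the database mostly sees the writes (plus the occasional cache miss).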

Most problems arise when you try to split the data over a large number of servers, but you can use an intermediate layer between the application and the database that is responsible for partitioning – as FriendFeed does, for example.
I believe that the relational model is the correct way of structuring data in most applications – content that users create. Schemas keep the data in a well-defined form across new versions of the service; they also serve as documentation and help avoid a heap of errors. SQL allows you to process exactly the data you need on the database side, instead of fetching tons of raw information that then still has to be reprocessed in the application. I think that once the whole hype around NoSQL is over, someone will finally develop a relational database with free semantics.
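
As a tiny illustration of that last point about SQL, the aggregation below happens inside the database, so only the summary rows cross the wire (Python’s built-in sqlite3 stands in for MySQL here; the table and data are made up):

import sqlite3

db = sqlite3.connect(":memory:")   # stand-in for a real MySQL connection
db.execute("CREATE TABLE users (id INTEGER, country TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [(1, "NL"), (2, "NL"), (3, "DE")])

# The database returns aggregated rows instead of every raw user row,
# so the application never has to re-count the data itself.
for country, count in db.execute(
        "SELECT country, COUNT(*) FROM users GROUP BY country"):
    print(country, count)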

Conclusion!

  1. Use MySQL or another classic database for important, persistent data.
  2. Use caching delivery mechanisms – or maybe even NoSQL – for fast delivery.
  3. Wait until the dust settles, and the next-generation, free-semantics relational database rises up.