Distributed storage for HA clusters (based on GlusterFS)

Nowadays everyone is searching for HA (High Availability) solutions to run more powerful applications or websites. As you may have seen, we already have a post on our blog about configuring load balancers on Ubuntu (http://www.leaseweb.com/labs/2011/09/setting-up-keepalived-on-ubuntu-load-balancing-using-haproxy-on-ubuntu-part-2/). In this post I will expand on that with the possibility to run a highly available and balanced system which you can use for your shared hosting or high-traffic website without having a single point of failure.

The actual problem of balanced / clustered solutions is often the content server where you keep all your data: databases, static files, uploaded files. All this content needs to be distributed across all your servers, and you would have to keep track of modifications yourself, which is not really convenient. Otherwise you end up with a single point of failure in your balanced solution. You can use rsync or lsyncd for this, but to make things simpler from an administrative perspective and to get more benefits in the future, you can use a DFS (distributed file system). Nowadays there are plenty of suitable open source options, like OCFS, GlusterFS, MogileFS, PVFS2, ChironFS, XtreemFS, etc.

Why would we want to use a DFS?

  • Data sharing among multiple users
  • User mobility
  • Location transparency
  • Location independence
  • Backups and centralized management

For this post, I chose GlusterFS to start with. Why? It is open source, it has a modular, pluggable interface, and you can run it on any Linux-based server without upgrading your kernel. Maybe I will write an overview of other distributed file systems in one of my next blog posts.

We will use 3 dedicated servers such as the HP120G6:

1. HP120G6 / 1 x QC X3440 CPU / 2 x 1Gbit NICs / 4GB RAM / 2 x 1TB SATA2 (disks can be added later on the fly :))

Let’s assume that we already have a dedicated server with CentOS 6 installed on the first 1TB hard drive.

Just SSH to your host, prepare the HDDs, and install the GlusterFS server packages.

#ssh root@85.17.xxx.xx

Now we need to prepare the HDDs; we will use the second 1TB drive for the distributed storage:

#parted /dev/sdb
(parted) mklabel gpt
(parted) unit TB
(parted) mkpart primary 0 1TB
(parted) print
(parted) quit
#mkfs.ext4 /dev/sdb1
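
Before continuing, you may want to double-check that the partition and filesystem were created as expected. A quick, optional sanity check could look like this:

#parted /dev/sdb print
#blkid /dev/sdb1

The blkid output should show TYPE="ext4" for the new partition.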

After that we will install all required packages for gluster:

#yum -y install wget fuse fuse-libs automake bison gcc flex libtool
#yum -y install compat-readline5 compat-libtermcap
#wget http://packages.sw.be/rsync/rsync-3.0.7-1.el5.rfx.x86_64.rpm

Let’s download the gluster packages from their website:

#wget http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-core-3.2.4-1.x86_64.rpm
#wget http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-fuse-3.2.4-1.x86_64.rpm
#wget http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-geo-replication-3.2.4-1.x86_64.rpm
#wget http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-rdma-3.2.4-1.x86_64.rpm
#rpm -Uvh glusterfs-core-3.2.4-1.x86_64.rpm
#rpm -Uvh glusterfs-geo-replication-3.2.4-1.x86_64.rpm

We also want to run our storage traffic over a separate network card, so let’s configure eth1 for that using an internal IP range:

#nano /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
HWADDR="XX:XX:XX:XX:XX:XX"
NM_CONTROLLED="yes"
BOOTPROTO="static"
IPADDR="10.0.10.X"
NETMASK="255.255.0.0"
ONBOOT="yes"
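
After saving the file, the interface still has to be brought up. Assuming the standard CentOS 6 network scripts, something like this should do it:

#ifup eth1
#ifconfig eth1

The ifconfig output should show the 10.0.10.X address you configured.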

We also want to use hostnames while configuring gluster:

#nano /etc/hosts
10.0.10.1       gluster1-server
10.0.10.2       gluster2-server
10.0.10.3       gluster3-server
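
It does not hurt to verify that the hostnames resolve and that the servers can reach each other over the storage network, for example:

#ping -c 2 gluster2-server
#ping -c 2 gluster3-server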

Ensure that TCP ports 111, 24007, 24008, and 24009 through (24009 + the number of bricks across all volumes) are open on all Gluster servers.

You can use the following chains with iptables:

#iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24047 -j ACCEPT
#iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
#iptables -A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
#iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38467 -j ACCEPT
#service iptables save
#service iptables restart
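
If you want to verify that the rules are in place, you can list the chain (assuming the RH-Firewall-1-INPUT chain used above exists in your setup):

#iptables -L RH-Firewall-1-INPUT -n

The Gluster ports should now show up as ACCEPT rules.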

Now check the version of the installed GlusterFS:

#/usr/sbin/glusterfs -V

To configure Red Hat-based systems to automatically start the glusterd daemon every time the system boots, enter the following from the command line:

#chkconfig glusterd on
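
chkconfig only takes care of the next boot, so it makes sense to start the daemon right away as well and check that it is running:

#service glusterd start
#service glusterd status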

GlusterFS offers a single command line utility known as the Gluster Console Manager to simplify configuration and management of your storage environment. The Gluster Console Manager provides functionality similar to the LVM (Logical Volume Manager) CLI or ZFS Command Line Interface, but across multiple storage servers. You can use the Gluster Console Manager online, while volumes are mounted and active.

You can run the Gluster Console Manager on any Gluster storage server. You can run Gluster commands either by invoking the commands directly from the shell, or by running the Gluster CLI in interactive mode.

To run commands directly from the shell, for example:

#gluster peer status

To run the Gluster Console Manager in interactive mode:

#gluster

Upon invoking the Console Manager, you will get an interactive shell where you can execute gluster commands, for example:

gluster> peer status

Before configuring a GlusterFS volume, you need to create a trusted storage pool consisting of the storage servers that will make up the volume. A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone. To add additional storage servers to the storage pool, you can use the probe command from a storage server.

To add servers to the trusted storage pool, run the following command from the first server for each additional server you have, for example:

gluster peer probe gluster2-server
gluster peer probe gluster3-server

Verify the peer status from the first server using the following command:

# gluster peer status
Number of Peers: 2

Hostname: gluster2-server
Uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
State: Peer in Cluster (Connected)

Hostname: gluster3-server
Uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
State: Peer in Cluster (Connected)

This way, you can add additional storage servers to your storage pool on the fly. To remove a server from the storage pool, use the following command:

# gluster peer detach gluster3-server
Detach successful

Now we can create a replicated volume.
A volume is a logical collection of bricks where each brick is an export directory on a server in the trusted storage pool. Most of the gluster management operations happen on the volume.
Replicated volumes replicate files throughout the bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

First, we SSH to each server to create the brick folder and mount the second HDD:

On gluster1-server:
#mkdir /storage1
#mount /dev/sdb1 /storage1

On gluster2-server:
#mkdir /storage2
#mount /dev/sdb1 /storage2

On gluster3-server:
#mkdir /storage3
#mount /dev/sdb1 /storage3
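
If you want the brick filesystem to come back automatically after a reboot, you can also add it to /etc/fstab on each server (adjust the mount point per server; the example below is for gluster1-server):

#nano /etc/fstab
/dev/sdb1    /storage1    ext4    defaults    0 0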

Create the replicated volume using the following command:

# gluster volume create test-volume replica 3 transport tcp gluster1-server:/storage1 gluster2-server:/storage2 gluster3-server:/storage3
Creation of test-volume has been successful
Please start the volume to access data.

You can optionally display the volume information using the following command:

# gluster volume info
Volume Name: test-volume
Type: Replicate
Status: Created
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster1-server:/storage1
Brick2: gluster2-server:/storage2
Brick3: gluster3-server:/storage3

You must start your volumes before you try to mount them. Start the volume using the following command:

# gluster volume start test-volume
Starting test-volume has been successful
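
To confirm that the volume is actually running, you can request the volume information again; the status should now read Started instead of Created:

# gluster volume info test-volume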

Now we need to set up a GlusterFS client to access our volume. Gluster offers multiple options to access gluster volumes:

  • Gluster Native Client – This method provides high concurrency, performance and transparent failover in GNU/Linux clients. The Gluster Native Client is POSIX conformant. For accessing volumes using gluster native protocol, you need to install gluster native client.
  • NFS – This method provides access to gluster volumes with NFS v3 or v4.
  • CIFS – This method provides access to volumes when using Microsoft Windows as well as SAMBA clients. For this access method, Samba packages need to be present on the client side.

We will use the Gluster Native Client, as it is a POSIX-conformant, FUSE-based client running in user space. The Gluster Native Client is recommended for accessing volumes when high concurrency and high write performance are required.

Verify that the FUSE module is installed:

# modprobe fuse
# dmesg | grep -i fuse
fuse init (API version 7.XX)
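
If the module was not loaded yet, the modprobe above takes care of it for the current session. To be sure it is also loaded after a reboot, one simple (if somewhat crude) approach is to add the modprobe call to rc.local:

#echo "modprobe fuse" >> /etc/rc.local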

Install required prerequisites on the client using the following command:

# sudo yum -y install openssh-server wget fuse fuse-libs openib libibverbs

Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 24009. For example: if you have five bricks, you need to have ports 24009 to 24014 open.

You can use the following chains with iptables:

# sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
# sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT

Install the Gluster Native Client (FUSE component) using the following commands:

#rpm -Uvh glusterfs-fuse-3.2.4-1.x86_64.rpm
#rpm -Uvh glusterfs-rdma-3.2.4-1.x86_64.rpm

After that, we need to mount the volume we created before.
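
The mount point has to exist on the client before mounting. Assuming the same /mnt/glusterfs path used in the examples below, create it first:

#mkdir -p /mnt/glusterfs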

To manually mount a Gluster volume, use the following command on each server we have:

# mount -t glusterfs gluster1-server:/test-volume /mnt/glusterfs

You can configure your system to automatically mount the Gluster volume each time your system starts.
Edit the /etc/fstab file and add the following line on each server we have:

gluster1-server:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0
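
You can test the fstab entry without rebooting; for example, unmount the volume if it was mounted manually before and let mount read it back from fstab:

#umount /mnt/glusterfs
#mount -a
#mount | grep glusterfs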

To test the mounted volume, we can simply use the following command:

# df -h /mnt/glusterfs
Filesystem                    Size  Used Avail Use% Mounted on
gluster1-server:/test-volume  1.0T  1.3G  1.0T   1% /mnt/glusterfs
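
To quickly verify that replication works, you could create a file through the mount point on one server and check that it appears in the brick directory on every server (the /storageX paths we used above), for example:

#touch /mnt/glusterfs/replication-test.txt
#ls -l /storage1

Run the ls on each server (against /storage1, /storage2 and /storage3 respectively); the file should be present on all three bricks.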

Now you can use the /mnt/glusterfs mount as HA storage for your files, backups or even Xen virtualization, and you don’t need to worry when one server goes down because of a hardware problem or planned maintenance.

Next time, I will write about how to integrate Gluster with NFS without having a single point of failure, so you can use it with any solution that supports the NFS protocol.

Any suggestions and comments are appreciated. Thank you!


Tuning Zend Framework and Doctrine

In principle, the combination of Zend Framework with Doctrine is not too difficult. But first let’s talk about the preparations. According to the author of Zend Framework, the default file structure of a project can be made a bit more optimal.

Here is the default structure of the Zend Framework project files:


/
  application/
    default/
      controllers/
      layouts/
      models/
      views/
  html/
  library/

It can often happen that you have a number of applications (e.g., frontend and backend), and you want to use the same models for them. In that case, it is good practice to create your models folder in library/, in which case the new structure would look as follows:

/
  application/
    default/
      controllers/
      layouts/
      views/
  html/
  library/
    Model/

In addition, the folder models/ is renamed to Model/. We now proceed as follows:

  1. Download a fresh copy of Doctrine-xxx-Sandbox.tgz from the official website.
  2. Copy the contents of the lib/ folder from the archive to our project’s library/ folder.
  3. Create another folder bin/sandbox/ in the root of our project and copy the rest of the archive there (except the models/ folder and the index.php file).

Now the structure of our project should look like this:

/
  application/
    default/
      controllers/
      layouts/
      views/
  bin/
    sandbox/
      data/
      lib/
      migrations/
      schema/
      config.php
      doctrine
      doctrine.php
  html/
  library/
    Doctrine/
    Model/
    Doctrine.php

Clear the contents of the folder bin/sandbox/lib/, since we now have the library in another place.
Now it’s time to configure Doctrine to work with the new file structure.

Change the value of the constant MODELS_PATH in the file bin/sandbox/config.php:

SANDBOX_PATH . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . 'library' . DIRECTORY_SEPARATOR . 'Model'


Next, change the connection settings for the database. Change the value of the DSN constant to reflect your database settings. For example, if you use MySQL as your DBMS, the DSN might look like this:

'mysql://root@localhost/mydbname'


Configure the include paths on the first line of the config file, so our scripts can find the files in their new locations:

set_include_path(
    '.' . PATH_SEPARATOR .
    '..' . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . 'library' . DIRECTORY_SEPARATOR . PATH_SEPARATOR .
    '.' . DIRECTORY_SEPARATOR . 'lib' . PATH_SEPARATOR .
    get_include_path()
);

Then include the main Doctrine library file directly after setting the include paths, and register the autoload function:

require_once 'Doctrine.php';

/**
 * Setup autoload function
 */
spl_autoload_register( array(
    'Doctrine',
    'autoload'
));

