How to create a highly available web hosting platform using Floating IPs

In this Leaseweb Labs post, we’re going step-by-step through building a proof of concept of a (very basic) highly available web hosting platform. Using Floating IPs and keepalived, we’ll create an active/standby setup on two different dedicated servers, with automatic failover through the Leaseweb API, so that your application stays up even when a server fails. We’ll use 2 dedicated servers and 1 Floating IP address from Leaseweb to make this happen.

What are Floating IPs?

Floating IPs are a kind of virtual IP address that can be dynamically routed to any server in the same network. Some hosting providers also call these Elastic IPs or Virtual IPs.

Multiple servers can own the same Floating IP address, but it can only be active on one server at any given time.

Floating IPs can be used to implement features such as:

  • Failover in a high-availability cluster
  • Zero-downtime Continuous Deployment

Using Floating IPs

Using Floating IPs is quite simple: with Leaseweb, you can order them through the Customer Portal and set them up on your server as an additional IP address. But the real power lies in automation. Using the Leaseweb API, any script or even some 3rd-party software can control Floating IPs automatically.

When paired with free software such as keepalived, which can detect when a server is down and take action accordingly, it becomes possible to create a fully automated highly available platform for any application.

Step one: Set up the servers and Floating IPs

First, let’s set up the two servers, each with a simple HTTP web server, and use a Floating IP address to access the website on either server.

  • Server A (Leaseweb Server Id 20483) has IP address 212.32.230.75 and is pre-installed with CentOS 7
  • Server B (Leaseweb Server Id 37089) has IP address 212.32.230.66 and is pre-installed with Ubuntu 18.04
  • 89.149.192.0 is the Floating IP address

Setting up the Floating IP address in the Customer Portal

If you don’t have a Floating IP yet, go to the Floating IPs page in the Leaseweb Customer Portal and click the button to order Floating IPs. Once delivered, you will see an entry like this:

Click on the range to open its detail page:

Here it is possible to set up a relationship between a Floating IP and an Anchor IP. Leaseweb calls this a “Floating IP Definition”, and one can be created right from this page.

Let’s create a new definition to link Floating IP 89.149.192.0 to the Anchor IP 212.32.230.75 of server A:

Once saved, there will be one Floating IP Definition visible:

Setting up the Floating IP address and a demonstration webpage on the servers

On a server, a Floating IP can be set up like any other additional IP address. A gateway address is not necessary, and the subnet mask is always 255.255.255.255, or /32 in CIDR notation.

To add an additional IP address to an interface in Linux without making the change persistent, we can simply use the ip -4 address show command to see which device the main IP address is configured on, and then run ip address add <Floating IP address>/32 dev <Device> to add the Floating IP to the same device.
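
Note that an address added this way does not survive a reboot; to make it persistent, also add it to the distribution’s network configuration. On CentOS 7, for example, this could be an alias interface file (a sketch only, using the device and Floating IP from this setup):

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eno1:0
DEVICE=eno1:0
IPADDR=89.149.192.0
PREFIX=32
ONBOOT=yes
EOF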

We also install an HTTP server and create a simple demonstration webpage:

# Check which device we need to add the IP address to
ip -4 address show
ip address add 89.149.192.0/32 dev eno1

# The Floating IP address should now be visible on the device
ip -4 address show

# Install a web server and create a basic default webpage
yum install -y httpd
systemctl start httpd
cat <<EOF > /var/www/html/index.html
<!DOCTYPE html>
<html>
<head><title>This is test server A</title></head>
<body><h1>This is test server A</h1></body>
</html>
EOF

Result:

tim@laptop:~$ ssh root@20483.lsw
[root@servera ~]# ip -4 address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 212.32.230.75/26 brd 212.32.230.127 scope global eno1
       valid_lft forever preferred_lft forever

[root@servera ~]# ip address add 89.149.192.0/32 dev eno1

[root@servera ~]# ip -4 address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 212.32.230.75/26 brd 212.32.230.127 scope global eno1
       valid_lft forever preferred_lft forever
    inet 89.149.192.0/32 scope global eno1
       valid_lft forever preferred_lft forever

[root@servera ~]# yum install -y httpd

[...]

[root@servera ~]# systemctl start httpd

[root@servera ~]# cat <<EOF > /var/www/html/index.html
> <!DOCTYPE html>
> <html>
> <head><title>This is test server A</title></head>
> <body><h1>This is test server A</h1></body>
> </html>
> EOF

[root@servera ~]#

(note: ssh root@20483.lsw is a neat little trick explained here: https://gist.github.com/timwb/1f95737d54563aedd7c97d5e671667cc)

You should now already be able to ping the Floating IP address, and opening it in a browser loads the demo webpage:

Next, add the same Floating IP address to server B, install an HTTP server and create a simple demo webpage:

# Check which device we need to add the IP address to
ip -4 address show
ip address add 89.149.192.0/32 dev enp32s0

# The Floating IP address should now be visible on the device
ip -4 address show

# Install a web server and create a basic default webpage
apt install -y nginx
cat <<EOF > /var/www/html/index.html
<!DOCTYPE html>
<html>
<head><title>This is test server B</title></head>
<body><h1>This is test server B</h1></body>
</html>
EOF

Result:

tim@laptop:~$ ssh root@37089.lsw
root@serverb:~# ip -4 address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp32s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 212.32.230.66/26 brd 212.32.230.127 scope global enp32s0
       valid_lft forever preferred_lft forever
3: enp34s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 10.32.18.208/27 brd 10.32.18.223 scope global enp34s0
       valid_lft forever preferred_lft forever

root@serverb:~# ip address add 89.149.192.0/32 dev enp32s0

root@serverb:~# ip -4 address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp32s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 212.32.230.66/26 brd 212.32.230.127 scope global enp32s0
       valid_lft forever preferred_lft forever
    inet 89.149.192.0/32 scope global enp32s0
       valid_lft forever preferred_lft forever
3: enp34s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 10.32.18.208/27 brd 10.32.18.223 scope global enp34s0
       valid_lft forever preferred_lft forever

root@serverb:~# apt install -y nginx

[...]

root@serverb:~# cat <<EOF > /var/www/html/index.html
> <!DOCTYPE html>
> <html>
> <head><title>This is test server B</title></head>
> <body><h1>This is test server B</h1></body>
> </html>
> EOF

root@serverb:~#

FLIP’ing a Floating IP

Initially, we’ve set up Floating IP 89.149.192.0 with Anchor IP 212.32.230.75, which belongs to server A.

Suppose we’ve developed an updated web application on server B and after months of testing, it’s finally ready.

To direct users visiting 89.149.192.0 to server B, we need to update the Anchor IP of Floating IP 89.149.192.0, changing (FLIP’ing) it from 212.32.230.75 (server A) to 212.32.230.66 (server B).

To do this manually, edit the Floating IP Definition in the Customer Portal and change the Anchor IP:

Now, when you refresh your browser, the page from server B is shown:

Congratulations, you’ve just done a zero-downtime deployment, and also taken your first step towards a highly available, continuous deployment web hosting cluster.

Step two: Using the API to manage Floating IPs

Of course, using the Leaseweb Customer Portal is a convenient way to set up and play with Floating IPs, but the real power is in automation.

The official documentation of the Floating IPs API can be found on developer.leaseweb.com.

In the following examples we’ll use curl to perform HTTP requests and the jq tool to pretty-print the API responses, but you can use any tool or library for interacting with a RESTful API. You can find your API key (X-Lsw-Auth) in the Customer Portal under API.

Floating IPs and Floating IP ranges have a prefix length and are always written in CIDR notation. In the context of API calls, the forward slash “/” is replaced with an underscore “_” for compatibility in URLs; for example, range 89.149.192.0/29 becomes 89.149.192.0_29. For a single Floating IP address (/32), the prefix length may be omitted.

List Floating IP ranges

To list Floating IP ranges, make a GET request to /floatingIps/v2/ranges:
curl --silent --request GET --url https://api.leaseweb.com/floatingIps/v2/ranges --header 'X-Lsw-Auth: 213423-2134234-234234-23424' |jq

{
  "ranges": [
    {
      "id": "89.149.192.0_29",
      "range": "89.149.192.0/29",
      "customerId": "12345678",
      "salesOrgId": "2000",
      "pop": "AMS-01"
    }
  ],
  "_metadata": {
    "limit": 20,
    "offset": 0,
    "totalCount": 1
  }
}

List the Floating IP definitions in a Floating IP range

To list the Floating IP definitions within a certain Floating IP range, make a GET request to the floatingIpDefinitions endpoint of that range:

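curl --silent --request GET --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions --header 'X-Lsw-Auth: 213423-2134234-234234-23424' |jq
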
{
  "floatingIpDefinitions": [
    {
      "id": "89.149.192.0",
      "rangeId": "89.149.192.0_29",
      "pop": "AMS-01",
      "customerId": "12345678",
      "salesOrgId": "2000",
      "floatingIp": "89.149.192.0/32",
      "anchorIp": "212.32.230.66",
      "status": "ACTIVE",
      "createdAt": "2019-06-17T14:15:11+00:00",
      "updatedAt": "2019-06-26T09:26:52+00:00"
    }
  ],
  "_metadata": {
    "totalCount": 1,
    "limit": 20,
    "offset": 0
  }
}

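Create a Floating IP definition

Additional Floating IP definitions can be created with a POST request to the same floatingIpDefinitions endpoint. The exact payload is described in the API documentation on developer.leaseweb.com; a sketch that links a second Floating IP, 89.149.192.3, to Anchor IP 212.32.230.66 of server B looks like this:
curl --silent --request POST --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions --header 'X-Lsw-Auth: 213423-2134234-234234-23424' --header 'content-type: application/json' --data '{
    "floatingIp": "89.149.192.3",
    "anchorIp": "212.32.230.66"
}' |jq

The response shows the new definition with status CREATING:
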
{
  "id": "89.149.192.3",
  "rangeId": "89.149.192.0_29",
  "pop": "AMS-01",
  "customerId": "12345678",
  "salesOrgId": "2000",
  "floatingIp": "89.149.192.3/32",
  "anchorIp": "212.32.230.66",
  "status": "CREATING",
  "createdAt": "2019-06-26T14:30:40+00:00",
  "updatedAt": "2019-06-26T14:30:40+00:00"
}

Listing the definitions again a couple of seconds later shows the new definition as ACTIVE:

{
  "floatingIpDefinitions": [
    {
      "id": "89.149.192.0",
      "rangeId": "89.149.192.0_29",
      "pop": "AMS-01",
      "customerId": "12345678",
      "salesOrgId": "2000",
      "floatingIp": "89.149.192.0/32",
      "anchorIp": "212.32.230.66",
      "status": "ACTIVE",
      "createdAt": "2019-06-17T14:15:11+00:00",
      "updatedAt": "2019-06-26T14:23:58+00:00"
    },
    {
      "id": "89.149.192.3",
      "rangeId": "89.149.192.0_29",
      "pop": "AMS-01",
      "customerId": "12345678",
      "salesOrgId": "2000",
      "floatingIp": "89.149.192.3/32",
      "anchorIp": "212.32.230.66",
      "status": "ACTIVE",
      "createdAt": "2019-06-26T14:30:40+00:00",
      "updatedAt": "2019-06-26T14:30:45+00:00"
    }
  ],
  "_metadata": {
    "totalCount": 2,
    "limit": 20,
    "offset": 0
  }
}

Update a Floating IP definition

To change the Anchor IP of a Floating IP, make a PUT request to its Floating IP definition. Let’s update 89.149.192.0 with Anchor IP 212.32.230.75, so we’re directing traffic back to server A again:
curl --silent --request PUT --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions/89.149.192.0_32 --header 'X-Lsw-Auth: 213423-2134234-234234-23424' --header 'content-type: application/json' --data '{
    "anchorIp": "212.32.230.75"
}' |jq

{
  "id": "89.149.192.0",
  "rangeId": "89.149.192.0_29",
  "pop": "AMS-01",
  "customerId": "12345678",
  "salesOrgId": "2000",
  "floatingIp": "89.149.192.0/32",
  "anchorIp": "212.32.230.66",
  "status": "UPDATING",
  "createdAt": "2019-06-17T14:15:11+00:00",
  "updatedAt": "2019-06-26T14:35:57+00:00"
}

Note that in the response, the old anchorIp is still listed and the status has changed to UPDATING. The update process is very fast, but not instantaneous. When making another GET request to the floatingIpDefinitions endpoint, you can see that the update has been processed seconds later:

{
  "floatingIpDefinitions": [
    {
      "id": "89.149.192.0",
      "rangeId": "89.149.192.0_29",
      "pop": "AMS-01",
      "customerId": "12345678",
      "salesOrgId": "2000",
      "floatingIp": "89.149.192.0/32",
      "anchorIp": "212.32.230.75",
      "status": "ACTIVE",
      "createdAt": "2019-06-17T14:15:11+00:00",
      "updatedAt": "2019-06-26T14:36:01+00:00"
    }
  ],
  "_metadata": {
    "totalCount": 1,
    "limit": 20,
    "offset": 0
  }
}

Delete a Floating IP definition

Deleting a Floating IP definition is as easy as making a DELETE call to it:
curl --silent --request DELETE --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions/89.149.192.3 --header 'X-Lsw-Auth: 213423-2134234-234234-23424' |jq

{
  "id": "89.149.192.3",
  "rangeId": "89.149.192.0_29",
  "pop": "AMS-01",
  "customerId": "12345678",
  "salesOrgId": "2000",
  "floatingIp": "89.149.192.3/32",
  "anchorIp": "212.32.230.66",
  "status": "REMOVING",
  "createdAt": "2019-06-26T14:30:40+00:00",
  "updatedAt": "2019-06-26T14:39:34+00:00"
}

Just like with the POST and PUT calls, it will take a couple of seconds to process.
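
Because of this, automation that depends on the new state may want to poll the definition until it is ACTIVE again. A minimal sketch, using the same endpoint and placeholder API key as above:
until curl --silent --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions/89.149.192.0_32 --header 'X-Lsw-Auth: 213423-2134234-234234-23424' | jq --exit-status '.status == "ACTIVE"' > /dev/null; do
    sleep 1
done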

Step three: Putting it all together – creating a highly available web hosting platform with Keepalived

Keepalived is a versatile piece of software that can be used to implement automatic failover using the Leaseweb Floating IPs API. We’ll demonstrate how to create a simple active/backup setup where the Floating IP is automatically routed to server B in the event that server A fails.

Keepalived can do many more things; keep in mind this is a proof-of-concept example only, meant to demonstrate how to achieve high availability with automatic failover and Floating IPs in the simplest possible way.

The keepalived configuration

After installation, the keepalived configuration resides in the /etc/keepalived/keepalived.conf file. In this file, we’ll instruct keepalived to:

  • Create a “vrrp” instance named webservers with id 123:
    Note: the id can be any number between 0 and 255, but it needs to be the same on all servers.
    vrrp_instance webservers { ... }
    virtual_router_id
  • Set up server A to be the master, with priority 200:
    state MASTER
    priority 200
  • Set up server B to be the backup, with priority 100:
    state BACKUP
    priority 100
  • Communicate with each other using a shared secret:
    interface <interface name> (see the instructions under Setting up the Floating IP address on the servers)
    unicast_src_ip <server's IP address>
    unicast_peer { <other server's IP address> }
    authentication { ... }
  • Run a script to update the Anchor IP when either server becomes master:
    notify_master /etc/keepalived/becomemaster.sh
  • Run a command to check whether the web server is still running. On server A (CentOS) this is the httpd process; on server B (Ubuntu) it is the nginx process, and we need to wrap the command in a small script instead.
    track_script { ... }

So, we run the following commands to set up server A:

# Install keepalived
yum install -y keepalived

# Write keepalived config
cat <<EOF > /etc/keepalived/keepalived.conf
vrrp_instance webservers {
    virtual_router_id 123
    state MASTER
    priority 200
    interface eno1
    unicast_src_ip 212.32.230.75
    unicast_peer {
        212.32.230.66
    }
    authentication {
        auth_type PASS
        auth_pass supersecret
    }
    notify_master /etc/keepalived/becomemaster.sh
    track_script {
        chk_apache
    }
}

vrrp_script chk_apache {
    script "/usr/sbin/pidof httpd"
    interval 2
}
EOF

# Write script that calls floating IP API to update the Floating IP with this server as Anchor IP
cat <<EOF > /etc/keepalived/becomemaster.sh
#!/bin/sh
curl --silent --request PUT --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions/89.149.192.0_32 --header 'X-Lsw-Auth: '"213423-2134234-234234-23424" --header 'content-type: application/json' --data '{ "anchorIp": "212.32.230.75" }'
EOF
chmod +x /etc/keepalived/becomemaster.sh

# Restart keepalived
systemctl restart keepalived

# Check keepalived status
systemctl status keepalived

Result:

tim@laptop:~$ ssh root@20483.lsw
[root@servera ~]# yum install -y keepalived

[...]

[root@servera ~]# cat <<EOF > /etc/keepalived/keepalived.conf
> vrrp_instance webservers {
>     virtual_router_id 123
>     state MASTER
>     priority 200
>     interface eno1
>     unicast_src_ip 212.32.230.75
>     unicast_peer {
>         212.32.230.66
>     }
>     authentication {
>         auth_type PASS
>         auth_pass supersecret
>     }
>     notify_master /etc/keepalived/becomemaster.sh
>     track_script {
>         chk_apache
>     }
> }
>
> vrrp_script chk_apache {
>     script "/usr/sbin/pidof httpd"
>     interval 2
> }
> EOF

[root@servera ~]# cat <<EOF > /etc/keepalived/becomemaster.sh
> #!/bin/sh
> curl --silent --request PUT --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions/89.149.192.0_32 --header 'X-Lsw-Auth: '"213423-2134234-234234-23424" --header 'content-type: application/json' --data '{ "anchorIp": "212.32.230.75" }'
> EOF

[root@servera ~]# chmod +x /etc/keepalived/becomemaster.sh

[root@servera ~]# systemctl restart keepalived

[root@servera ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-07-23 11:27:03 UTC; 30s ago
  Process: 1346 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1347 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1347 /usr/sbin/keepalived -D
           ├─1348 /usr/sbin/keepalived -D
           └─1349 /usr/sbin/keepalived -D

Jul 23 11:27:03 servera Keepalived_vrrp[1349]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 23 11:27:03 servera Keepalived_vrrp[1349]: WARNING - default user 'keepalived_script' for script execution does not exist ...reate.
Jul 23 11:27:03 servera Keepalived_vrrp[1349]: Truncating auth_pass to 8 characters
Jul 23 11:27:03 servera Keepalived_vrrp[1349]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Jul 23 11:27:03 servera Keepalived_vrrp[1349]: Using LinkWatch kernel netlink reflector...
Jul 23 11:27:03 servera Keepalived_vrrp[1349]: VRRP sockpool: [ifindex(2), proto(112), unicast(1), fd(10,11)]
Jul 23 11:27:03 servera Keepalived_vrrp[1349]: VRRP_Script(chk_apache) succeeded
Jul 23 11:27:04 servera Keepalived_vrrp[1349]: VRRP_Instance(webservers) Transition to MASTER STATE
Jul 23 11:27:05 servera Keepalived_vrrp[1349]: VRRP_Instance(webservers) Entering MASTER STATE
Jul 23 11:27:05 servera Keepalived_vrrp[1349]: Opening script file /etc/keepalived/becomemaster.sh
Hint: Some lines were ellipsized, use -l to show in full.

[root@servera ~]#

Then we set up server B:

# Install keepalived
apt install -y keepalived

# Write keepalived config
cat <<EOF > /etc/keepalived/keepalived.conf
vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
}

vrrp_instance webservers {
    virtual_router_id 123
    state BACKUP
    priority 100
    interface enp32s0
    unicast_src_ip 212.32.230.66
    unicast_peer {
        212.32.230.75
    }
    authentication {
        auth_type PASS
        auth_pass supersecret
    }
    notify_master /etc/keepalived/becomemaster.sh
    track_script {
        chk_nginx
    }
}
EOF

# Write script that calls floating IP API to update the Floating IP with this server as Anchor IP
cat <<EOF > /etc/keepalived/becomemaster.sh
#!/bin/sh
curl --silent --request PUT --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions/89.149.192.0_32 --header 'X-Lsw-Auth: '"213423-2134234-234234-23424" --header 'content-type: application/json' --data '{ "anchorIp": "212.32.230.66" }'
EOF
chmod +x /etc/keepalived/becomemaster.sh
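
# Write a small script that checks whether nginx is running (used by track_script above)
cat <<EOF > /etc/keepalived/chk_nginx.sh
#!/bin/sh
/bin/pidof nginx
EOF
chmod +x /etc/keepalived/chk_nginx.sh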

# Restart keepalived
systemctl restart keepalived

# Check keepalived status
systemctl status keepalived

Result:

tim@laptop:~$ ssh root@37089.lsw
[root@serverb ~]# apt install -y keepalived

[...]

[root@serverb ~]# cat <<EOF > /etc/keepalived/keepalived.conf
> vrrp_instance webservers {
>     virtual_router_id 123
>     state BACKUP
>     priority 100
>     interface enp32s0
>     unicast_src_ip 212.32.230.66
>     unicast_peer {
>         212.32.230.75
>     }
>
>     authentication {
>         auth_type PASS
>         auth_pass supersecret
>     }
>
>     notify_master /etc/keepalived/becomemaster.sh
>
>     track_script {
>         chk_nginx
>     }
> }
>
> vrrp_script chk_nginx {
>     script "/etc/keepalived/chk_nginx.sh"
>     interval 2
> }
> EOF

[root@serverb ~]# cat <<EOF > /etc/keepalived/becomemaster.sh
> #!/bin/sh
> curl --silent --request PUT --url https://api.leaseweb.com/floatingIps/v2/ranges/89.149.192.0_29/floatingIpDefinitions/89.149.192.0_32 --header 'X-Lsw-Auth: '"213423-2134234-234234-23424" --header 'content-type: application/json' --data '{ "anchorIp": "212.32.230.66" }'
> EOF

[root@serverb ~]# cat <<EOF > /etc/keepalived/chk_nginx.sh
> #!/bin/sh
> /bin/pidof nginx
> EOF

[root@serverb ~]# chmod +x /etc/keepalived/becomemaster.sh

[root@serverb ~]# systemctl restart keepalived

[root@serverb ~]# systemctl status keepalived
● keepalived.service - Keepalive Daemon (LVS and VRRP)
   Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-07-23 11:27:12 UTC; 48s ago
  Process: 24346 ExecStart=/usr/sbin/keepalived $DAEMON_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 24355 (keepalived)
    Tasks: 3 (limit: 4574)
   CGroup: /system.slice/keepalived.service
           ├─24355 /usr/sbin/keepalived
           ├─24357 /usr/sbin/keepalived
           └─24358 /usr/sbin/keepalived

Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: Registering Kernel netlink command channel
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: Registering gratuitous ARP shared channel
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: Truncating auth_pass to 8 characters
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: Using LinkWatch kernel netlink reflector...
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: VRRP_Instance(webservers) Entering BACKUP STATE
Jul 23 11:27:12 serverb Keepalived_healthcheckers[24357]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 23 11:27:12 serverb Keepalived_vrrp[24358]: VRRP_Script(chk_nginx) succeeded

[root@serverb ~]# 

Watching keepalived in action

So now we have our redundant setup, and server A is the master. If we visit the Floating IP address in our browser, we see that it’s being served from server A:

Let’s simulate a failure on server A by shutting down the Apache web server, and watch server B take over.

On server A, run:
systemctl stop httpd

Within a couple of seconds, you’ll see it fail over to server B. Feel free to hammer F5 like your life depends on it!
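
Instead of refreshing manually, you can also watch the failover from your own machine with a small loop that prints which server answers (using the Floating IP of this setup):
while true; do curl --silent http://89.149.192.0/ | grep --only-matching 'server [AB]' | head -n 1; sleep 1; done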

Looking at the logs of keepalived on server B, you can see that it detected the failure on server A and automatically executed the script to update the Anchor IP:

journalctl -u keepalived |tail

[ ... ]

Jul 23 11:51:43 diy-dhcp-ams01-nl Keepalived_vrrp[24358]: VRRP_Instance(webservers) Transition to MASTER STATE
Jul 23 11:51:44 diy-dhcp-ams01-nl Keepalived_vrrp[24358]: VRRP_Instance(webservers) Entering MASTER STATE
Jul 23 11:51:44 diy-dhcp-ams01-nl Keepalived_vrrp[24358]: Opening script file /etc/keepalived/becomemaster.sh

That’s it, you now have your own (minimal implementation of) a highly available web hosting platform!


Building a CaaS solution on bare metal servers

Welcome readers to the first Leaseweb Labs blog in our series on the topic of container solutions. This post is written by Santhosh Cham, a veteran Engineer with vast experience building IaaS/Cloud platforms from the ground up.

What are Containers as a Service (CaaS)? 

Containers as a Service (CaaS) is a hosted container infrastructure that offers an easy way to deploy containers on elastic infrastructure. CaaS is suitable in contexts where developers want more control over container orchestration. With CaaS, developers can deploy complex applications on containers without worrying about the limitations of certain platforms.

As a Senior Infrastructure Engineer at Leaseweb, my primary focus is on exceptional operational delivery. Container-based infrastructure and technology is an integral part of operations for myself and my team. We can deliver the power of Kubernetes to our applications quickly, securely, and efficiently using CaaS.  

This blog presents a high-level CaaS solution on bare metal servers with rich elastic features. It may be useful for those who want to deploy on-premise, enterprise-level Kubernetes clusters for production workloads.

Things to consider in a CaaS solution

Infrastructure 

CaaS platforms are built on top of open hyper-converged infrastructure (HCI), which combines compute, storage, and network fabric into one platform using low-cost commodity x86 hardware. Software-defined systems add further value, and the underlying infrastructure scales horizontally for CaaS.

Container Orchestration (Kubernetes) 

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. We are using Kubernetes for container orchestration in our CaaS platform. 

Storage (Class / volume plug-in) 

A StorageClass provides a way for administrators to describe the classes of storage they offer; different classes might map to quality-of-service levels. In our CaaS we use the RBD volume plug-in for high-performance workloads and NFS for general workloads.
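
To illustrate, a StorageClass for the RBD volume plug-in could look roughly like the sketch below (the monitor address, pool, and secret names are placeholders, not our production values):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.10:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-user-secret
EOF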

Cluster Networking 

We are using cluster networking/CNI through Calico. Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods, based on the same IP networking principles as the internet.

Cluster networking makes use of a layer 3 network and features the BGP routing protocol, network policy, and route reflectors: the nodes act as clients peering with the controller servers, and the controller servers use the BIRD Internet routing daemon for better performance and stability.

Load Balancing (on bare metal) 

Kubernetes does not offer an implementation of network load balancers for bare metal clusters, so we have deployed load balancing ourselves: L4 with MetalLB and L7 with Ingress. MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols; we deployed it with the BGP routing protocol.
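
As an illustration, a BGP configuration for MetalLB (in the ConfigMap format it used at the time of writing) could look like this sketch, where the peer address, AS numbers, and address pool are placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 203.0.113.0/24
EOF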

In Kubernetes, an Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. You configure access by creating a collection of rules that define which inbound connections reach which services; we use the Nginx Ingress Controller for this.
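
A minimal Ingress rule routing a hostname to a service might look like this sketch (placeholder names, API version as current at the time of writing):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80
EOF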

Kubernetes Security  

We have a number of security measures in our solution. These include:  

  • Transport Level Security (TLS) for all API traffic 
  • Network policies for a namespace to restrict access to pods/ports, and controlling the placement of pods onto node pools
  • Separate namespaces for isolation between components 
  • Role-Based Access Control (RBAC) 
  • Limiting resource usage on a cluster using resource quota limits 
  • Using etcd ACLs 
  • Enabling audit logging for analysis in the event of a compromise

Kubernetes logging and monitoring 

Monitoring and logging for the CaaS solution means using tools like:

  • Icinga2 distributed monitoring – for the underlying infrastructure
  • Prometheus/Grafana – for Kubernetes cluster monitoring
  • Elasticsearch, Fluentd, and Kibana (EFK) – for managing logging

Provisioning and lifecycle

We are using Chef for provisioning and configuration management of base OS, and Ansible for Kubernetes cluster provisioning and lifecycle management. 

Infrastructure architecture diagram 


Conclusion 

With this design, I am able to manage the underlying infrastructure and the Kubernetes cluster under the same umbrella. The solution is cost-effective and can be deployed on low-cost commodity x86 hardware.

This CaaS solution is implemented using open-source technologies, so IT teams should consider the learning and development needed for developers to implement and manage it. Stay tuned for the next posts; expect a detailed technical blog on each domain.


How to create JWT authentication with API Platform

As the title suggests, in this blog we will together create a simple JWT authentication using API Platform and LexikJWTAuthenticationBundle. And of course, we’ll also use our lovely Doctrine User Provider.

Motivation

There are too many tutorials online about Symfony with JWT, and also some about API Platform. But most of them are too short or miss certain things, which is unhelpful. It can also be confusing for developers when the tutorials don’t say what concepts you need to know first.

I hope this blog will be different – if you have any concerns, updates, or questions, drop a comment underneath and I’ll try to answer all of them.

Requirements

  • PHP >= 7.0 knowledge
  • Symfony knowledge (Autowiring, Dependency Injection)
  • Docker knowledge
  • REST APIs knowledge
  • PostgreSQL knowledge
  • Ubuntu or macOS (sorry, Windows users :))

API Platform installation

The best way for me to install this is by using the git repository, or downloading API Platform as a .zip file from GitHub.

$ git clone https://github.com/api-platform/api-platform.git apiplatform-user-auth

$ cd apiplatform-user-auth

Now, first of all, the whole API Platform runs on specific ports, so you need to make sure that these are free and nothing is listening on them.

Finding the ports

You can find them in the docker-compose.yml file in the project root directory. They are: 80, 81, 8080, 8081, 3000, 5432, 1337, 8443, 8444, 443, 444.

How to show this

Run this command

$ sudo lsof -nP | grep LISTEN

Kill any listening processes on any of the above ports.

$ sudo kill -9 $PROCESS_NUMBER

Installation:

  • Pull the required packages and everything needed.
docker-compose pull
  • Bring the application up and running.
$ docker-compose up -d
  • You may face some issues here, so it’s best to bring all containers down and run the command again like this.
$ docker-compose down
$ COMPOSE_HTTP_TIMEOUT=120 docker-compose up -d

Now the application should be running and everything should be in place:

$ docker ps

CONTAINER ID        IMAGE                            COMMAND                  CREATED              STATUS              PORTS                                                                    NAMES
6389d8efb6a0        apiplatform-user-auth_h2-proxy   "nginx -g 'daemon of…"   About a minute ago   Up About a minute   0.0.0.0:443-444->443-444/tcp, 80/tcp, 0.0.0.0:8443-8444->8443-8444/tcp   apiplatform-user-auth_h2-proxy_1_a012bc894b6c
a12ff2759ca4        quay.io/api-platform/varnish     "docker-varnish-entr…"   2 minutes ago        Up 2 minutes        0.0.0.0:8081->80/tcp                                                     apiplatform-user-auth_cache-proxy_1_32d747ba8877
6c1d29d1cbdd        quay.io/api-platform/nginx       "nginx -g 'daemon of…"   2 minutes ago        Up 2 minutes        0.0.0.0:8080->80/tcp                                                     apiplatform-user-auth_api_1_725cd9549081
62f69838dacb        quay.io/api-platform/php         "docker-entrypoint p…"   2 minutes ago        Up 2 minutes        9000/tcp                                                                 apiplatform-user-auth_php_1_cf09d32c3120
381384222af5        dunglas/mercure                  "./mercure"              2 minutes ago        Up 2 minutes        443/tcp, 0.0.0.0:1337->80/tcp                                            apiplatform-user-auth_mercure_1_54363c253a34
783565efb2eb        postgres:10-alpine               "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:5432->5432/tcp                                                   apiplatform-user-auth_db_1_8da243ca2865
1bc8e386bf02        quay.io/api-platform/client      "/bin/sh -c 'yarn st…"   2 minutes ago        Up About a minute   0.0.0.0:80->3000/tcp                                                     apiplatform-user-auth_client_1_1c413b4e4a5e
c22bef7a0b3f        quay.io/api-platform/admin       "/bin/sh -c 'yarn st…"   2 minutes ago        Up About a minute   0.0.0.0:81->3000/tcp                                                     apiplatform-user-auth_admin_1_cfecc5c6b442

Now, if you go to localhost:8080 you will see some simple APIs listed there; this is the example Greeting entity that comes with the project.

Create the User entity based on Doctrine User Provider

Install the doctrine and maker packages to help us make this quickly 🙂

$ docker-compose exec php composer require doctrine maker

Create your User entity

$ docker-compose exec php bin/console make:user

 The name of the security user class (e.g. User) [User]:
 > Users

 Do you want to store user data in the database (via Doctrine)? (yes/no) [yes]:
 >

 Enter a property name that will be the unique "display" name for the user (e.g. email, username, uuid) [email]:
 > email

 Will this app need to hash/check user passwords? Choose No if passwords are not needed or will be checked/hashed by some other system (e.g. a single sign-on server).

 Does this app need to hash/check user passwords? (yes/no) [yes]:
 >

The newer Argon2i password hasher requires PHP 7.2, libsodium or paragonie/sodium_compat. Your system DOES support this algorithm.
You should use Argon2i unless your production system will not support it.

 Use Argon2i as your password hasher (bcrypt will be used otherwise)? (yes/no) [yes]:
 >

 created: src/Entity/Users.php
 created: src/Repository/UsersRepository.php
 updated: src/Entity/Users.php
 updated: config/packages/security.yaml


  Success!


 Next Steps:
   - Review your new App\Entity\Users class.
   - Use make:entity to add more fields to your Users entity and then run make:migration.
   - Create a way to authenticate! See https://symfony.com/doc/current/security.html

If you now go to “api/src/Entity” you will find your entity there. If you scroll down a little bit to the getEmail & getPassword functions, you will see something like this, which means these two properties will be used as the user identifier and credentials in the authentication. (I will not use the ROLES in this example, as it is a simple one.)

# api/src/Entity/Users.php

/**
* @see UserInterface
*/
public function getPassword(): string
{
    return (string) $this->password;
}

As you know, the latest versions of Symfony use the autowiring feature, so you can see that this entity is already wired to the repository “api/src/Repository/UsersRepository”.

# api/src/Entity/Users.php

/**
 * @ORM\Entity(repositoryClass="App\Repository\UsersRepository")
 */
class Users implements UserInterface
{
    ...
}

You can see clearly in this repository some pre-implemented functions like find() and findBy(), but now let us create another function that helps us to create a new user.

  • To add a user into the Db, you will need to define an entity manager like the following:
# api/src/Repository/UsersRepository.php

class UsersRepository extends ServiceEntityRepository
{
  /** EntityManager $manager */
  private $manager;
....
}

and initialize it in the constructor like so:

# api/src/Repository/UsersRepository.php

/**
* UsersRepository constructor.
* @param RegistryInterface $registry
*/
public function __construct(RegistryInterface $registry)
{
  parent::__construct($registry, Users::class);

  $this->manager = $registry->getEntityManager();
}
  • Now, let us create our function:
# api/src/Repository/UsersRepository.php

/**
 * Create a new user
 * @param $data
 * @return Users
 * @throws \Doctrine\ORM\ORMException
 * @throws \Doctrine\ORM\OptimisticLockException
*/
public function createNewUser($data)
{
    $user = new Users();
    $user->setEmail($data['email'])
        ->setPassword($data['password']);

    $this->manager->persist($user);
    $this->manager->flush();

    return $user;
}
  • Let us create our controller to consume that repository. We can call it “AuthController”.
$ docker-compose exec php bin/console make:controller

 Choose a name for your controller class (e.g. TinyJellybeanController):
 > AuthController

 created: src/Controller/AuthController.php
 created: templates/auth/index.html.twig


  Success!


 Next: Open your new controller class and add some pages!

Now, let’s consume this createNewUser function. If you look at your controller, you will find it only contains the index function, but we need to create another one; we will call it “register”.

  • We need the UsersRepository, so we should create the object first.
# api/src/Controller/AuthController.php

use App\Repository\UsersRepository;

class AuthController extends AbstractController
{
    /** @var UsersRepository $userRepository */
    private $usersRepository;

    /**
     * AuthController Constructor
     *
     * @param UsersRepository $usersRepository
     */
    public function __construct(UsersRepository $usersRepository)
    {
        $this->usersRepository = $usersRepository;
    }
    .......
}
  • Now, we need to make this controller know about the User repository, so we will inject it as a service.
# api/config/services.yaml

services:
    ......
  # Repositories
  app.user.repository:
      class: App\Repository\UsersRepository
      arguments:
          - Symfony\Bridge\Doctrine\RegistryInterface
  
  # Controllers
  app.auth.controller:
      class: App\Controller\AuthController
      arguments:
          - '@app.user.repository'
  • Now, it is time to implement our new endpoint to register (create) a new account.
# api/src/Controller/AuthController.php

# Import those
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

# Then add this to the class
/**
 * Register new user
 * @param Request $request
 *
 * @return Response
 */
public function register(Request $request)
{
    $newUserData['email']    = $request->get('email');
    $newUserData['password'] = $request->get('password');

    $user = $this->usersRepository->createNewUser($newUserData);

    return new Response(sprintf('User %s successfully created', $user->getUsername()));
}
  • Now, we need to make the framework know about this new endpoint by adding it to our routes file.
# src/config/routes.yaml

# Register api
register:
    path: /register
    controller: App\Controller\AuthController::register
    methods: ['POST']

Testing this new API:

  • Make the migration and update the DB first:
$ docker-compose exec php bin/console make:migration

$ docker-compose exec php bin/console doctrine:migrations:migrate

  WARNING! You are about to execute a database migration that could result in schema changes and data loss. Are you sure you wish to continue? (y/n) y

Now, call it from Postman or any other client you use. Here I am using curl.

$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/register?email=test1@mail.com&password=test1"
User test1@mail.com successfully created

To see this data in the DB:

$ docker-compose exec db psql -U api-platform api
psql (10.8)
Type "help" for help.

$ api=# select * from users;
 id |     email      | roles | password
----+----------------+-------+----------
  6 | test1@mail.com | []    | test1
(1 row)

Oooooh wow, the password is not encrypted! What should we do?!

So, as I said before, this project is built on Symfony – that is why I said you need to have knowledge about Symfony. We will use the password encoder class.

# api/src/Repository/UsersRepository.php

use Symfony\Component\Security\Core\Encoder\UserPasswordEncoderInterface;

class UsersRepository extends ServiceEntityRepository
{
    .......

  /** UserPasswordEncoderInterface $encoder */
  private $encoder;
    
  /**
   * UserRepository constructor.
   * @param RegistryInterface $registry
   * @param UserPasswordEncoderInterface $encoder
   */
  public function __construct(RegistryInterface $registry, UserPasswordEncoderInterface $encoder)
  {
      parent::__construct($registry, Users::class);

      $this->manager = $registry->getEntityManager();
      $this->encoder = $encoder;
  }
}
  • As always, we need to inject it into the repository:
# api/config/services.yaml

services:
  .......
  # Repositories
  app.user.repository:
      class: App\Repository\UsersRepository
      arguments:
          - Symfony\Bridge\Doctrine\RegistryInterface
          - Symfony\Component\Security\Core\Encoder\UserPasswordEncoderInterface

Then update the create user function:

# api/src/Repository/UsersRepository.php

public function createNewUser($data)
{
    $user = new Users();
    $user->setEmail($data['email'])
        ->setPassword($this->encoder->encodePassword($user, $data['password']));
    .......
}
  • Now, try the register call again; remember to try with a different email, as we defined the email as unique:
$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/register?email=test2@mail.com&password=test2"
User test2@mail.com successfully created
  • check the DB now again:
$ api=# select * from users;
 id |     email      | roles |                                            password
----+----------------+-------+-------------------------------------------------------------------------------------------------
  6 | test1@mail.com | []    | test1
  7 | test2@mail.com | []    | $argon2i$v=19$m=1024,t=2,p=2$VW9tYXEzZHp5U0RMSE5ydA$bo+V1X6rgYZ4ebN/bs1cpz+sf+DQdx3Duu3hvFUII8M
(2 rows)

Install LexikJWTAuthenticationBundle

  • Install the bundle and generate the secrets:
$ docker-compose exec php composer require jwt-auth

Create our authentication

  • (Additional) Before anything: if you try this call now, you will get this result:
$ curl -X GET -H "Content-Type: application/json" "http://localhost:8080/greetings"
{
    "@context": "/contexts/Greeting",
    "@id": "/greetings",
    "@type": "hydra:Collection",
    "hydra:member": [],
    "hydra:totalItems": 0
}
  • Let’s continue and create a new, simple endpoint that we will use in our testing. I will call it “/api”.
# api/src/Controller/AuthController.php

/**
* api route redirects
* @return Response
*/
public function api()
{
    return new Response(sprintf("Logged in as %s", $this->getUser()->getUsername()));
}
  • Add it to our Routes
# api/config/routes.yaml

api:
    path: /api
    controller: App\Controller\AuthController::api
    methods: ['POST']

Now, we need to make some configurations in our security config file:

  • This is the provider for our authentication and anything related to users in the application. It is already predefined; if you want to change the user provider, you can do it here.
# api/config/packages/security.yaml

app_user_provider:
    entity:
        class: App\Entity\Users
        property: email
  • Let’s make some configs for our “/register” API as we want this API to be public for anyone:
# api/config/packages/security

register:
    pattern:  ^/register
    stateless: true
    anonymous: true
  • Now, let us assume that we want everything generated by API Platform to require a JWT token, meaning that without an authenticated user the API shouldn’t return anything. So I will update the “main” part of the configs to be like this:
# api/config/packages/security.yaml

main:
    anonymous: false
    stateless: true
    provider: app_user_provider
    json_login:
        check_path: /login
        username_path: email
        password_path: password
        success_handler: lexik_jwt_authentication.handler.authentication_success
        failure_handler: lexik_jwt_authentication.handler.authentication_failure
    guard:
        authenticators:
            - lexik_jwt_authentication.jwt_token_authenticator
  • Also, add some configs for our simple endpoint /api.
# api/config/packages/security.yaml

api:
    pattern: ^/api
    stateless: true
    anonymous: false
    provider: app_user_provider
    guard:
        authenticators:
            - lexik_jwt_authentication.jwt_token_authenticator
  • As you see in the above configs, we set anonymous to false as we don’t want just anyone to access these two APIs. We are also telling the framework that the provider is the user provider we defined before. Finally, we tell it which authenticator to use and the authentication success/failure handlers.
  • Now, retry the call from the Additional part above for the /greetings API:
$ curl -X GET -H "Content-Type: application/json" "http://localhost:8080/greetings"
  {
      "code": 401,
      "message": "JWT Token not found"
  }

It is the same with our simple endpoint /api that we created:

$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/api" 
  {
    "code": 401,
    "message": "JWT Token not found"
  }
  • As you can see, it asks you to log in :D – there is no JWT token specified. So we will create a very simple API that is used by LexikJWTAuthenticationBundle to authenticate the users and generate their tokens. Remember that the login check path should be the same as the check_path under json_login in the security file:
# api/config/packages/security.yaml
....
json_login:
        check_path: /login

# api/config/routes.yaml

# Login check to log the user and generate JWT token
api_login_check:
      path: /login
      methods: ['POST']
  • Now, let’s try it out and see if it will generate a token for us!
$ curl -X POST -H "Content-Type: application/json" http://localhost:8080/login -d '{"email":"test2@mail.com","password":"test2"}'
  {"token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOjE1NTg2OTg4MTIsImV4cCI6MTU1ODcwMjQxMiwicm9sZXMiOlsiUk9MRV9VU0VSIl0sInVzZXJuYW1lIjoidGVzdDJAbWFpbC5jb20ifQ.nzd5FVhcyrfjYyN8jRgYFp3VOB2QytnPPRGNyp4ZtfLx6IRwg0TWZJPu5OFtOKPkdLO8DQAr_4Fpq_G6oPjzoxmGOASNuRoQonik9FCCq6oAIW3k5utzQecXDVE_ImnfgByc6WYW6a-aWLnsq1qtvxy274ojqdR0rWLePwSWX5K5-t08zDBgavO_87dVpYd0DLwhHIS7F10lNscET7bfWS-ioPDTv-G74OvkcpbcjgwHhXlO7TYubnrES-FsvAw7kezQe4BPxdbXr1w-XBZuqTNEU4MyrBuadSLgjoe_gievNBtkVhKErIkEQZVjeJIQ4xaKaxwmPxZcP9jYkE47myRdbMsL9XHSd0XmGq0bPuGjOJ2KLTmUb5oeuRnY-e9Q_V9BbouEGw0sjw2meo6Jot2MZyv5ZnLci_GwpRtWqmV7ZLw5jNyiLDFXR1rz70NcJh7EXqu9o4nno3oc68zokfDQvGkJJJZMtBrLCK5pKGMh0a1elIz41LRLZvpLYCrOZ2f4wCkGRD_U92iILD6w8EdVWGoO1wTn5Z2k8-GS1-QH9f-4KkOpaYGPCwwdrY7yioSt2oVbEj2FOb1jULteeP_Cpu44HyJktPLPW_wrN2OtZlUFr4Vz_owDSIvNESYk1JBQ_Fjlv9QGmUs9itzaDExjfB4QYoGkvpfNymtw2PI"}

As you see, it created a JWT token for me, so I can use it to call any API in the application. If it shows an exception like “Unable to generate token for the specified configurations”, please check this step here. First, open your .env file. We will need the JWT_PASSPHRASE, so keep it open:

$ mkdir -p api/config/jwt
$ openssl genrsa -out api/config/jwt/private.pem -aes256 4096 # this will ask you for the JWT_PASSPHRASE
$ openssl rsa -pubout -in api/config/jwt/private.pem -out api/config/jwt/public.pem # will confirm the JWT_PASSPHRASE again
  • Let’s try to call /api or /greetings endpoints with this token now:
$ curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOjE1NTg2OTg4MTIsImV4cCI6MTU1ODcwMjQxMiwicm9sZXMiOlsiUk9MRV9VU0VSIl0sInVzZXJuYW1lIjoidGVzdDJAbWFpbC5jb20ifQ.nzd5FVhcyrfjYyN8jRgYFp3VOB2QytnPPRGNyp4ZtfLx6IRwg0TWZJPu5OFtOKPkdLO8DQAr_4Fpq_G6oPjzoxmGOASNuRoQonik9FCCq6oAIW3k5utzQecXDVE_ImnfgByc6WYW6a-aWLnsq1qtvxy274ojqdR0rWLePwSWX5K5-t08zDBgavO_87dVpYd0DLwhHIS7F10lNscET7bfWS-ioPDTv-G74OvkcpbcjgwHhXlO7TYubnrES-FsvAw7kezQe4BPxdbXr1w-XBZuqTNEU4MyrBuadSLgjoe_gievNBtkVhKErIkEQZVjeJIQ4xaKaxwmPxZcP9jYkE47myRdbMsL9XHSd0XmGq0bPuGjOJ2KLTmUb5oeuRnY-e9Q_V9BbouEGw0sjw2meo6Jot2MZyv5ZnLci_GwpRtWqmV7ZLw5jNyiLDFXR1rz70NcJh7EXqu9o4nno3oc68zokfDQvGkJJJZMtBrLCK5pKGMh0a1elIz41LRLZvpLYCrOZ2f4wCkGRD_U92iILD6w8EdVWGoO1wTn5Z2k8-GS1-QH9f-4KkOpaYGPCwwdrY7yioSt2oVbEj2FOb1jULteeP_Cpu44HyJktPLPW_wrN2OtZlUFr4Vz_owDSIvNESYk1JBQ_Fjlv9QGmUs9itzaDExjfB4QYoGkvpfNymtw2PI" "http://localhost:8080/greetings"
{
    "@context": "/contexts/Greeting",
    "@id": "/greetings",
    "@type": "hydra:Collection",
    "hydra:member": [],
    "hydra:totalItems": 0
}

## Before
$ curl -X GET -H "Content-Type: application/json" "http://localhost:8080/greetings"
  {
      "code": 401,
      "message": "JWT Token not found"
  }
  • What about the /api endpoint, let’s try it out:
$ curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOjE1NTg2OTg4MTIsImV4cCI6MTU1ODcwMjQxMiwicm9sZXMiOlsiUk9MRV9VU0VSIl0sInVzZXJuYW1lIjoidGVzdDJAbWFpbC5jb20ifQ.nzd5FVhcyrfjYyN8jRgYFp3VOB2QytnPPRGNyp4ZtfLx6IRwg0TWZJPu5OFtOKPkdLO8DQAr_4Fpq_G6oPjzoxmGOASNuRoQonik9FCCq6oAIW3k5utzQecXDVE_ImnfgByc6WYW6a-aWLnsq1qtvxy274ojqdR0rWLePwSWX5K5-t08zDBgavO_87dVpYd0DLwhHIS7F10lNscET7bfWS-ioPDTv-G74OvkcpbcjgwHhXlO7TYubnrES-FsvAw7kezQe4BPxdbXr1w-XBZuqTNEU4MyrBuadSLgjoe_gievNBtkVhKErIkEQZVjeJIQ4xaKaxwmPxZcP9jYkE47myRdbMsL9XHSd0XmGq0bPuGjOJ2KLTmUb5oeuRnY-e9Q_V9BbouEGw0sjw2meo6Jot2MZyv5ZnLci_GwpRtWqmV7ZLw5jNyiLDFXR1rz70NcJh7EXqu9o4nno3oc68zokfDQvGkJJJZMtBrLCK5pKGMh0a1elIz41LRLZvpLYCrOZ2f4wCkGRD_U92iILD6w8EdVWGoO1wTn5Z2k8-GS1-QH9f-4KkOpaYGPCwwdrY7yioSt2oVbEj2FOb1jULteeP_Cpu44HyJktPLPW_wrN2OtZlUFr4Vz_owDSIvNESYk1JBQ_Fjlv9QGmUs9itzaDExjfB4QYoGkvpfNymtw2PI" "http://localhost:8080/api"
Logged in as test2@mail.com

## Before
$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/api" 
  {
    "code": 401,
    "message": "JWT Token not found"
  }

As you can see from the JWT token, you know exactly who is logged in, and you can improve this by implementing additional User properties like isActive or userRoles, etc.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Thank you for reading this tutorial, I hope that you learned something new!

If you have any questions please don’t hesitate to ask, or any feedback will be so useful.

You can find this whole tutorial and the example here on GitHub.


A High-Performance Logger with PHP

Recently, at Leaseweb, another successful Hackathon came to an end. There was a lot of fun, a lot of coding, a lot of coffee and a lot of cool ideas that arose during these 2 days. In this blog post we want to share one of these ideas that made us proud and that was really fun to work on.

1. The motivation

At Leaseweb, we strive to know our customers better so that we can actively empower them. For that, we need to start logging events that have value for the business. So basically we want to implement a simple technical service (let’s call it a ‘Logging Service’) that accepts any kind of event with some rich data (in other words, a payload), and then logs it, without interfering with the execution of the client.

Our Logging Service would, on each request, immediately return an “OK” so that the client can continue with its own execution, while the service keeps logging events asynchronously.

2. Our tech stack and our tech choice for POC

Typically at Leaseweb, we have a pretty standardized technology stack that revolves around PHP+Apache or PHP+Nginx. If a server or an application is built using this kind of stack, we are bound to typical synchronous execution: the client sends a request to our application (in this case our Logging Service) and needs to wait until our application sends back a response after it finishes all its tasks. This is not an ideal scenario. We need a service that runs asynchronously: a service that receives the request, says “OK” so that the client can continue with his execution, and then does its job.

In the market, there are several tools that would enable us to do this, such as NodeJS or Golang, or even some message queuing services that could be wired into our PHP stack. But as we are PHP enthusiasts at Leaseweb, we wanted to use our language of choice without adding dependencies. That is how we discovered ReactPHP (no relation to the front-end framework React). ReactPHP is a pure PHP library that allows the developer to do some cool reactive programming in PHP by running the code on an endless event loop :).

3. Implementation of the POC

With some ReactPHP libraries, we can handle HTTP requests in PHP itself. We no longer need a web server that handles the requests and creates PHP processes for us. We can just create a pure PHP process that handles everything for us, and if we want to fully use the hardware power of our machine, we can create several PHP processes and put them behind a load balancer.
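
To give an idea, a minimal ReactPHP HTTP server is a single PHP script running an endless event loop. This is only a sketch (it assumes the react/http and react/socket packages are installed via Composer, with the react/http 0.8 API), written the same way we wrote files earlier:

cat <<'EOF' > server.php
<?php
require __DIR__ . '/vendor/autoload.php';

// One endless event loop per process: all requests are served
// asynchronously by this single PHP process.
$loop = React\EventLoop\Factory::create();

$server = new React\Http\Server(function (Psr\Http\Message\ServerRequestInterface $request) {
    return new React\Http\Response(200, ['Content-Type' => 'text/plain'], "Hello World!\n");
});

// Port is taken from the first argument, e.g. `php server.php 8081`
$socket = new React\Socket\Server('0.0.0.0:' . ($argv[1] ?? 8081), $loop);
$server->listen($socket);

$loop->run();
EOF
php server.php 8081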

After choosing our technology we started playing around and implementing our idea. In order to make sure that asynchronous PHP would solve the majority of our concerns, we started benchmarking.

First we needed to script something to benchmark. Therefore we implemented 3 different scenarios:

  1. The all-time-favorite: An endpoint that prints Hello World.
  2. An API endpoint that calls a 3rd party API that takes 2 seconds to reply and proxies its response.
  3. An API-endpoint that accepts a payload with POST and logs it via HTTP to Elasticsearch (This is what we really want).

We implemented these 3 scenarios in two different stacks:

  • Stack A
    Traditional PHP+FPM+Nginx
  • Stack X
    4 PHP processes running on a loop with ReactPHP behind an Nginx load-balancer (sketched below). The reason why we chose 4 processes was solely because this number looks good :). There are some theories with suggestions regarding the number of processes that should run on a machine when using this approach; we will not go into further detail on this in this article. Note that in both Stack A and Stack X we used the exact same specs for the hardware server. On both, we had a CPU with 8 cores.
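
The wiring for Stack X could look roughly like this (a sketch; server.php refers to the event-loop script above, and ports and paths are illustrative):

# Start 4 ReactPHP processes, one per port
for port in 8081 8082 8083 8084; do
    php server.php "$port" &
done

# Let nginx load-balance across them
cat <<'EOF' > /etc/nginx/conf.d/reactphp.conf
upstream reactphp {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
}
server {
    listen 80;
    location / {
        proxy_pass http://reactphp;
    }
}
EOF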

Then we ran some stress tests with Locust:

3.1 1st Benchmark – Hello world!

For the first benchmark, we just wanted to see how the implementation of the Scenario 1 would behave in both of our stacks.

Figure 1: Hello World with Stack X
Figure 2: Hello World with Stack A

We can see that both of the stacks perform very similar. The reason for this is that the computation needed to print a ”Hello World!” is minimal, therefore both of our stacks can answer a high amount of requests in a reliable way.

The real power of asynchronous code comes when we need to deal with Input/Output (I/O: reading from a DB, an API, the filesystem, etc.), because these operations are time-consuming (see Sergey Zhuk’s Event-Driven PHP with ReactPHP). I/O is slow and CPU computation is fast. By going with an asynchronous approach, our program can execute other computations while waiting for I/O.

It is time to try this theory with the next benchmark:

3.2 2nd Benchmark – Response from a 3rd-party API

Following what we described in the previous section, the power of asynchronous code shows when we deal with input and output. So we were curious to find out how the APIs would behave if they need to call a 3rd-party API that takes 2 seconds to respond and then forward its response.

Let’s run the tests:

Figure 3: Response from a 3rd-party API with Stack X
Figure 4: Response from a 3rd-party API with Stack A

With the benchmarks illustrated in figures 3 and 4, it’s encouraging to see that we already have very interesting results. We ran a stress test where we gradually spawn 100 concurrent users that each send a request per second, and we can easily see that the more the number of concurrent users grows, the less responsive Stack A becomes. Stack A ends up at an average response time of 40 seconds, while Stack X maintains an average response time of 2 seconds (which is the time the 3rd-party API takes to respond).

With Stack A, each request made creates a process that will stay idle until the 3rd-party API responds. The implication is that, at some point in time, we will have hundreds of idle processes waiting for a reply. This will cause an immense overload on our machine’s resources.

Stack X performs exceedingly well because the same processes that wait for the reply from the 3rd-party API keep doing other work in the meantime, for example handling other HTTP requests and incoming responses from the 3rd-party API. This makes the stack far more efficient.
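
For reference, a handler along these lines is how Scenario 2 can be sketched in Stack X (again assuming a recent react/http; the upstream URL is a placeholder). Because the handler returns a promise, the process keeps serving other clients while the upstream API takes its 2 seconds:

<?php
// Sketch of a non-blocking proxy to a slow 3rd-party API with react/http.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use React\Http\Browser;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

$browser = new Browser();

$http = new HttpServer(function (ServerRequestInterface $request) use ($browser) {
    // Returning a promise frees the event loop immediately; the client
    // receives the response once the upstream API has replied.
    return $browser->get('https://slow-api.example.com/data')
        ->then(function (ResponseInterface $upstream) {
            return new Response(200, ['Content-Type' => 'application/json'], (string) $upstream->getBody());
        });
});

$http->listen(new SocketServer('127.0.0.1:8081'));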

After observing these results we wanted to push a bit harder: we wanted to see whether we could break Stack A entirely. So we ran the same stress test for this scenario, but this time with 1000 concurrent users.

Figure 5: Response from a 3rd-party API with Stack X with 1000 users
Figure 6: Response from a 3rd-party API with Stack A with 1000 users

We did it! We can see that at some point Stack A becomes unable to handle the requests and stops responding completely after reaching an average response time of 60 seconds. Stack X remains perfectly smooth with an average response time of 2 seconds :).

3.3 3rd Benchmark – Logging payloads to Elasticsearch

It was indeed fun to see how the stacks behave in the previous scenarios, but we wanted to see how they behave in a real-world scenario. Next, we wanted our API to accept a JSON payload via an HTTP POST and, to keep it simple, log it to an Elasticsearch cluster via HTTP (Scenario 3).

How the stacks work in a nutshell:

  • Stack X receives an HTTP POST request with the payload, responds to the client with OK, and then logs the payload to Elasticsearch (asynchronously; see the sketch after this list).
  • Stack A receives an HTTP POST request with the payload, logs it to Elasticsearch, and then responds to the client with OK.
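
A minimal sketch of the Stack X handler, once more assuming a recent react/http (the Elasticsearch URL and index name are placeholders):

<?php
// Sketch: acknowledge the client first, log to Elasticsearch in the background.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\Browser;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

$browser = new Browser();

$http = new HttpServer(function (ServerRequestInterface $request) use ($browser) {
    // The POST below is only queued on the event loop; it completes in the
    // background while the OK response goes out to the client right away.
    $browser->post(
        'http://elasticsearch.example.com:9200/events/_doc',
        ['Content-Type' => 'application/json'],
        (string) $request->getBody()
    );

    return new Response(200, ['Content-Type' => 'text/plain'], "OK\n");
});

$http->listen(new SocketServer('127.0.0.1:8081'));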

Let’s bombard it with Locust again, and why not with 1000 concurrent connections right away:

Figure 7: Logging payloads with Stack X
Figure 8: Logging payloads with Stack A

We can see that we achieve a pretty reliable, high-performance logger. And this with nothing but pure PHP code!!

Because our intention was always to push the limits, we ran this benchmark with 1000 concurrent users being spawned gradually. Stack A at some point stops handling the requests, while Stack X consistently keeps a very good response time of around 10 ms.

4. What can we use it for now?

With this experiment, we pretty much built a central logging service!! Coming back to our main motivation, we can use this to log whatever we want from any application within our system with a simple non-blocking HTTP request, and we want to start with meaningful domain events. If we log meaningful events, we get to know our customers better. And if we log all of this into Elasticsearch, we can also start making cool graphs from it. For example:

Figure 9: Graph with Business Events

or

Figure 10: Graph with End-user Actions

Since this approach is so responsive, we can even start using it to log anything, and maybe everything, through a single central endpoint: system monitoring, near-real-time analytics, domain events, trends, etc. And all of this with pure PHP :).

5. Cons of the approach and future work

When using ReactPHP there are some important considerations, which in some scenarios can be seen as cons, and which usually do not apply to projects that follow an architecture similar to Stack A:

  • ReactPHP uses reactive/event-driven programming, a paradigm with a potentially steep learning curve.
  • Long-running PHP processes can lead to memory leaks, and in case of failure they can affect all current connections to the server.
  • These processes need to be carefully and continuously monitored in order to predict and avoid fatal failures.
  • The use of blocking functions (functions that block the execution of the code) massively degrades performance for all connections to the server; see the sketch after this list.
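
To illustrate the last point with a toy example (assuming react/event-loop version 1.2 or later for the static Loop API):

<?php
// Sketch: blocking vs. non-blocking waiting inside one event-loop process.
require __DIR__ . '/vendor/autoload.php';

use React\EventLoop\Loop;

// BAD: sleep() freezes the whole process for 2 seconds, so every open
// connection to this server stalls with it.
// sleep(2);

// GOOD: schedule the continuation with a timer and keep the loop free to
// handle other connections in the meantime.
Loop::addTimer(2.0, function () {
    echo '2 seconds later, without blocking anyone' . PHP_EOL;
});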

Also, some extra work on the operational/infra side is needed to make sure that we continuously check the processes' health and, if something goes wrong, automatically create new ones. We also need to work on the way we deploy code: processes must be restarted sequentially when new code is deployed, so that our service can finish serving the requests it has queued at that time.

6. Conclusion

Pushing the limits of our preferred technology is one of the most fun things to do. PHP is not a usual choice when a high-performing application is needed, but thanks to a lot of great work by the community, solutions like ReactPHP are starting to emerge. That opens a new path to discover new programming paradigms, introduces different mindsets on how to approach a problem, and challenges the knowledge we have of the technology.

Challenging what we already know is one of the most interesting things we can do: it takes us out of our comfort zone and helps us become more mature. It is really fun, and it makes us realise that we can never know everything.

We would like to thank and acknowledge everybody in the communities around the tools we used in this fun experiment 🙂


by Joao Castro and Elrich Faul


Leaseweb Cloud AWS EC2 support

As you might know, some of the products LeaseWeb includes in its portfolio are Public and Private Cloud based on Apache CloudStack, which supports a full API. We at LeaseWeb are very open about this, and we try to be as involved and participative in the community and in product development as possible. You might be familiar with this if you are a Private Cloud customer. In this article we target current and former EC2 users, who probably already have tools built upon the AWS CLI, by demonstrating how you can keep using them with LeaseWeb Private Cloud solutions.

Apache CloudStack supported the EC2 API in its early days, but along the way, while the EC2 API evolved, CloudStack's support somewhat stagnated. In fact, the AWS API component of CloudStack was recently detached from the main distribution so as to simplify the maintenance of the code.

While this might sound like bad news, it's not at all. In the meantime, another project spun off, EC2Stack, and was embraced by Apache as well. This new stack supports the latest API (at the time of writing) and is much easier to maintain, both in versatility and in codebase. The fact that it is written in Python opens up the audience for further contribution, while at the same time allowing for quick patching/upgrades without re-compiling.

So, at some point, I thought I could share with you how to quickly set up your own AWS-compatible API so you can reuse your existing scripts. On to the details.

The AWS endpoint acts as an EC2 API provider, proxying requests to the LeaseWeb API, which is an extension of the native CloudStack API. And since this API is available to Private Cloud customers, EC2Stack can be installed by customers themselves.

Following is an illustration of how this can be done. For the record, I’m using Ubuntu 14.04 as my desktop, and I’ll be setting up EC2stack against LeaseWeb’s Private Cloud in the Netherlands.

The first step is to gather all the information EC2Stack needs. Go to your LeaseWeb platform console and obtain the API keys for your user (sensitive information blurred):

[Figure: API keys in the platform console (blurred)]

Note down the values for API Key and Secret Key (you should already know the concepts from AWS and/or LeaseWeb Private Cloud).

Now, install EC2Stack and configure it:

ntavares@mylaptop:~$ pip install ec2stack 
[…]
ntavares@mylaptop:~$ ec2stack-configure 
EC2Stack bind address [0.0.0.0]: 127.0.0.1 
EC2Stack bind port [5000]: 5000 
Cloudstack host [mgt.cs01.leaseweb.net]: csrp01nl.leaseweb.com 
Cloudstack port [443]: 443 
Cloudstack protocol [https]: https 
Cloudstack path [/client/api]: /client/api 
Cloudstack custom disk offering name []: dualcore
Cloudstack default zone name [Evoswitch]: CSRP01 
Do you wish to input instance type mappings? (Yes/No): Yes 
Insert the AWS EC2 instance type you wish to map: t1.micro 
Insert the name of the instance type you wish to map this to: Debian 7 amd64 5GB 
Do you wish to add more mappings? (Yes/No): No 
Do you wish to input resource type to resource id mappings for tag support? (Yes/No): No 
INFO  [alembic.migration] Context impl SQLiteImpl. 
INFO  [alembic.migration] Will assume non-transactional DDL. 

The value for the zone name will be different if your Private Cloud is not in the Netherlands POP. The rest of the values can be obtained from the platform console:

[Figure: service offering name in the platform console (blurred)]

[Figure: template names in the platform console (blurred)]
You will probably have different (and more) mappings to do as you go; just re-run this command later on.

At this point, your EC2Stack proxy should be able to talk to your Private Cloud, so we now need to launch it and instruct it to accept EC2 API calls for your user. For the time being, just run it in a separate shell:

ntavares@mylaptop:~$ ec2stack -d DEBUG 
 * Running on http://127.0.0.1:5000/ 
 * Restarting with reloader

And now register your user using the keys you collected from the first step:

ntavares@mylaptop:~$ ec2stack-register http://localhost:5000 H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E 
Successfully Registered!

And that’s it, as far as the API service is concerned. As you'd normally do with the AWS CLI, you now need to "bind" the CLI to these new credentials:

ntavares@mylaptop:~$ aws configure 
AWS Access Key ID [****************yI2g]: H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT
AWS Secret Access Key [****************L4sw]: PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E
Default region name [CS113]: CSRP01
Default output format: text

And that’s it! You’re now ready to use AWS CLI as you’re used to:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 --output json ec2 describe-images | jq ' .Images[] | .Name ' 
"Ubuntu 12.04 i386 30GB" 
"Ubuntu 12.04 amd64 5GB" 
"Ubuntu 13.04 amd64 5GB" 
"CentOS 6 amd64 5GB" 
"Debian 6 amd64 5GB" 
"CentOS 7 amd64 20140822T1151" 
"Debian 7 64 10 20141001T1343" 
"Debian 6 i386 5GB" 
"Ubuntu 14.04 64bit with docker.io" 
"Ubuntu 12.04 amd64 30GB" 
"Debian 7 i386 5GB" 
"Ubuntu 14.04 amd64 20140822T1234" 
"Ubuntu 12.04 i386 5GB" 
"Ubuntu 13.04 i386 5GB" 
"CentOS 6 i386 5GB" 
"CentOS 6 amd64 20140822T1142" 
"Ubuntu 12.04 amd64 20140822T1247" 
"Debian 7 amd64 5GB"

Please note that I only used JSON output (and jq to parse it) to summarise the results, as any other format wouldn't fit on the page.

To create a VM with built-in SSH keys, you should create/set up your keypair in LeaseWeb Private Cloud, if you haven't already. In the following example I'm generating a new one, but of course you could load your existing keys.

[Figure: SSH key pair creation in the platform console (blurred)]

You will want to copy-paste the generated key (in Private Key) to a file and protect it. I saved mine in $HOME/.ssh/id_ntavares.csrp01.key.

[Figure: the generated SSH key pair (blurred)]

This key will be used later to log into the created instances and extract the administrator password. Finally, instruct the AWS CLI to use this keypair when deploying your instances:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100	10.42.1.129 
PLACEMENT	CSRP01 
STATE	16	running 

Note that the image-id is taken from the previous listing (the one I simplified with JQ).

Also note that EC2stack is fairly new, and there are still some limitations to this EC2-CloudStack bridge (see below for a mapping of supported API calls). For instance, one issue you could run into at the time of writing this article (~2015) was the inability to deploy an instance if you're using multiple Isolated networks (or multiple VPCs). Amazon shares this concept as well, so a simple patch was necessary.

For this demo, we’re actually running in an environment with multiple isolated networks, so if you ran the above command, you’d get the following output:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key
A client error (InvalidRequest) occurred when calling the RunInstances operation: More than 1 default Isolated networks are found for account Acct[47504f6c-38bf-4198-8925-991a5f801a6b-rme]; please specify networkIds

In the meantime, LeaseWeb's patch was merged, as were many others, which demonstrates both the power of Open Source and the activity on this project.

Naturally, the basic maintenance tasks are there:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 describe-instances 
RESERVATIONS	None 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100	10.42.1.129	10.42.1.129 
PLACEMENT	CSRP01	default 
STATE	16	running

I've highlighted some information you'll need to log in to the instance: the instance ID and IP address, respectively. You can log in either with your SSH keypair:

[root@jump ~]# ssh -i $HOME/.ssh/id_ntavares.csrp01.key root@10.42.1.129 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

[...] 
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 
root@a0977df5-d25e-40cb-9f78-b3a551a9c571:~#

If you need, you can also retrieve the password the same way you do with EC2:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 
None dX5LPdKndjsZkUo19Z3/J3ag4TFNqjGh1OfRxtzyB+eRnRw7DLKRE62a6EgNAdfwfCnWrRa0oTE1umG91bWE6lJ5iBH1xWamw4vg4whfnT4EwB/tav6WNQWMPzr/yAbse7NZHzThhtXSsqXGZtwBNvp8ZgZILEcSy5ZMqtgLh8Q=

As with EC2, the password is returned encrypted, so you'll need your private key to display it:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 | awk '{print $2}' > tmp.1
ntavares@mylaptop:~$ openssl enc -base64 -in tmp.1 -out tmp.2 -d -A 
ntavares@mylaptop:~$ openssl rsautl -decrypt -in tmp.2 -out tmp.3 -inkey $HOME/.ssh/id_ntavares.csrp01.key 
ntavares@mylaptop:~$ cat tmp.3 ; echo 
hI5wueeur
ntavares@mylaptop:~$ rm -f tmp.{1,2,3} 
[root@jump ~]# sshpass -p hI5wueeur ssh root@10.42.1.129 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

[...]
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 
Last login: Thu Dec  4 13:33:07 2014 from jump.rme.leasewebcloud.com 
root@a0977df5-d25e-40cb-9f78-b3a551a9c571:~#

The multiple isolated networks scenario

If you're already running multiple Isolated networks in your target platform (whether VPC-bound or not), you'll need to pass the --subnet-id argument to the run-instances command to specify which network to deploy the instance into; otherwise CloudStack will complain about not knowing which network to deploy the instance in. I believe this is because Amazon doesn't allow the use of Isolated networking as freely as LeaseWeb, which gives you full flexibility in the platform console.

EC2stack does not support describe-network-acls (as of December 2014), which would otherwise let you determine which Isolated networks you can use, so the easiest way to find them is to go to the platform console and copy & paste the Network ID of the network you're interested in:

Then you can use --subnet-id:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key \
 --subnet-id 5069abd3-5cf9-4511-a5a3-2201fb7070f8
PLACEMENT	CSRP01 
STATE	16	running 

I hope I have demonstrated a bit of what can be done with an EC2-compatible API. Other functions are available for more complex tasks, although, as I wrote earlier, EC2stack is quite new, so you might need community assistance if you cannot develop a fix on your own. At LeaseWeb we are very interested to know your use cases, so feel free to drop us a note.
