Leaseweb Cloud AWS EC2 support

As you might know, some of the products LeaseWeb includes in its portfolio are Public and Private Cloud offerings based on Apache CloudStack, which supports a full API. We at LeaseWeb are very open about this, and we try to be as involved and participative in the community and product development as possible. You might be familiar with this if you are a Private Cloud customer. In this article we target current and former EC2 users, who probably already have tools built upon the AWS CLI, by demonstrating how you can keep using them with LeaseWeb Private Cloud solutions.

Apache CloudStack supported the EC2 API in its early days, but along the way, while the EC2 API evolved, CloudStack’s support somewhat stagnated. In fact, the AWS API component of CloudStack was recently detached from the main distribution so as to simplify maintenance of the code.

While this might sound like bad news, it’s not – at all. In the meantime, another project spun off, EC2Stack, and was embraced by Apache as well. This new stack supports the latest API (at the time of writing) and is much easier to maintain, both in versatility and in codebase. The fact that it is written in Python opens up the audience for further contribution, while at the same time allowing for quick patching/upgrading without re-compiling.

So, at some point, I thought I could share with you how to quickly set up your AWS-compatible API so you can reuse your existing scripts. On to the details.

The AWS endpoint acts as an EC2 API provider, proxying requests to the LeaseWeb API, which is an extension of the native CloudStack API. And since this API is available to Private Cloud customers, EC2Stack can be installed by customers themselves.

Following is an illustration of how this can be done. For the record, I’m using Ubuntu 14.04 as my desktop, and I’ll be setting up EC2stack against LeaseWeb’s Private Cloud in the Netherlands.

The first step is to gather all the information for EC2stack. Go to your LeaseWeb platform console and obtain API keys for your user (sensitive information blurred):


Note down the values for API Key and Secret Key (you should already know the concepts from AWS and/or LeaseWeb Private Cloud).

Now, install EC2Stack and configure it:

ntavares@mylaptop:~$ pip install ec2stack 
ntavares@mylaptop:~$ ec2stack-configure 
EC2Stack bind address []: 
EC2Stack bind port [5000]: 5000 
Cloudstack host []: 
Cloudstack port [443]: 443 
Cloudstack protocol [https]: https 
Cloudstack path [/client/api]: /client/api 
Cloudstack custom disk offering name []: dualcore
Cloudstack default zone name [Evoswitch]: CSRP01 
Do you wish to input instance type mappings? (Yes/No): Yes 
Insert the AWS EC2 instance type you wish to map: t1.micro 
Insert the name of the instance type you wish to map this to: Debian 7 amd64 5GB 
Do you wish to add more mappings? (Yes/No): No 
Do you wish to input resource type to resource id mappings for tag support? (Yes/No): No 
INFO  [alembic.migration] Context impl SQLiteImpl. 
INFO  [alembic.migration] Will assume non-transactional DDL. 

The value for the zone name will be different if your Private Cloud is not in the Netherlands POP. The rest of the values can be obtained from the platform console:


You will probably have different (and more) mappings to do as you go; just re-run this command later on.

At this point, your EC2stack proxy should be able to talk to your Private Cloud, so we now need to launch it so that it accepts EC2 API calls for your user. For the time being, just run it in a separate shell:

ntavares@mylaptop:~$ ec2stack -d DEBUG 
 * Running on 
 * Restarting with reloader

And now register your user using the keys you collected from the first step:

ntavares@mylaptop:~$ ec2stack-register http://localhost:5000 H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E 
Successfully Registered!

And that’s it, as far as the API service is concerned. As you’d normally do with the AWS CLI, you now need to “bind” the CLI to these new credentials:

ntavares@mylaptop:~$ aws configure 
AWS Access Key ID [****************yI2g]: H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT
AWS Secret Access Key [****************L4sw]: PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E
Default region name [CS113]: CSRP01
Default output format: text

And that’s it! You’re now ready to use AWS CLI as you’re used to:

ntavares@mylaptop:~$ aws --endpoint= --output json ec2 describe-images | jq ' .Images[] | .Name ' 
"Ubuntu 12.04 i386 30GB" 
"Ubuntu 12.04 amd64 5GB" 
"Ubuntu 13.04 amd64 5GB" 
"CentOS 6 amd64 5GB" 
"Debian 6 amd64 5GB" 
"CentOS 7 amd64 20140822T1151" 
"Debian 7 64 10 20141001T1343" 
"Debian 6 i386 5GB" 
"Ubuntu 14.04 64bit with" 
"Ubuntu 12.04 amd64 30GB" 
"Debian 7 i386 5GB" 
"Ubuntu 14.04 amd64 20140822T1234" 
"Ubuntu 12.04 i386 5GB" 
"Ubuntu 13.04 i386 5GB" 
"CentOS 6 i386 5GB" 
"CentOS 6 amd64 20140822T1142" 
"Ubuntu 12.04 amd64 20140822T1247" 
"Debian 7 amd64 5GB"

Please note that I only used JSON output (and jq to parse it) to summarise the results, as any other format wouldn’t fit on the page.

To create a VM with built-in SSH keys, you should create/set up your keypair in the LeaseWeb Private Cloud, if you didn’t already. In the following example I’m generating a new one, but of course you could load your existing keys.


You will want to copy-paste the generated key (in Private Key) to a file and protect it. I saved mine in $HOME/.ssh/id_ntavares.csrp01.key.


This key will be used later to log into the created instances and extract the administrator password. Finally, instruct the AWS CLI to use this keypair when deploying your instances:

ntavares@mylaptop:~$ aws --endpoint= ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100 
STATE	16	running 

Note that the image-id is taken from the previous listing (the one I simplified with JQ).

Also note that EC2stack is fairly new, and there are still some limitations to this EC2-CS bridge – see below for a mapping of supported API calls. For instance, one that you could run into at the time of writing this article (~2015) was the inability to deploy an instance if you’re using multiple Isolated networks (or multiple VPCs). Amazon shares this concept as well, so a simple patch was necessary.

For this demo, we’re actually running in an environment with multiple isolated networks, so if you ran the above command, you’d get the following output:

ntavares@mylaptop:~$ aws --endpoint= ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key
A client error (InvalidRequest) occurred when calling the RunInstances operation: More than 1 default Isolated networks are found for account Acct[47504f6c-38bf-4198-8925-991a5f801a6b-rme]; please specify networkIds

In the meantime, LeaseWeb’s patch was merged, as were many others, which demonstrates both the power of Open Source and the activity on this project.

Naturally, the basic maintenance tasks are there:

ntavares@mylaptop:~$ aws --endpoint= ec2 describe-instances 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100 
STATE	16	running

I’ve highlighted some information you’ll need to log in to the instance: the instance ID and the IP address, respectively. You can log in either with your SSH keypair:

[root@jump ~]# ssh -i $HOME/.ssh/id_ntavares.csrp01.key root@ 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 

If you need, you can also retrieve the password the same way you do with EC2:

ntavares@mylaptop:~$ aws --endpoint= ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 
None dX5LPdKndjsZkUo19Z3/J3ag4TFNqjGh1OfRxtzyB+eRnRw7DLKRE62a6EgNAdfwfCnWrRa0oTE1umG91bWE6lJ5iBH1xWamw4vg4whfnT4EwB/tav6WNQWMPzr/yAbse7NZHzThhtXSsqXGZtwBNvp8ZgZILEcSy5ZMqtgLh8Q=

As it happens with EC2, the password is returned encrypted, so you’ll need your key to display it:

ntavares@mylaptop:~$ aws --endpoint= ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 | awk '{print $2}' > tmp.1
ntavares@mylaptop:~$ openssl enc -base64 -in tmp.1 -out tmp.2 -d -A
ntavares@mylaptop:~$ openssl rsautl -decrypt -in tmp.2 -out tmp.3 -inkey $HOME/.ssh/id_ntavares.csrp01.key
ntavares@mylaptop:~$ cat tmp.3 ; echo
ntavares@mylaptop:~$ rm -f tmp.{1,2,3}
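As a side note, the base64-decode and RSA-decrypt steps can be collapsed into a single pipeline (same temp file and key file as above):

```shell
# Decode the base64 blob saved in tmp.1 and decrypt it with the private
# key in one go, printing the cleartext password to stdout:
openssl enc -base64 -d -A -in tmp.1 \
  | openssl rsautl -decrypt -inkey $HOME/.ssh/id_ntavares.csrp01.key
```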
[root@jump ~]# sshpass -p hI5wueeur ssh root@ 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 
Last login: Thu Dec  4 13:33:07 2014 from 

The multiple isolated networks scenario

If you’re already running multiple isolated networks in your target platform (be it VPC-bound or not), you’ll need to pass the argument --subnet-id to the run-instances command to specify which network to deploy the instance into; otherwise CloudStack will complain about not knowing in which network to deploy the instance. I believe this is because Amazon doesn’t allow the use of Isolated Networking as freely as LeaseWeb does – LeaseWeb gives you full flexibility in the platform console.

Since EC2stack does not support describe-network-acls (as of December 2014), which would let you determine which Isolated networks you can use, the easiest way to find them is to go to the platform console and copy & paste the Network ID of the network you’re interested in:

Then you can use --subnet-id:

ntavares@mylaptop:~$ aws --endpoint= ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key \
 --subnet-id 5069abd3-5cf9-4511-a5a3-2201fb7070f8
STATE	16	running 

I hope I demonstrated a bit of what can be done with regard to the compatible EC2 API. Other functions are available for more complex tasks, although, as written earlier, EC2stack is quite new, so you might need community assistance if you cannot develop a fix on your own. At LeaseWeb we are very interested to know your use cases, so feel free to drop us a note.


SSHFS + Linux = SFTP powered cloud storage


Do you like cloud storage? Did you read the comparison between Dropbox, Google Drive, One Drive, and Box? Still cannot decide? Great! Then this article is for you. After reading it, you will probably decide to get yourself a Linux box and build your own custom cloud storage using Linux and SSHFS.

In computing, SSHFS (SSH Filesystem) is a filesystem client to mount and interact with directories and files located on a remote server or workstation. The client interacts with the remote file system via the SSH File Transfer Protocol (SFTP), a network protocol providing file access, file transfer, and file management functionality over any reliable data stream that was designed as an extension of the Secure Shell protocol (SSH) version 2.0. – Wikipedia

Enable file sharing over SSH (SFTP)

SFTP is the secure variant of the file transfer protocol (FTP). A (Debian-based) Linux server only needs an SSH server to serve the home directories of the local users via SFTP. The following command enables this:

sudo apt-get install openssh-server

To install and enable the firewall:

sudo apt-get install ufw
sudo ufw allow 22
sudo ufw enable

To find the public IP address:
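One common way to do this from the command line (ifconfig.me is just an example of the many “what is my IP” services; any of them will do):

```shell
# Ask an external service which address the Internet sees for you:
curl -s ifconfig.me
# Or list the addresses on the local interfaces (these will be private
# addresses if you are behind a NAT router):
hostname -I
```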


You need this IP address and the default port (22) to connect to your cloud storage. Note that if you run this Linux box at home, you need to forward TCP port 22 on your broadband (DSL or cable) modem/router. You can look up how to do this for your specific device on one of the port-forwarding guide sites.

Advantages of SSHFS over public cloud storage

  • You can use your cloud server also as a web server, application server, database server, mail server, and DNS server (flexibility)
  • The cost per GB of storage and GB transferred is very low (costs)
  • More control over privacy of the data (security)

Disadvantages of SSHFS compared to public cloud storage

  • No automatic backups (data safety), but you can set up an rsync cron job
  • No web interface available, but you could install one
  • No built-in document versioning, but you can use Git
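The rsync cron job mentioned above could be a single crontab entry; the user, host, and paths below are placeholders for this sketch:

```shell
# Added with `crontab -e`: pull a nightly copy of the remote home
# directory at 03:00 (adjust user, host, and paths to your setup).
0 3 * * * rsync -az --delete maurits@cloudserver:/home/maurits/ /backup/cloudserver/
```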

SSHFS client for Linux

Linux has support for Filesystem in Userspace (FUSE). This means it supports mounting volumes without having root access. This is especially good when mounting external storage for a specific user. A cloud storage filesystem accessed over SSH is something you typically want to mount for a specific user.

To enable access to your cloud storage on Linux you must first make a directory where you want to mount the cloud storage. It is very convenient when the directory is automatically mounted as soon as it is accessed. This can be achieved by using the AutoFS tool. The AutoFS service (daemon), once installed, takes care of automatically mounting and unmounting the directory.

sudo apt-get install autofs sshfs
sudo nano /etc/auto.sshfs
sudo nano /etc/auto.master
ssh maurits@cloudserver
sudo service autofs restart

Now we have to create an AutoFS configuration that states in which directory the remote location is mounted. The following configuration tells AutoFS to use SSHFS to mount the directory “/home/maurits” from “maurits@cloudserver” onto the local “cloudserver” directory.

maurits@nuc:~$ cat /etc/auto.sshfs
cloudserver -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536,IdentityFile=/home/maurits/.ssh/id_rsa,UserKnownHostsFile=/home/maurits/.ssh/known_hosts :sshfs\#maurits@cloudserver:/home/maurits

At the end of the file “/etc/auto.master” we add the following lines:

# Below are the sshfs mounts
/home/maurits/ssh /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost

This means that the local directory “/home/maurits/ssh” will hold the directory “cloudserver” that we specified earlier. As you can see, I also specified the user that owns the files and the number of seconds of inactivity after which the directory is unmounted and the SSH connection is closed.

Before everything works, you must make sure you add yourself to the “fuse” group using the following command, or the mounting will fail:

sudo usermod -a -G fuse maurits

After doing this you may have to logout and login again before the changes are effective.
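After logging back in, you can verify that the group change took effect (the username is an example):

```shell
# List the groups of the user; "fuse" should now be among them:
id -nG maurits
```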

This setup allows me to edit the remote files as if they are locally available. Other software is not aware that the files are actually on a remote location. This also means I can use my favorite editors and/or stream the media files using my favorite media player.


Enhanced security using EncFS

You can enhance the security of your cloud storage by adding EncFS on top of your SSH-mounted filesystem; I’ll leave this as an exercise for the reader. EncFS can encrypt the files (and filenames) on the storage with AES-256. Using encryption may prevent the data from being leaked in some cases, for instance when a broken disk needs replacement. On the downside, there are not many clients that support this.

SFTP in read-only mode

If you do not want to risk corrupting files due to broken connections while writing, you can choose to run the SFTP subsystem in read-only mode. To do this, add the -R flag to the SFTP subsystem line in “/etc/ssh/sshd_config” so that it becomes:

Subsystem sftp /usr/lib/openssh/sftp-server -R

In my experience this type of file corruption does not happen a lot, but better safe than sorry. This will also prevent you from accidentally deleting files. So if you do not need to write anyway, you should put the system in read-only mode for safety reasons. Note that you can still use rsync when the SFTP system is in read-only mode.

Disable password login for SSHD

Using passwords to log in to SSH is not the most secure solution. If you open up your SSH to the Internet, you should be using public key authentication. After setting that up, you can disable password login by putting this line into the “/etc/ssh/sshd_config” file:

PasswordAuthentication no
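If you have not set up public key authentication yet, the usual steps look like this (user and host are placeholders); install your key while password logins still work, or you will lock yourself out:

```shell
# Generate a key pair on the client (no passphrase here for brevity;
# consider using one in practice):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
# Install the public key on the server:
ssh-copy-id -i ~/.ssh/id_rsa.pub maurits@cloudserver
# Reload sshd on the server after editing sshd_config:
sudo service ssh reload
```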

SSHFS clients for other platforms

If you are not working on your Linux box but want to access your SSHFS cloud storage, you can use one of the following clients (which all support private keys). I personally tested a lot of clients, and although there are plenty of choices, I recommend the following (none of these clients support EncFS):

Open source Windows client

Open source OSX client

Free iOS (iPhone/iPad) client (no media streaming support)

Free Android client (no media streaming support)

Final words

We have shown you how to set up your own cloud storage. Some may say it is not as good as Dropbox or Google Drive or any other commercial provider; others may argue it is better. What is good about it is the large choice of clients available for this kind of cloud storage, thanks to the open source nature of the technology.


VM template creation with oz-install

When you’re managing your infrastructure in the cloud and are not satisfied with the pre-built VM templates, you will have to create your own. Creating templates by hand, especially when you update them regularly, is a very tedious and error-prone task that should be avoided.

There are quite a few tools around to help automate this process. For example, one could combine the virt-install tool with tools from the libguestfs project, or use the scripts from the ubuntu-vm-builder project. One of the benefits of oz-install is its extensive OS support (it even supports creating Windows templates). In this post I will show you how to create a basic Ubuntu template with oz-install.

To build VM templates efficiently, your build machine requires a processor with virtualization extensions. With a recent Linux kernel you could virtualize these extensions (nested virtualization), but that will definitely slow down the whole process.
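You can check whether the extensions are available with a quick look at /proc/cpuinfo (vmx indicates Intel VT-x, svm indicates AMD-V):

```shell
# Count the CPU flags that indicate hardware virtualization support;
# grep -c prints 0 (and exits non-zero) when none are found.
grep -E -c '(vmx|svm)' /proc/cpuinfo || echo "no virtualization extensions found"
```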

Package dependencies

To build the oz-install package, you have to make sure the following packages are installed on your system:

  • debhelper (>= 8.0.0)
  • python-all (>= 2.6.6-3)
  • build-essential
  • git-core (if you are fetching the source via Git)

dpkg-buildpackage will complain when these packages are missing.

There are also several run-time dependencies; if dpkg reports missing packages when you install oz-install, running “apt-get -f install” will pull them in.

Building the package

Building oz-install is not very difficult. The following steps are confirmed to work on Ubuntu 12.04, but will probably work on most Debian-based Linux systems.

  1. Fetch the latest source from GitHub:
    mkdir ~/oz
    cd ~/oz
    git clone oz-git
  2. Build the deb with dpkg-buildpackage:
    cd ~/oz/oz-git
    dpkg-buildpackage -us -uc
  3. Now you can install the package you built:
    cd ~/oz
    dpkg -i oz_*_all.deb

Creating your first (simple) template

After installing the package, you can use the following configuration files to create a basic Ubuntu 12.04 template. Feel free to experiment with the various configuration options or supported operating systems (invoke oz-install without any options to view the supported operating systems).

Apart from OS installation, you can add customization options to the template definition file. This feature can be used to run shell commands on the template after it is installed. oz-customize will start the template and use SSH to connect to the machine and run the shell commands.

  1. Create your template definition file with your favourite editor (~/oz/my-template.tdl):
    <template>
      <name>my-template</name>
      <os>
        <name>Ubuntu</name>
        <version>12.04</version>
        <arch>x86_64</arch>
        <install type='url'>
          <url><!-- Ubuntu 12.04 install tree URL --></url>
        </install>
      </os>
      <description>My first oz template</description>
      <commands>
        <command name='hostname'>
          echo 'my-template' > /etc/hostname
        </command>
      </commands>
    </template>
  2. Create your preseed file with your favorite editor (~/oz/my-template.preseed):
    d-i debian-installer/locale string en_US.UTF-8
    d-i console-setup/ask_detect boolean false
    d-i console-setup/layoutcode string us
    d-i netcfg/choose_interface select auto
    d-i netcfg/get_hostname string unassigned-hostname
    d-i netcfg/get_domain string unassigned-domain
    d-i netcfg/wireless_wep string
    d-i clock-setup/utc boolean true
    d-i time/zone string US/Eastern
    d-i partman-auto/method string regular
    d-i partman-auto/choose_recipe select home
    d-i partman/confirm_write_new_label boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true
    d-i passwd/root-login boolean true
    d-i passwd/make-user boolean false
    d-i passwd/root-password password %ROOTPW%
    d-i passwd/root-password-again password %ROOTPW%
    tasksel tasksel/first multiselect standard
    d-i pkgsel/include/install-recommends boolean true
    d-i pkgsel/include string openssh-server python-software-properties wget whois curl acpid
    d-i grub-installer/only_debian boolean true
    d-i grub-installer/with_other_os boolean true
    d-i apt-setup/security_host string
    base-config apt-setup/security-updates boolean false
    ubiquity ubiquity/summary note
    ubiquity ubiquity/reboot boolean true
    d-i finish-install/reboot_in_progress note
    d-i mirror/country string manual
    d-i mirror/http/hostname string
    d-i mirror/http/directory string /ubuntu
  3. Run oz-install with customize options:
    cd ~/oz
    oz-install -b virtio -n virtio -p -u \
      -x ~/oz/my-template.xml \
      -a ~/oz/my-template.preseed \
      ~/oz/my-template.tdl
  4. (optional) Monitor the installation with virt-viewer:
    virt-viewer my-template
  5. (optional) Import the template in libvirt and start it:
    virsh define ~/oz/my-template.xml
    virsh start my-template

Take this as an example and start experimenting with all the options available. After playing around with oz-install you should be able to create a structured template creation workflow. When you combine this with a set of custom scripts, you can integrate the flow with, for example, Jenkins, to easily add new or updated templates to your virtualization or cloud platform in a modular way.