Fast dynamic DNS with cron, PHP and DuckDNS

My home has a 200 Mbit cable Internet connection with 20 Mbit up. Great for running a server, but every two days my ISP changes my IP address. When this happens I cannot connect to my home network anymore using VPN. Annoying, but certainly a (programming) challenge for me. The simple fix for this is to use a dynamic DNS service. The name DynDNS popped up in my head, but apparently they are not free anymore (bummer). That’s why I chose to use the free dynamic DNS service “DuckDNS”. Then I realized that I do want a fast update of my dynamic DNS entry when my IP address changes, but I do not want to hammer DuckDNS. That’s why I wrote a small script to achieve this. You can find it below.

DuckDNS PHP script to avoid hammering

On my website I installed the following PHP script that calls DuckDNS whenever the IP address of the caller has changed. It must be called with a POST request that holds a shared secret, which prevents bots (or hackers) from changing the DNS entry. Note that HTTPS (SSL) is additionally used to guarantee confidentiality.

<?php
// settings
$domains = 'cable-at-home'; // cable-at-home.duckdns.org
$token = 'eb1183a2-153b-11e5-b60b-1697f925ec7b';
$ip = $_SERVER['REMOTE_ADDR'];
$file = '/tmp/duckdns.txt';
$secret = 'VeryHardToGuess';
// compare secret
if (!isset($_POST['secret']) || $_POST['secret']!=$secret) { http_response_code(403); die(); }
// compare with current ip
if ($ip==file_get_contents($file)) { http_response_code(304); die('OK'); }
// create url
$url = "https://www.duckdns.org/update?domains=$domains&token=$token&ip=$ip";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$result = curl_exec($ch);
curl_close($ch);
// if success update current ip
if ($result!='OK') { http_response_code(400); die($result); }
file_put_contents($file,$ip);
die('OK');

Install this script somewhere in your Apache “DocumentRoot” and name it “duckdns.php”.

Cron script that runs every minute

Using the “crontab -e” command, I installed the following cron job on the server that runs in my home and is connected to the Internet by cable:

* * * * * /usr/bin/curl -X POST -d 'secret=VeryHardToGuess' https://somedomain.com/duckdns.php

Every minute this cron job executes a curl call to the duckdns.php script on my website (somedomain.com). Only when the IP address has changed is the call to DuckDNS (https://www.duckdns.org/update) made to update the DNS entry. This avoids hammering the DuckDNS service while still giving you the fastest possible response to an IP address change.

Installation

Note that in order to make this work you have to create an account at DuckDNS and then modify the “$domains” and “$token” parameters in the PHP script accordingly. You also need to replace “somedomain.com” in the cron job with the URL of your website. And do not forget to replace “VeryHardToGuess” in both the PHP script and the cron job with a real secret. Any questions? Use the comments below!


Heka monolog decoder

This post is about how to use heka to give your symfony 2 application logs the care they deserve.

Application logs are very important for the quality of the product or service you are offering.

They help you find out what went wrong, so you can explain and fix a bug that was reported recently. Or they let you gather statistics to see how often a certain feature is used, for example how many bare metal reinstallation requests were issued last month and how many of those failed. Valuable information that you can use to decide what feature you are going to work on next.

At LeaseWeb we use quite a lot of PHP, and Seldaek’s monolog is our logging library of choice. Recently we open sourced a heka decoder on github. For those of you who do not know heka yet, check out its documentation.

Heka is an open source stream processing software system developed by Mozilla. Heka is a “Swiss Army Knife” type tool for data processing, useful for a wide variety of different tasks, such as …

Heka runs as a system daemon just like logstash or fluentd. Heka is written in go and comes with an easy to use plugin system based on lua. It has almost no dependencies and is lightweight. James Turnbull has written a nice article on how to get started with heka.

send symfony 2 application logs to elastic search

What better way to explain than with an example.

Let’s say you have a symfony 2 application and you want the application logs to be sent to an Elastic Search platform.

On a Debian-based OS you can download one of the heka debian packages from github.

    $ wget https://github.com/mozilla-services/heka/releases/download/v0.9.2/heka_0.9.2_i386.deb
    $ sudo dpkg -i heka_0.9.2_i386.deb

To configure heka you need to edit the configuration file located at /etc/hekad.toml.

    $ vim /etc/hekad.toml

Take your time to read the excellent documentation on heka. There are many ways of using heka but we will use it as a forwarder:

  1. Define an input: where the messages come from.
  2. Tell heka how it should decode the monolog log line into a heka message.
  3. Tell heka how to encode the message so Elastic Search will understand it.
  4. Define an output: where the messages should be sent to.

First we define the input:

    [Symfony2MonologFileInput]
    type = "LogstreamerInput"
    log_directory = "/var/www/app/logs"
    file_match = 'prod\.log'
    decoder = "Symfony2MonologDecoder"

Adjust `log_directory` and `file_match` according to your setup. As you can see we already told heka to use the `Symfony2MonologDecoder`, so we will define that one next:

    [Symfony2MonologDecoder]
    type = "SandboxDecoder"
    filename = "/etc/symfony2_decoder.lua"

Change `filename` to the path where you placed the lua script on your system.

Now that we have defined the input, we can tell heka how to encode the messages and where to output them:

    [ESJsonEncoder]
    index = "%{Hostname}"
    es_index_from_timestamp = true
    type_name = "%{Type}"

    [ElasticSearchOutput]
    message_matcher = "TRUE"
    server = "http://192.168.100.1:9200"
    flush_interval = 5000
    flush_count = 10
    encoder = "ESJsonEncoder"

In the above example we assume that your Elastic Search server is running at 192.168.100.1.

And that’s it.

A simple log line in app/logs/prod.log:

    [2015-06-03 22:08:02] app.INFO: Dit is een test {"bareMetalId":123,"os":"centos"} {"token":"556f5ea25f6af"}

Is now sent to Elastic Search. You should now be able to query your Elastic Search for log messages, assuming the hostname of your server running symfony 2 is myapi:

    $ curl http://192.168.100.1:9200/myapi/_search | python -mjson.tool
    {
        "_shards": {
            "failed": 0,
            "successful": 5,
            "total": 5
        },
        "hits": {
            "hits": [
                {
                    "_id": "ZIV7ryZrQRmXDiB6thY_yQ",
                    "_index": "myapi",
                    "_score": 1.0,
                    "_source": {
                        "EnvVersion": "",
                        "Hostname": "myapi",
                        "Logger": "Symfony2MonologFileInput",
                        "Payload": "Dit is een test",
                        "Pid": 0,
                        "Severity": 7,
                        "Timestamp": "2015-06-03T20:08:02.000Z",
                        "Type": "logfile",
                        "Uuid": "344e7cae-6ab7-4fb2-a770-d2cbad6653c3",
                        "channel": "app",
                        "levelname": "INFO",
                        "bareMetalId": 123,
                        "os": "centos",
                        "token": "556f5ea25f6af"
                    },
                    "_type": "logfile"
                },
                // ...
            ]
        }
    }

What is important to notice is that the keys token, bareMetalId and os in the monolog log line end up in Elastic Search as indexable fields. From your PHP code you can add this extra information to your monolog messages by supplying an associative array as a second argument to the default monolog log functions:

    <?php

    $logger = $this->logger;
    $logger->info('The server was reinstalled', array('bareMetalId' => 123, 'os' => 'centos'));
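
If you are not on Symfony 2 and just want to try this locally, a minimal standalone monolog setup looks roughly like this; the log path and channel name are assumptions, not part of the original setup:

    <?php

    use Monolog\Logger;
    use Monolog\Handler\StreamHandler;

    // Minimal sketch: write monolog lines to the same kind of prod.log file
    // that heka tails in this example. Path and channel name are assumptions.
    require __DIR__ . '/vendor/autoload.php';

    $logger = new Logger('app');
    $logger->pushHandler(new StreamHandler(__DIR__ . '/app/logs/prod.log', Logger::INFO));

    // The context array ends up as indexable fields in Elastic Search.
    $logger->info('The server was reinstalled', array('bareMetalId' => 123, 'os' => 'centos'));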

Happy logging!


Chef server API integration with PHP

In this post I will show you a quick example of how you can integrate with the chef server api from php.

If you don’t know chef, I recommend having a look at https://www.chef.io. Chef is a configuration management tool, similar to ansible or puppet.

Chef turns infrastructure into code. With Chef, you can automate how you build, deploy, and manage your infrastructure.

At LeaseWeb our infrastructure that supports our business consists of many machines. For us it was a logical step to use a configuration management tool to manage all those servers and we chose chef. We also use chef to automate most of our (web) application deployments.

As our “chef managed” infrastructure grew and deploying fixes and features became easier and more frequent, we needed something that lets our organisation know what is being deployed and when.

PHP is the main language we use here, and we use Guzzle for quick and easy integration with REST APIs and web services.

Guzzle is a PHP HTTP client that makes it easy to send HTTP requests and trivial to integrate with web services.

Read more about guzzle here http://guzzle.readthedocs.org/.

We have created a plugin for Guzzle3 that implements the chef server authentication algorithm as described in the documentation at https://docs.chef.io/auth.html.

The plugin can be found on our github page https://github.com/LeaseWeb/chefauth-guzzle-plugin.

The plugin takes care of adding all the necessary http headers and signing the request to make a fully authenticated call to the chef server.

To start consuming the chef server rest api, either check out the source code with git or add the plugin as a dependency to your project using `composer`:

    php composer.phar require "leaseweb/chef-guzzle-plugin":"1.0.0"

Once you have created a user in chef, the two things you need to get started are the client name of this user (in this example we assume my-dashboard) and the private key of this client (in this example we assume it is stored in my-dashboard.pem):

    <?php

    use Guzzle\Http\Client;
    use LeaseWeb\ChefGuzzle\Plugin\ChefAuth\ChefAuthPlugin;

    // Supply your client name and location of the private key.
    $chefAuthPlugin = new ChefAuthPlugin("my-dashboard", "my-dashboard.pem");

    // Create a new guzzle client
    $client = new Client('https://manage.opscode.com');
    $client->addSubscriber($chefAuthPlugin);

    // Now you can make calls to the chef server
    $response = $client->get('/organizations/my-organization/nodes')->send();

    $nodes = $response->json();

There is a ton of things you can do with the chef api; read more about it at https://docs.chef.io/api_chef_server.html.
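
For example, with the same authenticated client you can fetch a single node. This is just a sketch; the organization and node names below are made up:

    <?php

    use Guzzle\Http\Client;
    use LeaseWeb\ChefGuzzle\Plugin\ChefAuth\ChefAuthPlugin;

    // Same setup as in the previous example.
    $chefAuthPlugin = new ChefAuthPlugin("my-dashboard", "my-dashboard.pem");

    $client = new Client('https://manage.opscode.com');
    $client->addSubscriber($chefAuthPlugin);

    // Fetch one node; the organization and node name are made up.
    $node = $client->get('/organizations/my-organization/nodes/web-server-01')->send()->json();

    // The node object contains, among other things, its run list.
    print_r($node['run_list']);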

Hopefully this plugin will make it easier to integrate your chef’ed infrastructure into your company processes.

We are playing around with:

  • automatically generating release notes for our applications,
  • automatically updating our issue tracking systems after a chef deployment,
  • and many more.

Automatically provision your bare metal infrastructure

At LeaseWeb we are all about automating delivery processes, be it for our virtual products or our bare metal products. This post shows you one of the many things you can do with our API.

If you have a bare metal server at LeaseWeb, I encourage you to log in to our customer portal, the LeaseWeb Self Service Center, at https://secure.leaseweb.com. In the API section you can manage your api keys for accessing the LeaseWeb API. To read more about what you can do with our API, head over to the LeaseWeb Developer Portal.

Recently we have published new api calls on our developer portal for customers to manage dhcp leases for their bare metal servers.

These api calls expose our internal dhcp infrastructure, which we use for automation, to our customers as a service.

    GET    /bareMetals/{bareMetalId}/leases                 # list all leases
    POST   /bareMetals/{bareMetalId}/leases                 # create a lease
    DELETE /bareMetals/{bareMetalId}/leases/{macAddress}    # delete a lease

Customers can use it to install operating systems which are not available in the LeaseWeb Self Service Center or if they would like to automatically provision their bare metal infrastructure.

When you use our api to create a dhcp lease you can specify dhcp option 67 Bootfile Name. We chainload the open source ipxe network boot firmware, which has http support (read more about ipxe on their website http://ipxe.org/). This means that you can provide a valid http url for dhcp option 67 Bootfile Name that points to a pxe script containing instructions for what the boot loader should do next.

For example: let’s say you own the webserver at webserver.example.com where you have placed the following ipxe script at /boot.ipxe:

    $ curl http://webserver.example.com/boot.ipxe

    #!ipxe
    dhcp
    kernel http://webserver.example.com/archiso/boot/x86_64/vmlinuz archisobasedir=archiso archiso_http_srv=http://webserver.example.com/ ip=:::::eth0:dhcp
    initrd http://webserver.example.com/archiso/boot/x86_64/archiso.img
    boot

You can now create a dhcp lease for your bare metal server using our api:

    $ curl -H 'X-Lsw-Auth: my-api-key' -X POST https://api.leaseweb.com/v1/bareMetals/{bareMetalId}/leases -d bootFileName="http://webserver.example.com/boot.ipxe"

Obviously replace {bareMetalId} with the id of your bare metal server. To view the dhcp lease that we just created you can use this call:

    $ curl -H 'X-Lsw-Auth: my-api-key' https://api.leaseweb.com/v1/bareMetals/{bareMetalId}/leases
    
    {
        "_metadata": {
            "limit": 10, 
            "offset": 0, 
            "totalCount": 1
        }, 
        "leases": [
            {
                "ip": "203.0.113.1", 
                "mac": "AA:AA:AA:AA:AA:AA", 
                "options": [
                    // ...
                    {
                        "name": "Bootfile Name", 
                        "optionId": "67", 
                        "policyName": null, 
                        "type": "String", 
                        "userClass": "gPXE", 
                        "value": "http://webserver.example.com/boot.ipxe", 
                        "vendorClass": ""
                    }
                    // ...
                ], 
                "scope": "203.0.113.0"
            }
        ]
    }

Now you have to manually reboot your server or use our api to issue a power cycle:

    $ curl -H 'X-Lsw-Auth: my-api-key' -X POST https://api.leaseweb.com/v1/bareMetals/{bareMetalId}/reboot

The server will reboot, ask for a dhcp lease and eventually read the instructions you provided in /boot.ipxe, which in this example download a kernel and the archlinux live cd, both served from your web server at `webserver.example.com`.

Be careful not to forget to remove the dhcp lease when you are done. Otherwise, during the next reboot the server will boot from the network again.

    $ curl -H 'X-Lsw-Auth: my-api-key' -X DELETE https://api.leaseweb.com/v1/bareMetals/{bareMetalId}/leases/AA:AA:AA:AA:AA:AA

We automatically remove dhcp leases after 24 hours.

This service allows our customers to implement creative ideas that can automate their bare metal infrastructure.
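
As a rough sketch of how such automation could look from PHP, the two api calls shown above (creating the lease and power cycling the server) can be scripted like this; the api key, bare metal id and boot file url are placeholders:

    <?php

    // Placeholders: replace with your own api key, server id and boot script url.
    $apiKey       = 'my-api-key';
    $bareMetalId  = '123456';
    $bootFileName = 'http://webserver.example.com/boot.ipxe';
    $base         = 'https://api.leaseweb.com/v1';

    // Small helper around curl that adds the X-Lsw-Auth header.
    function lswApiCall($method, $url, $apiKey, $data = null) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $method);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('X-Lsw-Auth: ' . $apiKey));
        if ($data !== null) {
            curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data));
        }
        $result = curl_exec($ch);
        curl_close($ch);
        return $result;
    }

    // Create the dhcp lease that points option 67 to the ipxe script ...
    lswApiCall('POST', "$base/bareMetals/$bareMetalId/leases", $apiKey,
        array('bootFileName' => $bootFileName));

    // ... and power cycle the server so it boots from the network.
    lswApiCall('POST', "$base/bareMetals/$bareMetalId/reboot", $apiKey);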

Example: install arch linux over ssh without kvm

To continue the example, I used this service to boot my modified version of the archlinux live cd, which includes openssh (started at boot) and my public ssh keys. I use this trick to manually install an operating system which is not available through the LeaseWeb Self Service Center.

I don’t need to contact technical support or have kvm on my server. Everything is done remotely over ssh. The modified live image is published on github at https://github.com/nrocco/archiso-sshd.

Clone the repository from github:

    $ git clone https://github.com/nrocco/archiso-sshd.git
    $ cd archiso-sshd

Add your ssh keys to authorized_keys of the root user:

    $ vim airootfs/root/.ssh/authorized_keys

Now build the image (you need to have the archiso package installed).

    $ make build

This might take a while. When done, copy the kernel, initramfs and other generated files to the document root of your http server:

    $ cp -r work/iso/arch /var/www

Your document root might look like this now:

    $ find /var/www -type f
    /var/www/boot.ipxe
    /var/www/archiso/pkglist.x86_64.txt
    /var/www/archiso/x86_64/airootfs.md5
    /var/www/archiso/x86_64/airootfs.sfs
    /var/www/archiso/boot/x86_64/archiso.img
    /var/www/archiso/boot/x86_64/vmlinuz

That’s it. Now you can boot from the network using our service.

Refer to airootfs/root/customize_airootfs.sh and airootfs/root/.ssh/authorized_keys for the specific customizations.

What can you do with it?

This example is just the tip of the iceberg of possibilities. Let us know your ideas and use cases.

You might use it to boot into your own live image that does an automated installation of the operating system and kicks off the provisioning tool of your choice (chef, ansible, puppet), so your bare metal servers join the infrastructure that supports your business.

All fully automated.


Working in the cloud to prevent viruses & trojans

This post touches some of the IT security topics that modern companies may have to deal with.

Endpoint security? Problematic!

Endpoint security is the security of your company’s laptop and desktop computers. The security of these computers, at the outer perimeter of the network, is a hot topic. You see the problem with home users, who do not have the security devices and software that companies have. Viruses that encrypt personal documents with a password and ask a ransom to release them are common. Banking trojans are widespread, as there is much money to be made. Company databases containing millions of user credentials get stolen as well. Even PC manufacturers turn malicious under pressure from advertisers: they ship new laptops with self-signed root certificates that nullify the web’s security system.

BYOD policy? Unstoppable!

Today Bring-Your-Own-Device (BYOD) policies are more popular than ever as people bring their private smartphones to work. They identify with the device and the brand of the phone. Even the color of the phone or the installed software is part of their identity. People also want to use USB sticks, USB drives and their tablets at work, as it has become part of their IT vocabulary. Working remotely is encouraged and devices are carried from work to home and vice versa. This causes laptops to be connected to malicious networks, get stolen or just get lost. Fingerprint scanners, full-disk encryption and hardware tokens may help a bit, but they do not solve all problems.

PC or Mac? Yes, indeed!

Apple laptops (and phones) are very expensive and have become important status symbols in the workplace. Some colleagues may be lucky enough to get a shiny Apple laptop or phone from the boss. Others are not that privileged and try to fake their success by buying one with their own money. For phones this is fully accepted; for laptops you see more and more companies starting to allow it. Companies see fewer interoperability problems, because all major business applications have become browser-based. This causes the importance of the choice of desktop operating system to diminish rapidly.

Laptops without viruses

When Google launched its ChromeBook concept in 2011 I expected companies to start buying these for their employees. This laptop can safely be stolen or destroyed and is (by design) not vulnerable to viruses and trojans. It is even resilient against data loss due to forgotten backups. Its secret? The laptop does not store any data on its internal hard disk, but stores everything in the cloud. You can simply reset the laptop to factory defaults, whenever it misbehaves, without losing any data. Google has also started offering complementary corporate email and calendaring solutions. I really thought they had a winner on their hands. I was wrong. Companies did not massively convert.

Super fast and secure development workstations in the cloud!

At LeaseWeb we had (and still have) VMs to do development on, but these are not set up (or fast enough) to run your graphical development tools or VM tools like vagrant or docker. I identified this problem (in 2012) and started an experiment with working fully in the cloud.

I started offering a multi-user desktop development environment for a small group of 5 developers on a single server. The dual-CPU server with 64 GB of RAM was operated by the team’s system engineer. The advantages were great: work from any machine without having to install your development environment, and connect from work or home to the same desktop and pick up where you left off. You could also easily share files on the local disks, and backups were made for you on the corporate backup systems. The environment was graphical, totally over-dimensioned and thus super fast.

It failed (for that team). The multi-user desktop environment addressed most of the complaints that existed, but developers now felt that they had less freedom (and less privacy). Apparently they did not care about the source code not leaving the company or any of the other security advantages of working in the cloud (viruses, trojans and backups).

Fast forward to today. Many developers run Linux (often with encrypted disks) on their fast i5 laptops with 8GB of RAM. They put all their work in JIRA and Git, which are both in the cloud. So I guess that there is not much to gain anymore by moving development to the cloud.

But can’t anyone work in the cloud?

Could this pattern of working in the cloud also be applied to a company’s non-development departments? These departments may have access to more important (financial) information and their employees may have less IT knowledge. This may cause viruses and trojans to pose a higher risk.

You could set up some (Windows) terminal servers with Remote Desktop Protocol (RDP) and work on these machines. You could run software updates at night, make backups for users and lock the system down to prevent viruses and trojans. Employees could use the local browser (on their ChromeBooks) for Internet usage and a locked-down remote browser for the company web applications. This way the corporate (sensitive) data should stay protected.

What do you think? Would it work? Use the comments below!
