Fast dynamic DNS with cron, PHP and DuckDNS

My home has a 200 Mbit cable Internet connection with 20 Mbit up. Great for running a server, but every two days my ISP changes my IP address. When that happens I can no longer connect to my home network using VPN. Annoying, but certainly a (programming) challenge to me. The simple solution for this is to use a dynamic DNS service. The name DynDNS popped into my head, but apparently they are not free anymore (bummer). That’s why I chose the free dynamic DNS service “DuckDNS”. Then I realized that I do want a fast update of my dynamic DNS entry when my IP address changes, but I do not want to hammer DuckDNS. That’s why I wrote a small script to achieve this. You can find it below.

DuckDNS PHP script to avoid hammering

On my website I installed the following PHP script, which will call DuckDNS when the IP address of the caller has changed. It must be called with a POST request that holds a shared secret. This prevents bots (or hackers) from changing the DNS entry. Note that additionally HTTPS (SSL) is used to guarantee confidentiality.

<?php
// settings
$domains = 'cable-at-home'; // cable-at-home.duckdns.org
$token = 'eb1183a2-153b-11e5-b60b-1697f925ec7b';
$ip = $_SERVER['REMOTE_ADDR'];
$file = '/tmp/duckdns.txt';
$secret = 'VeryHardToGuess';
// compare secret
if (!isset($_POST['secret']) || $_POST['secret']!=$secret) { http_response_code(403); die(); }
// compare with current ip
if ($ip==file_get_contents($file)) { http_response_code(304); die('OK'); }
// create url
$url = "https://www.duckdns.org/update?domains=$domains&token=$token&ip=$ip";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$result = curl_exec($ch);
curl_close($ch);
// if success update current ip
if ($result!='OK') { http_response_code(400); die($result); }
file_put_contents($file,$ip);
die('OK');

Install this script somewhere in your Apache “DocumentRoot” and name it “duckdns.php”.

Cron script that runs every minute

Using the “crontab -e” command, I installed the following cron job on the server that runs in my home and is connected to the Internet over cable:

* * * * * /usr/bin/curl -X POST -d 'secret=VeryHardToGuess' https://somedomain.com/duckdns.php

Every minute this cron job makes a curl call to the duckdns.php script on my website (somedomain.com). Only if the IP address has changed is the call to DuckDNS (https://www.duckdns.org/update) made to update the record. This avoids hammering the DuckDNS service, while still giving you the fastest possible response to an IP address change.

Installation

Note that in order to make this work you have to create an account at DuckDNS and then modify the “$domains” and “$token” parameters in the PHP script accordingly. You also need to replace “somedomain.com” in the cron job with the URL of your own website. And do not forget to replace “VeryHardToGuess” in both the PHP script and the cron job with a real secret.
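
To verify that everything works, you can run the same call by hand and then check what the DuckDNS record resolves to. A quick manual check, using the example domain and secret from above (“dig” comes with the dnsutils package):

curl -X POST -d 'secret=VeryHardToGuess' https://somedomain.com/duckdns.php
dig +short cable-at-home.duckdns.org

The first command should print “OK” and the second one your current public IP address (DNS caching may delay the change briefly). Any questions? Use the comments below!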

MindaPHP now has RESTful API support

When building applications today you need to follow cool new architectures like “microservice design” and “API-first”. APIs play an increasingly important role in applications today. Together with database abstraction layers, they move the database technology further away from the business logic than ever before.

In line with the “Ease of learning” vision for MindaPHP, I decided to add a minimal RESTful API client in the form of a cURL wrapper. It has full integration with the debugger as you can see below:

[screenshot: the MindaPHP debugger showing the details of a cURL API call]

The wrapper class hardly influences the performance of cURL when the debugger is disabled. When the debugger is enabled, performance and memory usage may be affected, but in return you get a great deal of control, as you can see in the image above. In the example below you see how you can use the cURL wrapper to call the Bing search engine and extract the first 10 links for a search query.

<?php
// read the search query from the submitted form
$query = isset($_POST['q'])?$_POST['q']:'';
$results = array();

if ($query) {
    // call Bing through the MindaPHP cURL wrapper; $result receives the response body
    if (Curl::call('GET','http://www.bing.com/search',array('q'=>$query),$result)==200) {

        // parse the returned HTML (suppress warnings about malformed markup)
        $dom = new DOMDocument();
        @$dom->loadHTML($result);

        // select the result links from Bing's list of organic results
        $xpath = new DOMXpath($dom);
        $elements = $xpath->query('//ol[@id="b_results"]/li[@class="b_algo"]//h2/a');

        // collect the text and link of every result on the page
        foreach ($elements as $element) {
            $text = $element->nodeValue;
            $link = $element->getAttribute("href");
            $results[] = compact('text','link');
        }
    }
}
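
The collected $results can then be shown on the page. A minimal sketch in plain PHP (not tied to MindaPHP's template system), assuming you simply want a clickable list:

// render the collected links as a simple HTML list
echo '<ul>';
foreach ($results as $r) {
    echo '<li><a href="'.htmlspecialchars($r['link']).'">'.htmlspecialchars($r['text']).'</a></li>';
}
echo '</ul>';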

For more information, check out the code on GitHub or the demo via one of the links below:

 

Code: https://github.com/mevdschee/MindaPHP
Demo: http://maurits.server.nlware.com/hello/bing

Wikipedia client for the command line


Don’t you want the power of Wikipedia at your fingertips when you are logged into your Linux server? With this tool you can have it! It is built with PHP/cURL and is based on the Wikipedia API.

Wikipedia (open)search API

Wikipedia allows us to write software that takes advantage of their huge database of articles and definitions. They do this by exposing an API. I implemented their “opensearch” API call to search for articles. We are not doing anything unintended: Wikipedia made the API to allow people to build useful (or less useful, you decide) tools like this one. They do ask people to behave nicely in their Wikipedia API Etiquette, so read it and behave!

HTML to text

Instead of making my own wikitext parser, I used the Wikipedia “parse” API call. This call transforms articles from the internal wikitext format into HTML. I use the command line application “elinks” (an alternative to the “lynx” text-mode web browser) to convert the HTML to text so it can be displayed on the console. This browser software is configured not to use the network or write files to disk, so you do not have to worry about your privacy.
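
As an illustration of that step, here is a minimal sketch that fetches the HTML of one article (the page name “Elephant” is just an example) with the “parse” API call and runs it through “elinks -dump”. The exact elinks options that wped uses may differ; see the repository for the real implementation:

// fetch the article "Elephant" as HTML via the Wikipedia "parse" API call
$curl = curl_init();
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_USERAGENT, 'Wped/0.1 (http://github.com/mevdschee/wped) PHP-Curl');
curl_setopt($curl, CURLOPT_URL, 'http://en.wikipedia.org/w/api.php?'.http_build_query(array(
  'action'=>'parse', 'page'=>'Elephant', 'format'=>'json'
)));
$json = json_decode(curl_exec($curl), true);
$html = $json['parse']['text']['*'];

// write the HTML to a temporary file and let elinks render it as plain text
$file = tempnam(sys_get_temp_dir(), 'wped');
file_put_contents($file, $html);
echo shell_exec('elinks -dump '.escapeshellarg($file));
unlink($file);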

How does this work?

It is a PHP script that uses Curl. The whole wped script is currently only 70 lines of code. The opensearch call is really easy to implement as you can see below:

// the Wikipedia API endpoint and a descriptive user agent (as asked for in the API etiquette)
$url = 'http://en.wikipedia.org/w/api.php';
$user_agent = 'Wped/0.1 (http://github.com/mevdschee/wped) PHP-Curl';

$curl = curl_init();
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_USERAGENT, $user_agent);

// $search holds the search term (in wped it comes from the command line arguments)
$arguments = array(
  'action'=>'opensearch',
  'search'=>$search,
  'limit'=>3,
  'namespace'=>0,
  'format'=>'xml'
);

// perform the request and parse the XML response
curl_setopt($curl, CURLOPT_URL, $url.'?'.http_build_query($arguments));
$results = simplexml_load_string(curl_exec($curl));
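
A small tip if you want to build on this: when you request format=json instead of xml, the response is a plain array of the form [query, titles, descriptions, urls], which is even easier to handle. Continuing from the snippet above, a minimal sketch (a variation, not the code wped actually uses):

// same call as above, but asking for JSON output instead of XML
$arguments['format'] = 'json';
curl_setopt($curl, CURLOPT_URL, $url.'?'.http_build_query($arguments));
list($query, $titles, $descriptions, $urls) = json_decode(curl_exec($curl), true);

// print every match as "Title: URL"
foreach ($titles as $i=>$title) {
  echo $title.': '.$urls[$i]."\n";
}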

With this example it should be easy for you to start building your own Wikipedia tool, let me know if you did!

Quickstart

To install, just copy-paste the following lines in your console:

sudo apt-get install php5-cli php5-curl elinks
wget https://raw.githubusercontent.com/mevdschee/wped/master/wped.php -O wped
chmod 755 wped
sudo mv wped /usr/bin/wped

To uninstall use the following:

sudo rm /usr/bin/wped

Congratulations, now you can search for elephants using the “wped” command! And if you need the full article you can use the “-f” flag.
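
For example:

wped elephant
wped -f elephant

The first command looks up “elephant”, the second one shows the full article.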

Yes! I need this, fast!

How can you have lived without it? Why would you ever use “wget”, when you can use “wped”? It adds so much fun to all your Wikipedia searches. The source and full instructions are available on my Github account: https://github.com/mevdschee/wped

 

Symfony2 Guzzle bundle for cURL API calling

The LswGuzzleBundle adds Guzzle API call functionality to your Symfony2 application. It is easy to use from the code and is aimed to provide full debugging capabilities. source: Github

On Packagist you can see that in the past two years we have published 10 LeaseWeb Symfony2 bundles. The latest and tenth addition to the list is the LswGuzzleBundle. It is an alternative to the LswApiCallerBundle, so it is also a bundle that helps you to call APIs using cURL. The difference is that this bundle uses Guzzle as a wrapper for cURL.

Guzzle profiler and debugger

This bundle also adds a debug panel to your Symfony2 application, where you can find the cURL API calls that Guzzle makes. It provides profiling and extensive debugging capabilities for your Symfony2 application. We feel it is an essential bundle if you create an application that does not work with a database, but instead has an API as data store backend. Guzzle allows you to specify the available calls and their parameters in a Guzzle service description. Since this allows the bundle to understand the structure of each call, it can display the parsed parameters and output. This is a unique feature of this bundle.

[screenshot: the Guzzle debug panel in the Symfony2 profiler]
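
To give an impression, here is a minimal sketch of what a Guzzle (version 3) service description could look like. The service name, operation, URI and parameters below are made up for illustration and do not describe a real API:

use Guzzle\Service\Description\ServiceDescription;

// illustrative description of a single API operation (not a real API)
$description = ServiceDescription::factory(array(
    'name'       => 'ExampleApi',
    'apiVersion' => '1.0',
    'operations' => array(
        'GetUser' => array(
            'httpMethod' => 'GET',
            'uri'        => 'users/{id}',
            'summary'    => 'Retrieves a single user by id',
            'parameters' => array(
                'id' => array(
                    'location' => 'uri',
                    'type'     => 'integer',
                    'required' => true,
                ),
            ),
        ),
    ),
));

Because the description tells Guzzle which parameters each operation takes and how they map onto the request, the bundle can show the parsed parameters and output per call in the debug panel.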

Open source

The bundle is MIT licensed. You can find it at:

As always there is README documentation on GitHub to get you started.

Symfony2 Bundle for cURL API Calling

The LswApiCallerBundle adds a CURL API caller to your Symfony2 application. It is easy to use from the code and is aimed to have full debugging capabilities. source: Github

With pride we release our 4th Symfony2 bundle: the LswApiCallerBundle. We have already received some positive replies from other developers in the Symfony2 community on our other bundles. This bundle adds a debug panel to your Symfony2 application, where you can find the cURL API calls your application makes. It provides profiling and extensive debugging capabilities to your Symfony2 application. We feel it is an essential bundle if you create an application that does not work with a database, but instead has an API as data store backend.

[screenshot: the API caller debug panel in the Symfony2 profiler]

The bundle supports the following call types:

  • HTTP GET (HTML/JSON)
  • HTTP POST (JSON)
  • HTTP PUT (JSON)
  • HTTP DELETE (JSON)
  • … (and it can easily be extended)
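
As a rough sketch of how such an API caller is used from a Symfony2 controller; the class and service names below are written down as an illustration and may not match the bundle's actual API exactly, so check the README on GitHub for the real usage:

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Lsw\ApiCallerBundle\Call\HttpGetJson;

class UserController extends Controller
{
    public function indexAction()
    {
        // illustrative call: fetch JSON over HTTP GET and receive it back as PHP data
        // (class and service names may differ; see the bundle README for the exact API)
        $users = $this->get('api_caller')->call(
            new HttpGetJson('http://api.example.com/users', array('page' => 1))
        );

        return $this->render('AcmeDemoBundle:User:index.html.twig', array('users' => $users));
    }
}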

You can find the bundle at:

As always there is README documentation on GitHub to get you started.