Set up Private DNS-over-TLS/HTTPS

Domain Name System (DNS) is a crucial part of Internet infrastructure. It is responsible for translating a human-readable, memorizable domain (like leaseweb.com) into a numeric IP address (such as 89.255.251.130).

In order to translate a domain into an IP address, your device sends a DNS request to a special DNS server called a resolver (which is most likely managed by your Internet provider). The DNS requests are sent in plain text so anyone who has access to your traffic stream can see which domains you visit.

There are two recent Internet standards that have been designed to solve the DNS privacy issue:

  • DNS over TLS (DoT)
  • DNS over HTTPS (DoH)

Both of them provide secure and encrypted connections to a DNS server.

DoT/DoH feature compatibility matrix:

         Firefox   Chrome   Android 9+   iOS 14+
DoT      no        no       yes          yes
DoH      yes       yes      no           yes

iOS 14 will be released later this year.

In this article, we will set up a private DoH and DoT resolver using Pi-hole in a Docker container, with dnsdist as the DNS frontend and Let's Encrypt TLS certificates. As a bonus, our DNS server will block tracking and malware domains while resolving queries for us.

Installation

In this example we use Ubuntu 20.04 with docker and docker-compose installed, but you can choose your favorite distro (you may need to adapt some commands).

You may also need to disable systemd-resolved, because it occupies port 53 on the server:

# Check which DNS resolvers your server is using:
systemd-resolve --status
# look for "DNS servers" field in output

# Stop systemd-resolved
systemctl stop systemd-resolved

# Then mask it to prevent it from starting again
systemctl mask systemd-resolved

# Delete the symlink systemd-resolved used to manage
rm /etc/resolv.conf

# Create /etc/resolv.conf as a regular file with nameservers you've been using:
cat <<EOF > /etc/resolv.conf
nameserver <ip of the first DNS resolver>
nameserver <ip of the second DNS resolver>
EOF

Install dnsdist and certbot (for Let's Encrypt certificates):

# Install dnsdist repo
echo "deb [arch=amd64] http://repo.powerdns.com/ubuntu focal-dnsdist-15 main" > /etc/apt/sources.list.d/pdns.list
cat <<EOF > /etc/apt/preferences.d/dnsdist
Package: dnsdist*
Pin: origin repo.powerdns.com
Pin-Priority: 600
EOF
curl https://repo.powerdns.com/FD380FBB-pub.asc | apt-key add -

apt update
apt install dnsdist certbot

Pihole

Now we create our docker-compose project:

mkdir ~/pihole
touch ~/pihole/docker-compose.yml

The contents of docker-compose.yml file:

version: '3'
services:
  pihole:
    container_name: pihole
    image: 'pihole/pihole:latest'
    ports:
    # The DNS server will listen on localhost only, the ports 5300 tcp/udp.
    # So the queries from the Internet won't be able to reach pihole directly.
    # The admin web interface, however, will be reachable from the Internet.
      - '127.0.1.53:5300:53/tcp'
      - '127.0.1.53:5300:53/udp'
      - '8081:80/tcp'
    environment:
      TZ: Europe/Amsterdam
      VIRTUAL_HOST: dns.example.com # domain name we'll use for our DNS server
      WEBPASSWORD: super_secret # Pihole admin password
    volumes:
      - './etc-pihole/:/etc/pihole/'
      - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
    restart: unless-stopped

Start the container:

docker-compose up -d

After the container is fully started (it may take several minutes) check that it is able to resolve domain names:

dig +short @127.0.1.53 -p5300 one.one.one.one
# Expected output
# 1.0.0.1
# 1.1.1.1

Let's Encrypt Configuration

Issue the certificate for our dns.example.com domain:

certbot certonly

Follow the instructions on the screen (i.e. select the authentication method that suits you, and enter the domain name).

After the certificate is issued, it can be found at the following paths:

  • /etc/letsencrypt/live/dns.example.com/fullchain.pem – certificate chain
  • /etc/letsencrypt/live/dns.example.com/privkey.pem – private key

By default, only the root user can read the certificates and keys. Dnsdist, however, runs as the user and group _dnsdist, so the permissions need to be adjusted:

chgrp _dnsdist /etc/letsencrypt/live/dns.example.com/{fullchain.pem,privkey.pem}
chmod g+r /etc/letsencrypt/live/dns.example.com/{fullchain.pem,privkey.pem}

# We should also make the archive and live directories traversable.
# This does not expose the keys, since the private key itself is not world-readable.
chmod 755 /etc/letsencrypt/{live,archive}
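To convince yourself that the resulting permissions are what you expect, here is a small sketch you can run anywhere; it uses a temporary directory as a stand-in for /etc/letsencrypt and your own group as a stand-in for _dnsdist:

```shell
# Reproduce the permission layout in a throw-away directory and check the modes.
tmp=$(mktemp -d)
mkdir -p "$tmp/live/dns.example.com" "$tmp/archive/dns.example.com"
touch "$tmp/live/dns.example.com/privkey.pem"
chgrp "$(id -gn)" "$tmp/live/dns.example.com/privkey.pem"  # _dnsdist on the real host
chmod 640 "$tmp/live/dns.example.com/privkey.pem"          # owner rw, group r, others none
chmod 755 "$tmp/live" "$tmp/archive"                       # directories traversable by all
stat -c '%a' "$tmp/live/dns.example.com/privkey.pem"       # prints 640
```

On the real host, group read access (640) plus traversable directories (755) is exactly what lets the _dnsdist user read the key while other non-root users cannot.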

The certificates are periodically renewed by certbot, but dnsdist cannot detect a renewed certificate on its own, so it has to be restarted after each renewal. To automate this, we put a deploy hook script into the /etc/letsencrypt/renewal-hooks/deploy directory:

mkdir -p /etc/letsencrypt/renewal-hooks/deploy
cat <<EOF > /etc/letsencrypt/renewal-hooks/deploy/restart-dnsdist.sh
#!/bin/sh
systemctl restart dnsdist
EOF
chmod +x /etc/letsencrypt/renewal-hooks/deploy/restart-dnsdist.sh
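Certbot runs every executable file in the renewal-hooks/deploy directory after a successful renewal. The mechanism can be illustrated in a throw-away directory, with an echo standing in for the actual systemctl restart:

```shell
# Mimic certbot's deploy-hook behaviour in a temporary directory.
root=$(mktemp -d)
mkdir -p "$root/renewal-hooks/deploy"
cat <<'EOF' > "$root/renewal-hooks/deploy/restart-dnsdist.sh"
#!/bin/sh
echo "would run: systemctl restart dnsdist"
EOF
chmod +x "$root/renewal-hooks/deploy/restart-dnsdist.sh"
# What certbot effectively does after renewing:
for hook in "$root"/renewal-hooks/deploy/*; do
  [ -x "$hook" ] && "$hook"
done
# prints: would run: systemctl restart dnsdist
```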

Dnsdist Configuration

Create dnsdist configuration file /etc/dnsdist/dnsdist.conf with the following content:

addACL('0.0.0.0/0')

-- certificate paths and listen address for DoT over IPv4;
-- listens on port 853 by default.
-- tcpFastOpenSize sets the TCP Fast Open queue size.
addTLSLocal("0.0.0.0", "/etc/letsencrypt/live/dns.example.com/fullchain.pem", "/etc/letsencrypt/live/dns.example.com/privkey.pem", { doTCP=true, reusePort=true, tcpFastOpenSize=64 })

-- certificate paths and listen address for DoH over IPv4;
-- listens on port 443 by default.
-- tcpFastOpenSize sets the TCP Fast Open queue size.
--
-- In this example we listen directly on port 443. However, since DoH queries are
-- plain HTTPS requests, the server can also be hidden behind Nginx or Haproxy.
addDOHLocal("0.0.0.0", "/etc/letsencrypt/live/dns.example.com/fullchain.pem", "/etc/letsencrypt/live/dns.example.com/privkey.pem", "/dns-query", { doTCP=true, reusePort=true, tcpFastOpenSize=64 })

-- allow at most 50 queries per second from a single IP; drop the excess
addAction(MaxQPSIPRule(50), DropAction())

-- drop ANY queries sent over UDP
addAction(AndRule({QTypeRule(DNSQType.ANY), TCPRule(false)}), DropAction())

-- create a packet cache holding up to 10000 entries
-- (memory is preallocated based on this number)
pc = newPacketCache(10000, {maxTTL=86400})
getPool(""):setCache(pc)

-- server policy to choose the downstream servers for recursion
setServerPolicy(leastOutstanding)

-- Here we define our backend, the pihole dns server
newServer({address="127.0.1.53:5300", name="127.0.1.53:5300"})

setMaxTCPConnectionsPerClient(1000)    -- maximum number of TCP connections from a single client; useful for limiting concurrent connections
setMaxTCPQueriesPerConnection(100)     -- maximum number of queries served over a single TCP connection before it is closed

Checking that DoH and DoT Work

Check that DoH works using curl with the --doh-url flag:

curl --doh-url https://dns.example.com/dns-query https://leaseweb.com/

Check if DoT works using kdig program from the knot-dnsutils package:

apt install knot-dnsutils

kdig -d @dns.example.com +tls-ca leaseweb.com
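For a peek under the hood: per RFC 8484, a DoH GET request carries the raw DNS query base64url-encoded, without padding, in the dns query parameter. This sketch builds such a URL by hand for an A query for example.com (dns.example.com stands in for your server name):

```shell
# Build a minimal DNS query for example.com/A byte-by-byte (12-byte header,
# then the encoded name, type A, class IN), and base64url-encode it.
query=$(printf '\x00\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\x03com\x00\x00\x01\x00\x01' \
  | base64 | tr '+/' '-_' | tr -d '=')
echo "https://dns.example.com/dns-query?dns=$query"
# prints: https://dns.example.com/dns-query?dns=AAABAAABAAAAAAAAB2V4YW1wbGUDY29tAAABAAE
```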

Setting up Private DNS on Android

Currently only Android 9+ natively supports encrypted DNS queries by using DNS-over-TLS technology.

In order to use it go to: Settings -> Connections -> More connection settings -> Private DNS -> Private DNS provider hostname -> dns.example.com

Conclusion

In this article we’ve set up our own DNS resolver with the following features:

  • Automatic TLS certificates using Let's Encrypt.
  • Support for both modern encrypted protocols: DNS over TLS and DNS over HTTPS.
  • Rate limiting of incoming queries to prevent abuse.
  • An automatically updated blocklist of malware, ad, and tracking domains.
  • Easy upgrades by simply pulling a new version of the Docker image.

Simple web application firewall using .htaccess

Apache provides a simple web application firewall by allowing a “.htaccess” file with certain rules in it. This is a file you put in your document root; it may restrict or allow access from specific IP addresses. NB: These commands may also be put directly in the virtual host configuration file in “/etc/apache2/sites-available/”.

Use Case #1: Test environment

Sometimes you may want to lock down a site and only grant access from a limited set of IP addresses. The following example (for Apache 2.2) only allows access from the IP address “127.0.0.1” and blocks any other request:

Order Allow,Deny
Deny from all
Allow from 127.0.0.1

In Apache 2.4 the syntax has slightly changed:

Require all denied
Require ip 127.0.0.1

You can find your IP address on: whatismyipaddress.com

Use Case #2: Application level firewall

If you run a production server and somebody is abusing your system with a lot of requests then you may want to block a specific IP address. The following example (for Apache 2.2) only blocks access from the IP address “172.28.255.2” and allows any other request:

Order deny,allow
Allow from all
Deny from 172.28.255.2

In Apache 2.4 the syntax has slightly changed:

Require all granted
Require not ip 172.28.255.2

If you want to block an entire range you may also specify CIDR notation:

Require all granted
Require not ip 10.0.0.0/8
Require not ip 172.16.0.0/12
Require not ip 192.168.0.0/16

NB: Not only IPv4, but also IPv6 addresses may be used.
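As a sanity check, you can verify with a bit of shell arithmetic that an address falls inside one of these ranges; this sketch confirms that 172.28.255.2 (the address blocked in Use Case #2) is inside 172.16.0.0/12:

```shell
# Convert dotted-quad IPv4 addresses to integers and apply the /12 mask by hand.
ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

ip=$(ip_to_int 172.28.255.2)
net=$(ip_to_int 172.16.0.0)
mask=$(( (0xFFFFFFFF << (32 - 12)) & 0xFFFFFFFF ))   # /12 network mask

[ $(( ip & mask )) -eq "$net" ] && echo "inside 172.16.0.0/12"
# prints: inside 172.16.0.0/12
```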


Static code analysis for PHP templates

Templating is cool. Everybody is using Twig today. Other popular choices are: Smarty, Mustache and Latte. You may also want to read what Fabien Potencier has written about PHP templates languages. It makes sense.

Still, I can think of two reasons why we might not want a templating language and would rather use PHP itself for templating. First reason: native PHP is easier to learn than a PHP templating language. Second reason: it executes faster.

PHP templating languages improve security

I tried to understand what the primary reason is that people are using a templating language. It seems to be ease of use, while keeping the application secure. The following example shows how easily you can write unsafe code:

Hello <?php echo $_POST['name']; ?>!

It would only be safe to print a POST variable when using:

<?php echo htmlspecialchars($_POST['name'], ENT_QUOTES, 'UTF-8'); ?>

A templating language typically allows you to write something like:

Hello {{ name }}!

I agree that security is improved by using a templating language. The templating language escapes the output strings in order to prevent XSS vulnerabilities. But still I wonder: Can’t we get the same security benefits when we use native PHP for templating?
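What that auto-escaping amounts to can be sketched as a shell filter; this simplified illustration (not a drop-in replacement for a template engine) covers the same five characters as PHP's htmlspecialchars with ENT_QUOTES:

```shell
# Escape the five HTML metacharacters; & must be replaced first.
html_escape() {
  sed -e 's/&/\&amp;/g' \
      -e 's/</\&lt;/g' \
      -e 's/>/\&gt;/g' \
      -e "s/'/\&#039;/g" \
      -e 's/"/\&quot;/g'
}

printf '%s' '<script>alert("x")</script>' | html_escape
# prints: &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
```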

Helper function

As you have seen, the PHP way of escaping is rather long. Fortunately, you can easily define a helper function that allows a shorter syntax, for instance:

Hello <?php e($_POST['name']); ?>!

Yup, that is the “e” for “echo” :-). Now we can report all native (unescaped) echo calls as potentially unsafe. This can be achieved with static code analysis. While analyzing the code, the analyzer could complain like this:

PHP Warning:  In "template.php" you should not use "echo" on line 1. Error raised  in analyzer.php on line 11

This could be limited to debug mode, as static code analysis takes some time and may harm the performance of your application.

Static code analysis in PHP

I worked out the idea of secure PHP templating using static code analysis. In development (debug) mode it should warn the programmer when they use a potentially unsafe construct.

The following analyzer script shows how this works:

<?php
$tokens    = array('T_ECHO', 'T_PRINT', 'T_EXIT', 'T_STRING', 'T_EVAL', 'T_OPEN_TAG_WITH_ECHO');
$functions = array('echo', 'print', 'die', 'exit', 'var_dump', 'eval', '<?=');
$filename  = 'template.php';

$all_tokens = token_get_all(file_get_contents($filename));
foreach ($all_tokens as $token) {
  if (is_array($token)) {
    if (in_array(token_name($token[0]),$tokens)) {
      if (in_array($token[1],$functions)) {
        trigger_error('In "'.$filename.'" you should not use "'.htmlentities($token[1]).'" on line '.$token[2].'. Error raised ', E_USER_WARNING);
      }
    }
  }
}

It will analyze the “template.php” file and report potentially insecure or erroneous language constructs.
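If you just want a quick smoke test before running the full tokenizer-based analyzer, a crude grep can already flag the most common offenders. This sketch misses eval, var_dump, and anything else the tokenizer would catch, so treat it as a first pass only:

```shell
# Write a deliberately unsafe template to a temp file, then count offending lines.
tpl=$(mktemp)
printf '%s\n' 'Hello <?php echo $name; ?>!' > "$tpl"
grep -cE '<\?(php[[:space:]]+)?(echo|print)|<\?=' "$tpl"   # prints 1
```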

This form of templating and static code analysis is fully implemented in the MindaPHP framework that you can find on my Github account. You can find the source code of the PHP static code analyzer class here.


Linux commands “astu” and “astsu” in Mr. Robot


People told me that the hacking in “Mr. Robot” was pretty accurate. Mr. Robot is a TV series about a hacker named “Elliot”. I had to see it, but until now I was lacking the time. Last Sunday was a perfect lazy day and I took the time to finally watch it. I must admit it was pretty amazing to see the inside of a data center and all the geeky Linux command line screens in such a popular TV series.

Linux commands “astu” and “astsu”

When Elliot (the main character) is hacking, he frequently uses two Linux commands: “astu” and “astsu”. The commands play a critical role in the series. I did not know what they did, so I wondered:

Did anyone figure out what the “astsu” command is supposed to be? Did he just type random characters or what? The other commands I noticed were all real.

On which some other user on the Cyberpunk and Science Fiction board replied:

It seems to be used like sudo (or ssh) would so I guess the idea was that the company that he works for has its own way to allow safe privilege escalation and this is the tool they install astsu = AllSafe Toolkit Super User (allsafe security being the company name).

You should read the Mr. Robot Episode 1 Analysis for more detail on the actual commands used during the hacking.

Things I liked

Some things were really spot on in the series and I liked them a lot:

  1. The correctness, detail and accuracy of the hacking that goes on.
  2. Elliot has some social challenges and thus feels like an outsider.
  3. Elliot is unhappy and this is his strength, as he has nothing to lose.

But not everything was good, there was also some stuff that bothered me in the series.

Things that bothered me

Here is a list of the most annoying things in the series:

  1. Elliot uses a smart-phone and he never switches SIM or phone.
  2. Elliot’s schizophrenia is making his conspiracy thinking less genuine.
  3. Computers and downers do not match. Caffeine on the other hand…

I feel the makers of Mr. Robot should have thought these things over better. Nevertheless they made an enjoyable TV series. Recommended!


Limit concurrent PHP requests using Memcache

When you run a website you may want to use an nginx reverse proxy to cache some of your static assets and to limit the number of connections per client IP to each of your applications; nginx has good modules for both.

Many people are not running a web farm, but they still want to protect themselves against scrapers and hackers that may slow the website down (or even make it unavailable). The following script allows you to protect your PHP application from too many concurrent connections per IP address. You need to have Memcache installed, and you need to be running a PHP web application that uses a front controller.

Installing Memcache for PHP

Run the following command to install Memcache for PHP on a Debian based Linux machine (e.g. Ubuntu):

sudo apt-get install php5-memcache memcached

This is easy. You can flush your Memcache data by running:

telnet 0 11211
flush_all

You may have to restart Apache for the Memcache extension to become active.

sudo service apache2 restart

Modifying your front controller

It is as simple as opening your “index.php” or “app.php” (Symfony) front controller and pasting the following code at the top of the file:

<?php
function firewall($concurrency,$spinLock,$interval,$cachePrefix,$reverseProxy)
{
  $start = microtime(true);
  if ($reverseProxy && isset($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $ips = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $ip = trim(array_pop($ips));
  }
  else {
    $ip = $_SERVER['REMOTE_ADDR'];
  }
  $memcache=new Memcache();
  $memcache->connect('127.0.0.1', 11211);
  $key=$cachePrefix.'_'.$ip;
  $memcache->add($key,0,false,$interval);
  register_shutdown_function(function() use ($memcache,$key){ $memcache->decrement($key); });
  while ($memcache->increment($key)>$concurrency) {
    $memcache->decrement($key);
    if (!$spinLock || microtime(true)-$start>$interval) {
      http_response_code(429);
      die('429: Too Many Requests');
    }
    usleep($spinLock*1000000);
  }
}
firewall(10,0.15,300,'fw_concurrency_',false);

Add these lines if you want to test the script in stand-alone mode:

session_start();
session_write_close();
usleep(3000000);

With the default settings you can protect a small WordPress blog, as it limits your visitors to 10 concurrent(!) requests per IP address. Note that this is a lot more than 10 visitors per IP address: a normal visitor does not make concurrent requests to PHP, since browsers tend to send only one request at a time. Even multiple users behind one IP address may not make concurrent requests (if you are lucky). If concurrent requests do happen, they are delayed in steps of 150 ms until the concurrency level (from that specific IP) drops below 10. Other IP addresses are not affected or slowed down.
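The counting logic of the firewall function can be sketched in isolation. In this illustration a plain file stands in for Memcache; unlike Memcache's increment/decrement it is not atomic, so it only demonstrates the control flow:

```shell
# Simulate the increment/compare/roll-back dance with a file-based counter.
counter=$(mktemp); echo 0 > "$counter"
concurrency=2   # allow at most 2 requests "in flight"

acquire() {
  n=$(( $(cat "$counter") + 1 )); echo "$n" > "$counter"
  if [ "$n" -gt "$concurrency" ]; then
    echo $(( n - 1 )) > "$counter"    # roll back, like $memcache->decrement()
    echo "429: Too Many Requests"; return 1
  fi
  echo "request $n allowed"
}

acquire            # prints: request 1 allowed
acquire            # prints: request 2 allowed
acquire || true    # prints: 429: Too Many Requests
```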

If you use a reverse proxy, you can configure this (to get the correct IP address from the “X-Forwarded-For” header). Also, if you set “$spinLock” to “false”, the script will immediately serve “429: Too Many Requests” when there are too many concurrent requests, instead of stalling the connection.

This functionality is included as the “Firewall” feature of the new MindaPHP framework and also as the firewall functionality in the LeaseWeb Memcache Bundle for Symfony. Let me know what you think about it using the comments below.
