Creating a Real-time HoneyPot Attack Map

Every device connected to the internet is exposed to cyberattacks. Often it takes less than a minute after a system comes online before the first attack arrives. Recently, I worked on a hackathon project to visualize honeypot attacks on a map in real time.

A honeypot is a computer system that mimics a likely target for attackers. It tries to fool them into thinking it is a real production system, distracting them from actual targets.

Initial Setup

There are many honeypot systems around, but for this project I used T-Pot, created by Deutsche Telekom Security. It bundles many honeypot daemons and tools out of the box and is easy to set up.

To create a live map, we need to have the following:

  • Running T-Pot instance
  • Small server with a webserver (nginx) and Node.js/NPM

Follow the instructions to install T-Pot. Confirm your T-Pot instance is running and you see attacks appearing in the dashboards.

There are many guides on how to install a server with Nginx and Node.js.

Node.js Application to Receive Logs

On the webserver, we create a small Node.js application that will do two simple tasks:

  • Receive data from the T-Pot installation (Logstash)
  • Run a small WebSocket server to broadcast the received data to connected clients

Install required packages
In our Node.js application we use two packages: `express` and `ws`. First install both packages:

npm install ws express

Now we create a small application called `server.js`:

vi server.js

Insert the following code into the file:

#!/usr/bin/env nodejs
const WebSocket = require('ws');
const express = require('express');
const app = express();

const PORT = 8080;
const WS_PORT = 8081;

// Create a WebSocket Server so clients can connect to it
const wss = new WebSocket.Server({ port: WS_PORT });
wss.on('connection', function connection(ws) {
  console.log('Client connected');
});

// Read the raw request body as text, so we can forward it as-is
app.use(express.text({ type: '*/*' }));

// Now we create a simple HTTP server which receives a message
// and forwards the message to the connected WebSocket clients
app.post('/', (req, res) => {
  wss.clients.forEach(function each(client) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(req.body);
    }
  });
  res.sendStatus(200);
});

app.listen(PORT, () => {
   console.log('The server is running at port 8080!');
});

This small Node.js application listens on port 8080 for POST messages, while WebSocket clients connect on port 8081. Every message received on port 8080 is broadcast to all connected WebSocket clients.
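The broadcast step boils down to "send the message to every client whose socket is open". A minimal sketch of that logic, using a hypothetical `broadcast` helper that is not part of server.js:

```javascript
// WebSocket.OPEN is the numeric readyState constant (1) in the ws package
const OPEN = 1;

// Send a message to every connected client whose socket is still open;
// returns how many clients actually received it.
function broadcast(clients, message) {
  let delivered = 0;
  for (const client of clients) {
    if (client.readyState === OPEN) {
      client.send(message);
      delivered++;
    }
  }
  return delivered;
}
```

Clients that are still connecting or already closed are simply skipped, so a stale connection never blocks the relay.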

Test Application

To test your application, make it executable:

chmod +x server.js

Now run the application:

./server.js

The output will be:

The server is running at port 8080!

You can use a process manager like PM2 to daemonize your application:

sudo npm install -g pm2
pm2 start server.js

PM2 will restart the application automatically if it crashes or is killed. To have your application run after a system (re)boot, you will need to execute one more command:

pm2 startup systemd

The output of this command may include another command which needs to be run with superuser privileges:

sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u your_user --hp /home/your_user

Webpage Showing the Map

Now we will create a small webpage and add some JavaScript code. This code opens a WebSocket to receive updates and plots them on a map. To draw the world map, I used Mapbox GL JS. You will need to create a (free) account in order to generate the API key used to create a map.

If you have a server running default Nginx, create a new `index.html` in the web-root folder:

cd /var/www/html
vi index.html

Insert the following HTML code into the file:

<!DOCTYPE html>
<html>
<head>
   <link href="" rel="stylesheet" integrity="sha384-giJF6kkoqNQ00vy+HMDP7azOuL0xtbfIcaT9wjKHr8RbDVddVHyTfAAsrekwKmP1" crossorigin="anonymous">
   <script src=''></script>
   <link href='' rel='stylesheet' />
   <style>
        @import url(,500);

        body { background-color: black }
        #map { height: calc(100vh - 275px); width: ; z-index: 1; }
        .table { color: #fff; font-family: Inconsolata,sans-serif; font-size: 15px; border-color: #525252; }
        .thead { font-weight: 700; color: #525252; }
        @-webkit-keyframes flashrow {
           from { background-color: #525252; }
           to { background-color: var(--bs-table-bg); }
        }
        @-moz-keyframes flashrow {
           from { background-color: #525252; }
           to { background-color: var(--bs-table-bg); }
        }
        @-o-keyframes flashrow {
           from { background-color: #525252; }
           to { background-color: var(--bs-table-bg); }
        }
        @keyframes flashrow {
           from { background-color: #525252; }
           to { background-color: var(--bs-table-bg); }
        }
        .flashrow {
           -webkit-animation: flashrow 1.5s; /* Safari 4+ */
           -moz-animation:    flashrow 1.5s; /* Fx 5+ */
           -o-animation:      flashrow 1.5s; /* Opera 12+ */
           animation:         flashrow 1.5s; /* IE 10+ */
        }
   </style>
</head>
<body>
   <div class="container">
     <div class="row">
       <div class="col">
          <div id="map"></div>
       </div>
     </div>
     <div class="row">
       <div class="col" id="ticker">
          <table id="tickettable" class="table table-black ticker">
             <thead class="text-uppercase thead">
                <tr>
                   <th class="col-lg-1 thead">Time</th>
                   <th class="col-lg-2 thead">Country</th>
                   <th class="col-lg-3 thead">AS Organisation</th>
                   <th class="col-lg-6 thead">TYPE</th>
                </tr>
             </thead>
             <tbody></tbody>
          </table>
       </div>
     </div>
   </div>
   <script src='map.js'></script>
</body>
</html>

This page simply loads the Mapbox GL JavaScript libraries and some styles. It also loads another JavaScript file (at the bottom) which will open the WebSocket and update the map.
Let's create this JavaScript file:

vi map.js

and insert the following code into the file:

// Set the IP to your webserver IP
var WEBSOCKET_SERVER = "ws://<IP of your webserver>:8081";
// Set your mapboxGL AccessToken
var MAPBOX_TOKEN = "<your Mapbox access token>";

// Remove points from map after x-seconds
var displayTime = 300;

// Set some defaults for the map

var framesPerSecond = 15; 
var initialOpacity = 1;
var opacity = initialOpacity;
var initialRadius = 3;
var radius = initialRadius;
var maxRadius = 15;
let points = new Map();
var timers = [];

//Set your accessToken here
mapboxgl.accessToken = MAPBOX_TOKEN;

//Create new mapboxGl Map. Set your used style
var map = new mapboxgl.Map({
    container: 'map',
    style: 'mapbox://styles/leaseweb/ckkiepmg40ds717ry6l0htwag',
    center: [0, 0],
    zoom: 1.75
});

// Create a popup, but don't add it to the map yet.
var popup = new mapboxgl.Popup({
   closeButton: false,
   closeOnClick: false
});

// Once the map is loaded, we open the WebSocket
map.on('load', function () {
   openWebSockets(map);
});

function openWebSockets(map) {
   if ("WebSocket" in window) {
      // Let us open a web socket
      var ws = new WebSocket(WEBSOCKET_SERVER);

      ws.onopen = function() {
         // Web Socket is connected
         console.log("WS Open...");
      };

      ws.onmessage = function (event) {
         var received_msg = JSON.parse(event.data);
         addPoint(received_msg);
      };

      ws.onerror = function(error) {
         console.log('Websocket error: ', error);
      };

      ws.onclose = function() {
         // websocket is closed.
         console.log("Connection is closed...");
      };
   } else {
      // The browser doesn't support WebSocket
      alert("WebSocket NOT supported by your Browser!");
   }
}

function animateMarker(timestamp, pointId) {
   if (!(pointId === undefined)) {
      if (points.has(pointId)) {
         timers[pointId] = setTimeout(function() {
            requestAnimationFrame(function(timestamp) {
               animateMarker(timestamp, pointId);
            });

            radius = points.get(pointId)[0];
            opacity = points.get(pointId)[1];
            radius += (maxRadius - radius) / framesPerSecond;
            opacity -= (.9 / framesPerSecond);
            if (opacity < 0) {
               opacity = 0;
            }

            map.setPaintProperty('point-' + pointId, 'circle-radius', radius);
            map.setPaintProperty('point-' + pointId, 'circle-opacity', opacity);
            if (opacity <= 0) {
               radius = initialRadius;
               opacity = initialOpacity;
            }
            points.set(pointId, [radius, opacity]);
         }, 1000 / framesPerSecond);
      } else {
         // The point is removed, we don't do anything at this moment
      }
   }
}

function addPoint(msg) {
   geo = JSON.parse(msg.geoip);
   var ip = geo.ip;
   // Create a geohash based on the lat/lon of the IP. We use precision 7 to prevent overlapping point animations
   var geohash = encodeGeoHash(geo.latitude, geo.longitude, 7);
   // Get the AS Organisation name (or unknown)
   var ASORG = (geo.as_org === undefined ? 'Unknown' : geo.as_org);

   // Remove the flashrow style from the last added row
   var flashrows = document.getElementById("tickettable").getElementsByClassName('flashrow');
   while (flashrows[0]) {
      flashrows[0].classList.remove('flashrow');
   }

   // Get the table to add the newly added point information
   var tbody = document.getElementById("tickettable").getElementsByTagName('tbody')[0];
   tbody.insertRow().innerHTML = '<td class="flashrow">' + new Date().toLocaleTimeString() + '</td>' +
         '<td class="flashrow">' + geo.country_name + '</td>' +
         '<td class="flashrow">' + ASORG + '</td>' +
         '<td class="flashrow">' + msg.protocol.toUpperCase() + ' Attack on port ' + msg.dest_port + '</td>';

   // If we have more than 5 items in the list, remove the first one
   if (tbody.rows.length > 5) {
      tbody.deleteRow(0);
   }

   // Add the point to the map if it is not already on the map
   if (!(geohash === undefined)) {
      if (!(points.has(geohash))) {
         // Add the point to the hash to keep track of all active points and prevent duplicate points
         points.set(geohash, [initialRadius, initialOpacity]);

         // Set a timer to remove the point after 5 minutes
         setTimeout(function() { removePoint(geohash) }, displayTime * 1000);

         map.addSource('points-' + geohash, {
            "type": "geojson",
            "data": {
               "type": "Feature",
               "geometry": {
                  "type": "Point",
                  "coordinates": [geo.longitude, geo.latitude]
               },
               "properties": {
                  "description": "<strong>" + ASORG + " (AS " + geo.asn + ")</strong><p>IP: " + ip + "<BR>City: " + (geo.city_name === undefined ? 'Unknown' : geo.city_name) +
                     "<BR>Region: " + (geo.region_name === undefined ? 'Unknown' : geo.region_name) + "<BR>Country: " + (geo.country_name === undefined ? 'Unknown' : geo.country_name) + "</P>"
               }
            }
         });

         map.addLayer({
            "id": "point-" + geohash,
            "source": "points-" + geohash,
            "type": "circle",
            "paint": {
               "circle-radius": initialRadius,
               "circle-radius-transition": {duration: 0},
               "circle-opacity-transition": {duration: 0},
               "circle-color": "#dd7cbf"
            }
         });

         map.on('mouseenter', 'point-' + geohash, function (e) {
            // Change the cursor style as a UI indicator.
            map.getCanvas().style.cursor = 'pointer';
            var coordinates = e.features[0].geometry.coordinates.slice();
            var description = e.features[0].properties.description;
            // Ensure that if the map is zoomed out such that multiple
            // copies of the feature are visible, the popup appears
            // over the copy being pointed to.
            while (Math.abs(e.lngLat.lng - coordinates[0]) > 180) {
               coordinates[0] += e.lngLat.lng > coordinates[0] ? 360 : -360;
            }
            // Populate the popup and set its coordinates
            // based on the feature found.
            popup.setLngLat(coordinates).setHTML(description).addTo(map);
         });

         map.on('mouseleave', 'point-' + geohash, function () {
            map.getCanvas().style.cursor = '';
            popup.remove();
         });

         // Animate the added point.
         animateMarker(0, geohash);
      }
   }
}

function removePoint(pointId) {
   // Stop the pending animation timer and remove the point from the map
   if (points.has(pointId)) {
      clearTimeout(timers[pointId]);
      points.delete(pointId);
      map.removeLayer('point-' + pointId);
      map.removeSource('points-' + pointId);
   }
}

function encodeGeoHash(latitude, longitude, precision) {
  var BITS = [16, 8, 4, 2, 1];
  var BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";
  var isEven = 1;
  var lat = [-90.0, 90.0];
  var lon = [-180.0, 180.0];
  var bit = 0;
  var ch = 0;
  precision = precision || 12;

  var geohash = "";
  while (geohash.length < precision) {
    var mid;
    if (isEven) {
      mid = (lon[0] + lon[1]) / 2;
      if (longitude > mid) {
        ch |= BITS[bit];
        lon[0] = mid;
      } else {
        lon[1] = mid;
      }
    } else {
      mid = (lat[0] + lat[1]) / 2;
      if (latitude > mid) {
        ch |= BITS[bit];
        lat[0] = mid;
      } else {
        lat[1] = mid;
      }
    }

    isEven = !isEven;
    if (bit < 4) {
      bit++;
    } else {
      geohash += BASE32[ch];
      bit = 0;
      ch = 0;
    }
  }
  return geohash;
}

You will need to make two small modifications at the top of the file:

  • Set the IP of your webserver
  • Set your Mapbox access token

Once you have done this, open the page in your browser and a map should appear. NOTE: nothing will happen at this moment 🙂

Configure Logstash

Now we are all set for the last part: configuring Logstash to forward (some) logs to our Node.js application as well.
On your T-Pot server, we need to get the Logstash configuration as described on the T-Pot Wiki:

docker exec -it logstash ash
cd /etc/logstash/conf.d/
cp logstash.conf /data/elk/logstash.conf

Open the Logstash configuration and add the following lines to the output section, after the Elasticsearch output:

if [type] == "ConPot" and [dest_port] and [event_type] == "NEW_CONNECTION" and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "Elastic"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "Ciscoasa" and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "Ciscoasa"
       "source" => "%{src_ip}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "Mailoney" and [dest_port] and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "Mail"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "ElasticPot" and [dest_port] and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "Elastic"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "Adbhoney" and [dest_port] and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "ADB"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "Dionaea" and [dest_port] and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "%{[connection][transport]}"
       "service" => "%{[connection][protocol]}"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "Fatt" and [protocol] != "ssh" and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "%{protocol}"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "Cowrie" and [dest_port] and [protocol] and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "%{protocol}"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}
if [type] == "HoneyTrap" and [dest_port] and [src_ip] != "${MY_INTIP}" {
   http {
     url => "http://${HTTP_LOGIP}"
     http_method => "post"
     mapping => {
       "type" => "%{type}"
       "protocol" => "%{[attack_connection][protocol]}"
       "source" => "%{src_ip}"
       "dest_port" => "%{dest_port}"
       "geoip" => "%{geoip}"
     }
   }
}

We need to add a new variable to the docker environment:

vi /opt/tpot/etc/compose/elk_environment

Then add the following lines to the file, replacing the placeholders with your own values. MY_INTIP excludes traffic from the T-Pot server itself from the forwarded Logstash messages, and HTTP_LOGIP should point at the webserver running our Node.js application:

MY_INTIP=<IP of your T-Pot server>
HTTP_LOGIP=<IP of your webserver>:8080

Now add a new docker volume for the Logstash service:

vi /opt/tpot/etc/tpot.yml

Go to the Logstash service section and add the following line:

- /data/elk/logstash.conf:/etc/logstash/conf.d/logstash.conf

Now we are all set and it's time to restart your T-Pot service:

systemctl restart tpot

That’s It!

Now take a look at your map. If there are attacks on your server, they should appear on the map and in the listing below.

You can trigger an event yourself by, for example, opening a regular SSH session to your T-Pot server:

ssh <IP of your T-Pot server>

Simply close the connection once it is established, and your location should appear on the map.

Daily figures

Almost immediately after you start running a honeypot, you will see attacks. Within one day, I saw over 200,000 attacks, mostly on common ports such as HTTP(S), SSH, and SMTP. You can use this data to make your environments safer, or just use it for some fun projects.

Some notes
As this was a quick project with limited time, there is definitely room for optimisation and better coding 🙂 The JavaScript will throw a few errors after a while, probably because points are removed from the map at the same moment a call arrives to update that same point. In addition, some points on the map will suddenly run on steroids, animating at higher frame rates than they did initially. The Node.js application was made quick and dirty, but is suitable for this demo.
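The "steroids" effect is consistent with multiple timers being scheduled for the same point. One possible guard, sketched here as a hypothetical helper (not part of map.js), cancels any pending timer before scheduling a new one; the timer functions are injectable so the logic is easy to test, while in the browser they would simply be setTimeout/clearTimeout:

```javascript
// Keep at most one pending timer per point id.
function makeScheduler(setT, clearT) {
  const timers = new Map();
  return function schedule(pointId, fn, delay) {
    if (timers.has(pointId)) {
      clearT(timers.get(pointId)); // cancel the previous timer for this point
    }
    timers.set(pointId, setT(fn, delay));
  };
}
```

With a guard like this, re-adding an already animating point can never stack a second animation loop on top of the first.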

Technical Careers at Leaseweb

We are searching for the next generation of engineers and developers to help us build the infrastructure to automate our global hosting services! If you are interested in finding out more, check out our Careers at Leaseweb.


Measuring and Monitoring With Prometheus and Alertmanager Part 2

This is part two in the series about Prometheus and Alertmanager.

In the first part we installed the Prometheus server and the node exporter, in addition to discovering some of the measuring and graphing capabilities using the Prometheus server web interface.

In this part, we will be looking at Grafana to expand the possibilities of graphing our metrics, and we will use Alertmanager to alert us of any metrics that are outside the boundaries we define for them. Finally, we will install a dashboard application for a nice tactical overview of our Prometheus monitoring platform.

Installing Grafana

The installation of Grafana is fairly straightforward, and all of the steps involved are described in full detail in the official documentation.

For Ubuntu, which we’re using in this series, the steps involved are:

sudo apt-get install -y apt-transport-https software-properties-common wget
wget -q -O - | sudo apt-key add -
echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install grafana

At this point Grafana will be installed, but the service has not been started yet.

To start the service and verify that the service has started:

sudo systemctl daemon-reload
sudo systemctl start grafana-server
sudo systemctl status grafana-server

You should see something like this:

Prometheus and Alertmanager

If you want Grafana to start at boot time, run the following:

sudo systemctl enable grafana-server.service

Grafana listens on port 3000 by default, so at this point you should be able to access your Grafana installation at http://<IP of your Grafana server>:3000

You will be welcomed by the login screen. The default login after installation is admin with password admin.

Prometheus and Alertmanager

After successfully logging in, you will be asked to change the password for the admin user. Do this immediately!

Creating the Prometheus Data Source

Our next step is to create a new data source in Grafana that connects to our Prometheus installation. To do this, go to Configuration > Data Sources and click the blue Add data source button.

Grafana supports various time series data sources, but we will pick the top one, which is Prometheus.

Enter the URL of your Prometheus server, and that's it! Leave all the other fields untouched; they are not needed at this point.

You should now have a Prometheus data source in Grafana, and we can start creating some dashboards!

Creating Our First Grafana Dashboard

A lot of community-created dashboards can be found online. We're going to use one of them that will give us a very nice overview of the metrics scraped from the node exporter.

To import a dashboard click the + icon in the side menu, and then click Import.

Prometheus and Alertmanager

Enter the dashboard ID 1860 in the ‘Import via’ field and click ‘Load’.

The dashboard should be imported, and the only thing we still need to do is select our Prometheus data source we just created in the dropdown at the bottom of the page and click ‘Import’:

Prometheus and Alertmanager

You should now have your first pretty Grafana dashboard, showing all of the important metrics offered by the node exporter.

Prometheus and Alertmanager

Adding Alertmanager in the Mix

Now that we have all these metrics of our nodes flowing into Prometheus, and we have a nice way of visualising this data, it would be nice if we could also raise alerts when things don’t go as planned. Grafana offers some basic alerting functionality for Prometheus data sources, but if you want more advanced features, Alertmanager is the way to go.

Alerting rules are set up in the Prometheus server. These rules allow you to define alert conditions based on PromQL expressions. Whenever an alert expression yields one or more results at evaluation time, the alert is considered active.

To turn this active alert condition into an action, Alertmanager comes into play. It can send out notifications via a large variety of methods, such as email, communication platforms like Slack or Mattermost, and incident/on-call management tools such as PagerDuty and OpsGenie. Alertmanager also handles summarization, aggregation, rate limiting, and silencing of alerts.

Let’s go ahead and install Alertmanager on the Prometheus server instance we installed in part one of this blog.

Installing Alertmanager

Start off by creating a separate user for Alertmanager:

useradd -M -r -s /bin/false alertmanager

Next, we need a directory for the configuration:

mkdir /etc/alertmanager
chown alertmanager:alertmanager /etc/alertmanager

Then download Alertmanager and verify its integrity:

cd /tmp
wget -O - -q | grep linux-amd64 | shasum -c -

The last command should result in alertmanager-0.21.0.linux-amd64.tar.gz: OK. If it doesn’t, the downloaded file is corrupted, and you should try again.

Next we unpack the file and move the various components into place:

tar xzf alertmanager-0.21.0.linux-amd64.tar.gz
cp alertmanager-0.21.0.linux-amd64/{alertmanager,amtool} /usr/local/bin/
chown alertmanager:alertmanager /usr/local/bin/{alertmanager,amtool}

And clean up our downloaded files in /tmp:

rm -f /tmp/alertmanager-0.21.0.linux-amd64.tar.gz
rm -rf /tmp/alertmanager-0.21.0.linux-amd64

We need to supply Alertmanager with an initial configuration. For our first test, we will configure alerting by email (Be sure to adapt this configuration for your email setup!):

global:
  smtp_from: 'AlertManager <>'
  smtp_smarthost: ''
  smtp_hello: 'alertmanager'
  smtp_auth_username: 'username'
  smtp_auth_password: 'password'
  smtp_require_tls: true

route:
  group_by: ['instance', 'alert']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: myteam

receivers:
  - name: 'myteam'
    email_configs:
      - to: ''

Save this in a file called /etc/alertmanager/alertmanager.yml and set permissions:

chown alertmanager:alertmanager /etc/alertmanager/alertmanager.yml

To be able to start and stop our Alertmanager instance, we will create a systemd unit file. Use your favorite editor to create the file /etc/systemd/system/alertmanager.service and add the following standard unit definition to it (replacing <server IP> with the IP or resolvable FQDN of your server):

[Unit]
Description=Alertmanager
Wants=network-online.target
After=network-online.target

[Service]
User=alertmanager
Group=alertmanager
Type=simple
ExecStart=/usr/local/bin/alertmanager \
    --config.file=/etc/alertmanager/alertmanager.yml \
    --web.external-url http://<server IP>:9093

[Install]
WantedBy=multi-user.target

Activate and start the service with the following commands:

systemctl daemon-reload
systemctl start alertmanager
systemctl enable alertmanager

The command systemctl status alertmanager should now indicate that our service is up and running:

Prometheus and Alertmanager

Now we need to alter the configuration of our Prometheus server to inform it about our Alertmanager instance. Edit the file /etc/prometheus/prometheus.yml. There should already be an alerting section; all we need to do is change it so it looks like this:

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093

We also need to tell Prometheus where our alerting rules live. Change the rule_files section to look like this:

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "/etc/prometheus/rules/*.yml"

Save the changes, and create the directory for the alert rules:

mkdir /etc/prometheus/rules
chown prometheus:prometheus /etc/prometheus/rules

Restart the Prometheus server to apply the changes:

systemctl restart prometheus

Creating Our First Alert Rule

Alerting rules are written using the Prometheus expression language or PromQL. One of the easiest things to check is whether all Prometheus targets are up, and trigger an alert when a certain exporter target becomes unreachable. This is done with the simple expression up.

Let’s create our first alert by creating the file /etc/prometheus/rules/alert-rules.yml with the following content:

groups:
- name: alert-rules
  rules:
  - alert: ExporterDown
    expr: up == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      description: 'Metrics exporter service for {{ $labels.job }} running on {{ $labels.instance }} has been down for more than 5 minutes.'
      summary: 'Exporter down (instance {{ $labels.instance }})'

This alert will trigger as soon as any of the exporter targets in Prometheus is not reported as up for more than 5 minutes. We apply the severity label critical to it.

Restart prometheus with systemctl restart prometheus to load the new alert rule.

You should now also be able to see the alert rule in the Prometheus web interface, under the Alerts section.

Prometheus and Alertmanager

Now the easiest way for us to check that this alert actually fires, and that we get our email notification, is to stop the node exporter service:

systemctl stop node_exporter

As soon as we do this, we can see that the alert status has changed in the Prometheus server dashboard. It is now marked as active, but is not yet firing, because the condition needs to persist for a minimum of 5 minutes, as specified in our alert rule.
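The pending-versus-firing distinction introduced by the rule's `for:` clause can be sketched as a tiny state function (a hypothetical helper, not actual Prometheus code):

```javascript
// activeSinceSeconds: how long the alert condition has held, or null if it does not hold.
// forSeconds: the duration from the rule's `for:` clause.
function alertState(activeSinceSeconds, forSeconds) {
  if (activeSinceSeconds === null) return 'inactive';
  // The alert only fires once the condition has held for the full `for:` duration.
  return activeSinceSeconds >= forSeconds ? 'firing' : 'pending';
}
```

Only the transition to 'firing' is handed to Alertmanager; a condition that recovers within the `for:` window never produces a notification.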

Prometheus and Alertmanager

When the 5 minute mark is reached, the alert fires, and we should receive an email from Alertmanager alerting us about the situation:

Prometheus and Alertmanager
Prometheus and Alertmanager

We should also be able to manage the alert now in the Alertmanager web interface. Open http://<server IP>:9093 in your browser and the alert that we just triggered should be listed. We can choose to silence the alert, to prevent any more alerts from being sent out.

Prometheus and Alertmanager

Click silence, and you will be able to configure the duration of the silence period, add a creator and a description for some more metadata, and expand or limit the group of alerts this particular silence applies to. If, for example, I wanted to silence all ExporterDown alerts for the next 2 hours, I could remove the instance matcher.

Prometheus and Alertmanager

More Advanced Alert Examples

Since Prometheus alerts use the same powerful PromQL expressions as queries, we are able to define rules that go way beyond whether a service is up or down. For a full rundown of all the PromQL functions available, check out the Prometheus documentation or the excellent PromQL for humans.

Memory Usage

For starters, here is an example of an alert rule to check the memory usage of a node. It fires once the percentage of memory available is smaller than 10% of the total memory available for a duration of 5 minutes:

  - alert: HostOutOfMemory
    expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: 'Host out of memory (instance {{ $labels.instance }})'
      description: 'Node memory is filling up (< 10% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}'

Disk Space

We can do something similar for disk space. This alert will fire as soon as one of our target’s filesystems has less than 10% of its capacity available for a duration of 5 minutes:

  - alert: HostOutOfDiskSpace
    expr: (node_filesystem_avail_bytes * 100) / node_filesystem_size_bytes < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: 'Host out of disk space (instance {{ $labels.instance }})'
      description: 'Disk is almost full (< 10% left)\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}'

CPU Usage

To alert on CPU usage, we can use the metrics available under node_cpu_seconds_total. In the previous part of this blog, we already went into the specific metrics found there.

This alert takes the rate of idle CPU seconds and multiplies it by 100 to get the average percentage of idle CPU cycles over the last 5 minutes. We average this by instance to include all CPUs (cores) in the average; otherwise we would end up with a percentage for each individual CPU in the system.

The alert will fire when the average CPU usage of the system exceeds 80% for 5 minutes:

  - alert: HostHighCpuLoad
    expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: 'Host high CPU load (instance {{ $labels.instance }})'
      description: 'CPU load is > 80%\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}'
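The arithmetic in that expression can be sketched as follows (a hypothetical helper; in reality PromQL does this server-side):

```javascript
// Given the per-CPU idle fractions (the rate() of idle seconds per second),
// average them across the instance and convert to a busy percentage.
function cpuBusyPercent(idleRatesPerCpu) {
  const avgIdle = idleRatesPerCpu.reduce((a, b) => a + b, 0) / idleRatesPerCpu.length;
  return 100 - avgIdle * 100;
}
```

For example, two cores that are 10% and 30% idle average out to 20% idle for the instance, i.e. 80% busy, which is exactly the alert threshold.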

Predictive Alerting

Using the PromQL function predict_linear we can expand on the disk space alert mentioned earlier. predict_linear can predict the value of a certain time series X seconds from now. We can use this to predict when our disk is going to fill up, if the pattern follows a linear prediction model.

The following alert will trigger if the linear prediction algorithm, using disk usage patterns over the last hour, determines that the disk will fill up in the next four hours:

  - alert: DiskWillFillIn4Hours
    expr: predict_linear(node_filesystem_free_bytes[1h], 4 * 3600) < 0
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: 'Disk {{ $labels.device }} will fill up in the next 4 hours'
      description: |
        Based on the trend over the last hour, it looks like the disk {{ $labels.device }} on {{ $labels.mountpoint }}
        will fill up in the next 4 hours ({{ $value | humanize }} bytes remaining)
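Under the hood, predict_linear fits a simple least-squares line through the samples in the range and extrapolates it forward. A rough sketch of that idea (a hypothetical helper; Prometheus extrapolates relative to the evaluation time, while this sketch extrapolates from the last sample):

```javascript
// Fit v = intercept + slope * t by least squares over the samples,
// then extrapolate secondsAhead past the last sample.
// samples: array of {t: seconds, v: value}
function predictLinear(samples, secondsAhead) {
  const n = samples.length;
  const meanT = samples.reduce((s, p) => s + p.t, 0) / n;
  const meanV = samples.reduce((s, p) => s + p.v, 0) / n;
  let num = 0, den = 0;
  for (const p of samples) {
    num += (p.t - meanT) * (p.v - meanV);
    den += (p.t - meanT) ** 2;
  }
  const slope = num / den;
  const intercept = meanV - slope * meanT;
  const last = samples[n - 1].t;
  return intercept + slope * (last + secondsAhead);
}
```

A disk steadily losing one byte per second would extrapolate to its current free space minus 4 * 3600 bytes four hours from now; the alert fires once that extrapolated value drops below zero.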

Give Me More!

If you are interested in more examples of alert rules, you can find a very extensive collection at Awesome Prometheus alerts. You can also find examples there for exporters we haven't covered, such as the Blackbox or MySQL exporters.

Syntax Checking Your Alert Rule Definitions

Prometheus comes with a tool that allows you to verify the syntax of your alert rules. This will come in handy for local development of rules or in CI/CD pipelines, to make sure that no broken syntax makes it to your production Prometheus platform.

You can invoke the tool by running promtool check rules against your rules file:

# promtool check rules /etc/prometheus/rules/alert-rules.yml
Checking /etc/prometheus/rules/alert-rules.yml
  SUCCESS: 5 rules found

Scraping Metrics From Alertmanager

Alertmanager has a built-in metrics endpoint that exports metrics about how many alerts are firing, resolved, or silenced. Now that we have all components running, we can add Alertmanager as a target to our Prometheus server to start scraping these metrics.

On your Prometheus server, open /etc/prometheus/prometheus.yml with your favorite editor and add the following new job under the scrape_configs section (replace with the IP of your alertmanager instance):

  - job_name: 'alertmanager'
    static_configs:
      - targets: ['<alertmanager IP>:9093']

Restart Prometheus, and check in the Prometheus web console if you can see the new Alertmanager section under Status > Targets. If all goes well, a query in the Prometheus web console for alertmanager_cluster_enabled should return one result with the value 1.

We can now continue with adding alert rules for Alertmanager itself:

  - alert: PrometheusNotConnectedToAlertmanager
    expr: prometheus_notifications_alertmanagers_discovered < 1
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: 'Prometheus not connected to alertmanager (instance {{ $labels.instance }})'
      description: 'Prometheus cannot connect to the alertmanager\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}'
  - alert: PrometheusAlertmanagerNotificationFailing
    expr: rate(alertmanager_notifications_failed_total[1m]) > 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: 'Prometheus AlertManager notification failing (instance {{ $labels.instance }})'
      description: 'Alertmanager is failing to send notifications\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}'

The first rule will fire when Prometheus has not discovered a connected Alertmanager for over 5 minutes; the second will fire when Alertmanager fails to send out notifications. But how will we know about the alert if notifications are failing? That’s where the next section comes in handy!

Alertmanager Dashboard Using Karma

The Alertmanager web console is useful for a basic overview of alerts and to manage silences, but it is not really suitable for use as a dashboard that gives us a tactical overview of our Prometheus monitoring platform.

For this, we will use Karma.


Karma offers a nice overview of active alerts, grouping of alerts by a certain label, silence management, alert acknowledgement, and more.

We can install it on the same machine where Alertmanager is running using the following steps.

Start off by creating a separate user and configuration folder for karma:

useradd -M -r -s /bin/false karma
mkdir /etc/karma
chown karma:karma /etc/karma

Then download the file and verify its checksum:

cd /tmp
wget -O - -q | grep linux-amd64 | shasum -c -

Make sure the last command returns karma-linux-amd64.tar.gz: OK again. Now unpack the file and move it into place:

tar xzf karma-linux-amd64.tar.gz
mv karma-linux-amd64 /usr/local/bin/karma
rm karma-linux-amd64.tar.gz

Create the file /etc/karma/karma.yml and add the following default configuration (replace the username and password):

alertmanager:
  interval: 1m
  servers:
    - name: alertmanager
      uri: http://localhost:9093
      timeout: 20s
      username: cartman
      password: secret

Set the proper permissions on the config file:

chown karma:karma /etc/karma/karma.yml
chmod 640 /etc/karma/karma.yml

Create the file /etc/systemd/system/karma.service with the following content:

[Unit]
Description=Karma Alertmanager dashboard
After=network.target

[Service]
User=karma
Group=karma
ExecStart=/usr/local/bin/karma \
    --config.file /etc/karma/karma.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target

Activate and start the service with the following commands:

systemctl daemon-reload
systemctl start karma
systemctl enable karma

The command systemctl status karma should now indicate that karma is up and running.


You should be able to visit your new Karma dashboard now at http://<alertmanager server IP>:8080. If we stop the node_exporter service again and wait 5 minutes, the fired alert shows up on the dashboard.


If you want to explore all the possibilities and configuration options of Karma, then please see the documentation.


In this series we’ve installed Prometheus, the node exporter, and the Alertmanager. We’ve given a brief introduction to PromQL, shown how to write Prometheus queries and alert rules, and used Grafana to graph metrics and Karma to provide an overview of triggered alerts.

If you want to explore further, the official Prometheus and Grafana documentation are good places to start.


Understanding and Interpreting CPU Steal Time on Virtual Machines

Virtual machines report on different types of usage metrics, such as server load, memory usage, and steal time. Customers often ask about steal time – what is it, and why is it reported on their virtual machines? Read on as we explain how steal time works to better understand what it means for your virtual machine. 

What is Steal Time? 

Steal time is the percentage of time a virtual machine process waits on the physical CPU for its CPU time. You can monitor processes and resource usage by running the top command on your Linux server. Among the usage metrics, steal time is labeled ‘st’.
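
top derives this percentage from the counters in /proc/stat. Here is a sketch of the computation from two readings of the aggregate cpu line (the counter values below are made up):

```python
# Sketch: derive the steal-time percentage ("st" in top) from two readings
# of the aggregate "cpu" line in /proc/stat. Values below are made up.
# Field order: user nice system idle iowait irq softirq steal guest guest_nice
sample_t0 = "cpu  4705 150 1120 16250 520 30 45 100 0 0"
sample_t1 = "cpu  4805 150 1170 16950 540 32 48 155 0 0"

def steal_percent(line_t0, line_t1):
    f0 = [int(x) for x in line_t0.split()[1:]]
    f1 = [int(x) for x in line_t1.split()[1:]]
    total_delta = sum(f1) - sum(f0)   # total jiffies elapsed between samples
    steal_delta = f1[7] - f0[7]       # "steal" is the 8th counter
    return 100.0 * steal_delta / total_delta

print(round(steal_percent(sample_t0, sample_t1), 1))  # 5.9
```

In this made-up interval, about 5.9% of the CPU time was stolen by the hypervisor.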

CPU in Virtual Environments

In cloud environments, the hypervisor acts as the interface between the physical server and its virtualized environment. The hypervisor kernel schedules all running processes onto the physical cores of the server. Processes such as virtual machines, networking operations, and storage I/O requests are each given CPU time to do their work. Because CPU time is divided between these processes, shifting priorities create contention over the physical cores.

%Idle Time

Steal time can also be visible on virtual machines alongside idle time. Idle time means that there is CPU time allocated by the hypervisor, but the virtual machine did not use that time. In this case, we can assume there was no effect on the performance at all.

When the idle time percentage is 0 and steal time is present, we can assume that processes on the virtual machine are processed with a delay.

Multi-Tenant Cloud

Leaseweb cloud platforms consist of single-tenant and multi-tenant environments. Leaseweb CloudStack products allow you to develop and run a multi-tenant environment, enabling different kinds of users to run their cloud infrastructures at a lower cost. Along with not overselling virtual cores on our premium CloudStack platforms, we also do not pin virtual machines to CPU cores. This allows the hypervisor to allocate CPU time from all the server’s physical cores to any of its active processes.

Theoretically speaking, if the virtual machine has immediate access to its assigned cores 100% of the time, there would be no steal time visible. However, hypervisors are running many different tasks and are continuously performing actions such as rescheduling tasks for efficiency and processing received data from other systems. All these processes require CPU time from the hypervisor’s CPU, resulting in delayed access to the physical cores and adding steal time to the virtual machine.

Analyze Service Performance

A small amount of steal time is often unavoidable in modern hosting environments, particularly when running on shared cloud hosting. The steal time virtual machines experience is not always visible from outside the virtualized operating system.

If you see constant steal time registered by the virtual machine, try to find a correlation with the tasks you are executing. More importantly, does this steal time result in performance loss? Are you noticing any loss in performance in your applications? If so, try measuring output to discover latency in the whole flow of your application in relation to steal time. Keep your hosting provider informed if you do see an impact on your application. In many situations, they can find a more suitable environment by moving your virtual machine to a different hypervisor.


Automate Your Server Platform Using The Leaseweb API

This blog is about using the Leaseweb API to automate the management of your dedicated servers running with Leaseweb – including examples of how to deploy your dedicated servers without even logging in once! While this article focuses solely on dedicated servers, we also have many API calls for managing your cloud(s) at Leaseweb.


Did you know Leaseweb provides access to more than 150 backend features to manage and control your platform? We develop our systems with an ‘API-first’ philosophy, meaning that anything you do via our Customer Portal can also be done via the API (or more!). Many of these functions will help you when deploying a new dedicated server.

Getting Started

Let’s start by exploring what we can do with the API. Full disclaimer: it doesn’t matter if you are a newbie in the world of API calls or an expert – we have documented everything clearly and with plenty of examples to make your life easier. Detailed API information and examples in five different programming languages (including Ruby, PHP, and Python) can be found on the Leaseweb developer website. We have also documented how to access the API on our developer website.

Once you have created your API key you can begin exploring different possibilities. Some options we provide that programmatically control your dedicated server platform via our API are:

  • Server management: fully manage your dedicated servers, including OS install, power cycle, hardware scan, etc.
  • Private Network: add or remove servers from the Private Network.
  • Floating IPs: control which servers the Floating IPs are routed to.
  • IP Management: see which IP is assigned to which server, null route IPs, or set reverse DNS entries.


Assume I have a server farm that consists of several dedicated servers, each with its own Internet uplink public IP. The servers are interconnected on the backend using Private Networking, while the Floating IPs are used to quickly redirect traffic to a different server for disaster recovery purposes.

What are some of the basic API calls I could use to control my farm? Let me give you some examples.

(Re)Installation of an Operating System

Using a POST operation, I can request a reinstall of the OS.

I also need to know the different parameters (payload) like serverID and OperatingSystemID, which can be retrieved via different API calls. In the payload area I can specify what the partition layout should be so that the OS will be installed to my exact specifications.
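
As a sketch, such a reinstall request could be constructed as follows. The endpoint path, the X-LSW-Auth header, and the operatingSystemId field reflect the public Leaseweb API documentation as best I recall it, and the server ID and key are placeholders — verify the exact names on the developer website before use:

```python
import json

# Sketch of an OS (re)install request. Endpoint path, header name, and
# payload fields are assumptions based on the public Leaseweb API docs --
# verify on developer.leaseweb.com. Server ID and key are placeholders.
API_KEY = "your-api-key"   # created in the Customer Portal
SERVER_ID = "12345678"     # hypothetical dedicated server ID

url = f"https://api.leaseweb.com/bareMetals/v2/servers/{SERVER_ID}/install"
headers = {"X-LSW-Auth": API_KEY, "Content-Type": "application/json"}
payload = {
    # The OS ID can be retrieved via the operating systems list call
    "operatingSystemId": "UBUNTU_20_04_64BIT",
}

# The actual call would need the `requests` package and a valid key:
# requests.post(url, headers=headers, data=json.dumps(payload))
print(url)
print(json.dumps(payload))
```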

Custom Operating System Image

Alternatively, it is possible to install your own custom installation image using PXE boot while using the API to set the DHCP option. This will include it in the DHCP lease for the server.

Once the server boots, it will get a DHCP lease and will retrieve the installation image from the URL specified in the DHCP option. For more information, check out our Knowledge Base article that explains this feature in more detail.

Other Features

Some other nice features to control the server farm include possibilities to:

  •  Power cycle a server

This will turn the power off and on at the PDU port that the server is connected to.

  • Show which IPs (including Floating IPs) are assigned to a server or are null-routed

  • Perform a hardware scan (reboots the server) and get hardware inventory of a server

  • Inspect how much data traffic a server is generating, both upstream and downstream

Private Networking

Servers in the same metro data center area (which in most cases means Leaseweb data centers that are relatively close to each other) can be connected to the Leaseweb Private Network. This backend network provides a secure and unmetered layer-2 connection between all servers, with port speeds of up to 10Gbps.

I am able to check if a server is already connected to my assigned Private Network by using the Inspect Private Network operation. This gives me a list (array) of server IDs that are connected.
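
For example, given a (hypothetical) list of connected server IDs returned by that call, finding the servers in my farm that still need to be added is a simple set difference:

```python
# My server farm, and a hypothetical response (a list of connected server
# IDs) from the "inspect private network" API call. The exact JSON shape
# may differ; here we assume a plain list of ID strings.
my_servers = {"12345", "23456", "34567", "45678"}
connected = {"12345", "34567"}

# Servers that still need to be added to the Private Network:
to_add = sorted(my_servers - connected)
print(to_add)  # ['23456', '45678']
```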

If needed, I can add or remove a server from the PN or change the port speed.

Floating IPs

Floating IPs provide the possibility to dynamically reroute traffic to a different server (anchor IP). It is possible to automate this process using the API. Assuming a Floating IP definition has already been defined and traffic is routed to my first server’s anchor IP, I can use a PUT request to change the anchor IP to my second server.

A good example would be to embed this in a monitoring system. Once the monitoring system detects a ‘server unavailable’, it can automatically redirect traffic to the standby server using this API call.
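
A monitoring hook for this failover could be sketched as below. The endpoint path and the anchorIp payload field are my reading of the Floating IPs API — double-check them against the developer website — and all IDs and IPs are placeholder values from the documentation ranges:

```python
import json

# Sketch of a failover hook: when the primary anchor IP is unhealthy,
# repoint the Floating IP to the standby server. The endpoint path and
# the "anchorIp" field are assumptions based on the Leaseweb API docs --
# verify on developer.leaseweb.com.
PRIMARY = "192.0.2.10"   # documentation-range example IPs
STANDBY = "192.0.2.20"

def choose_anchor(primary_healthy):
    return PRIMARY if primary_healthy else STANDBY

def build_update(range_id, definition_id, anchor_ip):
    url = (f"https://api.leaseweb.com/floatingIps/v2/ranges/{range_id}"
           f"/floatingIpDefinitions/{definition_id}")
    return url, json.dumps({"anchorIp": anchor_ip})

# The monitoring system detected that the primary server is down:
url, body = build_update("198.51.100.0_28", "198.51.100.1", choose_anchor(False))
print(body)  # {"anchorIp": "192.0.2.20"}
```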


While this blog is too short to cover everything the API offers, it should give you some idea of what a powerful mechanism the API can be for automating and managing your environment.


Set up Private DNS-over-TLS/HTTPS

Domain Name System (DNS) is a crucial part of the Internet’s infrastructure. It is responsible for translating a human-readable, memorizable domain name (such as example.com) into a numeric IP address (such as 93.184.216.34).

In order to translate a domain into an IP address, your device sends a DNS request to a special DNS server called a resolver (which is most likely managed by your Internet provider). The DNS requests are sent in plain text so anyone who has access to your traffic stream can see which domains you visit.

There are two recent Internet standards that have been designed to solve the DNS privacy issue:

  • DNS over TLS (DoT)
  • DNS over HTTPS (DoH)

Both of them provide secure and encrypted connections to a DNS server.
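
To illustrate how lightweight DoH is: per RFC 8484, a DoH GET request is an ordinary HTTPS request whose dns= parameter is a base64url-encoded binary DNS message. The sketch below only builds such a URL (using the hypothetical resolver name dns.example.com) without sending anything:

```python
import base64
import struct

def build_doh_url(resolver, qname):
    # DNS header: ID=0, flags=0x0100 (recursion desired), 1 question,
    # 0 answer/authority/additional records
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, QTYPE=A (1), QCLASS=IN (1)
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in qname.split(".")
    ) + b"\x00" + struct.pack("!HH", 1, 1)
    msg = header + question
    # RFC 8484: base64url without padding in the dns= query parameter
    dns_param = base64.urlsafe_b64encode(msg).rstrip(b"=").decode()
    return f"https://{resolver}/dns-query?dns={dns_param}"

print(build_doh_url("dns.example.com", "example.com"))
```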

DoT/DoH feature compatibility matrix:

  • Firefox: DoH
  • Chrome: DoH
  • Android 9+: DoT
  • iOS 14+: DoT and DoH

iOS 14 will be released later this year.

In this article, we will set up a private DoH and DoT resolver using Pi-hole in a Docker container, with dnsdist as the DNS frontend and Let’s Encrypt SSL certificates. As a bonus, our DNS server will block tracking and malware domains while resolving queries for us.


In this example we use Ubuntu 20.04 with docker and docker-compose installed, but you can choose your favorite distro (you may need to adapt the steps a bit).

You may also need to disable systemd-resolved because it occupies port 53 of the server:

# Check which DNS resolvers your server is using:
systemd-resolve --status
# look for "DNS servers" field in output

# Stop systemd-resolved
systemctl stop systemd-resolved

# Then mask it to prevent it from starting again
systemctl mask systemd-resolved

# Delete the symlink systemd-resolved used to manage
rm /etc/resolv.conf

# Create /etc/resolv.conf as a regular file with nameservers you've been using:
cat <<EOF > /etc/resolv.conf
nameserver <ip of the first DNS resolver>
nameserver <ip of the second DNS resolver>
EOF

Install dnsdist and certbot (for letsencrypt certificates):

# Install dnsdist repo
echo "deb [arch=amd64] http://repo.powerdns.com/ubuntu focal-dnsdist-15 main" > /etc/apt/sources.list.d/pdns.list
cat <<EOF > /etc/apt/preferences.d/dnsdist
Package: dnsdist*
Pin: origin repo.powerdns.com
Pin-Priority: 600
EOF
curl https://repo.powerdns.com/FD380FBB-pub.asc | apt-key add -

apt update
apt install dnsdist certbot


Now we create our docker-compose project:

mkdir ~/pihole
touch ~/pihole/docker-compose.yml

The contents of docker-compose.yml file:

version: '3'
services:
  pihole:
    container_name: pihole
    image: 'pihole/pihole:latest'
    # The DNS server will listen on localhost only, on port 5300 tcp/udp,
    # so queries from the Internet won't be able to reach pihole directly.
    # The admin web interface, however, will be reachable from the Internet.
    ports:
      - '127.0.0.1:5300:53/tcp'
      - '127.0.0.1:5300:53/udp'
      - '8081:80/tcp'
    environment:
      TZ: Europe/Amsterdam
      VIRTUAL_HOST: dns.example.com # domain name we'll use for our DNS server
      WEBPASSWORD: super_secret # Pihole admin password
    volumes:
      - './etc-pihole/:/etc/pihole/'
      - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
    restart: unless-stopped

Start the container:

docker-compose up -d

After the container is fully started (it may take several minutes), check that it is able to resolve domain names:

dig +short example.com @127.0.0.1 -p5300
# Expected output: the IP address of example.com

Letsencrypt Configuration

Issue the certificate for our domain:

certbot certonly

Follow the instructions on the screen (i.e. select the authentication method suitable for you, and fill in the domain name).

After the certificate is issued it can be found by the following paths:

  • /etc/letsencrypt/live/dns.example.com/fullchain.pem – certificate chain
  • /etc/letsencrypt/live/dns.example.com/privkey.pem – private key

By default only the root user can read certificates and keys. Dnsdist, however, is running as user and group _dnsdist, so permissions need to be adjusted:

chgrp _dnsdist /etc/letsencrypt/live/dns.example.com/{fullchain.pem,privkey.pem}
chmod g+r /etc/letsencrypt/live/dns.example.com/{fullchain.pem,privkey.pem}

# We should also make archive and live directories readable.
# That will not expose the keys since the private key isn't world-readable
chmod 755 /etc/letsencrypt/{live,archive}

The certificates are periodically renewed by Certbot, so dnsdist should be restarted after that happens since it is not able to detect the new certificate. In order to do so, we put a so-called deploy script into /etc/letsencrypt/renewal-hooks/deploy directory:

mkdir -p /etc/letsencrypt/renewal-hooks/deploy
cat <<EOF > /etc/letsencrypt/renewal-hooks/deploy/restart-dnsdist.sh
#!/bin/sh
systemctl restart dnsdist
EOF
chmod +x /etc/letsencrypt/renewal-hooks/deploy/restart-dnsdist.sh

Dnsdist Configuration

Create dnsdist configuration file /etc/dnsdist/dnsdist.conf with the following content:


-- path for certs and listen address for DoT ipv4,
-- by default listens on port 853.
-- Set X(int) for tcp fast open queue size.
addTLSLocal("0.0.0.0", "/etc/letsencrypt/live/dns.example.com/fullchain.pem", "/etc/letsencrypt/live/dns.example.com/privkey.pem", { doTCP=true, reusePort=true, tcpFastOpenSize=64 })

-- path for certs and listen address for DoH ipv4,
-- by default listens on port 443.
-- Set X(int) for tcp fast open queue size.
-- In this example we listen directly on port 443. However, since the DoH queries are simple HTTPS requests, the server can be hidden behind Nginx or Haproxy.
addDOHLocal("0.0.0.0", "/etc/letsencrypt/live/dns.example.com/fullchain.pem", "/etc/letsencrypt/live/dns.example.com/privkey.pem", "/dns-query", { doTCP=true, reusePort=true, tcpFastOpenSize=64 })

-- set X(int) number of queries to be allowed per second from an IP
addAction(MaxQPSIPRule(50), DropAction())

--  drop ANY queries sent over udp
addAction(AndRule({QTypeRule(DNSQType.ANY), TCPRule(false)}), DropAction())

-- set X number of entries to be in dnsdist cache by default
-- memory will be preallocated based on the X number
pc = newPacketCache(10000, {maxTTL=86400})
getPool(""):setCache(pc)

-- server policy to choose the downstream servers for recursion

-- Here we define our backend, the pihole dns server
newServer({address="127.0.0.1:5300", name="pihole"})

setMaxTCPConnectionsPerClient(1000)    -- set X(int) for the number of tcp connections from a single client. Useful for rate limiting concurrent connections.
setMaxTCPQueriesPerConnection(100)     -- set X(int), similar to addAction(MaxQPSIPRule(X), DropAction())

Checking if DoH and DoT Work

Check if DoH works using curl with doh-url flag:

curl --doh-url https://dns.example.com/dns-query https://example.com

Check if DoT works using kdig program from the knot-dnsutils package:

apt install knot-dnsutils

kdig -d example.com @dns.example.com +tls-ca

Setting up Private DNS on Android

Currently only Android 9+ natively supports encrypted DNS queries by using DNS-over-TLS technology.

In order to use it, go to: Settings -> Connections -> More connection settings -> Private DNS -> Private DNS provider hostname, and enter the domain name of your DNS server (dns.example.com in our example).


In this article we’ve set up our own DNS resolving server with the following features:

  • Automatic TLS certificates using Letsencrypt.
  • Supports both modern encrypted protocols: DNS over TLS, and DNS over HTTPS.
  • Implements rate-limit of incoming queries to prevent abuse.
  • Automatically updated blacklist of malware, ad, and tracking domains.
  • Easily upgradeable by simply pulling a new version of the Docker image.