Using Correlation IDs in API Calls

Over the years, the IT industry has moved from single-domain, monolithic architectures to microservice architectures. In a microservice architecture, complex processes are split into smaller and simpler sub-processes. While this kind of architecture has many benefits, there are also some downsides – for example, a single request to a Leaseweb API results in multiple requests to other backend systems [FIGURE 1]. How do you keep track of requests and responses processed by multiple systems? This is where Correlation IDs come into play.

[FIGURE 1: Example request/response flow]

Using a Correlation ID

A Correlation ID is a unique, randomly generated identifier value that is added to every request and response. In a microservice architecture, the initial Correlation ID is passed to your sub-processes. If a sub-system also makes sub-requests, it will also pass the Correlation ID to those systems.

How you pass the Correlation ID to other systems depends on your architecture. At Leaseweb we use REST APIs a lot, with HTTP headers to pass on the Correlation ID. As a rule, we assign a Correlation ID as soon as possible, and always reuse a Correlation ID if one is passed on. Our public API only accepts Correlation IDs from internally trusted clients. For any other clients (such as employee or customer API clients), a new Correlation ID is generated for the request.
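
For illustration, a request that passes a Correlation ID as an HTTP header could look like this (the host and path are placeholders; the header name matches the examples later in this post):

GET /v1/orders HTTP/1.1
Host: api.example.com
X-My-Correlation-ID: d135d5f1-3dd0-45fa-8f26-55d8d6a44876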

Real Value of Correlation IDs

The real value of Correlation IDs is realized when you also log them. Debugging or tracing requests becomes much easier, as you can search all of your logs for the same Correlation ID. Combined with a central logging solution (such as the ELK stack), searching logs becomes even easier and can be done by non-technical colleagues. Giving your colleagues the tools to troubleshoot issues themselves allows them to take on more responsibility and frees up your time for more technical projects.

We mainly use Correlation IDs at Leaseweb for debugging purposes. When an error occurs, we provide the Correlation ID to the client/customer. If users provide the Correlation ID when submitting a support ticket, we can visualize the entire process needed to fulfil the client’s initial intent. This has significantly reduced the time it takes us to fix bugs.

[FIGURE 2: Example of one Correlation ID with multiple requests]

Debugging issues is a time-consuming process if Correlation IDs are not used. When your environment scales, you will need to find solutions to group transactions happening in your systems. By using a Correlation ID, you can easily group requests and events in your systems, allowing you to spend more time fixing the problem and less time trying to find it.

Practical examples on how to implement Correlation IDs

The following examples use Symfony, a popular web application framework. These concepts can also be applied to any other framework, such as Laravel, Django, Flask or Ruby on Rails.

If you are unfamiliar with the concept of Service Containers and Dependency Injection, we recommend reading the excellent Symfony documentation about it here: https://symfony.com/doc/current/service_container.html

Using Monolog to append Correlation IDs to your application logs

When processing an HTTP request your application often logs some information – for example when an error occurs, or when an important change is made in your system that you want to keep track of. When using the Monolog logging library in PHP (https://seldaek.github.io/monolog/), you can use the concept of “Processors” (see the Symfony documentation on Monolog for details).

One way to do this is by creating a Monolog Processor class:

<?php

namespace App\Monolog\Processor;

use Symfony\Component\HttpFoundation\RequestStack;

class CorrelationIdProcessor
{
    protected $requestStack;

    public function __construct(RequestStack $requestStack)
    {
        $this->requestStack = $requestStack;
    }

    public function processRecord(array $record)
    {
        $request = $this->requestStack->getCurrentRequest();

        if (!$request) {
            return $record;
        }

        $correlationId = $request->headers->get('X-My-Correlation-ID');

        if (empty($correlationId)) {
            return $record;
        }

        // If we have a correlation id, include it in every monolog line
        $record['extra']['correlation_id'] = $correlationId;

        return $record;
    }
}

Then register this class on the service container as a monolog processor in services.yml:

# app/config/services.yml

services:
  App\Monolog\Processor\CorrelationIdProcessor:
    arguments: ["@request_stack"]
    tags:
      - name: monolog.processor
        method: processRecord

Now, every time you log something in your application with Monolog:

$this->logger->info('shopping_cart_emptied', ['cart_id' => 123]);

You will see the Correlation ID of the HTTP Request in your log files:

$ grep 'shopping_cart_emptied' var/logs/prod.log

[2020-07-03 12:14:45] app.INFO: shopping_cart_emptied {"cart_id": 123} {"correlation_id":"d135d5f1-3dd0-45fa-8f26-55d8d6a44876"}

You can utilize the same pattern to log the name of the user that is currently logged in, the remote IP address of the API client, or anything else that makes troubleshooting faster for you.
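
For example, a minimal sketch of such extra fields inside the same processRecord method (the field names client_ip and request_uri are just examples) could look like this:

// Example only: add more request details to the 'extra' section of every log line
$record['extra']['client_ip'] = $request->getClientIp();
$record['extra']['request_uri'] = $request->getRequestUri();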

Using Guzzle to append Correlation IDs when making sub-requests

If your API makes API calls to other microservices (and you use Guzzle to do this) you can make use of Handlers and Middleware.

Some teams at Leaseweb depend on many downstream microservices and therefore have multiple Guzzle clients as services on the service container. While each Guzzle client is configured with its own base URL and/or authentication, all of the Guzzle clients can share the same HandlerStack.

First, create the middleware:

<?php

namespace App\Guzzle\Middleware;

use Symfony\Component\HttpFoundation\RequestStack;
use Psr\Http\Message\RequestInterface;

class CorrelationIdMiddleware
{
    protected $requestStack;
 
    public function __construct(RequestStack $requestStack)
    {
        $this->requestStack = $requestStack;
    }

    public function __invoke(callable $handler)
    {
        return function (RequestInterface $request, array $options = []) use ($handler) {
            $currentRequest = $this->requestStack->getCurrentRequest();

            if (!$currentRequest) {
                return $handler($request, $options);
            }

            $correlationId = $currentRequest->headers->get('X-My-Correlation-ID');

            if (empty($correlationId)) {
                return $handler($request, $options);
            }

            // Forward the Correlation ID of the current HTTP request to the downstream request
            $request = $request->withHeader('X-My-Correlation-ID', $correlationId);

            return $handler($request, $options);
        };
    }
}

Define this middleware as a service on the service container and create a HandlerStack:

# app/config/services.yml

services:
  correlation_id_middleware:
    class: App\Guzzle\Middleware\CorrelationIdMiddleware
    arguments: ["@request_stack"]

  correlation_id_handler_stack:
    class: GuzzleHttp\HandlerStack
    factory: ['GuzzleHttp\HandlerStack', 'create']
    calls:
      - [push, ["@correlation_id_middleware", "correlation_id_forwarder"]]

With these two services defined, you can now configure all your Guzzle clients using the HandlerStack so that the Correlation ID of the current HTTP request is forwarded to downstream HTTP requests:

# app/config/services.yml

services:
  my_downstream_api:
    class: GuzzleHttp\Client
    arguments:
      - base_uri: https://my-downstream-api.example.com
        handler: "@correlation_id_handler_stack"

Now every API call that you make to https://my-downstream-api.example.com will include the HTTP request header X-My-Correlation-ID with the same value as the Correlation ID of the current HTTP request. You can also apply the same Monolog and Guzzle tricks described here to the downstream API.

Expose Correlation IDs in error responses

The final step is to expose your Correlation IDs to your users, so they can log them as well or include them in support cases they report to your organization.

Symfony makes this easy using Event Listeners. You can define Event Listeners in Symfony to pre-process HTTP requests as well as to post-process HTTP responses just before they are returned to the API caller. In this example, we will create an HTTP response listener and add the Correlation ID of the current HTTP request as an HTTP header on the HTTP response.

First, we create the listener class:

<?php
 
namespace App\Listener;
 
use Symfony\Component\HttpFoundation\RequestStack;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;

class CorrelationIdResponseListener
{
    protected $requestStack;
 
    public function __construct(RequestStack $requestStack)
    {
        $this->requestStack = $requestStack;
    }

    public function onKernelResponse(FilterResponseEvent $event)
    {
        $request = $this->requestStack->getCurrentRequest();

        if (!$request) {
            return;
        }

        $correlationId = $request->headers->get('X-My-Correlation-ID');

        if (empty($correlationId)) {
            return;
        }

        $event->getResponse()->headers->set('X-My-Correlation-ID', $correlationId);
    }
}

Now configure it as a Symfony Event Listener:

# app/config/services.yml

services:
  correlation_id_response_listener:
    class: App\Listener\CorrelationIdResponseListener
    arguments: ["@request_stack"]
    tags:
      - { name: kernel.event_listener, event: kernel.response, method: onKernelResponse }

Every response that is generated by your Symfony application will now include an X-My-Correlation-ID HTTP response header with the same Correlation ID as the HTTP request.
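
For example, a client could see the header with curl (the URL below is just a placeholder):

$ curl -si https://api.example.com/v1/orders | grep X-My-Correlation-ID
X-My-Correlation-ID: d135d5f1-3dd0-45fa-8f26-55d8d6a44876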

The Value of Correlation IDs

Using Correlation IDs throughout your whole stack gives you more insight into all (sub)requests during a transaction. Using the right tools allows others to debug issues, giving your developers more time to work on new awesome features.

Implementing Correlation IDs isn’t hard to do, and can be achieved quickly depending on your software stack. At Leaseweb, the use of Correlation IDs has saved us hours of time while debugging issues on numerous occasions.

Technical Careers at Leaseweb

We are searching for the next generation of engineers and developers to help us build infrastructure to automate our global hosting services! If you are interested in finding out more, check out our Careers at Leaseweb.


Measuring and Monitoring With Prometheus and Alertmanager

As one of the most successful projects of the Cloud Native Computing Foundation (CNCF), it is highly likely that you have heard of Prometheus. Initially built at SoundCloud in 2012 to fulfil their monitoring needs, Prometheus is now one of the most popular solutions for time-series based monitoring.

At Leaseweb, we use Prometheus for a variety of purposes – from basic system monitoring of our internal systems, to blackbox monitoring from several of our network locations, to cloud data usage and capacity monitoring.

Whether you have one or several servers, it is always good to have insight into what your systems are doing and how they are performing. In this article, we will show you how to set up a basic Prometheus server and expose system metrics using node_exporter.

For later blogs in this series, we will add Alertmanager to our Prometheus server and use Grafana to graph our recorded metrics.

This is an overview of the components involved and their role:

  • Prometheus: Scrapes metrics from external data sources (or ‘exporters’), stores metrics in its time-series database, and exposes metrics through an API.
  • node_exporter: Exposes several system metrics, such as CPU and disk usage.
  • Alertmanager: Handles alerts generated by the Prometheus server. Takes care of deduplicating, grouping, and routing alerts to the correct alert channel such as email, Telegram, PagerDuty, Slack, etc.
  • Grafana: Uses Prometheus as a datasource to graph the recorded metrics.

For this tutorial, we are going to use three servers running Ubuntu 18.04 LTS. However, the instructions can be easily adapted for any other recent Linux distribution. These can either be bare metal servers or cloud instances. When your Prometheus setup grows and you start to scrape more and more metrics, it is advisable to have SSD based storage in your Prometheus server.

If you want to start out small or experiment, you can also combine several components on one system.

A Note on Security

Since Prometheus was designed to run in a private network/cloud setting, it does not offer any authentication or access control out of the box. Because of this, be careful not to expose any of the services to the outside world. There are several ways to achieve this (the implementation of which is outside the scope of this tutorial).

For example, you could use the Leaseweb private networking feature and bind the Prometheus-related services to your private networking interface. Other options are to use a reverse proxy that implements basic authentication, or firewall rules that only allow certain IP addresses to connect to your Prometheus-related services.
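
As an illustration, on Ubuntu you could use ufw to only allow a trusted address to reach the Prometheus port (the IP address below is just a placeholder; adjust it to your own network):

ufw allow from 10.0.0.5 to any port 9090 proto tcp
ufw deny 9090/tcp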

Installing Prometheus

To start off, we will install the Prometheus server. The prometheus package is part of the standard Ubuntu distribution repositories, but unfortunately the version (2.1.0) is quite old. At the time of writing this blog post, the latest version is 2.16.0, which is what we will be using.

On the system that will be our Prometheus server, we start off by creating a user and group called prometheus:

useradd -M -r -s /bin/false prometheus

Next, we create the directories that will contain the configuration and the data of Prometheus:

mkdir /etc/prometheus /var/lib/prometheus

Download Prometheus server and verify its integrity:

cd /tmp
wget https://github.com/prometheus/prometheus/releases/download/v2.16.0/prometheus-2.16.0.linux-amd64.tar.gz
wget -O - -q https://github.com/prometheus/prometheus/releases/download/v2.16.0/sha256sums.txt | grep linux-amd64 | shasum -c -

The last command should result in prometheus-2.16.0.linux-amd64.tar.gz: OK. If it doesn’t, the downloaded file is corrupted. Next, we unpack the file and move the various components into place:

tar xzf prometheus-2.16.0.linux-amd64.tar.gz
cp prometheus-2.16.0.linux-amd64/{prometheus,promtool} /usr/local/bin/
chown prometheus:prometheus /usr/local/bin/{prometheus,promtool}
cp -r prometheus-2.16.0.linux-amd64/{consoles,console_libraries} /etc/prometheus/
cp prometheus-2.16.0.linux-amd64/prometheus.yml /etc/prometheus/prometheus.yml

chown -R prometheus:prometheus /etc/prometheus
chown prometheus:prometheus /var/lib/prometheus

And clean up our downloaded files in /tmp:

rm -f /tmp/prometheus-2.16.0.linux-amd64.tar.gz
rm -rf /tmp/prometheus-2.16.0.linux-amd64

The default prometheus.yml that we just copied already configures Prometheus to scrape its own metrics, which is enough to get started.

To be able to start and stop our Prometheus server, we will create a systemd unit file. Use your favorite editor to create the file /etc/systemd/system/prometheus.service and add the following to it:

[Unit]
Description=Prometheus Time Series Collection and Processing Server
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

Activate and start the service with the following commands:

systemctl daemon-reload
systemctl start prometheus
systemctl enable prometheus

The command systemctl status prometheus should now indicate that our service is up and running.

You should now be able to access the web interface of the Prometheus server on http://<server IP>:9090.

If we go to Status > Targets we can see that the Prometheus server itself has already been added as a scraping target for metrics. This default target collects metrics about the performance of the Prometheus server. You can view the metrics that are being recorded under http://<server IP>:9090/metrics.

Prometheus provides two convenient endpoints for monitoring its health and status. You can add these to any other monitoring system you might have.

root@HRA-blogtest:~# curl localhost:9090/-/healthy
Prometheus is Healthy.
root@HRA-blogtest:~# curl localhost:9090/-/ready
Prometheus is Ready.

Monitor System Metrics with the Node Exporter

To make things a little more interesting, we are going to add a target to obtain system metrics of the Prometheus server. For this, we need to install the node exporter first.

Installing the node exporter

Download Prometheus node exporter and verify its integrity:

cd /tmp
wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz
wget -O - -q https://github.com/prometheus/node_exporter/releases/download/v0.18.1/sha256sums.txt | grep linux-amd64 | shasum -c -

The last command should result in node_exporter-0.18.1.linux-amd64.tar.gz: OK. If it doesn’t, the downloaded file is corrupted.

Next we unpack the file and move the node exporter into place:

tar xzf node_exporter-0.18.1.linux-amd64.tar.gz
cp node_exporter-0.18.1.linux-amd64/node_exporter /usr/local/bin/
chown prometheus:prometheus /usr/local/bin/node_exporter

And clean up our downloaded files in /tmp:

rm -f /tmp/node_exporter-0.18.1.linux-amd64.tar.gz
rm -rf /tmp/node_exporter-0.18.1.linux-amd64

Create a unit file /etc/systemd/system/node_exporter.service for the node exporter using your favorite editor, and add the following to it:

[Unit]
Description=Prometheus Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

Reload the systemd configuration to activate our unit file, start the service, and enable the service to start at boot time:

systemctl daemon-reload
systemctl start node_exporter.service
systemctl enable node_exporter.service

The node exporter should now be running. You can verify this with systemctl status node_exporter

The node exporter listens on TCP port 9100. You should be able to see the node exporter metrics now at http://<server IP>:9100/metrics.

Adding the node exporter target to Prometheus

Now that the node exporter is running, we need to adapt the configuration of the Prometheus server so it can start scraping our node exporter metrics.

Open /etc/prometheus/prometheus.yml in your editor and adapt the scrape config section to look like the following:

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'node'
    scrape_interval: 5s
    static_configs:
    - targets: ['localhost:9100']

Save the changes and restart the Prometheus server with systemctl restart prometheus.

The Prometheus server web interface should now show a new target under Status > Targets.

Querying and Graphing the Recorded Metrics

Now that everything is set up, it is time to start looking into some of the things we are now measuring! Switch to the Graph tab in the Prometheus server web interface.

Enter node_memory_MemAvailable_bytes and click Execute. The Console tab will show you the amount of memory currently available, in bytes.

Switch to the Graph tab and you will see a graph of the available memory in bytes over the course of the last hour. You can increase and decrease the time range with the plus and minus buttons on the top left of the graph.

There is another metric that records the total amount of memory in the system, called node_memory_MemTotal_bytes. We can use this to calculate the percentage of available memory in the system. Enter the following in the query area and click Execute:

(node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100

The graph will now show the percentage of available memory over time.

We can make this even more accurate by taking into account buffered and cached memory:

((node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes) / node_memory_MemTotal_bytes) * 100

Or turn it around and show the percentage of used memory instead:

(node_memory_MemTotal_bytes - node_memory_MemFree_bytes - node_memory_Buffers_bytes - node_memory_Cached_bytes) / node_memory_MemTotal_bytes * 100

CPU usage is recorded in the metric node_cpu_seconds_total. This metric records the time spent in several CPU modes:

  • user: Time spent in userland
  • system: Time spent in the kernel
  • iowait: Time spent waiting for I/O
  • idle: Time the CPU had nothing to do
  • irq & softirq: Time spent servicing interrupts
  • guest: If you are running VMs, the CPU they use
  • steal: If you are a VM, time other VMs “stole” from your CPUs

These metrics are recorded as counters, so to get the per-second values we will use the irate function:

irate(node_cpu_seconds_total{job="node"}[5m])

As you can see, when you have multiple CPUs in your server, it will return metrics for each CPU individually. To get the overall value across all CPUs, we can use PromQL’s aggregation features with sum by:

sum by (mode, instance) (irate(node_cpu_seconds_total{job="node"}[5m]))

We can also calculate the percentage of CPU used by taking the per-second idle rate, multiplying it by 100 (to get the percentage of CPU idle), and then subtracting it from 100%:

100 - (avg by (instance) (irate(node_cpu_seconds_total{job="node",mode="idle"}[5m])) * 100)

And finally, to get the amount of data sent or received by our server, we can use irate(node_network_transmit_bytes_total{device!="lo"}[1m]) and irate(node_network_receive_bytes_total{device!="lo"}[1m]). This gives us the number of bytes per second sent or received. The device!="lo" label filter makes sure we exclude the local loopback interface.

To turn this into megabits per second, we will have to do some math:

(sum(irate(node_network_receive_bytes_total{device!="lo"}[1m])) by (instance, device) * 8 / 1024 / 1024)

To get a full idea of the possibilities of the PromQL query language, see the Prometheus documentation. By investigating the metrics available in the node exporter, you can create many more graphs like these – for example, for the amount of available disk space, the number of file descriptors used, and much more.
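
For example, a query along these lines (assuming you are interested in the filesystem mounted on /) would show the percentage of disk space still available on it:

(node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100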

In the next part of this blog, we will go deeper into visualizing the metrics using Grafana, and will also define alerting rules to receive alerts through Alertmanager.


PHP-CRUD-API now supports authorization and validation

Another milestone has been reached for the PHP-CRUD-API project, a project that aims to provide a high-performance, consistent data API over REST that is easy to deploy (it is a single PHP file!) and requires minimal configuration. By popular demand, we have added four important new features:

  1. Tables and the actions on them can be restricted with custom rules.
  2. Access to specific columns can be restricted using your own algorithm.
  3. You can specify “sanitizers” to, for example, strip HTML tags from input.
  4. You can specify “validator” functions to show errors on invalid input.

These features are built by allowing you to define callback functions in your configuration. These functions can then contain your application-specific logic. How these functions work and how you can load them is explained below.

Table authorizer

The following function can be used to authorize access to specific tables:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @returns bool    indicates that access is granted  
 **/
  
$f1=function($action,$database,$table){
  return true; 
};
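
For example, a minimal sketch of a more restrictive authorizer (the table name 'invoices' is just an example) could look like this:

$f1=function($action,$database,$table){
  // Example only: hide the 'invoices' table completely and make all other tables read-only
  if ($table=='invoices') return false;
  return in_array($action,array('read','list'));
};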

Column authorizer

The following function can be used to authorize access to specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'password')
 * @returns bool    indicates that access is granted  
 **/
  
$f2=function($action,$database,$table,$column){
  return true; 
};

Input sanitizer

The following function can be used to sanitize input for specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'username')
 * @param type      type of the column (depends on engine)
 * @param value     input from the user (e.g. 'johndoe88')
 * @returns string  sanitized value
 **/
  
$f3=function($action,$database,$table,$column,$type,$value){
  return $value; 
};

Input validator

The following function can be used to validate input for specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'username')
 * @param type      type of the column (depends on engine)
 * @param value     input from the user (e.g. 'johndoe88')
 * @param context   all input fields in this action
 * @returns string  validation error (if any) or null
 **/
  
$f4=function($action,$database,$table,$column,$type,$value,$context){
  return null;
};
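
For example, a minimal sketch of a validator (the column names 'username' and 'age' are just examples) could look like this:

$f4=function($action,$database,$table,$column,$type,$value,$context){
  // Example only: reject empty usernames and non-numeric ages
  if ($column=='username' && trim($value)=='') return 'must not be empty';
  if ($column=='age' && !is_numeric($value)) return 'must be numeric';
  return null;
};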

Configuration

This is an example configuration that requires the above snippets to be defined.

$api = new MySQL_CRUD_API(array(
  'hostname'=>'localhost',
  'username'=>'xxx',
  'password'=>'xxx',
  'database'=>'xxx',
  'charset'=>'utf8',
  'table_authorizer'=>$f1,
  'column_authorizer'=>$f2,
  'input_sanitizer'=>$f3,
  'input_validator'=>$f4
));
$api->executeCommand();

You can find the project on Github.


PHP script to tail a log file using telnet


Why would you need a PHP script to tail a log file using telnet? You don’t! But the script is cool anyway. It allows you to connect to your web server over telnet, talk some HTTP to your web server, and run a PHP script that shows a tail of a log file. It uses ANSI escape sequences (colors!) to provide a nice user interface specifically to tail a log file with the “follow” option (like “tail -f”). Below you will find the PHP script that you have to put on the web server:

<?php
// configuration
$file = '/var/log/apache2/access.log';
$ip = '127.';
// start of script
$title = "\033[H\033[2K$file"; // ANSI: move cursor to the top, clear the line, print the file name
if (strpos($_SERVER['REMOTE_ADDR'],$ip)!==0) die('Access Denied'); // only allow clients whose IP starts with $ip
$stream = fopen($file, 'r');
if (!$stream) die("Could not open file: $file\n");
echo "\033[m\033[2J"; // reset colors and clear the screen
fseek($stream, 0, SEEK_END); // start tailing at the end of the file
echo str_repeat("\n",4500)."\033[s$title"; // scroll down, save the cursor position and show the title
flush();
while(true){
  $data = stream_get_contents($stream); // read any data appended since the last read
  if ($data) {
    echo "\033[32m\033[u".$data."\033[s".str_repeat("\033[m",1500)."$title"; // print new lines in green at the saved position
    flush();
  }
  usleep(100000); // poll every 100 ms
}
fclose($stream);

To tail (and follow) a remote file you need to talk HTTP to the web server using telnet and request the PHP tail script. First you connect using telnet:

$ telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

After connecting you have to “speak” some HTTP (just type this):

GET /tail.php HTTP/1.1
Host: localhost

NB: Make sure you end the above telnet commands with an empty line! After this, the screen should clear and show any new log lines in real time, in green, in the telnet window.

You can use Ctrl + ‘]’ to get to the telnet prompt and type “quit” to exit.

If you don’t want to copy the code above, then you can also find the latest version of tail.php on Github.


Creating a simple REST API in PHP

I’m the author of php-crud-api and I want to share the core of the application with you. It includes routing a JSON REST request, converting it into SQL, executing it and giving a meaningful response. I tried to write the application as short as possible and came up with these 65 lines of code:

<?php

// get the HTTP method, path and body of the request
$method = $_SERVER['REQUEST_METHOD'];
$request = explode('/', trim($_SERVER['PATH_INFO'],'/'));
$input = json_decode(file_get_contents('php://input'),true);

// connect to the mysql database
$link = mysqli_connect('localhost', 'user', 'pass', 'dbname');
mysqli_set_charset($link,'utf8');

// retrieve the table and key from the path
$table = preg_replace('/[^a-z0-9_]+/i','',array_shift($request));
$key = array_shift($request)+0;

// escape the columns and values from the input object
$columns = preg_replace('/[^a-z0-9_]+/i','',array_keys($input));
$values = array_map(function ($value) use ($link) {
  if ($value===null) return null;
  return mysqli_real_escape_string($link,(string)$value);
},array_values($input));

// build the SET part of the SQL command
$set = '';
for ($i=0;$i<count($columns);$i++) {
  $set.=($i>0?',':'').'`'.$columns[$i].'`=';
  $set.=($values[$i]===null?'NULL':'"'.$values[$i].'"');
}

// create SQL based on HTTP method
switch ($method) {
  case 'GET':
    $sql = "select * from `$table`".($key?" WHERE id=$key":''); break;
  case 'PUT':
    $sql = "update `$table` set $set where id=$key"; break;
  case 'POST':
    $sql = "insert into `$table` set $set"; break;
  case 'DELETE':
    $sql = "delete from `$table` where id=$key"; break;
}

// execute SQL statement
$result = mysqli_query($link,$sql);

// die if SQL statement failed
if (!$result) {
  http_response_code(404);
  die(mysqli_error($link));
}

// print results, insert id or affected row count
if ($method == 'GET') {
  if (!$key) echo '[';
  for ($i=0;$i<mysqli_num_rows($result);$i++) {
    echo ($i>0?',':'').json_encode(mysqli_fetch_object($result));
  }
  if (!$key) echo ']';
} elseif ($method == 'POST') {
  echo mysqli_insert_id($link);
} else {
  echo mysqli_affected_rows($link);
}

// close mysql connection
mysqli_close($link);

This code is written to show you how simple it is to make a fully operational REST API in PHP.

Running

Save this file as “api.php” in your (Apache) document root and call it using:

http://localhost/api.php/{$table}/{$id}

Or you can use the PHP built-in webserver from the command line using:

$ php -S localhost:8888 api.php

The URL when run from the command line is:

http://localhost:8888/api.php/{$table}/{$id}

NB: Don’t forget to adjust the ‘mysqli_connect’ parameters in the above script!
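
For example, assuming a table called 'customers' with an auto-increment 'id' column, you could exercise the API with curl like this (the exact output depends on your data):

$ curl -X POST -d '{"name":"John Doe","city":"Amsterdam"}' http://localhost:8888/api.php/customers
1
$ curl http://localhost:8888/api.php/customers/1
{"id":"1","name":"John Doe","city":"Amsterdam"}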

REST API in a single PHP file

Although the above code is not perfect, it actually does do three important things:

  1. Support the HTTP verbs GET, POST, PUT and DELETE
  2. Escape all data properly to avoid SQL injection
  3. Handle null values correctly

One could thus say that the REST API is fully functional. You may, however, run into missing features, such as:

  1. No related data (automatic joins) supported
  2. No condensed JSON output supported
  3. No support for PostgreSQL or SQL Server
  4. No POST parameter support
  5. No JSONP/CORS cross domain support
  6. No base64 binary column support
  7. No permission system
  8. No search/filter support
  9. No pagination or sorting supported
  10. No column selection supported

Don’t worry, all these features are available in php-crud-api, which you can get from Github. On the other hand, now that you have the essence of the application, you may also write your own!
