Using Correlation IDs in API Calls

Over the years, the IT industry has moved from a single-domain, monolithic architecture to a microservice architecture. In a microservice architecture, complex processes are split into smaller and simpler sub-processes. While this kind of architecture has many benefits, there are also some downsides – for example, a single request to a Leaseweb API results in multiple requests to other backend systems [FIGURE 1]. How do you keep track of requests and responses processed by multiple systems? This is where Correlation IDs come into play.

[FIGURE 1: Example request/response flow]

Using a Correlation ID

A Correlation ID is a unique, randomly generated identifier value that is added to every request and response. In a microservice architecture, the initial Correlation ID is passed to your sub-processes. If a sub-system also makes sub-requests, it will also pass the Correlation ID to those systems.

How you pass the Correlation ID to other systems depends on your architecture. At Leaseweb we use REST APIs extensively and pass the Correlation ID on in an HTTP header. As a rule, we assign a Correlation ID as soon as possible, and always reuse a Correlation ID if one is passed on. Our public API only accepts Correlation IDs from internally trusted clients. For any other client (such as employee or customer API clients) a new Correlation ID is generated for the request.
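
To make this concrete, below is a minimal sketch (not our actual implementation) of a Symfony request listener that applies such a policy: it keeps an incoming Correlation ID only when the caller is trusted, and generates a fresh one otherwise. The header name X-My-Correlation-ID matches the examples later in this article; the trusted-client check is a placeholder for your own logic.

<?php

namespace App\Listener;

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;

class CorrelationIdRequestListener
{
    public function onKernelRequest(GetResponseEvent $event)
    {
        $request = $event->getRequest();

        // Only keep a Correlation ID that was handed to us by a trusted (internal) client
        $correlationId = $this->isTrustedClient($request)
            ? $request->headers->get('X-My-Correlation-ID')
            : null;

        // Otherwise, generate a new random identifier for this request
        if (empty($correlationId)) {
            $correlationId = bin2hex(random_bytes(16));
        }

        $request->headers->set('X-My-Correlation-ID', $correlationId);
    }

    private function isTrustedClient(Request $request): bool
    {
        // Placeholder: for example, check the remote IP against your internal ranges
        return false;
    }
}

Registered as a kernel.request listener (using the same kernel.event_listener tag mechanism shown for the response listener later in this article), this ensures every request carries a Correlation ID from the very start.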

Real Value of Correlation IDs

The real value of Correlation IDs is realized when you also log the Correlation IDs. Debugging or tracing requests becomes much easier, as you can search all of your logs for the same Correlation ID. Combined with central logging solutions (such as the ELK stack), searching logs becomes even easier and can be done by non-technical colleagues. Providing tools to your colleagues to troubleshoot issues allows them to have more responsibility and gives you more time to work on more technical projects.

We mainly use Correlation IDs at Leaseweb for debugging purposes. When an error occurs, we provide the Correlation ID to the client/customer. If users provide the Correlation ID when submitting a support ticket, we can visualize the entire process needed to fulfil the client’s initial intent. This has significantly improved the time it takes us to fix bugs.

[FIGURE 2: Example of one Correlation ID with multiple requests]

Debugging issues is a time-consuming process if Correlation IDs are not used. When your environment scales, you will need to find solutions to group transactions happening in your systems. By using a Correlation ID, you can easily group requests and events in your systems, allowing you to spend more time fixing the problem and less time trying to find it.

Practical examples of how to implement Correlation IDs

The following examples use Symfony, a popular web application framework. These concepts can also be applied to any other framework, such as Laravel, Django, Flask or Ruby on Rails.

If you are unfamiliar with the concept of Service Containers and Dependency Injection, we recommend reading the excellent Symfony documentation about it here: https://symfony.com/doc/current/service_container.html

Using Monolog to append Correlation IDs to your application logs

When processing an HTTP request your application often logs some information – such as when an error occurs, or when an important change is made in your system that you want to keep track of. When using the Monolog logging library in PHP (https://seldaek.github.io/monolog/), you can use the concept of “Processors” (read more about them on symfony.com).

One way to do this is by creating a Monolog Processor class:

<?php

namespace App\Monolog\Processor;

use Symfony\Component\HttpFoundation\RequestStack;

class CorrelationIdProcessor
{
    protected $requestStack;

    public function __construct(RequestStack $requestStack)
    {
        $this->requestStack = $requestStack;
    }

    public function processRecord(array $record)
    {
        $request = $this->requestStack->getCurrentRequest();

        if (!$request) {
            return $record;
        }

        $correlationId = $request->headers->get('X-My-Correlation-ID');

        if (empty($correlationId)) {
            return $record;
        }

        // If we have a correlation id, include it in every monolog line
        $record['extra']['correlation_id'] = $correlationId;

        return $record;
    }
}

Then register this class on the service container as a monolog processor in services.yml:

# app/config/services.yml

services:
  App\Monolog\Processor\CorrelationIdProcessor:
    arguments: ["@request_stack"]
    tags:
      - name: monolog.processor
        method: processRecord

Now, every time you log something in your application with Monolog:

$this->logger->info('shopping_cart_emptied', ['cart_id' => 123]);

You will see the Correlation ID of the HTTP Request in your log files:

$ grep 'shopping_cart_emptied' var/logs/prod.log

[2020-07-03 12:14:45] app.INFO: shopping_cart_emptied {"cart_id": 123} {"correlation_id":"d135d5f1-3dd0-45fa-8f26-55d8d6a44876"}

You can utilize the same pattern to log the name of the user that is currently logged in, the remote IP address of the API client, or anything else that makes troubleshooting faster for you.
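
As a hedged illustration of that idea (the class and field names are ours, not a prescribed convention), a second processor could add the authenticated username and the client IP to the extra data. It would be wired on the container in the same way as the CorrelationIdProcessor above (a monolog.processor tag), with an extra argument for the security token storage:

<?php

namespace App\Monolog\Processor;

use Symfony\Component\HttpFoundation\RequestStack;
use Symfony\Component\Security\Core\Authentication\Token\Storage\TokenStorageInterface;

class RequestContextProcessor
{
    protected $requestStack;
    protected $tokenStorage;

    public function __construct(RequestStack $requestStack, TokenStorageInterface $tokenStorage)
    {
        $this->requestStack = $requestStack;
        $this->tokenStorage = $tokenStorage;
    }

    public function processRecord(array $record)
    {
        // Remote IP address of the API client, if we are handling an HTTP request
        $request = $this->requestStack->getCurrentRequest();
        if ($request) {
            $record['extra']['client_ip'] = $request->getClientIp();
        }

        // Name of the currently authenticated user, if any
        $token = $this->tokenStorage->getToken();
        if ($token) {
            $record['extra']['username'] = $token->getUsername();
        }

        return $record;
    }
}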

Using Guzzle to append Correlation IDs when making sub-requests

If your API makes API calls to other microservices (and you use Guzzle to do this) you can make use of Handlers and Middleware.

Some teams at Leaseweb depend on many downstream microservices, and can therefore have multiple Guzzle clients as services on the service container. While each Guzzle client is configured with its own base URL and/or authentication, it is possible for all of the Guzzle clients to share the same HandlerStack.

First, create the middleware:

<?php

namespace App\Guzzle\Middleware;

use Symfony\Component\HttpFoundation\RequestStack;
use Psr\Http\Message\RequestInterface;

class CorrelationIdMiddleware
{
    protected $requestStack;
 
    public function __construct(RequestStack $requestStack)
    {
        $this->requestStack = $requestStack;
    }

    public function __invoke(callable $handler)
    {
        return function (RequestInterface $request, array $options = []) use ($handler) {
            $currentRequest = $this->requestStack->getCurrentRequest();

            if (!$currentRequest) {
                return $handler($request, $options);
            }

            $correlationId = $currentRequest->headers->get('X-My-Correlation-ID');

            if (empty($correlationId)) {
                return $handler($request, $options);
            }

            // Forward the Correlation ID of the current HTTP request on the outgoing Guzzle request
            $request = $request->withHeader('X-My-Correlation-ID', $correlationId);

            return $handler($request, $options);
        };
    }
}
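
To see what the shared-HandlerStack idea looks like outside the service container, here is a minimal plain-PHP sketch; the base URIs are made-up examples and $requestStack stands for the injected Symfony RequestStack:

<?php

use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use App\Guzzle\Middleware\CorrelationIdMiddleware;

// One HandlerStack with the correlation id middleware pushed onto it...
$stack = HandlerStack::create();
$stack->push(new CorrelationIdMiddleware($requestStack), 'correlation_id_forwarder');

// ...shared by multiple clients, each with its own base URI and credentials
$billingApi = new Client(['base_uri' => 'https://billing.example.com', 'handler' => $stack]);
$serverApi  = new Client(['base_uri' => 'https://servers.example.com', 'handler' => $stack]);

The service definitions below achieve the same thing declaratively.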

Define this middleware as a service on the service container and create a HandlerStack:

# app/config/services.yml

services:
  correlation_id_middleware:
    class: App\Guzzle\Middleware\CorrelationIdMiddleware
    arguments: ["@request_stack"]

  correlation_id_handler_stack:
    class: GuzzleHttp\HandlerStack
    factory: ['GuzzleHttp\HandlerStack', 'create']
    calls:
      - [push, ["@correlation_id_middleware", "correlation_id_forwarder"]]

With these two services defined, you can now configure all your Guzzle clients using the HandlerStack so that the Correlation ID of the current HTTP request is forwarded to downstream HTTP requests:

# app/config/services.yml

services:
  my_downstream_api:
    class: GuzzleHttp\Client
    arguments:
      - base_uri: https://my-downstream-api.example.com
        handler: "@correlation_id_handler_stack"

Now every API call that you make to https://my-downstream-api.example.com will include the X-My-Correlation-ID HTTP request header, with the same value as the Correlation ID of the current HTTP request. You can also apply the same Monolog and Guzzle tricks described here to the downstream API.

Expose Correlation IDs in error responses

The missing link between these processes is to now expose your Correlation IDs to your users so they can also log them or use them in support cases they report to your organization.

Symfony makes this easy using Event Listeners. You can define Event Listeners in Symfony to pre-process HTTP requests as well as to post-process HTTP responses just before they are returned by Symfony to the API caller. In this example, we will create an HTTP response listener and add the Correlation ID of the current HTTP request as an HTTP header on the HTTP response.

First, we create a service on the Service Container:

<?php
 
namespace App\Listener;
 
use Symfony\Component\HttpFoundation\RequestStack;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;

class CorrelationIdResponseListener
{
    protected $requestStack;
 
    public function __construct(RequestStack $requestStack)
    {
        $this->requestStack = $requestStack;
    }

    public function onKernelResponse(FilterResponseEvent $event)
    {
        $request = $this->requestStack->getCurrentRequest();

        if (!$request) {
            return;
        }

        $correlationId = $request->headers->get('X-My-Correlation-ID');

        if (empty($correlationId)) {
             return;
        }

        $event->getResponse()->headers->set('X-My-Correlation-ID', $correlationId);
    }
}

Now configure it as a Symfony Event Listener:

# app/config/services.yml

services:
  correlation_id_response_listener:
    class: App\Listener\CorrelationIdResponseListener
    arguments: ["@request_stack"]
    tags:
      - { name: kernel.event_listener, event: kernel.response, method: onKernelResponse }

Every response that is generated by your Symfony application will now include an X-My-Correlation-ID HTTP response header with the same Correlation ID as the HTTP request.
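
If you want to verify this behaviour, a minimal functional test could look like the sketch below; the /status route and the test value are assumptions, and any existing route in your application will do:

<?php

namespace App\Tests;

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class CorrelationIdResponseListenerTest extends WebTestCase
{
    public function testCorrelationIdIsEchoedInTheResponse()
    {
        $client = static::createClient();

        // Send a request with a known Correlation ID (hypothetical /status route)
        $client->request('GET', '/status', [], [], [
            'HTTP_X_MY_CORRELATION_ID' => 'd135d5f1-3dd0-45fa-8f26-55d8d6a44876',
        ]);

        // The response listener should copy the same value onto the response
        $this->assertEquals(
            'd135d5f1-3dd0-45fa-8f26-55d8d6a44876',
            $client->getResponse()->headers->get('X-My-Correlation-ID')
        );
    }
}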

The Value of Correlation IDs

Using Correlation IDs throughout your whole stack gives you more insight into all (sub)requests during a transaction. Using the right tools allows others to debug issues, giving your developers more time to work on new awesome features.

Implementing Correlation IDs isn’t hard to do, and can be achieved quickly depending on your software stack. At Leaseweb, the use of Correlation IDs has saved us hours of time while debugging issues on numerous occasions.

Technical Careers at Leaseweb

We are searching for the next generation of engineers and developers to help us build infrastructure to automate our global hosting services! If you are interested in finding out more, check out our Careers at Leaseweb.


How to create JWT authentication with API Platform

As the title suggests, in this blog we will together create simple JWT authentication using API Platform and LexikJWTAuthenticationBundle, and of course we will use our lovely Doctrine User Provider.

Motivation

There are too many tutorials online about Symfony with JWT, and also some about API Platform. But most of them are too short or miss certain things, which is unhelpful. It can also be confusing for developers when tutorials don’t say what concepts you need to know first.

I hope this blog will be different – if you have any concerns, updates or questions, drop a comment underneath and I’ll try to answer all of them.

Requirements

  • PHP >= 7.0 knowledge
  • Symfony knowledge (Autowiring, Dependency Injection)
  • Docker knowledge
  • REST APIs knowledge
  • PostgreSQL knowledge
  • Ubuntu or macOS (sorry, Windows users :))

API Platform installation

The best way for me to install this is by using the Git repository, or by downloading API Platform as a .zip file from GitHub.

$ git clone https://github.com/api-platform/api-platform.git apiplatform-user-auth

$ cd apiplatform-user-auth

Now, first of all, the whole API Platform stack runs on specific ports, so you need to make sure that these are free and that nothing is listening on them.

Finding the ports

You can find them in the docker-compose.yml file in the project root directory. They are typically: 80, 81, 8080, 8081, 3000, 5432, 1337, 8443, 8444, 443 and 444.

To see which processes are listening, run this command:

$ sudo lsof -nP | grep LISTEN

Kill any processes listening on any of the above ports:

$ sudo kill -9 $PROCESS_NUMBER

Installation:

  • Pull the required packages and everything needed.
$ docker-compose pull
  • Bring the application up and running.
$ docker-compose up -d
  • You may face some issues here, so it’s best to bring all containers down and run the command again like this:
$ docker-compose down
$ COMPOSE_HTTP_TIMEOUT=120 docker-compose up -d

Now the application should be running and everything should be in place:

$ docker ps

CONTAINER ID        IMAGE                            COMMAND                  CREATED              STATUS              PORTS                                                                    NAMES
6389d8efb6a0        apiplatform-user-auth_h2-proxy   "nginx -g 'daemon of…"   About a minute ago   Up About a minute   0.0.0.0:443-444->443-444/tcp, 80/tcp, 0.0.0.0:8443-8444->8443-8444/tcp   apiplatform-user-auth_h2-proxy_1_a012bc894b6c
a12ff2759ca4        quay.io/api-platform/varnish     "docker-varnish-entr…"   2 minutes ago        Up 2 minutes        0.0.0.0:8081->80/tcp                                                     apiplatform-user-auth_cache-proxy_1_32d747ba8877
6c1d29d1cbdd        quay.io/api-platform/nginx       "nginx -g 'daemon of…"   2 minutes ago        Up 2 minutes        0.0.0.0:8080->80/tcp                                                     apiplatform-user-auth_api_1_725cd9549081
62f69838dacb        quay.io/api-platform/php         "docker-entrypoint p…"   2 minutes ago        Up 2 minutes        9000/tcp                                                                 apiplatform-user-auth_php_1_cf09d32c3120
381384222af5        dunglas/mercure                  "./mercure"              2 minutes ago        Up 2 minutes        443/tcp, 0.0.0.0:1337->80/tcp                                            apiplatform-user-auth_mercure_1_54363c253a34
783565efb2eb        postgres:10-alpine               "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:5432->5432/tcp                                                   apiplatform-user-auth_db_1_8da243ca2865
1bc8e386bf02        quay.io/api-platform/client      "/bin/sh -c 'yarn st…"   2 minutes ago        Up About a minute   0.0.0.0:80->3000/tcp                                                     apiplatform-user-auth_client_1_1c413b4e4a5e
c22bef7a0b3f        quay.io/api-platform/admin       "/bin/sh -c 'yarn st…"   2 minutes ago        Up About a minute   0.0.0.0:81->3000/tcp                                                     apiplatform-user-auth_admin_1_cfecc5c6b442

Now, if you go to localhost:8080 you will see some simple APIs listed there; this is the example entity that comes with the project.

Create the User entity based on Doctrine User Provider

Install the Doctrine and Maker packages to help us do this quickly 🙂

$ docker-compose exec php composer require doctrine maker

Create your User entity

$ docker-compose exec php bin/console make:user

 The name of the security user class (e.g. User) [User]:
 > Users

 Do you want to store user data in the database (via Doctrine)? (yes/no) [yes]:
 >

 Enter a property name that will be the unique "display" name for the user (e.g. email, username, uuid) [email]:
 > email

 Will this app need to hash/check user passwords? Choose No if passwords are not needed or will be checked/hashed by some other system (e.g. a single sign-on server).

 Does this app need to hash/check user passwords? (yes/no) [yes]:
 >

The newer Argon2i password hasher requires PHP 7.2, libsodium or paragonie/sodium_compat. Your system DOES support this algorithm.
You should use Argon2i unless your production system will not support it.

 Use Argon2i as your password hasher (bcrypt will be used otherwise)? (yes/no) [yes]:
 >

 created: src/Entity/Users.php
 created: src/Repository/UsersRepository.php
 updated: src/Entity/Users.php
 updated: config/packages/security.yaml


  Success!


 Next Steps:
   - Review your new App\Entity\Users class.
   - Use make:entity to add more fields to your Users entity and then run make:migration.
   - Create a way to authenticate! See https://symfony.com/doc/current/security.html

If you now go to “api/src/Entity” you will find your entity there. If you scroll down a little to the getEmail & getPassword functions you will see something like this, which means these two properties will be used to identify and authenticate the user. (I will not use the ROLES in this example as it is a simple one.)

# api/src/Entity/Users.php

/**
* @see UserInterface
*/

As you know, the latest versions of Symfony use the autowiring feature, so you can see that this entity is already wired to the repository called “api/src/Repository/UsersRepository”.

# api/src/Entity/Users.php

/**
 * @ORM\Entity(repositoryClass="App\Repository\UsersRepository")
 */
class Users implements UserInterface
{
    ...
}

You can clearly see some pre-implemented functions in this repository, such as findById(), but now let us create another function that helps us create a new user.

  • To add a user into the Db, you will need to define an entity manager like the following:
# api/src/Repository/UsersRepository.php

class UsersRepository extends ServiceEntityRepository
{
  /** EntityManager $manager */
  private $manager;
....
}

and initialize it in the constructor like so:

# api/src/Repository/UsersRepository.php

/**
* UsersRepository constructor.
* @param RegistryInterface $registry
*/
public function __construct(RegistryInterface $registry)
{
  parent::__construct($registry, Users::class);

  $this->manager = $registry->getEntityManager();
}
  • Now, let us create our function:
# api/src/Repository/UsersRepository.php

/**
 * Create a new user
 * @param $data
 * @return Users
 * @throws \Doctrine\ORM\ORMException
 * @throws \Doctrine\ORM\OptimisticLockException
*/
public function createNewUser($data)
{
    $user = new Users();
    $user->setEmail($data['email'])
        ->setPassword($data['password']);

    $this->manager->persist($user);
    $this->manager->flush();

    return $user;
}
  • Let us create our controller to consume that repository. We can call it “AuthController”.
$ docker-compose exec php bin/console make:controller

 Choose a name for your controller class (e.g. TinyJellybeanController):
 > AuthController

 created: src/Controller/AuthController.php
 created: templates/auth/index.html.twig


  Success!


 Next: Open your new controller class and add some pages!

Now, let’s consume this createNewUser function. If you look at your controller, you will find that it only contains the index function, but we need to create another one, which we will call “register”.

  • We need the UsersRepository, so we should inject it into the controller first.
# api/src/Controller/AuthController.php

use App\Repository\UsersRepository;

class AuthController extends AbstractController
{
    /** @var UsersRepository $usersRepository */
    private $usersRepository;

    /**
     * AuthController Constructor
     *
     * @param UsersRepository $usersRepository
     */
    public function __construct(UsersRepository $usersRepository)
    {
        $this->usersRepository = $usersRepository;
    }
    .......
}
  • Now, we need to make this controller know about the User repository, so we will inject it as a service.
# api/config/services.yaml

services:
    ......
  # Repositories
  app.user.repository:
      class: App\Repository\UsersRepository
      arguments:
          - Symfony\Bridge\Doctrine\RegistryInterface
  
  # Controllers
  app.auth.controller:
      class: App\Controller\AuthController
      arguments:
          - '@app.user.repository'
  • Now, it is time to implement our new endpoint to register (create) a new account.
# api/src/Controller/AuthController.php

# Import those
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

# Then add this to the class
/**
 * Register new user
 * @param Request $request
 *
 * @return Response
 */
public function register(Request $request)
{
    $newUserData['email']    = $request->get('email');
    $newUserData['password'] = $request->get('password');

    $user = $this->usersRepository->createNewUser($newUserData);

    return new Response(sprintf('User %s successfully created', $user->getUsername()));
}
  • Now, we need to make the framework know about this new endpoint by adding it to our routes file.
# api/config/routes.yaml

# Register api
register:
    path: /register
    controller: App\Controller\AuthController::register
    methods: ['POST']

Testing this new API:

  • Make the migration and update the DB first:
$ docker-compose exec php bin/console make:migration

$ docker-compose exec php bin/console doctrine:migrations:migrate

  WARNING! You are about to execute a database migration that could result in schema changes and data loss. Are you sure you wish to continue? (y/n) y

Now, test it from Postman or any other client you use. Here I am using curl.

$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/register?email=test1@mail.com&password=test1"
User test1@mail.com successfully created

To see this data in the DB:

$ docker-compose exec db psql -U api-platform api
psql (10.8)
Type "help" for help.

api=# select * from users;
 id |     email      | roles | password
----+----------------+-------+----------
  6 | test1@mail.com | []    | test1
(1 row)

Oooooh, wow, the password is not encrypted! What should we do?!

As I said before, this project is built on Symfony (which is why you need Symfony knowledge), so we will use the password encoder class.

# api/src/Repository/UsersRepository.php

use Symfony\Component\Security\Core\Encoder\UserPasswordEncoderInterface;

class UsersRepository extends ServiceEntityRepository
{
    .......

  /** UserPasswordEncoderInterface $encoder */
  private $encoder;
    
  /**
   * UserRepository constructor.
   * @param RegistryInterface $registry
   * @param UserPasswordEncoderInterface $encoder
   */
  public function __construct(RegistryInterface $registry, UserPasswordEncoderInterface $encoder)
  {
      parent::__construct($registry, Users::class);

      $this->manager = $registry->getEntityManager();
      $this->encoder = $encoder;
  }
}
  • As always, we need to inject it into the repository:
# api/config/services.yaml

services:
  .......
  # Repositories
  app.user.repository:
      class: App\Repository\UsersRepository
      arguments:
          - Symfony\Bridge\Doctrine\RegistryInterface
          - Symfony\Component\Security\Core\Encoder\UserPasswordEncoderInterface

Then update the create user function:

# api/src/Repository/UsersRepository.php

public function createNewUser($data)
{
    $user = new Users();
    $user->setEmail($data['email'])
        ->setPassword($this->encoder->encodePassword($user, $data['password']));
    .......
}
  • Now try the register call again; remember to use a different email, as we defined the email as unique:
$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/register?email=test2@mail.com&password=test2"
User test2@mail.com successfully created
  • Check the DB again:
api=# select * from users;
 id |     email      | roles |                                            password
----+----------------+-------+-------------------------------------------------------------------------------------------------
  6 | test1@mail.com | []    | test1
  7 | test2@mail.com | []    | $argon2i$v=19$m=1024,t=2,p=2$VW9tYXEzZHp5U0RMSE5ydA$bo+V1X6rgYZ4ebN/bs1cpz+sf+DQdx3Duu3hvFUII8M
(2 rows)

Install LexikJWTAuthenticationBundle

  • Install the bundle and generate the secrets:
$ docker-compose exec php composer require jwt-auth

Create our authentication

  • (Additional) Before doing anything else: if you try this call now, you will get this result:
$ curl -X GET -H "Content-Type: application/json" "http://localhost:8080/greetings"
{
    "@context": "/contexts/Greeting",
    "@id": "/greetings",
    "@type": "hydra:Collection",
    "hydra:member": [],
    "hydra:totalItems": 0
}
  • Let’s continue and create a new, simple endpoint that we will use in our testing. I will call it “/api”.
# api/src/Controller/AuthController.php

/**
* api route redirects
* @return Response
*/
public function api()
{
    return new Response(sprintf("Logged in as %s", $this->getUser()->getUsername()));
}
  • Add it to our Routes
# api/config/routes.yaml

api:
    path: /api
    controller: App\Controller\AuthController::api
    methods: ['POST']

Now, we need to make some configurations in our security config file:

  • This is the provider for our authentication and for anything related to users in the application. It is already predefined; if you want to change the user provider, you can do it here.
# api/config/packages/security.yaml

app_user_provider:
    entity:
        class: App\Entity\Users
        property: email
  • Let’s add some config for our “/register” API, as we want this API to be public for anyone:
# api/config/packages/security.yaml

register:
    pattern:  ^/register
    stateless: true
    anonymous: true
  • Now, let us assume that we want everything generated by API Platform to require a JWT token, meaning that without an authenticated user the API shouldn’t return anything. So I will update the “main” part of the config to look like this:
# api/config/packages/security.yaml

main:
    anonymous: false
    stateless: true
    provider: app_user_provider
    json_login:
        check_path: /login
        username_path: email
        password_path: password
        success_handler: lexik_jwt_authentication.handler.authentication_success
        failure_handler: lexik_jwt_authentication.handler.authentication_failure
    guard:
        authenticators:
            - lexik_jwt_authentication.jwt_token_authenticator
  • Also, add some configs for our simple endpoint /api.
# api/config/packages/security.yaml

api:
    pattern: ^/api
    stateless: true
    anonymous: false
    provider: app_user_provider
    guard:
        authenticators:
            - lexik_jwt_authentication.jwt_token_authenticator
  • As you can see in the above configs, we set anonymous to false, as we don’t want unauthenticated users to access these two APIs. We are also telling the framework that the provider to use is the user provider we defined before. Finally, we are telling it which authenticator to use and which authentication success/failure handlers to call.
  • Now, retry the call from the Additional part above for the /greetings API:
$ curl -X GET -H "Content-Type: application/json" "http://localhost:8080/greetings"
  {
      "code": 401,
      "message": "JWT Token not found"
  }

It is the same with our simple endpoint /api that we created:

$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/api" 
  {
    "code": 401,
    "message": "JWT Token not found"
  }
  • As you can see, it asks you to log in :D. There is no JWT token specified, so we will create a very simple endpoint that LexikJWTAuthenticationBundle uses to authenticate users and generate their tokens. Remember that the login check path should be the same as the check_path under json_login in the security file:
# api/config/packages/security.yaml
....
json_login:
        check_path: /login

# api/config/routes.yaml

# Login check to log the user and generate JWT token
api_login_check:
      path: /login
      methods: ['POST']
  • Now, let’s try it out and see if it will generate a token for us!
$ curl -X POST -H "Content-Type: application/json" http://localhost:8080/login -d '{"email":"test2@mail.com","password":"test2"}'
  {"token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOjE1NTg2OTg4MTIsImV4cCI6MTU1ODcwMjQxMiwicm9sZXMiOlsiUk9MRV9VU0VSIl0sInVzZXJuYW1lIjoidGVzdDJAbWFpbC5jb20ifQ.nzd5FVhcyrfjYyN8jRgYFp3VOB2QytnPPRGNyp4ZtfLx6IRwg0TWZJPu5OFtOKPkdLO8DQAr_4Fpq_G6oPjzoxmGOASNuRoQonik9FCCq6oAIW3k5utzQecXDVE_ImnfgByc6WYW6a-aWLnsq1qtvxy274ojqdR0rWLePwSWX5K5-t08zDBgavO_87dVpYd0DLwhHIS7F10lNscET7bfWS-ioPDTv-G74OvkcpbcjgwHhXlO7TYubnrES-FsvAw7kezQe4BPxdbXr1w-XBZuqTNEU4MyrBuadSLgjoe_gievNBtkVhKErIkEQZVjeJIQ4xaKaxwmPxZcP9jYkE47myRdbMsL9XHSd0XmGq0bPuGjOJ2KLTmUb5oeuRnY-e9Q_V9BbouEGw0sjw2meo6Jot2MZyv5ZnLci_GwpRtWqmV7ZLw5jNyiLDFXR1rz70NcJh7EXqu9o4nno3oc68zokfDQvGkJJJZMtBrLCK5pKGMh0a1elIz41LRLZvpLYCrOZ2f4wCkGRD_U92iILD6w8EdVWGoO1wTn5Z2k8-GS1-QH9f-4KkOpaYGPCwwdrY7yioSt2oVbEj2FOb1jULteeP_Cpu44HyJktPLPW_wrN2OtZlUFr4Vz_owDSIvNESYk1JBQ_Fjlv9QGmUs9itzaDExjfB4QYoGkvpfNymtw2PI"}

As you can see, it created a JWT token for me, so I can use it to call any API in the application. If you get an exception like “Unable to generate token for the specified configurations”, check this step: first, open your .env file. We will need the JWT_PASSPHRASE, so keep it open:

$ mkdir -p api/config/jwt
$ openssl genrsa -out api/config/jwt/private.pem -aes256 4096 # this will ask you for the JWT_PASSPHRASE
$ openssl rsa -pubout -in api/config/jwt/private.pem -out api/config/jwt/public.pem # will confirm the JWT_PASSPHRASE again
  • Let’s try to call /api or /greetings endpoints with this token now:
$ curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOjE1NTg2OTg4MTIsImV4cCI6MTU1ODcwMjQxMiwicm9sZXMiOlsiUk9MRV9VU0VSIl0sInVzZXJuYW1lIjoidGVzdDJAbWFpbC5jb20ifQ.nzd5FVhcyrfjYyN8jRgYFp3VOB2QytnPPRGNyp4ZtfLx6IRwg0TWZJPu5OFtOKPkdLO8DQAr_4Fpq_G6oPjzoxmGOASNuRoQonik9FCCq6oAIW3k5utzQecXDVE_ImnfgByc6WYW6a-aWLnsq1qtvxy274ojqdR0rWLePwSWX5K5-t08zDBgavO_87dVpYd0DLwhHIS7F10lNscET7bfWS-ioPDTv-G74OvkcpbcjgwHhXlO7TYubnrES-FsvAw7kezQe4BPxdbXr1w-XBZuqTNEU4MyrBuadSLgjoe_gievNBtkVhKErIkEQZVjeJIQ4xaKaxwmPxZcP9jYkE47myRdbMsL9XHSd0XmGq0bPuGjOJ2KLTmUb5oeuRnY-e9Q_V9BbouEGw0sjw2meo6Jot2MZyv5ZnLci_GwpRtWqmV7ZLw5jNyiLDFXR1rz70NcJh7EXqu9o4nno3oc68zokfDQvGkJJJZMtBrLCK5pKGMh0a1elIz41LRLZvpLYCrOZ2f4wCkGRD_U92iILD6w8EdVWGoO1wTn5Z2k8-GS1-QH9f-4KkOpaYGPCwwdrY7yioSt2oVbEj2FOb1jULteeP_Cpu44HyJktPLPW_wrN2OtZlUFr4Vz_owDSIvNESYk1JBQ_Fjlv9QGmUs9itzaDExjfB4QYoGkvpfNymtw2PI" "http://localhost:8080/greetings"
{
    "@context": "/contexts/Greeting",
    "@id": "/greetings",
    "@type": "hydra:Collection",
    "hydra:member": [],
    "hydra:totalItems": 0
}

## Before
$ curl -X GET -H "Content-Type: application/json" "http://localhost:8080/greetings"
  {
      "code": 401,
      "message": "JWT Token not found"
  }
  • What about the /api endpoint, let’s try it out:
$ curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpYXQiOjE1NTg2OTg4MTIsImV4cCI6MTU1ODcwMjQxMiwicm9sZXMiOlsiUk9MRV9VU0VSIl0sInVzZXJuYW1lIjoidGVzdDJAbWFpbC5jb20ifQ.nzd5FVhcyrfjYyN8jRgYFp3VOB2QytnPPRGNyp4ZtfLx6IRwg0TWZJPu5OFtOKPkdLO8DQAr_4Fpq_G6oPjzoxmGOASNuRoQonik9FCCq6oAIW3k5utzQecXDVE_ImnfgByc6WYW6a-aWLnsq1qtvxy274ojqdR0rWLePwSWX5K5-t08zDBgavO_87dVpYd0DLwhHIS7F10lNscET7bfWS-ioPDTv-G74OvkcpbcjgwHhXlO7TYubnrES-FsvAw7kezQe4BPxdbXr1w-XBZuqTNEU4MyrBuadSLgjoe_gievNBtkVhKErIkEQZVjeJIQ4xaKaxwmPxZcP9jYkE47myRdbMsL9XHSd0XmGq0bPuGjOJ2KLTmUb5oeuRnY-e9Q_V9BbouEGw0sjw2meo6Jot2MZyv5ZnLci_GwpRtWqmV7ZLw5jNyiLDFXR1rz70NcJh7EXqu9o4nno3oc68zokfDQvGkJJJZMtBrLCK5pKGMh0a1elIz41LRLZvpLYCrOZ2f4wCkGRD_U92iILD6w8EdVWGoO1wTn5Z2k8-GS1-QH9f-4KkOpaYGPCwwdrY7yioSt2oVbEj2FOb1jULteeP_Cpu44HyJktPLPW_wrN2OtZlUFr4Vz_owDSIvNESYk1JBQ_Fjlv9QGmUs9itzaDExjfB4QYoGkvpfNymtw2PI" "http://localhost:8080/api"
Logged in as test2@mail.com

## Before
$ curl -X POST -H "Content-Type: application/json" "http://localhost:8080/api" 
  {
    "code": 401,
    "message": "JWT Token not found"
  }

As you can see from the JWT token, you know exactly who is logged in, and you can improve this by implementing additional User properties like isActive or userRoles, etc.
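
For example, here is a hedged sketch of how the /api action could start taking roles into account; it assumes you store meaningful roles on the Users entity instead of the empty array used in this tutorial:

# api/src/Controller/AuthController.php

/**
 * Example only: restrict this action to users that have ROLE_ADMIN
 */
public function api()
{
    // AbstractController helper: throws an access denied exception (HTTP 403) otherwise
    $this->denyAccessUnlessGranted('ROLE_ADMIN');

    return new Response(sprintf("Logged in as %s", $this->getUser()->getUsername()));
}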

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Thank you for reading this tutorial, I hope that you learned something new!

If you have any questions please don’t hesitate to ask, or any feedback will be so useful.

You can find this whole tutorial and the example here on GitHub.


Leaseweb Cloud AWS EC2 support

As you might know, some of the products LeaseWeb includes in its portfolio are Public and Private Cloud based on Apache CloudStack, which supports a full API. We, LeaseWeb, are very open about this, and we try to be as involved and participative in the community and product development as possible. You might be familiar with this if you are a Private Cloud customer. In this article we target current and former EC2 users, who probably already have tools built upon the AWS CLI, by demonstrating how you can keep using them with LeaseWeb Private Cloud solutions.

Apache CloudStack supported the EC2 API for some time in its early days, but along the way, while the EC2 API evolved, CloudStack’s support somewhat stagnated. In fact, the AWS API component of CloudStack was recently detached from the main distribution so as to simplify the maintenance of the code.

While this might sound like bad news, it’s not – at all. In the meantime, another project spun off, EC2Stack, and was embraced by Apache as well. This new stack supports the latest API (at the time of writing) and is much easier to maintain, both in versatility and in codebase. The fact that it is written in Python opens up the audience for further contribution, while at the same time allowing for quick patching/upgrading without re-compiling.

So, at some point, I thought I could share with you how to quickly set up your AWS-compatible API so you can reuse your existing scripts. On to the details.

The AWS endpoint acts as an EC2 API provider, proxying requests to the LeaseWeb API, which is an extension of the native CloudStack API. And since this API is available to Private Cloud customers, EC2Stack can be installed by customers themselves.

Following is an illustration of how this can be done. For the record, I’m using Ubuntu 14.04 as my desktop, and I’ll be setting up EC2stack against LeaseWeb’s Private Cloud in the Netherlands.

First step is to gather all information for EC2stack. Go to your LeaseWeb platform console, and obtain API keys for your user (sensitive information blurred):

[Screenshot: API keys in the platform console (blurred)]

Note down the values for API Key and Secret Key (you should already know the concepts from AWS and/or LeaseWeb Private Cloud).

Now, install EC2Stack and configure it:

ntavares@mylaptop:~$ pip install ec2stack 
[…]
ntavares@mylaptop:~$ ec2stack-configure 
EC2Stack bind address [0.0.0.0]: 127.0.0.1 
EC2Stack bind port [5000]: 5000 
Cloudstack host [mgt.cs01.leaseweb.net]: csrp01nl.leaseweb.com 
Cloudstack port [443]: 443 
Cloudstack protocol [https]: https 
Cloudstack path [/client/api]: /client/api 
Cloudstack custom disk offering name []: dualcore
Cloudstack default zone name [Evoswitch]: CSRP01 
Do you wish to input instance type mappings? (Yes/No): Yes 
Insert the AWS EC2 instance type you wish to map: t1.micro 
Insert the name of the instance type you wish to map this to: Debian 7 amd64 5GB 
Do you wish to add more mappings? (Yes/No): No 
Do you wish to input resource type to resource id mappings for tag support? (Yes/No): No 
INFO  [alembic.migration] Context impl SQLiteImpl. 
INFO  [alembic.migration] Will assume non-transactional DDL. 

The value for the zone name will be different if your Private Cloud is not in the Netherlands POP. The rest of the values can be obtained from the platform console:

[Screenshot: service offerings in the platform console (blurred)]

[Screenshot: templates in the platform console (blurred)]
You will probably have different (and more) mappings to do as you go; just re-run this command later on.

At this point, your EC2stack proxy should be able to talk to your Private Cloud, so we now need to launch it so it accepts EC2 API calls for your user. For the time being, just run it in a separate shell:

ntavares@mylaptop:~$ ec2stack -d DEBUG 
 * Running on http://127.0.0.1:5000/ 
 * Restarting with reloader

And now register your user using the keys you collected from the first step:

ntavares@mylaptop:~$ ec2stack-register http://localhost:5000 H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E 
Successfully Registered!

And that’s it, as far as the API service is concerned. As you’d normally do with the AWS CLI, you now need to “bind” the CLI to these new credentials:

ntavares@mylaptop:~$ aws configure 
AWS Access Key ID [****************yI2g]: H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT
AWS Secret Access Key [****************L4sw]: PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E
Default region name [CS113]: CSRP01
Default output format: text

And that’s it! You’re now ready to use AWS CLI as you’re used to:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 --output json ec2 describe-images | jq ' .Images[] | .Name ' 
"Ubuntu 12.04 i386 30GB" 
"Ubuntu 12.04 amd64 5GB" 
"Ubuntu 13.04 amd64 5GB" 
"CentOS 6 amd64 5GB" 
"Debian 6 amd64 5GB" 
"CentOS 7 amd64 20140822T1151" 
"Debian 7 64 10 20141001T1343" 
"Debian 6 i386 5GB" 
"Ubuntu 14.04 64bit with docker.io" 
"Ubuntu 12.04 amd64 30GB" 
"Debian 7 i386 5GB" 
"Ubuntu 14.04 amd64 20140822T1234" 
"Ubuntu 12.04 i386 5GB" 
"Ubuntu 13.04 i386 5GB" 
"CentOS 6 i386 5GB" 
"CentOS 6 amd64 20140822T1142" 
"Ubuntu 12.04 amd64 20140822T1247" 
"Debian 7 amd64 5GB"

Please note that I only used JSON output (and JQ to parse it) for summarising the results, as any other format wouldn’t fit on the page.

To create a VM with built-in SSH keys, you should create/set up your keypair in LeaseWeb Private Cloud, if you haven’t already. In the following example I’m generating a new one, but of course you could load your existing keys.

[Screenshot: creating an SSH keypair (blurred)]

You will want to copy-paste the generated key (in Private Key) to a file and protect it. I saved mine in $HOME/.ssh/id_ntavares.csrp01.key.

[Screenshot: the generated SSH keypair (blurred)]

This key will be used later to log into the created instances and extract the administrator password. Finally, instruct the AWS CLI to use this keypair when deploying your instances:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100	10.42.1.129 
PLACEMENT	CSRP01 
STATE	16	running 

Note that the image-id is taken from the previous listing (the one I simplified with JQ).

Also note that EC2stack is fairly new, and there are still some limitations to this EC2-CS bridge – see below for a mapping of supported API calls. For instance, one that you could run into at the time of writing this article (~2015) was the inability to deploy an instance if you’re using multiple Isolated networks (or multiple VPCs). Amazon shares this concept as well, so a simple patch was necessary.

For this demo, we’re actually running in an environment with multiple isolated networks, so if you ran the above command, you’d get the following output:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key
A client error (InvalidRequest) occurred when calling the RunInstances operation: More than 1 default Isolated networks are found for account Acct[47504f6c-38bf-4198-8925-991a5f801a6b-rme]; please specify networkIds

In the meantime, LeaseWeb’s patch was merged, as have many others, which demonstrates both the power of Open Source and the activity on this project.

Naturally, the basic maintenance tasks are there:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 describe-instances 
RESERVATIONS	None 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100	10.42.1.129	10.42.1.129 
PLACEMENT	CSRP01	default 
STATE	16	running

I’ve highlighted some information you’ll need to log in to the instance: the instance ID and IP address, respectively. You can log in either with your SSH keypair:

[root@jump ~]# ssh -i $HOME/.ssh/id_ntavares.csrp01.key root@10.42.1.129 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

[...] 
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 
root@a0977df5-d25e-40cb-9f78-b3a551a9c571:~#

If you need, you can also retrieve the password the same way you do with EC2:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 
None dX5LPdKndjsZkUo19Z3/J3ag4TFNqjGh1OfRxtzyB+eRnRw7DLKRE62a6EgNAdfwfCnWrRa0oTE1umG91bWE6lJ5iBH1xWamw4vg4whfnT4EwB/tav6WNQWMPzr/yAbse7NZHzThhtXSsqXGZtwBNvp8ZgZILEcSy5ZMqtgLh8Q=

As with EC2, the password is returned encrypted, so you’ll need your key to display it:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 | awk '{print $2}' > ~/tmp.1
ntavares@mylaptop:~$ openssl enc -base64 -in tmp.1 -out tmp.2 -d -A 
ntavares@mylaptop:~$ openssl rsautl -decrypt -in tmp.2 -out tmp.3 -inkey $HOME/.ssh/id_ntavares.csrp01.key 
ntavares@mylaptop:~$ cat tmp.3 ; echo 
hI5wueeur
ntavares@mylaptop:~$ rm -f tmp.{1,2,3} 
[root@jump ~]# sshpass -p hI5wueeur ssh root@10.42.1.129 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

[...]
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 
Last login: Thu Dec  4 13:33:07 2014 from jump.rme.leasewebcloud.com 
root@a0977df5-d25e-40cb-9f78-b3a551a9c571:~#

The multiple isolated networks scenario

If you’re already running multiple isolated networks in your target platform (be they VPC-bound or not), you’ll need to pass the --subnet-id argument to the run-instances command to specify which network to deploy the instance into; otherwise CloudStack will complain about not knowing in which network to deploy the instance. I believe this is because Amazon doesn’t allow the use of Isolated Networking as freely as LeaseWeb does – LeaseWeb gives you full flexibility in the platform console.

Since EC2stack does not support describe-network-acls (as of December 2014) to let you determine which Isolated networks you could use, the easiest way to find them is to go to the platform console and copy & paste the Network ID of the network you’re interested in:

Then you could use --subnet-id:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key \
 --subnet-id 5069abd3-5cf9-4511-a5a3-2201fb7070f8
PLACEMENT	CSRP01 
STATE	16	running 

I hope I have demonstrated a bit of what can be done with the compatible EC2 API. Other functions are available for more complex tasks, although, as written earlier, EC2stack is quite new, so you might need community assistance if you cannot develop a fix on your own. At LeaseWeb we are very interested in knowing your use cases, so feel free to drop us a note.


PHP-CRUD-API now supports authorization and validation

Another milestone has been reached for the PHP-CRUD-API project: a project that aims to provide a high-performance, consistent data API over REST that is easy to deploy (it is a single PHP file!) and requires minimal configuration. By popular demand we have added four important new features:

  1. Tables and the actions on them can be restricted with custom rules.
  2. Access to specific columns can be restricted using your own algorithm.
  3. You can specify “sanitizers” to, for example, strip HTML tags from input.
  4. You can specify “validator” functions to show errors on invalid input.

These features are built by allowing you to define callback functions in your configuration. These functions can then contain your application-specific logic. How these functions work and how you can load them is explained below.

Table authorizer

The following function can be used to authorize access to specific tables:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @returns bool    indicates that access is granted  
 **/
  
$f1=function($action,$database,$table){
  return true; 
};

Column authorizer

The following function can be used to authorize access to specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'password')
 * @returns bool    indicates that access is granted  
 **/
  
$f2=function($action,$database,$table,$column){
  return true; 
};

Input sanitizer

The following function can be used to sanitize input for specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'username')
 * @param type      type of the column (depends on engine)
 * @param value     input from the user (e.g. 'johndoe88')
 * @returns string  sanitized value
 **/
  
$f3=function($action,$database,$table,$column,$type,$value){
  return $value; 
};

Input validator

The following function can be used to validate input for specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'username')
 * @param type      type of the column (depends on engine)
 * @param value     input from the user (e.g. 'johndoe88')
 * @param context   all input fields in this action
 * @returns string  validation error (if any) or null
 **/
  
$f4=function($action,$database,$table,$column,$type,$value,$context){
  return null;
};

Configuration

This is an example configuration that requires the above snippets to be defined.

$api = new MySQL_CRUD_API(array(
  'hostname'=>'localhost',
  'username'=>'xxx',
  'password'=>'xxx',
  'database'=>'xxx',
  'charset'=>'utf8',
  'table_authorizer'=>$f1,
  'column_authorizer'=>$f2,
  'input_sanitizer'=>$f3,
  'input_validator'=>$f4
));
$api->executeCommand();
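
For illustration, here is a hedged sketch of what less permissive callbacks might look like, blocking deletes, hiding a password column, stripping HTML tags and validating an email field; the table and column names are just examples:

<?php

// Example only: more restrictive callbacks than the permissive defaults above

$f1 = function ($action, $database, $table) {
    // Never allow deletes through the API
    return $action != 'delete';
};

$f2 = function ($action, $database, $table, $column) {
    // Hide the password column of the customers table
    return !($table == 'customers' && $column == 'password');
};

$f3 = function ($action, $database, $table, $column, $type, $value) {
    // Strip HTML tags from every string value
    return is_string($value) ? strip_tags($value) : $value;
};

$f4 = function ($action, $database, $table, $column, $type, $value, $context) {
    // Require a valid e-mail address in columns named 'email'
    if ($column == 'email' && !filter_var($value, FILTER_VALIDATE_EMAIL)) {
        return 'invalid email address';
    }
    return null;
};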

You can find the project on Github.


Creating a simple REST API in PHP

I’m the author of php-crud-api and I want to share the core of the application with you. It includes routing a JSON REST request, converting it into SQL, executing it and giving a meaningful response. I tried to write the application as short as possible and came up with these 65 lines of code:

<?php

// get the HTTP method, path and body of the request
$method = $_SERVER['REQUEST_METHOD'];
$request = explode('/', trim($_SERVER['PATH_INFO'],'/'));
$input = json_decode(file_get_contents('php://input'),true);

// connect to the mysql database
$link = mysqli_connect('localhost', 'user', 'pass', 'dbname');
mysqli_set_charset($link,'utf8');

// retrieve the table and key from the path
$table = preg_replace('/[^a-z0-9_]+/i','',array_shift($request));
$key = array_shift($request)+0;

// escape the columns and values from the input object
$columns = preg_replace('/[^a-z0-9_]+/i','',array_keys($input));
$values = array_map(function ($value) use ($link) {
  if ($value===null) return null;
  return mysqli_real_escape_string($link,(string)$value);
},array_values($input));

// build the SET part of the SQL command
$set = '';
for ($i=0;$i<count($columns);$i++) {
  $set.=($i>0?',':'').'`'.$columns[$i].'`=';
  $set.=($values[$i]===null?'NULL':'"'.$values[$i].'"');
}

// create SQL based on HTTP method
switch ($method) {
  case 'GET':
    $sql = "select * from `$table`".($key?" WHERE id=$key":''); break;
  case 'PUT':
    $sql = "update `$table` set $set where id=$key"; break;
  case 'POST':
    $sql = "insert into `$table` set $set"; break;
  case 'DELETE':
    $sql = "delete `$table` where id=$key"; break;
}

// execute SQL statement
$result = mysqli_query($link,$sql);

// die if SQL statement failed
if (!$result) {
  http_response_code(404);
  die(mysqli_error($link));
}

// print results, insert id or affected row count
if ($method == 'GET') {
  if (!$key) echo '[';
  for ($i=0;$i<mysqli_num_rows($result);$i++) {
    echo ($i>0?',':'').json_encode(mysqli_fetch_object($result));
  }
  if (!$key) echo ']';
} elseif ($method == 'POST') {
  echo mysqli_insert_id($link);
} else {
  echo mysqli_affected_rows($link);
}

// close mysql connection
mysqli_close($link);

This code is written to show you how simple it is to make a fully operational REST API in PHP.

Running

Save this file as “api.php” in your (Apache) document root and call it using:

http://localhost/api.php/{$table}/{$id}

Or you can use the PHP built-in webserver from the command line using:

$ php -S localhost:8888 api.php

When run from the command line, the URL is:

http://localhost:8888/api.php/{$table}/{$id}

NB: Don’t forget to adjust the ‘mysqli_connect’ parameters in the above script!
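
As a quick usage sketch, here is a small, hypothetical PHP client that creates a record and fetches it back; it assumes the built-in webserver is running on port 8888 and that a posts table with title and content columns exists:

<?php

$base = 'http://localhost:8888/api.php';

// POST /posts with a JSON body - the API returns the insert id
$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/json\r\n",
        'content' => json_encode(['title' => 'Hello', 'content' => 'World']),
    ],
]);
$id = file_get_contents("$base/posts", false, $context);

// GET /posts/{id} - the API returns the record as JSON
echo file_get_contents("$base/posts/$id");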

REST API in a single PHP file

Although the above code is not perfect, it actually does three important things:

  1. Support the HTTP verbs GET, POST, PUT and DELETE
  2. Escape all data properly to avoid SQL injection
  3. Handle null values correctly

One could thus say that the REST API is fully functional. You may run into missing features of the code, such as:

  1. No related data (automatic joins) supported
  2. No condensed JSON output supported
  3. No support for PostgreSQL or SQL Server
  4. No POST parameter support
  5. No JSONP/CORS cross domain support
  6. No base64 binary column support
  7. No permission system
  8. No search/filter support
  9. No pagination or sorting supported
  10. No column selection supported

Don’t worry, all these features are available in php-crud-api, which you can get from Github. On the other hand, now that you have the essence of the application, you may also write your own!
