A High-Performance Logger with PHP

Recently, at Leaseweb, another successful Hackathon came to an end. There was a lot of fun, a lot of coding, a lot of coffee, and a lot of cool ideas arose during those two days. In this blog post we want to share one of these ideas that made us proud and that was really fun to work on.

1. The motivation

At Leaseweb, we strive to know our customers better so that we can actively empower them. For that, we need to start logging events that have value for the business. So basically we want to implement a simple technical service (let’s call it a ‘Logging Service’) that accepts any kind of event with some rich data (in other words, a payload) and then logs it, without interfering with the execution of the client.

Our Logging Service would, on each request, immediately return an “OK” so that the client can continue with its own execution, while the service keeps on logging events asynchronously.

2. Our tech stack and our tech choice for the POC

Typically at Leaseweb, we have a pretty standardized technology stack that revolves around PHP+Apache or PHP+Nginx. If a server or an application is built on this kind of stack, we are bound to typical synchronous execution. The client sends a request to our application (in this case our Logging Service) and needs to wait until our application sends back a response after it finishes all its tasks. This is not an ideal scenario. We need a service that runs asynchronously: a service that receives the request, says “OK” so that the client can continue with its execution, and then does its job.

In the market, there are several tools that would enable us to do this, such as NodeJS, Golang, or even message queuing services that could be wired into our PHP stack. But as we are PHP enthusiasts at Leaseweb, we wanted to use our language of choice without adding dependencies. That is how we discovered ReactPHP (no relation to the front-end framework React). ReactPHP is a pure PHP library that allows the developer to do some cool reactive programming in PHP by running the code on an endless event loop :).

3. Implementation of the POC

With some ReactPHP libraries, we can handle HTTP requests in PHP itself. We no longer need a web server that handles the requests and creates PHP processes for us. We can just create a pure PHP process that handles everything for us and, if we want to fully use our machine’s hardware, we can create several PHP processes and put them behind a load balancer.
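To give an idea of what this looks like, here is a minimal sketch of such a standalone HTTP server, assuming the react/http v1.x API (installed with Composer; the port is an arbitrary choice):

<?php
// Minimal sketch of a standalone HTTP server in pure PHP (react/http v1.x
// API assumed). One PHP process handles every request on an event loop.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

// Every request is handled by this callback inside a single PHP process;
// no Apache/Nginx worker model is involved.
$http = new HttpServer(function (ServerRequestInterface $request) {
    return Response::plaintext("Hello World!\n");
});

// Listen on a local port; several such processes on different ports can
// sit behind a load balancer to use all CPU cores.
$http->listen(new SocketServer('127.0.0.1:8080'));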

After choosing our technology we started playing around and implementing our idea. In order to make sure that asynchronous PHP would solve the majority of our concerns, we started benchmarking.

First we needed to script something to benchmark. Therefore we implemented 3 different scenarios:

  1. The all-time-favorite: An endpoint that prints Hello World.
  2. An API endpoint that calls a 3rd party API that takes 2 seconds to reply and proxies its response.
  3. An API endpoint that accepts a payload via POST and logs it to Elasticsearch over HTTP (this is what we really want).

We implemented these 3 scenarios in two different stacks:

  • Stack A
    Traditional PHP+FPM+Nginx
  • Stack X
    4 PHP processes running on a loop with ReactPHP behind an Nginx load-balancer. The reason why we chose 4 processes was solely because this number looks good :). There are some theories suggesting how many processes should run on a machine when using this approach, but we will not go into further detail on this in this article.

Note that in both Stack A and Stack X we used the exact same hardware specs for the server. On both, we had a CPU with 8 cores.
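As an illustration, the load-balancing part of Stack X could look roughly like the following Nginx configuration (a sketch; the upstream name and ports are assumptions, not the exact Hackathon setup):

# Round-robin requests across four ReactPHP processes listening on
# local ports 8081-8084 (hypothetical ports).
upstream logging_service {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
}

server {
    listen 80;

    location / {
        proxy_pass http://logging_service;
    }
}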

Then we ran some stress tests with Locust:

3.1 1st Benchmark – Hello world!

For the first benchmark, we just wanted to see how the implementation of Scenario 1 would behave in both of our stacks.

Hello World with Stack X
Figure 1: Hello World with Stack X
Hello World with Stack A
Figure 2: Hello World with Stack A

We can see that both stacks perform very similarly. The reason for this is that the computation needed to print a “Hello World!” is minimal, so both stacks can answer a high number of requests in a reliable way.

The real power of asynchronous code comes when we need to deal with input/output (I/O: reading from a DB, an API, the filesystem, etc.) because these operations are time-consuming (see [zhuk:2026:event-driven-with-php]). I/O is slow and CPU computation is fast. By going with an asynchronous approach, our program can execute other computations while waiting for I/O.

It is time to try this theory with the next benchmark:

3.2 2nd Benchmark – Response from a 3rd-party API

Following what we described in the previous section, the power of asynchronous code comes when we deal with input and output. So we were curious to find out how the APIs would behave if they needed to call a 3rd-party API that takes 2 seconds to respond and then forward its response.
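On Stack X, that scenario can be sketched as follows (again assuming the react/http v1.x API; the upstream URL is a placeholder, not the API we actually benchmarked):

<?php
// Sketch of Scenario 2: proxy a slow 3rd-party API without blocking.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use React\Http\Browser;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

$browser = new Browser();

$http = new HttpServer(function (ServerRequestInterface $request) use ($browser) {
    // Returning a promise defers this client's response until the upstream
    // replies; meanwhile the event loop keeps serving other requests.
    return $browser->get('http://slow-api.example.com/endpoint')
        ->then(function (ResponseInterface $upstream) {
            return Response::plaintext((string) $upstream->getBody());
        });
});

$http->listen(new SocketServer('127.0.0.1:8080'));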

Let’s run the tests:

Response from a 3rd-party API with Stack X
Figure 3: Response from a 3rd-party API with Stack X
Response from a 3rd-party API with Stack A
Figure 4: Response from a 3rd-party API with Stack A

The benchmarks illustrated in figures 3 and 4 already show very interesting results. We ran a stress test that gradually spawns 100 concurrent users, each sending one request per second, and we can easily see that as the number of concurrent users grows, Stack A becomes less and less responsive. Stack A ends up with an average response time of 40 seconds, while Stack X maintains an average response time of 2 seconds (which is the time the 3rd-party API takes to respond).

With Stack A, each request creates a process that stays idle until the 3rd-party API responds. The implication is that, at some point in time, we will have hundreds of idle processes waiting for a reply. This causes an immense overload on our machine’s resources.

Stack X performs exceedingly well. This is because the same processes that wait for the reply from the 3rd-party API will continue doing other work during their execution, for example, handling other HTTP requests and incoming responses from the 3rd-party API. With this, we can achieve much more efficiency in our stack.

After observing these results we wanted to push a bit harder, to see whether we could break Stack A entirely. So we decided to run the same stress test for this scenario, but this time with 1000 concurrent users.

Response from a 3rd-party API with Stack X with 1000 users
Figure 5: Response from a 3rd-party API with Stack X with 1000 users
Response from a 3rd-party API with Stack A with 1000 users
Figure 6: Response from a 3rd-party API with Stack A with 1000 users

We did it! We can see that at some point Stack A is unable to handle the requests anymore, so it stops responding completely after reaching an average response time of 60 seconds. Stack X remains perfectly smooth with an average response time of 2 seconds :).

3.3 3rd Benchmark – Logging payloads to Elasticsearch

It was indeed fun to see how the stacks behaved in the previous scenarios, but we wanted to see how they behave in a real-world scenario. Next, we wanted our API to accept a JSON payload via an HTTP POST and, to keep it simple, log it to an Elasticsearch cluster via HTTP (Scenario 3).

How the stacks work in a nutshell:

  • Stack X receives an HTTP POST request with the payload, sends a response to the client saying OK, and then logs it to Elasticsearch (asynchronously; see the sketch after this list).
  • Stack A receives an HTTP POST request with the payload, logs it to Elasticsearch, and then sends a response to the client saying OK.
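A sketch of the Stack X side (react/http v1.x API assumed; the Elasticsearch URL and index name are made up for illustration):

<?php
// Sketch of Scenario 3 on Stack X: answer "OK" first, log afterwards.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\Browser;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

$browser = new Browser();

$http = new HttpServer(function (ServerRequestInterface $request) use ($browser) {
    $payload = (string) $request->getBody();

    // Fire the Elasticsearch request without awaiting it; the event loop
    // sends it in the background while new requests keep being served.
    $browser->post(
        'http://elasticsearch:9200/events/_doc',
        ['Content-Type' => 'application/json'],
        $payload
    )->then(null, function (Exception $e) {
        // A failed log write must not affect the client; just record it.
        error_log('Elasticsearch write failed: ' . $e->getMessage());
    });

    // The "OK" goes out immediately, before the log write completes.
    return Response::plaintext("OK\n");
});

$http->listen(new SocketServer('127.0.0.1:8080'));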

Let’s bombard it with Locust again, and why not with 1000 concurrent connections right away:

Logging payloads with Stack X
Figure 7: Logging payloads with Stack X
Logging payloads with Stack A
Figure 8: Logging payloads with Stack A

We can see that we achieved a pretty reliable, high-performance logger. And all of this with pure PHP code!

Because our intention was always to push the limits, we ran this benchmark with 1000 concurrent users being spawned gradually. Stack A at some point stops handling the requests, while Stack X always keeps a pretty good response time of around 10ms.

4. What can we use it for now?

With this experiment, we pretty much built a central logging service! Coming back to our main motivation, we can use it to log whatever we want, and we want to start logging meaningful domain events from any application within our system with a simple non-blocking HTTP request. By logging meaningful events, we get to know our customers better. And if we log all of this into Elasticsearch, we can also start making cool graphs from it. For example:

Graph with Business Events
Figure 9: Graph with Business Events

OR

Graph with End-user Actions
Figure 10: Graph with End-user Actions

Since this approach is so responsive, we can even start using it to log anything and maybe everything, with our logging centralized in a single endpoint: system monitoring, near-real-time analytics, domain events, trends, and so on. And all of this with pure PHP :).

5. Cons of the approach and future work

When using ReactPHP there are some important considerations, which in some scenarios can be seen as cons, and which usually do not apply to projects that follow an architecture similar to Stack A:

  • ReactPHP uses reactive/event-driven programming, a paradigm that may come with a steep learning curve.
  • Long-running PHP processes can lead to memory leaks and, in case of failure, can affect all current connections to the server.
  • These processes need to be carefully and constantly monitored in order to predict and avoid fatal failures.
  • The usage of blocking functions (functions that block the execution of the code) will massively affect performance for all connections to the server.

Also, some extra work on the “operational / infra side” is needed to make sure that we continuously check each process’s health and, if something goes wrong, create new processes automatically. We also need to work on the way we deploy code: when deploying new code, we must restart the processes sequentially so that our service can finish serving the requests it has queued at that time.
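As a sketch of what that could look like, a process manager such as supervisord can keep the processes alive and restart them on failure (the program name, path, and ports below are assumptions):

; Hypothetical supervisord config: four logging-service processes,
; one per port (8081-8084), restarted automatically if they die.
[program:logging-service]
command=php /var/www/logging-service/server.php 808%(process_num)d
process_name=%(program_name)s_%(process_num)d
numprocs=4
numprocs_start=1
autostart=true
autorestart=true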

6. Conclusion

Pushing the limits of our preferred technology is one of the most fun things to do. PHP is not the usual choice when a high-performance application is needed, but thanks to the great work of the community, solutions like ReactPHP are starting to emerge. This opens a new path to discovering new programming paradigms, introduces different mindsets on how to approach a problem, and challenges the knowledge we have of the technology.

Challenging what we already know is one of the most interesting things we can do, because it takes us out of our comfort zone and helps us become more mature. It is really fun, and it makes us realise that we can never know everything.

We would like to thank and acknowledge everybody in the communities around the tools we used in this fun experiment 🙂


by Joao Castro and Elrich Faul


Leaseweb Cloud AWS EC2 support

As you might know, some of the products LeaseWeb includes in its portfolio are Public and Private Cloud based on Apache CloudStack, which supports a full API. We at LeaseWeb are very open about this, and we try to be as involved and participative in the community and in product development as possible. You might be familiar with this if you are a Private Cloud customer. In this article we target current and former EC2 users, who probably already have tools built upon the AWS CLI, by demonstrating how they can keep using them with LeaseWeb Private Cloud solutions.

Apache CloudStack supported the EC2 API early on but, along the way, while the EC2 API evolved, CloudStack’s support somewhat stagnated. In fact, the AWS API component of CloudStack was recently detached from the main distribution so as to simplify the maintenance of the code.

While this might sound like bad news, it’s not at all. In the meantime, another project spun off, EC2Stack, and was embraced by Apache as well. This new stack supports the latest API (at the time of writing) and is much easier to maintain, both in versatility and in codebase. The fact that it is written in Python opens up the audience for further contribution while at the same time allowing quick patching and upgrading without recompiling.

So, at some point, I thought I could share with you how to quickly set up your AWS-compatible API so you can reuse your existing scripts. On to the details.

The AWS endpoint acts as an EC2 API provider, proxying requests to the LeaseWeb API, which is an extension of the native CloudStack API. And since this API is available to Private Cloud customers, EC2Stack can be installed by customers themselves.

Following is an illustration of how this can be done. For the record, I’m using Ubuntu 14.04 as my desktop, and I’ll be setting up EC2stack against LeaseWeb’s Private Cloud in the Netherlands.

The first step is to gather all the information for EC2stack. Go to your LeaseWeb platform console and obtain API keys for your user (sensitive information blurred):

apikeys-blurred

Note down the values for API Key and Secret Key (you should already know the concepts from AWS and/or LeaseWeb Private Cloud).

Now, install EC2Stack and configure it:

ntavares@mylaptop:~$ pip install ec2stack 
[…]
ntavares@mylaptop:~$ ec2stack-configure 
EC2Stack bind address [0.0.0.0]: 127.0.0.1 
EC2Stack bind port [5000]: 5000 
Cloudstack host [mgt.cs01.leaseweb.net]: csrp01nl.leaseweb.com 
Cloudstack port [443]: 443 
Cloudstack protocol [https]: https 
Cloudstack path [/client/api]: /client/api 
Cloudstack custom disk offering name []: dualcore
Cloudstack default zone name [Evoswitch]: CSRP01 
Do you wish to input instance type mappings? (Yes/No): Yes 
Insert the AWS EC2 instance type you wish to map: t1.micro 
Insert the name of the instance type you wish to map this to: Debian 7 amd64 5GB 
Do you wish to add more mappings? (Yes/No): No 
Do you wish to input resource type to resource id mappings for tag support? (Yes/No): No 
INFO  [alembic.migration] Context impl SQLiteImpl. 
INFO  [alembic.migration] Will assume non-transactional DDL. 

The value for the zone name will be different if your Private Cloud is not in the Netherlands POP. The rest of the values can be obtained from the platform console:

serviceoffering-blurred

template-blurred
You will probably have different (and more) mappings to do as you go; just re-run this command later on.

At this point, your EC2stack proxy should be able to talk to your Private Cloud, so we now need to launch it and have it accept EC2 API calls for your user. For the time being, just run it in a separate shell:

ntavares@mylaptop:~$ ec2stack -d DEBUG 
 * Running on http://127.0.0.1:5000/ 
 * Restarting with reloader

And now register your user using the keys you collected from the first step:

ntavares@mylaptop:~$ ec2stack-register http://localhost:5000 H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E 
Successfully Registered!

And that’s it, as far as the API service is concerned. As you’d normally do with the AWS CLI, you now need to “bind” the CLI to these new credentials:

ntavares@mylaptop:~$ aws configure 
AWS Access Key ID [****************yI2g]: H5xnjfJy82a7Q0TZA_8Sxs5U-MLVrGPZgBd1E-1HunrYOWBa0zTPAzfXlXGkr-p0FGY-9BDegAREvq0DGVEZoFjsT
AWS Secret Access Key [****************L4sw]: PYDwuKWXqdBCCGE8fO341F2-0tewm2mD01rqS1uSrG1n7DQ2ADrW42LVfLsW7SFfAy7OdJfpN51eBNrH1gBd1E
Default region name [CS113]: CSRP01
Default output format: text

And that’s it! You’re now ready to use AWS CLI as you’re used to:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 --output json ec2 describe-images | jq ' .Images[] | .Name ' 
"Ubuntu 12.04 i386 30GB" 
"Ubuntu 12.04 amd64 5GB" 
"Ubuntu 13.04 amd64 5GB" 
"CentOS 6 amd64 5GB" 
"Debian 6 amd64 5GB" 
"CentOS 7 amd64 20140822T1151" 
"Debian 7 64 10 20141001T1343" 
"Debian 6 i386 5GB" 
"Ubuntu 14.04 64bit with docker.io" 
"Ubuntu 12.04 amd64 30GB" 
"Debian 7 i386 5GB" 
"Ubuntu 14.04 amd64 20140822T1234" 
"Ubuntu 12.04 i386 5GB" 
"Ubuntu 13.04 i386 5GB" 
"CentOS 6 i386 5GB" 
"CentOS 6 amd64 20140822T1142" 
"Ubuntu 12.04 amd64 20140822T1247" 
"Debian 7 amd64 5GB"

Please note that I only used JSON output (and JQ to parse it) for summarising the results, as any other format wouldn’t fit on the page.

To create a VM with built-in SSH keys, you should create/set up your keypair in LeaseWeb Private Cloud, if you haven’t already. In the following example I’m generating a new one, but of course you could load your existing keys.

ssh-keypairs-blurred

You will want to copy-paste the generated key (in Private Key) to a file and protect it. I saved mine in $HOME/.ssh/id_ntavares.csrp01.key.

ssh-keypairs2-blurred

This key will be used later to log into the created instances and extract the administrator password. Finally, instruct the AWS CLI to use this keypair when deploying your instances:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100	10.42.1.129 
PLACEMENT	CSRP01 
STATE	16	running 

Note that the image-id is taken from the previous listing (the one I simplified with JQ).

Also note that EC2stack is fairly new, and there are still some limitations to this EC2-CS bridge; see below for a mapping of supported API calls. For instance, one that you could run into at the time of writing this article (~2015) was the inability to deploy an instance if you’re using multiple Isolated networks (or multiple VPCs). Amazon shares this concept as well, so a simple patch was necessary.

For this demo, we’re actually running in an environment with multiple isolated networks, so if you ran the above command, you’d get the following output:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key
A client error (InvalidRequest) occurred when calling the RunInstances operation: More than 1 default Isolated networks are found for account Acct[47504f6c-38bf-4198-8925-991a5f801a6b-rme]; please specify networkIds

In the meantime, LeaseWeb’s patch was merged, as were many others, which demonstrates both the power of Open Source and the activity on this project.

Naturally, the basic maintenance tasks are there:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 describe-instances 
RESERVATIONS	None 
INSTANCES	KVM	7c123f01-9865-4312-a198-05e2db755e6a	a0977df5-d25e-40cb-9f78-b3a551a9c571	dualcore	ntavares-key	2014-12-04T12:03:32+0100	10.42.1.129	10.42.1.129 
PLACEMENT	CSRP01	default 
STATE	16	running

I’ve highlighted some information you’ll need to log in to the instance: the instance ID and the IP address, respectively. You can log in either with your SSH keypair:

[root@jump ~]# ssh -i $HOME/.ssh/id_ntavares.csrp01.key root@10.42.1.129 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

[...] 
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 
root@a0977df5-d25e-40cb-9f78-b3a551a9c571:~#

If you need, you can also retrieve the password the same way you do with EC2:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 
None dX5LPdKndjsZkUo19Z3/J3ag4TFNqjGh1OfRxtzyB+eRnRw7DLKRE62a6EgNAdfwfCnWrRa0oTE1umG91bWE6lJ5iBH1xWamw4vg4whfnT4EwB/tav6WNQWMPzr/yAbse7NZHzThhtXSsqXGZtwBNvp8ZgZILEcSy5ZMqtgLh8Q=

As happens with EC2, the password is returned encrypted, so you’ll need your key to decrypt and display it:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 get-password-data --instance-id a0977df5-d25e-40cb-9f78-b3a551a9c571 | awk '{print $2}' > ~/tmp.1
ntavares@mylaptop:~$ openssl enc -base64 -in tmp.1 -out tmp.2 -d -A 
ntavares@mylaptop:~$ openssl rsautl -decrypt -in tmp.2 -out tmp.3 -inkey $HOME/.ssh/id_ntavares.csrp01.key 
ntavares@mylaptop:~$ cat tmp.3 ; echo 
hI5wueeur
ntavares@mylaptop:~$ rm -f tmp.{1,2,3} 
[root@jump ~]# sshpass -p hI5wueeur ssh root@10.42.1.129 
Linux localhost 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 

[...]
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent 
permitted by applicable law. 
Last login: Thu Dec  4 13:33:07 2014 from jump.rme.leasewebcloud.com 
root@a0977df5-d25e-40cb-9f78-b3a551a9c571:~#

The multiple isolated networks scenario

If you’re already running multiple isolated networks in your target platform (be they VPC-bound or not), you’ll need to pass the argument --subnet-id to the run-instances command to specify which network to deploy the instance into; otherwise CloudStack will complain about not knowing in which network to deploy the instance. I believe this is due to the fact that Amazon doesn’t allow the use of Isolated Networking as freely as LeaseWeb, which delivers full flexibility in the platform console.

Since EC2stack does not support describe-network-acls (as of December 2014), which would let you determine which Isolated networks you could use, the easiest way to find them is to go to the platform console and copy & paste the Network ID of the network you’re interested in:

Then you could use --subnet-id:

ntavares@mylaptop:~$ aws --endpoint=http://127.0.0.1:5000 ec2 run-instances \
 --instance-type dualcore \
 --image-id 7c123f01-9865-4312-a198-05e2db755e6a \
 --key-name ntavares-key \
 --subnet-id 5069abd3-5cf9-4511-a5a3-2201fb7070f8
PLACEMENT	CSRP01 
STATE	16	running 

I hope I demonstrated a bit of what can be done in regards to a compatible EC2 API. Other functions are available for more complex tasks although, as written earlier, EC2stack is quite new, so you might need community assistance if you cannot develop a fix on your own. At LeaseWeb we are very interested to know your use cases, so feel free to drop us a note.


PHP-CRUD-API now supports authorization and validation

Another milestone has been reached for the PHP-CRUD-API project, a project that aims to provide a high-performance, consistent data API over REST that is easy to deploy (it is a single PHP file!) and requires minimal configuration. By popular demand we have added four important new features:

  1. Tables and the actions on them can be restricted with custom rules.
  2. Access to specific columns can be restricted using your own algorithm.
  3. You can specify “sanitizers” to, for example, strip HTML tags from input.
  4. You can specify “validator” functions to show errors on invalid input.

These features are built by allowing you to define callback functions in your configuration. These functions can then contain your application-specific logic. How these functions work and how you can load them is explained below.

Table authorizer

The following function can be used to authorize access to specific tables:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @returns bool    indicates that access is granted  
 **/
  
$f1=function($action,$database,$table){
  return true; 
};

Column authorizer

The following function can be used to authorize access to specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'password')
 * @returns bool    indicates that access is granted  
 **/
  
$f2=function($action,$database,$table,$column){
  return true; 
};

Input sanitizer

The following function can be used to sanitize input for specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'username')
 * @param type      type of the column (depends on engine)
 * @param value     input from the user (e.g. 'johndoe88')
 * @returns string  sanitized value
 **/
  
$f3=function($action,$database,$table,$column,$type,$value){
  return $value; 
};

Input validator

The following function can be used to validate input for specific columns:

/**
 * @param action    'create','read','update','delete','list'
 * @param database  name of your database (e.g. 'northwind')
 * @param table     name of the table (e.g. 'customers')
 * @param column    name of the column (e.g. 'username')
 * @param type      type of the column (depends on engine)
 * @param value     input from the user (e.g. 'johndoe88')
 * @param context   all input fields in this action
 * @returns string  validation error (if any) or null
 **/
  
$f4=function($action,$database,$table,$column,$type,$value,$context){
  return null;
};
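As an illustration, concrete implementations of the sanitizer and validator could look like this (hypothetical rules, following the signatures documented above):

// Hypothetical example rules: strip HTML tags from all string input,
// and reject empty usernames.
$f3=function($action,$database,$table,$column,$type,$value){
  return is_string($value)?strip_tags($value):$value;
};

$f4=function($action,$database,$table,$column,$type,$value,$context){
  if ($column=='username' && trim($value)=='') {
    return 'username must not be empty';
  }
  return null;
};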

Configuration

This is an example configuration that requires the above snippets to be defined.

$api = new MySQL_CRUD_API(array(
  'hostname'=>'localhost',
  'username'=>'xxx',
  'password'=>'xxx',
  'database'=>'xxx',
  'charset'=>'utf8',
  'table_authorizer'=>$f1,
  'column_authorizer'=>$f2,
  'input_sanitizer'=>$f3,
  'input_validator'=>$f4
));
$api->executeCommand();

You can find the project on Github.


PHP script to tail a log file using telnet


Why would you need a PHP script to tail a log file using telnet? You don’t! But the script is cool anyway. It allows you to connect to your web server over telnet, talk some HTTP to it, and run a PHP script that shows a tail of a log file. It uses ANSI escape sequences (colors!) to provide a nice user interface specifically for tailing a log file with the “follow” option (like “tail -f”). Below you find the PHP script that you have to put on the web server:

<?php
// configuration
$file = '/var/log/apache2/access.log';
$ip = '127.'; // only allow clients whose IP address starts with this prefix
// start of script
$title = "\033[H\033[2K$file"; // ANSI: cursor home, clear line, print file name
if (strpos($_SERVER['REMOTE_ADDR'],$ip)!==0) die('Access Denied');
$stream = fopen($file, 'r');
if (!$stream) die("Could not open file: $file\n");
echo "\033[m\033[2J"; // ANSI: reset colors and clear the screen
fseek($stream, 0, SEEK_END); // start tailing from the current end of the file
echo str_repeat("\n",4500)."\033[s$title"; // padding (forces the buffer to flush), save cursor, show title
flush();
while(true){
  $data = stream_get_contents($stream); // read whatever was appended since last read
  if ($data) {
    // ANSI: green text, restore cursor, print new lines, save cursor, pad, reprint title
    echo "\033[32m\033[u".$data."\033[s".str_repeat("\033[m",1500)."$title";
    flush();
  }
  usleep(100000); // poll ten times per second
}
fclose($stream); // never reached; the loop runs until the connection drops

To tail (and follow) a remote file you need to talk HTTP to the web server using telnet and request the PHP tail script. First you connect using telnet:

$ telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

After connecting you have to “speak” some HTTP (just type this):

GET /tail.php HTTP/1.1
Host: localhost

NB: Make sure you end the above telnet commands with an empty line! After this the screen should be cleared, showing any new log lines in real time in green in the telnet window.

You can use Ctrl + ‘]’ to get to the telnet prompt and type “quit” to exit.

If you don’t want to copy the code above, then you can also find the latest version of tail.php on Github.


Simple web application firewall using .htaccess

Apache provides a simple web application firewall by allowing for a “.htaccess” file with certain rules in it. This is a file you put in your document root that may restrict or allow access from specific IP addresses. NB: These commands may also be put directly in the virtual host configuration files in “/etc/apache2/sites-available/”.

Use Case #1: Test environment

Sometimes you may want to lock down a site and only grant access from a limited set of IP addresses. The following example (for Apache 2.2) only allows access from the IP address “127.0.0.1” and blocks any other request:

Order Allow,Deny
Deny from all
Allow from 127.0.0.1

In Apache 2.4 the syntax has slightly changed:

Require all denied
Require ip 127.0.0.1

You can find your IP address on: whatismyipaddress.com

Use Case #2: Application level firewall

If you run a production server and somebody is abusing your system with a lot of requests then you may want to block a specific IP address. The following example (for Apache 2.2) only blocks access from the IP address “172.28.255.2” and allows any other request:

Order deny,allow
Allow from all
Deny from 172.28.255.2

In Apache 2.4 the syntax has slightly changed:

Require all granted
Require not ip 172.28.255.2

If you want to block an entire range you may also specify CIDR notation:

Require all granted
Require not ip 10.0.0.0/8
Require not ip 172.16.0.0/12
Require not ip 192.168.0.0/16

NB: Not only IPv4, but also IPv6 addresses may be used.
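For example, to block an entire IPv6 prefix with the Apache 2.4 syntax (the prefix below is from the documentation range, purely for illustration):

Require all granted
Require not ip 2001:db8::/32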
