How to mock MySQLi when unit testing with PHPUnit

PHPUnit is the most used unit testing framework for PHP. Today I wanted to unit test some PHP code that relies on MySQLi. The dilemma is that you either need an actual database loaded with a fixture, or you need to mock the database. As Claudio Lassala clearly puts it:

Unit tests are not “unit” tests if they test things other than the System Under Test (SUT).

And further explains:

Unit tests check on the behavior of units. Think of a class as being a unit. Classes, more often than not, have external dependencies. Tests for such classes should not use their real dependencies because if the dependencies have defects, the tests fail, even though the code inside the class may be perfectly fine.

This theory made total sense to me. That’s why I decided to mock the MySQLi dependency. In this post I will show you just how far I got before I realized this was not going to work out (for me).

The code

The test class, which extends “PHPUnit_Framework_TestCase”, has an extra method “expectQueries()”. The class looks like this:

<?php

class MySQL_CRUD_API_Test extends PHPUnit_Framework_TestCase
{
	private function expectQueries($queries)
	{
		// $queries maps expected SQL strings to arrays of result rows
		$mysqli = $this->getMockBuilder('mysqli')
			->setMethods(array('query','real_escape_string'))
			->getMock();
		$mysqli->expects($this->any())
			->method('real_escape_string')
			->will($this->returnCallback(function($str) { return addslashes($str); }));
		$mysqli->expects($this->any())
			->method('query')
			->will($this->returnCallback(function($query) use ($queries) {
				// fail the test when an unexpected query is executed
				$this->assertTrue(isset($queries[$query]));
				$results = $queries[$query];
				$mysqli_result = $this->getMockBuilder('mysqli_result')
					->setMethods(array('fetch_row','close'))
					->disableOriginalConstructor()
					->getMock();
				$mysqli_result->expects($this->any())
					->method('fetch_row')
					->will($this->returnCallback(function() use ($results) {
						// hand out the rows one by one, then false (like mysqli_result)
						static $r = 0;
						return isset($results[$r])?$results[$r++]:false;
					}));
				return $mysqli_result;
			}));

		return $mysqli;
	}

	public function testSomeSubjectThatUsesMysqli()
	{
		$mysqli = $this->expectQueries(array(
			"SELECT * FROM `table`" =>array(array('1','value1'),array('2','value2'),array('3','value3')),
			"SELECT * FROM `table` LIMIT 2" =>array(array('1','value1'),array('2','value2')),
			// other queries that may be called
		));
		// do something that uses $mysqli
	}
}

The subject-under-test is actually doing something like this:

$result = $mysqli->query("SELECT * FROM `table`");
while ($row = $result->fetch_row()) {
	// do something with the data in $row
}
$result->close();

And in the test it will return the corresponding rows for the queries that you execute. Nice huh?

Not ready

This is a proof-of-concept of a mock of the MySQLi component for PHPUnit. The ‘real_escape_string’ implementation is sloppy. It does not (yet) support the much-used ‘prepare’, ‘execute’ or ‘fetch_fields’ methods. To give an idea of the completeness: it now supports 2 of 62 functions and properties for MySQLi, 0 of 28 for MySQLi Statement, and 2 of 15 for MySQLi Result. Apart from this incompleteness, there is the problem that you may need to support meta information, such as field names and types, to have a fully working mock. If you feel like continuing my work, feel free to take my code.

Conclusion

Although this was a nice exercise, and it may even be the right thing to do in theory, it did not seem to make much sense (to me) in practice. So I gave up on this approach, and my current implementation runs all tests against a real database. It loads a database from an SQL file (fixture) in the static ‘setUpBeforeClass()’ function. This may not be so ‘correct’ or ‘clean’ (from a unit testing point of view), but it is much faster to write and easier to maintain.
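
For illustration, a minimal sketch of what that fixture loading could look like (the class name, connection details, and fixture path are placeholders, not my actual code):

<?php

class DatabaseTest extends PHPUnit_Framework_TestCase
{
	protected static $mysqli;

	public static function setUpBeforeClass()
	{
		// hypothetical credentials; use whatever your test environment needs
		self::$mysqli = new mysqli('localhost', 'user', 'password', 'test_db');
		// load the fixture: execute the whole SQL dump in one call
		self::$mysqli->multi_query(file_get_contents(__DIR__.'/fixtures/test_db.sql'));
		// drain all result sets so the connection is ready for the tests
		while (self::$mysqli->more_results() && self::$mysqli->next_result()) {
			// nothing to do; just flush the result queue
		}
	}
}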

My question for you: Am I wrong or is the theory wrong? Please tell me using the comments.

Testing framework using Maven, TestNG and Webdriver

We have set ourselves the goal of creating a flexible and extendable automated testing framework, which should expand test coverage to as many LeaseWeb application functionalities as possible. The tests should also run on CI (Jenkins in our case). We write our tests in Java with the help of several tools:

  • Webdriver is a driver that offers a programming interface for controlling all kinds of possible actions in a browser.
  • TestNG is a testing framework. It structures, groups, and launches tests. It also generates testing reports.
  • Maven is a software project management and comprehension tool. It manages all dependencies and different flows for building a project.

Implementation

1. We created a Maven project with the following structure:
src/main/java – contains packages with Page Objects. Each package represents a separate application and can contain packages with general methods and objects of the framework.
src/main/resources – here we store config files and other resources that are used in our tests.
src/test/java – contains packages with tests. Each package contains tests related to only one application, or a test flow that spans a couple of applications.

This structure is important for compiling the project with Maven.

2. The main goal was to write tests in a readable manner, where each step is a single line of code that clearly describes what it does, while hiding all the actions that Webdriver performs “under the hood” in our Page Object classes. This makes it really easy for every team member – from developers to product managers and even sales people – to read and understand what a test does.
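
As an illustration, a hypothetical page object and the test that uses it (the class names, element IDs, and URL are invented for this sketch; they are not our production code):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

// the page object hides the WebDriver plumbing behind readable methods
class LoginPage {
	private final WebDriver driver;

	LoginPage(WebDriver driver) {
		this.driver = driver;
		driver.get("https://example.com/login");
	}

	LoginPage typeCredentials(String email, String password) {
		driver.findElement(By.id("email")).sendKeys(email);
		driver.findElement(By.id("password")).sendKeys(password);
		return this;
	}

	boolean submitAndCheckLoggedIn() {
		driver.findElement(By.id("login-button")).click();
		return driver.getPageSource().contains("Welcome");
	}
}

public class LoginTests {

	@Test
	public void customerCanLogIn() {
		// each step is a single, readable line
		LoginPage loginPage = new LoginPage(new FirefoxDriver());
		loginPage.typeCredentials("user@example.com", "secret");
		Assert.assertTrue(loginPage.submitAndCheckLoggedIn());
	}
}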

3. Test run. We decided to use TestNG for several reasons:

a. People tend to move from JUnit to TestNG once they start to write integration and end-to-end tests (e.g. with Selenium). At the unit test level you can put up with JUnit’s deficiencies, but for integration and end-to-end tests JUnit stands in your way, and TestNG comes to help you out.

b. No need to create your own Thread objects! You can use the @Test annotation with the threadPoolSize and invocationCount parameters to run tests in parallel.
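
For example, a sketch of a test that TestNG runs ten times, spread over three threads:

import org.testng.annotations.Test;

public class ParallelTests {

	// ten invocations distributed over a pool of three threads,
	// each invocation failing if it takes longer than ten seconds
	@Test(threadPoolSize = 3, invocationCount = 10, timeOut = 10000)
	public void homePageResponds() {
		// open the page under test and assert on it
	}
}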

c. Parameterization. A good testing framework should make this simple and help you avoid copy/paste coding. Alas, the default JUnit implementation of parameterized tests is unusable. In short, JUnit expects all test methods to be parameter-less. Because of this constraint, parameterized tests with JUnit look awkward, because parameters are passed via the constructor and kept as fields. People who work with JUnit usually test their code with only one set of parameters; I suspect this is because parameterized tests are unusable with JUnit, and that is bad, because such tests are really useful. TestNG offers the @DataProvider annotation, which lets you parameterize your tests however you like and improve your test coverage dramatically with fewer lines of code.
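
A minimal sketch of a data provider (the search terms and the search() helper are invented for this example):

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SearchTests {

	@DataProvider(name = "searchTerms")
	public Object[][] searchTerms() {
		return new Object[][] {
			{ "dedicated server", true },
			{ "cloud", true },
			{ "zzz-no-such-product", false },
		};
	}

	// TestNG runs this method once for every row of the data provider
	@Test(dataProvider = "searchTerms")
	public void searchBehaves(String term, boolean expectResults) {
		Assert.assertEquals(search(term), expectResults);
	}

	private boolean search(String term) {
		return !term.startsWith("zzz"); // stub standing in for the real search call
	}
}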

d. Passing parameters to tests:

  • Want to configure your tests so they, for example, invoke GUI methods on different servers/ports?
  • Run tests against a MySQL database instead of H2?
  • Run GUI tests with Opera, Firefox, Chrome (or even IE 6)?
  • It is all easy to do with the @Parameters annotation in tests, as the sketch below shows.
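
A sketch of how this could look (the parameter names and the driver factory are invented for this example; the values would come from <parameter> entries in testng.xml):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Parameters;

public class GuiTests {

	private WebDriver driver;
	private String baseUrl;

	// "browser" and "baseUrl" are defined as <parameter> elements in testng.xml,
	// so the same test class can target different browsers and servers
	@Parameters({ "browser", "baseUrl" })
	@BeforeClass
	public void setUp(String browser, String baseUrl) {
		this.baseUrl = baseUrl;
		this.driver = createDriver(browser);
	}

	private WebDriver createDriver(String browser) {
		if ("firefox".equals(browser)) {
			return new FirefoxDriver();
		}
		throw new IllegalArgumentException("Unsupported browser: " + browser);
	}
}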

e. Groups of tests. Each test class (and method) can belong to multiple groups, and with TestNG it is very easy to run only a selected group using the groups attribute of the @Test annotation. You can easily set in CI which group of tests you want to run – smoke, GUI, API, etc.
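
For example (the group names match the ones above; the method bodies are placeholders):

import org.testng.annotations.Test;

public class MixedTests {

	@Test(groups = { "smoke", "gui" })
	public void loginPageLoads() {
		// quick check that belongs to both the smoke and the GUI suite
	}

	@Test(groups = { "api" })
	public void ordersEndpointResponds() {
		// only runs when the api group is selected
	}
}

On CI, a group can then be selected with, for instance, mvn test -Dgroups=smoke (when running TestNG through the Surefire plugin).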

f. Test dependencies. Depending on other tests is not a desirable practice with unit tests, but when it comes to integration tests, you often cannot avoid it. This is where TestNG becomes very helpful and easy to use.
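
A sketch with dependsOnMethods (the method names are invented for the example):

import org.testng.annotations.Test;

public class OrderFlowTests {

	@Test
	public void createOrder() {
		// create the order that the next test pays for
	}

	// automatically skipped (not failed) when createOrder() fails
	@Test(dependsOnMethods = { "createOrder" })
	public void payOrder() {
		// pay the order created above
	}
}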

g. Reporting. TestNG provides more detailed and helpful reports in comparison with JUnit.

All these settings and tweaks can be specified in the testng.xml file, which is used by TestNG to run tests both on your local dev machine and on the CI server. It declares which tests should be run or skipped, and contains information about groups, packages, and classes. This file is located in the root project folder.

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >

<suite name="CAS" verbose="1">
  <test name="RegressionLogin">
    <classes>
      <class name="OrderFlowTests">
        <methods>
          <include name="orderFlow" />
        </methods>
      </class>
      <class name="LoginOppTests">
        <methods>
          <include name="successLoginToOpp" />
        </methods>
      </class>
    </classes>
  </test>
</suite>

4. Our pom.xml setup.

The beauty of Maven, for us, is that it keeps and maintains all dependencies that we use in our project. This makes it much easier to set up the project on a new machine, and also a lot easier to run the project on CI. We need at least two plugins to start: maven-compiler-plugin and maven-surefire-plugin (in whose configuration we reference our testng.xml). We made a few tweaks in the plugin configuration section to point to our testng.xml file, and also to specify which file naming formats should be considered tests by TestNG. The file is located in the root project folder.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>NewLeaseWebSite</groupId>
	<artifactId>NewLeaseWebSite</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<dependencies>
		<dependency>
			<groupId>org.testng</groupId>
			<artifactId>testng</artifactId>
			<version>6.3.1</version>
		</dependency>
		<dependency>
			<groupId>org.seleniumhq.selenium</groupId>
			<artifactId>selenium-firefox-driver</artifactId>
			<version>2.37.1</version>
		</dependency>
		<dependency>
			<groupId>org.seleniumhq.selenium</groupId>
			<artifactId>selenium-java</artifactId>
			<version>2.26.0</version>
		</dependency>
	</dependencies>
	<build>
		<testSourceDirectory>src/test/java</testSourceDirectory>
		<resources>
			<resource>
				<directory>src/main/resources</directory>
				<excludes>
					<exclude>**/*.java</exclude>
				</excludes>
			</resource>
		</resources>
		<pluginManagement>
			<plugins>
				<plugin>
					<artifactId>maven-compiler-plugin</artifactId>
					<version>3.1</version>
					<configuration>
						<source>1.7</source>
						<target>1.7</target>
					</configuration>
				</plugin>
				<plugin>
					<groupId>org.apache.maven.plugins</groupId>
					<artifactId>maven-surefire-plugin</artifactId>
					<version>2.16</version>
					<configuration>
						<suiteXmlFiles>
							<suiteXmlFile>testng.xml</suiteXmlFile>
						</suiteXmlFiles>
						<includes>
							<include>**/Test*.java</include>
							<include>**/*Tests*.java</include>
							<include>**/*Tests.java</include>
							<include>**/*Test.java</include>
							<include>**/*TestCase.java</include>
						</includes>
					</configuration>
				</plugin>
			</plugins>
		</pluginManagement>
	</build>
</project>

As a result, we have a test framework that can be easily extended, should not require much effort for maintenance in the future, and can be run on any CI server with several simple setup steps.

Using Selenium for automated functional testing

We are now in an age when it is accepted by most IT managers that Software Quality Assurance – and Testing/Validation & Verification in particular – is not only a desirable element in any project, but an essential aspect that cannot be ignored if you want to remain competitive against more SQA-mature rivals.

The cost of cutting corners usually arrives later, and with heavy interest. In extreme cases, it can make you go bankrupt if the fault has enough impact, but in an age of fast social networking and real-time news – damaging your reputation when your products fail is the most common setback.

Fortunately, it is now possible to rely not only on enterprise-class tools but also on simple, open-source free suites, which help refine the art from management of the initial requirements to test management, automation, defect management, and related practices.


Record and play

Development cycles are getting shorter, and the ability to test rapidly in an agile mode is what companies want. At LeaseWeb’s Cloud unit, we have been attempting to find a balance between test costs and coverage, and towards that, there has been an increased effort in areas like unit testing, test automation, and continuous integration.

Shorter cycles do not necessarily reduce the risk of new functionality affecting existing functionality; in fact, with frequent releases, there is less time for regression testing. In this post, we present a small introduction on how an investment in the automation of functional tests, especially regression suites, has helped us move faster, without added risk.

Regression cycles can be a drain on productivity and absorb time that would otherwise be spent in other activities, including improving the test management process. Manual regression testing can also be a mundane and boring activity for most test teams, requiring concentration even when following a clear test plan or checklist.

Our team has realized the necessity to automate functional tests and has taken some initial steps:

  • We looked at our test suites and identified core test cases in very high risk/high impact areas.
  • Of the flagged tests, we determined which were more time-consuming on each development cycle, and of those, which would be better performed via automation and which would benefit the most from experience-based, human testing, especially when involving exploratory testing techniques. It was determined that the most time-consuming test runs were related to regression on a Bootstrap-frontend-framework based web portal used by our internal teams to manage our Cloud offerings.
  • A subset of the test cases was then selected for automation scripting, allowing us an initial level of coverage of about 43%.
  • With that in mind, we looked at several possible tools for automation, some strongly GUI-dependent, others relying on headless browsers. Both have their strengths and weaknesses, usually trading between ease of first write/maintenance and customization power.

For now, we settled on using Selenium IDE (Se-IDE), a free Firefox extension created by Shinya Kasatani of Japan that can automate the browser usage through a record-and-playback feature. It is both a recording tool and a simple, accessible IDE.

The main advantage of Se-IDE is its ease of use: test teams with low-programming skill sets can quickly record test cases, and future maintenance stays easy. At the same time, it retains the ability to export tests to formats that can be used with Selenium 2 (a.k.a. WebDriver). In the first case, most interactivity occurs visibly in the browser, while in the latter, extra power is possible via direct control of the browser at OS level.

Even with Se-IDE having limited native functionality, its original commands can be expanded by directly coding JavaScript in the IDE window, should the user need it. In this first article, we will provide an overview of how Se-IDE can be used, with later articles/tutorials focusing on specifics like Element Locators and advanced tests.

Setting up

As a starting point, install the Selenium IDE plugin in Firefox. Selenium IDE has a plugin system that allows for easy extension and customization, with a few additional browser extensions and Se-IDE plugins that can prove useful:

  • Firebug and FirePath: These two extensions provide various useful tools for the Selenium scripter, but object identification will probably be one of the most useful.
  • Highlight Elements: This Selenium plugin allows much easier visual identification of objects being targeted during the execution of a script.


  • Stored Variables: Selenium does not natively provide a way to keep track of stored variables, so this plugin is quite useful.


Running the main Selenium IDE plugin

After running Selenium IDE, you can also add dedicated extensions to the IDE itself, via Options > General > Selenium IDE Extensions. Multiple extensions can be added, comma-separated:


Expanding Selenium IDE further via JavaScript user extensions

The following also prove useful:

  • Sideflow: Selenium does not natively provide a powerful way to control script flow. You can enhance that by adding labels, while loops, GoTo jumps and so on, via the Sideflow plugin.


Controlling script flow with simple labels, while, and GoToIf usage

  • Random: This library allows the creation of random numbers and strings according to certain parameters. It proves useful to randomize data in relevant test cases.

The usual recipe

After opening the IDE window, you will be presented with a blank project. In most cases, the formula to prepare a test case revolves around three basic steps:

  1. Record an execution via manually running a test case.
  2. Fine-tune the recorded script to make it as generic and robust to change as possible, without compromising its ability to validate test conditions. This is the lengthier stage.
  3. Integrate your new test case in the corresponding suite, adding flow-control if necessary. In certain usages, you will want to limit the execution of a test case block depending on previous results and/or required coverage for a release.

Inevitably, you will have to revisit your script as the tested product evolves. Scope changes like new features might require updates. Similarly, the identification of faults in production might bring the need to expand coverage with new tests.


(The Selenium IDE interface, including extra plugins)

Once a stable pool of tests is consolidated, they are executed considerably faster, especially when repeat runs are required. That does not mean, however, that the test cases will not need to be revised often; in our environment, the sprint cycles mean that new functionality is released every two weeks.

In some (most) cases, such new functionality does not affect older regression test cases, but there are occasions when a major interface change might require tweaks to the current scripts, or even a full rewrite. Every time you refactor a script, you might find new ways to make it more adaptable to future changes and updates.

Tracking variables

One way to make the test cases more robust is to use stored variables instead of hardcoded content as much as possible. You can do this with the Store command, later retrieving the content with a ${...} wrapper for full flexibility.

For example, if you store “bar” as the variable my_string_text1, you can later use it in any command; e.g. a Type command with “foo${my_string_text1}” as its value would result in “foobar” being used anywhere during script execution.

If you installed the Stored Variables plugin mentioned before, a new tab at the bottom of the interface will allow you to keep track of variables, which is useful during debug/step execution.

Extending the native commands with JavaScript usage

Se-IDE provides a limited number of native functions out-of-the-box. In case you require something that Se-IDE does not originally do, you can add your own JavaScript code. A simple example would be randomizing a username:

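For example, a snippet along these lines (a sketch based on the description below, not necessarily the exact code from the original screenshot):

javascript{'TestUser' + Math.floor(Math.random() * 100)}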

Using this in the ‘Target’ field of a Store command would store “TestUser” plus a random 0-99 number in the variable entered in the value field. While Se-IDE does not natively allow this, a simple code snippet adds the feature.

Another simple example would be selecting one of three random locations for a web form; in that case, you could do it by using something like this in the target field:

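For example (the location names are invented for this sketch):

javascript{['Amsterdam', 'Frankfurt', 'Singapore'][Math.floor(Math.random() * 3)]}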

Locating objects

Web test automation predominantly revolves around GUI objects, i.e. various UI controls like pull-downs, input boxes, text fields and so on. These objects can be identified via Name, ID, Link, XPath, CSS, etc., and some might have changing properties during runtime.

Once a script is initially recorded, you might find it necessary to adjust object identifiers, as they allow Se-IDE to identify targets for actions during runtime, and the accuracy of this process is vital.

Firebug can help identify objects precisely, something you can confirm with the FIND button in the “Target” area of Selenium IDE. You can use Firebug’s “Inspect” tool, select the element and then click FirePath to see its CSS or XPath identifiers.


(Firebug and FirePath showing the XPath locator for an image)

By default, Se-IDE frequently generates index-based XPath while recording. This is not the right approach, and maintainability becomes an issue, as the likelihood of a script breaking simply because an object is later moved is high. For this reason, it might be beneficial to convert those locators to IDs or CSS.

Se-IDE locators work on a single HTML document at a time, but often you need to work with a nested HTML structure with frames. Firebug can help analyze the HTML DOM to help you derive the best object identification locator.

Wrapping up

In our next posts, we are going to work on a short tutorial and later create advanced automated test cases using only Se-IDE installed as described, as well as delve deeper into locators, recording and editing, and custom JavaScript usage.

Even though Se-IDE has a decent amount of functionality, it also has a few limitations: for example, it is restricted to Firefox only, and it lacks the ability to scale well. To counteract that, we will show you later how to use the IDE to help write cases for the standalone, external Selenium Server.

Testing web applications for bugs using OWASP ZAP

For many companies, websites and other web applications are the main communication tools towards their customers. These customer-facing applications provide access to valuable data and system assets, often outside the corporate perimeter. Bugs in these applications can cause companies a lot of damage, both in data loss and reputation. This is why organizations need to be confident that security is guaranteed.

Organizations like the Open Web Application Security Project (OWASP) focus on improving the security of web applications. Since 2003, OWASP has published, every three years, a list of the most important security-related problems in software applications. This popular security resource lists the most serious web application bugs in the industry. The order of the problems is determined by multiplying ‘Likelihood’ by ‘Impact’. In 2013 the Top 10 looked like this:

Picture 1: OWASP Top 10

Besides offering this information, OWASP also provides tools that help developers and testers to find these bugs. One of these tools is the OWASP ZAP project. ZAP provides all the essentials for web application testing, including: an Intercepting Proxy, Active and Passive Scanners, a Spider, Report Generation, Brute Force, and Fuzzing. Scanning your web application is quite easy due to the clear interface of OWASP ZAP. By scanning applications during the development cycle, developers and testers can focus on preventing bugs and fixing them before the software goes live.

Picture 2: Scan policy (what do you want to include)

A basic test for security-related bugs using ZAP would consist of:

  • Configuring your browser to proxy via ZAP
  • Exploring the web application manually
  • Using the Spider to find ‘hidden’ content
  • Running the Active Scanner to find bugs

With the results of the Active Scanner, the end user can print out a report with the bugs found. The ZAP tool provides a reporting feature which allows you to generate reports that help you to identify the bugs that may have been found during the scans. The issues are presented to the user with an overview of their impact and often with a technical solution for the problem. These results are very valuable and can be directly used by developers and testers to improve the software.

Picture 3: Results found by the Active Scanner

ZAP works with predefined test cases to find issues. However, logical bugs, misconfigurations, etc., are not always detected by these scanners, so only running the scans on your web application does not guarantee that all possible issues are found. For a more reliable result, a manual test is also required.

OWASP ZAP is a free tool and is included in the latest version of the free Kali Linux. If you would like to know more about OWASP, please refer to its website.

Testing techniques for better manual testing


Manual testing continues to be the most popular method for validating the functionality of software applications. Manual testing is simple and straightforward. However, as technologies have progressed and applications have become more complex, the process of manual testing has stayed mostly unchanged. “To err is human”, and it is quite obvious that manual testing is error-prone, time-consuming, and monotonous for every tester. We all know that automated testing is more reliable and reduces testing time, but not all testing can be done at the click of a button: human intuition cannot be replaced by automation. It is best to combine automated and manual testing. Manual testing is something that has not changed in more than two decades, and in some cases it is a better choice than automated testing: with changing requirements and tight deadlines, it is easier to adapt the test cases accordingly.

Certain testing techniques can be implemented as part of manual testing so we can base our test cases on a better foundation. Although there are no hard and fast rules, here at LeaseWeb we apply the test techniques that best suit the scenario, based on the requirements. Some of the techniques that we can use are as follows.


Equivalence partitioning

Inputs to the application are divided into groups that are expected to exhibit similar behavior. The key goal is to complete the test coverage and to lessen duplication.

Partition system inputs and outputs into ‘equivalence sets’. For example, if the input is a 5-digit integer between 10,000 and 99,999, the equivalence partitions are < 10,000, 10,000–99,999, and > 99,999.

Boundary value analysis

In this technique, the test data chosen lie along the data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The idea is that, if a system works correctly for these special values, then it will work correctly for all values in between.

Choose test cases at the boundary of these sets: 00000, 09999, 10000, 99999, 100000.
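
As a sketch of how these boundary values could drive an automated check (the isValid() validator is a hypothetical stand-in for the input check under test):

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class BoundaryValueTests {

	// hypothetical validator: accepts 5-digit integers from 10,000 to 99,999
	private boolean isValid(int input) {
		return input >= 10000 && input <= 99999;
	}

	@DataProvider(name = "boundaries")
	public Object[][] boundaries() {
		return new Object[][] {
			{ 0, false },      // far below the range
			{ 9999, false },   // just below the lower boundary
			{ 10000, true },   // lower boundary
			{ 99999, true },   // upper boundary
			{ 100000, false }, // just above the upper boundary
		};
	}

	@Test(dataProvider = "boundaries")
	public void validatorRespectsBoundaries(int input, boolean expected) {
		Assert.assertEquals(isValid(input), expected);
	}
}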


Decision table testing

Test cases are designed for combinations of inputs that contain logical conditions, captured in a decision table. A decision table consists of four areas: the condition stub, the condition entries (TC1, TC2, etc.), the action stub, and the action entries. Each column of the table is a rule that specifies the conditions under which the actions named in the action stub will take place, where 1 is true, 0 is false, and X marks a condition that is not applicable or irrelevant.


Use case testing

Test cases are based on the use cases written by the functional designers. A use case describes the interaction between the actors (users and the system), along with the alternate flows, which helps the tester to grasp all the scenarios clearly and to base the test cases on them. All the preconditions required before testing are clearly mentioned, and the flow graphs are useful for understanding the working of the system. Use case diagrams are also helpful in acceptance testing, since they are designed with customer and user participation.


Ad-hoc testing

Testing is done based on the tester’s skills, intuition, and experience. There are no formal test cases for this type of testing.

An example of ad-hoc testing is exploratory testing, which is defined as simultaneous learning, test design, and test execution: tests are dynamically designed, executed, and modified. When we first look at a new feature or system, we don’t know much about it. We design experiments (or tests) to help us learn more about it. We then explore the system qualities and risks that we believe the customers, users, or other stakeholders may care about.