MariaDB and WordPress explained


In my extensive search for the tech stack that best suits WordPress, MariaDB showed significant performance gains over its competitors.

…but will it work in the WordPress ecosystem?

First thing to check when testing a new database engine with WordPress is compatibility.

WordPress officially supports only MySQL. There are database abstraction-layer techniques for running other engines, but they are too complex to set up to offer any real performance gain.

MariaDB, on the other hand, is 100% compatible with MySQL.

All the goodies that increase performance and offer new features are tucked under the hood of the familiar MySQL.

Compatibility – check!


Query caching

Unlike MySQL, MariaDB comes with query caching on by default. This means every query sent to the database engine is first looked up in the cache, so the result can often be returned without the query ever running against the database.

Performance increases on a WordPress site, which always has a much larger number of database reads than writes, are significant.
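You can see the cache at work on a running MariaDB server with the standard SHOW commands; the variable and status names below are the stock ones:

```sql
-- Check whether the query cache is enabled and how large it is
SHOW VARIABLES LIKE 'query_cache%';

-- See how many queries were answered straight from the cache
SHOW STATUS LIKE 'Qcache%';
```

A growing Qcache_hits counter relative to Com_select is the sign that reads are being served from the cache.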

Subquery caching

Another form of caching which MariaDB offers is subquery caching.

If we have two queries run one after another:

Query 1:


Query 2:

MySQL would run both of these queries in their entirety, increasing CPU load and response time.

MariaDB, on the other hand, would create a “virtual” table consisting of the values returned by the subquery – so when the second query is sent, the data is read from the cached “virtual” table instead of running the subquery again.
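A hypothetical pair of queries sharing the same subquery might look like this; the table and column names are illustrative, loosely modeled on a WordPress schema:

```sql
-- Query 1: titles of posts written by active users
SELECT post_title FROM wp_posts
WHERE post_author IN (SELECT ID FROM wp_users WHERE user_status = 0);

-- Query 2: count of posts by the same set of users.
-- MariaDB can serve the IN (...) subquery from its subquery cache
-- instead of evaluating it a second time.
SELECT COUNT(*) FROM wp_posts
WHERE post_author IN (SELECT ID FROM wp_users WHERE user_status = 0);
```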

Storage engines

MariaDB ships with numerous storage engines, each suited to a specific set of usage scenarios.

Two storage engines we’ve considered are MariaDB’s more performant forks of MyISAM and InnoDB: Aria and XtraDB.

Though MyISAM and Aria offer greater read speeds and smaller memory footprints out of the box, under heavy usage and database growth these storage engines become fragmented and slow.

What’s the point of having a site that’s super fast with a couple of visitors, but super slow at its peak?

XtraDB and InnoDB solve fragmentation problems by using clustered indices.

Indices are continuously reordered into clusters to ensure spatial proximity of related data – which is crucial when your queries contain a large number of JOIN and GROUP BY statements.

In a WordPress environment, where viewing a single post triggers multiple queries with JOINs, clustered indices make a difference.

Future development

Being community driven and completely open source, MariaDB experiences rapid feature growth and encourages the development of experimental storage engines, any of which could turn out to be the next big thing.

As of version 10.0.1, MariaDB has added support for the Cassandra storage engine, with the intention of including more NoSQL storage engines in the future. Though NoSQL currently means nothing to your WordPress installations – since WordPress only supports SQL engines – this is a big sign of MariaDB’s vibrant development efforts.

If we can ensure scalability, maintainability, unprecedented speed and performance – there’s no such thing as a ‘nontraditional’ tech stack.

Ansible DTAP – Development, testing, acceptance and production

Quis custodiet ipsos custodes?

A majority of Ansible use cases are in application deployment and continuous delivery, a job at which Ansible truly excels. But when using Ansible for such mission critical things, an age-old question might arise:
Who is going to guard the guardians?
In other words, how can we claim continuous delivery and super-cool automated deployments if the Ansible scripts themselves don’t pass through the same process?
In my previous post – Testing Ansible – I identified four different steps in testing our Ansible scripts.
This DTAP process should give an overview and a loosely coupled framework for putting our Ansible code to production.

Development – “Ground zero”

The most important thing about development is to be completely fearless about making errors – and to make those errors easily reversible.

To achieve that comfort in development, a virtual environment is a must: be it a container, a VPS, or a VM… whatever suits the project you are developing. My recommended development environment is HashiCorp’s Vagrant.
Vagrant gives you the comfort of putting up multiple virtual machines in a single environment – properly abstracting the production infrastructure you might have.
When the development is finished, the following tests need to be run:

  1. Syntax test – did we write code or did we write gibberish?
  2. Dry run – are all the prerequisites for the configuration changes present?
  3. Run the scripts – will the script actually run?
  4. Idempotency test – will I make any harm by running the configuration script again on an already configured machine?

It’s important to note that these tests can be fully automated and take almost no time to run – ideal for development.
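The four checks map onto plain ansible-playbook invocations, and the idempotency check can be automated by parsing the run summary. A sketch, assuming a playbook named provision.yml; the recap line used below is sample output, not captured from a real run:

```shell
# 1. Syntax test:      ansible-playbook provision.yml --syntax-check
# 2. Dry run:          ansible-playbook provision.yml --check
# 3. Run the scripts:  ansible-playbook provision.yml
# 4. Idempotency test: re-run step 3 and verify that nothing changed.
#
# Step 4 can be scripted by parsing the PLAY RECAP line of the second run.
# The recap line below is illustrative sample output.
recap='web1 : ok=12 changed=0 unreachable=0 failed=0'

# Extract the number after "changed="
changed=$(echo "$recap" | sed -n 's/.*changed=\([0-9]*\).*/\1/p')

if [ "$changed" -eq 0 ]; then
  echo "idempotent"
else
  echo "not idempotent"
fi
```

A run that reports changed=0 on its second pass is the practical definition of an idempotent playbook.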

Testing – “The first trials”

Delayed assertion

The development is finished – great! We’re a quarter of the way to production.
We are left with one important test mentioned in Testing Ansible: Delayed assertion.
Delayed assertion is just you writing more code to accurately test whether all the conditions required by the feature are met.
After running the smoke tests mentioned in the Development phase and running the Delayed assertion tests, we need to ask the authority to give us clearance for staging.

Authorization for staging

Our code is now swimming with the big fishes – no more development comfort, ad-hoc changes and SSH sessions…

Once we ask the authority for permission to go to staging – we are in the rapids flowing to production, everything from now on is fully automated.
The authority, in our case, is the CI Server.

The CI server’s role in this step is to re-run all the steps done in development, to guard against a common cause of failure: a developer not testing properly.

The CI Server workflow:

  1. Code is committed to the repository.
  2. The needed VMs/VPSs are spun up exactly as in the development environment.
  3. Tests are run on the machines in exactly the same way as in the development environment.
  4. If everything is OK – we are ready to get serious.

Staging / Acceptance – “Getting serious”

We have the code, we have the proof that the tests are passing. Now comes staging.
The attitude we have towards the staging environment should be identical to the production environment – otherwise we simply haven’t set the stage well.
The only difference between the staging and the production environment is in the fact that no end-users are using it!
The same tests as in the previous step are run but this time on an exact copy of the production infrastructure.
Depending on the CI tool which you are using, this will be easier or harder to setup, but the ideal workflow should be the following:

  1. Run tests in the staging environment
  2. If the tests fail, mark the build as unsafe and don’t destroy the staging environment
  3. If the tests pass, mark the build as passing and destroy the staging environment


Why don’t we destroy the staging environment when the tests are failing but quickly dispose of it if they’re not?
Simply because we want to have access to the environment which failed to provision normally – to gather data on the failure and to make sure we avoid it in the next build. Marking the build as unsafe in this case simply means that this specific revision CANNOT end up in the production environment – no excuses.

Production – “The point of no return”

There isn’t much to say about production. I recommend visiting the List of religions and spiritual traditions on Wikipedia and picking who to pray to that nothing breaks. Once you stop praying that nothing breaks and start praying that the tests you have written have good coverage, you know you’re getting better. Once you stop praying even that the tests are OK, and leave the office immediately after deploying to production, you know that you are a sociopath who just likes to watch the world burn – congratulations!


CI tool – I’m currently experimenting with Go CD which, coming from a short and troublesome experience with Jenkins, seems like a nice refreshment.

Anti-concurrent deployments – this is a major issue once you’ve built an automated workflow from development to production: you don’t want people running deployments at the same time, because something will break, and it will break hard. If you can’t set up this kind of control in your CI tool, I recommend Etsy’s PushBot, an IRC bot which allows developers to queue for their turn at deploying.

Military-grade ACLs – you don’t want to trust anyone, not even yourself. Granularize access to certain parts of the workflow wherever and whenever possible. A good practice would be to implement a sharded key, shared by multiple members of the team, for deploying changes to production after successfully passing the tests in the staging environment.

Testing Ansible

Ansible is an ingenious piece of software which saves the time put into documenting configuration procedures and configuration schemes, and makes running those procedures and bootstrapping your infrastructure a whole lot easier.

By its design, Ansible does not force you to use any particular style or structure. If you’d rather use long playbooks instead of granular, nicely organized roles, so be it – you will achieve your goal!

The problem is that the resources are scarce when it comes to testing deployed configuration, and the method for testing configurations must be as flexible as Ansible itself.

Every journey starts with analysis – cometh the four types of tests.

Syntax test

The first thing you should do after writing any configuration script is to check its syntax for errors. Most errors are syntax-related, so don’t hesitate to run a quick check.
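With ansible-playbook the syntax check is a single flag; the playbook name here is illustrative:

```shell
ansible-playbook provision.yml --syntax-check
```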

Dry run

ansible-playbook provision.yml --check

Dry running will run your specified playbook without making changes to the target machines. You will see if the tasks can run, not if they will run.

When doing dry runs you will encounter some false negatives. For example, I want to install Elasticsearch from a .deb package that I packaged myself, without pulling it from a repository. The way I’m doing that is:

  1. Copy es.deb to /tmp/
  2. Install debian package located in /tmp/es.deb
  3. Remove the package from /tmp/es.deb

The dry run will fail in this case because it checks whether it can install the file located at /tmp/es.deb – which was never actually copied; the dry run only checked that the copy was possible.

The way around this problem is to add an always_run flag to the tasks we want to run regardless of whether it is a dry run or a regular run – in this case, the copy and cleanup tasks.
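A sketch of the three tasks with always_run applied to the copy and cleanup steps, so they also execute during a --check run; the paths and package file name are illustrative:

```yaml
- name: Copy es.deb to /tmp/
  copy: src=es.deb dest=/tmp/es.deb
  always_run: yes

- name: Install the Debian package located in /tmp/es.deb
  apt: deb=/tmp/es.deb

- name: Remove the package from /tmp/es.deb
  file: path=/tmp/es.deb state=absent
  always_run: yes
```

With the copy task always running, the dry run finds /tmp/es.deb in place when it checks the install step, and the cleanup task tidies up afterwards either way.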

Idempotency test

Idempotency is a fancy word borrowed from maths with a very simple meaning: no matter how many times you paint the wall green, the wall stays green. In other words, repeating the same action in succession will not produce a different outcome. Idempotency is what separates ad-hoc scripts from production-grade Ansible scripts. If I want to install a new utility on my entire infrastructure, and for some reason the script fails after running on 50% of the machines, I don’t want to cherry-pick the failed or unconfigured machines – I want to run the same script again, doubly sure that the first half is A-OK!

Delayed assertion

Delayed assertion is what a Java programmer thinks of when you mention the word test.

So far we’ve only done smoke tests – tests which tell us whether the code has any chance of achieving our result, not whether it actually achieves it.

Idempotency, dry run and syntax tests are easily automated, but the computer won’t know what we want unless we explicitly state our cause.

How the development process looks so far:

  1. You want to have an Nginx web server listening on port 80.
  2. You write an Ansible script that passes all before mentioned tests.
  3. … you come to a sad realization that the only thing you are sure of is that the script you wrote in step 2 does something right – not that it does what you defined in step 1.

How the process should look like:

  1. You put your wish into code.
  2. You write an Ansible script that passes the syntax, dry run and idempotency tests – aiming for the fulfilment of the wish.
  3. You check if the wish you put into code is satisfied.

Putting wish into code in this example would look like this:

  • I want to have an open TCP socket on port 80
  • I want to curl http://localhost:80 and receive status code 200
  • I want to have Nginx started & listening on port 80

Making assertions in Ansible can be done using the script and assert modules.
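A hypothetical set of tasks matching the three wishes above; the module arguments are a sketch, not pulled from a real role:

```yaml
- name: I want an open TCP socket on port 80
  wait_for: port=80 timeout=5

- name: Curl localhost and capture the status code
  shell: curl -s -o /dev/null -w "%{http_code}" http://localhost:80
  register: http_status

- name: Assert that the response code is 200
  assert:
    that:
      - http_status.stdout == "200"
```

If Nginx is started and listening, all three tasks pass; if not, the play fails and tells you exactly which wish went unfulfilled.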

But why put “delayed” next to assertion?

Let’s say I want to check if all of my tasks needed for installation of Nginx executed properly on the remote machine.

I will execute the dry run test on the remote machine and see if something is different:
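A check run against the remote machine would look like this; the playbook name is illustrative:

```shell
ansible-playbook nginx.yml --check
```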

If the output from the above command says that nothing needs to be changed (in other words – no differences exist), I know that the machine is configured according to my Nginx role and that all tasks were executed.

But who will guarantee that my server is listening on port 80 and returning 200 status codes? 

The answer can’t be simpler: I will have it checked by Ansible!

The delayed assertion looks at the outcome and tests whether you can actually use it after properly executing all the exact steps.

Why delayed? Because we know that all the steps do what they are supposed to, but we don’t know whether, in the bigger picture, the purpose is met – so we have to delay the test of the bigger picture until after we have assembled the smaller bits and pieces.

How to set up Vim on Windows

If you’ve ever tried to achieve comfort working in Vim on Windows, you know what a hassle it can be.

On OS X and *nix, Vim works completely fine in the terminal emulator. Unfortunately, there’s not one terminal emulator for Windows that works well with different color schemes and handles terminal resizing gracefully – things that are crucial for comfortable text editor usage.

Install gVim

Let’s start by downloading Vim from the official website.

Make sure that you download the version including the GUI (gVim).

After the installation is complete, start up gVim to be presented with this atrocity:


Every Vim command you are familiar with is here, but the toolbars, the font, the colors – everything is wrong.

Chip away the ugly UI

gVim is configured in the same way as on any *nix system – through the ~/.vimrc file in your home directory.

Let’s open up gVim and enter the following command: :e ~/.vimrc

Now we’re ready to start punching our preferences into the configuration.

First thing we’ll do is set the encoding to UTF-8 to properly display international characters:
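A single line does it:

```vim
set encoding=utf-8
```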

Next we'll hide the ugly toolbar:
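The toolbar is the T flag of the guioptions setting:

```vim
set guioptions-=T  " hide the toolbar
```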

Let’s be honest, we don’t need the menu either:
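Same trick, different flag:

```vim
set guioptions-=m  " hide the menu bar
```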

If you don’t use scrollbars for navigation, we can hide them as well:
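The scrollbars have their own guioptions flags:

```vim
set guioptions-=r  " hide the right-hand scrollbar
set guioptions-=L  " hide the left-hand scrollbar in vertical splits
```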

Hit :x to write the configuration and exit gVim, so that we can start it again with the applied configuration.

It should now have a much cleaner look:

Install custom fonts

Download and install fonts of your choice from the powerline-fonts repository on GitHub. I recommend downloading from this repo because of compatibility with the powerline & airline Vim plugins. My personal favourite is the Hack font.

To set the downloaded font as the default we need to open up .vimrc one more time and type the following:
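On Windows, gVim takes the font name and point size in this format:

```vim
set guifont=[font-name]:h[font-size]
```

For example: set guifont=Hack:h11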

Make sure to replace [font-name] and [font-size] with your preferences.

And you’re ready to start typing away!