3 Simple Server Changes to Boost Website Performance

Site speed matters. Google now uses page load time as a minor ranking factor, and research by Akamai and Gomez.com shows that many visitors will abandon a site whose pages take longer than 2 seconds to load. That means lost trust and, even worse, lost revenue. So what can be done?

Glen over at ViperChill wrote a really good guide on speeding up websites, to which I contributed some info. The guide is a good mix of methods that will work on just about any site; however, I believe that speed optimisation begins with the server. The reason is simple - everything starts with the server returning the content. By getting your server to handle more traffic more quickly you can make more money, and you can save on server costs too. It’s a win-win situation (unless you’re the sales rep for a hosting company ;))

This post is a follow-on from Glen’s (I wasn’t able to go into any real depth there), so to recap, the methods I supplied were:

  • Install a PHP accelerator
  • Turn on MySQL query caching
  • Mount cache directories into memory

Now, the reason I gave Glen those methods is that they deliver massive performance benefits and are very easy to set up - any server admin worth their salt can get them going in 10 minutes tops. In fact, I’m going to supply you with some stats to show you why you should use them!

The Setup

Website load testing can be a tricky task due to the number of factors involved. As I wanted to show that these methods don’t need any major configuration changes to work, I kept things simple with the standard basic test - push the server until it starts to break and work from there. Since Linux is the most common web server operating system in use, I built a small VPS on my Windows desktop using VirtualBox and installed a basic LAMP stack on it with all the default settings. The testing was done from my laptop via Siege, which is a lovely load-testing program designed purely for hammering servers (it doesn’t bother rendering pages). The WordPress site itself was a base install with the following content added: 3401 blog posts of between 962 and 4810 words each, 14693 comments (25-50 per post) of between 114 and 245 words each, 89 categories and finally 266 images added to random posts.

No, I didn’t add all of that manually, thank God :p For these types of tests you’d normally start with a few connections and then rapidly ramp them up to simulate real traffic patterns. To simulate a worst-case scenario, though, I hit the VPS with a flood of traffic from the start - 30 concurrent connections making as many requests for pages, CSS files, images, etc. as possible in 2 minutes. That’s similar to 30 rogue spiders going crazy on your site. As my home network is 100 Mb/sec with almost no latency, network speed was taken out of the equation. These tests were purely about the server.
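For reference, a Siege run matching the scenario above looks something like this (the URLs file and the server address are illustrative, not the actual test setup):

```shell
# 30 concurrent connections, 2 minutes, benchmark mode (-b means
# no delay between requests - a worst-case flood, not real users).
# urls.txt is a plain text list of page/CSS/image URLs, one per line.
siege -c 30 -t 2M -b -f urls.txt

# A single-URL run works too:
siege -c 30 -t 2M -b http://192.168.1.50/
```

When the run finishes, Siege prints the transaction count, availability, and response-time figures that the metrics below are based on.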

For each test, I looked at the following metrics:

  • Total Requests - The total number of requests made. Requests still waiting for a response from the server when the test ended were not counted.
  • Requests/sec - The number of requests divided by the length of the test (2 minutes).
  • Availability - The percentage of requests that succeeded. Requests could fail for any of several reasons, including timeouts, server errors, etc.
  • Average Response - The average response time of all the successful responses, in seconds.
  • Longest Response - The longest response time of all the successful responses, in seconds. The first requests were almost always the longest, as Apache had to ‘wake up’ from idling, load PHP into memory, etc.
  • Shortest Response - The shortest response time of all the successful responses, in seconds.

Baseline Test

As you can see below, the baseline test produced some truly horrendous results. While I didn’t record it, CPU usage shot through the roof within about 10 seconds and the VPS became very sluggish. After the test completed, the CPU load took so long to come down that I simply forced the VPS to reboot. Hell, it shot up so quickly that the console stopped responding until long after the test had finished.


  • Total Requests: 147
  • Requests/sec: 1.23
  • Availability: 41.5%
  • Average Response: 7.76s
  • Longest Response: 27.94s
  • Shortest Response: 1.26s

Also, look at the availability - less than half the requests succeeded. That means only about 40% of users would see anything at all, and even then only after waiting around 8 seconds. This really sucks.

Install a PHP Accelerator And Enable MySQL Caching

I decided to combine these two methods because hey, if you’re going to implement one you may as well implement the other. PHP is a dynamic scripting language, which means that every time a page is requested the code has to be read, analysed, compiled and then executed, all of which uses memory and CPU time. Obviously, this becomes a major bottleneck, especially when the server is busy.

PHP accelerators solve this problem by caching the compiled scripts in memory for re-use. With the reading, analysing, and compiling steps removed, scripts finish quicker and the server can handle more traffic. There are several PHP accelerators to choose from, but my personal preference is XCache. It is fast, stable, easy to install, easy to use and works ‘out of the box’, so you don’t need to make any changes to your website code (always a benefit!).
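For the curious, getting XCache going on a Debian/Ubuntu-style LAMP box is roughly this - the package name, file paths and cache size are typical examples from that era, so adjust them to your distro and traffic:

```shell
# Install the XCache extension for PHP 5
apt-get install php5-xcache

# Typical settings in the XCache ini file
# (e.g. /etc/php5/conf.d/xcache.ini):
#   xcache.size   = 64M   ; memory reserved for compiled scripts
#   xcache.count  = 4     ; number of cache chunks, often one per core
#   xcache.cacher = On    ; enable the opcode cacher

# Restart Apache so PHP picks up the extension
/etc/init.d/apache2 restart
```

After the restart, `php -v` (or a phpinfo() page) should list XCache among the loaded extensions.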

A default WordPress install makes around 27 database queries per page request. These are usually small and quick (less than a second to finish all 27), but as you add plugins and gain traffic the queries add up and you start to bully the MySQL server. Thankfully, the majority of SQL queries simply read data, so MySQL self-defence 101 always starts with query caching. It works by keeping the results of previous queries in memory; if the same query comes in again, MySQL just hands back the stored result. If the underlying table changes - for example, you publish a new post - the cached result is invalidated and a fresh one is stored the next time the query runs.
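Enabling the query cache is a couple of lines in my.cnf - the 64M size below is just a sensible starting point, not a magic number:

```shell
# In /etc/mysql/my.cnf, under the [mysqld] section:
#   query_cache_type  = 1     # cache all cacheable SELECT results
#   query_cache_size  = 64M   # memory reserved for cached results
#   query_cache_limit = 1M    # don't cache results bigger than this

# Restart MySQL, then confirm it's on and watch the hit counters:
mysql -e "SHOW VARIABLES LIKE 'query_cache%'"
mysql -e "SHOW STATUS LIKE 'Qcache%'"
```

A steadily climbing `Qcache_hits` relative to `Qcache_inserts` tells you the cache is actually earning its keep.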


  • Total Requests: 5050
  • Requests/sec: 42.08
  • Availability: 100%
  • Average Response: 0.71s
  • Longest Response: 4.95s
  • Shortest Response: 0.08s

As you can see, caching has had a major effect on the results. In order to better visualise the performance increase, here’s a quick graph of the requests per second:

Baseline vs Xcache/MySQL - Requests

That’s a massive jump. Looking at that graph reminds me of Usain Bolt running against normal people :) While this is a damn good improvement (over 33 times the baseline throughput!), to me the most important part is the 100% availability. It shows the server stayed stable, which is vital for converting traffic - even if the responses were slower, you can’t convert visitors when the pages don’t load at all.

Mount Cache Directories Into Memory

Static HTML files are hands-down quicker, easier and more resource-efficient to serve than PHP pages, which is why I’d always recommend HTML caching as the first optimisation to implement. As the site runs WordPress, I used the WP Super Cache plugin. It works in a similar way to the PHP and MySQL caching above, the biggest difference being that it saves its cache to the hard drive instead of to memory. This still offers huge performance gains, but it has two drawbacks:

  • Hard drives are slower than RAM. Even SSD drives are slower than RAM (and by a wide margin).
  • If your server is very busy or your storage system is bad (crap drives, crap RAID card, etc.), performance starts to degrade quite noticeably.

Should you start to hit I/O issues, the answer is simply to mount the cache directories in memory, something Linux supports out of the box via tmpfs. You can even limit the amount of memory reserved, which helps you manage resources (always handy!). The only caveat is that because the files live in memory, they are lost if the server reboots or similar - so only mount non-critical directories this way.
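A minimal sketch of the mount, assuming WP Super Cache’s usual cache location - adjust the path and size limit to your own setup:

```shell
# Mount the cache directory as a 64 MB RAM-backed filesystem
mount -t tmpfs -o size=64m tmpfs /var/www/wp-content/cache

# To have the mount recreated at boot (the mount survives a reboot;
# the cached files inside it don't), add this line to /etc/fstab:
#   tmpfs  /var/www/wp-content/cache  tmpfs  size=64m  0  0

# Check it took:
df -h /var/www/wp-content/cache
```

Because the cache is non-critical by definition - WP Super Cache simply regenerates any page that isn’t there - losing its contents on reboot costs you nothing but a brief warm-up.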


  • Total Requests: 214,364
  • Requests/sec: 1786.36
  • Availability: 100%
  • Average Response: 0.32s
  • Longest Response: 1.23s
  • Shortest Response: 0.02s

Look at those results! Over 1700 requests a second, and the response times are amazing as well! That’s another massive jump (over 42 times the XCache test’s throughput!), and not only that, the CPU was virtually idling throughout the test (only 2 of the 4 cores were in use). To continue the Usain Bolt analogy, this is like strapping a rocket to his back. Here’s a graph to show what I mean:

Xcache/MySQL vs Mounted Directory - Requests

To tell you the truth, when I saw the results I was hesitant to publish them because I thought people would say I was making stuff up. But they’re real! Remember that network speed was removed from the testing, so these numbers come purely from the VPS - this is the fastest and most efficient way it could deliver website content. How does this relate to the ‘real world’? Well, even if network speeds and latency slow your response times down, your server won’t be strained, even under high traffic volumes, and that can save you a lot of money on hardware costs. You’d be silly not to try HTML caching - the benefits are really worth it!


I hope I’ve given you a much better understanding of these simple optimisations and how they can benefit you. I also hope I’ve gotten you interested in implementing them :) With so many guides focusing on ‘sexy’ things like CDNs and CSS tricks, it’s easy to forget the basics. There is actually a lot more that can be done at the server level, but that’s a post for another day. If you have any questions or the like, feel free to leave a comment below :)

