Notify My Android support for monitoring

December 5th, 2016

Great scenery, terrible mobile coverage

If you’re anything like me, December involves a tour of parents who have retired to far flung corners of the land and who are now living in houses with perfectly serviceable wifi but absolutely no mobile phone coverage. This creates a problem if you’re supposed to be listening out for computers that go bleep in the night, as the SMS notifications don’t get through.

To address this, we’ve just implemented Notify My Android support in our monitoring service. As the name suggests, this allows us to push monitoring alerts to your Android phone. It’s pretty easy to set up:

  1. Register for an account with Notify My Android (a free account will allow 5 notifications per day, a one-off payment of $5 will get you unlimited notifications).
  2. Log in, and generate an API key
  3. Download the app on your phone, and log in
  4. Visit our control panel and add an entry to your notification list that looks like nma:API_KEY where API_KEY is the key generated above.

If you use the other kind of phone we have equivalent functionality using Prowl. This works the same, except you put prowl: before your API key.
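
For example, a notification list containing both kinds of device might hold entries like these (the keys are made up; use the ones generated above):

nma:0123456789abcdef0123456789abcdef0123456789abcdef
prowl:0123456789abcdef0123456789abcdef01234567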

All of our dedicated and virtual servers include basic ping monitoring as standard. Comprehensive monitoring of other services is available as an add-on, or as standard with any of our managed services.

Backup Upgrade

November 25th, 2016

We’re using AES rather than 8 rotor enigma encryption.

We’ve just completed an upgrade to our backup services. We’ve relocated the London node into Meridian Gate, which means that for all London-hosted virtual machines, your primary backup is now in a different building from your server. We’ve kept our secondary backup service in our Cambridge data centre, 60 miles distant.

We’ve also taken the opportunity to enable disk encryption, so all data stored on the primary backup server is now encrypted at rest, providing an additional layer of assurance for our clients and fewer questions to answer on security questionnaires.
We’ve restricted the SSH ciphers accepted by the backup server to further improve the security of data in transit, and we’ve increased the available space and improved performance in the IO layer, so backups and restores will complete more quickly.
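
For reference, restricting ciphers on an OpenSSH server is a one-line change in sshd_config; the list below is purely illustrative rather than the exact set in use on the backup servers:

# /etc/ssh/sshd_config — accept only modern ciphers (illustrative list)
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr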

Of course we’ve kept some important features from the old backup service such as scanning our managed customers’ backups to make sure they’re up to date and making sure that we alert customers before their backups start failing due to lack of space. Obviously all traffic to and from the backup server is free and it supports both IPv6 and IPv4.

If these are the sort of boring tasks on your todo list and you’d like us to do them for you, please get in touch at sales@mythic-beasts.com.

ANAME records

October 7th, 2016

Company policy requires that all blog posts have a picture.

We’ve just added support to our control panel and DNS API for “ANAME” records. ANAME records, also known as ALIAS records, aren’t real DNS records, but are a handy way of simulating CNAME records in places where you can’t use a real CNAME.

It works like this:

You’ve got DNS for your domain managed with Mythic Beasts, and you want to host your website with some 3rd party service provider. They’ll tell you to point DNS for your website at their server. You create a CNAME record for www.yourdomain.com and point it at server.3rdparty.com. So far so good.

You also want requests for your bare domain, e.g. http://yourdomain.com to be served by your provider, so you try to create a CNAME for yourdomain.com and get told you can’t. This is because you will already have MX, NS and SOA records for your bare domain, and CNAMEs aren’t allowed to co-exist with other records for the same name.

The usual fallback is to create A or AAAA records that point directly to the IP address of server.3rdparty.com, but this sucks because their IP is now hard-coded into your zone, and if they ever want to change the IP of that server they’ve got to try and get all of their customers to update their DNS.

The nice solution would be SRV records, standardised DNS records that allow you to point different protocols at different servers. Unfortunately, they’re not supported for HTTP or HTTPS.

This is where ANAME records come in. You can create an ANAME just like a CNAME, but without the restrictions on co-existing with other records. We resolve the ANAME and substitute the corresponding IP addresses into records in your zone. We then regularly check the target for any changes, and update your records accordingly.

Naturally, our ANAME implementation fully supports IPv6: if the hostname you point the ANAME at returns AAAA records, we’ll include those in addition to any A records returned.
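
As an illustration (with placeholder addresses and TTLs), the ANAME you configure and the records that end up being served might look like this:

; what you configure
yourdomain.com.      300   ANAME   server.3rdparty.com.

; what we publish, and keep up to date, after resolving the target
yourdomain.com.      300   A       192.0.2.10
yourdomain.com.      300   AAAA    2001:db8::10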

Sneak preview from Mythic Labs, Raspberry Pi netboot

August 5th, 2016

We don’t like to pre-announce things that aren’t ready for public consumption. It’s no secret that we’d love to offer hosted Raspberry Pis in the data centre, and in our view the blocker for this being possible is the unreliability of SD cards which require physical attention when they fail. So we’ve provided some assistance to Gordon to help with getting netboot working for the Raspberry Pi. We built a sensible looking netboot setup and spent a fair amount of time debugging and reading packets to try and help work out why the netboot was occasionally stopping.

This isn’t yet a production service and you can’t buy a hosted Raspberry Pi server. Yet. But if you’d be interested, we’d love to hear from you at sales@mythic-beasts.com.



This is a standard Raspberry Pi 3 with a Power over Ethernet (PoE) adapter. You have to boot the Pi once from a magic SD card which enables netboot. Then you remove the SD card and plug the Pi into the powered network port. PoE means we can power cycle it using the managed switch. At boot, it talks to a standard tftpd server and isc-dhcp-server, which deliver a kernel that runs from an NFS root. It’s a minimal Raspbian Jessie from debootstrap plus sshd, and occupies a mere 381M versus the 1.3G of a standard Raspbian install. The switch reports the Pi 3 consuming 2W.
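
For the curious, the “magic SD card” works by setting a one-time-programmable bit in the Pi 3’s SoC: boot once from a card whose config.txt contains the line below and the Pi will attempt network boot from then on, even with no card present (this is the documented mechanism; our exact card image may differ).

# add to config.txt on the boot partition, then boot the Pi once
program_usb_boot_mode=1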

The Raspberry Pi topple is just for fun.



VPS-not-so-lite

June 17th, 2016

We’ve upgraded our VPS Lite service to bring it onto the same hardware platform as our standard VPS offerings. Our base level virtual server is now the VPS 1 and has 1GB RAM and 20GB disk. It’s fractionally more expensive than the old VPS Lite, but offers double the disk and RAM. We’ve also introduced an SSD option, in addition to traditional spinning disks, for faster IO performance.

Customers of the existing VPS Lite service will be migrated onto the new hardware platform, and given the option of keeping their current spec on faster hardware, or upgrading to the new VPS 1 spec.

Naturally, the new VPS 1 comes with IPv6 as standard: IPv4 is an option, although with our IPv4 to IPv6 reverse proxy and the availability of NAT64 for outbound traffic, most websites can easily be hosted on an IPv6-only server.

Full details of the current specifications can be found on the Virtual Servers page.

PROXY protocol + nginx = broken header

May 9th, 2016

We recently announced support for PROXY protocol in our IPv4 to IPv6 reverse proxy, and happily linked to the instructions for making it work with NGINX. One of our customers has pointed out that they didn’t actually work, and we’ve now got to the bottom of why not.

NGINX version

First issue: you need NGINX >= 1.9.10, as there was a bug with using proxy_protocol on IPv6 listeners. If you’re on Debian Jessie, you can get a suitable version from Jessie backports.
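
Assuming a stock Jessie system, enabling backports and pulling in the newer package looks something like this (adjust the mirror to taste):

# enable jessie-backports and install NGINX from it
echo "deb http://ftp.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get install -t jessie-backports nginx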

PROXY protocol version

Second issue: NGINX only speaks PROXY protocol v1 and our proxy was attempting to speak v2.

v1 is a human readable plain text protocol, whereas v2 is binary. If you see something like this in the error log:

2016/05/09 11:11:30 [error] 6058#6058: *1 broken header: "

QUIT
!
 ]Y??.????PGET / HTTP/1.1

Then that’s a good sign that you’ve got a v2 reverse proxy talking to you.

We’ve now changed our proxy to only speak PROXY protocol v1 by default. We will look into making this a configurable option in the future. The Apache module seems happy speaking either version.

Whilst we’re here, here are some other failure modes you might see. This, in the access log, is v2 PROXY protocol being spoken to an NGINX that is not configured for PROXY protocol at all:

2a00:1098:0:82:1000:3b:1:1 - - [09/May/2016:11:08:55 +0100] "\x00" 400 172 "-" "-"

And this is v1 PROXY protocol being spoken to NGINX which is not configured for it:

2a00:1098:0:82:1000:3b:1:1 - - [09/May/2016:11:39:30 +0100] "PROXY TCP4 93.89.134.240 46.235.225.189 64221 80" 400 173 "-" "-"

PROXY protocol support for our, err, proxy

April 29th, 2016

We’re increasingly using our IPv4 to IPv6 reverse proxy to host websites on IPv6-only virtual machines. One of the downsides of proxying is that your server doesn’t get to see the client’s real IP address. For non-SSL connections, the proxy can insert an “X-Forwarded-For” header, but SSL is increasingly becoming the norm, and one of the nice things about an SNI-aware reverse proxy is that it doesn’t need to do SSL offload: we don’t need your certificates on our proxy and your traffic stays encrypted until it hits your server. Of course, this means that we can’t go inserting any headers into your connection either.

Fortunately, there is a solution: PROXY protocol. This is a protocol-agnostic mechanism for passing information from a reverse proxy to a server, including the client IP address.

We’ve just added support for PROXY protocol to our reverse proxy:

The PROXY protocol option in our control panel

Turning this on allows your server to get the client IP address, but as it’s an additional protocol, not part of HTTP, your server must be expecting it: turning this on and pointing it at a standard HTTP server will result in a broken website.

Most web servers have support for this. NGINX has support built in, and just needs “proxy_protocol” adding after the listen directive:

server {
    listen 80   proxy_protocol;
    listen 443  ssl proxy_protocol;
    ...
}

You will probably also want some additional configuration to actually set the IP address that gets used for logs etc., and also to ensure that you only trust proxy information from the real proxy servers.
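
With NGINX, the usual approach is the realip module, which can take the client address from the PROXY protocol header; a minimal sketch, using an illustrative address range standing in for our proxies (substitute the actual proxy addresses):

# http or server context: trust PROXY protocol information only from the proxy itself
set_real_ip_from 2a00:1098:0:82::/64;   # illustrative range
real_ip_header  proxy_protocol;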

For Apache, support is provided by mod_proxy_protocol, which needs to be installed manually. Once done, configuration is easy:

<VirtualHost *:443>
  ...
  ProxyProtocol On

  CustomLog ${APACHE_LOG_DIR}/access.log "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""
</VirtualHost>

The CustomLog line instructs Apache to use the real client IP rather than the proxy. You should now see v4 addresses being happily logged on your IPv6 server:

root@vm1:~# tail -n 1 /var/log/apache2/access.log
93.93.130.44 - - [29/Apr/2016:14:05:32 +0100] "GET / HTTP/1.1" 200 321 "-" "curl/7.26.0"

Unfortunately the module doesn’t currently provide a way to restrict enablement to trusted proxies only. As such, you’ll probably want to install a firewall to restrict HTTP/HTTPS traffic to only come from our proxies, as otherwise clients could easily fake their IP address.
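
One way to do that is with a couple of ip6tables rules; a minimal sketch, again with an illustrative address range standing in for our proxies:

# allow web traffic only from the proxy's address range, drop everything else
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -s 2a00:1098:0:82::/64 -j ACCEPT
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP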

One thing to watch out for is that although this is applied within a VirtualHost configuration, it’ll actually apply to all virtual hosts on the same IP address and port. This is an unavoidable side effect of the fact that the proxy information is sent before we start talking HTTP. Of course, with IPv6, throwing another IP address at the problem isn’t an issue.

IPv6 only hosting

April 27th, 2016

Last week at the UK Network Operators Forum Pete gave a talk about our IPv6 only hosting, progress we’ve made and barriers we’ve overcome.

It’s now available to view online.

The little computer that did

April 13th, 2016

At the end of March we migrated the Raspberry Pi website from a very big multi-core server to a tiny cluster of eight Raspberry Pi 3s. Here’s a bit more detail about how it worked.

The Pi rack not fooling anyone on April 1st

Booting

For the Raspberry Pi 3 launch we tried out some Pis running in a data centre environment under high load, using the SD card for the root filesystem. They kept crashing: if you exceed the write capability of the card, the delays make the kernel think the storage has failed and the system falls over. We also want to be able to rebuild the filesystem remotely so we can fix a broken Pi without visiting it. So we’ve put the root filesystem on a network file server, accessed over NFS.

The Raspberry Pi runs the latest kernel, 4.1.18-v7+, and boots from the SD card with a configuration as follows:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs rootfstype=nfs
  ip=10.46.189.2::10.46.189.1:255.255.255.252::eth0:off 
  nfsroot=10.46.189.1:/export/10.46.189.2 elevator=deadline 
  fsck.repair=yes rootwait

This brings up a /30 block of four IP addresses on eth0: one for the network address, one for broadcast, one for the Pi and one for the network fileserver. It then mounts the NFS filesystem at:

nfsroot=10.46.189.1:/export/10.46.189.2

and uses that as the root filesystem.

Overly simple introduction to VLANs

On a traditional switch, you plug things in and any ethernet port can talk to any other ethernet port. If you want to have two different networks you need two different switches, and any computer that needs to be on both networks needs two network ports. In our case we’re trying to have a private storage network for each Raspberry Pi, so each Pi would require its own switch, and the fileserver would need its own network port for every Raspberry Pi connected, to keep them separate. This is going to get expensive very quickly.

Instead we turn on virtual LANs (VLAN). We connect our fileserver to port 24 and create a VLAN for ports 1 & 24, another for 2&24, etc. The switch configuration for the fileserver port specifies these VLANs as “tagged”, meaning our switch adds a header to the front of every packet from a Raspberry Pi port that allows the fileserver to tell which VLAN, and therefore which Raspberry Pi, the packet came from. The fileserver can reply with the same header, and that packet will only be sent to that specific Raspberry Pi. It behaves as if each Raspberry Pi has its own switch.

Network on the fileserver

The fileserver sees each VLAN as a separate network card, named eth0.N where N identifies the VLAN. We can configure them like any other network interface:

auto eth0.10
iface eth0.10 inet static
	address 10.46.189.1
	netmask 255.255.255.252

auto eth0.11
iface eth0.11 inet static
	address 10.46.189.5
	netmask 255.255.255.252

eth0.10 and eth0.11 appear to be separate network cards, each with a tiny network with one Raspberry Pi on the end, but in reality there’s a single physical ethernet connection underneath all of them.

Network on the Raspberry Pi

On the Raspberry Pi, eth0 is already configured by the boot line above to talk to the fileserver. In our switch configuration, we specify that the private network is “untagged” on the Raspberry Pi’s port, which means it won’t have a VLAN header on it and we can access it as “eth0” rather than “eth0.N” as we did on the fileserver.

In order to do anything useful, we also need to give the Raspberry Pis access to the public network. On our network, the public network is accessible on VLAN 131. We configure this to be a “tagged” VLAN on the Raspberry Pi port, meaning it becomes accessible on the eth0.131 interface. We can configure this in the normal way, and in keeping with other back-end servers on the Raspberry Pi setup, it only has an IPv6 address:

auto eth0.131
iface eth0.131 inet6 static
	address	2a00:1098:0:84:1000:1::2
	netmask 64
	gateway	2a00:1098:0:84::1

Effectively the Raspberry Pi believes it has two network cards, one on eth0 which is a private network shared with the fileserver, one on eth0.131 which has an IPv6 address and is connected to the real internet.

Why all that configuration?

In an ideal world we’d have a single IPv6 address for each Pi, and mount the network filesystem over it. However, with an NFS root filesystem, another user on the LAN who could steal your IPv6 address would potentially be able to access your files. There’s a second complication: IPv4 support for this is built into the standard Raspberry Pi kernel, so the differences per Pi are constrained to just the kernel command line, whereas with IPv6 we’d have to build an initrd that loads the IPv6 modules and sets up the NFS mounts.

Planning for the future, we’ve spoken to Gordon about how PXE boot on the Raspberry Pi will work, and it’s extremely likely that it’s going to require IPv4 to pull in the bootloader, kernel and initrd. Whilst there is native IPv6 in the Raspberry Pi office, there isn’t any IPv6 on their test LAN for developing the boot code, and it’s currently not a major priority despite around 5% of the UK having native IPv6.

So if we want to make this commercial, each Pi needs its own storage network and it needs IPv4 on the storage network.

Power over Ethernet

We’ve added a Power over Ethernet HAT to our Raspberry Pis. This means that they receive power over the ethernet cable in addition to the two separate networks. As well as reducing the amount of space used by power bricks, it also means you can power cycle a Raspberry Pi just by re-configuring the switch.

Software

Each Raspberry Pi runs Raspbian with Apache2 installed. We’ve pulled in PHP7 from Debian Stretch to improve PHP performance and then copied all the files for the Raspberry Pi website onto the NFS root for each Raspberry Pi (so the fileserver effectively has 8 copies – one for each Pi). We then just added the IPv6 addresses of the Raspberry Pis into the site’s load balancer, deleted the addresses for the main x86 servers and waited for everything to explode.
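
For the curious, pulling PHP 7 from Stretch onto an otherwise-Jessie system can be done with apt pinning; the sketch below is illustrative rather than our exact recipe (repository line, priorities and package names may need adjusting):

# add the stretch repository alongside jessie
echo "deb http://mirrordirector.raspbian.org/raspbian/ stretch main" > /etc/apt/sources.list.d/stretch.list

# keep everything on jessie by default, but allow individual packages from stretch
cat > /etc/apt/preferences.d/stretch <<EOF
Package: *
Pin: release n=stretch
Pin-Priority: 100
EOF

apt-get update
apt-get install -t stretch php7.0 libapache2-mod-php7.0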

Did it work?

Slightly to our surprise, yes, and well. We had a couple of issues – the Pi is much slower than the x86 servers, not only in clock speed but also in the speed of the network card used to access the filesystem and the database server. Some rarely used functions, such as registering a new Raspberry Jam, weren’t really quick enough under the new setup and gave people some error pages as the connections timed out. Uploading images for new WordPress posts was similarly an issue, as receiving a 3MB file and distributing eight copies over a 100Mbps network isn’t very fast. But mostly it worked.

Did power cycling the Pis via the switch work?

We never needed to use it in production: every Pi remained up and stable for the whole 3.5 days we had the system in use. In testing it’s been fine.

Can I buy one?

Not yet. At present you can still break a Pi by destroying the flash, and the enclosure doesn’t allow for replacement without taking the whole shelf (which in production would contain 96 Pis) offline. Once we have full netboot for the Pi, it is a service we could offer.

Can I register my interest to buy a Pi in the cloud?

Sure – email us at sales@mythic-beasts.com and we’ll add you to a list to keep you up to date.

Let’s Encrypt SSL Certificates using DNS API – HOWTO

March 16th, 2016

Here at Mythic Beasts, we’ve been busily undermining sales of our SSL certificates by rolling out support for free certificates from Let’s Encrypt, partly because we think that the internet should be secure by default, but mostly because we’re lazy and Let’s Encrypt makes it easy to fully automate certificate issue and deployment.

Domain validated certificates

The majority of SSL certificates in use today are “Domain Validated” certificates. These are issued automatically by a certificate authority once you have completed some action that proves that you are in control of the domain for which the certificate is being requested. This can include responding to an email sent to an address at your domain, or posting a file to a specific location on your website.

Let’s Encrypt DNS challenge

One of the options for validation offered by Let’s Encrypt is a DNS challenge (known as “dns-01”), whereby you prove ownership of your domain by adding a specific entry to its DNS zone. This option is quite interesting, as it allows you to avoid meddling in any way with your web server configuration and, if your DNS is hosted with Mythic Beasts, you can automate the addition of the necessary records using our DNS API.
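
In practice the challenge boils down to publishing a TXT record under _acme-challenge for the name being validated; for example (the record value is supplied by your ACME client, the one shown is just a placeholder):

_acme-challenge.www.example.com.  300  IN  TXT  "<value supplied by the ACME client>"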

Automating via our DNS API

In order to support this, we’ve developed a hook script that works with the letsencrypt.sh client.

We’ve also written a step-by-step guide to configuring dns-01 validation using our DNS API.
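
For reference, a typical letsencrypt.sh invocation using the hook (assuming the script is saved as hook.sh alongside letsencrypt.sh, and your names are listed in domains.txt) would be something like:

./letsencrypt.sh --cron --challenge dns-01 --hook ./hook.sh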

Please note, if you’re a hosting account customer, you don’t need to worry about any of this. You can get an SSL certificate for your website simply by hitting a button in the control panel.

Thanks go to David Earl for testing this and providing the initial implementation of the hook script.