CAA records

September 1st, 2017

A handful of the hundreds of different organisations, all of whom must be trustworthy.

Everybody knows that SSL is a good idea. It secures communications. At the heart of SSL is a list of certificate authorities: organisations that vouch for the identity behind an SSL certificate. For example, if GeoTrust says that Raspberry Pi is Raspberry Pi, we know that we’re talking to the right site and our communications aren’t being sniffed.

However, the list of certificate authorities is large and growing and as it stands, you’ve got to trust all of them to only issue certificates to the right people. Of course, through incompetence or malice, they can make mistakes.

CAA records are a relatively new mechanism that aims to stop this happening, making it harder to impersonate secure organisations, execute bank robberies and steal people’s identities.



CAA records enable you to list in your domain’s DNS the certificate authorities that are allowed to issue certificates for your domain. So, Google has a record stating that only Google and Symantec are allowed to issue certificates for google.com. If someone manages to persuade Comodo they are Google and should be issued a google.com certificate, Comodo will be obliged to reject the request based on the CAA records.

Of course, for CAA records to be of any use, you need to be able to trust the DNS records. Fortunately, these days we have DNSSEC (DNS Security Extensions).

How does it work?

A typical CAA record looks something like this:

example.com. 3600 IN CAA 0 issue "letsencrypt.org"

This states that only Let’s Encrypt may issue certificates for example.com or its subdomains, such as www.example.com.

Going through each part in turn:

  • example.com – the hostname to which the record applies. In our DNS interface, you can use a hostname of “@” to refer to your domain.
  • 3600 – the “time to live” (TTL): the amount of time, in seconds, for which this record may be cached.
  • IN CAA – the record class and type.
  • 0 – the CAA flags field (0 means no flags are set).
  • issue – the type of property defined by this record (see below).
  • "letsencrypt.org" – the value of the property.

At present, there are three defined property types:

  • issue – specifies which authorities may issue certificates of any type for this hostname
  • issuewild – specifies which authorities may issue wildcard certificates for this hostname
  • iodef – provides a URL for authorities to contact in the event of an attempt to issue an unauthorised certificate
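
Putting these together, a domain that lets Let’s Encrypt issue ordinary certificates, forbids wildcard issuance entirely, and asks to be notified of violations might publish something like this (the hostname and email address are illustrative):

example.com. 3600 IN CAA 0 issue "letsencrypt.org"
example.com. 3600 IN CAA 0 issuewild ";"
example.com. 3600 IN CAA 0 iodef "mailto:security@example.com"

A value of ";" means that no authority may issue. You can check what a domain currently publishes using a recent version of dig:

dig +short CAA example.com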

CAA records can be added using the new section at the bottom of the DNS management page in our control panel:

The @ in the first field denotes a record that applies to the domain itself.

At Mythic Beasts, we’re a bit sceptical about the value of CAA records: in order to protect against the incompetence of CAs, they rely on CAs competently checking the CAA records before issuing certificates. That said, they do provide a straightforward check that CAs can build into their automated processes to detect and reject unauthorised requests, so publishing CAA records will raise the bar somewhat for anyone looking to fraudulently obtain a certificate for your domain.

One click HTTPS + HSTS

March 27th, 2017

Last year we rolled out one-click HTTPS hosting for our hosting accounts using free Let’s Encrypt certificates.  We’ve been making some further improvements to our control panel so that once you have enabled and tested HTTPS hosting, it’s also easy to redirect all HTTP traffic to your HTTPS site.

We’ve also added an option to enable HTTP Strict Transport Security (HSTS).  This allows you to use HTTPS on your website and commit that you’re not going to stop using it any time soon (we use 14 days by default).  Once a user has visited your site, their browser will cache the redirect from HTTP to HTTPS and will automatically redirect any future requests without even visiting the HTTP version of your site.

HSTS makes it harder for an attacker to impersonate your site: even if they can intercept your traffic, they won’t be able to present a non-HTTPS version of your site to any user who has visited your site within the last 14 days.
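
Under the hood, HSTS is just an extra response header served over HTTPS, which our control panel adds for you. If you were configuring it by hand on your own Apache server, the equivalent would look something like this (1209600 seconds is our 14-day default):

# requires mod_headers: a2enmod headers
Header always set Strict-Transport-Security "max-age=1209600"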

HTTPS and HSTS control panel settings

We believe that the web should be secure by default, and hope that these latest changes will make it that little bit easier to secure your website.  These features are available on all of our web and email hosting accounts.  We’ll also happily enable this as part of the service for customers of our managed server hosting.


PHP7 on Pi 3 in the cloud (take 2)

March 24th, 2017

On Wednesday, we showed you how to get PHP7 up and running on one of our Pi 3 servers. Since then, we’ve implemented something that’s been on our to-do list for a little while: OS selection. You can now have Ubuntu 16.04 at the click of a button, so getting up and running with PHP7 just got easier:

1. Get yourself a Pi 3 in our cloud.

2. Hit the “Reinstall” button:

3. Select Ubuntu 16.04:

4. Upload your SSH key (more details), turn the server on, SSH in and run:

apt-get install apache2 php7.0 php7.0-curl php7.0-gd php7.0-json \
    php7.0-mcrypt php7.0-mysql php7.0-opcache libapache2-mod-php7.0
echo "<?=phpinfo()? >" >/var/www/html/info.php

Browse to http://www.yourservername.hostedpi.com/info.php and you’re running PHP7:

Hosting a website on an IPv6 Pi part 2: PROXY protocol

March 10th, 2017

Update, 2021: We recommend using the built in mod_remoteip rather than mod_proxy_protocol for recent versions of Apache (2.4.31 and newer) – which includes Debian Buster and the current version of Raspberry Pi OS. Our instructions for this are available on our support site.


In our previous post, we configured an SSL website on an IPv6-only Raspberry Pi server, using our IPv4 to IPv6 reverse proxy service.

The one problem with this is that our Pi would see HTTP and HTTPS requests coming from the proxy servers, rather than the actual clients requesting them.

Historically, the solution to this problem has been to have the proxy add X-Forwarded-For headers to the HTTP request, but this only works if the request is unencrypted HTTP, or an HTTPS connection that is decrypted by the proxy. One of the nice features of our proxy is that it passes encrypted HTTPS straight to your server: we don’t need your private keys on the proxy server, and we can’t see or interfere with your traffic.

Of course, this means that we can’t add X-Forwarded-For headers to pass on the client IP address. Enter PROXY protocol. With this enabled, our proxies add an extra header before the HTTP or HTTPS request, with details of the real client. This is easy to enable in our control panel:
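
For reference, the version 1 header is a single human-readable line sent ahead of the normal request, giving the protocol, the real client address and port, and the address and port the client connected to. A plain HTTP request arriving at your server might start like this (addresses illustrative):

PROXY TCP4 192.0.2.17 46.235.225.189 56324 80
GET / HTTP/1.1
Host: mywebsite.uid0.com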

You also need to configure Apache to understand and make use of the PROXY protocol header. This is a little more involved, as the necessary module isn’t currently packaged as part of the standard Apache distribution (although this is changing), so we need to download and build it ourselves. First some extra packages are needed:

apt-get install apache2-dev git

This will install a good number of packages, and take a few minutes to complete. Once done, you can download and build mod_proxy_protocol:

git clone https://github.com/roadrunner2/mod-proxy-protocol.git
cd mod-proxy-protocol
make

At this point you should just be able to type make install, but at the time of writing there seems to be some problem with the packaging. So instead, do this:

cp .libs/mod_proxy_protocol.so /usr/lib/apache2/modules/

Now you can load the module:

echo "LoadModule proxy_protocol_module /usr/lib/apache2/modules/mod_proxy_protocol.so" > /etc/apache2/mods-available/proxy_protocol.load
a2enmod proxy_protocol

You also need to configure Apache to use it. To do this, edit /etc/apache2/sites-enabled/000-default.conf and replace each line that contains CustomLog with the following two lines:

	ProxyProtocol On
	CustomLog ${APACHE_LOG_DIR}/access.log "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""

This tells Apache to use PROXY protocol, and to use the supplied client IP address (the %a in the log format) in its log files. Now restart Apache:

systemctl reload apache2

Visit your website, and if all is working well, you should start seeing actual client IP addresses in the log file, /var/log/apache2/access.log:

93.93.130.44 - - [24/Feb/2017:20:13:25 +0000] "GET / HTTP/1.1" 200 10701 "-" "curl/7.26.0"

Trusting your log files

With the above configuration, we’ve told Apache to use the client IP address supplied by our proxy servers. What we haven’t done is tell it not to trust any random server that pitches up talking PROXY protocol, so as it stands it’s trivial to falsify IP addresses in our log files. To prevent this, let’s set up a firewall so that only our proxy servers are allowed to connect on the HTTP and HTTPS ports. We use the iptables-persistent package to ensure that our firewall is restored when the server is rebooted.

apt-get install iptables-persistent

ip6tables -A INPUT -s proxy.mythic-beasts.com -p tcp -m tcp --dport 80 -j ACCEPT
ip6tables -A INPUT -s proxy.mythic-beasts.com -p tcp -m tcp --dport 443 -j ACCEPT
ip6tables -A INPUT -p tcp --dport 80 -j REJECT
ip6tables -A INPUT -p tcp --dport 443 -j REJECT

ip6tables-save > /etc/iptables/rules.v6

And we’re done! Our IPv6-only Raspberry Pi3 is now hosting an HTTPS website, and despite being behind a proxy server, we’re tracking real client IP addresses in our logs.

One-click SPF

March 9th, 2017

Sender Policy Framework (SPF) has been around for a while, but recently we’ve seen email providers getting much more active in using it to filter mail. Most notably, Gmail appears to be flagging mail from all domains without an SPF record as untrusted.

In a nutshell, SPF allows you to publish a DNS record that declares a list of all of the mail servers that may legitimately send mail from your domain. It’s not perfect, but it’s a useful tool in reducing email with a forged sender address.
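
For example, a domain that only ever sends mail via the hosts named in its own MX records could publish a TXT record like this (illustrative; the record our control panel adds names our mail servers instead):

yourdomain.com. 3600 IN TXT "v=spf1 mx ~all"

Here mx authorises the hosts listed in your MX records, and ~all tells receivers to treat mail from anywhere else with suspicion (a “soft fail”).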

Getting SPF records right can be a bit tricky, but for domains hosted with Mythic Beasts that send mail exclusively via our mail servers, you can now add the correct SPF record with a single click.

One-click SPF enablement

The SPF settings are available on the domain pages in our control panel.

We’d love to make it even easier and just add the record for you, but we can’t be sure that customers are only using our mail servers to send mail, and if they’re not, adding the record would make things worse. We are, however, planning to add this record by default for newly hosted domains.

It’s worth noting that SPF does not cause problems when sending mail via mailing lists as all decent mailing list software will use its own sender address rather than yours. You may be aware of a change made by Yahoo! that caused considerable problems for mailing lists, but this was related to another system, DMARC, which builds on top of SPF. SPF on its own works just fine with mailing lists.

Hosting a website on a Raspberry Pi with IPv6 and SSL (part 1)

March 2nd, 2017

Our hosted Raspberry Pi 3 servers make a great platform for learning how to run a server. They’re particularly interesting as they only have IPv6 connectivity, yet they can still be used very easily to host a website that’s visible to the whole Internet. This guide walks through the process of setting up a website on one of our hosted Pis, including hosting your own domain name, setting up an SSL certificate from Let’s Encrypt, automating certificate renewal, and using our IPv4 to IPv6 HTTP reverse proxy.

Get a Raspberry Pi

First, get yourself a hosted Raspberry Pi server. You can order these from our website, and be up and running in two minutes:

Click on the link to configure your server and you’ll be shown details of your server, and prompted to configure an SSH key:

We use SSH keys rather than passwords. Click on the link, and you’ll be asked to paste in an SSH public key.  If you don’t have an SSH public key, you’ll need to generate one: on Unix you can use ssh-keygen (a minimal example follows), and on Windows you can use PuTTYgen.
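
As a minimal example, this generates a key pair on a Unix machine; the contents of the resulting .pub file are what you paste into the control panel:

ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub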

Connect to your server

Once done, you’re ready to SSH to your server. The Pi used for this walkthrough is called “mywebsite”, so where you see that in these instructions, use whatever name you chose for your server. If you’ve got an IPv6 connection, you can connect directly to mywebsite.hostedpi.com. Sadly, the majority of users currently only have IPv4 connectivity, which means you’ll need to use our gateway box. Your server page will give you details of the port you need to connect to. In my case, it’s 5125:

$ ssh -p 5125 root@ssh.mywebsite.hostedpi.com
The authenticity of host '[ssh.mywebsite.hostedpi.com]:5125 ([93.93.134.53]:5125)' can't be established.
ECDSA key fingerprint is SHA256:Hf/WDZdAn9n1gpdWQBtjRyd8zykceU1EfqaQmvUGiVY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[ssh.mywebsite.hostedpi.com]:5125,[93.93.134.53]:5125' (ECDSA) to the list of known hosts.

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Nov  4 14:49:20 2016 from 2a02:390:748e:3:82cd:6992:3629:2f50
root@raspberrypi:~#

You’re in!

Install a web server

We’re going to use the Apache web server, which you can install with the following commands:

apt-get update
apt-get install apache2

And upload some content:

scp -P 5125 * root@ssh.mywebsite.hostedpi.com:/var/www/html/

Now, visit http://www.yourserver.hostedpi.com in your browser, and you should see something like this:

Another computer on the web serving cat pictures!

Host your own domain name

Magically, this site on your IPv6-only Raspberry Pi 3 is accessible even to IPv4-only users. To understand how that magic works, we’ll now host a different domain name on the Pi. We’re going to use the name mywebsite.uid0.com.

First, we need to set up the DNS for this hostname, but rather than pointing it directly at our server, we’re going to direct it at our IPv4 to IPv6 HTTP proxy, by creating a CNAME to proxy.mythic-beasts.com:

If you’re using a hostname that already has other records, such as a bare domain name that already has MX and NS records, you can use an ANAME pseudo-record.

Our proxy server listens for HTTP and HTTPS requests on both IPv4 and IPv6 addresses, and then uses information in the request header to determine which server to direct it to. This allows us to share one IPv4 address between many IPv6-only servers (actually, it’s two IPv4 addresses as we’ve got a pair of proxy servers in different data centres).

We need to tell the proxy server where to send requests for our hostname. To do this, visit the IPv4 to IPv6 Proxy page in the control panel.  The endpoint address is the IP address of your server, which you can find on the details page for your server, as shown above.

For the moment, leave PROXY protocol disabled – we’ll explain that shortly.  After adding the proxy configuration, wait a few minutes, and after no more than five, you should be able to access the website using the hostname set above.

Enable HTTPS

We’re firmly of the view that secure connections should be the norm for websites, and now that Let’s Encrypt provide free SSL certificates, there’s really no excuse not to.

We’re going to use the dehydrated client, as it’s packaged for the Debian operating system that Raspbian is based on. Unfortunately, it’s not yet in the standard Raspbian distribution, so in order to get it, you’ll need to use the “backports” repository.

To do this, first add the backports package repository to your apt configuration:

echo 'deb http://httpredir.debian.org/debian jessie-backports main contrib non-free' > /etc/apt/sources.list.d/jessie-backports.list

Then add the keys that these packages are signed with:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7638D0442B90D010
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B48AD6246925553

Now update your local package list, and install dehydrated:

apt-get update
apt-get install dehydrated-apache2

We need to configure dehydrated to tell it which hostnames we want certificates for, which we do by putting the names in /etc/dehydrated/domains.txt:

echo "mywebsite.uid0.com" > /etc/dehydrated/domains.txt

It’s also worth setting a contact email address, so that you get an email if the automatic renewal that we’re going to set up fails for any reason and the certificate is close to expiry:

echo "CONTACT_EMAIL=devnull@example.com" > /etc/dehydrated/conf.d/mail.sh

Now we’re ready to issue a certificate, which we do by running dehydrated -c. This will generate the necessary private key for the server, and then ask Let’s Encrypt to issue a certificate. Let’s Encrypt will issue us with a challenge: a file that we have to put on our website that Let’s Encrypt can then check for. dehydrated automates this all for us:

root@raspberrypi:~# dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/mail.sh
Processing mywebsite.uid0.com
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting challenge for mywebsite.uid0.com...
 + Responding to challenge for mywebsite.uid0.com...
 + Challenge is valid!
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
 + Done!

We now need to configure Apache for HTTPS hosting, and tell it about our certificates. First, enable the SSL module:

a2enmod ssl

Now add a section for an SSL-enabled server running on port 443. You’ll need to amend the certificate paths to match your hostname. You can copy and paste the block below straight into your terminal (note the quoted ‘EOF’, which stops the shell expanding ${APACHE_LOG_DIR} before Apache sees it), or you can edit the 000-default.conf file using your preferred text editor.

cat >> /etc/apache2/sites-enabled/000-default.conf <<'EOF'
<VirtualHost *:443>
	ServerAdmin webmaster@mywebsite.hostedpi.com
	DocumentRoot /var/www/html

	ErrorLog ${APACHE_LOG_DIR}/error.log
	CustomLog ${APACHE_LOG_DIR}/access.log combined

	SSLEngine On
	SSLCertificateFile /var/lib/dehydrated/certs/mywebsite.uid0.com/fullchain.pem
	SSLCertificateKeyFile /var/lib/dehydrated/certs/mywebsite.uid0.com/privkey.pem
</VirtualHost>
EOF

Now restart Apache:

systemctl reload apache2

and you should have an HTTPS website running on your Pi:

Automating certificate renewal

Let’s Encrypt certificates are only valid for three months. This isn’t really a problem, because we can easily automate renewal by running dehydrated in a cron job. To do this, we simply create a file in the directory /etc/cron.daily/:

cat > /etc/cron.daily/dehydrated <<EOF
#!/bin/sh

exec /usr/bin/dehydrated -c >/var/log/dehydrated-cron.log 2>&1
EOF
chmod 0755 /etc/cron.daily/dehydrated

dehydrated will check the age of the certificate daily, and if it’s within 30 days of expiry, will request a new one, logging to /var/log/dehydrated-cron.log.
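
If you want to convince yourself that the job is wired up correctly, you can ask run-parts which scripts it would execute, then run the script once by hand and check the log:

run-parts --test /etc/cron.daily
/etc/cron.daily/dehydrated
cat /var/log/dehydrated-cron.log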

Rotate your log files!

When setting up a log file, it’s always good practice to also set up log rotation, so that it can’t grow indefinitely (failure to do this has cost one of our founders a number of beers due to servers running out of disk space). To do this, we drop a file into /etc/logrotate.d/:

cat > /etc/logrotate.d/dehydrated <<EOF
/var/log/dehydrated-cron.log
{
        rotate 12
        monthly
        missingok
        notifempty
        delaycompress
        compress
}
EOF

Client IP addresses

If you look at your web server log files, you’ll see one disadvantage of using our proxy to expose your site to the IPv4 world: all requests appear to come from our proxy servers, rather than the actual clients. This is obviously a bit annoying for log file analysis, but is a big problem for any kind of IP-based access controls or rate limiting. Fortunately, there’s a solution, which we’ll look at in the next post.

Notify My Android support for monitoring

December 5th, 2016

Great scenery, terrible mobile coverage


If you’re anything like me, December involves a tour of parents who have retired to far-flung corners of the land and who are now living in houses with perfectly serviceable wifi but absolutely no mobile phone coverage. This creates a problem if you’re supposed to be listening out for computers that go bleep in the night, as the SMS notifications don’t get through.

To address this, we’ve just implemented Notify My Android support in our monitoring service. As the name suggests, this allows us to push monitoring alerts to your Android phone. It’s pretty easy to set up:

  1. Register for an account with Notify My Android (a free account will allow 5 notifications per day, a one-off payment of $5 will get you unlimited notifications).
  2. Log in, and generate an API key.
  3. Download the app on your phone, and log in.
  4. Visit our control panel and add an entry to your notification list that looks like nma:API_KEY, where API_KEY is the key generated above.

If you use the other kind of phone, we have equivalent functionality using Prowl. This works in the same way, except you put prowl: before your API key.

All of our dedicated and virtual servers include basic ping monitoring as standard. Comprehensive monitoring of other services is available as an add-on, or as standard with any of our managed services.

ANAME records

October 7th, 2016

Company policy requires that all blog posts have a picture.

We’ve just added support to our control panel and DNS API for “ANAME” records. ANAME records, also known as ALIAS records, aren’t real DNS records, but are a handy way of simulating CNAME records in places where you can’t use a real CNAME.

It works like this:

You’ve got DNS for your domain managed with Mythic Beasts, and you want to host your website with some 3rd party service provider. They’ll tell you to point DNS for your website at their server. You create a CNAME record for www.yourdomain.com and point it at server.3rdparty.com. So far so good.

You also want requests for your bare domain, e.g. http://yourdomain.com to be served by your provider, so you try to create a CNAME for yourdomain.com and get told you can’t. This is because you will already have MX, NS and SOA records for your bare domain, and CNAMEs aren’t allowed to co-exist with other records for the same name.

The usual fallback is to create A or AAAA records that point directly to the IP address of server.3rdparty.com, but this sucks because their IP address is now hard-coded into your zone, and if they ever want to change the IP of that server they’ve got to try and get all of their customers to update their DNS.

The nice solution would be SRV records, standardised DNS records that allow you to point different protocols at different servers. Unfortunately, they’re not supported for HTTP or HTTPS.

This is where ANAME records come in. You can create an ANAME just like a CNAME, but without the restrictions on co-existing with other records. We resolve the ANAME and substitute the corresponding IP addresses into A and AAAA records in your zone. We then regularly re-check the target, and update your zone if its addresses change.
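
In other words, you configure something like the first line below, and what actually appears in your zone is a set of ordinary address records that we keep in step with the target (addresses illustrative):

; what you configure
yourdomain.com.  ANAME  server.3rdparty.com.

; what actually gets published
yourdomain.com.  300  IN  A     192.0.2.10
yourdomain.com.  300  IN  AAAA  2001:db8::10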

Naturally, our ANAME implementation fully supports IPv6: if the hostname you point the ANAME at returns AAAA records, we’ll include those in addition to any A records returned.

PROXY protocol + nginx = broken header

May 9th, 2016

We recently announced support for PROXY protocol in our IPv4 to IPv6 reverse proxy, and happily linked to the instructions for making it work with NGINX. One of our customers has pointed out that they didn’t actually work, and we’ve now got to the bottom of why not.

NGINX version

First issue: you need NGINX >= 1.9.10, as there was a bug with using proxy_protocol on IPv6 listeners. If you’re on Debian Jessie, you can get a suitable version from Jessie backports.

PROXY protocol version

Second issue: NGINX only speaks PROXY protocol v1 and our proxy was attempting to speak v2.

v1 is a human readable plain text protocol, whereas v2 is binary. If you see something like this in the error log:

2016/05/09 11:11:30 [error] 6058#6058: *1 broken header: "

QUIT
!
 ]Y??.????PGET / HTTP/1.1

Then that’s a good sign that you’ve got a v2 reverse proxy talking to you.

We’ve now changed our proxy to only speak PROXY protocol v1 by default. We will look into making this a configurable option in the future. The Apache module seems happy speaking either version.

Whilst we’re here, here are some other failure modes you might see. This, in the access log, is v2 PROXY protocol being spoken to an NGINX server that is not configured for PROXY protocol at all:

2a00:1098:0:82:1000:3b:1:1 - - [09/May/2016:11:08:55 +0100] "\x00" 400 172 "-" "-"

And this is v1 PROXY protocol being spoken to NGINX which is not configured for it:

2a00:1098:0:82:1000:3b:1:1 - - [09/May/2016:11:39:30 +0100] "PROXY TCP4 93.89.134.240 46.235.225.189 64221 80" 400 173 "-" "-"

PROXY protocol support for our, err, proxy

April 29th, 2016

We’re increasingly using our IPv4 to IPv6 reverse proxy to host websites on IPv6-only virtual machines. One of the downsides of proxying is that your server doesn’t get to see the client’s real IP address. For non-SSL connections, the proxy can insert an “X-Forwarded-For” header, but SSL is increasingly becoming the norm, and one of the nice things about an SNI-aware reverse proxy is that it doesn’t need to do SSL off load: we don’t need your certificates on our proxy and your traffic stays encrypted until it hits your server. Of course, this means that we can’t go inserting any headers into your connection either.

Fortunately, there is a solution: PROXY protocol. This is a protocol-agnostic mechanism for passing information from a reverse proxy to a server, including the client IP address.

We’ve just added support for PROXY protocol to our reverse proxy:


Turning this on allows your server to get the client IP address, but as it’s an additional protocol, not part of HTTP, your server must be expecting it: turning this on and pointing it at a standard HTTP server will result in a broken website.

Most web servers have support for this. NGINX has support built in, and just needs “proxy_protocol” adding after the listen directive:

server {
    listen 80   proxy_protocol;
    listen 443  ssl proxy_protocol;
    ...
}

You will probably also want some additional configuration to actually set the IP address that gets used for logs etc., and also to ensure that you only trust proxy information from the real proxy servers; both are sketched below.
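
Both jobs are handled by NGINX’s realip module: real_ip_header proxy_protocol takes the client address from the PROXY header, and set_real_ip_from restricts which connecting addresses are believed. A sketch, with a placeholder proxy address:

server {
    listen 80   proxy_protocol;
    listen 443  ssl proxy_protocol;

    # only believe PROXY protocol information from our proxy servers
    # (placeholder address: use the addresses we publish)
    set_real_ip_from 2001:db8::1;

    # use the address from the PROXY header for logs and access control
    real_ip_header proxy_protocol;
    ...
}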

For Apache, support is provided by mod_proxy_protocol, which needs to be installed manually. Once done, configuration is easy:

<VirtualHost *:443>
  ...
  ProxyProtocol On

  CustomLog ${APACHE_LOG_DIR}/access.log "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""

The CustomLog line instructs Apache to log the real client IP address (%a) rather than the proxy’s. You should now see IPv4 addresses being happily logged on your IPv6 server:

root@vm1:~# tail -n 1 /var/log/apache2/access.log
93.93.130.44 - - [29/Apr/2016:14:05:32 +0100] "GET / HTTP/1.1" 200 321 "-" "curl/7.26.0"

Unfortunately the module doesn’t currently provide a way to restrict enablement to trusted proxies only. As such, you’ll probably want to install a firewall to restrict HTTP/HTTPS traffic to only come from our proxies, as otherwise clients could easily fake their IP address.

One thing to watch out for is that although this is applied within a VirtualHost configuration, it’ll actually apply to all virtual hosts on the same IP address and port. This is an unavoidable side effect of the fact that the proxy information is sent before we start talking HTTP. Of course, with IPv6, throwing another IP address at the problem isn’t an issue.