DNS API: sed vs jq

March 16th, 2020

Our forthcoming new DNS API seeks to make automating common DNS management tasks as simple as possible. It supports both JSON and “zone file format” for inputs and outputs, which opens up some interesting options for making bulk updates to a DNS zone.

Bulk TTL change

Suppose we want to drop the TTL on all of our zone’s MX records. We could get the records in zone file format and edit them using sed:

curl -n -H 'Accept: text/dns' https://api.mythic-beasts.com/dns/zones/example.com/records/@/MX \
| sed -E 's/^([^ ]+) +([0-9]+)/\1 60/' \
| curl -n -H 'Content-Type: text/dns' -X PUT --data-binary @- https://api.mythic-beasts.com/dns/zones/example.com/records/@/MX

Alternatively, we could get the zone in JSON format and modify it using the awesome command line JSON-wrangler, jq:

curl -n -H 'Accept: application/json' https://api.mythic-beasts.com/dns/zones/example.com/records/@/MX \
| jq '.records[].ttl = 60' \
| curl -n -H 'Content-Type: application/json' -X PUT --data-binary @- https://api.mythic-beasts.com/dns/zones/example.com/records/@/MX

How does this work?

The most common response to seeing the second example is “cool, I didn’t know you could do that with jq!”. So what’s going on here?

The first line gets us the MX records for example.com (the “@” means fetch records for the bare domain). This gives us some JSON that looks like this:

{
    "records": [
        {
            "data": "mx1.mythic-beasts.com.",
            "host": "@",
            "mx_priority": 10,
            "ttl": 300,
            "type": "MX"
        },
        {
            "data": "mx2.mythic-beasts.com.",
            "host": "@",
            "mx_priority": 10,
            "ttl": 300,
            "type": "MX"
        }
    ]
}

The next line modifies this JSON. jq allows you to select values within the JSON: .records selects the “records” property of the response, [] gets us all of the members of the array, and .ttl selects the “ttl” property of each one. On its own, this would extract the values from the JSON, giving us “300” and “300”, but jq also allows us to modify the property by assigning to it. This gives us the following JSON:

{
    "records": [
        {
            "data": "mx1.mythic-beasts.com.",
            "host": "@",
            "mx_priority": 10,
            "ttl": 60,
            "type": "MX"
        },
        {
            "data": "mx2.mythic-beasts.com.",
            "host": "@",
            "mx_priority": 10,
            "ttl": 60,
            "type": "MX"
        }
    ]
}
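
You can try this locally: if the original API response is saved as records.json (a hypothetical filename), jq shows the difference between selecting and assigning:

$ jq '.records[].ttl' records.json
300
300
$ jq '.records[].ttl = 60' records.json    # prints the whole document with the TTLs set to 60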

We can now pipe this into a PUT request back to the same URL. This will replace all of the records that the first request selected with our new records.

The first approach works in a very similar way, except rather than working on a block of JSON with jq, we use sed to modify the following zone file formatted text:

@                      300 MX    10 mx1.mythic-beasts.com.
@                      300 MX    10 mx2.mythic-beasts.com.
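
The sed expression captures the owner name and the TTL at the start of each line and rewrites the TTL to 60, leaving the rest of the record untouched. For example:

$ echo '@ 300 MX 10 mx1.mythic-beasts.com.' | sed -E 's/^([^ ]+) +([0-9]+)/\1 60/'
@ 60 MX 10 mx1.mythic-beasts.com.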

Choose your weapon

We think that the JSON based approach is neater, but let us know what you think on our Twitter poll. We’d also be interested to hear of specific DNS management operations that you’d like to automate, so that we can see how they’d be tackled in our new API.

Covid-19

March 11th, 2020

The European Bioinformatics Institute used x-rays and a lot of hard maths to draw the above picture of the main covid-19 protein.

We provide critical infrastructure to many other companies who rely on our services. Covid-19 could significantly change day to day life everywhere. At present we don’t believe it will have a significant effect on our operations.

Remote working

Mythic Beasts has always been a distributed company with no central office; all staff members normally work from home. As a result, adopting remote working recommendations or enforcement will have no significant impact on our day to day operations. Normally, we have a weekly optional meeting for staff around Cambridgeshire and a compulsory all-company meeting roughly once every six weeks. Migrating these meetings to conference calls will have minimal operational impact.

Reducing travel

Our sales process has always been online. We don’t routinely meet customers and have a very light attendance at conferences and industry events. Our next scheduled events are UKNOF (Manchester, April) and LINX (Manchester) both of which offer remote participation should it become necessary.

Financial stability

Mythic Beasts has been profitable every year since 2001 and carries no debt. We maintain significant cash reserves so we can self-finance routine expansion and other business opportunities, and weather unforeseen circumstances. We understand that many consider this an inefficient use of capital, but we can definitely pay our bills in the event of a global pandemic crippling the economy for a short period.

Sensible staff policies

Our staff members are provided with private healthcare. They also get sick leave, which they are expected to take if ill. We provide 30 days of holiday plus bank holidays to all staff members as standard, and we strongly encourage everyone to take a two-week contiguous holiday so that we know we can operate without them in the event of sickness. We have sufficient staff that, in the event of multiple staff members being ill for an extended period, day to day operations can be maintained and only longer-term projects should be delayed.

Supplier issues

We maintain stock of key components to cope with hardware failure although for some months we’ve been seeing very long lead times for hardware, especially CPUs and SSDs. This may impact our lead times for larger orders of dedicated servers or custom hardware. We think it likely that our data centre and connectivity providers may implement a change freeze as they do annually over the Christmas period and at other key times (e.g. the 2012 Olympics). This is a familiar operating environment and utilising multiple providers in multiple countries will help to mitigate this.

2020-03-12: We run our own private phone conference and IRC service, so we’re not affected by the reported load issues at the major public providers (Slack, Teams, Zoom, etc.).

In summary, we think that day to day operations shouldn’t be affected, but if you have any concerns, get in touch at support@mythic-beasts.com.

New DNS API – easily update SSHFP keys

March 9th, 2020

We’ve got a new version of our DNS API under development.  One of the neat new features is the ability to accept input in bind zone file format (aka RFC 1035). 

One of the things that this makes very easy is adding SSHFP records to the DNS. SSHFP is a mechanism for lodging your server’s public SSH keys in the DNS so that your SSH client can automatically verify a server the first time you connect to it, rather than prompting you to confirm the host key.

Using our new API, you can add or update SSHFP keys by piping the output of ssh-keygen straight into curl:

ssh-keygen -r myhost | curl -X PUT -n https://api.mythic-beasts.com/zones/example.com/myhost/SSHFP -H 'Content-Type: text/dns' --data-binary @-

What’s going on here?

ssh-keygen -r outputs your server’s SSH public keys in RFC 1035 format:

$ ssh-keygen -r myhost
myhost IN SSHFP 1 1 e579ff6aabc2f0acf714deca53108a0c1ea7d799
myhost IN SSHFP 1 2 7c47d5dfb748ff1fd244b7289d815e83dad8c2c1652b92ac8aed8ff166733d07
myhost IN SSHFP 2 1 c5caf4cc8870acc7fd113e5a7c866822ec0d94de
myhost IN SSHFP 2 2 9f11843fa1d9da318aa4bc09bbcaacaf4a9868c4d83dfc4bad6853d0c9597a31
myhost IN SSHFP 3 1 eb8644f5fcfd555341f2063bd92044075e20da89
myhost IN SSHFP 3 2 60f3e9780f9b87e5b4d6344f2ab46decbf705123e96ef07c3247f714ca220fc4
myhost IN SSHFP 4 1 139426de48381ea46ad75dde4e412bf1c9b11e61
myhost IN SSHFP 4 2 6f094181b510bbb573048835665773eb1a2a65fd4341d95207479ed71296491b

We then pipe that into curl to make a request to the DNS API.

A PUT request to the /myhost/SSHFP endpoint replaces all existing myhost SSHFP records.

-n tells curl to get auth credentials from a .netrc file, and the “Content-Type” header tells our API that we’re providing the new records in zone file format.
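
The .netrc file lives in your home directory and holds the credentials that curl should use; a minimal entry, with placeholder credentials, looks like this:

machine api.mythic-beasts.com
login myuser
password mysecret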

What’s the point of SSHFP?

Having lodged these SSHFP records in the DNS, and provided that DNSSEC is enabled for your domain, it’s possible to connect to a server without being prompted to verify the server’s host key.

$ ssh -o VerifyHostKeyDNS=yes root@myhost.example.com 

You can avoid having to specify the -o by putting VerifyHostKeyDNS=yes in your ~/.ssh/config file.
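
For example, to enable it only for your own hosts rather than globally (the domain here is a placeholder):

Host *.example.com
    VerifyHostKeyDNS yes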

Feedback wanted

The new DNS API isn’t quite ready for deployment, so if there are any features that you’d really like to see in the new API, now’s the time to tell us, either on Twitter or by email.

IPv6-only hosting in 2020

February 28th, 2020

It’s now nearly five years since we started offering IPv6-only hosting, and what started out as a source of interesting projects for enthusiastic early-adopters has become our default for most hosting requirements.

A few things have changed over the years that have made this possible:

  • The death of Windows XP, the last significant OS with a browser that didn’t support SNI (Server Name Indication). SNI makes it possible for us to proxy encrypted connections to IPv6-only hosts.
  • The widespread adoption of secure services. This means that protocols that don’t have their own proxying features (such as POP3 or IMAP) can be proxied in their encrypted form thanks to SNI.
  • Improvements to our hosting services, such as our SSH port forwarder.

This post gives a quick run-down of how we make IPv6-only hosting a reality.

Getting bytes in

There’s no getting away from the fact that an IPv6-only hosting server still needs to be able to talk to IPv4-only clients, but there’s now a good solution for doing so for pretty much all common scenarios.

Web traffic

This is the most common requirement, and also probably the easiest, as it can be handled by our v4 to v6 proxy.  The proxy is a set of servers with both IPv4 and IPv6 addresses that accept traffic for various protocols and forward it to an IPv6-only server.

The DNS for the hosted site points at our proxy servers, by means of either an ANAME or CNAME record to proxy.mythic-beasts.com.
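
In zone file format that might look like this (TTLs illustrative; ANAME is used for the bare domain, where a CNAME isn’t permitted):

@    3600 ANAME proxy.mythic-beasts.com.
www  3600 CNAME proxy.mythic-beasts.com.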

Unencrypted HTTP traffic is easy to proxy as HTTP 1.1 is designed to support multiple websites on a single IP address.

HTTPS is also easy to proxy, thanks to the now-ubiquitous support for SNI (its successor, ESNI, may complicate this a bit in the future, but we’ll tackle that in a separate post).

Our proxy also supports PROXY protocol, which is a standard way of communicating the original client’s IP address on a proxied connection. Support for PROXY protocol is now a standard feature of NGINX and Apache.
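
As a rough sketch (the trusted address range below is a placeholder; use the proxy’s actual source addresses), the NGINX side of a PROXY protocol setup might look like this:

server {
    # Accept PROXY protocol on the port the proxy forwards to.
    listen [::]:80 proxy_protocol;
    server_name www.example.com;

    # Restore the original client address, trusting PROXY protocol
    # information only from the proxy (placeholder range).
    set_real_ip_from 2a00:1098::/32;
    real_ip_header proxy_protocol;
}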

IPv6 traffic can either follow the same route as IPv4 traffic through the proxy, or can be routed directly to the hosting server by setting the AAAA records for the site to point at the server rather than the proxy.

This provides a slightly more direct route for IPv6 traffic, but can make the configuration on the server a little more complicated, particularly if you’re using PROXY protocol.

IMAP and POP3

These can both be proxied in their secure forms (IMAPS and POP3S) thanks to SNI, and thankfully these secure variants are now the default choice for all popular email clients.

SSH

Our customers typically want to administer their servers via SSH, and can’t guarantee that they’ll always be connecting from a v6-enabled network. The SSH protocol isn’t built on TLS/SSL so doesn’t have SNI support, and doesn’t have any equivalent features of its own.

We work around this by providing a port-forward to all virtual servers and Raspberry Pi servers from a host with a v4 IP address, so customers can make a connection to a different host on a non-standard port, and the connection will be forwarded to the IPv6 server on port 22. Details of the host and port can be found in our customer control panel.
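
In practice that looks something like this; the forwarding host and port here are hypothetical, so substitute the values from the control panel:

$ ssh -p 5022 user@ssh-v4.mythic-beasts.com

# Or record the details once in ~/.ssh/config:
Host myhost
    HostName ssh-v4.mythic-beasts.com
    Port 5022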

SMTP

SMTP is a bit awkward. It’s used in two common scenarios:

  1. “Submission”, where an end-user client sends outgoing mail using authenticated SMTP
  2. Server-to-server delivery of email

It has multiple ports in common use:

  • 25 – the standard port for server-to-server email
  • 465 – a port for SMTP over SSL
  • 587 – the standard SMTP submission port

Port 25 doesn’t use SSL/TLS at connection time, but can be upgraded to a secure connection via the STARTTLS command, which means it can’t be proxied using SNI.

Port 465 has a confused history, having been allocated by IANA for secure SMTP, then revoked in favour of STARTTLS and allocated to a different service, and then reinstated for secure SMTP submission by RFC 8314.  Port 465 is supported by our proxy, and is a good choice for SMTP submission.

Port 587 was historically plain SMTP (RFC 2476) with STARTTLS, but is being migrated to SSL by default (RFC 8314), which is proxyable thanks to SNI.  Our proxy assumes that port 587 traffic is encrypted (because it can’t do anything useful if it’s not), and as such it can also be used for SMTP submission, provided you use SSL/TLS rather than STARTTLS.
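
One way to check that a submission service is reachable through the SNI-aware proxy is openssl’s s_client, which presents the server name at connection time (the hostname is a placeholder):

$ openssl s_client -connect mail.example.com:465 -servername mail.example.com

If the TLS handshake completes and your own server’s SMTP banner appears, the proxying is working.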

For server-to-server delivery, it’s possible to use our dual-stack MX servers to handle incoming mail. This can be done by having the highest priority MX record point to the v6-only server, and then having a lower priority record pointing to our MX servers. v4-only servers will deliver to our MX servers, and we’ll then pass the mail on to your v6-only server.
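
In zone file format, and remembering that a lower preference number means a higher priority, that might look like this (hostnames illustrative):

@ 3600 MX 10 mail.example.com.      ; the v6-only server, tried first
@ 3600 MX 20 mx1.mythic-beasts.com. ; dual-stack fallback
@ 3600 MX 20 mx2.mythic-beasts.com.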

This isn’t a perfect solution, as it means you can’t do connection-time filtering of incoming mail.

Our MX servers need to be configured to accept mail for your domain. At present, this needs to be done by emailing support.

Getting bytes out

Your server may need to make outgoing connections to v4-only servers. Fortunately this is straightforward using our NAT64 resolvers. These are DNS resolvers that, when asked for the address of a host that has no AAAA records, will provide an IPv6 address mapped to the host’s v4 address. The v6 address is actually an address on one of our NAT servers, which then forwards the traffic to the v4 address.

There’s a 1:1 mapping between v4 addresses and v6 addresses on the NAT server – with IPv6 we can easily allocate the equivalent of the full 32-bit IPv4 address space to a single server!

NAT64 works very well in almost all cases. We have come across a few bits of software that explicitly request an A record when doing a DNS lookup, which obviously doesn’t work on a v6-only server.

As with any NAT configuration, you’re sharing a v4 address with other users, which can cause issues for sites that perform IP-based filtering or rate limiting.

Make the switch

Like most providers, we now charge for IPv4 addresses, but unlike most other providers it’s a tax you probably don’t need to pay. We offer IPv6-only versions of all of our virtual and dedicated servers, and our Raspberry Pi servers are all IPv6-only.

Learn more

If you’d like to hear more, here are some videos of a presentation that Pete gave at the UK Network Operators Forum (UKNOF).

Working with talented people

February 14th, 2020

You can buy another copy in a bookshop if your cat refuses to return the one you already own.


We like working with talented people, be they staff, customers or suppliers. That’s why we give discounts to people who can navigate our jobs challenge, even if they don’t want to work for us.

Occasionally we’ve drafted in Gytha Lodge to help us copywrite various articles and turn a jumble of thoughts into a coherent and interesting article.

Formerly an aspiring author, her full title is now Richard and Judy book club pick and Sunday Times bestselling author, Gytha Lodge.

We’re also pleased to report that she took our advice on her first book seriously and the new book starts with a murder being watched over a webcam.

Security in DNS, TLSA records now available in our control panel to support DANE

February 11th, 2020

The Internet is better when it’s secure. At last, thanks to Let’s Encrypt, it’s possible to get SSL certificates automatically and free of charge, and as a result the Internet is dramatically more secure than it used to be. If you’ve used our DNS API you may have discovered that you can verify Let’s Encrypt SSL certificate requests using DNS records, including issuing wildcard certificates.

We support secure DNS (DNSSEC), which prevents DNS records from being forged, making the process of authenticating your SSL certificate through DNS records far more secure than the email-based authentication that was typically used for certificates issued by commercial certificate authorities. We have also implemented support for CAA records, which use DNS to restrict the certificate authorities that can issue your certificates. This is most useful if the DNS is trustworthy which, again, requires DNSSEC.

However, there seems to be an opportunity here to improve things further. Rather than relying on a 3rd party certificate authority to confirm that you have control of your own DNS, why can’t you just publish your certificate in DNS directly? If you can trust DNS this would seem to be an obvious improvement, and with DNSSEC, DNS becomes trustworthy. We’ve now added support for the additional record type TLSA which allows exactly that, as part of DNS-Based Authentication of Named Entities (DANE).

Adding a TLSA record through our control panel.

DANE is a flexible mechanism that can be used to add an additional layer of security to certificates issued by a 3rd party authority, or to enable the use of self-signed certificates.
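
For example, a TLSA record for HTTPS on www.example.com using DANE-EE (certificate usage 3), the public key (selector 1) and SHA-256 (matching type 1) takes the form below, and the digest can be computed from the certificate with openssl:

_443._tcp.www.example.com. 3600 IN TLSA 3 1 1 <sha256-digest-of-public-key>

$ openssl x509 -in certificate.pem -noout -pubkey \
| openssl pkey -pubin -outform DER \
| openssl dgst -sha256

The hex output of the last command is what goes in the record.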

Unfortunately, at the moment few clients support TLSA, so for the majority of interactions you’re still going to rely on the certificate authority to verify the certificate. But implementations exist for both Exim and Postfix. Step by step, email is becoming more secure.

Two Factor Auth – TOTP now available

January 27th, 2020

Good security practice requires two different factors.

We’ve just rolled out a much-requested feature to our control panel: Time-based One-Time Passwords, or TOTP.

TOTP is a type of 2FA. If these acronyms are making sense to you, head over to the control panel and set up TOTP.

If not, read on…

What is 2FA?

You’ll probably have noticed an increasing number of websites that you use encouraging or requiring you to enable “two factor authentication” or 2FA.

2FA refers to requiring two separate things to confirm your identity: something you know (your password) and something you have (e.g. your phone).

2FA protects against some of the most common ways in which accounts get compromised:

  • Username/password re-use. Despite advice not to do so, plenty of people re-use passwords across lots of different sites. Every now and again, sites get compromised, and databases of usernames and passwords become available on the shadier parts of the internet. These credentials will then be tried against other sites, looking for places that they’ve been re-used.
  • Email account compromise. If your email account is compromised, it’s very easy for an attacker to gain access to your other accounts, as it’s almost always possible to reset your password by sending an email.
  • Key-logging. If your computer is compromised, or you use an untrusted shared computer, key-logging malware may be installed that logs your password as you type it to log into your account.

2FA protects against all of these. It’s no longer sufficient to know the username and password to login, and you can’t reset your password just by having access to the email account. 2FA uses “one time passcodes” which means that whilst they can be captured by a key-logger, they’re of no value as they’ve already been used.

TOTP, SMS and Recovery Codes

We now support three different methods to provide the second factor: SMS, TOTP and recovery codes. With a Time-based One-Time Password, your phone uses a secret key and the current time to generate a unique six-digit code. The code is only valid for a short period, and can only be used once. The code proves that you have access to the secret key in the phone, but does not require you to send the secret key, or any part of it, to us.
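
If you’re curious to see this outside an app, the oathtool utility (from the oath-toolkit package) implements the same algorithm; the base32 secret below is a placeholder:

$ oathtool --totp -b JBSWY3DPEHPK3PXP
492039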

With SMS we send you a time-limited, one-time code via a text message. Your phone collects this and you can type it in during login to prove that you’re holding your phone.

Recovery codes are intended to be a fall back should you lose access to your primary 2FA method. These are a set of one time codes that you can store securely (e.g. on paper, in a safe) and use each of them for a single login as required.

TOTP has a number of advantages over SMS. Firstly, it’s entirely offline on your phone so that if you’re somewhere with no phone signal you can still log in. Secondly, it doesn’t rely on trusting the mobile phone network; anyone with access to the phone network could intercept your SMS code or arrange for it to be delivered to another device. Similarly you may have things like message sharing enabled which means that your passcode is delivered to multiple devices.

Setting up TOTP

TOTP is very easy to set up. You’ll need an app on your phone. You can use Google Authenticator, but we prefer the open source FreeOTP. Once installed, go to the two-factor auth page in our control panel and hit the big green “Enable TOTP” button.

You’ll be shown a QR code which you can scan into the app on your phone, and you can then start generating codes. You need to enter a code to confirm that it’s set up correctly, and you can then choose to require 2FA whenever you log into your account.

Whilst you’re there, you should take the chance to print off some recovery codes.

IPv6 updates

December 16th, 2019

Last Thursday we went to the IPv6 Council to speak about IPv6-only hosting and to exchange information with other networks about the state of IPv6 in the UK.

IPv4 address exhaustion is becoming ever more real: the USA and Europe have now run out, and Asia, Africa and Latin America all have less than a year of highly-restricted supply left.

Perhaps unsurprisingly, we’re now seeing real progress in deploying IPv6 across the board.

The major connectivity providers gave an update on their progress. Sky already have an effectively complete deployment across their UK network, so instead they told us about the Sky Italia build-out that launches early next year. It will initially be 100% dual stack, but they’re planning to migrate to single stack IPv6, with IPv4 access provided by MAP-T, as soon as possible. BT/EE have IPv6 virtually everywhere and take-up is rising as HomeHubs are retired and replaced with SmartHubs. Three are actively enabling IPv6 over their network, as we noticed last month.

Smaller providers are also making progress; Hyperoptic and Community Fibre have both essentially completed their dual stack rollout this year, with both organisations having to consider Network Address Translation due to lack of IPv4 addresses.

We’ve been working hard for many years to make IPv6-only hosting a practical option for our customers, allowing us to considerably expand the lifespan of our IPv4 allocation (which, thanks to a few acquisitions and being a relatively old company, is a reasonable size).

We heard from ungleich, who started more recently and don’t have a large historical allocation of IPv4 addresses. They gave an interesting talk about their IPv6-only hosting and how it’s an urgent requirement for a new entrant, because a RIPE final allocation of 1024 addresses isn’t enough to start a traditional hosting company. Thanks to RIPE running out last month, any new competitor has it four times harder, with only 256 addresses to get them started.

We also had interesting updates from Microsoft about their continuing journey to IPv6-only internally in their corporate network, and the pain of continuing to support IPv4 private addressing. When they acquire a company, its internal networks almost always overlap with their own, and making internal services available to the wider organisation is an ongoing, difficult challenge.

There was also a fascinating talk from SITA about providing network and infrastructure to aviation. There is a huge amount of networking involved, and the RFC1918 private IPv4 address space is no longer large enough for a large airport. They have a very strong push to use IPv6 even on networks not connected to the public internet.

Updates to Sympl to continue to support Let’s Encrypt

October 25th, 2019

Before you 3D print the keys from the photo, you should know they are no longer in use.

We’ve now updated Sympl to support the new ACME v2 protocol for long term Let’s Encrypt support.

Let’s Encrypt is changing the protocol for obtaining and renewing certificates from ACME v1 to ACME v2, and the version 1 protocol is now end-of-life. In the next few days (from 1st November), new account registration will be disabled, which will prevent new sites from obtaining SSL certificates. Final end-of-life comes in 2021, when certificate renewals will start to generate errors and then fail entirely.

Symbiosis is now end-of-life; as Sympl is an actively developed fork, we’d recommend that any Symbiosis users migrate to Sympl. We’d also recommend our managed hosting as a good place to run your Sympl server.

Multiple Mythic Beasts staff members contributed to this update.

Let’s Encrypt support for older Debian

October 9th, 2019

This cat is secure, but not dehydrated. (Credit: Lizzie Charlton, @LizzieCharlton)

Debian Jessie and Debian Stretch include dehydrated, a useful command line tool for managing Let’s Encrypt certificates. We use it fairly extensively for managing certificates throughout our servers and with our managed customers. Unfortunately, due to a change in capitalisation at Let’s Encrypt, the standard copy of dehydrated shipped with Debian Jessie and Debian Stretch is no longer compatible. As there’s no package in backports, we’ve spun our own packages of a newer version of dehydrated, which are available on our mirror server.

If you use the older version you’ll see an error like


{
"type": "urn:acme:error:badNonce",
"detail": "JWS has no anti-replay nonce",
"status": 400
}

or


{
"type": "urn:ietf:params:acme:error:malformed",
"detail": "Malformed account ID in KeyID header URL: \"https://acme-v02.api.letsencrypt.org/acme/acct/\"",
"status": 400
}

The fix is very simple: you just need to install our dehydrated packages.

First, add our signing key:


wget -O - -q https://mirror.mythic-beasts.com/mythic/support@mythic-beasts.com.gpg.key | apt-key add -

Then add the correct repository based on your version of Debian:

echo deb http://packages.mythic-beasts.com/mythic/ jessie main >/etc/apt/sources.list.d/packages.mythic-beasts.com.list

or

echo deb http://packages.mythic-beasts.com/mythic/ stretch main >/etc/apt/sources.list.d/packages.mythic-beasts.com.list

then

apt-get update
apt-get install --only-upgrade dehydrated
dehydrated -c

and your copy of dehydrated will be updated to 0.6, and your certificates can be created as normal.