8GB and overclocked Raspberry Pi servers

June 15th, 2021

Our Pi 4 servers all wear the Power over Ethernet HAT to provide power and cooling to the CPU.

Since the launch of the 8GB Raspberry Pi 4 we’ve had many requests to add these to our Raspberry Pi cloud. Meanwhile many Raspberry Pi users have read about overclocking the Raspberry Pi and running at a higher clock speed.

Overclocking further increases the computing power of the Pi, but brings significant operational issues for our Pi cloud. Not all Raspberry Pi hardware will run reliably at the higher clock speed and the higher voltage required to support it. Increasing the clock speed and voltage significantly increases power consumption, and thus the cooling needed to prevent overheating. We’ve spent a considerable amount of time testing and we’re now ready to launch our first 8GB Raspberry Pi 4 cluster. We’re offering them at two clock speeds: the stock 1.5GHz, and overclocked to 2GHz.
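
For anyone wanting to experiment on their own Pi 4, the overclock itself is set in /boot/config.txt. A minimal sketch (the exact values we use in production may differ):

# /boot/config.txt - illustrative 2GHz overclock for a Pi 4
arm_freq=2000      # CPU clock in MHz (stock is 1500)
over_voltage=6     # raise the core voltage to keep the higher clock stable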

The overclocked Raspberry Pis have all been run at a significant CPU load for several weeks to test their stability before release. Any that failed the stability test have been added to the cloud at the normal 1.5GHz clock speed.
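
If you want to run a similar soak test on your own Pi, here’s a rough sketch using stress-ng and the Pi’s built-in monitoring tools; the duration and commands are illustrative rather than our exact internal procedure:

# load all four cores for a week while keeping an eye on temperature and the actual clock
sudo apt install stress-ng
stress-ng --cpu 4 --timeout 168h &
watch -n 60 'vcgencmd measure_temp; vcgencmd measure_clock arm'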

The 8GB Pi is available at 1.5GHz and 2GHz clock speeds. Supported operating systems are Raspberry Pi OS 64 and Ubuntu 64.

Larger fans provide more cooling to our 8GB Pi 4 cloud so we can run at higher clock speeds.

VPS API, on-demand billing and dormant VPSs

May 14th, 2021

Dormant mode means your VPS can have a nice snooze.

We’ve recently rolled out some new features that provide more flexibility to our VPS platform.

On-demand billing

Last year we added on-demand billing to our Raspberry Pi Cloud and we’ve now rolled this out to our VPS services, allowing you to add and remove VPSs at any time and pay by the second for the time that the server is provisioned. We continue to offer monthly, quarterly and annual billing options, with discounts for longer billing periods, allowing users to choose between the best pricing for long-term usage and the convenience of on-demand, pay-as-you-go pricing.

Dormant VPS mode

We’ve also added the ability to make an on-demand VPS dormant, so that you’re only charged for the server’s storage space (and any allocated IPv4 addresses) until you want to reactivate it. Dormant VPSs can be reactivated at any time, although it is not guaranteed that you will be able to re-provision to the same specification of server immediately. The RAM and CPU previously allocated to your server may have been reallocated, and a move to a different host server may be required.

VPS management API

We have also added an API for managing on-demand VPSs, allowing the creation and deletion of servers to be automated. The API is very similar to our API for managing Raspberry Pi Cloud servers. To get started, see our API docs.
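
As a rough illustration of the shape of the API, provisioning and later removing an on-demand VPS looks something like the following. The endpoint path, server name and parameters here are illustrative rather than definitive, and we assume .netrc-style credentials as used in the DNS API examples later on this page; the API docs have the real details.

# create an on-demand VPS (illustrative endpoint and parameters)
curl -sn -X POST "https://api.mythic-beasts.com/vps/servers/my-vps" \
  -d product=EXAMPLE -d disk=20480 -d os=ubuntu-20.04

# ...and delete it again when it's no longer needed
curl -sn -X DELETE "https://api.mythic-beasts.com/vps/servers/my-vps"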

Cloud-init user data

We use cloud-init to automate operating system installation when provisioning a new VPS. The installation can be customised using cloud-init user data, which can provide additional installation steps to be performed after the first boot. User data can be provided through both the control panel and the API. It is also possible to store and re-use user data snippets in the control panel, making it easy to repeatably spin up new servers with your applications already installed and configured.
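
As a sketch of what user data can look like, the following cloud-config snippet (the package list, username and key are purely illustrative) would install nginx and add an SSH key on first boot:

#cloud-config
packages:
  - nginx
users:
  - default
  - name: admin
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... admin@example.com
    sudo: ALL=(ALL) NOPASSWD:ALL
runcmd:
  - systemctl enable --now nginx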

More capacity

We continue to add capacity to our cloud to keep up with customer demand, with the most recent expansion being in our London Meridian Gate (MER) zone.

Private cloud improvements

Our Private Cloud service gets you the features and convenience of our public VPS platform, but provided on your own dedicated servers. We’ve recently rolled out improvements to our Private Cloud platform, allowing Private Cloud servers to be provisioned and managed via the API and control panel.

Teaching our network some MANRS

April 30th, 2021


We’ve recently rolled out software upgrades to our networks that enable improved routing security and we have joined the MANRS (Mutually Agreed Norms for Routing Security) initiative.

Our MANRS certification for our EU and US networks confirms that we block spoofed traffic, drop incorrect routing information and work with others to maintain routing security.

This is beneficial for any customer using our transit and IP services, which includes all dedicated server and virtual server customers.

Resource Public Key Infrastructure (RPKI)

Amazingly, up until the advent of RPKI the entire internet worked on a trust relationship. When another network told us that they were responsible for a range of internet addresses we’d believe them. Border Gateway Protocol (BGP) is how networks communicate routing data to each other and it had no mechanism to confirm that the route and address space being advertised to you were genuine.

Incorrect advertisements result in network traffic being delivered to the wrong destination, and incidents, both deliberate and accidental, are common and can cause real harm. For example, $17m in cryptocurrency was stolen in 2018 via an IP address hijack aimed at Amazon. YouTube has been taken offline, as have large parts of the Cloudflare network.

RPKI seeks to address this by providing signed proof that a network operator (identified by their Autonomous System Number) is permitted to originate a specific range of IP addresses. Once a range of IP addresses is signed you know that any announcement of the address space from any other provider is invalid and should be dropped.
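
You can check the RPKI status of a prefix/origin pair yourself; for example, RIPEstat’s data API has an rpki-validation endpoint. The ASN and prefix below are RIPE NCC’s own, used purely as an example:

# ask RIPEstat whether AS3333 is a valid origin for 193.0.0.0/21 (expect "valid")
curl -s "https://stat.ripe.net/data/rpki-validation/data.json?resource=AS3333&prefix=193.0.0.0/21" \
  | jq '.data.status'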

Our transit providers are also certified by MANRS for further protection.

An RPKI example

RIPE Labs have created a deliberately invalid routing announcement that can be used to demonstrate and test RPKI. RIPE Labs have published a Route Origin Authorisation (ROA) that says only AS0 is permitted to announce the prefix 209.24.0.0/24. They then announce that prefix under AS15562.

With RPKI we see that the network listed in the ROA does not match the network announcing the route, so that route is considered invalid and rejected as being a hijack.
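
A quick way to see the effect from the command line is simply to try to reach an address inside the test prefix; on a network that drops RPKI-invalid routes the packets should have nowhere to go (assuming the beacon is still being announced, and picking an arbitrary address in the range):

# on a network that rejects RPKI-invalid routes, expect "Network is unreachable" or 100% loss
ping -c 3 209.24.0.1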

RIPE Labs have published a checker that runs in your browser and detects whether you can see this invalid route on your ISP’s network.

From our network, we now get the big smiley face.

Internet Routing Registry (IRR)

RPKI complements another approach to routing security: filtering based on Internet Routing Registry (IRR) data. RPKI allows us to verify that a network is a valid ultimate destination for a particular IP range, but most networks aren’t ones we connect to directly; we reach them through another transit-providing network. IRR data allows us to verify that the network advertising a given route is authorised to originate or transit that route.

The Regional Internet Registries (RIRs) allow network providers to register a link between their network and an IP block. Various tools exist (e.g. bgpq3) to create a list of all the internet addresses that a network can originate or transit from their downstream customers. This is used to generate a filter list that restricts which routes we will accept from peers and downstream customers.

These lists can be very long and change frequently – the list for our network (AS-MYTHIC) is usually 5000 or so records with tens to hundreds of changes per day.
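
As an example, generating such a filter for our own AS macro with bgpq3 looks something like this (the router syntax and prefix-list name are just an illustration):

# build an IPv4 prefix filter named mythic-in from the AS-MYTHIC IRR object, in Juniper syntax
bgpq3 -4 -J -l mythic-in AS-MYTHIC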

Best Common Practice 38 (BCP 38)

Another issue with insecure routing is “spoofing”: sending IP packets with a fake source address. This is widely used to mount denial of service attacks. An attacker sends packets with the sender IP address faked to be that of the target machine, so the recipients of those packets send their replies to the target machine instead of the originator. This makes it very easy to create distributed denial of service attacks.

BCP38 is a Best Common Practice which requires that networks filter packets that aren’t either to or from an address within their network.
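
On a single Linux router the closest easy approximation is strict reverse-path filtering, which drops packets arriving on an interface that wouldn’t be used to route traffic back to their claimed source. This is only a sketch; real edge routers use equivalent vendor features such as uRPF and ACLs:

# enable strict reverse-path filtering on all interfaces
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1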

Part of MANRS is not only implementing BCP 38 but also hosting an active spoofer, which means that if we ever drop our BCP 38 filtering, our non-compliance will be published, including in regular mailings to network operator groups.

Having good MANRS

By combining all these methods, routing security is significantly improved. RPKI provides dynamic checking that doesn’t rely on us adding static route lists to our routers, and gives excellent protection against accidental hijacks from a “route optimiser” gone wrong. IRR filtering forces networks to publish accurate routing data, which is used to generate filters. BCP 38 reduces the risk to other networks from spoofed packets. Combining all of these means we have much better MANRS, at the price of some terrible acronyms.

RPKI filtering is now fully deployed on our US and European networks, and both now pass Cloudflare’s “Is BGP Safe Yet” test.

Restoring Nominet’s Purpose: update

February 22nd, 2021

Earlier this month we reported that we’d signed up to the Public Benefit campaign to reform Nominet, the company responsible for overseeing UK domain registrations.

The campaign was seeking 5% of Nominet’s membership in order to call an EGM to replace Nominet’s non-elected directors. The campaign quickly achieved this, the EGM request was delivered, and Nominet have now set the date for the EGM as 22nd March 2021. Members representing more than 17% of Nominet voting rights have now signed up to support the campaign. Typical AGM voting turnout is well under 10% suggesting that the vote is pretty much certain to succeed, at least according to The Register’s analysis.

If there was ever any doubt about the need for reform, Nominet’s response to the EGM letter has completely removed this.

Nominet’s CEO rushed out a statement hoping that:

all constituencies will be able to engage in a constructive way

At the same time, Nominet responded to Public Benefit’s email requesting member information by providing 575 printed pages.


This would seem to be more obstructive than constructive.

The EGM request made two motions: (1) sack the current directors; and (2) appoint two interim directors to take over. Nominet are claiming that the second motion is illegal (contrary to legal advice received by Public Benefit) and are refusing to put it on the EGM agenda. They now have the gall to claim that the EGM request destabilises Nominet because it does not provide a credible plan to replace the current leadership.

Is this just about reducing UK domain fees?

It’s been suggested that this campaign is about Nominet members, who are mostly companies like us that resell domain registrations, trying to reduce the price that they pay for domains. This seems to ignore the fact that the domain market is very competitive, and UK domains are particularly easy to transfer between registrars. Provided that the price is the same for all members, what that price is doesn’t make much difference to us.

Nonetheless, we’re very happy to make a public commitment that if the EGM process results in a reduction in the price that we pay for domains, we will pass on that saving in the price that we charge.

Testimonials

February 5th, 2021

We’ve had a variety of customers being very complimentary recently. Andy Steven runs a series of webcams in the Shetland Islands that stream live views of the northern lights. The cameras relay the stream via one of our virtual servers in our MER data centre, and the current bandwidth record is several Gbps.

I am proud to say that our new ‘AuroraCam’ network just delivers and for the first time I no longer break out in a sweat watching the demand increase from that AuroraWatchUK alert or a celebrity weather personality sending out a Tweet.

— Andy Steven, Shetland Webcams (full article)

Beautiful shot of the northern lights captured by Shetland Webcams. Could be improved by adding a kitten though.

We provide 10Gbps fibre connectivity to the Cambridge office of Darktrace. Darktrace uses machine learning to identify and neutralise security threats in real time.

You’ve been much more transparent & approachable than any provider I’ve dealt with previously. Very happy with the service so far.

— Harry Godwin, Head of Business Infrastructure, Darktrace

The Web hosting review and advice site Hosting Advice interviewed us and wrote a great article about the management and infrastructure services we provide.

Recognizing that there is no one-size-fits-all approach to managed hosting, Mythic Beasts can take on varying responsibility levels as needed. This range of services includes everything from ensuring that servers are up and running to providing the extensive monitoring, security, and assistance necessary to keep custom web applications functioning reliably.

— Hosting Advice (full article)

Lastly, our strong stance on returning Nominet to its public benefit roots garnered entirely positive responses on Twitter.

Nominet: managing .uk for public benefit

February 1st, 2021

We have signed up to Public Benefit, an effort to restore Nominet to its roots as a public benefit, not for profit organisation.

Nominet runs a world class registry for domains ending in .uk. Their technical execution is faultless and we’re extremely happy with all the services they provide for .uk domains.

A ccTLD domain registry is a natural monopoly, and a profitable one at that. For many years, Nominet have donated their surplus to the Social Tech Trust (formerly the Nominet Trust, which was renamed after they cut funding), a charity that uses technology for the public good.

Charitable donations have dwindled whilst prices have increased over the last five years, due to spending on loss-making research projects such as self-driving cars and radio spectrum management, not to mention last year’s £249,000 pay rise for the CEO (to £772,000).

We are strongly in favour of the proposal of Axel Pawlik, former MD of RIPE, as a director. Under Axel’s leadership, RIPE achieved many significant improvements to internet infrastructure including, but not limited to:

  • Managing IPv4 address exhaustion, balancing the needs of existing ISPs while preserving access for new entrants;
  • Encouraging and facilitating IPv6 uptake;
  • Encouraging uptake of RPKI to secure routing announcements (RIPE now has the highest participation rate of any RIR); and
  • Creating RIPE Atlas, a communal tool to track routing that makes running an ISP much easier.

Sir Michael Lyons also appears to be a sound proposal, although beyond his earlier report on Nominet governance, we have no day-to-day experience of his work.

Nominet is structured such that the elected non-executive directors are outnumbered and unable to achieve meaningful change, which is why, after years of dissatisfaction, this has come to an Extraordinary General Meeting to remove the existing directors. Voting is weighted in a complicated fashion, but the more domains a member controls, the more weight their vote carries. As a result, domain owners can effectively vote by switching registrars, and if you would like to support this proposal we would recommend moving any .uk domains to a registrar that has signed up to call the EGM. Nominet are very good at actually running the registry, and .uk domain transfers are easy and free.

Zero-day Security Updates for Managed WordPress

November 26th, 2020

Don’t get caught napping when it comes to WordPress updates!

Installing updates is an important part of keeping your computer secure. This is also true when running a website based around popular publishing tools such as WordPress, which have vast communities of plugin and theme developers of varying experience. Plugins often contain security vulnerabilities that can lead to a compromised site and it can be difficult to tell if a new version is a security update or just adding features.

For our managed WordPress customers we have been using the excellent WPScan API for some time to check installed plugins and themes against their list of security vulnerabilities. Dealing with the resulting report once or twice a week was a time-consuming manual process that we wanted to improve.

Helpfully WPScan have recently introduced a feature which allows us to receive these updates in real-time. Now, when a new security update for a plugin or theme is announced we automatically check within a few minutes if a vulnerable version is present on any of our managed WordPress installs, and then generate a support case to ask the customer when they’d like us to install the update. Some customers prefer to perform the updates themselves, which is also fine – the important thing is that the vulnerability gets fixed.
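
Under the hood this is a simple HTTP API call. Something like the following (the token is a placeholder, the plugin slug is just an example, and the exact response structure may differ from this sketch) asks WPScan for the known vulnerabilities in a plugin, which we then compare against the installed version:

# list known vulnerabilities for the (example) loginizer plugin
curl -s -H "Authorization: Token token=$WPSCAN_API_TOKEN" \
  "https://wpscan.com/api/v3/plugins/loginizer" \
  | jq '.loginizer.vulnerabilities'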

Where a security issue is dangerous and likely to be exploited, we apply our standard zero-day vulnerability process of deploying an update immediately and notifying customers afterwards. A good example of this would have been the recent Loginizer SQL injection vulnerability, had the WordPress team not already decided this was too dangerous and invoked their rarely-used forced update process.

Now we can respond much more quickly to WordPress vulnerabilities, helping us keep our customers’ websites secure.

Our managed WordPress service includes a number of features that help keep your site secure and protect your data:

  • Daily backups, mirrored to multiple sites
  • 24/7 monitoring
  • Custom security hardening
  • Notification and installation of security updates
  • You can ask us for help if something goes wrong!

If this sounds interesting then you can order managed WordPress, see details of our other managed applications or contact us if you have questions.

MagPi magazine: how to host a website on a Raspberry Pi

October 9th, 2020

The MagPi Magazine has published a new article on how to set up a web server using a Raspberry Pi hosted in our Pi Cloud.

The article walks through all the steps necessary from ordering a server on our website to getting WordPress installed and running.

It’s also a great demonstration of how easy it is to host a website on an IPv6-only server such as those in our Pi Cloud. In fact, it’s so easy that the article doesn’t even mention that the Pi doesn’t have a public IPv4 address. An SSH port-forward on our gateway server provides IPv4 access for remote administration, and our v4 to v6 proxy relays incoming HTTP requests from those still using a legacy internet connection.
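
For example, day-to-day administration of an IPv6-only Pi from an IPv4-only connection is just an SSH command with a port number; the hostname and port below are placeholders for the real values shown in the control panel:

# port and hostname are illustrative; the control panel shows the real ones for your Pi
ssh -p 5123 root@ssh.gateway.example.com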

You can read the article on the MagPi site or order a server to try it out yourself.

We have Pi 3 and Pi 4 servers available now, and the option of per-second billing means you can try this without any ongoing commitment.

More DNS API fun: find an IP across all zones

September 21st, 2020

A customer was doing an IP address change on a server and wanted a quick way to find all references to the old IP address across all of their domains.

This seemed like a good job for our DNS API and a few UNIX utilities.

Finding matching records

Our DNS API makes it easy to find records with particular content:

curl -sn "https://api.mythic-beasts.com/dns/v2/zones/example1.com/records?data=1.2.3.4"

The -n assumes we’ve got a .netrc file with our API credentials. See our DNS API tutorial for more details.

This gives us a block of JSON with any matching records:

{
  "records": [
    {
      "data": "1.2.3.4",
      "host": "www",
      "ttl": 300,
      "type": "A"
    }
  ]
}

jq lets us turn the presence or absence of any matching records into an exit code that we can test with an if statement by piping into the following:

jq -e '.records | length > 0' 

This counts the number of members of the records array, and -e sets the exit code based on the output of the last expression.

Getting a list of zones

We want to check this across all zones, so let’s get a list of zones:

curl -sn https://api.mythic-beasts.com/dns/v2/zones

This gives us some JSON:

{
  "zones": [
    "example1.com",
    "example2.com"
  ]
}

What we really want is a flat list, so we can iterate over it in bash. jq to the rescue again. Simply pipe into:

jq -r '.zones[]'

and we get:

example1.com
example2.com

Putting it all together

Putting this all together with a for loop and an if:

IP=1.2.3.4
for zone in $(curl -sn https://api.mythic-beasts.com/dns/v2/zones | jq -r '.zones[]') ; do
  if curl -sn "https://api.mythic-beasts.com/dns/v2/zones/$zone/records?data=$IP" |\
      jq -e '.records | length > 0' >/dev/null ; then 
    echo "$IP found in $zone"
  fi
done

Gives:

1.2.3.4 found in example1.com

More than one way to do it

Another approach would be to use the zone file output format and check if the output is empty or not:

curl -sn -H 'Accept: text/dns' \
  "https://api.mythic-beasts.com/dns/v2/zones/$zone/records?data=$IP"

This gives us matching records, one per line:

www         300 A 1.2.3.4

We can then test if we’ve got any matches using ifne (if-not-empty, part of the moreutils package in most distributions):

curl -sn -H 'Accept: text/dns' \
  "https://api.mythic-beasts.com/dns/v2/zones/$zone/records?data=$IP" \
  | ifne echo $IP found in $zone

Access to our DNS API is included with all domains registered with us. API credentials can be limited to individual zones or even records, and can be either read/write or read-only.

ANAME records

Of course, it’s generally desirable to avoid including an IP address in lots of different DNS records in the first place. It’s preferable to assign the IP to a single hostname, and then point other records at that. Our DNS service supports ANAME records which allow the use of hostnames rather than IP addresses in places where CNAMEs cannot be used.
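
For example, rather than repeating 1.2.3.4 everywhere, the zone could assign it to a single host and point everything else at that name (zone-file style; the names here are illustrative):

server1      300 A     1.2.3.4
www          300 CNAME server1.example1.com.
; ANAME works where a CNAME can't, such as at the zone apex
@            300 ANAME server1.example1.com.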

IPv4/IPv6 transit in HE Fremont 2

September 18th, 2020

Back in 2018, we acquired BHost, a virtual hosting provider with a presence in the UK, the Netherlands and the US. Since the acquisition, we’ve been working steadily to upgrade the US site from a single transit provider with incomplete IPv6 networking and a mixture of container-based and full virtualisation to what we have now:

  • Dual redundant routers
  • Two upstream network providers (HE.net, CenturyLink)
  • A presence on two internet Exchanges (FCIX/SFMIX)
  • Full IPv6 routing
  • All customers on our own KVM-based virtualisation platform

With these improvements to our network, we’re now able to offer IPv4 and IPv6 transit connectivity to other customers in Hurricane Electric’s Fremont 2 data centre. We believe that standard services should have a standard price list, so here’s ours:

Transit Price List

Prices start at £60/month on a one month rolling contract, with discounts for longer commits. You can order online by hitting the big green button, we’ll send you a cross-connect location within one working day, and we’ll have your session up within one working day of the cross connect being completed. If we don’t hit this timescale, your first month is free.

We believe that ordering something as simple as IP transit should be this straightforward, but it seems that it’s not the norm. Here’s what it took for us to get our second 10G transit link in place:

  • 24th April – Contact sales representative recommended by another ISP.
  • 1st May – Contact different sales representative recommended by UKNOF as one of their sponsors.
  • 7th May – 1 hour video conference to discuss our requirements (a 10Gbps link).
  • 4th June – Chase for a formal quote.
  • 10th June – Provide additional details required for a formal quote.
  • 10th June – Receive quote.
  • 1st July – Clarify further details on quote, including commit.
  • 2nd July – Approve quote, place order by email.
  • 6th July – Answer clarifications, push for contract.
  • 7th July – Quote cancelled. Provider realises that Fremont is in the US and they have sent EU pricing. Receive and accept higher revised quote.
  • 10th July – Receive contract.
  • 14th July – Return signed contract. Ask for cross connect location.
  • 15th July – Reconfirm the delivery details from the signed contract.
  • 16th July – Send network plan details for setting up the network.
  • 27th July – Send IP space justification form. They remind us to provision a cross connect, we ask for details again.
  • 6th August – Chase for cross connect location.
  • 7th August – Delivery manager allocated who will process our order.
  • 11th August – Ask for a cross connect location.
  • 20th August – Ask for a cross connect location.
  • 21st August – Circuit is declared complete within the 35 working day setup period. Billing for the circuit starts.
  • 26th August – Receive a Letter Of Authorisation allowing us to arrange the cross connect. We immediately place order for cross connect.
  • 26th August – Data centre is unable to fulfil cross connect order because the cross connect location is already in use.
  • 28th August – Provide contact at data centre for our new provider to work out why this port is already in use.
  • 1st September – Receive holding mail confirming they’re working on sorting our cross connect issue.
  • 2nd September – Receive invoice for August + September. Refuse to pay it.
  • 3rd September – Cross connect location resolved, circuit plugged in, service starts functioning.

Shortly after this we put our order form live and improved our implementation. We received our first order on 9th September and provisioned it a few days later. Our third transit customer is now up and live; from order form to fully working took just under twelve hours, comfortably within our promise of two working days.