IPv6 Networking in the UK

November 26th, 2024 by
48% of the UK has IPv6 and it’s 10ms faster (credit Google).

We recently went to the UK IPv6 Council annual meeting, ten years since the first one. In the intervening time, IPv6 usage in the UK has grown from 0.2% of connections to 48% today; almost half the country is IPv6-enabled. There was lots of interesting material about IPv6-only and “IPv6 mostly” networks, in addition to dual-stack networks.

IPv6 Mostly

“IPv6 mostly” networks are dual-stack networks that also provide NAT64 and DNS64 services. DNS64 provides synthesised IPv6 addresses for IPv4-only resources, and the NAT64 service then translates between the two protocols. Some software is incompatible because it tries to talk directly to IPv4 addresses, which can’t be reached. Modern computers and phones offer CLAT (the client-side translator from 464XLAT), which bridges this gap. A client using CLAT on a network that offers NAT64 and DNS64 no longer needs to be dual stack and can turn direct IPv4 off.
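
To make that concrete, here’s a minimal sketch of how DNS64 synthesis works, assuming the well-known NAT64 prefix 64:ff9b::/96 (networks can also use their own prefix); the example address is purely illustrative:

```python
import ipaddress

# DNS64 synthesis: embed an IPv4 address in the NAT64 prefix so that
# IPv6-only clients get an AAAA answer for an IPv4-only destination.
# The NAT64 gateway reverses the mapping when translating the flow.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # well-known prefix

def synthesise_aaaa(ipv4: str) -> ipaddress.IPv6Address:
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

# An IPv4-only host such as 192.0.2.1 appears to IPv6-only clients as:
print(synthesise_aaaa("192.0.2.1"))  # 64:ff9b::c000:201
```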

DHCP has a new option, Option 108: ‘IPv6-Only Preferred’. About 75% of clients – mostly phones, tablets and macOS devices – will specify this option. When both server and client support it, the client won’t request an IPv4 address from the DHCP server and will operate only with IPv6 addresses. Imperial College London have rolled this out on their wifi network: of the 71,000 devices using it, only 16,000 request an IPv4 address, so 77% are IPv6-only.
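
For the curious, Option 108 carries a single 32-bit “V6ONLY_WAIT” value – how long the client should switch off IPv4 for – as defined in RFC 8925. A rough sketch of the wire encoding, with an illustrative wait time:

```python
import struct

# DHCPv4 option 108 ("IPv6-Only Preferred", RFC 8925): the value is
# V6ONLY_WAIT, the number of seconds the client should disable its
# IPv4 stack on this network. 1800s here is purely illustrative.
OPTION_V6ONLY_PREFERRED = 108

def encode_option_108(v6only_wait_seconds: int = 1800) -> bytes:
    # option-code (1 byte), option-length (1 byte), 32-bit seconds value
    return struct.pack("!BBI", OPTION_V6ONLY_PREFERRED, 4, v6only_wait_seconds)

print(encode_option_108().hex())  # 6c0400000708
```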

IPv4 as a service

Sky rolled out a network in Italy which is internally IPv6-only, with IPv4 traffic layered on top using MAP-T. This means the broadband box translates all IPv4 traffic into IPv6 as it enters the network, and a MAP box turns it back into IPv4 as it leaves the Sky network to reach the origin. IPv6 traffic skips both translations. As a large eyeball network, they have network cache devices in their network. If a traffic flow is IPv4, it has to terminate on a cache box in one of four key points of presence. IPv6 flows can terminate on cache boxes anywhere in the network – crucially, closer to the end user and therefore faster.

Image from Sky showing IPv6 traffic delivered from the edge, while all IPv4 traffic has to be served from the core

Multiple providers report lower IPv6 latency than IPv4; in Sky’s network, IPv6 can take a shorter and faster path. Sky’s IPv6 Council slides have the details.

Other updates

Vodafone started, and has nearly finished, dual-stacking their network. The motivation was to reduce IPv4 and carrier-grade NAT costs. Today 75% of their customers have IPv6 and 38% of their traffic flows over IPv6. Microsoft talked about all their work on the operating system side to support and proxy IPv6, with a very clear consensus that CLAT and DHCP option 108 were the most important things they should have delivered last year.

Multi-coloured bandwidth in an Electromagnetic Field

July 12th, 2024 by
A satisfying traceroute from EMF out to Google via a private interconnect from Mythic Beasts

Last month we attended Electromagnetic Field as a silver sponsor.  Despite being in a remote field in Herefordshire, the site had amazing connectivity, which we played a small part in providing.

We provided some optics to help get internet around the field and acted as an Internet Transit Provider to uplink the festival through our network.

We had a tour of the network operations centre. Electromagnetic Field leased a single fibre to a telephone exchange in Gloucester, and a donated private 40Gbps circuit hauls the traffic back to the London Network Access Point (LONAP). We used private VLANs over LONAP to link to the Mythic Beasts core network routers in Sovereign House and Telehouse, and used this to provide our blend of transit providers and peers, including direct access over private fibre to some of the largest cloud providers.

EMF fibre uplink using DWDM

EMF fibre uplink using 4x 10Gbps DWDM with fake BiDi. The MUX is on the top; eight fibre pairs [03-10] are multiplexed into the single 60km fibre to the telephone exchange [01]. Ports 41-48 on the switch all have different coloured handles to indicate the different light colour used by each transceiver.

The section from the field in Eastnor to Gloucester uses Dense Wavelength Division Multiplexing (DWDM), a neat technology that uses multiple different frequencies to carry multiple signals over the same fibre at the same time. Each optical transceiver typically transmits at a specific wavelength on one fibre and receives on the same wavelength on a second fibre. These signals are fed into a multiplexer, which combines the different frequencies from multiple optics onto the same fibre, and a second multiplexer splits them back out into their component frequencies at the other end, allowing multiple 10Gbps channels to operate over one fibre pair.
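
As a rough illustration of how the channels are defined, here’s a sketch of the common 100GHz-spaced C-band grid (channel numbers and the anchor frequency follow the usual ITU-T G.694.1 convention; the specific channels used at EMF will have differed):

```python
# Each DWDM channel is a fixed frequency; the transceiver's nominal
# wavelength is simply c / frequency. On the common 100GHz C-band
# grid, channel n sits at 190.0 THz + n * 100 GHz.
C = 299_792_458  # speed of light, m/s

def channel_wavelength_nm(n: int) -> float:
    freq_hz = (190.0 + n * 0.1) * 1e12
    return C / freq_hz * 1e9

for ch in (20, 21, 22, 23):
    print(f"channel {ch}: {channel_wavelength_nm(ch):.2f} nm")
# channel 20: 1561.42 nm, channel 21: 1560.61 nm, ...
```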

By kind permission of the Masters and Fellows of Clare College: Newton’s original diagram for splitting and combining wavelengths with prisms, taken from a first edition of Newton’s Opticks (1704)

We use the same technique to multiply up the bandwidth in our core London network on our leased fibre that interconnects our core London points of presence.

To keep costs down at EMF there isn’t a fibre pair – just a single 60km fibre. The hack to get around this limitation is to use different frequencies in each direction and rely on the fact that the transceivers are frequency-specific for transmitting but not receiving – a transmitter that transmits at 1572.48nm will happily receive at 1572.89nm and vice versa. You can then use eight channels on one fibre as four bi-directional channels.
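
A tiny sketch of the pairing trick, with made-up channel numbers: each end transmits on one wavelength of a pair and receives on the other, so eight one-way channels become four bidirectional links.

```python
# Pair up eight DWDM channels so each end of the single fibre transmits
# on one wavelength and receives on its partner. Channel numbers here
# are illustrative, not the ones used at EMF.
channels = [20, 21, 22, 23, 24, 25, 26, 27]
pairs = list(zip(channels[0::2], channels[1::2]))

for a, b in pairs:
    print(f"link: end A tx ch{a} / rx ch{b}  <->  end B tx ch{b} / rx ch{a}")
```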

Around the campsite there were Datenklos (a switch in a portaloo) which provided wifi and multiple 1Gbps wired connections. Each Datenklo had a 10Gbps link back to the network operations centre to provide super-fast connectivity all around the site.

You can read more about some of the awesome things we saw at EMF 2024 in our previous blog post.

Electromagnetic Field 2024 sponsorship

May 1st, 2024 by

Electromagnetic Field Logo

We’re pleased to announce that we are silver sponsors of this year’s Electromagnetic Field festival.  As in previous years, we will also continue to support the event with free transit.  EMF is a long weekend camping in a field where people who are really very interested in things will tell you about the things that really interest them. There are talks, demos, art installations and workshops on all kinds of creative things. In addition to camping, everyone gets power and high-speed internet to their tent. Rumour has it there is also a bar.

Previous years have had an exceptionally wide variety of talks on a huge number of different subjects. The list of talks from the last festival in 2022 is long, but includes things as wild as:

  • Ship vs Oil Rig
  • The imitation game – using live data feeds from Network Rail to control a model railway
  • Building a home-made enigma machine

We’re not giving a talk this year as we didn’t come up with a good idea in time. For 2026 we’ve already rejected the following presentation titles:

  • I’ve got 99 problems and HEX ain’t one.
  • D. E. P. R. E. C. I. 8. The importance of correct accounting policies delivered through the medium of Aretha Franklin covers.
  • As a large language model I can’t assist with that. It’s illegal, unethical, and against my guidelines.

We’re looking forward to meeting up with lots of interesting people at EMF2024.

HEX-it complete

April 29th, 2024 by
Equinix invites you to celebrate international data centre day

We elected not to celebrate with Equinix

In March 2004 we moved all three of our servers into a single rack in the 6/7 Harbour Exchange data centre, operated at the time by Redbus. The data centre has changed hands several times, and merged with the building next door to become what is now Equinix LD8. We’ve been continuously present for 20 years and 1 month. Normally, moving out of a data centre is a difficult, expensive and time-consuming operation that is best avoided, but Equinix offered us terms that made the move worth making. In September 2023 we opened our new core point of presence in Telehouse South.

We’re happy to report this project is now complete and our footprint in Equinix LD8 is now reduced to an optical-only point of presence forwarding 10Gbps waves to our core site at City Lifeline.

Our new space in Telehouse South offers a considerable upgrade over what we could offer in LD8. All servers now have remotely switchable dual power feeds and dual 10Gbps uplinks. We are able to offer cross-connects to anywhere in the Telehouse London campus and 10Gbps wavelengths back to our other sites. We already have some new colocation customers taking advantage of these additional services. We still include serial consoles for out-of-band server management.

During this move, we live-migrated our virtual server cloud to hosts in either City Lifeline or Sovereign House. Apart from a few special cases supporting very old virtual servers or ones with BGP transit services, this was done without interruption to customers. Dedicated server and colocation customers moved in a series of maintenance windows to minimise downtime while their servers were relocated.

We brought on additional network capacity as part of the move including 10Gbps and 100Gbps links to transit providers and private peers within the Telehouse London campus. This provides a significant upgrade in connected external capacity.

HEX-it

September 27th, 2023 by

Last year, we undertook a significant data centre migration, with the closure of Digital Realty’s Meridian Gate requiring us to move our entire presence there to Redcentric’s City Lifeline. Having done it once, why not do it again?

Southern Serval, leaping

Our shared hosting server “serval” has already migrated to SOV. [Photo by Wynand Uys]

This year, we’re planning a move out of Harbour Exchange (HEX), and starting a presence in Telehouse South. A lot of the things we learned during the previous move are making this move easier to manage, although it is still a prodigious effort, both physically and in terms of design and infrastructure.

One of the things we’ve been working on for some time is improved network infrastructure within our data centres. This introduces IP address portability so that IP addresses do not need to change when servers are moved between data centres, as well as significantly higher bandwidth uplinks for our virtual server hosts.

In the last year, we’ve live migrated over a thousand VMs across two data centres, with minimal interruption to service.

We’re about to start migrating all VMs out of our HEX data centre. We have two available London destinations, SOV and CLL. If you’re a customer with a VM in our HEX data centre, we’ll be emailing you over the next couple of weeks, to check if you have a preference (for instance because you have existing services in one of those data centres, and would prefer to be moved to the other to maintain fault-tolerance).

We will also soon be able to offer Telehouse South as a virtual server zone, in addition to SOV and CLL. This means we will continue to provide three London-based zones for our customers running distributed services. We’ll retain a small residual presence in HEX for connectivity.

New data centre presence: City Lifeline

May 27th, 2022 by

The rest room has a nice view, proper coffee and our branded mugs


In June last year, Digital Realty informed us that they planned to close the Meridian Gate (MER) data centre in 2023. Meridian Gate is our largest presence, so initially this seemed like really bad news. Moving data centres is such a daunting – and expensive – prospect that we’d never really have considered it on its own, even if there were long-term cost savings or technical benefits. But once you’re forced to do it, it becomes a rare opportunity to do the kind of upgrades, reorganisation and general tidying that’s so hard to do in racks full of live servers.

Since the announcement we’ve been working hard to figure out not only how to replace the space in MER, but also how to make the most of this chance to configure and kit out new space exactly as we want it.

A key part of the plan is taking on a presence in a new London data centre so that we retain three separate sites in London, and we’re very pleased to announce that our new suite in Redcentric’s City Lifeline (CLL) data centre in Shoreditch is now live, and that our migration out of MER is well underway.

Our CLL presence is connected back to our other two London data centres, Digital Realty’s Sovereign House (SOV) and Equinix LD8 (aka Harbour Exchange/HEX), via a lit fibre ring. The new space allows us to offer dual, redundant 10Gbps to servers, as well as dual redundant power feeds. As with all our data centre space, we have switched PDUs, enabling power to be remotely controlled via our control panel, and remotely accessible serial consoles, so that almost all server issues can be resolved remotely.

If you have services in MER and haven’t already heard from us, we’ll be in touch soon to discuss migration plans. We’ve been working hard behind the scenes to minimise disruption to services from the migration out of MER. This includes network upgrades to enable IP portability between MER and CLL, so that servers will not need to change IPs during the move, and our team are doing a lot of late nights to reduce the impact of any unavoidable disruption.

If you’re interested in taking on new colocated or dedicated servers, please do get in touch as we’ve now got lots of capacity.

IPv6 Deployment World Leader

December 8th, 2021 by

Yesterday (7th December) we attended the virtual IPv6 forum annual meeting. We were delighted that our director Pete Stevens has been added to the IPv6 Hall Of Fame as an IPv6 Deployment World Leader.

Unlike most awards we turn down, you can’t win this one just by paying for a hugely expensive table at an awards ceremony.

We also got an update on how IPv6 deployment is going across the UK. We were happy to hear from BT that they’re making excellent progress replacing all the old Home Hubs with new IPv6-capable consumer routers. Sky Italia has deployed a consumer broadband network that’s effectively IPv6-only – IPv4 is provided as a service on top with MAP-T. As this is a form of carrier-grade NAT, they’ve managed one IPv4 address per 16 subscribers. This compares with their initial dual-stack rollout, which we reported on from the 2019 Council meeting.
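
As a back-of-the-envelope sketch of what “one address per 16 subscribers” means in MAP-T terms (port-set layouts vary by deployment, and the reserved-port figure below is an assumption for illustration):

```python
# MAP-T style address sharing: 16 subscribers per public IPv4 address
# means a 4-bit port-set ID (PSID), and each subscriber gets a slice
# of the port space. Reserving the first 1024 well-known ports is an
# illustrative choice; real MAP rules define the exact port layout.
SHARING_RATIO = 16
PSID_BITS = SHARING_RATIO.bit_length() - 1   # 4 bits
TOTAL_PORTS = 65536
RESERVED_PORTS = 1024

ports_per_subscriber = (TOTAL_PORTS - RESERVED_PORTS) // SHARING_RATIO
print(PSID_BITS, ports_per_subscriber)       # 4 4032
```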

Lastly it was noted that the cost of an IPv4 address on the open market is now around $60; increasing numbers of server providers are following our lead and making an IPv4 address an additional and removable option on the order form.

Teaching our network some MANRS

April 30th, 2021 by


We’ve recently rolled out software upgrades to our networks that enable improved routing security and we have joined the MANRS (Mutually Agreed Norms for Routing Security) initiative.

Our MANRS certification for our EU and US networks confirms that we block spoofed traffic, drop incorrect routing information and work with others to maintain routing security.

This is beneficial for any customer using our transit and IP services, which includes all dedicated server and virtual server customers.

Resource Public Key Infrastructure (RPKI)

Amazingly, up until the advent of RPKI the entire internet worked on a trust relationship: when another network told us that they were responsible for a range of internet addresses, we’d believe them. Border Gateway Protocol (BGP) is how networks communicate routing data to each other, and it had no mechanism to confirm that the route and address space being advertised to you were genuine.

Incorrect advertisements result in network traffic being delivered to the wrong destination, and incidents, both deliberate and accidental, are common and can cause real harm. For example, $17m in cryptocurrency was stolen in 2018 via an IP address hijack aimed at Amazon. YouTube has been taken offline, as have large parts of the Cloudflare network.

RPKI seeks to address this by providing signed proof that a network operator (identified by their Autonomous System Number) is permitted to originate a specific range of IP addresses. Once a range of IP addresses is signed, you know that any announcement of that address space originated by any other network is invalid and should be dropped.

Our transit providers are also certified by MANRS for further protection.

An RPKI example

RIPE Labs have created a deliberately invalid routing announcement that can be used to demonstrate and test RPKI. They have published a Route Origin Authorisation (ROA) saying that only AS0 is permitted to announce the prefix 209.24.0.0/24, and they then announce that prefix from AS15562.

With RPKI we see that the network listed in the ROA does not match the network announcing the route, so that route is considered invalid and rejected as being a hijack.
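
Here’s a toy route-origin validation check in the spirit of RFC 6811, using the beacon described above. Real validators such as Routinator or rpki-client fetch and cryptographically verify ROAs from the RIR trust anchors; this only sketches the decision logic.

```python
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class ROA:
    prefix: str
    max_length: int
    origin_asn: int

# The RIPE Labs beacon: a ROA for AS0 means nobody may originate it.
roas = [ROA("209.24.0.0/24", 24, 0)]

def validate(prefix: str, origin_asn: int) -> str:
    net = ip_network(prefix)
    covering = [r for r in roas if net.subnet_of(ip_network(r.prefix))]
    if not covering:
        return "not-found"  # no ROA covers this prefix
    if any(r.origin_asn == origin_asn and net.prefixlen <= r.max_length
           for r in covering):
        return "valid"
    return "invalid"        # covered by a ROA, but wrong origin or too specific

print(validate("209.24.0.0/24", 15562))  # invalid -> reject the announcement
```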

RIPE Labs have published a checker that runs in your browser and detects whether you can see this invalid route on your ISP’s network.

From our network, we now get the big smiley face:

Internet Routing Registry (IRR)

RPKI complements another approach to routing security: filtering based on Internet Routing Registry (IRR) data. RPKI allows us to verify whether a network is a valid ultimate destination for a particular IP range, but most networks we don’t see directly – we reach them through another, transit-providing network. IRR data allows us to verify that the network advertising a given route is authorised to originate or transit that route.

The Regional Internet Registries (RIRs) allow network providers to register a link between their network and an IP block. Various tools exist (e.g. bgpq3) to create a list of all the internet addresses that a network and its downstream customers can originate or transit. This is used to generate a filter list that restricts which routes we will accept from peers and downstream customers.
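
A toy illustration of the idea (real deployments run bgpq3 or bgpq4 against the live IRR databases; the as-set and route objects below are made up):

```python
# Expand an as-set into member ASNs, collect the route objects
# registered for each, and emit a prefix filter for the BGP session.
AS_SETS = {"AS-EXAMPLE": [64496, 64497]}
ROUTE_OBJECTS = {               # origin ASN -> registered prefixes
    64496: ["192.0.2.0/24"],
    64497: ["198.51.100.0/24", "203.0.113.0/24"],
}

def prefix_filter(as_set: str) -> list[str]:
    prefixes = []
    for asn in AS_SETS.get(as_set, []):
        prefixes.extend(ROUTE_OBJECTS.get(asn, []))
    return sorted(set(prefixes))

print(prefix_filter("AS-EXAMPLE"))
# ['192.0.2.0/24', '198.51.100.0/24', '203.0.113.0/24']
```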

These lists can be very long and change frequently – the list for our network (AS-MYTHIC) is usually 5000 or so records with tens to hundreds of changes per day.

Best Common Practice 38 (BCP 38)

Another issue with insecure routing is “spoofing” – sending IP packets with a fake source address. This is widely used by attackers to mount denial of service attacks: an attacker sends packets with the source IP address faked to be that of the target machine, and the recipients of those packets send their replies to the target instead of the originator. This makes it very easy to create distributed denial of service attacks.

BCP38 is a Best Common Practice which requires that networks filter packets that aren’t either to or from an address within their network.
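A toy version of the check (the prefixes are documentation ranges, purely for illustration): only let a packet leave the network if its source address belongs to one of our own prefixes.

```python
from ipaddress import ip_address, ip_network

# Egress filtering in the spirit of BCP 38: drop anything whose source
# address isn't ours, so spoofed packets never leave the network.
OUR_PREFIXES = [ip_network("192.0.2.0/24"), ip_network("2001:db8::/32")]

def permit_egress(source: str) -> bool:
    addr = ip_address(source)
    return any(addr in net for net in OUR_PREFIXES)

print(permit_egress("192.0.2.55"))   # True  - genuine source, forward it
print(permit_egress("203.0.113.9"))  # False - spoofed source, drop it
```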

Part of MANRS is not only to implement BCP 38 but also to host an active spoofer. This means that if we drop our BCP 38 filtering, our non-compliance will be published, including in regular mailings to network operator groups.

Having good MANRS

By combining all these methods, routing security is significantly improved. RPKI provides dynamic checking that doesn’t rely on us adding static route lists to our routers, and gives excellent protection against accidental hijacks from a “route optimiser” gone wrong. IRR filtering makes networks register accurate routing data, which we use to generate filters. BCP 38 reduces the risk to other networks from spoofed packets. Combining all of these means we have much better MANRS, at the price of some terrible acronyms.

RPKI filtering is now fully deployed on our US and European networks, and both now pass Cloudflare’s “Is BGP Safe Yet” test.

IPv4/IPv6 transit in HE Fremont 2

September 18th, 2020 by

Back in 2018, we acquired BHost, a virtual hosting provider with a presence in the UK, the Netherlands and the US. Since the acquisition, we’ve been working steadily to upgrade the US site from a single transit provider with incomplete IPv6 networking and a mixture of container-based and full virtualisation to what we have now:

  • Dual redundant routers
  • Two upstream network providers (HE.net, CenturyLink)
  • A presence on two internet exchanges (FCIX/SFMIX)
  • Full IPv6 routing
  • All customers on our own KVM-based virtualisation platform

With these improvements to our network, we’re now able to offer IPv4 and IPv6 transit connectivity to other customers in Hurricane Electric’s Fremont 2 data centre. We believe that standard services should have a standard price list, so here’s ours:

Transit Price List

Prices start at £60/month on a one month rolling contract, with discounts for longer commits. You can order online by hitting the big green button, we’ll send you a cross-connect location within one working day, and we’ll have your session up within one working day of the cross connect being completed. If we don’t hit this timescale, your first month is free.

We believe that ordering something as simple as IP transit should be this straightforward, but it seems that it’s not the norm. Here’s what it took for us to get our second 10G transit link in place:

  • 24th April – Contact sales representative recommended by another ISP.
  • 1st May – Contact different sales representative recommended by UKNOF as one of their sponsors.
  • 7th May – 1 hour video conference to discuss our requirements (a 10Gbps link).
  • 4th June – Chase for a formal quote.
  • 10th June – Provide additional details required for a formal quote.
  • 10th June – Receive quote.
  • 1st July – Clarify further details on quote, including commit.
  • 2nd July – Approve quote, place order by email.
  • 6th July – Answer clarifications, push for contract.
  • 7th July – Quote cancelled. Provider realises that Fremont is in the US and they have sent EU pricing. Receive and accept higher revised quote.
  • 10th July – Receive contract.
  • 14th July – Return signed contract. Ask for cross connect location.
  • 15th July – Reconfirm the delivery details from the signed contract.
  • 16th July – Send network plan details for setting up the network.
  • 27th July – Send IP space justification form. They remind us to provision a cross connect, we ask for details again.
  • 6th August – Chase for cross connect location.
  • 7th August – Delivery manager allocated who will process our order.
  • 11th August – Ask for a cross connect location.
  • 20th August – Ask for a cross connect location.
  • 21st August – Circuit is declared complete within the 35 working day setup period. Billing for the circuit starts.
  • 26th August – Receive a Letter Of Authorisation allowing us to arrange the cross connect. We immediately place order for cross connect.
  • 26th August – Data centre is unable to fulfil cross connect order because the cross connect location is already in use.
  • 28th August – Provide contact at data centre for our new provider to work out why this port is already in use.
  • 1st September – Receive holding mail confirming they’re working on sorting our cross connect issue.
  • 2nd September – Receive invoice for August + September. Refuse to pay it.
  • 3rd September – Cross connect location resolved, circuit plugged in, service starts functioning.

Shortly after this we put our order form live and improved our implementation. We received our first order on the 9th September and provisioned it a few days later. Our third transit customer is up and live; order form to fully working took just under twelve hours, comfortably within our promise of two working days.

VMHaus services now available in Amsterdam

July 3rd, 2019 by

Integration can be hard work

Last year we had a busy time acquiring Retrosnub, BHost and VMHaus. We’ve been steadily making progress in the background, integrating the services these companies provide to reduce the cost and complexity of management. We can now also announce our first significant feature upgrade for VMHaus: we’ve deployed a new virtual server cluster to our Amsterdam location, and VMHaus services are now available in Amsterdam. VMHaus uses Mythic Beasts for colocation and network, and in Amsterdam they gain access to our extensive set of peers at AMS-IX, LINX and LONAP. Per-hour billed virtual servers are available from VMHaus with payment through PayPal.

As you’d expect, every VM comes with a /64 of IPv6 space.

In the background we’ve also been migrating former-BHost KVM-based services to Mythic Beasts VM services in Amsterdam. Shortly we’ll be starting to migrate former-BHost and VMHaus KVM-based services in London to new VM clusters in the Meridian Gate data centre.