Monitoring API
We’ve added a new API for managing our monitoring system. You can now script shutting down monitoring of your application while it’s offline for an update, and script the post-deployment switch-on too.
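A deployment wrapper might then look something like this. The endpoint below is a made-up placeholder; see our support pages for the real URL and authentication details.

# Placeholder monitoring endpoint, for illustration only
curl -X POST https://api.example.com/monitoring/myserver/disable
./deploy-new-version.sh
curl -X POST https://api.example.com/monitoring/myserver/enable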
As you may be aware, there’s been a flurry of IPv6-related excitement in the past few days. IANA has allocated the last of the IPv4 address space to the regional registries. This means that obtaining IPv4 addresses is going to become steadily more difficult from here, and migrating the whole Internet to IPv6 is looking like more of an immediate priority.
We’ve been running an IPv6 network for over six months, and yesterday we enabled IPv6 on two of our customer-facing hosting servers, yali.mythic-beasts.com and onza.mythic-beasts.com. We also made their control panels and all services hosted on them available over IPv6.
pete@ubuntu:~$ dig AAAA yali.mythic-beasts.com +short
2a00:1098:0:86:1000::10
For a brief period, an automated script cleaned up the A records for these hosts, disabling access to all of their services over IPv4 for every customer without IPv6 connectivity. As our support mail queue testified, the majority of our customers do not have working IPv6 connectivity yet.
Unrelated to this activity, we also discovered that by default Linux limits the number of IPv6 routes to 4096. You can update this by doing:
echo 32768 >/proc/sys/net/ipv6/route/max_size
This is a good idea on any Linux machine that sees the full routing table; the global IPv6 routing table is now about 4,300 routes.
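To make the change survive a reboot, the same setting can be applied through sysctl and persisted in /etc/sysctl.conf:

# one-off change, equivalent to the echo above
sysctl -w net.ipv6.route.max_size=32768
# persist across reboots
echo 'net.ipv6.route.max_size = 32768' >> /etc/sysctl.conf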
If you are running Exim 4 you should be aware that a remote root vulnerability was discovered on Friday 10th December. This means that someone sending a specially crafted email to your server can take complete control of it.
If you are a managed server customer, you do not need to worry. All managed server customers were fully updated by the end of Saturday 11th December, including, where necessary, building non-standard Exim packages from source.
If you are not a managed customer then upgrading Exim is your responsibility. We have notified all customers who look like they may be running a vulnerable version of Exim.
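If you’re unsure what you’re running, Exim will report its own version; the first line of the output includes the version number. On Debian systems the binary may be installed as exim4 rather than exim.

exim -bV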
If you are running Debian Lenny, make sure /etc/apt/sources.list contains the line
deb http://security.debian.org/ lenny/updates main
then run
apt-get update
apt-get upgrade
This will install a patched Exim for you.
If you are running CentOS or Red Hat, then
yum update
will install a patched Exim for you.
If you are running an older Debian release such as Etch, there is no security update provided by Debian. You will have to roll your own Debian package with the fix, or upgrade your server or Exim package to Debian Lenny.
If you are running Ubuntu, you should make sure you have the appropriate security lines in your apt configuration and follow the instructions for Debian Lenny above.
If this all sounds too complicated, you should be purchasing a managed service from us and we will manage it for you; contact us at support@mythic-beasts.com.
If you think that building a CentOS 5.5 backport of Exim for a customer who’s compelled to run an early version of Fedora is both possible and fun, contact us via our jobs page and we’ll let you know when we’re hiring.
One of our customers mailed us this evening to report that he’d upgraded Ubuntu on his colocated server and it had gone wrong. The machine refused to boot, and he’d managed to wipe out the serial settings in his GRUB configuration, so he couldn’t alter the boot line to add rootdelay=30. Could we help?
With a bit of fiddling, we could. On bootup the machine dropped out into busybox in the initrd.
ls /dev/mapper/system-system
revealed that the device for the LVM root volume was missing.
lvm vgchange -a y system
activated the root logical volume so we could see it in /dev/mapper.
fstype /dev/mapper/system-system
revealed the filesystem to be XFS.
modprobe xfs
mkdir /mnt
mkdir /mnt/root
cd /mnt
mount /dev/mapper/system-system root
mounted the root filesystem inside of busybox.
modprobe ext3
mount /dev/sda1 root/boot
mounted the /boot partition.
cd root
./usr/bin/vim.tiny boot/grub/menu.lst
brought up a minimal vim to edit the GRUB configuration. I could then add the serial console lines,
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=15 serial console
to the GRUB config, and rootdelay=30 to the kernel line; after a reboot the machine came up.
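For illustration, the edited kernel line in menu.lst ends up looking something like this; the kernel version here is hypothetical, but the root device matches the one above.

# hypothetical kernel version, illustrative arguments
kernel /vmlinuz-2.6.32-24-server root=/dev/mapper/system-system ro console=ttyS0,115200n8 rootdelay=30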
If this is the sort of thing you could have figured out yourself, we’re always happy to accept CVs at our jobs page. If this scares you, we’d suggest our managed hosting, where we do these bits for you.
At a customer’s request we’ve added a programmatic API for updating DNS records stored on our primary DNS servers. This is immediately available to all customers with a domain purchased from us, at no extra charge. You can find the instructions on our support pages under Primary DNS API.
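As a sketch of the kind of scripting this enables (the endpoint and parameter names below are made up for illustration; the real ones are documented on the support pages):

# Placeholder endpoint and parameters, for illustration only
curl --data 'domain=example.com' --data 'password=SECRET' \
     --data-urlencode 'command=REPLACE www 300 A 93.184.216.34' \
     https://dnsapi.example.com/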
On Sunday I dropped in at the Debian Barbeque and provided a firkin of Pegasus from the Milton Brewery. Thanks to all the Debian developers for their hard work and have a pint on us.
Yesterday there was, we believe, a power failure in Telehouse North. Mythic Beasts don’t have any equipment located in Telehouse, but the effects were quite noticeable.
Two internet exchanges, LONAP and LINX, were affected. The LONAP looking glass and traffic graph tell us that LONAP saw all of the peers located in Telehouse North disconnect.
Lonap Traffic Graph
We don’t believe that LINX was directly affected by the power failure, but all sessions on the Brocade LAN were reset and brought up slowly over the course of about an hour, as you can see from the looking glass.
LINX Looking glass for the Brocade LAN
The Extreme LAN, by contrast, wasn’t affected at all.
LINX Looking glass for the Extreme LAN
LINX Traffic Graph
Mythic Beasts saw no overall change in our traffic levels; we escaped unscathed.
Mythic Beasts Total Traffic
We did, however, see a brief drop on LONAP as various high-bandwidth peers in Telehouse North disconnected.
Mythic Beasts LONAP Traffic
We didn’t see any measurable effect over Edge-IX (this traffic pattern is normal for this time of day).
Mythic Beasts Edge-IX Traffic
Mythic Beasts doesn’t currently peer directly on LINX, but we have two partial transit suppliers that do. Partial transit suppliers provide us with routes only from their peering partners, so when they lose contact with a peer we stop being able to route traffic to that network through them.
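For example, on a router running the BIRD routing daemon (used here purely as an illustration, with a made-up prefix), you can see which feeds still carry a route to a given network:

# made-up prefix; lists every source we have learnt this route from
birdc show route for 198.51.100.0/24 all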
Our first partial transit supplier has 10G into the LINX Brocade LAN, 1G into the LINX Extreme LAN and 2G into LONAP, plus private peers.
Mythic Beasts Transit 1
Our second partial transit supplier has 10G into LINX Brocade, 10G into LINX Extreme, 10G into AMS-IX, 2.5G into DE-CIX, 2G into LONAP and 1G into Edge-IX, plus private peers.
Mythic Beasts Transit 2
We take partial transit from two suppliers, one in Telecity HEX 6/7 and one in Telecity MER. Whilst this is more expensive than a single supplier or joining LINX ourselves, we’ve always felt that the additional redundancy was worth paying extra for. We discovered today that one partial transit supplier has almost no redundancy in the event of a failure of the LINX Brocade LAN. We’ve brought this up with the supplier in question and will be pressuring them to add resiliency to their partial transit service. We do intend to join LINX, but when we do so we’ll join both peering LANs from different data centres to maximise our resiliency.
We’re pleased to announce that, as a result of tonight’s connectivity changes to our core network, all four data centres now have IPv6 connectivity available. In the coming weeks we’ll be contacting all our peering partners to enable direct IPv6 peerings where possible to improve our IPv6 connectivity.
If any customers would like to enable IPv6 on their colocated, dedicated or virtual servers please contact us and we’ll allocate you an address range and provide you with connectivity details.
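Once allocated, bringing an address up on a typical Linux server only takes a couple of commands. The addresses below use the IPv6 documentation prefix and are illustrative only; use the range and gateway we give you.

# illustrative addresses from the documentation prefix
ip -6 addr add 2001:db8:86::2/64 dev eth0
ip -6 route add default via 2001:db8:86::1
# verify: yali.mythic-beasts.com has an AAAA record
ping6 yali.mythic-beasts.com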
Until the end of August 2010, all IPv6 bandwidth will be free.
Our Cambridge hosting centre has two physically redundant private circuits linking it back to our network core in Telecity Sovereign House and Telecity Harbour Exchange. We’re pleased to report that we’ve now completed upgrades on both links increasing the available bandwidth to Cambridge customers by a factor of 2.5.
As a result we have increased all standard bandwidth customers to 250GB bandwidth quotas, and now offer higher bandwidth packages for dedicated and colocated customers in Cambridge.