[Image: A UX team meeting]

In the first part of our four-part series about the tech that runs Potato, I laid out our ethos and how it has affected our tech choices. The main points were:

  • Security first
  • Everything must be accessible from everywhere with an internet connection
  • Potato owns no servers and will probably never buy a physical server
  • Infrastructure and software shouldn’t get in the way of getting stuff done

Here, I'll discuss Potato's physical infrastructure.

We have two offices where we provide the infrastructure: London (65 people, up to ~250 event attendees) and Bristol (35 people). Employees also work from our clients' and partners' offices around the world (mainly in California), plus from home, at conferences and from ad-hoc locations wherever they happen to be.

Wifi makes our world go round:

Some 98% of our office bandwidth is wifi-based, so it's essential to keep it working well. We use Ubiquiti's UniFi UAP-AC; we have nine of these devices placed strategically around our 7,000 sq ft London office.

Bristol uses the same devices but, being a smaller office, requires only two. We've found these access points hugely useful and scalable; they are controller-less, fast, cheap, regularly patched and simple to expand.

In both of our offices, the wifi SSIDs and passwords are the same, so when our staff travel between the two they only have to walk into the office and they're connected. We have also set up guest networks, again with the same name and password. The guest networks treat each connection in isolation, which protects our staff and other guests from any unwanted snooping.

Our London office is in a busy part of the city, so we use the less-congested 5GHz band for our main wifi network, but we offer a separate 2.4GHz network on its own SSID for legacy devices, for both staff and guests.
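To make that layout concrete, here's a minimal sketch of how the SSIDs line up across the two offices. The names and flags are hypothetical, not our actual configuration:

    # Hypothetical sketch of our SSID layout -- names and values are
    # illustrative, not our real configuration.

    WIFI_NETWORKS = [
        # Main staff network: 5GHz only; the same credentials in both
        # offices, so laptops roam between London and Bristol untouched.
        {"ssid": "potato", "band": "5GHz", "guest": False, "isolation": False},
        # Separate 2.4GHz network for legacy devices.
        {"ssid": "potato-legacy", "band": "2.4GHz", "guest": False, "isolation": False},
        # Guest networks: same name/password in both offices, with client
        # isolation so visitors can't snoop on each other or on staff.
        {"ssid": "potato-guest", "band": "5GHz", "guest": True, "isolation": True},
        {"ssid": "potato-guest-legacy", "band": "2.4GHz", "guest": True, "isolation": True},
    ]

    for net in WIFI_NETWORKS:
        role = "guest" if net["guest"] else "staff"
        print(f'{net["ssid"]:20} {net["band"]:7} {role:6} isolation={net["isolation"]}')

The point of the design is that a device configured in one office needs nothing extra to work in the other.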

High bandwidth, low latency:

Having no servers in our offices means we rely heavily on cloud-based services, so everyone requires a larger-than-average amount of bandwidth.

[Animated gif. Source: C-SPAN]

Storing design files in Google Drive requires fast movement of large files; deployment to Google App Engine requires fast movement of thousands of small files.
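Those two workloads stress the connection differently: big files are limited by bandwidth, while thousands of small files are limited by latency, since each file costs at least one round trip. A rough back-of-the-envelope sketch (every figure here is an illustrative assumption, not a measurement):

    # Why latency matters as much as bandwidth when deploying
    # thousands of small files. All figures are assumptions.

    NUM_FILES = 5000          # assumed size of a typical deployment
    RTT_SECONDS = 0.02        # assumed 20ms round trip on a fibre line
    CONCURRENT_UPLOADS = 32   # assumed parallelism in the deploy tool

    # If every file costs at least one round trip, the serial lower bound is:
    serial = NUM_FILES * RTT_SECONDS
    # With parallel uploads the round trips overlap:
    parallel = serial / CONCURRENT_UPLOADS

    print(f"serial:   {serial:6.1f}s")    # ~100s spent purely on round trips
    print(f"parallel: {parallel:6.1f}s")  # ~3s with 32 concurrent uploads

Double the round-trip time and both numbers double, regardless of how fat the pipe is.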

The ideal combo is high bandwidth in full duplex and low latency, and fibre is best for this. In London we have a 250mbps leased line provided by Venus, which is on a 1gbps bearer, so it’s straightforward to upgrade if we decide we need more.

We’re averaging about 25% capacity currently, but there’s plenty of extra for busy times. In Bristol we make use of the city’s gigabit circuit. Unlike the leased line in London, there is some contention with other businesses on the network, but the effect is negligible.

We can’t operate without an internet connection:

This is the biggest drawback to relying on cloud-based services: no internet connection means an office full of forlorn-looking coffee drinkers.

[Animated gif from The IT Crowd. Source: FremantleMedia]

Both of our offices, therefore, have a secondary internet connection. In London we have four 16mbps bonded ADSL lines provided by Eclipse, and in Bristol we have Virgin’s 200mbps business offering.

These are the fastest options available at the two office locations without getting fibre to the premises. In both offices we have a Ubiquiti EdgeRouter configured to use the secondary connection as a failover.
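The EdgeRouters handle the failover natively, so we don't run anything like the following; purely to illustrate the logic, here's a hedged Python sketch of a WAN health-check loop (the probe host, thresholds and intervals are all made up, and the ping flags are Linux-style):

    import subprocess
    import time

    # Illustrative sketch of WAN failover logic -- the EdgeRouters do this
    # natively; nothing here is our actual configuration.

    PRIMARY_PROBE = "8.8.8.8"  # host probed via the fibre line (assumed)
    CHECK_INTERVAL = 10        # seconds between health checks
    FAIL_THRESHOLD = 3         # consecutive failures before failing over

    def link_up(probe_host: str) -> bool:
        """Return True if a single ping to probe_host succeeds."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", probe_host],
            capture_output=True,
        )
        return result.returncode == 0

    failures = 0
    on_backup = False
    while True:
        if link_up(PRIMARY_PROBE):
            failures = 0
            if on_backup:
                print("fibre restored; switching back to primary")
                on_backup = False
        else:
            failures += 1
            if failures >= FAIL_THRESHOLD and not on_backup:
                print("fibre down; failing over to secondary")
                on_backup = True
        time.sleep(CHECK_INTERVAL)

Requiring several consecutive failures before switching avoids flapping between connections on a single dropped packet.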

[Image: A developer working]

The ADSL lines give us a minimum viable solution at an acceptable cost, keeping the office running through a fibre outage, but they are unsustainable for longer periods, particularly because of higher latency and severely restricted upload speed.
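To put numbers on that, compare moving a large design file over each connection. The fibre figure comes from above; the ADSL upstream figure is an assumption (ADSL upstream is typically around 1mbps per line):

    # Why the ADSL backup is only viable short-term: upload speed.
    # Upstream figures are assumptions, not measurements.

    FILE_MB = 500            # a large design file, for illustration
    FIBRE_UP_MBPS = 250      # full-duplex leased line
    ADSL_UP_MBPS = 4 * 1.0   # four bonded lines at ~1mbps upstream each

    def transfer_seconds(size_mb: float, mbps: float) -> float:
        """Time to move size_mb megabytes at mbps megabits per second."""
        return size_mb * 8 / mbps

    print(f"fibre: {transfer_seconds(FILE_MB, FIBRE_UP_MBPS):7.0f}s")  # ~16s
    print(f"ADSL:  {transfer_seconds(FILE_MB, ADSL_UP_MBPS):7.0f}s")   # ~1000s

Roughly 16 seconds becomes roughly 17 minutes: fine for a day, painful for a month.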

In London there is still a small risk of an outage: the ADSL backup lines are connected to the same exchange, and enter the same building, as the fibre, albeit on separate infrastructure. If we wanted to be even safer we would have to consider a separate connection into another exchange.

Summary:

Every piece of custom configuration added to a network increases the security risk and the maintenance overhead. Having no servers in our offices keeps our network setup simple.

There’s no need for anyone to connect to one of our offices from outside, so our connectivity concerns are not much more complex than a home network.

[Image: A blow-up Darth Vader doll in the office]

We have redundancy on the internet connections to keep things running, and our hardware choices, particularly with the wifi, allow us to scale up quickly when needed.

The entire physical infrastructure is about providing a fast, secure and reliable internet connection to everyone in our offices, and doing that well makes it easy to work from anywhere else in the world too.