An AMS-IX Story
August 29, 2019
The management network is our internal AMS-IX network that interconnects all our infrastructure devices: production systems, dense wavelength-division multiplexing equipment (DWDM), photonic cross-connects (PXCs) and so on. It is also used for managing our production platform switches and for collecting customer statistics as well as flow data.
The management network has to be extremely reliable, since our monitoring system depends on it and, on top of that, our users also go through it when accessing our website or the my.ams-ix portal.
A couple of plain facts about our current management network setup: we have 15 PoPs, including our Amsterdam office, with 22 switches in total in the Amsterdam area. We also have switches at remote locations (Curaçao, the Bay Area, Hong Kong, Chicago and New York), which add 10 more. Our management network grew over the years by adding devices side by side, which is why we run a mix of L2 devices: mostly Brocade FCX, FES, FGS and ICX, and even Foundry devices, brands that are now part of the Ruckus Wireless/ARRIS Group portfolio.
We were using a ring topology, which had two limitations: these devices had no loop prevention mechanism, and a double fiber cut could isolate part of the network.
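To make the loop problem concrete, here is a minimal Python sketch (a hypothetical model, not our actual tooling): in a ring of L2 switches, a flooded broadcast frame keeps circulating unless some mechanism, such as a spanning-tree protocol, blocks one link.

```python
def flood_hops(n_switches, blocked_link=None, max_hops=100):
    """Count link traversals of a broadcast flooded clockwise around a ring.

    Without a blocked link the frame re-enters its origin and keeps
    circulating (capped here at max_hops); blocking a single link,
    as a spanning-tree protocol would, breaks the cycle.
    """
    pos, hops = 0, 0
    while hops < max_hops:
        nxt = (pos + 1) % n_switches
        if blocked_link in ((pos, nxt), (nxt, pos)):
            break  # the frame is not forwarded over a blocked link
        pos, hops = nxt, hops + 1
    return hops

print(flood_hops(6))                       # no loop prevention: hits the cap (100)
print(flood_hops(6, blocked_link=(2, 3)))  # one blocked link: frame dies after 2 hops
```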
As part of our commitment to continuously optimize our infrastructure, we decided to upgrade our management network.
It is also really hard to manage a variety of devices running different software versions, so we aimed to make the environment more unified. Newer devices would also give us more capacity and more redundancy for the internal cloud replication of our virtual machines (VMs).
We migrated to a DWDM solution, which in the end leaves us with only 30 dark fibers between our core and the other locations. On top of those, we had 17 separate fibers just for the management network, which was too much. That is why we decided to use DWDM muxes (passive and active) to build a fully redundant spine-leaf fabric for our management network. The solution we came up with was to use a couple of DWDM colors (colored optics) and connect the switches to them. By decommissioning the 17 dark fibers between our sites, we also reduced costs.
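Colored optics follow the standard ITU DWDM grid, where each "color" is a fixed frequency offset from a 193.1 THz anchor. A small Python sketch of the 100 GHz grid defined in ITU-T G.694.1 (illustrative only; the channel numbers here are not our actual assignment):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def itu_channel(n):
    """Frequency (THz) and wavelength (nm) of 100 GHz grid channel n,
    anchored at 193.1 THz per ITU-T G.694.1."""
    f_thz = 193.1 + 0.1 * n
    wavelength_nm = C / (f_thz * 1e12) * 1e9
    return f_thz, wavelength_nm

for n in (-2, 0, 2):
    f, wl = itu_channel(n)
    print(f"channel {n:+d}: {f:.1f} THz ~ {wl:.2f} nm")
```

Each mux port is tuned to one such wavelength, which is how several management links share a single fiber pair.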
The next step was to determine which technology, brand and software to use.
Ten years ago there were not that many options, but nowadays it is difficult to choose the most suitable solution because so many are available. As always, AMS-IX aims to be on the networking edge, and after thorough research we decided to go with a future-proof option: bare-metal white-box switches combined with a third-party software vendor.
The bare-metal white-box concept is all about decoupling software from hardware. When you buy such a switch, you buy it without software, i.e. without any operating system, and you can install on it anything that exists on the market. There are a couple of companies offering network operating systems, with more appearing all the time: Pluribus Networks, Cumulus, Big Switch, IP Infusion, and Google and Facebook have projects in this area as well. You install the OS on the switch just as you would on a server or a personal computer (Linux, Windows, or anything else); in effect you get a PC with a lot of network interfaces. You can also use the OpenSwitch platform, an open-source, Linux-based network operating system (NOS).
As you may know, we chose VXLAN plus Dell EMC, with Pluribus Networks as the software vendor. We used Dell S4048 ONIE-enabled switches for this project. Pluribus Networks gives us a lot of useful features: ECMP, loop prevention, a fast REST API, VXLAN, and their proprietary fabric design. Regarding the fabric: in the classic switch design, every switch has its own data plane at the bottom, a control plane above it, and a management plane on top, so each device is self-contained. In the Pluribus fabric concept, the data plane and control plane stay exactly the same and the devices remain fully independent, even if the management plane crashes, but the management plane is shared. This means that once you have logged in on any of the devices, you can manage the whole fabric. It is really convenient, especially when you are troubleshooting, because you can see live statistics from different switches in the fabric immediately in the same CLI session.
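VXLAN itself is simple at the packet level: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), wrapped in UDP, so L2 segments can be stretched over an L3 underlay. A minimal Python sketch of the header layout from RFC 7348 (illustrative only, not Pluribus code):

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def vxlan_vni(header):
    """Extract the 24-bit VNI from an 8-byte VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = vxlan_header(10042)
print(len(hdr), vxlan_vni(hdr))  # 8 10042
```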
Here is an attempt to show what it looks like from the inside. As mentioned, the switch itself is a computer: an Intel Atom board paired with the main switch board. On older devices these were two separate boards; the bigger board is the switch with the ASIC, and the upper board is just the Intel Atom, essentially a PC. Some of them even had a VGA connector, so you could plug in a monitor and install anything you wanted. Nowadays it is a single board.
In the Pluribus solution, Ubuntu Linux is installed as the operating system, with an LXC container providing the L3 functionality.
One of the most important features we need in our network is redundancy. For the critical infrastructure (NAS replication, servers, the statistics collector, the web server and the my.ams-ix portal), this solution supports MLAG, also called MC-LAG or VLAG depending on the terminology. For this, we have a cluster in the upper part, the spine cluster, and we can connect non-VLAG devices in between, represented as a non-Pluribus Networks switch, or in the leaf cluster, where the connected NAS and VMware/KVM servers sit. In the end, our management network looks like a pure spine-leaf topology.
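In a spine-leaf fabric, ECMP is what spreads traffic over the redundant uplinks: each flow is hashed on its 5-tuple and pinned to one spine, so packets within a flow stay in order while flows balance across paths. A hedged Python sketch of the idea (switch ASICs use hardware hash functions; the function and field choices here are illustrative):

```python
import hashlib

def ecmp_uplink(src_ip, dst_ip, proto, src_port, dst_port, n_uplinks):
    """Pick an uplink for a flow by hashing its 5-tuple.

    The same flow always maps to the same uplink (no packet reordering),
    while different flows spread across the available paths.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_uplinks

flow = ("10.0.1.5", "10.0.2.9", "tcp", 49152, 443)
a = ecmp_uplink(*flow, n_uplinks=4)
b = ecmp_uplink(*flow, n_uplinks=4)
print(a == b)  # True: the flow is pinned to one spine uplink
```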
Acknowledgement: I would like to thank my colleague Maxx Cherevko (Network Architect at AMS-IX) with whom I worked on this project.
Photo credit: Tom Visbeek