Network & Datacentre
Dual-stack, carrier-neutral fabric in the UK Midlands.
Built for resilience: dual transit into dual Arista cores, dual VyOS edge routers, and segmented IPMI/10G fabrics in a Tier III-aligned facility.
Carrier-neutral with diverse fibre entries and multiple transit/peering options.
- Diverse paths into dual Arista cores
- IPv4 + IPv6 across all services
Remote hands, access control, CCTV, VESDA, and 24/7 engineering coverage.
- 24/7 staffed NOC/security
- Structured change windows
Midlands facility highlights
Carrier-neutral, independently operated DC near Birmingham with resilient power, cooling, and security—aligned to Tier III design principles.
Power
N+1 UPS, on-site diesel generators, and dual power paths to racks for clean failover.
Cooling
N+1 cooling plant with hot/cold aisle containment to maintain stable thermal envelopes.
Security
24/7 CCTV, access control, mantraps, and audited visitor procedures.
Fire protection
VESDA early detection with gas suppression to protect equipment and uptime.
Carrier-neutral
Multiple carriers with diverse fibre entry; transit and peering available upon request.
Redundant core to edge flow
Dual transit feeds dual Arista 7050SX cores. Dual VyOS routers connect to both cores for edge services and BGP. Downstream fabrics split for IPMI and production traffic.
Transit ingress
Two independent transit providers land on separate physical paths: Transit A uplinks into 7050SX-A and Transit B into 7050SX-B, giving provider diversity at the core layer.
Core switching
A pair of Arista 7050SX switches acts as the dual core, joined by an MLAG/MC-LAG-style interconnect for east-west resilience.
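As a rough illustration of how such a core pairing is expressed on Arista EOS, the sketch below shows the usual MLAG building blocks. The domain ID, VLAN, port-channel number, and addresses are placeholders, not the live configuration:

```shell
! Illustrative Arista EOS MLAG peering sketch (all values are placeholders)
vlan 4094
   trunk group mlag-peer          ! dedicated peer VLAN, kept off ordinary trunks
interface Port-Channel10
   description MLAG peer link to the other 7050SX
   switchport mode trunk
   switchport trunk group mlag-peer
interface Vlan4094
   ip address 10.255.255.1/30     ! the peer core uses 10.255.255.2/30
mlag configuration
   domain-id CORE
   local-interface Vlan4094
   peer-address 10.255.255.2
   peer-link Port-Channel10
```

Downstream devices then see the two cores as a single logical switch for their port-channels, which is what allows dual-homing without spanning-tree blocking either uplink.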
Edge routing
Two VyOS hardware routers, each cabled to both 7050SX cores, providing BGP, policy, and gateway services.
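A minimal VyOS BGP configuration for one edge router could look like the following sketch. The AS numbers, neighbour addresses, and announced prefix are placeholders, and the exact command tree varies slightly between VyOS releases:

```shell
# VyOS 1.4-style BGP sketch (ASNs, addresses, and prefix are placeholders)
set protocols bgp system-as '64512'
set protocols bgp neighbor 192.0.2.1 remote-as '64496'         # Transit A
set protocols bgp neighbor 192.0.2.1 description 'TRANSIT-A'
set protocols bgp neighbor 198.51.100.1 remote-as '64497'      # Transit B
set protocols bgp neighbor 198.51.100.1 description 'TRANSIT-B'
set protocols bgp address-family ipv4-unicast network '203.0.113.0/24'
```

With both routers announcing the same prefixes to both transits, the withdrawal of one session leaves a working exit path without manual intervention.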
Management fabric
IPMI/OOB on Arista 7010T with dual uplinks (one to each 7050SX) to keep OOB reachable during maintenance.
Production fabric
Arista 7050TX edge (10G Ethernet + 4x40G uplinks) dual-homed to the 7050SX cores for customer traffic.
Segmentation
Separate VLANs/VRFs for IPMI, customer, and routing services; rDNS and BGP self-service available.
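One way this separation is commonly expressed on VyOS is VLAN sub-interfaces bound to named VRFs; the interface names, VLAN IDs, table numbers, and addresses below are illustrative only:

```shell
# Illustrative VyOS VLAN/VRF separation (names, IDs, and addresses are placeholders)
set vrf name OOB table '100'
set interfaces ethernet eth1 vif 100 description 'IPMI-OOB'
set interfaces ethernet eth1 vif 100 vrf 'OOB'
set interfaces ethernet eth1 vif 100 address '10.10.0.1/24'
set interfaces ethernet eth1 vif 200 description 'CUSTOMER'
set interfaces ethernet eth1 vif 200 address '203.0.113.1/24'
```

Keeping IPMI in its own VRF means a routing mistake on the customer side cannot leak traffic into the management plane.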
How the links are wired
High-level link layout for resilience and predictable maintenance.
Transit → Cores
Transit A lands on 7050SX-A and Transit B lands on 7050SX-B, providing provider diversity while keeping the physical pathing simple.
Cores → VyOS
Each VyOS router has a dedicated uplink to each 7050SX core (A/B), keeping the routing layer reachable through any single-switch event.
Cores → IPMI
Arista 7010T for IPMI/OOB with two uplinks—one into each 7050SX core.
Cores → 10G edge
Arista 7050TX (10G + 4x40G) dual-homed to the 7050SX pair for production traffic fan-out to racks.
Gateway placement
Default gateway services delivered via VyOS, reachable through either core; VRRP/anycast-style failover for stability.
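A VRRP gateway on one of the VyOS routers might be sketched as follows. The group name, VRID, interface, and addresses are placeholders, and the keyword for the virtual address differs slightly between VyOS releases:

```shell
# VRRP gateway sketch on one VyOS router (all values are placeholders)
set high-availability vrrp group CUSTOMER-GW vrid '10'
set high-availability vrrp group CUSTOMER-GW interface 'eth1.200'
set high-availability vrrp group CUSTOMER-GW priority '200'   # peer router runs a lower priority
set high-availability vrrp group CUSTOMER-GW address '203.0.113.1/24'
```

The second router carries the same group with a lower priority, so the shared gateway address moves automatically if the primary router or its uplinks fail.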
Network topology diagram
This simplified core view highlights the parts that matter most for uptime: transit diversity, a redundant MLAG core, and redundant routing. Transit A uplinks into 7050SX-A and Transit B uplinks into 7050SX-B. Each VyOS router has one link to each core switch, and the core pair is joined by an MLAG peer link.
Why this design and how failover works
The goal is to avoid single points of failure and keep maintenance events non-disruptive. MLAG keeps the switching layer redundant, while the two VyOS routers provide redundant routing/BGP so that loss of any single device or uplink does not strand the network.
Failure scenarios (examples):
Transit failure: If Transit A fails, routes learned via Transit A withdraw and traffic exits via Transit B (and vice-versa).
Single core switch failure: If 7050SX-A or 7050SX-B fails, the remaining core continues forwarding. Links to the failed core drop and traffic reconverges across the remaining core.
Single VyOS failure: If VyOS-1 or VyOS-2 fails, the remaining router continues routing/BGP and the network remains reachable through its uplinks to both cores.
MLAG peer-link events: If the MLAG peer link is interrupted, the core pair is designed to prevent split-brain behaviour; depending on the exact failure mode, some paths may be suspended to preserve correctness until connectivity is restored.
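During or after any of these events, reconvergence can be confirmed with standard operational commands; a hedged example assuming VyOS on the edge routers and EOS on the cores:

```shell
# On a VyOS edge router: confirm both transit sessions are Established
# and check which router currently holds the VRRP gateway
show ip bgp summary
show high-availability vrrp

# On an Arista core: confirm the MLAG pair is active-active
# and that dual-homed port-channels are up on both peers
show mlag
show mlag interfaces
```

Checking both layers matters: a BGP session can be healthy while an MLAG member link is down, and vice versa.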