<!-- www/content/network/design.md (Simon Marsh, 2022-01-04) -->

---
title: Network Design
geekdocDescription: burble.dn42 network design
weight: 0
---

This page documents some key elements of the current burble.dn42 design.

## Tunnel Mesh


Hosts within the burble.dn42 network are joined using an IPsec/L2TP mesh. Static, unmanaged L2TP tunnels operate at the IP level and are configured to form a full mesh between nodes. IPsec in transport mode protects the L2TP protocol traffic.

Using L2TP allows for a large virtual MTU of 4310 bytes between nodes; this value is chosen to spread the encapsulation costs of the higher layers across packets. L2TP also allows multiple tunnels between hosts, and these are sometimes used to separate low-level traffic without incurring the additional overhead of VXLANs (e.g. for NFS cross-mounting).

The network also supports point-to-point WireGuard tunnels in place of the IPsec/L2TP mesh. In this case, the large in-tunnel MTU requires UDP fragmentation support between the hosts.

Network configuration on hosts is managed by systemd-networkd.
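As a concrete illustration, a networkd-managed L2TP tunnel definition might look roughly like the sketch below. This is an assumption-laden example: the interface names, tunnel/session IDs and endpoint addresses are all hypothetical, and the real deployment may structure its units differently.

```ini
# /etc/systemd/network/l2tp-nodeb.netdev (illustrative only; all names,
# IDs and addresses are invented)
[NetDev]
Name=l2tp-nodeb
Kind=l2tp
MTUBytes=4310

[L2TP]
TunnelId=10
PeerTunnelId=10
EncapsulationType=ip
Local=192.0.2.1
Remote=192.0.2.2

[L2TPSession]
Name=l2tps-nodeb
SessionId=1
PeerSessionId=1
```

The `EncapsulationType=ip` setting matches the design above: the tunnels run directly over IP (protected by IPsec in transport mode) rather than over UDP.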

## BGP EVPN

*(EVPN diagram)*

Overlaying the IPsec/L2TP mesh is a set of VXLANs managed by a BGP EVPN.

The VXLANs are primarily designed to tag and isolate transit traffic, making their use similar to MPLS.
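A minimal FRR sketch of how a host might participate in such an EVPN is shown below. This is illustrative only, not the actual configuration: the ASN and route-reflector address are invented.

```
# frr.conf sketch (hypothetical ASN and addresses)
router bgp 64512
 neighbor 172.20.0.1 remote-as 64512
 neighbor 172.20.0.1 update-source lo
 !
 address-family l2vpn evpn
  neighbor 172.20.0.1 activate
  advertise-all-vni
 exit-address-family
```

With `advertise-all-vni`, FRR advertises the locally configured VXLAN VNIs into the EVPN, which is what lets the VXLANs span the mesh without static flooding lists.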

The Babel routing protocol is used to discover loopback addresses between nodes; Babel runs across the point-to-point L2TP tunnels with a static, latency-based metric that is applied during deployment.
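As a sketch, a Babel setup of this kind in Bird2 might look as follows; the interface pattern and cost value are hypothetical, and the real deployment may use a different daemon or options.

```
# bird2 sketch: Babel over the L2TP mesh (pattern and cost are invented)
protocol babel loopbacks {
  ipv6 { import all; export where source = RTS_DEVICE; };
  interface "l2tp-*" {
    type wired;
    rxcost 128;   # static cost, derived from measured latency at deploy time
  };
}
```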

The BGP EVPN uses FRR, with two global route reflectors located on different continents for redundancy. Once encapsulation overheads are taken into account, the MTU within each VXLAN is 4260.
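The 4260 figure is consistent with the standard 50-byte cost of VXLAN-over-IPv4 encapsulation, as the short calculation below shows (the header sizes are the standard ones; the breakdown is an inference, not taken from the deployment):

```python
# Standard VXLAN-over-IPv4 encapsulation overhead, per packet
OUTER_IPV4 = 20  # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN = 8        # VXLAN header
INNER_ETH = 14   # inner Ethernet header (untagged)

overhead = OUTER_IPV4 + OUTER_UDP + VXLAN + INNER_ETH

l2tp_mtu = 4310             # MTU of the underlying L2TP mesh
vxlan_mtu = l2tp_mtu - overhead
print(overhead, vxlan_mtu)  # 50 4260
```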

## dn42 Core Routing

*(EVPN diagram)*

Each host in the network runs an unprivileged LXD container that acts as a dn42 router for that host. The container uses Bird2 and routes between dn42 peer tunnels, local services on the same node and transit to the rest of the burble.dn42 network via a single dn42 core VXLAN.

Local services and peer networks are fully dual-stack IPv4/IPv6; the transit VXLAN, however, uses purely IPv6 link-local addressing, relying on the BGP multiprotocol and extended next hop capabilities to carry IPv4 routes.
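In Bird2, an IPv4-over-IPv6 session of this shape might be configured roughly as below; the ASN, neighbour address and interface name are hypothetical:

```
# bird2 sketch (hypothetical ASN, neighbour and interface)
protocol bgp core_transit {
  local as 4242420000;
  neighbor fe80::1%vxcore as 4242420000;
  ipv4 {
    import all;
    export all;
    extended next hop on;  # RFC 8950: IPv4 routes with IPv6 next hops
  };
  ipv6 { import all; export all; };
}
```

The `extended next hop` channel option is what allows the session to exchange IPv4 routes even though the transit VXLAN itself carries no IPv4 addressing.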

The transit VXLAN and the burble.dn42 services networks use an MTU of 4260; however, the dn42 BGP configuration includes internal communities that distribute destination MTU across the network, enabling per-route MTUs. This helps ensure path MTU discovery takes place as early and as efficiently as possible.
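One plausible way to act on such a community, assuming Bird2's Linux kernel route attributes, is sketched below; the community scheme and the MTU value are invented for illustration and are not the deployment's actual convention:

```
# bird2 sketch: set a per-route kernel MTU from an internal community
# (community values and MTU are hypothetical)
protocol kernel {
  ipv6 {
    export filter {
      if (64511, 1420) ~ bgp_community then krt_mtu = 1420;
      accept;
    };
  };
}
```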

Local services on each host are provided by LXD containers or VMs connecting to internal network bridges.
These vary across hosts but typically include:

- tier1 - used for publicly available services (DNS, web proxy, etc.)
- tier2 - used for internal services, with access restricted to burble.dn42 networks

Other networks might include:

- dmz - used for hosting untrusted services (e.g. the shell servers)
- dn42 services - for other networks, such as the registry services

dn42 peer tunnels are created directly on the host and then injected into the container using a small script, allowing the router container itself to remain unprivileged.

The routers also run nftables for managing access to each of the networks, bird_exporter for metrics, and the bird-lg-go proxy for the burble.dn42 looking glass.

## Host Configuration

*(EVPN diagram)*

burble.dn42 nodes are designed to have the minimum functionality at the host level, with all major services being delivered via virtual networks, containers and VMs.

Hosts have three main functions:

- connecting into the burble.dn42 IPsec/L2TP mesh and BGP EVPN
- providing internal bridges for virtual networks
- hosting LXD containers and VMs

Together, these three capabilities allow arbitrary, isolated networks and services to be created and hosted within the network.

The hosts also provide a few ancillary services:

- delivering clearnet access for internal containers/VMs using an internal bridge. The host manages addressing and routing for the bridge to allow clearnet access independent of the host's capabilities (e.g. proxied vs routed IPv6 connectivity)
- creating dn42 peer tunnels and injecting them into the dn42 router container
- monitoring via netdata
- backups using borg