
NFS Storage

Bridge supports vanilla NFS as a storage option for deployments that do not require a dedicated high-performance storage appliance. Bridge configures both the NFS server and NFS clients automatically — server setup is handled by the Bridge storage controller, and client installation is handled by the compute post-provisioning controller.

Integration Architecture

The Bridge NFS controller runs an Ansible-based provisioning flow to install and configure the NFS server on a designated Linux storage node. No external storage management API is required.

Bridge NFS Controller (K8s)

▼ SSH + Ansible
NFS Server (Ubuntu 22.04, nfs-kernel-server)

▼ NFS (TCP over tenant storage network)
Compute Nodes (NFS clients)

Server Requirements

The NFS storage server must be a Linux machine running Ubuntu 22.04. Bridge's storage controller runs a storage preparation Ansible playbook on the server that:

  1. Installs nfs-kernel-server and all required dependencies.
  2. Creates per-tenant export directories on the server.
  3. Configures /etc/exports with the tenant's storage subnet CIDR, restricting NFS access to the correct tenant network.
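The export entry the playbook writes might look like the following sketch. The tenant name, export root, subnet, and mount options are illustrative placeholders, not confirmed Bridge values:

```shell
# Build a per-tenant /etc/exports entry (hypothetical helper; the mount
# options shown are common NFS defaults, not confirmed Bridge settings).
make_export_line() {
  local dir=$1 cidr=$2
  printf '%s %s(rw,sync,no_subtree_check,root_squash)' "$dir" "$cidr"
}

make_export_line /srv/nfs/tenant-a 10.20.0.0/24
# → /srv/nfs/tenant-a 10.20.0.0/24(rw,sync,no_subtree_check,root_squash)

# The playbook would append this line to /etc/exports and then run
# `exportfs -ra` (as root) to apply the new export table.
```

Scoping the entry to the tenant's storage subnet CIDR means the kernel NFS server rejects mount requests from any address outside that range.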

Client Setup

As part of compute allocation, Bridge's post-provisioning controller installs the NFS client (nfs-common) and all required dependencies on each compute node assigned to a tenant with NFS storage.

The NFS mount is created over the tenant's isolated storage network, ensuring that NFS traffic from different tenants is carried on separate subnets and VRFs.
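The client-side steps are roughly equivalent to the following sketch. The server address, export path, and mount point are illustrative, and NFSv4 over TCP is an assumption:

```shell
# Hypothetical equivalent of the post-provisioning client setup.
# Run as root on each compute node.
apt-get update && apt-get install -y nfs-common

mkdir -p /mnt/tenant-share
# Mount over the tenant's isolated storage network.
mount -t nfs4 -o proto=tcp 10.20.0.5:/srv/nfs/tenant-a /mnt/tenant-share

# To persist the mount across reboots, an /etc/fstab entry such as:
# 10.20.0.5:/srv/nfs/tenant-a  /mnt/tenant-share  nfs4  proto=tcp,_netdev  0  0
```

The `_netdev` option defers mounting until the network is up, which matters here because the storage network interface must be configured before the NFS export is reachable.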

Storage Network Isolation

Bridge creates a dedicated per-tenant storage network segment for NFS:

  • On Spectrum (Ethernet) fabrics: a per-tenant storage VRF is created on the leaf switch interfacing with the NFS server, with route leaks configured between the tenant's storage VRF and the NFS server's VRF.
  • The NFS server's /etc/exports file is configured with the tenant's storage subnet CIDR, limiting NFS mount access to the correct tenant IP range.

This two-layer isolation (fabric VRF + NFS export restriction) ensures that one tenant cannot mount another tenant's NFS export.
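One way to spot-check the export-level restriction from a compute node is sketched below; the server address and tenant names are illustrative:

```shell
# List the server's exports and the client ranges each is visible to
# (requires nfs-common for showmount).
showmount -e 10.20.0.5

# A mount attempt against another tenant's export is refused, because the
# client's address falls outside that export's allowed CIDR:
mount -t nfs4 10.20.0.5:/srv/nfs/tenant-b /mnt/test \
  || echo "mount denied: client is outside tenant-b's storage subnet"
```

Note that `showmount` only reports the export list; the actual access decision is enforced by the server when the mount is attempted, and the fabric VRF layer blocks cross-tenant traffic before it ever reaches the server.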

Storage Allocation

Tenants allocate NFS storage through the Bridge UI:

| Storage Type | Supported |
| --- | --- |
| Parallel filesystem share (NFS mount) | Yes |
| S3 object storage | No |

Parallel filesystem share flow:

  1. Tenant requests a storage share from the Bridge UI, specifying share size and target compute nodes.
  2. Bridge NFS controller creates the export directory on the NFS server and updates /etc/exports.
  3. Bridge mounts the NFS export on the tenant's compute nodes over the tenant storage VRF.

Storage Lifecycle

| Event | Bridge Action |
| --- | --- |
| Tenant created | Per-tenant export directory created on NFS server |
| Storage share requested | NFS export configured, mount created on compute nodes |
| Compute deallocated | NFS share unmounted from compute nodes |
| Storage share deleted | Export removed, data securely deleted |
| Tenant deleted | All tenant exports removed from NFS server |
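The teardown performed when a share is deleted might look like the following sketch, run as root on the NFS server. The tenant name, export path, and the choice of `shred` for secure deletion are illustrative assumptions:

```shell
# Hypothetical teardown for a deleted storage share.
TENANT=tenant-a
EXPORT_DIR=/srv/nfs/${TENANT}

# Drop the export's /etc/exports entry and re-apply the export table.
sed -i "\|^${EXPORT_DIR} |d" /etc/exports
exportfs -ra

# Securely delete the tenant's data, then remove the export directory.
find "${EXPORT_DIR}" -type f -exec shred -u {} +
rm -rf "${EXPORT_DIR}"
```

Unexporting before deleting ensures no client holds a live mount while files are being destroyed.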
Related Documentation

  • Storage Overview — Supported storage systems and multi-tenancy model
  • VAST — VAST Data storage integration for high-performance parallel filesystem and S3
  • DDN — DDN Lustre storage integration