NFS Storage
Bridge supports vanilla NFS as a storage option for deployments that do not require a dedicated high-performance storage appliance. Bridge configures both the NFS server and NFS clients automatically — server setup is handled by the Bridge storage controller, and client installation is handled by the compute post-provisioning controller.
Integration Architecture
The Bridge NFS controller runs an Ansible-based provisioning flow to install and configure the NFS server on a designated Linux storage node. No external storage management API is required.
Bridge NFS Controller (K8s)
│
▼ SSH + Ansible
NFS Server (Ubuntu 22.04, nfs-kernel-server)
│
▼ NFS (TCP over tenant storage network)
Compute Nodes (NFS clients)
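The server-side provisioning flow above could be sketched as a minimal Ansible play. This is an illustrative sketch only: the host group, variable names, export path, and handler are assumptions, not Bridge's actual playbook.

```yaml
# Hypothetical sketch of the storage-preparation play; host group,
# variables, and paths are assumptions, not Bridge's real playbook.
- hosts: nfs_server
  become: true
  tasks:
    - name: Install the NFS server package
      ansible.builtin.apt:
        name: nfs-kernel-server
        state: present
        update_cache: true

    - name: Create the per-tenant export directory
      ansible.builtin.file:
        path: "/exports/{{ tenant_id }}"
        state: directory
        mode: "0750"

    - name: Export the directory to the tenant's storage subnet
      ansible.builtin.lineinfile:
        path: /etc/exports
        line: "/exports/{{ tenant_id }} {{ tenant_storage_cidr }}(rw,sync,no_subtree_check)"
      notify: reload exports

  handlers:
    - name: reload exports
      ansible.builtin.command: exportfs -ra
```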
Server Requirements
The NFS storage server must be a Linux machine running Ubuntu 22.04. Bridge's storage controller runs a storage preparation Ansible playbook on the server that:
- Installs `nfs-kernel-server` and all required dependencies.
- Creates per-tenant export directories on the server.
- Configures `/etc/exports` with the tenant's storage subnet CIDR, restricting NFS access to the correct tenant network.
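A resulting `/etc/exports` entry might look like the following; the export path and subnet are hypothetical examples, not values Bridge is guaranteed to use:

```
# Hypothetical entry: tenant export restricted to the tenant's storage subnet
/exports/tenant-a  10.20.30.0/24(rw,sync,no_subtree_check,root_squash)
```

The CIDR after the path is what limits mounts to clients on the tenant's storage network; after editing the file, `exportfs -ra` reloads the export table.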
Client Setup
As part of compute allocation, Bridge's post-provisioning controller installs the NFS client (`nfs-common`) and all required dependencies on each compute node assigned to a tenant with NFS storage.
The NFS mount is created over the tenant's isolated storage network, ensuring that NFS traffic from different tenants is carried on separate subnets and VRFs.
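On the client side, the setup the post-provisioning controller performs would be equivalent to commands like these; the server address, export path, and mount point are illustrative assumptions:

```
# Install the NFS client (handled by the post-provisioning controller)
apt-get install -y nfs-common

# Mount the tenant export over the tenant's storage network.
# 10.20.30.5 (server), /exports/tenant-a, and /mnt/share are assumptions.
mount -t nfs -o vers=4,proto=tcp 10.20.30.5:/exports/tenant-a /mnt/share
```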
Storage Network Isolation
Bridge creates a dedicated per-tenant storage network segment for NFS:
- On Spectrum (Ethernet) fabrics: a per-tenant storage VRF is created on the leaf switch interfacing with the NFS server, with route leaks configured between the tenant's storage VRF and the NFS server's VRF.
- The NFS server's `/etc/exports` file is configured with the tenant's storage subnet CIDR, limiting NFS mount access to the correct tenant IP range.
This two-layer isolation (fabric VRF + NFS export restriction) ensures that one tenant cannot mount another tenant's NFS export.
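One way to inspect the export-restriction layer is to query the server's export table, which lists each export alongside the networks allowed to mount it; the hostname below is an assumption:

```
# List the export table and the networks each export is restricted to.
# nfs-server.storage is an assumed hostname for the NFS server.
showmount -e nfs-server.storage
```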
Storage Allocation
Tenants allocate NFS storage through the Bridge UI:
| Storage Type | Supported |
|---|---|
| Parallel filesystem share (NFS mount) | Yes |
| S3 object storage | No |
Parallel filesystem share flow:
- Tenant requests a storage share from the Bridge UI, specifying share size and target compute nodes.
- Bridge NFS controller creates the export directory on the NFS server and updates `/etc/exports`.
- Bridge mounts the NFS export on the tenant's compute nodes over the tenant storage VRF.
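The export-configuration step in the flow above amounts to rendering a per-tenant `/etc/exports` line. A minimal sketch of such a helper follows; the function name and option defaults are assumptions, not Bridge's actual code.

```python
def exports_line(export_path: str, tenant_cidr: str,
                 options: str = "rw,sync,no_subtree_check") -> str:
    """Render an /etc/exports entry restricting a share to one tenant subnet.

    Hypothetical helper; Bridge's real controller may format this differently.
    """
    return f"{export_path} {tenant_cidr}({options})"

# Example: an export for a tenant whose storage subnet is 10.20.30.0/24
print(exports_line("/exports/tenant-a", "10.20.30.0/24"))
# /exports/tenant-a 10.20.30.0/24(rw,sync,no_subtree_check)
```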
Storage Lifecycle
| Event | Bridge Action |
|---|---|
| Tenant created | Per-tenant export directory created on NFS server |
| Storage share requested | NFS export configured, mount created on compute nodes |
| Compute deallocated | NFS share unmounted from compute nodes |
| Storage share deleted | Export removed, data securely deleted |
| Tenant deleted | All tenant exports removed from NFS server |
Related Pages
- Storage Overview — Supported storage systems and multi-tenancy model
- VAST — VAST Data storage integration for high-performance parallel filesystem and S3
- DDN — DDN Lustre storage integration