DDN Storage

Bridge integrates with DDN (DataDirect Networks) storage systems through a generic Lustre controller that communicates with the DDN server over SSH. Bridge automates tenant isolation using Lustre Nodemaps, and configures the Lustre client on compute nodes during post-provisioning.

Integration Architecture

The Bridge DDN controller is a generic Lustre controller implemented as a Kubernetes controller. It communicates with the DDN management (MGS/MDT) node over SSH and issues lctl commands to configure per-tenant isolation on the Lustre filesystem.

Bridge DDN Controller (K8s)
        │ SSH
        ▼
DDN Server (Rocky 8.10, Lustre MGS/MDT)
        │ Lustre (IB or TCP)
        ▼
Compute Nodes (Lustre clients)

Server and Client Requirements

DDN Server

The DDN storage server runs Rocky Linux 8.10 with the Lustre server stack configured. Bridge communicates with the server over SSH and does not require a proprietary DDN management API.

Compute Node (Lustre Client)

Bridge's compute post-provisioning controller automatically installs the Lustre client on compute nodes allocated to tenants with DDN storage. Supported client OS versions:

  • RHEL 8
  • Ubuntu 20.04
  • Ubuntu 22.04

DDN supports both InfiniBand and TCP networks for Lustre data plane traffic, providing flexibility for deployments with either fabric type.

Tenant Isolation with Lustre Nodemaps

Bridge enforces per-tenant isolation on DDN using the Lustre Nodemap feature. Each tenant is assigned a Nodemap that restricts their compute nodes to a dedicated directory tree on the Lustre filesystem.

Nodemap Configuration

For each tenant, Bridge:

  1. Creates a Lustre nodemap scoped to the tenant (lctl nodemap_add <tenant>).
  2. Applies a fileset restriction so the tenant can only access their directory (lctl set_param nodemap.<tenant>.fileset=/<tenant>).
  3. Activates nodemap enforcement (lctl nodemap_activate 1). This setting is filesystem-wide, so it only takes effect the first time; subsequent tenants reuse it.
  4. Assigns the tenant's compute node IP addresses (or IB GIDs) to the nodemap (lctl nodemap_add_range --name <tenant> --range <ip>@tcp).

The result: a compute node assigned to tenant1 can only see and access the /tenant1 directory on the Lustre filesystem, regardless of how many tenants share the same DDN cluster.
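The four steps above can be sketched as the command strings the controller would issue over SSH. A minimal sketch; the helper name and the TCP-only NID format are illustrative, not Bridge's actual implementation:

```python
# Sketch: build the ordered lctl commands for one tenant's nodemap setup.
# Function name and example IPs are illustrative assumptions.

def nodemap_commands(tenant: str, node_ips: list[str]) -> list[str]:
    """Return the lctl commands for per-tenant nodemap isolation."""
    cmds = [
        f"lctl nodemap_add {tenant}",                          # 1. create nodemap
        f"lctl set_param nodemap.{tenant}.fileset=/{tenant}",  # 2. fileset restriction
        "lctl nodemap_activate 1",                             # 3. enable enforcement
    ]
    # 4. add each compute node's NID to the nodemap (TCP NIDs shown here)
    cmds += [
        f"lctl nodemap_add_range --name {tenant} --range {ip}@tcp"
        for ip in node_ips
    ]
    return cmds

for cmd in nodemap_commands("tenant1", ["10.0.1.10", "10.0.1.11"]):
    print(cmd)
```

On InfiniBand fabrics the range argument would carry an IB NID (e.g. `@o2ib`) instead of `@tcp`.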

Directory Structure

Bridge creates the per-tenant directory structure on the Lustre MGS/MDT node as part of tenant onboarding:

/lustre/
└── <filesystem>/
    ├── tenant1/
    │   ├── compute1/
    │   └── compute2/
    └── tenant2/
        └── compute1/
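The layout above can be expressed as a small path-building sketch. The filesystem name `fs0` and the function name are illustrative assumptions standing in for the `<filesystem>` placeholder:

```python
from pathlib import Path

def tenant_dirs(root: str, fs: str, tenant: str, nodes: list[str]) -> list[Path]:
    """Return the directory paths created for one tenant (illustrative helper)."""
    base = Path(root) / fs / tenant          # per-tenant root, e.g. /lustre/fs0/tenant1
    return [base] + [base / n for n in nodes]  # one subdirectory per compute node

paths = tenant_dirs("/lustre", "fs0", "tenant1", ["compute1", "compute2"])
for p in paths:
    print(p)
```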

Storage Network Isolation

As with all supported storage vendors, Bridge creates a dedicated per-tenant storage network segment for DDN:

  • On Ethernet fabrics (Spectrum): a per-tenant storage VRF with route leaks between the compute VRF and the storage VRF on the switch interfacing with the DDN cluster.
  • On InfiniBand fabrics: per-tenant PKey assignment for IB-connected DDN nodes.

Storage Allocation

Tenants allocate DDN storage through the Bridge UI using the same flow as other storage vendors:

  1. Tenant requests a parallel filesystem share, specifying size and target compute nodes.
  2. Bridge DDN controller ensures the Lustre nodemap for the tenant is configured with the target compute node ranges.
  3. Bridge mounts the Lustre filesystem on the compute nodes over the tenant storage network.
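Step 3 ends in a standard Lustre client mount. A sketch of the mount invocation; the MGS NID, filesystem name, and mountpoint are illustrative assumptions:

```python
def lustre_mount_command(mgs_nid: str, fsname: str, mountpoint: str) -> str:
    """Build a standard Lustre client mount command (parameters are examples)."""
    return f"mount -t lustre {mgs_nid}:/{fsname} {mountpoint}"

print(lustre_mount_command("10.0.2.1@tcp", "fs0", "/mnt/lustre"))
# mount -t lustre 10.0.2.1@tcp:/fs0 /mnt/lustre
```

Because the tenant's nodemap fileset is already in place, this mount exposes only the tenant's own directory tree, not the whole filesystem.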

S3 object storage is not supported on DDN through Bridge.

Storage Lifecycle

Event                            Bridge Action
Tenant created                   Lustre nodemap created; per-tenant directory tree initialized
Compute allocated with storage   Compute node IP added to tenant nodemap; Lustre client mounted
Compute deallocated              Compute node IP removed from nodemap; Lustre client unmounted
Tenant deleted                   Tenant nodemap removed; tenant directory cleaned up
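The lifecycle can be summarized as a simple event-to-actions mapping. The event keys and action names below are illustrative labels, not Bridge identifiers:

```python
# Sketch: lifecycle events and the actions the controller performs for each.
# All names are illustrative.
LIFECYCLE = {
    "tenant_created":      ["create_nodemap", "init_directory_tree"],
    "compute_allocated":   ["add_ip_to_nodemap", "mount_lustre_client"],
    "compute_deallocated": ["remove_ip_from_nodemap", "unmount_lustre_client"],
    "tenant_deleted":      ["remove_nodemap", "cleanup_directory_tree"],
}

for event, actions in LIFECYCLE.items():
    print(event, "->", ", ".join(actions))
```

Note that deallocation and deletion mirror allocation and creation, so the nodemap state on the DDN server always matches the tenant's current compute assignment.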
Related Documentation

  • Storage Overview — Supported storage systems and multi-tenancy model
  • NFS — NFS storage integration for simpler storage deployments