BlueField-3 (BF3)

The NVIDIA BlueField-3 (BF3) DPU provides hardware-enforced security and control plane offloading for Bridge-managed servers. When deployed in DPU mode (Zero Trust), BF3 acts as an independent control plane — the host server is provisioned and managed exclusively through the DPU, removing the need to trust the host's main board for security-sensitive operations.

DPU Mode vs. NIC Mode

| Aspect | DPU Mode (Zero Trust) | NIC Mode (SuperNIC) |
| --- | --- | --- |
| Control plane | Runs on DPU | Runs on host |
| Host provisioning | Via DPU over OOB | Via MaaS/BCM |
| Network feature set | HBN, VRF, VTEP | Spectrum-X, RoCE |
| Security model | Hardware root of trust | DOCA DMS service |
| Primary use case | Bare metal, VM, HBN | Spectrum-X GPU networking |
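The mode comparison above determines which provisioning path an orchestrator takes. A minimal sketch, assuming hypothetical names (this is not a real Bridge or DOCA API):

```python
# Hypothetical sketch: branching on the BF3 operating mode to pick a
# provisioning path. Names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProvisioningPlan:
    control_plane: str  # where the control plane runs
    host_path: str      # how the host OS gets provisioned

def plan_for(mode: str) -> ProvisioningPlan:
    if mode == "dpu":   # Zero Trust: control plane on DPU, host via DPU over OOB
        return ProvisioningPlan(control_plane="dpu", host_path="dpu-oob")
    if mode == "nic":   # SuperNIC: control plane on host, host via MaaS/BCM
        return ProvisioningPlan(control_plane="host", host_path="maas-bcm")
    raise ValueError(f"unknown BF3 mode: {mode!r}")
```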

Zero Trust DPF Provisioning Flow

Bridge provisions BF3 DPUs in Zero Trust mode using the NVIDIA DPU Fabric (DPF) framework:

  1. Bridge configures the BF3 into Zero Trust DPU mode and reboots the server.
  2. Bridge provisions the DPU over the OOB network (1 GbE).
  3. Bridge provisions the host OS over the OOB and in-band networks — no tenant access is granted at this stage.
  4. The DPU is added as a worker node to the DPF control plane hosted on Bridge.
  5. Bridge orchestrates Host-Based Networking (HBN) to the DPU.
  6. Bridge configures isolated L3 networks on the DPU and the in-band switch fabric (200 GbE converged network).
  7. Tenant access is provided via the gateway over the isolated L3 networks.
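The ordering of the steps above is the security property: tenant access comes strictly last. A sketch of the flow as an ordered pipeline, with hypothetical step names standing in for Bridge's actual automation:

```python
# Illustrative only: the Zero Trust DPF provisioning flow modeled as an
# ordered pipeline. Step names are hypothetical stand-ins.
PROVISIONING_STEPS = [
    "set_zero_trust_mode_and_reboot",   # 1: BF3 into Zero Trust DPU mode
    "provision_dpu_over_oob",           # 2: OOB network (1 GbE)
    "provision_host_os",                # 3: OOB + in-band, no tenant access
    "join_dpf_control_plane",           # 4: DPU added as DPF worker node
    "orchestrate_hbn",                  # 5: Host-Based Networking to the DPU
    "configure_isolated_l3_networks",   # 6: DPU + 200 GbE converged fabric
    "grant_tenant_access_via_gateway",  # 7: only after isolation is complete
]

def run_flow(executor) -> list[str]:
    """Run each step in order; tenant access is always the final step."""
    done = []
    for step in PROVISIONING_STEPS:
        executor(step)
        done.append(step)
    return done
```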
> **Note:** No tenant traffic reaches the host until all isolation and networking steps are complete.

DOCA HBN Controller

After DPU provisioning, Bridge's Cumulus controller is extended with HBN functionality that interfaces with DOCA HBN over the OOB network.

Switch fabric configuration:

  • Configures BGP underlay across the Ethernet switch fabric.
  • Creates per-tenant L2/L3 overlay networks (VRFs, VXLANs) on the switches.
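The overlay step can be sketched as a deterministic mapping from tenant to overlay identifiers — one VRF and one VXLAN VNI per tenant over a shared BGP underlay. The base VNI and naming scheme here are assumptions, not Bridge's actual allocation policy:

```python
# Minimal sketch of per-tenant overlay derivation. VNI_BASE and the
# "tenant<N>" naming convention are illustrative assumptions.
VNI_BASE = 10_000  # assumed starting VNI for tenant overlays

def tenant_overlay(tenant_id: int) -> dict:
    """Deterministically map a tenant to its L2/L3 overlay identifiers."""
    return {
        "vrf": f"tenant{tenant_id}",  # per-tenant VRF on the switches
        "vni": VNI_BASE + tenant_id,  # VNI carried in VXLAN headers
        "underlay": "default",        # single shared BGP underlay VRF
    }
```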

DPU-level configuration applied via Ansible:

| Configuration | Purpose |
| --- | --- |
| VRFs per tenant on DPU | Enforce L3 tenant isolation at the DPU |
| VTEP on DPU | VXLAN tunnel endpoint, enabling VXLAN overlay termination on the DPU |
| VF-to-representor mapping | SR-IOV passthrough for VMs and containers |

With DOCA HBN, the switch fabric maintains a single underlay VRF while tenant isolation is enforced at the DPU — overcoming per-switch VLAN and VRF scale limits.
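A hedged sketch of how the per-DPU variables consumed by such an Ansible play might be assembled — the keys, the representor naming, and the inventory shape are illustrative assumptions, not the real Bridge schema:

```python
# Illustrative builder for per-DPU configuration variables: tenant VRFs,
# a VTEP loopback, and VF-to-representor mappings. All names are assumed.
def dpu_host_vars(tenants: list[str], vtep_ip: str, num_vfs: int) -> dict:
    return {
        "vrfs": tenants,                      # one VRF per tenant on the DPU
        "vtep": {"loopback_ip": vtep_ip},     # VXLAN tunnel endpoint address
        "vf_representors": {
            f"pf0vf{i}": f"pf0vf{i}_r"        # hypothetical representor names
            for i in range(num_vfs)
        },
    }
```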

> **Note:** For InfiniBand fabrics, Bridge creates PKeys and PKey-to-VF associations instead of VRFs and VTEPs.
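For the InfiniBand variant, the analogous sketch maps tenants to partition keys (PKeys are 15-bit values; the top bit of the 16-bit field encodes full/limited membership) and associates VFs with a tenant's PKey. The base value and allocation scheme are assumptions:

```python
# Illustrative InfiniBand variant: per-tenant PKeys and PKey-to-VF
# associations. PKEY_BASE and the allocation scheme are assumptions.
PKEY_BASE = 0x1000

def tenant_pkey(tenant_id: int) -> int:
    pkey = PKEY_BASE + tenant_id
    if pkey > 0x7FFF:  # PKeys are 15 bits; the 16th bit is membership type
        raise ValueError("PKey space exhausted")
    return pkey

def pkey_vf_map(tenant_id: int, vf_indices: list[int]) -> dict[int, int]:
    """Associate each of the tenant's VF indices with the tenant's PKey."""
    pkey = tenant_pkey(tenant_id)
    return {vf: pkey for vf in vf_indices}
```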

DPF for Kubernetes Deployments

Bridge hosts both a host control plane and a DPU control plane using the DPF framework, enabling automated DPU lifecycle management for Kubernetes-based deployments:

  1. Bridge hosts the control plane for host Kubernetes nodes.
  2. Bridge hosts a separate control plane for DPU nodes.
  3. DPU provisioning is triggered via the DOCA Management Kubernetes service on the host.
  4. DPF operators on both control planes enable Bridge to:
    • Provision DPUs
    • Deploy DOCA services
    • Configure service chaining
    • Apply HBN configuration

This architecture enables advanced Day N use cases including distributed gateway and distributed firewall enforcement at the DPU.
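To make the DPF operator flow concrete, here is a sketch of what declaring a DOCA service (such as HBN) against the DPU control plane might look like, rendered as a Kubernetes-style manifest. The API group, kind, and field names are loosely modeled on DPF's custom resources but should be treated as assumptions:

```python
# Illustrative Kubernetes-style manifest for deploying a DOCA service via
# DPF. The apiVersion, kind, and spec fields are assumed, not verified.
def dpu_service_manifest(name: str, namespace: str, chart_source: str) -> dict:
    return {
        "apiVersion": "svc.dpu.nvidia.com/v1alpha1",  # assumed API group
        "kind": "DPUService",                         # assumed CRD kind
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "helmChart": {"source": chart_source},    # DOCA service Helm chart
        },
    }
```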

Control Plane Security Offloading

By running the control plane on BF3 rather than the host main board, Bridge achieves the following security properties:

| Feature | Bridge + BF3 |
| --- | --- |
| Hardware root of trust | Secure and measured boot with digitally signed and encrypted firmware and OS images |
| Authentication offload | Authentication, access control, and encryption accelerated on the DPU |
| Microsegmentation | Policies defining access to assets and data enforced on the DPU |
| Data encryption | Encryption of data in transit at 200 Gb/s, including East-West traffic |
| Network telemetry | DOCA Telemetry Service for collecting and analyzing network traffic metrics |
> **Note:** BF3 provides hardware root of trust for the DPU itself, not for the server's main board. Bridge leverages BF3's security capabilities for DPU-side control plane security and tenant isolation.