
Create and Access Workspace

JupyterHub in Armada Bridge provides interactive notebook environments where tenant users can write code, analyze data, and develop machine learning models with access to GPU, CPU, or MIG (Multi-Instance GPU) resources.

Each server runs in an isolated environment. You can create multiple servers with different profiles depending on your workload.

Prerequisites

  • A JupyterHub cluster exists — created by the Tenant Admin using the JupyterHub with KAI Scheduler cluster template
  • You have a tenant user account — created by a Tenant Admin
  • MIG profile configured (optional) — required only for the Environment with MIG GPU access profile
Port-forward requirement

If JupyterHub is not reachable from your browser, an administrator must run the following on the Bridge node to make the ingress controller accessible:

kubectl -n amcop-system port-forward --address 0.0.0.0 svc/ingress-ingress-nginx-controller 443:443

Keep this command running while you use JupyterHub.
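If you want to confirm that the forwarded endpoint responds before opening the browser, a small Python probe can help. This is only a sketch: the URL in the example (including the /hub/health path and the jupyter.armada.ai hostname from the authentication step) is an assumption — substitute your deployment's JupyterHub address.

```python
# Sketch: probe a JupyterHub endpoint over HTTPS, skipping certificate
# verification because the ingress typically serves a self-signed certificate.
import ssl
import urllib.request


def probe(url: str, timeout: float = 5.0) -> int:
    """Return the HTTP status code for `url`, ignoring TLS verification."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # self-signed cert: skip hostname check
    ctx.verify_mode = ssl.CERT_NONE   # ...and skip chain verification
    with urllib.request.urlopen(url, context=ctx, timeout=timeout) as resp:
        return resp.status


# Example (uncomment once the port-forward is running; URL is an assumption):
# print(probe("https://jupyter.armada.ai/hub/health"))
```

A status of 200 indicates the hub is reachable through the port-forward.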

Step 1: Open AI Studio

  1. Log in to Armada Bridge as a tenant user.

  2. In the left sidebar, click AI Studio. A new tab opens.

    Tenant User Dashboard

  3. In the AI Studio sidebar, click JupyterHub.

    The JupyterHub page opens showing Authentication Required.

    JupyterHub Authentication Required

Step 2: Authenticate with JupyterLab

Authentication is required once per tenant user. After authenticating, you can create and manage servers without repeating these steps.

  1. Click Authenticate with JupyterLab.

  2. If the JupyterHub endpoint uses a self-signed or untrusted certificate, the browser shows a security warning. Click Advanced.

    Browser security warning

  3. Click Proceed to jupyter.armada.ai (unsafe) (or the equivalent for your browser).

    Proceed to JupyterHub

  4. The page shows Authentication Successful. Click Close Window.

    Authentication Successful

  5. Return to the JupyterHub tab. You should now see the Add new server option.

    JupyterHub — Add new server

Step 3: Create a Workspace

  1. Click + Add new server.
  2. Enter a Server name and select an Image from the dropdown.
  3. Choose a Profile that matches your workload:
  Profile                           Resources       Use case
  Environment with GPU access       Full GPU        ML training, CUDA, deep learning
  Environment with CPU              CPU only        Data analysis, scripting, light computation
  Environment with MIG GPU access   GPU partition   Workloads needing a fraction of a GPU
  4. If you selected the Environment with GPU access profile, select the required GPU count.

  5. If you selected the Environment with MIG GPU access profile, select the desired MIG Profile.

  6. Click Create Server.

    Create GPU JupyterHub Server

  7. Wait until the server status is Ready.

    GPU JupyterHub Server Status

    Datasets and volumes with JupyterHub

    If the Admin has imported the NFS server, you can create volumes and datasets in AI Studio and attach them when creating a JupyterHub server. The server is then created with access to the chosen datasets and volumes so you can train and run inference on your model and data. Volume data is persisted and can be reused when you create new JupyterHub servers.

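Once a volume or dataset is attached, it appears as a directory inside the server. The sketch below lists its contents from a notebook cell; the mount path is an assumption — check your server's configuration for the actual location of attached volumes.

```python
# Sketch: list the contents of an attached volume. The path used in the
# example call is an assumed mount point -- attached volumes may appear
# elsewhere in your server.
from pathlib import Path


def list_volume(mount: str) -> list[str]:
    """Return sorted file names under `mount`, or [] if the path is absent."""
    root = Path(mount)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir())


print(list_volume("/home/jovyan/data"))  # assumed mount path
```

Because the data lives on the volume rather than the server's local disk, it remains available to any new server you attach the same volume to.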

Open JupyterLab and Verify GPU Access

  1. Click the URL shown for your server. JupyterLab opens in a new tab.

    GPU JupyterHub Notebook

  2. Open a notebook (e.g., click Python 3 (ipykernel)).

    Click ipykernel

  3. In a cell, run !nvidia-smi to confirm that the GPU is visible.

    GPU JupyterHub Command-1
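As an alternative to running the shell command directly, the same check can be scripted from Python. This is a sketch that reports the GPUs the NVIDIA driver can see; it assumes only that the nvidia-smi binary is on the PATH, and returns an empty list otherwise.

```python
# Sketch: detect visible GPUs by asking nvidia-smi to list them.
# Returns an empty list on machines without the NVIDIA driver installed.
import shutil
import subprocess


def visible_gpus() -> list[str]:
    """Return the GPU lines reported by `nvidia-smi -L`, or [] if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return []
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    if result.returncode != 0:
        return []
    return [line for line in result.stdout.splitlines() if line.startswith("GPU")]


print(visible_gpus())
```

On a GPU profile you should see one entry per allocated GPU (or MIG partition); an empty list means no device is visible to the server.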

  4. (Optional) To verify GPU utilization, install PyTorch and NumPy, then run the script below.

    pip install torch numpy

    Run this in a notebook cell:

    import torch
    import time

    device = "cuda"
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    print("Running approximately 50% GPU workload for 30 seconds...")
    end_time = time.time() + 30
    while time.time() < end_time:
        c = torch.matmul(a, b)       # keep the GPU busy with a large matmul
        torch.cuda.synchronize()     # wait for the kernel to finish
        time.sleep(0.02)             # idle briefly to keep utilization moderate
    print("GPU workload completed successfully. GPU access and utilization are verified.")

    On success, you should see the completion message in the cell output.

    GPU JupyterHub Usage Script