
Access Model Playground

Overview

The Model Playground is an interactive interface in Bridge that lets you test deployed models, run predictions, and validate model behavior before integrating them into production applications.

Use the playground to:

  • Send prompts and inspect model responses in real time
  • Verify that a deployed model behaves as expected
  • Iterate on prompts and parameters without writing code
  • Share the playground URL with your team for quick testing (within tenant isolation)

Prerequisites

Before you can use the Model Playground, ensure that:

  • A model has been deployed (as a Tenant Admin or by a user with deployment permissions)
  • You have a tenant user account (created by a Tenant Admin)
  • You are logged in as that tenant user

Accessing the Model Playground

Step 1: Open AI Studio

  1. Log in to Bridge as a tenant user (for example, tenantuser).
  2. In the left sidebar, click AI Studio. This opens AI Studio in a new tab.

AI Studio sidebar

Step 2: Open the Model in the Playground

  1. In the AI Studio tab, click Models in the sidebar.
  2. Locate the deployed model.

Models sidebar — Open in Playground

  3. Click Open in Playground to launch the Playground.

Open in Playground button

Step 3: Handle the First-Login Browser Security Warning

Browser security warning (self-signed or untrusted certificate)

On first login, the Playground may display an error message if the model endpoint's certificate is not trusted by your browser (for example, a self-signed certificate). To resolve this:

  1. Copy the model endpoint URL from the top-right of the Playground page.
  2. Open a new browser tab and paste the URL.
  3. When the browser shows a security warning, click Advanced.

Browser Advanced option

  4. Click Proceed to [site] (unsafe) or Accept the risk and continue.
  5. The browser will show a "404 page not found" message — this is expected.

404 page not found

  6. Return to the Playground tab and refresh the page if needed.

This one-time step establishes trust for the endpoint in your browser. You do not need to repeat it for the same endpoint.
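The browser steps above have a scripted equivalent: when calling the endpoint outside the browser (for example, from a test script), a self-signed certificate means the client must skip certificate verification. This is a minimal sketch of how that trust relaxation looks in Python's standard `ssl` module; do this only for trusted internal endpoints, never for public ones.

```python
import ssl

def insecure_context() -> ssl.SSLContext:
    """Build an SSL context that accepts self-signed certificates,
    the scripted equivalent of clicking "Proceed anyway" in the browser."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # skip hostname matching
    ctx.verify_mode = ssl.CERT_NONE  # skip certificate validation
    return ctx

ctx = insecure_context()
print(ctx.verify_mode == ssl.CERT_NONE)
```

A context built this way can be passed to `urllib.request.urlopen(url, context=ctx)` when probing the endpoint URL copied from the Playground.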

Step 4: Run Queries

You can now run queries in the prompting interface.

Playground prompting interface

Tenant user isolation

Tenant user isolation is enforced in Bridge. One tenant user cannot see or use another tenant user's Model Playground. Each tenant user (for example, tenantuser1 and tenantuser2) must complete the steps above with their own login to access their own model playground.

Using the Playground

In the prompting interface you can:

  • Enter text prompts and submit them to the deployed model
  • View the model’s response in the same screen
  • Adjust parameters (if supported by the model) and re-run tests

Use these results to validate behavior before calling the model from applications or APIs.

Playground Interface

The playground provides:

  • Input Editor - Define model inputs
  • Prediction Viewer - View model outputs
  • History - Track prediction calls
  • Export - Save test results

Playground Interface

Test Models

Single Prediction

Make a single prediction:

  1. Enter input data in the input editor
  2. Click Predict or Submit
  3. View model output

{
  "inputs": {
    "feature1": 1.0,
    "feature2": 2.0,
    "feature3": 3.0
  }
}

Single Prediction

Output:

{
  "prediction": 0.95,
  "confidence": 0.98
}
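The same request and response shapes can be handled in code. This is an illustrative sketch only — it builds the request payload and parses the response shown above, but omits the actual HTTP call, which depends on your endpoint URL and authentication.

```python
import json

def build_payload(features: dict) -> str:
    """Serialize feature values into the playground's single-prediction shape."""
    return json.dumps({"inputs": features})

def parse_response(body: str) -> tuple:
    """Extract prediction and confidence from a response body."""
    data = json.loads(body)
    return data["prediction"], data["confidence"]

payload = build_payload({"feature1": 1.0, "feature2": 2.0, "feature3": 3.0})
prediction, confidence = parse_response('{"prediction": 0.95, "confidence": 0.98}')
print(prediction, confidence)  # 0.95 0.98
```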

Batch Predictions

Test multiple inputs:

  1. Prepare CSV or JSON file with multiple inputs
  2. Upload file to playground
  3. Submit batch request
  4. Download results

feature1,feature2,feature3
1.0,2.0,3.0
2.0,3.0,4.0
3.0,4.0,5.0
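If your inputs start out as CSV but you want to submit them in the JSON batch shape (see Input Formats below), the conversion is mechanical. A minimal sketch:

```python
import csv
import io
import json

CSV_DATA = """feature1,feature2,feature3
1.0,2.0,3.0
2.0,3.0,4.0
3.0,4.0,5.0
"""

def csv_to_instances(text: str) -> dict:
    """Convert a CSV with a header row into the {"instances": [...]} batch shape."""
    rows = list(csv.reader(io.StringIO(text)))
    _header, body = rows[0], rows[1:]
    return {"instances": [[float(v) for v in row] for row in body]}

batch = csv_to_instances(CSV_DATA)
print(json.dumps(batch))
```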

Batch Predictions

Input Formats

JSON Format

{
  "instances": [
    [1.0, 2.0, 3.0],
    [2.0, 3.0, 4.0]
  ]
}

CSV Format

col1,col2,col3
1.0,2.0,3.0
2.0,3.0,4.0

Text Format

Raw text input for text models

Validate Models

Check Predictions

Verify that the model behaves correctly:

  • Test with known inputs
  • Verify expected outputs
  • Check confidence scores
  • Test edge cases
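The checklist above can be automated as a small validation harness: run known inputs through the model and compare against expected outputs within a tolerance. The expected values and the stand-in prediction function below are illustrative, not part of the Playground API.

```python
def validate(cases, predict_fn, tol: float = 0.05) -> list:
    """Return True/False per test case: is the prediction within tol of expected?"""
    return [abs(predict_fn(x) - expected) <= tol for x, expected in cases]

def fake_predict(x: list) -> float:
    """Stand-in for a real call to the deployed model."""
    return 0.95 if sum(x) > 0 else 0.10

results = validate(
    [([1.0, 2.0, 3.0], 0.95), ([0.0, 0.0, 0.0], 0.10)],
    fake_predict,
)
print(results)  # [True, True]
```

In practice, `fake_predict` would be replaced by a call to the deployed model's endpoint.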

Validate Predictions

Test Different Inputs

Explore model behavior:

{
  "test_cases": [
    {"normal_input": [1.0, 2.0, 3.0]},
    {"edge_case": [0.0, 0.0, 0.0]},
    {"extreme": [100.0, 100.0, 100.0]}
  ]
}

Analyze Predictions

Review results:

  • Prediction values
  • Confidence scores
  • Inference time
  • Resource usage

Export Results

Download Predictions

Save test results:

  1. Run predictions
  2. Click Download Results
  3. Choose format (JSON, CSV, Excel)
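Outside the UI, saving results in the JSON and CSV formats is straightforward with the standard library; a sketch with hypothetical result rows (the Excel option would need an extra library such as openpyxl):

```python
import csv
import json
import pathlib
import tempfile

# Hypothetical prediction results to export.
results = [
    {"input": [1.0, 2.0, 3.0], "prediction": 0.95, "confidence": 0.98},
    {"input": [2.0, 3.0, 4.0], "prediction": 0.91, "confidence": 0.97},
]

out_dir = pathlib.Path(tempfile.mkdtemp())

# JSON export: one file with the full structure.
(out_dir / "results.json").write_text(json.dumps(results, indent=2))

# CSV export: one row per prediction.
with open(out_dir / "results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "prediction", "confidence"])
    writer.writeheader()
    writer.writerows(results)

print(sorted(p.name for p in out_dir.iterdir()))  # ['results.csv', 'results.json']
```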

Export Results

Share Results

Share your findings with the team:

  1. Export results
  2. Include in reports
  3. Attach to documentation
  4. Reference in model validation

Performance Testing

Latency Testing

Measure model response time:

Request: 1.5 seconds
Average: 1.2 seconds (from 10 requests)
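An average like the one above can be computed by timing repeated calls. In this sketch a stub with a short sleep stands in for the real prediction request:

```python
import statistics
import time

def timed_call(fn) -> float:
    """Return the wall-clock duration of one call to fn, in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def stub_request():
    time.sleep(0.01)  # stand-in for a real prediction request

latencies = [timed_call(stub_request) for _ in range(10)]
print(f"Average: {statistics.mean(latencies):.3f} s over {len(latencies)} requests")
```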

Latency Testing

Throughput Testing

Test throughput capacity:

Requests per second: 100 RPS
Concurrent connections: 10
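A rough throughput estimate can be made the same way: issue N requests over a pool of concurrent connections and divide by elapsed time. Again, a sleeping stub stands in for the endpoint call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stub_request(_):
    time.sleep(0.01)  # stand-in for a real prediction request
    return 1

N, CONCURRENCY = 50, 10
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    completed = sum(pool.map(stub_request, range(N)))
elapsed = time.perf_counter() - start
print(f"{completed / elapsed:.0f} RPS with {CONCURRENCY} connections")
```

Threads are sufficient here because the stub (like a real HTTP call) is I/O-bound rather than CPU-bound.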

Resource Usage

The Resource Usage tab displays token usage per model and per user, so you can see how many tokens each user consumed. The following fields are shown:

  • Prompt tokens — tokens in the request input
  • Completed tokens — tokens in the model output
  • Total tokens — combined prompt and completion token count
  • Model ID — model associated with the usage
  • Username — tenant user associated with the usage
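The Total tokens field is simply the sum of the prompt and completion counts; a quick consistency check over hypothetical usage rows:

```python
# Hypothetical per-user usage rows, mirroring the Resource Usage fields.
usage = [
    {"user": "tenantuser1", "prompt_tokens": 120, "completion_tokens": 80},
    {"user": "tenantuser2", "prompt_tokens": 45, "completion_tokens": 200},
]

for row in usage:
    row["total_tokens"] = row["prompt_tokens"] + row["completion_tokens"]

print([r["total_tokens"] for r in usage])  # [200, 245]
```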

Resource Monitoring

Debugging

View Model Logs

Check model serving logs:

[2024-01-09 10:15:32] Model loaded successfully
[2024-01-09 10:15:45] Prediction request: {input: [1,2,3]}
[2024-01-09 10:15:46] Inference complete: 0.95

Error Handling

Debug prediction errors:

  1. Check input format
  2. Verify input data types
  3. Check model requirements
  4. Review error messages
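The first two checks (input format and data types) can be run client-side before submitting, so malformed requests never reach the model. A minimal sketch, assuming the single-prediction input shape shown earlier; the feature names are illustrative:

```python
def check_input(payload: dict, expected_features: list) -> list:
    """Return a list of problems with a single-prediction payload (empty if valid)."""
    errors = []
    inputs = payload.get("inputs")
    if not isinstance(inputs, dict):
        return ['missing "inputs" object']
    for name in expected_features:
        if name not in inputs:
            errors.append(f"missing feature: {name}")
        elif not isinstance(inputs[name], (int, float)):
            errors.append(f"wrong type for {name}: {type(inputs[name]).__name__}")
    return errors

good = {"inputs": {"feature1": 1.0, "feature2": 2.0, "feature3": 3.0}}
bad = {"inputs": {"feature1": "1.0", "feature2": 2.0}}  # string value, missing feature3
print(check_input(good, ["feature1", "feature2", "feature3"]))  # []
print(check_input(bad, ["feature1", "feature2", "feature3"]))
```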

Error Debugging

A/B Testing

Compare Models

Test multiple model versions:

  1. Deploy Model A and Model B
  2. Send same inputs to both
  3. Compare outputs
  4. Analyze differences

A/B Testing

Model Comparison

{
  "model_a_output": 0.95,
  "model_b_output": 0.92,
  "difference": 0.03
}
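Producing a comparison report like the one above is a simple per-input diff once both models have been queried on the same inputs. A sketch, with outputs mirroring the sample JSON:

```python
def compare(a_outputs: list, b_outputs: list) -> list:
    """Pair up outputs from two model versions and record their difference."""
    return [
        {"model_a_output": a, "model_b_output": b, "difference": round(a - b, 6)}
        for a, b in zip(a_outputs, b_outputs)
    ]

report = compare([0.95, 0.88], [0.92, 0.90])
print(report[0])  # {'model_a_output': 0.95, 'model_b_output': 0.92, 'difference': 0.03}
```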

Best Practices

Before Production

  • Test with diverse inputs
  • Validate edge cases
  • Check performance metrics
  • Review resource usage
  • Document test results

Continuous Validation

  • Regular model testing
  • Monitor prediction distribution
  • Track confidence scores
  • Detect data drift
  • Compare against baselines

Next Steps