Transform your playground explorations into reusable assets by saving and managing views. Create consistent testing environments, share configurations with team members, and maintain a library of validated setups for different use cases.
Views in the Evaligo playground capture the complete state of your testing environment: prompts, model settings, input examples, and evaluation criteria. This ensures that promising configurations can be easily reproduced and shared across your team.
Effective view management accelerates development cycles by eliminating the need to manually recreate successful configurations. Teams can build upon each other's work and maintain consistency across different testing scenarios.
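Because a view stores this complete state, reloading it restores the playground exactly as it was saved. The sketch below assumes a hypothetical client.playground.load_view method that mirrors the save_view call shown later in this guide, plus an already-authenticated client object; check the SDK reference for the exact names.

# Hypothetical sketch: reload a saved view to restore the full playground state.
# load_view is an assumed counterpart to the save_view call shown below;
# "client" is an authenticated Evaligo client.
view = client.playground.load_view(name="customer-support-empathy-v2")

# Everything saved with the view comes back with it
print(view.configuration["model"])        # e.g. "gpt-4-turbo"
print(view.configuration["temperature"])  # e.g. 0.7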

Creating and Saving Views
Save playground configurations at key milestones in your experimentation process. Capture not just the final working version, but also intermediate states that represent different approaches or reveal important insights.
1. Configure your playground: Set up prompts, model parameters, input examples, and evaluation criteria for your specific use case.
2. Test and validate: Run multiple iterations to ensure the configuration produces consistent, high-quality results.
3. Add descriptive metadata: Include clear names, descriptions, tags, and usage notes to make views discoverable and understandable.
4. Save as named view: Preserve the complete configuration as a reusable asset for future experiments and team collaboration.
Naming Convention: Use descriptive names that include the use case, model, and key characteristics (e.g., "customer-support-gpt4-formal-tone" or "code-review-claude-detailed-feedback").
# Create a view from current playground state
view = client.playground.save_view(
    name="customer-support-empathy-v2",
    description="Customer support responses with enhanced empathy and emotional intelligence",
    tags=["customer-support", "empathy", "production-ready"],
    configuration={
        "model": "gpt-4-turbo",
        "temperature": 0.7,
        "max_tokens": 500,
        "system_prompt": """You are an empathetic customer support agent...
Guidelines:
- Acknowledge customer emotions
- Provide clear, actionable solutions
- Maintain professional warmth
""",
        "few_shot_examples": [
            {
                "input": "I'm really frustrated with this recurring billing issue...",
                "output": "I completely understand your frustration with this billing issue..."
            }
        ],
        "evaluators": [
            {"type": "empathy_score", "weight": 0.3},
            {"type": "solution_quality", "weight": 0.4},
            {"type": "professionalism", "weight": 0.3}
        ]
    },
    metadata={
        "created_by": "ai-team",
        "use_case": "tier-1-support",
        "approval_status": "reviewed",
        "performance_baseline": {
            "empathy_score": 0.85,
            "solution_quality": 0.92,
            "customer_satisfaction": 4.2
        }
    }
)

Organizing and Categorizing
Use tags, folders, and metadata to organize your views into a discoverable library. Good organization practices help teams find relevant configurations quickly and avoid duplicating effort across similar use cases.
Establish consistent tagging conventions that reflect your team's workflow and use cases. Consider organizing views by business function, model type, performance characteristics, or development stage to support different discovery patterns.

1. Define tag taxonomy: Create consistent tags for use cases, models, quality levels, and approval status (e.g., "production", "experimental", "needs-review").
2. Use folder structure: Group related views into folders by team, product area, or workflow stage for logical organization.
3. Add performance metadata: Include baseline metrics and performance expectations to help teams select appropriate views.
4. Regular cleanup: Archive outdated views and consolidate similar configurations to maintain a clean, useful library.
Discovery Features: Use advanced search and filtering to find views by performance metrics, last modified date, creator, or specific configuration parameters.
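As a sketch of how discovery and cleanup might look in code, the example below assumes hypothetical client.views.search and view.archive methods and filter parameter names; adapt them to the actual SDK surface.

# Hypothetical sketch: find and prune views using tags and metadata filters.
# client.views.search and view.archive are assumed method names.
stale_views = client.views.search(
    tags=["experimental"],
    modified_before="2024-01-01",  # last-modified-date filter
    creator="ai-team"
)

# Archive anything no longer maintained to keep the library clean
for view in stale_views:
    view.archive(reason="superseded by production-ready configurations")
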
Team Collaboration
Share views with team members to enable collaborative development and consistent testing practices. Different sharing levels allow you to control access while promoting knowledge sharing across the organization.
Collaborative view management ensures that successful configurations benefit the entire team rather than staying siloed with individual contributors. This accelerates onboarding for new team members and maintains consistency across different projects.

# Share view with team members
view.share_with(
    users=["alice@company.com", "bob@company.com"],
    teams=["ai-engineering", "product-qa"],
    permissions={
        "view": True,
        "clone": True,
        "modify": False,  # Read-only for most users
        "share": False
    }
)

# Create team template from successful view
template = view.create_template(
    name="Customer Support Base Template",
    description="Standard starting point for customer support configurations",
    template_variables=[
        {
            "name": "support_tier",
            "type": "select",
            "options": ["tier-1", "tier-2", "escalation"],
            "description": "Level of support complexity"
        },
        {
            "name": "tone",
            "type": "select",
            "options": ["formal", "casual", "empathetic"],
            "description": "Communication style"
        }
    ],
    default_values={
        "support_tier": "tier-1",
        "tone": "empathetic"
    }
)

# Team members can instantiate from template
new_view = template.instantiate(
    name="escalation-support-formal",
    variables={
        "support_tier": "escalation",
        "tone": "formal"
    }
)

Version Control and History
Track changes to views over time to understand how configurations evolve and to enable rollback when needed. Version control for views provides the same benefits as code versioning: traceability, reversibility, and collaborative development.
Change Management: Always test modified views thoroughly before using them in production experiments. Maintain separate development and production view libraries when possible.
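One way to keep those libraries separate is to clone a vetted view from a development workspace into a production one. The sketch below assumes a hypothetical clone_to_library method; your SDK may expose this differently.

# Hypothetical sketch: copy a vetted view from the development library into
# a production library. clone_to_library is an assumed method name.
prod_view = view.clone_to_library(
    library="production",
    name="customer-support-empathy-v2-prod"
)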

# Create a new version of an existing view
updated_view = existing_view.create_version(
    version="2.1.0",
    changes_summary="Improved empathy detection prompt and added fallback responses",
    changelog=[
        "Enhanced system prompt with specific empathy guidelines",
        "Added 5 new few-shot examples for edge cases",
        "Updated temperature from 0.7 to 0.6 for more consistent responses",
        "Added fallback responses for low-confidence scenarios"
    ],
    testing_notes="Tested on 200 support tickets, 15% improvement in empathy scores"
)

# Compare performance between versions
comparison = client.views.compare_versions(
    view_id=existing_view.id,
    baseline_version="2.0.0",
    candidate_version="2.1.0",
    test_dataset="customer-support-validation-set"
)

# Roll back if needed
if comparison.performance_degraded():
    existing_view.rollback_to_version("2.0.0")
    print("Rolled back due to performance regression")
Integration with Experiments
Seamlessly transition from playground exploration to formal experiments by promoting successful views to experiment configurations. This workflow ensures that insights from interactive testing inform systematic evaluation processes.
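As a sketch of what that promotion might look like, the example below assumes a hypothetical client.experiments.create_from_view method; the actual experiments API may differ.

# Hypothetical sketch: promote a validated view into a formal experiment.
# create_from_view is an assumed method name; check the experiments API docs.
experiment = client.experiments.create_from_view(
    view_id=view.id,
    name="customer-support-empathy-v2-eval",
    dataset="customer-support-validation-set",   # dataset name reused from the version-comparison example
    evaluators=view.configuration["evaluators"]  # carry the view's evaluation criteria into the experiment
)
experiment.run()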