Flows and Prompt Engineering work together seamlessly. Test and refine prompts in the Playground, then deploy them in flows for automated, production-scale execution.
The Workflow
1. Prompt Playground
↓ Create and test prompts
↓ Run A/B experiments
↓ Evaluate quality
↓ Iterate and optimize
2. Save Prompt
↓ Save winning variant
↓ Document parameters
↓ Set model settings
3. Use in Flow
↓ Add Prompt node
↓ Select saved prompt
↓ Map variables
↓ Execute at scale
Benefits of This Approach
Quality Assurance
- Test prompts with evaluators before automation
- Validate quality metrics meet thresholds
- Catch issues in controlled environment
- Iterate quickly without flow execution costs
Version Control
- Prompts are centrally managed
- Update once, affects all flows using it
- Roll back to previous versions if needed
- Track changes and improvements over time
Consistent Settings
- Model selection stored with prompt
- Temperature and parameters preserved
- System prompts included
- Response schema enforced
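In practice, "settings travel with the prompt" means the template and its model configuration are stored as one unit. A minimal sketch of what such a record might contain — the field names here are illustrative, not the platform's actual storage format:

```python
# Illustrative only: a saved prompt bundles its template with model settings,
# so every flow referencing it runs with an identical configuration.
saved_prompt = {
    "name": "Product Description Generator",
    "template": "Write a {{tone}} product description for {{productName}}",
    "variables": ["tone", "productName"],
    "model": "gpt-4",         # model choice travels with the prompt
    "temperature": 0.7,       # sampling settings are preserved
    "max_tokens": 500,
    "response_schema": None,  # optional JSON schema for structured output
}
```

Because flows reference the prompt by name rather than copying its contents, updating this record updates every flow that uses it.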
Tip
Always test prompts in the Playground with your evaluation datasets before using them in production flows.
Step-by-Step Integration
1. Create and Test Prompt
In the Prompt Playground:
- Write your prompt template with variables
- Configure model settings (model, temperature, etc.)
- Test with sample inputs
- Run evaluations to measure quality
- Iterate until satisfactory
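The Playground performs variable substitution for you, but it can help to sanity-check a template offline. A minimal sketch of `{{variable}}` substitution — the `render` helper below is illustrative, not a platform API:

```python
import re

def render(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders; raise if a variable is missing."""
    def sub(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

template = "Write a {{tone}} product description for {{productName}}"
print(render(template, {"tone": "friendly", "productName": "SolarLamp"}))
# → Write a friendly product description for SolarLamp
```

Failing loudly on a missing variable mirrors what you want in a flow: a mapping gap should surface as an error, not as a prompt with a literal `{{placeholder}}` left in it.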
2. Save the Prompt
Prompt Name: "Product Description Generator"
Template: "Write a {{tone}} product description for {{productName}}"
Variables: tone, productName
Model: GPT-4
Temperature: 0.7
Max Tokens: 500
3. Add to Flow
In the Flow Playground:
- Drag Prompt node to canvas
- Select "Product Description Generator" from dropdown
- Node automatically creates inputs for tone and productName
- Map variables from upstream nodes
4. Map Variables
Dataset Source (products)
out.name → Prompt.productName
out.targetAudience → Prompt.tone
Prompt Node executes with mapped values
→ out.content contains generated description
Advanced Patterns
Conditional Prompt Selection
Use different prompts based on data characteristics:
Dataset Source
→ Branch by product category
→ Branch A: Prompt "Technical Description"
→ Branch B: Prompt "Marketing Copy"
→ Merge results
Multi-Stage Processing
Chain multiple prompts for complex tasks:
Prompt 1: "Extract Key Facts"
→ output: structured data
Prompt 2: "Generate Summary"
→ uses facts from Prompt 1
Prompt 3: "Create Title"
→ uses summary from Prompt 2
Feedback Loops
Use evaluation results to improve prompts:
Flow Execution
→ Prompt generates outputs
→ Evaluator scores quality
→ Low scores trigger alert
→ Refine prompt in Playground
→ Update saved prompt
→ Re-run flow with improved version
Warning
Updating a saved prompt affects all flows using it. Test changes thoroughly before updating production prompts.
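The "low scores trigger alert" step of the feedback loop can be sketched as a simple threshold check. Everything here is hypothetical — `score_output` and `notify` stand in for whatever evaluator and alerting hook your setup provides:

```python
QUALITY_THRESHOLD = 0.8  # illustrative threshold, tune per use case

def check_outputs(outputs, score_output, notify):
    """Score each output; collect and report the ones below threshold."""
    failures = []
    for out in outputs:
        score = score_output(out)
        if score < QUALITY_THRESHOLD:
            failures.append((out, score))
    if failures:
        notify(f"{len(failures)} output(s) below {QUALITY_THRESHOLD}; "
               "refine the prompt in the Playground")
    return failures

# Example with a dummy length-based scorer and an in-memory alert list:
alerts = []
bad = check_outputs(
    ["a detailed description", "???"],
    lambda o: 0.9 if len(o) > 3 else 0.4,
    alerts.append,
)
```

Keeping the threshold and scorer outside the flow itself makes it easy to tighten quality gates without touching the prompt.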
Prompt Variables Best Practices
Clear Variable Names
✅ Good:
{{productName}}, {{targetAudience}}, {{keyFeatures}}
❌ Bad:
{{x}}, {{data}}, {{input1}}
Provide Context
Better: "Write a {{tone}} product description..."
vs
Worse: "Describe {{product}}"
Use Structured Outputs
Configure JSON schemas for reliable parsing:
Schema:
{
  "type": "object",
  "properties": {
    "description": { "type": "string" },
    "keyPoints": {
      "type": "array",
      "items": { "type": "string" }
    },
    "tone": {
      "type": "string",
      "enum": ["professional", "casual", "technical"]
    }
  }
}
Monitoring Prompt Performance
In Playground
- Run experiments to compare variants
- Track evaluation scores over time
- Analyze failure patterns
- Measure cost and latency
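Comparing variants comes down to aggregating scores and latency per variant. A small sketch of that aggregation — the run records and their fields are illustrative, not the Playground's export format:

```python
from statistics import mean

# Hypothetical experiment records, one per evaluated run.
runs = [
    {"variant": "A", "score": 0.82, "latency_ms": 910},
    {"variant": "A", "score": 0.78, "latency_ms": 980},
    {"variant": "B", "score": 0.91, "latency_ms": 1150},
    {"variant": "B", "score": 0.88, "latency_ms": 1090},
]

def summarize(runs):
    """Average score and latency per variant, for A/B comparison."""
    summary = {}
    for variant in {r["variant"] for r in runs}:
        group = [r for r in runs if r["variant"] == variant]
        summary[variant] = {
            "avg_score": mean(r["score"] for r in group),
            "avg_latency_ms": mean(r["latency_ms"] for r in group),
        }
    return summary
```

Here variant B scores higher but is also slower; surfacing both numbers side by side is what makes the quality/latency trade-off visible before you commit to a variant.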
In Flows
- Monitor execution success rates
- Track API usage and costs
- Review output quality samples
- Identify edge cases for testing
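Success rates and costs fall out of the execution log with a couple of reductions. The record shape below is an assumption for illustration; real flow logs will differ:

```python
# Illustrative execution records from a flow run.
executions = [
    {"status": "success", "cost_usd": 0.012},
    {"status": "success", "cost_usd": 0.011},
    {"status": "error",   "cost_usd": 0.0},
    {"status": "success", "cost_usd": 0.013},
]

# Fraction of executions that completed, and total API spend for the run.
success_rate = sum(e["status"] == "success" for e in executions) / len(executions)
total_cost = sum(e["cost_usd"] for e in executions)
```

Tracking these per run makes regressions obvious: a prompt update that drops the success rate or inflates cost shows up immediately.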
With Tracing
Enable production tracing to monitor deployed flows:
Deployed Flow API
→ Traces every execution
→ Logs inputs and outputs
→ Records latency and costs
→ Flags anomalies
→ Feeds back to Playground for improvement
Common Patterns
Content Generation
Dataset Source (topics)
→ Prompt: "Generate article outline"
→ Array Splitter (split sections)
→ Prompt: "Write section content"
→ Array Flatten (collect sections)
→ Prompt: "Format final article"
→ Dataset Sink
Data Enrichment
Dataset Source (company names)
→ Website Mapper (find pages)
→ Page Scraper (extract info)
→ Prompt: "Summarize company"
→ Dataset Sink (enriched data)
Quality Control
Dataset Source
→ Prompt: "Generate content"
→ Prompt: "Review and score content"
→ Filter: Keep only high scores
→ Dataset Sink (approved content)
Troubleshooting
Prompt Not Found
If a saved prompt is missing from the dropdown:
- Verify prompt is saved in the same workspace
- Check prompt hasn't been deleted
- Refresh the flow page
Variable Mapping Issues
If variables aren't mapping correctly:
- Check variable names match exactly
- Verify data types are compatible
- Test with simple pass-through first
- Inspect intermediate outputs
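"Check variable names match exactly" is easy to automate: extract the `{{variables}}` a template declares and diff them against the mapped inputs. A small sketch (the `check_mapping` helper is illustrative, not a platform API):

```python
import re

def check_mapping(template: str, provided: dict):
    """Compare {{variables}} declared in a template against mapped inputs."""
    declared = set(re.findall(r"\{\{(\w+)\}\}", template))
    missing = declared - provided.keys()   # declared but not mapped
    extra = provided.keys() - declared     # mapped but never used
    return missing, extra

missing, extra = check_mapping(
    "Write a {{tone}} description for {{productName}}",
    {"tone": "casual", "productname": "Widget"},  # note the lowercase n
)
# missing == {"productName"}, extra == {"productname"}
```

Case mismatches like `productname` vs `productName` are the most common culprit, and this check surfaces them as a matched missing/extra pair.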
Unexpected Outputs
If flow outputs differ from Playground results:
- Verify same model and settings
- Check input data format
- Review variable mapping
- Test prompt separately with flow inputs
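"Verify same model and settings" can be made mechanical by diffing the two configurations. The setting dicts below are illustrative; pull the real values from wherever your Playground and flow store them:

```python
# Hypothetical settings captured from each environment.
playground_settings = {"model": "gpt-4", "temperature": 0.7, "max_tokens": 500}
flow_settings = {"model": "gpt-4", "temperature": 1.0, "max_tokens": 500}

# Keys whose values differ, mapped to (playground, flow) pairs.
diff = {
    key: (playground_settings[key], flow_settings[key])
    for key in playground_settings
    if playground_settings[key] != flow_settings[key]
}
# diff == {"temperature": (0.7, 1.0)}
```

A non-empty diff (here, a temperature drift) is usually the whole explanation for Playground and flow outputs disagreeing on the same inputs.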