The Prompt node lets you use any saved prompt from your Evaligo workspace in a flow. Each prompt variable becomes an input handle for dynamic data mapping.
Adding a Prompt Node
1. Drag the Prompt node from the palette. Find it in the "AI Processing" section.
2. Select a saved prompt. Click the node to open the prompt selector.
3. Configure iterations (optional). Enable parallel processing for array inputs.
Prompt Variables as Inputs
When you select a prompt with variables, the Prompt node automatically creates input handles for each variable.
Prompt Template:
"Write a {{tone}} product description for {{productName}}"
Node Inputs Created:
- in (generic fallback)
- tone
- productName

You can then map data from upstream nodes to these specific inputs:
Dataset Source:
out.name → Prompt.productName
out.targetAudience → Prompt.tone

Mapping data to a specific variable input rather than the generic in input gives you more control and makes your flow easier to understand.

Configuration Options
Parallel Iterations
Enable this option to process array items in parallel:
- Disabled: Processes items sequentially (default, more predictable)
- Enabled: Processes multiple items simultaneously (faster, uses more credits)
Prompt Settings
The node uses the settings configured in your saved prompt:
- Model selection (GPT-4, Claude, etc.)
- Temperature and other parameters
- System prompt
- Response format/schema
To change these settings, edit the prompt in the Prompt Playground; the changes apply automatically to all flows that use that prompt.
Output Structure
The Prompt node outputs:
- out - The LLM response (string, or structured object if using a schema)
- _input - The original input data passed to the node
- out.content - The text response (for structured responses)
- out.fieldName - Specific fields when using a JSON schema
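For example, a structured response might take the following shape (the values are hypothetical; the exact fields come from your prompt's schema):

{
  "out": {
    "summary": "A short, upbeat description of the product.",
    "sentiment": "positive"
  },
  "_input": {
    "productName": "Trail Runner X",
    "tone": "friendly"
  }
}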
Working with Arrays
Processing Lists
When connected after an Array Splitter, the Prompt node processes each item individually:
Dataset → Array Splitter (split on 'products')
→ Prompt Node (process each product)
→ Array Flatten (collect results)
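As a sketch, a dataset row like the one below (hypothetical fields) would be split on products, each item processed by the prompt individually, and the results flattened back into a single list:

{
  "category": "outdoor gear",
  "products": [
    { "name": "Trail Runner X", "targetAudience": "casual hikers" },
    { "name": "Summit Pro", "targetAudience": "alpine climbers" }
  ]
}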
Batch Processing
For large datasets, consider using Dataset Source with sample selection to test your prompt before running it on the full dataset.
Best Practices
Test Your Prompts First
Always test prompts in the Prompt Playground before using them in flows. This helps you:
- Verify the prompt produces expected outputs
- Optimize for cost and latency
- Run evaluations to measure quality
- Iterate quickly without running full flows
Use Schemas for Structured Output
Configure JSON schemas in your prompts to get reliable, parseable outputs:
{
"type": "object",
"properties": {
"summary": { "type": "string" },
"sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
"confidence": { "type": "number" }
}
}
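With this schema in place, the node's output is directly parseable. A response might look like the following (values are illustrative):

{
  "summary": "Lightweight trail shoe with a grippy outsole and breathable mesh upper.",
  "sentiment": "positive",
  "confidence": 0.92
}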
Handle Errors Gracefully
Prompt nodes can fail due to:
- API rate limits
- Malformed input data
- Schema validation errors
- Exceeded token limits
Use error handling nodes downstream or implement retry logic in your deployment configuration.
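As an illustration, a retry policy in a deployment configuration could look like the sketch below; the key names here are hypothetical, and the exact options depend on your deployment settings:

{
  "retry": {
    "maxAttempts": 3,
    "backoff": "exponential",
    "initialDelayMs": 1000
  }
}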
Integration with Prompt Engineering
Prompt nodes bridge your prompt engineering work with production workflows:
- Test prompts with A/B experiments in the Playground
- Use custom evaluators to measure quality
- Deploy winning prompts in flows
- Monitor production performance with tracing
- Iterate based on real-world results