Variable mapping is how you control exactly what data flows from one node to another. Understanding the mapping system is key to building powerful, flexible workflows.
Core Concepts
Output Variables
Every processing node exposes these standard outputs:
- out - The processed result from this node
- _input - The original input that was passed to this node (passthrough)
- out.fieldName - Access specific fields within the output object
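For example, if a node produces the output below (field names are illustrative), the variables resolve as follows:
Node output: { "title": "Q3 Report", "score": 0.92 }
out -> the whole object { "title": "Q3 Report", "score": 0.92 }
out.title -> "Q3 Report"
out.score -> 0.92
_input -> whatever data was originally passed into the node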
Input Variables
Target nodes have specific inputs depending on their type:
- Prompt nodes: in plus one input per prompt variable
- Dataset nodes: in plus one input per dataset field
- Utility nodes: Specialized inputs like urlVar for Website Mapper
Mapping Examples
Simple Pass-Through
Source Node Output: out
Target Node Input: in
Result: The entire output object is passed to the input
Field Extraction
Source Node Output: out.url
Target Node Input: urlVar
Result: Only the 'url' field is extracted and passed
Original Input Access
Source Node Output: _input.companyName
Target Node Input: company
Result: Access the original input from earlier in the flow
Use _input to maintain context from earlier stages of your workflow. This is especially useful when you need to combine original data with processed results.
Dataset Nodes
Dataset Source Outputs
Dataset Source nodes expose additional outputs for each field in your dataset schema:
- out - The entire row as an object
- _input - Empty for source nodes
- out.fieldName - Individual field access (e.g., out.productName, out.description)
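For example, given a dataset row with this (illustrative) schema, the source node exposes:
Row: { "productName": "Trail Backpack", "description": "40L waterproof pack" }
out -> the entire row object
out.productName -> "Trail Backpack"
out.description -> "40L waterproof pack"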
Dataset Sink Inputs
Dataset Sink nodes have inputs for each field in the target dataset schema. You can map different sources to each field:
Map to Dataset Sink:
id -> _input.originalId
result -> out.generatedText
confidence -> out.score
timestamp -> out.processedAt
Prompt Node Variables
Prompt nodes create an input handle for each variable in your prompt template:
Prompt Template:
"Analyze the product {{productName}} and describe its {{feature}}"
Available Inputs:
- in (generic input)
- productName
- feature
You can map different sources to each prompt variable:
Map to Prompt Node:
productName -> out.name
feature -> out.mainFeature
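Assuming the upstream node produced name: "Trail Backpack" and mainFeature: "waterproof zippers" (illustrative values), the prompt would render as:
"Analyze the product Trail Backpack and describe its waterproof zippers"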
Array Processing
Array Splitter
When using Array Splitter, specify which array field to split:
Source: out.urls (array of strings)
Split Path: urls
Result: Each URL is processed individually by downstream nodes
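For example, with illustrative data:
Source output: out.urls = ["https://a.example", "https://b.example", "https://c.example"]
After splitting: three items flow downstream, each carrying a single URL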
Array Flatten
Array Flatten collects results from iterative processing back into an array:
Multiple inputs from parallel processing
→ Array Flatten
→ Single array output containing all results
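Continuing the illustrative example above: if each split URL is summarized by a Prompt node, Array Flatten gathers the three summaries back into one array, e.g. ["summary of a.example", "summary of b.example", "summary of c.example"].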
Advanced Patterns
Multi-Source Mapping
Map multiple outputs to a single target node to combine data:
Node A: out.title -> targetNode.heading
Node B: out.content -> targetNode.body
Node C: out.author -> targetNode.byline
Nested Field Access
Access deeply nested fields using dot notation:
out.metadata.author.name
out.results[0].score
_input.original.source.url
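For instance, the first two paths above assume the source output is shaped roughly like this (structure is illustrative):
{
  "metadata": { "author": { "name": "Ada" } },
  "results": [ { "score": 0.87 }, { "score": 0.64 } ]
}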
Debugging Tips
- Check data structure: Run your flow and inspect outputs to see the actual data shape
- Use simple mappings first: Start with out → in and refine from there
- Inspect passthrough: Use _input to debug what data was originally passed
- Test incrementally: Add one connection at a time and verify outputs