Variable mapping is how you control exactly what data flows from one node to another. Understanding the mapping system is key to building powerful, flexible workflows.

Core Concepts

Output Variables

Every processing node exposes these standard outputs:

  • out - The processed result from this node
  • _input - The original input that was passed to this node (passthrough)
  • out.fieldName - Access specific fields within the output object

Input Variables

Target nodes have specific inputs depending on their type:

  • Prompt nodes: in plus one input per prompt variable
  • Dataset nodes: in plus one input per dataset field
  • Utility nodes: Specialized inputs like urlVar for Website Mapper

Mapping Examples

Simple Pass-Through

Source Node Output: out
Target Node Input: in

Result: The entire output object is passed to the input

Field Extraction

Source Node Output: out.url
Target Node Input: urlVar

Result: Only the 'url' field is extracted and passed
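
Conceptually, a mapping like `out.url` behaves as a dot-path lookup on the source node's output object. The sketch below is illustrative only (the `extract_field` helper is hypothetical, not part of the product's API):

```python
def extract_field(obj: dict, path: str):
    """Resolve a dot-separated path such as 'url' or 'meta.author' on a dict."""
    for key in path.split("."):
        obj = obj[key]
    return obj

# Example source output with several fields; only 'url' reaches the target.
source_output = {"url": "https://example.com", "title": "Example"}
print(extract_field(source_output, "url"))  # https://example.com
```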

Original Input Access

Source Node Output: _input.companyName
Target Node Input: company

Result: The companyName field is read from the node's original input rather than its processed output
Tip: Use _input to maintain context from earlier stages of your workflow. This is especially useful when you need to combine original data with processed results.

Dataset Nodes

Dataset Source Outputs

Dataset Source nodes expose additional outputs for each field in your dataset schema:

  • out - The entire row as an object
  • _input - Empty for source nodes
  • out.fieldName - Individual field access (e.g., out.productName, out.description)

Dataset Sink Inputs

Dataset Sink nodes have inputs for each field in the target dataset schema. You can map different sources to each field:

Map to Dataset Sink:
  id -> _input.originalId
  result -> out.generatedText
  confidence -> out.score
  timestamp -> out.processedAt
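
Reading the mapping table above as plain data flow: the sink row is assembled by pulling each mapped field from the node's output or its passthrough input. A minimal sketch with made-up sample values (field names follow the example; the data is illustrative):

```python
# Illustrative node data: processed output and original (passthrough) input.
out = {"generatedText": "A sturdy hiking boot.", "score": 0.92,
       "processedAt": "2024-05-01T12:00:00Z"}
_input = {"originalId": "prod-42"}

# Each sink field is filled from the source named in the mapping.
row = {
    "id": _input["originalId"],
    "result": out["generatedText"],
    "confidence": out["score"],
    "timestamp": out["processedAt"],
}
print(row["id"])  # prod-42
```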

Prompt Node Variables

Prompt nodes create an input handle for each variable in your prompt template:

Prompt Template:
"Analyze the product {{productName}} and describe its {{feature}}"

Available Inputs:
  - in (generic input)
  - productName
  - feature

You can map different sources to each prompt variable:

Map to Prompt Node:
  productName -> out.name
  feature -> out.mainFeature
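
In effect, each prompt variable is a template placeholder filled from its mapped source. A rough sketch of that substitution, assuming simple `{{name}}` placeholders (the `render_prompt` helper is illustrative, not the engine's actual implementation):

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders with mapped values (illustrative)."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

# Mapped as above: productName <- out.name, feature <- out.mainFeature
out = {"name": "TrailMax 2", "mainFeature": "waterproofing"}
prompt = render_prompt(
    "Analyze the product {{productName}} and describe its {{feature}}",
    {"productName": out["name"], "feature": out["mainFeature"]},
)
print(prompt)  # Analyze the product TrailMax 2 and describe its waterproofing
```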

Array Processing

Array Splitter

When using Array Splitter, specify which array field to split:

Source: out.urls (array of strings)
Split Path: urls

Result: Each URL is processed individually by downstream nodes
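
The splitter's effect can be sketched as iterating the array at the configured path, with each item driving one downstream invocation (sample data and the `fetched:` stand-in are illustrative):

```python
source_output = {"urls": ["https://a.example", "https://b.example"]}

# Split on the 'urls' path: each element becomes its own downstream run.
items = source_output["urls"]
results = [f"fetched:{url}" for url in items]  # stand-in for downstream work
print(results)  # ['fetched:https://a.example', 'fetched:https://b.example']
```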

Array Flatten

Array Flatten collects results from iterative processing back into an array:

Multiple inputs from parallel processing
→ Array Flatten
→ Single array output containing all results
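
The collection step above amounts to concatenating the per-branch results into one list (branch contents here are illustrative stand-ins):

```python
# Results arriving from parallel branches after an Array Splitter.
branch_results = [["summary-1"], ["summary-2"], ["summary-3"]]

# Array Flatten's effect: a single flat list containing every item.
flattened = [item for branch in branch_results for item in branch]
print(flattened)  # ['summary-1', 'summary-2', 'summary-3']
```
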
Warning: When working with arrays, ensure your downstream nodes can handle the data type correctly. Array Splitter outputs individual items, not arrays.

Advanced Patterns

Multi-Source Mapping

Map multiple outputs to a single target node to combine data:

Node A: out.title -> targetNode.heading
Node B: out.content -> targetNode.body
Node C: out.author -> targetNode.byline
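
Combined, those three mappings assemble the target node's inputs from three different sources (node names and sample values are illustrative):

```python
# Outputs from three upstream nodes.
node_a = {"title": "Launch Report"}
node_b = {"content": "Full report text."}
node_c = {"author": "J. Doe"}

# The target node receives one value per input handle.
target_inputs = {
    "heading": node_a["title"],
    "body": node_b["content"],
    "byline": node_c["author"],
}
print(target_inputs["heading"])  # Launch Report
```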

Nested Field Access

Access deeply nested fields using dot notation:

out.metadata.author.name
out.results[0].score
_input.original.source.url
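
A path like `out.results[0].score` mixes dot segments with bracket indices. A hedged sketch of how such a path could be resolved (the `resolve_path` helper is hypothetical; the product's actual resolver is not shown here):

```python
import re

def resolve_path(obj, path: str):
    """Resolve dot paths with optional [index] segments, e.g. 'results[0].score'."""
    for token in re.findall(r"\w+|\[\d+\]", path):
        if token.startswith("["):
            obj = obj[int(token[1:-1])]  # numeric index into a list
        else:
            obj = obj[token]             # named field on an object
    return obj

data = {"metadata": {"author": {"name": "Ada"}}, "results": [{"score": 0.97}]}
print(resolve_path(data, "metadata.author.name"))  # Ada
print(resolve_path(data, "results[0].score"))      # 0.97
```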

Debugging Tips

  • Check data structure: Run your flow and inspect outputs to see the actual data shape
  • Use simple mappings first: Start with out → in and refine from there
  • Inspect passthrough: Use _input to debug what data was originally passed
  • Test incrementally: Add one connection at a time and verify outputs

Related Documentation

  • Connecting Nodes - Learn how to create connections
  • Dataset Nodes - Working with Dataset Source and Sink
  • Prompt Node - Using prompts in flows