The Array Splitter node takes an array and splits it into individual items, allowing downstream nodes to process each item in its own execution. This is essential for batch-processing workflows.
How It Works
Array Splitter transforms an array field into individual executions:
Input: { urls: ["url1", "url2", "url3"] }
↓
Array Splitter (split on 'urls')
↓
Output: Three separate executions:
1. "url1"
2. "url2"
3. "url3"Configuration
Split Path
Specify which field contains the array to split:
Examples:
urls # Split the 'urls' array
results # Split the 'results' array
data.items # Split nested 'items' array
Input/Output Structure
// Input
{
"out": {
"urls": [
"https://example.com/page1",
"https://example.com/page2",
"https://example.com/page3"
]
}
}
// Output (processed individually downstream)
// Execution 1: "https://example.com/page1"
// Execution 2: "https://example.com/page2"
// Execution 3: "https://example.com/page3"
Common Usage Patterns
Website Crawling
Website Mapper (discover pages)
out.urls: ["url1", "url2", "url3"]
↓
Array Splitter (split urls)
↓
Page Scraper (process each URL)
↓
Array Flatten (collect results)
Batch Prompt Processing
Dataset Source (products)
out: [
{name: "Product A", desc: "..."},
{name: "Product B", desc: "..."}
]
↓
Array Splitter
↓
Prompt (generate description for each)
↓
Array Flatten
↓
Dataset Sink
Multi-Step Processing
Dataset Source (companies)
↓
Array Splitter
↓
Website Mapper (map each company site)
↓
Array Splitter (split discovered URLs)
↓
Page Scraper (scrape each page)
↓
Array Flatten (collect all pages)
↓
Prompt (analyze content)
↓
Dataset Sink
Processing Modes
Sequential Processing (Default)
Process array items one at a time:
- More predictable execution order
- Lower concurrent API usage
- Easier to debug
- Slower for large arrays
Parallel Processing
Process multiple items simultaneously:
- Much faster for large arrays
- Higher API credit usage
- May hit rate limits
- More complex error handling
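The node runs these modes internally, but the tradeoff can be sketched in plain Python. The `process_item` function below is a hypothetical stand-in for whatever downstream node handles each item; `max_workers` illustrates capping concurrency to stay under rate limits:

```python
from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    # Stand-in for downstream work (e.g., a scrape or an LLM call)
    return item.upper()

items = ["url1", "url2", "url3"]

# Sequential: predictable order, one in-flight request at a time
sequential_results = [process_item(item) for item in items]

# Parallel: faster for large arrays, but more concurrent API usage;
# max_workers caps concurrency to reduce the chance of rate limiting
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(process_item, items))

# pool.map preserves input order even though the work runs concurrently
assert sequential_results == parallel_results
```

Note that `pool.map` returns results in input order, so parallel mode does not by itself scramble the output; the unpredictability lies in when each item's side effects (API calls, logs) happen.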
Working with Nested Arrays
Use dot notation to split nested arrays:
// Input structure
{
"data": {
"results": [
{ "id": 1, "urls": ["a.com", "b.com"] },
{ "id": 2, "urls": ["c.com", "d.com"] }
]
}
}
// Split path options:
data.results // Split outer array (2 items)
data.results.urls // Split inner urls (needs two splitters)
Error Handling
Empty Arrays
If the array is empty, downstream nodes are skipped and the flow continues.
Missing Field
If the specified field doesn't exist, the node logs an error and stops execution.
Non-Array Values
If the field is not an array, the node treats it as a single-item array and processes it once.
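Taken together, the splitting rules above (dot-notation paths plus the three edge cases) can be sketched as a small function. This is an illustrative sketch, not the node's actual implementation:

```python
def split_array(data, path):
    """Sketch of the splitting rules described above (not the node's real code)."""
    value = data
    for key in path.split("."):          # resolve dot notation, e.g. "data.items"
        if not isinstance(value, dict) or key not in value:
            # missing field: log an error and stop execution
            raise KeyError(f"Split path '{path}' not found")
        value = value[key]
    if not isinstance(value, list):
        return [value]                   # non-array value: treat as single-item array
    return value                         # empty list -> no items, downstream is skipped

split_array({"out": {"urls": ["a", "b"]}}, "out.urls")   # -> ["a", "b"]
split_array({"out": {"urls": []}}, "out.urls")           # -> []
split_array({"out": {"urls": "a"}}, "out.urls")          # -> ["a"]
```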
Best Practices
Test with Small Arrays First
- Use 2-3 items for initial testing
- Verify each item processes correctly
- Check that downstream nodes handle the data properly
- Then scale up to full arrays
Monitor Array Sizes
- Very large arrays (100+ items) can be slow
- Consider filtering or limiting array size upstream
- Use pagination for massive datasets
- Break into smaller batches if needed
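Breaking a large array into smaller batches upstream can be done with a simple chunking step before the splitter. A minimal sketch, assuming a hypothetical `chunk` helper:

```python
def chunk(items, size):
    """Yield successive batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

urls = [f"https://example.com/page{n}" for n in range(1, 251)]
batches = list(chunk(urls, 50))
# 250 URLs -> 5 batches of 50; each batch can run as its own flow pass
```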
Combine with Array Flatten
Always use Array Flatten after processing split items to collect results back into a single array:
Array Splitter
→ Process individually
→ Array Flatten
→ Continue with combined results
Performance Tips
Batch Size Optimization
- Small arrays (<10 items): Sequential is fine
- Medium arrays (10-50 items): Consider parallel
- Large arrays (50+ items): Use parallel with batching
- Huge arrays (500+ items): Break into separate flow runs
Cost Management
When using expensive operations (LLM calls) after Array Splitter:
- Estimate total cost: items × cost per item
- Test on a subset first
- Monitor credit usage during execution
- Set up cost alerts
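The items × cost-per-item estimate is quick to run before committing a full array. The per-item figure below is a made-up example; substitute the real cost of your downstream operation:

```python
# Back-of-envelope cost check before splitting a large array
items = 500
cost_per_item = 0.002          # assumed example cost per LLM call, in dollars
estimated_total = items * cost_per_item
print(f"Estimated cost: ${estimated_total:.2f} for {items} items")

# Run a small subset first to confirm the per-item figure holds in practice
subset = 10
print(f"Subset cost: ${subset * cost_per_item:.2f}")
```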
Integration Patterns
With Dataset Nodes
Dataset Source
→ Array Splitter (if output is array)
→ Process
→ Array Flatten
Dataset Sink
With Website Nodes
Website Mapper
→ Array Splitter (split URLs)
→ Page Scraper
→ HTML Text Extractor
→ Prompt
→ Array Flatten
Nested Splitting
You can use multiple Array Splitters for nested processing:
Dataset Source (companies with URLs)
→ Array Splitter 1 (split companies)
→ Array Splitter 2 (split URLs per company)
→ Process each URL
→ Array Flatten 2 (collect company URLs)
→ Array Flatten 1 (collect all results)
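The two splitter/flatten pairs nest like two loops: the inner flatten collects one company's URL results, the outer flatten collects one entry per company. A sketch of that data flow, with a hypothetical `process_url` standing in for the per-URL work:

```python
def process_url(url):
    # Stand-in for the per-URL work (scrape, extract, analyze)
    return {"url": url, "ok": True}

companies = [
    {"name": "Acme", "urls": ["a.com/1", "a.com/2"]},
    {"name": "Beta", "urls": ["b.com/1"]},
]

results = []
for company in companies:                # Array Splitter 1: one pass per company
    per_company = []
    for url in company["urls"]:          # Array Splitter 2: one pass per URL
        per_company.append(process_url(url))
    # Array Flatten 2: collect this company's URL results
    results.append({"name": company["name"], "pages": per_company})
# Array Flatten 1: results now holds one entry per company
```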