Robust error handling ensures your workflows continue operating reliably even when individual operations fail. Learn how to detect, handle, and recover from errors in your flows.
Types of Errors
Input Validation Errors
Invalid or missing input data:
- Missing required fields
- Wrong data types
- Invalid format (e.g., malformed URLs)
- Out-of-range values
Node Execution Errors
Failures during node processing:
- API timeouts
- Rate limits exceeded
- Network errors
- Invalid responses from external services
Data Processing Errors
Issues with data transformation:
- Null or undefined values
- Empty arrays where data is expected
- Parse errors (JSON, HTML)
- Mapping failures
Resource Errors
System and quota limitations:
- Insufficient credits
- Memory limits
- Execution timeouts
- Concurrent execution limits
Error Handling Strategies
Retry Logic
Automatically retry transient failures:
Configuration:
Max Retries: 3
Retry Delay: 2s, 5s, 10s (exponential backoff)
Retry On:
- Network timeouts
- Rate limit errors (429)
- Server errors (500, 502, 503)
Don't Retry On:
- Invalid input (400)
- Authentication errors (401, 403)
- Not found (404)

Fallback Values
Provide default values when operations fail:
Node fails to scrape page
→ Fallback to cached version
→ Or use default empty string
→ Flow continues

Skip and Continue
For array processing, skip failed items:
Array of 100 items
→ Item 23 fails
→ Log error
→ Continue with remaining 99 items
→ Report 99 successes, 1 failure

Fail Fast
Stop immediately on critical errors:
API authentication fails
→ No point continuing
→ Stop flow immediately
→ Return clear error message

Configuring Error Handling
Per-Node Settings
Configure error behavior for each node:
Prompt Node:
On Error: Retry 3 times
Page Scraper:
On Error: Skip and continue
Dataset Sink:
On Error: Fail flow (critical)

Flow-Level Settings
Global error handling rules:
- Maximum total errors allowed
- Error rate threshold (e.g., fail if >10% errors)
- Notification preferences
- Logging level
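Taken together, per-node retries and fallbacks amount to a small wrapper around each operation. Here is a sketch in plain JavaScript; `withRetry`, the `status` field on errors, and the delay schedule are assumptions for illustration, not a platform API:

```javascript
// Sketch of retry-with-backoff plus a fallback value. The delay
// schedule mirrors the 2s/5s/10s example above; `status` is assumed
// to be an HTTP-style status code attached to thrown errors.
const RETRYABLE_STATUSES = new Set([429, 500, 502, 503]);

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function withRetry(operation, { delays = [2000, 5000, 10000], fallback } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= delays.length; attempt += 1) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // 400/401/403/404 are permanent: stop retrying immediately.
      if (!RETRYABLE_STATUSES.has(err.status)) break;
      if (attempt < delays.length) await sleep(delays[attempt]);
    }
  }
  if (fallback !== undefined) return fallback; // degrade instead of failing
  throw lastError;
}
```

Wrapping an operation as `withRetry(() => scrapePage(url), { fallback: "" })` retries timeouts and rate limits on the backoff schedule, but returns the fallback immediately for a 404.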
Error Information
Error Object Structure
```json
{
  "error": {
    "node": "Page Scraper",
    "nodeId": "node_abc123",
    "type": "NetworkTimeout",
    "message": "Request timed out after 30s",
    "retryable": true,
    "timestamp": "2024-01-15T10:30:00Z",
    "input": {
      "url": "https://slow-site.com"
    },
    "attempts": 2,
    "stackTrace": "..."
  }
}
```

Error Categories
- Retryable: Temporary issues, safe to retry
- Non-retryable: Permanent failures, don't retry
- Partial: Some items succeeded, others failed
- Critical: Flow cannot continue
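These categories can be derived mechanically from the error object's fields. A sketch in JavaScript; the type names in the sets are illustrative, not a complete taxonomy:

```javascript
// Map a single error object (structure shown above) to a handling
// category. "Partial" describes batch results rather than a single
// error, so it is not produced here.
const RETRYABLE_TYPES = new Set(["NetworkTimeout", "RateLimitExceeded", "ServerError"]);
const CRITICAL_TYPES = new Set(["AuthenticationFailed", "InsufficientCredits"]);

function categorize(error) {
  if (CRITICAL_TYPES.has(error.type)) return "critical"; // flow cannot continue
  if (RETRYABLE_TYPES.has(error.type) || error.retryable === true) return "retryable";
  return "non-retryable"; // permanent failure, don't retry
}
```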
Monitoring Errors
Real-Time Alerts
Get notified when errors occur:
- Email notifications
- Webhook callbacks
- Slack/Discord integration
- Dashboard alerts
Error Logs
Access detailed error information:
Flow Execution Logs:

```
[10:30:00] Node: Page Scraper - Started
[10:30:15] Node: Page Scraper - Error: Timeout
[10:30:17] Node: Page Scraper - Retry 1/3
[10:30:32] Node: Page Scraper - Error: Timeout
[10:30:35] Node: Page Scraper - Retry 2/3
[10:30:50] Node: Page Scraper - Success
[10:30:51] Node: HTML Text Extractor - Started
```

Error Dashboard
View aggregated error metrics:
- Error rate over time
- Most common error types
- Problematic nodes
- Retry success rates
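Dashboard metrics like these are simple aggregations over per-item results. Here is a sketch, assuming each result optionally carries an `error` object with a `type` field (the field names are illustrative):

```javascript
// Fold a batch of per-item results into dashboard-style metrics:
// totals, a count per error type, and an overall error rate.
function summarize(results) {
  const summary = { totalItems: results.length, successful: 0, failed: 0, errors: {} };
  for (const r of results) {
    if (r.error) {
      summary.failed += 1;
      summary.errors[r.error.type] = (summary.errors[r.error.type] || 0) + 1;
    } else {
      summary.successful += 1;
    }
  }
  summary.errorRate = summary.totalItems ? summary.failed / summary.totalItems : 0;
  return summary;
}
```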
Best Practices
1. Validate Inputs Early
API Input Node
→ Validation Node (check required fields)
→ If invalid: Return error immediately
→ If valid: Continue processing

2. Use Appropriate Timeouts
Fast operations: 10s timeout
API calls: 30s timeout
Web scraping: 60s timeout
Large LLM requests: 120s timeout

3. Provide Context in Errors
```
// Good error message
{
  "error": "Failed to scrape URL: https://example.com/page1",
  "reason": "404 Not Found",
  "suggestion": "Check if URL is valid"
}

// Bad error message
{
  "error": "Failed"
}
```

4. Test Error Scenarios
Create test cases for common failures:
- Invalid URLs
- Empty datasets
- Malformed data
- Missing API keys
- Network failures
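Several of these scenarios (invalid URLs, malformed data, missing fields) can be exercised against a small validation helper like the one below. This is a sketch; the required `url` field and the return shape are illustrative:

```javascript
// Minimal input validator of the kind recommended in practice 1.
// Returns { valid, errors } so the flow can fail fast with context.
function validateInput(input) {
  if (!input || typeof input !== "object") {
    return { valid: false, errors: ["Input must be an object"] };
  }
  const errors = [];
  if (!input.url) {
    errors.push("Missing required field: url");
  } else {
    try {
      new URL(input.url); // the URL constructor throws on malformed URLs
    } catch {
      errors.push(`Invalid URL: ${input.url}`);
    }
  }
  return { valid: errors.length === 0, errors };
}
```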
Recovery Patterns
Checkpoint and Resume
Save progress and resume after failures:
Process 100 items
→ Save checkpoint every 10 items
→ If flow fails at item 45
→ Resume from checkpoint 40
→ Only reprocess 5 items

Dead Letter Queue
Save failed items for later review:
Array Splitter
→ Process items
→ Successful: → Dataset Sink (results)
→ Failed: → Dataset Sink (errors)
Review failed items dataset
→ Fix underlying issues
→ Reprocess failed items

Circuit Breaker
Stop calling failing services temporarily:
API call fails 5 times in a row
→ Open circuit breaker
→ Return cached data or error
→ After 5 minutes, try again
→ If successful, close circuit breaker

Advanced Error Handling
Conditional Branching
Page Scraper
→ If success: Continue normal flow
→ If 404 error: Log and skip
→ If timeout: Retry with longer timeout
→ If other error: Alert team

Error Aggregation
Collect and analyze batch errors:
```json
{
  "totalItems": 100,
  "successful": 87,
  "failed": 13,
  "errors": {
    "timeout": 8,
    "not_found": 3,
    "parse_error": 2
  },
  "errorRate": 0.13
}
```

Custom Error Messages
Map technical errors to user-friendly messages:
```javascript
const errorMessages = {
  "RATE_LIMIT": "Too many requests. Please try again in a few minutes.",
  "INVALID_URL": "The URL provided is not valid. Please check and try again.",
  "NETWORK_ERROR": "Unable to connect. Please check your internet connection.",
  "INSUFFICIENT_CREDITS": "You've run out of API credits. Please upgrade your plan."
};
```

Debugging Failed Flows
Inspect Node Outputs
1. Click the failed node
2. View input data received
3. Check output or error message
4. Review execution logs
5. Identify the root cause
Test Node in Isolation
Run the problematic node separately:
Extract the failing node's input
→ Create minimal test flow
→ Run just that node
→ Easier to debug without full flow context

Use Smaller Datasets
Reproduce errors with minimal data:
Full dataset: 1000 items, fails at item 487
→ Create test dataset with just item 487
→ Reproduce error quickly
→ Fix and verify
→ Rerun full dataset
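When the logs don't identify which item failed, feeding items to the failing operation one at a time will locate it. A sketch, where `process` stands in for whatever node is failing:

```javascript
// Return the index of the first item that makes `process` throw,
// or -1 if every item succeeds. Assumes items fail independently,
// so the bad item can be extracted into a one-item test dataset.
function findFailingIndex(items, process) {
  for (let i = 0; i < items.length; i++) {
    try {
      process(items[i]);
    } catch {
      return i; // build a minimal test dataset from items[i]
    }
  }
  return -1;
}
```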