Once you've built your flow, running it executes all nodes in sequence, processing your data through each step while providing real-time progress feedback.
How to Run a Flow
1. Select data source: If using a Dataset Source, select which samples to process.
2. Click "Run Flow": The execution button is in the top toolbar.
3. Monitor progress: Watch real-time status updates for each node.
4. Review results: Check outputs in Dataset Sinks or node inspectors.
Execution Modes
Sync (Synchronous) Mode
Default mode for manual flow execution:
- Waits for completion before returning
- Shows progress in real-time
- Best for testing and development
- Limited to reasonable execution times (<5 minutes)
Async (Asynchronous) Mode
Used for long-running flows and API deployments:
- Returns immediately with execution ID
- Flow continues in background
- Poll for status updates
- Best for large datasets or slow operations
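In code, async execution typically comes down to a poll loop: start the run, keep the execution ID, and check status until a terminal state. A minimal Python sketch, assuming only that your client exposes some status-fetching function (the names here are illustrative, not Evaligo's actual API):

```python
import time

def poll_execution(get_status, execution_id, interval=2.0, timeout=300.0):
    """Poll an async flow execution until it reaches a terminal state.

    `get_status` is whatever function your client exposes for fetching
    execution state; here it is assumed to return a dict with a "status" key.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_status(execution_id)
        if result["status"] in ("success", "error", "cancelled"):
            return result
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"Execution {execution_id} did not finish in {timeout}s")

# Example with a stubbed status function that succeeds on the third poll:
calls = iter(["pending", "running", "success"])
stub = lambda _id: {"status": next(calls)}
print(poll_execution(stub, "exec-123", interval=0.0))  # {'status': 'success'}
```

The interval and timeout values are placeholders; tune them to your flow's expected runtime.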
Progress Monitoring
Node Status Indicators
Each node shows its current status during execution:
- Pending: Waiting to execute
- Running: Currently processing (animated)
- Success: Completed successfully (green checkmark)
- Error: Failed with error (red X)
- Skipped: Not executed due to conditions
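These statuses behave like a small state machine: every node starts Pending, becomes Running while it processes, and ends Success or Error, with downstream nodes Skipped after a failure. A hedged sketch of how a sequential runner might assign them (names are illustrative, not Evaligo's internals):

```python
from enum import Enum

class NodeStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    SUCCESS = "success"
    ERROR = "error"
    SKIPPED = "skipped"

def run_nodes(nodes):
    """Run (name, fn) pairs in order; after a failure, skip the rest."""
    statuses = {name: NodeStatus.PENDING for name, _ in nodes}
    failed = False
    for name, fn in nodes:
        if failed:
            statuses[name] = NodeStatus.SKIPPED
            continue
        statuses[name] = NodeStatus.RUNNING
        try:
            fn()
            statuses[name] = NodeStatus.SUCCESS
        except Exception:
            statuses[name] = NodeStatus.ERROR
            failed = True
    return statuses

result = run_nodes([
    ("scrape", lambda: None),
    ("summarize", lambda: 1 / 0),  # this node fails
    ("save", lambda: None),        # skipped because of the failure
])
# scrape -> SUCCESS, summarize -> ERROR, save -> SKIPPED
```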
Progress Metrics
Monitor key execution metrics for each run.
Real-Time Logs
View detailed logs during execution:
- Node start/completion times
- Data flow between nodes
- Error messages and warnings
- Resource usage (tokens, API calls)
Running with Datasets
Sample Selection
Choose which dataset rows to process:
Options:
- Selected samples (manually picked)
- First N samples (for testing)
- Random sample (statistical testing)
- All samples (full dataset run)
Incremental Processing
Process dataset in batches:
1. Run with 5 samples (validate logic)
2. Run with 20 samples (test edge cases)
3. Run with 100 samples (verify scale)
4. Run full dataset (production)
Execution Control
Pause/Resume
For long-running flows:
- Pause execution at any time
- Inspect intermediate results
- Resume from where you paused
- Useful for debugging complex flows
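Cooperative pausing is usually implemented with a flag that the worker checks between items, so a pause takes effect at the next item boundary and intermediate results stay inspectable. A minimal Python sketch of that pattern (not Evaligo's internals):

```python
import threading

class PausableRun:
    """Pause/resume an item-processing loop via a threading.Event."""
    def __init__(self):
        self._resume = threading.Event()
        self._resume.set()          # start unpaused
        self.results = []

    def pause(self):
        self._resume.clear()

    def resume(self):
        self._resume.set()

    def process(self, items, fn):
        for item in items:
            self._resume.wait()     # blocks here while paused
            self.results.append(fn(item))
        return self.results

run = PausableRun()
print(run.process([1, 2, 3], lambda x: x * 2))  # [2, 4, 6]
```

In a real setup `process` would run on a worker thread while `pause()`/`resume()` are called from the UI.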
Stop/Cancel
Abort execution if needed:
- Stops current node immediately
- Saves partial results
- Can restart from beginning
- Refunds unused credits
Retry Failed Nodes
Recover from transient failures:
Error on Node 5 (API timeout)
→ Click "Retry Node"
→ Continues from failure point
→ Doesn't re-execute successful nodes
Performance Optimization
Parallel vs Sequential
Sequential (default):
- Process items one at a time
- Lower credit usage
- Easier to debug
- Time: N × per-item-time
Parallel:
- Process multiple items simultaneously
- Higher credit usage
- Faster completion
- Time: ceil(N / concurrency) × per-item-time
Caching
Evaligo caches certain node outputs:
- Website mapper results (24 hours)
- Page scraper content (1 hour)
- Prompt responses (optional, configurable)
- Reduces redundant API calls
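A time-to-live cache like the one described can be sketched in a few lines, with the TTLs above (24 hours for mapper results, 1 hour for scraped pages) set per node type. This is an illustrative pattern, not Evaligo's implementation:

```python
import time

class TTLCache:
    """Cache node outputs with a per-entry time-to-live."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        # Record when this entry stops being valid.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]    # stale: drop it and report a miss
            return None
        return value

cache = TTLCache()
cache.set(("scrape", "https://example.com"), "<html>…</html>", ttl_seconds=3600)
cache.get(("scrape", "https://example.com"))  # hit until the hour is up
```

Keying on the node type plus its inputs (here, the URL) is what makes repeated runs skip redundant API calls.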
Resource Limits
Be aware of execution limits:
Free Tier:
- Max execution time: 5 minutes
- Max concurrent runs: 2
- Max items per array: 100
Pro Tier:
- Max execution time: 30 minutes
- Max concurrent runs: 10
- Max items per array: 1,000
Enterprise:
- Custom limits
- Dedicated resources
Result Inspection
Node Outputs
Click any node after execution to view:
- Input data received
- Output data produced
- Execution time
- Resource usage (tokens, cost)
Dataset Sink Results
View saved results in datasets:
After flow completes:
→ Navigate to Datasets
→ Open target dataset
→ See newly added rows
→ Export or further analyze
Error Analysis
When nodes fail:
- Click failed node for error details
- Check error type and message
- View input data that caused failure
- Fix and retry
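The "view input data that caused failure" step suggests capturing the offending row alongside the error rather than letting one bad row abort the run. A small sketch of that pattern:

```python
def run_node(fn, rows):
    """Run a node function over dataset rows, recording which input caused
    each failure so it can be inspected and fixed before retrying."""
    outputs, failures = [], []
    for row in rows:
        try:
            outputs.append(fn(row))
        except Exception as exc:
            failures.append({
                "input": row,                      # the row that broke the node
                "error": type(exc).__name__,
                "message": str(exc),
            })
    return outputs, failures

rows = [{"n": 2}, {"n": 0}, {"n": 5}]
ok, failed = run_node(lambda r: 10 / r["n"], rows)
# ok == [5.0, 2.0]; failed[0]["input"] == {"n": 0}
```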
Execution History
Access past flow runs:
- View all executions with timestamps
- Compare results across runs
- Replay previous executions
- Export execution data
Best Practices
Pre-Flight Checks
Before running a flow:
- Verify all node configurations
- Check variable mappings are correct
- Ensure datasets have valid data
- Review prompt configurations
- Check API keys are active
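Checks like these can be automated as a validation pass that collects every problem before the run starts, instead of failing mid-flow. A sketch assuming a plain-dict flow representation (a real flow object would differ):

```python
def preflight_check(flow):
    """Return a list of configuration problems found before running."""
    problems = []
    for node in flow.get("nodes", []):
        if not node.get("config"):
            problems.append(f"{node['id']}: missing configuration")
        for var, source in node.get("variables", {}).items():
            if source is None:
                problems.append(f"{node['id']}: variable '{var}' is unmapped")
    if not flow.get("api_key_active", False):
        problems.append("API key is inactive or missing")
    return problems

flow = {
    "api_key_active": True,
    "nodes": [
        {"id": "scrape", "config": {"url": "https://example.com"}, "variables": {}},
        {"id": "summarize", "config": {}, "variables": {"text": None}},
    ],
}
print(preflight_check(flow))
# ['summarize: missing configuration', "summarize: variable 'text' is unmapped"]
```

Running the flow only when this list is empty catches misconfigurations at the cheapest possible moment.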
Test Incrementally
Step 1: Test with 1 sample
Step 2: Test with 5 samples
Step 3: Test with different data types
Step 4: Test edge cases
Step 5: Run full dataset
Monitor Costs
Track API credit usage:
- Estimate cost before running
- Monitor usage during execution
- Set up cost alerts
- Review cost per execution
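A rough pre-run estimate is simple arithmetic over sample count and expected token usage. A sketch with illustrative placeholder prices (check your provider's actual per-token rates):

```python
def estimate_run_cost(n_samples, avg_input_tokens, avg_output_tokens,
                      input_price_per_1k, output_price_per_1k):
    """Estimate total run cost from per-sample token averages."""
    per_sample = (avg_input_tokens / 1000) * input_price_per_1k \
               + (avg_output_tokens / 1000) * output_price_per_1k
    return n_samples * per_sample

# e.g. 500 samples at ~800 input / ~200 output tokens each,
# priced at $0.001 / $0.002 per 1K tokens (illustrative numbers):
print(round(estimate_run_cost(500, 800, 200, 0.001, 0.002), 2))  # → 0.6
```

Comparing this estimate against the actual cost reported per execution is a quick sanity check that your flow is behaving as expected.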