Once you've built your flow, running it executes all nodes in sequence, processing your data through each step while providing real-time progress feedback.

How to Run a Flow

  1. Select a data source: if using a Dataset Source, select which samples to process.
  2. Click "Run Flow": the execution button in the top toolbar.
  3. Monitor progress: watch real-time status updates for each node.
  4. Review results: check outputs in Dataset Sinks or node inspectors.

Execution Modes

Sync (Synchronous) Mode

Default mode for manual flow execution:

  • Waits for completion before returning
  • Shows progress in real-time
  • Best for testing and development
  • Limited to reasonable execution times (<5 minutes)

Async (Asynchronous) Mode

Used for long-running flows and API deployments:

  • Returns immediately with execution ID
  • Flow continues in background
  • Poll for status updates
  • Best for large datasets or slow operations
Tip
Use Sync mode for quick tests with small datasets. Switch to Async for production workflows with large data volumes.
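
The async pattern above can be sketched as a simple poll loop. The `client` object and its `start_flow` / `get_execution_status` methods are hypothetical stand-ins for an API wrapper, not Evaligo's actual SDK:

```python
import time

def run_flow_async(client, flow_id, poll_interval=2.0, timeout=600.0):
    """Start a flow asynchronously and poll until it reaches a terminal state.

    `client` is a hypothetical API wrapper: `start_flow` returns an
    execution ID immediately, and `get_execution_status` reports the
    current state of that execution.
    """
    execution_id = client.start_flow(flow_id)  # returns immediately
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_execution_status(execution_id)
        if status in ("success", "error", "cancelled"):
            return status
        time.sleep(poll_interval)  # flow keeps running in the background
    raise TimeoutError(f"Execution {execution_id} did not finish in {timeout}s")
```

A real integration would typically add jitter to the poll interval and surface intermediate progress, but the shape is the same: fire, get an ID, poll.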

Progress Monitoring

Node Status Indicators

Each node shows its current status during execution:

  • Pending: Waiting to execute
  • Running: Currently processing (animated)
  • Success: Completed successfully (green checkmark)
  • Error: Failed with error (red X)
  • Skipped: Not executed due to conditions
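
The five statuses form a small state machine. The transition table below is our own modeling for illustration, not a description of Evaligo internals:

```python
from enum import Enum

class NodeStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    SUCCESS = "success"
    ERROR = "error"
    SKIPPED = "skipped"

# Legal moves during a run; terminal states have no outgoing transitions.
TRANSITIONS = {
    NodeStatus.PENDING: {NodeStatus.RUNNING, NodeStatus.SKIPPED},
    NodeStatus.RUNNING: {NodeStatus.SUCCESS, NodeStatus.ERROR},
    NodeStatus.SUCCESS: set(),
    NodeStatus.ERROR: set(),
    NodeStatus.SKIPPED: set(),
}

def can_transition(current: NodeStatus, new: NodeStatus) -> bool:
    """True if a node may move from `current` to `new` during execution."""
    return new in TRANSITIONS[current]
```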

Progress Metrics

Monitor key execution metrics:

  • Current Node: Prompt (analyzing content)
  • Progress: 5/8 nodes completed
  • Elapsed Time: 23.5s
  • Items Processed: 12/15
  • Est. Remaining: ~8s

Real-Time Logs

View detailed logs during execution:

  • Node start/completion times
  • Data flow between nodes
  • Error messages and warnings
  • Resource usage (tokens, API calls)

Running with Datasets

Sample Selection

Choose which dataset rows to process:

Options:
  - Selected samples (manually picked)
  - First N samples (for testing)
  - Random sample (statistical testing)
  - All samples (full dataset run)
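
A minimal helper illustrating the four selection options. The function and its `mode` values are our own naming for illustration, not a built-in API:

```python
import random

def select_samples(rows, mode="all", n=None, seed=None):
    """Pick dataset rows to process, mirroring the options above.

    'selected' expects `n` to be a list of row indices; 'first' takes
    the first n rows; 'random' draws n rows without replacement (pass
    `seed` for reproducibility); 'all' returns everything.
    """
    if mode == "all":
        return list(rows)
    if mode == "first":
        return list(rows)[:n]
    if mode == "random":
        rng = random.Random(seed)
        return rng.sample(list(rows), n)
    if mode == "selected":
        return [rows[i] for i in n]
    raise ValueError(f"unknown mode: {mode}")
```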

Incremental Processing

Process dataset in batches:

1. Run with 5 samples (validate logic)
2. Run with 20 samples (test edge cases)
3. Run with 100 samples (verify scale)
4. Run full dataset (production)
Warning
Always test with a small sample first. Running large datasets can consume significant API credits and take considerable time.
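
The batch escalation above, including the warning's stop-early advice, can be sketched as a loop over progressively larger slices (an illustrative harness, not an Evaligo feature):

```python
def incremental_run(run_fn, dataset, stages=(5, 20, 100, None)):
    """Run `run_fn` over progressively larger slices of `dataset`,
    stopping at the first stage that reports failures so a broken
    flow never reaches the full (expensive) run.

    `None` means the full dataset. `run_fn` is assumed to return a
    list of per-row error messages (empty list means the stage passed).
    """
    for size in stages:
        batch = dataset if size is None else dataset[:size]
        errors = run_fn(batch)
        if errors:
            return size, errors  # stop before burning credits at scale
    return None, []  # every stage, including the full run, passed
```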

Execution Control

Pause/Resume

For long-running flows:

  • Pause execution at any time
  • Inspect intermediate results
  • Resume from where you paused
  • Useful for debugging complex flows

Stop/Cancel

Abort execution if needed:

  • Stops current node immediately
  • Saves partial results
  • Can restart from beginning
  • Refunds unused credits

Retry Failed Nodes

Recover from transient failures:

Error on Node 5 (API timeout)
  → Click "Retry Node"
  → Continues from failure point
  → Doesn't re-execute successful nodes
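
A retry helper in this spirit, assuming transient failures surface as exceptions. Only the failed node's callable is re-run, which is why upstream successes are untouched; the exponential backoff is our own addition:

```python
import time

def retry_node(execute, max_attempts=3, base_delay=1.0):
    """Re-run a single node on transient failures (e.g. an API timeout).

    `execute` is any zero-argument callable that raises on failure.
    Delays double on each attempt: base_delay, 2x, 4x, ...
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return execute()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; let the caller see the error
            time.sleep(base_delay * 2 ** (attempt - 1))
```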

Performance Optimization

Parallel vs Sequential

Sequential (default):
  Process items one at a time
  Lower credit usage
  Easier to debug
  Time: N × per-item-time

Parallel:
  Process multiple items simultaneously  
  Higher credit usage
  Faster completion
  Time: ceil(N / concurrency) × per-item-time
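
The two timing formulas above, plus a thread-pool version of parallel processing, sketched in Python (the runner is illustrative; a production version would also rate-limit API calls):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def estimated_time(n_items, per_item_s, concurrency=1):
    """Sequential: N x per-item time. Parallel: ceil(N / concurrency) x
    per-item time, assuming items take roughly equal time."""
    return math.ceil(n_items / concurrency) * per_item_s

def run_parallel(process_item, items, concurrency=4):
    """Process items concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(process_item, items))
```

For example, 15 items at 2 s each take 30 s sequentially but about 8 s with a concurrency of 4: ceil(15 / 4) = 4 waves of 2 s.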

Caching

Evaligo caches certain node outputs:

  • Website mapper results (24 hours)
  • Page scraper content (1 hour)
  • Prompt responses (optional, configurable)
  • Reduces redundant API calls
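
A minimal TTL cache like the one described. The per-source lifetimes (24 h for mapper results, 1 h for scraped pages) come from the list above; the helper itself is illustrative, with an injectable clock so expiry is testable:

```python
import time

def make_ttl_cache(ttl_seconds, clock=time.monotonic):
    """Return (get, put) closures over a time-based cache.

    Entries expire `ttl_seconds` after insertion; expired entries are
    evicted lazily on read. `get` returns None on a miss.
    """
    store = {}

    def get(key):
        entry = store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if clock() >= expires:
            del store[key]  # lazily evict the stale entry
            return None
        return value

    def put(key, value):
        store[key] = (value, clock() + ttl_seconds)

    return get, put
```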

Resource Limits

Be aware of execution limits:

Free Tier:
  - Max execution time: 5 minutes
  - Max concurrent runs: 2
  - Max items per array: 100

Pro Tier:
  - Max execution time: 30 minutes
  - Max concurrent runs: 10
  - Max items per array: 1,000

Enterprise:
  - Custom limits
  - Dedicated resources

Result Inspection

Node Outputs

Click any node after execution to view:

  • Input data received
  • Output data produced
  • Execution time
  • Resource usage (tokens, cost)

Dataset Sink Results

View saved results in datasets:

After flow completes:
  → Navigate to Datasets
  → Open target dataset
  → See newly added rows
  → Export or further analyze

Error Analysis

When nodes fail:

  • Click failed node for error details
  • Check error type and message
  • View input data that caused failure
  • Fix and retry
Tip
Save execution logs for complex flows. They're invaluable for debugging issues and optimizing performance.

Execution History

Access past flow runs:

  • View all executions with timestamps
  • Compare results across runs
  • Replay previous executions
  • Export execution data

Best Practices

Pre-Flight Checks

Before running a flow:

  1. Verify all node configurations
  2. Check variable mappings are correct
  3. Ensure datasets have valid data
  4. Review prompt configurations
  5. Check API keys are active
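
Parts of this checklist can be automated. The sketch below assumes a hypothetical dict representation of a flow (Evaligo's real objects will differ) and returns a list of problems, where an empty list means the flow is ready to run:

```python
def preflight_check(flow):
    """Run mechanical pre-flight checks on a flow before execution.

    `flow` is a hypothetical dict: nodes carry a `config` and a
    `variables` mapping from variable name to its data source.
    """
    problems = []
    for node in flow.get("nodes", []):
        if not node.get("config"):
            problems.append(f"node {node['id']}: missing configuration")
        for var, source in node.get("variables", {}).items():
            if source is None:
                problems.append(f"node {node['id']}: unmapped variable '{var}'")
    if not flow.get("api_key_active", False):
        problems.append("API key is inactive")
    return problems
```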

Test Incrementally

Step 1: Test with 1 sample
Step 2: Test with 5 samples  
Step 3: Test with different data types
Step 4: Test edge cases
Step 5: Run full dataset

Monitor Costs

Track API credit usage:

  • Estimate cost before running
  • Monitor usage during execution
  • Set up cost alerts
  • Review cost per execution
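
A back-of-the-envelope pre-run estimate can be as simple as multiplying expected token usage by the model's price. All inputs here are your own estimates; this is not a helper Evaligo provides:

```python
def estimate_run_cost(n_items, tokens_per_item, price_per_1k_tokens):
    """Rough pre-run cost: (items x tokens per item / 1000) x price per
    1k tokens. Useful as a sanity check before a full dataset run."""
    total_tokens = n_items * tokens_per_item
    return total_tokens / 1000 * price_per_1k_tokens
```

For instance, 100 items at roughly 500 tokens each, at $0.002 per 1k tokens, comes to about $0.10.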

Related Documentation

Sync vs Async
Choose the right execution mode
Error Handling
Handle failures gracefully
Progress Tracking
Monitor flow execution