Flows can run in two modes: synchronous (sync) for immediate results, or asynchronous (async) for long-running operations. Choosing the right mode for each job improves workflow performance and user experience.
Synchronous (Sync) Mode
Sync mode waits for the flow to complete before returning results.
How It Works
1. Send request
2. Flow starts executing
3. [Wait for completion...]
4. Receive results
5. Continue
When to Use Sync
- Testing and development
- Small datasets (<20 items)
- Quick operations (<30 seconds)
- Interactive use in the Playground
- When you need immediate results
Advantages
- Simple to use and understand
- Results returned immediately
- No need to poll for status
- Easy error handling
Limitations
- Maximum execution time: 5 minutes
- Blocks until completion
- Not suitable for large datasets
- Can timeout on slow operations
Warning
Sync mode has a 5-minute timeout. Flows exceeding this will automatically fail. Use async mode for longer operations.
Asynchronous (Async) Mode
Async mode starts the flow and returns immediately with an execution ID.
How It Works
1. Send request
2. Receive execution ID immediately
3. Flow continues in background
4. Poll for status periodically
5. Retrieve results when complete
When to Use Async
- Large datasets (>20 items)
- Long-running flows (>30 seconds)
- Production API deployments
- Batch processing jobs
- When user shouldn't wait
Advantages
- Much longer time limit (up to 30 minutes)
- Doesn't block the caller
- Better for large-scale processing
- Can handle thousands of items
Limitations
- More complex to implement
- Requires polling mechanism
- Delayed results
- Need to handle execution ID
Code Examples
Sync Request
import requests

response = requests.post(
    "https://api.evaligo.com/flows/flow-id/execute",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"url": "https://example.com"}
)

# Waits until flow completes
if response.status_code == 200:
    results = response.json()
    print(f"Results: {results}")
else:
    print(f"Error: {response.text}")
Async Request with Polling
import requests
import time

# Start async execution
response = requests.post(
    "https://api.evaligo.com/flows/flow-id/execute-async",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"url": "https://example.com"}
)
execution_id = response.json()["executionId"]
print(f"Started execution: {execution_id}")

# Poll for completion
while True:
    status_response = requests.get(
        f"https://api.evaligo.com/flows/executions/{execution_id}",
        headers={"Authorization": f"Bearer {api_key}"}
    )
    status = status_response.json()
    if status["status"] == "completed":
        print(f"Results: {status['results']}")
        break
    elif status["status"] == "failed":
        print(f"Error: {status['error']}")
        break
    else:
        print(f"Status: {status['status']} ({status['progress']}%)")
        time.sleep(5)  # Wait 5 seconds before polling again
Async with Webhook
# Start async with webhook callback
response = requests.post(
    "https://api.evaligo.com/flows/flow-id/execute-async",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "url": "https://example.com",
        "webhook": "https://yourapp.com/webhook"
    }
)
execution_id = response.json()["executionId"]
# Your webhook endpoint will receive results when complete
# No need to poll!
Tip
Use webhooks with async mode to avoid polling. Evaligo will POST results to your endpoint when the flow completes.
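As a sketch, a minimal webhook receiver could look like the following. The payload shape is an assumption here, mirroring the `status`, `results`, and `error` fields of the polling response above; verify it against the actual webhook delivery.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_webhook_payload(body: bytes) -> dict:
    """Route a completion callback by status (assumed payload shape)."""
    payload = json.loads(body)
    if payload.get("status") == "completed":
        return {"ok": True, "results": payload.get("results")}
    return {"ok": False, "error": payload.get("error")}

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        handle_webhook_payload(self.rfile.read(length))
        # Acknowledge quickly so the delivery is not retried
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```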
Decision Matrix
Choose Sync If:
- Testing in the Playground
- Processing < 20 items
- Expected time < 30 seconds
- Need immediate feedback
- Interactive user workflow
- Simpler code is priority
Choose Async If:
- Production API deployment
- Processing > 20 items
- Expected time > 30 seconds
- Batch processing jobs
- Webhook-driven workflows
- High concurrency needed
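The decision matrix above can be folded into a small helper. This is a sketch using the item-count and duration thresholds from the lists above, not an official rule:

```python
def choose_mode(item_count: int, expected_seconds: float) -> str:
    """Pick an execution mode using the decision-matrix thresholds."""
    if item_count < 20 and expected_seconds < 30:
        return "sync"
    return "async"
```

For example, `choose_mode(5, 10)` picks sync, while `choose_mode(500, 10)` picks async because of the dataset size alone.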
Execution Status
Status Values
pending: Queued, waiting to start
running: Currently executing
completed: Finished successfully
failed: Encountered an error
cancelled: Manually stopped
timeout: Exceeded time limit
Progress Tracking
Async executions provide detailed progress:
{
  "executionId": "exec_123",
  "status": "running",
  "progress": 65,
  "currentNode": "Prompt",
  "itemsProcessed": 13,
  "totalItems": 20,
  "elapsedTime": "45s",
  "estimatedRemaining": "25s"
}
Error Handling
Sync Mode Errors
try:
    response = requests.post(...)
    response.raise_for_status()
    results = response.json()
except requests.exceptions.Timeout:
    print("Flow took too long")
except requests.exceptions.HTTPError as e:
    if e.response.status_code == 400:
        print(f"Invalid input: {e.response.json()}")
    elif e.response.status_code == 500:
        print("Server error, retry later")
Async Mode Errors
# Check execution status
status = get_execution_status(execution_id)
if status["status"] == "failed":
    error = status["error"]
    print(f"Failed at node: {error['node']}")
    print(f"Error message: {error['message']}")
    print(f"Failed item: {error['itemIndex']}")
    # Optionally retry
    if error["retryable"]:
        retry_execution(execution_id)
Best Practices
Sync Mode
- Add timeout handling
- Show loading indicators
- Provide cancel option
- Handle errors gracefully
Async Mode
- Store execution IDs for tracking
- Implement exponential backoff for polling
- Use webhooks to avoid polling overhead
- Show progress updates to users
- Handle partial failures (some items succeed)
Polling Strategy
# Good polling strategy
intervals = [1, 2, 5, 10, 15, 30]  # seconds
for interval in intervals:
    status = check_status(execution_id)
    if status["status"] in ["completed", "failed"]:
        break
    time.sleep(interval)
# After intervals exhausted, poll every 30s
Warning
Don't poll too frequently! Hammering the status endpoint every second wastes resources. Use exponential backoff rather than a fixed short interval, and cap the wait (for example, at 30 seconds).
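One way to implement that backoff, as a minimal sketch:

```python
def backoff_intervals(start: float = 5, cap: float = 30):
    """Yield exponentially growing wait times: 5, 10, 20, 30, 30, ..."""
    delay = start
    while True:
        yield delay
        delay = min(delay * 2, cap)

# Usage sketch (check_status is your own status-fetching helper):
# for delay in backoff_intervals():
#     if check_status(execution_id)["status"] in ("completed", "failed"):
#         break
#     time.sleep(delay)
```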
Hybrid Approach
Smart applications can combine both modes:
import requests

def execute_flow(data, max_sync_time=30):
    """Try sync first, fall back to async if needed."""
    try:
        # Attempt sync with a short client-side timeout
        response = requests.post(
            url,  # sync execute endpoint
            json=data,
            timeout=max_sync_time
        )
        return {"mode": "sync", "results": response.json()}
    except requests.exceptions.Timeout:
        # Fall back to async. Note: the timed-out sync run may still
        # complete server-side, so be prepared for duplicate work.
        response = requests.post(
            async_url,  # async execute endpoint
            json=data
        )
        execution_id = response.json()["executionId"]
        return {
            "mode": "async",
            "executionId": execution_id,
            "message": "Processing in background"
        }
Monitoring and Logs
Both modes provide execution logs:
- Node-level timing
- Input/output data
- Error messages
- Resource usage (tokens, costs)
- Available in dashboard or via API
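For API access, the general pattern might look like the following sketch, using only the standard library. The `/logs` route is a hypothetical path, not a documented endpoint; confirm the exact route in the API reference.

```python
import json
import urllib.request

API_BASE = "https://api.evaligo.com"

def execution_logs_url(execution_id: str) -> str:
    # Hypothetical route; confirm the exact path in the API reference.
    return f"{API_BASE}/flows/executions/{execution_id}/logs"

def fetch_execution_logs(execution_id: str, api_key: str) -> dict:
    """Fetch the log payload for one execution."""
    req = urllib.request.Request(
        execution_logs_url(execution_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```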