
OpenAI

Connect OpenAI's powerful language models to your evaluation workflows for high-quality AI experimentation and production deployments. Configure secure access, optimize model selection, and implement best practices for reliable integration.

OpenAI integration provides access to industry-leading language models including GPT-4, GPT-3.5, and specialized models for different use cases. Proper integration ensures optimal performance, cost efficiency, and security for your AI evaluation workflows.

Effective OpenAI integration requires careful attention to API key management, model selection, rate limiting, and error handling to build robust, production-ready AI systems that scale reliably with your evaluation needs.

[Image: OpenAI integration dashboard showing API configuration, model selection, and usage analytics with security settings]

Initial Setup and Configuration

Configure secure OpenAI integration with proper authentication, model selection, and operational parameters for reliable AI evaluation workflows.

  1. API key setup: Securely configure OpenAI API keys with appropriate access controls and rotation policies.

  2. Model selection: Choose optimal models based on your quality, cost, and latency requirements.

  3. Rate limiting: Configure request throttling and retry logic for reliable operation (see the retry sketch after the setup example below).

  4. Error handling: Implement comprehensive error handling and fallback strategies.

OpenAI integration setup
import os
import time
from typing import Dict, Optional

import openai

import evaligo
from evaligo.integrations import OpenAIIntegration

class EvaligoOpenAIIntegration:
    """Secure OpenAI integration for evaluation workflows"""
    
    def __init__(self, client: evaligo.Client):
        self.evaligo_client = client
        self.openai_client = None
        self.model_configs = {}
        
    def setup_integration(self, api_key: str, organization_id: Optional[str] = None) -> Dict:
        """Initialize OpenAI integration with security and monitoring"""
        
        # Configure OpenAI client
        self.openai_client = openai.OpenAI(
            api_key=api_key,
            organization=organization_id
        )
        
        # Test connection
        try:
            models = self.openai_client.models.list()
            available_models = [model.id for model in models.data]
            
            # Store integration configuration
            integration_config = {
                "provider": "openai",
                "api_version": "v1",
                "organization_id": organization_id,
                "available_models": available_models,
                "setup_timestamp": time.time(),
                "status": "active"
            }
            
            # Save configuration securely
            self.evaligo_client.integrations.save_openai_config(integration_config)
            
            return {
                "success": True,
                "available_models": available_models,
                "configuration": integration_config
            }
            
        except Exception as e:
            return {"success": False, "error": str(e)}
    
    def configure_model_settings(self, model_configs: Dict[str, Dict]) -> Dict:
        """Configure model-specific settings for different use cases"""
        
        configured_models = {}
        
        for model_name, config in model_configs.items():
            model_config = {
                "model": model_name,
                "temperature": config.get("temperature", 0.7),
                "max_tokens": config.get("max_tokens", 1000),
                "top_p": config.get("top_p", 1.0),
                "frequency_penalty": config.get("frequency_penalty", 0),
                "presence_penalty": config.get("presence_penalty", 0),
                "stop_sequences": config.get("stop_sequences", []),
                "function_calling": config.get("function_calling", False),
                "structured_outputs": config.get("structured_outputs", False),
                "rate_limits": {
                    "requests_per_minute": config.get("rpm_limit", 100),
                    "tokens_per_minute": config.get("tpm_limit", 10000),
                    "concurrent_requests": config.get("concurrent_limit", 10)
                },
                "retry_config": {
                    "max_retries": 3,
                    "backoff_factor": 2,
                    "timeout_seconds": 30
                },
                "use_cases": config.get("use_cases", [])
            }
            
            # Validate model configuration
            validation_result = self._validate_model_config(model_name, model_config)
            
            if validation_result["valid"]:
                configured_models[model_name] = model_config
                self.model_configs[model_name] = model_config
            else:
                print(f"Invalid configuration for {model_name}: {validation_result['error']}")
        
        return configured_models

    def _validate_model_config(self, model_name: str, config: Dict) -> Dict:
        """Basic sanity checks before a model configuration is accepted"""
        if not 0.0 <= config["temperature"] <= 2.0:
            return {"valid": False, "error": "temperature must be between 0.0 and 2.0"}
        if config["max_tokens"] <= 0:
            return {"valid": False, "error": "max_tokens must be a positive integer"}
        return {"valid": True}

    def create_evaluation_client(self, model_name: str, use_case: str = "general") -> Dict:
        """Create evaluation client with optimized settings"""
        
        if model_name not in self.model_configs:
            raise ValueError(f"Model {model_name} not configured")
        
        config = self.model_configs[model_name].copy()
        
        # Optimize settings for evaluation use case
        if use_case == "evaluation":
            config.update({
                "temperature": 0.1,  # Lower temperature for consistent evaluation
                "top_p": 0.95,
                "max_tokens": 2000
            })
        elif use_case == "generation":
            config.update({
                "temperature": 0.8,  # Higher temperature for creative generation
                "top_p": 1.0,
                "max_tokens": 4000
            })
        elif use_case == "function_calling":
            config.update({
                "temperature": 0.0,  # Deterministic for function calls
                "function_calling": True,
                "max_tokens": 1000
            })
        
        # Create evaluation client
        eval_client = self.evaligo_client.evaluations.create_openai_client(
            model_config=config,
            tracking_enabled=True,
            cost_tracking=True,
            quality_monitoring=True
        )
        
        return {
            "client": eval_client,
            "model": model_name,
            "use_case": use_case,
            "configuration": config
        }

# Usage example
evaligo_client = evaligo.Client()  # authenticate per your Evaligo environment
integration = EvaligoOpenAIIntegration(evaligo_client)

# Setup OpenAI integration
setup_result = integration.setup_integration(
    api_key=os.getenv("OPENAI_API_KEY"),
    organization_id=os.getenv("OPENAI_ORG_ID")
)

if setup_result["success"]:
    print(f"OpenAI integration configured with {len(setup_result['available_models'])} models")
    
    # Configure models for different use cases
    model_configs = {
        "gpt-4": {
            "temperature": 0.7,
            "max_tokens": 2000,
            "use_cases": ["evaluation", "generation"],
            "rpm_limit": 100,
            "tpm_limit": 10000
        },
        "gpt-3.5-turbo": {
            "temperature": 0.5,
            "max_tokens": 1000,
            "use_cases": ["evaluation", "function_calling"],
            "rpm_limit": 200,
            "tpm_limit": 20000
        }
    }
    
    configured_models = integration.configure_model_settings(model_configs)
    print(f"Configured {len(configured_models)} models")
    
    # Create evaluation client
    eval_client = integration.create_evaluation_client("gpt-4", "evaluation")
    print(f"Evaluation client ready for {eval_client['model']}")
    
else:
    print(f"OpenAI integration failed: {setup_result['error']}")

Security and Best Practices

Implement security best practices for OpenAI API key management, access controls, and monitoring to protect your integration and maintain compliance with security policies.

Info

API Key Security: Never commit API keys to version control. Use environment variables, secure key management systems, and implement regular key rotation to maintain security.
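
A minimal sketch of environment-based key loading that fails fast when the key is absent; the variable names match the usage example above, and the secret-manager wording is an assumption about your infrastructure.

Fail fast on a missing key (sketch)
import os

import openai

# Resolve the key from the environment; never hardcode it in source.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError(
        "OPENAI_API_KEY is not set; export it or inject it from your secret manager"
    )

client = openai.OpenAI(api_key=api_key, organization=os.environ.get("OPENAI_ORG_ID"))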

Video

OpenAI Security Best Practices (7m 40s)
Learn how to securely manage OpenAI integrations with proper key management and access controls.

Model Selection and Optimization

Choose the optimal OpenAI models for your specific use cases by balancing quality requirements, cost constraints, and performance needs. Regular optimization ensures efficient resource utilization.
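
One way to make that balance explicit is to encode the requirements in code so the choice is auditable rather than ad hoc. The tiers and choose_model helper below are illustrative assumptions, not an Evaligo or OpenAI API.

Requirement-driven model choice (sketch)
from typing import Dict

# Illustrative profiles: relative tiers only, not published pricing.
MODEL_PROFILES: Dict[str, Dict] = {
    "gpt-4":         {"quality": "high", "relative_cost": "high", "latency": "slower"},
    "gpt-3.5-turbo": {"quality": "good", "relative_cost": "low",  "latency": "fast"},
}

def choose_model(needs_top_quality: bool, cost_sensitive: bool) -> str:
    """Pick a model name from coarse quality and cost requirements."""
    if needs_top_quality and not cost_sensitive:
        return "gpt-4"
    return "gpt-3.5-turbo"

# Evaluation judging favors quality; bulk generation favors cost.
judge_model = choose_model(needs_top_quality=True, cost_sensitive=False)
bulk_model = choose_model(needs_top_quality=False, cost_sensitive=True)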

[Image: Model comparison interface showing performance metrics, cost analysis, and quality scores across different OpenAI models]

Info

Model Selection: Consider your specific requirements when choosing models. GPT-4 offers higher quality but at increased cost, while GPT-3.5-turbo provides good performance at lower cost for many use cases.

Related Documentation

Azure OpenAI: Integrate with Azure OpenAI Service for enterprise features
Setup Tracing: Monitor OpenAI API calls and performance metrics
Cost Tracking: Track and optimize OpenAI usage costs
API Keys: Secure API key management and rotation