AeonSage supports 20+ AI providers, giving you the flexibility to use the best model for your needs. Switch between providers seamlessly without changing your agent code.

Supported Providers

Cloud Providers

Hosted model APIs such as OpenAI and Anthropic, accessed over the network with an API key.

Local Providers

Models that run on your own hardware, such as Ollama.

Aggregators

Services that expose many upstream models behind a single API and a single key.

Provider Architecture

┌─────────────────────────────────────────────────────────────────┐
│                       Provider Selection                         │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                      Gateway Router                       │  │
│  │                             │                             │  │
│  │             ┌───────────────┼───────────────┐             │  │
│  │             ▼               ▼               ▼             │  │
│  │       ┌───────────┐   ┌───────────┐   ┌───────────┐       │  │
│  │       │  Primary  │   │ Fallback  │   │   Local   │       │  │
│  │       │ Provider  │   │ Provider  │   │ Provider  │       │  │
│  │       │ (OpenAI)  │   │(Anthropic)│   │ (Ollama)  │       │  │
│  │       └───────────┘   └───────────┘   └───────────┘       │  │
│  │             │               │               │             │  │
│  │             └───────────────┼───────────────┘             │  │
│  │                             ▼                             │  │
│  │                   ┌──────────────────┐                    │  │
│  │                   │ Unified Response │                    │  │
│  │                   └──────────────────┘                    │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
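
Conceptually, every provider sits behind the same adapter interface, and the gateway router dispatches each request to the selected adapter and hands back one normalized response shape. The TypeScript sketch below illustrates that pattern; it is not AeonSage's actual implementation, and all type and class names are illustrative.

// Minimal sketch of the routing pattern shown above -- illustrative only.

// Whatever backend answers, callers always receive this normalized shape.
interface UnifiedResponse {
  provider: string;  // which backend produced the answer
  text: string;      // normalized completion text
}

// Every backend (OpenAI, Anthropic, Ollama, ...) is wrapped behind one interface.
interface ProviderAdapter {
  name: string;
  complete(prompt: string): Promise<UnifiedResponse>;
}

// The gateway router keeps a registry of adapters and dispatches by name,
// so agent code never talks to a provider SDK directly.
class GatewayRouter {
  private registry = new Map<string, ProviderAdapter>();

  register(adapter: ProviderAdapter): void {
    this.registry.set(adapter.name, adapter);
  }

  async complete(providerName: string, prompt: string): Promise<UnifiedResponse> {
    const adapter = this.registry.get(providerName);
    if (!adapter) {
      throw new Error(`Unknown provider: ${providerName}`);
    }
    return adapter.complete(prompt);
  }
}

Because callers only ever see the unified response, swapping the primary provider for a fallback or a local model requires no changes to agent code.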

Configuration

Environment Variables
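
The ${...} placeholders in the configuration file below are read from the environment. A minimal example for a POSIX shell; the variable names are taken from the configuration that follows:

export OPENAI_API_KEY="sk-..."          # referenced as ${OPENAI_API_KEY}
export ANTHROPIC_API_KEY="sk-ant-..."   # referenced as ${ANTHROPIC_API_KEY}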

Configuration File

{
  "providers": {
    "default": "openai",
    "openai": {
      "apiKey": "${OPENAI_API_KEY}",
      "model": "gpt-4o",
      "temperature": 0.7
    },
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}",
      "model": "claude-3-5-sonnet-latest"
    },
    "ollama": {
      "host": "http://localhost:11434",
      "model": "llama3.2"
    }
  }
}

Provider Selection

Default Provider

Set the default provider for all requests:
aeonsage config set providers.default openai
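
This sets the providers.default key shown in the configuration file above; channels that do not specify their own provider (see Per-Channel Provider below) use this default.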

Per-Channel Provider

Different channels can use different providers:
{
  "channels": {
    "telegram": {
      "provider": "openai",
      "model": "gpt-4o"
    },
    "discord": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest"
    }
  }
}

Fallback Chain

Configure an automatic fallback chain to use when the primary provider fails:
{
  "providers": {
    "fallback": ["openai", "anthropic", "ollama"],
    "fallbackDelay": 5000
  }
}
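
Providers are tried in the order listed. The sketch below shows how such a chain could be traversed at runtime, reusing the ProviderAdapter and UnifiedResponse types from the Provider Architecture sketch above; treating fallbackDelay as milliseconds is an assumption, since the unit is not stated in the configuration.

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Try each provider in the configured order, pausing between attempts.
async function completeWithFallback(
  chain: ProviderAdapter[],   // adapters for ["openai", "anthropic", "ollama"]
  prompt: string,
  fallbackDelayMs = 5000,     // "fallbackDelay" from the config, assumed to be milliseconds
): Promise<UnifiedResponse> {
  let lastError: unknown;
  for (let i = 0; i < chain.length; i++) {
    if (i > 0) await sleep(fallbackDelayMs);  // wait before falling back to the next provider
    try {
      return await chain[i].complete(prompt);
    } catch (err) {
      lastError = err;  // remember the failure and move on to the next provider
    }
  }
  throw new Error(`All providers in the fallback chain failed: ${String(lastError)}`);
}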

Model Parameters

Common Parameters

The configuration example below sets four widely used sampling parameters:

temperature: Sampling temperature; lower values produce more deterministic output.
maxTokens: Upper bound on the number of tokens generated per response.
topP: Nucleus sampling threshold; sampling is limited to the most probable tokens whose cumulative probability stays within this value.
frequencyPenalty: Penalizes tokens that have already appeared, reducing repetition.

Configuration Example

{
  "providers": {
    "openai": {
      "model": "gpt-4o",
      "parameters": {
        "temperature": 0.7,
        "maxTokens": 4096,
        "topP": 0.9,
        "frequencyPenalty": 0.3
      }
    }
  }
}
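
For orientation, the camelCase names above mirror common provider API parameters (snake_case in OpenAI's API, for example). The sketch below shows that mapping for an OpenAI-style chat completion request; how AeonSage performs this translation internally is not documented here, so treat it as illustrative only.

// Illustrative only: mapping the camelCase parameters onto an OpenAI-style request body.
const params = { temperature: 0.7, maxTokens: 4096, topP: 0.9, frequencyPenalty: 0.3 };

const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello, world!" }],
    temperature: params.temperature,             // sampling temperature
    max_tokens: params.maxTokens,                // cap on generated tokens
    top_p: params.topP,                          // nucleus sampling threshold
    frequency_penalty: params.frequencyPenalty,  // discourage repetition
  }),
});
console.log((await response.json()).choices[0].message.content);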

Testing Providers

Connection Test

# Test all providers
aeonsage providers test

# Test specific provider
aeonsage providers test openai

# Test with custom message
aeonsage providers test openai --message "Hello, world!"

Performance Benchmark

# Run benchmark
aeonsage providers benchmark

# Compare providers
aeonsage providers benchmark openai anthropic

Cost Optimization

Cost Tracking

# View usage statistics
aeonsage providers stats

# View cost breakdown
aeonsage providers costs --period month

Optimization Strategies


Troubleshooting

Common Issues


Next Steps