Frequently Asked Questions

Find answers to commonly asked questions about AeonSage.

General

What is AeonSage?
AeonSage is a self-hosted AI orchestration platform that connects AI models, messaging channels, and tools in a unified interface. It allows you to:
  • Use multiple AI providers (OpenAI, Anthropic, local models)
  • Access your AI through Telegram, Discord, WhatsApp, and more
  • Maintain full control over your data

Is AeonSage free and open source?
Yes! AeonSage is open source under the AGPL v3 license.
  • Free tier: Self-hosted, unlimited usage, all core features
  • Pro tier: Cloud hosting, advanced features, priority support
See Pricing for details.
Feature    OSS (Self-Hosted)    SaaS (Cloud)
Hosting    Your server          Our cloud
Users      Single user          Multi-user
Cost       Free                 Subscription
Support    Community            Priority

Can I use local AI models?
Yes! AeonSage has first-class support for local models via Ollama:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2

# AeonSage auto-detects Ollama
aeonsage gateway start
Local models work offline and keep your data completely private.
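If the Gateway does not detect Ollama automatically, you can confirm that the Ollama server is running and that a model is downloaded before starting AeonSage; these are standard Ollama commands and its default local API port, not AeonSage-specific tooling.
# List the models Ollama has downloaded locally
ollama list

# Confirm the Ollama server is reachable on its default port (11434)
curl http://localhost:11434/api/tags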

Installation

What are the system requirements?
Minimum:
  • 2 CPU cores
  • 4 GB RAM
  • 1 GB storage
  • Node.js 22+
For local AI models:
  • 8+ GB RAM for 7B models
  • 16+ GB RAM for 13B models
  • GPU recommended for best performance
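To quickly verify the minimums above, standard system tools report core count, memory, disk space, and Node.js version (Linux commands shown; macOS and Windows equivalents differ):
# CPU cores, memory, free disk space, and Node.js version
nproc
free -h
df -h ~
node --version   # should report v22 or newer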

How do I update AeonSage?
Desktop App:
  • Automatic updates (Settings → About → Check for Updates)
npm:
npm update -g aeonsage
Docker:
docker pull aeonsage/gateway:latest
docker-compose up -d

Does AeonSage run on a Raspberry Pi?
Yes! AeonSage runs on ARM-based devices including the Raspberry Pi 4 and 5.
# Install Node.js 22 on Raspberry Pi
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install AeonSage
npm install -g aeonsage
Performance tip: For local AI models on a Pi, use smaller quantized models (e.g., llama3.2:1b) to get acceptable performance.

Configuration

Where is the configuration file?
The configuration file is located at:
  • Linux/macOS: ~/.aeonsage/config.json
  • Windows: %USERPROFILE%\.aeonsage\config.json
You can also use environment variables:
export OPENAI_API_KEY=sk-your-key
export GATEWAY_PORT=8080
aeonsage gateway start

How do I add an AI provider?
  1. Get an API key from the provider (OpenAI, Anthropic, etc.)
  2. Set environment variable:
    export ANTHROPIC_API_KEY=sk-ant-your-key
    
  3. Or add to config.json:
    {
      "providers": {
        "anthropic": {
          "apiKey": "${ANTHROPIC_API_KEY}"
        }
      }
    }
    
  4. Restart Gateway

How do I enable rate limiting?
Add to config.json (this example allows at most 60 requests per 60-second window):
{
  "rateLimit": {
    "enabled": true,
    "windowMs": 60000,
    "maxRequests": 60
  }
}

Channels

How do I connect Telegram?
  1. Create a bot via @BotFather
  2. Copy the bot token
  3. Run setup:
    aeonsage channel add telegram --token YOUR_BOT_TOKEN
    
  4. Get your access code from the Gateway logs
See Telegram Channel for detailed setup.
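If the bot does not respond after setup, one quick sanity check is to verify the token directly against the Telegram Bot API; this uses Telegram's standard getMe endpoint and is independent of AeonSage.
# Replace YOUR_BOT_TOKEN with the token from @BotFather.
# A valid token returns {"ok":true,...} along with your bot's username.
curl "https://api.telegram.org/botYOUR_BOT_TOKEN/getMe"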

How do I connect Discord?
  1. Create a Discord application at the Discord Developer Portal
  2. Create a bot and copy the token
  3. Invite the bot to your server using an OAuth2 URL (see the example URL below)
  4. Run setup:
    aeonsage channel add discord --token YOUR_BOT_TOKEN --clientId YOUR_CLIENT_ID
    
See Discord Channel for detailed setup.
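For step 3, the invite link follows Discord's standard OAuth2 authorization URL format; substitute your application's client ID and a permissions value generated with the URL generator in the Developer Portal (the placeholders are not AeonSage-specific):
https://discord.com/api/oauth2/authorize?client_id=YOUR_CLIENT_ID&scope=bot&permissions=PERMISSIONS_INTEGER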

Can I use multiple channels at the same time?
Yes! You can connect as many channels as you need. AeonSage routes messages based on the channel source while maintaining a unified context.
# List active channels
aeonsage channel list

# Add multiple channels
aeonsage channel add telegram --token TOKEN_1
aeonsage channel add discord --token TOKEN_2
aeonsage channel add whatsapp  # Interactive setup

Privacy & Security

Does AeonSage send my data to external services?
Only if you configure it to:
  • AI providers: If using OpenAI/Anthropic, prompts go to their APIs
  • Channels: Messages are sent to Telegram/Discord/etc.
  • License validation: Pro features check license with license.aeonsage.org
For complete privacy (a sample local-only configuration follows this list):
  • Use Ollama for local AI (no external API calls)
  • Use iMessage or Signal for end-to-end encrypted channels
  • Disable telemetry in settings
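As a rough sketch of a fully local setup, the configuration below pairs an Ollama provider entry with telemetry switched off. The ollama provider block and the telemetry flag are illustrative assumptions, not the documented schema; check your config.json reference for the actual keys.
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434"
    }
  },
  "telemetry": {
    "enabled": false
  }
}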

How do I delete my data?
Self-hosted:
# Delete all data
rm -rf ~/.aeonsage
SaaS:
  • Go to Settings → Account → Delete Account
  • All your data is permanently deleted within 24 hours

Is AeonSage GDPR compliant?
Self-hosted deployments can be configured for GDPR compliance:
  • Data minimization (only necessary data stored)
  • Right to access (export all data)
  • Right to erasure (delete all data)
  • Data portability (standard export formats; see the archive example below)
See Security Overview for details.
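Because all self-hosted data lives under ~/.aeonsage (the same directory removed in the deletion example above), one simple way to exercise access and portability is to archive that directory with plain shell tools; this is not an AeonSage command.
# Create a portable archive of everything AeonSage stores locally
tar czf aeonsage-data-export.tar.gz -C ~ .aeonsage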

Troubleshooting

Why won't the Gateway start?
Check these common issues:
  1. Port in use (see the port check below):
    aeonsage gateway start --port 8080
    
  2. Missing configuration:
    aeonsage init
    
  3. Node.js version:
    node --version  # Should be 22+
    
  4. Check logs:
    aeonsage logs --follow
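If the port is the issue (item 1 above), you can first see which process is holding it; these are standard Linux/macOS tools, and 8080 is simply the port used in these examples.
# Show the process listening on the Gateway port
lsof -i :8080
# Linux alternative
ss -ltnp | grep 8080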
    
Why are responses slow?
Possible causes:
  1. Local model performance: Try a smaller model or use a cloud API
  2. Network latency: Check connection to AI provider
  3. Rate limiting: You may be hitting API limits
  4. Memory pressure: Check system RAM usage
Solutions:
# Use smaller local model
ollama pull llama3.2:1b

# Check model status
aeonsage model status

# View metrics
aeonsage metrics

Why did my channel disconnect?
Troubleshooting steps:
  1. Check channel status:
    aeonsage channel status
    
  2. Reconnect:
    aeonsage channel reconnect telegram
    
  3. Check logs:
    aeonsage logs --channel telegram
    
  4. Verify credentials:
    • Bot token may have been revoked
    • Check OAuth expiration

Still Have Questions?