Configuration

Talon uses a JSON configuration file to manage AI providers, models, permissions, and communication channels. This guide covers all configuration options and how to set them up.

The talon.json configuration file is stored in your application data directory. The exact location depends on your operating system:

  • macOS: ~/Library/Application Support/com.talon.app/talon.json
  • Windows: %APPDATA%\com.talon.app\talon.json
  • Linux: ~/.config/com.talon.app/talon.json

The talon.json file follows this basic structure:

{
  "models": {
    "providers": [
      // Provider configurations
    ],
    "default_model": "provider/model"
  },
  "default_temperature": 0.7,
  "permission_mode": "allow",
  "channels": {
    "slack": {
      "accounts": {}
    },
    "discord": {
      "accounts": {}
    }
  }
}

Configure one or more AI providers under the models.providers array. Each provider requires a name and API key, with optional base URL configuration.

Each provider object requires:

  • name: Provider identifier (e.g., openai, anthropic, groq)
  • api_key: Your API key for the provider
  • base_url (optional): Custom endpoint URL (useful for self-hosted or proxy setups)
  • model (optional): Default model for this provider

Talon supports the following AI providers:

  • OpenAI - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
  • Anthropic - Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku
  • Google Gemini - Gemini Pro and other variants
  • Groq - Fast inference with Mixtral and Llama
  • Together - Open-source models and fine-tuned variants
  • Ollama - Local LLM inference
  • OpenAI-Compatible - Any API implementing the OpenAI API specification
For example, configuring multiple providers, each with its own default model:

{
  "models": {
    "providers": [
      {
        "name": "openai",
        "api_key": "sk-...",
        "model": "gpt-4"
      },
      {
        "name": "anthropic",
        "api_key": "sk-ant-...",
        "model": "claude-3-opus-20240229"
      },
      {
        "name": "groq",
        "api_key": "gsk_...",
        "model": "mixtral-8x7b-32768"
      }
    ],
    "default_model": "anthropic/claude-3-opus-20240229"
  }
}

Specify the default model using the models.default_model field in the format provider/model:

{
  "models": {
    "default_model": "anthropic/claude-3-opus-20240229"
  }
}

This model will be used for all requests unless explicitly overridden in a channel or command configuration.
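As an illustrative sketch only: this guide does not define the override schema, so the field name and placement below are assumptions, not confirmed syntax (see the Channels documentation for the actual schema). A channel account might pin a different model like this:

```json
{
  "channels": {
    "slack": {
      "accounts": {
        "my-workspace": {
          "bot_token": "xoxb-...",
          "model": "openai/gpt-4"
        }
      }
    }
  }
}
```

The hypothetical model field here would take the same provider/model format as models.default_model.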

For self-hosted models using Ollama, configure a custom base URL:

{
  "models": {
    "providers": [
      {
        "name": "ollama",
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama"
      }
    ],
    "default_model": "ollama/mistral"
  }
}

If you’re using a proxy, gateway, or custom API that implements the OpenAI specification:

{
  "models": {
    "providers": [
      {
        "name": "custom-api",
        "api_key": "your-api-key",
        "base_url": "https://your-api.example.com/v1"
      }
    ]
  }
}

The default_temperature setting controls the randomness and creativity of AI responses across all models. This setting applies globally unless overridden per-channel or per-command.

  • 0.0: Deterministic, focused, and consistent responses (best for precise tasks)
  • 0.7: Balanced (default - good for general use)
  • 2.0: Maximum randomness and creativity (best for brainstorming)

For example, to make responses more focused than the default:

{
  "default_temperature": 0.5
}

Permission modes control how Talon handles requests and determine whether actions execute automatically or require approval.

  • plan: Talon shows a plan of what it will do and waits for confirmation before executing
  • ask: Talon asks before taking each action
  • allow: Talon executes actions automatically (default)
  • bypass: Talon bypasses all safety checks and permission prompts

For example:

{
  "permission_mode": "allow"
}
Mode     Behavior
plan     Shows overall plan, requires approval before any execution
ask      Requests confirmation before each individual action
allow    Executes automatically, no prompts (default)
bypass   Skips all checks and confirmations
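
For a more cautious setup, such as when first rolling out Talon to a new team, you might require confirmation before each action:

```json
{
  "permission_mode": "ask"
}
```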

Talon can integrate with multiple communication platforms. Channels are configured under the channels object, organized by type (e.g., slack, discord). Each channel type has its own required fields and authentication methods.

{
  "channels": {
    "slack": {
      "accounts": {
        "workspace-name": {
          // Slack-specific configuration
        }
      }
    },
    "discord": {
      "accounts": {
        "server-name": {
          // Discord-specific configuration
        }
      }
    }
  }
}

Each channel account configuration varies by platform. For detailed setup instructions specific to each channel type, refer to the Channels documentation.

{
  "channels": {
    "slack": {
      "accounts": {
        "my-workspace": {
          "bot_token": "xoxb-...",
          "signing_secret": "..."
        }
      }
    }
  }
}

Here’s a complete example with multiple providers, custom settings, and channel configuration:

{
  "models": {
    "providers": [
      {
        "name": "anthropic",
        "api_key": "sk-ant-...",
        "model": "claude-3-opus-20240229"
      },
      {
        "name": "openai",
        "api_key": "sk-...",
        "model": "gpt-4"
      },
      {
        "name": "ollama",
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama"
      }
    ],
    "default_model": "anthropic/claude-3-opus-20240229"
  },
  "default_temperature": 0.7,
  "permission_mode": "allow",
  "channels": {
    "slack": {
      "accounts": {
        "engineering-team": {
          "bot_token": "xoxb-...",
          "signing_secret": "..."
        }
      }
    },
    "discord": {
      "accounts": {
        "dev-server": {
          "bot_token": "..."
        }
      }
    }
  }
}