
configure — Setup & Management

The configure command is your tool for setting up AI providers, managing profiles, and customizing Rawi’s behavior.

rawi configure [options]
# Interactive setup (recommended for first-time users)
rawi configure
# View current configuration
rawi configure --show
# List all profiles
rawi configure --list
| Option | Description | Example |
| --- | --- | --- |
| -p, --profile <profile> | Profile name to configure | --profile work |
| --provider <provider> | AI provider | --provider openai |
| --model <model> | AI model name | --model gpt-4o |
| --api-key <apiKey> | API key for the provider | --api-key sk-... |
| --base-url <baseURL> | Custom base URL | --base-url https://api.openai.com/v1 |
| --temperature <temp> | Sampling temperature (0-2) | --temperature 0.7 |
| --max-tokens <tokens> | Maximum response tokens | --max-tokens 2048 |
| --language <lang> | Language (english, arabic) | --language english |
Azure OpenAI options:

| Option | Description | Required |
| --- | --- | --- |
| --resource-name <name> | Azure resource name | ✅ |
| --api-version <version> | API version | ❌ (default: 2024-10-01-preview) |
Amazon Bedrock options:

| Option | Description | Required |
| --- | --- | --- |
| --region <region> | AWS region | ❌ (default: us-east-1) |
| --access-key-id <id> | AWS access key ID | ❌* |
| --secret-access-key <key> | AWS secret access key | ❌* |
| --session-token <token> | AWS session token | ❌ |
| --use-provider-chain | Use AWS credential provider chain | ❌* |

*Either explicit credentials or the provider chain is required

Management options:

| Option | Description |
| --- | --- |
| -s, --show | Show current configuration |
| -l, --list | List all profiles |
| -d, --delete <profile> | Delete a configuration profile |

The interactive mode guides you through the setup process step by step.

rawi configure
  1. Choose Provider: Select from available AI providers
  2. Select Model: Pick from the provider's available models
  3. Enter Credentials: Provide API keys or authentication details
  4. Configure Options: Set temperature, tokens, language
  5. Test Configuration: Verify the setup works
$ rawi configure
🤖 Welcome to Rawi configuration!
? Select an AI provider: (Use arrow keys)
OpenAI (GPT-4, GPT-3.5)
Anthropic (Claude)
Google (Gemini)
Ollama (Local AI)
Azure OpenAI
AWS Bedrock
xAI (Grok)
LM Studio
? Select a model:
gpt-4o (Latest GPT-4 model)
gpt-4o-mini (Faster, cheaper GPT-4)
gpt-3.5-turbo (Fast and efficient)
? Enter your OpenAI API key: sk-********************************
? Set temperature (0-2, higher = more creative): 0.7
? Set max tokens (1-100000): 2048
? Select language:
English
Arabic (العربية)
Configuration saved successfully!
Testing connection... Success!
You can now use: rawi ask "Hello, world!"

For automation or advanced users, configure directly with command-line options.

# Basic OpenAI setup
rawi configure --provider openai --model gpt-4o --api-key sk-your-key
# With custom settings
rawi configure --provider openai --model gpt-4o-mini --api-key sk-your-key \
--temperature 0.3 --max-tokens 1500
# Claude setup
rawi configure --provider anthropic --model claude-3-5-sonnet-20241022 \
--api-key sk-ant-your-key
# For long-form analysis
rawi configure --provider anthropic --model claude-3-5-sonnet-20241022 \
--api-key sk-ant-your-key --temperature 0.1 --max-tokens 4000
# Gemini setup
rawi configure --provider google --model gemini-1.5-pro \
--api-key your-google-key
# Local Llama 2 setup
rawi configure --provider ollama --model llama2 \
--base-url http://localhost:11434
# Custom local model
rawi configure --provider ollama --model codellama:13b \
--base-url http://localhost:11434
# Azure setup (requires resource name)
rawi configure --provider azure --model gpt-4 \
--api-key your-azure-key \
--resource-name your-resource \
--base-url https://your-resource.openai.azure.com
# Using AWS credentials
rawi configure --provider bedrock --model anthropic.claude-3-sonnet-20240229-v1:0 \
--region us-east-1 \
--access-key-id your-access-key \
--secret-access-key your-secret-key
# Using AWS credential provider chain
rawi configure --provider bedrock --model anthropic.claude-3-sonnet-20240229-v1:0 \
--region us-east-1 \
--use-provider-chain

Profiles allow you to maintain different configurations for different use cases.

# Work profile with enterprise settings
rawi configure --profile work \
--provider azure \
--model gpt-4 \
--resource-name company-openai \
--api-key work-key
# Personal profile with cost-effective settings
rawi configure --profile personal \
--provider openai \
--model gpt-3.5-turbo \
--api-key personal-key \
--temperature 0.8
# Local profile for privacy
rawi configure --profile local \
--provider ollama \
--model llama2 \
--base-url http://localhost:11434
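Once a profile exists, you can select it per command or fall back to the default. The sketch below assumes rawi ask accepts the same -p/--profile flag as configure; the RAWI_PROFILE environment variable (covered later) is an alternative:
# Ask using a specific profile (assumes rawi ask supports -p/--profile like configure)
rawi ask --profile work "Review this deployment plan"
# Or select a default profile for the current shell session
export RAWI_PROFILE=local
rawi ask "Explain this regular expression"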
# List all profiles
rawi configure --list
# View specific profile
rawi configure --profile work --show
# Delete profile
rawi configure --delete personal
# Set default profile
rawi configure --profile work --set-default
# Code review profile (detailed analysis)
rawi configure --profile code-review \
--provider anthropic \
--model claude-3-5-sonnet-20241022 \
--temperature 0.1 \
--max-tokens 4000
# Quick development profile (fast responses)
rawi configure --profile dev-quick \
--provider openai \
--model gpt-3.5-turbo \
--temperature 0.3 \
--max-tokens 1000
# Local development profile (privacy)
rawi configure --profile dev-local \
--provider ollama \
--model codellama \
--temperature 0.2
# Creative writing profile
rawi configure --profile creative \
--provider openai \
--model gpt-4 \
--temperature 0.9 \
--max-tokens 3000
# Technical writing profile
rawi configure --profile technical \
--provider anthropic \
--model claude-3-5-sonnet-20241022 \
--temperature 0.3 \
--max-tokens 2500

You can also configure Rawi using environment variables:

# API keys
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
# Default profile
export RAWI_PROFILE="work"
# Default provider settings
export RAWI_PROVIDER="openai"
export RAWI_MODEL="gpt-4"
export RAWI_TEMPERATURE="0.7"
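Because these are ordinary environment variables, they can also be set inline for a single run. This sketch assumes Rawi reads them on every invocation:
# One-off override without touching any saved profile (assumes env vars are read at startup)
RAWI_PROVIDER=anthropic RAWI_MODEL=claude-3-5-sonnet-20241022 rawi ask "Explain this stack trace"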

Rawi stores configuration in:

  • Linux/macOS: ~/.config/rawi/
  • Windows: %APPDATA%/rawi/
~/.config/rawi/
├── config.json          # Global settings
├── profiles/
│   ├── default.json     # Default profile
│   ├── work.json        # Work profile
│   └── personal.json    # Personal profile
└── history/             # Session history
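Profiles are stored as plain JSON. The exact schema can vary between Rawi versions, so the field names below are assumptions based on the configure options above, not a definitive format:
# Inspect a saved profile (field names shown are illustrative; check your own file for the real schema)
cat ~/.config/rawi/profiles/work.json
{
  "provider": "azure",
  "model": "gpt-4",
  "resourceName": "company-openai",
  "temperature": 0.7,
  "maxTokens": 2048,
  "language": "english"
}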
# Edit main configuration
nano ~/.config/rawi/config.json
# Edit specific profile
nano ~/.config/rawi/profiles/work.json
# Backup configuration
cp -r ~/.config/rawi/ ~/rawi-config-backup/
  1. Get API Key: Visit platform.openai.com
  2. Choose Model: gpt-4o (latest), gpt-4o-mini (cost-effective), gpt-3.5-turbo (fast)
  3. Configure:
    rawi configure --provider openai --model gpt-4o --api-key sk-your-key
  1. Get API Key: Visit console.anthropic.com
  2. Choose Model: claude-3-5-sonnet-20241022 (recommended)
  3. Configure:
    rawi configure --provider anthropic --model claude-3-5-sonnet-20241022 --api-key sk-ant-your-key
  1. Install Ollama: Visit ollama.ai and follow installation instructions
  2. Pull a model:
    ollama pull llama2
    # or for coding:
    ollama pull codellama
  3. Configure Rawi:
    rawi configure --provider ollama --model llama2

Configuration not saved:

# Check permissions
ls -la ~/.config/rawi/
# Create directory if missing
mkdir -p ~/.config/rawi/profiles/

API key not working:

# Test configuration
rawi info
# Verify API key format
rawi configure --show

Provider connection fails:

# Check network connectivity
curl -s https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"
# Try different provider
rawi configure --provider anthropic

Ollama not connecting:

# Check if Ollama is running
ollama list
# Check base URL
rawi configure --provider ollama --base-url http://localhost:11434
# Test current configuration
rawi ask "test" --verbose
# Validate specific profile
rawi configure --profile work --test
# Check provider status
rawi provider --show openai
# Export all profiles
rawi configure --export > rawi-config-backup.json
# Export specific profile
rawi configure --profile work --export > work-profile.json
# Import configuration
rawi configure --import rawi-config-backup.json
# Import specific profile
rawi configure --profile work --import work-profile.json
# Reset all configuration
rawi configure --reset
# Reset specific profile
rawi configure --profile work --reset
# Delete all profiles
rawi configure --reset-all

Security:
  1. Use Environment Variables for API keys in production (see the sketch below)
  2. Separate Profiles for different security contexts
  3. Regular Key Rotation for enterprise environments
  4. Local Providers for sensitive data
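A minimal sketch of the first point, loading the key from a secrets file at runtime instead of storing it in a profile (the secrets path is illustrative):
# Export the key from a secrets file just before running Rawi (path is illustrative)
export OPENAI_API_KEY="$(cat /run/secrets/openai_api_key)"
rawi ask "Summarize today's error logs"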

Performance and cost:
  1. Appropriate Models: Use lighter models for simple tasks
  2. Temperature Settings: Lower for factual tasks, higher for creative work
  3. Token Limits: Set reasonable limits to control costs
  4. Profile Optimization: Use different profiles for different use cases

Organization:
  1. Descriptive Profile Names: work-gpt4, personal-claude, local-dev
  2. Environment-Specific Profiles: dev, staging, production
  3. Task-Specific Profiles: code-review, writing, data-analysis