Configuration Options
Comprehensive guide to configuring CommitStudio AI settings and options for optimal code analysis
AI Models
CommitStudio leverages OpenAI's latest models to provide intelligent code analysis. Each model offers different capabilities, performance characteristics, and cost profiles:
Model | Best For | Token Limit | Cost |
---|---|---|---|
gpt-4o | Complex codebases, nuanced reviews | 128K | Highest |
gpt-4.1 | Deep technical analysis, security reviews | 128K | High |
gpt-4.1-mini | Everyday code reviews, general analysis | 64K | Medium |
gpt-4.1-nano | Quick reviews, simple analyses | 16K | Low |
o4-mini | Standard daily reviews | 64K | Medium |
o3-mini | Legacy projects, compatibility checks | 16K | Low |
The default model is gpt-4.1-mini, offering a good balance of performance and cost for most use cases. Here's how to select different models based on your needs:
Model Selection Guidelines
- For critical code reviews: Use gpt-4o or gpt-4.1 for the most thorough analysis
- For day-to-day development: Use gpt-4.1-mini for good results at reasonable cost
- For quick checks: Use gpt-4.1-nano when you need fast feedback
- For large repositories: Use models with higher token limits to analyze more code at once
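For example, each of these scenarios maps to a single config command (the --model flag is covered under Direct Configuration below):
# Thorough review before a release
commitstudio config --model gpt-4o
# Everyday development work
commitstudio config --model gpt-4.1-mini
# Fast feedback on a small change
commitstudio config --model gpt-4.1-nano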
Token Limits
Tokens are the basic units of text that the AI processes. Each model has different token limits that determine how much code can be analyzed at once:
# Set max tokens for responses
commitstudio config --max-tokens 3000
Understanding Token Usage
- Code Analysis: More complex code requires more tokens to analyze
- Response Detail: Higher token limits produce more detailed analysis
- Cost Impact: Token usage directly affects API costs
- Context Window: Separate from the max-tokens setting, this is how much input (code plus instructions) the model can read at once
Recommended Token Limits
Use Case | Tokens |
---|---|
Brief summaries | 1000-1500 |
Standard reviews (default) | 2000-3000 |
Detailed analysis | 4000-6000 |
Comprehensive reviews | 8000+ |
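To get a rough sense of how many tokens a given change involves, a common heuristic is that one token corresponds to roughly four characters of text. The snippet below applies that heuristic to your staged diff; the ratio is an approximation used for illustration, not a figure reported by CommitStudio:
# Roughly estimate the token count of the staged diff (heuristic: ~4 characters per token)
chars=$(git diff --cached | wc -c)
echo "approx. $((chars / 4)) tokens"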
How Model Selection Affects Analysis
Different models analyze code differently:
- Higher-tier models (gpt-4o, gpt-4.1)
  - Recognize complex design patterns
  - Identify subtle bugs that might only appear in edge cases
  - Suggest architectural improvements
  - Provide detailed explanations of potential issues
  - Make more nuanced trade-off recommendations
- Mid-tier models (gpt-4.1-mini, o4-mini)
  - Catch common coding errors
  - Identify straightforward performance issues
  - Suggest code simplifications
  - Recommend basic best practices
  - Provide reasonable explanations
- Lower-tier models (gpt-4.1-nano, o3-mini)
  - Find syntax errors and obvious bugs
  - Suggest simple improvements
  - Provide brief explanations
  - Work best on smaller, simpler code segments
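In practice this means matching the response budget to the model tier: a deep review benefits from both a higher-tier model and a larger max-tokens value, while a quick check can keep both small. For example, using the flags documented below:
# Deep review: higher-tier model plus a detailed response budget
commitstudio config --model gpt-4.1 --max-tokens 6000
# Quick check: lower-tier model plus a brief response budget
commitstudio config --model gpt-4.1-nano --max-tokens 1000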
Configuration Commands
View Current Settings
To see your current configuration:
commitstudio config --view
This displays:
- Current AI model
- Maximum tokens setting
- GitHub token status
- OpenAI API key status
- Cache configuration
Interactive Configuration
For a guided setup experience:
commitstudio config
This launches an interactive prompt that:
- Shows available AI models
- Allows selection from a menu
- Prompts for token limit settings
- Validates your choices
- Saves your configuration
Direct Configuration
For scripting or quick changes:
# Set a specific model
commitstudio config --model gpt-4o
# Set max tokens
commitstudio config --max-tokens 3000
# Configure multiple settings at once
commitstudio config --model gpt-4.1-mini --max-tokens 2500
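Because these flags are non-interactive, they also work well in setup scripts. The sketch below is one possible CI step (the surrounding script is an assumption, not part of CommitStudio); it pins a low-cost configuration and then prints it to the build log:
#!/usr/bin/env sh
set -e  # stop if any command fails
# Pin an inexpensive model and a small response budget for automated checks
commitstudio config --model gpt-4.1-nano --max-tokens 1500
# Record the resulting configuration in the build log
commitstudio config --view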
Configuration Persistence
Your configuration settings are stored securely on your machine:
- macOS: ~/Library/Preferences/commitstudio-nodejs
- Linux: ~/.config/commitstudio
- Windows: %APPDATA%\commitstudio-nodejs
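If you need to inspect or back up what has been saved, you can look inside the directory for your platform. The exact files within it are not documented here, so treat this listing as illustrative:
# Linux: list whatever CommitStudio has stored
ls -la ~/.config/commitstudio
# macOS equivalent
ls -la ~/Library/Preferences/commitstudio-nodejs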
These settings persist between sessions and can be reset with:
commitstudio --reset
Related Topics
- Environment Variables - Configure CommitStudio using environment variables
- Customizing Behavior - Fine-tune how CommitStudio analyzes your code
- Standard Mode - See how configuration affects standard analysis mode