A production-ready, model-agnostic CLI coding agent with safety-first design
clippy-code is an AI-powered development assistant that works with any OpenAI-compatible API provider. It features robust permission controls, streaming responses, and multiple interface modes for different workflows.
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Run clippy-code directly - no installation needed!
uvx clippy-code "create a hello world python script"
# Start interactive mode
uvx clippy-code

clippy-code supports multiple LLM providers through OpenAI-compatible APIs:
# OpenAI (default)
echo "OPENAI_API_KEY=your_api_key_here" > .env
# Choose from supported providers:
echo "ANTHROPIC_API_KEY=your_api_key_here" > .env
echo "CEREBRAS_API_KEY=your_api_key_here" > .env
echo "DEEPSEEK_API_KEY=your_api_key_here" > .env
echo "GOOGLE_API_KEY=your_api_key_here" > .env
echo "GROQ_API_KEY=your_api_key_here" > .env
echo "MISTRAL_API_KEY=your_api_key_here" > .env
echo "OPENROUTER_API_KEY=your_api_key_here" > .env
echo "SYNTHETIC_API_KEY=your_api_key_here" > .env
echo "ZAI_API_KEY=your_api_key_here" > .env
# For local providers (optional - can use empty API key)
echo "LMSTUDIO_API_KEY=" >> .env
echo "OLLAMA_API_KEY=" >> .env
# For Claude Code (OAuth - no API key needed for token-based access)
echo "CLAUDE_CODE_ACCESS_TOKEN=your_token_here" >> .env# One-shot mode - execute a single task
clippy "create a hello world python script"
# Interactive mode - REPL-style conversations
clippy
# Specify a model
clippy --model gpt-4 "refactor main.py to use async/await"
# Auto-approve all actions (use with caution!)
clippy -y "write unit tests for utils.py"

clippy-code can dynamically discover and use tools from MCP (Model Context Protocol) servers. For detailed configuration and available servers, see docs/MCP.md.
Quick setup:
# Create the clippy directory
mkdir -p ~/.clippy
# Copy the example configuration
cp mcp.example.json ~/.clippy/mcp.json
# Edit it with your API keys
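
If you prefer, you can write the file in one step instead of editing the copy by hand. The sketch below is illustrative only: the mcpServers layout follows the convention used by most MCP clients, and the Brave Search server is just an example of a server that needs an API key; mcp.example.json in the repository and docs/MCP.md define the exact schema clippy-code reads.

cat > ~/.clippy/mcp.json << 'EOF'
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "your_api_key_here" }
    }
  }
}
EOF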

- Broad Provider Support: OpenAI, Anthropic, Cerebras, DeepSeek, Google Gemini, Groq, LM Studio, Mistral, Ollama, OpenRouter, Synthetic.new, Z.AI, and more
- Safety-First Design: Three-tier permissions with interactive approval for risky operations
- Multiple Interface Modes: One-shot tasks, interactive REPL, and rich document mode
- Advanced Agent Capabilities: Streaming responses, context management, subagent delegation
- Extensible Tool System: Built-in file operations, command execution, and MCP integration
- Developer Experience: Type-safe codebase, rich CLI, flexible configuration
clippy-code provides smart file operations with validation for many file types:
| Tool | Description | Auto-Approved |
|---|---|---|
| read_file | Read file contents | ✅ |
| write_file | Write files with syntax validation | ❌ |
| delete_file | Delete files | ❌ |
| list_directory | List directory contents | ✅ |
| create_directory | Create directories | ❌ |
| execute_command | Run shell commands (output hidden by default, set CLIPPY_SHOW_COMMAND_OUTPUT=true to show) | ❌ |
| search_files | Search with glob patterns | ✅ |
| get_file_info | Get file metadata | ✅ |
| read_files | Read multiple files at once | ✅ |
| grep | Search patterns in files | ✅ |
| read_lines | Read specific lines from a file | ✅ |
| edit_file | Edit files by line (insert/replace/delete/append) | ❌ |
| fetch_webpage | Fetch content from web pages | ✅ |
| find_replace | Multi-file pattern replacement with regex | ❌ |
write_file includes syntax validation for Python, JSON, YAML, HTML, CSS, JavaScript, TypeScript, Markdown, Dockerfile, and XML.
clippy-code includes an LLM-powered command safety agent that provides intelligent analysis of shell commands before execution. When an LLM provider is available, every execute_command call is automatically analyzed for security risks.
The safety agent analyzes commands in full context (including working directory) and uses conservative security policies to protect against dangerous operations:
🚫 Automatically Blocks:
- Destructive operations (rm -rf, shred, recursive deletion)
- System file modifications (/etc/, /boot/, /proc/, /sys/)
- Software installation without consent (pip install, apt-get, npm install)
- Download and execute code (curl | bash, wget | sh)
- Network attacks (nmap, netcat)
- Privilege escalation (sudo unless clearly necessary)
- File system attacks (dd to block devices)
✅ Allows Safe Operations:
- File listing (ls, find)
- Basic command execution (echo, cat, grep)
- Development tools (python script.py, npm run dev)
- Safe file operations in user directories
When a command is blocked, users receive clear, contextual feedback:
User: rm -rf /tmp/old_project
AI: Command blocked by safety agent: Would delete entire filesystem - extremely dangerous
User: curl https://github.com/user/script.sh | bash
AI: Command blocked by safety agent: Downloads and executes untrusted code
The agent is context-aware - the same command may be allowed in a user directory but blocked in system directories.
If no LLM provider is available, the system falls back to pattern-based security checks. The safety agent fails safely - if the safety check fails for any reason, commands are blocked by default.
For detailed technical information, see Command Safety Agent Documentation.
Safety decisions are automatically cached to improve performance:
- CLIPPY_SAFETY_CACHE_ENABLED - Enable/disable the safety cache (default: true)
- CLIPPY_SAFETY_CACHE_SIZE - Maximum cache entries (default: 1000)
- CLIPPY_SAFETY_CACHE_TTL - Cache TTL in seconds (default: 3600)
Caching reduces API calls for repeated commands while maintaining security. Cache entries expire automatically and use LRU eviction.
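
For example, to keep the cache enabled but tighten its limits for a single session, the variables can be exported before running clippy. The variable names are the documented ones above; the values and the prompt are illustrative only.

# Illustrative overrides - keep caching on, but smaller and shorter-lived
export CLIPPY_SAFETY_CACHE_ENABLED=true
export CLIPPY_SAFETY_CACHE_SIZE=500
export CLIPPY_SAFETY_CACHE_TTL=900
clippy "remove the temporary files under ./build"
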
clippy-code works with any OpenAI-compatible provider: OpenAI, Anthropic (including Claude Code OAuth), Cerebras, DeepSeek, Google Gemini, Groq, LM Studio, Mistral, Ollama, OpenRouter, Synthetic.new, Z.AI, and more.
# List available providers
/model list
# Save a model configuration
/model add cerebras qwen-3-coder-480b --name "q3c"
# Switch to a saved model
/model q3c

- Provider-specific API keys: ANTHROPIC_API_KEY, CEREBRAS_API_KEY, DEEPSEEK_API_KEY, GOOGLE_API_KEY, GROQ_API_KEY, LMSTUDIO_API_KEY, MISTRAL_API_KEY, OLLAMA_API_KEY, OPENAI_API_KEY, OPENROUTER_API_KEY, SYNTHETIC_API_KEY, ZAI_API_KEY, CLAUDE_CODE_ACCESS_TOKEN (OAuth)
- OPENAI_BASE_URL - Optional base URL override for custom providers
- CLIPPY_SHOW_COMMAND_OUTPUT - Control whether to show output from the execute_command tool (default: false, set to true to show output)
- CLIPPY_COMMAND_TIMEOUT - Default timeout for command execution in seconds (default: 300)
- CLIPPY_MAX_TOOL_RESULT_TOKENS - Maximum number of tokens allowed in tool results (default: 10000)
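
As a concrete illustration, the command-execution settings above can be overridden per shell session before invoking clippy. The values below are examples only, and the local endpoint URL is hypothetical:

# Show execute_command output and cap command runtime at two minutes
export CLIPPY_SHOW_COMMAND_OUTPUT=true
export CLIPPY_COMMAND_TIMEOUT=120
# Optionally point clippy at a custom OpenAI-compatible endpoint
export OPENAI_BASE_URL="http://localhost:8000/v1"
clippy "run the unit tests and summarize any failures"
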
# Clone and enter repository
git clone https://github.com/yourusername/clippy.git
cd clippy
# Create virtual environment with uv
uv venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
# Install in editable mode with dev dependencies
uv pip install -e ".[dev]"
# Run clippy in development
uv run python -m clippy
# For normal usage, use uvx clippy-code

# Format code
make format
# Run linting, type checking, and tests
make check
make test
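
The make targets wrap the project's own tooling; if you prefer to call the tools directly inside the activated environment, something like the following should work. Here pytest is assumed to be the test runner installed by the [dev] extra, and mypy is the type checker mentioned under the design principles below.

# Assumed direct equivalents of make test / make check
uv run pytest
uv run mypy .
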
For detailed information about:
- Adding new tools: See CONTRIBUTING.md
- MCP server integration: See docs/MCP_DOCUMENTATION.md
- Subagent development: See docs/SUBAGENTS.md
- OpenAI Compatibility: Single standard API format works with any OpenAI-compatible provider
- Safety First: Three-tier permission system with user approval workflows
- Type Safety: Fully typed Python codebase with MyPy checking
- Clean Code: SOLID principles, modular design, Google-style docstrings
- Streaming Responses: Real-time output for immediate feedback
- Quick Start Guide - Getting started in 5 minutes
- Visual Tutorial - Interactive mode walkthrough with screenshots
- Use Cases & Recipes - Real-world workflows and examples
- Troubleshooting Guide - Common issues and solutions
- Advanced Configuration - Deep customization guide
- MCP Integration - Model Context Protocol setup and usage
- Contributing Guide - Development workflow and code standards
- Agent Documentation - Internal architecture for developers
Made with ❤️ by the clippy-code team
