Add CLAUDE.md with project state, architecture, and remaining work
Portable project context that travels with the repo — works as both CLAUDE.md for Claude Code and SLUG.md for slug itself. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Commit `f2e1d53e37`: 1 changed file with 110 additions and 0 deletions (`CLAUDE.md`, new file)
# Slug Code
FOSS Rust rewrite of Claude Code — an AI coding assistant with a pluggable LLM backend targeting OpenAI-compatible APIs (vLLM, Ollama, llama.cpp, OpenAI, etc.).
## Project
- **Binary:** `slug`
- **Package:** `slug-code`
- **Branch:** `rust-rewrite` (orphan branch — `master` holds the original TypeScript source as reference)
- **Working directory:** `/home/bryan/Downloads/claude/src`
## Architecture
| Module | Purpose |
|--------|---------|
| `src/main.rs` | CLI entry (clap), wires everything together |
| `src/provider/` | Provider trait + OpenAI-compatible streaming SSE impl |
| `src/agent/` | Core agent loop: stream → tool exec → repeat, SLUG.md injected per turn |
| `src/tools/` | Tool trait + bash, read, write, edit, glob, grep |
| `src/tui/` | Interactive REPL (basic stdin/stdout — ratatui TUI not yet built) |
| `src/permissions/` | ask/yolo/sandbox/allowEdits + glob allow/deny from settings.json |
| `src/slugmd/` | SLUG.md hierarchy loaded every turn (40K char budget) |
| `src/session/` | JSONL session persistence at `~/.slug/sessions/` |
| `src/hooks/` | 5 lifecycle events, command + prompt hook types |
| `src/compact/` | 3 compaction strategies, `/compact` command |
| `src/config/` | TOML config + CLI overrides + env vars |
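The "stream → tool exec → repeat" loop in `src/agent/` can be pictured as the minimal synchronous sketch below. All names here (`Step`, `run_tool`, `next_step`, `agent_loop`) are illustrative stand-ins, not the real API: the actual code is async and consumes SSE chunks from the provider.

```rust
// Sketch of the agent loop shape: ask the model, run any requested tool,
// feed the result back into the history, repeat until the model answers.

enum Step {
    Text(String),             // model produced a final answer
    ToolCall(String, String), // model wants a tool run: (tool name, argument)
}

fn run_tool(name: &str, arg: &str) -> String {
    // Stand-in for dispatching to bash/read/write/edit/glob/grep.
    format!("[{name} output for {arg}]")
}

fn next_step(history: &[String]) -> Step {
    // Stand-in for a model call: request one tool, then give a final answer.
    if history.iter().any(|m| m.starts_with("tool:")) {
        Step::Text("done".to_string())
    } else {
        Step::ToolCall("read".to_string(), "Cargo.toml".to_string())
    }
}

fn agent_loop(prompt: &str) -> String {
    let mut history = vec![format!("user:{prompt}")];
    loop {
        match next_step(&history) {
            Step::Text(answer) => return answer,
            Step::ToolCall(name, arg) => {
                let out = run_tool(&name, &arg);
                history.push(format!("tool:{out}"));
            }
        }
    }
}

fn main() {
    println!("{}", agent_loop("summarize the project"));
}
```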
## Conventions
- Rust 2024 edition
- No `unwrap()` in non-test code — use `?` or `anyhow`
- No analytics, telemetry, or Anthropic-internal features
- Provider trait is the abstraction boundary — new LLM backends implement it
- Permissions are checked in `agent::execute_with_permission` before any tool runs
- SLUG.md is never stored in `self.messages` — rebuilt from disk every turn
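A trait as the backend boundary can look roughly like the sketch below. This is only the shape of the idea, not the real trait in `src/provider/` (which is async and streaming); `EchoProvider` and `run` are hypothetical names for illustration. It also follows the no-`unwrap()` convention by propagating errors with `?`.

```rust
// Illustrative provider abstraction boundary: the agent only ever sees
// `dyn Provider`, so a new LLM backend is just another impl of this trait.

trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct EchoProvider; // stand-in for an OpenAI-compatible backend

impl Provider for EchoProvider {
    fn name(&self) -> &str { "echo" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

fn run(p: &dyn Provider, prompt: &str) -> Result<String, String> {
    // Callers never depend on a concrete backend type.
    p.complete(prompt)
}

fn main() -> Result<(), String> {
    let p = EchoProvider;
    println!("{}", run(&p, "hi")?);
    Ok(())
}
```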
## Completed Features
- OpenAI-compatible provider with streaming SSE
- 6 core tools: bash, read, write, edit, glob, grep
- Agent loop with tool use
- Permission system: ask / yolo / sandbox / allowEdits + glob patterns in `~/.slug/settings.json`
- SLUG.md hierarchy (global/project/rules/local)
- Session persistence: `--continue`, `--resume`, `--fork-session`
- Hook system: PreToolUse, PostToolUse, UserPromptSubmit, SessionStart, SessionEnd
- Compaction: ToolResultTrim → Truncate, `/compact` command
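Combining the SLUG.md layers (global/project/rules/local) under the 40K-character budget mentioned in the module table can be sketched as below. The concatenation format and truncation rule are assumptions for illustration; only the layer order and the budget figure come from this document.

```rust
// Sketch of layering context files under a character budget, in the spirit
// of the SLUG.md hierarchy. Layers are appended in priority order; a layer
// that would blow the budget is dropped (a guess at the truncation rule).

const BUDGET: usize = 40_000;

fn combine(layers: &[(&str, &str)]) -> String {
    let mut out = String::new();
    for (label, body) in layers {
        let section = format!("# {label}\n{body}\n");
        if out.len() + section.len() > BUDGET {
            break; // stop before exceeding the budget
        }
        out.push_str(&section);
    }
    out
}

fn main() {
    let layers = [
        ("global", "prefer short answers"),
        ("project", "Rust 2024 edition"),
    ];
    let ctx = combine(&layers);
    println!("{} chars of context", ctx.len());
}
```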
## Remaining Work (Prioritized)
### High

- Proper ratatui TUI — markdown rendering, scrollback, syntax highlighting
- Retry system — exponential backoff + model fallback on repeated failures
- Clean interrupt — Escape aborts active stream without losing context
- Wire PreToolUse/PostToolUse hooks into agent tool execution
- Concurrent reads / serial writes (tool batching)

### Medium

- Anthropic API provider (different format from OpenAI)
- Google Gemini provider
- MCP client support
- Subagent parallelism (fork/teammate/worktree models)

### Low

- Computer use integration
- Session search
- Plugin system beyond hooks
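The high-priority retry item could take roughly the shape below: a minimal synchronous sketch in which the delay schedule, attempt count, and fallback rule are all illustrative guesses. The actual sleep is left as a comment so the logic stays testable.

```rust
// Sketch of exponential backoff with model fallback: retry the operation up
// to max_attempts per model, doubling the delay each time, then move on to
// the next model in the fallback list.

fn retry_with_fallback<F>(
    mut op: F,
    models: &[&str],
    max_attempts: u32,
) -> Result<String, String>
where
    F: FnMut(&str) -> Result<String, String>,
{
    for model in models {
        let mut delay_ms = 100u64;
        for _attempt in 0..max_attempts {
            match op(model) {
                Ok(v) => return Ok(v),
                Err(_) => {
                    // Real code would sleep here:
                    // std::thread::sleep(Duration::from_millis(delay_ms));
                    delay_ms *= 2; // exponential backoff
                }
            }
        }
        // Attempts exhausted: fall back to the next model.
    }
    Err("all models failed".to_string())
}

fn main() {
    let mut calls = 0;
    let result = retry_with_fallback(
        |model| {
            calls += 1;
            if model == "backup" {
                Ok(format!("ok from {model}"))
            } else {
                Err("boom".to_string())
            }
        },
        &["primary", "backup"],
        3,
    );
    println!("{result:?} after {calls} calls");
}
```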
## Running
```bash
cargo build
./target/debug/slug --help

# Connect to local vLLM
slug -e http://localhost:8000/v1 -m qwen2.5-coder

# Connect to OpenAI
slug -e https://api.openai.com/v1 -k $OPENAI_API_KEY -m gpt-4o

# Resume last session
slug --continue

# Sandbox mode (auto-approve everything in current dir)
slug --sandbox .

# Skip all permissions
slug --yolo
```
## Settings
Config file: `~/.slug/config.toml`

```toml
endpoint = "http://localhost:8000/v1"
model = "qwen2.5-coder-32b"
max_tokens = 4096
permission_mode = { type = "ask" }
```

Permission rules: `~/.slug/settings.json` or `.slug/settings.json`

```json
{
  "permissions": {
    "allow": ["Bash(cargo *)", "Bash(git *)", "Edit(src/**)", "Write(src/**)"],
    "deny": ["Bash(rm -rf *)", "Bash(sudo *)"]
  }
}
```
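Rules like `Bash(cargo *)` pair a tool name with a glob-style argument pattern. A sketch of how such a rule might be checked is below; the parsing, the trailing-`*` prefix match, and the precedence (deny beats allow) are my assumptions for illustration, and only simple single-`*` Bash patterns are handled, not `**` path globs.

```rust
// Sketch of checking a tool invocation against rules like "Bash(cargo *)".
// A rule is "Tool(pattern)"; a trailing '*' makes the pattern a prefix match.

fn rule_matches(rule: &str, tool: &str, arg: &str) -> bool {
    let Some((r_tool, rest)) = rule.split_once('(') else { return false };
    let pattern = rest.trim_end_matches(')');
    if r_tool != tool {
        return false;
    }
    match pattern.strip_suffix('*') {
        Some(prefix) => arg.starts_with(prefix), // "cargo *" matches "cargo build"
        None => arg == pattern,                  // exact match otherwise
    }
}

fn allowed(allow: &[&str], deny: &[&str], tool: &str, arg: &str) -> bool {
    // Assumed precedence: any deny rule wins over any allow rule.
    if deny.iter().any(|r| rule_matches(r, tool, arg)) {
        return false;
    }
    allow.iter().any(|r| rule_matches(r, tool, arg))
}

fn main() {
    let allow = ["Bash(cargo *)", "Bash(git *)"];
    let deny = ["Bash(sudo *)"];
    println!("{}", allowed(&allow, &deny, "Bash", "cargo build")); // true
    println!("{}", allowed(&allow, &deny, "Bash", "sudo rm")); // false
}
```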