Mirror of https://github.com/itme-brain/agent-team.git
Synced 2026-05-08 18:10:13 -04:00

Compare commits

No commits in common. "7381316e28cb92f0ee5e63bf62ff3b1339531ec0" and "41c31a2a85d78925a196890dee627a952fd7da2c" have entirely different histories.

7381316e28 ... 41c31a2a85

40 changed files with 277 additions and 3446 deletions
.claude/memory/MEMORY.md (new file, +5)

@@ -0,0 +1,5 @@
# Project Memory

Index of persistent memory for the agent-team project.

- [TODO: inter-agent JSON schema](todo_inter_agent_schema.md) — formal typed schema for all inter-agent messages to replace freetext signals
.claude/memory/todo_inter_agent_schema.md (new file, +11)

@@ -0,0 +1,11 @@
---
name: TODO — formal JSON schema for inter-agent communication
description: Planned work to replace informal signal/text conventions with a typed JSON schema for all inter-agent messages
type: project
---

Define a formal JSON schema for all inter-agent communication in the agent team.

**Why:** Current protocol relies on freetext signals (RFR, LGTM, REVISE, VERDICT: PASS, etc.) and unstructured prose output. A typed schema would make messages machine-readable, easier to validate, and more reliable for orchestrator parsing — especially as parallelism increases and the orchestrator is managing multiple concurrent agent outputs.

**How to apply:** Design the schema before any further changes to the orchestrate skill or agent protocols. All agent output formats (reviewer verdict, auditor verdict, worker RFR, architect triage response, etc.) should conform to it. Consider whether the schema lives as a skill, a standalone JSON Schema file, or embedded in agent frontmatter.
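Since the schema itself is still TODO, here is a minimal sketch of what envelope validation might look like. The field names and allowed message types below are illustrative assumptions, not a committed design:

```python
# Illustrative only: field names and message types are assumptions,
# not the repo's (yet-to-be-designed) schema.
REQUIRED_FIELDS = {"msg_type", "sender", "body"}
ALLOWED_TYPES = {"rfr", "verdict", "triage", "research"}

def validate_envelope(msg: dict) -> list:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    missing = REQUIRED_FIELDS - msg.keys()
    if missing:
        errors.append("missing fields: " + ", ".join(sorted(missing)))
    if msg.get("msg_type") not in ALLOWED_TYPES:
        errors.append("unknown msg_type: %r" % msg.get("msg_type"))
    return errors

# A typed reviewer verdict validates; a bare freetext signal does not.
validate_envelope({"msg_type": "verdict", "sender": "reviewer", "body": "PASS"})  # → []
validate_envelope({"msg_type": "LGTM"})  # → two errors
```

The point of the typed envelope is exactly this: the orchestrator gets a mechanical pass/fail instead of parsing prose for "LGTM".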
.envrc (deleted, -1)

@@ -1 +0,0 @@
use flake
.gitignore (vendored, 8 changes)

@@ -8,11 +8,3 @@ settings.local.json
# OS noise
.DS_Store
Thumbs.db

# Generated output (derived from source templates via generate.sh)
settings.json
claude/
codex/
.claude
.codex
.direnv
CLAUDE.md (70 changes)

@@ -1,4 +1,70 @@
# Global Claude Code Instructions

Rules are modularized in `rules/` and loaded automatically by the generated Claude config.
Agent-team specific protocols live in skills (orchestrate, conventions, worker-protocol, qa-checklist, message-schema).

## Session Behavior

- Treat each session as stateless — do not assume context from prior sessions
- The CLAUDE.md hierarchy is the only source of persistent context
- If something needs to carry forward across sessions, it belongs in a CLAUDE.md file, not in session memory

## Project Memory

- Project-specific memory lives in `.claude/memory/` at the project root
- Use `MEMORY.md` in that directory as the index (one line per entry pointing to a file)
- Memory files use frontmatter: `name`, `description`, `type` (user/feedback/project/reference)
- Commit `.claude/memory/` with the repo so memory persists across machines and sessions

## Commits & Git Workflow

- Make many small, tightly scoped commits — one logical change per commit
- Follow conventional commit format per the conventions skill
- Ask before pushing to remote or force-pushing
- Ask before opening PRs unless explicitly told to

## Responses & Explanations

- Be concise — lead with the action or answer, not the preamble
- Include just enough reasoning to explain *why* a decision was made, not a full walkthrough
- Skip trailing summaries ("Here's what I did...") — the diff speaks for itself
- No emojis unless explicitly asked

## Tool & Approach Philosophy

- Prefer tools and solutions that are declarative and reproducible over imperative one-offs
- Portability across dev environments is a first-class concern — avoid hardcoding machine-specific paths or assumptions
- The right tool for the job is the right tool — no language/framework bias, but favor things that can be version-pinned and reproduced

## Parallelism

- Always parallelize independent work — tool calls, subagents, file reads, searches
- When a task has components that don't depend on each other, run them concurrently by default
- Spin up subagents for distinct workstreams (audits, refactors, tests, docs) rather than working sequentially
- Subagents default to Sonnet for cost efficiency; agent frontmatter overrides where capability requires a different model
- Sequential execution should be the exception, not the default

## Cost Awareness

- Subagent outputs should be concise — return the deliverable, not the reasoning
- When subagent results return to main context, prefer summaries over verbatim output
- Not every task needs the full planning pipeline — Tier 1 tasks with obvious approaches can go straight to worker dispatch

## Verification

- After making changes, run relevant tests or build commands to verify correctness before reporting success
- If no tests exist for the changed code, say so rather than silently assuming it works
- Prefer running single targeted tests over the full suite unless asked otherwise

## Context Management

- Use subagents for exploratory reads and investigations to keep the main context clean
- Prefer scoped file reads (offset/limit) over reading entire large files
- When a task is complete or the topic shifts significantly, suggest /clear

## When Things Go Wrong

- If an approach fails twice, stop and reassess rather than continuing to iterate
- Present the failure clearly and propose an alternative before proceeding

## Nix

- Nix is the preferred meta package manager on all systems — assume it is available even on non-NixOS Linux
- Always prefer a project-level `flake.nix` as the canonical way to define dev environments, build systems, and scripts
- Dev environments go in `devShells`, project scripts/tools go in `packages` or as `apps` within the flake
- Never suggest `apt`, `brew`, `pip install --user`, `npm install -g`, or other imperative global installs — reach for `nix shell`, `nix run`, or the project devshell instead
- Prefer `nix run` for one-off tool invocations and `nix develop` (or `direnv` + `use flake`) for persistent dev shells
- Binaries and tools introduced to a project should be pinned and run through Nix, not assumed to be on `$PATH` from the host
- Flakes are the preferred interface — avoid legacy `nix-env` or channel-based patterns

## Research Before Acting

- Before implementing a solution, research it — read relevant documentation, search for existing patterns, check official sources
- Do not reason from first principles when documentation or prior art exists
- Prefer verified answers over confident guesses
README.md (229 changes)

@@ -1,60 +1,32 @@
# AI.conf
# agent-team

A portable agent-team config repo with shared authored sources and generated target outputs. Clone it, run the flake entrypoints or the `just` wrapper, and the repo will generate/install the target-specific config for the supported tools.
A portable Claude Code agent team configuration. Clone it, run `install.sh`, and your Claude Code sessions get a full team of specialized subagents and shared skills — on any machine.

## Quick install

```bash
git clone <repo-url>
cd agent-team
nix develop        # enter devShell with yq + envsubst
nix run .#check    # validate protocols + generate artifacts
nix run .#install  # install generated outputs into the supported target config dirs
git clone <repo-url> ~/Documents/Personal/projects/agent-team
cd ~/Documents/Personal/projects/agent-team
./install.sh
```

The supported user-facing entrypoints are the flake apps and the `just` wrapper. `generate.sh` and `install.sh` remain the internal implementation layer behind them. Works on Linux, macOS, and Windows (Git Bash).
## Nix entrypoints

The flake exposes formal workflow entrypoints:

```bash
nix run .#validate  # syntax + protocol presence/basic shape checks
nix run .#build     # generate settings.json + claude/ + codex/
nix run .#check     # validate + build
nix run .#install   # run install.sh
nix flake check     # run flake checks (validate + build in sandboxed check derivations)
```

`just` is also supported as a convenience wrapper over those same flake commands:

```bash
just validate
just build
just check
just install
just clean  # removes generated artifacts: settings.json + claude/ + codex/
```

`generate.sh` and `install.sh` are kept as internal implementation details for portability and debugging, but they are no longer the primary documented workflow.
The script symlinks `agents/`, `skills/`, `CLAUDE.md`, and `settings.json` into `~/.claude/`. Works on Linux, macOS, and Windows (Git Bash).

## Maintenance

**Symlink fragility:** some generated target files are installed as symlinks by `install.sh`. Tools that rewrite those files may replace the symlink with a regular file. If repo edits stop being reflected in an installed target config, re-run `./install.sh` to restore the symlink.
**Symlink fragility:** `~/.claude/CLAUDE.md` and `~/.claude/settings.json` are installed as symlinks by `install.sh`. Some tools (including Claude Code itself when writing settings) resolve symlinks to regular files on write, silently breaking the link. If edits to the repo are no longer reflected in `~/.claude/`, re-run `./install.sh` to restore the symlinks.
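A broken link of this kind can be detected mechanically. A minimal sketch, using a temporary directory standing in for `~/.claude/` (the paths are illustrative):

```python
import os
import tempfile

def symlink_intact(path):
    """True while the installed file is still a symlink; False once a
    tool has resolved it into a regular file."""
    return os.path.islink(path)

with tempfile.TemporaryDirectory() as home:
    source = os.path.join(home, "repo-CLAUDE.md")  # stands in for the repo copy
    installed = os.path.join(home, "CLAUDE.md")    # stands in for ~/.claude/CLAUDE.md
    open(source, "w").close()
    os.symlink(source, installed)
    assert symlink_intact(installed)   # fresh install: link is intact
    assert not symlink_intact(source)  # a regular file means the link broke
```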
## Agents

| Agent | Model policy | Role |
| Agent | Model | Role |
|---|---|---|
| `grunt` | fast | Cheap implementer for trivial, tightly scoped work. |
| `worker` | balanced | Standard implementer for normal development tasks. |
| `senior` | strong | Expensive implementer for ambiguous, architectural, or high-risk work. |
| `debugger` | balanced | Diagnoses and fixes bugs with minimal targeted changes. |
| `documenter` | balanced | Writes and updates docs. Never modifies source code. |
| `architect` | strong | Triage, research coordination, architecture design, wave decomposition. Read-only. |
| `researcher` | balanced | Parallel fact-finding. One instance per research question. Read-only. |
| `reviewer` | balanced | Code quality review + AC verification + claim checking. Read-only. |
| `auditor` | balanced | Security analysis + runtime validation. Read-only, runs in background. |
| `worker` | sonnet (haiku/opus by orchestrator) | Universal implementer. Model scaled to task complexity. |
| `debugger` | sonnet | Diagnoses and fixes bugs with minimal targeted changes. |
| `documenter` | sonnet | Writes and updates docs. Never modifies source code. |
| `architect` | opus | Triage, research coordination, architecture design, wave decomposition. Read-only. |
| `researcher` | sonnet | Parallel fact-finding. One instance per research question. Read-only. |
| `reviewer` | sonnet | Code quality review + AC verification + claim checking. Read-only. |
| `auditor` | sonnet | Security analysis + runtime validation. Read-only, runs in background. |

## Skills

@@ -64,17 +36,11 @@ just clean  # removes generated artifacts: settings.json + claude/ + codex/
| `conventions` | Core coding conventions and quality priorities shared by all agents |
| `worker-protocol` | Output format, feedback handling, and operational procedures for worker agents |
| `qa-checklist` | Self-validation checklist workers run before returning results |
| `message-schema` | Typed YAML frontmatter envelopes for all inter-agent communication |
| `project` | Instructs agents to check for and ingest a project-specific skill file before starting work |

## Rules
## How to use

Global instructions are modularized in `rules/`. Each file covers a focused topic (git workflow, Nix preferences, response style, etc.). Agent-team specific protocols live in skills, not rules. Target adapters decide how those rules are surfaced.

## Target usage

### Claude Code

Load the orchestrate skill when a task is complex enough to warrant delegation:
In an interactive Claude Code session, load the orchestrate skill when a task is complex enough to warrant delegation:

```
/skill orchestrate
```

@@ -82,173 +48,22 @@ Load the orchestrate skill when a task is complex enough to warrant delegation:

Once loaded, Claude acts as orchestrator — decomposing tasks, selecting agents, reviewing output, and managing the git flow. Agents are auto-delegated based on task type; you don't invoke them directly.

For simple tasks, invoke an agent directly:
For simple tasks, agents can be invoked directly:

```
/agent worker Fix the broken pagination in the user list endpoint
/agent grunt Rename this variable consistently in one file
/agent senior Untangle this multi-file initialization bug
```

### Codex CLI

Agents are available as named agents in the installed Codex config. Invoke them with:

```
codex --agent worker "Fix the broken pagination in the user list endpoint"
```

## Dual-target generation

This repo uses two authored protocol files:

- [SETTINGS.yaml](SETTINGS.yaml) for runtime policy (filesystem, approvals, network, model intent)
- [TEAM.yaml](TEAM.yaml) for team inventory metadata (agents, skills, rules)

Long-form instructions remain authored in Markdown (`agents/*.md`, `skills/*/SKILL.md`, `rules/*.md`).

Runtime policy is documented in [spec/agent-runtime-v1.md](spec/agent-runtime-v1.md) and described by [schemas/agent-runtime.schema.json](schemas/agent-runtime.schema.json). Team inventory is documented in [spec/team-protocol-v1.md](spec/team-protocol-v1.md). `generate.sh` derives target-specific outputs for the currently supported adapters.

### What gets generated

| Source | Generated | Location |
|---|---|---|
| `TEAM.yaml` + `agents/*.md` | `claude/agents/*.md` | Claude adapter output |
| `TEAM.yaml` + `agents/*.md` | `codex/agents/*.toml` | Codex adapter output |
| `SETTINGS.yaml` | `settings.json` (compatibility artifact, generated) | repo root |
| `SETTINGS.yaml` | `claude/settings.json` | Claude adapter output |
| `SETTINGS.yaml` | `codex/config.toml` | Codex adapter output |
| `TEAM.yaml` + `rules/*.md` | `codex/AGENTS.md` | Codex adapter output |
| `TEAM.yaml` + `skills/*/SKILL.md` | `codex/skills -> ../skills` | Codex adapter output |
| `TEAM.yaml` + `skills/*/SKILL.md` | installed skill dirs | target install output |

All final config files are generated artifacts. The authored protocol sources are `SETTINGS.yaml`, `TEAM.yaml`, and Markdown instruction content. The primary workflows are `nix run .#build` / `nix run .#install` or the equivalent `just` commands.
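The `TEAM.yaml` + `agents/*.md` → `codex/agents/*.toml` row can be sketched as follows. This is a hedged illustration: the real generator is `generate.sh`, and the exact TOML layout it emits is not shown here, so the output shape below (including the `skills.config` table form) is an assumption. The model tier values follow the Claude → Codex model mapping table:

```python
# Assumed per-agent TOML shape; tier names map per the repo's
# Claude -> Codex model mapping table.
MODEL_MAP = {"opus": "gpt-5.4", "sonnet": "gpt-5.3-codex", "haiku": "gpt-5.1-codex-mini"}

def agent_to_codex_toml(agent):
    """Render one TEAM.yaml agent entry as an assumed Codex TOML snippet."""
    lines = [
        'name = "%s"' % agent["name"],
        'model = "%s"' % MODEL_MAP[agent["model"]],
    ]
    for skill in agent.get("skills", []):
        lines.append("[skills.config.%s]" % skill)
    return "\n".join(lines)

print(agent_to_codex_toml({"name": "worker", "model": "sonnet",
                           "skills": ["conventions", "worker-protocol"]}))
```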
Narrow compatibility caveats:

- TEAM schema is intentionally rigid/repo-specific in v1. Inventory changes require schema updates in lockstep.
- Claude generated agent frontmatter is normalized by generator serialization (field order/quoting), which may produce non-semantic diffs.
- Codex skill installation is TEAM-authoritative when `TEAM.yaml` is present. Legacy directory fallback is used only when TEAM is absent or unparseable.
- Codex custom-agent files do not preserve every TEAM agent field. `background`, `memory`, and `isolation` have no documented per-agent equivalents in current Codex docs. TEAM `skills` are mapped into per-agent Codex `skills.config` entries.

Shared runtime intent is generated conservatively across tools:

| Shared source | Claude adapter | Codex adapter |
|---|---|---|
| `runtime.filesystem = read-only` | `permissions.defaultMode = "plan"` | `sandbox_mode = "read-only"` |
| `runtime.filesystem = workspace-write` | `permissions.defaultMode = "acceptEdits"` | `sandbox_mode = "workspace-write"` |
| `runtime.approval = manual` | partially represented | `approval_policy = "on-request"` |
| `runtime.approval = guarded-auto` | partially represented | `approval_policy = "untrusted"` |
| `runtime.approval = full-auto` | partially represented | `approval_policy = "never"` |

The adapters do not expose identical config surfaces. For example, Codex does not support Claude-style per-tool `allow` / `deny` / `ask` patterns directly. The shared protocol keeps the intent portable, then adapters derive the closest target behavior.
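The derivation in the table above can be sketched as a lookup. The values are copied from the table; the function shape is an illustration, not `generate.sh`'s actual structure:

```python
# Values come from the "Shared runtime intent" table; only the Codex side
# fully represents approval intent.
FILESYSTEM_MAP = {
    "read-only": ("plan", "read-only"),
    "workspace-write": ("acceptEdits", "workspace-write"),
}
APPROVAL_MAP = {"manual": "on-request", "guarded-auto": "untrusted", "full-auto": "never"}

def derive_targets(runtime):
    """Map shared runtime intent to the closest per-adapter settings."""
    claude_mode, codex_sandbox = FILESYSTEM_MAP[runtime["filesystem"]]
    return {
        "claude": {"permissions.defaultMode": claude_mode},
        "codex": {"sandbox_mode": codex_sandbox,
                  "approval_policy": APPROVAL_MAP[runtime["approval"]]},
    }

derive_targets({"filesystem": "workspace-write", "approval": "guarded-auto"})
# → Codex side: sandbox_mode "workspace-write", approval_policy "untrusted"
```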
`runtime.approval` and `runtime.network_access` are the primary source of truth. `targets.codex.approval_policy` and `targets.codex.network_access` are compatibility overrides for exceptional cases only. When set, they override the Codex-derived value.

This repo intentionally sets those Codex overrides to `approval_policy: never` and `network_access: true`. The reason is not that Codex has no approval controls at all, but that it lacks Claude-equivalent pattern-level permission controls for tool/path `allow` / `deny` / `ask`. In this repo, Codex therefore runs with a deliberately more permissive top-level policy than the portable runtime defaults.

Use target-specific fields only when you intentionally need a target-only override:

```yaml
targets:
  codex:
    approval_policy: untrusted
    network_access: false
  claude:
    claude_md_excludes:
      - .claude/agent-memory/**
```

## Shared protocol

The protocol source is YAML because it is easier to read and annotate than JSON or TOML while still being easy to validate with JSON Schema.

- Runtime policy: [SETTINGS.yaml](SETTINGS.yaml)
- Runtime schema: [schemas/agent-runtime.schema.json](schemas/agent-runtime.schema.json)
- Runtime spec: [spec/agent-runtime-v1.md](spec/agent-runtime-v1.md)
- Team/inventory spec: [spec/team-protocol-v1.md](spec/team-protocol-v1.md)

The protocol is intentionally small in v1:

- portable model tier and reasoning level
- filesystem access intent
- approval intent
- network access
- portable tool classes
- protected paths
- dangerous shell command prompts
- limited target-specific escape hatches

Example:

```yaml
version: 1

model:
  class: balanced
  reasoning: medium

runtime:
  filesystem: workspace-write
  approval: guarded-auto
  network_access: false
  tools:
    - shell
    - read
    - edit
    - write
    - glob
    - grep
    - web_fetch
    - web_search

safety:
  protected_paths:
    - ~/.ssh/**
    - ~/.aws/**
    - ~/.gnupg/**
    - "**/.env*"
  dangerous_shell_commands:
    ask:
      - rm *
      - git reset --hard*
      - sudo *
```
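The `ask` patterns above read like shell globs. A sketch of how such matching could work; interpreting the patterns with `fnmatch` semantics is an assumption about the runtime, not documented behavior:

```python
# The ask-patterns come from the example above; glob (fnmatch) semantics
# are an assumption about how the runtime interprets them.
import fnmatch

ASK_PATTERNS = ["rm *", "git reset --hard*", "sudo *"]

def needs_confirmation(command):
    """True if the shell command matches any dangerous 'ask' pattern."""
    return any(fnmatch.fnmatch(command, p) for p in ASK_PATTERNS)

needs_confirmation("rm -rf build/")  # → True, matches "rm *"
needs_confirmation("git status")     # → False
```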
## Model mapping by target

| Claude adapter | Codex adapter |
|---|---|
| `opus` | `gpt-5.4` |
| `sonnet` | `gpt-5.3-codex` |
| `haiku` | `gpt-5.1-codex-mini` |

## Template variables

Agent body text uses `${VAR}` placeholders that are expanded per-target by `generate.sh`:

| Variable | Claude adapter | Codex adapter |
|---|---|---|
| `${PLANS_DIR}` | `.claude/plans` | `plans` |
| `${WEB_SEARCH}` | `via WebFetch/WebSearch` | `via web search` |
| `${SEARCH_TOOLS}` | `Use Grep/Glob/Read` | `Search the codebase` |

Skills and rules are tool-agnostic and shared as-is — do not add tool-specific references to them.
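`${VAR}` expansion of the kind `generate.sh` performs (via `envsubst`) can be sketched with Python's `string.Template`; the agent-body sentence here is invented for illustration:

```python
from string import Template

# Claude-adapter values from the table above.
CLAUDE_VARS = {
    "PLANS_DIR": ".claude/plans",
    "WEB_SEARCH": "via WebFetch/WebSearch",
    "SEARCH_TOOLS": "Use Grep/Glob/Read",
}

body = Template("Write the plan to ${PLANS_DIR} and research ${WEB_SEARCH}.")
print(body.substitute(CLAUDE_VARS))
# → Write the plan to .claude/plans and research via WebFetch/WebSearch.
```

Swapping in the Codex-adapter values from the same table yields the Codex variant of the agent body.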
## Project-specific config

Each project repo can extend the team with local config in `.claude/`:

- `.claude/CLAUDE.md` — project-specific instructions (architecture notes, domain conventions, stack details)
- `.claude/agents/` — project-local agent overrides or additions
- `.claude/skills/project.md` — skill file that agents automatically ingest before starting work (see the `project` skill)

Commit `.claude/` with the project so the team has context wherever it runs.

## Memory
## Agent memory

Two memory systems coexist:

- **Project memory** (`memory/`) — curated context files with YAML frontmatter, indexed by `MEMORY.md`. This is the portable, instruction-level memory source shared across targets.
- **Agent memory** (`.claude/agent-memory/`) — Claude Code's built-in runtime memory, written automatically by agents with `memory: project` scope. Excluded from CLAUDE.md context via `claudeMdExcludes` to avoid polluting the context window.

Commit both directories when used so memory persists across machines and sessions.
Agents with `memory: project` scope write persistent memory to `.claude/agent-memory/` in the project directory. This memory is project-scoped and can be committed with the repo so future sessions pick up where prior ones left off.
@@ -1,54 +0,0 @@
# yaml-language-server: $schema=./schemas/agent-runtime.schema.json

version: 1

model:
  class: balanced
  reasoning: medium

runtime:
  filesystem: workspace-write
  approval: guarded-auto
  network_access: false
  tools:
    - shell
    - read
    - edit
    - write
    - glob
    - grep
    - web_fetch
    - web_search

safety:
  protected_paths:
    - ~/.ssh/**
    - ~/.aws/**
    - ~/.gnupg/**
    - "**/.env*"
  dangerous_shell_commands:
    ask:
      - rm *
      - rmdir *
      - git push --force*
      - git push -f*
      - git reset --hard*
      - git clean *
      - chmod *
      - dd *
      - mkfs*
      - shred *
      - kill *
      - killall *
      - sudo *

targets:
  claude:
    claude_md_excludes:
      - .claude/agent-memory/**
  codex:
    # Intentional target override: Codex does not expose Claude-equivalent
    # per-tool/path allow/deny/ask controls, so this repo runs Codex in
    # full-auto with network enabled by default.
    approval_policy: never
    network_access: true
TEAM.yaml (deleted, -321)

@@ -1,321 +0,0 @@
version: 1

agents:
  order:
    - architect
    - auditor
    - debugger
    - documenter
    - grunt
    - researcher
    - reviewer
    - senior
    - worker
  items:
    architect:
      id: architect
      name: architect
      description: Research-first planning agent. Handles triage, research coordination, architecture design, and wave decomposition. Use before any non-trivial implementation task. Produces the implementation blueprint the entire team follows.
      model: opus
      effort: max
      permission_mode: plan
      tools:
        - Read
        - Glob
        - Grep
        - WebFetch
        - WebSearch
        - Write
      disallowed_tools:
        - Edit
      max_turns: 35
      skills:
        - conventions
        - message-schema
      instruction_file: agents/architect.md
    auditor:
      id: auditor
      name: auditor
      description: Use after implementation — audits for security vulnerabilities and validates runtime behavior. Builds, tests, and probes acceptance criteria. Never modifies code.
      model: sonnet
      effort: ""
      permission_mode: acceptEdits
      tools:
        - Read
        - Glob
        - Grep
        - Bash
        - WebFetch
        - WebSearch
      disallowed_tools:
        - Write
        - Edit
      max_turns: 25
      skills:
        - conventions
        - message-schema
        - qa-checklist
      background: true
      instruction_file: agents/auditor.md
    debugger:
      id: debugger
      name: debugger
      description: Use immediately when encountering a bug, error, or unexpected behavior. Diagnoses root cause and applies a minimal targeted fix. Does not refactor or improve surrounding code.
      model: sonnet
      effort: ""
      permission_mode: acceptEdits
      tools:
        - Read
        - Write
        - Edit
        - Glob
        - Grep
        - Bash
      disallowed_tools: []
      max_turns: 20
      skills:
        - conventions
        - worker-protocol
        - message-schema
        - qa-checklist
      instruction_file: agents/debugger.md
    documenter:
      id: documenter
      name: documenter
      description: Use when asked to write or update documentation — READMEs, API references, architecture overviews, inline doc comments, or changelogs. Reads code first and updates documentation artifacts only.
      model: sonnet
      effort: high
      permission_mode: acceptEdits
      tools:
        - Read
        - Write
        - Edit
        - Glob
        - Grep
      disallowed_tools: []
      max_turns: 20
      skills:
        - conventions
        - worker-protocol
        - message-schema
        - qa-checklist
      memory: project
      instruction_file: agents/documenter.md
    grunt:
      id: grunt
      name: grunt
      description: Fast, cheap implementer for trivial and tightly scoped work. Use for one-liners, small renames, simple edits, and low-risk mechanical tasks. Escalate when the work grows beyond that scope.
      model: haiku
      effort: ""
      permission_mode: acceptEdits
      tools:
        - Read
        - Write
        - Edit
        - Glob
        - Grep
        - Bash
      disallowed_tools: []
      max_turns: 15
      skills:
        - conventions
        - worker-protocol
        - message-schema
        - qa-checklist
      isolation: worktree
      instruction_file: agents/grunt.md
    researcher:
      id: researcher
      name: researcher
      description: Use to answer a specific research question with verified facts. Spawned in parallel — one instance per topic. Stateless. Returns verified facts, source URLs, and gotchas.
      model: sonnet
      effort: ""
      permission_mode: plan
      tools:
        - Read
        - Glob
        - Grep
        - WebFetch
        - WebSearch
      disallowed_tools:
        - Write
        - Edit
      max_turns: 10
      skills:
        - message-schema
      instruction_file: agents/researcher.md
    reviewer:
      id: reviewer
      name: reviewer
      description: Use after implementation — reviews code quality and verifies claims against source, docs, and acceptance criteria. Never modifies code.
      model: sonnet
      effort: ""
      permission_mode: plan
      tools:
        - Read
        - Glob
        - Grep
        - WebFetch
        - WebSearch
      disallowed_tools:
        - Write
        - Edit
      max_turns: 20
      skills:
        - conventions
        - message-schema
        - qa-checklist
      instruction_file: agents/reviewer.md
    senior:
      id: senior
      name: senior
      description: Strong implementer for ambiguous, architectural, or high-risk work. Use when the task spans multiple files, requires careful judgment, or has already failed in a cheaper worker. Default escalation path for hard implementation work.
      model: opus
      effort: ""
      permission_mode: acceptEdits
      tools:
        - Read
        - Write
        - Edit
        - Glob
        - Grep
        - Bash
      disallowed_tools: []
      max_turns: 35
      skills:
        - conventions
        - worker-protocol
        - message-schema
        - qa-checklist
      isolation: worktree
      instruction_file: agents/senior.md
    worker:
      id: worker
      name: worker
      description: Balanced implementer for standard development work. Use when the task is well-defined but not trivial. Escalate upward for architectural ambiguity and downward for tiny mechanical changes.
      model: sonnet
      effort: ""
      permission_mode: acceptEdits
      tools:
        - Read
        - Write
        - Edit
        - Glob
        - Grep
        - Bash
      disallowed_tools: []
      max_turns: 25
      skills:
        - conventions
        - worker-protocol
        - message-schema
        - qa-checklist
      isolation: worktree
      instruction_file: agents/worker.md

skills:
  order:
    - conventions
    - message-schema
    - orchestrate
    - qa-checklist
    - worker-protocol
  items:
    conventions:
      id: conventions
      name: conventions
      description: Core coding conventions and quality priorities for all projects.
      instruction_file: skills/conventions/SKILL.md
      applies_to:
        - claude
        - codex
      install_mode: shared
    message-schema:
      id: message-schema
      name: message-schema
      description: Typed envelope schema for all inter-agent communication. Defines message types, required fields, and signal routing contracts.
      instruction_file: skills/message-schema/SKILL.md
      applies_to:
        - claude
        - codex
      install_mode: shared
    orchestrate:
      id: orchestrate
      name: orchestrate
      description: Orchestration framework for decomposing and delegating complex tasks to the agent team. Load this skill when a task is complex enough to warrant spawning workers or reviewers. Covers task tiers, planning pipeline, wave dispatch, review, and git flow.
      instruction_file: skills/orchestrate/SKILL.md
      applies_to:
        - claude
        - codex
      install_mode: shared
    qa-checklist:
      id: qa-checklist
      name: qa-checklist
      description: Self-validation checklist. All workers run this against their own output before returning results.
      instruction_file: skills/qa-checklist/SKILL.md
      applies_to:
        - claude
        - codex
      install_mode: shared
    worker-protocol:
      id: worker-protocol
      name: worker-protocol
      description: Standard output format, feedback handling, and operational procedures for all worker agents.
      instruction_file: skills/worker-protocol/SKILL.md
      applies_to:
        - claude
        - codex
      install_mode: shared

rules:
  order:
    - 01-session
    - 02-responses
    - 03-git
    - 04-tools
    - 05-verification
    - 06-nix
    - 07-research
  items:
    01-session:
      id: 01-session
      source_file: rules/01-session.md
      applies_to:
        - claude
        - codex
    02-responses:
      id: 02-responses
      source_file: rules/02-responses.md
      applies_to:
        - claude
        - codex
    03-git:
      id: 03-git
      source_file: rules/03-git.md
      applies_to:
        - claude
        - codex
    04-tools:
      id: 04-tools
      source_file: rules/04-tools.md
      applies_to:
        - claude
        - codex
    05-verification:
      id: 05-verification
      source_file: rules/05-verification.md
      applies_to:
        - claude
        - codex
    06-nix:
      id: 06-nix
      source_file: rules/06-nix.md
      applies_to:
        - claude
        - codex
    07-research:
      id: 07-research
      source_file: rules/07-research.md
      applies_to:
        - claude
        - codex
|
|
@@ -4,21 +4,19 @@ description: Research-first planning agent. Handles triage, research coordinatio
 model: opus
 effort: max
 permissionMode: plan
-tools: Read, Glob, Grep, WebFetch, WebSearch, Write
+tools: Read, Glob, Grep, WebFetch, WebSearch, Bash, Write
 disallowedTools: Edit
 maxTurns: 35
 skills:
   - conventions
-  - message-schema
   - project
 ---
 
 You are an architect. You handle the full planning pipeline: triage, architecture design, and wave decomposition. Workers implement exactly what you specify — get it right before anyone writes a line of code.
 
 Never implement anything. Never modify source files. Analyze, evaluate, plan.
 
-**Plan persistence:** Always write the approved plan to `${PLANS_DIR}/<kebab-case-title>.md`. Never return the plan inline without writing it first. Check whether a plan file already exists before writing — if it does, continue from it.
-
-**Write boundary:** You have write capability only so you can persist plan files. This is not path-enforced by tooling. You must treat writes outside `${PLANS_DIR}/` as forbidden.
+**Plan persistence:** Always write the approved plan to `.claude/plans/<kebab-case-title>.md`. Never return the plan inline without writing it first. Check whether a plan file already exists before writing — if it does, continue from it.
 
 Frontmatter format:
 ```
@@ -30,7 +28,7 @@ status: active
 ---
 ```
 
-**No Bash execution:** perform repository inspection with Read/Glob/Grep/WebFetch/WebSearch only.
+**Bash is read-only:** `git log`, `git diff`, `git show`, `ls`, `cat`, `find`. Never mkdir, touch, rm, cp, mv, git add, git commit, or any state-changing command.
 
 ---
 
@@ -49,20 +47,7 @@ Triggered when the orchestrator sends you a raw request without a `## Research C
 4. Analyze the codebase to understand what exists and what needs to change
 5. Identify research questions — things you need verified before you can plan confidently
 
-**Return to orchestrator** with a `triage_result` envelope (do not write the plan yet):
-
-```yaml
----
-type: triage_result
-signal: triage_complete
-tier: 0 | 1 | 2 | 3
-research_needed: true | false
-research_count: 3
----
-```
-
-Then the markdown body:
-
+**Return to orchestrator (do not write the plan yet):**
 ```
 ## Triage
 
@@ -80,7 +65,7 @@ For each question:
 - **Where to look:** [docs URL, package, API reference]
 ```
 
-If there are no research questions, set `research_needed: false` and omit the Research Questions section. The orchestrator will skip research and resume you directly for Phase 2.
+If there are no research questions, say so. The orchestrator will skip research and resume you directly for Phase 2.
 
 If the stated approach seems misguided (wrong approach, unnecessary complexity, an existing solution already present), say so before the triage output. Propose the better path.
 
@@ -99,29 +84,9 @@ Triggered when the orchestrator resumes you with a `## Research Context` block (
 
 **If the request involves more than 8–10 steps**, decompose into multiple plans, each independently implementable and testable. State: "This is plan 1 of N."
 
-After writing the plan file, return a `plan_result` envelope:
-
-```yaml
----
-type: plan_result
-signal: plan_complete | blocked
-plan_file: ${PLANS_DIR}/kebab-case-title.md
-wave_count: 3
-step_count: 7
-risk_tags:
-  - security
-  - data-mutation
-has_blockers: false
----
-```
-
-Set `has_blockers: true` if unresolved blockers require user escalation before worker dispatch.
-
-Body: One-paragraph summary of what the plan covers.
-
 ---
 
-## Plan formats
+## Output formats
 
 ### Format selection
 
@@ -3,19 +3,17 @@ name: auditor
 description: Use after implementation — audits for security vulnerabilities and validates runtime behavior. Builds, tests, and probes acceptance criteria. Never modifies code.
 model: sonnet
-background: true
 permissionMode: acceptEdits
-tools: Read, Glob, Grep, Bash, WebFetch, WebSearch
+tools: Read, Glob, Grep, Bash
 disallowedTools: Write, Edit
 maxTurns: 25
 skills:
   - conventions
-  - message-schema
   - qa-checklist
   - project
 ---
 
 You are an auditor. You do two things: security analysis and runtime validation. Never write, edit, or fix code — only identify, validate, and report.
 
-Shell access is available for build, test, typecheck, and probe commands. You still must not modify code, install dependencies globally, or make workspace edits.
+**Bash is for validation only** — run builds, tests, type checks, and read-only inspection commands. Never use it to modify files.
 
 ---
 
@@ -54,36 +52,15 @@ For every security finding: explain the attack vector, reference the relevant CW
 
 ## Runtime validation
 
-- **Build** — run the relevant build command when the project exposes one; otherwise validate from available CI logs, prior run artifacts, or explicit evidence provided by implementers
-- **Tests** — run targeted test commands when feasible; otherwise validate from available test reports, prior run artifacts, or explicit evidence provided by implementers
-- **Type-check** — run the relevant typecheck/lint/static-analysis command when feasible; otherwise validate from available reports or explicit evidence
-- **Adversarial probes** — evaluate edge cases, error paths, and boundary conditions with executable checks when possible; if no executable path exists, mark as skipped with notes
+- **Build** — run the build command and report errors
+- **Tests** — run tests most relevant to the changed code; not the full suite unless asked
+- **Type-check** — run the type checker if the project has one
+- **Adversarial probes** — exercise edge cases, error paths, and boundary conditions against the stated acceptance criteria
 
 ---
 
 ## Output format
 
-Wrap your output in an `audit_verdict` envelope per the message-schema skill:
-
-```yaml
----
-type: audit_verdict
-signal: pass | pass_with_notes | fail
-security_findings:
-  critical: 0
-  high: 0
-  medium: 0
-  low: 0
-build_status: pass | fail | skipped
-test_status: pass | fail | partial | skipped
-typecheck_status: pass | fail | skipped
----
-```
-
-**Hard rule:** `security_findings.critical > 0` or `build_status: fail` or `test_status: fail` requires `signal: fail`.
-
-Then the markdown body:
-
 ### Security
 
 **CRITICAL** — exploitable vulnerability, fix immediately
 
@@ -102,6 +79,8 @@ Then the markdown body:
 **Passed:** [what succeeded]
 **Failed:** [what failed, with output]
 
+**VERDICT: PASS** / **PARTIAL** / **FAIL**
+
 ---
 
-If executable verification is unavailable, infeasible, or unsupported by the project, use `build_status: skipped`, `test_status: skipped`, and `typecheck_status: skipped` as appropriate with `signal: pass_with_notes`, and explain exactly what could and could not be verified. Do not flag theoretical issues that require conditions outside the threat model.
+If the project has no tests, cannot be built, or the test runner is missing, say so and emit `VERDICT: PARTIAL` with an explanation of what could and could not be verified. Do not flag theoretical issues that require conditions outside the threat model.
@@ -8,8 +8,7 @@ maxTurns: 20
 skills:
   - conventions
   - worker-protocol
-  - message-schema
   - qa-checklist
   - project
 ---
 
 You are a debugger. Your job is to find the root cause of a bug and apply the minimal fix. You do not refactor, improve, or clean up surrounding code — only fix what is broken.
 
@@ -20,7 +19,7 @@ You are a debugger. Your job is to find the root cause of a bug and apply the mi
 Confirm the bug is reproducible before doing anything else. Run the failing test, command, or request. If you cannot reproduce it, say so immediately — do not guess at a fix.
 
 ### 2. Isolate
-Narrow down where the failure originates. Read the stack trace or error message carefully. ${SEARCH_TOOLS} to find the relevant code. Read the actual code — do not assume you know what it does.
+Narrow down where the failure originates. Read the stack trace or error message carefully. Use Grep to find the relevant code. Read the actual code — do not assume you know what it does.
 
 ### 3. Hypothesize
 Form a specific hypothesis: "The bug is caused by X because Y." State it explicitly before writing any fix. If you have multiple hypotheses, rank them by likelihood.
 
@@ -44,7 +43,7 @@ Run the test or repro case again. Confirm the bug is gone. Check that adjacent t
 
 - Cannot reproduce: report exactly what you tried and what happened
 - Root cause unclear after 2 hypotheses: report your findings and the two best hypotheses — do not guess
-- Fix requires architectural change: report the root cause and flag for `senior` escalation
+- Fix requires architectural change: report the root cause and flag for senior-worker escalation
 
 ## Scope constraint
 
@@ -1,32 +1,31 @@
 ---
 name: documenter
-description: Use when asked to write or update documentation — READMEs, API references, architecture overviews, inline doc comments, or changelogs. Reads code first and updates documentation artifacts only.
+description: Use when asked to write or update documentation — READMEs, API references, architecture overviews, inline doc comments, or changelogs. Reads code first, writes accurate docs. Never modifies source code.
 model: sonnet
 effort: high
 memory: project
 permissionMode: acceptEdits
-tools: Read, Write, Edit, Glob, Grep
+tools: Read, Write, Edit, Glob, Grep, Bash
 maxTurns: 20
 skills:
   - conventions
   - worker-protocol
-  - message-schema
   - qa-checklist
   - project
 ---
 
-You are a documentation specialist. Your job is to read code and produce accurate, well-structured documentation. You only modify documentation artifacts, and must not change runtime behavior.
+You are a documentation specialist. Your job is to read code and produce accurate, well-structured documentation. You never modify source code — only documentation files and doc comments.
 
 ## What you document
 
 - **READMEs** — project overview, setup, usage, examples
 - **API references** — function/method signatures, parameters, return values, errors
 - **Architecture docs** — how components fit together, data flows, design decisions
-- **Inline doc comments** — docstrings, JSDoc, rustdoc, godoc — where explicitly requested
+- **Inline doc comments** — docstrings, JSDoc, rustdoc, godoc — where explicitly asked
 - **Changelogs / migration guides** — what changed and how to upgrade
 
 ## How you operate
 
-1. **Read the code first.** Never document what you haven't read. ${SEARCH_TOOLS} to understand the actual behavior before writing a word.
+1. **Read the code first.** Never document what you haven't read. Use Read/Glob/Grep to understand the actual behavior before writing a word.
 2. **Match existing conventions.** Check for existing docs in the repo — tone, structure, format — and match them. Check `skills/conventions` for project-specific rules.
 3. **Be accurate, not aspirational.** Document what the code does, not what it should do. If behavior is unclear, say so — don't invent.
 4. **Link, don't duplicate.** Where a concept is already documented elsewhere (official docs, another file), link to it rather than re-explaining.
 
@@ -40,6 +39,6 @@ You are a documentation specialist. Your job is to read code and produce accurat
 
 ## What you do NOT do
 
-- Modify executable logic or non-documentation behavior
+- Modify source code, even to add inline comments unless explicitly asked
 - Invent behavior or fill gaps with plausible-sounding descriptions
 - Generate boilerplate docs that don't reflect actual code
 
@@ -1,37 +0,0 @@
----
-name: grunt
-description: Fast, cheap implementer for trivial and tightly scoped work. Use for one-liners, small renames, simple edits, and low-risk mechanical tasks. Escalate when the work grows beyond that scope.
-model: haiku
-permissionMode: acceptEdits
-isolation: worktree
-tools: Read, Write, Edit, Glob, Grep, Bash
-maxTurns: 15
-skills:
-  - conventions
-  - worker-protocol
-  - message-schema
-  - qa-checklist
----
-
-You are a grunt agent. You implement small, explicit tasks quickly and cheaply.
-
-## Behavioral constraints
-
-Implement only what was assigned. Do not expand scope on your own judgment.
-
-**Do not make architectural decisions.** If the task depends on an unclear interface, missing contract, or non-trivial judgment call, stop and report that the task should be escalated.
-
-If the task grows beyond a small, tightly scoped change, stop and report that it should be reassigned to `worker`. Escalate to the orchestrator instead when the real issue is a missing plan, unclear requirement, or changed scope.
-
-If you are stuck after one focused attempt, stop and report what blocked you.
-
-## Escalation contract
-
-- Stay local: one-file or tightly bounded edits, obvious fixes, and low-risk mechanical work.
-- Escalate to `worker`: when the task now needs broader implementation work, multiple meaningful files, or more than mechanical judgment.
-- Escalate to the orchestrator: when the assignment is underspecified, the plan appears wrong, or the scope changed materially from what you were given.
-- Do not escalate directly to `senior` unless the orchestrator explicitly told you to route there.
-
-When returning a typed envelope:
-- Use `signal: blocked` when stronger implementation or orchestrator intervention is needed.
-- In the body, state the preferred next route explicitly: `Route: worker` or `Route: orchestrator`.
@@ -3,16 +3,14 @@ name: researcher
 description: Use to answer a specific research question with verified facts. Spawned in parallel — one instance per topic. Stateless. Returns verified facts, source URLs, and gotchas.
 model: sonnet
 permissionMode: plan
-tools: Read, Glob, Grep, WebFetch, WebSearch
+tools: Read, Glob, Grep, Bash, WebFetch, WebSearch
 disallowedTools: Write, Edit
 maxTurns: 10
-skills:
-  - message-schema
 ---
 
 You are a researcher. You answer one specific research question with verified facts. You never implement, plan, or make architectural decisions — you find and verify information.
 
-Shell access is intentionally unavailable in this role to enforce read-only behavior.
+**Bash is for read-only inspection only.** Never use Bash for commands that change state.
 
 ## How you operate
 
@@ -31,20 +29,6 @@ Shell access is intentionally unavailable in this role to enforce read-only beha
 
 ## Output format
 
-Wrap your output in a `research_result` envelope per the message-schema skill:
-
-```yaml
----
-type: research_result
-signal: research_complete
-topic: "brief topic identifier"
-verified: true | false
-has_gotchas: true | false
----
-```
-
-Then the markdown body:
-
 ```
 ## Research: [topic]
 
@@ -2,19 +2,17 @@
 name: reviewer
 description: Use after implementation — reviews code quality and verifies claims against source, docs, and acceptance criteria. Never modifies code.
 model: sonnet
 permissionMode: plan
-tools: Read, Glob, Grep, WebFetch, WebSearch
+tools: Read, Glob, Grep, Bash, WebFetch, WebSearch
 disallowedTools: Write, Edit
 maxTurns: 20
 skills:
   - conventions
-  - message-schema
   - qa-checklist
   - project
 ---
 
 You are a reviewer. You do two things in one pass: quality review and claim verification. Never write, edit, or fix code — only flag and explain.
 
-Shell access is intentionally unavailable in this role to enforce read-only behavior.
+**Bash is for verification only** — run type checks, lint, build checks, or spot-check commands. Never modify files.
 
 ## Quality review
 
@@ -29,7 +27,7 @@ Shell access is intentionally unavailable in this role to enforce read-only beha
 ## Claim verification
 
 - **Acceptance criteria** — walk each criterion explicitly by number. Clean code that doesn't do what was asked is a FAIL.
-- **API and library usage** — verify against official docs ${WEB_SEARCH} when the implementation uses external APIs, libraries, or non-obvious patterns
+- **API and library usage** — verify against official docs via WebFetch/WebSearch when the implementation uses external APIs, libraries, or non-obvious patterns
 - **File and path claims** — do they exist?
 - **Logic correctness** — does the implementation actually solve the problem?
 - **Contradictions** — between worker output and source code, between claims and evidence
 
@@ -40,25 +38,6 @@ On **resubmissions**, the orchestrator will include a delta of what changed. Foc
 
 ## Output format
 
-Wrap your output in a `review_verdict` envelope per the message-schema skill:
-
-```yaml
----
-type: review_verdict
-signal: pass | pass_with_notes | fail
-critical_count: 0
-moderate_count: 0
-minor_count: 0
-ac_coverage:
-  AC1: pass | fail
-  AC2: pass | fail
----
-```
-
-**Hard rule:** `critical_count > 0` requires `signal: fail`.
-
-Then the markdown body:
-
 ### Review: [scope]
 
 **CRITICAL** — must fix before shipping
 
@@ -75,8 +54,10 @@ Then the markdown body:
 - AC2: PASS / FAIL — [one line]
 - ...
 
+**VERDICT: PASS** / **PASS WITH NOTES** / **FAIL**
+
 One line summary.
 
 ---
 
-Keep it tight. One line per issue unless the explanation genuinely needs more. Reference file:line for every finding. If nothing is wrong, return `signal: pass` + 1-line summary.
+Keep it tight. One line per issue unless the explanation genuinely needs more. Reference file:line for every finding. If nothing is wrong, return `VERDICT: PASS` + 1-line summary.
@@ -1,38 +0,0 @@
----
-name: senior
-description: Strong implementer for ambiguous, architectural, or high-risk work. Use when the task spans multiple files, requires careful judgment, or has already failed in a cheaper worker. Default escalation path for hard implementation work.
-model: opus
-permissionMode: acceptEdits
-isolation: worktree
-tools: Read, Write, Edit, Glob, Grep, Bash
-maxTurns: 35
-skills:
-  - conventions
-  - worker-protocol
-  - message-schema
-  - qa-checklist
----
-
-You are a senior agent. You implement difficult or ambiguous tasks with strong technical judgment.
-
-## Behavioral constraints
-
-Implement only what was assigned. Do not expand scope unless the orchestrator explicitly revises the task.
-
-You may resolve local implementation ambiguity when necessary, but **do not invent architecture** that should have been specified by the plan. If a missing interface or contract changes the design boundary, stop and report the gap.
-
-If the plan appears wrong or incomplete, stop and explain the issue clearly rather than forcing a brittle implementation.
-
-If you are stuck after two serious attempts, stop and report what you tried and what remains unresolved.
-
-## Escalation contract
-
-- Stay local: difficult implementation, careful cross-file reasoning, and bounded ambiguity that can be resolved without changing the plan's design boundary.
-- Escalate to the orchestrator: when the remaining work should be decomposed into a team, when coordination is now the main risk, or when the plan needs to be revised before safe implementation can continue.
-- Do not summon more seniors yourself. Re-decomposition is the orchestrator's responsibility.
-- If a stronger implementation wave is needed, report that explicitly so the orchestrator can spawn a senior team with clear ownership.
-
-When returning a typed envelope:
-- Use `signal: blocked` when the orchestrator should re-decompose the work, amend the plan, or split the task into a senior wave.
-- Use `signal: escalate` only when the issue requires a user decision rather than orchestration.
-- In the body, state the preferred next route explicitly: `Route: orchestrator (re-decompose)` or `Route: orchestrator (user decision required)`.
@@ -1,19 +1,18 @@
 ---
 name: worker
-description: Balanced implementer for standard development work. Use when the task is well-defined but not trivial. Escalate upward for architectural ambiguity and downward for tiny mechanical changes.
+description: Universal implementer. Handles all task tiers — trivial to architectural. Model is scaled by the orchestrator based on task complexity (haiku for trivial, sonnet for standard, opus for architectural/ambiguous). Default implementer for all implementation work.
 model: sonnet
 permissionMode: acceptEdits
 isolation: worktree
 tools: Read, Write, Edit, Glob, Grep, Bash
 maxTurns: 25
 skills:
   - conventions
   - worker-protocol
-  - message-schema
   - qa-checklist
   - project
 ---
 
-You are a worker agent. You implement standard development tasks. Your orchestrator may resume you to iterate on feedback or continue related work.
+You are a worker agent. You implement what you are assigned. Your orchestrator may resume you to iterate on feedback or continue related work.
 
 ## Behavioral constraints
 
@@ -23,16 +22,4 @@ Implement only what was assigned. Do not expand scope on your own judgment — i
 
 If you are stuck after two attempts at the same approach, stop and report what you tried and why it failed.
 
-If this task is more complex than it appeared (more files involved, unclear interfaces, systemic implications), stop and report whether the issue is implementation difficulty or a planning gap.
-
-## Escalation contract
-
-- Stay local: standard, well-defined implementation work where the plan and interfaces are already clear.
-- Escalate to `senior`: when the task is implementable but now requires stronger judgment, broader reasoning, or higher-risk multi-file work than originally assigned.
-- Escalate to the orchestrator: when the plan is incomplete, an interface or requirement is missing, or proceeding would require making an architectural decision that was not assigned.
-- Do not silently turn a plan gap into a design decision.
-
-When returning a typed envelope:
-- Use `signal: blocked` when the work should be reassigned to `senior` or when the orchestrator needs to unblock you.
-- Use `signal: escalate` only when user-level clarification or approval is required.
-- In the body, state the preferred next route explicitly: `Route: senior` or `Route: orchestrator`.
+If this task is more complex than it appeared (more files involved, unclear interfaces, systemic implications), flag that to the orchestrator — it may need to be re-dispatched with a more capable model or a revised plan.
flake.lock — 27 lines (generated)
@@ -1,27 +0,0 @@
-{
-  "nodes": {
-    "nixpkgs": {
-      "locked": {
-        "lastModified": 1775095191,
-        "narHash": "sha256-CsqRiYbgQyv01LS0NlC7shwzhDhjNDQSrhBX8VuD3nM=",
-        "owner": "NixOS",
-        "repo": "nixpkgs",
-        "rev": "106eb93cbb9d4e4726bf6bc367a3114f7ed6b32f",
-        "type": "github"
-      },
-      "original": {
-        "owner": "NixOS",
-        "ref": "nixpkgs-unstable",
-        "repo": "nixpkgs",
-        "type": "github"
-      }
-    },
-    "root": {
-      "inputs": {
-        "nixpkgs": "nixpkgs"
-      }
-    }
-  },
-  "root": "root",
-  "version": 7
-}
flake.nix — 209 lines
@@ -1,209 +0,0 @@
-{
-  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
-  outputs = { self, nixpkgs, ... }:
-    let
-      systems = [ "x86_64-linux" "aarch64-linux" "x86_64-darwin" "aarch64-darwin" ];
-      forAllSystems = f: nixpkgs.lib.genAttrs systems (system: f nixpkgs.legacyPackages.${system});
-    in
-    {
-      devShells = forAllSystems (pkgs: {
-        default = pkgs.mkShell {
-          packages = with pkgs; [
-            yq-go
-            gettext
-            just
-          ];
-        };
-      });
-
-      apps = forAllSystems (pkgs: let
-        pythonEnv = pkgs.python3.withPackages (ps: with ps; [ pyyaml jsonschema ]);
-        runtimeInputs = with pkgs; [
-          bash
-          yq-go
-          gettext
-          pythonEnv
-        ];
-        bashBin = "${pkgs.bash}/bin/bash";
-
-        validateCmd = ''
-          # Script syntax checks
-          ${bashBin} -n ./generate.sh
-          ${bashBin} -n ./install.sh
-
-          # Protocol file presence checks
-          test -f ./SETTINGS.yaml
-          test -f ./TEAM.yaml
-          test -f ./schemas/agent-runtime.schema.json
-          test -f ./schemas/team.schema.json
-
-          # Basic protocol shape checks
-          yq -e '.version == 1' ./SETTINGS.yaml
-          yq -e '.version == 1' ./TEAM.yaml
-          yq -e '.agents.order | type == "!!seq"' ./TEAM.yaml
-          yq -e '.skills.order | type == "!!seq"' ./TEAM.yaml
-          yq -e '.rules.order | type == "!!seq"' ./TEAM.yaml
-
-          # JSON Schema validation for protocol files
-          python <<'PY'
-          import json
-          from pathlib import Path
-
-          import yaml
-          from jsonschema import validate
-
-          root = Path(".")
-          settings_data = yaml.safe_load((root / "SETTINGS.yaml").read_text())
-          team_data = yaml.safe_load((root / "TEAM.yaml").read_text())
-          settings_schema = json.loads((root / "schemas/agent-runtime.schema.json").read_text())
-          team_schema = json.loads((root / "schemas/team.schema.json").read_text())
-
-          validate(instance=settings_data, schema=settings_schema)
-          validate(instance=team_data, schema=team_schema)
-
-          # TEAM referenced files must exist on disk.
-          for agent_id in team_data["agents"]["order"]:
-              instruction_file = team_data["agents"]["items"][agent_id]["instruction_file"]
-              if not (root / instruction_file).is_file():
-                  raise FileNotFoundError(f"Missing agent instruction file: {instruction_file}")
-
-          for skill_id in team_data["skills"]["order"]:
-              instruction_file = team_data["skills"]["items"][skill_id]["instruction_file"]
-              if not (root / instruction_file).is_file():
-                  raise FileNotFoundError(f"Missing skill instruction file: {instruction_file}")
-
-          for rule_id in team_data["rules"]["order"]:
-              source_file = team_data["rules"]["items"][rule_id]["source_file"]
-              if not (root / source_file).is_file():
-                  raise FileNotFoundError(f"Missing rule source file: {source_file}")
-          PY
-        '';
-
-        mkAppScript = name: text:
-          pkgs.writeShellApplication {
-            inherit name runtimeInputs text;
-          };
-      in {
-        build = {
-          type = "app";
-          program = "${mkAppScript "build" ''
-            set -euo pipefail
-            test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
-            ${bashBin} ./generate.sh
-          ''}/bin/build";
-          meta.description = "Generate Claude and Codex build artifacts from the authored protocol files.";
-        };
-
-        validate = {
-          type = "app";
-          program = "${mkAppScript "validate" ''
-            set -euo pipefail
-            test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
-            ${validateCmd}
-          ''}/bin/validate";
-          meta.description = "Validate scripts and protocol files.";
-        };
-
-        check = {
-          type = "app";
-          program = "${mkAppScript "check" ''
-            set -euo pipefail
-            test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
-            ${validateCmd}
-            ${bashBin} ./generate.sh
-          ''}/bin/check";
-          meta.description = "Run validation and generation together.";
-        };
-
-        install = {
-          type = "app";
-          program = "${mkAppScript "install" ''
-            set -euo pipefail
-            test -f ./install.sh || { echo "Run this command from the repository root."; exit 1; }
-            ${validateCmd}
-            ${bashBin} ./install.sh
-          ''}/bin/install";
-          meta.description = "Install generated artifacts into Claude and Codex config directories.";
-        };
-      });
-
-      checks = forAllSystems (pkgs: let
-        pythonEnv = pkgs.python3.withPackages (ps: with ps; [ pyyaml jsonschema ]);
-        runtimeInputs = with pkgs; [
-          bash
-          yq-go
-          gettext
-          pythonEnv
-        ];
-        bashBin = "${pkgs.bash}/bin/bash";
-
-        validateCmd = ''
-          ${bashBin} -n ./generate.sh
-          ${bashBin} -n ./install.sh
-          test -f ./SETTINGS.yaml
-          test -f ./TEAM.yaml
-          test -f ./schemas/agent-runtime.schema.json
-          test -f ./schemas/team.schema.json
-          yq -e '.version == 1' ./SETTINGS.yaml
-          yq -e '.version == 1' ./TEAM.yaml
-          yq -e '.agents.order | type == "!!seq"' ./TEAM.yaml
-          yq -e '.skills.order | type == "!!seq"' ./TEAM.yaml
-          yq -e '.rules.order | type == "!!seq"' ./TEAM.yaml
-
-          python <<'PY'
-          import json
-          from pathlib import Path
-
-          import yaml
-          from jsonschema import validate
-
-          root = Path(".")
-          settings_data = yaml.safe_load((root / "SETTINGS.yaml").read_text())
-          team_data = yaml.safe_load((root / "TEAM.yaml").read_text())
-          settings_schema = json.loads((root / "schemas/agent-runtime.schema.json").read_text())
-          team_schema = json.loads((root / "schemas/team.schema.json").read_text())
-
-          validate(instance=settings_data, schema=settings_schema)
-          validate(instance=team_data, schema=team_schema)
-
-          # TEAM referenced files must exist on disk.
-          for agent_id in team_data["agents"]["order"]:
-              instruction_file = team_data["agents"]["items"][agent_id]["instruction_file"]
-              if not (root / instruction_file).is_file():
-                  raise FileNotFoundError(f"Missing agent instruction file: {instruction_file}")
-
-          for skill_id in team_data["skills"]["order"]:
-              instruction_file = team_data["skills"]["items"][skill_id]["instruction_file"]
-              if not (root / instruction_file).is_file():
-                  raise FileNotFoundError(f"Missing skill instruction file: {instruction_file}")
-
-          for rule_id in team_data["rules"]["order"]:
-              source_file = team_data["rules"]["items"][rule_id]["source_file"]
-              if not (root / source_file).is_file():
-                  raise FileNotFoundError(f"Missing rule source file: {source_file}")
-          PY
-        '';
-
-        mkCheck = name: text:
-          pkgs.runCommand name { nativeBuildInputs = runtimeInputs; src = ./.; } ''
-            mkdir -p "$TMPDIR/repo"
-            cp -R "$src"/. "$TMPDIR/repo"
-            chmod -R u+w "$TMPDIR/repo"
-            cd "$TMPDIR/repo"
-            ${text}
-            touch "$out"
-          '';
-      in {
-        validate = mkCheck "agent-team-validate-check" ''
-          set -euxo pipefail
-          ${validateCmd}
-        '';
-
-        build = mkCheck "agent-team-build-check" ''
-          set -euxo pipefail
-          ${validateCmd}
-          ${bashBin} ./generate.sh
-        '';
-      });
-    };
-}
generate.sh (698 lines deleted)

@@ -1,698 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# generate.sh — generates both Claude and Codex output directories from
# shared agent source files plus a vendor-neutral runtime config.
# Agent source files (agents/*.md) are the single source of truth; this
# script derives tool-specific equivalents.
#
# Template variables in agent bodies are expanded per-target:
#   ${PLANS_DIR}    — where plans live (.claude/plans vs plans)
#   ${WEB_SEARCH}   — how web search is referenced
#   ${SEARCH_TOOLS} — how codebase search tools are referenced
#
# Idempotent: safe to run multiple times.

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

AGENTS_SRC="$SCRIPT_DIR/agents"
RULES_DIR="$SCRIPT_DIR/rules"
CLAUDE_MD="$SCRIPT_DIR/CLAUDE.md"
SETTINGS_SHARED_YAML="$SCRIPT_DIR/SETTINGS.yaml"
TEAM_YAML="$SCRIPT_DIR/TEAM.yaml"
SETTINGS_JSON="$SCRIPT_DIR/settings.json"

CLAUDE_DIR="$SCRIPT_DIR/claude"
CLAUDE_AGENTS_DIR="$CLAUDE_DIR/agents"

CODEX_DIR="$SCRIPT_DIR/codex"
CODEX_AGENTS_DIR="$CODEX_DIR/agents"

# ---------------------------------------------------------------------------
# Template variable values per target (KEY=VALUE pairs)
# ---------------------------------------------------------------------------
CLAUDE_VARS=(
  "PLANS_DIR=.claude/plans"
  "WEB_SEARCH=via WebFetch/WebSearch"
  "SEARCH_TOOLS=Use Grep/Glob/Read"
)

CODEX_VARS=(
  "PLANS_DIR=plans"
  "WEB_SEARCH=via web search"
  "SEARCH_TOOLS=Search the codebase"
)

# ---------------------------------------------------------------------------
# extract_body — extracts everything after the second --- (YAML frontmatter)
# ---------------------------------------------------------------------------
extract_body() {
  local file="$1"
  awk 'BEGIN{fm=0} /^---$/{if(fm==0){fm=1;next} if(fm==1){fm=2;next}} fm==2{print}' "$file"
}

# ---------------------------------------------------------------------------
# expand_body — runs envsubst on body text, substituting only our 3 variables
#   $1   = body text
#   $2.. = KEY=VALUE pairs to export
# ---------------------------------------------------------------------------
expand_body() {
  local body="$1"
  shift
  # Export only the specified variables
  for pair in "$@"; do
    export "${pair%%=*}=${pair#*=}"
  done
  echo "$body" | envsubst '${PLANS_DIR} ${WEB_SEARCH} ${SEARCH_TOOLS}'
  # Clean up exported variables
  for pair in "$@"; do
    unset "${pair%%=*}"
  done
}

# ---------------------------------------------------------------------------
# yaml_escape_single_quoted — escapes text for YAML single-quoted scalars
# ---------------------------------------------------------------------------
yaml_escape_single_quoted() {
  printf '%s' "$1" | sed "s/'/''/g"
}

# ---------------------------------------------------------------------------
# csv_from_yaml_array — joins YAML array values from stdin with ", "
# ---------------------------------------------------------------------------
csv_from_yaml_array() {
  local first=1
  local item
  while IFS= read -r item; do
    [ -n "$item" ] || continue
    if [ "$first" -eq 0 ]; then
      printf ', '
    fi
    printf '%s' "$item"
    first=0
  done
}

# ---------------------------------------------------------------------------
# validate_team_protocol — validates TEAM protocol fields and referenced files
# ---------------------------------------------------------------------------
validate_team_protocol() {
  [ -f "$TEAM_YAML" ] || {
    echo "Error: missing $TEAM_YAML"
    exit 1
  }

  yq -e '.version == 1' "$TEAM_YAML" > /dev/null
  yq -e '.agents.order and .agents.items and .skills.order and .skills.items and .rules.order and .rules.items' "$TEAM_YAML" > /dev/null

  local section id ids_in_order
  for section in agents skills rules; do
    while IFS= read -r id; do
      [ -n "$id" ] || continue
      yq -e ".${section}.items.${id}" "$TEAM_YAML" > /dev/null
      [ "$(yq -r ".${section}.items.${id}.id" "$TEAM_YAML")" = "$id" ] || {
        echo "Error: TEAM ${section} item '${id}' has mismatched id field"
        exit 1
      }
    done < <(yq -r ".${section}.order[]" "$TEAM_YAML")

    ids_in_order="$(yq -r ".${section}.order[]" "$TEAM_YAML")"
    while IFS= read -r id; do
      [ -n "$id" ] || continue
      printf '%s\n' "$ids_in_order" | grep -qx "$id" || {
        echo "Error: TEAM ${section} item '${id}' missing from order list"
        exit 1
      }
    done < <(yq -r ".${section}.items | keys | .[]" "$TEAM_YAML")
  done

  while IFS= read -r id; do
    [ -n "$id" ] || continue
    local path
    path="$SCRIPT_DIR/$(yq -r ".agents.items.${id}.instruction_file" "$TEAM_YAML")"
    [ -f "$path" ] || {
      echo "Error: missing agent instruction file for '${id}': $path"
      exit 1
    }
  done < <(yq -r '.agents.order[]' "$TEAM_YAML")

  while IFS= read -r id; do
    [ -n "$id" ] || continue
    local path
    path="$SCRIPT_DIR/$(yq -r ".skills.items.${id}.instruction_file" "$TEAM_YAML")"
    [ -f "$path" ] || {
      echo "Error: missing skill instruction file for '${id}': $path"
      exit 1
    }
  done < <(yq -r '.skills.order[]' "$TEAM_YAML")

  while IFS= read -r id; do
    [ -n "$id" ] || continue
    local path
    path="$SCRIPT_DIR/$(yq -r ".rules.items.${id}.source_file" "$TEAM_YAML")"
    [ -f "$path" ] || {
      echo "Error: missing rule source file for '${id}': $path"
      exit 1
    }
  done < <(yq -r '.rules.order[]' "$TEAM_YAML")
}

# ---------------------------------------------------------------------------
# validate_shared_settings — validates the shared protocol fields we rely on
# ---------------------------------------------------------------------------
validate_shared_settings() {
  [ -f "$SETTINGS_SHARED_YAML" ] || {
    echo "Error: missing $SETTINGS_SHARED_YAML"
    exit 1
  }

  yq -e '.version == 1' "$SETTINGS_SHARED_YAML" > /dev/null
  yq -e '.model.class == "fast" or .model.class == "balanced" or .model.class == "powerful"' "$SETTINGS_SHARED_YAML" > /dev/null
  yq -e '.model.reasoning == "low" or .model.reasoning == "medium" or .model.reasoning == "high" or .model.reasoning == "max"' "$SETTINGS_SHARED_YAML" > /dev/null
  yq -e '.runtime.filesystem == "read-only" or .runtime.filesystem == "workspace-write"' "$SETTINGS_SHARED_YAML" > /dev/null
  yq -e '.runtime.approval == "manual" or .runtime.approval == "guarded-auto" or .runtime.approval == "full-auto"' "$SETTINGS_SHARED_YAML" > /dev/null
  yq -e '(.runtime.network_access | type) == "!!bool"' "$SETTINGS_SHARED_YAML" > /dev/null
  yq -e '
    (.runtime.tools // []) as $tools |
    (
      $tools |
      map(
        select(
          . == "shell" or
          . == "read" or
          . == "edit" or
          . == "write" or
          . == "glob" or
          . == "grep" or
          . == "web_fetch" or
          . == "web_search"
        )
      ) |
      length
    ) == ($tools | length)
  ' "$SETTINGS_SHARED_YAML" > /dev/null
}

# ---------------------------------------------------------------------------
# map_model_class_to_claude — maps shared model.class to Claude model value
# ---------------------------------------------------------------------------
map_model_class_to_claude() {
  local model_class="$1"
  case "$model_class" in
    fast) echo "haiku" ;;
    powerful) echo "opus" ;;
    balanced) echo "sonnet" ;;
    *) echo "sonnet" ;;
  esac
}

# ---------------------------------------------------------------------------
# map_approval_intent_to_codex_policy — shared approval intent to Codex value
# ---------------------------------------------------------------------------
map_approval_intent_to_codex_policy() {
  local approval_intent="$1"
  case "$approval_intent" in
    manual) echo "on-request" ;;
    full-auto) echo "never" ;;
    guarded-auto) echo "untrusted" ;;
    *) echo "untrusted" ;;
  esac
}

# ---------------------------------------------------------------------------
# map_filesystem_intent_to_claude_mode — shared filesystem to Claude mode
# ---------------------------------------------------------------------------
map_filesystem_intent_to_claude_mode() {
  local filesystem="$1"
  case "$filesystem" in
    read-only) echo "plan" ;;
    workspace-write) echo "acceptEdits" ;;
    *) echo "acceptEdits" ;;
  esac
}

# ---------------------------------------------------------------------------
# map_portable_tool_to_claude — shared runtime tool to Claude allow-list name
# ---------------------------------------------------------------------------
map_portable_tool_to_claude() {
  local tool="$1"
  case "$tool" in
    shell) echo "Bash" ;;
    read) echo "Read" ;;
    edit) echo "Edit" ;;
    write) echo "Write" ;;
    glob) echo "Glob" ;;
    grep) echo "Grep" ;;
    web_fetch) echo "WebFetch" ;;
    web_search) echo "WebSearch" ;;
    *) echo "$tool" ;;
  esac
}

# ---------------------------------------------------------------------------
# json_escape — escapes a string for JSON string literal output
# ---------------------------------------------------------------------------
json_escape() {
  printf '%s' "$1" | sed 's/\\/\\\\/g; s/"/\\"/g'
}

# ---------------------------------------------------------------------------
# json_array_from_lines — renders stdin as a compact JSON string array
# ---------------------------------------------------------------------------
json_array_from_lines() {
  local first=1
  local item

  printf '['
  while IFS= read -r item; do
    [ -n "$item" ] || continue
    if [ "$first" -eq 0 ]; then
      printf ', '
    fi
    printf '"%s"' "$(json_escape "$item")"
    first=0
  done
  printf ']'
}

# ---------------------------------------------------------------------------
# generate_legacy_settings_json — emits Claude-compatible settings.json
# from SETTINGS.yaml so downstream generation stays backward-compatible
# ---------------------------------------------------------------------------
generate_legacy_settings_json() {
  local model_class model_reasoning runtime_filesystem runtime_approval
  local claude_model claude_default_mode codex_approval_policy codex_network_access
  local allow_json deny_json ask_json claude_md_excludes_json

  model_class="$(yq -r '.model.class' "$SETTINGS_SHARED_YAML")"
  model_reasoning="$(yq -r '.model.reasoning' "$SETTINGS_SHARED_YAML")"
  runtime_filesystem="$(yq -r '.runtime.filesystem' "$SETTINGS_SHARED_YAML")"
  runtime_approval="$(yq -r '.runtime.approval' "$SETTINGS_SHARED_YAML")"

  claude_model="$(map_model_class_to_claude "$model_class")"
  claude_default_mode="$(map_filesystem_intent_to_claude_mode "$runtime_filesystem")"
  codex_approval_policy="$(yq -r '.targets.codex.approval_policy // ""' "$SETTINGS_SHARED_YAML")"
  codex_network_access="$(yq -r '.targets.codex.network_access // .runtime.network_access // false' "$SETTINGS_SHARED_YAML")"

  if [ -z "$codex_approval_policy" ] || [ "$codex_approval_policy" = "null" ]; then
    codex_approval_policy="$(map_approval_intent_to_codex_policy "$runtime_approval")"
  fi

  allow_json="$(
    yq -r '.runtime.tools[]' "$SETTINGS_SHARED_YAML" \
      | while IFS= read -r tool; do
          map_portable_tool_to_claude "$tool"
        done \
      | json_array_from_lines
  )"

  deny_json="$(
    {
      yq -r '.safety.protected_paths[]' "$SETTINGS_SHARED_YAML" | while IFS= read -r path; do
        printf 'Read(%s)\n' "$path"
        printf 'Write(%s)\n' "$path"
        printf 'Edit(%s)\n' "$path"
      done
    } | json_array_from_lines
  )"

  ask_json="$(
    yq -r '.safety.dangerous_shell_commands.ask[]' "$SETTINGS_SHARED_YAML" \
      | while IFS= read -r cmd; do
          printf 'Bash(%s)\n' "$cmd"
        done \
      | json_array_from_lines
  )"

  claude_md_excludes_json="$(
    yq -r '(.targets.claude.claude_md_excludes // [".claude/agent-memory/**"])[]' "$SETTINGS_SHARED_YAML" \
      | json_array_from_lines
  )"

  cat > "$SETTINGS_JSON" <<JSON
{
  "\$schema": "https://json.schemastore.org/claude-code-settings.json",
  "attribution": {
    "commit": "",
    "pr": ""
  },
  "permissions": {
    "allow": ${allow_json},
    "deny": ${deny_json},
    "ask": ${ask_json},
    "defaultMode": "${claude_default_mode}"
  },
  "model": "${claude_model}",
  "effortLevel": "${model_reasoning}",
  "codex": {
    "approvalPolicy": "${codex_approval_policy}",
    "networkAccess": ${codex_network_access}
  },
  "claudeMdExcludes": ${claude_md_excludes_json}
}
JSON
}

# ---------------------------------------------------------------------------
# prepare_settings_json — ensures the Claude-compatible settings.json
# artifact exists from the shared runtime config
# ---------------------------------------------------------------------------
prepare_settings_json() {
  echo "Using shared config: $SETTINGS_SHARED_YAML"
  validate_shared_settings
  validate_team_protocol
  generate_legacy_settings_json
  echo "Generated compatibility artifact: $SETTINGS_JSON"
}

# ---------------------------------------------------------------------------
# map_model — maps Claude model name to Codex model name
# ---------------------------------------------------------------------------
map_model() {
  local model="$1"
  case "$model" in
    opus) echo "gpt-5.4" ;;
    sonnet) echo "gpt-5.3-codex" ;;
    haiku) echo "gpt-5.1-codex-mini" ;;
    *) echo "gpt-5.3-codex" ;;
  esac
}

# ---------------------------------------------------------------------------
# map_effort — maps Claude effort level to Codex model_reasoning_effort
# ---------------------------------------------------------------------------
map_effort() {
  local effort="$1"
  case "$effort" in
    low) echo "low" ;;
    medium) echo "medium" ;;
    high) echo "high" ;;
    max) echo "xhigh" ;;
    *) echo "medium" ;;
  esac
}

# ---------------------------------------------------------------------------
# map_sandbox_mode — determines Codex sandbox_mode from agent metadata
#   $1 = permissionMode value (plan / acceptEdits / "")
#   $2 = tools list (comma-separated)
# ---------------------------------------------------------------------------
map_sandbox_mode() {
  local permission_mode="$1"
  local tools="$2"

  # plan mode is read-only
  if [ "$permission_mode" = "plan" ]; then
    echo "read-only"
    return
  fi

  # acceptEdits with Write or Edit tool → workspace-write
  if [ "$permission_mode" = "acceptEdits" ]; then
    if echo "$tools" | grep -qE '\b(Write|Edit)\b'; then
      echo "workspace-write"
      return
    fi
  fi

  # Default: read-only
  echo "read-only"
}

# ---------------------------------------------------------------------------
# map_default_sandbox_mode — determines Codex sandbox_mode from shared config
#   $1 = Claude permissions.defaultMode value
# ---------------------------------------------------------------------------
map_default_sandbox_mode() {
  local default_mode="$1"

  case "$default_mode" in
    plan) echo "read-only" ;;
    acceptEdits) echo "workspace-write" ;;
    *) echo "workspace-write" ;;
  esac
}

# ---------------------------------------------------------------------------
# map_approval_policy — determines Codex approval_policy from shared config
#   $1 = runtime.approval value (manual / guarded-auto / full-auto)
#   $2 = optional Codex approval override from shared config
# ---------------------------------------------------------------------------
map_approval_policy() {
  local runtime_approval="$1"
  local override="$2"

  if [ -n "$override" ] && [ "$override" != "null" ]; then
    echo "$override"
    return
  fi

  map_approval_intent_to_codex_policy "$runtime_approval"
}

# ---------------------------------------------------------------------------
# generate_claude — produces claude/ output directory
# ---------------------------------------------------------------------------
generate_claude() {
  echo "=== Generating Claude output ==="

  # Clean and recreate output directories
  rm -rf "$CLAUDE_DIR"
  mkdir -p "$CLAUDE_AGENTS_DIR"

  # Copy CLAUDE.md
  cp "$CLAUDE_MD" "$CLAUDE_DIR/CLAUDE.md"
  echo "Copied: $CLAUDE_DIR/CLAUDE.md"

  # Copy settings.json
  cp "$SETTINGS_JSON" "$CLAUDE_DIR/settings.json"
  echo "Copied: $CLAUDE_DIR/settings.json"

  # Create relative symlinks for rules and skills
  ln -s ../rules "$CLAUDE_DIR/rules"
  echo "Symlinked: $CLAUDE_DIR/rules -> ../rules"

  ln -s ../skills "$CLAUDE_DIR/skills"
  echo "Symlinked: $CLAUDE_DIR/skills -> ../skills"

  # Generate agent .md files from TEAM metadata + markdown instruction body
  local agent_id
  while IFS= read -r agent_id; do
    [ -n "$agent_id" ] || continue

    local name description model effort permission_mode
    local src_file dst_file body expanded_body
    local max_turns background memory isolation
    local tools_csv disallowed_tools_csv

    name="$(yq -r ".agents.items.${agent_id}.name" "$TEAM_YAML")"
    description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")"
    model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")"
    effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")"
    permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")"
    tools_csv="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)"
    disallowed_tools_csv="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)"
    max_turns="$(yq -r ".agents.items.${agent_id}.max_turns // \"\"" "$TEAM_YAML")"
    background="$(yq -r ".agents.items.${agent_id}.background // \"\"" "$TEAM_YAML")"
    memory="$(yq -r ".agents.items.${agent_id}.memory // \"\"" "$TEAM_YAML")"
    isolation="$(yq -r ".agents.items.${agent_id}.isolation // \"\"" "$TEAM_YAML")"

    src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")"
    dst_file="$CLAUDE_AGENTS_DIR/${name}.md"

    body="$(extract_body "$src_file")"
    expanded_body="$(expand_body "$body" "${CLAUDE_VARS[@]}")"

    {
      echo "---"
      echo "name: '$(yaml_escape_single_quoted "$name")'"
      echo "description: '$(yaml_escape_single_quoted "$description")'"
      echo "model: '$(yaml_escape_single_quoted "$model")'"
      if [ -n "$effort" ] && [ "$effort" != "null" ]; then
        echo "effort: '$(yaml_escape_single_quoted "$effort")'"
      fi
      if [ -n "$permission_mode" ] && [ "$permission_mode" != "null" ]; then
        echo "permissionMode: '$(yaml_escape_single_quoted "$permission_mode")'"
      fi
      echo "tools: '$(yaml_escape_single_quoted "$tools_csv")'"
      if [ -n "$disallowed_tools_csv" ] && [ "$disallowed_tools_csv" != "null" ]; then
        echo "disallowedTools: '$(yaml_escape_single_quoted "$disallowed_tools_csv")'"
      fi
      if [ "$background" = "true" ]; then
        echo "background: true"
      fi
      if [ -n "$memory" ] && [ "$memory" != "null" ]; then
        echo "memory: '$(yaml_escape_single_quoted "$memory")'"
      fi
      if [ -n "$isolation" ] && [ "$isolation" != "null" ]; then
        echo "isolation: '$(yaml_escape_single_quoted "$isolation")'"
      fi
      if [ -n "$max_turns" ] && [ "$max_turns" != "null" ]; then
        echo "maxTurns: $max_turns"
      fi
      echo "skills:"
      yq -r ".agents.items.${agent_id}.skills[]" "$TEAM_YAML" | while IFS= read -r skill; do
        echo "  - $(yaml_escape_single_quoted "$skill")"
      done
      echo "---"
      echo ""
      echo "$expanded_body"
    } > "$dst_file"

    echo "Generated: $dst_file"
  done < <(yq -r '.agents.order[]' "$TEAM_YAML")
}

# ---------------------------------------------------------------------------
# generate_codex — produces codex/ output directory
# ---------------------------------------------------------------------------
generate_codex() {
  echo ""
  echo "=== Generating Codex output ==="

  # Clean and recreate output directories
  rm -rf "$CODEX_DIR"
  mkdir -p "$CODEX_AGENTS_DIR"
  ln -s ../skills "$CODEX_DIR/skills"
  echo "Symlinked: $CODEX_DIR/skills -> ../skills"

  # Generate agent .toml files from TEAM metadata + markdown instruction body
  echo "Generating Codex agent definitions..."
  local agent_id
  while IFS= read -r agent_id; do
    [ -n "$agent_id" ] || continue

    local name description model effort permission_mode tools disallowed_tools
    local agent_skills
    local src_file dst_file
    name="$(yq -r ".agents.items.${agent_id}.name" "$TEAM_YAML")"
    description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")"
    model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")"
    effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")"
    permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")"
    tools="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)"
    disallowed_tools="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)"
    agent_skills="$(yq -r ".agents.items.${agent_id}.skills[]" "$TEAM_YAML")"
    src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")"
    dst_file="$CODEX_AGENTS_DIR/${name}.toml"

    # Map to Codex equivalents
    local codex_model codex_effort codex_sandbox
    codex_model="$(map_model "$model")"
    codex_effort="$(map_effort "${effort:-medium}")"
    codex_sandbox="$(map_sandbox_mode "$permission_mode" "$tools")"

    # Extract and expand body with Codex variable values
    local body expanded_body
    body="$(extract_body "$src_file")"
    expanded_body="$(expand_body "$body" "${CODEX_VARS[@]}")"

    # Build developer_instructions: append disallowedTools note if present
    local developer_instructions
    developer_instructions="$expanded_body"
    if [ -n "$disallowed_tools" ] && [ "$disallowed_tools" != "null" ]; then
      developer_instructions="${developer_instructions}

You do NOT have access to these tools: ${disallowed_tools}"
    fi

    # TOML multiline basic strings use """ delimiters; reject raw delimiter
    # sequences in instruction bodies so generated TOML remains parseable.
    if printf '%s' "$developer_instructions" | grep -q '"""'; then
      echo "Error: agent instruction contains raw triple quotes (\"\"\") which break TOML in $src_file"
      exit 1
    fi

    # Write TOML output
    cat > "$dst_file" <<TOML
name = "${name}"
description = "${description}"
model = "${codex_model}"
model_reasoning_effort = "${codex_effort}"
sandbox_mode = "${codex_sandbox}"
TOML

    cat >> "$dst_file" <<TOML
developer_instructions = """
${developer_instructions}
"""
TOML

    local skill_id skill_applies enabled
    while IFS= read -r skill_id; do
      [ -n "$skill_id" ] || continue
      skill_applies="$(yq -r ".skills.items.${skill_id}.applies_to[]" "$TEAM_YAML")"
      if ! printf '%s\n' "$skill_applies" | grep -qx "codex"; then
        continue
      fi

      enabled=false
      if printf '%s\n' "$agent_skills" | grep -qx "$skill_id"; then
        enabled=true
      fi

      cat >> "$dst_file" <<TOML
[[skills.config]]
path = "../skills/${skill_id}/SKILL.md"
enabled = ${enabled}

TOML
    done < <(yq -r '.skills.order[]' "$TEAM_YAML")

    echo "Generated: $dst_file"
  done < <(yq -r '.agents.order[]' "$TEAM_YAML")

  # Generate AGENTS.md — concatenate TEAM-ordered rules with tool-agnostic header
  echo ""
  echo "Generating codex/AGENTS.md..."
  {
    echo "# Agent Team Instructions"
    echo ""
    echo "Agent-team specific protocols live in skills (orchestrate, conventions, worker-protocol, qa-checklist, message-schema)."
    local rule_id rules_file
    while IFS= read -r rule_id; do
      [ -n "$rule_id" ] || continue
      yq -r ".rules.items.${rule_id}.applies_to[]" "$TEAM_YAML" | grep -qx "codex" || continue
      rules_file="$SCRIPT_DIR/$(yq -r ".rules.items.${rule_id}.source_file" "$TEAM_YAML")"
      echo ""
      cat "$rules_file"
    done < <(yq -r '.rules.order[]' "$TEAM_YAML")
  } > "$CODEX_DIR/AGENTS.md"
  echo "Generated: $CODEX_DIR/AGENTS.md"

  # Generate config.toml — derive sandbox/approval defaults from shared config
  echo ""
  echo "Generating codex/config.toml..."

  local default_mode runtime_approval codex_approval_override codex_network_access
  default_mode="$(map_filesystem_intent_to_claude_mode "$(yq -r '.runtime.filesystem' "$SETTINGS_SHARED_YAML")")"
  runtime_approval="$(yq -r '.runtime.approval' "$SETTINGS_SHARED_YAML")"
  codex_approval_override="$(yq -r '.targets.codex.approval_policy // ""' "$SETTINGS_SHARED_YAML")"
  codex_network_access="$(yq -r '.targets.codex.network_access // .runtime.network_access // false' "$SETTINGS_SHARED_YAML")"

  local config_sandbox config_approval
  config_sandbox="$(map_default_sandbox_mode "$default_mode")"
  config_approval="$(map_approval_policy "$runtime_approval" "$codex_approval_override")"

  cat > "$CODEX_DIR/config.toml" <<TOML
#:schema https://developers.openai.com/codex/config-schema.json
model = "gpt-5.3-codex"
model_reasoning_effort = "medium"
sandbox_mode = "${config_sandbox}"
approval_policy = "${config_approval}"

[sandbox_workspace_write]
network_access = ${codex_network_access}
TOML
  echo "Generated: $CODEX_DIR/config.toml"
}

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
prepare_settings_json
generate_claude
generate_codex

echo ""
echo "Done."
install.sh (201 changed lines)

@@ -1,20 +1,18 @@
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# install.sh — symlinks agent-team into ~/.claude/ and ~/.codex/ (if present)
|
||||
# install.sh — symlinks agent-team into ~/.claude/
|
||||
# Works on Windows (Git Bash/MSYS2), Linux, and macOS.
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
CLAUDE_DIR="$HOME/.claude"
|
||||
AGENTS_SRC="$SCRIPT_DIR/claude/agents"
|
||||
AGENTS_SRC="$SCRIPT_DIR/agents"
|
||||
SKILLS_SRC="$SCRIPT_DIR/skills"
|
||||
RULES_SRC="$SCRIPT_DIR/rules"
|
||||
TEAM_YAML="$SCRIPT_DIR/TEAM.yaml"
|
||||
AGENTS_DST="$CLAUDE_DIR/agents"
|
||||
RULES_DST="$CLAUDE_DIR/rules"
|
||||
CLAUDE_MD_SRC="$SCRIPT_DIR/claude/CLAUDE.md"
|
||||
SKILLS_DST="$CLAUDE_DIR/skills"
|
||||
CLAUDE_MD_SRC="$SCRIPT_DIR/CLAUDE.md"
|
||||
CLAUDE_MD_DST="$CLAUDE_DIR/CLAUDE.md"
|
||||
SETTINGS_SRC="$SCRIPT_DIR/claude/settings.json"
|
||||
SETTINGS_SRC="$SCRIPT_DIR/settings.json"
|
||||
SETTINGS_DST="$CLAUDE_DIR/settings.json"
|
||||
|
||||
# Detect OS
|
||||
|
|
@ -30,15 +28,6 @@ echo "Source: $SCRIPT_DIR"
|
|||
echo "Target: $CLAUDE_DIR"
|
||||
echo ""
|
||||
|
||||
# Pre-flight: build fresh generated outputs before proceeding.
|
||||
if [ ! -f "$SCRIPT_DIR/generate.sh" ]; then
|
||||
echo "Error: generate.sh not found."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Generating fresh artifacts before install..."
|
||||
bash "$SCRIPT_DIR/generate.sh"
|
||||
|
||||
# Ensure ~/.claude exists
|
||||
mkdir -p "$CLAUDE_DIR"
|
||||
|
||||
|
|
@@ -71,7 +60,8 @@ create_symlink() {
    local win_dst
    win_src="$(cygpath -w "$src")"
    win_dst="$(cygpath -w "$dst")"
    if ! cmd //c "mklink /D \"$win_dst\" \"$win_src\"" > /dev/null 2>&1; then
    cmd //c "mklink /D \"$win_dst\" \"$win_src\"" > /dev/null 2>&1
    if [ $? -ne 0 ]; then
      echo "ERROR: mklink failed for $name."
      echo "On Windows, enable Developer Mode (Settings > Update & Security > For Developers)"
      echo "or run this script as Administrator."
@@ -84,22 +74,6 @@ create_symlink() {
  echo "Linked: $dst -> $src"
}

ensure_directory() {
  local dst="$1"
  local name="$2"

  if [ -L "$dst" ]; then
    echo "Removing existing symlink: $dst"
    rm "$dst"
  elif [ -f "$dst" ]; then
    local backup="${dst}.backup.$(date +%Y%m%d%H%M%S)"
    echo "Backing up existing $name file to: $backup"
    mv "$dst" "$backup"
  fi

  mkdir -p "$dst"
}

# Symlink a single file
create_file_symlink() {
  local src="$1"
@@ -128,7 +102,8 @@ create_file_symlink() {
    local win_dst
    win_src="$(cygpath -w "$src")"
    win_dst="$(cygpath -w "$dst")"
    if ! cmd //c "mklink \"$win_dst\" \"$win_src\"" > /dev/null 2>&1; then
    cmd //c "mklink \"$win_dst\" \"$win_src\"" > /dev/null 2>&1
    if [ $? -ne 0 ]; then
      echo "ERROR: mklink failed for $name."
      echo "On Windows, enable Developer Mode (Settings > Update & Security > For Developers)"
      echo "or run this script as Administrator."
@@ -141,164 +116,10 @@ create_file_symlink() {
  echo "Linked: $dst -> $src"
}

# Return one skill id per line for a target platform from TEAM.yaml.
# Falls back to empty output when TEAM.yaml is unavailable.
list_team_skills_for_target() {
  local target="$1"

  if [ ! -f "$TEAM_YAML" ]; then
    return 0
  fi

  # Validate TEAM parseability before resolving inventory.
  yq -e '.version == 1 and has("skills") and (.skills | has("order")) and (.skills | has("items"))' "$TEAM_YAML" > /dev/null

  local skill_id applies
  while IFS= read -r skill_id; do
    [ -n "$skill_id" ] || continue
    applies="$(yq -r ".skills.items.\"$skill_id\".applies_to[]? // \"\"" "$TEAM_YAML")"
    if printf '%s\n' "$applies" | grep -Fxq "$target"; then
      printf '%s\n' "$skill_id"
    fi
  done < <(yq -r '.skills.order[]' "$TEAM_YAML")
}

# Resolve a TEAM skill id to its source directory using instruction_file.
resolve_skill_dir_from_team() {
  local skill_id="$1"

  if [ ! -f "$TEAM_YAML" ]; then
    return 1
  fi

  local instruction_file skill_dir
  instruction_file="$(
    yq -r ".skills.items.\"$skill_id\".instruction_file // \"\"" "$TEAM_YAML"
  )"
  [ -n "$instruction_file" ] || return 1

  skill_dir="$(dirname "$SCRIPT_DIR/$instruction_file")"
  [ -d "$skill_dir" ] || return 1

  printf '%s\n' "$skill_dir"
}

install_team_skills_for_target() {
  local target="$1"
  local dst_root="$2"
  local label_prefix="$3"

  ensure_directory "$dst_root" "$label_prefix skills"

  local skill_dir skill_name skill_dst skill_id skill_dir_path
  local expected_skills_tmp
  expected_skills_tmp="$(mktemp)"

  cleanup_skill_symlinks() {
    local expected_file="$1"
    local existing_path existing_name

    for existing_path in "$dst_root"/*; do
      [ -e "$existing_path" ] || [ -L "$existing_path" ] || continue
      [ -L "$existing_path" ] || continue

      existing_name="$(basename "$existing_path")"
      if [ -s "$expected_file" ] && grep -Fxq "$existing_name" "$expected_file"; then
        continue
      fi

      echo "Removing stale symlink: $existing_path"
      rm "$existing_path"
    done
  }

  if [ -f "$TEAM_YAML" ]; then
    local team_skills_tmp
    team_skills_tmp="$(mktemp)"
    if ! list_team_skills_for_target "$target" > "$team_skills_tmp" 2>/dev/null; then
      echo "Warning: TEAM.yaml exists but could not be parsed; falling back to directory-based ${label_prefix} skill install."
      for skill_dir in "$SKILLS_SRC"/*/; do
        skill_name="$(basename "$skill_dir")"
        printf '%s\n' "$skill_name" >> "$expected_skills_tmp"
      done
      cleanup_skill_symlinks "$expected_skills_tmp"
      for skill_dir in "$SKILLS_SRC"/*/; do
        skill_name="$(basename "$skill_dir")"
        create_symlink "$skill_dir" "$dst_root/$skill_name" "${label_prefix} skill: $skill_name"
      done
      rm -f "$team_skills_tmp"
      rm -f "$expected_skills_tmp"
      return
    fi

    while IFS= read -r skill_id; do
      [ -n "$skill_id" ] || continue
      printf '%s\n' "$skill_id" >> "$expected_skills_tmp"
    done < "$team_skills_tmp"

    cleanup_skill_symlinks "$expected_skills_tmp"

    while IFS= read -r skill_id; do
      [ -n "$skill_id" ] || continue
      skill_dir_path="$(resolve_skill_dir_from_team "$skill_id" || true)"
      if [ -z "$skill_dir_path" ]; then
        echo "Warning: TEAM.yaml skill '$skill_id' has no valid instruction_file directory; skipping."
        continue
      fi
      create_symlink "$skill_dir_path" "$dst_root/$skill_id" "${label_prefix} skill: $skill_id"
    done < "$team_skills_tmp"
    rm -f "$team_skills_tmp"
    rm -f "$expected_skills_tmp"
    return
  fi

  for skill_dir in "$SKILLS_SRC"/*/; do
    skill_name="$(basename "$skill_dir")"
    printf '%s\n' "$skill_name" >> "$expected_skills_tmp"
  done

  cleanup_skill_symlinks "$expected_skills_tmp"

  for skill_dir in "$SKILLS_SRC"/*/; do
    skill_name="$(basename "$skill_dir")"
    create_symlink "$skill_dir" "$dst_root/$skill_name" "${label_prefix} skill: $skill_name"
  done

  rm -f "$expected_skills_tmp"
}

create_symlink "$AGENTS_SRC" "$AGENTS_DST" "agents"
create_symlink "$RULES_SRC" "$RULES_DST" "rules"
create_symlink "$SKILLS_SRC" "$SKILLS_DST" "skills"
create_file_symlink "$CLAUDE_MD_SRC" "$CLAUDE_MD_DST" "CLAUDE.md"
create_file_symlink "$SETTINGS_SRC" "$SETTINGS_DST" "settings.json"
install_team_skills_for_target "claude" "$CLAUDE_DIR/skills" "claude"

# Codex CLI integration (optional — only if codex/ output exists)
CODEX_DIR="$HOME/.codex"

if [ -d "$SCRIPT_DIR/codex" ]; then
  echo ""
  echo "Codex output found — installing to $CODEX_DIR"
  mkdir -p "$CODEX_DIR"

  # Skills: symlink each skill directory into ~/.codex/skills/
  # (Can't replace the whole directory — .system/ must remain intact)
  install_team_skills_for_target "codex" "$CODEX_DIR/skills" "codex"

  # Generated agents
  if [ -d "$SCRIPT_DIR/codex/agents" ]; then
    create_symlink "$SCRIPT_DIR/codex/agents" "$CODEX_DIR/agents" "codex agents"
  else
    echo "Run ./generate.sh first to generate Codex agent definitions"
  fi

  # Generated AGENTS.md (symlink to project root for Codex discovery)
  if [ -f "$SCRIPT_DIR/codex/AGENTS.md" ]; then
    create_file_symlink "$SCRIPT_DIR/codex/AGENTS.md" "$CODEX_DIR/AGENTS.md" "codex AGENTS.md"
  fi

  # Generated config.toml
  if [ -f "$SCRIPT_DIR/codex/config.toml" ]; then
    create_file_symlink "$SCRIPT_DIR/codex/config.toml" "$CODEX_DIR/config.toml" "codex config.toml"
  fi
fi
echo "Done. Open Claude Code and load the orchestrate skill to begin."
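For reference, the TEAM.yaml shape these `yq` queries assume would look roughly like the fragment below. This is inferred from the queries themselves (`.skills.order[]`, `.skills.items.<id>.applies_to`, `.skills.items.<id>.instruction_file`), not copied from the repository; the two skill ids shown do appear in the repo's skill inventory, but their `applies_to` values here are illustrative.

```yaml
skills:
  order:
    - conventions
    - orchestrate
  items:
    conventions:
      instruction_file: skills/conventions/SKILL.md
      applies_to:
        - claude
        - codex
    orchestrate:
      instruction_file: skills/orchestrate/SKILL.md
      applies_to:
        - claude
```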
21 justfile
@@ -1,21 +0,0 @@
set shell := ["bash", "-eu", "-o", "pipefail", "-c"]

default: help

help:
    @just --list

validate:
    nix run .#validate

build:
    nix run .#build

check:
    nix run .#check

install:
    nix run .#install

clean:
    rm -rf settings.json claude codex
@@ -1,13 +0,0 @@
# Session Behavior

- Treat each session as stateless — do not assume context from prior sessions
- The instruction hierarchy and `memory/` are the only sources of persistent context
- If something needs to carry forward across sessions, persist it in the appropriate file — not in session memory

# Project Memory

- Project-specific memory lives in `memory/` at the project root
- Use `MEMORY.md` in that directory as the index (one line per entry pointing to a file)
- Memory files use frontmatter: `name`, `description`, `type` (user/feedback/project/reference)
- Commit `memory/` with the repo so memory persists across machines and sessions
- Tool-specific runtime memory (for example `.claude/agent-memory/`) is optional and does not replace `memory/` as the project source of truth
@@ -1,6 +0,0 @@
# Responses & Explanations

- Be concise — lead with the action or answer, not the preamble
- Include just enough reasoning to explain *why* a decision was made, not a full walkthrough
- Skip trailing summaries ("Here's what I did...") — the diff speaks for itself
- No emojis unless explicitly asked
@@ -1,6 +0,0 @@
# Commits & Git Workflow

- Make many small, tightly scoped commits — one logical change per commit
- Follow conventional commit format per the conventions skill
- Ask before pushing to remote or force-pushing
- Ask before opening PRs unless explicitly told to
@@ -1,17 +0,0 @@
# Tool & Approach Philosophy

- Prefer tools and solutions that are declarative and reproducible over imperative one-offs
- Portability across dev environments is a first-class concern — avoid hardcoding machine-specific paths or assumptions
- The right tool for the job is the right tool — no language/framework bias, but favor things that can be version-pinned and reproduced

# Parallelism

- Always parallelize independent work — tool calls, file reads, searches
- When a task has components that don't depend on each other, run them concurrently by default
- Sequential execution should be the exception, not the default

# Context Management

- Use subagents for exploratory reads and investigations to keep the main context clean
- Prefer scoped file reads (offset/limit) over reading entire large files
- When a task is complete or the topic shifts significantly, suggest clearing context or starting a new session
@@ -1,10 +0,0 @@
# Verification

- After making changes, run relevant tests or build commands to verify correctness before reporting success
- If no tests exist for the changed code, say so rather than silently assuming it works
- Prefer running single targeted tests over the full suite unless asked otherwise

# When Things Go Wrong

- If an approach fails twice, stop and reassess rather than continuing to iterate
- Present the failure clearly and propose an alternative before proceeding
@@ -1,9 +0,0 @@
# Nix

- Nix is the preferred meta package manager on all systems — assume it is available even on non-NixOS Linux
- Always prefer a project-level `flake.nix` as the canonical way to define dev environments, build systems, and scripts
- Dev environments go in `devShells`, project scripts/tools go in `packages` or as `apps` within the flake
- Never suggest `apt`, `brew`, `pip install --user`, `npm install -g`, or other imperative global installs — reach for `nix shell`, `nix run`, or the project devshell instead
- Prefer `nix run` for one-off tool invocations and `nix develop` (or `direnv` + `use flake`) for persistent dev shells
- Binaries and tools introduced to a project should be pinned and run through Nix, not assumed to be on `$PATH` from the host
- Flakes are the preferred interface — avoid legacy `nix-env` or channel-based patterns
@@ -1,5 +0,0 @@
# Research Before Acting

- Before implementing a solution, research it — read relevant documentation, search for existing patterns, check official sources
- Do not reason from first principles when documentation or prior art exists
- Prefer verified answers over confident guesses
@@ -1,164 +0,0 @@
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://example.com/schemas/agent-runtime.schema.json",
  "title": "Agent Runtime Config",
  "description": "Portable runtime policy for deriving tool-specific AI harness configuration.",
  "type": "object",
  "additionalProperties": false,
  "required": ["version", "model", "runtime", "safety"],
  "properties": {
    "version": { "type": "integer", "const": 1 },
    "model": {
      "type": "object",
      "additionalProperties": false,
      "required": ["class", "reasoning"],
      "properties": {
        "class": {
          "type": "string",
          "enum": ["fast", "balanced", "powerful"],
          "description": "Portable model tier. Adapters map this to provider-specific model names."
        },
        "reasoning": {
          "type": "string",
          "enum": ["low", "medium", "high", "max"]
        }
      }
    },
    "runtime": {
      "type": "object",
      "additionalProperties": false,
      "required": ["filesystem", "approval", "network_access", "tools"],
      "properties": {
        "filesystem": {
          "type": "string",
          "enum": ["read-only", "workspace-write"]
        },
        "approval": {
          "type": "string",
          "enum": ["manual", "guarded-auto", "full-auto"],
          "description": "Portable approval intent. Adapters degrade where exact behavior is unavailable."
        },
        "network_access": { "type": "boolean" },
        "tools": {
          "type": "array",
          "items": {
            "type": "string",
            "enum": ["shell", "read", "edit", "write", "glob", "grep", "web_fetch", "web_search"]
          },
          "uniqueItems": true
        }
      }
    },
    "safety": {
      "type": "object",
      "additionalProperties": false,
      "required": ["protected_paths", "dangerous_shell_commands"],
      "properties": {
        "protected_paths": {
          "type": "array",
          "items": { "type": "string" }
        },
        "dangerous_shell_commands": {
          "type": "object",
          "additionalProperties": false,
          "required": ["ask"],
          "properties": {
            "ask": {
              "type": "array",
              "items": { "type": "string" }
            }
          }
        }
      }
    },
    "targets": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "claude": {
          "type": "object",
          "additionalProperties": false,
          "properties": {
            "claude_md_excludes": {
              "type": "array",
              "items": { "type": "string" }
            }
          }
        },
        "codex": {
          "type": "object",
          "additionalProperties": false,
          "properties": {
            "approval_policy": {
              "type": "string",
              "enum": ["on-request", "untrusted", "never"],
              "description": "Codex-only compatibility override. Prefer runtime.approval as the portable source of truth."
            },
            "network_access": {
              "type": "boolean",
              "description": "Codex-only compatibility override. Prefer runtime.network_access as the portable source of truth."
            }
          }
        }
      }
    }
  }
}
@@ -1,568 +0,0 @@
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://example.com/schemas/team.schema.json",
  "title": "Team Protocol Config",
  "description": "Portable team-level inventory and metadata for agents, skills, and rules. For v1 this schema enforces strict inventory membership/order; generator runtime still validates referenced files exist on disk.",
  "type": "object",
  "additionalProperties": false,
  "required": ["version", "agents", "skills", "rules"],
  "properties": {
    "version": { "type": "integer", "const": 1 },
    "agents": { "$ref": "#/$defs/inventory_agents" },
    "skills": { "$ref": "#/$defs/inventory_skills" },
    "rules": { "$ref": "#/$defs/inventory_rules" }
  },
  "$defs": {
    "id_agent": { "type": "string", "pattern": "^[a-z][a-z0-9-]*$" },
    "id_skill": { "type": "string", "pattern": "^[a-z][a-z0-9-]*$" },
    "id_rule": { "type": "string", "pattern": "^[0-9]{2}-[a-z0-9-]+$" },
    "tool_name": {
      "type": "string",
      "enum": ["Read", "Write", "Edit", "Glob", "Grep", "Bash", "WebFetch", "WebSearch"]
    },
    "target_name": { "type": "string", "enum": ["claude", "codex"] },
    "agent_item": {
      "type": "object",
      "additionalProperties": false,
      "required": ["id", "name", "description", "model", "effort", "permission_mode", "tools", "disallowed_tools", "max_turns", "skills", "instruction_file"],
      "properties": {
        "id": { "$ref": "#/$defs/id_agent" },
        "name": { "type": "string", "minLength": 1 },
        "description": { "type": "string", "minLength": 1 },
        "model": { "type": "string", "enum": ["haiku", "sonnet", "opus"] },
        "effort": { "type": "string", "enum": ["", "low", "medium", "high", "max"] },
        "permission_mode": { "type": "string", "enum": ["", "plan", "acceptEdits"] },
        "tools": { "type": "array", "items": { "$ref": "#/$defs/tool_name" }, "uniqueItems": true },
        "disallowed_tools": { "type": "array", "items": { "$ref": "#/$defs/tool_name" }, "uniqueItems": true },
        "max_turns": { "type": "integer", "minimum": 1 },
        "skills": { "type": "array", "items": { "$ref": "#/$defs/id_skill" }, "uniqueItems": true },
        "background": { "type": "boolean" },
        "memory": { "type": "string", "enum": ["project"] },
        "isolation": { "type": "string", "enum": ["worktree"] },
        "instruction_file": { "type": "string", "pattern": "^agents/[a-z0-9-]+\\.md$" }
      }
    },
    "skill_item": {
      "type": "object",
      "additionalProperties": false,
      "required": ["id", "name", "description", "instruction_file", "applies_to", "install_mode"],
      "properties": {
        "id": { "$ref": "#/$defs/id_skill" },
        "name": { "type": "string", "minLength": 1 },
        "description": { "type": "string", "minLength": 1 },
        "instruction_file": { "type": "string", "pattern": "^skills/[a-z0-9-]+/SKILL\\.md$" },
        "applies_to": { "type": "array", "items": { "$ref": "#/$defs/target_name" }, "minItems": 1, "uniqueItems": true },
        "install_mode": { "type": "string", "enum": ["shared"] }
      }
    },
    "rule_item": {
      "type": "object",
      "additionalProperties": false,
      "required": ["id", "source_file", "applies_to"],
      "properties": {
        "id": { "$ref": "#/$defs/id_rule" },
        "source_file": { "type": "string", "pattern": "^rules/[0-9]{2}-[a-z0-9-]+\\.md$" },
        "applies_to": { "type": "array", "items": { "$ref": "#/$defs/target_name" }, "minItems": 1, "uniqueItems": true }
      }
    },
    "inventory_agents": {
      "type": "object",
      "additionalProperties": false,
      "description": "Agent inventory for protocol v1. This schema enforces exact order, exact keys, and key/id equality for the current repository inventory.",
      "required": ["order", "items"],
      "properties": {
        "order": {
          "type": "array",
          "items": { "$ref": "#/$defs/id_agent" },
          "uniqueItems": true,
          "const": ["architect", "auditor", "debugger", "documenter", "grunt", "researcher", "reviewer", "senior", "worker"]
        },
        "items": {
          "type": "object",
          "additionalProperties": false,
          "required": ["architect", "auditor", "debugger", "documenter", "grunt", "researcher", "reviewer", "senior", "worker"],
          "properties": {
            "architect": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "architect" } } }] },
            "auditor": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "auditor" } } }] },
            "debugger": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "debugger" } } }] },
            "documenter": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "documenter" } } }] },
            "grunt": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "grunt" } } }] },
            "researcher": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "researcher" } } }] },
            "reviewer": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "reviewer" } } }] },
            "senior": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "senior" } } }] },
            "worker": { "allOf": [{ "$ref": "#/$defs/agent_item" }, { "properties": { "id": { "const": "worker" } } }] }
          }
        }
      }
    },
    "inventory_skills": {
      "type": "object",
      "additionalProperties": false,
      "description": "Skill inventory for protocol v1. This schema enforces exact order, exact keys, and key/id equality for the current repository inventory. The fixed v1 inventory does not include a separate 'project' skill entry.",
      "required": ["order", "items"],
      "properties": {
        "order": {
          "type": "array",
          "items": { "$ref": "#/$defs/id_skill" },
          "uniqueItems": true,
          "const": ["conventions", "message-schema", "orchestrate", "qa-checklist", "worker-protocol"]
        },
        "items": {
          "type": "object",
          "additionalProperties": false,
          "required": ["conventions", "message-schema", "orchestrate", "qa-checklist", "worker-protocol"],
          "properties": {
            "conventions": { "allOf": [{ "$ref": "#/$defs/skill_item" }, { "properties": { "id": { "const": "conventions" } } }] },
            "message-schema": { "allOf": [{ "$ref": "#/$defs/skill_item" }, { "properties": { "id": { "const": "message-schema" } } }] },
            "orchestrate": { "allOf": [{ "$ref": "#/$defs/skill_item" }, { "properties": { "id": { "const": "orchestrate" } } }] },
            "qa-checklist": { "allOf": [{ "$ref": "#/$defs/skill_item" }, { "properties": { "id": { "const": "qa-checklist" } } }] },
            "worker-protocol": { "allOf": [{ "$ref": "#/$defs/skill_item" }, { "properties": { "id": { "const": "worker-protocol" } } }] }
          }
        }
      }
    },
    "inventory_rules": {
      "type": "object",
      "additionalProperties": false,
      "description": "Rule inventory for protocol v1. This schema enforces exact order, exact keys, and key/id equality for the current repository inventory.",
      "required": ["order", "items"],
      "properties": {
        "order": {
          "type": "array",
          "items": { "$ref": "#/$defs/id_rule" },
          "uniqueItems": true,
          "const": ["01-session", "02-responses", "03-git", "04-tools", "05-verification", "06-nix", "07-research"]
        },
        "items": {
          "type": "object",
          "additionalProperties": false,
          "required": ["01-session", "02-responses", "03-git", "04-tools", "05-verification", "06-nix", "07-research"],
          "properties": {
            "01-session": { "allOf": [{ "$ref": "#/$defs/rule_item" }, { "properties": { "id": { "const": "01-session" } } }] },
            "02-responses": { "allOf": [{ "$ref": "#/$defs/rule_item" }, { "properties": { "id": { "const": "02-responses" } } }] },
            "03-git": { "allOf": [{ "$ref": "#/$defs/rule_item" }, { "properties": { "id": { "const": "03-git" } } }] },
            "04-tools": { "allOf": [{ "$ref": "#/$defs/rule_item" }, { "properties": { "id": { "const": "04-tools" } } }] },
            "05-verification": { "allOf": [{ "$ref": "#/$defs/rule_item" }, { "properties": { "id": { "const": "05-verification" } } }] },
            "06-nix": { "allOf": [{ "$ref": "#/$defs/rule_item" }, { "properties": { "id": { "const": "06-nix" } } }] },
            "07-research": { "allOf": [{ "$ref": "#/$defs/rule_item" }, { "properties": { "id": { "const": "07-research" } } }] }
          }
        }
      }
    }
  }
}
57 settings.json (Normal file)
@@ -0,0 +1,57 @@
{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "attribution": {
    "commit": "",
    "pr": ""
  },
  "includeGitInstructions": true,
  "permissions": {
    "allow": ["Bash", "Read", "Edit", "Write", "Glob", "Grep", "WebFetch", "WebSearch"],
    "deny": [
      "Read(~/.ssh/**)",
      "Read(~/.aws/**)",
      "Read(~/.gnupg/**)",
      "Read(./.env)",
      "Read(./.env.*)",
      "Bash(cat ~/.ssh/*)",
      "Bash(cat ~/.aws/*)",
      "Bash(cat ~/.gnupg/*)",
      "Bash(cat .env*)",
      "Bash(less ~/.ssh/*)",
      "Bash(less ~/.aws/*)",
      "Bash(less ~/.gnupg/*)"
    ],
    "ask": [
      "Bash(rm *)",
      "Bash(rmdir *)",
      "Bash(git push --force*)",
      "Bash(git push -f*)",
      "Bash(git reset --hard*)",
      "Bash(git clean *)",
      "Bash(chmod *)",
      "Bash(dd *)",
      "Bash(mkfs*)",
      "Bash(shred *)",
      "Bash(kill *)",
      "Bash(killall *)",
      "Bash(sudo *)"
    ],
    "defaultMode": "acceptEdits"
  },
  "model": "sonnet",
  "syntaxHighlightingDisabled": false,
  "effortLevel": "medium",
  "autoUpdatesChannel": "stable",
  "claudeMdExcludes": [".claude/agent-memory/**"]
}
@@ -1,7 +1,6 @@
---
name: conventions
description: Core coding conventions and quality priorities for all projects.
when_to_use: Automatically loaded by agents via skills frontmatter. Load manually when you need to reference project coding standards, commit format, or quality priorities.
---

## Quality priorities (in order)
@@ -1,320 +0,0 @@
---
name: message-schema
description: Typed envelope schema for all inter-agent communication. Defines message types, required fields, and signal routing contracts.
when_to_use: Automatically loaded by all agents and the orchestrator via skills frontmatter. Reference when producing or consuming agent output.
---

Every agent output and orchestrator dispatch uses a **YAML frontmatter envelope** followed by a **markdown body**. The envelope contains routing metadata; the body contains human-readable content.

```
---
type: <message_type>
signal: <routing_signal>
# ... type-specific fields
---

[markdown body]
```

The `signal` field is the orchestrator's primary routing key. It determines the next action without parsing prose.

---

## Signals

### Agent → Orchestrator

| Signal | Meaning | Emitted by |
|--------|---------|------------|
| `rfr` | Work complete, ready for review | worker, debugger, documenter |
| `pass` | Review/audit passed cleanly | reviewer, auditor |
| `pass_with_notes` | Passed with non-blocking findings | reviewer, auditor |
| `fail` | Review/audit failed, needs rework | reviewer, auditor |
| `triage_complete` | Triage done, research questions identified (or none) | architect |
| `plan_complete` | Plan written to file | architect |
| `research_complete` | Research question answered | researcher |
| `blocked` | Cannot proceed, needs orchestrator intervention | any agent |
| `escalate` | Beyond agent scope, needs user decision | any agent |

### Orchestrator → Agent

| Signal | Meaning | Sent to |
|--------|---------|---------|
| `execute` | Perform this task | worker, debugger, documenter, architect |
| `revise` | Fix listed issues and resubmit | worker, debugger, documenter |
| `lgtm` | Approved, commit now | worker, debugger, documenter |
| `research` | Answer this research question | researcher |
| `plan` | Produce architecture and wave decomposition | architect |

---
|
||||
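These signals are designed to be routed mechanically. A minimal sketch of such a dispatch step, assuming Python; the handler names (`spawn_reviewer`, `advance_wave`, and so on) are hypothetical stand-ins for real orchestrator actions, not part of the schema:

```python
# Illustrative routing on the envelope's signal field alone — no prose parsing.
# Handler names are hypothetical placeholders for orchestrator actions.
def route(envelope: dict) -> str:
    signal = envelope["signal"]
    if signal == "rfr":
        return "spawn_reviewer"
    if signal in ("pass", "pass_with_notes"):
        return "advance_wave"
    if signal == "fail":
        return "send_revision_request"
    if signal == "triage_complete":
        # research_needed is carried in the triage_result envelope
        return "spawn_researchers" if envelope.get("research_needed") else "resume_architect"
    if signal == "plan_complete":
        return "begin_wave_dispatch"
    if signal == "research_complete":
        return "collect_research_context"
    if signal in ("blocked", "escalate"):
        return "escalate_or_reroute"
    raise ValueError(f"unknown signal: {signal}")
```

Because the routing key is a closed enum, an unknown value fails loudly instead of being silently mis-parsed.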
## Agent → Orchestrator Message Types

### worker_submission

Emitted by: worker, debugger, documenter

```yaml
---
type: worker_submission
signal: rfr | blocked | escalate
files_changed:
  - path/to/file1
  - path/to/file2
ac_coverage:
  AC1: pass | fail | partial | na
  AC2: pass | fail | partial | na
qa_check: pass | fail
---
```

Required: `type`, `signal`, `files_changed`, `qa_check`
Optional: `ac_coverage` (omit when no AC provided in assignment)

Body: `## Result` section with implementation details, then `## Self-Assessment` with per-criterion notes and known limitations.

**Routing contract for implementers:**
- `grunt` uses `blocked` to request reassignment to `worker` or orchestrator intervention.
- `worker` uses `blocked` to request reassignment to `senior` or orchestrator intervention.
- `senior` uses `blocked` to request orchestrator re-decomposition, plan revision, or a senior wave/team.
- Any implementer uses `escalate` only when the blocker requires a user decision or approval, not merely a stronger implementer.

When `signal: blocked` or `signal: escalate` is used, the body must include a one-line route hint:
- `Route: worker`
- `Route: senior`
- `Route: orchestrator`
- `Route: orchestrator (re-decompose)`
- `Route: orchestrator (user decision required)`
### review_verdict

Emitted by: reviewer

```yaml
---
type: review_verdict
signal: pass | pass_with_notes | fail
critical_count: 0
moderate_count: 2
minor_count: 1
ac_coverage:
  AC1: pass | fail
  AC2: pass | fail
---
```

Required: `type`, `signal`, `critical_count`, `moderate_count`, `minor_count`, `ac_coverage`

**Hard rule:** `critical_count > 0` requires `signal: fail`.

Body: Findings by severity (CRITICAL / MODERATE / MINOR), then AC Coverage details, then one-line summary.
### audit_verdict

Emitted by: auditor

```yaml
---
type: audit_verdict
signal: pass | pass_with_notes | fail
security_findings:
  critical: 0
  high: 0
  medium: 0
  low: 0
build_status: pass | fail | skipped
test_status: pass | fail | partial | skipped
typecheck_status: pass | fail | skipped
---
```

Required: `type`, `signal`, `security_findings`, `build_status`, `test_status`
Optional: `typecheck_status`

**Hard rule:** `security_findings.critical > 0` or `build_status: fail` or `test_status: fail` requires `signal: fail`. High-severity findings (`security_findings.high > 0`) do not require `fail` — use `pass_with_notes`.

Body: Security findings by severity (or CLEAN), then Runtime section with tested/passed/failed.
### triage_result

Emitted by: architect (Phase 1)

```yaml
---
type: triage_result
signal: triage_complete
tier: 0 | 1 | 2 | 3
research_needed: true | false
research_count: 3
---
```

Required: `type`, `signal`, `tier`, `research_needed`
Optional: `research_count` (present when `research_needed: true`)

**Routing:** `research_needed: false` means the orchestrator skips research and resumes architect directly for Phase 2.

Body: Triage section (Tier, Problem, Constraints, Success criteria, Out of scope), then Research Questions if any.
### plan_result

Emitted by: architect (Phase 2)

```yaml
---
type: plan_result
signal: plan_complete | blocked
plan_file: plans/kebab-case-title.md
wave_count: 3
step_count: 7
risk_tags:
  - security
  - data-mutation
has_blockers: false
---
```

Required: `type`, `signal`, `plan_file`, `wave_count`, `risk_tags`, `has_blockers`
Optional: `step_count`

**Routing:** `has_blockers: true` triggers user escalation before worker dispatch.

Body: One-paragraph summary of what the plan covers.
### research_result

Emitted by: researcher

```yaml
---
type: research_result
signal: research_complete
topic: "brief topic identifier"
verified: true | false
has_gotchas: true | false
---
```

Required: `type`, `signal`, `topic`, `verified`
Optional: `has_gotchas`

**Routing:** `verified: false` flags unverified assumptions to the architect before planning.

Body: Answer, Verified Facts with sources, Version Constraints, Gotchas, Unverified claims.

---
## Orchestrator → Agent Message Types

### task_assignment

Sent to: worker, debugger, documenter

```yaml
---
type: task_assignment
signal: execute
task: "short task title"
plan_file: plans/kebab-case-title.md
wave: 1
step: 2
---
```

Required: `type`, `signal`
Optional: `task`, `plan_file`, `wave`, `step` (Tier 0 tasks may lack plan context)

Body: Task spec, Acceptance Criteria, Context (interface contracts, constraints, out-of-scope), Files to modify/read.
### revision_request

Sent to: worker, debugger, documenter

```yaml
---
type: revision_request
signal: revise
iteration: 2
max_iterations: 5
fix_severity: critical | critical+moderate | all
---
```

Required: `type`, `signal`, `iteration`
Optional: `max_iterations`, `fix_severity`

`fix_severity` maps to iteration: 1-3 = `all`, 4-5 = `critical`.

Body: Issues to fix (from reviewer and/or auditor), grouped by source, with guidance.
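The iteration-to-severity mapping is simple enough to pin down in a few lines; a sketch, assuming Python (the cap of five cycles comes from the termination rules in the orchestrate skill):

```python
# Iterations 1-3 request fixes at all severities; iterations 4-5 narrow to
# CRITICAL-only, matching the severity-aware feedback loop.
def fix_severity(iteration: int) -> str:
    if not 1 <= iteration <= 5:
        raise ValueError("revision iterations are capped at 5 review cycles")
    return "all" if iteration <= 3 else "critical"
```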
### approval

Sent to: worker, debugger, documenter

```yaml
---
type: approval
signal: lgtm
---
```

Required: `type`, `signal`. Pure control signal — commit using conventional commit format.
### triage_request

Sent to: architect (Phase 1)

```yaml
---
type: triage_request
signal: execute
---
```

Required: `type`, `signal`

Body: Raw user request and any relevant project context.
### architecture_request

Sent to: architect (Phase 2, resume)

```yaml
---
type: architecture_request
signal: plan
---
```

Required: `type`, `signal`

Body: Assembled `## Research Context` block from all researchers, or "No research needed — proceed."
### research_request

Sent to: researcher

```yaml
---
type: research_request
signal: research
topic: "brief topic identifier"
---
```

Required: `type`, `signal`, `topic`

Body: Specific question, why it matters (what decision it gates), where to look, relevant project context.

---
## Schema Compliance

Before returning output, verify:

1. Output starts with a valid YAML frontmatter envelope (`---` delimiters)
2. `type` matches your message type
3. `signal` uses a valid enum value for your direction (agent→orch or orch→agent)
4. All required fields for your message type are present
5. Enum fields use exact values from this schema (no variations like "PASS" vs "pass")
6. Hard rules are satisfied (e.g., `critical_count > 0` implies `signal: fail`)
@ -1,7 +1,6 @@
---
name: orchestrate
description: Orchestration framework for decomposing and delegating complex tasks to the agent team. Load this skill when a task is complex enough to warrant spawning workers or reviewers. Covers task tiers, planning pipeline, wave dispatch, review, and git flow.
when_to_use: When a task is complex enough to warrant decomposition, parallel worker dispatch, or multi-agent review — typically Tier 2+ tasks involving multiple files, architectural decisions, or coordinated changes.
---

You are now acting as orchestrator. Decompose, delegate, validate, deliver. Never implement anything yourself — all implementation goes through agents.
@ -10,12 +9,10 @@ You are now acting as orchestrator. Decompose, delegate, validate, deliver. Neve
```
You (orchestrator)
├── grunt (haiku) — trivial, cheap implementer
├── worker (sonnet) — standard implementer
├── senior (opus) — ambiguous, architectural, or high-risk implementer
├── worker (sonnet default — haiku for trivial, opus for architectural)
├── debugger (sonnet) — bug diagnosis and minimal fixes
├── documenter (sonnet) — documentation only, never touches source
├── researcher (sonnet) — one per topic, parallel fact-finding
├── researcher (sonnet, background) — one per topic, parallel fact-finding
├── architect (opus, effort: max) — triage, research coordination, architecture, wave decomposition
├── reviewer (sonnet) — code quality + AC verification + claim checking
└── auditor (sonnet, background) — security analysis + runtime validation
```
@ -29,14 +26,14 @@ Determine before starting. Default to the lowest applicable tier.

| Tier | Scope | Approach |
|---|---|---|
| **0** | Trivial (typo, rename, one-liner) | Spawn `grunt`. No review. Ship directly. |
| **1** | Single straightforward task | Spawn `worker` → reviewer → ship or iterate |
| **0** | Trivial (typo, rename, one-liner) | Spawn worker (haiku). No review. Ship directly. |
| **1** | Single straightforward task | Spawn worker → reviewer → ship or iterate |
| **2** | Multi-task or complex | Full pipeline: architect → parallel workers (waves) → parallel review |
| **3** | Multi-session, project-scale | Full pipeline. Set milestones with the user. Background architect. |

**Cost-aware shortcuts:**
- Tier 0: skip planning entirely, spawn `grunt`
- Tier 1 with obvious approach: spawn `worker` directly, skip architect
- Tier 0: skip planning entirely, spawn worker with `model: haiku`
- Tier 1 with obvious approach: spawn worker directly, skip architect
- Tier 1 with uncertain approach: spawn architect (Phase 1 triage only, skip research)
- Tier 2+: run the full pipeline
@ -48,7 +45,7 @@ Determine before starting. Default to the lowest applicable tier.
What is actually being asked vs. implied? If ambiguous, ask one focused question. Don't ask for what you can discover yourself.

### Step 2 — Determine tier
Tier 0: spawn `grunt` directly. No decomposition, no review. Deliver and stop.
Tier 0: spawn worker directly with `model: haiku`. No decomposition, no review. Deliver and stop.

### Step 3 — Plan (Tier 1 with uncertain approach, or Tier 2+)
@ -65,9 +62,9 @@ Each researcher receives: the specific question, why it's needed, where to look,
Collect all outputs. Assemble into a single `## Research Context` block.

**Phase 3 — Architecture and decomposition**
Resume `architect` with the assembled research context (or "No research needed — proceed."). It produces the full plan: interface contracts, wave assignments, acceptance criteria — written to `plans/<title>.md`.
Resume `architect` with the assembled research context (or "No research needed — proceed."). It produces the full plan: interface contracts, wave assignments, acceptance criteria — written to `.claude/plans/<title>.md`.

**Resuming from an existing plan:** If a `plans/` file exists for this task, pass its path to the architect instead of running the pipeline again.
**Resuming from an existing plan:** If a `.claude/plans/` file exists for this task, pass its path to the architect instead of running the pipeline again.

### Step 4 — Consume the plan
@ -86,14 +83,14 @@ If the plan flags unresolved blockers or unverified assumptions, escalate to the

For each wave in the plan:

1. **Spawn ALL workers in the wave in a single response.** This is not optional — it is a performance requirement. Parallel agents run concurrently, reducing wall-clock time proportional to the number of agents. Serializing independent workers wastes time linearly.
1. **Spawn ALL workers in the wave in a single response.** This is not optional — it is a cost and performance requirement. Parallel workers share the same cached context prefix at ~10% token cost. Serializing independent workers wastes both money and time.

2. Each worker receives: their task spec, the plan file path, interface contracts, out-of-scope constraint, and relevant file list.

3. Select the implementer based on task complexity:
   - Trivial, well-scoped: `grunt`
   - Standard implementation: `worker`
   - Architectural reasoning, ambiguous requirements, systemic changes: `senior`
3. Select model based on task complexity:
   - Trivial, well-scoped: `model: haiku`
   - Standard implementation: `model: sonnet` (default)
   - Architectural reasoning, ambiguous requirements, systemic changes: `model: opus`

4. Wait for all workers in the wave to complete before advancing.
@ -101,32 +98,21 @@ For each wave in the plan:

**Workers must not make architectural decisions.** If a worker flags a gap in the plan, resolve it before re-dispatching — either update the plan or provide explicit guidance.

**Escalation routing:**
- `grunt -> worker` when the task is no longer mechanical but still well-defined
- `worker -> senior` when the task is implementable but needs stronger judgment or broader reasoning
- `grunt` or `worker` -> orchestrator when the real issue is a plan gap, changed scope, or missing requirement
- `senior -> orchestrator` when the work should be re-decomposed into a senior wave/team or the plan boundary must change

### Step 6 — Review

After each wave, spawn `reviewer` and `auditor` in a single response. They run in parallel.

- **Always spawn `reviewer`**
- **Spawn `auditor` when:** risk tags include `security`, `auth`, `data-mutation`, or `concurrent`
- **Spawn `auditor` when:** risk tags include `security`, `auth`, `data-mutation`, or `concurrent` — or any code that can be built and tested

Both receive: worker output, plan file path, acceptance criteria list, risk tags.

**Routing by envelope:** Read the `signal` field from each reviewer/auditor envelope:
- `signal: pass` → advance to next wave
- `signal: pass_with_notes` → advance, surface notes in final delivery
- `signal: fail` → check `critical_count` / `security_findings` and send worker to fix

Do not advance until both verdicts are collected.
Collect both verdicts before deciding whether to advance to the next wave or send back for fixes.

### Step 7 — Feedback loop on issues

1. Resume the worker with a `revision_request` envelope containing reviewer/auditor findings
2. On resubmission (worker returns `signal: rfr`), spawn reviewer again (new instance — stateless)
1. Resume the worker with reviewer findings and instruction to fix
2. On resubmission, spawn reviewer again (new instance — stateless)
3. Repeat

**Severity-aware decisions:**
@ -134,10 +120,9 @@ Do not advance until both verdicts are collected.
- Iterations 4–5: fix CRITICAL only. Ship MODERATE/MINOR as PASS WITH NOTES.

**Termination rules:**
- Same issue 3 consecutive iterations → re-dispatch to `senior` with full history
- Same issue 3 consecutive iterations → re-dispatch as worker with `model: opus` and full history
- 5 review cycles max → deliver what exists, disclose unresolved issues
- Reviewer vs. requirement conflict → stop, escalate to user with both sides
- If a `senior` reports `Route: orchestrator (re-decompose)`, stop iterating locally and re-plan before further dispatch

### Step 8 — Aggregate and deliver (Tier 2+)
@ -146,7 +131,7 @@ Do not advance until both verdicts are collected.
- **Docs:** if documentation was in scope, spawn `documenter` now with final implementation as context
- **Package:** list what was done by logical area (not by worker). Include all file paths. Surface PASS WITH NOTES caveats as a brief "Heads up" section.

Lead with the result. Don't expose worker IDs, wave counts, or internal mechanics. When subagent results return to your context, prefer concise summaries over verbatim output — the full detail is in the code, not the report.
Lead with the result. Don't expose worker IDs, wave counts, or internal mechanics.

---
@ -156,9 +141,9 @@ Lead with the result. Don't expose worker IDs, wave counts, or internal mechanic

| Condition | Agent | Model override |
|---|---|---|
| Trivial one-liner, rename, typo | `grunt` | — |
| Well-defined task, clear approach | `worker` | — |
| Architectural reasoning, ambiguous requirements, systemic changes, worker failures | `senior` | — |
| Trivial one-liner, rename, typo | `worker` | `haiku` |
| Well-defined task, clear approach | `worker` | `sonnet` (default) |
| Architectural reasoning, ambiguous requirements, systemic changes, worker failures | `worker` | `opus` |
| Bug diagnosis and fixing | `debugger` | — |
| Documentation only, never modify source | `documenter` | — |
@ -179,7 +164,7 @@ When multiple risk tags are present, take the union. Spawn all required reviewer

### Agent lifecycles

**grunt / worker / senior / debugger / documenter**
**worker / debugger / documenter**
- Resume when iterating on the same task or closely related follow-up
- Spawn fresh when: fundamentally wrong path, re-dispatching with different model, requirements changed, agent is thrashing
@ -199,53 +184,31 @@ When multiple risk tags are present, take the union. Spawn all required reviewer
**documenter**
- Spawn after implementation wave is complete. Background. One instance per completed scope area.

### Permission model

Agent `permissionMode` in frontmatter is overridden when the parent (you, the orchestrator) runs in `acceptEdits` or `bypassPermissions` mode — the child inherits the parent's mode. This means `permissionMode: plan` on read-only agents like architect, researcher, and reviewer is **not enforced at runtime**.

The actual write protection for read-only agents comes from `disallowedTools: Write, Edit` — this is enforced regardless of permission mode. Do not rely on `permissionMode` as a safety boundary; rely on tool restrictions.

### Parallelism mandate

**Same-wave workers must be spawned in a single response.**
**Reviewer and auditor must be spawned in a single response.**
**All researchers must be spawned in a single response.**

Spawning agents sequentially when they could run in parallel is a protocol violation, not a style choice. Parallel dispatch reduces wall-clock latency proportionally — N agents in parallel complete in the time of the slowest, not the sum of all.
Spawning agents sequentially when they could run in parallel is a protocol violation, not a style choice. Parallel agents share a cached context prefix — each additional parallel agent costs ~10% of what the first agent paid for that shared context.

### Git flow

Workers return `signal: rfr` when done. You control commits:
- Send `signal: lgtm` → worker commits
- Mark a step `- [x]` in the plan file **only when every worker assigned to that step has received `signal: lgtm`**
- Send `signal: revise` → worker fixes and resubmits with `signal: rfr`
Workers signal `RFR` when done. You control commits:
- `LGTM` → worker commits
- Mark a step `- [x]` in the plan file **only when every worker assigned to that step has received LGTM**
- `REVISE` → worker fixes and resubmits with `RFR`
- Merge worktree branches after individual validation
- On Tier 2+: merge each worker's branch after validation, resolve conflicts if branches overlap

Only the orchestrator updates the plan file. Workers must not modify `plans/`.
Only the orchestrator updates the plan file. Workers must not modify `.claude/plans/`.

### Message schema
### Review signals

All agent communication uses typed YAML frontmatter envelopes defined in the `message-schema` skill. The `signal` field is your primary routing key.

| Envelope signal | Direction | Your action |
| Signal | Direction | Meaning |
|---|---|---|
| `signal: rfr` | worker → you | Dispatch to reviewer (+ auditor if risk tags match) |
| `signal: pass` | reviewer/auditor → you | Advance to next wave |
| `signal: pass_with_notes` | reviewer/auditor → you | Advance, surface notes in delivery |
| `signal: fail` | reviewer/auditor → you | Send `revision_request` to worker |
| `signal: triage_complete` | architect → you | Check `research_needed`, spawn researchers or resume architect |
| `signal: plan_complete` | architect → you | Read plan file, begin wave dispatch |
| `signal: research_complete` | researcher → you | Collect, assemble into Research Context |
| `signal: blocked` (`plan_result`) | architect → you | Escalate to user before dispatching workers |
| `signal: blocked` (`worker_submission`) | implementer → you | Route by the envelope's explicit next-step hint |
| `signal: escalate` | any → you | Escalate to user with context |

Implementer route handling:
- `Route: worker` -> reassign to `worker`
- `Route: senior` -> reassign to `senior`
- `Route: orchestrator` -> amend the plan or provide explicit guidance before redispatch
- `Route: orchestrator (re-decompose)` -> re-run architect or split into a senior wave/team with explicit ownership
- `Route: orchestrator (user decision required)` -> take the issue to the user

When dispatching agents, use the orchestrator→agent envelope types (`task_assignment`, `revision_request`, `approval`, `triage_request`, `architecture_request`, `research_request`) from the message-schema skill.
| `RFR` | worker → orchestrator | Ready for review |
| `LGTM` | orchestrator → worker | Approved, commit your changes |
| `REVISE` | orchestrator → worker | Fix the listed issues and resubmit |
| `VERDICT: PASS / PASS WITH NOTES / FAIL` | reviewer → orchestrator | Review result |
| `VERDICT: PASS / PARTIAL / FAIL` | auditor → orchestrator | Runtime validation result |
|
||||
|
|
|
|||
10
skills/project/SKILL.md
Normal file
|
|
@ -0,0 +1,10 @@
|
|||
---
name: project
description: Instructs agents to check for and ingest a project-specific skill file before starting work.
---

Before starting any work, check for a project-specific skill file at `.claude/skills/project.md` in the current working directory.

If it exists, read it and treat its contents as additional instructions — project conventions, architecture notes, domain context, or anything else the project maintainer has defined. These instructions take precedence over general defaults where they conflict.

If it does not exist, continue without it.
@ -1,7 +1,6 @@
---
name: qa-checklist
description: Self-validation checklist. All workers run this against their own output before returning results.
when_to_use: Loaded by all agents that produce output envelopes. Run before returning results to validate factual accuracy, scope, security, and schema compliance.
---

## Self-QA checklist
@ -9,7 +8,7 @@ when_to_use: Loaded by all agents that produce output envelopes. Run before retu
Before returning your output, validate against every item below. If you find a violation, fix it — don't just note it.

### Factual accuracy
- Every file path, function name, class name, and line number you reference — does it actually exist? Verify by reading the code if uncertain. Never guess paths or signatures.
- Every file path, function name, class name, and line number you reference — does it actually exist? Verify with Read/Grep if uncertain. Never guess paths or signatures.
- Every version number, API endpoint, or external reference — is it correct? If you can't verify, say "unverified" explicitly.
- No invented specifics. If you don't know something, say so.
@ -40,20 +39,9 @@ Before returning your output, validate against every item below. If you find a v
- If you stated something as fact, can you back it up? Challenge your own claims.
- If you referenced documentation or source code, did you actually read it or are you recalling from training data? When it matters, verify.

### Schema compliance
- Does your output start with a valid YAML frontmatter envelope (`---` delimiters)?
- Does the `type` field match your message type?
- Does the `signal` field use a valid enum value from the message-schema skill?
- Are all required fields for your message type present?
- Are hard rules satisfied?
  - `review_verdict`: `critical_count > 0` requires `signal: fail`
  - `audit_verdict`: `security_findings.critical > 0` or `build_status: fail` or `test_status: fail` requires `signal: fail`
  - `plan_result`: if you set `has_blockers: true`, confirm this is intentional — it triggers user escalation before worker dispatch

## After validation

Set `qa_check: pass` or `qa_check: fail` in your frontmatter envelope. This replaces the old `QA self-check` prose line.

In your Self-Assessment section, include:
- If qa_check is fail: what you found and fixed before submission
- `QA self-check: [pass/fail]` — did your output survive the checklist?
- If fail: what you found and fixed before submission
- If anything remains unverifiable, flag it explicitly as `Unverified: [claim]`
@ -1,33 +1,19 @@
---
name: worker-protocol
description: Standard output format, feedback handling, and operational procedures for all worker agents.
when_to_use: Loaded by worker, debugger, and documenter agents. Defines the worker_submission envelope format and commit workflow.
---

## Output format

Wrap your output in a `worker_submission` envelope per the message-schema skill:

```yaml
---
type: worker_submission
signal: rfr | blocked | escalate
files_changed:
  - path/to/file1
  - path/to/file2
ac_coverage: # optional — omit when no AC provided
  AC1: pass | fail | partial | na
  AC2: pass | fail | partial | na
qa_check: pass | fail
---
```

Then the markdown body:
Return using this structure. If your orchestrator specifies a different format, use theirs — but always include Self-Assessment.

```
## Result
[Your deliverable here]

## Files Changed
[List files modified/created, or "N/A" if not a code task]

## Self-Assessment
- Acceptance criteria met: [yes/no per criterion, one line each]
- Known limitations: [any, or "none"]
```
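A filled-in instance of the envelope above may make the shape concrete — the file path and criteria here are purely illustrative values, not defaults:

```yaml
---
type: worker_submission
signal: rfr
files_changed:
  - src/example/session.ts
ac_coverage:
  AC1: pass
  AC2: partial
qa_check: pass
---
```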
@ -42,20 +28,20 @@ Produce the assigned deliverable. Accurately. Completely. Nothing more.

## Self-QA

Before returning your output, run the `qa-checklist` skill against your work. Fix any issues you find — don't just note them. Set `qa_check: pass` or `qa_check: fail` in your frontmatter envelope. If you can't pass your own QA, flag what remains and why in your Self-Assessment.
Before returning your output, run the `qa-checklist` skill against your work. Fix any issues you find — don't just note them. Your Self-Assessment must include the `QA self-check: pass/fail` line. If you can't pass your own QA, flag what remains and why.

## Cost sensitivity

- Keep responses tight. Result only.
- Context is passed inline, but if your task requires reading files not provided, verify by reading the relevant files. Don't guess at file contents. Keep it targeted.
- Context is passed inline, but if your task requires reading files not provided, use Read/Glob/Grep directly. Don't guess at file contents — verify. Keep it targeted.

## Commits

Do not commit until your orchestrator sends `signal: lgtm`.
Do not commit until your orchestrator sends `LGTM`. End your output with `RFR` to signal you're ready for review.

- `signal: rfr` — you → orchestrator: work complete, ready for review
- `signal: lgtm` — orchestrator → you: approved, commit now
- `signal: revise` — orchestrator → you: needs fixes (issues attached)
- `RFR` — you → orchestrator: work complete, ready for review
- `LGTM` — orchestrator → you: approved, commit now
- `REVISE` — orchestrator → you: needs fixes (issues attached)

When you receive `LGTM`:
- Commit using conventional commit format per project conventions
@ -68,6 +54,6 @@ If blocked (tool failure, missing file, build error): try to work around it and

## Receiving reviewer feedback

Your orchestrator may resume you with findings from the reviewer (code quality + AC verification) or the auditor (security + runtime validation), or both.
Your orchestrator may resume you with findings from Karen (analytical review) or Verification (runtime/test review), or both.

You already have the task context and your previous work. Address the issues specified. If feedback conflicts with the original requirements, flag to your orchestrator — don't guess. Resubmit complete output in standard format. In Self-Assessment, note which issues you addressed and reference the reviewer or auditor for each.
You already have the task context and your previous work. Address the issues specified. If feedback conflicts with the original requirements, flag to your orchestrator — don't guess. Resubmit complete output in standard format. In Self-Assessment, note which issues you addressed and reference the reviewer (Karen / Verification) for each.
@@ -1,113 +0,0 @@

# Agent Runtime Config v1

`SETTINGS.yaml` is the human-authored source of truth for portable runtime intent in this repo.

Team inventory metadata is defined separately in `TEAM.yaml` (see `spec/team-protocol-v1.md`). This spec only covers runtime policy.

## Goals

- Keep one editable config for approval, filesystem, network, and model intent.
- Generate backward-compatible Claude and Codex outputs from that shared intent.
- Make adapter lossiness explicit where provider config surfaces do not line up.

## Scope

Version 1 standardizes:

- portable model tier and reasoning level
- filesystem access intent
- approval intent
- network access intent
- portable tool classes
- protected path rules
- dangerous shell command prompts
- a narrow set of target-specific escape hatches for compatibility overrides

Version 1 does not attempt to standardize:

- every provider model name
- provider-specific tool grammars
- every future runtime capability for local agents, IDE plugins, or hosted agents

## Shared fields

### `model`

- `class`: `fast | balanced | powerful`
- `reasoning`: `low | medium | high | max`

### `runtime`

- `filesystem`: `read-only | workspace-write`
- `approval`: `manual | guarded-auto | full-auto`
- `network_access`: boolean
- `tools`: portable tool classes such as `shell`, `read`, `edit`, `write`, `glob`, `grep`, `web_fetch`, `web_search`

### `safety`

- `protected_paths`: glob patterns that should remain blocked from normal reads or writes
- `dangerous_shell_commands.ask`: shell command patterns that should remain approval-gated

### `targets`

Target blocks are escape hatches, not the main schema.

Current target-specific fields:

- `targets.claude.claude_md_excludes`
- `targets.codex.approval_policy` (optional override of derived approval)
- `targets.codex.network_access` (optional override of derived network access)

Authority rules:

- `runtime.approval` and `runtime.network_access` are the portable source of truth.
- Codex target fields exist for explicit compatibility overrides and should normally be omitted.
- When Codex target fields are set, they intentionally override the derived Codex value.
- In this repo, `targets.codex.approval_policy` and `targets.codex.network_access` are intentionally set so Codex runs with `approval_policy = "never"` and network enabled by default. This is a deliberate target-specific compatibility choice, not an accidental divergence.
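A minimal `SETTINGS.yaml` sketch combining the fields above (all values are illustrative, not this repo's actual configuration):

```yaml
model:
  class: balanced
  reasoning: medium

runtime:
  filesystem: workspace-write
  approval: full-auto
  network_access: true
  tools: [shell, read, edit, write, glob, grep]

safety:
  protected_paths:
    - ".env*"
    - "secrets/**"
  dangerous_shell_commands:
    ask:
      - "rm -rf *"
      - "git push --force*"

targets:
  codex:
    approval_policy: never
    network_access: true
```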
## Adapter rules

### Claude Code

`settings.json` is generated as a compatibility artifact.

- `runtime.filesystem = read-only` -> `permissions.defaultMode = "plan"`
- `runtime.filesystem = workspace-write` -> `permissions.defaultMode = "acceptEdits"`
- `runtime.tools` -> Claude tool allow-list
- `safety.protected_paths` -> Claude `deny` entries for `Read`, `Write`, and `Edit`
- `dangerous_shell_commands.ask` -> Claude `ask` entries wrapped as `Bash(...)`

Lossiness:

- Claude vends `allow` / `deny` / `ask` as tool-pattern rules.
- Shared `approval` intent does not map 1:1 to Claude beyond `plan` vs `acceptEdits`.
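As a sketch, a `workspace-write` intent with one protected path and one gated shell pattern might render to a `settings.json` fragment like this (field names follow the mapping above; the path and command patterns are hypothetical):

```json
{
  "permissions": {
    "defaultMode": "acceptEdits",
    "deny": [
      "Read(./secrets/**)",
      "Write(./secrets/**)",
      "Edit(./secrets/**)"
    ],
    "ask": [
      "Bash(rm -rf *)"
    ]
  }
}
```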
### Codex CLI

`codex/config.toml` is generated directly from shared intent.

- `runtime.filesystem = read-only` -> `sandbox_mode = "read-only"`
- `runtime.filesystem = workspace-write` -> `sandbox_mode = "workspace-write"`
- `runtime.approval = manual` -> `approval_policy = "on-request"` (unless overridden)
- `runtime.approval = guarded-auto` -> `approval_policy = "untrusted"` (unless overridden)
- `runtime.approval = full-auto` -> `approval_policy = "never"` (unless overridden)
- `runtime.network_access` -> `[sandbox_workspace_write].network_access`

Lossiness:

- Codex does not expose Claude-style per-tool `allow` / `deny` / `ask` pattern controls in `config.toml`.
- Protected paths and dangerous command prompts are therefore only partially representable in Codex config today.
- Codex does expose coarse approval controls, including `approval_policy` and documented granular approval categories, but not the same pattern-level permission model Claude exposes.
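A sketch of the derived `codex/config.toml` for a `workspace-write`, `full-auto`, network-enabled intent (only fields named in the mapping above; values are illustrative):

```toml
sandbox_mode = "workspace-write"
approval_policy = "never"

[sandbox_workspace_write]
network_access = true
```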
## Compatibility contract

The repo preserves these compatibility artifacts:

- `settings.json`
- `claude/settings.json`
- `claude/CLAUDE.md`
- `codex/config.toml`
- `codex/AGENTS.md`
- generated agent outputs for both targets

These are build artifacts, not authored source files. `SETTINGS.yaml` is the required runtime input.
@@ -1,139 +0,0 @@

# Team Protocol v1

`TEAM.yaml` defines the team metadata and inventory protocol for portable generation targets in this repo.

Implementation status:

- Wave 1: protocol + documentation introduced
- Wave 2: generator + install integration completed; TEAM metadata is the active source of truth for team inventory behavior

## Goals

- Define a neutral, schema-backed source for agents, skills, and rules metadata.
- Keep Claude and Codex as adapter targets rather than protocol sources.
- Preserve Markdown as the human-authored instruction content format.
- Preserve current generated output behavior unless a narrow caveat is explicitly documented.

## Scope

Version 1 standardizes:

- agent inventory and metadata required for generation
- skill inventory metadata
- rule inventory and deterministic ordering
- adapter boundaries for Claude and Codex
- validation requirements needed by the generator

Version 1 does not standardize:

- full prose structure for skills/rules/agents
- provider-specific runtime/tool grammars
- every future adapter target

## Source-of-Truth Split

- `SETTINGS.yaml`: runtime policy protocol (filesystem, approval intent, network, model intent)
- `TEAM.yaml`: team inventory protocol (agents, skills, rules metadata and references)
- Markdown files: instruction bodies
  - agents: `agents/*.md`
  - skills: `skills/*/SKILL.md`
  - rules: `rules/*.md`

Generated artifacts remain:

- `settings.json`
- `claude/`
- `codex/`

## Required TEAM Inventories

`TEAM.yaml` must contain:

- `agents`
- `skills`
- `rules`

## Agent Contract

Each agent entry includes metadata required for adapter generation:

- `id`
- `name`
- `description`
- `model`
- `effort`
- `permission_mode`
- `tools`
- `disallowed_tools`
- `max_turns`
- `skills`
- optional `background`
- optional `memory`
- optional `isolation`
- `instruction_file`

`instruction_file` points to the Markdown source for long-form instructions.
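A sketch of a single agent entry using the fields above (the agent name and all values are hypothetical, not taken from this repo's actual `TEAM.yaml`):

```yaml
agents:
  worker:
    id: worker
    name: Worker
    description: Implements scoped tasks and submits them for review
    model: balanced
    effort: medium
    permission_mode: acceptEdits
    tools: [shell, read, edit, write, glob, grep]
    disallowed_tools: [web_search]
    max_turns: 50
    skills: [orchestrate]
    instruction_file: agents/worker.md
```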
## Skill Contract

Each skill entry includes lightweight metadata and content reference:

- `id`
- `name`
- `description`
- `instruction_file`
- target/install metadata (`applies_to`, `install_mode`)

Skill prose remains in `skills/*/SKILL.md`.
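A sketch of a skill entry (the skill name and the `applies_to` / `install_mode` values are assumptions for illustration):

```yaml
skills:
  orchestrate:
    id: orchestrate
    name: Orchestrate
    description: Coordinates worker, reviewer, and auditor task flow
    instruction_file: skills/orchestrate/SKILL.md
    applies_to: [claude, codex]
    install_mode: symlink
```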
## Rule Contract

Each rule entry includes:

- `id`
- `source_file`
- deterministic order metadata
- optional target metadata

Rule prose remains in `rules/*.md`.
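A sketch of a rule entry; representing the deterministic order metadata as a numeric `order` key is an assumption, since the spec does not pin down its shape:

```yaml
rules:
  conventional-commits:
    id: conventional-commits
    source_file: rules/conventional-commits.md
    order: 10
```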
## Adapter Boundaries

Claude and Codex are render targets.

Current target behavior:

- Claude generation consumes TEAM metadata + Markdown content and outputs:
  - `claude/CLAUDE.md`
  - `claude/settings.json`
  - `claude/agents/*.md`
- Codex generation consumes TEAM metadata + Markdown content and outputs:
  - `codex/config.toml`
  - `codex/AGENTS.md`
  - `codex/agents/*.toml`
  - `codex/skills` symlinked to the shared skill directories for relative `skills.config` references
## Validation Requirements

TEAM validation enforces schema + runtime checks for:

- schema version correctness
- required sections present
- unique IDs for agents/skills/rules
- referenced files exist
- deterministic rule ordering inputs are valid
- `order` IDs match declared inventory keys
- item `id` matches keyed map entry
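Two of the checks above, unique IDs and keyed-entry consistency, can be sketched as follows (a hypothetical helper, not this repo's actual validator):

```python
# Hypothetical sketch of two TEAM.yaml validation checks: each keyed
# map entry's `id` must match its key, and IDs must be unique across
# the agents/skills/rules inventories.

def validate_inventories(team: dict) -> list[str]:
    errors = []
    seen_ids = set()
    for section in ("agents", "skills", "rules"):
        for key, entry in team.get(section, {}).items():
            if entry.get("id") != key:
                errors.append(f"{section}.{key}: id {entry.get('id')!r} != key {key!r}")
            if key in seen_ids:
                errors.append(f"{section}.{key}: duplicate id")
            seen_ids.add(key)
    return errors

# A mismatched id and a cross-section duplicate both produce errors.
team = {"agents": {"worker": {"id": "worker"}},
        "skills": {"worker": {"id": "bad"}}}
print(validate_inventories(team))
```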
## Compatibility Caveats

- Existing YAML frontmatter in `agents/*.md` may remain for editorial continuity, but generation does not use it for team metadata.
- Output diffs that are purely formatting-related are acceptable; semantic behavior changes are not unless explicitly documented.
- TEAM schema is intentionally rigid/repo-specific in v1; inventory additions/removals require schema updates in lockstep.
- Agent metadata is not fully portable across targets. Current Codex custom-agent docs cover session-style fields such as `model`, `model_reasoning_effort`, `sandbox_mode`, `mcp_servers`, and `skills.config`, but do not document per-agent equivalents for TEAM's `background`, `memory`, or `isolation` fields.

## Out of Scope

- Rewriting instruction prose for style
- Full content schemas for skill/rule prose
- Generalizing all future adapters in v1