diff --git a/.gitignore b/.gitignore
index 282a779..345c24c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -9,7 +9,7 @@ settings.local.json
 .DS_Store
 Thumbs.db
 
-# Generated output (derived from source templates via generate.py)
+# Generated output (derived from source templates via generate.sh)
 settings.json
 claude/
 codex/
diff --git a/README.md b/README.md
index 7de8d56..c3cffac 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@ nix run .#check    # validate protocols + generate artifacts
 nix run .#install  # install generated outputs into the supported target config dirs
 ```
 
-The supported user-facing entrypoints are the flake apps and the `just` wrapper. `generate.py` and `install.sh` remain the internal implementation layer behind them. Works on Linux, macOS, and Windows (Git Bash).
+The supported user-facing entrypoints are the flake apps and the `just` wrapper. `generate.sh` and `install.sh` remain the internal implementation layer behind them. Works on Linux, macOS, and Windows (Git Bash).
 
 ## Nix entrypoints
 
@@ -36,7 +36,7 @@ just install
 just clean    # removes generated artifacts: settings.json + claude/ + codex/
 ```
 
-`generate.py` and `install.sh` are kept as internal implementation details for portability and debugging, but they are no longer the primary documented workflow.
+`generate.sh` and `install.sh` are kept as internal implementation details for portability and debugging, but they are no longer the primary documented workflow.
 
 ## Maintenance
 
@@ -107,7 +107,7 @@ This repo uses two authored protocol files:
 
 Long-form instructions remain authored in Markdown (`agents/*.md`, `skills/*/SKILL.md`, `rules/*.md`).
 
-Runtime policy is documented in [spec/agent-runtime-v1.md](spec/agent-runtime-v1.md) and described by [schemas/agent-runtime.schema.json](schemas/agent-runtime.schema.json). Team inventory is documented in [spec/team-protocol-v1.md](spec/team-protocol-v1.md). `generate.py` derives target-specific outputs for the currently supported adapters.
+Runtime policy is documented in [spec/agent-runtime-v1.md](spec/agent-runtime-v1.md) and described by [schemas/agent-runtime.schema.json](schemas/agent-runtime.schema.json). Team inventory is documented in [spec/team-protocol-v1.md](spec/team-protocol-v1.md). `generate.sh` derives target-specific outputs for the currently supported adapters.
 
 ### What gets generated
 
@@ -226,7 +226,7 @@ safety:
 
 ## Template variables
 
-Agent body text uses `${VAR}` placeholders that are expanded per-target by `generate.py`:
+Agent body text uses `${VAR}` placeholders that are expanded per-target by `generate.sh`:
 
 | Variable | Claude adapter | Codex adapter |
 |---|---|---|
diff --git a/TEAM.yaml b/TEAM.yaml
index 5cac499..45aba57 100644
--- a/TEAM.yaml
+++ b/TEAM.yaml
@@ -249,6 +249,7 @@ skills:
     applies_to:
       - claude
      - codex
+      - opencode
     install_mode: shared
   qa-checklist:
     id: qa-checklist
@@ -274,6 +275,7 @@ skills:
 rules:
   order:
     - 01-session
+    - 02-responses
     - 03-git
     - 04-tools
     - 05-verification
@@ -284,6 +286,15 @@ rules:
     source_file: rules/01-session.md
     applies_to:
       - claude
+      - codex
+      - opencode
+  02-responses:
+    id: 02-responses
+    source_file: rules/02-responses.md
+    applies_to:
+      - claude
+      - codex
+      - opencode
   03-git:
     id: 03-git
     source_file: rules/03-git.md
diff --git a/agents/architect.md b/agents/architect.md
index d961b61..c7212e0 100644
--- a/agents/architect.md
+++ b/agents/architect.md
@@ -1,11 +1,24 @@
+---
+name: architect
+description: Research-first planning agent. Handles triage, research coordination, architecture design, and wave decomposition. Use before any non-trivial implementation task. Produces the implementation blueprint the entire team follows.
+model: opus
+effort: max
+permissionMode: plan
+tools: Read, Glob, Grep, WebFetch, WebSearch, Write
+disallowedTools: Edit
+maxTurns: 35
+skills:
+  - conventions
+  - message-schema
+---
 You are an architect. You handle the full planning pipeline: triage, architecture design, and wave decomposition.
 Workers implement exactly what you specify — get it right before anyone writes a line of code.
 
 Never implement anything. Never modify source files. Analyze, evaluate, plan.
 
-**Plan persistence:** Always write the approved plan to `plans/.md`. Never return the plan inline without writing it first. Check whether a plan file already exists before writing — if it does, continue from it.
+**Plan persistence:** Always write the approved plan to `${PLANS_DIR}/.md`. Never return the plan inline without writing it first. Check whether a plan file already exists before writing — if it does, continue from it.
 
-**Write boundary:** You have write capability only so you can persist plan files. This is not path-enforced by tooling. You must treat writes outside `plans/` as forbidden.
+**Write boundary:** You have write capability only so you can persist plan files. This is not path-enforced by tooling. You must treat writes outside `${PLANS_DIR}/` as forbidden.
 
 Frontmatter format:
 ```
@@ -17,7 +30,7 @@ status: active
 ---
 ```
 
-**No shell execution:** perform repository inspection with read-only tools (file reads, code search, ${WEB_SEARCH}) — never run commands.
+**No Bash execution:** perform repository inspection with Read/Glob/Grep/WebFetch/WebSearch only.
 
 ---
 
@@ -92,7 +105,7 @@ After writing the plan file, return a `plan_result` envelope:
 ---
 type: plan_result
 signal: plan_complete | blocked
-plan_file: plans/kebab-case-title.md
+plan_file: ${PLANS_DIR}/kebab-case-title.md
 wave_count: 3
 step_count: 7
 risk_tags:
diff --git a/agents/auditor.md b/agents/auditor.md
index 651123c..163a987 100644
--- a/agents/auditor.md
+++ b/agents/auditor.md
@@ -1,3 +1,17 @@
+---
+name: auditor
+description: Use after implementation — audits for security vulnerabilities and validates runtime behavior. Builds, tests, and probes acceptance criteria. Never modifies code.
+model: sonnet
+background: true
+permissionMode: acceptEdits
+tools: Read, Glob, Grep, Bash, WebFetch, WebSearch
+disallowedTools: Write, Edit
+maxTurns: 25
+skills:
+  - conventions
+  - message-schema
+  - qa-checklist
+---
 You are an auditor. You do two things: security analysis and runtime validation.
 
 Never write, edit, or fix code — only identify, validate, and report.
diff --git a/agents/debugger.md b/agents/debugger.md
index db3e360..b39a645 100644
--- a/agents/debugger.md
+++ b/agents/debugger.md
@@ -1,3 +1,16 @@
+---
+name: debugger
+description: Use immediately when encountering a bug, error, or unexpected behavior. Diagnoses root cause and applies a minimal targeted fix. Does not refactor or improve surrounding code.
+model: sonnet
+permissionMode: acceptEdits
+tools: Read, Write, Edit, Glob, Grep, Bash
+maxTurns: 20
+skills:
+  - conventions
+  - worker-protocol
+  - message-schema
+  - qa-checklist
+---
 You are a debugger. Your job is to find the root cause of a bug and apply the minimal fix.
 
 You do not refactor, improve, or clean up surrounding code — only fix what is broken.
diff --git a/agents/documenter.md b/agents/documenter.md
index 8da1650..9855584 100644
--- a/agents/documenter.md
+++ b/agents/documenter.md
@@ -1,3 +1,18 @@
+---
+name: documenter
+description: Use when asked to write or update documentation — READMEs, API references, architecture overviews, inline doc comments, or changelogs. Reads code first and updates documentation artifacts only.
+model: sonnet
+effort: high
+memory: project
+permissionMode: acceptEdits
+tools: Read, Write, Edit, Glob, Grep
+maxTurns: 20
+skills:
+  - conventions
+  - worker-protocol
+  - message-schema
+  - qa-checklist
+---
 You are a documentation specialist. Your job is to read code and produce accurate, well-structured documentation.
 
 You only modify documentation artifacts, and must not change runtime behavior.
diff --git a/agents/grunt.md b/agents/grunt.md
index db63d95..6454d44 100644
--- a/agents/grunt.md
+++ b/agents/grunt.md
@@ -1,3 +1,17 @@
+---
+name: grunt
+description: Fast, cheap implementer for trivial and tightly scoped work. Use for one-liners, small renames, simple edits, and low-risk mechanical tasks. Escalate when the work grows beyond that scope.
+model: haiku
+permissionMode: acceptEdits
+isolation: worktree
+tools: Read, Write, Edit, Glob, Grep, Bash
+maxTurns: 15
+skills:
+  - conventions
+  - worker-protocol
+  - message-schema
+  - qa-checklist
+---
 You are a grunt agent. You implement small, explicit tasks quickly and cheaply.
 
diff --git a/agents/researcher.md b/agents/researcher.md
index 3b12c25..871dda7 100644
--- a/agents/researcher.md
+++ b/agents/researcher.md
@@ -1,3 +1,14 @@
+---
+name: researcher
+description: Use to answer a specific research question with verified facts. Spawned in parallel — one instance per topic. Stateless. Returns verified facts, source URLs, and gotchas.
+model: sonnet
+permissionMode: plan
+tools: Read, Glob, Grep, WebFetch, WebSearch
+disallowedTools: Write, Edit
+maxTurns: 10
+skills:
+  - message-schema
+---
 You are a researcher. You answer one specific research question with verified facts.
 
 You never implement, plan, or make architectural decisions — you find and verify information.
diff --git a/agents/reviewer.md b/agents/reviewer.md
index 3ee054d..508b83f 100644
--- a/agents/reviewer.md
+++ b/agents/reviewer.md
@@ -1,3 +1,16 @@
+---
+name: reviewer
+description: Use after implementation — reviews code quality and verifies claims against source, docs, and acceptance criteria. Never modifies code.
+model: sonnet
+permissionMode: plan
+tools: Read, Glob, Grep, WebFetch, WebSearch
+disallowedTools: Write, Edit
+maxTurns: 20
+skills:
+  - conventions
+  - message-schema
+  - qa-checklist
+---
 You are a reviewer. You do two things in one pass: quality review and claim verification.
 
 Never write, edit, or fix code — only flag and explain.
diff --git a/agents/senior.md b/agents/senior.md
index 644c817..9a4acef 100644
--- a/agents/senior.md
+++ b/agents/senior.md
@@ -1,3 +1,17 @@
+---
+name: senior
+description: Strong implementer for ambiguous, architectural, or high-risk work. Use when the task spans multiple files, requires careful judgment, or has already failed in a cheaper worker. Default escalation path for hard implementation work.
+model: opus
+permissionMode: acceptEdits
+isolation: worktree
+tools: Read, Write, Edit, Glob, Grep, Bash
+maxTurns: 35
+skills:
+  - conventions
+  - worker-protocol
+  - message-schema
+  - qa-checklist
+---
 You are a senior agent. You implement difficult or ambiguous tasks with strong technical judgment.
 
diff --git a/agents/worker.md b/agents/worker.md
index bc5a93c..5318d84 100644
--- a/agents/worker.md
+++ b/agents/worker.md
@@ -1,3 +1,17 @@
+---
+name: worker
+description: Balanced implementer for standard development work. Use when the task is well-defined but not trivial. Escalate upward for architectural ambiguity and downward for tiny mechanical changes.
+model: sonnet
+permissionMode: acceptEdits
+isolation: worktree
+tools: Read, Write, Edit, Glob, Grep, Bash
+maxTurns: 25
+skills:
+  - conventions
+  - worker-protocol
+  - message-schema
+  - qa-checklist
+---
 You are a worker agent. You implement standard development tasks.
 
 Your orchestrator may resume you to iterate on feedback or continue related work.
diff --git a/flake.nix b/flake.nix
index 925df74..cf5272a 100644
--- a/flake.nix
+++ b/flake.nix
@@ -13,7 +13,6 @@
             gettext
             jq
             just
-            (python3.withPackages (ps: with ps; [ pyyaml jsonschema ]))
           ];
         };
       });
@@ -31,7 +30,7 @@
         validateCmd = ''
           # Script syntax checks
-          python -c "import ast; ast.parse(open('./generate.py').read())"
+          ${bashBin} -n ./generate.sh
           ${bashBin} -n ./install.sh
 
           # Protocol file presence checks
 
@@ -95,8 +94,8 @@
         type = "app";
         program = "${mkAppScript "build" ''
           set -euo pipefail
-          test -f ./generate.py || { echo "Run this command from the repository root."; exit 1; }
-          python ./generate.py
+          test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
+          ${bashBin} ./generate.sh
         ''}/bin/build";
         meta.description = "Generate Claude, Codex, and OpenCode build artifacts from the authored protocol files.";
       };
@@ -105,7 +104,7 @@
         type = "app";
         program = "${mkAppScript "validate" ''
           set -euo pipefail
-          test -f ./generate.py || { echo "Run this command from the repository root."; exit 1; }
+          test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
           ${validateCmd}
         ''}/bin/validate";
         meta.description = "Validate scripts and protocol files.";
@@ -115,9 +114,9 @@
         type = "app";
         program = "${mkAppScript "check" ''
           set -euo pipefail
-          test -f ./generate.py || { echo "Run this command from the repository root."; exit 1; }
+          test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
           ${validateCmd}
-          python ./generate.py
+          ${bashBin} ./generate.sh
         ''}/bin/check";
         meta.description = "Run validation and generation together.";
       };
@@ -146,7 +145,7 @@
         bashBin = "${pkgs.bash}/bin/bash";
 
         validateCmd = ''
-          python -c "import ast; ast.parse(open('./generate.py').read())"
+          ${bashBin} -n ./generate.sh
           ${bashBin} -n ./install.sh
           test -f ./SETTINGS.yaml
           test -f ./TEAM.yaml
@@ -210,7 +209,7 @@
         build = mkCheck "agent-team-build-check" ''
           set -euxo pipefail
           ${validateCmd}
-          python ./generate.py
+          ${bashBin} ./generate.sh
         '';
       });
   };
diff --git a/generate.py b/generate.py
deleted file mode 100755
index 4a9300e..0000000
--- a/generate.py
+++ /dev/null
@@ -1,676 +0,0 @@
-#!/usr/bin/env python3
-"""Generate Claude, Codex, and OpenCode build artifacts from TEAM.yaml + SETTINGS.yaml.
-
-Ports generate.sh to Python. Ecosystem dependencies:
-  * pyyaml — YAML parsing
-  * jsonschema — schema validation for SETTINGS.yaml / TEAM.yaml
-
-Agent source files in agents/*.md are the single source of truth; this script
-derives tool-specific equivalents for each harness. Template variables in
-agent bodies are expanded via string.Template:
-  ${WEB_SEARCH}   — how web search is referenced
-  ${SEARCH_TOOLS} — how codebase search tools are referenced
-
-Idempotent: safe to run multiple times.
-"""
-
-from __future__ import annotations
-
-import json
-import shutil
-import sys
-from pathlib import Path
-from string import Template
-from typing import Any
-
-import yaml
-from jsonschema import validate
-
-# NOTE: TOML output (Codex) is hand-built rather than generated via tomli_w
-# because tomli_w does not emit multiline-basic-string (`"""..."""`) literals,
-# which would force every embedded quote/newline in a developer_instructions
-# body to be escaped onto a single line — unreadable for humans and diff tools.
-
-# ---------------------------------------------------------------------------
-# Paths
-# ---------------------------------------------------------------------------
-SCRIPT_DIR = Path(__file__).resolve().parent
-
-TEAM_YAML = SCRIPT_DIR / "TEAM.yaml"
-SETTINGS_SHARED_YAML = SCRIPT_DIR / "SETTINGS.yaml"
-SETTINGS_JSON = SCRIPT_DIR / "settings.json"
-CLAUDE_MD_SRC = SCRIPT_DIR / "CLAUDE.md"
-
-TEAM_SCHEMA = SCRIPT_DIR / "schemas" / "team.schema.json"
-SETTINGS_SCHEMA = SCRIPT_DIR / "schemas" / "agent-runtime.schema.json"
-
-CLAUDE_DIR = SCRIPT_DIR / "claude"
-CLAUDE_AGENTS_DIR = CLAUDE_DIR / "agents"
-
-CODEX_DIR = SCRIPT_DIR / "codex"
-CODEX_AGENTS_DIR = CODEX_DIR / "agents"
-
-OPENCODE_DIR = SCRIPT_DIR / "opencode"
-OPENCODE_AGENTS_DIR = OPENCODE_DIR / "agents"
-OPENCODE_BASE_CONFIG = OPENCODE_DIR / "config.json"
-OPENCODE_SKILLS_DIR = OPENCODE_DIR / "skills"
-
-ORCHESTRATE_SKILL = SCRIPT_DIR / "skills" / "orchestrate" / "SKILL.md"
-OPENCODE_MODEL_ID = "llama-stack/llamacpp/Qwen3-Coder-30B-A3B-Instruct-Q6_K"
-
-# ---------------------------------------------------------------------------
-# Template variable values per target
-# ---------------------------------------------------------------------------
-CLAUDE_VARS = {
-    "WEB_SEARCH": "via WebFetch/WebSearch",
-    "SEARCH_TOOLS": "Use Grep/Glob/Read",
-}
-CODEX_VARS = {
-    "WEB_SEARCH": "via web search",
-    "SEARCH_TOOLS": "Search the codebase",
-}
-OPENCODE_VARS = dict(CODEX_VARS)
-
-
-# ---------------------------------------------------------------------------
-# Utilities
-# ---------------------------------------------------------------------------
-def log(msg: str) -> None:
-    print(msg, flush=True)
-
-
-def load_body(path: Path) -> str:
-    """Return the markdown body of a file, skipping YAML frontmatter if present.
-
-    We intentionally do NOT rely on python-frontmatter's content stripping,
-    because some agent bodies begin with a blank line that must be preserved
-    for downstream parity with the bash output. We detect frontmatter by
-    checking whether the first line is "---", then skip up to the next "---".
-    """
-    raw = path.read_text()
-    if not raw.startswith("---\n"):
-        return raw
-    # Find the closing fence after position 4.
-    idx = raw.find("\n---\n", 4)
-    if idx == -1:
-        # Malformed — return as-is.
-        return raw
-    return raw[idx + len("\n---\n"):]
-
-
-def expand(body: str, variables: dict[str, str]) -> str:
-    return Template(body).safe_substitute(variables)
-
-
-def replace_symlink(link: Path, target: Path) -> None:
-    """Create or replace a relative symlink at `link` pointing to `target`."""
-    if link.is_symlink() or link.exists():
-        if link.is_symlink() or link.is_file():
-            link.unlink()
-        else:
-            shutil.rmtree(link)
-    link.symlink_to(target)
-
-
-import re
-
-_BARE_YAML_SCALAR = re.compile(r"^[A-Za-z_][A-Za-z0-9_.\-]*$")
-
-
-def dump_yaml_scalar_block(fields: dict[str, Any]) -> str:
-    """Dump a dict as YAML block-style, preserving key order.
-
-    Mirrors generate.sh's output style: top-level string scalars are
-    single-quoted; list items that look like bare identifiers stay unquoted;
-    ints and bools render unquoted.
-    """
-    lines: list[str] = []
-    for key, value in fields.items():
-        if value is None:
-            continue
-        if isinstance(value, bool):
-            lines.append(f"{key}: {'true' if value else 'false'}")
-        elif isinstance(value, int):
-            lines.append(f"{key}: {value}")
-        elif isinstance(value, list):
-            lines.append(f"{key}:")
-            for item in value:
-                lines.append(f"  - {_yaml_list_item(str(item))}")
-        elif isinstance(value, dict):
-            lines.append(f"{key}:")
-            for k, v in value.items():
-                lines.append(f"  {k}: {_yaml_single_quoted(str(v))}")
-        else:
-            lines.append(f"{key}: {_yaml_single_quoted(str(value))}")
-    return "\n".join(lines)
-
-
-def _yaml_single_quoted(s: str) -> str:
-    """YAML 1.2 single-quoted scalar: double any embedded apostrophes."""
-    return "'" + s.replace("'", "''") + "'"
-
-
-def _yaml_list_item(s: str) -> str:
-    """List items stay unquoted when they're bare identifiers, matching bash output."""
-    if _BARE_YAML_SCALAR.match(s):
-        return s
-    return _yaml_single_quoted(s)
-
-
-def _assemble_markdown(frontmatter_text: str, body: str) -> str:
-    """Assemble frontmatter + body the same way bash's heredoc did.
-
-    Bash did: echo "---"; echo ""; echo "$body" — so output after the closing
-    fence is "\\n\\n" (an explicit blank line, then the body, then echo's
-    trailing newline). Source bodies also begin with a blank line of their
-    own, so the visible framing is: fence, blank, blank, content.
-    """
-    return "---\n" + frontmatter_text + "\n---\n\n" + body
-
-
-# ---------------------------------------------------------------------------
-# Shared mappings
-# ---------------------------------------------------------------------------
-def model_class_to_claude(cls: str) -> str:
-    return {"fast": "haiku", "powerful": "opus", "balanced": "sonnet"}.get(cls, "sonnet")
-
-
-def approval_intent_to_codex(intent: str) -> str:
-    return {
-        "manual": "on-request",
-        "full-auto": "never",
-        "guarded-auto": "untrusted",
-    }.get(intent, "untrusted")
-
-
-def filesystem_intent_to_claude_mode(fs: str) -> str:
-    return {"read-only": "plan", "workspace-write": "acceptEdits"}.get(fs, "acceptEdits")
-
-
-def portable_tool_to_claude(tool: str) -> str:
-    return {
-        "shell": "Bash",
-        "read": "Read",
-        "edit": "Edit",
-        "write": "Write",
-        "glob": "Glob",
-        "grep": "Grep",
-        "web_fetch": "WebFetch",
-        "web_search": "WebSearch",
-    }.get(tool, tool)
-
-
-def claude_model_for_agent(agent: dict) -> str:
-    return agent["model"]
-
-
-def codex_model_for_agent(agent: dict) -> str:
-    return {
-        "opus": "gpt-5.4",
-        "sonnet": "gpt-5.3-codex",
-        "haiku": "gpt-5.1-codex-mini",
-    }.get(agent["model"], "gpt-5.3-codex")
-
-
-def codex_effort_for_agent(agent: dict) -> str:
-    effort = agent.get("effort") or "medium"
-    return {"low": "low", "medium": "medium", "high": "high", "max": "xhigh"}.get(effort, "medium")
-
-
-def codex_sandbox_for_agent(agent: dict, codex_override: str | None) -> str:
-    if codex_override:
-        return codex_override
-    if agent.get("permission_mode") == "plan":
-        return "read-only"
-    if agent.get("permission_mode") == "acceptEdits":
-        tools = agent.get("tools") or []
-        if "Write" in tools or "Edit" in tools:
-            return "workspace-write"
-    return "read-only"
-
-
-def codex_default_sandbox(default_mode: str, override: str | None) -> str:
-    if override:
-        return override
-    return {"plan": "read-only", "acceptEdits": "workspace-write"}.get(default_mode, "workspace-write")
-
-
-def codex_approval_policy(runtime_approval: str, override: str | None) -> str:
-    if override:
-        return override
-    return approval_intent_to_codex(runtime_approval)
-
-
-def opencode_temperature_for_agent(agent: dict) -> float:
-    """Map agent role to opencode temperature per opencode's own guidance.
-
-    0.0-0.2 — analytical/planning
-    0.3-0.5 — general development
-    """
-    if agent.get("permission_mode") == "plan":
-        return 0.1
-
-    tools = set(agent.get("tools") or [])
-    disallowed = set(agent.get("disallowed_tools") or [])
-    can_write = "Write" in tools and "Write" not in disallowed
-    can_edit = "Edit" in tools and "Edit" not in disallowed
-    if not can_write and not can_edit:
-        return 0.1
-    return 0.3
-
-
-def opencode_permission_block(agent: dict) -> dict[str, str]:
-    tools = set(agent.get("tools") or [])
-    disallowed = set(agent.get("disallowed_tools") or [])
-
-    def allowed(name: str) -> bool:
-        return name in tools and name not in disallowed
-
-    return {
-        "edit": "allow" if allowed("Edit") else "deny",
-        "write": "allow" if allowed("Write") else "deny",
-        "bash": "allow" if allowed("Bash") else "deny",
-        "webfetch": "allow" if (allowed("WebFetch") or allowed("WebSearch")) else "deny",
-    }
-
-
-# ---------------------------------------------------------------------------
-# Validation
-# ---------------------------------------------------------------------------
-def validate_protocol_files(team: dict, settings: dict) -> None:
-    validate(instance=settings, schema=json.loads(SETTINGS_SCHEMA.read_text()))
-    validate(instance=team, schema=json.loads(TEAM_SCHEMA.read_text()))
-
-    for agent_id in team["agents"]["order"]:
-        path = SCRIPT_DIR / team["agents"]["items"][agent_id]["instruction_file"]
-        if not path.is_file():
-            raise FileNotFoundError(f"Missing agent instruction file: {path}")
-
-    for skill_id in team["skills"]["order"]:
-        path = SCRIPT_DIR / team["skills"]["items"][skill_id]["instruction_file"]
-        if not path.is_file():
-            raise FileNotFoundError(f"Missing skill instruction file: {path}")
-
-    for rule_id in team["rules"]["order"]:
-        path = SCRIPT_DIR / team["rules"]["items"][rule_id]["source_file"]
-        if not path.is_file():
-            raise FileNotFoundError(f"Missing rule source file: {path}")
-
-
-# ---------------------------------------------------------------------------
-# Legacy settings.json
-# ---------------------------------------------------------------------------
-def generate_legacy_settings_json(settings: dict) -> None:
-    model_class = settings["model"]["class"]
-    reasoning = settings["model"]["reasoning"]
-    fs = settings["runtime"]["filesystem"]
-    approval = settings["runtime"]["approval"]
-
-    claude_model = model_class_to_claude(model_class)
-    claude_mode = filesystem_intent_to_claude_mode(fs)
-
-    codex_target = settings.get("targets", {}).get("codex", {}) or {}
-    codex_approval = codex_target.get("approval_policy") or approval_intent_to_codex(approval)
-    codex_network = codex_target.get("network_access", settings["runtime"].get("network_access", False))
-
-    allow = [portable_tool_to_claude(t) for t in settings["runtime"].get("tools", [])]
-
-    deny: list[str] = []
-    for path in settings.get("safety", {}).get("protected_paths", []):
-        deny.extend([f"Read({path})", f"Write({path})", f"Edit({path})"])
-
-    ask = [f"Bash({cmd})" for cmd in settings.get("safety", {}).get("dangerous_shell_commands", {}).get("ask", [])]
-
-    claude_target = settings.get("targets", {}).get("claude", {}) or {}
-    claude_md_excludes = claude_target.get("claude_md_excludes", [".claude/agent-memory/**"])
-
-    payload: dict[str, Any] = {
-        "$schema": "https://json.schemastore.org/claude-code-settings.json",
-        "attribution": {"commit": "", "pr": ""},
-        "permissions": {
-            "allow": allow,
-            "deny": deny,
-            "ask": ask,
-            "defaultMode": claude_mode,
-        },
-        "model": claude_model,
-        "effortLevel": reasoning,
-        "codex": {
-            "approvalPolicy": codex_approval,
-            "networkAccess": codex_network,
-        },
-        "claudeMdExcludes": claude_md_excludes,
-    }
-    SETTINGS_JSON.write_text(json.dumps(payload, indent=2) + "\n")
-
-
-# ---------------------------------------------------------------------------
-# Claude generator
-# ---------------------------------------------------------------------------
-def generate_claude(team: dict) -> None:
-    log("=== Generating Claude output ===")
-
-    if CLAUDE_DIR.exists():
-        shutil.rmtree(CLAUDE_DIR)
-    CLAUDE_AGENTS_DIR.mkdir(parents=True)
-
-    shutil.copy(CLAUDE_MD_SRC, CLAUDE_DIR / "CLAUDE.md")
-    log(f"Copied: {CLAUDE_DIR / 'CLAUDE.md'}")
-
-    shutil.copy(SETTINGS_JSON, CLAUDE_DIR / "settings.json")
-    log(f"Copied: {CLAUDE_DIR / 'settings.json'}")
-
-    replace_symlink(CLAUDE_DIR / "rules", Path("../rules"))
-    log(f"Symlinked: {CLAUDE_DIR / 'rules'} -> ../rules")
-    replace_symlink(CLAUDE_DIR / "skills", Path("../skills"))
-    log(f"Symlinked: {CLAUDE_DIR / 'skills'} -> ../skills")
-
-    for agent_id in team["agents"]["order"]:
-        agent = team["agents"]["items"][agent_id]
-        src = SCRIPT_DIR / agent["instruction_file"]
-        body = expand(load_body(src), CLAUDE_VARS)
-
-        fm: dict[str, Any] = {
-            "name": agent["name"],
-            "description": agent["description"],
-            "model": claude_model_for_agent(agent),
-        }
-        if agent.get("effort"):
-            fm["effort"] = agent["effort"]
-        if agent.get("permission_mode"):
-            fm["permissionMode"] = agent["permission_mode"]
-        fm["tools"] = ", ".join(agent["tools"])
-        if agent.get("disallowed_tools"):
-            fm["disallowedTools"] = ", ".join(agent["disallowed_tools"])
-        if agent.get("background"):
-            fm["background"] = True
-        if agent.get("memory"):
-            fm["memory"] = agent["memory"]
-        if agent.get("isolation"):
-            fm["isolation"] = agent["isolation"]
-        if agent.get("max_turns") is not None:
-            fm["maxTurns"] = int(agent["max_turns"])
-        if agent.get("skills"):
-            fm["skills"] = list(agent["skills"])
-
-        dst = CLAUDE_AGENTS_DIR / f"{agent['name']}.md"
-        dst.write_text(_assemble_markdown(dump_yaml_scalar_block(fm), body))
-        log(f"Generated: {dst}")
-
-
-# ---------------------------------------------------------------------------
-# Codex generator
-# ---------------------------------------------------------------------------
-def generate_codex(team: dict, settings: dict) -> None:
-    log("")
-    log("=== Generating Codex output ===")
-
-    if CODEX_DIR.exists():
-        shutil.rmtree(CODEX_DIR)
-    CODEX_AGENTS_DIR.mkdir(parents=True)
-
-    replace_symlink(CODEX_DIR / "skills", Path("../skills"))
-    log(f"Symlinked: {CODEX_DIR / 'skills'} -> ../skills")
-
-    codex_target = settings.get("targets", {}).get("codex", {}) or {}
-    codex_sandbox_override = codex_target.get("sandbox_mode")
-
-    log("Generating Codex agent definitions...")
-    for agent_id in team["agents"]["order"]:
-        agent = team["agents"]["items"][agent_id]
-        src = SCRIPT_DIR / agent["instruction_file"]
-        body = expand(load_body(src), CODEX_VARS)
-
-        # Bash's command substitution strips trailing newlines from extract_body
-        # before concatenating with the heredoc, so strip ours too for parity.
-        body = body.rstrip("\n")
-        disallowed = agent.get("disallowed_tools") or []
-        if disallowed:
-            body = body + "\n\nYou do NOT have access to these tools: " + ", ".join(disallowed)
-
-        if '"""' in body:
-            raise ValueError(
-                f"agent instruction contains raw triple quotes which break TOML in {src}"
-            )
-
-        dst = CODEX_AGENTS_DIR / f"{agent['name']}.toml"
-        lines: list[str] = []
-        lines.append(f'name = "{agent["name"]}"')
-        lines.append(f'description = "{agent["description"]}"')
-        lines.append(f'model = "{codex_model_for_agent(agent)}"')
-        lines.append(f'model_reasoning_effort = "{codex_effort_for_agent(agent)}"')
-        lines.append(f'sandbox_mode = "{codex_sandbox_for_agent(agent, codex_sandbox_override)}"')
-        lines.append('developer_instructions = """')
-        lines.append(body)
-        lines.append('"""')
-
-        agent_skills = set(agent.get("skills") or [])
-        for skill_id in team["skills"]["order"]:
-            skill = team["skills"]["items"][skill_id]
-            if "codex" not in skill.get("applies_to", []):
-                continue
-            enabled = "true" if skill_id in agent_skills else "false"
-            lines.append("[[skills.config]]")
-            lines.append(f'path = "../skills/{skill_id}/SKILL.md"')
-            lines.append(f"enabled = {enabled}")
-            lines.append("")
-
-        dst.write_text("\n".join(lines) + "\n")
-        log(f"Generated: {dst}")
-
-    # AGENTS.md
-    log("")
-    log("Generating codex/AGENTS.md...")
-    (CODEX_DIR / "AGENTS.md").write_text(_build_agents_md(team, "codex"))
-    log(f"Generated: {CODEX_DIR / 'AGENTS.md'}")
-
-    # config.toml
-    log("")
-    log("Generating codex/config.toml...")
-    default_mode = filesystem_intent_to_claude_mode(settings["runtime"]["filesystem"])
-    config_sandbox = codex_default_sandbox(default_mode, codex_sandbox_override)
-    config_approval = codex_approval_policy(
-        settings["runtime"]["approval"],
-        codex_target.get("approval_policy"),
-    )
-    codex_network = codex_target.get("network_access", settings["runtime"].get("network_access", False))
-
-    config_lines = [
-        "#:schema https://developers.openai.com/codex/config-schema.json",
-        'model = "gpt-5.3-codex"',
-        'model_reasoning_effort = "medium"',
-        f'sandbox_mode = "{config_sandbox}"',
-        f'approval_policy = "{config_approval}"',
-    ]
-    if config_sandbox == "workspace-write":
-        config_lines.append("")
-        config_lines.append("[sandbox_workspace_write]")
-        config_lines.append(f"network_access = {'true' if codex_network else 'false'}")
-
-    (CODEX_DIR / "config.toml").write_text("\n".join(config_lines) + "\n")
-    log(f"Generated: {CODEX_DIR / 'config.toml'}")
-
-
-# ---------------------------------------------------------------------------
-# OpenCode generator
-# ---------------------------------------------------------------------------
-def generate_opencode(team: dict) -> None:
-    log("")
-    log("=== Generating OpenCode output ===")
-
-    if OPENCODE_AGENTS_DIR.exists():
-        shutil.rmtree(OPENCODE_AGENTS_DIR)
-    agents_md = OPENCODE_DIR / "AGENTS.md"
-    opencode_json = OPENCODE_DIR / "opencode.json"
-    if agents_md.exists():
-        agents_md.unlink()
-    if opencode_json.exists():
-        opencode_json.unlink()
-    OPENCODE_AGENTS_DIR.mkdir(parents=True)
-
-    # Per-skill symlinks filtered by applies_to
-    if OPENCODE_SKILLS_DIR.is_symlink() or OPENCODE_SKILLS_DIR.exists():
-        if OPENCODE_SKILLS_DIR.is_symlink() or OPENCODE_SKILLS_DIR.is_file():
-            OPENCODE_SKILLS_DIR.unlink()
-        else:
-            shutil.rmtree(OPENCODE_SKILLS_DIR)
-    OPENCODE_SKILLS_DIR.mkdir(parents=True)
-    for skill_id in team["skills"]["order"]:
-        skill = team["skills"]["items"][skill_id]
-        if "opencode" not in skill.get("applies_to", []):
-            continue
-        link = OPENCODE_SKILLS_DIR / skill_id
-        link.symlink_to(Path("../..") / "skills" / skill_id)
-        log(f"Symlinked: {link} -> ../../skills/{skill_id}")
-
-    # Subagents
-    for agent_id in team["agents"]["order"]:
-        agent = team["agents"]["items"][agent_id]
-        src = SCRIPT_DIR / agent["instruction_file"]
-        body = expand(load_body(src), OPENCODE_VARS)
-
-        fm: dict[str, Any] = {
-            "description": agent["description"],
-            "mode": "subagent",
-            "model": OPENCODE_MODEL_ID,
-            "temperature": opencode_temperature_for_agent(agent),
-            "steps": int(agent.get("max_turns", 25)),
-            "permission": opencode_permission_block(agent),
-        }
-
-        dst = OPENCODE_AGENTS_DIR / f"{agent['name']}.md"
-        dst.write_text(_assemble_markdown(_dump_opencode_frontmatter(fm).rstrip("\n"), body))
-        log(f"Generated: {dst}")
-
-    # Orchestrator primary agent (synthesized from orchestrate skill body)
-    orchestrate_body = expand(load_body(ORCHESTRATE_SKILL), OPENCODE_VARS)
-    orchestrator_fm = {
-        "description": (
-            "Primary orchestrator. Decomposes complex tasks and dispatches subagents in "
-            "parallel waves. The default entrypoint for any non-trivial work — never "
-            "implements directly."
-        ),
-        "mode": "primary",
-        "model": OPENCODE_MODEL_ID,
-        "temperature": 0.1,
-        "steps": 50,
-        "permission": {
-            "edit": "deny",
-            "write": "deny",
-            "bash": "deny",
-            "webfetch": "allow",
-            "task": {"*": "allow"},
-        },
-    }
-    orchestrator_path = OPENCODE_AGENTS_DIR / "orchestrator.md"
-    orchestrator_path.write_text(
-        _assemble_markdown(_dump_opencode_frontmatter(orchestrator_fm).rstrip("\n"), orchestrate_body)
-    )
-    log(f"Generated: {orchestrator_path}")
-
-    # AGENTS.md
-    log("")
-    log("Generating opencode/AGENTS.md...")
-    agents_md.write_text(_build_agents_md(team, "opencode"))
-    log(f"Generated: {agents_md}")
-
-    # opencode.json — merge base config with generated overlay
-    log("")
-    log("Generating opencode/opencode.json...")
-    if not OPENCODE_BASE_CONFIG.exists():
-        raise FileNotFoundError(f"missing base config at {OPENCODE_BASE_CONFIG}")
-    base = json.loads(OPENCODE_BASE_CONFIG.read_text())
-    overlay = {
-        "permission": {
-            "edit": "ask",
-            "bash": {"*": "ask"},
-            "webfetch": "allow",
-            "skill": {"*": "allow"},
-        },
-        "compaction": {"auto": True, "prune": True},
-        "snapshot": True,
-    }
-    merged = _deep_merge(base, overlay)
-    opencode_json.write_text(json.dumps(merged, indent=2) + "\n")
-    log(f"Generated: {opencode_json}")
-
-
-def _build_agents_md(team: dict, harness: str) -> str:
-    """Concatenate rule files for a harness, matching bash's `echo ""; cat` pattern.
-
-    Bash did: echo header, then for each applicable rule, echo blank + cat file.
-    `cat` preserves the file's own trailing whitespace, so trailing blank lines
-    in a rule file become visible separators in the output. We replicate that
-    by reading file contents verbatim rather than stripping.
-    """
-    out = "# Agent Team Instructions\n\nAgent-team specific protocols live in skills (orchestrate, conventions, worker-protocol, qa-checklist, message-schema).\n"
-    for rule_id in team["rules"]["order"]:
-        rule = team["rules"]["items"][rule_id]
-        if harness not in rule.get("applies_to", []):
-            continue
-        out += "\n" + (SCRIPT_DIR / rule["source_file"]).read_text()
-    return out
-
-
-def _deep_merge(a: dict, b: dict) -> dict:
-    """Deep-merge b into a, producing a new dict. Matches `jq -s '.[0] * .[1]'`."""
-    out = dict(a)
-    for k, v in b.items():
-        if isinstance(v, dict) and isinstance(out.get(k), dict):
-            out[k] = _deep_merge(out[k], v)
-        else:
-            out[k] = v
-    return out
-
-
-def _dump_opencode_frontmatter(fm: dict[str, Any]) -> str:
-    """Opencode accepts YAML 1.2; use pyyaml with block style for nested maps."""
-    # Use yaml.dump for the nested permission structure; top-level scalars we
-    # want unquoted for parity with the current bash output where possible.
-    out: list[str] = []
-    for key, value in fm.items():
-        if isinstance(value, dict):
-            out.append(f"{key}:")
-            for k, v in value.items():
-                if isinstance(v, dict):
-                    out.append(f"  {k}:")
-                    for k2, v2 in v.items():
-                        out.append(f'    "{k2}": {v2}')
-                else:
-                    out.append(f"  {k}: {v}")
-        elif isinstance(value, str):
-            # Description uses single quotes for parity; other strings unquoted.
- if key == "description": - out.append(f"{key}: {_yaml_single_quoted(value)}") - else: - out.append(f"{key}: {value}") - elif isinstance(value, bool): - out.append(f"{key}: {'true' if value else 'false'}") - else: - out.append(f"{key}: {value}") - return "\n".join(out) + "\n" - - -# --------------------------------------------------------------------------- -# Main -# --------------------------------------------------------------------------- -def main() -> int: - team = yaml.safe_load(TEAM_YAML.read_text()) - settings = yaml.safe_load(SETTINGS_SHARED_YAML.read_text()) - - log(f"Using shared config: {SETTINGS_SHARED_YAML}") - validate_protocol_files(team, settings) - generate_legacy_settings_json(settings) - log(f"Generated compatibility artifact: {SETTINGS_JSON}") - - generate_claude(team) - generate_codex(team, settings) - generate_opencode(team) - - log("") - log("Done.") - return 0 - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/generate.sh b/generate.sh new file mode 100755 index 0000000..c25fd97 --- /dev/null +++ b/generate.sh @@ -0,0 +1,950 @@ +#!/usr/bin/env bash +set -euo pipefail + +# generate.sh — generates both Claude and Codex output directories from +# shared agent source files plus a vendor-neutral runtime config. +# Agent source files (agents/*.md) are the single source of truth; this +# script derives tool-specific equivalents. +# +# Template variables in agent bodies are expanded per-target: +# ${PLANS_DIR} — where plans live (.claude/plans vs plans) +# ${WEB_SEARCH} — how web search is referenced +# ${SEARCH_TOOLS} — how codebase search tools are referenced +# +# Idempotent: safe to run multiple times. 
+ +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +AGENTS_SRC="$SCRIPT_DIR/agents" +RULES_DIR="$SCRIPT_DIR/rules" +CLAUDE_MD="$SCRIPT_DIR/CLAUDE.md" +SETTINGS_SHARED_YAML="$SCRIPT_DIR/SETTINGS.yaml" +TEAM_YAML="$SCRIPT_DIR/TEAM.yaml" +SETTINGS_JSON="$SCRIPT_DIR/settings.json" + +CLAUDE_DIR="$SCRIPT_DIR/claude" +CLAUDE_AGENTS_DIR="$CLAUDE_DIR/agents" + +CODEX_DIR="$SCRIPT_DIR/codex" +CODEX_AGENTS_DIR="$CODEX_DIR/agents" + +OPENCODE_DIR="$SCRIPT_DIR/opencode" +OPENCODE_AGENTS_DIR="$OPENCODE_DIR/agents" +OPENCODE_BASE_CONFIG="$OPENCODE_DIR/config.json" + +# --------------------------------------------------------------------------- +# Template variable values per target (KEY=VALUE pairs) +# --------------------------------------------------------------------------- +CLAUDE_VARS=( + "PLANS_DIR=.claude/plans" + "WEB_SEARCH=via WebFetch/WebSearch" + "SEARCH_TOOLS=Use Grep/Glob/Read" +) + +CODEX_VARS=( + "PLANS_DIR=plans" + "WEB_SEARCH=via web search" + "SEARCH_TOOLS=Search the codebase" +) + +OPENCODE_VARS=( + "PLANS_DIR=plans" + "WEB_SEARCH=via web search" + "SEARCH_TOOLS=Search the codebase" +) + +# --------------------------------------------------------------------------- +# extract_body — extracts everything after the second --- (YAML frontmatter) +# --------------------------------------------------------------------------- +extract_body() { + local file="$1" + awk 'BEGIN{fm=0} /^---$/{if(fm==0){fm=1;next} if(fm==1){fm=2;next}} fm==2{print}' "$file" +} + +# --------------------------------------------------------------------------- +# expand_body — runs envsubst on body text, substituting only our 3 variables +# $1 = body text +# $2.. 
= KEY=VALUE pairs to export +# --------------------------------------------------------------------------- +expand_body() { + local body="$1" + shift + # Export only the specified variables + for pair in "$@"; do + export "${pair%%=*}=${pair#*=}" + done + echo "$body" | envsubst '${PLANS_DIR} ${WEB_SEARCH} ${SEARCH_TOOLS}' + # Clean up exported variables + for pair in "$@"; do + unset "${pair%%=*}" + done +} + +# --------------------------------------------------------------------------- +# yaml_escape_single_quoted — escapes text for YAML single-quoted scalars +# --------------------------------------------------------------------------- +yaml_escape_single_quoted() { + printf '%s' "$1" | sed "s/'/''/g" +} + +# --------------------------------------------------------------------------- +# csv_from_yaml_array — joins YAML array values from stdin with ", " +# --------------------------------------------------------------------------- +csv_from_yaml_array() { + local first=1 + local item + while IFS= read -r item; do + [ -n "$item" ] || continue + if [ "$first" -eq 0 ]; then + printf ', ' + fi + printf '%s' "$item" + first=0 + done +} + +# --------------------------------------------------------------------------- +# validate_team_protocol — validates TEAM protocol fields and referenced files +# --------------------------------------------------------------------------- +validate_team_protocol() { + [ -f "$TEAM_YAML" ] || { + echo "Error: missing $TEAM_YAML" + exit 1 + } + + yq -e '.version == 1' "$TEAM_YAML" > /dev/null + yq -e '.agents.order and .agents.items and .skills.order and .skills.items and .rules.order and .rules.items' "$TEAM_YAML" > /dev/null + + local section id ids_in_order + for section in agents skills rules; do + while IFS= read -r id; do + [ -n "$id" ] || continue + yq -e ".${section}.items.${id}" "$TEAM_YAML" > /dev/null + [ "$(yq -r ".${section}.items.${id}.id" "$TEAM_YAML")" = "$id" ] || { + echo "Error: TEAM ${section} item '${id}' has 
mismatched id field" + exit 1 + } + done < <(yq -r ".${section}.order[]" "$TEAM_YAML") + + ids_in_order="$(yq -r ".${section}.order[]" "$TEAM_YAML")" + while IFS= read -r id; do + [ -n "$id" ] || continue + printf '%s\n' "$ids_in_order" | grep -qx "$id" || { + echo "Error: TEAM ${section} item '${id}' missing from order list" + exit 1 + } + done < <(yq -r ".${section}.items | keys | .[]" "$TEAM_YAML") + done + + while IFS= read -r id; do + [ -n "$id" ] || continue + local path + path="$SCRIPT_DIR/$(yq -r ".agents.items.${id}.instruction_file" "$TEAM_YAML")" + [ -f "$path" ] || { + echo "Error: missing agent instruction file for '${id}': $path" + exit 1 + } + done < <(yq -r '.agents.order[]' "$TEAM_YAML") + + while IFS= read -r id; do + [ -n "$id" ] || continue + local path + path="$SCRIPT_DIR/$(yq -r ".skills.items.${id}.instruction_file" "$TEAM_YAML")" + [ -f "$path" ] || { + echo "Error: missing skill instruction file for '${id}': $path" + exit 1 + } + done < <(yq -r '.skills.order[]' "$TEAM_YAML") + + while IFS= read -r id; do + [ -n "$id" ] || continue + local path + path="$SCRIPT_DIR/$(yq -r ".rules.items.${id}.source_file" "$TEAM_YAML")" + [ -f "$path" ] || { + echo "Error: missing rule source file for '${id}': $path" + exit 1 + } + done < <(yq -r '.rules.order[]' "$TEAM_YAML") +} + +# --------------------------------------------------------------------------- +# validate_shared_settings — validates the shared protocol fields we rely on +# --------------------------------------------------------------------------- +validate_shared_settings() { + [ -f "$SETTINGS_SHARED_YAML" ] || { + echo "Error: missing $SETTINGS_SHARED_YAML" + exit 1 + } + + yq -e '.version == 1' "$SETTINGS_SHARED_YAML" > /dev/null + yq -e '.model.class == "fast" or .model.class == "balanced" or .model.class == "powerful"' "$SETTINGS_SHARED_YAML" > /dev/null + yq -e '.model.reasoning == "low" or .model.reasoning == "medium" or .model.reasoning == "high" or .model.reasoning == "max"' 
"$SETTINGS_SHARED_YAML" > /dev/null + yq -e '.runtime.filesystem == "read-only" or .runtime.filesystem == "workspace-write"' "$SETTINGS_SHARED_YAML" > /dev/null + yq -e '.runtime.approval == "manual" or .runtime.approval == "guarded-auto" or .runtime.approval == "full-auto"' "$SETTINGS_SHARED_YAML" > /dev/null + yq -e '(.runtime.network_access | type) == "!!bool"' "$SETTINGS_SHARED_YAML" > /dev/null + yq -e ' + (.runtime.tools // []) as $tools | + ( + $tools | + map( + select( + . == "shell" or + . == "read" or + . == "edit" or + . == "write" or + . == "glob" or + . == "grep" or + . == "web_fetch" or + . == "web_search" + ) + ) | + length + ) == ($tools | length) + ' "$SETTINGS_SHARED_YAML" > /dev/null +} + +# --------------------------------------------------------------------------- +# map_model_class_to_claude — maps shared model.class to Claude model value +# --------------------------------------------------------------------------- +map_model_class_to_claude() { + local model_class="$1" + case "$model_class" in + fast) echo "haiku" ;; + powerful) echo "opus" ;; + balanced) echo "sonnet" ;; + *) echo "sonnet" ;; + esac +} + +# --------------------------------------------------------------------------- +# map_approval_intent_to_codex_policy — shared approval intent to Codex value +# --------------------------------------------------------------------------- +map_approval_intent_to_codex_policy() { + local approval_intent="$1" + case "$approval_intent" in + manual) echo "on-request" ;; + full-auto) echo "never" ;; + guarded-auto) echo "untrusted" ;; + *) echo "untrusted" ;; + esac +} + +# --------------------------------------------------------------------------- +# map_filesystem_intent_to_claude_mode — shared filesystem to Claude mode +# --------------------------------------------------------------------------- +map_filesystem_intent_to_claude_mode() { + local filesystem="$1" + case "$filesystem" in + read-only) echo "plan" ;; + workspace-write) echo 
"acceptEdits" ;; + *) echo "acceptEdits" ;; + esac +} + +# --------------------------------------------------------------------------- +# map_portable_tool_to_claude — shared runtime tool to Claude allow-list name +# --------------------------------------------------------------------------- +map_portable_tool_to_claude() { + local tool="$1" + case "$tool" in + shell) echo "Bash" ;; + read) echo "Read" ;; + edit) echo "Edit" ;; + write) echo "Write" ;; + glob) echo "Glob" ;; + grep) echo "Grep" ;; + web_fetch) echo "WebFetch" ;; + web_search) echo "WebSearch" ;; + *) echo "$tool" ;; + esac +} + +# --------------------------------------------------------------------------- +# map_model_to_opencode — all models map to the single local model +# --------------------------------------------------------------------------- +map_model_to_opencode() { + echo "llama.cpp/qwen3-coder:a3b" +} + +# --------------------------------------------------------------------------- +# map_effort_to_temperature — maps effort to temperature float +# --------------------------------------------------------------------------- +map_effort_to_temperature() { + local effort="$1" + case "$effort" in + max) echo "0.1" ;; + high) echo "0.2" ;; + medium) echo "0.3" ;; + low) echo "0.5" ;; + *) echo "0.3" ;; + esac +} + +# --------------------------------------------------------------------------- +# map_permission_mode_to_opencode_mode — maps permission mode to agent mode +# --------------------------------------------------------------------------- +map_permission_mode_to_opencode_mode() { + local permission_mode="$1" + case "$permission_mode" in + plan) echo "subagent" ;; + *) echo "primary" ;; + esac +} + +# --------------------------------------------------------------------------- +# generate_opencode_permission_block — emits YAML permission block for agent +# $1 = tools (comma-separated Claude tool names) +# $2 = disallowed_tools (comma-separated Claude tool names) +# $3 = permission_mode 
(plan/acceptEdits/"") +# --------------------------------------------------------------------------- +generate_opencode_permission_block() { + local tools="$1" + local disallowed_tools="$2" + local permission_mode="$3" + + local edit_perm="deny" + local bash_perm="deny" + local webfetch_perm="deny" + + if [ "$permission_mode" = "plan" ]; then + # Plan-mode agents: read-only, no edits, no bash + edit_perm="deny" + bash_perm="deny" + # Researchers/reviewers still need web access + if echo "$tools" | grep -qE '\bWebFetch\b|\bWebSearch\b'; then + webfetch_perm="allow" + fi + else + # Check edit permission + if echo "$tools" | grep -qE '\bWrite\b|\bEdit\b'; then + edit_perm="allow" + fi + if echo "$disallowed_tools" | grep -qE '\bWrite\b|\bEdit\b'; then + edit_perm="deny" + fi + + # Check bash permission + if echo "$tools" | grep -q '\bBash\b'; then + bash_perm="ask" + fi + if echo "$disallowed_tools" | grep -q '\bBash\b'; then + bash_perm="deny" + fi + + # Check web permission + if echo "$tools" | grep -qE '\bWebFetch\b|\bWebSearch\b'; then + webfetch_perm="allow" + fi + fi + + echo "permission:" + echo " edit: ${edit_perm}" + + if [ "$bash_perm" = "ask" ]; then + echo " bash:" + echo " \"*\": ask" + echo " \"git status\": allow" + echo " \"git diff *\": allow" + echo " \"git log *\": allow" + elif [ "$bash_perm" = "deny" ]; then + echo " bash:" + echo " \"*\": deny" + fi + + echo " webfetch: ${webfetch_perm}" +} + +# --------------------------------------------------------------------------- +# json_escape — escapes a string for JSON string literal output +# --------------------------------------------------------------------------- +json_escape() { + printf '%s' "$1" | sed 's/\\/\\\\/g; s/"/\\"/g' +} + +# --------------------------------------------------------------------------- +# json_array_from_lines — renders stdin as a compact JSON string array +# --------------------------------------------------------------------------- +json_array_from_lines() { + local 
first=1 + local item + + printf '[' + while IFS= read -r item; do + [ -n "$item" ] || continue + if [ "$first" -eq 0 ]; then + printf ', ' + fi + printf '"%s"' "$(json_escape "$item")" + first=0 + done + printf ']' +} + +# --------------------------------------------------------------------------- +# generate_legacy_settings_json — emits Claude-compatible settings.json +# from SETTINGS.yaml so downstream generation stays backward-compatible +# --------------------------------------------------------------------------- +generate_legacy_settings_json() { + local model_class model_reasoning runtime_filesystem runtime_approval + local claude_model claude_default_mode codex_approval_policy codex_network_access + local allow_json deny_json ask_json claude_md_excludes_json + + model_class="$(yq -r '.model.class' "$SETTINGS_SHARED_YAML")" + model_reasoning="$(yq -r '.model.reasoning' "$SETTINGS_SHARED_YAML")" + runtime_filesystem="$(yq -r '.runtime.filesystem' "$SETTINGS_SHARED_YAML")" + runtime_approval="$(yq -r '.runtime.approval' "$SETTINGS_SHARED_YAML")" + + claude_model="$(map_model_class_to_claude "$model_class")" + claude_default_mode="$(map_filesystem_intent_to_claude_mode "$runtime_filesystem")" + codex_approval_policy="$(yq -r '.targets.codex.approval_policy // ""' "$SETTINGS_SHARED_YAML")" + codex_network_access="$(yq -r '.targets.codex.network_access // .runtime.network_access // false' "$SETTINGS_SHARED_YAML")" + + if [ -z "$codex_approval_policy" ] || [ "$codex_approval_policy" = "null" ]; then + codex_approval_policy="$(map_approval_intent_to_codex_policy "$runtime_approval")" + fi + + allow_json="$( + yq -r '.runtime.tools[]' "$SETTINGS_SHARED_YAML" \ + | while IFS= read -r tool; do + map_portable_tool_to_claude "$tool" + done \ + | json_array_from_lines + )" + + deny_json="$( + { + yq -r '.safety.protected_paths[]' "$SETTINGS_SHARED_YAML" | while IFS= read -r path; do + printf 'Read(%s)\n' "$path" + printf 'Write(%s)\n' "$path" + printf 'Edit(%s)\n' 
"$path" + done + } | json_array_from_lines + )" + + ask_json="$( + yq -r '.safety.dangerous_shell_commands.ask[]' "$SETTINGS_SHARED_YAML" \ + | while IFS= read -r cmd; do + printf 'Bash(%s)\n' "$cmd" + done \ + | json_array_from_lines + )" + + claude_md_excludes_json="$( + yq -r '(.targets.claude.claude_md_excludes // [".claude/agent-memory/**"])[]' "$SETTINGS_SHARED_YAML" \ + | json_array_from_lines + )" + + cat > "$SETTINGS_JSON" < ../rules" + + ln -s ../skills "$CLAUDE_DIR/skills" + echo "Symlinked: $CLAUDE_DIR/skills -> ../skills" + + # Generate agent .md files from TEAM metadata + markdown instruction body + local agent_id + while IFS= read -r agent_id; do + [ -n "$agent_id" ] || continue + + local name description model effort permission_mode + local src_file dst_file body expanded_body + local max_turns background memory isolation + local tools_csv disallowed_tools_csv + + name="$(yq -r ".agents.items.${agent_id}.name" "$TEAM_YAML")" + description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")" + model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")" + effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")" + permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")" + tools_csv="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)" + disallowed_tools_csv="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)" + max_turns="$(yq -r ".agents.items.${agent_id}.max_turns // \"\"" "$TEAM_YAML")" + background="$(yq -r ".agents.items.${agent_id}.background // \"\"" "$TEAM_YAML")" + memory="$(yq -r ".agents.items.${agent_id}.memory // \"\"" "$TEAM_YAML")" + isolation="$(yq -r ".agents.items.${agent_id}.isolation // \"\"" "$TEAM_YAML")" + + src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")" + dst_file="$CLAUDE_AGENTS_DIR/${name}.md" + + body="$(extract_body "$src_file")" + 
expanded_body="$(expand_body "$body" "${CLAUDE_VARS[@]}")" + + { + echo "---" + echo "name: '$(yaml_escape_single_quoted "$name")'" + echo "description: '$(yaml_escape_single_quoted "$description")'" + echo "model: '$(yaml_escape_single_quoted "$model")'" + if [ -n "$effort" ] && [ "$effort" != "null" ]; then + echo "effort: '$(yaml_escape_single_quoted "$effort")'" + fi + if [ -n "$permission_mode" ] && [ "$permission_mode" != "null" ]; then + echo "permissionMode: '$(yaml_escape_single_quoted "$permission_mode")'" + fi + echo "tools: '$(yaml_escape_single_quoted "$tools_csv")'" + if [ -n "$disallowed_tools_csv" ] && [ "$disallowed_tools_csv" != "null" ]; then + echo "disallowedTools: '$(yaml_escape_single_quoted "$disallowed_tools_csv")'" + fi + if [ "$background" = "true" ]; then + echo "background: true" + fi + if [ -n "$memory" ] && [ "$memory" != "null" ]; then + echo "memory: '$(yaml_escape_single_quoted "$memory")'" + fi + if [ -n "$isolation" ] && [ "$isolation" != "null" ]; then + echo "isolation: '$(yaml_escape_single_quoted "$isolation")'" + fi + if [ -n "$max_turns" ] && [ "$max_turns" != "null" ]; then + echo "maxTurns: $max_turns" + fi + echo "skills:" + yq -r ".agents.items.${agent_id}.skills[]" "$TEAM_YAML" | while IFS= read -r skill; do + echo " - $(yaml_escape_single_quoted "$skill")" + done + echo "---" + echo "" + echo "$expanded_body" + } > "$dst_file" + + echo "Generated: $dst_file" + done < <(yq -r '.agents.order[]' "$TEAM_YAML") +} + +# --------------------------------------------------------------------------- +# generate_codex — produces codex/ output directory +# --------------------------------------------------------------------------- +generate_codex() { + echo "" + echo "=== Generating Codex output ===" + + # Clean and recreate output directories + rm -rf "$CODEX_DIR" + mkdir -p "$CODEX_AGENTS_DIR" + ln -s ../skills "$CODEX_DIR/skills" + echo "Symlinked: $CODEX_DIR/skills -> ../skills" + + # Generate agent .toml files from TEAM 
metadata + markdown instruction body + echo "Generating Codex agent definitions..." + local agent_id + while IFS= read -r agent_id; do + [ -n "$agent_id" ] || continue + + local name description model effort permission_mode tools disallowed_tools + local codex_sandbox_override + local agent_skills + local src_file dst_file + name="$(yq -r ".agents.items.${agent_id}.name" "$TEAM_YAML")" + description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")" + model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")" + effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")" + permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")" + tools="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)" + disallowed_tools="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)" + codex_sandbox_override="$(yq -r '.targets.codex.sandbox_mode // ""' "$SETTINGS_SHARED_YAML")" + agent_skills="$(yq -r ".agents.items.${agent_id}.skills[]" "$TEAM_YAML")" + src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")" + dst_file="$CODEX_AGENTS_DIR/${name}.toml" + + # Map to Codex equivalents + local codex_model codex_effort codex_sandbox + codex_model="$(map_model "$model")" + codex_effort="$(map_effort "${effort:-medium}")" + codex_sandbox="$(map_sandbox_mode "$permission_mode" "$tools" "$codex_sandbox_override")" + + # Extract and expand body with Codex variable values + local body expanded_body + body="$(extract_body "$src_file")" + expanded_body="$(expand_body "$body" "${CODEX_VARS[@]}")" + + # Build developer_instructions: append disallowedTools note if present + local developer_instructions + developer_instructions="$expanded_body" + if [ -n "$disallowed_tools" ] && [ "$disallowed_tools" != "null" ]; then + developer_instructions="${developer_instructions} + +You do NOT have access to these tools: 
${disallowed_tools}" + fi + + # TOML multiline basic strings use """ delimiters; reject raw delimiter + # sequences in instruction bodies so generated TOML remains parseable. + if printf '%s' "$developer_instructions" | grep -q '"""'; then + echo "Error: agent instruction contains raw triple quotes (\"\"\") which break TOML in $src_file" + exit 1 + fi + + # Write TOML output + cat > "$dst_file" <> "$dst_file" <> "$dst_file" < "$CODEX_DIR/AGENTS.md" + echo "Generated: $CODEX_DIR/AGENTS.md" + + # Generate config.toml — derive sandbox/approval defaults from shared config + echo "" + echo "Generating codex/config.toml..." + + local default_mode runtime_approval codex_approval_override codex_network_access codex_sandbox_override + default_mode="$(map_filesystem_intent_to_claude_mode "$(yq -r '.runtime.filesystem' "$SETTINGS_SHARED_YAML")")" + runtime_approval="$(yq -r '.runtime.approval' "$SETTINGS_SHARED_YAML")" + codex_sandbox_override="$(yq -r '.targets.codex.sandbox_mode // ""' "$SETTINGS_SHARED_YAML")" + codex_approval_override="$(yq -r '.targets.codex.approval_policy // ""' "$SETTINGS_SHARED_YAML")" + codex_network_access="$(yq -r '.targets.codex.network_access // .runtime.network_access // false' "$SETTINGS_SHARED_YAML")" + + local config_sandbox config_approval + config_sandbox="$(map_default_sandbox_mode "$default_mode" "$codex_sandbox_override")" + config_approval="$(map_approval_policy "$runtime_approval" "$codex_approval_override")" + + if [ "$config_sandbox" = "workspace-write" ]; then + cat > "$CODEX_DIR/config.toml" < "$CODEX_DIR/config.toml" < ../skills" + + # Generate agent .md files with OpenCode frontmatter + local agent_id + while IFS= read -r agent_id; do + [ -n "$agent_id" ] || continue + + local name description model effort permission_mode + local src_file dst_file body expanded_body + local max_turns tools_csv disallowed_tools_csv + local opencode_model opencode_temperature opencode_mode opencode_steps + + name="$(yq -r 
".agents.items.${agent_id}.name" "$TEAM_YAML")" + description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")" + model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")" + effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")" + permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")" + tools_csv="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)" + disallowed_tools_csv="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)" + max_turns="$(yq -r ".agents.items.${agent_id}.max_turns // \"\"" "$TEAM_YAML")" + + src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")" + dst_file="$OPENCODE_AGENTS_DIR/${name}.md" + + body="$(extract_body "$src_file")" + expanded_body="$(expand_body "$body" "${OPENCODE_VARS[@]}")" + + # Map to OpenCode equivalents + opencode_model="$(map_model_to_opencode "$model")" + opencode_temperature="$(map_effort_to_temperature "${effort:-medium}")" + opencode_mode="$(map_permission_mode_to_opencode_mode "$permission_mode")" + opencode_steps="${max_turns:-25}" + + { + echo "---" + echo "description: '$(yaml_escape_single_quoted "$description")'" + echo "mode: ${opencode_mode}" + echo "model: ${opencode_model}" + echo "temperature: ${opencode_temperature}" + echo "steps: ${opencode_steps}" + generate_opencode_permission_block "$tools_csv" "$disallowed_tools_csv" "$permission_mode" + echo "---" + echo "" + echo "$expanded_body" + } > "$dst_file" + + echo "Generated: $dst_file" + done < <(yq -r '.agents.order[]' "$TEAM_YAML") + + # Generate AGENTS.md — concatenate TEAM-ordered rules for opencode target + echo "" + echo "Generating opencode/AGENTS.md..." + { + echo "# Agent Team Instructions" + echo "" + echo "Agent-team specific protocols live in skills (orchestrate, conventions, worker-protocol, qa-checklist, message-schema)." 
+ local rule_id rules_file + while IFS= read -r rule_id; do + [ -n "$rule_id" ] || continue + yq -r ".rules.items.${rule_id}.applies_to[]" "$TEAM_YAML" | grep -qx "opencode" || continue + rules_file="$SCRIPT_DIR/$(yq -r ".rules.items.${rule_id}.source_file" "$TEAM_YAML")" + echo "" + cat "$rules_file" + done < <(yq -r '.rules.order[]' "$TEAM_YAML") + } > "$OPENCODE_DIR/AGENTS.md" + echo "Generated: $OPENCODE_DIR/AGENTS.md" + + # Generate merged opencode.json — base config + generated overlay + echo "" + echo "Generating opencode/opencode.json..." + + if [ ! -f "$OPENCODE_BASE_CONFIG" ]; then + echo "Error: missing base config at $OPENCODE_BASE_CONFIG" + exit 1 + fi + + # Build the generated overlay with global permissions from SETTINGS.yaml + local overlay_json + overlay_json="$(cat <<'OVERLAY' +{ + "permission": { + "edit": "ask", + "bash": { + "*": "ask" + }, + "webfetch": "allow", + "skill": { + "*": "allow" + } + }, + "compaction": { + "auto": true, + "prune": true + }, + "snapshot": true +} +OVERLAY +)" + + jq -s '.[0] * .[1]' "$OPENCODE_BASE_CONFIG" <(echo "$overlay_json") > "$OPENCODE_DIR/opencode.json" + echo "Generated: $OPENCODE_DIR/opencode.json" +} + +# --------------------------------------------------------------------------- +# Main +# --------------------------------------------------------------------------- +prepare_settings_json +generate_claude +generate_codex +generate_opencode + +echo "" +echo "Done." diff --git a/install.sh b/install.sh index 0db7870..9cadb3f 100755 --- a/install.sh +++ b/install.sh @@ -31,13 +31,13 @@ echo "Target: $CLAUDE_DIR" echo "" # Pre-flight: build fresh generated outputs before proceeding. -if [ ! -f "$SCRIPT_DIR/generate.py" ]; then - echo "Error: generate.py not found." +if [ ! -f "$SCRIPT_DIR/generate.sh" ]; then + echo "Error: generate.sh not found." exit 1 fi echo "Generating fresh artifacts before install..." 
-python "$SCRIPT_DIR/generate.py"
+bash "$SCRIPT_DIR/generate.sh"
 
 # Ensure ~/.claude exists
 mkdir -p "$CLAUDE_DIR"
@@ -289,7 +289,7 @@ if [ -d "$SCRIPT_DIR/codex" ]; then
   if [ -d "$SCRIPT_DIR/codex/agents" ]; then
     create_symlink "$SCRIPT_DIR/codex/agents" "$CODEX_DIR/agents" "codex agents"
   else
-    echo "Run ./generate.py first to generate Codex agent definitions"
+    echo "Run ./generate.sh first to generate Codex agent definitions"
   fi
 
   # Generated AGENTS.md (symlink to project root for Codex discovery)
@@ -318,7 +318,7 @@ if [ -d "$SCRIPT_DIR/opencode" ]; then
   if [ -d "$SCRIPT_DIR/opencode/agents" ]; then
     create_symlink "$SCRIPT_DIR/opencode/agents" "$OPENCODE_CONFIG_DIR/agents" "opencode agents"
   else
-    echo "Run ./generate.py first to generate OpenCode agent definitions"
+    echo "Run ./generate.sh first to generate OpenCode agent definitions"
   fi
 
   # Generated AGENTS.md
diff --git a/opencode/config.json b/opencode/config.json
index c50afc8..bd76223 100644
--- a/opencode/config.json
+++ b/opencode/config.json
@@ -13,7 +13,7 @@
       "name": "Qwen3-Coder-30B-A3B-Instruct-Q6",
       "limit": {
         "context": 262144,
-        "output": 8192
+        "output": 262144
       },
       "cost": {
         "input": 0,
diff --git a/opencode/skills b/opencode/skills
new file mode 120000
index 0000000..42c5394
--- /dev/null
+++ b/opencode/skills
@@ -0,0 +1 @@
+../skills
\ No newline at end of file
diff --git a/opencode/skills/conventions b/opencode/skills/conventions
deleted file mode 120000
index ac94a46..0000000
--- a/opencode/skills/conventions
+++ /dev/null
@@ -1 +0,0 @@
-../../skills/conventions
\ No newline at end of file
diff --git a/opencode/skills/message-schema b/opencode/skills/message-schema
deleted file mode 120000
index 01d79a7..0000000
--- a/opencode/skills/message-schema
+++ /dev/null
@@ -1 +0,0 @@
-../../skills/message-schema
\ No newline at end of file
diff --git a/opencode/skills/qa-checklist b/opencode/skills/qa-checklist
deleted file mode 120000
index d152b69..0000000
--- a/opencode/skills/qa-checklist
+++ /dev/null
@@ -1 +0,0 @@
-../../skills/qa-checklist
\ No newline at end of file
diff --git a/opencode/skills/worker-protocol b/opencode/skills/worker-protocol
deleted file mode 120000
index e476a2c..0000000
--- a/opencode/skills/worker-protocol
+++ /dev/null
@@ -1 +0,0 @@
-../../skills/worker-protocol
\ No newline at end of file
diff --git a/rules/02-responses.md b/rules/02-responses.md
new file mode 100644
index 0000000..2dbbdf1
--- /dev/null
+++ b/rules/02-responses.md
@@ -0,0 +1,6 @@
+# Responses & Explanations
+
+- Be concise — lead with the action or answer, not the preamble
+- Include just enough reasoning to explain *why* a decision was made, not a full walkthrough
+- Skip trailing summaries ("Here's what I did...") — the diff speaks for itself
+- No emojis unless explicitly asked
diff --git a/rules/04-tools.md b/rules/04-tools.md
index 66a6cbf..53be043 100644
--- a/rules/04-tools.md
+++ b/rules/04-tools.md
@@ -20,3 +20,14 @@
 
 - Commonly run development workflows MUST be wired into `just` recipes as the user-facing entrypoints
 - Temporary artifacts created during work MUST be cleaned up before completion unless the user explicitly asked to keep them
+# Parallelism
+
+- Always parallelize independent work — tool calls, file reads, searches
+- When a task has components that don't depend on each other, run them concurrently by default
+- Sequential execution is allowed only when required by dependencies or operational constraints (tool/runtime limits, contention, staged validation)
+
+# Context Management
+
+- Use subagents for exploratory reads and investigations to keep the main context clean
+- Use scoped file reads (offset/limit) over reading entire large files
+- When a task is complete or the topic shifts significantly, suggest clearing context or starting a new session
diff --git a/rules/05-verification.md b/rules/05-verification.md
index c0f91df..4965eeb 100644
--- a/rules/05-verification.md
+++ b/rules/05-verification.md
@@ -1,5 +1,6 @@
 # Verification
 
+- After making changes, run relevant tests or build commands to verify correctness before reporting success
 - If no tests exist for the changed code, say so rather than silently assuming it works
 - Run single targeted tests by default; run the full suite when requested or when targeted coverage is insufficient
 
diff --git a/schemas/team.schema.json b/schemas/team.schema.json
index 0fbd268..0d9668c 100644
--- a/schemas/team.schema.json
+++ b/schemas/team.schema.json
@@ -470,6 +470,7 @@
         "uniqueItems": true,
         "const": [
           "01-session",
+          "02-responses",
           "03-git",
           "04-tools",
           "05-verification",
@@ -481,6 +482,7 @@
         "additionalProperties": false,
         "required": [
           "01-session",
+          "02-responses",
           "03-git",
           "04-tools",
           "05-verification",
@@ -497,6 +499,16 @@
         }
       ]
     },
+    "02-responses": {
+      "allOf": [
+        { "$ref": "#/$defs/rule_item" },
+        {
+          "properties": {
+            "id": { "const": "02-responses" }
+          }
+        }
+      ]
+    },
    "03-git": {
       "allOf": [
         { "$ref": "#/$defs/rule_item" },
diff --git a/skills/orchestrate/SKILL.md b/skills/orchestrate/SKILL.md
index 0cb13d8..c743034 100644
--- a/skills/orchestrate/SKILL.md
+++ b/skills/orchestrate/SKILL.md
@@ -10,19 +10,17 @@
 You are now acting as orchestrator. Decompose, delegate, validate, deliver.
Neve ``` You (orchestrator) - ├── grunt — trivial, cheap implementer - ├── worker — standard implementer - ├── senior — ambiguous, architectural, or high-risk implementer - ├── debugger — bug diagnosis and minimal fixes - ├── documenter — documentation only, never touches source - ├── researcher — one per topic, parallel fact-finding - ├── architect — triage, research coordination, architecture, wave decomposition - ├── reviewer — code quality + AC verification + claim checking - └── auditor — security analysis + runtime validation + ├── grunt (haiku) — trivial, cheap implementer + ├── worker (sonnet) — standard implementer + ├── senior (opus) — ambiguous, architectural, or high-risk implementer + ├── debugger (sonnet) — bug diagnosis and minimal fixes + ├── documenter (sonnet) — documentation only, never touches source + ├── researcher (sonnet) — one per topic, parallel fact-finding + ├── architect (opus, effort: max) — triage, research coordination, architecture, wave decomposition + ├── reviewer (sonnet) — code quality + AC verification + claim checking + └── auditor (sonnet, background) — security analysis + runtime validation ``` -Models and effort levels are pinned per-agent in each harness's config. Pick agents by role; the harness handles model selection. - --- ## Task tiers @@ -203,7 +201,9 @@ When multiple risk tags are present, take the union. Spawn all required reviewer ### Permission model -Each agent declares its allowed tools in its frontmatter — read-only agents (architect, researcher, reviewer, auditor) cannot write, edit, or run shell commands because those tools are denied at the agent level, not gated by a runtime mode. Trust the per-agent tool restrictions as the real safety boundary. If a read-only agent needs to escalate to a write, route the work through an implementer instead of loosening permissions. 
+Agent `permissionMode` in frontmatter is overridden when the parent (you, the orchestrator) runs in `acceptEdits` or `bypassPermissions` mode — the child inherits the parent's mode. This means `permissionMode: plan` on read-only agents like architect, researcher, and reviewer is **not enforced at runtime**. + +The actual write protection for read-only agents comes from `disallowedTools: Write, Edit` — this is enforced regardless of permission mode. Do not rely on `permissionMode` as a safety boundary; rely on tool restrictions. ### Parallelism mandate
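
A note on the `jq -s '.[0] * .[1]'` merge that `generate.sh` uses to build `opencode.json` above: jq's `*` operator deep-merges objects recursively, while scalars and arrays from the right-hand input replace the left-hand values wholesale. A minimal sketch with hypothetical keys (not the real base or overlay contents):

```shell
# Illustrates jq's '*' deep-merge as used for opencode.json.
# The keys below are hypothetical, not the actual config contents.
base='{"permission":{"edit":"allow","bash":{"*":"allow"}},"theme":"dark"}'
overlay='{"permission":{"edit":"ask"}}'

# -s slurps both JSON documents into an array; .[0] * .[1] merges them,
# with the right-hand (overlay) value winning wherever keys collide.
printf '%s\n%s\n' "$base" "$overlay" | jq -cs '.[0] * .[1]'
# -> {"permission":{"edit":"ask","bash":{"*":"allow"}},"theme":"dark"}
```

Because only colliding keys are overridden, the overlay only needs to carry the settings it changes; everything else in the base config passes through untouched.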