Compare commits

...

4 commits

Author SHA1 Message Date
Bryan Ramos
d6e0e9f2d9 smaller output capacity in order to maintain strong tok/s gen speed 2026-04-15 08:59:11 -04:00
Bryan Ramos
9d5559e2b4 switched models 2026-04-15 08:54:42 -04:00
Bryan Ramos
590145c714 refactor(generate): port generate.sh to Python
Replace 960-line bash script with ~680-line Python module that leans on
pyyaml and jsonschema instead of shelling out to yq/jq/awk/sed/envsubst for every
field. Ecosystem dependencies pinned through the existing flake pythonEnv.

Motivation: the generator had outgrown bash. Recent bugs (awk frontmatter
state machine eating markdown ---, envsubst variable scope, shell quoting in
nested heredocs, multi-section rule surgery) were all classic bash pitfalls
that don't exist in Python.

Design notes:
- Uses pyyaml for TEAM.yaml / SETTINGS.yaml parsing.
- Uses jsonschema to validate both inside the generator (previously only in
  flake.nix's embedded Python block).
- Does NOT use python-frontmatter because its content-stripping drops
  leading blank lines that matter for byte-level parity with bash output.
  Replaced with a 6-line fence-split that preserves whitespace exactly.
- Does NOT use tomli-w because it can't emit multiline-basic-string
  (`"""..."""`) literals — it would escape every newline in the
  developer_instructions body onto a single line, destroying readability.
  Codex TOML output is hand-built with a documented comment.
- Opencode skill pool now symlinks per-skill based on applies_to instead
  of a blanket symlink, honoring TEAM.yaml's skill filtering.
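The tomli-w trade-off above is easiest to see side by side. A minimal illustrative sketch — not the generator's code, and the `body` value is made up — of what single-line escaping does to a multiline instruction body versus a hand-built multiline basic string:

```python
# Hypothetical instruction body containing a newline and an embedded quote.
body = 'Line one.\nLine two with a "quote".'

# Single-line basic string: every newline and quote must be escaped,
# collapsing the body onto one unreadable line.
escaped = 'developer_instructions = "%s"' % (
    body.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")
)

# Hand-built multiline basic string: the body stays human-readable
# and diffs cleanly.
multiline = 'developer_instructions = """\n%s\n"""' % body
```

The escaped form contains no real newlines at all, while the multiline form preserves the body byte for byte between the `"""` fences.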

Verified: snapshotted generated outputs before the port and diffed after.
All of claude/, codex/, opencode/ are byte-identical to baseline except
claude/settings.json, which now uses json.dumps(indent=2) multi-line arrays
instead of hand-built compact arrays — confirmed semantically identical via
json.load comparison.
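The settings.json check described above amounts to a parse-level comparison: two files are accepted as equivalent when they parse to the same value, regardless of array layout or indentation. A small sketch of that idea (hypothetical helper, not part of the repo):

```python
import json

def semantically_identical(path_a: str, path_b: str) -> bool:
    """True when two JSON files parse to equal values, ignoring
    formatting differences such as indent=2 vs compact arrays."""
    with open(path_a) as a, open(path_b) as b:
        return json.load(a) == json.load(b)
```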

flake.nix, install.sh, README.md, .gitignore updated to reference
generate.py instead of generate.sh. generate.sh deleted.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 08:34:52 -04:00
Bryan Ramos
26d004fe46 refactor(sources): trim redundant rules, clean up agent sources, harness-neutral orchestrate
- Drop rules/02-responses.md entirely: fully redundant with every harness's
  built-in system prompt (concise/no-preamble/no-emoji is baked in).
- Trim 04-tools.md's Parallelism and Context Management sections; trim
  05-verification.md's "run tests" bullet. All covered by harness defaults.
- Scope 01-session.md to claude only (memory/ hierarchy is Claude-specific).
- Update schemas/team.schema.json const-pin to match the new rules.order.
- Strip vestigial Claude-style YAML frontmatter from agents/*.md sources
  (extract_body was already discarding it; TEAM.yaml is the real source).
- Standardize plans/ path: drop the `${PLANS_DIR}` template var and use literal
  plans/ everywhere. Claude/codex/opencode now share one plans convention.
- Rewrite orchestrate skill team block and permission section to be
  harness-neutral: drop Claude model parentheticals and permissionMode /
  disallowedTools terminology.
- Rewrite architect agent's "no Bash execution" line generically to avoid
  naming Claude-specific tool identifiers in prose.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 08:34:52 -04:00
27 changed files with 716 additions and 1148 deletions

2
.gitignore vendored
View file

@@ -9,7 +9,7 @@ settings.local.json
 .DS_Store
 Thumbs.db
-# Generated output (derived from source templates via generate.sh)
+# Generated output (derived from source templates via generate.py)
 settings.json
 claude/
 codex/

View file

@@ -12,7 +12,7 @@ nix run .#check # validate protocols + generate artifacts
 nix run .#install # install generated outputs into the supported target config dirs
 ```
-The supported user-facing entrypoints are the flake apps and the `just` wrapper. `generate.sh` and `install.sh` remain the internal implementation layer behind them. Works on Linux, macOS, and Windows (Git Bash).
+The supported user-facing entrypoints are the flake apps and the `just` wrapper. `generate.py` and `install.sh` remain the internal implementation layer behind them. Works on Linux, macOS, and Windows (Git Bash).
 ## Nix entrypoints
@@ -36,7 +36,7 @@ just install
 just clean # removes generated artifacts: settings.json + claude/ + codex/
 ```
-`generate.sh` and `install.sh` are kept as internal implementation details for portability and debugging, but they are no longer the primary documented workflow.
+`generate.py` and `install.sh` are kept as internal implementation details for portability and debugging, but they are no longer the primary documented workflow.
 ## Maintenance
@@ -107,7 +107,7 @@ This repo uses two authored protocol files:
 Long-form instructions remain authored in Markdown (`agents/*.md`, `skills/*/SKILL.md`, `rules/*.md`).
-Runtime policy is documented in [spec/agent-runtime-v1.md](spec/agent-runtime-v1.md) and described by [schemas/agent-runtime.schema.json](schemas/agent-runtime.schema.json). Team inventory is documented in [spec/team-protocol-v1.md](spec/team-protocol-v1.md). `generate.sh` derives target-specific outputs for the currently supported adapters.
+Runtime policy is documented in [spec/agent-runtime-v1.md](spec/agent-runtime-v1.md) and described by [schemas/agent-runtime.schema.json](schemas/agent-runtime.schema.json). Team inventory is documented in [spec/team-protocol-v1.md](spec/team-protocol-v1.md). `generate.py` derives target-specific outputs for the currently supported adapters.
 ### What gets generated
@@ -226,7 +226,7 @@ safety:
 ## Template variables
-Agent body text uses `${VAR}` placeholders that are expanded per-target by `generate.sh`:
+Agent body text uses `${VAR}` placeholders that are expanded per-target by `generate.py`:
 | Variable | Claude adapter | Codex adapter |
 |---|---|---|

View file

@@ -249,7 +249,6 @@
     applies_to:
       - claude
       - codex
-      - opencode
     install_mode: shared
   qa-checklist:
     id: qa-checklist
@@ -275,7 +274,6 @@
 rules:
   order:
     - 01-session
-    - 02-responses
     - 03-git
     - 04-tools
     - 05-verification
@@ -286,15 +284,6 @@
     source_file: rules/01-session.md
     applies_to:
       - claude
-      - codex
-      - opencode
-  02-responses:
-    id: 02-responses
-    source_file: rules/02-responses.md
-    applies_to:
-      - claude
-      - codex
-      - opencode
   03-git:
     id: 03-git
     source_file: rules/03-git.md

View file

@@ -1,24 +1,11 @@
----
-name: architect
-description: Research-first planning agent. Handles triage, research coordination, architecture design, and wave decomposition. Use before any non-trivial implementation task. Produces the implementation blueprint the entire team follows.
-model: opus
-effort: max
-permissionMode: plan
-tools: Read, Glob, Grep, WebFetch, WebSearch, Write
-disallowedTools: Edit
-maxTurns: 35
-skills:
-  - conventions
-  - message-schema
----
 You are an architect. You handle the full planning pipeline: triage, architecture design, and wave decomposition. Workers implement exactly what you specify — get it right before anyone writes a line of code.
 Never implement anything. Never modify source files. Analyze, evaluate, plan.
-**Plan persistence:** Always write the approved plan to `${PLANS_DIR}/<kebab-case-title>.md`. Never return the plan inline without writing it first. Check whether a plan file already exists before writing — if it does, continue from it.
+**Plan persistence:** Always write the approved plan to `plans/<kebab-case-title>.md`. Never return the plan inline without writing it first. Check whether a plan file already exists before writing — if it does, continue from it.
-**Write boundary:** You have write capability only so you can persist plan files. This is not path-enforced by tooling. You must treat writes outside `${PLANS_DIR}/` as forbidden.
+**Write boundary:** You have write capability only so you can persist plan files. This is not path-enforced by tooling. You must treat writes outside `plans/` as forbidden.
 Frontmatter format:
 ```
@@ -30,7 +17,7 @@ status: active
 ---
 ```
-**No Bash execution:** perform repository inspection with Read/Glob/Grep/WebFetch/WebSearch only.
+**No shell execution:** perform repository inspection with read-only tools (file reads, code search, ${WEB_SEARCH}) — never run commands.
 ---
@@ -105,7 +92,7 @@ After writing the plan file, return a `plan_result` envelope:
 ---
 type: plan_result
 signal: plan_complete | blocked
-plan_file: ${PLANS_DIR}/kebab-case-title.md
+plan_file: plans/kebab-case-title.md
 wave_count: 3
 step_count: 7
 risk_tags:

View file

@@ -1,17 +1,3 @@
----
-name: auditor
-description: Use after implementation — audits for security vulnerabilities and validates runtime behavior. Builds, tests, and probes acceptance criteria. Never modifies code.
-model: sonnet
-background: true
-permissionMode: acceptEdits
-tools: Read, Glob, Grep, Bash, WebFetch, WebSearch
-disallowedTools: Write, Edit
-maxTurns: 25
-skills:
-  - conventions
-  - message-schema
-  - qa-checklist
----
 You are an auditor. You do two things: security analysis and runtime validation. Never write, edit, or fix code — only identify, validate, and report.

View file

@@ -1,16 +1,3 @@
----
-name: debugger
-description: Use immediately when encountering a bug, error, or unexpected behavior. Diagnoses root cause and applies a minimal targeted fix. Does not refactor or improve surrounding code.
-model: sonnet
-permissionMode: acceptEdits
-tools: Read, Write, Edit, Glob, Grep, Bash
-maxTurns: 20
-skills:
-  - conventions
-  - worker-protocol
-  - message-schema
-  - qa-checklist
----
 You are a debugger. Your job is to find the root cause of a bug and apply the minimal fix. You do not refactor, improve, or clean up surrounding code — only fix what is broken.

View file

@@ -1,18 +1,3 @@
----
-name: documenter
-description: Use when asked to write or update documentation — READMEs, API references, architecture overviews, inline doc comments, or changelogs. Reads code first and updates documentation artifacts only.
-model: sonnet
-effort: high
-memory: project
-permissionMode: acceptEdits
-tools: Read, Write, Edit, Glob, Grep
-maxTurns: 20
-skills:
-  - conventions
-  - worker-protocol
-  - message-schema
-  - qa-checklist
----
 You are a documentation specialist. Your job is to read code and produce accurate, well-structured documentation. You only modify documentation artifacts, and must not change runtime behavior.

View file

@@ -1,17 +1,3 @@
----
-name: grunt
-description: Fast, cheap implementer for trivial and tightly scoped work. Use for one-liners, small renames, simple edits, and low-risk mechanical tasks. Escalate when the work grows beyond that scope.
-model: haiku
-permissionMode: acceptEdits
-isolation: worktree
-tools: Read, Write, Edit, Glob, Grep, Bash
-maxTurns: 15
-skills:
-  - conventions
-  - worker-protocol
-  - message-schema
-  - qa-checklist
----
 You are a grunt agent. You implement small, explicit tasks quickly and cheaply.

View file

@@ -1,14 +1,3 @@
----
-name: researcher
-description: Use to answer a specific research question with verified facts. Spawned in parallel — one instance per topic. Stateless. Returns verified facts, source URLs, and gotchas.
-model: sonnet
-permissionMode: plan
-tools: Read, Glob, Grep, WebFetch, WebSearch
-disallowedTools: Write, Edit
-maxTurns: 10
-skills:
-  - message-schema
----
 You are a researcher. You answer one specific research question with verified facts. You never implement, plan, or make architectural decisions — you find and verify information.

View file

@@ -1,16 +1,3 @@
----
-name: reviewer
-description: Use after implementation — reviews code quality and verifies claims against source, docs, and acceptance criteria. Never modifies code.
-model: sonnet
-permissionMode: plan
-tools: Read, Glob, Grep, WebFetch, WebSearch
-disallowedTools: Write, Edit
-maxTurns: 20
-skills:
-  - conventions
-  - message-schema
-  - qa-checklist
----
 You are a reviewer. You do two things in one pass: quality review and claim verification. Never write, edit, or fix code — only flag and explain.

View file

@@ -1,17 +1,3 @@
----
-name: senior
-description: Strong implementer for ambiguous, architectural, or high-risk work. Use when the task spans multiple files, requires careful judgment, or has already failed in a cheaper worker. Default escalation path for hard implementation work.
-model: opus
-permissionMode: acceptEdits
-isolation: worktree
-tools: Read, Write, Edit, Glob, Grep, Bash
-maxTurns: 35
-skills:
-  - conventions
-  - worker-protocol
-  - message-schema
-  - qa-checklist
----
 You are a senior agent. You implement difficult or ambiguous tasks with strong technical judgment.

View file

@@ -1,17 +1,3 @@
----
-name: worker
-description: Balanced implementer for standard development work. Use when the task is well-defined but not trivial. Escalate upward for architectural ambiguity and downward for tiny mechanical changes.
-model: sonnet
-permissionMode: acceptEdits
-isolation: worktree
-tools: Read, Write, Edit, Glob, Grep, Bash
-maxTurns: 25
-skills:
-  - conventions
-  - worker-protocol
-  - message-schema
-  - qa-checklist
----
 You are a worker agent. You implement standard development tasks. Your orchestrator may resume you to iterate on feedback or continue related work.

View file

@@ -13,6 +13,7 @@
 gettext
 jq
 just
+(python3.withPackages (ps: with ps; [ pyyaml jsonschema ]))
 ];
 };
 });
@@ -30,7 +31,7 @@
 validateCmd = ''
 # Script syntax checks
-${bashBin} -n ./generate.sh
+python -c "import ast; ast.parse(open('./generate.py').read())"
 ${bashBin} -n ./install.sh
 # Protocol file presence checks
@@ -94,8 +95,8 @@
 type = "app";
 program = "${mkAppScript "build" ''
 set -euo pipefail
-test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
-${bashBin} ./generate.sh
+test -f ./generate.py || { echo "Run this command from the repository root."; exit 1; }
+python ./generate.py
 ''}/bin/build";
 meta.description = "Generate Claude, Codex, and OpenCode build artifacts from the authored protocol files.";
 };
@@ -104,7 +105,7 @@
 type = "app";
 program = "${mkAppScript "validate" ''
 set -euo pipefail
-test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
+test -f ./generate.py || { echo "Run this command from the repository root."; exit 1; }
 ${validateCmd}
 ''}/bin/validate";
 meta.description = "Validate scripts and protocol files.";
@@ -114,9 +115,9 @@
 type = "app";
 program = "${mkAppScript "check" ''
 set -euo pipefail
-test -f ./generate.sh || { echo "Run this command from the repository root."; exit 1; }
+test -f ./generate.py || { echo "Run this command from the repository root."; exit 1; }
 ${validateCmd}
-${bashBin} ./generate.sh
+python ./generate.py
 ''}/bin/check";
 meta.description = "Run validation and generation together.";
 };
@@ -145,7 +146,7 @@
 bashBin = "${pkgs.bash}/bin/bash";
 validateCmd = ''
-${bashBin} -n ./generate.sh
+python -c "import ast; ast.parse(open('./generate.py').read())"
 ${bashBin} -n ./install.sh
 test -f ./SETTINGS.yaml
 test -f ./TEAM.yaml
@@ -209,7 +210,7 @@
 build = mkCheck "agent-team-build-check" ''
 set -euxo pipefail
 ${validateCmd}
-${bashBin} ./generate.sh
+python ./generate.py
 '';
 });
 };

676
generate.py Executable file
View file

@@ -0,0 +1,676 @@
#!/usr/bin/env python3
"""Generate Claude, Codex, and OpenCode build artifacts from TEAM.yaml + SETTINGS.yaml.
Ports generate.sh to Python. Ecosystem dependencies:

  * pyyaml     - YAML parsing
  * jsonschema - schema validation for SETTINGS.yaml / TEAM.yaml

Agent source files in agents/*.md are the single source of truth; this script
derives tool-specific equivalents for each harness. Template variables in
agent bodies are expanded via string.Template:

  ${WEB_SEARCH}   - how web search is referenced
  ${SEARCH_TOOLS} - how codebase search tools are referenced

Idempotent: safe to run multiple times.
"""
from __future__ import annotations
import json
import shutil
import sys
from pathlib import Path
from string import Template
from typing import Any
import yaml
from jsonschema import validate
# NOTE: TOML output (Codex) is hand-built rather than generated via tomli_w
# because tomli_w does not emit multiline-basic-string (`"""..."""`) literals,
# which would force every embedded quote/newline in a developer_instructions
# body to be escaped onto a single line — unreadable for humans and diff tools.
# ---------------------------------------------------------------------------
# Paths
# ---------------------------------------------------------------------------
SCRIPT_DIR = Path(__file__).resolve().parent
TEAM_YAML = SCRIPT_DIR / "TEAM.yaml"
SETTINGS_SHARED_YAML = SCRIPT_DIR / "SETTINGS.yaml"
SETTINGS_JSON = SCRIPT_DIR / "settings.json"
CLAUDE_MD_SRC = SCRIPT_DIR / "CLAUDE.md"
TEAM_SCHEMA = SCRIPT_DIR / "schemas" / "team.schema.json"
SETTINGS_SCHEMA = SCRIPT_DIR / "schemas" / "agent-runtime.schema.json"
CLAUDE_DIR = SCRIPT_DIR / "claude"
CLAUDE_AGENTS_DIR = CLAUDE_DIR / "agents"
CODEX_DIR = SCRIPT_DIR / "codex"
CODEX_AGENTS_DIR = CODEX_DIR / "agents"
OPENCODE_DIR = SCRIPT_DIR / "opencode"
OPENCODE_AGENTS_DIR = OPENCODE_DIR / "agents"
OPENCODE_BASE_CONFIG = OPENCODE_DIR / "config.json"
OPENCODE_SKILLS_DIR = OPENCODE_DIR / "skills"
ORCHESTRATE_SKILL = SCRIPT_DIR / "skills" / "orchestrate" / "SKILL.md"
OPENCODE_MODEL_ID = "llama-stack/llamacpp/Qwen3-Coder-30B-A3B-Instruct-Q6_K"
# ---------------------------------------------------------------------------
# Template variable values per target
# ---------------------------------------------------------------------------
CLAUDE_VARS = {
    "WEB_SEARCH": "via WebFetch/WebSearch",
    "SEARCH_TOOLS": "Use Grep/Glob/Read",
}
CODEX_VARS = {
    "WEB_SEARCH": "via web search",
    "SEARCH_TOOLS": "Search the codebase",
}
OPENCODE_VARS = dict(CODEX_VARS)
# ---------------------------------------------------------------------------
# Utilities
# ---------------------------------------------------------------------------
def log(msg: str) -> None:
    print(msg, flush=True)


def load_body(path: Path) -> str:
    """Return the markdown body of a file, skipping YAML frontmatter if present.

    We intentionally do NOT rely on python-frontmatter's content stripping,
    because some agent bodies begin with a blank line that must be preserved
    for downstream parity with the bash output. We detect frontmatter by
    checking whether the first line is "---", then skip up to the next "---".
    """
    raw = path.read_text()
    if not raw.startswith("---\n"):
        return raw
    # Find the closing fence after position 4.
    idx = raw.find("\n---\n", 4)
    if idx == -1:
        # Malformed — return as-is.
        return raw
    return raw[idx + len("\n---\n"):]


def expand(body: str, variables: dict[str, str]) -> str:
    return Template(body).safe_substitute(variables)


def replace_symlink(link: Path, target: Path) -> None:
    """Create or replace a relative symlink at `link` pointing to `target`."""
    if link.is_symlink() or link.exists():
        if link.is_symlink() or link.is_file():
            link.unlink()
        else:
            shutil.rmtree(link)
    link.symlink_to(target)
import re

_BARE_YAML_SCALAR = re.compile(r"^[A-Za-z_][A-Za-z0-9_.\-]*$")


def dump_yaml_scalar_block(fields: dict[str, Any]) -> str:
    """Dump a dict as YAML block-style, preserving key order.

    Mirrors generate.sh's output style: top-level string scalars are
    single-quoted; list items that look like bare identifiers stay unquoted;
    ints and bools render unquoted.
    """
    lines: list[str] = []
    for key, value in fields.items():
        if value is None:
            continue
        if isinstance(value, bool):
            lines.append(f"{key}: {'true' if value else 'false'}")
        elif isinstance(value, int):
            lines.append(f"{key}: {value}")
        elif isinstance(value, list):
            lines.append(f"{key}:")
            for item in value:
                lines.append(f" - {_yaml_list_item(str(item))}")
        elif isinstance(value, dict):
            lines.append(f"{key}:")
            for k, v in value.items():
                lines.append(f" {k}: {_yaml_single_quoted(str(v))}")
        else:
            lines.append(f"{key}: {_yaml_single_quoted(str(value))}")
    return "\n".join(lines)


def _yaml_single_quoted(s: str) -> str:
    """YAML 1.2 single-quoted scalar: double any embedded apostrophes."""
    return "'" + s.replace("'", "''") + "'"


def _yaml_list_item(s: str) -> str:
    """List items stay unquoted when they're bare identifiers, matching bash output."""
    if _BARE_YAML_SCALAR.match(s):
        return s
    return _yaml_single_quoted(s)


def _assemble_markdown(frontmatter_text: str, body: str) -> str:
    """Assemble frontmatter + body the same way bash's heredoc did.

    Bash did: echo "---"; echo ""; echo "$body" so output after the closing
    fence is "\\n<body>\\n" (an explicit blank line, then the body, then echo's
    trailing newline). Source bodies also begin with a blank line of their
    own, so the visible framing is: fence, blank, blank, content.
    """
    return "---\n" + frontmatter_text + "\n---\n\n" + body
# ---------------------------------------------------------------------------
# Shared mappings
# ---------------------------------------------------------------------------
def model_class_to_claude(cls: str) -> str:
    return {"fast": "haiku", "powerful": "opus", "balanced": "sonnet"}.get(cls, "sonnet")


def approval_intent_to_codex(intent: str) -> str:
    return {
        "manual": "on-request",
        "full-auto": "never",
        "guarded-auto": "untrusted",
    }.get(intent, "untrusted")


def filesystem_intent_to_claude_mode(fs: str) -> str:
    return {"read-only": "plan", "workspace-write": "acceptEdits"}.get(fs, "acceptEdits")


def portable_tool_to_claude(tool: str) -> str:
    return {
        "shell": "Bash",
        "read": "Read",
        "edit": "Edit",
        "write": "Write",
        "glob": "Glob",
        "grep": "Grep",
        "web_fetch": "WebFetch",
        "web_search": "WebSearch",
    }.get(tool, tool)


def claude_model_for_agent(agent: dict) -> str:
    return agent["model"]


def codex_model_for_agent(agent: dict) -> str:
    return {
        "opus": "gpt-5.4",
        "sonnet": "gpt-5.3-codex",
        "haiku": "gpt-5.1-codex-mini",
    }.get(agent["model"], "gpt-5.3-codex")


def codex_effort_for_agent(agent: dict) -> str:
    effort = agent.get("effort") or "medium"
    return {"low": "low", "medium": "medium", "high": "high", "max": "xhigh"}.get(effort, "medium")
def codex_sandbox_for_agent(agent: dict, codex_override: str | None) -> str:
    if codex_override:
        return codex_override
    if agent.get("permission_mode") == "plan":
        return "read-only"
    if agent.get("permission_mode") == "acceptEdits":
        tools = agent.get("tools") or []
        if "Write" in tools or "Edit" in tools:
            return "workspace-write"
        return "read-only"
    # Explicit fallback so unrecognized permission modes never yield None;
    # mirrors codex_default_sandbox's default.
    return "workspace-write"
def codex_default_sandbox(default_mode: str, override: str | None) -> str:
    if override:
        return override
    return {"plan": "read-only", "acceptEdits": "workspace-write"}.get(default_mode, "workspace-write")


def codex_approval_policy(runtime_approval: str, override: str | None) -> str:
    if override:
        return override
    return approval_intent_to_codex(runtime_approval)


def opencode_temperature_for_agent(agent: dict) -> float:
    """Map agent role to opencode temperature per opencode's own guidance.

      0.0-0.2  analytical/planning
      0.3-0.5  general development
    """
    if agent.get("permission_mode") == "plan":
        return 0.1
    tools = set(agent.get("tools") or [])
    disallowed = set(agent.get("disallowed_tools") or [])
    can_write = "Write" in tools and "Write" not in disallowed
    can_edit = "Edit" in tools and "Edit" not in disallowed
    if not can_write and not can_edit:
        return 0.1
    return 0.3


def opencode_permission_block(agent: dict) -> dict[str, str]:
    tools = set(agent.get("tools") or [])
    disallowed = set(agent.get("disallowed_tools") or [])

    def allowed(name: str) -> bool:
        return name in tools and name not in disallowed

    return {
        "edit": "allow" if allowed("Edit") else "deny",
        "write": "allow" if allowed("Write") else "deny",
        "bash": "allow" if allowed("Bash") else "deny",
        "webfetch": "allow" if (allowed("WebFetch") or allowed("WebSearch")) else "deny",
    }
# ---------------------------------------------------------------------------
# Validation
# ---------------------------------------------------------------------------
def validate_protocol_files(team: dict, settings: dict) -> None:
    validate(instance=settings, schema=json.loads(SETTINGS_SCHEMA.read_text()))
    validate(instance=team, schema=json.loads(TEAM_SCHEMA.read_text()))
    for agent_id in team["agents"]["order"]:
        path = SCRIPT_DIR / team["agents"]["items"][agent_id]["instruction_file"]
        if not path.is_file():
            raise FileNotFoundError(f"Missing agent instruction file: {path}")
    for skill_id in team["skills"]["order"]:
        path = SCRIPT_DIR / team["skills"]["items"][skill_id]["instruction_file"]
        if not path.is_file():
            raise FileNotFoundError(f"Missing skill instruction file: {path}")
    for rule_id in team["rules"]["order"]:
        path = SCRIPT_DIR / team["rules"]["items"][rule_id]["source_file"]
        if not path.is_file():
            raise FileNotFoundError(f"Missing rule source file: {path}")
# ---------------------------------------------------------------------------
# Legacy settings.json
# ---------------------------------------------------------------------------
def generate_legacy_settings_json(settings: dict) -> None:
    model_class = settings["model"]["class"]
    reasoning = settings["model"]["reasoning"]
    fs = settings["runtime"]["filesystem"]
    approval = settings["runtime"]["approval"]
    claude_model = model_class_to_claude(model_class)
    claude_mode = filesystem_intent_to_claude_mode(fs)
    codex_target = settings.get("targets", {}).get("codex", {}) or {}
    codex_approval = codex_target.get("approval_policy") or approval_intent_to_codex(approval)
    codex_network = codex_target.get("network_access", settings["runtime"].get("network_access", False))
    allow = [portable_tool_to_claude(t) for t in settings["runtime"].get("tools", [])]
    deny: list[str] = []
    for path in settings.get("safety", {}).get("protected_paths", []):
        deny.extend([f"Read({path})", f"Write({path})", f"Edit({path})"])
    ask = [f"Bash({cmd})" for cmd in settings.get("safety", {}).get("dangerous_shell_commands", {}).get("ask", [])]
    claude_target = settings.get("targets", {}).get("claude", {}) or {}
    claude_md_excludes = claude_target.get("claude_md_excludes", [".claude/agent-memory/**"])
    payload: dict[str, Any] = {
        "$schema": "https://json.schemastore.org/claude-code-settings.json",
        "attribution": {"commit": "", "pr": ""},
        "permissions": {
            "allow": allow,
            "deny": deny,
            "ask": ask,
            "defaultMode": claude_mode,
        },
        "model": claude_model,
        "effortLevel": reasoning,
        "codex": {
            "approvalPolicy": codex_approval,
            "networkAccess": codex_network,
        },
        "claudeMdExcludes": claude_md_excludes,
    }
    SETTINGS_JSON.write_text(json.dumps(payload, indent=2) + "\n")
# ---------------------------------------------------------------------------
# Claude generator
# ---------------------------------------------------------------------------
def generate_claude(team: dict) -> None:
    log("=== Generating Claude output ===")
    if CLAUDE_DIR.exists():
        shutil.rmtree(CLAUDE_DIR)
    CLAUDE_AGENTS_DIR.mkdir(parents=True)
    shutil.copy(CLAUDE_MD_SRC, CLAUDE_DIR / "CLAUDE.md")
    log(f"Copied: {CLAUDE_DIR / 'CLAUDE.md'}")
    shutil.copy(SETTINGS_JSON, CLAUDE_DIR / "settings.json")
    log(f"Copied: {CLAUDE_DIR / 'settings.json'}")
    replace_symlink(CLAUDE_DIR / "rules", Path("../rules"))
    log(f"Symlinked: {CLAUDE_DIR / 'rules'} -> ../rules")
    replace_symlink(CLAUDE_DIR / "skills", Path("../skills"))
    log(f"Symlinked: {CLAUDE_DIR / 'skills'} -> ../skills")
    for agent_id in team["agents"]["order"]:
        agent = team["agents"]["items"][agent_id]
        src = SCRIPT_DIR / agent["instruction_file"]
        body = expand(load_body(src), CLAUDE_VARS)
        fm: dict[str, Any] = {
            "name": agent["name"],
            "description": agent["description"],
            "model": claude_model_for_agent(agent),
        }
        if agent.get("effort"):
            fm["effort"] = agent["effort"]
        if agent.get("permission_mode"):
            fm["permissionMode"] = agent["permission_mode"]
        fm["tools"] = ", ".join(agent["tools"])
        if agent.get("disallowed_tools"):
            fm["disallowedTools"] = ", ".join(agent["disallowed_tools"])
        if agent.get("background"):
            fm["background"] = True
        if agent.get("memory"):
            fm["memory"] = agent["memory"]
        if agent.get("isolation"):
            fm["isolation"] = agent["isolation"]
        if agent.get("max_turns") is not None:
            fm["maxTurns"] = int(agent["max_turns"])
        if agent.get("skills"):
            fm["skills"] = list(agent["skills"])
        dst = CLAUDE_AGENTS_DIR / f"{agent['name']}.md"
        dst.write_text(_assemble_markdown(dump_yaml_scalar_block(fm), body))
        log(f"Generated: {dst}")
# ---------------------------------------------------------------------------
# Codex generator
# ---------------------------------------------------------------------------
def generate_codex(team: dict, settings: dict) -> None:
    log("")
    log("=== Generating Codex output ===")
    if CODEX_DIR.exists():
        shutil.rmtree(CODEX_DIR)
    CODEX_AGENTS_DIR.mkdir(parents=True)
    replace_symlink(CODEX_DIR / "skills", Path("../skills"))
    log(f"Symlinked: {CODEX_DIR / 'skills'} -> ../skills")
    codex_target = settings.get("targets", {}).get("codex", {}) or {}
    codex_sandbox_override = codex_target.get("sandbox_mode")
    log("Generating Codex agent definitions...")
    for agent_id in team["agents"]["order"]:
        agent = team["agents"]["items"][agent_id]
        src = SCRIPT_DIR / agent["instruction_file"]
        body = expand(load_body(src), CODEX_VARS)
        # Bash's command substitution strips trailing newlines from extract_body
        # before concatenating with the heredoc, so strip ours too for parity.
        body = body.rstrip("\n")
        disallowed = agent.get("disallowed_tools") or []
        if disallowed:
            body = body + "\n\nYou do NOT have access to these tools: " + ", ".join(disallowed)
        if '"""' in body:
            raise ValueError(
                f"agent instruction contains raw triple quotes which break TOML in {src}"
            )
        dst = CODEX_AGENTS_DIR / f"{agent['name']}.toml"
        lines: list[str] = []
        lines.append(f'name = "{agent["name"]}"')
        lines.append(f'description = "{agent["description"]}"')
        lines.append(f'model = "{codex_model_for_agent(agent)}"')
        lines.append(f'model_reasoning_effort = "{codex_effort_for_agent(agent)}"')
        lines.append(f'sandbox_mode = "{codex_sandbox_for_agent(agent, codex_sandbox_override)}"')
        lines.append('developer_instructions = """')
        lines.append(body)
        lines.append('"""')
        agent_skills = set(agent.get("skills") or [])
        for skill_id in team["skills"]["order"]:
            skill = team["skills"]["items"][skill_id]
            if "codex" not in skill.get("applies_to", []):
                continue
            enabled = "true" if skill_id in agent_skills else "false"
            lines.append("[[skills.config]]")
            lines.append(f'path = "../skills/{skill_id}/SKILL.md"')
            lines.append(f"enabled = {enabled}")
            lines.append("")
        dst.write_text("\n".join(lines) + "\n")
        log(f"Generated: {dst}")

    # AGENTS.md
    log("")
    log("Generating codex/AGENTS.md...")
    (CODEX_DIR / "AGENTS.md").write_text(_build_agents_md(team, "codex"))
    log(f"Generated: {CODEX_DIR / 'AGENTS.md'}")

    # config.toml
    log("")
    log("Generating codex/config.toml...")
    default_mode = filesystem_intent_to_claude_mode(settings["runtime"]["filesystem"])
    config_sandbox = codex_default_sandbox(default_mode, codex_sandbox_override)
    config_approval = codex_approval_policy(
        settings["runtime"]["approval"],
        codex_target.get("approval_policy"),
    )
    codex_network = codex_target.get("network_access", settings["runtime"].get("network_access", False))
    config_lines = [
        "#:schema https://developers.openai.com/codex/config-schema.json",
        'model = "gpt-5.3-codex"',
'model_reasoning_effort = "medium"',
f'sandbox_mode = "{config_sandbox}"',
f'approval_policy = "{config_approval}"',
]
if config_sandbox == "workspace-write":
config_lines.append("")
config_lines.append("[sandbox_workspace_write]")
config_lines.append(f"network_access = {'true' if codex_network else 'false'}")
(CODEX_DIR / "config.toml").write_text("\n".join(config_lines) + "\n")
log(f"Generated: {CODEX_DIR / 'config.toml'}")
# ---------------------------------------------------------------------------
# OpenCode generator
# ---------------------------------------------------------------------------
def generate_opencode(team: dict) -> None:
log("")
log("=== Generating OpenCode output ===")
if OPENCODE_AGENTS_DIR.exists():
shutil.rmtree(OPENCODE_AGENTS_DIR)
agents_md = OPENCODE_DIR / "AGENTS.md"
opencode_json = OPENCODE_DIR / "opencode.json"
if agents_md.exists():
agents_md.unlink()
if opencode_json.exists():
opencode_json.unlink()
OPENCODE_AGENTS_DIR.mkdir(parents=True)
# Per-skill symlinks filtered by applies_to
if OPENCODE_SKILLS_DIR.is_symlink() or OPENCODE_SKILLS_DIR.exists():
if OPENCODE_SKILLS_DIR.is_symlink() or OPENCODE_SKILLS_DIR.is_file():
OPENCODE_SKILLS_DIR.unlink()
else:
shutil.rmtree(OPENCODE_SKILLS_DIR)
OPENCODE_SKILLS_DIR.mkdir(parents=True)
for skill_id in team["skills"]["order"]:
skill = team["skills"]["items"][skill_id]
if "opencode" not in skill.get("applies_to", []):
continue
link = OPENCODE_SKILLS_DIR / skill_id
link.symlink_to(Path("../..") / "skills" / skill_id)
log(f"Symlinked: {link} -> ../../skills/{skill_id}")
# Subagents
for agent_id in team["agents"]["order"]:
agent = team["agents"]["items"][agent_id]
src = SCRIPT_DIR / agent["instruction_file"]
body = expand(load_body(src), OPENCODE_VARS)
fm: dict[str, Any] = {
"description": agent["description"],
"mode": "subagent",
"model": OPENCODE_MODEL_ID,
"temperature": opencode_temperature_for_agent(agent),
"steps": int(agent.get("max_turns", 25)),
"permission": opencode_permission_block(agent),
}
dst = OPENCODE_AGENTS_DIR / f"{agent['name']}.md"
dst.write_text(_assemble_markdown(_dump_opencode_frontmatter(fm).rstrip("\n"), body))
log(f"Generated: {dst}")
# Orchestrator primary agent (synthesized from orchestrate skill body)
orchestrate_body = expand(load_body(ORCHESTRATE_SKILL), OPENCODE_VARS)
orchestrator_fm = {
"description": (
"Primary orchestrator. Decomposes complex tasks and dispatches subagents in "
"parallel waves. The default entrypoint for any non-trivial work — never "
"implements directly."
),
"mode": "primary",
"model": OPENCODE_MODEL_ID,
"temperature": 0.1,
"steps": 50,
"permission": {
"edit": "deny",
"write": "deny",
"bash": "deny",
"webfetch": "allow",
"task": {"*": "allow"},
},
}
orchestrator_path = OPENCODE_AGENTS_DIR / "orchestrator.md"
orchestrator_path.write_text(
_assemble_markdown(_dump_opencode_frontmatter(orchestrator_fm).rstrip("\n"), orchestrate_body)
)
log(f"Generated: {orchestrator_path}")
# AGENTS.md
log("")
log("Generating opencode/AGENTS.md...")
agents_md.write_text(_build_agents_md(team, "opencode"))
log(f"Generated: {agents_md}")
# opencode.json — merge base config with generated overlay
log("")
log("Generating opencode/opencode.json...")
if not OPENCODE_BASE_CONFIG.exists():
raise FileNotFoundError(f"missing base config at {OPENCODE_BASE_CONFIG}")
base = json.loads(OPENCODE_BASE_CONFIG.read_text())
overlay = {
"permission": {
"edit": "ask",
"bash": {"*": "ask"},
"webfetch": "allow",
"skill": {"*": "allow"},
},
"compaction": {"auto": True, "prune": True},
"snapshot": True,
}
merged = _deep_merge(base, overlay)
opencode_json.write_text(json.dumps(merged, indent=2) + "\n")
log(f"Generated: {opencode_json}")
def _build_agents_md(team: dict, harness: str) -> str:
"""Concatenate rule files for a harness, matching bash's `echo ""; cat` pattern.
Bash did: echo header, then for each applicable rule, echo blank + cat file.
`cat` preserves the file's own trailing whitespace, so trailing blank lines
in a rule file become visible separators in the output. We replicate that
by reading file contents verbatim rather than stripping.
"""
out = "# Agent Team Instructions\n\nAgent-team specific protocols live in skills (orchestrate, conventions, worker-protocol, qa-checklist, message-schema).\n"
for rule_id in team["rules"]["order"]:
rule = team["rules"]["items"][rule_id]
if harness not in rule.get("applies_to", []):
continue
out += "\n" + (SCRIPT_DIR / rule["source_file"]).read_text()
return out
def _deep_merge(a: dict, b: dict) -> dict:
"""Deep-merge b into a, producing a new dict. Matches `jq -s '.[0] * .[1]'`."""
out = dict(a)
for k, v in b.items():
if isinstance(v, dict) and isinstance(out.get(k), dict):
out[k] = _deep_merge(out[k], v)
else:
out[k] = v
return out
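The merge semantics mirror jq's `*` operator: nested maps merge recursively, everything else in the overlay wins. A standalone check of the same logic on hypothetical data:

```python
def deep_merge(a: dict, b: dict) -> dict:
    # Nested dicts merge recursively; any other overlay value replaces.
    out = dict(a)
    for k, v in b.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out

base = {"permission": {"edit": "deny", "bash": {"*": "deny"}}, "snapshot": False}
overlay = {"permission": {"edit": "ask"}, "snapshot": True}
merged = deep_merge(base, overlay)
assert merged == {"permission": {"edit": "ask", "bash": {"*": "deny"}}, "snapshot": True}
assert base == {"permission": {"edit": "deny", "bash": {"*": "deny"}}, "snapshot": False}
```

Note the second assertion: the merge returns a new dict at each level it rewrites, so the base config read from disk is not mutated.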
def _dump_opencode_frontmatter(fm: dict[str, Any]) -> str:
"""Opencode accepts YAML 1.2; use pyyaml with block style for nested maps."""
# Use yaml.dump for the nested permission structure; top-level scalars we
# want unquoted for parity with the current bash output where possible.
out: list[str] = []
for key, value in fm.items():
if isinstance(value, dict):
out.append(f"{key}:")
for k, v in value.items():
if isinstance(v, dict):
out.append(f" {k}:")
for k2, v2 in v.items():
out.append(f' "{k2}": {v2}')
else:
out.append(f" {k}: {v}")
elif isinstance(value, str):
# Description uses single quotes for parity; other strings unquoted.
if key == "description":
out.append(f"{key}: {_yaml_single_quoted(value)}")
else:
out.append(f"{key}: {value}")
elif isinstance(value, bool):
out.append(f"{key}: {'true' if value else 'false'}")
else:
out.append(f"{key}: {value}")
return "\n".join(out) + "\n"
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main() -> int:
team = yaml.safe_load(TEAM_YAML.read_text())
settings = yaml.safe_load(SETTINGS_SHARED_YAML.read_text())
log(f"Using shared config: {SETTINGS_SHARED_YAML}")
validate_protocol_files(team, settings)
generate_legacy_settings_json(settings)
log(f"Generated compatibility artifact: {SETTINGS_JSON}")
generate_claude(team)
generate_codex(team, settings)
generate_opencode(team)
log("")
log("Done.")
return 0
if __name__ == "__main__":
sys.exit(main())

@@ -1,950 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# generate.sh — generates both Claude and Codex output directories from
# shared agent source files plus a vendor-neutral runtime config.
# Agent source files (agents/*.md) are the single source of truth; this
# script derives tool-specific equivalents.
#
# Template variables in agent bodies are expanded per-target:
# ${PLANS_DIR} — where plans live (.claude/plans vs plans)
# ${WEB_SEARCH} — how web search is referenced
# ${SEARCH_TOOLS} — how codebase search tools are referenced
#
# Idempotent: safe to run multiple times.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
AGENTS_SRC="$SCRIPT_DIR/agents"
RULES_DIR="$SCRIPT_DIR/rules"
CLAUDE_MD="$SCRIPT_DIR/CLAUDE.md"
SETTINGS_SHARED_YAML="$SCRIPT_DIR/SETTINGS.yaml"
TEAM_YAML="$SCRIPT_DIR/TEAM.yaml"
SETTINGS_JSON="$SCRIPT_DIR/settings.json"
CLAUDE_DIR="$SCRIPT_DIR/claude"
CLAUDE_AGENTS_DIR="$CLAUDE_DIR/agents"
CODEX_DIR="$SCRIPT_DIR/codex"
CODEX_AGENTS_DIR="$CODEX_DIR/agents"
OPENCODE_DIR="$SCRIPT_DIR/opencode"
OPENCODE_AGENTS_DIR="$OPENCODE_DIR/agents"
OPENCODE_BASE_CONFIG="$OPENCODE_DIR/config.json"
# ---------------------------------------------------------------------------
# Template variable values per target (KEY=VALUE pairs)
# ---------------------------------------------------------------------------
CLAUDE_VARS=(
"PLANS_DIR=.claude/plans"
"WEB_SEARCH=via WebFetch/WebSearch"
"SEARCH_TOOLS=Use Grep/Glob/Read"
)
CODEX_VARS=(
"PLANS_DIR=plans"
"WEB_SEARCH=via web search"
"SEARCH_TOOLS=Search the codebase"
)
OPENCODE_VARS=(
"PLANS_DIR=plans"
"WEB_SEARCH=via web search"
"SEARCH_TOOLS=Search the codebase"
)
# ---------------------------------------------------------------------------
# extract_body — extracts everything after the second --- (YAML frontmatter)
# ---------------------------------------------------------------------------
extract_body() {
local file="$1"
awk 'BEGIN{fm=0} /^---$/{if(fm==0){fm=1;next} if(fm==1){fm=2;next}} fm==2{print}' "$file"
}
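In the Python port this awk state machine was replaced by a whitespace-preserving fence split; a minimal sketch of that approach (assuming the file opens with a `---` fence, as all agent sources here do):

```python
def extract_body(text: str) -> str:
    # Split on the first two frontmatter fences only; everything after the
    # second "---" line is returned verbatim, including leading blank lines
    # and any markdown "---" horizontal rules inside the body.
    lines = text.split("\n")
    fences = [i for i, line in enumerate(lines) if line == "---"][:2]
    if len(fences) < 2:
        return text
    return "\n".join(lines[fences[1] + 1:])

doc = "---\nname: reviewer\n---\n\nBody line.\n\n---\n\nMore body.\n"
assert extract_body(doc) == "\nBody line.\n\n---\n\nMore body.\n"
```

This is the behavior the commit message calls out: `python-frontmatter` strips the leading blank line after the closing fence, which would break byte-level parity with the bash output.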
# ---------------------------------------------------------------------------
# expand_body — runs envsubst on body text, substituting only our 3 variables
# $1 = body text
# $2.. = KEY=VALUE pairs to export
# ---------------------------------------------------------------------------
expand_body() {
local body="$1"
shift
# Export only the specified variables
for pair in "$@"; do
export "${pair%%=*}=${pair#*=}"
done
echo "$body" | envsubst '${PLANS_DIR} ${WEB_SEARCH} ${SEARCH_TOOLS}'
# Clean up exported variables
for pair in "$@"; do
unset "${pair%%=*}"
done
}
# ---------------------------------------------------------------------------
# yaml_escape_single_quoted — escapes text for YAML single-quoted scalars
# ---------------------------------------------------------------------------
yaml_escape_single_quoted() {
printf '%s' "$1" | sed "s/'/''/g"
}
# ---------------------------------------------------------------------------
# csv_from_yaml_array — joins YAML array values from stdin with ", "
# ---------------------------------------------------------------------------
csv_from_yaml_array() {
local first=1
local item
while IFS= read -r item; do
[ -n "$item" ] || continue
if [ "$first" -eq 0 ]; then
printf ', '
fi
printf '%s' "$item"
first=0
done
}
# ---------------------------------------------------------------------------
# validate_team_protocol — validates TEAM protocol fields and referenced files
# ---------------------------------------------------------------------------
validate_team_protocol() {
[ -f "$TEAM_YAML" ] || {
echo "Error: missing $TEAM_YAML"
exit 1
}
yq -e '.version == 1' "$TEAM_YAML" > /dev/null
yq -e '.agents.order and .agents.items and .skills.order and .skills.items and .rules.order and .rules.items' "$TEAM_YAML" > /dev/null
local section id ids_in_order
for section in agents skills rules; do
while IFS= read -r id; do
[ -n "$id" ] || continue
yq -e ".${section}.items.${id}" "$TEAM_YAML" > /dev/null
[ "$(yq -r ".${section}.items.${id}.id" "$TEAM_YAML")" = "$id" ] || {
echo "Error: TEAM ${section} item '${id}' has mismatched id field"
exit 1
}
done < <(yq -r ".${section}.order[]" "$TEAM_YAML")
ids_in_order="$(yq -r ".${section}.order[]" "$TEAM_YAML")"
while IFS= read -r id; do
[ -n "$id" ] || continue
printf '%s\n' "$ids_in_order" | grep -qx "$id" || {
echo "Error: TEAM ${section} item '${id}' missing from order list"
exit 1
}
done < <(yq -r ".${section}.items | keys | .[]" "$TEAM_YAML")
done
while IFS= read -r id; do
[ -n "$id" ] || continue
local path
path="$SCRIPT_DIR/$(yq -r ".agents.items.${id}.instruction_file" "$TEAM_YAML")"
[ -f "$path" ] || {
echo "Error: missing agent instruction file for '${id}': $path"
exit 1
}
done < <(yq -r '.agents.order[]' "$TEAM_YAML")
while IFS= read -r id; do
[ -n "$id" ] || continue
local path
path="$SCRIPT_DIR/$(yq -r ".skills.items.${id}.instruction_file" "$TEAM_YAML")"
[ -f "$path" ] || {
echo "Error: missing skill instruction file for '${id}': $path"
exit 1
}
done < <(yq -r '.skills.order[]' "$TEAM_YAML")
while IFS= read -r id; do
[ -n "$id" ] || continue
local path
path="$SCRIPT_DIR/$(yq -r ".rules.items.${id}.source_file" "$TEAM_YAML")"
[ -f "$path" ] || {
echo "Error: missing rule source file for '${id}': $path"
exit 1
}
done < <(yq -r '.rules.order[]' "$TEAM_YAML")
}
# ---------------------------------------------------------------------------
# validate_shared_settings — validates the shared protocol fields we rely on
# ---------------------------------------------------------------------------
validate_shared_settings() {
[ -f "$SETTINGS_SHARED_YAML" ] || {
echo "Error: missing $SETTINGS_SHARED_YAML"
exit 1
}
yq -e '.version == 1' "$SETTINGS_SHARED_YAML" > /dev/null
yq -e '.model.class == "fast" or .model.class == "balanced" or .model.class == "powerful"' "$SETTINGS_SHARED_YAML" > /dev/null
yq -e '.model.reasoning == "low" or .model.reasoning == "medium" or .model.reasoning == "high" or .model.reasoning == "max"' "$SETTINGS_SHARED_YAML" > /dev/null
yq -e '.runtime.filesystem == "read-only" or .runtime.filesystem == "workspace-write"' "$SETTINGS_SHARED_YAML" > /dev/null
yq -e '.runtime.approval == "manual" or .runtime.approval == "guarded-auto" or .runtime.approval == "full-auto"' "$SETTINGS_SHARED_YAML" > /dev/null
yq -e '(.runtime.network_access | type) == "!!bool"' "$SETTINGS_SHARED_YAML" > /dev/null
yq -e '
(.runtime.tools // []) as $tools |
(
$tools |
map(
select(
. == "shell" or
. == "read" or
. == "edit" or
. == "write" or
. == "glob" or
. == "grep" or
. == "web_fetch" or
. == "web_search"
)
) |
length
) == ($tools | length)
' "$SETTINGS_SHARED_YAML" > /dev/null
}
# ---------------------------------------------------------------------------
# map_model_class_to_claude — maps shared model.class to Claude model value
# ---------------------------------------------------------------------------
map_model_class_to_claude() {
local model_class="$1"
case "$model_class" in
fast) echo "haiku" ;;
powerful) echo "opus" ;;
balanced) echo "sonnet" ;;
*) echo "sonnet" ;;
esac
}
# ---------------------------------------------------------------------------
# map_approval_intent_to_codex_policy — shared approval intent to Codex value
# ---------------------------------------------------------------------------
map_approval_intent_to_codex_policy() {
local approval_intent="$1"
case "$approval_intent" in
manual) echo "on-request" ;;
full-auto) echo "never" ;;
guarded-auto) echo "untrusted" ;;
*) echo "untrusted" ;;
esac
}
# ---------------------------------------------------------------------------
# map_filesystem_intent_to_claude_mode — shared filesystem to Claude mode
# ---------------------------------------------------------------------------
map_filesystem_intent_to_claude_mode() {
local filesystem="$1"
case "$filesystem" in
read-only) echo "plan" ;;
workspace-write) echo "acceptEdits" ;;
*) echo "acceptEdits" ;;
esac
}
# ---------------------------------------------------------------------------
# map_portable_tool_to_claude — shared runtime tool to Claude allow-list name
# ---------------------------------------------------------------------------
map_portable_tool_to_claude() {
local tool="$1"
case "$tool" in
shell) echo "Bash" ;;
read) echo "Read" ;;
edit) echo "Edit" ;;
write) echo "Write" ;;
glob) echo "Glob" ;;
grep) echo "Grep" ;;
web_fetch) echo "WebFetch" ;;
web_search) echo "WebSearch" ;;
*) echo "$tool" ;;
esac
}
# ---------------------------------------------------------------------------
# map_model_to_opencode — all models map to the single local model
# ---------------------------------------------------------------------------
map_model_to_opencode() {
echo "llama.cpp/qwen3-coder:a3b"
}
# ---------------------------------------------------------------------------
# map_effort_to_temperature — maps effort to temperature float
# ---------------------------------------------------------------------------
map_effort_to_temperature() {
local effort="$1"
case "$effort" in
max) echo "0.1" ;;
high) echo "0.2" ;;
medium) echo "0.3" ;;
low) echo "0.5" ;;
*) echo "0.3" ;;
esac
}
# ---------------------------------------------------------------------------
# map_permission_mode_to_opencode_mode — maps permission mode to agent mode
# ---------------------------------------------------------------------------
map_permission_mode_to_opencode_mode() {
local permission_mode="$1"
case "$permission_mode" in
plan) echo "subagent" ;;
*) echo "primary" ;;
esac
}
# ---------------------------------------------------------------------------
# generate_opencode_permission_block — emits YAML permission block for agent
# $1 = tools (comma-separated Claude tool names)
# $2 = disallowed_tools (comma-separated Claude tool names)
# $3 = permission_mode (plan/acceptEdits/"")
# ---------------------------------------------------------------------------
generate_opencode_permission_block() {
local tools="$1"
local disallowed_tools="$2"
local permission_mode="$3"
local edit_perm="deny"
local bash_perm="deny"
local webfetch_perm="deny"
if [ "$permission_mode" = "plan" ]; then
# Plan-mode agents: read-only, no edits, no bash
edit_perm="deny"
bash_perm="deny"
# Researchers/reviewers still need web access
if echo "$tools" | grep -qE '\bWebFetch\b|\bWebSearch\b'; then
webfetch_perm="allow"
fi
else
# Check edit permission
if echo "$tools" | grep -qE '\bWrite\b|\bEdit\b'; then
edit_perm="allow"
fi
if echo "$disallowed_tools" | grep -qE '\bWrite\b|\bEdit\b'; then
edit_perm="deny"
fi
# Check bash permission
if echo "$tools" | grep -q '\bBash\b'; then
bash_perm="ask"
fi
if echo "$disallowed_tools" | grep -q '\bBash\b'; then
bash_perm="deny"
fi
# Check web permission
if echo "$tools" | grep -qE '\bWebFetch\b|\bWebSearch\b'; then
webfetch_perm="allow"
fi
fi
echo "permission:"
echo " edit: ${edit_perm}"
if [ "$bash_perm" = "ask" ]; then
echo " bash:"
echo " \"*\": ask"
echo " \"git status\": allow"
echo " \"git diff *\": allow"
echo " \"git log *\": allow"
elif [ "$bash_perm" = "deny" ]; then
echo " bash:"
echo " \"*\": deny"
fi
echo " webfetch: ${webfetch_perm}"
}
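The ported Python equivalent of this derivation is easier to read as a set-based sketch (tool names follow the Claude conventions used above; the function name is hypothetical):

```python
def opencode_permissions(tools: set[str], disallowed: set[str], plan_mode: bool) -> dict:
    # Plan-mode agents are read-only: no edits, no bash, but web access
    # survives if the agent lists a web tool.
    web = "allow" if tools & {"WebFetch", "WebSearch"} else "deny"
    if plan_mode:
        return {"edit": "deny", "bash": "deny", "webfetch": web}
    edit = "allow" if tools & {"Write", "Edit"} else "deny"
    if disallowed & {"Write", "Edit"}:
        edit = "deny"  # an explicit disallow wins over the tool list
    bash = "ask" if "Bash" in tools else "deny"
    if "Bash" in disallowed:
        bash = "deny"
    return {"edit": edit, "bash": bash, "webfetch": web}

assert opencode_permissions({"Read", "WebSearch"}, set(), True) == {
    "edit": "deny", "bash": "deny", "webfetch": "allow"
}
assert opencode_permissions({"Write", "Bash"}, {"Bash"}, False) == {
    "edit": "allow", "bash": "deny", "webfetch": "deny"
}
```

Set membership replaces the `grep -qE '\bWrite\b|\bEdit\b'` word-boundary matching, which is exactly the class of shell-quoting subtlety the port was meant to remove.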
# ---------------------------------------------------------------------------
# json_escape — escapes a string for JSON string literal output
# ---------------------------------------------------------------------------
json_escape() {
printf '%s' "$1" | sed 's/\\/\\\\/g; s/"/\\"/g'
}
# ---------------------------------------------------------------------------
# json_array_from_lines — renders stdin as a compact JSON string array
# ---------------------------------------------------------------------------
json_array_from_lines() {
local first=1
local item
printf '['
while IFS= read -r item; do
[ -n "$item" ] || continue
if [ "$first" -eq 0 ]; then
printf ', '
fi
printf '"%s"' "$(json_escape "$item")"
first=0
done
printf ']'
}
# ---------------------------------------------------------------------------
# generate_legacy_settings_json — emits Claude-compatible settings.json
# from SETTINGS.yaml so downstream generation stays backward-compatible
# ---------------------------------------------------------------------------
generate_legacy_settings_json() {
local model_class model_reasoning runtime_filesystem runtime_approval
local claude_model claude_default_mode codex_approval_policy codex_network_access
local allow_json deny_json ask_json claude_md_excludes_json
model_class="$(yq -r '.model.class' "$SETTINGS_SHARED_YAML")"
model_reasoning="$(yq -r '.model.reasoning' "$SETTINGS_SHARED_YAML")"
runtime_filesystem="$(yq -r '.runtime.filesystem' "$SETTINGS_SHARED_YAML")"
runtime_approval="$(yq -r '.runtime.approval' "$SETTINGS_SHARED_YAML")"
claude_model="$(map_model_class_to_claude "$model_class")"
claude_default_mode="$(map_filesystem_intent_to_claude_mode "$runtime_filesystem")"
codex_approval_policy="$(yq -r '.targets.codex.approval_policy // ""' "$SETTINGS_SHARED_YAML")"
codex_network_access="$(yq -r '.targets.codex.network_access // .runtime.network_access // false' "$SETTINGS_SHARED_YAML")"
if [ -z "$codex_approval_policy" ] || [ "$codex_approval_policy" = "null" ]; then
codex_approval_policy="$(map_approval_intent_to_codex_policy "$runtime_approval")"
fi
allow_json="$(
yq -r '.runtime.tools[]' "$SETTINGS_SHARED_YAML" \
| while IFS= read -r tool; do
map_portable_tool_to_claude "$tool"
done \
| json_array_from_lines
)"
deny_json="$(
{
yq -r '.safety.protected_paths[]' "$SETTINGS_SHARED_YAML" | while IFS= read -r path; do
printf 'Read(%s)\n' "$path"
printf 'Write(%s)\n' "$path"
printf 'Edit(%s)\n' "$path"
done
} | json_array_from_lines
)"
ask_json="$(
yq -r '.safety.dangerous_shell_commands.ask[]' "$SETTINGS_SHARED_YAML" \
| while IFS= read -r cmd; do
printf 'Bash(%s)\n' "$cmd"
done \
| json_array_from_lines
)"
claude_md_excludes_json="$(
yq -r '(.targets.claude.claude_md_excludes // [".claude/agent-memory/**"])[]' "$SETTINGS_SHARED_YAML" \
| json_array_from_lines
)"
cat > "$SETTINGS_JSON" <<JSON
{
"\$schema": "https://json.schemastore.org/claude-code-settings.json",
"attribution": {
"commit": "",
"pr": ""
},
"permissions": {
"allow": ${allow_json},
"deny": ${deny_json},
"ask": ${ask_json},
"defaultMode": "${claude_default_mode}"
},
"model": "${claude_model}",
"effortLevel": "${model_reasoning}",
"codex": {
"approvalPolicy": "${codex_approval_policy}",
"networkAccess": ${codex_network_access}
},
"claudeMdExcludes": ${claude_md_excludes_json}
}
JSON
}
# ---------------------------------------------------------------------------
# prepare_settings_json — ensures the Claude-compatible settings.json
# artifact exists from the shared runtime config
# ---------------------------------------------------------------------------
prepare_settings_json() {
echo "Using shared config: $SETTINGS_SHARED_YAML"
validate_shared_settings
validate_team_protocol
generate_legacy_settings_json
echo "Generated compatibility artifact: $SETTINGS_JSON"
}
# ---------------------------------------------------------------------------
# map_model — maps Claude model name to Codex model name
# ---------------------------------------------------------------------------
map_model() {
local model="$1"
case "$model" in
opus) echo "gpt-5.4" ;;
sonnet) echo "gpt-5.3-codex" ;;
haiku) echo "gpt-5.1-codex-mini" ;;
*) echo "gpt-5.3-codex" ;;
esac
}
# ---------------------------------------------------------------------------
# map_effort — maps Claude effort level to Codex model_reasoning_effort
# ---------------------------------------------------------------------------
map_effort() {
local effort="$1"
case "$effort" in
low) echo "low" ;;
medium) echo "medium" ;;
high) echo "high" ;;
max) echo "xhigh" ;;
*) echo "medium" ;;
esac
}
# ---------------------------------------------------------------------------
# map_sandbox_mode — determines Codex sandbox_mode from agent metadata
# $1 = permissionMode value (plan / acceptEdits / "")
# $2 = tools list (comma-separated)
# ---------------------------------------------------------------------------
map_sandbox_mode() {
local permission_mode="$1"
local tools="$2"
local override="${3:-}"
if [ -n "$override" ] && [ "$override" != "null" ]; then
echo "$override"
return
fi
# plan mode is read-only
if [ "$permission_mode" = "plan" ]; then
echo "read-only"
return
fi
# acceptEdits with Write or Edit tool → workspace-write
if [ "$permission_mode" = "acceptEdits" ]; then
if echo "$tools" | grep -qE '\b(Write|Edit)\b'; then
echo "workspace-write"
return
fi
fi
# Default: read-only
echo "read-only"
}
# ---------------------------------------------------------------------------
# map_default_sandbox_mode — determines Codex sandbox_mode from shared config
# $1 = Claude permissions.defaultMode value
# ---------------------------------------------------------------------------
map_default_sandbox_mode() {
local default_mode="$1"
local override="${2:-}"
if [ -n "$override" ] && [ "$override" != "null" ]; then
echo "$override"
return
fi
case "$default_mode" in
plan) echo "read-only" ;;
acceptEdits) echo "workspace-write" ;;
*) echo "workspace-write" ;;
esac
}
# ---------------------------------------------------------------------------
# map_approval_policy — determines Codex approval_policy from shared config
# $1 = runtime.approval value (manual / guarded-auto / full-auto)
# $2 = optional Codex approval override from shared config
# ---------------------------------------------------------------------------
map_approval_policy() {
local runtime_approval="$1"
local override="$2"
if [ -n "$override" ] && [ "$override" != "null" ]; then
echo "$override"
return
fi
map_approval_intent_to_codex_policy "$runtime_approval"
}
# ---------------------------------------------------------------------------
# generate_claude — produces claude/ output directory
# ---------------------------------------------------------------------------
generate_claude() {
echo "=== Generating Claude output ==="
# Clean and recreate output directories
rm -rf "$CLAUDE_DIR"
mkdir -p "$CLAUDE_AGENTS_DIR"
# Copy CLAUDE.md
cp "$CLAUDE_MD" "$CLAUDE_DIR/CLAUDE.md"
echo "Copied: $CLAUDE_DIR/CLAUDE.md"
# Copy settings.json
cp "$SETTINGS_JSON" "$CLAUDE_DIR/settings.json"
echo "Copied: $CLAUDE_DIR/settings.json"
# Create relative symlinks for rules and skills
ln -s ../rules "$CLAUDE_DIR/rules"
echo "Symlinked: $CLAUDE_DIR/rules -> ../rules"
ln -s ../skills "$CLAUDE_DIR/skills"
echo "Symlinked: $CLAUDE_DIR/skills -> ../skills"
# Generate agent .md files from TEAM metadata + markdown instruction body
local agent_id
while IFS= read -r agent_id; do
[ -n "$agent_id" ] || continue
local name description model effort permission_mode
local src_file dst_file body expanded_body
local max_turns background memory isolation
local tools_csv disallowed_tools_csv
name="$(yq -r ".agents.items.${agent_id}.name" "$TEAM_YAML")"
description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")"
model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")"
effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")"
permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")"
tools_csv="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)"
disallowed_tools_csv="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)"
max_turns="$(yq -r ".agents.items.${agent_id}.max_turns // \"\"" "$TEAM_YAML")"
background="$(yq -r ".agents.items.${agent_id}.background // \"\"" "$TEAM_YAML")"
memory="$(yq -r ".agents.items.${agent_id}.memory // \"\"" "$TEAM_YAML")"
isolation="$(yq -r ".agents.items.${agent_id}.isolation // \"\"" "$TEAM_YAML")"
src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")"
dst_file="$CLAUDE_AGENTS_DIR/${name}.md"
body="$(extract_body "$src_file")"
expanded_body="$(expand_body "$body" "${CLAUDE_VARS[@]}")"
{
echo "---"
echo "name: '$(yaml_escape_single_quoted "$name")'"
echo "description: '$(yaml_escape_single_quoted "$description")'"
echo "model: '$(yaml_escape_single_quoted "$model")'"
if [ -n "$effort" ] && [ "$effort" != "null" ]; then
echo "effort: '$(yaml_escape_single_quoted "$effort")'"
fi
if [ -n "$permission_mode" ] && [ "$permission_mode" != "null" ]; then
echo "permissionMode: '$(yaml_escape_single_quoted "$permission_mode")'"
fi
echo "tools: '$(yaml_escape_single_quoted "$tools_csv")'"
if [ -n "$disallowed_tools_csv" ] && [ "$disallowed_tools_csv" != "null" ]; then
echo "disallowedTools: '$(yaml_escape_single_quoted "$disallowed_tools_csv")'"
fi
if [ "$background" = "true" ]; then
echo "background: true"
fi
if [ -n "$memory" ] && [ "$memory" != "null" ]; then
echo "memory: '$(yaml_escape_single_quoted "$memory")'"
fi
if [ -n "$isolation" ] && [ "$isolation" != "null" ]; then
echo "isolation: '$(yaml_escape_single_quoted "$isolation")'"
fi
if [ -n "$max_turns" ] && [ "$max_turns" != "null" ]; then
echo "maxTurns: $max_turns"
fi
echo "skills:"
yq -r ".agents.items.${agent_id}.skills[]" "$TEAM_YAML" | while IFS= read -r skill; do
echo " - $(yaml_escape_single_quoted "$skill")"
done
echo "---"
echo ""
echo "$expanded_body"
} > "$dst_file"
echo "Generated: $dst_file"
done < <(yq -r '.agents.order[]' "$TEAM_YAML")
}
# ---------------------------------------------------------------------------
# generate_codex — produces codex/ output directory
# ---------------------------------------------------------------------------
generate_codex() {
echo ""
echo "=== Generating Codex output ==="
# Clean and recreate output directories
rm -rf "$CODEX_DIR"
mkdir -p "$CODEX_AGENTS_DIR"
ln -s ../skills "$CODEX_DIR/skills"
echo "Symlinked: $CODEX_DIR/skills -> ../skills"
# Generate agent .toml files from TEAM metadata + markdown instruction body
echo "Generating Codex agent definitions..."
local agent_id
while IFS= read -r agent_id; do
[ -n "$agent_id" ] || continue
local name description model effort permission_mode tools disallowed_tools
local codex_sandbox_override
local agent_skills
local src_file dst_file
name="$(yq -r ".agents.items.${agent_id}.name" "$TEAM_YAML")"
description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")"
model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")"
effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")"
permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")"
tools="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)"
disallowed_tools="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)"
codex_sandbox_override="$(yq -r '.targets.codex.sandbox_mode // ""' "$SETTINGS_SHARED_YAML")"
agent_skills="$(yq -r ".agents.items.${agent_id}.skills[]" "$TEAM_YAML")"
src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")"
dst_file="$CODEX_AGENTS_DIR/${name}.toml"
# Map to Codex equivalents
local codex_model codex_effort codex_sandbox
codex_model="$(map_model "$model")"
codex_effort="$(map_effort "${effort:-medium}")"
codex_sandbox="$(map_sandbox_mode "$permission_mode" "$tools" "$codex_sandbox_override")"
# Extract and expand body with Codex variable values
local body expanded_body
body="$(extract_body "$src_file")"
expanded_body="$(expand_body "$body" "${CODEX_VARS[@]}")"
# Build developer_instructions: append disallowedTools note if present
local developer_instructions
developer_instructions="$expanded_body"
if [ -n "$disallowed_tools" ] && [ "$disallowed_tools" != "null" ]; then
developer_instructions="${developer_instructions}

You do NOT have access to these tools: ${disallowed_tools}"
fi
# TOML multiline basic strings use """ delimiters; reject raw delimiter
# sequences in instruction bodies so generated TOML remains parseable.
if printf '%s' "$developer_instructions" | grep -q '"""'; then
echo "Error: agent instruction contains raw triple quotes (\"\"\") which break TOML in $src_file" >&2
exit 1
fi
# Write TOML output
cat > "$dst_file" <<TOML
name = "${name}"
description = "${description}"
model = "${codex_model}"
model_reasoning_effort = "${codex_effort}"
sandbox_mode = "${codex_sandbox}"
TOML
cat >> "$dst_file" <<TOML
developer_instructions = """
${developer_instructions}
"""
TOML
local skill_id skill_applies enabled
while IFS= read -r skill_id; do
[ -n "$skill_id" ] || continue
skill_applies="$(yq -r ".skills.items.${skill_id}.applies_to[]" "$TEAM_YAML")"
if ! printf '%s\n' "$skill_applies" | grep -qx "codex"; then
continue
fi
enabled=false
if printf '%s\n' "$agent_skills" | grep -qx "$skill_id"; then
enabled=true
fi
cat >> "$dst_file" <<TOML
[[skills.config]]
path = "../skills/${skill_id}/SKILL.md"
enabled = ${enabled}
TOML
done < <(yq -r '.skills.order[]' "$TEAM_YAML")
echo "Generated: $dst_file"
done < <(yq -r '.agents.order[]' "$TEAM_YAML")
# Generate AGENTS.md — concatenate TEAM-ordered rules with tool-agnostic header
echo ""
echo "Generating codex/AGENTS.md..."
{
echo "# Agent Team Instructions"
echo ""
echo "Agent-team specific protocols live in skills (orchestrate, conventions, worker-protocol, qa-checklist, message-schema)."
local rule_id rules_file
while IFS= read -r rule_id; do
[ -n "$rule_id" ] || continue
yq -r ".rules.items.${rule_id}.applies_to[]" "$TEAM_YAML" | grep -qx "codex" || continue
rules_file="$SCRIPT_DIR/$(yq -r ".rules.items.${rule_id}.source_file" "$TEAM_YAML")"
echo ""
cat "$rules_file"
done < <(yq -r '.rules.order[]' "$TEAM_YAML")
} > "$CODEX_DIR/AGENTS.md"
echo "Generated: $CODEX_DIR/AGENTS.md"
# Generate config.toml — derive sandbox/approval defaults from shared config
echo ""
echo "Generating codex/config.toml..."
local default_mode runtime_approval codex_approval_override codex_network_access codex_sandbox_override
default_mode="$(map_filesystem_intent_to_claude_mode "$(yq -r '.runtime.filesystem' "$SETTINGS_SHARED_YAML")")"
runtime_approval="$(yq -r '.runtime.approval' "$SETTINGS_SHARED_YAML")"
codex_sandbox_override="$(yq -r '.targets.codex.sandbox_mode // ""' "$SETTINGS_SHARED_YAML")"
codex_approval_override="$(yq -r '.targets.codex.approval_policy // ""' "$SETTINGS_SHARED_YAML")"
codex_network_access="$(yq -r '.targets.codex.network_access // .runtime.network_access // false' "$SETTINGS_SHARED_YAML")"
local config_sandbox config_approval
config_sandbox="$(map_default_sandbox_mode "$default_mode" "$codex_sandbox_override")"
config_approval="$(map_approval_policy "$runtime_approval" "$codex_approval_override")"
if [ "$config_sandbox" = "workspace-write" ]; then
cat > "$CODEX_DIR/config.toml" <<TOML
#:schema https://developers.openai.com/codex/config-schema.json
model = "gpt-5.3-codex"
model_reasoning_effort = "medium"
sandbox_mode = "${config_sandbox}"
approval_policy = "${config_approval}"
[sandbox_workspace_write]
network_access = ${codex_network_access}
TOML
else
cat > "$CODEX_DIR/config.toml" <<TOML
#:schema https://developers.openai.com/codex/config-schema.json
model = "gpt-5.3-codex"
model_reasoning_effort = "medium"
sandbox_mode = "${config_sandbox}"
approval_policy = "${config_approval}"
TOML
fi
echo "Generated: $CODEX_DIR/config.toml"
}
# ---------------------------------------------------------------------------
# generate_opencode — produces opencode/ output directory
# ---------------------------------------------------------------------------
generate_opencode() {
echo ""
echo "=== Generating OpenCode output ==="
# Clean generated outputs only (preserve user-authored config.json)
rm -rf "$OPENCODE_AGENTS_DIR"
rm -f "$OPENCODE_DIR/AGENTS.md"
rm -f "$OPENCODE_DIR/opencode.json"
mkdir -p "$OPENCODE_AGENTS_DIR"
# Symlink skills
if [ -L "$OPENCODE_DIR/skills" ]; then
rm "$OPENCODE_DIR/skills"
fi
ln -s ../skills "$OPENCODE_DIR/skills"
echo "Symlinked: $OPENCODE_DIR/skills -> ../skills"
# Generate agent .md files with OpenCode frontmatter
local agent_id
while IFS= read -r agent_id; do
[ -n "$agent_id" ] || continue
local name description model effort permission_mode
local src_file dst_file body expanded_body
local max_turns tools_csv disallowed_tools_csv
local opencode_model opencode_temperature opencode_mode opencode_steps
name="$(yq -r ".agents.items.${agent_id}.name" "$TEAM_YAML")"
description="$(yq -r ".agents.items.${agent_id}.description" "$TEAM_YAML")"
model="$(yq -r ".agents.items.${agent_id}.model" "$TEAM_YAML")"
effort="$(yq -r ".agents.items.${agent_id}.effort // \"\"" "$TEAM_YAML")"
permission_mode="$(yq -r ".agents.items.${agent_id}.permission_mode // \"\"" "$TEAM_YAML")"
tools_csv="$(yq -r ".agents.items.${agent_id}.tools[]" "$TEAM_YAML" | csv_from_yaml_array)"
disallowed_tools_csv="$(yq -r ".agents.items.${agent_id}.disallowed_tools // [] | .[]" "$TEAM_YAML" | csv_from_yaml_array)"
max_turns="$(yq -r ".agents.items.${agent_id}.max_turns // \"\"" "$TEAM_YAML")"
src_file="$SCRIPT_DIR/$(yq -r ".agents.items.${agent_id}.instruction_file" "$TEAM_YAML")"
dst_file="$OPENCODE_AGENTS_DIR/${name}.md"
body="$(extract_body "$src_file")"
expanded_body="$(expand_body "$body" "${OPENCODE_VARS[@]}")"
# Map to OpenCode equivalents
opencode_model="$(map_model_to_opencode "$model")"
opencode_temperature="$(map_effort_to_temperature "${effort:-medium}")"
opencode_mode="$(map_permission_mode_to_opencode_mode "$permission_mode")"
opencode_steps="${max_turns:-25}"
{
echo "---"
echo "description: '$(yaml_escape_single_quoted "$description")'"
echo "mode: ${opencode_mode}"
echo "model: ${opencode_model}"
echo "temperature: ${opencode_temperature}"
echo "steps: ${opencode_steps}"
generate_opencode_permission_block "$tools_csv" "$disallowed_tools_csv" "$permission_mode"
echo "---"
echo ""
echo "$expanded_body"
} > "$dst_file"
echo "Generated: $dst_file"
done < <(yq -r '.agents.order[]' "$TEAM_YAML")
# Generate AGENTS.md — concatenate TEAM-ordered rules for opencode target
echo ""
echo "Generating opencode/AGENTS.md..."
{
echo "# Agent Team Instructions"
echo ""
echo "Agent-team specific protocols live in skills (orchestrate, conventions, worker-protocol, qa-checklist, message-schema)."
local rule_id rules_file
while IFS= read -r rule_id; do
[ -n "$rule_id" ] || continue
yq -r ".rules.items.${rule_id}.applies_to[]" "$TEAM_YAML" | grep -qx "opencode" || continue
rules_file="$SCRIPT_DIR/$(yq -r ".rules.items.${rule_id}.source_file" "$TEAM_YAML")"
echo ""
cat "$rules_file"
done < <(yq -r '.rules.order[]' "$TEAM_YAML")
} > "$OPENCODE_DIR/AGENTS.md"
echo "Generated: $OPENCODE_DIR/AGENTS.md"
# Generate merged opencode.json — base config + generated overlay
echo ""
echo "Generating opencode/opencode.json..."
if [ ! -f "$OPENCODE_BASE_CONFIG" ]; then
echo "Error: missing base config at $OPENCODE_BASE_CONFIG" >&2
exit 1
fi
# Build the generated overlay of global permission defaults (hardcoded here)
local overlay_json
overlay_json="$(cat <<'OVERLAY'
{
"permission": {
"edit": "ask",
"bash": {
"*": "ask"
},
"webfetch": "allow",
"skill": {
"*": "allow"
}
},
"compaction": {
"auto": true,
"prune": true
},
"snapshot": true
}
OVERLAY
)"
jq -s '.[0] * .[1]' "$OPENCODE_BASE_CONFIG" <(echo "$overlay_json") > "$OPENCODE_DIR/opencode.json"
echo "Generated: $OPENCODE_DIR/opencode.json"
}
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
prepare_settings_json
generate_claude
generate_codex
generate_opencode
echo ""
echo "Done."
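The commit message explains that the Python port hand-builds the Codex TOML because tomli-w cannot emit multiline basic strings. As a sketch of what that emission might look like in the port (the function name and field set are illustrative assumptions, not the actual generate.py):

```python
import json

def render_agent_toml(name, description, model, effort, sandbox, instructions):
    # Guard mirrors the bash grep: a raw """ inside the body would
    # terminate the multiline basic string early and corrupt the TOML.
    if '"""' in instructions:
        raise ValueError("instruction body contains raw triple quotes")
    # json.dumps doubles as a basic-string escaper for the single-line
    # fields: quotes and backslashes use the same escapes in TOML.
    return "\n".join([
        f"name = {json.dumps(name)}",
        f"description = {json.dumps(description)}",
        f"model = {json.dumps(model)}",
        f"model_reasoning_effort = {json.dumps(effort)}",
        f"sandbox_mode = {json.dumps(sandbox)}",
        'developer_instructions = """',
        instructions,
        '"""',
    ]) + "\n"
```

Writing the multiline body verbatim between bare `"""` delimiters is what preserves readability; a serializer that only knows single-line basic strings would escape every newline instead.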


@@ -31,13 +31,13 @@ echo "Target: $CLAUDE_DIR"
 echo ""
 
 # Pre-flight: build fresh generated outputs before proceeding.
-if [ ! -f "$SCRIPT_DIR/generate.sh" ]; then
-  echo "Error: generate.sh not found."
+if [ ! -f "$SCRIPT_DIR/generate.py" ]; then
+  echo "Error: generate.py not found."
   exit 1
 fi
 
 echo "Generating fresh artifacts before install..."
-bash "$SCRIPT_DIR/generate.sh"
+python "$SCRIPT_DIR/generate.py"
 
 # Ensure ~/.claude exists
 mkdir -p "$CLAUDE_DIR"
@@ -289,7 +289,7 @@ if [ -d "$SCRIPT_DIR/codex" ]; then
   if [ -d "$SCRIPT_DIR/codex/agents" ]; then
     create_symlink "$SCRIPT_DIR/codex/agents" "$CODEX_DIR/agents" "codex agents"
   else
-    echo "Run ./generate.sh first to generate Codex agent definitions"
+    echo "Run ./generate.py first to generate Codex agent definitions"
   fi
 
   # Generated AGENTS.md (symlink to project root for Codex discovery)
@@ -318,7 +318,7 @@ if [ -d "$SCRIPT_DIR/opencode" ]; then
   if [ -d "$SCRIPT_DIR/opencode/agents" ]; then
     create_symlink "$SCRIPT_DIR/opencode/agents" "$OPENCODE_CONFIG_DIR/agents" "opencode agents"
   else
-    echo "Run ./generate.sh first to generate OpenCode agent definitions"
+    echo "Run ./generate.py first to generate OpenCode agent definitions"
   fi
 
   # Generated AGENTS.md
# Generated AGENTS.md # Generated AGENTS.md


@@ -13,7 +13,7 @@
       "name": "Qwen3-Coder-30B-A3B-Instruct-Q6",
       "limit": {
         "context": 262144,
-        "output": 262144
+        "output": 8192
       },
       "cost": {
         "input": 0,


@@ -1 +0,0 @@
-../skills

opencode/skills/conventions Symbolic link

@@ -0,0 +1 @@
+../../skills/conventions


opencode/skills/message-schema Symbolic link
@@ -0,0 +1 @@
+../../skills/message-schema


opencode/skills/qa-checklist Symbolic link
@@ -0,0 +1 @@
+../../skills/qa-checklist


opencode/skills/worker-protocol Symbolic link
@@ -0,0 +1 @@
+../../skills/worker-protocol
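Per the commit message, the blanket `../skills` symlink above is replaced with one link per skill whose `applies_to` includes `opencode`. A sketch of how the Python generator might create these, assuming the TEAM.yaml shape seen in the bash version (function and variable names are illustrative):

```python
import os

def link_opencode_skills(team, opencode_skills_dir):
    # One symlink per opencode-applicable skill, replacing the old
    # blanket link to the whole skills/ pool.
    os.makedirs(opencode_skills_dir, exist_ok=True)
    for skill_id in team["skills"]["order"]:
        item = team["skills"]["items"][skill_id]
        if "opencode" not in item.get("applies_to", []):
            continue  # honor TEAM.yaml's per-target skill filtering
        dst = os.path.join(opencode_skills_dir, skill_id)
        if os.path.islink(dst):
            os.remove(dst)
        # Relative target matches the checked-in links: ../../skills/<id>
        os.symlink(os.path.join("..", "..", "skills", skill_id), dst)
```

Relative targets keep the links valid wherever the repo is checked out, which is why the checked-in symlinks above all point at `../../skills/<id>` rather than absolute paths.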


@@ -1,6 +0,0 @@
-# Responses & Explanations
-- Be concise — lead with the action or answer, not the preamble
-- Include just enough reasoning to explain *why* a decision was made, not a full walkthrough
-- Skip trailing summaries ("Here's what I did...") — the diff speaks for itself
-- No emojis unless explicitly asked


@@ -20,14 +20,3 @@
 - Commonly run development workflows MUST be wired into `just` recipes as the user-facing entrypoints
 - Temporary artifacts created during work MUST be cleaned up before completion unless the user explicitly asked to keep them
-
-# Parallelism
-- Always parallelize independent work — tool calls, file reads, searches
-- When a task has components that don't depend on each other, run them concurrently by default
-- Sequential execution is allowed only when required by dependencies or operational constraints (tool/runtime limits, contention, staged validation)
-
-# Context Management
-- Use subagents for exploratory reads and investigations to keep the main context clean
-- Use scoped file reads (offset/limit) over reading entire large files
-- When a task is complete or the topic shifts significantly, suggest clearing context or starting a new session


@@ -1,6 +1,5 @@
 # Verification
-- After making changes, run relevant tests or build commands to verify correctness before reporting success
 - If no tests exist for the changed code, say so rather than silently assuming it works
 - Run single targeted tests by default; run the full suite when requested or when targeted coverage is insufficient


@ -470,7 +470,6 @@
"uniqueItems": true, "uniqueItems": true,
"const": [ "const": [
"01-session", "01-session",
"02-responses",
"03-git", "03-git",
"04-tools", "04-tools",
"05-verification", "05-verification",
@ -482,7 +481,6 @@
"additionalProperties": false, "additionalProperties": false,
"required": [ "required": [
"01-session", "01-session",
"02-responses",
"03-git", "03-git",
"04-tools", "04-tools",
"05-verification", "05-verification",
@ -499,16 +497,6 @@
} }
] ]
}, },
"02-responses": {
"allOf": [
{ "$ref": "#/$defs/rule_item" },
{
"properties": {
"id": { "const": "02-responses" }
}
}
]
},
"03-git": { "03-git": {
"allOf": [ "allOf": [
{ "$ref": "#/$defs/rule_item" }, { "$ref": "#/$defs/rule_item" },


@@ -10,17 +10,19 @@ You are now acting as orchestrator. Decompose, delegate, validate, deliver. Neve
 ```
 You (orchestrator)
-├── grunt (haiku) — trivial, cheap implementer
-├── worker (sonnet) — standard implementer
-├── senior (opus) — ambiguous, architectural, or high-risk implementer
-├── debugger (sonnet) — bug diagnosis and minimal fixes
-├── documenter (sonnet) — documentation only, never touches source
-├── researcher (sonnet) — one per topic, parallel fact-finding
-├── architect (opus, effort: max) — triage, research coordination, architecture, wave decomposition
-├── reviewer (sonnet) — code quality + AC verification + claim checking
-└── auditor (sonnet, background) — security analysis + runtime validation
+├── grunt — trivial, cheap implementer
+├── worker — standard implementer
+├── senior — ambiguous, architectural, or high-risk implementer
+├── debugger — bug diagnosis and minimal fixes
+├── documenter — documentation only, never touches source
+├── researcher — one per topic, parallel fact-finding
+├── architect — triage, research coordination, architecture, wave decomposition
+├── reviewer — code quality + AC verification + claim checking
+└── auditor — security analysis + runtime validation
 ```
+
+Models and effort levels are pinned per-agent in each harness's config. Pick agents by role; the harness handles model selection.
 
 ---
 
 ## Task tiers
@@ -201,9 +203,7 @@ When multiple risk tags are present, take the union. Spawn all required reviewer
 
 ### Permission model
 
-Agent `permissionMode` in frontmatter is overridden when the parent (you, the orchestrator) runs in `acceptEdits` or `bypassPermissions` mode — the child inherits the parent's mode. This means `permissionMode: plan` on read-only agents like architect, researcher, and reviewer is **not enforced at runtime**.
-
-The actual write protection for read-only agents comes from `disallowedTools: Write, Edit` — this is enforced regardless of permission mode. Do not rely on `permissionMode` as a safety boundary; rely on tool restrictions.
+Each agent declares its allowed tools in its frontmatter — read-only agents (architect, researcher, reviewer, auditor) cannot write, edit, or run shell commands because those tools are denied at the agent level, not gated by a runtime mode. Trust the per-agent tool restrictions as the real safety boundary. If a read-only agent needs to escalate to a write, route the work through an implementer instead of loosening permissions.
 
 ### Parallelism mandate