Mirror of https://github.com/itme-brain/agent-team.git, synced 2026-05-08 13:50:12 -04:00
| name | description |
|---|---|
| qa-checklist | Self-validation checklist. All workers run this against their own output before returning results. |
# Self-QA checklist
Before returning your output, validate it against every item below. If you find a violation, fix it — do not just note it.
## Factual accuracy
- Every file path, function name, class name, and line number you reference — does it actually exist? Verify with Read/Grep if uncertain. Never guess paths or signatures.
- Every version number, API endpoint, or external reference — is it correct? If you can't verify, say "unverified" explicitly.
- No invented specifics. If you don't know something, say so.
## Logic and correctness
- Do your conclusions follow from the evidence? Trace the reasoning.
- Are there internal contradictions in your output?
- No vague hedging masking uncertainty — "should work" and "probably fine" are not acceptable. Be precise about what you know and don't know.
## Scope and completeness
- Re-read the acceptance criteria. Check each one explicitly. Did you address all of them?
- Did you solve the right problem? It's possible to produce clean, correct output that doesn't answer what was asked.
- Are there required parts missing?
## Security and correctness risks (code output)
- No unsanitized external input at system boundaries
- No hardcoded secrets or credentials
- No command injection, path traversal, or SQL injection vectors
- Error handling present where failures are possible
- No silent failure — errors propagate or are logged
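As an illustration of the injection items above, here is a minimal sketch using Python's standard-library `sqlite3` (the `users` table and `find_user` helper are hypothetical, not part of this project):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")


def find_user(name: str):
    # Parameterized query: the user-supplied value is bound by the
    # driver, never interpolated into the SQL string, so input like
    # "' OR '1'='1" is treated as a literal name, not as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The same principle applies to shell commands (pass argument lists, not interpolated strings) and file paths (resolve and validate before use).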
## Code quality (code output)
- Matches the project's existing patterns and style
- No unrequested additions, refactors, or "improvements"
- No duplicated logic that could use an existing helper
- Names are descriptive, no magic numbers
## Claims and assertions
- If you stated something as fact, can you back it up? Challenge your own claims.
- If you referenced documentation or source code, did you actually read it or are you recalling from training data? When it matters, verify.
## Schema compliance
- Does your output start with a valid YAML frontmatter envelope (`---` delimiters)?
- Does the `type` field match your message type?
- Does the `signal` field use a valid enum value from the message-schema skill?
- Are all required fields for your message type present?
- Are hard rules satisfied?
  - `review_verdict`: `critical_count > 0` requires `signal: fail`
  - `audit_verdict`: `security_findings.critical > 0`, `build_status: fail`, or `test_status: fail` requires `signal: fail`
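The hard rules above can be sketched as a small validation helper. This is a hypothetical illustration — the field names follow this checklist and the message-schema skill, but the function itself is not part of either:

```python
def required_signal(envelope: dict):
    """Return "fail" when a hard rule forces signal: fail, else None.

    Hypothetical helper illustrating the hard rules; envelope is the
    parsed YAML frontmatter as a dict.
    """
    msg_type = envelope.get("type")
    if msg_type == "review_verdict" and envelope.get("critical_count", 0) > 0:
        return "fail"
    if msg_type == "audit_verdict":
        findings = envelope.get("security_findings", {})
        if (
            findings.get("critical", 0) > 0
            or envelope.get("build_status") == "fail"
            or envelope.get("test_status") == "fail"
        ):
            return "fail"
    return None
```

A worker could run such a check just before emitting its envelope and downgrade `signal` accordingly.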
## After validation
Set `qa_check: pass` or `qa_check: fail` in your frontmatter envelope. This replaces the old QA self-check prose line.
In your Self-Assessment section, include:
- If qa_check is fail: what you found and fixed before submission
- If anything remains unverifiable, flag it explicitly as `Unverified: [claim]`
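Putting the pieces together, a completed envelope might look like the sketch below. The `type`, `signal`, `critical_count`, and `qa_check` fields come from this checklist; the exact layout is illustrative, and the authoritative field list lives in the message-schema skill:

```yaml
---
type: review_verdict
critical_count: 1
signal: fail        # forced by the hard rule: critical_count > 0
qa_check: pass      # set after running this checklist
---
```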