chore: initial agent team setup

commit 49dec3df12 — Bryan Ramos, 2026-03-07 09:39:29 -05:00
10 changed files with 735 additions and 0 deletions

skills/conventions.md — new file, 79 lines
---
name: conventions
description: Core coding conventions and quality priorities for all projects.
---
## Quality priorities (in order)
1. **Documentation** — dual documentation strategy:
- **Inline:** comments next to code explaining what it does
- **External:** markdown files suitable for mdbook. Every module/component gets a corresponding `.md` doc covering purpose, usage, and design decisions.
- **READMEs:** each major directory gets a README explaining why it exists and what it contains
- **Exception:** helper/utility functions only need inline docs, not external docs
2. **Maintainability** — code is easy to read, modify, and debug. Favor clarity over cleverness.
3. **Reusability** — extract shared logic into well-defined interfaces. Don't duplicate. Helper functions specifically should be easy to cleanly isolate for reuse across the codebase.
4. **Modularity** — clean separation of duties and logic. Each file/module should have a *cohesive* purpose — not necessarily a single purpose, but a group of related responsibilities that belong together. Avoid both god files and excessive fragmentation.
## Naming
- Default to `snake_case` unless the language has a stronger convention (e.g., `camelCase` in JavaScript, `PascalCase` for C++ classes)
- Language-specific formats take precedence over personal preference
- Names should be descriptive — no abbreviations unless universally understood
- No magic numbers — extract to named constants
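The naming rules above can be sketched briefly in Python; the constant and helper names here are hypothetical, chosen only to illustrate the conventions:

```python
# A named constant replaces a bare literal scattered through the code.
MAX_RETRY_ATTEMPTS = 3

def fetch_with_retry(fetch, url):
    """Call `fetch(url)`, retrying up to MAX_RETRY_ATTEMPTS times on connection errors."""
    last_error = None
    for _attempt in range(MAX_RETRY_ATTEMPTS):  # descriptive snake_case names throughout
        try:
            return fetch(url)
        except ConnectionError as error:
            last_error = error
    raise last_error
```

Compare `fetch_with_retry` against a hypothetical `fwr(f, u)` with a bare `3` in the loop: the descriptive names and named constant make the retry policy obvious at the call site.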
## Commits
- Use conventional commit format: `type(scope): description`
- Types: `feat`, `fix`, `refactor`, `docs`, `test`, `chore`, `style`, `perf`
- Scope is optional but recommended (e.g., `feat(auth): add JWT middleware`)
- Description is imperative mood, lowercase, no period
- One logical change per commit — don't bundle unrelated changes
- Commit message body (optional) explains **why**, not what
## Error handling
- Return codes: `0` for success, non-zero for error
- Error messaging uses three verbosity tiers:
- **Default:** concise, user-facing message (what went wrong)
- **Verbose:** adds context (where it went wrong, what was expected)
- **Debug:** full diagnostic detail (stack traces, variable state, internal IDs)
- Propagate errors explicitly — don't silently swallow failures
- Match the project's existing error patterns before introducing new ones
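The three verbosity tiers can be sketched as a single formatting function; this is a minimal illustration, not a prescribed API, and the function and parameter names are hypothetical:

```python
def format_error(message, context="", diagnostics="", verbosity=0):
    """Build an error message at the given tier: 0 = default, 1 = verbose, 2 = debug."""
    parts = [f"error: {message}"]          # default: what went wrong
    if verbosity >= 1 and context:
        parts.append(f"context: {context}")  # verbose: where, and what was expected
    if verbosity >= 2 and diagnostics:
        parts.append(f"debug: {diagnostics}")  # debug: stack traces, state, internal IDs
    return "\n".join(parts)
```

The caller would print the result to stderr and exit non-zero, matching the return-code convention above.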
## Logging
- Follow the same verbosity tiers as error messaging (default/verbose/debug)
- Log at boundaries: entry/exit of major operations, external calls, state transitions
- Never log secrets, credentials, or sensitive user data
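A minimal sketch of boundary logging with the standard `logging` module — the operation and logger names are illustrative, and note that only counts are logged, never record contents:

```python
import logging

logger = logging.getLogger("sync")  # hypothetical module name

def sync_records(records):
    """Log at entry and exit of a major operation; per-record detail goes to the debug tier."""
    logger.info("sync started: %d records", len(records))   # entry boundary
    synced = 0
    for _record in records:
        synced += 1  # an external call would go here
        logger.debug("synced record %d/%d", synced, len(records))
    logger.info("sync finished: %d records", synced)        # exit boundary
    return synced
```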
## Testing
- New functionality gets tests. Bug fixes get regression tests.
- Tests should be independent — no shared mutable state between test cases
- Test the interface, not the implementation — tests shouldn't break on internal refactors
- Name tests to describe the behavior being verified, not the function being called
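For example, behavior-named tests for a hypothetical `slugify` helper — note the names describe the behavior being verified, not the function being called, and each test stands alone with no shared state:

```python
def slugify(title):
    """Hypothetical helper: lowercase a title and join words with hyphens."""
    return "-".join(title.lower().split())

def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_already_lowercase_input_is_unchanged():
    assert slugify("ready") == "ready"
```

A name like `test_slugify_1` would break this convention: it says nothing about which behavior failed when the test goes red.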
## Interface design
- Public APIs should be stable — think before exposing. Easy to extend, hard to break.
- Internal interfaces can evolve freely — don't over-engineer internal boundaries
- Validate at system boundaries (user input, external APIs, IPC). Trust internal code.
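The boundary-validation rule can be sketched as follows; the function names are hypothetical, and the point is that validation happens exactly once, at the edge:

```python
def handle_request(raw_limit):
    """System boundary: validate the external input, then hand trusted data inward."""
    try:
        limit = int(raw_limit)
    except (TypeError, ValueError):
        raise ValueError(f"limit must be an integer, got {raw_limit!r}")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    return fetch_page(limit)  # internal call: no re-validation needed

def fetch_page(limit):
    # Internal code trusts its caller; no defensive checks here.
    return list(range(limit))
```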
## Security
- Never trust external input — validate and sanitize at system boundaries
- No hardcoded secrets, credentials, or keys
- Prefer established libraries over hand-rolled crypto, auth, or parsing
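As one concrete instance of "never trust external input": a parameterized query with the standard-library `sqlite3` driver (the table and column names are illustrative). The driver escapes the value, so an injection attempt is treated as a literal string rather than SQL:

```python
import sqlite3

def find_user(conn, username):
    """Look up a user by name using a placeholder — never an f-string — for the value."""
    cursor = conn.execute(
        "SELECT id, name FROM users WHERE name = ?",  # `?` placeholder, driver-escaped
        (username,),
    )
    return cursor.fetchone()
```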
## File organization
- Directory hierarchy should make ownership and dependencies obvious
- Each major directory gets a README explaining its purpose
- If you can't tell what a directory contains from its path, reorganize
- Group related functionality cohesively — don't fragment for the sake of "single responsibility"
## General
- Clean separation of duties — no god files, no mixed concerns
- Read existing code before writing new code — match the project's patterns
- Minimize external dependencies — vendor what you use, track versions

skills/qa-checklist.md — new file, 47 lines
---
name: qa-checklist
description: Self-validation checklist. All workers run this against their own output before returning results.
---
## Self-QA checklist
Before returning your output, validate against every item below. If you find a violation, fix it — don't just note it.
### Factual accuracy
- Every file path, function name, class name, and line number you reference — does it actually exist? Verify with Read/Grep if uncertain. Never guess paths or signatures.
- Every version number, API endpoint, or external reference — is it correct? If you can't verify, say "unverified" explicitly.
- No invented specifics. If you don't know something, say so.
### Logic and correctness
- Do your conclusions follow from the evidence? Trace the reasoning.
- Are there internal contradictions in your output?
- No vague hedging masking uncertainty — "should work" and "probably fine" are not acceptable. Be precise about what you know and don't know.
### Scope and completeness
- Re-read the acceptance criteria. Check each one explicitly. Did you address all of them?
- Did you solve the right problem? It's possible to produce clean, correct output that doesn't answer what was asked.
- Are there required parts missing?
### Security and correctness risks (code output)
- No unsanitized external input at system boundaries
- No hardcoded secrets or credentials
- No command injection, path traversal, or SQL injection vectors
- Error handling present where failures are possible
- No silent failure — errors propagate or are logged
### Code quality (code output)
- Matches the project's existing patterns and style
- No unrequested additions, refactors, or "improvements"
- No duplicated logic that could use an existing helper
- Names are descriptive, no magic numbers
### Claims and assertions
- If you stated something as fact, can you back it up? Challenge your own claims.
- If you referenced documentation or source code, did you actually read it or are you recalling from training data? When it matters, verify.
## After validation
In your Self-Assessment section, include:
- `QA self-check: [pass/fail]` — did your output survive the checklist?
- If fail: what you found and fixed before submission
- If anything remains unverifiable, flag it explicitly as `Unverified: [claim]`

skills/worker-protocol.md — new file, 57 lines
---
name: worker-protocol
description: Standard output format, feedback handling, and operational procedures for all worker agents.
---
## Output format
Return using this structure. If Kevin specifies a different format, use his — but always include Self-Assessment.
```
## Result
[Your deliverable here]
## Files Changed
[List files modified/created, or "N/A" if not a code task]
## Self-Assessment
- Acceptance criteria met: [yes/no per criterion, one line each]
- Known limitations: [any, or "none"]
```
## Your job
Produce Kevin's assigned deliverable. Accurately. Completely. Nothing more.
- Exactly what was asked. No unrequested additions.
- When uncertain about a specific fact, verify. Otherwise trust context and training.
## Self-QA
Before returning your output, run the `qa-checklist` skill against your work. Fix any issues you find — don't just note them. Your Self-Assessment must include the `QA self-check: pass/fail` line. If you can't pass your own QA, flag what remains and why.
## Cost sensitivity
- Keep responses tight. Result only.
- Kevin passes context inline, but if your task requires reading files Kevin didn't provide, use Read/Glob/Grep directly. Don't guess at file contents — verify. Keep it targeted.
## Commits
Do not commit until Kevin sends `LGTM`. End your output with `RFR` to signal you're ready for review.
- `RFR` — you → Kevin: work complete, ready for review
- `LGTM` — Kevin → you: approved, commit now
- `REVISE` — Kevin → you: needs fixes (issues attached)
When you receive `LGTM`:
- Commit using conventional commit format per project conventions
- One commit per logical change
- Include only files relevant to your task
## Operational failures
If blocked (tool failure, missing file, build error): try to work around it and note the workaround. If truly blocked, report to Kevin with what failed and what you need. No unexplained partial work.
## Receiving Karen's feedback
Kevin resumes you with Karen's findings. You already have the task context and your previous work. Address the issues Kevin specifies. If Karen conflicts with Kevin's requirements, flag to Kevin — don't guess. Resubmit complete output in standard format. In Self-Assessment, note which issues you addressed.