# DevClaw — QA Workflow

Quality Assurance in DevClaw follows a structured workflow that ensures every review is documented and traceable.

## Required Steps

### 1. Review the Code

- Pull latest from the base branch
- Run tests and linting
- Verify changes address issue requirements
- Check for regressions in related functionality

### 2. Document Your Review (REQUIRED)

Before completing your task, you MUST create a review comment using `task_comment`:

```
task_comment({
  projectGroupId: "<group-id>",
  issueId: <issue-number>,
  body: "## QA Review\n\n**Tested:**\n- [List what you tested]\n\n**Results:**\n- [Pass/fail details]\n\n**Environment:**\n- [Test environment details]",
  authorRole: "qa"
})
```

### 3. Complete the Task

After posting your comment, call `work_finish`:

```
work_finish({
  role: "qa",
  projectGroupId: "<group-id>",
  result: "pass",  // or "fail", "refine", "blocked"
  summary: "Brief summary of review outcome"
})
```

## QA Results

| Result | Label transition | Meaning |
| --- | --- | --- |
| `"pass"` | Testing → Done | Approved. Issue closed. |
| `"fail"` | Testing → To Improve | Issues found. Issue reopened, sent back to DEV. |
| `"refine"` | Testing → Refining | Needs human decision. Pipeline pauses. |
| `"blocked"` | Testing → To Test | Cannot complete (env issues, etc.). Returns to QA queue. |

## Why Comments Are Required

1. **Audit Trail** — Every review decision is documented in the issue tracker
2. **Knowledge Sharing** — Future reviewers understand what was tested
3. **Quality Metrics** — Enables tracking of test coverage
4. **Debugging** — When issues arise later, we know what was checked
5. **Compliance** — Some projects require documented QA evidence

## Comment Templates

### For Passing Reviews

```
## QA Review

**Tested:**
- Feature A: [specific test cases]
- Feature B: [specific test cases]
- Edge cases: [list]

**Results:** All tests passed. No regressions found.

**Environment:**
- Browser/Platform: [details]
- Version: [details]
- Test data: [if relevant]

**Notes:** [Optional observations or recommendations]
```

### For Failing Reviews

```
## QA Review — Issues Found

**Tested:**
- [What you tested]

**Issues Found:**
1. [Issue description with steps to reproduce]
2. [Issue description with expected vs actual behavior]

**Environment:**
- [Test environment details]

**Severity:** [Critical/Major/Minor]
```

## Enforcement

QA workers receive instructions via role templates to:

- Always call `task_comment` BEFORE `work_finish`
- Include specific details about what was tested
- Document results, environment, and any notes

Prompt templates affected:

- `projects/roles/<project>/qa.md`
- All project-specific QA templates should follow this pattern
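These expectations are typically spelled out in the QA role prompt itself. A hypothetical excerpt (not the contents of any actual `qa.md`) might read:

```
Before calling work_finish you MUST post a review comment with task_comment.
The comment must state what you tested, the results, and the test environment,
following the "QA Review" templates in docs/QA_WORKFLOW.md.
```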

## Best Practices

1. **Be Specific** — Don't just say "tested the feature"; list exactly what you tested
2. **Include Environment** — Version numbers, browser, and OS can matter
3. **Document Edge Cases** — If you tested special scenarios, note them
4. **Reference Requirements** — Link back to acceptance criteria from the issue
5. **Use Screenshots** — For UI issues, screenshots help (link them in the comment)