# Test Strategy

## Purpose

Choose and evaluate the right level of verification so changes are covered in proportion to their risk, behavior, and maintenance cost.

## When to use

- Adding or updating tests for a feature or bug fix
- Deciding what verification is necessary before shipping
- Auditing coverage gaps in existing code
- Designing regression protection for risky areas

## Inputs to gather

- The behavior being changed or protected
- Current test layout, tooling, and conventions
- Known failure modes, regressions, or edge cases
- Constraints on test speed, environment, and confidence needs

## How to work

- Match test level to risk: unit for logic, integration for boundaries, end-to-end for critical workflows.
- Prefer the smallest test that meaningfully protects the behavior.
- Cover success paths, likely failure paths, and regressions suggested by the change.
- Reuse existing fixtures and patterns where possible.
- If tests are not feasible, define alternate validation steps and explain the gap.

## Output expectations

- Specific recommended or implemented tests
- Clear rationale for test level and scope
- Any remaining uncovered risks or follow-up test ideas

## Quality checklist

- Tests target behavior, not implementation trivia.
- Coverage includes the change's highest-risk paths.
- Test design fits repository conventions and runtime cost expectations.
- Non-tested areas are called out explicitly when important.

## Handoff notes

- Note flaky, expensive, or environment-dependent checks separately from fast local confidence checks.
- Mention whether the test plan is implemented, recommended, or partially blocked.
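## Example

The "How to work" guidance above can be sketched with a minimal example. This is a hypothetical illustration, not a prescribed pattern: the function `parse_discount_code` and all test names are invented for the sketch. It shows the smallest tests that meaningfully protect a pure-logic behavior (unit level), covering the success path, a likely failure path, and a boundary case.

```python
# Hypothetical example: a small pure function and the smallest unit
# tests that meaningfully protect its behavior. All names here are
# illustrative, not from any particular codebase.

def parse_discount_code(code: str) -> int:
    """Return the percent discount encoded in a code like 'SAVE15'."""
    if not code.startswith("SAVE"):
        raise ValueError(f"unrecognized code: {code!r}")
    percent = int(code[len("SAVE"):])
    if not 0 < percent <= 50:
        raise ValueError(f"discount out of range: {percent}")
    return percent

# Unit tests target behavior (what the function promises), not
# implementation trivia. No integration harness is needed because
# the logic has no external boundary.

def test_valid_code():
    # Success path.
    assert parse_discount_code("SAVE15") == 15

def test_rejects_unknown_prefix():
    # Likely failure path: malformed input.
    try:
        parse_discount_code("TAKE15")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

def test_rejects_out_of_range():
    # Boundary case suggested by the 0 < percent <= 50 rule.
    try:
        parse_discount_code("SAVE95")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

If the same behavior crossed a boundary (say, a database lookup for valid codes), an integration test against that boundary would be the smallest meaningful protection instead.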