commit a682231f73
agents in
2026-03-27 22:34:12 -05:00
44 changed files with 2274 additions and 0 deletions

@@ -0,0 +1,45 @@
# API and Backend Work
## Purpose
Guide server-side, service, API, data, and integration changes with attention to contracts, compatibility, failure handling, and operational impact.
## When to use
- Modifying endpoints, handlers, services, jobs, or data flows
- Adding or changing schemas, persistence, or integration behavior
- Working on backend business logic or infrastructure-facing code
- Investigating performance, reliability, or contract issues on the server side
## Inputs to gather
- API contracts, schema, storage models, and service boundaries
- Existing validation, auth, error handling, and observability patterns
- Compatibility constraints for clients, data, and deployments
- Current tests and representative request or event flows
## How to work
- Trace the full request or job lifecycle before changing a boundary.
- Preserve compatibility intentionally or document the break clearly.
- Handle validation, authorization, error responses, and retries in line with existing system behavior.
- Consider migration, rollout, and operational visibility when data or contracts change.
- Add or update tests at the right layer for the change.
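As one minimal sketch of boundary validation with explicit, consistent failure handling (the `ApiError` shape and `create_user` handler are hypothetical illustrations, not from any particular framework):

```python
class ApiError(Exception):
    """Failure carrying an HTTP status and a stable machine-readable code."""
    def __init__(self, status: int, code: str, message: str):
        super().__init__(message)
        self.status = status
        self.code = code

def create_user(payload: dict) -> dict:
    """Validate at the service boundary before any persistence happens."""
    email = payload.get("email")
    if not isinstance(email, str) or "@" not in email:
        # Fail explicitly and consistently: a 400 with a stable error code,
        # never a bare 500 the client cannot act on.
        raise ApiError(400, "invalid_email", "a valid email address is required")
    return {"email": email.lower(), "status": "created"}
```

The stable `code` field is what makes the error part of the contract: clients can branch on it without parsing human-readable messages.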
## Output expectations
- Working backend change with contract implications made explicit
- Notes on schema, config, data, or rollout impact
- Verification results covering the critical paths
## Quality checklist
- Inputs and outputs are validated appropriately.
- Failure handling is explicit and consistent.
- Compatibility and migration impact are understood.
- Logging, metrics, or observability concerns are addressed when relevant.
## Handoff notes
- Call out any required coordination with frontend, data migration, configuration, or deployment steps.
- Note backwards-incompatible changes clearly and early.

@@ -0,0 +1,45 @@
# Architecture and System Design
## Purpose
Shape meaningful technical direction so systems stay understandable, evolvable, and aligned with product needs over time.
## When to use
- Designing a major feature or subsystem
- Changing service boundaries, module boundaries, or core data flow
- Evaluating multiple implementation approaches with long-term consequences
- Preparing work that will influence maintainability, scale, or team velocity
## Inputs to gather
- Current architecture, boundaries, and pain points
- Product goals, scale expectations, and reliability constraints
- Existing patterns, platform constraints, and team operating model
- Compatibility, migration, and rollout concerns
## How to work
- Start from the user or system outcome, then identify the simplest architecture that supports it well.
- Make tradeoffs explicit: complexity, performance, reliability, maintainability, and delivery speed.
- Preserve useful existing boundaries unless there is a clear reason to change them.
- Prefer designs that are easy to operate and easy for the team to understand.
- Document why the chosen path is better than the main alternatives.
## Output expectations
- Clear recommended design or architecture direction
- Explicit tradeoffs and constraints
- Interfaces, boundaries, and rollout considerations that matter for implementation
## Quality checklist
- The design solves the actual problem, not a hypothetical future one.
- Tradeoffs are named clearly enough to guide later decisions.
- Complexity is justified by concrete needs.
- Operational and migration consequences are not ignored.
## Handoff notes
- Pair with architecture decision records when the choice should be preserved for future contributors.
- Call out which parts are decided versus intentionally deferred.

@@ -0,0 +1,45 @@
# Code Review
## Purpose
Review code with a bug-finding mindset that prioritizes correctness, regressions, risky assumptions, edge cases, and missing tests over style commentary.
## When to use
- Reviewing a pull request or patch
- Auditing a risky change before merge
- Evaluating whether a change is safe to ship
- Checking for test and documentation gaps
## Inputs to gather
- The diff or changed files
- Nearby code paths and contracts affected by the change
- Existing tests, especially those intended to cover the modified behavior
- Context on expected behavior, rollout risk, and compatibility requirements
## How to work
- Start with correctness, then move to regressions, then test gaps, then maintainability risks.
- Trace changed code through call sites, error paths, and data flow rather than reading only the edited lines in isolation.
- Focus comments on issues that materially affect behavior, safety, or maintainability.
- Be explicit about severity and the concrete consequence of each issue.
- Keep the summary brief after listing the findings.
## Output expectations
- A prioritized list of findings with clear reasoning
- Open questions or assumptions that affect confidence
- Brief summary of overall risk after the findings
## Quality checklist
- Findings identify real behavior or verification risk, not cosmetic preferences.
- Severity is proportional to user impact and likelihood.
- Missing tests are called out where they reduce confidence materially.
- If no issues are found, residual risk and coverage gaps are still noted.
## Handoff notes
- Include file references and tight line references when available.
- Distinguish confirmed issues from lower-confidence concerns.

@@ -0,0 +1,45 @@
# Database Migrations and Data Evolution
## Purpose
Change schemas and data safely while protecting compatibility, correctness, rollout reliability, and recovery options.
## When to use
- Adding, removing, or changing database schema
- Backfilling or transforming data
- Introducing compatibility windows between old and new code
- Planning rollout for data-sensitive changes
## Inputs to gather
- Current schema, access patterns, and data volume
- Migration tooling and deployment model
- Compatibility requirements across services, jobs, or clients
- Rollback constraints and data recovery options
## How to work
- Prefer staged migrations when compatibility matters: expand, backfill, switch reads or writes, then contract.
- Minimize lock risk, data loss risk, and long-running migration risk.
- Consider how old and new code will coexist during rollout.
- Define verification steps for schema state and critical data correctness.
- Document irreversible steps and operator actions clearly.
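The expand, backfill, switch, contract sequence can be sketched with an in-memory SQLite database (table and column names here are hypothetical; real systems would batch the backfill and run each stage as a separate deploy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada Lovelace')")

# Expand: add the new column as nullable so old code keeps working unchanged.
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")

# Backfill: derive the new column from existing data (batched in real systems).
conn.execute(
    "UPDATE users SET last_name = substr(name, instr(name, ' ') + 1) "
    "WHERE last_name IS NULL AND instr(name, ' ') > 0"
)

# Switch: new code reads last_name. Only after rollout completes would a
# contract step drop the old column.
row = conn.execute("SELECT name, last_name FROM users").fetchone()
```

Because each stage is compatible with the code running on both sides of it, the rollout can pause or roll back between stages without data loss.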
## Output expectations
- Safe migration plan or implementation
- Compatibility and rollout notes
- Verification and rollback considerations
## Quality checklist
- The migration is safe for the repository's deployment model.
- Data correctness is protected during and after rollout.
- Backwards and forwards compatibility are considered when needed.
- Irreversible or risky steps are made explicit.
## Handoff notes
- Call out sequencing requirements across application code, migrations, and background jobs.
- Pair with release/change summary and technical docs when operators or teammates need a clear rollout path.

@@ -0,0 +1,45 @@
# Dependency Lifecycle Management
## Purpose
Keep dependencies healthy over time by balancing security, compatibility, maintainability, and upgrade cost.
## When to use
- Upgrading libraries, frameworks, runtimes, or tooling
- Auditing dependency risk or staleness
- Reducing upgrade backlog and ecosystem drift
- Planning how to adopt breaking changes safely
## Inputs to gather
- Current dependency versions and their role in the system
- Changelogs, upgrade guides, and breaking changes
- Existing test coverage and high-risk integration points
- Security, support-window, or maintenance concerns
## How to work
- Prefer focused upgrade batches that are easy to validate and revert.
- Separate mechanical version bumps from behavior-changing adaptation when possible.
- Read authoritative release notes before changing usage patterns.
- Verify the highest-risk integration paths, not just installation success.
- Capture follow-up work when a safe incremental upgrade leaves known deprecated patterns behind.
## Output expectations
- Upgrade plan or completed upgrade with adaptation notes
- Risk summary for changed dependencies
- Verification results and known remaining debt
## Quality checklist
- The upgrade reduces risk or maintenance burden meaningfully.
- Breaking changes are understood before implementation.
- Validation covers the most likely failure surfaces.
- Residual deprecations or postponed steps are documented clearly.
## Handoff notes
- Note whether the work is a full upgrade, a safe intermediate step, or a reconnaissance pass.
- Pair with test strategy and release/change summary when adoption affects developer workflow or runtime behavior.

@@ -0,0 +1,48 @@
# Feature Implementation
## Purpose
Guide implementation of new behavior or meaningful changes to existing behavior with a bias toward working software, repository alignment, and practical verification.
## When to use
- Building a new feature
- Expanding an existing workflow
- Making a multi-file change that affects user or developer behavior
- Turning a scoped request into implemented code
## Inputs to gather
- Relevant entrypoints, modules, and surrounding patterns
- Existing interfaces, types, schema, and tests
- User goal, success criteria, constraints, and impacted surfaces
- Any repository instructions that override generic defaults
## How to work
- Inspect the codebase before editing and identify the smallest coherent change set.
- Prefer existing patterns over introducing novel structure unless the current patterns are clearly limiting.
- Implement end-to-end behavior, not just partial scaffolding, when feasible.
- Keep logic changes close to the relevant module boundaries and avoid unrelated cleanup unless it materially helps the task.
- Validate with targeted tests, builds, or manual checks appropriate to the repository.
- Update docs, examples, or change notes when the feature alters usage or expectations.
## Output expectations
- A working implementation or a clearly explained blocker
- Concise summary of what changed and why
- Validation results and any gaps that remain
- Notes on follow-up work only when it is genuinely important
## Quality checklist
- The change matches the stated goal and avoids unrelated churn.
- Naming, structure, and style fit the existing codebase.
- Errors, edge cases, and obvious failure paths are handled.
- Verification is appropriate for the size and risk of the change.
- User-facing or developer-facing behavior changes are documented when needed.
## Handoff notes
- Mention touched subsystems and any assumptions made because the repo did not answer them.
- Call out migration or rollout concerns if the feature affects data, config, or compatibility.

@@ -0,0 +1,45 @@
# Frontend UI Implementation
## Purpose
Guide interface implementation that balances correctness, usability, clarity, performance, and consistency with the existing product experience.
## When to use
- Building or updating pages, components, and interactions
- Implementing client-side state or view logic
- Adjusting layout, form flows, states, and visual feedback
- Shipping UI changes tied to product behavior
## Inputs to gather
- Existing design system, component patterns, and styling conventions
- User flow, content requirements, and responsive constraints
- State, API, and error/loading behavior tied to the UI
- Current tests, stories, screenshots, or acceptance criteria if available
## How to work
- Preserve the established visual language unless the task explicitly calls for a new direction.
- Design for the full experience: loading, empty, error, success, and edge states.
- Keep interaction logic understandable and avoid overengineering small UI behavior.
- Use content, hierarchy, and spacing intentionally so the UI communicates clearly.
- Validate on the most important screen sizes or states that the repository can reasonably support.
## Output expectations
- A functional UI change that is coherent visually and behaviorally
- Clear notes on user-facing behavior and state handling
- Verification appropriate to the stack, such as tests, stories, or manual checks
## Quality checklist
- The UI is understandable without hidden assumptions.
- Important states are handled, not just the happy path.
- Visual and code patterns fit the existing app.
- Accessibility, responsiveness, and copy quality are considered.
## Handoff notes
- Mention any UX debts, unresolved visual questions, or browser/device gaps that remain.
- Pair with UX review or product copy when usability or wording is central to the task.

@@ -0,0 +1,45 @@
# Maintenance and Technical Debt Planning
## Purpose
Turn vague maintenance needs into a practical, sequenced plan that improves delivery speed, reliability, and future change safety over time.
## When to use
- The codebase has accumulated risky or slowing debt
- A team needs to prioritize cleanup against feature work
- Repeated friction suggests structural maintenance investment is overdue
- You need to explain why maintenance work matters in product terms
## Inputs to gather
- Known pain points, repeated failures, and slow areas in delivery
- Architectural hotspots, obsolete patterns, and fragile dependencies
- Team constraints, roadmap pressure, and acceptable disruption
- Evidence of cost: incidents, churn, slowed feature work, or support burden
## How to work
- Focus on debt that materially changes future delivery, reliability, or risk.
- Group issues into themes rather than a flat list of annoyances.
- Prioritize by impact, urgency, and dependency relationships.
- Prefer incremental sequences that can ship safely between feature work.
- Translate maintenance value into outcomes the team can defend.
## Output expectations
- Prioritized maintenance plan or backlog proposal
- Clear rationale for what should happen now versus later
- Sequencing guidance and expected payoff
## Quality checklist
- Recommendations are tied to real delivery or reliability pain.
- Prioritization is explicit and defensible.
- The plan is incremental enough to execute.
- Work is framed in terms of reduced risk or increased velocity, not vague cleanliness.
## Handoff notes
- Note what evidence would strengthen or change the prioritization.
- Pair with roadmap and opportunity prioritization when balancing debt against new initiatives.

@@ -0,0 +1,45 @@
# Observability and Operability
## Purpose
Make systems easier to understand, debug, and run by improving signals, diagnostics, and operational readiness around important behavior.
## When to use
- A system is hard to diagnose in production or staging
- New functionality needs useful logs, metrics, traces, or alerts
- Operational ownership is unclear during failures or rollout
- Reliability work needs better visibility before deeper changes
## Inputs to gather
- Critical workflows, failure modes, and current diagnostic signals
- Existing logging, metrics, tracing, dashboards, and alerts
- Operator needs during rollout, incident response, and debugging
- Noise constraints and performance or cost considerations
## How to work
- Instrument the questions a responder will need answered during failure.
- Prefer signals tied to user-impacting behavior over vanity metrics.
- Make logs structured and actionable when possible.
- Add observability close to important boundaries and state transitions.
- Keep signal quality high by avoiding low-value noise.
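A minimal sketch of a structured, actionable log line (the event name and field names are hypothetical examples, not a prescribed schema):

```python
import json
import time

def log_event(event: str, **fields) -> str:
    """Emit one structured, machine-parseable line per event."""
    record = {"ts": time.time(), "event": event, **fields}
    line = json.dumps(record, sort_keys=True)
    print(line)
    return line

# An actionable failure log answers the responder's questions directly:
# what failed, for which entity, why, and whether a retry is worthwhile.
line = log_event("payment.failed", order_id="ord_123",
                 reason="card_declined", retryable=True)
```

Structured fields let dashboards and alerts query on `reason` or `retryable` directly, instead of regex-matching free-form text.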
## Output expectations
- Improved observability or an operability plan for the target area
- Clear explanation of what new signals reveal
- Notes on alerting, dashboard, or rollout support when relevant
## Quality checklist
- Signals help detect and diagnose meaningful failures.
- Instrumentation is focused and not excessively noisy.
- Operational usage is considered, not just implementation convenience.
- Added visibility maps to critical user or system outcomes.
## Handoff notes
- Mention what incidents or debugging tasks the new observability should make easier.
- Pair with debugging workflow, incident response, or performance optimization when diagnosis is the main bottleneck.

@@ -0,0 +1,45 @@
# Performance Optimization
## Purpose
Improve responsiveness and efficiency by focusing on the bottlenecks that matter most to users, systems, or operating cost.
## When to use
- Investigating slow pages, endpoints, jobs, or queries
- Reducing memory, CPU, network, or rendering overhead
- Preventing regressions in critical paths
- Prioritizing optimization work with limited time
## Inputs to gather
- Performance symptoms, target metrics, and critical user or system paths
- Existing measurements, profiles, logs, traces, or benchmarks
- Current architecture and known hot spots
- Acceptable tradeoffs in complexity, cost, and feature scope
## How to work
- Measure or inspect evidence before optimizing.
- Focus on the dominant bottleneck rather than broad cleanup.
- Prefer changes that improve the critical path without making the system harder to maintain.
- Re-measure after changes when possible.
- Capture the conditions under which the optimization matters so future work does not cargo-cult it.
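The measure-first discipline can be sketched with a small harness (the two string-building variants are illustrative stand-ins for any candidate implementations; no winner is claimed, since results vary by runtime and input size):

```python
import timeit

def concat_naive(n: int) -> str:
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n: int) -> str:
    return "".join(str(i) for i in range(n))

# Measure both candidates with the same input and harness before claiming a
# win; keep the evidence, not just the conclusion.
naive_s = timeit.timeit(lambda: concat_naive(2000), number=100)
join_s = timeit.timeit(lambda: concat_join(2000), number=100)

# Re-verify correctness alongside the measurement: the faster variant only
# counts if it produces the same result.
assert concat_naive(50) == concat_join(50)
```

Recording the input size and iteration count alongside the numbers is what lets future readers know the conditions under which the optimization mattered.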
## Output expectations
- Bottleneck diagnosis and recommended or implemented improvement
- Before-and-after evidence when available
- Notes on tradeoffs, limits, and remaining hot spots
## Quality checklist
- Optimization targets a real bottleneck.
- Claimed gains are grounded in evidence, not assumption alone.
- Complexity added by the optimization is justified.
- Regression risk is considered for correctness and maintainability.
## Handoff notes
- Note whether the result is measured, estimated, or hypothesis-driven.
- Pair with observability and operability when instrumentation is weak.

@@ -0,0 +1,45 @@
# Refactoring
## Purpose
Improve code structure, readability, maintainability, or modularity without intentionally changing externally observable behavior.
## When to use
- Simplifying complex logic
- Extracting clearer abstractions
- Reducing duplication or coupling
- Preparing code for future work while preserving behavior
## Inputs to gather
- Current behavior and tests that define expected outcomes
- Structural pain points in the relevant modules
- Constraints around public APIs, compatibility, or performance
- Existing patterns for abstraction and module boundaries
## How to work
- Preserve behavior intentionally and define what must remain unchanged before editing.
- Favor small, reviewable moves over sweeping rewrites unless the code is already unsafe to work in incrementally.
- Keep interface changes minimal and justified.
- Add or strengthen tests when behavior preservation is important and current coverage is weak.
- Separate cleanup that supports the refactor from unrelated aesthetic changes.
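A small sketch of a behavior-preserving move, with the old and new versions compared directly as a characterization check (the pricing rule and names are hypothetical):

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Original behavior to preserve: base rate plus weight, doubled for express."""
    base = 5.0 + 1.2 * weight_kg
    return round(base * 2 if express else base, 2)

def _base_rate(weight_kg: float) -> float:
    return 5.0 + 1.2 * weight_kg

def shipping_cost_refactored(weight_kg: float, express: bool) -> float:
    """Same observable behavior; structure clarified by naming the steps."""
    multiplier = 2 if express else 1
    return round(_base_rate(weight_kg) * multiplier, 2)
```

Running both versions over representative inputs before deleting the original is a cheap way to turn "behavior preserved" from an assumption into evidence.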
## Output expectations
- Cleaner code with behavior preserved
- Clear explanation of the structural improvement
- Verification evidence that the refactor did not break expected behavior
## Quality checklist
- Intended behavior is unchanged unless explicitly documented otherwise.
- The resulting structure is easier to understand or extend.
- Interface changes are minimal, justified, and documented.
- Added complexity is avoided unless it buys meaningful maintainability.
## Handoff notes
- Call out any areas where behavior preservation is inferred rather than strongly verified.
- Note future cleanup opportunities only if they naturally follow from the refactor.

@@ -0,0 +1,45 @@
# Release and Change Summary
## Purpose
Explain shipped or proposed changes clearly for developers, operators, collaborators, or end users, with emphasis on what changed, why it matters, and what action is required.
## When to use
- Writing release notes or changelog entries
- Summarizing completed engineering work
- Explaining migration or rollout impact
- Turning technical changes into clear stakeholder communication
## Inputs to gather
- The actual code or product changes
- Intended audience and their level of technical depth
- Any rollout, migration, compatibility, or operational considerations
- Linked docs, issues, or feature context that explains why the change exists
## How to work
- Lead with the user-meaningful change, not internal implementation trivia.
- Group related changes into a few clear themes rather than a raw diff dump.
- Call out required actions, migrations, or risks explicitly.
- Tailor the level of detail to the audience.
- Keep the summary accurate to the implementation that actually landed.
## Output expectations
- Clear release notes, summary, or change communication draft
- Audience-appropriate explanation of impact and required action
- Explicit mention of follow-up items only when relevant
## Quality checklist
- The summary matches the real change, not the original intent alone.
- Important caveats, migrations, and compatibility notes are visible.
- Wording is concise and easy to scan.
- Audience knowledge level is respected.
## Handoff notes
- Say who the summary is for and what medium it targets if that is not obvious.
- Pair with technical docs or marketing skills when the output needs deeper explanation or stronger positioning.

@@ -0,0 +1,44 @@
# Repository Exploration
## Purpose
Rapidly build accurate context before implementation, debugging, or planning by identifying the right files, flows, conventions, and constraints in the repository.
## When to use
- Starting in an unfamiliar repository
- Locating the right implementation area for a request
- Understanding current architecture before proposing changes
- Reducing ambiguity in a vague task
## Inputs to gather
- Repository layout, entrypoints, and key modules
- Build, test, and dependency configuration
- Existing patterns for similar features or workflows
- Any local instructions, docs, or conventions already in the repo
## How to work
- Start broad, then narrow quickly to the files and flows relevant to the task.
- Favor authoritative sources in the repo such as configs, types, interfaces, docs, and existing implementations.
- Identify where decisions are already made by the codebase so you do not reinvent them.
- Summarize findings in terms of how they affect the next action.
- Stop exploring once the path to execution is clear enough.
## Output expectations
- Concise map of the relevant code paths and conventions
- Recommended starting points for changes or further investigation
- Key unknowns that still require validation
## Quality checklist
- Exploration answers practical implementation questions rather than producing generic architecture prose.
- Findings are tied to concrete files, modules, or workflows.
- Enough context is gathered to act confidently without over-reading the entire repo.
## Handoff notes
- Mention the most relevant files, commands, and repo conventions discovered.
- Flag ambiguous areas where multiple plausible implementation paths exist.

@@ -0,0 +1,45 @@
# Security Review and Hardening
## Purpose
Reduce avoidable security risk by reviewing trust boundaries, sensitive data handling, exposure paths, and abuse opportunities in the relevant system area.
## When to use
- Shipping authentication, authorization, input handling, or sensitive workflows
- Reviewing an externally exposed feature or API
- Auditing risky changes for common security failures
- Hardening an existing system area with known gaps
## Inputs to gather
- Trust boundaries, user roles, and entry points
- Sensitive data flows, secrets, tokens, or privileged operations
- Existing auth, validation, logging, and rate limiting patterns
- Relevant compliance or threat concerns if known
## How to work
- Start with who can do what, from where, and with which inputs.
- Check validation, authorization, data exposure, secret handling, and abuse resistance.
- Prefer concrete mitigations over vague warnings.
- Align with existing security controls unless they are clearly insufficient.
- Call out unverified areas when the environment or tooling limits confidence.
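The "who can do what, from where" question can be captured as an explicit server-side authorization check with deny-by-default (roles and the document-delete operation are hypothetical examples):

```python
def can_delete_document(actor_role: str, actor_id: str, owner_id: str) -> bool:
    """Server-side authorization at the trust boundary: deny by default."""
    if actor_role == "admin":
        return True
    if actor_role == "member" and actor_id == owner_id:
        return True   # owners may delete their own documents
    return False      # every other role/identity combination is denied
```

Centralizing the decision makes it reviewable and testable; the same check scattered across handlers is where ownership comparisons tend to get missed.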
## Output expectations
- Concrete risks found or a scoped hardening plan
- Recommended mitigations tied to the actual threat surface
- Clear statement of confidence and any blind spots
## Quality checklist
- Review covers the real trust boundaries and attack surface.
- Findings describe exploit consequence, not just theoretical concern.
- Mitigations are practical for the system and team.
- Residual risk is visible where hardening is incomplete.
## Handoff notes
- Separate must-fix risks from defense-in-depth improvements.
- Pair with code review, API/backend work, and observability when the issue spans implementation and detection.

@@ -0,0 +1,45 @@
# Test Strategy
## Purpose
Choose and evaluate the right level of verification so changes are covered proportionally to their risk, behavior, and maintenance cost.
## When to use
- Adding or updating tests for a feature or bug fix
- Deciding what verification is necessary before shipping
- Auditing coverage gaps in existing code
- Designing regression protection for risky areas
## Inputs to gather
- The behavior being changed or protected
- Current test layout, tooling, and conventions
- Known failure modes, regressions, or edge cases
- Constraints on test speed, environment, and confidence needs
## How to work
- Match test level to risk: unit for logic, integration for boundaries, end-to-end for critical workflows.
- Prefer the smallest test that meaningfully protects the behavior.
- Cover success paths, likely failure paths, and regressions suggested by the change.
- Reuse existing fixtures and patterns where possible.
- If tests are not feasible, define alternate validation steps and explain the gap.
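As a sketch of the smallest test that meaningfully protects a piece of pure logic, covering the success path, a boundary, and the likely failure path (the pricing function is a hypothetical example):

```python
def apply_discount(total_cents: int, percent: int) -> int:
    """Pure pricing logic: a fast unit test is the smallest meaningful guard."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents - (total_cents * percent) // 100

def test_apply_discount():
    # Success path, a boundary, and the likely failure path.
    assert apply_discount(10_000, 25) == 7_500
    assert apply_discount(10_000, 0) == 10_000
    try:
        apply_discount(10_000, 150)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

test_apply_discount()
```

Because the logic is pure, no integration or end-to-end machinery is needed; higher test levels would be reserved for the boundaries this function is called across.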
## Output expectations
- Specific recommended or implemented tests
- Clear rationale for test level and scope
- Any remaining uncovered risks or follow-up test ideas
## Quality checklist
- Tests target behavior, not implementation trivia.
- Coverage includes the change's highest-risk paths.
- Test design fits repository conventions and runtime cost expectations.
- Non-tested areas are called out explicitly when important.
## Handoff notes
- Note flaky, expensive, or environment-dependent checks separately from fast local confidence checks.
- Mention whether the test plan is implemented, recommended, or partially blocked.