# Performance Optimization

## Purpose
Improve responsiveness and efficiency by focusing on the bottlenecks that matter most to users, systems, or operating cost.

## When to use

- Investigating slow pages, endpoints, jobs, or queries
- Reducing memory, CPU, network, or rendering overhead
- Preventing regressions in critical paths
- Prioritizing optimization work with limited time

## Inputs to gather

- Performance symptoms, target metrics, and critical user or system paths
- Existing measurements, profiles, logs, traces, or benchmarks
- Current architecture and known hot spots
- Acceptable tradeoffs in complexity, cost, and feature scope

## How to work

- Measure or inspect evidence before optimizing.
- Focus on the dominant bottleneck rather than broad cleanup.
- Prefer changes that improve the critical path without making the system harder to maintain.
- Re-measure after changes when possible.
- Capture the conditions under which the optimization matters so future work does not cargo-cult it.
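
The measure-first workflow above can be sketched with Python's standard profiler. This is a minimal sketch, assuming a Python codebase; `slow_task` is a hypothetical stand-in for whatever code path is under investigation:

```python
import cProfile
import io
import pstats

def slow_task(n):
    # Hypothetical workload standing in for the suspected hot path.
    return sum(i * i for i in range(n))

# Measure first: profile the call instead of guessing at the bottleneck.
profiler = cProfile.Profile()
profiler.enable()
slow_task(100_000)
profiler.disable()

# Rank by cumulative time so the dominant bottleneck surfaces at the top,
# then keep only the top five entries rather than the full listing.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The top entries of the report identify where to focus; everything below them is usually the "broad cleanup" this skill says to avoid.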

## Output expectations

- Bottleneck diagnosis and recommended or implemented improvement
- Before-and-after evidence when available
- Notes on tradeoffs, limits, and remaining hot spots
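
For before-and-after evidence, a quick `timeit` comparison is often enough. This sketch assumes Python; `baseline` and `candidate` are hypothetical implementations of the same behavior:

```python
import timeit

def baseline(n):
    # Baseline: build the string by repeated concatenation.
    out = ""
    for i in range(n):
        out += str(i)
    return out

def candidate(n):
    # Candidate: build the same string with a single join.
    return "".join(str(i) for i in range(n))

# Sanity check first: an optimization that changes behavior is a regression.
assert baseline(1_000) == candidate(1_000)

# Take the best of several repeats to reduce scheduler and warm-up noise.
t_before = min(timeit.repeat(lambda: baseline(10_000), number=20, repeat=3))
t_after = min(timeit.repeat(lambda: candidate(10_000), number=20, repeat=3))
print(f"before: {t_before:.4f}s  after: {t_after:.4f}s")
```

Record the measured numbers alongside the workload size and environment, since a speedup observed at one scale may not hold at another.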

## Quality checklist

- Optimization targets a real bottleneck.
- Claimed gains are grounded in evidence, not assumption alone.
- Complexity added by the optimization is justified.
- Regression risk is considered for correctness and maintainability.

## Handoff notes

- Note whether the result is measured, estimated, or hypothesis-driven.
- Pair with observability and operability when instrumentation is weak.