A production-ready PRD has exactly 10 sections. Most manually written PRDs have 4. The 6 missing sections are where features go wrong: missing edge cases become production bugs, missing acceptance criteria become rework, missing dependencies become blocked sprints.
Here is a complete breakdown of what each section must contain, and why omitting any of them costs real engineering time.
Section 1: Objective
One sentence. Not a paragraph. The objective answers: what is this feature, who is it for, and what problem does it solve? If you cannot write it in one sentence, the feature is not scoped well enough to build.
Bad: "This feature will improve the user experience and drive engagement for our core PM users."
Good: "Allow product managers to export a completed PRD directly to their Linear workspace as structured tickets in one click, eliminating the manual copy-paste step that takes 40 minutes per feature."
Section 2: Background
Explain why this feature exists now. What changed — in user behavior, the market, or the product — that makes this the right build at this moment? Background gives engineers context for why decisions were made the way they were. Without it, every architecture decision looks arbitrary.
Section 3: User Stories
Write 3–7 user stories in the format: As a [persona], I want to [action] so that [outcome].
Each user story must be independently testable. If you cannot write an acceptance criterion for a user story, the story is not specific enough. The persona must be named — not "the user," but "the senior PM managing 3+ features simultaneously" or "the startup founder who also writes the first version of every spec."
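The template and the named-persona rule are mechanical enough to lint. Below is a minimal sketch of such a check; the regex and the generic-persona blocklist are illustrative assumptions, not a standard — adapt both to your own PRD tooling.

```python
import re

# Template check for "As a [persona], I want to [action] so that [outcome]".
STORY_PATTERN = re.compile(
    r"^As an? (?P<persona>.+?), I want to (?P<action>.+?) so that (?P<outcome>.+)$"
)
# Personas too generic to be useful (illustrative list).
GENERIC_PERSONAS = {"user", "the user", "customer", "the customer"}

def lint_user_story(story: str) -> list[str]:
    """Return problems with a user story; an empty list means it passes."""
    match = STORY_PATTERN.match(story.strip())
    if not match:
        return ["does not follow the As a / I want to / so that template"]
    problems = []
    if match.group("persona").lower() in GENERIC_PERSONAS:
        problems.append("persona is generic; name a specific persona instead")
    return problems
```

A story like "As a senior PM managing 3+ features simultaneously, I want to export my PRD to Linear so that I skip manual ticket creation" passes; "As a user, I want to..." gets flagged.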
Section 4: Success Metrics
Define 2–4 quantitative metrics with a baseline value today and a 90-day target. At least one metric must be measurable within the first sprint — otherwise you will not know if the build is working until it is too late to course-correct.
Bad metric: "Improve user engagement."
Good metric: "PRD-to-Linear export rate: 0% today → 40% of completed PRDs exported within 7 days of generation."
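Both rules in this section — 2 to 4 metrics, at least one measurable in the first sprint — are checkable. Here is one possible shape for a metrics record and its checks; the field names are assumptions, not a standard PRD schema.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    baseline: float          # value today
    target_90d: float        # 90-day target
    sprint_measurable: bool  # readable within the first sprint?

def check_metrics(metrics: list[Metric]) -> list[str]:
    """Flag violations of the 2-4 metrics rule and the first-sprint rule."""
    problems = []
    if not 2 <= len(metrics) <= 4:
        problems.append(f"expected 2-4 metrics, got {len(metrics)}")
    if not any(m.sprint_measurable for m in metrics):
        problems.append("no metric is measurable within the first sprint")
    return problems
```

The export-rate metric above would be `Metric("PRD-to-Linear export rate", 0.0, 40.0, sprint_measurable=True)`.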
Section 5: Scope
The out-of-scope list is as important as the in-scope list. "We are not building X in this sprint" is information that prevents feature creep, misaligned expectations, and the dreaded scope conversation at the sprint retrospective. Be explicit.
Section 6: Edge Cases
Document at least five edge cases. An edge case is a situation where the happy path breaks — an empty state, a network failure, a rate limit, a malformed input, a permission conflict. For each edge case, specify what the system should do. "Handle gracefully" is not a specification.
71% of the PRDs we analyzed had fewer than two documented edge cases. Engineers find the rest during implementation, which means each undocumented edge case costs an average of 2–4 hours of rework.
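One way to make the "no 'handle gracefully'" rule concrete is an edge-case matrix: each edge case maps to a specific, testable behavior, and a lint pass rejects vague verbs. Every entry and the vague-word list below are illustrative, built on the Linear-export example, not prescriptive.

```python
# Edge case -> specified behavior (illustrative entries).
EDGE_CASES = {
    "empty PRD (no user stories)": "block export; show 'Add at least one user story' inline",
    "Linear API rate limit (429)": "retry 3x with exponential backoff, then surface a retry button",
    "network failure mid-export": "roll back created tickets; report which sections succeeded",
    "malformed section content": "export valid sections; list skipped sections by name",
    "user lacks Linear write permission": "disable the export button; link to workspace settings",
}

VAGUE_WORDS = ("gracefully", "appropriately", "correctly")  # non-specifications

def vague_edge_cases(cases: dict[str, str]) -> list[str]:
    """Return the edge cases whose specified behavior is vague."""
    return [case for case, behavior in cases.items()
            if any(word in behavior.lower() for word in VAGUE_WORDS)]
```

An entry like `"API timeout": "handle gracefully"` would be flagged; every entry above passes.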
Section 7: Dependencies
List every external system, API, team, or decision that this feature depends on. Note who owns each dependency and whether it is resolved, in progress, or blocked. A feature that silently depends on a data model change in another team's sprint is a delayed launch waiting to happen.
Section 8: Open Questions
Never hide an unresolved decision in a paragraph. List every open question explicitly, with the name of the person responsible for answering it and the date by which it must be resolved to stay on schedule. This section is what separates a draft PRD from a finalized one.
Section 9: Risks
Document 3–5 risks with probability (High/Medium/Low), impact (High/Medium/Low), and a mitigation plan. The mitigation must be specific: a named owner, a concrete action, and a timeline. "Monitor the situation" is not a mitigation.
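The High/Medium/Low grid lends itself to a simple priority score. This sketch multiplies probability by impact — a common convention, not something the PRD format mandates — and the example values in the usage note are hypothetical.

```python
from dataclasses import dataclass

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

@dataclass
class Risk:
    description: str
    probability: str   # High / Medium / Low
    impact: str        # High / Medium / Low
    mitigation: str    # concrete action, not "monitor the situation"
    owner: str         # named person
    deadline: str      # timeline for the mitigation

    def score(self) -> int:
        """Priority score: probability x impact, 1 (low) to 9 (high)."""
        return LEVELS[self.probability] * LEVELS[self.impact]
```

Sorting risks by `score()` descending puts the High/High items at the top of the section, e.g. `Risk("Linear deprecates the endpoint we use", "Medium", "High", "migrate to the new client by sprint 3", "owner name", "end of sprint 2").score()` is 6.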
Section 10: Acceptance Criteria
Write Given/When/Then criteria for every user story. Each criterion must be independently verifiable by a QA engineer without asking the PM for clarification. If the QA engineer has to ask "what does 'works correctly' mean?" the criterion is not specific enough.
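A Given/When/Then criterion should translate directly into a test a QA engineer can run without asking the PM anything. Here is a sketch for the Linear-export story; the `export` stub is a stand-in assumption, not a real API.

```python
def export(prd: dict) -> dict:
    """Stand-in for the real exporter: creates one ticket per user story."""
    return {"tickets": [{"title": story} for story in prd["user_stories"]]}

def test_export_creates_one_ticket_per_story():
    # Given a completed PRD with three user stories
    prd = {"user_stories": ["story A", "story B", "story C"]}
    # When the PM exports it to Linear
    result = export(prd)
    # Then exactly three tickets exist, titled after the stories
    assert len(result["tickets"]) == 3
    assert result["tickets"][0]["title"] == "story A"
```

Each clause of the criterion maps to one line of the test; "works correctly" never appears, because the Then clause states the exact observable outcome.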
This is the section that separates a PRD from a memo. Without testable acceptance criteria, engineers build to their interpretation of the feature, and their interpretation is rarely wrong; it is just different from yours.