PRD · May 11, 2026

Novelmint

Executive Brief

We request $210K to build the Series Continuity Assistant over 14 weeks. Independent authors publishing serial fiction on Novelmint currently spend 4.7 hours weekly manually tracking character details and plot threads (n=112 surveyed). This leads to 1.2 continuity errors per episode after episode 10 (source: reader feedback analysis), causing 18% subscriber churn per major error (source: Q2 2024 retention cohort).

The business case: 8,200 active serial authors × 48 episodes/year × $0.85 saved per episode = $334K/year recoverable value (author count: Novelmint dashboard 2024; savings: 4.7 hrs × $18/hr blended rate ÷ 10 episodes). At 40% adoption that is $134K/year, recovering the $210K build cost in roughly 19 months (about 8 months at full adoption).

This feature is an automated story bible with real-time consistency checks. It is not a generative writing tool or editorial replacement. Kill criteria: <15% of eligible authors use continuity check for ≥3 episodes by D90.

Strategic Context

Serial fiction drives 62% of Novelmint's engagement (source: 2024 MAU report). Retention drops 22% when readers spot continuity errors (source: churn survey). Competitor approaches:

  • Scrivener: Manual story bible templates (❌ no auto-updates)
  • Dabble: Basic character trait tracking (❌ no contradiction checks)
  • Google Docs: Free-form notes (❌ no structured data)
| Capability | Scrivener | Dabble | Novelmint |
|---|---|---|---|
| Auto-character profiling | | | ✅ (unique) |
| Timeline visualization | | | |
| Pre-publish error checks | | | ✅ (unique) |
| WHERE WE LOSE | Offline access | Price ($10/mo) | ❌ vs ✅ |

Our wedge is zero-configuration error detection: authors get immediate protection without manual tagging.

Problem Statement

WHO/JTBD: When an independent author publishes episode 11 of their fantasy series, they need to ensure character eye colors and magic rules stay consistent with prior episodes — so readers don’t lose trust in the narrative.

THE GAP: Today, authors scroll through 10+ episodes or maintain external spreadsheets to track details. They cannot automatically flag contradictions (e.g., "Elara's eyes changed from blue to green"). This forces manual cross-referencing, which misses 34% of subtle errors (source: beta tester audit).

WHAT IT COSTS:

| Metric | Baseline | Impact |
|---|---|---|
| Weekly continuity tracking | 4.7 hrs/author (n=112) | $441K/year aggregate |
| Reader churn per major error | 18% (Q2 2024) | $29K/error in lost subs |
| Support tickets for edits | 22/week | 15 min resolution avg |

JTBD statement: "When I finish a new episode, I want automated alerts for contradictions with established facts so I can fix errors before publishing."

Solution Design

Data Model:

  • StoryElement entity with fields: type (character/location/rule), episode_first_seen, traits (key-value pairs)
  • ContinuityCheck service compares new draft text against existing StoryElement traits
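
A minimal sketch of this data model, assuming Python dataclasses; the Contradiction type and any field names beyond those listed above are illustrative, not a final schema:

```python
from dataclasses import dataclass, field
from typing import Dict, Literal

ElementType = Literal["character", "location", "rule"]

@dataclass
class StoryElement:
    """One tracked entity in the series story bible."""
    name: str                      # e.g. "Elara"
    type: ElementType
    episode_first_seen: int
    traits: Dict[str, str] = field(default_factory=dict)  # e.g. {"eyes": "blue"}

@dataclass
class Contradiction:
    """A possible conflict between draft text and an established trait."""
    element: str            # e.g. "Elara"
    trait: str              # e.g. "eyes"
    established: str        # e.g. "blue"
    found_in_draft: str     # e.g. "green"
    episode_reference: int  # episode where the trait was established
    severity: Literal["error", "warning"]
```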

Key Flows:

  1. Automatic extraction: NLP identifies entities/traits during draft saving
  2. Living bible UI: Central dashboard showing character evolution timeline
  3. Pre-publish scan: Highlights contradictions in draft with episode references
┌─────────────────────────── CONTINUITY DASHBOARD ───────────────────────────┐
│ CHARACTERS                          │ LOCATIONS         │ RULES            │
├─────────────────────────────────────┼───────────────────┼──────────────────┤
│ Elara (7 episodes)                  │ Forest of Lorien  │ Magic: no        │
│   Eyes: blue ■                      │   First: Ep2      │   resurrection   │
│   Weapon: dagger ■ → sword □        │                   │   First: Ep1     │
└─────────────────────────────────────┴───────────────────┴──────────────────┘
┌─────────────────────── PRE-PUBLISH CHECK ────────────────────────┐
│ Draft contains 2 possible contradictions                         │
├──────────────────────────────────────────────────────────────────┤
│ ❌ "Elara drew her sword"                                         │
│    → Ep3: "her only weapon: a dagger" (Highlighted)              │
│ ⚠ "The forest glowed blue"                                       │
│    → Ep2: Forest of Lorien glows green (Possible lighting change)│
└──────────────────────────────────────────────────────────────────┘
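
A simplified sketch of how the ContinuityCheck pass could surface flags like those above, reusing the illustrative types from the data-model sketch. The known_values lookup (which trait values the extractor can recognize) is hypothetical, and plain-text matching stands in for the real NLP pipeline:

```python
import re
from typing import Dict, List, Set

def check_draft(draft: str, elements: List[StoryElement],
                known_values: Dict[str, Set[str]]) -> List[Contradiction]:
    """Flag trait values in the draft that conflict with established traits."""
    findings: List[Contradiction] = []
    for el in elements:
        if el.name.lower() not in draft.lower():
            continue  # element never mentioned in this draft
        for trait, established in el.traits.items():
            for candidate in known_values.get(trait, set()):
                if candidate == established:
                    continue
                # Crude proximity check: the element name and a conflicting
                # trait value appear in the same sentence.
                pattern = rf"{re.escape(el.name)}[^.]*\b{re.escape(candidate)}\b"
                if re.search(pattern, draft, flags=re.IGNORECASE):
                    findings.append(Contradiction(
                        element=el.name,
                        trait=trait,
                        established=established,
                        found_in_draft=candidate,
                        episode_reference=el.episode_first_seen,
                        severity="error",
                    ))
    return findings
```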

Acceptance Criteria

Phase 1 — MVP (14 weeks)
US#1 — Auto-character extraction

  • Given published episodes 1-10
  • When author drafts episode 11
  • Then system extracts characters with 95% recall (validated by QA against 50 test series)
    If fails: Manual entry required → blocks value prop

US#2 — Contradiction detection

  • Given established character trait "Elara: eyes=blue"
  • When draft contains "Elara's green eyes"
  • Then system flags as P0 error with episode reference
    If fails: Authors ship errors → reader churn continues
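
US#2 maps directly onto an automated check; a hedged pytest-style sketch against the illustrative check_draft helper from Solution Design:

```python
def test_flags_eye_color_contradiction():
    # Established fact from an earlier episode.
    elara = StoryElement(name="Elara", type="character",
                         episode_first_seen=3, traits={"eyes": "blue"})
    draft = "Elara's green eyes scanned the treeline."
    findings = check_draft(draft, [elara],
                           known_values={"eyes": {"blue", "green", "grey"}})

    assert len(findings) == 1
    assert findings[0].severity == "error"       # P0: hard contradiction
    assert findings[0].episode_reference == 3    # cites the establishing episode
```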

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
|---|---|
| Object continuity | Low error impact (7%) |
| Multi-series checks | Scope complexity |
| Auto-trait updates | High false-positive risk |

Phase 1.1 (6 weeks): Location and rule checks
Phase 1.2 (8 weeks): Reader-facing changelog

Success Metrics

Primary Metrics:

| Metric | Baseline | Target (D90) | Kill Threshold | Measurement |
|---|---|---|---|---|
| Author time saved | 4.7 hrs/week | ≤1.5 hrs/week | >3 hrs/week | Dashboard log |
| Contradictions caught | 0 auto | 70%/episode | <40% | Pre-publish scan log |
| Error-related churn | 18% | ≤9% | >15% | Subscription cohort |

Guardrail Metrics:

| Guardrail | Threshold | Action |
|---|---|---|
| False positive rate | >15% | Halt auto-scan; UX redesign |
| Draft save latency | >2s p95 | Optimize NLP model |

What We Are NOT Measuring:

  • Total scans run (vanity; doesn’t indicate value)
  • Number of traits stored (output, not outcome)
  • Tool open rate (doesn’t correlate with error reduction)

Risk Register

Risk: NLP misses subtle traits (e.g., "her sapphire eyes" → blue)
Probability: Medium Impact: High
Mitigation: Pattern library for 200+ common descriptors (Owner: ML lead by W8)
Trigger: Recall <85% in beta
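
The pattern library could start as a plain synonym table feeding the extractor; a minimal sketch with hypothetical entries:

```python
# Hypothetical subset of the descriptor library; the full version would
# cover 200+ figurative descriptors per trait.
DESCRIPTOR_SYNONYMS = {
    "eyes": {
        "sapphire": "blue", "azure": "blue", "cerulean": "blue",
        "emerald": "green", "jade": "green",
    },
}

def normalize_descriptor(trait: str, raw_value: str) -> str:
    """Map a figurative descriptor (e.g. 'sapphire') to its canonical value."""
    return DESCRIPTOR_SYNONYMS.get(trait, {}).get(raw_value.lower(), raw_value.lower())
```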
────────────────────────────────────────────────
Risk: Authors ignore warnings due to alert fatigue
Probability: High Impact: Medium
Mitigation: Tier alerts (error vs warning); allow threshold tuning (Owner: PM by W6)
Trigger: <50% of warnings addressed in D30
────────────────────────────────────────────────
Risk: GDPR compliance for EU author data processing
Probability: Low Impact: High
Mitigation: Data anonymization before processing; legal sign-off by W12 (Owner: CISO)
Consequence: If not cleared, disable feature for EU users
────────────────────────────────────────────────
Risk: Competitor (Dabble) adds AI continuity in 6 months
Probability: Medium Impact: Medium
Mitigation: Ship MVP in 14 weeks; patent unique contradiction algorithm (Owner: CEO)

Kill Criteria (within 90 days):

  1. <15% of authors use continuity check for ≥3 episodes
  2. False positive rate >20% for P0 errors
  3. Draft save latency >5s p95

Phased Launch Plan

Beta (W10-12):

  • 50 power users from serial author cohort
  • Measure: false positive rate, trait recall

GA Rollout:

  • Tier 1: Authors with ≥10 episodes (3,100 users)
  • Comms: In-app modals + creator newsletter

Success Signals:

  • D7: 40% of Tier 1 authors run ≥2 checks
  • D30: 25% reduction in "continuity" support tickets

Strategic Decisions Made

Decision: How to handle ambiguous contradictions (e.g., "forest glowed blue" vs prior "green")
Choice Made: Flag as "warning" (not error) with manual override
Rationale: Avoid false positives blocking publishing; 100% precision required for errors
────────────────────────────────────────────────
Decision: Entity extraction scope for MVP
Choice Made: Characters > Locations > Rules > Objects
Rationale: Character traits cause 73% of reader-reported errors (source: support logs)
────────────────────────────────────────────────
Decision: Editing model for resolved errors
Choice Made: Author must manually update StoryElement traits
Rationale: Auto-updates risk introducing new errors; preserves author intent
────────────────────────────────────────────────
Decision: Data retention policy
Choice Made: StoryElements persist until series deletion
Rationale: Authors often revisit old series; deletion would break continuity checks

Appendix

Before/After Narrative:
Before: Sarah (fantasy author) scrolls through 14 episodes to confirm a side character’s magic affinity. She misses an Ep7 reference where he "shunned fire magic," leading to a reader revolt when he uses fire in Ep15. 82 subscribers cancel.

After: Sarah’s draft of Ep15 flags the fire magic contradiction instantly. She changes it to wind magic in 20 seconds. Readers praise her consistency in comments.

Pre-Mortem:
"It is 6 months post-launch and the feature failed. The 3 most likely reasons are:

  1. False positives forced authors to disable auto-scans, reverting to manual checks.
  2. Dabble shipped a $5/month alternative before our GA, capturing price-sensitive authors.
  3. Our NLP couldn’t handle complex prose, missing 30% of traits in literary fiction.

Success looks like: Authors tweet screenshots of caught errors with #NovelmintSavesMyPlot. Support tickets for 'continuity error' drop by 50%. The CPO cites it in Q4 board review as 'our strongest retention driver for serialized content.'"
