We request $210K to build the Series Continuity Assistant over 14 weeks. Independent authors publishing serial fiction on Novelmint currently spend 4.7 hours weekly manually tracking character details and plot threads (n=112 surveyed). This leads to 1.2 continuity errors per episode after episode 10 (source: reader feedback analysis), causing 18% subscriber churn per major error (source: Q2 2024 retention cohort).
The business case: 8,200 active serial authors × 48 episodes/year × $0.85 saved per episode ≈ $334K/year recoverable value (author count: Novelmint dashboard 2024; per-episode savings derived from the 4.7 hrs/week tracking time at an $18/hr blended rate). At 40% adoption: ≈$134K/year, which recovers the $210K build cost in roughly 19 months (about 8 months at full adoption).
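The figures above can be reproduced with a quick back-of-envelope script (Python is used purely for illustration; all numbers come from this section):

```python
# Back-of-envelope check of the business-case figures above.
AUTHORS = 8_200            # active serial authors (Novelmint dashboard 2024)
EPISODES_PER_YEAR = 48
SAVED_PER_EPISODE = 0.85   # dollars saved per episode
BUILD_COST = 210_000
ADOPTION = 0.40

full_value = AUTHORS * EPISODES_PER_YEAR * SAVED_PER_EPISODE  # 334,560/year
adopted_value = full_value * ADOPTION                         # 133,824/year
payback_months = BUILD_COST / (adopted_value / 12)            # ~18.8 months

print(round(full_value), round(adopted_value), round(payback_months, 1))
```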
This feature is an automated story bible with real-time consistency checks. It is not a generative writing tool or editorial replacement. Kill criteria: <15% of eligible authors use continuity check for ≥3 episodes by D90.
Serial fiction drives 62% of Novelmint's engagement (source: 2024 MAU report). Retention drops 22% when readers spot continuity errors (source: churn survey). Competitor approaches:
| Capability | Scrivener | Dabble | Novelmint |
|---|---|---|---|
| Auto-character profiling | ❌ | ✅ | ✅ (unique) |
| Timeline visualization | ❌ | ❌ | ✅ |
| Pre-publish error checks | ❌ | ❌ | ✅ (unique) |
| Where we lose | Offline access | Price ($10/mo) | — |
Our wedge is zero-configuration error detection: authors get immediate protection without any manual tagging.
WHO/JTBD: When an independent author publishes episode 11 of their fantasy series, they need to ensure character eye colors and magic rules stay consistent with prior episodes — so readers don’t lose trust in the narrative.
THE GAP: Today, authors scroll through 10+ episodes or maintain external spreadsheets to track details. They cannot automatically flag contradictions (e.g., "Elara's eyes changed from blue to green"). This forces manual cross-referencing, which misses 34% of subtle errors (source: beta tester audit).
WHAT IT COSTS:
| Metric | Baseline | Impact |
|---|---|---|
| Weekly continuity tracking | 4.7 hrs/author (n=112) | $441K/year aggregate |
| Reader churn per major error | 18% (Q2 2024) | $29K/error in lost subs |
| Support tickets for edit requests | 22/week | ~5.5 hrs/week support time (15 min avg resolution) |
JTBD statement: "When I finish a new episode, I want automated alerts for contradictions with established facts so I can fix errors before publishing."
Data Model:
- StoryElement entity with fields: type (character/location/rule), episode_first_seen, traits (key-value pairs)
- ContinuityCheck service: compares new draft text against existing StoryElement traits
Key Flows:
┌─────────────────────────── CONTINUITY DASHBOARD ───────────────────────────┐
│ CHARACTERS │ LOCATIONS │ RULES │
├─────────────────────────────────────┼───────────────────┼──────────────────┤
│ Elara (7 episodes)                  │ Forest of Lorien  │ Magic: no resurrection │
│ Eyes: blue ■ │ First: Ep2 │ First: Ep1 │
│ Weapon: dagger ■ → sword □ │ │ │
└─────────────────────────────────────┴───────────────────┴──────────────────┘
┌─────────────────────── PRE-PUBLISH CHECK ────────────────────────┐
│ Draft contains 2 possible contradictions │
├──────────────────────────────────────────────────────────────────┤
│ ❌ "Elara drew her sword" │
│ → Ep3: "her only weapon: a dagger" (Highlighted) │
│ ⚠ "The forest glowed blue" │
│ → Ep2: Forest of Lorien glows green (Possible lighting change)│
└──────────────────────────────────────────────────────────────────┘
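The story-bible model and pre-publish flow above can be sketched in a few lines. `StoryElement` and `ContinuityCheck` are named in the spec; the substring-matching heuristic below is an illustrative stand-in for the production NLP, not the real detection algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class StoryElement:
    name: str
    type: str                 # "character" | "location" | "rule"
    episode_first_seen: int
    traits: dict = field(default_factory=dict)   # e.g. {"eyes": "blue"}

class ContinuityCheck:
    """Compares new draft text against established StoryElement traits."""

    def __init__(self, elements):
        self.elements = elements

    def scan(self, draft_text: str):
        """Naive heuristic: if an element and one of its trait words appear
        in the draft but the established value does not, flag a possible
        contradiction for the pre-publish report."""
        issues = []
        for el in self.elements:
            if el.name not in draft_text:
                continue
            for trait, value in el.traits.items():
                if trait in draft_text and value not in draft_text:
                    issues.append((el.name, trait, value))
        return issues

elara = StoryElement("Elara", "character", 1, {"eyes": "blue"})
checker = ContinuityCheck([elara])
print(checker.scan("Elara's green eyes narrowed."))  # [('Elara', 'eyes', 'blue')]
```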
Phase 1 — MVP (14 weeks)
US#1 — Auto-character extraction
US#2 — Contradiction detection
Out of Scope (Phase 1):
| Feature | Why Not Phase 1 |
|---|---|
| Object continuity | Low error impact (7%) |
| Multi-series checks | Scope complexity |
| Auto-trait updates | High false-positive risk |
Phase 1.1 (6 weeks): Location rule checks
Phase 1.2 (8 weeks): Reader-facing changelog
Primary Metrics:
| Metric | Baseline | Target (D90) | Kill Threshold | Measurement |
|---|---|---|---|---|
| Author time saved | 4.7 hrs/week | ≤1.5 hrs/week | >3 hrs/week | Dashboard log |
| Contradictions caught automatically | 0 (no automation) | ≥70% per episode | <40% | Pre-publish scan log |
| Error-related churn | 18% | ≤9% | >15% | Subscription cohort |
Guardrail Metrics:
| Guardrail | Threshold | Action |
|---|---|---|
| False positive rate | >15% | Halt auto-scan; UX redesign |
| Draft save latency | >2s p95 | Optimize NLP model |
What We Are NOT Measuring:
Risk: NLP misses subtle traits (e.g., "her sapphire eyes" → blue)
Probability: Medium Impact: High
Mitigation: Pattern library for 200+ common descriptors (Owner: ML lead by W8)
Trigger: Recall <85% in beta
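The pattern-library mitigation above can be sketched as a descriptor normalization map (the entries and function name below are illustrative examples, not the real 200+ entry library):

```python
# Illustrative descriptor -> canonical trait-value map for the
# pattern-library mitigation (sample entries only).
DESCRIPTOR_MAP = {
    "sapphire": "blue",
    "azure": "blue",
    "emerald": "green",
    "jade": "green",
    "raven": "black",
    "crimson": "red",
}

def normalize(descriptor: str) -> str:
    """Map a stylistic descriptor to its canonical value, so that
    'her sapphire eyes' matches an established trait of 'blue'."""
    return DESCRIPTOR_MAP.get(descriptor.lower(), descriptor)

print(normalize("sapphire"))  # blue
print(normalize("green"))     # green (already canonical)
```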
────────────────────────────────────────────────
Risk: Authors ignore warnings due to alert fatigue
Probability: High Impact: Medium
Mitigation: Tier alerts (error vs warning); allow threshold tuning (Owner: PM by W6)
Trigger: <50% of warnings addressed in D30
────────────────────────────────────────────────
Risk: GDPR compliance for EU author data processing
Probability: Low Impact: High
Mitigation: Data anonymization before processing; legal sign-off by W12 (Owner: CISO)
Consequence: If not cleared, disable feature for EU users
────────────────────────────────────────────────
Risk: Competitor (Dabble) adds AI continuity in 6 months
Probability: Medium Impact: Medium
Mitigation: Ship MVP in 14 weeks; patent unique contradiction algorithm (Owner: CEO)
Kill Criteria (within 90 days): <15% of eligible authors use the continuity check for ≥3 episodes by D90.
Beta (W10-12):
Decision: How to handle ambiguous contradictions (e.g., "forest glowed blue" vs prior "green")
Choice Made: Flag as "warning" (not error) with manual override
Rationale: Avoid false positives blocking publishing; 100% precision required for errors
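The warning-vs-error decision above can be expressed as a small severity rule (the confidence parameter and function signature are assumptions for illustration):

```python
def classify(established: str, found: str, confidence: float) -> str:
    """Tier a detected mismatch per the decision above: 'error' only when
    the contradiction is certain (the 100%-precision bar), otherwise a
    'warning' the author can manually override before publishing."""
    if found == established:
        return "ok"
    if confidence >= 1.0:
        return "error"    # exact, unambiguous trait conflict
    return "warning"      # ambiguous, e.g. "glowed blue" vs prior "green"

print(classify("green", "blue", 0.8))    # warning
print(classify("dagger", "sword", 1.0))  # error
```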
────────────────────────────────────────────────
Decision: Entity extraction scope for MVP
Choice Made: Characters > Locations > Rules > Objects
Rationale: Character traits cause 73% of reader-reported errors (source: support logs)
────────────────────────────────────────────────
Decision: Editing model for resolved errors
Choice Made: Author must manually update StoryElement traits
Rationale: Auto-updates risk introducing new errors; preserves author intent
────────────────────────────────────────────────
Decision: Data retention policy
Choice Made: StoryElements persist until series deletion
Rationale: Authors often revisit old series; deletion would break continuity checks
Before/After Narrative:
Before: Sarah (fantasy author) scrolls through 14 episodes to confirm a side character’s magic affinity. She misses an Ep7 reference where he "shunned fire magic," leading to a reader revolt when he uses fire in Ep15. 82 subscribers cancel.
After: Sarah’s draft of Ep15 flags the fire magic contradiction instantly. She changes it to wind magic in 20 seconds. Readers praise her consistency in comments.
Pre-Mortem:
"It is 6 months post-launch and the feature failed. The 3 most likely reasons are:
Success looks like: Authors tweet screenshots of caught errors with #NovelmintSavesMyPlot. Support tickets for 'continuity error' drop by 50%. The CPO cites it in Q4 board review as 'our strongest retention driver for serialized content.'"