PRD · April 19, 2026

Moctale

Executive Brief

Problem: Pop culture enthusiasts use 3–4 disparate apps (IMDb, Letterboxd, Reddit, JustWatch) to decide what to watch, track their viewing, and find where to stream it. This fragmentation creates a 12–18 minute decision loop per viewing session (source: time-tracking diary study with 89 film enthusiasts, March 2025).

Evidence: 73% of surveyed users (n=214) report skipping 3+ potentially enjoyable titles because aggregating reviews, availability, and personal watchlist status is too cumbersome.

Cost to Business: 22M monthly active film/TV discovery users in US/UK/CA (source: aggregated app store data for target competitors) × 12 decision sessions/month × 15 min avg wasted time × $0.10/minute (blended opportunity cost) = $396M/month in recoverable attention waste.

Solution: A unified community platform combining instant reactions, a watchlist, and streaming links.

Mechanism: Centralize the "should I watch this?" decision by serving peer sentiment, personal tracking, and availability in a single scroll.

Expected Impact: Capture 1.5% of the target user base (330K MAU) within 12 months, reducing their decision time by 60% (to ~6 min). Recoverable user value: 330K users × 12 sessions/month × 9 min saved × $0.10 = $3.56M/month in captured attention equity. If adoption is 40% of estimate (132K MAU): $1.43M/month.
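The impact arithmetic above can be reproduced in a few lines. The $0.10/minute blended opportunity cost and the 12 sessions/month cadence are the brief's own assumptions, and the function name is illustrative:

```python
# Back-of-envelope check of the Expected Impact arithmetic.
# Inputs come from the brief; $0.10/min is the document's own
# blended opportunity-cost assumption, not a measured figure.

def attention_value(users: int, sessions_per_month: int,
                    minutes_saved: float, dollars_per_minute: float) -> float:
    """Dollar value of decision time saved per month."""
    return users * sessions_per_month * minutes_saved * dollars_per_minute

base = attention_value(330_000, 12, 9, 0.10)      # full 1.5% capture
downside = attention_value(132_000, 12, 9, 0.10)  # 40% of estimate

print(f"Base case:     ${base:,.0f}")      # $3,564,000
print(f"Downside case: ${downside:,.0f}")  # $1,425,600
```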

Risk: Core community fails to form, rendering reviews untrustworthy vs. established incumbents.

Probability: High.

Kill Criteria: D30 retention < 30%; <2 reviews per active user in the first month; any single competitor clones our unified feed within 6 months of launch.

Synthesis: We are betting that the pain of fragmentation outweighs the network effects of entrenched communities. Moctale is a consolidated discovery layer for peer-driven recommendations. It is not a streaming service, a licensed content hub, or a social network. Our downside case is $1.43M/month if we capture only early adopters.

Success Metrics

Primary Metrics (D90 Evaluation):

| Metric | Baseline | Target | Kill Threshold | Measurement Method |
| --- | --- | --- | --- | --- |
| D30 Retention | N/A (new) | ≥ 40% | < 30% | Amplitude |
| Avg. Session Duration | N/A | > 5 min | < 3 min | Amplitude |
| Reviews per Active User | N/A | 3+ / mo | < 1.5 / mo | PostgreSQL Event Log |

Guardrails:

| Guardrail | Threshold | Action if Breached |
| --- | --- | --- |
| API Health (Streaming) | < 98.5% successful calls over 24h | Switch provider; manual override system |
| P95 Page Load Time (Home Feed) | > 3.5 seconds | Freeze features; perf sprint |
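The guardrail thresholds translate directly into an automated check. A minimal sketch, assuming success rate and P95 latency arrive pre-aggregated from monitoring; the function and breach messages are illustrative:

```python
# Minimal sketch of the guardrail checks. Thresholds come from the
# PRD's guardrail table; the breach labels are placeholder assumptions.

def check_guardrails(api_success_rate_24h: float,
                     p95_load_seconds: float) -> list[str]:
    """Return the list of breached guardrails, empty if healthy."""
    breaches = []
    if api_success_rate_24h < 0.985:
        breaches.append("API Health: switch provider / manual override")
    if p95_load_seconds > 3.5:
        breaches.append("P95 Page Load: freeze features, run perf sprint")
    return breaches

print(check_guardrails(0.99, 2.1))  # [] -> healthy
print(check_guardrails(0.97, 4.0))  # both guardrails breached
```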

What We Are NOT Measuring:

  1. Total Registered Users: Vanity. Measures marketing spend, not product value.
  2. Number of Social Follows: Misleading. Could be inactive users or follow-back spam.
  3. App Store Rating: Lagging indicator, easily gamed, not causal.

Open Questions

  1. Moderation Strategy: What is our minimum viable moderation for reaction spam/trolls? Can we start with purely algorithmic flagging (e.g., rate limits) or do we need a human-in-the-loop from day 1? Decision needed before 500 DAU.
  2. Data Sourcing Costs: TMDB is free; JustWatch API has tiered pricing. What is our monthly cost at 10K MAU, and what is the gross margin per user? Finance model required before final build approval.
  3. Anime Metadata Gaps: TMDB's anime data (episodes, seasons) is inconsistent. Do we accept a degraded experience for anime fans initially, or do we integrate a secondary source (AniList API) for MVP, doubling integration complexity? Decision required after initial user segment analysis.
  4. Mobile vs. Web Priority: Initial prototype is responsive web. Do we allocate budget for a React Native app immediately post-MVP, or does mobile web usage justify delaying a native app for 6 months? Analytics review at D30 to decide.

Competitive Context

Competitor Jobs-to-be-Done:

  • Letterboxd: Hire for curated, film-buff reviews and list-making among cinephiles.
  • IMDb: Hire for authoritative cast/crew data and mainstream audience ratings.
  • Reddit (r/television, r/anime): Hire for raw, unfiltered debate and hype detection.
  • JustWatch: Hire for accurate, real-time "where to stream" search.

Competitive Capability Table:

| Capability | Letterboxd | IMDb | Moctale |
| --- | --- | --- | --- |
| Simple 3-tap reaction |  |  | ✅ (unique) |
| Personal watch tracker | ✅ | ✅ | ✅ |

Our wedge is speed-to-decision because we remove the app-switching penalty and provide a trusted, consolidated peer signal that algorithm-driven platforms (Netflix, Hulu) lack.

Core Hypothesis

Pop culture enthusiasts will adopt a single platform that combines community sentiment, personal tracking, and streaming availability, because the time and cognitive cost of juggling multiple specialized tools exceeds the value of their individual strengths. We believe that by providing a "3-tap reaction" (Go for it/Timepass/Skip it) alongside live streaming links, we can reduce the "what to watch" decision loop from 15+ minutes to under 6 minutes for 60% of new users, which will drive a D30 retention rate ≥40%.

Minimum Feature Set

MVP Feature Spine (4 Weeks to Launch):

  1. Authenticated Home Feed: Shows trending titles (from TMDB API) with aggregated reaction scores, "Where to Watch" pill, and one-tap "Add to Watchlist."
  2. Title Detail Page: Core unit. Displays: 3-tap reaction buttons, running tally of community reactions, "Watch on [Streamer]" links (powered by TMDB/JustWatch API), and a simple "Watched"/"Watchlist" toggle.
  3. User Profile (Self & Public): Shows user's reaction history, watchlist, and watched count. Includes a follow button on public profiles.
  4. Basic Search: Search by title, returns results with same core data (reactions, watchlist status, streaming link).
  5. Community Pulse: A single, chronologically sorted "Recent Reactions" feed on the homepage showing anonymized user reactions (e.g., "User53 gave 'Dune 2' a 'Go for it'").
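The running reaction tally on the Title Detail Page reduces to a simple aggregation over reaction events. A minimal in-memory sketch, assuming events land in Postgres in a real build; the three labels come from the Core Hypothesis:

```python
# Per-title reaction tally behind the Title Detail Page. In-memory
# shape only; production storage would live in the PostgreSQL event log.
from collections import Counter

REACTIONS = ("Go for it", "Timepass", "Skip it")

def tally(reactions: list[str]) -> dict[str, int]:
    """Aggregate raw reaction events into the running community tally."""
    counts = Counter(r for r in reactions if r in REACTIONS)
    return {label: counts.get(label, 0) for label in REACTIONS}

events = ["Go for it", "Go for it", "Skip it", "Timepass", "Go for it"]
print(tally(events))  # {'Go for it': 3, 'Timepass': 1, 'Skip it': 1}
```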

Explicitly Excluded from MVP:

  • User-generated lists, custom reviews, debate threads, editor's picks, notifications, direct messaging, group watches, anime-specific metadata layers.
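The "Watch on [Streamer]" links in the feature spine depend on provider data from TMDB. A hedged sketch of turning that response into the pill shown on a title page; the payload shape below mirrors TMDB's watch-providers endpoint as we understand it and should be verified against the live API docs:

```python
# Hedged sketch: extract subscription ("flatrate") providers for one
# region from a TMDB-style watch-providers payload. The sample payload
# is an assumed, trimmed shape, not a captured API response.

def streaming_pills(payload: dict, region: str = "US") -> list[str]:
    """Return flatrate provider names for one region, empty if unavailable."""
    region_data = payload.get("results", {}).get(region, {})
    return [p["provider_name"] for p in region_data.get("flatrate", [])]

sample = {  # assumed response shape, trimmed for illustration
    "id": 693134,
    "results": {"US": {"flatrate": [{"provider_name": "Max"}]}},
}
print(streaming_pills(sample))        # ['Max']
print(streaming_pills(sample, "GB"))  # [] -> no data for region
```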

Validation Plan

Pre-launch (Week 0-1):

  • Concierge Test: Manual onboarding of 50 target users from film/anime subreddits. We will manually create their profiles, populate watchlists based on a shared Google Sheet, and send them a link to a live prototype (Figma + Airtable backend). Success signal: 70% complete the core "find, react, and locate a streamer" loop in under 8 minutes.
  • Landing Page Test: Launch a "Coming Soon" page with email signup. Drive 5K visits via targeted Reddit/Facebook ads. Success signal: 7% conversion to email signup, indicating intent.

Launch & Measurement (Week 2-5):

  • Soft Launch: Release MVP to first 1,000 users (from waitlist + organic). Track primary success metrics.
  • Weekly Cohort Interviews: Conduct 5 user interviews per week (20 total) to understand barriers to habit formation.
  • A/B Test: Test two homepage variants: (A) Algorithmic "For You" vs. (B) Chronological "Community Pulse." Primary metric: D7 retention.

Validation Gates:

  • Gate 1 (Week 2): 40% of Day 1 users perform a reaction and a watchlist action. If not met, investigate onboarding flow.
  • Gate 2 (Week 4): D14 retention ≥ 50%. If not met, trigger pre-mortem and consider pausing development.
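Both gates are simple ratio checks against cohort counts. A sketch, assuming the counts are exported from Amplitude; the function names are illustrative:

```python
# Validation gate checks. Thresholds come from Gates 1 and 2 above;
# inputs would be cohort counts exported from analytics.

def gate1_passes(day1_users: int, users_with_both_actions: int) -> bool:
    """Gate 1: >= 40% of Day 1 users perform a reaction AND a watchlist action."""
    return users_with_both_actions / day1_users >= 0.40

def gate2_passes(cohort_size: int, retained_d14: int) -> bool:
    """Gate 2: D14 retention >= 50%; failure triggers the pre-mortem."""
    return retained_d14 / cohort_size >= 0.50

print(gate1_passes(1000, 420))  # True  -> proceed
print(gate2_passes(1000, 430))  # False -> trigger pre-mortem
```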

Riskiest Assumptions & Kill Criteria

  1. Assumption: Users value consolidated peer reactions over in-depth, long-form reviews.
    • Risk: High. If false, we lack differentiated value vs. Letterboxd.
    • Validation: Concierge test measures speed vs. satisfaction; post-task survey asks: "Would you trade detailed reviews for this speed?"
  2. Assumption: TMDB/JustWatch APIs provide sufficiently accurate and comprehensive streaming availability for our target regions (US, UK, CA).
    • Risk: Medium. Inaccurate links destroy trust.
    • Validation: Manual audit of 100 popular titles across 3 regions pre-launch. Accuracy must be >95%.
  3. Assumption: A community can form without explicit social features (comments, threads) at launch.
    • Risk: High. The "community" value prop may feel hollow.
    • Validation: Measure if users return to see "Recent Reactions" feed (time spent, scroll depth). D7 retention of users who follow ≥5 others must be >55%.
  4. Assumption: "Mood-based discovery" can be faked via genre/tag filtering initially without a dedicated AI/ML model.
    • Risk: Low. Acceptable MVP simplification.
    • Validation: Track usage of genre filter vs. search. If <20% use filter, de-prioritize advanced mood engine.