PRD · May 11, 2026

SaasNiche

Executive Brief

THE ASK: Build an AI Competitor Gap Analyzer for SaasNiche at an estimated cost of $180K (engineering) + $45K (LLM/data) over 12 weeks.
THE BET: We believe 60% of users validating SaaS ideas will generate a gap report within 3 sessions, reducing manual research by 80%.
THE ROI: 1,200 monthly problem reports (source: internal analytics, May 2024) × 12 months × 60% with existing solutions (assumption — sampling validation planned) × $42 value per report (source: user survey, n=88, avg $35/hr founder time × 1.2 hrs saved) = $362,880/year.
Downside (40% adoption of eligible reports): $362,880 × 40% = $145,152/year (arithmetic check below).
KILL CRITERIA: If <20% of users generate reports by D90 OR accuracy on "key complaints" extraction <85% at scale, pause and reassess.
THIS IS an automated system detecting existing SaaS solutions for validated problems, aggregating Reddit complaints, and outputting structured differentiation angles. THIS IS NOT a market sizing tool, CRM integration, or real-time competitive monitoring system.
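
A quick arithmetic check of THE ROI line above, using only figures already stated in this brief:

# ROI sanity check: all inputs copied from THE ROI line above.
monthly_reports = 1_200        # internal analytics, May 2024
eligible_share = 0.60          # problems with existing solutions (assumption)
value_per_report = 35 * 1.2    # $35/hr founder time × 1.2 hrs saved = $42

upside = monthly_reports * 12 * eligible_share * value_per_report
print(f"Upside:   ${upside:,.0f}/year")        # $362,880/year
print(f"Downside: ${upside * 0.4:,.0f}/year")  # 40% adoption → $145,152/year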

Strategic Context

Competitor Solutions:

  • Similarweb/Crayon: Broad market share tracking (hired for landscape visibility)
  • Manual Research: Founders comb Reddit/forums (hired for cost-free validation)
  • GPT-4 ad-hoc: Users paste threads asking for analysis (hired for unstructured insights)
| Capability | Similarweb | Manual Research | SaasNiche Gap Analyzer |
| --- | --- | --- | --- |
| Identifies competitors | ✅ | ❌ (hit/miss) | ✅ (automated) |
| Extracts user complaints | ❌ | ✅ (laborious) | ✅ (AI-summarized) |
| Generates wedge angles | ❌ | ❌ | ✅ (structured prompts) |
| WHERE WE LOSE | Price ($49+/mo) | Custom depth | ❌ vs ✅ |
Our wedge is speed-to-insight because we integrate complaint extraction and gap analysis into the existing validation workflow at no incremental effort.

Problem Statement

WHO/JTBD: When an indie founder finds a validated problem on SaasNiche, they need to know if existing solutions have unmet needs — so they can position their product to exploit those gaps before coding.
THE GAP: Today, SaasNiche shows problems and solution ideas but lacks competitor context. Founders manually:

  1. Search Reddit for "[Tool] sucks" threads
  2. Compile complaints in spreadsheets
  3. Synthesize differentiation angles
VALIDATION DATA:

| Metric | Measured Baseline |
| --- | --- |
| Manual research time | 3.1 hrs/problem (n=53 surveyed) |
| Missed competitors | 2.1 tools avg (n=27 user tests) |

Cost: 1,200 problems/mo × 3.1 hrs × $35/hr = $130,200/mo ≈ $1.56M/year of recoverable time. Closing this gap means founders launch with validated wedges 4.2× faster (source: prototype test, n=14).

Solution Design

Core Data Model Additions:

from dataclasses import dataclass

@dataclass
class GapReport:
    problem_id: str                  # FK to SaasNiche problem DB
    competitors: list["Competitor"]  # [{name: "ToolX", complaints: [text], missing_features: [text]}]
    differentiation_angle: str       # AI-generated wedge statement

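For completeness, a minimal sketch of the Competitor record referenced above (field names inferred from the inline comment, not a confirmed schema):

@dataclass
class Competitor:
    name: str                     # e.g. "ToolX"
    complaints: list[str]         # raw complaint excerpts pulled from Reddit
    missing_features: list[str]   # features users request that the tool lacks
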
User Flow:

  1. System detects known tools when problem is surfaced (via pre-indexed SaaS directories)
  2. Scrapes Reddit for ≥5 complaints per tool (using verified SaasNiche credentials)
  3. LLM extracts:
    • Top 3 consistent complaints (P0: ≥95% accuracy)
    • Most requested missing features (P1: ≥90% accuracy)
  4. Generates wedge angle: "Build [solution] that [fixes complaint] unlike [ToolX]"
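
A minimal end-to-end sketch of this flow, reusing the GapReport/Competitor records above. The directory_index, fetch_reddit_complaints, and llm_extract helpers (and the problem fields) are illustrative assumptions, not the shipped SaasNiche pipeline:

def generate_gap_report(problem, directory_index, fetch_reddit_complaints, llm_extract):
    # 1. Detect known tools against pre-indexed SaaS directories.
    tools = directory_index.match(problem.description)
    competitors = []
    for tool in tools:
        # 2. Pull Reddit complaints; enforce the ≥5-complaints-per-tool floor.
        complaints = fetch_reddit_complaints(tool.name)
        if len(complaints) < 5:
            continue
        # 3. LLM extraction: top 3 consistent complaints (P0) + missing features (P1).
        extracted = llm_extract(complaints)
        competitors.append(Competitor(tool.name,
                                      extracted["top_complaints"][:3],
                                      extracted["missing_features"]))
    if not competitors:
        return None  # mirrors the US#1 failure mode: no report, hide the button
    # 4. Fill the wedge template from the strongest gap found.
    top = competitors[0]
    angle = f"Build a tool that fixes '{top.complaints[0]}' unlike {top.name}"
    return GapReport(problem.id, competitors, angle)

The mockups below show how the resulting report and the per-competitor drill-down surface in the UI.
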
┌───────────────────────────────────────────────┐
│ Problem: Scheduling across timezones          │
├───────────────────────────────────────────────┤
│ [AI Gap Analyzer]               [Generate]    │
├───────────────────────────────────────────────┤
│ Existing tools: Calendly (12 complaints)      │
│                SavvyCal (9 complaints)        │
│ Key Gaps:                                      │
│ ✔️ No auto-timezone detection for guests       │
│ ✔️ Can't block 30-min slots for team syncs     │
│ Differentiation: "Build for remote teams with │
│ auto-timezone AND flexible slot blocking"     │
└───────────────────────────────────────────────┘
┌───────────────────────────────────────────────┐
│ Competitor: Calendly                          │
├───────────────────────────────────────────────┤
│ Top Complaints (Reddit):                      │
│ ❌ "Guests ignore timezone settings" (8 posts) │
│ ❌ "No buffer between meetings" (6 posts)      │
│ Most Requested:                               │
│ ⭐ Customizable slot granularity               │
└───────────────────────────────────────────────┘

Acceptance Criteria

Phase 1 — MVP (8 weeks)
US#1 — Competitor Detection

  • Given a SaasNiche problem tagged "SaaS"
  • When system finds ≥1 tool in preloaded directories (SaaSHub, StackShare)
  • Then show "Analyze Competitors" button with tool count
  • Failure Mode: If no tools found, hide button → no false positives
  • Validated by QA against 50 known problems

US#2 — Gap Report Generation

  • Given user clicks "Analyze Competitors"
  • When system pulls ≥5 Reddit comments per tool
  • Then show report with:
    • P0: Top 3 complaints (≥95% accuracy vs human labeler)
    • P1: Top requested feature (≥90% accuracy)
    • Differentiation angle draft
  • Validated by UX against 20 user tests
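
A sketch of how these criteria could be pinned down as automated checks, reusing the pipeline sketch from Solution Design; the fixtures (problem_with_no_known_tools, problem_with_known_tools) and helper wiring are placeholders, and the accuracy bars (≥95%/≥90%) still require a human-labeled evaluation set rather than unit tests:

def test_us1_no_tools_found_yields_no_report():
    # US#1 failure mode: zero directory matches → no "Analyze Competitors" button.
    report = generate_gap_report(problem_with_no_known_tools, directory_index,
                                 fetch_reddit_complaints, llm_extract)
    assert report is None  # UI hides the button on an empty match

def test_us2_report_carries_required_fields():
    # US#2: at most the top 3 complaints per competitor; wedge draft is non-empty.
    report = generate_gap_report(problem_with_known_tools, directory_index,
                                 fetch_reddit_complaints, llm_extract)
    for competitor in report.competitors:
        assert 1 <= len(competitor.complaints) <= 3
    assert report.differentiation_angle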

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
| --- | --- |
| Non-Reddit sources | Scraper complexity |
| Competitor filtering | UI overhead |
| Multi-language support | LLM cost 3× |

Phase 1.1 (3 weeks): Competitor exclusion toggle, export as PDF
Phase 1.2 (4 weeks): Custom wedge angle prompts, comparison tables

Success Metrics

Primary Metrics:

| Metric | Baseline | Target (D90) | Kill Threshold | Method |
| --- | --- | --- | --- | --- |
| % problems with gap reports | 0% | 35% | <15% | Mixpanel |
| Manual research time | 3.1 hrs | ≤0.6 hrs | >1.5 hrs | Survey |
| Wedge used in pitch decks | N/A | 20% | <5% | User interview |

Guardrail Metrics:

| Guardrail | Threshold | Action |
| --- | --- | --- |
| False competitor matches | >8% | Pause scraping, retrain model |
| Report generation latency | p95 >12s | Optimize LLM pipeline |
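
One way these guardrails could be evaluated in a scheduled job. The metric keys and the metrics dict are placeholders for whatever the analytics store actually exposes:

GUARDRAILS = [
    # (metric key, breach predicate, action from the table above)
    ("false_competitor_match_rate", lambda v: v > 0.08, "Pause scraping, retrain model"),
    ("report_latency_p95_seconds",  lambda v: v > 12.0, "Optimize LLM pipeline"),
]

def breached_guardrails(metrics: dict) -> list[tuple[str, str]]:
    # Return (metric, action) pairs for every guardrail currently in breach.
    return [(key, action) for key, is_breach, action in GUARDRAILS if is_breach(metrics[key])]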

What We Are NOT Measuring:

  • Total reports generated (gamed by low-quality runs)
  • "Satisfaction" scores (lagging, prone to bias)
  • Raw Reddit comments scraped (doesn’t indicate value)

Risk Register

Risk: Inaccurate Complaint Extraction
Probability: Medium | Impact: High
Mitigation: Rodrigo (ML Lead) implements human review loop for 5% of reports by launch. If accuracy <85% at D30, add rule-based filters.
────────────────────────────────────────────────
Risk: Reddit API Restrictions
Probability: Low | Impact: Critical
Mitigation: Lena (Data Eng) secures Enterprise API tier by 2024-10-01. If blocked, use SaasNiche’s historical cache (coverage: 78% of tools).
────────────────────────────────────────────────
Risk: Legal Exposure (GDPR/CCPA)
Probability: Low | Impact: Critical
Mitigation: Compliance Officer confirms Reddit data is public under Art 85(2) by 2024-09-15. If non-compliant, mask usernames pre-storage.
────────────────────────────────────────────────
Risk: Feature Bloat from Founder Requests
Probability: High | Impact: Medium
Mitigation: PM (Alex) gates net-new capabilities to Phase 2+ unless >40% of users request them.

Kill Criteria (within 90 days):

  1. <20% of users generate reports
  2. Accuracy on "key complaints" extraction <85% at scale
  3. Reddit data coverage <60% of generated reports

Strategic Decisions Made

Decision: Competitor detection scope
Choice: Only tools with ≥5 Reddit mentions in last 24 months
Rationale: Avoid noise from obscure tools. Rejected: Crunchbase data (stale) and unverified GPT-suggested tool lists (hallucination risk).
────────────────────────────────────────────────
Decision: Complaint sourcing
Choice: Reddit-only for MVP (using existing SaasNiche scraper)
Rationale: Trusted source with structured data. Rejected: Adding Twitter/X (API cost/complexity).
────────────────────────────────────────────────
Decision: LLM model
Choice: GPT-4-turbo (vs Claude 3)
Rationale: Higher accuracy on sentiment clustering in tests (92% vs 86%).
────────────────────────────────────────────────
Decision: Differentiation angle ownership
Choice: User-editable pre-launch (with "AI-suggested" label)
Rationale: Founders need to refine positioning. Rejected: Locked output.

Appendix

Before/After Narrative:
Before: Lena (founder) finds "timezone scheduling pain" on SaasNiche. She spends 3 hours searching Reddit, pasting 47 Calendly/SavvyCal complaints into Notion. She misses that 32% mention "no buffer times" as critical. Her MVP launches without this wedge.
After: Lena clicks "Analyze Competitors" on SaasNiche. In 12 seconds, she gets a report highlighting "buffer times" as the #1 gap. She pivots her positioning and wins her first 3 customers with "buffer-aware scheduling".

Pre-Mortem:
"It is 6 months from now and this feature has failed. The 3 most likely reasons are:

  1. Competitor detection missed key tools (e.g., non-English names), causing founders to launch into saturated markets.
  2. Reports were too generic ('improve UX') without actionable wedges, so users reverted to manual research.
  3. Crayon launched a free 'Reddit Gap Scanner' 3 weeks before us, capturing 70% of our target users.
What success looks like: Founders cite SaasNiche gap reports in launch announcements. Support tickets asking "how to find competitors?" drop by 65%. The CEO says: 'This turned our idea radar into a revenue predictor.'"