PRD · May 1, 2026

ImaginedHQ

Executive Brief

Today, product teams and founders waste up to 11 hours manually researching competitors and drafting differentiation briefs for new concepts — a process requiring constant context-switching between search engines, analyst reports, and internal templates. ImaginedHQ's Q3 user analytics show 89% of discovery projects (n=214) skip competitive analysis entirely, increasing product launch failures by 40% (source: PitchBook 2024 Startups Report). This inefficiency directly throttles innovation velocity: teams with incomplete competitive insights see 23% lower feature adoption post-launch (source: INSEAD 2023 SaaS Economics Study).

The business case: 12,000 monthly active discovery teams × $72.50 avg hourly cost × 5.5 recoverable hours per project × 8 projects/year = $38.28M/year in recoverable time (source: MAU from ProductBoard 2024; hourly rate blended from Glassdoor PM/senior-engineer data; project cadence from Amplitude industry benchmarks). If adoption hits 40% of projected: $15.31M/year. This excludes downstream value from differentiation-driven wins, quantified at +$9.2k ARPU for well-positioned competitors (source: Gartner SaaS Pricing Survey, Feb 2024).
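
A quick sanity check of the arithmetic above (inputs exactly as cited; nothing new is assumed):

    # Recoverable-time model from the business case above.
    teams = 12_000            # monthly active discovery teams (ProductBoard 2024)
    hourly_cost = 72.50       # blended hourly cost (Glassdoor PM/senior engineer)
    hours_per_project = 5.5   # recoverable hours per project
    projects_per_year = 8     # project cadence (Amplitude benchmarks)

    total = teams * hourly_cost * hours_per_project * projects_per_year
    print(f"${total / 1e6:.2f}M/year")         # -> $38.28M/year
    print(f"${total * 0.4 / 1e6:.2f}M/year")   # -> $15.31M/year at 40% adoption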

This feature is an API-driven competitive insight engine that produces structured briefs in under 60 seconds. It is not a replacement for human judgment, ongoing market monitoring, or primary customer research — briefs carry explicit "AI-generated" flags requiring human validation.

Strategic Context

Existing competitive tools each solve only part of this problem:

  • PitchBook surfaces funding and market data but requires manual synthesis
  • Crayon tracks competitor updates over time but doesn’t generate positioning briefs
  • Kompyte automates feature gap analysis but lacks one-line positioning

Capability                       | PitchBook                  | Crayon          | Kompyte                      | This Feature
Real-time feature matching       |                            |                 |                              | ✅ (LLM-powered)
Differentiation angle generation |                            |                 |                              | ✅ (unique)
Sub-60-second output             |                            |                 |                              | ✅ (unique)
Where we lose                    | Funding intelligence depth | Change alerting | Feature tracking granularity |

Our wedge is speed-to-differentiation: we collapse a multi-hour workflow to under a minute with AI-synthesized insights.

Problem Statement

WHO / JTBD: When a product lead at a Series A startup initiates discovery for a new feature, they need to quickly grasp the competitive landscape and articulate defensible differentiation — so they can pitch the concept to executives without spending 11+ hours manually compiling data from Crunchbase, G2, and fragmented internal docs.

WHERE IT BREAKS: Today, the PM alt-tabs across 7+ tools, including Google (competitor search), LinkedIn (team intel), Capterra (feature matrices), and Notion (template assembly). Data freshness varies wildly, key differentiators get buried in noise, and 68% of teams default to surface-level "faster/cheaper" claims that fail investor scrutiny (source: ImaginedHQ churn survey, n=91).

WHAT IT COSTS:

Symptom                                             | Frequency           | Time Lost                       | Aggregate Impact
Manual competitive research                         | Per discovery cycle | 11 hrs avg (n=42 time logs)     | 3.2 FTE-months/year per team
Undetected competitor feature overlap               | 1.8×/project        | 22 hrs rework avg               | $1.4M/year in sunk sprint costs
Missed positioning angles due to incomplete analysis| 34% of projects     | 8% lower concept approval rate  | $460K/year in lost opportunity

JTBD statement: "When I start validating a product concept, I want an AI-generated competitive brief with validated market gaps and positioning angles within 60 seconds so I can avoid manual compilation and focus on strategic validation."

Solution Design

Core mechanic: GPT-4 Turbo ingests an unstructured concept description → transforms it into a structured competitive brief via a 4-step analysis pipeline.

Primary flow:

  1. User pastes concept description into generation modal (character limit: 2500)
  2. System runs parallel searches across: Crunchbase (funding), G2 (features), LinkedIn (team moves)
  3. AI clusters competitors by category, ranks top 5 by threat score (recency × funding × feature overlap; scoring sketch after this list)
  4. Outputs structured brief in Notion-like template with sources hyperlinked
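
A minimal sketch of the step-3 threat score, using the 40/30/30 weights recorded under Strategic Decisions Made. Field names, normalization choices, and the 365-day recency window are illustrative assumptions, not the production model:

    # Illustrative threat score: recency (40%) + funding (30%) + feature
    # overlap (30%), per the weights in "Strategic Decisions Made".
    # Field names and normalization are assumptions, not the shipped model.
    from dataclasses import dataclass

    @dataclass
    class Competitor:
        name: str
        days_since_last_release: int  # recency signal
        total_funding_usd: float      # funding signal
        feature_overlap: float        # 0.0-1.0 overlap with the concept

    def threat_score(c: Competitor, max_funding_usd: float) -> float:
        recency = max(0.0, 1.0 - c.days_since_last_release / 365)
        funding = c.total_funding_usd / max_funding_usd if max_funding_usd else 0.0
        return 0.4 * recency + 0.3 * funding + 0.3 * c.feature_overlap

    def rank_top_five(candidates: list[Competitor]) -> list[Competitor]:
        cap = max((c.total_funding_usd for c in candidates), default=0.0)
        ranked = sorted(candidates, key=lambda c: threat_score(c, cap), reverse=True)
        return ranked[:5]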

Design decisions:

  • Auto-link to sources (rejected manual citations → 60s constraint)
  • "Confidence score" per insight (rejected binary flags → transparency)
  • Copy-to-clipboard prioritized over share links (rejected social features → Phase 2)
  • ❌ No editing within modal (explicit boundary → forces Notion integration for collaboration)
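
To make the confidence-score and auto-linking decisions concrete, here is a hypothetical shape for one brief section; every field name is an assumption and the real schema may differ:

    # Hypothetical brief payload illustrating per-insight confidence scores
    # and auto-linked sources (design decisions above). All field names are
    # assumptions.
    brief_section = {
        "concept": "AI-powered Kanban board with prioritization",
        "ai_generated": True,  # explicit flag requiring human validation
        "feature_gaps": [
            {
                "competitor": "Miro",
                "gap": "Auto-prioritization",
                "confidence": 0.87,  # a score rather than a binary flag
                "sources": ["<auto-linked G2 feature page>"],
            },
        ],
    }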

Edge handling:

  • Empty state: "Describe your concept to generate a competitive brief (e.g., 'AI meeting scheduler for remote teams')"
  • No-result state: "No competitors detected — check sources? [View search logs]"
┌───────────────────────────────────────────────────────────┐
│ AI Competitive Brief Generator                            │
│ ───────────────────────────────────────────────────────── │
│ [Paste concept] █AI-powered Kanban board with prioritization│
│ [Generate Brief] ──── [Examples]                          │
└───────────────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────────────┐
│ Competitive Brief: AI Kanban w/ Prioritization            │
├──────────────────┬────────────────────────────────────────┤
│ Top 5 Competitors│ • Miro (95% match) • Trello • Notion   │
│ Feature Gaps     │ ❌ Miro: Auto-prioritization (Conf:87%) │
│                  │ ❌ Trello: Multi-source input (Conf:91%)│
│ Differentiation  │ 1. ML-backed task ranking              │
│ Angles           │ 2. Real-time effort scoring            │ 
│ Positioning      │ "Auto-ranked Kanban that surfaces your │
│                  │ highest-impact work without manual tags"│
│ [Copy Brief] ──── [Save to Notion] ── [Regenerate]        │
└───────────────────────────────────────────────────────────┘

Acceptance Criteria

Phase 1 — MVP (4 weeks)

US#1 — Brief generation from text input

  • Given a valid concept description (15-2500 chars)
  • When user clicks "Generate"
  • Then deliver structured brief with 5 sections in <60s at p99
  • Failure mode: >60s latency → auto-cancel and offer retry
  • Validated by QA (Lena) against 100-sample concept bank
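
A sketch of the US#1 contract under stated assumptions: generate_brief is a stub standing in for the 4-step pipeline, and the timeout/retry behavior mirrors the failure mode above:

    # Sketch of US#1: validate input length, cancel generation past the 60s
    # budget, and offer a retry. Names here are assumptions.
    import asyncio

    LATENCY_BUDGET_S = 60
    MIN_CHARS, MAX_CHARS = 15, 2500

    async def generate_brief(concept: str) -> dict:
        ...  # 4-step pipeline from Solution Design (out of scope here)

    async def generate_with_budget(concept: str) -> dict:
        if not MIN_CHARS <= len(concept) <= MAX_CHARS:
            raise ValueError(f"Concept must be {MIN_CHARS}-{MAX_CHARS} characters")
        try:
            return await asyncio.wait_for(generate_brief(concept), timeout=LATENCY_BUDGET_S)
        except asyncio.TimeoutError:
            # Failure mode from US#1: auto-cancel and let the UI offer a retry.
            return {"status": "timeout", "action": "offer_retry"}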

US#2 — Accuracy guardrails

  • Given 7+ matching competitors exist
  • Then Top 5 list must exclude irrelevant players (e.g., "Figma" for AI kanban)
  • P0: 100% competitor relevance at confidence ≥70% (kill switch if <85% in testing)
  • Validated by Data Science (Raj) against 50 edge cases
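
A minimal sketch of the US#2 relevance guardrail, assuming each candidate carries a relevance confidence in [0, 1]; the thresholds come from the criteria above, the field names are assumptions:

    # US#2 guardrails: drop candidates below the 70% relevance-confidence
    # floor; trip the kill switch if audited relevance falls under 85%.
    CONFIDENCE_FLOOR = 0.70
    KILL_SWITCH_FLOOR = 0.85

    def filter_relevant(candidates: list[dict]) -> list[dict]:
        return [c for c in candidates if c["relevance_confidence"] >= CONFIDENCE_FLOOR]

    def kill_switch_tripped(relevant_in_audit: int, audited_total: int) -> bool:
        return audited_total > 0 and relevant_in_audit / audited_total < KILL_SWITCH_FLOOR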

Out of Scope (Phase 1):

Feature                  | Why Not Phase 1
PDF export               | Requires layout engine; Slack/Notion copy suffices for v1
Multi-concept comparison | Doubles output complexity; deferred to Phase 1.1
Custom template fields   | 80% of validated briefs use standard fields

Phase 1.1 (2 weeks): Notion template sync, brief history
Phase 1.2 (3 weeks): Competitor alert subscriptions

Phasing & Trade-offs

Phase 1 Priority: Speed > Comprehensiveness

  • ✅ Ships value in 4 weeks with fixed template
  • ❌ Defers enterprise PDF needs ($15K ARR risk)

Phase 1.1 Tradeoff:

  • ✅ Captures template customization users (est. 31% MAU)
  • ❌ Delays alert monetization by 3 weeks ($84K forgone)

Success Metrics

Primary Metrics:

Metric                         | Baseline                  | Target (D90)                  | Kill Threshold | Measurement Method
Brief generation time          | Manual: 11 hrs            | ≤60s at p99                   | >120s at D30   | New Relic synthetic monitor
Brief usage rate               | 12% of discovery projects | 50% of projects               | <25% at D60    | "Generate" events / discovery starts
Differentiation angle adoption | 0% (manual only)          | 33% of concepts use AI angles | <15% at D90    | Pitch deck audit sampling
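
The usage-rate measurement is a straight ratio of instrumentation events; a one-function sketch, with event names assumed:

    # Brief usage rate = "Generate" events / discovery starts (per the table).
    def brief_usage_rate(generate_events: int, discovery_starts: int) -> float:
        return generate_events / discovery_starts if discovery_starts else 0.0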

Guardrail Metrics:

Guardrail                | Threshold     | Action if Breached
False competitive claims | >3% of briefs | Disable generation; add human review step
P95 cold-start latency   | >8s           | Trigger capacity audit
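
A small sketch of how the two guardrails could be evaluated on a reporting interval; the thresholds mirror the table, everything else is an assumption:

    # Guardrail evaluation matching the table above. The returned strings
    # label the operational responses; wiring them up is out of scope.
    def breached_guardrails(false_claims: int, total_briefs: int,
                            p95_cold_start_s: float) -> list[str]:
        actions = []
        if total_briefs and false_claims / total_briefs > 0.03:
            actions.append("disable generation; add human review step")
        if p95_cold_start_s > 8.0:
            actions.append("trigger capacity audit")
        return actions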

What We Are NOT Measuring

  • Total briefs generated (vanity; doesn't indicate value)
  • AI confidence scores (leading indicator only)
  • Social shares (Phase 1 lacks sharing)

Non-Functional Requirements

Latency: ≤60s generation time at p99 (cold start)
Accuracy: ≤5% false claims in competitor feature detection
Confidentiality: Input concepts are never stored or used for training (GDPR data-minimisation principle, Art. 5(1)(c))
Scale: Support 50 concurrent generations at launch (200% peak load buffer)
Logging: Full audit trail of AI sources + generation timestamp
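
A sketch combining the scale and logging NFRs under stated assumptions: a 50-slot semaphore caps concurrency, and the audit record stores sources plus a timestamp but, per the confidentiality NFR, never the concept text. log_audit and pipeline are hypothetical names:

    # Scale + logging NFRs. The concept text is deliberately excluded from
    # the audit record per the confidentiality requirement.
    import asyncio
    from datetime import datetime, timezone

    GENERATION_SLOTS = asyncio.Semaphore(50)  # launch-scale concurrency cap

    def log_audit(record: dict) -> None:
        ...  # append-only audit sink (assumed; implementation out of scope)

    async def generate_audited(concept: str, pipeline) -> dict:
        async with GENERATION_SLOTS:
            brief = await pipeline(concept)
        log_audit({
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "sources": brief.get("sources", []),
        })
        return brief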

Risk Register

Risk: AI hallucinates competitor features
Probability: Medium | Impact: High
Mitigation: Embed G2/Crunchbase source links + confidence scores (PM Rohan by launch)
────────────────────────────────────────
Risk: Key competitor missing from briefs
Probability: Low | Impact: High
Mitigation: Manual competitor entry field (Phase 1.2); alert if top-searched player absent (Eng Sheila by D45)
────────────────────────────────────────
Risk: Enterprise clients block AI tools
Probability: Medium | Impact: Medium
Mitigation: Opt-out toggle in org settings (Eng Marcus by D30)
────────────────────────────────────────
Risk: EU AI Act compliance
Probability: High | Impact: High
Mitigation: Legal sign-off on disclaimer language by May 15 (Legal Anika). If blocked, limit EU rollout to tier-1 accounts.

Kill Criteria (D90):

  1. False claim rate of 12%+ in human-audited briefs
  2. Brief usage <25% of discovery projects
  3. Generation latency consistently >90s
  4. Legal blocks global rollout due to the EU AI Act

Open Questions

  1. Should brief history persist indefinitely or auto-delete? (Decision: 90-day retention → balances utility/GDPR)
  2. Legal review status for EU AI Act disclaimer — pending May 15.
  3. Final competitor data source prioritization — awaiting Crayon API contract (June 1)

Strategic Decisions Made

Decision: Competitor data freshness guarantee
Choice Made: 30-day cached data + user-triggered refresh
Rationale: Real-time API calls violate the 60s SLA; manual refresh balances freshness and speed
────────────────────────────────────────
Decision: Handling of unvalidated AI claims
Choice Made: Source links + confidence scores with a "Verify in context" disclaimer
Rationale: Rejected no-disclosure — legal requires an audit trail to prevent misinformation
────────────────────────────────────────
Decision: Output customization depth
Choice Made: Fixed template for MVP (Phase 1) → custom fields in 1.1
Rationale: 80% of briefs use standard fields per user interviews (n=17); custom fields add 3 weeks of dev
────────────────────────────────────────
Decision: Competitor threat scoring
Choice Made: Recency (40%) + funding (30%) + feature overlap (30%)
Rationale: Rejected pure feature-match; the score must also model market momentum
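
A minimal sketch of the freshness decision, assuming each cached competitor record carries a fetched_at timestamp (the field name is an assumption):

    # 30-day cache TTL with user-triggered refresh, per the freshness decision.
    from datetime import datetime, timedelta, timezone

    CACHE_TTL = timedelta(days=30)

    def needs_refetch(fetched_at: datetime, user_requested_refresh: bool) -> bool:
        # A manual refresh bypasses the cache regardless of age; otherwise
        # cached data is served until it is 30 days old.
        age = datetime.now(timezone.utc) - fetched_at
        return user_requested_refresh or age >= CACHE_TTL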

Appendix

PRE-MORTEM

It is 6 months from now and this feature has failed. The 3 most likely reasons are:

  1. Enterprise security teams blocked AI tools: We deferred org-level controls to Phase 1.1, and 7 key accounts disabled the feature by D30.
  2. False claims eroded trust: Hallucinated features caused public corrections by competitors, making briefs unusable for pitches.
  3. Output lacked strategic depth: Users got generic angles faster ("better UX"), making briefs useless for investor reviews.

Success at 6 months looks like: Product leads start discovery in ImaginedHQ specifically to leverage briefs. Founders cite differentiation angles in seed pitches. Competitors start monitoring our output cadence as a market signal.
