PRD · May 11, 2026

LeadVerse

Executive Brief

Founders using LeadVerse spend 60% of their outreach time manually reading social posts to judge buyer intent, missing high-potential leads while drowning in low-signal noise (source: 2024 user survey of 87 indie hackers). This manual filtering costs $20.8M/year in lost founder productivity: 5,200 active users × 3.2 hrs/week wasted × $25/hr blended rate × 50 weeks = $20.8M/year (source: LeadVerse dashboard analytics Jan 2024; founder wage assumption from Y Combinator benchmark). If adoption reaches only 40% of users, the recoverable value is still $8.3M/year.

This feature surfaces an AI-generated 1-10 intent score with actionable signals on every lead card. It is not a fully automated outreach system, lead enrichment tool, or replacement for founder judgment — the core manual review step remains but is accelerated 10x.

Competitive Analysis

Competitors solve intent filtering through manual search operators (Apollo.io) or basic keyword alerts (HubSpot Sales Hub).

| Capability | Apollo.io | HubSpot Sales Hub | LeadVerse AI Scorer |
| --- | --- | --- | --- |
| Automated intent scoring | ❌ (manual filters) | ❌ (static alerts) | ✅ (dynamic 1-10 score) |
| Signal explanation | ❌ | ❌ | ✅ (top 2 reasons) |
| Outreach angle suggestions | ❌ | ❌ | ✅ (post-specific) |
| Real-time social monitoring | ❌ | ❌ | ✅ (Reddit/X) |
| Where we lose | Price (enterprise contracts) | Ecosystem integration | — |

Our wedge is real-time signal decomposition because competitors lack context-aware scoring for unstructured social posts.

Problem Statement

WHO / JTBD: When an indie founder uses LeadVerse to find warm leads on Reddit/X, they need to rapidly identify posts indicating genuine purchase readiness rather than casual discussion — so they can prioritize limited outreach time on high-conversion prospects.

WHERE IT BREAKS: Founders manually scan 50+ posts daily, missing subtle intent signals (e.g., budget mentions buried in comments). False positives waste replies on "venters," while high-intent leads get buried. User quote: "I spent 20 minutes crafting a reply to someone who just wanted to rant — that’s dinner with my kids gone" (Maya R., SaaS founder).

WHAT IT COSTS:

| Metric | Baseline | Source |
| --- | --- | --- |
| Avg. time spent per lead | 42 sec/post | User session recordings (n=1,200) |
| False positive rate | 68% of replies ignored | Outreach reply tracking (n=23K replies) |
| High-intent leads missed | 22% never surfaced in top 50 | CRM cross-check (n=410 deals) |

Value loss: 5,200 users × 15 leads/hr × $25/hr × 22% missed conversion = $429K/year lost deal potential (source: internal conversion funnel + Gartner SMB outreach benchmarks).
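As a sanity check, the value-loss figure can be reproduced directly from the inputs stated above (no new data, just the arithmetic):

```python
# Reproduce the value-loss estimate from the stated inputs.
users = 5_200          # active users
leads_per_hr = 15      # leads reviewed per hour
rate = 25              # blended founder rate, $/hr
missed = 0.22          # share of high-intent leads never surfaced

value_loss = users * leads_per_hr * rate * missed
print(f"${value_loss:,.0f}/year")  # → $429,000/year
```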

Solution Design

Design Decisions:

  1. Decision: Score granularity
    Choice: 1-10 scale over 3-tier (high/med/low)
    Rationale: 3-tier clusters too many mid-range leads; 10-point allows stack ranking. Trade-off: Requires clearer signal definitions.

  2. Decision: Signal transparency
    Choice: Show top 2 drivers (not all)
    Rationale: Prevents information overload; 92% of users only act on 1-2 signals (source: UX study). Trade-off: May obscure edge cases.

  3. Decision: Model scope
    Choice: Focus on text signals (not profile history)
    Rationale: 78% of intent signals are in post content (source: annotated lead corpus). Trade-off: Delays demographic scoring to Phase 2.
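Taken together, the three decisions pin down the shape of a scored lead. A minimal sketch of that payload (field names are illustrative, not a committed schema):

```python
from dataclasses import dataclass

@dataclass
class IntentScore:
    """Scored-lead payload implied by the three design decisions above."""
    score: int            # 1-10 scale (Decision 1: stack-rankable granularity)
    signals: list[str]    # top 2 drivers only (Decision 2: prevent overload)
    suggested_angle: str  # post-specific outreach opener
    source: str           # "reddit" | "x" — text signals only in Phase 1 (Decision 3)

    def __post_init__(self):
        assert 1 <= self.score <= 10, "score must be on the 1-10 scale"
        self.signals = self.signals[:2]  # never surface more than two signals

lead = IntentScore(score=8,
                   signals=["Must fix this week", "Comparing 3 tools", "extra"],
                   suggested_angle="I see you're evaluating options...",
                   source="reddit")
print(lead.score, lead.signals)  # → 8 ['Must fix this week', 'Comparing 3 tools']
```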

UI Flow:

  1. New "Intent Score" column in lead feed
  2. Hover reveals signal breakdown and outreach tip
┌─────────────────────────────────────────┐
│ Lead Feed                               │
├────────┬────────────────────────┬───────┤
│ Source │ Post Excerpt           │   ★   │
├────────┼────────────────────────┼───────┤
│ Reddit │ "Need a tool that..."  │ REPLY │
├────────┴────────────────────────┴───────┤
│ Intent: 8/10                            │
│ Signals:                                │
│  • "Must fix this week"                 │
│  • Comparing 3 tools                    │
│ Suggested angle:                        │
│ "I see you're eval..."                  │
└─────────────────────────────────────────┘

Acceptance Criteria

Phase 1 — MVP (6 weeks)
US#1 — Score Generation

  • Given a new social post
  • When AI model processes text
  • Then score outputs with P0 dimensions:
    • Urgency language detection (100% consistency)
    • Budget mention flag (≥99.5% accuracy)
  • If fails: Manual scoring fallback enabled
  • Validated by Data Science against 500-post gold set
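One way to sketch the US#1 scoring path with its manual fallback. Keyword rules stand in for the real model here, and all names and weights are illustrative assumptions:

```python
import re

# Toy stand-ins for the P0 dimensions: urgency language and budget mentions.
URGENCY = re.compile(r"\b(asap|this week|urgent|deadline|now)\b", re.I)
BUDGET = re.compile(r"\$\s?\d[\d,]*|\bbudget\b", re.I)

def score_post(text: str) -> dict:
    """Score a post on the P0 dimensions; fall back to manual review on failure."""
    try:
        urgency = bool(URGENCY.search(text))
        budget = bool(BUDGET.search(text))
        # toy aggregation: base 3, +4 for urgency language, +3 for a budget mention
        score = 3 + 4 * urgency + 3 * budget
        return {"score": score, "urgency": urgency, "budget": budget, "manual": False}
    except Exception:
        # "If fails: manual scoring fallback enabled" (US#1)
        return {"score": None, "manual": True}

print(score_post("Must fix this week, budget is $500"))  # → score 10, both flags True
```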

US#2 — Lead Card Integration

  • Given a generated intent score
  • When user views lead feed
  • Then card shows score + top 2 signals (P1: ≥95% signal accuracy)
  • If latency >2s: Show skeleton loader
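The 2-second latency rule in US#2 can be sketched as a timeout around the scoring call, with the skeleton state returned to the UI (timings and names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

SCORE_TIMEOUT_S = 2.0  # "If latency >2s: show skeleton loader" (US#2)

def render_lead_card(fetch_score, timeout=SCORE_TIMEOUT_S):
    """Return the score if it arrives in time, else a skeleton placeholder."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch_score)
        try:
            return {"state": "scored", "score": future.result(timeout=timeout)}
        except TimeoutError:
            return {"state": "skeleton"}  # UI shows loader; score fills in later

fast = lambda: 8
slow = lambda: (time.sleep(0.3), 8)[1]
print(render_lead_card(fast))               # → {'state': 'scored', 'score': 8}
print(render_lead_card(slow, timeout=0.05)) # → {'state': 'skeleton'}
```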

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
| --- | --- |
| Historical profile scoring | Requires social API permissions |
| Non-English post support | Language model scope |
| Custom signal weighting | MVP uses fixed weights |

Phase 1.1 (4 weeks): Outreach angle A/B testing
Phase 1.2 (6 weeks): Competitor mention tracking

Success Metrics

Primary Metrics:

| Metric | Baseline | Target (D90) | Kill Threshold | Method |
| --- | --- | --- | --- | --- |
| Time per lead | 42 sec | ≤15 sec | >30 sec | Session replay |
| Conversion rate | 3.2% | ≥7% | <4% | Outreach tracking |
| High-intent coverage | 22% missed | ≤5% missed | >15% missed | Deal attribution |
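A sketch of how the D90 check against the kill thresholds could be automated. The threshold values come from the table above; the function and metric names are illustrative:

```python
# Kill thresholds from the primary metrics table; "gt"/"lt" says which
# direction breaches the threshold.
KILL_THRESHOLDS = {
    "time_per_lead_sec":  (30.0, "gt"),  # >30 sec kills
    "conversion_rate":    (0.04, "lt"),  # <4% kills
    "high_intent_missed": (0.15, "gt"),  # >15% missed kills
}

def breached(metrics: dict) -> list[str]:
    """Return the names of any metrics past their kill threshold."""
    bad = []
    for name, (limit, direction) in KILL_THRESHOLDS.items():
        value = metrics[name]
        if (direction == "gt" and value > limit) or (direction == "lt" and value < limit):
            bad.append(name)
    return bad

d90 = {"time_per_lead_sec": 14.0, "conversion_rate": 0.072, "high_intent_missed": 0.04}
print(breached(d90))  # → [] — all targets met, nothing tripped
```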

Guardrail Metrics:

| Guardrail | Threshold | Action |
| --- | --- | --- |
| False positive rate | <25% | Pause scoring; retrain model |
| Model latency P95 | <800ms | Throttle processing |

What We Are NOT Measuring:

  • Total leads processed (vanity; doesn’t correlate with revenue)
  • Raw score distribution (skew irrelevant without outcome tie)
  • Signal click-through rate (measures curiosity, not action)

Risk Register

Risk 1 — Signal Accuracy Degradation

  • Probability: Medium Impact: High
  • Mitigation: Daily drift monitoring + weekly human eval (Data Science owner; alert on >10% score delta)
  • Trigger: 15% increase in unresponsive leads by D30
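The drift-monitoring mitigation (alert on >10% score delta) could be sketched as a daily comparison of mean scores against a baseline window. The window and names are illustrative:

```python
DRIFT_ALERT_DELTA = 0.10  # alert on >10% score delta (Risk 1 mitigation)

def drift_alert(baseline_scores: list[int], todays_scores: list[int]) -> bool:
    """True if today's mean score drifted more than 10% from the baseline mean."""
    base = sum(baseline_scores) / len(baseline_scores)
    today = sum(todays_scores) / len(todays_scores)
    return abs(today - base) / base > DRIFT_ALERT_DELTA

print(drift_alert([6, 7, 8, 7], [4, 5, 4, 5]))  # → True (mean fell from 7.0 to 4.5)
```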

Risk 2 — Outreach Angle Liability

  • Probability: Low Impact: High
  • Mitigation: Legal review of angle templates pre-launch (Compliance owner; block unethical suggestions)
  • Trigger: User complaint about inappropriate messaging

Risk 3 — Model Cost Overrun

  • Probability: High Impact: Medium
  • Mitigation: Strict cost per lead cap at $0.002 (Engineering owner; degrade to cheaper model if breached)
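The cost-cap mitigation (degrade to a cheaper model once spend exceeds $0.002/lead) might look like the following. Model tiers and per-lead prices are made-up placeholders:

```python
COST_CAP_PER_LEAD = 0.002  # strict cap from the Risk 3 mitigation, $/lead

# Hypothetical model tiers with per-lead costs, best quality first.
MODEL_TIERS = [("full-llm", 0.0035), ("distilled", 0.0015), ("keyword-rules", 0.0001)]

def pick_model(recent_cost_per_lead: float) -> str:
    """Stay on the best model while under the cap; degrade once it is breached."""
    if recent_cost_per_lead <= COST_CAP_PER_LEAD:
        return MODEL_TIERS[0][0]
    # Cap breached: fall back to the first tier that fits under the cap.
    for name, cost in MODEL_TIERS:
        if cost <= COST_CAP_PER_LEAD:
            return name
    return MODEL_TIERS[-1][0]

print(pick_model(0.0018))  # → 'full-llm' (under cap)
print(pick_model(0.0031))  # → 'distilled' (cap breached, degrade)
```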

Kill Criteria (within 90 days):

  1. 50% of users disable scoring feature
  2. Conversion rate below 4% with 95% statistical significance
  3. Model latency causes >10% feed abandonment

Appendix

Before/After Narrative:
Before: Raj spends 45 minutes daily scanning 80+ Reddit posts. He replies to 15, but only 3 respond — one angrily: "I was just ranting! Stop selling!" He misses a post saying "Buying this week if I find a tool" because it was buried.

After: Raj’s lead feed shows an "8/10" score on a post with signals: "Budget: $500 mentioned" and "Timeline: next week". The suggested angle: "I see you’ve set a budget — our Starter plan fits exactly." He replies in 90 seconds. The lead books a demo in 4 hours.

Pre-Mortem:
"It is 6 months from now and this feature has failed. The 3 most likely reasons are:

  1. Founders didn’t trust scores after early false positives from sarcasm/regional slang, but we skipped linguistic validation.
  2. Outreach angles felt generic because we used templated snippets instead of post-specific generation.
  3. Competitor X launched identical scoring 3 weeks earlier with 20% higher accuracy due to their proprietary social graph.

Success looks like: Founders report "I reclaim 2 hours/day for real work." Sales teams stop complaining about "noise leads." The CEO cites intent scoring as the #1 reason for 30% outreach efficiency gains in Q4 earnings."
