Founders using LeadVerse spend 60% of their outreach time manually reading social posts to judge buyer intent, missing high-potential leads while drowning in low-signal noise (source: 2024 user survey of 87 indie hackers). This manual filtering costs $20.8M/year in lost founder productivity: 5,200 active users × 3.2 hrs/week wasted × $25/hr blended rate × 50 weeks = $20.8M/year (source: LeadVerse dashboard analytics, Jan 2024; founder wage assumption from Y Combinator benchmark). If adoption reaches only 40% of users, the recoverable loss is still $8.3M/year.
This feature surfaces an AI-generated 1-10 intent score with actionable signals on every lead card. It is not a fully automated outreach system, lead enrichment tool, or replacement for founder judgment — the core manual review step remains but is accelerated 10x.
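The lead-card output described above can be sketched as a small data shape. `IntentScore` and its field names are illustrative assumptions, not the shipped schema; the constraints (1-10 score, at most two surfaced signals) come from the spec:

```python
from dataclasses import dataclass

@dataclass
class IntentScore:
    """Hypothetical shape of the per-lead scoring output (assumed names)."""
    score: int               # 1-10 intent score shown on the lead card
    signals: list[str]       # top 2 human-readable drivers of the score
    suggested_angle: str     # post-specific opener the founder can adapt

    def __post_init__(self) -> None:
        if not 1 <= self.score <= 10:
            raise ValueError("intent score must be between 1 and 10")
        if len(self.signals) > 2:
            raise ValueError("surface at most 2 signals to avoid overload")

card = IntentScore(score=8,
                   signals=['"Must fix this week"', "Comparing 3 tools"],
                   suggested_angle="I see you're evaluating options...")
```

The validation in `__post_init__` encodes the product constraints directly, so any upstream model change that violates them fails loudly rather than rendering a confusing card.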
Competitors solve intent filtering through manual search operators (Apollo.io) or basic keyword alerts (HubSpot Sales Hub).
| Capability | Apollo.io | HubSpot Sales Hub | LeadVerse AI Scorer |
|---|---|---|---|
| Automated intent scoring | ❌ (manual filters) | ❌ (static alerts) | ✅ (dynamic 1-10 score) |
| Signal explanation | ❌ | ❌ | ✅ (top 2 reasons) |
| Outreach angle suggestions | ❌ | ❌ | ✅ (post-specific) |
| Real-time social monitoring | ❌ | ❌ | ✅ (Reddit/X) |
| WHERE WE LOSE | Price (enterprise contracts) | Ecosystem integration | — |
Our wedge is real-time signal decomposition because competitors lack context-aware scoring for unstructured social posts.
WHO / JTBD: When an indie founder uses LeadVerse to find warm leads on Reddit/X, they need to rapidly identify posts indicating genuine purchase readiness rather than casual discussion — so they can prioritize limited outreach time on high-conversion prospects.
WHERE IT BREAKS: Founders manually scan 50+ posts daily, missing subtle intent signals (e.g., budget mentions buried in comments). False positives waste replies on "venters," while high-intent leads get buried. User quote: "I spent 20 minutes crafting a reply to someone who just wanted to rant — that’s dinner with my kids gone" (Maya R., SaaS founder).
WHAT IT COSTS:
| Metric | Baseline | Source |
|---|---|---|
| Avg. time spent per lead | 42 sec/post | User session recordings (n=1,200) |
| False positive rate | 68% of replies ignored | Outreach reply tracking (n=23K replies) |
| High-intent leads missed | 22% never surfaced in top 50 | CRM cross-check (n=410 deals) |
Value loss: 5,200 users × 15 leads/hr × $25/hr × 22% missed conversion = $429K/year in lost deal potential (source: internal conversion funnel + Gartner SMB outreach benchmarks).
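The value-loss line is a straight product; a quick check using the document's own inputs:

```python
# Straight product check of the value-loss estimate; all inputs are the
# document's own figures.
users = 5_200          # active users
leads_per_hour = 15    # leads triaged per founder-hour
blended_rate = 25      # $/hr blended founder rate
missed_share = 0.22    # high-intent leads never surfaced in top 50

annual_loss = users * leads_per_hour * blended_rate * missed_share
print(f"${annual_loss:,.0f}/year")  # → $429,000/year
```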
Design Decisions:
| Decision | Choice | Rationale | Trade-off |
|---|---|---|---|
| Score granularity | 1-10 scale over 3-tier (high/med/low) | 3-tier clusters too many mid-range leads; 10-point allows stack ranking | Requires clearer signal definitions |
| Signal transparency | Show top 2 drivers (not all) | Prevents information overload; 92% of users act on only 1-2 signals (source: UX study) | May obscure edge cases |
| Model scope | Text signals only (not profile history) | 78% of intent signals are in post content (source: annotated lead corpus) | Delays demographic scoring to Phase 2 |
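The three decisions above (text-only signals, fixed weights, 1-10 scale, top-2 transparency) combine into a scorer that can be sketched as follows. The signal patterns and weights here are illustrative assumptions, not the production model:

```python
import re

# Illustrative fixed-weight text signals (Phase 1: fixed weights, text-only).
# Patterns and weights are assumptions for the sketch, not the shipped model.
SIGNALS = {
    "urgency":    (re.compile(r"\b(this week|asap|urgent|deadline)\b", re.I), 3),
    "budget":     (re.compile(r"\$\d+|\bbudget\b", re.I), 3),
    "comparison": (re.compile(r"\b(vs|versus|comparing|alternatives?)\b", re.I), 2),
    "ask":        (re.compile(r"\b(recommend|need a tool|looking for)\b", re.I), 2),
}

def score_post(text: str) -> tuple[int, list[str]]:
    """Return a 1-10 intent score plus the top 2 matched signal names."""
    hits = [(name, weight) for name, (pattern, weight) in SIGNALS.items()
            if pattern.search(text)]
    total = 1 + sum(weight for _, weight in hits)   # floor of 1
    top_two = [name for name, _ in sorted(hits, key=lambda h: -h[1])[:2]]
    return min(total, 10), top_two                  # cap at 10

score, signals = score_post("Need a tool that fixes this ASAP, budget is $500")
```

Capping at 10 and flooring at 1 keeps every post rankable on the same scale, and returning only the two heaviest signals mirrors the transparency decision.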
UI Flow:
┌───────────────────────────────────────┐
│ Lead Feed                             │
├────────┬───────────────────────┬──────┤
│ Source │ Post excerpt          │  ★   │
│ Reddit │ "Need a tool that..." │      │
├────────┴───────────────────────┼──────┤
│ Intent: 8/10                   │REPLY │
│ Signals:                       │      │
│ • "Must fix this week"         │      │
│ • Comparing 3 tools            │      │
│ Suggested angle:               │      │
│ "I see you're eval..."         │      │
└────────────────────────────────┴──────┘
Phase 1 — MVP (6 weeks)
US#1 — Score Generation
US#2 — Lead Card Integration
Out of Scope (Phase 1):
| Feature | Why Not Phase 1 |
|---|---|
| Historical profile scoring | Requires social API permissions |
| Non-English post support | Language model scope |
| Custom signal weighting | MVP uses fixed weights |
Phase 1.1 (4 weeks): Outreach angle A/B testing
Phase 1.2 (6 weeks): Competitor mention tracking
Primary Metrics:
| Metric | Baseline | Target (D90) | Kill Threshold | Method |
|---|---|---|---|---|
| Time per lead | 42 sec | ≤15 sec | >30 sec | Session replay |
| Conversion rate | 3.2% | ≥7% | <4% | Outreach tracking |
| High-intent coverage | 22% missed | ≤5% missed | >15% missed | Deal attribution |
Guardrail Metrics:
| Guardrail | Threshold | Action |
|---|---|---|
| False positive rate | <25% | Pause scoring; retrain model |
| Model latency P95 | <800ms | Throttle processing |
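The guardrail table maps each breached threshold to an action; a minimal sketch, assuming a hypothetical `check_guardrails` helper (not an existing LeadVerse API):

```python
def check_guardrails(false_positive_rate: float,
                     latency_p95_ms: float) -> list[str]:
    """Return the remediation actions for any breached guardrail."""
    actions = []
    if false_positive_rate >= 0.25:   # guardrail: FP rate must stay <25%
        actions.append("pause scoring; retrain model")
    if latency_p95_ms >= 800:         # guardrail: P95 latency must stay <800ms
        actions.append("throttle processing")
    return actions
```

Running this on each metrics window keeps the response mechanical: a healthy window returns an empty list, and each breach returns exactly the action the table prescribes.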
What We Are NOT Measuring:
Risk 1 — Signal Accuracy Degradation
Risk 2 — Outreach Angle Liability
Risk 3 — Model Cost Overrun
Kill Criteria (within 90 days):
≥50% of users disable the scoring feature
Before/After Narrative:
Before: Raj spends 45 minutes daily scanning 80+ Reddit posts. He replies to 15, but only 3 respond — one angrily: "I was just ranting! Stop selling!" He misses a post saying "Buying this week if I find a tool" because it was buried.
After: Raj’s lead feed shows an "8/10" score on a post with signals: "Budget: $500 mentioned" and "Timeline: next week". The suggested angle: "I see you’ve set a budget — our Starter plan fits exactly." He replies in 90 seconds. The lead books a demo in 4 hours.
Pre-Mortem:
"It is 6 months from now and this feature has failed. The 3 most likely reasons are:"
Success looks like: Founders report "I reclaim 2 hours/day for real work." Sales teams stop complaining about "noise leads." The CEO cites intent scoring as the #1 reason for 30% outreach-efficiency gains in Q4 earnings.