PRD · April 28, 2026

Mt7

Executive Brief

We believe Seed to Series B founders booking Mt7 consultations waste 22 minutes per call on basic discovery (source: call recordings analysis, n=47). This inefficiency stems from zero pre-call qualification, forcing sales teams to gather operational context live. The business case: 120 monthly consultations × 22 min saved × $90 blended sales/hr × 12 months = $47,520/year recoverable sales capacity (source: sales ops data, HR comp bands). If adoption hits 40% of consultations: $19,008/year. This excludes lead quality upside from 23% projected conversion lift (source: analogous Gong implementation).

This feature IS an 8-question diagnostic generating an Ops Chaos Score with bottleneck analysis and tier recommendation. It is NOT a full technical assessment, CRM integration, or automated proposal engine.

The hypothesis: Founders completing the diagnostic will book 18% more qualified consultations (validated when D90 conversion rate exceeds 32% vs current 27% baseline). Testing costs 6 engineering-weeks by reusing existing scoring logic from client onboarding. If D14 completion rates fall below 40%, we kill further investment.

Competitive Analysis

| Capability | HubSpot Lead Scoring | Gong Call Insights | This Feature |
| --- | --- | --- | --- |
| Automated lead qualification | ✅ (rules-based) | ✅ (AI-driven) | |
| Pre-call context generation | | ✅ (transcript analysis) | ✅ (structured input) |
| Instant user value delivery | | | ✅ (personalized score) |
| WHERE WE LOSE | Ecosystem integration | Sales team adoption | ❌ vs ✅ |

Our wedge is immediate founder value because competitors focus on sales efficiency, not user takeaways.

Problem Statement

WHO/JTBD: When a founder considers Mt7 services, they want to quickly assess their operational maturity to determine fit before committing to a call.

FAILURE MODE:

  • Trigger: Founder clicks "Book Consultation" without context
  • Detection: Sales rep spends first 22 minutes gathering agency count, coordination hours, pain points
  • Impact: 37% of calls disqualify as poor fit (source: Salesforce, Q1), wasting recoverable sales time each month (quantified under Cost below)
  • Frequency: 120 consultations/month
  • Cost:

    | Metric | Baseline |
    | --- | --- |
    | Avg call disqualification rate | 37% (n=120/mo) |
    | Sales time wasted per disqualified call | 34 min (n=18 calls) |

    Aggregate: 120 calls × 37% disqualify × 34 min × $90/hr ÷ 60 ≈ $2,264/month recoverable
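The aggregate figures above (and the annual capacity number from the Executive Brief) can be sanity-checked with a throwaway calculation; all constants are the ones quoted in this document:

```python
# Sanity-check the recoverable-cost math using the figures quoted above.
CALLS_PER_MONTH = 120
DISQUALIFY_RATE = 0.37          # 37% of calls disqualify as poor fit
WASTED_MIN_PER_CALL = 34        # sales time wasted per disqualified call
DISCOVERY_MIN_PER_CALL = 22     # pre-call discovery minutes (Executive Brief)
BLENDED_RATE_PER_HOUR = 90      # blended sales cost, $/hr

# Monthly recoverable cost from disqualified calls
wasted_hours = CALLS_PER_MONTH * DISQUALIFY_RATE * WASTED_MIN_PER_CALL / 60
monthly_recoverable = wasted_hours * BLENDED_RATE_PER_HOUR
print(round(monthly_recoverable))   # 2264

# Annual recoverable sales capacity across all consultations
annual_capacity = CALLS_PER_MONTH * DISCOVERY_MIN_PER_CALL / 60 \
    * BLENDED_RATE_PER_HOUR * 12
print(round(annual_capacity))       # 47520
```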

JTBD: "When I evaluate ops tools, I want immediate feedback on my biggest bottlenecks to determine if this vendor understands my pain."

Solution Design

Phase 1 (MVP): Web-based diagnostic with:

  1. Landing page: "Find your ops chaos in 90 seconds"
  2. Question flow: Agency count, weekly coordination hours, pain point selection, team size, budget, industries
  3. Score generation: Algorithm outputs 1-100 chaos score + top bottleneck highlight
  4. Tier match: Maps score to Starter/Growth/Enterprise
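The score-generation and tier-match steps above can be sketched as follows. This is a hypothetical illustration only: the weights, answer normalization, and tier cut-offs are placeholders, not the production algorithm (Phase 1 reuses scoring logic from client onboarding):

```python
# Hypothetical sketch of the chaos-score + tier-mapping flow.
# Weights and cut-offs are illustrative placeholders, not the real algorithm.

# Each answer is assumed pre-normalized to 0.0-1.0 (e.g. "5+ agencies" -> 1.0).
WEIGHTS = {
    "agency_count": 0.30,
    "coordination_hours": 0.25,
    "pain_points": 0.20,
    "team_size": 0.10,
    "budget": 0.10,
    "industries": 0.05,
}

def chaos_score(answers: dict) -> int:
    """Weighted sum of normalized answers, scaled to a 1-100 score."""
    raw = sum(WEIGHTS[k] * answers[k] for k in WEIGHTS)
    return max(1, round(raw * 100))

def top_bottleneck(answers: dict) -> str:
    """The question contributing the most weighted points to the score."""
    return max(WEIGHTS, key=lambda k: WEIGHTS[k] * answers[k])

def tier(score: int) -> str:
    """Map score to Starter/Growth/Enterprise (cut-offs are placeholders)."""
    if score < 40:
        return "Starter"
    if score < 75:
        return "Growth"
    return "Enterprise"

answers = {"agency_count": 1.0, "coordination_hours": 0.8, "pain_points": 0.6,
           "team_size": 0.4, "budget": 0.5, "industries": 0.2}
score = chaos_score(answers)
print(score, top_bottleneck(answers), tier(score))  # 72 agency_count Growth
```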

Wireframe 1: Question Flow

┌───────────────────────────────────────────────┐
│ Ops Health Check (3/8)                        │
├───────────────────────────────────────────────┤
│ How many agencies do you coordinate weekly?   │
│ ○ 1-2      ○ 3-5      ○ 6+                    │
│                                               │
│ [← Back]                      [Next →]        │
└───────────────────────────────────────────────┘

Wireframe 2: Results Screen

┌───────────────────────────────────────────────┐
│ Your Ops Chaos Score: 74/100                  │
├───────────────────────────────────────────────┤
│ 🔴 TOP BOTTLENECK: Agency misalignment        │
│ - 68% higher miscommunication than peers      │
│ - Recommended: Mt7 Growth tier                │
│                                               │
│ [See details]              [Book consultation]│
└───────────────────────────────────────────────┘

Assumptions:

| Assumption | Status |
| --- | --- |
| Scoring algorithm handles 100k requests/month | ⚠ Unvalidated — load test by Eng by 2026-06-20 |
| Tier mapping logic covers 95% of cases | ⚠ Unvalidated — validate against 50 client histories by 2026-06-15 |

Acceptance Criteria

Phase 1 — MVP (4 weeks):
US#1 Diagnostic Flow

  • Given unauthenticated website visitor
  • When completing all 8 questions
  • Then display chaos score + top bottleneck within 3s p95 latency
  • Failure: If score fails, show "Try again" with preserved inputs

US#2 Lead Capture

  • Given score generated
  • When clicking "Book consultation"
  • Then prefill Salesforce lead fields: Score, Tier, Bottleneck
  • Validated by Sales Ops against 20 test cases
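The US#2 prefill can be sketched as a simple payload mapping. The Salesforce field API names below (`Chaos_Score__c`, etc.) are placeholders; the real field mapping is owned by Sales Ops:

```python
# Hypothetical sketch of the US#2 lead-prefill payload.
# Field API names are placeholders, not the actual Salesforce schema.
from urllib.parse import urlencode

def lead_prefill(score: int, tier: str, bottleneck: str) -> dict:
    """Map diagnostic output to the three prefilled lead fields."""
    return {
        "Chaos_Score__c": score,        # 1-100 diagnostic score
        "Recommended_Tier__c": tier,    # Starter / Growth / Enterprise
        "Top_Bottleneck__c": bottleneck,
    }

params = lead_prefill(74, "Growth", "Agency misalignment")
# Placeholder booking URL; real endpoint TBD with Sales Ops.
booking_url = "https://example.com/book?" + urlencode(params)
print(booking_url)
```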

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
| --- | --- |
| Multi-language support | <5% non-English traffic |
| Historical score tracking | Requires auth system |
| Competitive benchmark | Needs peer dataset |

Success Metrics

Primary Metrics:

| Metric | Baseline | Target (D90) | Kill Threshold | Measurement |
| --- | --- | --- | --- | --- |
| Consultation conversion rate | 27% | 32% | <28% at D30 | Salesforce |
| Diagnostic completion rate | 0% | 65% | <40% at D45 | GA4 |

Guardrail Metrics:

| Metric | Threshold | Action |
| --- | --- | --- |
| Sales call duration | ≥22 min avg | Pause feature if it drops below 18 min |

Not Measured:

  • Total diagnostics run (vanity; focus on completion rate)
  • Score accuracy (validated via lead quality)
  • Page views (doesn't correlate to value)

Risk Register

Risk 1: Low Diagnostic Completion

  • Probability: Medium | Impact: High
  • Trigger: D14 completion <40%
  • Mitigation: Add progress bar + time estimate; reduce to 6 core questions (Owner: PM by launch)

Risk 2: Score Inaccuracy

  • Probability: Low | Impact: High
  • Trigger: >15% support tickets on scoring
  • Mitigation: Shadow run vs manual assessments for first 100 users (Owner: Data Sci by D7)

Kill Criteria (90 days):

  1. Conversion rate <28% with >100 completed diagnostics
  2. Sales team manually re-asks >70% of diagnostic questions
  3. 3+ critical bugs in score calculation

Strategic Decisions Made

Decision: Diagnostic complexity
Choice: 8 questions max (down from 12)
Rationale: Time-to-complete trumps completeness; 90s target vs 45% drop-off at 12q (source: Hotjar)
────────────────────────────────────────────────
Decision: Data persistence
Choice: Session storage only (no DB)
Rationale: Avoid GDPR complexity; scores live only for booking flow
────────────────────────────────────────────────
Decision: Tier mapping ownership
Choice: Product owns algorithm updates (not Sales)
Rationale: Prevent incentive misalignment; validated against churn data

Appendix

Before/After:
Before: Alex (Series A founder) books Mt7 call. Rep asks about agency count → budget → pain points. At 18 minutes, they realize Alex needs basic tools, not full ops overhaul. Call ends with no follow-up.

After: Alex completes diagnostic pre-call. Sees "Chaos Score: 68/100 → Starter Tier". Rep opens call: "Your report shows creative-brief gaps cause 80% of delays. Our Starter tier fixes this in 2 weeks." Call converts in 11 minutes.

Pre-Mortem:
It is 6 months from now and this feature has failed. The 3 most likely reasons are:

  1. Founders abandoned at Q5 due to unclear time commitment (no progress indicator)
  2. Sales ignored recommendations because tier mapping didn't match commission structure
  3. Algorithm used untested weightings, causing embarrassing mismatches (e.g., $20M company → Starter)

Success looks like: Sales starts calls with "I reviewed your chaos report" instead of "What agencies do you use?". Founders share scores on Twitter. Support tickets for basic discovery drop 70%.
