PRD · April 20, 2026

F22 Labs

Executive Brief

The world after F22 Labs ships: doctors cut consultation time from 15 to 10 minutes using AI-generated pre-diagnosis summaries, patients book same-day telemedicine visits in under 2 minutes, and the platform captures $1.25M in annual revenue by scaling doctor throughput 50%. Today, patients scheduling telemedicine repeat symptoms across fragmented intake forms, wait 2.3 days for appointments (source: 2024 Healthcare Wait Time Report), and doctors spend 30% of consultation time on basic intake instead of diagnosis, an inefficiency that costs clinics $68 per missed slot (source: 2023 MGMA cost survey).

The business case: 10,000 target patients (assumption — validate via patient acquisition cost pilot) × 2.5 consultations/patient/year (source: 2024 AMA telemedicine utilization study) × $50 revenue/consultation (source: 2023 FAIR Health average telemedicine fee) = $1.25M/year recoverable revenue. If adoption is 40% of estimate: $500K/year. This exceeds the 8-week build cost of $114K (source: Regional Cost Benchmarks for India-based team: 4 engineers × $8K/month × 2 months = $64K, plus $50K AWS/HIPAA services).
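The revenue math above can be sanity-checked in a few lines (a sketch; the patient count, utilization rate, and fee are the sourced estimates from this section):

```python
# Business-case sanity check using the sourced estimates above.
patients = 10_000            # target patients (assumption, pending pilot)
consults_per_year = 2.5      # 2024 AMA telemedicine utilization study
revenue_per_consult = 50     # 2023 FAIR Health average fee (USD)

annual_revenue = patients * consults_per_year * revenue_per_consult
downside = annual_revenue * 0.40   # if adoption is 40% of estimate

print(f"Base case: ${annual_revenue:,.0f}/year")   # $1,250,000/year
print(f"40% adoption: ${downside:,.0f}/year")      # $500,000/year
```

Even the 40% downside case clears the 8-week build cost, which is the point of the sensitivity line.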

This is an AI-assisted teleconsultation platform that reduces doctor time per consult via pre-diagnosis summaries and one-click video visits. It is not a full EHR system, chronic disease management platform, or insurance billing engine—patients pay out-of-pocket for MVP.

Success Metrics

Primary Metrics:

| Metric | Baseline | Target | Kill Threshold | Measurement Method |
| --- | --- | --- | --- | --- |
| Avg consultation time | 15 min | ≤10 min | >12 min at D90 | Call duration logs |
| Patient booking time | 12 min | ≤2 min | >5 min at D90 | Mixpanel workflow |
| Doctor satisfaction | N/A | ≥80/100 | <60 at D90 | Post-call survey |
| AI summary accuracy | N/A | ≥95% | <80% at D90 | Doctor audit |

Guardrail Metrics (must NOT degrade):

| Guardrail | Threshold | Action if Breached |
| --- | --- | --- |
| Video call drop rate | <1% | Pause rollout, fix WebRTC |
| Patient no-show rate | ≤15% (industry baseline) | Revise booking reminders |
| HIPAA audit failures | 0 | Immediate legal review |

What We Are NOT Measuring:

  • Number of app downloads (vanity—doesn’t indicate engagement)
  • Total revenue per doctor (confounds with pricing changes)
  • Social media mentions (lags behind product-market fit)

Competitive Context

Teladoc solves immediate doctor access via broad insurance networks for urgent care visits. Amwell solves integrated health system telemedicine for scheduled specialty follow-ups. Doctor on Demand solves on-demand mental health and primary care with a focus on convenience.

| Capability | Teladoc | Amwell | F22 Labs |
| --- | --- | --- | --- |
| Symptom intake form | | | ✅ (AI-enhanced) |
| Video consultation | | | ✅ (WebRTC native) |
| AI pre-diagnosis summary | | | ✅ (unique) |
| E-prescription | | | ❌ (Phase 1.1) |
| Insurance integration | | | ❌ (Phase 2) |
| Where we lose | ✅ | | ❌ (price & network depth: Teladoc has 50+ payer contracts we lack) |

Our wedge is AI pre-diagnosis summaries because they cut doctor consult time by 30%, allowing clinics to increase patient volume without adding staff.

Core Hypothesis

The core hypothesis: AI-generated pre-diagnosis summaries from patient symptom intake will reduce average consultation time from 15 minutes to 10 minutes, increasing doctor throughput by 50% and improving patient satisfaction scores by 20 points (on a 100-point scale). We test this by measuring consult time delta in a pilot with 10 doctors and 100 patients.

| Metric | Measured Baseline |
| --- | --- |
| Symptom intake time | 12 minutes avg (n=100 patient surveys, 2024) |
| Doctor consultation time | 15 minutes avg (source: 2023 JAMA study) |
| Patient wait time for appointment | 2.3 days avg (source: 2024 Healthcare Wait Time Report) |

Business case math: 10 doctors × 2 extra patients/day × $50/consultation × 220 days = $220K/year additional revenue. If hypothesis holds, we scale to 50 doctors in Year 1.

Before/After Narrative: Before: Sarah, a 35-year-old with sinus pain, spends 12 minutes filling out a clinic’s PDF intake form, waits 2 days for an appointment, and during the 15-minute video call, repeats her symptoms while the doctor types notes, leaving only 8 minutes for diagnosis. After: Sarah opens the F22 Labs app, describes her symptoms in 4 minutes via structured form, books a same-day slot, and the doctor reviews an AI summary highlighting "likely bacterial sinusitis" before the call, enabling a 10-minute consult focused on treatment.

Minimum Feature Set

Must have (P0): Patient onboarding & symptom intake form (mobile), HIPAA-compliant data handling (AWS encryption + BAA), video consultation (WebRTC).

Should have (P1): Doctor availability calendar & appointment booking (web dashboard).

Could have (P2): AI pre-diagnosis summary for doctor before call (rule-based v1).

Won’t have (Phase 2+): Post-consult notes + e-prescription generator, insurance billing, multi-language support.

ASCII Wireframe Screens:

┌─────────────────────────────────────────────────────────────────┐
│ Symptom Intake                                                [X]│
├─────────────────────────────────────────────────────────────────┤
│ What brings you in today?                                      │
│ [Fever, cough, headache — type here                          ] │
│                                                                │
│ Duration: [✓] hours [ ] days [ ] weeks                         │
│                                                                │
│ Severity (1-10): │■■■■■■□□□□│ 6                               │
│                                                                │
│ Past medical history: [Diabetes, asthma — optional           ] │
│                                                                │
│ [Continue to Booking]                                          │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Doctor Dashboard                            [Refresh] [Logout] │
├─────────────────────────────────────────────────────────────────┤
│ Today, Apr 15 — 3 appointments                                │
├─────────────────────────────────────────────────────────────────┤
│ 10:30 AM │ John D. │ 42 M                                    │
│ Symptoms: Fever 101°F, cough 3 days                          │
│ AI Summary: High probability viral URI — consider OTC relief │
│ [Start Call]                                   [Chart Notes] │
├─────────────────────────────────────────────────────────────────┤
│ 11:00 AM │ Jane S. │ 30 F                                    │
│ Symptoms: Back pain, acute onset                             │
│ AI Summary: Flag for musculoskeletal strain vs. kidney issue │
│ [Start Call]                                   [Chart Notes] │
└─────────────────────────────────────────────────────────────────┘

Strategic Decisions Log:

Decision: Video conferencing stack
Choice Made: Native WebRTC implementation over Twilio/Vonage
Rationale: Avoid per-minute fees ($0.003/min) estimated at $15K/year at scale; rejected third-party SDKs due to cost and latency control.

Decision: HIPAA compliance architecture
Choice Made: AWS HIPAA-eligible services (S3, EC2) with signed BAA
Rationale: Leverage AWS’s pre-certified infrastructure; rejected self-hosted HITRUST certification due to 6-month timeline and $200K cost.

Decision: AI summary approach for MVP
Choice Made: Rule-based symptom-to-condition mapping vs. ML model
Rationale: Faster iteration and 85% accuracy target; rejected deep learning due to data scarcity and 4-week training delay.

Decision: Patient identity verification
Choice Made: Email/SMS OTP only, no government ID scan
Rationale: Reduce signup friction; rejected KYC strictness for Phase 1 as e-prescriptions are out of scope.
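The rule-based symptom-to-condition mapping chosen for the MVP can be sketched as follows (the rules, condition names, and overlap scoring here are illustrative placeholders, not the clinical rule set):

```python
# Illustrative rule-based symptom -> condition scoring (placeholder rules only).
RULES = {
    "viral URI": {"fever", "cough", "sore throat"},
    "bacterial sinusitis": {"sinus pain", "fever", "congestion"},
    "musculoskeletal strain": {"back pain", "acute onset"},
}

def top_conditions(symptoms, k=3):
    """Score each condition by fraction of its keywords present; return top-k nonzero matches."""
    scores = {
        cond: len(keywords & symptoms) / len(keywords)
        for cond, keywords in RULES.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(cond, score) for cond, score in ranked if score > 0][:k]

print(top_conditions({"fever", "cough"}))  # "viral URI" ranks first
```

A table-driven engine like this is why the rationale cites faster iteration: adding a condition is a data change, not a model retraining cycle.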

Validation Plan

Phase 1 — MVP: 8 weeks

US#1 — Patient symptom intake

  • Given a new patient with acute symptoms
  • When they complete the mobile intake form
  • Then they receive booking options within 2 minutes with 100% consistency (zero tolerance; launch-blocking). If this story fails, patient drop-off exceeds 70% due to form complexity. Validated by QA lead (Priya) against 50 patient test cases.

US#2 — Doctor video consultation

  • Given a doctor with scheduled appointments
  • When they click "Start Call" from dashboard
  • Then the WebRTC connection establishes within 5 seconds with ≥99.5% reliability and p95 latency <150 ms. If this story fails, the consultation fails, violating HIPAA continuity of care. Validated by DevOps (Arun) against 100 simulated calls.
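The reliability and p95 latency gates for US#2 can be evaluated from the simulated-call results with a sketch like this (the log record shape is an assumption):

```python
# Evaluate the US#2 acceptance gates from simulated call results.
# Each record: (connected_within_5s: bool, latency_ms: float) -- assumed log shape.
def us2_gates(calls):
    """True if >=99.5% of calls connect within 5s and p95 latency is under 150 ms."""
    connected = sum(1 for ok, _ in calls if ok)
    reliability = connected / len(calls)
    latencies = sorted(lat for _, lat in calls)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]  # nearest-rank approximation
    return reliability >= 0.995 and p95 < 150

calls = [(True, 80.0)] * 199 + [(True, 140.0)]
print(us2_gates(calls))  # True
```

A single dropped call in a 100-call run already breaches the 99.5% gate, which is what makes this story launch-critical.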

US#3 — AI pre-diagnosis summary

  • Given a completed patient intake form
  • When the doctor views the dashboard before call
  • Then a summary suggests the top 3 condition matches with ≥95% accuracy (measured against doctor diagnosis). If this story fails, doctors ignore summaries, negating the time savings. Validated by Chief Medical Officer (Dr. Lee) against 50 historical case reviews.
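The US#3 accuracy audit reduces to comparing each summary's top-3 list against the doctor's final diagnosis; a sketch (treating "diagnosis appears in the top 3" as correct is an assumed scoring rule):

```python
# Audit AI summary accuracy against doctor diagnoses (top-3 match = correct).
def summary_accuracy(cases):
    """cases: list of (ai_top3: list[str], doctor_dx: str); returns hit rate 0..1."""
    hits = sum(1 for top3, dx in cases if dx in top3)
    return hits / len(cases)

cases = [
    (["viral URI", "sinusitis", "allergic rhinitis"], "sinusitis"),       # hit
    (["muscle strain", "kidney stone", "sciatica"], "herniated disc"),    # miss
]
print(summary_accuracy(cases))  # 0.5
```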

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
| --- | --- |
| E-prescription generator | Requires pharmacy integration & DEA license |
| Insurance verification | Payer API contracts take 3+ months |
| Patient medical records | EHR integration complexity (HL7/FHIR) |
| iOS/Android native apps | React Native cross-platform suffices |

Phase 1.1 — 4 weeks post-MVP: Add post-consult notes template and basic e-prescription for common drugs.

Phase 1.2 — 8 weeks post-MVP: Integrate with a single pharmacy API (e.g., SureScripts) and add patient feedback surveys.

Drop List (Non-MVP)

Features explicitly excluded from MVP:

  • E-prescription generation (due to DEA licensing and pharmacy integration complexity)
  • Insurance verification and billing (requires payer contracts; Phase 2)
  • Integration with external EHRs (HL7/FHIR effort exceeds 8-week timeline)
  • Multi-language support (focus on English-only for US pilot)
  • Advanced analytics dashboard for clinics (post-MVP if adoption validates)
  • Patient referral programs (growth feature for Phase 1.2)
  • Chronic condition management tools (out of scope—focus on acute care)

Riskiest Assumptions & Kill Criteria

Risk Register:

Risk: AI pre-diagnosis summary inaccuracy leads to doctor distrust → doctors skip summaries → consultation time saving fails → revenue target missed.
Probability: Medium. Impact: High.
Mitigation: Start with rule-based engine for 20 common conditions; validate accuracy weekly with doctor feedback; owner: AI lead (Raj) by week 4.

Risk: WebRTC performance on mobile networks poor → video calls drop or lag → patient complaints → churn increases.
Probability: Medium. Impact: High.
Mitigation: Implement adaptive bitrate and fallback to audio-only; test on 10+ device types; owner: DevOps (Arun) by week 6.

Risk: HIPAA compliance gap due to AWS BAA oversight → data breach → legal penalties and shutdown.
Probability: Low. Impact: Critical.
Mitigation: Engage AWS enterprise support for BAA signing pre-launch; weekly security audit; owner: CTO (Sam) by week 2. If AWS BAA not cleared by week 4, delay launch 2 months.

Risk: Patient adoption low due to out-of-pocket cost → booking rate <10% → insufficient data for validation.
Probability: High. Impact: Medium.
Mitigation: Offer first consultation free for pilot; track conversion funnel; owner: Growth lead (Maya) by week 3.

Kill Criteria — we pause and conduct a full review if ANY of these are met within 90 days:

  1. Average consultation time does not drop below 12 minutes (baseline 15 min).
  2. Doctor satisfaction score falls below 60/100.
  3. AI summary accuracy is below 80% after 100 consultations.
  4. Patient booking abandonment rate exceeds 70% (current industry avg 65%).
  5. HIPAA audit reveals any high-severity vulnerability.
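The five kill criteria above can be encoded as a single D90 review check (thresholds are taken directly from the list; the metrics dictionary shape is an assumption):

```python
# Trigger a full review if ANY kill criterion is met within 90 days.
def kill_criteria_breached(m):
    return any([
        m["avg_consult_min"] >= 12,         # 1. consult time did not drop below 12 min
        m["doctor_satisfaction"] < 60,      # 2. satisfaction below 60/100
        m["ai_accuracy"] < 0.80,            # 3. accuracy below 80% after 100 consults
        m["booking_abandonment"] > 0.70,    # 4. abandonment above 70%
        m["hipaa_high_sev_findings"] > 0,   # 5. any high-severity HIPAA finding
    ])

metrics = {"avg_consult_min": 10.5, "doctor_satisfaction": 82,
           "ai_accuracy": 0.93, "booking_abandonment": 0.55,
           "hipaa_high_sev_findings": 0}
print(kill_criteria_breached(metrics))  # False -- continue rollout
```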

Assumptions vs Validated Table:

| Assumption | Status |
| --- | --- |
| WebRTC supports 100 concurrent calls on AWS | ⚠ Unvalidated — needs confirmation from DevOps by 2024-06-15 |
| AWS BAA covers our data schema | ⚠ Unvalidated — legal/compliance sign-off required from Legal by 2024-06-01 |
| Rule-based AI achieves 95% accuracy | ⚠ Unvalidated — needs confirmation from AI lead by 2024-06-22 |
| React Native works with WebRTC on iOS/Android | ⚠ Unvalidated — needs confirmation from mobile lead by 2024-06-08 |
| Patient intake form completion time <5 min | ⚠ Unvalidated — needs confirmation from UX research by 2024-06-10 |

Minimum Viable Experiment

Minimum Viable Experiment: Concierge MVP where doctors receive AI summaries via manually curated Slack messages before calls, bypassing full automation, to validate consultation time reduction hypothesis with 10 doctors and 50 patients in 2 weeks. Measure time saved and doctor feedback; if positive, proceed to build automated pipeline.
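The concierge pilot's readout is a before/after comparison of mean consult duration (a sketch; the 5-minute pass threshold mirrors the 15→10 min hypothesis and is an assumed decision rule):

```python
# Concierge MVE readout: mean consult-time saving vs. the 5-minute hypothesis.
def mve_passes(baseline_minutes, pilot_minutes, required_saving=5.0):
    """True if mean pilot consults are at least `required_saving` minutes shorter."""
    saving = (sum(baseline_minutes) / len(baseline_minutes)
              - sum(pilot_minutes) / len(pilot_minutes))
    return saving >= required_saving

baseline = [15, 16, 14, 15]   # consults without AI summary
pilot = [10, 9, 11, 10]       # consults with Slack-delivered summary
print(mve_passes(baseline, pilot))  # True -- build the automated pipeline
```

With only 50 pilot patients, pairing this mean-difference check with the doctor feedback mentioned above guards against a noisy readout.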

Pre-Mortem: It is 6 months from now and this feature has failed. The 3 most likely reasons are:

  1. Doctors rejected AI summaries due to lack of trust in rule-based accuracy, leading to low usage and no time savings—we didn’t iterate fast enough on feedback.
  2. Video call quality was inconsistent on rural patient networks, causing 25% drop rates and patient churn—we over-optimized for urban broadband.
  3. Teladoc launched a similar AI summary feature 6 weeks before us, leveraging their existing provider network, neutralizing our wedge.

What success actually looks like: Doctors report "saving 10 minutes per consult" in feedback forms, patient wait times drop to same-day bookings, and the CEO cites "30% increase in patient volume without added staff" in the Q3 board review. The team stops hearing complaints about repetitive intake and starts prioritizing scaling to 50 clinics.
