PRD · March 23, 2026

HLSR AI Virtual Teacher

Problem Statement

Students in underserved communities face a rigid, generic curriculum that ignores individual pacing, struggles, and time constraints, leading to frustration, incomplete sessions, and high abandonment. Internal data shows 35% of users complete fewer than three sessions before churning, and survey feedback cites "content feels too hard/easy" and "no time for all of it" as the top complaints. Parents lack visibility into progress, which deepens disengagement: without clear insights they cannot guide or motivate. Without personalization, the product fails to deliver equitable education and widens achievement gaps in subjects like math and science, where 60% of users report specific pain points.

User Personas

  • Maria Gonzalez, 14-year-old 8th grader in a rural low-income area: Struggles with algebra due to inconsistent home support and only 30 minutes daily for study; motivated to improve grades for high school admission but gets overwhelmed by irrelevant pacing in the current app.
  • Jamal Rivera, 38-year-old single parent working two jobs in an urban underserved neighborhood: Monitors his 12-year-old son's learning but can't track daily progress or adjust plans; motivated to ensure his son builds foundational skills in English without access to tutors.
  • Aisha Patel, 16-year-old high school junior from an immigrant family: Balances school with family chores, available 45 minutes daily, weakest in science concepts; driven to pursue college but drops off from generic flows that don't adapt to her errors.

User Stories

As Maria, a student struggling with math, I want to answer five quick questions about my subject, grade, goals, time, and struggles so that the app generates a daily plan tailored to my 30-minute window with adaptive difficulty.
As Jamal, a busy parent, I want a progress dashboard showing my child's completion rates, assessment scores, and suggested adjustments so that I can intervene early without daily check-ins.
As Aisha, a student with science gaps, I want micro-assessments after each module that adjust the next content's difficulty based on my performance so that I build confidence without repeating easy material or failing on hard jumps.
As Maria, I want the plan to pull from the existing content library without new creations so that recommendations stay within proven, available resources.
As Jamal, I want email summaries of the dashboard weekly so that I stay informed even if I can't log in daily.

Acceptance Criteria

For "As Maria, a student struggling with math, I want to answer five quick questions about my subject, grade, goals, time, and struggles so that the app generates a daily plan tailored to my 30-minute window with adaptive difficulty":

  • Given a new user starts the onboarding, when they submit answers to the five questions (e.g., subject: math, grade: 8, goal: improve basics, time: 30 min/day, struggle: equations), then a plan generates in under 10 seconds with 3-5 daily modules totaling ≤30 minutes.
  • Given invalid input (e.g., non-numeric time), when submitted, then the app shows specific error messages and requires re-entry without losing prior answers.
  • Given plan generation, then it includes adaptive notes like "start at beginner level due to struggle input" verifiable in the output JSON.
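The generation rules above can be sketched as a small function. This is a minimal illustration, not the production generator: the `Module` shape, field names, and difficulty labels are assumptions, and real selection would query the library API rather than a local list.

```python
from dataclasses import dataclass

@dataclass
class Module:
    id: str
    minutes: int
    difficulty: str  # "beginner" | "intermediate" | "advanced"

def generate_plan(answers: dict, library: list[Module]) -> dict:
    """Build a daily plan from the five onboarding answers.

    Picks 3-5 modules whose total length fits the student's daily
    window and records an adaptive note when a struggle is reported.
    """
    budget = int(answers["time_minutes"])
    # Start at beginner level when the student reports a struggle.
    level = "beginner" if answers.get("struggle") else "intermediate"
    plan, used = [], 0
    for m in library:
        if m.difficulty == level and used + m.minutes <= budget:
            plan.append(m.id)
            used += m.minutes
        if len(plan) == 5:
            break
    note = (f"start at {level} level due to struggle input"
            if answers.get("struggle") else "standard pacing")
    return {"modules": plan, "total_minutes": used, "adaptive_note": note}
```

The returned dict mirrors the "output JSON" the third criterion asks QA to verify.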

For "As Jamal, a busy parent, I want a progress dashboard showing my child's completion rates, assessment scores, and suggested adjustments so that I can intervene early without daily check-ins":

  • Given parent login linked to child account, when viewing dashboard, then it displays metrics like % modules completed, average score >70%, and alerts like "child struggling on topic X—suggest review".
  • Given no activity for 3 days, when dashboard loads, then it highlights zero progress with a "motivate now" prompt.
  • Given data privacy, then dashboard access requires separate parent credentials, confirmed by login simulation.
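The alert rules above reduce to two checks. A minimal sketch, assuming a 70% score threshold for the "struggling" alert (the dashboard criterion displays average score >70% as healthy) and illustrative message strings:

```python
from datetime import date, timedelta

def dashboard_alerts(last_activity: date, today: date,
                     topic_scores: dict[str, float]) -> list[str]:
    """Return parent-dashboard alerts per the criteria above."""
    alerts = []
    # Three or more days without activity triggers the motivation prompt.
    if (today - last_activity) >= timedelta(days=3):
        alerts.append("motivate now")
    # Any topic at or below the 70% average flags a suggested review.
    for topic, score in topic_scores.items():
        if score <= 70:
            alerts.append(f"child struggling on {topic}: suggest review")
    return alerts
```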

For "As Aisha, a student with science gaps, I want micro-assessments after each module that adjust the next content's difficulty based on my performance so that I build confidence without repeating easy material or failing on hard jumps":

  • Given completion of a module with score ≥80%, when proceeding, then the next module selects medium difficulty from the library, verifiable by content metadata.
  • Given score <60%, when proceeding, then next module drops to easier variant or adds remedial content, with session log recording the adjustment.
  • Given three consecutive low scores, then app pauses and suggests goal review, testable via end-to-end flow.
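The adjustment policy above can be expressed as one decision function. This is a sketch of the stated thresholds only; the action labels are illustrative, not final API values.

```python
def next_difficulty(scores: list[float]) -> str:
    """Decide the next step from the score history (most recent last, 0-100)."""
    LOW, HIGH = 60, 80
    # Three consecutive low scores pause the flow for a goal review.
    if len(scores) >= 3 and all(s < LOW for s in scores[-3:]):
        return "pause-and-review-goals"
    last = scores[-1]
    if last >= HIGH:
        return "medium"               # step up per the ≥80% rule
    if last < LOW:
        return "easier-or-remedial"   # drop or add remedial content
    return "same-level"
```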

For "As Maria, I want the plan to pull from the existing content library without new creations so that recommendations stay within proven, available resources":

  • Given plan generation inputs, when querying library, then all assigned modules exist in current library (no placeholders), checked via API response.
  • Given a library gap (e.g., no content for a specific struggle), then the app falls back to the closest match and logs it for review, with 100% of plans using library items.
  • Given plan output, then it references library IDs, verifiable in backend traces.
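The fallback behavior above always resolves to a real library ID. A minimal sketch: the word-overlap heuristic is an illustrative stand-in for whatever similarity the content team actually uses, and the second return value models the "log for review" flag.

```python
def pick_module(struggle: str, library: dict[str, str]) -> tuple[str, bool]:
    """Resolve a struggle topic to an existing library module ID.

    `library` maps topic -> module ID. Exact match wins; otherwise fall
    back to the closest topic by shared words and flag it for review.
    """
    if struggle in library:
        return library[struggle], False
    # Crude closest-match by word overlap (illustrative only).
    words = set(struggle.split())
    best = max(library, key=lambda t: len(words & set(t.split())))
    return library[best], True  # True = logged for content-gap review
```

Because the function can only return values from `library`, plans never contain placeholders, satisfying the first criterion.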

For "As Jamal, I want email summaries of the dashboard weekly so that I stay informed even if I can't log in daily":

  • Given parent email on file, when weekly cron runs (Sundays), then email sends with key metrics if any activity occurred, including unsubscribe link.
  • Given no parent email on file, when the parent account is set up, then prompt to add one before granting dashboard access.
  • Given opt-out, then emails cease immediately, testable by mock send logs.

Success Metrics

  • Daily active users engaging with personalized plans ≥ 75% of total users (baseline: 45% from generic flows).
  • Average session completion rate for generated plans ≥ 85% (vs. current 55%).
  • 7-day retention post-onboarding ≥ 65% (up from 40%, measured as return logins).
  • Parent dashboard views per week ≥ 2 per linked account (new metric, target from zero).
  • Time to first value (plan generation to first module start) ≤ 2 minutes (current onboarding averages 5+ minutes).

Non-Functional Requirements

  • Performance: Plan generation API responds in ≤10 seconds for 95% of requests; micro-assessment processing ≤3 seconds; dashboard loads in ≤2 seconds on 3G networks. SLA: 99.5% uptime, monitored via Datadog.
  • Accessibility: Complies with WCAG 2.1 AA; all inputs support screen readers (e.g., ARIA labels for questions); color contrast ≥4.5:1 for progress charts; tested with VoiceOver and NVDA.
  • Security: User inputs encrypted in transit (HTTPS/TLS 1.3); PII (e.g., parent emails) stored with AES-256; parent access uses OAuth 2.0 with role-based controls; audit logs for all generations. No data sharing without consent.
  • Scalability: Handles 10,000 concurrent generations daily (projected growth); auto-scale AI inference pods on Kubernetes; database queries optimized for <50ms latency at 1M users.

Edge Cases & Constraints

  • Network failure during question submission: App caches inputs locally via IndexedDB and retries on reconnect; if offline >24 hours, prompt resume without data loss.
  • Invalid or incomplete inputs (e.g., no struggle specified): Default to general plan but flag in logs for UX review; prevent generation until all fields valid.
  • Library content unavailability (e.g., deprecated module): Fallback to nearest alternative or notify user "content updating—try similar topic"; log as error to avoid blank plans.
  • Multiple users on shared device (e.g., siblings): Session isolation via device ID; parent dashboard requires explicit linking to prevent cross-viewing.
  • High-load failure: If AI backend throttles (>500 req/min), queue requests with 30-second wait message; past outage showed 20% drop-off, so implement circuit breakers.
  • Age/inappropriate content: For under-13 users, auto-filter the library to grade-appropriate content; deny access if no parent account is linked.

Open Questions

  • Which AI model to use for generation (e.g., fine-tuned GPT-4 vs. custom LLM)—current generic model hallucinated 15% of plans in prototype; ⚠ critical, decide before dev kickoff to avoid rework.
  • How to handle non-English inputs for questions (e.g., translation layer needed for underserved global users)? Test with 10% user base first.
  • Integration depth with existing library: Full metadata sync or just ID pulls? A shallow sync caused mismatches in the v1 failure.
  • Parent dashboard customization (e.g., customizable alerts)? Defer to MVP but flag for post-launch if feedback demands.
  • Data retention for plans: Delete after 30 days inactivity, or indefinite? Align with privacy policy review.

Dependencies

  • Content Library Team: API access to query/filter modules by subject/grade/difficulty (v2 endpoints); sync schema changes before build.
  • AI Backend Infrastructure: Deployment on existing SageMaker or Vertex AI; requires model fine-tuning quota increase to 50 inferences/min.
  • Auth Service: Updates for parent-child linking with JWT roles; integrate with current Firebase Auth.
  • Analytics Platform (Mixpanel): Custom events for plan generation and assessments; feature-flag gating for the A/B testing rollout.
  • Email Service (SendGrid): API keys and templates for weekly summaries; dependency on approval for education domain.
  • Feature Flags (LaunchDarkly): Toggle for beta rollout to 10% users; infrastructure already in place but needs new keys.