PRD · April 18, 2026

Tsenta

Executive Brief

Job seekers using Tsenta currently land interviews through automated applications but enter those conversations unprepared. They scramble across Glassdoor, LinkedIn, and company blogs to piece together intel, then manually map their resume bullets to job requirements—spending 3.8 hours per interview on research and preparation (n=94, user survey, Aug 2025). This friction causes 34% of users to skip prep entirely for interviews scheduled with less than 48 hours' notice, resulting in avoidable rejections and churn back to manual job searching.

Business Case: 8,000 users receive interview invitations monthly (source: Tsenta analytics, July 2025) × 2.3 interviews per user per month (source: internal data, avg per active seeker) × 12 months × $38 value per prep guide (3.8 hours saved × $10/hour value of time) (assumption — validate with willingness-to-pay survey) = $8.4M/year in recoverable user time value. If feature adoption reaches only 40% of interview recipients: $3.36M/year. This exceeds the estimated 6-week build cost ($78K all-in, 2 engineers × 6 weeks × $6.5K/week blended rate) (assumption — validate with eng estimate).
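The business-case arithmetic can be sanity-checked directly; every figure below is the PRD's own estimate, and the $10/hour time value and $6.5K/week blended rate remain unvalidated assumptions:

```python
# Sanity check of the business-case math using the PRD's own estimates.
monthly_invite_users = 8_000        # users receiving interview invites (Tsenta analytics, July 2025)
interviews_per_user = 2.3           # avg interviews per active seeker per month (internal data)
hours_saved_per_guide = 3.8         # user survey, n=94
value_per_hour = 10                 # ASSUMPTION: $/hour value of user time
value_per_guide = hours_saved_per_guide * value_per_hour        # $38/guide

annual_value = monthly_invite_users * interviews_per_user * 12 * value_per_guide
adopted_value = annual_value * 0.40                             # at 40% adoption
build_cost = 2 * 6 * 6_500          # 2 engineers x 6 weeks x $6.5K/week (assumption)

print(f"${annual_value/1e6:.2f}M/yr total, ${adopted_value/1e6:.2f}M/yr at 40%, build ${build_cost/1e3:.0f}K")
```

The total comes to roughly $8.39M/year, which the brief rounds to $8.4M.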

This feature is an automated interview preparation guide generator that triggers on interview stage transition, synthesizing job descriptions, company data, and user resumes into a structured briefing document. It is not a mock interview simulator, a salary negotiation coach, or a replacement for human interview coaching services.

Strategic Context

Competitive Landscape:

Huntr solves this today by providing a blank notes field where users manually paste research links and type prep notes—hiring the tool for unstructured data storage, not intelligence. LinkedIn sells interview prep courses and company insights, but users must manually bridge the gap between generic advice and their specific resume bullets. Teal offers job description keyword matching, but stops at identifying gaps rather than generating tailored talking points.

| Capability | Huntr | LinkedIn | Teal | Tsenta Prep |
|---|---|---|---|---|
| Auto-trigger on interview landing | ❌ | ❌ | ❌ | ✅ |
| Personalized question prediction | ❌ | ✅ (generic) | ❌ | ✅ (JD-specific) |
| Resume-to-requirement mapping | ❌ | ❌ | ✅ (gap only) | ✅ (full narrative) |
| Company culture intel synthesis | ❌ | ✅ (surface-level) | ❌ | ✅ (deep web scrape) |
| Where we lose | Price (free tier) | Brand trust/scale | SEO/job board traffic | — |

Our wedge is contextual automation because we own the application-to-interview transition moment and can trigger prep exactly when motivation peaks, while competitors require users to context-switch to a separate tool.

Problem Statement

WHO / JTBD: When a job seeker lands an interview through Tsenta, they want to understand the company's priorities and rehearse relevant stories from their experience, so they can respond confidently without spending hours on manual research.

WHERE IT BREAKS: Today, the user receives an interview email, opens 4-7 browser tabs (Glassdoor for culture, LinkedIn for interviewer stalking, company blog for recent news, Notion or Google Docs for note-taking), and manually cross-references the job description against their resume to identify which projects to mention. This takes 3.8 hours on average and produces inconsistent results—users report being "caught off guard" by questions they hadn't prepared for in 67% of interviews (n=94, user survey, Aug 2025).

WHAT IT COSTS:

| Symptom | Frequency | Time Lost | Aggregate |
|---|---|---|---|
| Manual company research per interview | Every interview | 2.1 hrs | 463K hrs/yr across 18,400 monthly interviews |
| Resume-to-JD mapping per interview | Every interview | 1.7 hrs | 375K hrs/yr |
| Failed interview due to poor prep | 23% of interviews | $2,400 opportunity cost (avg monthly salary) | $10.1M/yr in lost offer value |

Aggregate annual cost: $24.8M in time + opportunity cost (source: user survey time estimates, interview volume from analytics, $80K avg salary assumption).

JTBD statement: "When I land an interview, I want an instant briefing that connects my specific experience to this company's specific needs, so I can walk in prepared without the 4-hour research slog."

Solution Design

Core Mechanic: The engine generates a structured prep guide by (1) parsing the job description for role requirements and implied competencies, (2) retrieving recent company data from web sources and Tsenta's company database, (3) matching parsed resume achievements to requirements using semantic similarity, and (4) synthesizing likely interview questions with suggested answer frameworks, triggered automatically when a user moves an application to "Interview" stage.
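Step (3) is the least standard part of the pipeline. A minimal sketch of the matching idea, using a toy bag-of-words cosine in place of the real embedding model — the function names, sample data, and 0.2 threshold are all illustrative:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity (stand-in for a real embedding model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def match_bullets(requirements: list[str], bullets: list[str], threshold: float = 0.2) -> dict:
    """For each JD requirement, keep the best-matching resume bullet above threshold."""
    matches = {}
    for req in requirements:
        best = max(bullets, key=lambda b: cosine(req, b))
        if cosine(req, best) >= threshold:
            matches[req] = best
    return matches

reqs = ["platform migration experience", "API design for B2B products"]
bullets = [
    "Led checkout platform migration at Acme Corp with zero downtime",
    "Designed public REST API used by 40 B2B partners",
]
print(match_bullets(reqs, bullets))
```

The threshold gates what becomes a talking point: requirements with no bullet above it surface as gaps rather than forced matches.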

User Flow:

  1. User drags application card to "Interview" column (or email parsing detects interview invite)
  2. System displays toast: "Prep guide ready for [Company]—4 min read"
  3. User clicks toast, lands on prep guide page with three tabs: Company Intel, Likely Questions, Your Talking Points
  4. User stars specific questions to practice; system saves to "Interview Prep" folder

Key Design Decisions:

Decision 1: Auto-generation vs. Manual Request

  • Choice: Auto-generate when interview stage detected, with 15-second undo window
  • Rejected: Manual "Generate" button only
  • Rationale: Motivation peaks at interview notification; adding friction drops activation by 60% (source: similar feature test at previous company). Undo prevents false positives.

Decision 2: LLM vs. Template-Based Generation

  • Choice: LLM (GPT-4o) with structured output schemas
  • Rejected: Mad-libs templates with keyword insertion
  • Rationale: Templates produce generic output indistinguishable from free blogs; personalization is the wedge. Cost is $0.12/guide at current token estimates, acceptable at $20/month subscription price point.
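One way to make "structured output schemas" concrete is to pin the guide shape in code and parse every LLM response into it; the field names and types below are illustrative, not the final schema:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    category: str          # e.g. "behavioral", "product", "technical"
    confidence: str        # "high" | "medium" | "low" (see Decision 3)
    suggested_angle: str   # the "Your angle" line tied to a resume bullet

@dataclass
class PrepGuide:
    company: str
    role: str
    recent_news: list[str] = field(default_factory=list)      # max 3 items
    questions: list[Question] = field(default_factory=list)   # 8-12 items
    talking_points: list[str] = field(default_factory=list)   # 3-5 items

guide = PrepGuide(company="Stripe", role="Senior PM")
print(guide.company, len(guide.questions))  # Stripe 0
```

Parsing into a typed structure (rather than rendering raw LLM text) is what makes the 100% schema-consistency requirement in US2 enforceable.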

Decision 3: Web View vs. PDF Export (MVP)

  • Choice: Responsive web view only for Phase 1
  • Rejected: PDF generation and email attachment
  • Rationale: Web allows real-time updates if company news breaks post-generation; 89% of users prep on mobile (source: device analytics). PDF adds 2 weeks eng time for marginal gain.

Scope Boundary: This feature generates static prep content. It does not simulate voice/video interviews, schedule mock sessions with humans, or provide real-time coaching during actual interviews.

Integration Touchpoints:

  • Reads from: Applications table (status changes), Resumes (parsed JSON), Company database (crunchbase/news APIs)
  • Writes to: Prep Guides table, Activity log (for "last viewed" tracking)
  • Triggers: Email notification system (for "Guide ready" alerts)
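The status-change trigger above can be sketched as a simple hook that enqueues a generation job when an application enters the Interview stage; the function and field names here are hypothetical:

```python
import queue
import time

generation_queue: "queue.Queue[dict]" = queue.Queue()

def on_status_change(application: dict, new_status: str) -> None:
    """Illustrative hook: queue a prep-guide job on transition into 'Interview'."""
    if new_status == "Interview" and application.get("status") != "Interview":
        generation_queue.put({
            "application_id": application["id"],
            "company": application["company"],
            "queued_at": time.time(),   # US1: job must queue within 30 seconds
        })
    application["status"] = new_status

app = {"id": 42, "company": "Stripe", "status": "Applied"}
on_status_change(app, "Interview")
print(generation_queue.qsize())  # → 1
```

The transition check (old status ≠ "Interview") is what prevents duplicate jobs when the card is dragged within the same column or re-saved.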

Wireframes:

┌─────────────────────────────────────────────────────────────────┐
│ ← Back to Board                    Tsenta                [User] │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  🎯 Interview Prep: Senior PM at Stripe                         │
│  Generated 2 min ago • Last updated: Real-time                  │
│                                                                 │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐     │
│  │ Company     │  │ Questions   │  │ Your Talking Points │     │
│  │    Intel    │  │   (12)      │  │       (8)           │     │
│  └─────────────┘  └─────────────┘  └─────────────────────┘     │
│                                                                 │
│  RECENT COMPANY MOVES                                           │
│  • Launched new Treasury API (3 days ago) — likely interview    │
│    topic given role focus on B2B products                       │
│  • Hiring freeze lifted in Engineering (source: TechCrunch)     │
│                                                                 │
│  INTERVIEWER INTEL                                              │
│  • Sarah Chen (VP Product) — former Google Pay, likely to ask   │
│    about platform migration experience                          │
│                                                                 │
│  [Refresh Data]              [Mark as Prepped] [→ Questions]    │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ ← Back to Guide                    Tsenta                [User] │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  LIKELY QUESTIONS: Senior PM at Stripe                          │
│                                                                 │
│  HIGH CONFIDENCE (based on 47 similar Stripe PM interviews)     │
│                                                                 │
│  □ "Tell me about a time you had to sunset a product"           │
│    💡 Your angle: Mention the Analytics Dashboard deprecation     │
│    at Acme Corp (2023) — saved $400K/year, 0 customer churn     │
│                                                                 │
│  □ "How would you improve Stripe's onboarding flow?"            │
│    💡 Your angle: Reference your Checkout optimization work     │
│    (18% conversion lift) — framework: Measure, Isolate, Test    │
│                                                                 │
│  MEDIUM CONFIDENCE                                              │
│  □ "Explain PCI compliance to a 5-year-old"                     │
│    💡 Your angle: Use the "piggy bank" analogy from your        │
│    fintech blog post (linked in resume)                         │
│                                                                 │
│  [☆ Star for Practice]        [Copy to My Notes]                │
└─────────────────────────────────────────────────────────────────┘

Acceptance Criteria

Phase 1 — MVP: 6 weeks

US1 — Interview Detection

  • Given a user has an application in Tsenta
  • When the user changes status to "Interview" OR Tsenta's email parser detects an interview invitation
  • Then a prep guide generation job queues within 30 seconds, with 95% accuracy in detecting true interviews (validated by Support against 200-sample baseline)
  • Failure Mode: If detection accuracy <90%, users receive spam guides for rejection emails, destroying trust
  • Validator: PM validates against 200 manually labeled emails
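The 200-sample validation reduces to a precision check: of the emails the parser flags as interviews, what fraction actually are? A sketch — the sample below is synthetic, standing in for Support's real labels:

```python
def detection_precision(labeled: list[tuple[bool, bool]]) -> float:
    """labeled = [(predicted_interview, actually_interview), ...] per email."""
    tp = sum(1 for pred, actual in labeled if pred and actual)
    fp = sum(1 for pred, actual in labeled if pred and not actual)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Synthetic 200-email sample: 95 true positives, 5 false positives, 100 true negatives.
sample = [(True, True)] * 95 + [(True, False)] * 5 + [(False, False)] * 100
print(detection_precision(sample))  # → 0.95
```

Precision is the right lens for the stated failure mode: false positives (guides for rejection emails) are what destroy trust, so missed interviews (recall) can be tracked separately without blocking launch.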

US2 — Guide Generation

  • Given an interview is detected for Company X and Job Y
  • When the generation job runs
  • Then the system produces a guide containing: (1) Company recent news (3 items max), (2) 8-12 likely interview questions categorized by type, (3) 3-5 suggested talking points mapping user's resume bullets to job requirements, with 100% consistency in schema format (launch-blocking)
  • Failure Mode: If generation fails, user sees "Prep guide unavailable" rather than partial/incorrect data
  • Validator: Engineering validates against 50 diverse JDs
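The launch-blocking schema-consistency requirement can be enforced with a small validator run on every generated guide; a failed check maps to the "Prep guide unavailable" failure mode rather than showing partial data. Field names are illustrative:

```python
def guide_is_valid(guide: dict) -> bool:
    """Launch-blocking content checks from US2 (field names are illustrative)."""
    return (
        len(guide.get("recent_news", [])) <= 3            # company news: 3 items max
        and 8 <= len(guide.get("questions", [])) <= 12    # likely questions: 8-12
        and 3 <= len(guide.get("talking_points", [])) <= 5  # talking points: 3-5
    )

ok = {"recent_news": ["a"], "questions": ["q"] * 10, "talking_points": ["t"] * 4}
bad = {"recent_news": ["a"] * 5, "questions": ["q"] * 2, "talking_points": []}
print(guide_is_valid(ok), guide_is_valid(bad))  # True False
```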

US3 — Guide Delivery

  • Given a guide is generated
  • When the user views the prep guide page
  • Then content loads in <2 seconds (p95), displays correctly on mobile viewports (375px width), and "Mark as Prepped" action persists to database
  • Failure Mode: If load time >3s, 40% of users abandon before reading (based on analogous feature data)
  • Validator: QA validates against Lighthouse performance audit

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
|---|---|
| PDF Export | Engineering complexity (CSS pagination) for <5% of users who requested it |
| Mock Interview Voice Mode | Requires audio infrastructure and 4 additional weeks |
| Interviewer-specific question prediction | Depends on LinkedIn scraping, which is legally risky and unreliable |
| Multi-language guides | 94% of Tsenta users are English-speaking (source: analytics) |

Phase 1.1 — 2 weeks post-MVP:

  • PDF export with Tsenta branding
  • Email delivery option (guide sent 24hrs before interview based on calendar integration)
  • "Practice Mode" toggle that hides suggested answers for self-testing

Phase 1.2 — 4 weeks post-MVP:

  • Behavioral question bank expansion (STAR method framework suggestions)
  • Peer sharing (send prep guide to friend for review)
  • Post-interview feedback loop (did these questions appear?)

Success Metrics

Primary Metrics:

| Metric | Baseline | Target (D90) | Kill Threshold | Measurement Method | Owner |
|---|---|---|---|---|---|
| Prep guide generation rate | 0% | 75% of detected interviews | <40% at D30 | DB query: interviews with guides / total interviews | Data Eng |
| Time-to-prep (survey) | 3.8 hrs (n=94) | ≤45 min | >2 hrs at D60 | In-app survey post-interview (n≥100) | PM |
| Interview-to-offer conversion | 18% (current platform avg) | 24% | <20% at D90 | User-reported outcome + offer letter upload | Growth |

Guardrail Metrics (must NOT degrade):

| Guardrail | Threshold | Action if Breached |
|---|---|---|
| Core application submission rate | ≥95% of baseline | Pause auto-generation (possible distraction from core loop) |
| User churn (30-day) | ≤8% (current) | >10% → investigate guide quality causing discouragement |
| Support tickets per user | ≤0.15/month | >0.25/month → guides generating incorrect info |

Leading Indicators (D14 check):

  • If ≥60% of generated guides are opened within 4 hours: predict D90 adoption success
  • If average time-on-guide page >3 minutes: predict perceived value
  • If "Mark as Prepped" click-through rate >40%: predict habit formation

What We Are NOT Measuring:

  • "Number of guides generated" (vanity metric—inflated by false-positive interview detection)
  • "Social shares of guides" (not a core value driver, distracts from interview performance)
  • "AI confidence scores" (internal metric, not user outcome)
  • "Page load time alone" (without correlation to completion rates)

Risk Register

RISK 1 — LLM Hallucination on Company Facts

  • Risk: Guide contains false information about recent company layoffs or product launches, causing user to mention incorrect facts in interview
  • Probability: Medium (hallucination rate ~3% in testing). Impact: High (destroys user trust, potential liability)
  • Trigger: User reports "incorrect info" in feedback or support ticket mentions guide error
  • Mitigation: (1) Source attribution links for all company facts, (2) "Report inaccuracy" one-click button with 4-hour SLA for human review, (3) Confidence thresholds—facts with <80% source certainty display "Unverified" badge
  • Owner: ML Engineer (Raj) — implemented by Week 4; Support Lead (Maria) — monitoring dashboard by Week 6

RISK 2 — Low User Adoption Due to Distrust of AI

  • Risk: Users view AI-generated prep as "cheating" or generic, ignore guides, and continue manual prep
  • Probability: Medium (observed 30% skepticism in user survey). Impact: Medium (feature becomes unused infrastructure)
  • Trigger: <40% open rate on generated guides by D30
  • Mitigation: (1) Highlight personalization explicitly ("Based on your resume bullet..."), (2) Include "Why this matters" explanations for each suggestion, (3) D14 user interviews with skeptics to refine messaging
  • Owner: PM (Alex) — messaging test by Week 3; User Research (Sam) — 10 interviews by D14

RISK 3 — LinkedIn or Huntr Launch Competitive Feature

  • Risk: Well-funded competitor ships similar feature with better data (LinkedIn has native interviewer profiles)
  • Probability: Medium (LinkedIn has been investing in interview tools). Impact: High (neutralizes differentiation)
  • Trigger: Competitor announcement or feature detection within 90 days of our launch
  • Mitigation: (1) Build moat via Tsenta-specific data (application history, rejection patterns), (2) Prepare Phase 1.2 features (mock mode) for rapid release if competitor launches, (3) Lock in exclusive data partnerships with niche job boards
  • Owner: Strategy (Jordan) — competitive intel weekly; PM (Alex) — Phase 1.2 acceleration plan ready by D60

RISK 4 — API Cost Explosion at Scale

  • Risk: GPT-4o costs exceed unit economics if users generate multiple guides per interview or abuse feature
  • Probability: Low (current cost $0.12/guide, buffer built in). Impact: Medium (margin compression)
  • Trigger: Cost per user >$5/month (vs $20 subscription)
  • Mitigation: (1) Rate limit: 5 guide regenerations per interview, (2) Cache identical JDs for 24hrs, (3) Fallback to GPT-3.5 for company summaries if costs spike
  • Owner: Engineering Lead (Priya) — cost monitoring dashboard by Week 2
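Mitigations (1) and (2) can be sketched together: a per-interview regeneration cap plus a 24-hour cache keyed on a hash of the JD text. The names and in-memory dicts are illustrative; production would use a shared store such as Redis:

```python
import hashlib
import time

MAX_REGENS = 5                     # mitigation (1): 5 regenerations per interview
CACHE_TTL = 24 * 3600              # mitigation (2): cache identical JDs for 24h
_regen_counts: dict[str, int] = {}
_jd_cache: dict[str, tuple[float, str]] = {}

def generate_guide(interview_id: str, jd_text: str) -> str:
    """Sketch of the Risk 4 cost controls wrapped around the (stubbed) LLM call."""
    if _regen_counts.get(interview_id, 0) >= MAX_REGENS:
        raise RuntimeError("Regeneration limit reached for this interview")
    _regen_counts[interview_id] = _regen_counts.get(interview_id, 0) + 1

    key = hashlib.sha256(jd_text.encode()).hexdigest()
    cached = _jd_cache.get(key)
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]                   # cache hit: no LLM spend

    guide = f"guide for JD {key[:8]}"      # placeholder for the real GPT-4o call
    _jd_cache[key] = (time.time(), guide)
    return guide
```

Note that cache hits still consume a regeneration slot here; whether they should is a product decision this sketch leaves open.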

RISK 5 — Privacy Violation via Resume Data

  • Risk: Generated guides expose user's resume data to other users through caching bugs or IDOR vulnerabilities
  • Probability: Low (standard auth in place). Impact: Critical (GDPR violation, trust destruction)
  • Trigger: Any security report indicating cross-user data leakage
  • Mitigation: (1) Strict tenant isolation in guide generation service, (2) Security review Week 5 with external pen-tester, (3) Data encryption at rest for all resume-derived content
  • Owner: Security Eng (Chris) — review by Week 5; Infrastructure (Dana) — encryption verification by Week 4

Kill Criteria — we pause Phase 2 and conduct a full review if ANY condition is met within 90 days:

  1. Prep guide generation rate <40% at D30 (adoption failure)
  2. >5% of generated guides contain factual errors reported by users (quality failure)
  3. Interview-to-offer conversion rate does not improve vs control group by D90 (value failure)
  4. Support ticket volume increases >50% due to guide-related confusion (UX failure)
  5. API costs exceed $0.50 per guide average (sustainability failure)

Strategic Decisions Made

Decision 1: LLM Provider and Model

  • Choice Made: OpenAI GPT-4o via API with 4k output limit
  • Rationale: Best-in-class structured JSON output for question generation; Anthropic Claude 3.5 Sonnet considered but OpenAI wins on latency (800ms vs 1.2s for complex parsing). Rejected local LLMs due to infrastructure overhead for 6-week timeline.

Decision 2: Data Retention for Generated Guides

  • Choice Made: Delete guide content 90 days after interview date; retain metadata (company, role, outcome) for analytics
  • Rationale: Reduces storage costs by 70% and mitigates privacy risk. Rejected indefinite retention—users expressed concern about resume data persistence in survey (n=94, 62% worried about data breaches).

Decision 3: Question Confidence Scoring

  • Choice Made: Display "High/Medium/Low" confidence labels based on source data density (Glassdoor report volume)
  • Rationale: Sets appropriate expectations; rejected hiding low-confidence questions because users want comprehensive coverage even if speculative.

Decision 4: Resume Parsing Depth

  • Choice Made: Parse to structured JSON (skills, companies, projects, metrics) on upload; do not re-parse raw PDF for each guide generation
  • Rationale: Reduces per-guide latency by 3 seconds and API costs by 40%. Rejected real-time parsing—unnecessary given resume stability during active job search.

Decision 5: Company Data Sources

  • Choice Made: Primary: Crunchbase (funding), company blog (news), LinkedIn (interviewer data). Secondary: Glassdoor (interview questions)
  • Rationale: Glassdoor scraping is legally gray and technically brittle; use only as fallback for question bank, not primary company intel. Rejected real-time news scraping (too noisy)—use 30-day cached summaries.

Appendix

Before / After Narrative:

Before: Sarah gets an email—interview with Stripe tomorrow for the Senior PM role. She panics. She opens 12 tabs: Glassdoor for interview questions (finding outdated 2019 posts), LinkedIn to stalk her interviewer (finds name but no common connections), Stripe's blog (reads 3 engineering posts she doesn't understand). She stays up until 1am trying to remember which of her projects involved "API design" because the JD mentions it. She walks in tired, mentions a deprecated product feature because she read old news, and fumbles when asked about her "platform migration experience"—she had it, but didn't prep the story.

After: Sarah drags her Stripe application to "Interview" in Tsenta. While she makes coffee, the system reads the JD, sees "platform migration" and "API design," pulls her resume bullet about the Acme Corp checkout migration, and checks Stripe's blog for yesterday's Treasury API launch. She opens the guide: 4-minute read. She sees exactly which stories to tell, knows Sarah Chen (her interviewer) came from Google Pay, and notes the Treasury API launch to mention. She practices the suggested talking points for 20 minutes, marks it prepped, and sleeps 8 hours. She walks in confident, references the recent launch, and nails the migration story with metrics.

Assumptions vs Validated:

| Assumption | Status |
|---|---|
| GPT-4o can consistently output valid JSON schema for question generation | ⚠ Unvalidated — needs confirmation from ML Eng by Week 2 |
| Email parsing can detect interview invitations with >95% precision | ⚠ Unvalidated — needs confirmation from Data Eng by Week 3 |
| Users will open auto-generated guides within 4 hours (motivation window) | ⚠ Unvalidated — needs confirmation from User Research by Week 4 |
| Company blog scraping does not violate robots.txt or ToS for major employers | ⚠ Unvalidated — legal review required from Legal team by Week 3 |
| Resume parsing accuracy is sufficient to match bullets to JD requirements | ⚠ Unvalidated — needs confirmation from NLP team by Week 2 |
| Cost per guide remains <$0.15 at 10x current scale | ⚠ Unvalidated — needs confirmation from Infrastructure by Week 5 |

Pre-Mortem:

It is 6 months from now and this feature has failed. The 3 most likely reasons are:

  1. The guides were too generic. Users expected hyper-personalized coaching, but the LLM produced bland, obvious advice ("Be sure to mention your experience"). Users tried it once, saw no value over Google, and ignored subsequent guides. We failed to validate the "personalization depth" assumption with a high-fidelity prototype before building the full pipeline.

  2. False positive interview detection spammed users. The email parser flagged "We received your application" as an interview, generating 50 useless guides in week one. Users lost trust in the automation and disabled notifications entirely, missing real interview alerts. We didn't invest enough in the detection model training data.

  3. We solved the wrong moment. Users actually need help during the interview (real-time support) or before they apply (resume tailoring), not the 24-hour prep window. We assumed the interview moment was high-anxiety/high-value, but users actually feel confident once they land the interview; the anxiety is earlier in the funnel. We didn't validate the JTBD timing with behavioral data.

What success actually looks like: Six months post-launch, users mention "the prep guide" unprompted in NPS surveys as the reason they chose Tsenta over Huntr. The support team stops receiving "How do I prepare for this interview?" tickets entirely. In the board meeting, the CEO cites a 6-percentage-point improvement in interview-to-offer rates as evidence that Tsenta doesn't just get users interviews—it gets them jobs. The engineering team is proud because the feature runs on 99.9% auto-pilot with near-zero hallucination reports.
