Job seekers using Tsenta currently land interviews through automated applications but enter those conversations unprepared. They scramble across Glassdoor, LinkedIn, and company blogs to piece together intel, then manually map their resume bullets to job requirements, spending 3.8 hours per interview on research and preparation (n=94, user survey, Aug 2025). This friction causes 34% of users to skip prep entirely for interviews scheduled with less than 48 hours' notice, resulting in avoidable rejections and churn back to manual job searching.
Business Case: 8,000 users receive interview invitations monthly (source: Tsenta analytics, July 2025) × 2.3 interviews per user per month (source: internal data, avg per active seeker) × 12 months × $38 value per prep guide (3.8 hours saved × $10/hour value of time) (assumption — validate with willingness-to-pay survey) = $8.4M/year in recoverable user time value. If feature adoption reaches only 40% of interview recipients: $3.36M/year. This exceeds the estimated 6-week build cost ($78K all-in, 2 engineers × 6 weeks × $6.5K/week blended rate) (assumption — validate with eng estimate).
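The value model above can be sanity-checked in a few lines (all inputs come from this section; the $10/hour rate is the assumption flagged for a willingness-to-pay survey):

```python
# Sanity check of the business-case arithmetic (inputs from this section).
users_with_invites = 8_000   # monthly users receiving interview invitations
interviews_per_user = 2.3    # avg interviews per active seeker per month
hours_saved = 3.8            # research time eliminated per interview
hourly_value = 10            # $/hour value of user time (assumption)

value_per_guide = hours_saved * hourly_value  # $38
annual_value = users_with_invites * interviews_per_user * 12 * value_per_guide
print(f"${annual_value / 1e6:.2f}M/year")                   # $8.39M/year
print(f"At 40% adoption: ${annual_value * 0.4 / 1e6:.2f}M") # $3.36M
```

Both printed figures match the $8.4M/year and $3.36M numbers quoted above, so the headline claim is internally consistent given the $10/hour assumption.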
This feature is an automated interview preparation guide generator that triggers on interview stage transition, synthesizing job descriptions, company data, and user resumes into a structured briefing document. It is not a mock interview simulator, a salary negotiation coach, or a replacement for human interview coaching services.
Competitive Landscape:
Huntr solves this today by providing a blank notes field where users manually paste research links and type prep notes—hiring the tool for unstructured data storage, not intelligence. LinkedIn sells interview prep courses and company insights, but users must manually bridge the gap between generic advice and their specific resume bullets. Teal offers job description keyword matching, but stops at identifying gaps rather than generating tailored talking points.
| Capability | Huntr | LinkedIn | Teal | Tsenta Prep |
|---|---|---|---|---|
| Auto-trigger on interview landing | ❌ | ❌ | ❌ | ✅ |
| Personalized question prediction | ❌ | ✅ (generic) | ❌ | ✅ (JD-specific) |
| Resume-to-requirement mapping | ❌ | ❌ | ✅ (gap only) | ✅ (full narrative) |
| Company culture intel synthesis | ❌ | ✅ (surface-level) | ❌ | ✅ (deep web scrape) |
| Where we lose | Price (free tier) | Brand trust/scale | SEO/job board traffic | — |
Our wedge is contextual automation because we own the application-to-interview transition moment and can trigger prep exactly when motivation peaks, while competitors require users to context-switch to a separate tool.
WHO / JTBD: When a job seeker lands an interview through Tsenta, they want to understand the company's priorities and rehearse relevant stories from their experience, so they can respond confidently without spending hours on manual research.
WHERE IT BREAKS: Today, the user receives an interview email, opens 4-7 browser tabs (Glassdoor for culture, LinkedIn for interviewer stalking, company blog for recent news, Notion or Google Docs for note-taking), and manually cross-references the job description against their resume to identify which projects to mention. This takes 3.8 hours on average and produces inconsistent results—users report being "caught off guard" by questions they hadn't prepared for in 67% of interviews (n=94, user survey, Aug 2025).
WHAT IT COSTS:
| Symptom | Frequency | Time Lost | Aggregate |
|---|---|---|---|
| Manual company research per interview | Every interview | 2.1 hrs | 463K hrs/yr across 18,400 monthly interviews |
| Resume-to-JD mapping per interview | Every interview | 1.7 hrs | 375K hrs/yr |
| Failed interview due to poor prep | 23% of interviews | $2,400 opportunity cost per failed interview (assumption — reconcile with $80K avg salary figure) | $10.1M/yr in lost offer value |
Aggregate annual cost: $18.5M in time + opportunity cost ($8.4M from 838K hrs at the $10/hour rate used above, plus $10.1M in lost offer value) (source: user survey time estimates, interview volume from analytics, $80K avg salary assumption).
JTBD statement: "When I land an interview, I want an instant briefing that connects my specific experience to this company's specific needs, so I can walk in prepared without the 4-hour research slog."
Core Mechanic: The engine generates a structured prep guide by (1) parsing the job description for role requirements and implied competencies, (2) retrieving recent company data from web sources and Tsenta's company database, (3) matching parsed resume achievements to requirements using semantic similarity, and (4) synthesizing likely interview questions with suggested answer frameworks, triggered automatically when a user moves an application to "Interview" stage.
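The four steps above can be sketched as follows. All function names are illustrative, and token overlap stands in for the production semantic-similarity model; steps 2 and 4 would call the scraper and LLM in the real pipeline:

```python
# Sketch of the four-step guide pipeline (names and scoring are illustrative;
# production would use embedding similarity, not token overlap).

def parse_requirements(job_description: str) -> list[str]:
    # Step 1 stand-in: treat each non-empty bullet line as one requirement.
    return [line.strip("-• ").strip()
            for line in job_description.splitlines() if line.strip()]

def match_score(requirement: str, bullet: str) -> float:
    # Step 3 stand-in: token overlap as a crude proxy for semantic similarity.
    req, res = set(requirement.lower().split()), set(bullet.lower().split())
    return len(req & res) / max(len(req), 1)

def build_guide(job_description: str, resume_bullets: list[str],
                company_intel: list[str]) -> dict:
    requirements = parse_requirements(job_description)           # step 1
    talking_points = []
    for req in requirements:                                     # step 3
        best = max(resume_bullets, key=lambda b: match_score(req, b))
        if match_score(req, best) > 0.2:
            talking_points.append({"requirement": req, "story": best})
    return {
        "company_intel": company_intel,                          # step 2 (fetched upstream)
        "talking_points": talking_points,
        "questions": [f"Tell me about your experience with {tp['requirement']}"
                      for tp in talking_points],                 # step 4 (LLM in production)
    }

guide = build_guide(
    "- API design\n- platform migration",
    ["Led the platform migration at Acme Corp", "Wrote API design docs"],
    ["Treasury API launched"],
)
```

The 0.2 match threshold and question template are placeholders; the real engine replaces step 4 with LLM synthesis against the structured inputs.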
User Flow:
Key Design Decisions:
Decision 1: Auto-generation vs. Manual Request
Decision 2: LLM vs. Template-Based Generation
Decision 3: Web View vs. PDF Export (MVP)
Scope Boundary: This feature generates static prep content. It does not simulate voice/video interviews, schedule mock sessions with humans, or provide real-time coaching during actual interviews.
Integration Touchpoints:
Wireframes:
┌─────────────────────────────────────────────────────────────────┐
│ ← Back to Board Tsenta [User] │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 🎯 Interview Prep: Senior PM at Stripe │
│ Generated 2 min ago • Last updated: Real-time │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ Company │ │ Questions │ │ Your Talking Points │ │
│ │ Intel │ │ (12) │ │ (8) │ │
│ └─────────────┘ └─────────────┘ └─────────────────────┘ │
│ │
│ RECENT COMPANY MOVES │
│ • Launched new Treasury API (3 days ago) — likely interview │
│ topic given role focus on B2B products │
│ • Hiring freeze lifted in Engineering (source: TechCrunch) │
│ │
│ INTERVIEWER INTEL │
│ • Sarah Chen (VP Product) — former Google Pay, likely to ask │
│ about platform migration experience │
│ │
│ [Refresh Data] [Mark as Prepped] [→ Questions] │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ ← Back to Guide Tsenta [User] │
├─────────────────────────────────────────────────────────────────┤
│ │
│ LIKELY QUESTIONS: Senior PM at Stripe │
│ │
│ HIGH CONFIDENCE (based on 47 similar Stripe PM interviews) │
│ │
│ □ "Tell me about a time you had to sunset a product" │
│ 💡 Your angle: Mention the Analytics Dashboard deprecation │
│ at Acme Corp (2023) — saved $400K/year, 0 customer churn │
│ │
│ □ "How would you improve Stripe's onboarding flow?" │
│ 💡 Your angle: Reference your Checkout optimization work │
│ (18% conversion lift) — framework: Measure, Isolate, Test │
│ │
│ MEDIUM CONFIDENCE │
│ □ "Explain PCI compliance to a 5-year-old" │
│ 💡 Your angle: Use the "piggy bank" analogy from your │
│ fintech blog post (linked in resume) │
│ │
│ [☆ Star for Practice] [Copy to My Notes] │
└─────────────────────────────────────────────────────────────────┘
Phase 1 — MVP: 6 weeks
US1 — Interview Detection
US2 — Guide Generation
US3 — Guide Delivery
Out of Scope (Phase 1):
| Feature | Why Not Phase 1 |
|---|---|
| PDF Export | Engineering complexity (CSS pagination) for <5% of users who requested it |
| Mock Interview Voice Mode | Requires audio infrastructure and 4 additional weeks |
| Interviewer-specific question prediction | Depends on LinkedIn scraping which is legally risky and unreliable |
| Multi-language guides | 94% of Tsenta users are English-speaking (source: analytics) |
Phase 1.1 — 2 weeks post-MVP:
Phase 1.2 — 4 weeks post-MVP:
Primary Metrics:
| Metric | Baseline | Target (D90) | Kill Threshold | Measurement Method | Owner |
|---|---|---|---|---|---|
| Prep guide generation rate | 0% | 75% of detected interviews | <40% at D30 | DB query: interviews with guides / total interviews | Data Eng |
| Time-to-prep (survey) | 3.8 hrs (n=94) | ≤45 min | >2 hrs at D60 | In-app survey post-interview (n≥100) | PM |
| Interview-to-offer conversion | 18% (current platform avg) | 24% | <20% at D90 | User-reported outcome + offer letter upload | Growth |
Guardrail Metrics (must NOT degrade):
| Guardrail | Threshold | Action if Breached |
|---|---|---|
| Core application submission rate | ≥95% of baseline | Pause auto-generation (possible distraction from core loop) |
| User churn (30-day) | ≤8% (current) | >10% → investigate guide quality causing discouragement |
| Support tickets per user | ≤0.15/month | >0.25/month → guides generating incorrect info |
Leading Indicators (D14 check):
What We Are NOT Measuring:
RISK 1 — LLM Hallucination on Company Facts
RISK 2 — Low User Adoption Due to Distrust of AI
RISK 3 — LinkedIn or Huntr Launch Competitive Feature
RISK 4 — API Cost Explosion at Scale
RISK 5 — Privacy Violation via Resume Data
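The exposure behind Risk 4 can be bounded with a per-guide token model. Token counts and per-token prices below are assumptions to validate against the actual provider bill, not measurements:

```python
# Back-of-envelope LLM cost per guide (all figures are assumptions).
input_tokens = 6_000     # JD + parsed resume + scraped company intel
output_tokens = 2_500    # structured guide
price_in = 2.50 / 1e6    # $ per input token (assumed provider rate)
price_out = 10.00 / 1e6  # $ per output token (assumed provider rate)

cost_per_guide = input_tokens * price_in + output_tokens * price_out
guides_per_month = 18_400  # interview volume from analytics
monthly_llm_cost = cost_per_guide * guides_per_month
print(f"${cost_per_guide:.3f}/guide, ${monthly_llm_cost:,.0f}/month")
```

Under these assumptions cost lands well below the $0.15/guide budget, but retries, longer scraped intel, and regeneration requests all inflate input tokens, so the 10x-scale assumption still needs infrastructure validation.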
Kill Criteria — we pause Phase 2 and conduct a full review if ANY condition is met within 90 days:
≥5% of generated guides contain factual errors reported by users (quality failure)
Decision 1: LLM Provider and Model
Decision 2: Data Retention for Generated Guides
Decision 3: Question Confidence Scoring
Decision 4: Resume Parsing Depth
Decision 5: Company Data Sources
Before / After Narrative:
Before: Sarah gets an email—interview with Stripe tomorrow for the Senior PM role. She panics. She opens half a dozen tabs: Glassdoor for interview questions (finding outdated 2019 posts), LinkedIn to stalk her interviewer (finds name but no common connections), Stripe's blog (reads 3 engineering posts she doesn't understand). She stays up until 1am trying to remember which of her projects involved "API design" because the JD mentions it. She walks in tired, mentions a deprecated product feature because she read old news, and fumbles when asked about her "platform migration experience"—she had it, but didn't prep the story.
After: Sarah drags her Stripe application to "Interview" in Tsenta. While she makes coffee, the system reads the JD, sees "platform migration" and "API design," pulls her resume bullet about the Acme Corp checkout migration, and checks Stripe's blog for yesterday's Treasury API launch. She opens the guide: 4-minute read. She sees exactly which stories to tell, knows Sarah Chen (her interviewer) came from Google Pay, and notes the Treasury API launch to mention. She practices the suggested talking points for 20 minutes, marks it prepped, and sleeps 8 hours. She walks in confident, references the recent launch, and nails the migration story with metrics.
Assumptions vs Validated:
| Assumption | Status |
|---|---|
| GPT-4o can consistently output valid JSON schema for question generation | ⚠ Unvalidated — needs confirmation from ML Eng by Week 2 |
| Email parsing can detect interview invitations with >95% precision | ⚠ Unvalidated — needs confirmation from Data Eng by Week 3 |
| Users will open auto-generated guides within 4 hours (motivation window) | ⚠ Unvalidated — needs confirmation from User Research by Week 4 |
| Company blog scraping does not violate robots.txt or ToS for major employers | ⚠ Unvalidated — legal review required from Legal team by Week 3 |
| Resume parsing accuracy is sufficient to match bullets to JD requirements | ⚠ Unvalidated — needs confirmation from NLP team by Week 2 |
| Cost per guide remains <$0.15 at 10x current scale | ⚠ Unvalidated — needs confirmation from Infrastructure by Week 5 |
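The first assumption in the table (consistent valid-JSON output) can be smoke-tested offline against recorded model responses before Week 2; the schema fields here are hypothetical stand-ins for the real guide schema:

```python
import json

# Hypothetical guide shape — a cheap offline smoke test for the
# "GPT-4o outputs valid JSON schema" assumption, no API call needed.
REQUIRED = {"questions": list, "talking_points": list, "confidence": str}

def is_valid_guide(raw: str) -> bool:
    """Return True if a model response parses and matches the expected shape."""
    try:
        guide = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return all(isinstance(guide.get(k), t) for k, t in REQUIRED.items())

# One well-formed and one malformed model response:
good = '{"questions": ["Q1"], "talking_points": ["T1"], "confidence": "high"}'
bad = '{"questions": "Q1"}'  # wrong type, missing keys
print(is_valid_guide(good), is_valid_guide(bad))  # True False
```

Running a batch of recorded completions through a check like this gives ML Eng a pass rate to report against the Week 2 deadline; a full JSON Schema validator would replace the `REQUIRED` dict in production.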
Pre-Mortem:
It is 6 months from now and this feature has failed. The 3 most likely reasons are:
The guides were too generic. Users expected hyper-personalized coaching, but the LLM produced bland, obvious advice ("Be sure to mention your experience"). Users tried it once, saw no value over Google, and ignored subsequent guides. We failed to validate the "personalization depth" assumption with a high-fidelity prototype before building the full pipeline.
False positive interview detection spammed users. The email parser flagged "We received your application" as an interview, generating 50 useless guides in week one. Users lost trust in the automation and disabled notifications entirely, missing real interview alerts. We didn't invest enough in the detection model training data.
We solved the wrong moment. Users actually need help during the interview (real-time support) or before they apply (resume tailoring), not the 24-hour prep window. We assumed the interview moment was high-anxiety/high-value, but users actually feel confident once they land the interview; the anxiety is earlier in the funnel. We didn't validate the JTBD timing with behavioral data.
What success actually looks like: Six months post-launch, users mention "the prep guide" unprompted in NPS surveys as the reason they chose Tsenta over Huntr. The support team stops receiving "How do I prepare for this interview?" tickets entirely. In the board meeting, the CEO cites a 6-percentage-point improvement in interview-to-offer rates as evidence that Tsenta doesn't just get users interviews—it gets them jobs. The engineering team is proud because the feature runs on 99.9% auto-pilot with near-zero hallucination reports.