THE ASK: Approve a 9-week, ~₹85L build to ship an AI Recruiter Copilot that automatically surfaces the top 10 candidate matches from foundit's database when a recruiter posts a job, with one-line fit reasons, reducing initial screening from ~5 hours to under 10 minutes.
THE BET: We believe 35% of Premium-tier recruiters who post a job will use the Copilot's shortlist within 14 days of access, and that 70% of those users will approve at least one candidate from the AI-generated list, validating the match quality.
THE ROI EQUATION: 9,200 Premium recruiter seats (source: internal analytics, Q4 2024) × 30% adoption rate (assumption — validate with pilot) × 12 jobs/year (source: internal analytics, avg jobs/recruiter/year) × ₹1,250 value per job from time saved (source: regional cost benchmark for HR executive at ₹625/hr × 4.8 hrs saved) = ₹4.14 Crore/year added value. If adoption is 40% of estimate: ₹1.66 Crore/year.
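For auditability, the base-case and downside arithmetic can be reproduced in a few lines (a sanity-check sketch; all inputs are the memo's own figures):

```python
# ROI equation sanity check -- all inputs are the figures stated above.
seats = 9_200           # Premium recruiter seats (internal analytics, Q4 2024)
adoption = 0.30         # assumed adoption rate (to be validated with pilot)
jobs_per_year = 12      # avg jobs per recruiter per year (internal analytics)
value_per_job = 1_250   # INR saved per job (625 INR/hr x 4.8 hrs)

annual_value = seats * adoption * jobs_per_year * value_per_job
print(f"Base case: INR {annual_value / 1e7:.2f} Crore/year")                 # 4.14
print(f"Downside (40% of estimate): INR {0.4 * annual_value / 1e7:.2f} Cr")  # 1.66
```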
WHAT THIS IS: A deterministic, rules-based AI scoring engine that ranks in-network candidates against a new job post and presents a vetted shortlist for one-click approval within the recruiter dashboard. WHAT THIS IS NOT: An outbound candidate sourcing tool, a generative AI for writing outreach messages, or a replacement for human recruiter judgment and interviews.
Market Position: The core job of a recruiter on foundit is to fill open roles quickly with quality candidates. Today, they perform this job by manually crafting Boolean searches, sifting through hundreds of profiles, and maintaining mental scoring rubrics—a process that is slow, inconsistent, and scales poorly with open req volume. Existing competitors address parts of this job: LinkedIn Recruiter uses a powerful graph for broad search, Naukri Recruiter offers basic keyword filters, and SeekOut provides deep technical talent intelligence. Our wedge is zero-query, instant candidate matching because we own the proprietary candidate profile data and job schema, allowing us to pre-compute matches at post-time with high precision, eliminating the search step entirely.
Competitive Landscape:
| Capability | LinkedIn Recruiter | Naukri Recruiter | foundit AI Copilot |
|---|---|---|---|
| Automatic match on job post creation | ❌ (requires manual search) | ❌ (requires manual search) | ✅ (unique) |
| Ranking based on multi-factor fit (skills, title, exp, location) | ✅ (via complex query builder) | ✅ (basic keyword + filters) | ✅ (pre-configured weighted model) |
| One-click candidate approval into workflow | ❌ (export required) | ❌ (manual add to folder) | ✅ |
| "Reason for fit" explainability | ❌ (score only) | ❌ | ✅ (one-line attribute match) |
| WHERE WE LOSE | Ecosystem & Network Breadth — LinkedIn's 1B+ user graph is unmatchable for out-of-network sourcing. | Market Volume & Brand Trust — Naukri's dominant brand and candidate volume in India. | Out-of-network reach — matching is limited to candidates already in foundit's database. |
Our wedge is immediate time-to-value for in-network screening because we eliminate the query formulation step entirely and deliver an actionable shortlist in under two minutes.
WHO / JTBD: When a corporate HR generalist or staffing agency recruiter using foundit needs to fill a new open requisition, they want to quickly identify a shortlist of qualified, available candidates from the database so they can initiate outreach and fill the role without spending half a day on manual resume screening.
WHERE IT BREAKS: Today, the recruiter crafts a Boolean search string (e.g., "Java AND Spring Boot AND Bangalore"), runs it, and gets 250+ results. They then manually scan each profile for role title alignment, years of experience match, skill keyword density, and current notice period. This process is iterative, repetitive, and prone to both omission (good candidates buried on page 5) and fatigue-based error.
WHAT IT COSTS:
| Metric | Measured Baseline |
|---|---|
| Avg. time to create initial manual shortlist of 10 candidates | 4.8 hours (n=112 recruiter surveys, Q3 2024) |
| Avg. number of open requisitions per Premium recruiter per month | 1.2 (source: internal dashboard data) |
| Effective hourly cost of a recruiter (blended HR/agency) | ₹625/hr (source: Regional Cost Benchmarks for India) |
Business case math: 4.8 hrs/job × 1.2 jobs/month × 12 months × ₹625/hr = ₹43,200/year in recoverable time per recruiter. For the target 9,200 Premium seat pool, the total addressable time cost is ~₹39.7 Crore/year.
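The same math as a reproducible sketch (figures are those in the table above):

```python
# Recoverable time cost, per recruiter and for the full Premium seat pool.
hours_per_job = 4.8     # avg manual shortlisting time (n=112 survey, Q3 2024)
jobs_per_month = 1.2    # avg open requisitions per Premium recruiter per month
hourly_cost = 625       # INR, blended recruiter cost (regional benchmark)
seats = 9_200           # Premium recruiter seat pool

per_recruiter = hours_per_job * jobs_per_month * 12 * hourly_cost
pool_total = per_recruiter * seats
print(f"INR {per_recruiter:,.0f}/year recoverable per recruiter")  # 43,200
print(f"~INR {pool_total / 1e7:.1f} Crore/year across the pool")   # 39.7
```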
JTBD statement: "When I post a new job, I want to immediately see a ranked shortlist of the best matching candidates from foundit's database with clear reasons why they fit, so I can approve them for outreach in one click instead of crafting searches and reading hundreds of profiles."
Core Data Model & Scoring Engine: The feature introduces a new ai_shortlist table linked to the job_post and candidate_profile tables. Upon job post creation, a scoring job is triggered. The scoring model uses a weighted sum of normalized factors: Skill Keyword Match (40% weight), Title Seniority Alignment (25%), Years of Experience Band (20%), Location Match (10%), and Profile Recency (5%). Candidates are ranked, and the top 10 non-duplicate matches are selected. The "fit reason" is generated by selecting the top-contributing factor (e.g., "5/6 key skills match").
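To make the model concrete, here is a minimal sketch of the weighted-sum ranking and fit-reason selection. Only the weights come from the spec above; the factor normalization, data shapes, and function names are illustrative assumptions:

```python
# Weights are the spec's values; everything else is an illustrative assumption.
WEIGHTS = {
    "skill_match": 0.40,      # fraction of the job's key skills found on profile
    "title_alignment": 0.25,  # title/seniority match, normalized to [0, 1]
    "experience_band": 0.20,  # closeness to the required years-of-experience band
    "location_match": 0.10,   # 1.0 same city, partial for region, else 0.0
    "profile_recency": 0.05,  # fresher profiles score higher
}

def score_candidate(factors: dict[str, float]) -> float:
    """Weighted sum of normalized factor scores (each assumed in [0, 1])."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

def top_matches(candidates: list[dict], n: int = 10) -> list[dict]:
    """Rank candidates by score; attach the top-contributing factor as fit reason."""
    for c in candidates:
        c["score"] = score_candidate(c["factors"])
        # "Fit reason" = the single highest-contributing weighted factor.
        c["fit_reason"] = max(
            WEIGHTS, key=lambda f: WEIGHTS[f] * c["factors"].get(f, 0.0)
        )
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:n]
```

In production this would run as the post-time scoring job writing the top 10 rows to ai_shortlist; the sketch covers only ranking and fit-reason selection, and the fit-reason string itself (e.g., "5/6 key skills match") would be rendered from the winning factor.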
Primary User Flow:
┌────────────────────────────────────────────────────────────────────────────┐
│ Job: Senior Backend Engineer - Bangalore [Back to Jobs] [Refresh] │
├────────────────────────────────────────────────────────────────────────────┤
│ CANDIDATES > SEARCH > AI SHORTLIST (10) │
├────────────────────────────────────────────────────────────────────────────┤
│ ✅ Amit Sharma (8.2 yrs) | Java, Spring, AWS, Kafka, Docker │
│ 🟢 6/8 skills match • Ex: Flipkart • Notice: 30 days [APPROVE] [SKIP]│
│ │
│ ✅ Priya Reddy (6.5 yrs) | Java, Microservices, PostgreSQL, GCP │
│ 🟢 Title match (Backend Lead) • Ex: Razorpay • Notice: 15 days [APPROVE]│
│ │
│                                         [View 8 more matches]              │
└────────────────────────────────────────────────────────────────────────────┘
Scope & Phasing:
Phase 1 — MVP (9 weeks):
US#1 — Automated Shortlist Generation (scoring job populates the ai_shortlist table on job post creation)
US#2 — Shortlist Display & One-Click Approval
US#3 — Eligibility & Access Control
Out of Scope (Phase 1):
| Feature | Why Not Phase 1 |
|---|---|
| Manually trigger re-scoring for an edited job | Complexity of handling state changes; added in Phase 1.1 |
| Customizable weighting for scoring factors | Need to validate default weights work for 80% of cases first |
| Bulk-approve/skip actions for the entire list | Simpler interaction model (1-click per candidate) reduces error risk |
| Candidate profile preview within the shortlist card | Requires significant UI refactoring; added in Phase 1.2 |
| Integration with external candidate sources (e.g., LinkedIn) | Massive scope increase; potential Phase 2 exploration |
Phase 1.1 (4 weeks post-MVP): manual "Refresh Shortlist" re-scoring for edited jobs.
Phase 1.2 (8 weeks post-MVP): candidate profile preview within the shortlist card.
Approach: Build the core scoring engine and inline approval UI first (Phase 1), deferring advanced controls and integrations. This prioritizes validating the core user behavior (will recruiters use and trust an auto-generated list?) over completeness.
Trade-off Analysis: We trade feature completeness (no re-scoring on edit, no custom weights, no bulk actions) for speed to validation; each deferred item has a scheduled re-entry point in Phase 1.1/1.2 if the core metrics hold.
Contingency Plan: If Phase 1 metrics hit Kill Criteria, we will not proceed to Phase 1.1. Instead, we will de-prioritize the feature roadmap, sunset the UI elements for new users, and maintain the backend scoring job only for existing data analysis to inform future search improvements.
Primary Metrics (D90):
| Metric | Baseline | Target (D90) | Kill Threshold | Measurement Method |
|---|---|---|---|---|
| Avg. time to first shortlist action (approve/skip) after job post | 4.8 hours (manual) | ≤15 minutes | >60 minutes at D90 | Mixpanel timestamp from job_publish to first shortlist click |
| AI Shortlist Utilization Rate (% of Premium jobs where shortlist is viewed) | 0% | ≥35% | <15% at D90 | Event tracking on shortlist section view |
| Candidate Approval Rate (% of viewed shortlists where ≥1 candidate is approved) | N/A | ≥70% | <40% at D90 | Event tracking on approve button |
Guardrail Metrics (must NOT degrade):
| Guardrail | Current Baseline | Breach Threshold (triggers review) |
|---|---|---|
| Manual search usage per job (searches run) | 3.2 avg | Increase >50% (to 4.8) at D90 |
| Overall job post-to-first-contact time | 28.5 hrs avg | Increase >20% (to 34.2 hrs) at D90 |
| P95 latency for shortlist generation | <120s (launch target) | >300s for 3 consecutive days |
Leading Indicator (D14): If ≥25% of Premium recruiters who post a job view the AI shortlist within D7, we predict D90 utilization will hit target (based on analogous dashboard feature adoption curve).
What We Are NOT Measuring:
Risk 1 — Poor Match Quality Discourages Use. Failure Mode: It is 4 weeks after launch. Recruiters are viewing the AI shortlist but immediately skipping all candidates because the matches are irrelevant (e.g., wrong seniority, mismatched core skills). Adoption plateaus below 10%. Probability: Medium. Impact: High. Mitigation: Launch a closed pilot with 50 trusted recruiter partners for 2 weeks before general availability; manually review their approval/skip patterns and adjust scoring weights. Owner: Head of Product; deadline: Week 7 of build.
Risk 2 — Algorithmic Bias Creates Legal & Reputational Exposure. Failure Mode: It is 3 months after launch. An analysis reveals the scoring model disproportionately downgrades candidates from certain geographic regions or with non-standard job titles, leading to a potential discrimination complaint. Probability: Low. Impact: Critical. Mitigation: Before launch, engage the Legal & Compliance team to review the scoring factor definitions and weighting for compliance with Indian labor laws; conduct a bias audit on historical data using a 3rd-party tool (e.g., Fairlearn). Owner: Chief Legal Officer; deadline: Week 8. If compliance sign-off is not received by Week 8, launch is blocked.
Risk 3 — Performance Degradation at Scale. Failure Mode: It is launch day. The scoring job queues are overwhelmed by a surge of job posts, causing P95 generation latency to exceed 5 minutes. Recruiters abandon the feature. Probability: Medium. Impact: High. Mitigation: Implement robust job queuing (Redis Queue) with auto-scaling workers; define scaling triggers (>100 concurrent scoring jobs triggers +2 workers); conduct load testing at 2x expected peak volume (500 concurrent jobs) in Week 7. Owner: Engineering Lead; deadline: Week 8.
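The scaling trigger in that mitigation can be stated precisely. In the sketch below, only the >100-jobs/+2-workers rule comes from the plan; the safety cap and function shape are assumptions:

```python
SCALE_UP_THRESHOLD = 100  # concurrent scoring jobs that trigger a scale-up (from plan)
WORKERS_PER_STEP = 2      # workers added per trigger (from plan)
MAX_WORKERS = 20          # assumed safety cap -- not specified in this memo

def desired_workers(current: int, concurrent_jobs: int) -> int:
    """Apply the scaling rule once and return the new worker count."""
    if concurrent_jobs > SCALE_UP_THRESHOLD:
        return min(current + WORKERS_PER_STEP, MAX_WORKERS)
    return current
```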
Risk 4 — Channel Conflict with Enterprise Sales. Failure Mode: It is 2 months after launch. Enterprise sales reps report pushback from large clients who view this as a "premium feature" that should be included in their existing enterprise contracts, creating renewal friction. Probability: High. Impact: Medium. Mitigation: Before launch, align with Sales leadership on positioning — this is a "productivity add-on" for individual recruiter seats, not a platform capability — and create a clear internal FAQ and discount guidelines. Owner: VP of Sales; deadline: Week 4.
Kill Criteria — we pause and conduct a full review if ANY of the Kill Thresholds in the Primary Metrics table are met within 90 days: shortlist utilization <15%, candidate approval rate <40%, or time to first shortlist action >60 minutes at D90.
Decision: Scoring Model Complexity Choice Made: Use a deterministic, weighted-factor model (skills, title, exp, location, recency) over a monolithic LLM for ranking. Rationale: Deterministic models are explainable ("6/8 skills match"), auditable for bias, cheaper to run at scale, and provide consistent results. LLM-based ranking is a black box, has higher latency/cost, and its "reasoning" is not easily trusted for a v1. An LLM may be used later solely for generative "fit reason" phrasing.
Decision: Source of Candidate Pool Choice Made: Limit matches to candidates with active, searchable profiles within the foundit database only. Rationale: Including external/social profiles (LinkedIn, GitHub) would expand coverage but introduces data freshness problems, permission issues, and significantly increases complexity. Our unique wedge is depth on our owned data, not breadth.
Decision: User Control & Override Choice Made: AI shortlist is a supplemental section above manual search results; recruiters can ignore it entirely. Rationale: This is an assistant, not an autopilot. Forcing use would create resentment if a recruiter has a specific candidate in mind. Positioning it as a time-saver that can be ignored respects recruiter autonomy and reduces perceived risk.
Decision: Data Freshness & Re-scoring Choice Made: Score once, at job post creation. Do not automatically re-score the shortlist if the job post is edited. Rationale: Re-scoring on every edit adds complexity and could cause confusion ("my candidate disappeared"). If a job is edited significantly, the recruiter can manually trigger a re-match via a "Refresh Shortlist" button (Phase 1.1). Simpler v1.
Before / After Narrative: Before: Priya, an HR generalist at TechGrowth Inc., logs in on Monday with a new req for a "Senior Data Engineer." She spends 30 minutes crafting a search string. The results show 180 profiles. She spends the next 4 hours scanning, comparing, and cross-referencing profiles in tabs, eventually shortlisting 8 candidates into her pipeline. By lunch, she's fatigued and unsure if she missed a great candidate on page 4. After: Priya publishes the "Senior Data Engineer" job. She attends a stand-up meeting. Returning to foundit 15 minutes later, a notification shows her AI Shortlist is ready. She clicks in, sees 10 ranked profiles, and reads the one-line reasons: "8/10 skills match," "Title match: Data Lead at Current Co." She approves 6 with a click, skips 2 with mismatched locations, and has her outreach list ready by 10:30 AM.
Pre-Mortem: It is 6 months from now and this feature has failed. The 3 most likely reasons, per the highest-probability entries in the risk register above: (1) match quality was too poor to trust, so recruiters skipped every suggestion (Risk 1); (2) shortlist generation was too slow at peak load and recruiters reverted to manual search (Risk 3); (3) channel conflict with enterprise sales stalled rollout and soured renewals (Risk 4).
What success actually looks like: Six months post-launch, recruiters in our community forums refer to "running the Copilot" as a standard step in their workflow. Product marketing uses unsolicited quotes from staffing agency owners about cutting onboarding time for new recruiters by weeks. The internal sales team reports "AI Shortlist adoption" is a top-three question in enterprise renewal conversations, and the finance team greenlights Phase 2 investment based on clear attach-rate data from the premium add-on.