PRD · April 27, 2026

foundit.in

Executive Brief

THE ASK: Approve a 9-week, ~₹85L build to ship an AI Recruiter Copilot that automatically surfaces the top 10 candidate matches from foundit's database when a recruiter posts a job, with one-line fit reasons, reducing initial screening from ~5 hours to under 10 minutes.

THE BET: We believe 35% of Premium-tier recruiters who post a job will use the Copilot's shortlist within 14 days of access, and that 70% of those users will approve at least one candidate from the AI-generated list, validating the match quality.

THE ROI EQUATION: 9,200 Premium recruiter seats (source: internal analytics, Q4 2024) × 30% adoption rate (assumption — validate with pilot) × 12 jobs/year (source: internal analytics, avg jobs/recruiter/year) × ₹1,250 value per job from time saved (a conservative ~40% capture of the full ₹3,000 time cost: ₹625/hr × 4.8 hrs saved) = ₹4.14 Crore/year added value. If adoption is 40% of estimate: ₹1.66 Crore/year.
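As a back-of-envelope check, the equation above reproduces exactly in a few lines (all figures are from this PRD; the variable names are illustrative):

```python
# Back-of-envelope reproduction of the ROI equation above.
seats = 9_200          # Premium recruiter seats (internal analytics, Q4 2024)
adoption = 0.30        # assumed adoption rate -- to be validated with the pilot
jobs_per_year = 12     # avg jobs per recruiter per year (internal analytics)
value_per_job = 1_250  # INR value captured per job (PRD figure)

annual_value_inr = seats * adoption * jobs_per_year * value_per_job
crore = annual_value_inr / 1e7
print(f"Base case: ₹{crore:.2f} Cr/year")                           # ₹4.14 Cr
print(f"Downside (40% of estimate): ₹{crore * 0.4:.2f} Cr/year")    # ₹1.66 Cr
```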

WHAT THIS IS: A deterministic, rules-based AI scoring engine that ranks in-network candidates against a new job post and presents a vetted shortlist for one-click approval within the recruiter dashboard. WHAT THIS IS NOT: An outbound candidate sourcing tool, a generative AI for writing outreach messages, or a replacement for human recruiter judgment and interviews.

Strategic Context

Market Position: The core job of a recruiter on foundit is to fill open roles quickly with quality candidates. Today, they perform this job by manually crafting Boolean searches, sifting through hundreds of profiles, and maintaining mental scoring rubrics—a process that is slow, inconsistent, and scales poorly with open req volume. Existing competitors address parts of this job: LinkedIn Recruiter uses a powerful graph for broad search, Naukri Recruiter offers basic keyword filters, and SeekOut provides deep technical talent intelligence. Our wedge is zero-query, instant candidate matching because we own the proprietary candidate profile data and job schema, allowing us to pre-compute matches at post-time with high precision, eliminating the search step entirely.

Competitive Landscape:

| Capability | LinkedIn Recruiter | Naukri Recruiter | foundit AI Copilot |
|---|---|---|---|
| Automatic match on job post creation | ❌ (requires manual search) | ❌ (requires manual search) | ✅ (unique) |
| Ranking based on multi-factor fit (skills, title, exp, location) | ✅ (via complex query builder) | ✅ (basic keyword + filters) | ✅ (pre-configured weighted model) |
| One-click candidate approval into workflow | ❌ (export required) | ❌ (manual add to folder) | ✅ |
| "Reason for fit" explainability | ❌ (score only) | ❌ | ✅ (one-line attribute match) |
| WHERE WE LOSE | Ecosystem & Network Breadth — LinkedIn's 1B+ user graph is unmatchable for out-of-network sourcing. | Market Volume & Brand Trust — Naukri's dominant brand and candidate volume in India. | — |

Our wedge is immediate time-to-value for in-network screening because we eliminate the query formulation step entirely and deliver an actionable shortlist in under two minutes.

Problem Statement

WHO / JTBD: When a corporate HR generalist or staffing agency recruiter using foundit needs to fill a new open requisition, they want to quickly identify a shortlist of qualified, available candidates from the database so they can initiate outreach and fill the role without spending half a day on manual resume screening.

WHERE IT BREAKS: Today, the recruiter crafts a Boolean search string (e.g., "Java AND Spring Boot AND Bangalore"), runs it, and gets 250+ results. They then manually scan each profile for role title alignment, years of experience match, skill keyword density, and current notice period. This process is iterative, repetitive, and prone to both omission (good candidates buried on page 5) and fatigue-based error.

WHAT IT COSTS:

| Metric | Measured Baseline |
|---|---|
| Avg. time to create initial manual shortlist of 10 candidates | 4.8 hours (n=112 recruiter surveys, Q3 2024) |
| Avg. number of open requisitions per Premium recruiter per month | 1.2 (source: internal dashboard data) |
| Effective hourly cost of a recruiter (blended HR/agency) | ₹625/hr (source: Regional Cost Benchmarks for India) |

Business case math: 4.8 hrs/job × 1.2 jobs/month × 12 months × ₹625/hr = ₹43,200/year in recoverable time per recruiter. For the target 9,200 Premium seat pool, the total addressable time cost is ~₹39.7 Crore/year.

JTBD statement: "When I post a new job, I want to immediately see a ranked shortlist of the best matching candidates from foundit's database with clear reasons why they fit, so I can approve them for outreach in one click instead of crafting searches and reading hundreds of profiles."

Solution Design

Core Data Model & Scoring Engine: The feature introduces a new ai_shortlist table linked to the job_post and candidate_profile tables. Upon job post creation, a scoring job is triggered. The scoring model uses a weighted sum of normalized factors: Skill Keyword Match (40% weight), Title Seniority Alignment (25%), Years of Experience Band (20%), Location Match (10%), and Profile Recency (5%). Candidates are ranked, and the top 10 non-duplicate matches are selected. The "fit reason" is generated by selecting the top-contributing factor (e.g., "5/6 key skills match").
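As a sketch only (the weights are the ones specified above; the factor names, normalization, and fit-reason phrasing are illustrative assumptions), the scoring step could look like:

```python
# Sketch of the weighted-factor scoring model described above. Weights come
# from this PRD; factor names and normalization are assumptions.
WEIGHTS = {
    "skill_match": 0.40,      # Skill Keyword Match
    "title_alignment": 0.25,  # Title Seniority Alignment
    "experience_band": 0.20,  # Years of Experience Band
    "location_match": 0.10,   # Location Match
    "profile_recency": 0.05,  # Profile Recency
}

def score_candidate(factors):
    """factors: dict mapping factor name -> normalized value in [0, 1].
    Returns (total_score, top_factor); the top-contributing factor drives
    the one-line fit reason shown to the recruiter."""
    contributions = {name: WEIGHTS[name] * factors.get(name, 0.0)
                     for name in WEIGHTS}
    total = sum(contributions.values())
    top_factor = max(contributions, key=contributions.get)
    return total, top_factor

def shortlist(candidates, k=10):
    """Rank candidates by score; return top k as (candidate_id, score, reason)."""
    scored = [(cid, *score_candidate(f)) for cid, f in candidates.items()]
    return sorted(scored, key=lambda row: row[1], reverse=True)[:k]
```

A candidate matching 6 of 8 required skills would carry a skill_match value of 0.75, and skill-match would typically dominate the contribution breakdown, yielding a fit reason like "6/8 skills match".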

Primary User Flow:

  1. Recruiter clicks "Publish" on a job post in the dashboard.
  2. System displays a confirmation toast: "Job published. AI Copilot is finding your top 10 matches. This will take ~90 seconds."
  3. Upon completion, a persistent notification badge appears on the "Candidates" tab for that job.
  4. Recruiter navigates to the "Candidates" tab and sees a new "AI Shortlist" section above the manual search results.
┌────────────────────────────────────────────────────────────────────────────┐
│ Job: Senior Backend Engineer - Bangalore          [Back to Jobs] [Refresh] │
├────────────────────────────────────────────────────────────────────────────┤
│ CANDIDATES > SEARCH > AI SHORTLIST (10)                                   │
├────────────────────────────────────────────────────────────────────────────┤
│ ✅ Amit Sharma (8.2 yrs) | Java, Spring, AWS, Kafka, Docker               │
│    🟢 6/8 skills match • Ex: Flipkart • Notice: 30 days    [APPROVE] [SKIP]│
│                                                                           │
│ ✅ Priya Reddy (6.5 yrs) | Java, Microservices, PostgreSQL, GCP           │
│    🟢 Title match (Backend Lead) • Ex: Razorpay • Notice: 15 days [APPROVE]│
│                                                                           │
│ [View 8 more matches]                                        [APPROVE ALL]│
└────────────────────────────────────────────────────────────────────────────┘
  5. Recruiter reviews each profile with its one-line fit reason. Clicking "APPROVE" adds the candidate to the standard "Shortlisted" pipeline stage for that job and removes them from the AI Shortlist view. Clicking "SKIP" removes them from the shortlist view but leaves them in the database.
  6. Post-action, the recruiter continues with normal workflow using the now-populated shortlist.

Key Design Decisions:

  • Limit to 10 profiles: Forces high precision; prevents overload. More can be accessed via "View more".
  • Approve/Skip only: No "Maybe" state to force decisive action and clear signal data.
  • Pre-compute, not real-time: Acceptable 60-120 second delay ensures scoring uses complete, indexed data.

Acceptance Criteria

Phase 1 — MVP (9 weeks)

US#1 — Automated Shortlist Generation

  • Given a recruiter with a Premium subscription publishes a new job post with title, skills, experience, and location fields populated
  • When the job post is successfully saved to the database
  • Then within 120 seconds, a background job executes the scoring model against all active, eligible candidate profiles and persists the top 10 ranked matches with their calculated score and primary fit reason to the ai_shortlist table.
  • P1 Dimension: Then with ≥99.5% reliability, the shortlist generation job completes successfully.
  • If story fails, recruiters see no shortlist and must rely solely on manual search.
  • Validated by QA against a baseline of 500 historical job posts and their manual shortlists.

US#2 — Shortlist Display & One-Click Approval

  • Given a job post has a generated AI shortlist
  • When the recruiter navigates to the 'Candidates' tab for that job
  • Then they see an 'AI Shortlist' section containing up to 10 candidate cards, each displaying: candidate name, total years of experience, top 4-5 skills, last company, notice period, a one-line 'fit reason' (e.g., "5/6 key skills match"), and an 'APPROVE'/'SKIP' button.
  • P0 Dimension: Then with 100% consistency, clicking 'APPROVE' moves the candidate to the job's 'Shortlisted' pipeline stage and removes the card from the AI Shortlist view.
  • If story fails, the core user action is broken, blocking launch.
  • Validated by UX against a 10-recruiter pilot cohort.

US#3 — Eligibility & Access Control

  • Given any user accessing a job's candidate view
  • Then the 'AI Shortlist' section is only visible if: (a) the job was posted by a user with an active Premium subscription add-on, and (b) the current user has recruiter permissions for that job.
  • P0 Dimension: Then with 100% consistency, non-premium jobs or unauthorized users see no AI Shortlist UI.
  • Validated by Security against permission matrix.
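The visibility rule in US#3 reduces to a two-input predicate; a minimal sketch (the boolean parameters are assumptions standing in for the real subscription and permission lookups) also makes the permission matrix check trivial to automate:

```python
from itertools import product

# Visibility rule per US#3: BOTH conditions must hold for the UI to render.
def shortlist_visible(premium_addon_active: bool, has_recruiter_perms: bool) -> bool:
    # (a) job posted under an active Premium subscription add-on, AND
    # (b) the current user has recruiter permissions for that job.
    return premium_addon_active and has_recruiter_perms

# Exhaustive permission matrix: only (True, True) exposes the AI Shortlist UI.
matrix = {(a, b): shortlist_visible(a, b)
          for a, b in product([False, True], repeat=2)}
```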

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
|---|---|
| Manually trigger re-scoring for an edited job | Complexity of handling state changes; added in Phase 1.1 |
| Customizable weighting for scoring factors | Need to validate default weights work for 80% of cases first |
| Bulk-approve/skip actions for the entire list | Simpler interaction model (1 click per candidate) reduces error risk |
| Candidate profile preview within the shortlist card | Requires significant UI refactoring; added in Phase 1.2 |
| Integration with external candidate sources (e.g., LinkedIn) | Massive scope increase; potential Phase 2 exploration |

Phase 1.1 (4 weeks post-MVP):

  • "Refresh Shortlist" button for edited jobs.
  • "Approve All Visible" button.
  • Dashboard widget showing time saved per job via Copilot.

Phase 1.2 (8 weeks post-MVP):

  • Hover-card preview of full candidate profile from shortlist view.
  • Ability to provide negative feedback on a match ("Not a fit") to improve model.
  • Basic filter on shortlist (e.g., "Show only candidates with <15 days notice").

Phasing & Trade-offs

Approach: Build the core scoring engine and inline approval UI first (Phase 1), deferring advanced controls and integrations. This prioritizes validating the core user behavior (will recruiters use and trust an auto-generated list?) over completeness.

Trade-off Analysis:

  • Speed vs. Customization: We ship faster by using a fixed, well-researched weighting for the scoring model (skills 40%, title 25%, etc.). The cost is that a segment of power recruiters cannot tune it to their specific needs until a later phase. This is acceptable as power users can still use manual search.
  • Simplicity vs. Power: The "approve/skip only" action model is simple and forces a clear signal. The trade-off is inefficiency for recruiters who want to approve the entire list—they must click 10 times. The "Approve All" button in Phase 1.1 addresses this post-validation.
  • Owned Data vs. Comprehensive: Limiting to foundit profiles only guarantees data freshness and a consistent schema, speeding up development and ensuring reliability. The trade-off is a smaller candidate pool for niche roles. We accept this as our unique value proposition is speed on our data; breadth is a different product.

Contingency Plan: If Phase 1 metrics hit Kill Criteria, we will not proceed to Phase 1.1. Instead, we will de-prioritize the feature roadmap, sunset the UI elements for new users, and maintain the backend scoring job only for existing data analysis to inform future search improvements.

Success Metrics

Primary Metrics (D90):

| Metric | Baseline | Target (D90) | Kill Threshold | Measurement Method |
|---|---|---|---|---|
| Avg. time to first shortlist action (approve/skip) after job post | 4.8 hours (manual) | ≤15 minutes | >60 minutes at D90 | Mixpanel timestamp from job_publish to first shortlist click |
| AI Shortlist Utilization Rate (% of Premium jobs where shortlist is viewed) | 0% | ≥35% | <15% at D90 | Event tracking on shortlist section view |
| Candidate Approval Rate (% of viewed shortlists where ≥1 candidate is approved) | N/A | ≥70% | <40% at D90 | Event tracking on approve button |

Guardrail Metrics (must NOT degrade):

| Guardrail | Baseline | Breach Threshold |
|---|---|---|
| Manual search usage per job (searches run) | 3.2 avg | Increase >50% (to 4.8) at D90 |
| Overall job post-to-first-contact time | 28.5 hrs avg | Increase >20% (to 34.2 hrs) at D90 |
| P95 latency for shortlist generation | Launch target: <120s | >300s for 3 consecutive days |

Leading Indicator (D14): If ≥25% of Premium recruiters who post a job view the AI shortlist within D7, we predict D90 utilization will hit target (based on analogous dashboard feature adoption curve).

What We Are NOT Measuring:

  • Total number of candidates approved via Copilot: Raw volume doesn't indicate quality; focus is on percentage of jobs where it helped.
  • "Satisfaction" score from a pop-up survey: Prone to bias; we measure actual usage and time savings.
  • Model "accuracy" vs. human judgment: An irrelevant abstraction; the real metric is whether recruiters accept the recommendations (Approval Rate).

Risk Register

Risk 1 — Poor Match Quality Discourages Use
Failure Mode: It is 4 weeks after launch. Recruiters are viewing the AI shortlist but immediately skipping all candidates because the matches are irrelevant (e.g., wrong seniority, mismatched core skills). Adoption plateaus below 10%.
Probability: Medium. Impact: High.
Mitigation: Launch a closed pilot with 50 trusted recruiter partners for 2 weeks before general availability. Manually review their approval/skip patterns and adjust scoring weights.
Owner: Head of Product; deadline: Week 7 of build.

Risk 2 — Algorithmic Bias Creates Legal & Reputational Exposure
Failure Mode: It is 3 months after launch. An analysis reveals the scoring model disproportionately downgrades candidates from certain geographic regions or with non-standard job titles, leading to a potential discrimination complaint.
Probability: Low. Impact: Critical.
Mitigation: Before launch, engage the Legal & Compliance team to review the scoring factor definitions and weighting for compliance with Indian labor laws. Conduct a bias audit on historical data using a 3rd-party tool (e.g., Fairlearn).
Owner: Chief Legal Officer; deadline: Week 8. If compliance sign-off is not received by Week 8, launch is blocked.

Risk 3 — Performance Degradation at Scale
Failure Mode: It is launch day. The scoring job queues are overwhelmed by a surge of job posts, causing P95 generation latency to exceed 5 minutes. Recruiters abandon the feature.
Probability: Medium. Impact: High.
Mitigation: Implement robust job queuing (Redis Queue) with auto-scaling workers. Define scaling triggers: >100 concurrent scoring jobs triggers +2 workers. Conduct load testing at 2x expected peak volume (500 concurrent jobs) in Week 7.
Owner: Engineering Lead; deadline: Week 8.
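The scale-up rule in that mitigation can be sketched as a pure function (the >100 jobs / +2 workers trigger is from the mitigation above; the worker ceiling and the queue-depth source are assumptions):

```python
# Sketch of the worker auto-scaling trigger for the scoring queue.
SCALE_UP_THRESHOLD = 100  # concurrent scoring jobs (from the mitigation above)
WORKERS_PER_STEP = 2      # workers added per trigger (from the mitigation above)
MAX_WORKERS = 20          # assumed ceiling to cap infrastructure cost

def desired_workers(current_workers: int, concurrent_jobs: int) -> int:
    """Return the worker count after applying the scale-up rule once."""
    if concurrent_jobs > SCALE_UP_THRESHOLD:
        return min(current_workers + WORKERS_PER_STEP, MAX_WORKERS)
    return current_workers
```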

Risk 4 — Channel Conflict with Enterprise Sales
Failure Mode: It is 2 months after launch. Enterprise sales reps report pushback from large clients who view this as a "premium feature" that should be included in their existing enterprise contracts, creating renewal friction.
Probability: High. Impact: Medium.
Mitigation: Before launch, align with Sales leadership on positioning: this is a "productivity add-on" for individual recruiter seats, not a platform capability. Create clear internal FAQ and discount guidelines.
Owner: VP of Sales; deadline: Week 4.

Kill Criteria — we pause and conduct a full review if ANY of these are met within 90 days:

  1. AI Shortlist Utilization Rate is <15% among eligible Premium recruiters.
  2. Candidate Approval Rate from viewed shortlists is <40%.
  3. A critical, unanticipated bias or discrimination flaw is identified in the live model.
  4. Feature causes a ≥20% increase in overall job post-to-first-contact time.
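The three quantitative criteria lend themselves to an automated check against D90 metrics (the metric keys and dict shape are assumptions; criterion 3, the bias flaw, remains a manual judgment call):

```python
# Automated check for the quantitative kill criteria listed above.
KILL_RULES = {
    "utilization_rate": lambda v: v < 0.15,        # criterion 1: <15% utilization
    "approval_rate": lambda v: v < 0.40,           # criterion 2: <40% approval
    "contact_time_increase": lambda v: v >= 0.20,  # criterion 4: >=20% slower first contact
}

def breached(metrics):
    """Return the names of any kill criteria met by the supplied D90 metrics."""
    return [name for name, rule in KILL_RULES.items()
            if name in metrics and rule(metrics[name])]
```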

Strategic Decisions Made

Decision: Scoring Model Complexity
Choice Made: Use a deterministic, weighted-factor model (skills, title, exp, location, recency) over a monolithic LLM for ranking.
Rationale: Deterministic models are explainable ("6/8 skills match"), auditable for bias, cheaper to run at scale, and provide consistent results. LLM-based ranking is a black box, has higher latency/cost, and its "reasoning" is not easily trusted for a v1. An LLM may be used later solely for generative "fit reason" phrasing.

Decision: Source of Candidate Pool
Choice Made: Limit matches to candidates with active, searchable profiles within the foundit database only.
Rationale: Including external/social profiles (LinkedIn, GitHub) would expand coverage but introduces data freshness problems, permission issues, and significantly increases complexity. Our unique wedge is depth on our owned data, not breadth.

Decision: User Control & Override
Choice Made: AI shortlist is a supplemental section above manual search results; recruiters can ignore it entirely.
Rationale: This is an assistant, not an autopilot. Forcing use would create resentment if a recruiter has a specific candidate in mind. Positioning it as a time-saver that can be ignored respects recruiter autonomy and reduces perceived risk.

Decision: Data Freshness & Re-scoring
Choice Made: Score once, at job post creation. Do not automatically re-score the shortlist if the job post is edited.
Rationale: Re-scoring on every edit adds complexity and could cause confusion ("my candidate disappeared"). If a job is edited significantly, the recruiter can manually trigger a re-match via a "Refresh Shortlist" button (Phase 1.1). Simpler v1.

Appendix

Before / After Narrative:

Before: Priya, an HR generalist at TechGrowth Inc., logs in on Monday with a new req for a "Senior Data Engineer." She spends 30 minutes crafting a search string. The results show 180 profiles. She spends the next 4 hours scanning, comparing, and cross-referencing profiles in tabs, eventually shortlisting 8 candidates into her pipeline. By lunch, she's fatigued and unsure if she missed a great candidate on page 4.

After: Priya publishes the "Senior Data Engineer" job. She attends a stand-up meeting. Returning to foundit 15 minutes later, a notification shows her AI Shortlist is ready. She clicks in, sees 10 ranked profiles, and reads the one-line reasons: "8/10 skills match," "Title match: Data Lead at Current Co." She approves 6 with a click, skips 2 with mismatched locations, and has her outreach list ready by 10:30 AM.

Pre-Mortem: It is 6 months from now and this feature has failed. The 3 most likely reasons are:

  1. The match quality was "good enough" for junior recruiters but useless for experts, who found the fixed scoring model too rigid and missed the nuanced candidates they could find via manual search. They abandoned it after two tries, and it gained a reputation as a "toy."
  2. We failed to secure Legal/Compliance sign-off on the bias audit, causing a last-minute launch delay of 4 months. By the time we launched, the market narrative had moved on, and a competitor had launched a similar feature, neutralizing our first-mover advantage.
  3. The performance was unstable at scale, causing shortlists to take 5+ minutes during peak posting hours (10 AM-12 PM). Recruiters, expecting "instant" results, lost trust and reverted to the reliable, slow manual process.

What success actually looks like: Six months post-launch, recruiters in our community forums refer to "running the Copilot" as a standard step in their workflow. Product marketing uses unsolicited quotes from staffing agency owners about cutting onboarding time for new recruiters by weeks. The internal sales team reports "AI Shortlist adoption" is a top-three question in enterprise renewal conversations, and the finance team greenlights Phase 2 investment based on clear attach-rate data from the premium add-on.
