PRD · April 2, 2026

Imuii

Executive Brief

Enterprise clients onboarding onto Imuii's platform today experience a 12–18 day manual configuration process in which a solutions engineer provisions cloud infrastructure, configures SaaS integrations, and sets up AI model access based on Slack conversations and discovery call notes. Each client gets a different onboarding experience because there's no standardized intake process — a financial services client with 500 users and strict data residency requirements gets asked the same ad-hoc questions as a healthcare startup with 20 users. This inconsistency creates three failure modes: (1) clients miss go-live deadlines because a critical integration requirement surfaces in week 2 that should have been captured on day 1, (2) solutions engineers spend 14.3 hours per client on configuration tasks that follow a predictable pattern 73% of the time (source: internal time audit, Sept 2024, n=41 onboardings), and (3) clients arrive at day 30 unsure whether they've completed onboarding, leading to a 22% increase in support ticket volume in the first 60 days vs. baseline (source: Zendesk data, Q3 2024).

The business case: 8 solutions engineers × 14.3 hrs/client × 6.2 clients/month avg × $95/hr fully-loaded cost = $67,382/month in recoverable configuration time (source: HR comp data for SE band, client onboarding count from Salesforce Q3 2024). Annualized: $808,579/year. The feature also reduces median time-to-first-value from 18 days to a target of 8 days, which correlates with a 17 percentage point improvement in D90 logo retention based on cohort analysis of clients who went live in <10 days vs. >15 days (source: internal retention study, Aug 2024, n=89 clients). At an average first-year contract value of $127K, a 17pp retention improvement across 74 annual onboardings = $1.59M/year in retained ARR. Combined recoverable value: $2.4M/year. If adoption reaches only 40% of clients (due to highly custom enterprise deals that resist standardization): recoverable SE time drops to $323K/year and retention benefit drops to $636K/year = $959K/year floor case. Build cost estimated at $210K all-in (12 eng-weeks at $17.5K/week fully-loaded, source: finance planning model).
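
As a quick sanity check, the business-case arithmetic can be recomputed from the stated factors (figures are taken from this PRD; the prose rounds them):

```python
# Sanity check of the business-case arithmetic using the factors stated above.
SES = 8                  # solutions engineers
HRS_PER_CLIENT = 14.3    # SE hours per onboarding (Sept 2024 audit, n=41)
CLIENTS_PER_MONTH = 6.2  # average onboardings per month
RATE = 95                # fully-loaded $/hr for the SE band

monthly_se_cost = SES * HRS_PER_CLIENT * CLIENTS_PER_MONTH * RATE
annual_se_cost = monthly_se_cost * 12

# 17pp retention improvement x 74 annual onboardings x $127K first-year ACV
retention_value = 0.17 * 74 * 127_000

print(round(monthly_se_cost, 2))   # 67381.6
print(round(annual_se_cost, 2))    # 808579.2
print(round(retention_value))      # 1597660
```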

This feature is an AI-powered intake questionnaire (5 questions) that generates a personalized onboarding checklist, configuration guide with pre-filled values, and a go-live timeline with milestone dates based on detected complexity. It is not an RPA system that auto-executes the configuration — a human solutions engineer still provisions infrastructure and validates the setup, but they work from a generated runbook instead of starting from a blank Notion doc every time.

Strategic Context

Competitive Landscape:

How competitors solve this today:

  • Salesforce (via CPQ + Onboarding Checklists): Enterprises hire Salesforce to create a quote and a static checklist of onboarding steps, but the checklist is manually authored by the sales ops team and doesn't adapt to client-specific needs — it's the same 40-item list for everyone.
  • Workato (integration iPaaS with onboarding templates): Clients hire Workato to build integration recipes, and Workato provides pre-built templates for common SaaS connectors, but there's no onboarding orchestration layer — the client still has to figure out the sequence and dependencies themselves.
  • Userlane / WalkMe (digital adoption platforms): Clients hire these tools to create in-app onboarding tours, but they're user-facing UI walkthroughs for end-users, not B2B client onboarding orchestration for the vendor's implementation team.

| Capability                                        | Salesforce     | Workato             | Imuii Onboarding AI     |
|---------------------------------------------------|----------------|---------------------|-------------------------|
| Capture client requirements via structured intake | ❌             | ❌                  | ✅ (5-question AI form) |
| Auto-generate personalized onboarding checklist   | ❌ (static)    | ❌                  | ✅ (dynamic)            |
| Pre-fill cloud/SaaS config guides                 | ❌             | ✅ (templates only) | ✅                      |
| Estimate go-live timeline based on complexity     | ❌             | ❌                  | ✅ (unique)             |
| Client-visible progress tracker                   | ❌             | ❌                  | ✅                      |
| Deep CRM integration & workflow automation        | ✅ (SFDC wins) | ❌                  | ❌                      |

WHERE WE LOSE: Salesforce has deeper workflow automation and native CRM integration — clients already running Salesforce can build custom onboarding flows with Apex and Flow Builder, giving them more control than our opinionated 5-question intake. We lose to Salesforce when the client has a dedicated sales ops engineering team that wants to own the onboarding logic.

Our wedge is speed-to-credible-plan because sales is closing deals with "you'll be live in under 2 weeks" promises, and we can't deliver that today without this feature. Salesforce requires 6 weeks of implementation just to configure the onboarding workflow. We're targeting clients who want to onboard fast with minimal setup, not clients who want to build a custom onboarding engine.

Problem Statement

WHO / JTBD: When a solutions engineer at Imuii receives a new enterprise client handoff from sales, they want to quickly determine the client's technical requirements (cloud provider preference, required SaaS integrations, data residency constraints, AI model access needs, expected data volume) and generate a scoped onboarding plan — so they can give the client a credible go-live date and avoid mid-onboarding surprises that blow the timeline.

WHERE IT BREAKS: Today, the SE schedules a 60-minute kickoff call, asks open-ended questions ("tell me about your tech stack"), takes unstructured notes in a Notion doc, then translates those notes into configuration steps over the next 3–5 days. Critical requirements surface late: a client mentions "we need to integrate with Workday" in passing on day 7, but the Workday connector requires a different auth flow that wasn't scoped. The SE has to backtrack, reconfigure, and push the go-live date by 6 days. There is no checklist, no shared source of truth, and no way for the client to self-serve answers to "what happens next?" — they Slack the SE, who is juggling 4 other onboardings.

WHAT IT COSTS:

Quantified Baseline:

| Metric                                                      | Measured Baseline                          |
|-------------------------------------------------------------|--------------------------------------------|
| SE hours spent per client onboarding (discovery to live)    | 14.3 hrs avg (n=41 onboardings, Sept 2024) |
| Median days from contract signature to go-live              | 18 days (n=89 clients, Q3 2024)            |
| % of onboardings with scope creep (post-kickoff)            | 34% (n=41, Sept 2024 audit)                |
| Support tickets filed in first 60 days (onboarding questions) | 8.7 tickets/client avg (Zendesk, Q3 2024) |
| D90 logo retention: clients live in <10 days                | 91% (n=34)                                 |
| D90 logo retention: clients live in >15 days                | 74% (n=55)                                 |

Business case math: 8 SEs × 14.3 hrs/client × 6.2 clients/mo × $95/hr × 12 months = $808,579/year in recoverable SE time. Additional retention value from faster time-to-value: 17pp improvement × 74 annual clients × $127K ACV = $1.59M/year retained ARR. Total: $2.4M/year.

JTBD statement: "When I receive a new enterprise client from sales, I want to capture their technical requirements in a structured way and auto-generate a scoped onboarding plan, so I can give the client a credible timeline and avoid mid-flight scope changes."

Solution Design

Core Mechanic:

The system ingests 5 client responses (company size, industry vertical, primary workflows, required integrations, expected monthly data volume), runs a decision tree classifier to assign the client to one of 8 onboarding archetypes (e.g., "Financial Services — High Volume — Multi-Cloud"), then generates three artifacts: (1) a prioritized 12–18 item onboarding checklist with estimated hours per item, (2) a configuration guide pre-populated with recommended settings (cloud region, data retention policy, AI model access tier), and (3) a go-live timeline with milestone dates based on the detected archetype's historical median completion time.
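
The intake → archetype → artifacts mechanic can be sketched as below. The archetype names, classification rules, and checklist templates here are illustrative stand-ins, not the real decision-tree classifier or its 8 production archetypes:

```python
# Illustrative sketch of the core mechanic: classify intake responses into an
# archetype, attach a confidence score, and emit a templated checklist.
# All names and rules below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Plan:
    archetype: str
    confidence: float                 # <0.85 triggers manual SE review (US6)
    checklist: list = field(default_factory=list)

# (item description, estimated hours) per archetype — illustrative only
ARCHETYPE_TEMPLATES = {
    "Financial Services - High Volume": [
        ("Provision VPC in compliant region", 2.0),
        ("Configure SSO", 1.0),
        ("Set 90-day data retention policy", 1.0),
    ],
    "Default - Low Volume": [
        ("Provision shared cloud workspace", 1.0),
        ("Configure SSO", 1.0),
    ],
}

def classify(intake: dict) -> Plan:
    """Toy stand-in for the decision-tree classifier over intake responses."""
    if intake["industry"] == "Financial Services" and intake["data_volume_gb"] >= 100:
        arch, conf = "Financial Services - High Volume", 0.92
    else:
        arch, conf = "Default - Low Volume", 0.70   # ambiguous → flag for SE
    return Plan(arch, conf, list(ARCHETYPE_TEMPLATES[arch]))

plan = classify({"industry": "Financial Services", "data_volume_gb": 250})
assert plan.confidence >= 0.85   # auto-approved path, no SE review needed
```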

User-Facing Surface:

The client receives an email 4 hours after contract signature with a link to the onboarding intake form (branded as "Tell us about your setup — 5 questions, 3 minutes"). Upon submission, the client and assigned SE both receive the generated onboarding plan in a shared dashboard (new tab in Imuii admin panel: "Onboarding Hub"). The SE can edit the checklist, override timeline dates, and mark items complete; the client sees live progress and can comment on individual checklist items.

Primary Flow:

┌─────────────────────────────────────────────────────────────────────────┐
│ Onboarding Intake Form (Client View)                   [Submit →]      │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│ Welcome to Imuii! Answer 5 questions so we can personalize your setup. │
│                                                                         │
│ 1. Company size:  ○ 1-50   ○ 51-200   ○ 201-1000   ○ 1000+            │
│                                                                         │
│ 2. Industry:      [Dropdown: Financial Services, Healthcare, Retail…]  │
│                                                                         │
│ 3. Primary workflows (select all that apply):                          │
│    ☐ Document processing     ☐ Customer support automation             │
│    ☐ Data analytics          ☐ Compliance reporting                    │
│                                                                         │
│ 4. Integrations needed (select all):                                   │
│    ☐ Salesforce   ☐ Workday   ☐ Slack   ☐ Snowflake   ☐ AWS S3       │
│    ☐ Other: [____________]                                             │
│                                                                         │
│ 5. Expected data volume/month:  ○ <1GB   ○ 1-10GB   ○ 10-100GB  ○ 100GB+ │
│                                                                         │
│                                    [Submit and Generate Plan →]        │
└─────────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────────┐
│ Onboarding Hub — Acme Corp                          [Edit Plan] [Export]│
├──────────────────────────────────────────────────────────────────────────┤
│ Target Go-Live: Jan 24, 2025 (12 days from today)                       │
│ Progress: ████████░░░░░░░░░░ 8/15 items complete                        │
├────┬────────────────────────────────────────────────┬────────┬──────────┤
│ ☑  │ 1. Provision AWS VPC (us-east-1)               │ 2h est │ Done     │
│ ☑  │ 2. Configure SSO (Okta integration detected)   │ 1h est │ Done     │
│ ☐  │ 3. Set up Salesforce connector (OAuth)         │ 3h est │ Blocked  │
│    │    → Waiting on client admin to grant API scope │       │          │
│ ☐  │ 4. Enable AI model access (GPT-4 + Claude)     │ 0.5h   │ Pending  │
│ ☐  │ 5. Configure data retention (90-day policy)    │ 1h est │ Pending  │
│    │    [See 10 more items ↓]                       │        │          │
├────┴────────────────────────────────────────────────┴────────┴──────────┤
│ Configuration Summary:                                                   │
│ • Cloud: AWS (us-east-1) — financial services data residency compliant  │
│ • Data retention: 90 days (recommended for your industry)               │
│ • AI models: GPT-4, Claude 3 (high-volume tier)                         │
│                                                                          │
│ Timeline:                                                                │
│ • Day 1-3: Infrastructure provisioning                                  │
│ • Day 4-7: Integrations & SSO setup                                     │
│ • Day 8-10: AI model configuration & testing                            │
│ • Day 11-12: Go-live checklist & handoff to account team                │
└──────────────────────────────────────────────────────────────────────────┘

Key Design Decisions:

Decision 1 — Why 5 questions, not 15? Choice Made: Limit intake to 5 questions (company size, industry, workflows, integrations, data volume). Rationale: Internal prototype testing with 6 SEs showed that a 15-question form took 11 minutes to complete and had a 40% abandonment rate (clients started it but didn't finish). The 5-question version took 2.8 minutes and had 94% completion. We rejected "ask everything upfront" because we can infer secondary details (e.g., if a client selects "Financial Services" + "Compliance reporting," we auto-recommend SOC 2 config without asking). The missing questions can be asked later by the SE if the archetype assignment is ambiguous.

Decision 2 — Why 8 archetypes, not client-specific custom logic? Choice Made: Classify clients into 8 predefined onboarding archetypes based on clustering analysis of past onboardings. Rationale: We analyzed 89 historical onboardings and found that 73% clustered into 8 repeatable patterns (e.g., "Healthcare — Low Volume — Single Cloud" vs. "Financial Services — High Volume — Multi-Cloud"). The remaining 27% are outliers (highly custom enterprise deals with non-standard requirements). For Phase 1, we serve the 73% and flag the 27% for human SE intervention. We rejected "generate a fully custom plan for every client" because the AI would hallucinate configuration steps that don't exist in our product (observed in GPT-4 prototyping — it invented features like "auto-scaling Kubernetes clusters" that we don't offer).

Decision 3 — Who can edit the generated plan? Choice Made: Both SE and client can view the plan; only the SE can edit checklist items and timeline dates; the client can comment and mark their own action items complete (e.g., "grant API access to Salesforce"). Rationale: We rejected "client can edit everything" because it would allow clients to delete critical setup steps (e.g., "configure data retention policy"), leading to incomplete onboarding. We rejected "SE-only editing" because clients need agency over their side of the checklist (they can't wait for the SE to mark "client granted API access" as done — they need to do it themselves). The hybrid model gives the SE configuration authority but allows the client to self-serve progress updates.
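
Decision 3's hybrid permission model can be expressed as a small lookup. Role and action names below are illustrative, not the real authorization layer:

```python
# Hedged sketch of the hybrid edit-permission model from Decision 3:
# SEs have configuration authority; clients can comment anywhere but may
# only mark their own action items complete.
def allowed_actions(role: str, item: dict) -> set:
    """Return the set of actions a user role may take on a checklist item."""
    if role == "se":
        return {"edit_text", "set_status", "edit_timeline", "comment"}
    if role == "client":
        actions = {"comment"}
        if item.get("owner") == "client":   # client-side action items only
            actions.add("mark_done")
        return actions
    return set()   # unknown roles get nothing

assert "edit_text" in allowed_actions("se", {"owner": "se"})
assert allowed_actions("client", {"owner": "se"}) == {"comment"}
```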

Scope Boundary:

This feature does the following:

  • Capture 5 client onboarding requirements via AI-powered intake form
  • Classify client into 1 of 8 onboarding archetypes
  • Generate a personalized checklist (12–18 items), configuration guide (pre-filled values), and timeline (milestone dates)
  • Provide a shared client+SE dashboard to track progress

This feature does not do the following (ships in Phase 1.1+ if Phase 1 validates the core mechanic):

  • Auto-execute configuration steps (e.g., provision cloud infrastructure via Terraform) — the SE still does this manually using the generated guide
  • Support clients who don't fit the 8 archetypes (highly custom deals) — these get flagged for manual SE onboarding
  • Integrate with the client's internal project management tools (Jira, Asana) — the checklist lives in Imuii's dashboard only
  • Estimate cost of onboarding (e.g., "your setup will cost $X in cloud spend") — we show technical config, not financial projections

Integration Touchpoints:

  • Salesforce: On contract signature, a webhook triggers the intake email to the client's primary contact (stored in Opportunity.Contact field)
  • Imuii Admin Panel: New "Onboarding Hub" tab appears for clients with onboarding_status = in_progress
  • Slack (internal): SE receives a Slack DM when a client submits the intake form, with a link to the generated plan
  • Zendesk: If the client opens a support ticket during onboarding, the ticket automatically links to their Onboarding Hub so the support agent can see their progress
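
The Salesforce touchpoint above could be sketched as a small webhook handler. The payload field names, intake-link URL, and `send_email` callback are assumptions for illustration, not the real Salesforce webhook schema:

```python
# Illustrative handler for the "Closed Won" webhook: queue the intake-form
# email to the client's primary contact. Field names are hypothetical.
def handle_opportunity_webhook(payload: dict, send_email) -> dict:
    """On Closed Won, queue the 5-question intake email (US1)."""
    if payload.get("stage") != "Closed Won":
        return {"queued": False, "reason": "not closed won"}
    contact = payload.get("primary_contact_email")
    if not contact:
        # Failure mode from US1: surface the gap rather than silently dropping
        return {"queued": False, "reason": "missing primary contact"}
    # Hypothetical intake-form URL scheme
    link = f"https://app.example.com/intake/{payload['opportunity_id']}"
    send_email(to=contact,
               subject="Tell us about your setup - 5 questions, 3 minutes",
               body=link)
    return {"queued": True, "to": contact}
```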

Acceptance Criteria

Phase 1 — MVP: 8 weeks

US1 — Intake Form Generation & Delivery

  • Given a new enterprise contract is marked "Closed Won" in Salesforce
  • When the Opportunity stage changes to "Closed Won"
  • Then the system sends an intake form email to the Primary Contact within 4 hours (p95 latency <15 minutes, measured via SendGrid webhook)
  • And the email contains a unique link to the 5-question form pre-populated with the client's company name and industry (if available in Salesforce)
  • Failure mode: If email delivery fails, SE does not know client hasn't received the form, leading to 2-day delays → SE receives Slack alert if email bounces or is not opened within 48 hours
  • Validated by: SE lead (Marcus) against 10-client pilot cohort

US2 — Client Submits Intake Form

  • Given a client clicks the intake form link
  • When the client selects answers for all 5 questions (company size, industry, workflows, integrations, data volume) and clicks "Submit and Generate Plan"
  • Then the system classifies the client into 1 of 8 archetypes with ≥85% confidence (if confidence <85%, flag for SE review) and generates a personalized checklist (12–18 items), configuration guide, and timeline within 30 seconds (p95 latency <45 seconds)
  • And both client and SE receive email notification with link to Onboarding Hub
  • Failure mode: If archetype confidence is <85%, client receives a generic checklist and SE must manually customize it, negating time savings → log all low-confidence cases for model retraining
  • Validated by: PM (Anjali) + ML engineer (Priya) against 20-sample test set from historical onboarding data

US3 — SE Overrides Archetype Assignment

  • Given an SE views the generated onboarding plan within 24 hours of creation
  • When the SE determines the archetype is incorrect (e.g., client selected wrong data volume) and clicks "Change Archetype" dropdown
  • Then the SE can select a different archetype from the list of 8, and the system regenerates the checklist and timeline within 15 seconds
  • And the client receives an email notification: "Your onboarding plan has been updated by your solutions engineer"
  • Failure mode: If SE overrides after 24 hours, the "Change Archetype" button is disabled, forcing manual checklist edits → include a banner: "Archetype override available for 24 hours after plan generation"
  • Validated by: SE lead (Marcus) in sandbox environment with 5 test clients

US4 — Client & SE Track Progress in Onboarding Hub

  • Given a client or SE opens the Onboarding Hub
  • When they view the checklist
  • Then the client sees their own action items (e.g., "Grant API access to Salesforce") and can mark them "Done" by clicking a checkbox
  • And the SE sees all checklist items and can mark any item "Done," "In Progress," "Blocked," or edit the item text
  • And the progress bar updates in real-time to show X/Y items complete
  • Failure mode: If client marks an item "Done" that requires SE validation (e.g., "Configure data retention policy"), the SE must manually verify → include a "Pending SE Verification" state for client-completed items that require SE sign-off
  • Validated by: PM (Anjali) in pilot with 5 clients, measuring completion rate and confusion points via user interviews

US5 — SE Adds Custom Checklist Item

  • Given an SE identifies a client requirement not covered by the generated checklist (e.g., custom integration)
  • When the SE clicks "Add Item" and enters item description, estimated hours, and target completion date
  • Then the new item appears in the checklist below the auto-generated items, marked with a "Custom" badge
  • And the timeline end date adjusts by the estimated hours for the custom item (e.g., if go-live was Jan 24 and SE adds a 5-hour item, new go-live is Jan 25)
  • Failure mode: If SE adds multiple custom items totaling >20 hours, the timeline may extend beyond the client's expected go-live date, creating expectation mismatch → show a warning banner if custom items push go-live >5 days past original estimate
  • Validated by: SE lead (Marcus) in sandbox environment with 3 custom enterprise scenarios
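
The go-live adjustment rule in US5 can be sketched as below. The 8-hour workday conversion is an assumption inferred from the "5-hour item pushes go-live one day" example; the 5-day warning threshold comes from the failure mode above:

```python
# Sketch of the US5 go-live adjustment: each custom item's estimated hours
# are converted to whole days and added to the timeline; warn if the total
# push exceeds 5 days. The 8 hrs/day conversion is an assumption.
import math
from datetime import date, timedelta

def adjust_go_live(original: date, custom_item_hours: list[float],
                   hours_per_day: float = 8.0) -> tuple[date, bool]:
    """Return (new go-live date, warn), where warn means the custom items
    pushed go-live more than 5 days past the original estimate."""
    extra_days = sum(math.ceil(h / hours_per_day) for h in custom_item_hours)
    new_date = original + timedelta(days=extra_days)
    return new_date, extra_days > 5

new_date, warn = adjust_go_live(date(2025, 1, 24), [5.0])
print(new_date, warn)  # 2025-01-25 False
```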

US6 — Flag Outlier Clients for Manual Onboarding

  • Given the archetype classifier assigns a confidence score <85% to a client submission
  • When the system generates the plan
  • Then the SE receives a Slack DM and email flagged "⚠️ Low Confidence — Manual Review Required" with the client's intake responses
  • And the Onboarding Hub displays a banner to the SE: "This client may not fit standard archetypes. Review checklist carefully before sharing with client."
  • Failure mode: If SE ignores the warning and shares a low-confidence plan with the client, the checklist may be missing critical steps → track "low-confidence plans shared without SE edits" metric and review if >10% of these result in scope creep
  • Validated by: ML engineer (Priya) against 20-sample test set, ensuring all historical outlier clients (27% of n=89) trigger the flag

Out of Scope (Phase 1):

| Feature                                         | Why Not Phase 1                                                                                              |
|-------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
| Auto-execute configuration steps (Terraform)    | Requires infrastructure-as-code integration; high risk of misconfiguration breaking prod. Defer until D90 data shows checklist accuracy ≥95%. |
| Cost estimation per onboarding plan             | Requires real-time cloud cost APIs and contract pricing logic. Adds 4 eng-weeks. Defer until SEs request it.  |
| Integration with client's Jira/Asana            | Requires OAuth integrations with 5+ PM tools. Adds 6 eng-weeks. Defer until clients request it (none have).  |
| Client-side mobile app for Onboarding Hub       | Mobile usage is <5% of client traffic (source: GA4). Web dashboard is sufficient. Defer indefinitely.        |
| Multi-language support for intake form          | 98% of clients are English-speaking (source: Salesforce data). Defer until international expansion (2026 roadmap). |
| AI chat assistant for clients during onboarding | Adds unvalidated complexity. Clients can Slack the SE. Defer until support ticket volume justifies automation. |

Phase 1.1 — Post-MVP Enhancements: +4 weeks after Phase 1 launch

  • Allow SE to export Onboarding Hub as PDF for client's internal stakeholders (requested by 3 SEs during discovery)
  • Add "Dependency" field to checklist items so SE can mark "Item 5 blocks Item 7" and system auto-reorders timeline
  • Add Slack integration: client receives daily digest of pending action items if they haven't logged into Onboarding Hub in 48 hours

Phase 1.2 — Expansion: +6 weeks after Phase 1.1

  • Support 12 archetypes instead of 8 (add coverage for 4 emerging client patterns observed in Phase 1 data)
  • Add "Confidence Score Explanation" UI so SE can see which intake responses drove the archetype classification (interpretability request from SE team)
  • Integrate with Stripe to auto-detect payment plan tier and adjust AI model access recommendations (e.g., enterprise tier gets GPT-4, growth tier gets GPT-3.5)

Success Metrics

Primary Metrics (prove the problem is solved):

| Metric                                          | Baseline                   | Target (D90) | Kill Threshold                 | Measurement Method                        |
|-------------------------------------------------|----------------------------|--------------|--------------------------------|-------------------------------------------|
| SE hours per client onboarding (discovery→live) | 14.3 hrs (n=41, Sept 2024) | ≤8 hrs       | >11 hrs at D90 → retrospective | Time-tracking in Jira + SE manual logging |

Strategic Decisions Made

Decision: What happens if the AI misclassifies the client into the wrong archetype? Choice Made: The SE can manually override the archetype assignment within 24 hours of plan generation and regenerate the checklist/timeline; after 24 hours, the SE can edit individual checklist items but cannot re-run the full archetype classifier (to prevent thrash). Rationale: Early prototyping showed that 11% of clients were misclassified on first pass (e.g., a "Healthcare — Low Volume" client actually had high-volume needs but selected "<1GB/month" because they didn't understand the question). Allowing a 24-hour override window gives the SE time to review the plan after the kickoff call and correct errors before the client sees major timeline changes. We rejected "always allow re-running the classifier" because it would let SEs regenerate plans repeatedly, wasting compute and confusing clients who already started working from the first version.

────────────────────────────────────────

Decision: What data sources does the AI use to estimate timeline dates? Choice Made: The AI calculates timeline dates using the median completion time for each checklist item within the assigned archetype, based on historical Jira data from past onboardings (n=89 clients, 12 months of data). If a checklist item has no historical data (e.g., a new integration we just launched), the AI uses a fallback estimate of 3 hours per item. Rationale: We rejected "use average completion time" because outliers (e.g., one client took 40 hours to configure Workday due to their internal IT approval process) skewed the estimates upward by 30%. Median is more robust. We rejected "use AI to predict completion time from client responses" because early GPT-4 experiments produced wildly inaccurate estimates (e.g., 2 hours for a Salesforce integration that historically takes 8 hours).
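
The median-with-fallback estimator described above could look like this minimal sketch; item names and the shape of the history data are illustrative:

```python
# Minimal sketch of the timeline estimator: per-item median of historical
# completion hours, with a 3-hour fallback for items that have no history.
from statistics import median

FALLBACK_HOURS = 3.0   # used when a checklist item has no historical data

def estimate_hours(item: str, history: dict) -> float:
    samples = history.get(item, [])
    return median(samples) if samples else FALLBACK_HOURS

# A single 40-hour outlier skews the mean (15.25h) but not the median.
history = {"Configure Workday": [6, 7, 8, 40]}
assert estimate_hours("Configure Workday", history) == 7.5
assert estimate_hours("Brand-new connector", history) == FALLBACK_HOURS
```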

────────────────────────────────────────

Decision: How do we handle clients who select "Other" for integrations and type in a custom integration request? Choice Made: If the client types a custom integration in the "Other" field, the system flags it as "Requires SE Review" and does not include it in the auto-generated checklist. The SE sees the custom request in a highlighted banner at the top of the Onboarding Hub and manually adds it to the checklist with their own time estimate. Rationale: We rejected "use AI to generate checklist items for custom integrations" because the AI has no training data for integrations we've never built (e.g., a client requested "integrate with our internal ERP system built in 1987") — the AI would hallucinate steps. We rejected "block clients from requesting custom integrations" because it would force outlier clients to abandon the intake form entirely, losing the value of capturing the other 4 questions.

────────────────────────────────────────

Decision: What happens if the client changes their mind mid-onboarding (e.g., decides they need Snowflake integration on day 5)? Choice Made: The client cannot edit the intake form after submission. If they need to add a requirement mid-onboarding, they comment on the Onboarding Hub with the request, and the SE manually adds a new checklist item. The timeline is not auto-recalculated — the SE manually adjusts milestone dates. Rationale: We rejected "let clients re-submit the intake form to regenerate the plan" because it would invalidate any progress already made (e.g., if 8/15 items are complete, regenerating the plan could shuffle the checklist and confuse both parties about what's left to do). Scope changes mid-onboarding are expected, but they're handled as incremental edits, not full regeneration.

────────────────────────────────────────

Decision: Do we show the client the estimated hours per checklist item? Choice Made: We show estimated hours to the SE but not to the client. The client sees only "Pending," "In Progress," "Blocked," or "Done" status per item. Rationale: Early client interviews (n=8 during discovery) revealed that clients don't care about hours — they care about "when will I be live?" Showing hours created anxiety ("why is this taking 3 hours? Is something wrong?"). We rejected "show hours to both" because it over-indexed on transparency at the cost of client experience. The SE needs hours to prioritize their work queue; the client needs a go-live date.

────────────────────────────────────────

Decision: What permissions does the client's admin have vs. the client's end-users? Choice Made: Only the client's designated "Primary Contact" (stored in Salesforce Opportunity.Contact) receives the intake email and can view the Onboarding Hub. End-users at the client company do not see the Onboarding Hub until go-live is marked complete. Rationale: We rejected "let all client users view the Onboarding Hub during setup" because it would expose in-progress work and half-configured features, leading to confusion ("why can't I access the AI models yet?" — because the SE hasn't provisioned them yet). The Primary Contact is typically the buyer or project lead who understands that onboarding is in-flight. Post-go-live, all client users gain access.
