PRD · April 6, 2026

Imuii

Executive Brief

HYPOTHESIS

We believe building an AI Client Onboarding Workflow Builder for Imuii's enterprise implementation team will reduce client onboarding configuration time from 18.5 hours to <3 hours per client, validated when time-to-first-successful-deployment drops below 4 hours and configuration error rate falls below 2% within 90 days of launch.

EVIDENCE BASE

Current state: When an enterprise client signs Imuii's platform contract, the implementation team manually configures cloud infrastructure (AWS/Azure/GCP environment setup), SaaS tool integrations (identity providers, data warehouses, monitoring), and AI model deployment parameters across 47 distinct configuration dimensions (source: onboarding runbook audit, Dec 2024, n=47 dimensions counted). This process takes 18.5 hours per client on average (source: time-tracking data from 23 enterprise onboardings, Q3–Q4 2024) and generates configuration errors in 34% of deployments (source: post-go-live incident tickets tagged "config-related", n=78 incidents across 23 clients). These errors trigger an average 2.8 remediation cycles per affected client, each cycle adding 6.2 hours of unplanned senior engineering time (source: Jira ticket analysis, Sept–Dec 2024).

The business case: 23 enterprise clients onboarded in H2 2024 × 18.5 hrs/client × $95/hr blended implementation team cost (source: HR compensation data, India-based implementation engineers, Dec 2024) = ≈$40,420 in H2 2024 labor cost. Extrapolating to 2025 growth plan: 60 enterprise clients expected (source: Sales forecast, Jan 2025) × 18.5 hrs × $95/hr = $105,450/year recoverable labor. Configuration error remediation adds: 8 error-affected clients (34% of 23) × 2.8 cycles × 6.2 hrs × $125/hr senior engineering cost = $17,360 in H2 2024 remediation cost (source: incident response time logs, senior engineer rate from HR). At 60 clients/year (≈20 affected): ≈$44,270/year in avoidable remediation cost. Total recoverable value: ≈$149,720/year. If adoption reaches only 40% of clients (conservative floor assuming some enterprise clients demand manual white-glove setup): ≈$59,890/year.

Estimated build cost: 12 weeks × 2 backend engineers × $18/hr × 40 hrs/week + 1 frontend engineer × $15/hr × 40 hrs/week × 12 weeks + 1 PM × $22/hr × 20 hrs/week × 12 weeks = $29,760 all-in (source: Regional Cost Benchmarks for India-based product team, HR data Dec 2024). ROI payback at full adoption: ≈2.4 months. ROI payback at 40% adoption: ≈6.0 months.
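The cost model above can be recomputed directly from its per-unit inputs; a minimal sanity-check sketch (all rates and counts are the figures cited above; totals rounded to the nearest dollar):

```python
# Recompute the business case from its per-unit inputs.
HOURS_PER_CLIENT = 18.5   # avg configuration time (n=23, Q3-Q4 2024)
IMPL_RATE = 95            # $/hr blended implementation cost
SENIOR_RATE = 125         # $/hr senior engineering cost
ERROR_RATE = 0.34         # share of deployments with config errors
CYCLES_PER_ERROR = 2.8    # remediation cycles per affected client
HOURS_PER_CYCLE = 6.2     # unplanned senior hours per cycle
CLIENTS_2025 = 60         # forecast enterprise clients

labor = CLIENTS_2025 * HOURS_PER_CLIENT * IMPL_RATE
remediation = (CLIENTS_2025 * ERROR_RATE * CYCLES_PER_ERROR
               * HOURS_PER_CYCLE * SENIOR_RATE)
total = labor + remediation

# Build cost: 12 weeks of 2 BE ($18/hr, 40 hr/wk), 1 FE ($15/hr, 40 hr/wk),
# and 1 PM ($22/hr, 20 hr/wk).
build = 12 * (2 * 18 * 40 + 1 * 15 * 40 + 1 * 22 * 20)

payback_months = build / total * 12
print(f"labor=${labor:,.0f} remediation=${remediation:,.0f} total=${total:,.0f}")
print(f"build=${build:,.0f} payback={payback_months:.1f} months")
```

Any revision to the Sales forecast or the blended rates can be re-run through the same four lines.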

WHAT THIS IS AND IS NOT

This feature is an AI-powered configuration wizard that ingests a client's requirements document (contract attachments, technical questionnaire, Slack thread exports), generates a deployment configuration plan covering cloud infrastructure, SaaS integrations, and AI model parameters, and executes the configuration via Imuii's existing provisioning APIs with human-in-the-loop validation gates. It is not a replacement for custom enterprise agreements, legal contract negotiation, or data migration services — those remain manual, high-touch processes owned by Sales and Customer Success.

Strategic Context

THE MARKET WEDGE

Enterprise B2B SaaS onboarding today is either (1) fully manual white-glove configuration handled by implementation engineers, or (2) self-service wizards that fail the moment a client's requirements deviate from the happy path. Imuii's enterprise clients fall in the middle: they're too complex for a generic wizard (custom cloud environments, strict compliance requirements, AI model customization) but too numerous to justify 20-hour manual implementations at scale.

Current onboarding tools in the market:

How does WorkOS solve this problem today? WorkOS is hired to handle SSO/directory sync configuration for SaaS apps, abstracting away identity provider complexity. It does not configure cloud infrastructure or AI deployment parameters.

How does Merge.dev solve this problem today? Merge is hired to unify SaaS integrations (CRM, ATS, accounting tools) behind a single API, reducing integration setup time. It does not handle cloud provisioning or AI model configuration.

How does Terraform Cloud solve this problem today? Terraform Cloud is hired to version-control and execute infrastructure-as-code, enabling repeatable cloud deployments. It requires engineers to write the Terraform modules manually — it does not generate configuration from natural language requirements.

┌─────────────────────────────────────┬──────────┬────────────┬──────────────────────────┐
│ Capability                          │ WorkOS   │ Terraform  │ Imuii Workflow Builder   │
│                                     │          │ Cloud      │                          │
├─────────────────────────────────────┼──────────┼────────────┼──────────────────────────┤
│ Auto-generate config from contract  │ ❌       │ ❌         │ ✅ (unique)              │
│ artifacts (PDFs, Slack, forms)      │          │            │                          │
├─────────────────────────────────────┼──────────┼────────────┼──────────────────────────┤
│ Multi-domain configuration (cloud,  │ ❌       │ ✅ (infra  │ ✅                       │
│ SaaS, AI models in one workflow)    │          │ only)      │                          │
├─────────────────────────────────────┼──────────┼────────────┼──────────────────────────┤
│ Human-in-the-loop validation gates  │ ❌       │ ❌         │ ✅                       │
│ before deployment                   │          │            │                          │
├─────────────────────────────────────┼──────────┼────────────┼──────────────────────────┤
│ SSO/identity provider integration   │ ✅       │ ❌         │ ✅                       │
├─────────────────────────────────────┼──────────┼────────────┼──────────────────────────┤
│ Where we lose: WorkOS has 4+ years  │ ✅       │ n/a        │ ❌ (Phase 1: 80% case)   │
│ of IdP edge-case coverage           │          │            │                          │
├─────────────────────────────────────┼──────────┼────────────┼──────────────────────────┤
│ Where we lose: Terraform has 10+    │ n/a      │ ✅         │ ❌ (Phase 1)             │
│ years of production hardening,      │          │            │                          │
│ compliance certs, enterprise trust  │          │            │                          │
└─────────────────────────────────────┴──────────┴────────────┴──────────────────────────┘


**Our wedge is AI-powered multi-domain configuration generation** because WorkOS and Merge only handle single-domain problems (identity or integrations), and Terraform requires engineers to hand-write infrastructure code. We're the first tool that reads a contract PDF, generates a validated configuration spanning cloud + SaaS + AI, and deploys it with human checkpoints. The enterprise buyer for Imuii already trusts us with AI infrastructure — this workflow builder is a natural extension, not a new vendor relationship.

**Where we lose:** WorkOS has spent 4+ years handling SSO edge cases (SCIM provisioning quirks, Azure AD vs Okta vs Google Workspace idiosyncrasies). Our Phase 1 SSO configuration will handle the 80% case; enterprises with complex directory sync requirements will still prefer WorkOS. We accept this trade-off because SSO sits within the 18 SaaS-integration dimensions of our 47; we win on the other 29.

Competitive Analysis

DIRECT COMPETITIVE PRESSURE

The true competition is not other onboarding tools — it's the status quo: implementation engineers using Notion runbooks, Slack threads, and manual JSON editing. This is "free" in the sense that it requires no new software spend, and it's familiar. Displacing the status quo requires proving that the AI-generated configuration is more accurate than a human engineer's manual work, not just faster.

Competitive intelligence from customer interviews (n=12 enterprise clients, Dec 2024):

  • 8 of 12 clients said they would accept a 10% increase in onboarding time if it meant zero post-go-live configuration errors (source: customer feedback survey, Dec 2024)
  • 5 of 12 clients experienced a security incident or compliance audit flag in their first 90 days due to misconfiguration (source: CSM notes, Dec 2024)
  • 9 of 12 clients said they would pay for faster onboarding as an add-on service if it guaranteed <48-hour go-live (source: same survey)

This tells us: accuracy is more important than speed, but speed becomes a paid differentiator if accuracy is guaranteed.

Adjacent competitive threats:

Retool Workflows (automation builder) — hired by internal ops teams to automate repetitive back-office tasks. Could theoretically be configured to auto-populate onboarding forms, but requires the customer's ops team to build and maintain the workflow. Not a direct threat because Imuii's buyer (the client's IT/infrastructure team) doesn't want to maintain onboarding automation for a vendor's product.

n8n / Zapier Enterprise — hired to connect SaaS tools and trigger workflows. Could auto-create Jira tickets or Slack notifications when a contract is signed, but cannot generate cloud infrastructure config or validate AI model parameter compatibility. Not a direct threat for the same reason as Retool.

The real competitive threat: Imuii's own engineering team deprioritizing this feature in favor of core AI platform capabilities. If onboarding remains manual for another 12 months, and clients continue to experience 4-day go-live delays, enterprise buyers will perceive Imuii as "not ready for enterprise scale" and choose competitors like DataRobot or Databricks that have mature enterprise onboarding programs (even if their AI capabilities are weaker). This is an existential retention risk, not a feature nice-to-have.

Problem Statement

WHO / JTBD: When an Imuii implementation engineer receives a signed enterprise contract, they want to configure the client's cloud environment, SaaS integrations, and AI deployment parameters accurately and quickly — so the client can start using the platform within 48 hours instead of waiting 5–7 business days while the engineer manually translates contract requirements into 47 distinct configuration settings.

WHERE IT BREAKS TODAY:

The implementation engineer receives the signed contract PDF, a technical questionnaire (often incomplete), and a Slack thread containing clarifications exchanged during the sales cycle. They open the internal onboarding runbook — a 23-page Notion document listing 47 configuration dimensions spanning:

  • Cloud infrastructure: region selection, VPC setup, IAM roles, encryption key management (12 dimensions)
  • SaaS integrations: SSO provider, data warehouse connection, monitoring tool webhooks (18 dimensions)
  • AI model deployment: model selection, inference latency SLAs, batch vs real-time processing, fallback behavior (17 dimensions)

The engineer manually reads the contract, cross-references the questionnaire, searches the Slack thread for edge cases, and fills out a configuration JSON file. There is no template auto-population, no validation until deployment, and no consistency checking across related settings. Common failures:

  • Incompatible setting combinations: Engineer selects "real-time inference" but pairs it with a region that doesn't support GPU instances — discovered only at deployment, requiring rollback (observed in 8/23 H2 2024 onboardings)
  • Missing required values: SSO configuration omits the SAML certificate URL because the questionnaire field was optional and the engineer assumed it wasn't needed — client login fails at go-live (observed in 5/23 onboardings)
  • Copy-paste errors from previous client: Engineer duplicates a previous client's config as a starting point but forgets to update the AWS account ID — deployment succeeds but writes data to the wrong account, triggering a P0 security incident (observed 2x in Q4 2024)
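All three failure classes are mechanically detectable before deployment. A minimal sketch of the kind of cross-field checks a pre-deployment validator could run; the region list and field names are illustrative stand-ins, not Imuii's actual schema:

```python
# Illustrative cross-field validation rules; GPU_REGIONS and the config
# field names are hypothetical stand-ins for the real 47-dimension schema.
GPU_REGIONS = {"us-east-1", "eu-west-1"}  # regions assumed to offer GPU instances

def validate(config: dict) -> list[str]:
    """Return human-readable errors; an empty list means the config passes."""
    errors = []
    # 1. Incompatible setting combinations
    if config.get("inference_mode") == "real-time" and config.get("region") not in GPU_REGIONS:
        errors.append(f"real-time inference requires a GPU region, got {config.get('region')!r}")
    # 2. Missing required values
    if config.get("sso_provider") and not config.get("saml_certificate_url"):
        errors.append("sso_provider set but saml_certificate_url is missing")
    # 3. Stale copy-paste values carried over from another client
    if config.get("aws_account_id") != config.get("contract_aws_account_id"):
        errors.append("aws_account_id does not match the account on the signed contract")
    return errors

bad = {"inference_mode": "real-time", "region": "ap-south-2",
       "sso_provider": "okta", "aws_account_id": "111111111111",
       "contract_aws_account_id": "222222222222"}
print(validate(bad))  # all three failure classes fire
```

Each of the 8/23, 5/23, and 2x incidents above would have been caught by a rule of this shape before `terraform apply` ever ran.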

WHAT IT COSTS:

┌────────────────────────────────────────────────────┬───────────────────────────────────────────────────────────┐
│ Metric                                             │ Measured Baseline and Source                              │
├────────────────────────────────────────────────────┼───────────────────────────────────────────────────────────┤
│ Time to complete configuration                     │ 18.5 hrs/client avg (range: 14–26 hrs, n=23)              │
│                                                    │ Source: Toggl time-tracking logs, Sept–Dec 2024           │
├────────────────────────────────────────────────────┼───────────────────────────────────────────────────────────┤
│ Configuration error rate at first deployment       │ 34% of clients (8/23 had deployment failure or post-launch│
│                                                    │ config-related incident). Source: Jira tickets tagged     │
│                                                    │ "config-error", PagerDuty incidents, Sept–Dec 2024        │
├────────────────────────────────────────────────────┼───────────────────────────────────────────────────────────┤
│ Remediation cycles per client (when error occurs)  │ 2.8 cycles avg (range: 1–5, n=8 clients with errors)      │
│                                                    │ Source: Jira issue linking, Sept–Dec 2024                 │
├────────────────────────────────────────────────────┼───────────────────────────────────────────────────────────┤
│ Unplanned senior engineering time per error        │ 6.2 hrs avg (range: 3–14 hrs, n=8 incidents)              │
│                                                    │ Source: incident response time logs, Sept–Dec 2024        │
├────────────────────────────────────────────────────┼───────────────────────────────────────────────────────────┤
│ Client go-live delay when error occurs             │ 4.1 business days avg (range: 2–9 days, n=8 clients)      │
│                                                    │ Source: contract signature date vs platform-enabled date, │
│                                                    │ Salesforce data, Sept–Dec 2024                            │
└────────────────────────────────────────────────────┴───────────────────────────────────────────────────────────┘


**Business case math:**

23 clients onboarded in H2 2024 × 18.5 hrs/client × $95/hr implementation engineer cost (India-based, source: HR) = **≈$40,420 in H2 2024 labor cost**

Extrapolating to 2025 growth: 60 enterprise clients expected (source: Sales forecast, Jan 2025) × 18.5 hrs × $95/hr = **$105,450/year recoverable implementation labor**

Error remediation cost: 60 clients × 34% error rate × 2.8 remediation cycles × 6.2 hrs × $125/hr senior engineer cost = **≈$44,270/year in avoidable remediation cost**

Total recoverable value: **≈$149,720/year**

**JTBD STATEMENT:** "When I receive a signed enterprise contract, I want to generate a complete, validated deployment configuration from the contract artifacts and technical requirements, so I can deploy the client's environment in <4 hours without manual cross-referencing or remediation cycles."

Solution Design

CONSTRAINT-DRIVEN DESIGN

We must achieve <4-hour onboarding configuration within these constraints:

  • Technical constraint: Must integrate with Imuii's existing provisioning APIs (Terraform wrapper, Kubernetes operator, SaaS integration endpoints) without modifying their schemas — provisioning logic is shared with manual onboarding and cannot diverge
  • Time constraint: 12-week build timeline (Feb–Apr 2025) to launch before Q2 sales pipeline converts (18 enterprise deals in late-stage negotiation, source: Sales forecast Jan 2025)
  • Resource constraint: 2 backend engineers, 1 frontend engineer, 1 PM (no dedicated ML engineer — must use off-the-shelf LLM APIs)
  • Accuracy constraint: Configuration error rate must drop below 5% (vs 34% today) to justify displacing manual onboarding — any accuracy below 95% means reverting to manual process

These constraints eliminate:

  • ❌ Custom-trained AI model (no ML engineer, no 3-month labeling timeline)
  • ❌ New deployment infrastructure (must use existing APIs)
  • ❌ Real-time collaborative editing UI (4+ weeks of frontend work, out of scope for 12-week timeline)
  • ❌ GCP support in Phase 1 (adds 4 weeks; no current clients use GCP)

The remaining solution space: a backend service that ingests contract artifacts, calls GPT-4 via Azure OpenAI to generate configuration JSON, validates against schema rules, surfaces ambiguities to the implementation engineer, and executes deployment via existing APIs with human approval gates.

SOLUTION ARCHITECTURE (Phase 1)

┌────────────────────────────────────────────────────────────────────┐
│  INPUT SOURCES                                                     │
│  • Signed contract PDF (Salesforce attachment)                     │
│  • Technical questionnaire (Typeform export)                       │
│  • Slack thread export (last 90 days of #sales-client-xyz channel) │
└────────────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────────────┐
│  ARTIFACT INGESTION SERVICE (new)                                  │
│  • Extract text from PDFs (pypdf2)                                 │
│  • Parse Typeform JSON                                             │
│  • Fetch Slack messages via Slack API                              │
│  • Concatenate into structured prompt with section headers         │
└────────────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────────────┐
│  AI CONFIGURATION GENERATOR (new)                                  │
│  • Call Azure OpenAI GPT-4 with 47-dimension schema as context     │
│  • Parse LLM output into JSON (pydantic validation)                │
│  • Detect ambiguous/missing fields → flag as "NEEDS REVIEW"        │
│  • Store generated config in Postgres with client_id + version     │
└────────────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────────────┐
│  VALIDATION & REVIEW UI (new)                                      │
│  • Display generated config in editable form (React)               │
│  • Highlight "NEEDS REVIEW" fields with suggested resolutions      │
│  • Show compatibility warnings (e.g., "Real-time inference +       │
│    selected region requires GPU instance type — confirm available")│
│  • Engineer approves or edits fields                               │
└────────────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────────────┐
│  DEPLOYMENT ORCHESTRATOR (existing, no changes)                    │
│  • Receives approved config JSON via API                           │
│  • Calls Terraform wrapper for cloud infra                         │
│  • Calls Kubernetes operator for AI model deployment               │
│  • Calls SaaS integration endpoints (SSO, monitoring, data wh)     │
│  • Returns deployment status + resource URLs                       │
└────────────────────────────────────────────────────────────────────┘
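The generator's parse-and-flag step reduces to: validate the LLM's JSON output against the schema and mark any field the model could not ground in an artifact as NEEDS REVIEW. A stdlib sketch of that step (the service itself would use pydantic as noted above; the three fields shown are illustrative stand-ins for the 47-dimension schema):

```python
import json

# The real schema has 47 required dimensions; three shown here for brevity.
REQUIRED_FIELDS = {"region", "sso_provider", "inference_mode"}

def parse_llm_output(raw: str) -> tuple[dict, list[str]]:
    """Parse the LLM's JSON config; return (config, fields flagged NEEDS REVIEW).

    A field is flagged when it is missing entirely or when the model itself
    marked it ambiguous with the sentinel value "NEEDS_REVIEW".
    """
    config = json.loads(raw)
    flags = [f for f in sorted(REQUIRED_FIELDS)
             if f not in config or config[f] == "NEEDS_REVIEW"]
    return config, flags

raw = json.dumps({"region": "us-east-1",
                  "sso_provider": "NEEDS_REVIEW",  # contract said "Okta or Azure AD"
                  "inference_mode": "real-time"})
config, flags = parse_llm_output(raw)
print(flags)  # → ['sso_provider']
```

Flagged fields are what the Validation & Review UI surfaces to the engineer; deployment is blocked until the list is empty.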

PHASE 1 SIMPLIFICATIONS (and why they're acceptable):

  1. No multi-turn conversation with the AI — engineer sees the generated config once, edits it, submits. We don't support "ask the AI to refine this field" iterative mode. Rationale: 95% of configurations (estimate based on runbook analysis) require 0–2 field edits, not extended negotiation. Iterative mode adds 3+ weeks of prompt engineering and UX complexity. Defer to Phase 1.2 if edit rate >20%.

  2. No automatic conflict resolution between related fields — if the engineer manually changes "region" from us-east-1 to eu-west-1, the UI does not auto-update "compliance_framework" from "SOC2" to "GDPR". Rationale: implementing dependency graph resolution adds 2 weeks; we surface a warning banner instead ("Changing region may affect compliance settings — review before deployment"). Defer to Phase 1.1.

  3. No drift detection for already-deployed clients — if a client's config changes post-deployment (e.g., they manually add an IAM role in AWS console), the workflow builder does not detect or alert. Rationale: this is a configuration generation tool, not an ongoing config management tool (that's Terraform's job). Out of scope permanently unless customer demand emerges.

USER FLOW (implementation engineer persona):

┌─────────────────────────────────────────────────────────────────────┐
│ Onboarding Dashboard                            + New Client Setup │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│ Active Onboardings (3)                                              │
│ ┌───────────────────────────────────────────────────────────────┐   │
│ │ Acme Corp                    Status: Config Review            │   │
│ │ Contract signed: Jan 15      ⚠ 3 fields need review          │   │
│ │ Target go-live: Jan 18       [Review Config →]               │   │
│ └───────────────────────────────────────────────────────────────┘   │
│                                                                     │
│ ┌───────────────────────────────────────────────────────────────┐   │
│ │ BetaCo Inc                   Status: Deploying                │   │
│ │ Contract signed: Jan 14      🟢 Cloud: Complete               │   │
│ │ Target go-live: Jan 17       🟡 SaaS: In progress (2/5)       │   │
│ └───────────────────────────────────────────────────────────────┘   │
│                                                                     │
│ Completed This Month (8)     [View All →]                           │
└─────────────────────────────────────────────────────────────────────┘

Engineer clicks + New Client Setup and sees:

┌─────────────────────────────────────────────────────────────────────┐
│ New Client Setup — Step 1 of 3: Upload Artifacts                   │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│ Client Name                                                         │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Acme Corp                                                       │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│                                                                     │
│ Salesforce Opportunity ID                                           │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 006f200000XyZ1234                                               │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ (Auto-fetches contract PDF from Salesforce)                         │
│                                                                     │
│ Technical Questionnaire (Typeform)                                  │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ [Drop file or paste Typeform URL]                              │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│                                                                     │
│ Slack Channel (optional)                                            │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ #sales-acme-corp                                                │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ (Fetches last 90 days of messages)                                  │
│                                                                     │
│                          [Cancel]  [Generate Config →]              │
└─────────────────────────────────────────────────────────────────────┘

After clicking Generate Config, engineer waits 15–30 seconds (progress spinner: "Reading contract... Extracting requirements... Generating configuration...") and sees:

┌─────────────────────────────────────────────────────────────────────┐
│ Acme Corp — Configuration Review                  [Edit] [Deploy]   │
├─────────────────────────────────────────────────────────────────────┤
│ ⚠ 3 fields need your review before deployment                       │
│                                                                     │
│ ┌─── CLOUD INFRASTRUCTURE ────────────────────────────────────────┐ │
│ │ Provider: AWS                            ✅ Auto-detected       │ │
│ │ Region: us-east-1                        ✅ From contract       │ │
│ │ VPC CIDR: 10.0.0.0/16                    ✅ Standard config     │ │
│ │ IAM Role: imuii-prod-acme                ✅ Generated           │ │
│ │ Encryption: AWS KMS (customer-managed)   ⚠ NEEDS REVIEW        │ │
│ │   → Contract mentions "FIPS 140-2" but unclear if customer     │ │
│ │     wants AWS-managed or customer-managed keys.                 │ │
│ │   [Suggested: customer-managed based on similar clients]        │ │
│ │   [Keep suggestion] [Change to AWS-managed]                     │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│                                                                     │
│ ┌─── SAAS INTEGRATIONS ───────────────────────────────────────────┐ │
│ │ SSO Provider: Okta                       ⚠ NEEDS REVIEW        │ │
│ │   → Questionnaire says "Okta or Azure AD" — needs confirmation │ │
│ │   [Keep Okta] [Change to Azure AD]                              │ │
│ │ SAML Entity ID: https://acme.okta.com    ✅ From questionnaire  │ │
│ │ Data Warehouse: Snowflake                ✅ From contract       │ │
│ │ Monitoring: Datadog                      ✅ From questionnaire  │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│                                                                     │
│ ┌─── AI MODEL DEPLOYMENT ─────────────────────────────────────────┐ │
│ │ Primary Model: gpt-4-turbo-preview       ✅ From contract       │ │
│ │ Inference Mode: Real-time                ⚠ NEEDS REVIEW        │ │
│ │   → Contract SLA is <500ms but us-east-1 GPU quota may not     │ │
│ │     support real-time at scale. Recommend batch + priority      │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘

Strategic Decisions Made

Decision 1: AI model selection for configuration generation
Choice Made: Use OpenAI GPT-4 (via Azure OpenAI Service) for Phase 1, not a fine-tuned open-source model
Rationale: Configuration generation requires reasoning over semi-structured documents (PDFs with tables, Slack threads with incomplete context, questionnaires with conditional logic). GPT-4 handles this with zero-shot prompting. Fine-tuning an open-source model (Llama 3, Mistral) would require 3+ months of data labeling (we only have 23 historical onboardings) and ongoing retraining as the configuration schema evolves. We reject the open-source path for Phase 1 because speed-to-market is the constraint — we'll revisit if Azure OpenAI costs exceed $15/client or if enterprise clients demand on-prem model hosting (none have requested this as of Jan 2025).

────────────────────────────────────────

Decision 2: Human-in-the-loop validation gates
Choice Made: Require implementation engineer approval before deploying cloud infrastructure and AI model configs; auto-deploy SaaS integration configs without approval
Rationale: Cloud misconfigurations (wrong region, incorrect IAM permissions) can cause P0 security incidents or multi-day remediation (observed 2x in Q4 2024). AI model misconfigurations (wrong latency SLA, incorrect fallback model) directly impact client SLA compliance. SaaS misconfigurations (incorrect Slack webhook URL, wrong Datadog API key) are lower blast radius and easily rolled back. We optimize for zero P0 incidents over speed, so cloud and AI configs get human validation. This adds ~20 minutes to the onboarding flow but eliminates the 6.2-hour remediation cost when errors occur.
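The gating rule can be expressed as a small routing table keyed on blast radius; a sketch with illustrative domain names:

```python
# High-blast-radius domains wait for an engineer's approval; low-blast-radius
# domains deploy as soon as the config is generated. Domain names illustrative.
APPROVAL_REQUIRED = {"cloud_infrastructure", "ai_model_deployment"}

def next_action(domain: str, approved: bool) -> str:
    """Decide whether a generated config for this domain deploys now or waits."""
    if domain in APPROVAL_REQUIRED and not approved:
        return "await_engineer_approval"
    return "deploy"

assert next_action("cloud_infrastructure", approved=False) == "await_engineer_approval"
assert next_action("saas_integrations", approved=False) == "deploy"
assert next_action("ai_model_deployment", approved=True) == "deploy"
```

Keeping the rule declarative means a future decision (e.g., gating a new domain) is a one-line set change, not new control flow.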

────────────────────────────────────────

Decision 3: Configuration schema versioning strategy
Choice Made: Store generated configurations in a versioned JSON schema (v1.0 at launch) with backward compatibility guarantees for 12 months
Rationale: As Imuii's platform adds new features (e.g., a new AI model, a new cloud region, a new SaaS integration), the configuration schema will expand. If we don't version the schema, we'll break existing clients' stored configurations or force manual migrations. The 12-month backward compatibility window aligns with enterprise contract renewal cycles — clients onboarded in Jan 2025 will not have their config invalidated until Jan 2026, giving them time to adopt schema v2.0 during their renewal process. We rejected schema-less storage (e.g., unstructured key-value pairs) because it prevents validation and automated error detection.
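One way to realize the versioning guarantee is to store the schema version alongside each config and gate reads on a supported-version window, applying forward-migrations for older versions. A sketch; the version constants and the `fallback_model` field are illustrative, not part of the actual schema:

```python
SUPPORTED_VERSIONS = {"1.0", "1.1"}  # the 12-month backward-compat window

def load_config(stored: dict) -> dict:
    """Reject configs outside the supported window instead of silently misreading them."""
    version = stored.get("schema_version")
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported schema_version {version!r}; migrate before loading")
    config = stored["config"]
    if version == "1.0":
        # Example forward-migration: suppose v1.1 added an explicit fallback_model field.
        config.setdefault("fallback_model", "none")
    return config

stored = {"schema_version": "1.0", "client_id": "acme",
          "config": {"region": "us-east-1"}}
print(load_config(stored))  # fallback_model defaulted during migration
```

The explicit `ValueError` is the point: a config that has aged out of the window fails loudly at read time rather than deploying with misinterpreted fields.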

────────────────────────────────────────

Decision 4: Handling incomplete or ambiguous client requirements
Choice Made: Generate a configuration with explicit "NEEDS REVIEW" flags on ambiguous fields; block deployment until engineer resolves flags
Rationale: In 14 of 23 H2 2024 onboardings, the technical questionnaire was incomplete (missing SSO certificate URL, no preferred cloud region specified, conflicting latency requirements). Today, engineers make assumptions or send follow-up emails (adding 1–2 day delays). The AI will detect ambiguity (e.g., contract says "low latency" but doesn't specify <100ms vs <500ms) and surface it to the engineer with suggested resolutions based on similar past clients. We rejected auto-filling ambiguous fields with defaults because it reproduces the same error patterns we're trying to eliminate (configuration succeeds but doesn't meet client expectations).
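The blocking behavior is a single invariant: deployment cannot proceed while any flag is unresolved, and each flag carries a suggested resolution drawn from similar past clients. A sketch with illustrative field names and suggestions:

```python
# Each flagged field carries a suggested resolution; deployment stays blocked
# until every flag is explicitly resolved. Field names/values are illustrative.
flags = {
    "encryption": {"suggestion": "customer-managed KMS", "resolved": None},
    "sso_provider": {"suggestion": "Okta", "resolved": None},
}

def can_deploy(flags):
    """Deployment is permitted only once every flagged field has a resolution."""
    return all(f["resolved"] is not None for f in flags.values())

def resolve(flags, field, value=None):
    """Accept the suggestion by default, or record the engineer's override."""
    flags[field]["resolved"] = value or flags[field]["suggestion"]

assert not can_deploy(flags)
resolve(flags, "encryption")                # engineer accepts the suggestion
resolve(flags, "sso_provider", "Azure AD")  # engineer overrides the suggestion
assert can_deploy(flags)
```

Note that accepting a suggestion is still an explicit engineer action, which is what distinguishes this from the rejected auto-fill-with-defaults approach.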

────────────────────────────────────────

Decision 5: Phase 1 scope — cloud providers covered
Choice Made: Support AWS and Azure in Phase 1; defer GCP to Phase 1.1 (8 weeks post-MVP)
Rationale: 19 of 23 H2 2024 clients deployed on AWS; 4 deployed on Azure; 0 deployed on GCP (source: internal deployment database, Dec 2024). GCP configuration differs in IAM model, region naming conventions, and service quotas — adding it to Phase 1 would extend development by 4 weeks (source: backend lead estimate, Jan 2025). We reject delaying AWS/Azure to include GCP because we'd lose 100% of near-term impact to support 0% of current demand. GCP support moves to Phase 1.1 and activates if/when a GCP client signs (Sales will notify PM at contract stage).

────────────────────────────────────────

Decision 6: Configuration execution — API vs UI-driven deployment
Choice Made: Execute configuration via Imuii's existing provisioning APIs; do not build a new UI-driven deployment flow
Rationale: Imuii already has a provisioning API used by the implementation team for manual deployments (terraform apply wrapper, Kubernetes manifests, SaaS API calls). Building a new UI-based deployment tool would duplicate this logic and create two systems that diverge over time. The workflow builder calls the same APIs a human engineer would call manually — this guarantees behavior consistency and reduces testing surface area. We rejected a UI-based deployment flow because it would add 6 weeks to the build timeline (source: backend lead estimate, Jan 2025) and introduce a second deployment system that must stay in sync with the API.
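Concretely, the workflow builder's deployment step is just the same API call an engineer's tooling makes today. A sketch; the endpoint URL and payload fields are hypothetical, and the request is built but not sent:

```python
import json
import urllib.request

# Hypothetical internal endpoint; stands in for the existing provisioning API.
PROVISIONING_API = "https://provisioning.internal.example/v1/deploy"

def build_deploy_request(client_id, approved_config):
    """Build the deploy call; generated and manual onboarding share this endpoint,
    so the only difference the API sees is the initiated_by marker."""
    body = json.dumps({"client_id": client_id,
                       "config": approved_config,
                       "initiated_by": "workflow-builder"}).encode()
    return urllib.request.Request(PROVISIONING_API, data=body,
                                  headers={"Content-Type": "application/json"},
                                  method="POST")

req = build_deploy_request("acme", {"region": "us-east-1"})
# req would be executed by the orchestrator; not sent in this sketch.
```

Because both paths converge on one endpoint, integration tests written against the manual flow automatically cover the workflow builder's output as well.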
