PRD · March 23, 2026

Avaamo

Problem Statement

Solutions engineers and PMs spend 15-20 hours per new client on manual scoping, parsing inconsistent inputs from emails, RFPs, and calls. The result is incomplete specs, integration oversights, and delayed go-lives: internal logs show 25% of Q1 deployments required rework due to spec errors, and team surveys cite "blank-page syndrome" as the top frustration blocking faster closes.

User Personas

  • Sarah Lopez, Solutions Engineer: Mid-level engineer handling initial client discovery for enterprise AI agents; key pain is manually mapping vague RFP requirements to agent configs, causing 2-day delays per deal; motivation is to close more deals quarterly by cutting scoping from hours to minutes and reducing error-prone guesswork.
  • Mike Chen, Product Manager: Oversees agent customization roadmaps for Fortune 500 clients; key pain is inconsistent spec handoffs from solutions team leading to dev misalignments and scope creep; motivation is to standardize outputs for predictable timelines and higher client satisfaction scores.
  • Priya Patel, Senior Solutions Architect: Leads complex integrations for regulated industries like finance; key pain is overlooking compliance in rushed manual reviews, resulting in 10% audit failures; motivation is to embed regulatory checks early for compliant, faster deployments without post-facto fixes.

User Stories

  • As a Solutions Engineer, I want to answer five targeted questions about the client's use case so that the wizard generates a structured spec with recommended skills and timeline, reducing my manual translation effort.
  • As a Product Manager, I want to review and export the generated spec as a PDF or JSON file so that I can share it directly with dev teams without reformatting.
  • As a Solutions Architect, I want the wizard to flag compliance risks based on industry inputs so that I avoid integration pitfalls early and ensure regulatory adherence.
  • As a Solutions Engineer, I want to save incomplete wizard sessions for later resumption so that I can pause during client calls and pick up without re-entering data.
  • As a Product Manager, I want analytics on wizard usage showing common use case patterns so that I can inform product roadmaps and prioritize agent features.

Acceptance Criteria

For "As a Solutions Engineer, I want to answer five targeted questions about the client's use case so that the wizard generates a structured spec with recommended skills and timeline, reducing my manual translation effort."

  • Given a logged-in user starts the wizard, when they complete all five questions with valid inputs (e.g., "Healthcare" industry, "Patient Triage" use case), then the output includes at least 3 recommended skills, a 4-6 week timeline estimate, and an integration checklist with 5+ items.
  • Given invalid input (e.g., empty use case field), when the user submits, then validation errors appear inline with specific messages, and submission is blocked until fixed.
  • The entire wizard flow completes in under 3 minutes for a full submission, measured end-to-end.
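The inline-validation behavior above could be sketched as follows; the five field names are illustrative assumptions, not the shipped question schema.

```typescript
// Hypothetical sketch of the wizard's inline validation. Field names are
// assumptions for illustration; the real wizard may use a different schema.
type WizardInputs = {
  industry: string;
  useCase: string;
  channels: string;
  conversationVolume: string;
  integrations: string;
};

function validateWizard(inputs: WizardInputs): Record<string, string> {
  const errors: Record<string, string> = {};
  for (const [field, value] of Object.entries(inputs)) {
    if (value.trim() === "") {
      // Specific, per-field message rendered inline next to the question.
      errors[field] = `Please provide a value for "${field}" before submitting.`;
    }
  }
  return errors; // submission stays blocked while this map is non-empty
}
```

Returning a field-to-message map keeps the UI logic simple: each question renders its own error string, and the submit button is disabled whenever the map is non-empty.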

For "As a Product Manager, I want to review and export the generated spec as a PDF or JSON file so that I can share it directly with dev teams without reformatting."

  • Given a generated spec, when the user clicks "Export", then options for PDF (formatted with tables) and JSON (structured data dump) appear, and downloads succeed without errors.
  • The exported PDF includes all sections (skills, checklist, timeline, guide) in readable 10pt font, under 5 pages.
  • Exports log to user activity dashboard for audit, with file size <2MB.
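One possible shape for the JSON export is sketched below; the interface keys are assumptions for illustration, not the actual export schema.

```typescript
// Illustrative shape of the JSON export option; keys are assumptions,
// not Avaamo's shipped schema.
interface GeneratedSpec {
  skills: string[];
  timelineWeeks: [number, number]; // e.g., [4, 6] for a 4-6 week estimate
  integrationChecklist: string[];
  complianceAlerts: string[];
}

function exportSpecJson(spec: GeneratedSpec): string {
  // A structured dump the dev team can consume directly, no reformatting.
  return JSON.stringify(spec, null, 2);
}
```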

For "As a Solutions Architect, I want the wizard to flag compliance risks based on industry inputs so that I avoid integration pitfalls early and ensure regulatory adherence."

  • Given "Finance" industry and "PCI-DSS" compliance selected, when the spec generates, then a dedicated "Compliance Alerts" section lists 3+ risks (e.g., "Encrypt PII in integrations") with mitigation steps.
  • If no risks apply (e.g., non-regulated e-commerce), then the section states "No major flags; standard practices apply" explicitly.
  • Flags are hyperlinked to internal compliance docs, testable by clicking to a valid URL.
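The flagging behavior above could be implemented as a simple rules engine; the rule table below is an illustrative assumption, not Avaamo's actual rule set.

```typescript
// Minimal sketch of a rules-based compliance flagger. The rule table is
// an illustrative assumption; real rules would live with the Compliance team.
const COMPLIANCE_RULES: Record<string, string[]> = {
  healthcare: [
    "Encrypt PII in integrations (HIPAA)",
    "Sign a BAA with any third-party processor",
    "Restrict PHI retention in conversation logs",
  ],
  finance: [
    "Mask account numbers in transcripts (PCI-DSS)",
    "Log all data access for audit",
    "Apply regional rules such as GDPR for EU clients",
  ],
};

function complianceAlerts(industry: string): string[] {
  const risks = COMPLIANCE_RULES[industry.toLowerCase()] ?? [];
  // Non-regulated verticals get the explicit "no flags" statement
  // rather than an empty section.
  return risks.length > 0
    ? risks
    : ["No major flags; standard practices apply"];
}
```

Keeping the rules in a plain lookup table also gives the static fallback path (used when the AI backend is down) something deterministic to run.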

For "As a Solutions Engineer, I want to save incomplete wizard sessions for later resumption so that I can pause during client calls and pick up without re-entering data."

  • Given partial inputs entered, when the user clicks "Save Draft", then the session is stored in their dashboard and auto-resumes on next login with progress preserved.
  • Drafts auto-expire after 7 days, with a warning banner on access.
  • Save succeeds even on partial forms (at least 1 question answered), confirmed by success toast message.
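The 7-day expiry check might look like the sketch below; the "expiring" warning window of one day is an assumption, since the criteria only specify a banner on access.

```typescript
// Sketch of the 7-day draft-expiry check; storage layer omitted. The
// one-day "expiring" warning window is an assumption for illustration.
const DRAFT_TTL_MS = 7 * 24 * 60 * 60 * 1000;
const WARN_MS = 24 * 60 * 60 * 1000;

interface Draft {
  savedAt: number; // epoch ms
  answers: Record<string, string>; // at least 1 question answered
}

function draftStatus(draft: Draft, now: number): "active" | "expiring" | "expired" {
  const age = now - draft.savedAt;
  if (age >= DRAFT_TTL_MS) return "expired";
  if (age >= DRAFT_TTL_MS - WARN_MS) return "expiring"; // show warning banner
  return "active";
}
```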

For "As a Product Manager, I want analytics on wizard usage showing common use case patterns so that I can inform product roadmaps and prioritize agent features."

  • Given 10+ wizard completions, when viewing the analytics dashboard, then charts display top industries/use cases (e.g., pie chart of 40% healthcare) and skills requested.
  • Access restricted to PM role via RBAC, with export to CSV for offline analysis.
  • Data anonymizes client details, aggregating at feature level only.
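The feature-level aggregation could be sketched as below; the input shape is an assumption, and the point is that client identifiers never leave the aggregation step.

```typescript
// Sketch of feature-level aggregation that never surfaces client identity.
// The input record shape is an assumption for illustration.
interface WizardCompletion {
  clientId: string; // present in raw data, dropped during aggregation
  industry: string;
  useCase: string;
}

function topUseCases(completions: WizardCompletion[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const { useCase } of completions) {
    counts.set(useCase, (counts.get(useCase) ?? 0) + 1);
  }
  return counts; // only aggregate counts reach the PM dashboard
}
```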

Success Metrics

  • Average scoping time per deployment reduced by ≥50%, from 15 hours to ≤7.5 hours, tracked via pre/post user surveys and Jira ticket durations.
  • Wizard adoption rate ≥80% among solutions/PM users within 3 months, measured by sessions started per active user in Amplitude.
  • Spec error rate in deployments ≤5%, down from 25%, audited via post-go-live reviews of first 50 uses.
  • Client satisfaction score for scoping process ≥4.2/5, collected via NPS in follow-up emails to 100 users.
  • Wizard completion rate ≥70%, with drop-off analysis showing <20% abandonment at any question step.

Non-Functional Requirements

  • Performance: Wizard loads in ≤2 seconds on desktop/mobile; generation of spec completes in ≤5 seconds for standard inputs; supports 500 concurrent users with <1% error rate under load testing.
  • Accessibility: Complies with WCAG 2.1 AA, including screen reader support for forms/questions, keyboard navigation for all controls, and color contrast ≥4.5:1; tested with VoiceOver and NVDA.
  • Security: All inputs encrypted in transit (TLS 1.3) and at rest (AES-256); session data stored with role-based access (PMs/solutions only); GDPR/HIPAA compliant with PII masking in analytics; audit logs for all exports/saves retained 90 days.
  • Scalability: Backend API handles 10,000 sessions/month initially, auto-scaling to 50,000 via Kubernetes; no single point of failure for AI generation service.
  • Reliability SLA: 99.9% uptime for wizard flow; if offline, graceful degradation to static form with queued generation on recovery.

Edge Cases & Constraints

  • Invalid or extreme inputs: e.g., conversation volume >1M/day triggers a "High Scale Alert" in the spec with custom infrastructure recs; a non-standard industry (e.g., "Space Exploration") defaults to a generic template without crashing.
  • Network interruptions: A mid-wizard save/export failure surfaces a retry button rather than failing silently; offline mode caches inputs locally via a Service Worker, syncing on reconnect (tested with Chrome DevTools throttling).
  • Permission issues: Non-authorized users (e.g., sales rep) see locked dashboard with "Access Denied" and redirect to request form; role changes mid-session revoke access without data loss.
  • Failure modes: AI generation backend outage shows fallback static template based on rules engine, with error banner "Using cached recs; full AI on recovery"; repeated failures (3+) notify admin via Slack webhook.
  • High-volume spam: Rate limiting at 5 sessions/user/hour to prevent abuse; CAPTCHA on suspicious patterns (e.g., rapid submits from one IP).
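The 5-sessions-per-hour limit above could use a fixed-window counter like the sketch below; the in-memory map stands in for whatever shared store production would use, and the window strategy itself is an assumption.

```typescript
// Fixed-window rate limiter enforcing 5 sessions/user/hour. An in-memory
// map stands in for a shared store (e.g., Redis) in production; the
// fixed-window strategy is an assumption for illustration.
const LIMIT = 5;
const WINDOW_MS = 60 * 60 * 1000;

const windows = new Map<string, { start: number; count: number }>();

function allowSession(userId: string, now: number): boolean {
  const w = windows.get(userId);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(userId, { start: now, count: 1 }); // new window
    return true;
  }
  if (w.count >= LIMIT) return false; // candidate for a CAPTCHA challenge
  w.count += 1;
  return true;
}
```

A fixed window is the simplest option; a sliding window or token bucket would smooth out the burst allowed at each window boundary if that matters in practice.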

Open Questions

  • How will the wizard integrate with existing Avaamo CRM for auto-populating client industry from leads? (Medium urgency; needs design sync by sprint 2.)
  • What ML model powers recommendations—use current GPT integration or custom fine-tuned one for agent skills? ⚠ Critical; impacts accuracy and must be decided pre-dev kickoff to avoid rework like last failed prototype.
  • Should exports include watermarks or versioning for legal traceability in enterprise contracts? (Low urgency; defer to beta feedback.)
  • Localization: Support non-English inputs for global clients (e.g., EU finance)? (Medium; prototype in English first.)
  • ⚠ Does compliance flagging need jurisdiction-specific rules (e.g., CCPA vs. GDPR), or start with US-only? Critical for regulated verticals; validate with legal by EOW.
  • Analytics granularity: Aggregate only, or allow PMs to filter by client segment? (Low; start aggregate to minimize privacy risks.)

Dependencies

  • Backend: Avaamo's existing AI recommendation engine (team: ML Platform) for skill/timeline generation; feature flag in LaunchDarkly to toggle wizard on/off per tenant.
  • Frontend: React-based Avaamo dashboard UI kit (team: Web Dev); integrate with current auth service for RBAC.
  • Third-party: OpenAI API (or equivalent) for natural language parsing of free-text fields; ensure API keys rotated quarterly.
  • Infrastructure: AWS S3 for draft storage and exports; dependent on recent DynamoDB schema update for user sessions (team: Infra, Q3 complete).
  • Cross-team: Legal review for compliance outputs (team: Compliance); beta testing with 10 solutions users (team: QA, post-MVP). No dependencies on external products like Salesforce yet, but future CRM sync flagged.