PRD · April 9, 2026

Zuddl

Executive Brief

Event organizers run a conference, watch 847 people register and 612 actually attend, then stare at raw CSV exports trying to answer basic questions: which sessions lost the audience, whether the APAC crowd engaged differently than EMEA, and what to fix next time. A mid-sized event company running 24 events/year spends an average of 6.8 hours per event (n=19 organizers surveyed, internal Aug 2025) stitching together registration data, session analytics, engagement logs, and audience segments into a slide deck for stakeholders — 163 hours/year per organizer. At a $72/hr blended rate for event marketing managers, a team of 8 organizers loses 1,304 hours/year × $72/hr = $93,888/year in recoverable time (source: HR compensation data; time estimate from survey). If adoption reaches only 50% of events: $46,944/year. This assumes zero value capture from faster iteration on future events — the real upside is in the insights that prevent a repeat of the session that lost 40% of attendees in the first 12 minutes.

Scaled across all 19 surveyed organizers: 19 organizers × 24 events/year × 6.8 hrs/event × $72/hr = $223,258/year recoverable value (source: HR comp bands, event volume from CRM, time study Aug 2025). If adoption reaches 40% of events: $89,303/year.

This feature is an automated post-event report generated within 24 hours of event close, covering attendee journey heatmaps, session engagement scores, drop-off analysis, multilingual audience breakdown, and AI-generated recommendations with human override capability. It is not a live event analytics dashboard, a predictive model for future event attendance, or a replacement for the organizer's judgment — the AI generates labels and insights, but the organizer has final authority to override any interpretation before exporting.

Competitive Analysis

How competitors solve this today:

Hopin: Organizers hire Hopin's post-event dashboard for basic session attendance counts and aggregate engagement metrics (poll participation rate, chat message volume), but Hopin does not auto-generate narrative insights, cross-reference audience segments with engagement behavior, or produce exportable reports — users screenshot charts and write their own analysis.

Bizzabo: Organizers hire Bizzabo's analytics suite for pre-built reports covering registration funnel, session attendance, and lead capture, but reports are static templates with no AI interpretation layer — the organizer still manually identifies "why did this session lose 40% of attendees at minute 12?" and Bizzabo does not surface multilingual audience breakdowns or generate next-event recommendations.

Goldcast: Organizers hire Goldcast for video engagement heatmaps showing per-minute drop-offs within recorded sessions, but Goldcast's analytics are session-scoped, not event-scoped — there is no cross-session journey analysis, no audience segment comparison, and no auto-generated executive summary.

Capability                                  | Hopin | Bizzabo | Goldcast | Zuddl (this PRD)
--------------------------------------------|-------|---------|----------|-----------------
Auto-generated post-event report (24hr SLA) | ❌    | ❌      | ❌       | ✅ (unique)
Session-level drop-off heatmaps             | ❌    | ❌      | ✅       | ✅
Cross-session attendee journey mapping      | ❌    | ❌      | ❌       | ✅ (unique)
AI-generated insight labels with override   | ❌    | ❌      | ❌       | ✅ (unique)
Multilingual audience segmentation          | ❌    | ❌      | ❌       | ✅ (unique)
Export to Notion (live sync, not PDF only)  | ❌    | ❌      | ❌       | ✅ (unique)
Pre-built report templates (no setup)       | ❌    | ✅      | ❌       | ✅
WHERE WE LOSE: In-person event analytics    | ✅    | ✅      | ❌       | ❌

WHERE WE LOSE: Bizzabo and Hopin both support hybrid and in-person event analytics (badge scans, booth visit tracking, physical session check-ins). Zuddl's Phase 1 scope is virtual and hybrid events where attendance is digitally logged — we do not integrate with physical badge scanners or manual check-in flows. For organizers running fully in-person conferences with badge-based tracking, Bizzabo has feature depth we will not match in Phase 1.

Our wedge is the 24-hour auto-generated report with cross-session journey analysis and AI narrative layer because every competitor stops at raw dashboards or static templates — they make the organizer do the interpretive work. We are hiring AI to close the gap between "here are your numbers" and "here is what happened and what to do next time." The override mechanism ensures organizers trust the output enough to share it with executives without manual re-checking every label.

Problem Statement

WHO / JTBD: When a conference organizer at a B2B SaaS company finishes a 400-person virtual event, they want to understand what worked and what failed — session performance, audience segment behavior, drop-off points — so they can brief executives, justify budget for the next event, and avoid repeating mistakes. They hire Zuddl to run the event, but they currently hire Excel, Google Slides, and their own manual analysis to generate the post-event story.

WHERE IT BREAKS: Today, the organizer exports five separate CSVs from Zuddl (registration data, attendance logs, session analytics, engagement events, poll responses), opens them in Excel, cross-references attendee IDs across sheets, manually calculates session drop-off rates, builds pivot tables to segment by geography or job title, screenshots charts, pastes them into Google Slides, and writes narrative commentary. There is no standard format: every organizer produces a different report, every executive asks different follow-up questions, and insights that require cross-referencing multiple data sources (e.g., "did APAC attendees engage more in live Q&A than on-demand viewers?") often go unanalyzed because the manual lift is too high.

QUANTIFIED BASELINE:

Metric                                               | Measured Baseline
-----------------------------------------------------|------------------------------
Time to generate post-event report                   | 6.8 hrs avg (n=19 surveyed)
Number of data exports required                      | 5.2 exports avg per event
Percentage of events with no post-mortem completed   | 34% (n=72 events, Q2-Q3 2025)
Follow-up questions from execs requiring re-analysis | 3.1 questions avg per event

Business case: 19 organizers × 24 events/year × 6.8 hrs/event × $72/hr = $223,258/year recoverable value. The 34% of events with no post-mortem represent lost institutional knowledge — teams repeat the same session format mistakes because insights never made it out of the raw data.

JTBD statement: "When my event ends, I want to receive a structured report showing what happened — session performance, audience behavior, drop-offs, segment differences — so I can brief stakeholders and improve the next event without spending a full day in spreadsheets."

Solution Design

INTEGRATION MAP:

┌─────────────────────────────────────────────────────────────────────┐
│ INTEGRATION SURFACE MAP                                             │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  Zuddl Event Database (PostgreSQL)                                 │
│  ├─ READ: event metadata (title, date, duration, format)           │
│  ├─ READ: registration table (attendee_id, profile fields, lang)   │
│  ├─ READ: attendance_logs (session_id, attendee_id, join/leave ts) │
│  ├─ READ: engagement_events (poll, chat, Q&A, reactions)           │
│  ├─ READ: session metadata (title, format, speaker, duration)      │
│  └─ WRITE: report_generation_status (triggered, in_progress, done) │
│                                                                     │
│  OpenAI GPT-4 API (external)                                       │
│  ├─ WRITE: prompt with event summary + session data                │
│  └─ READ: generated insight text, recommendations                  │
│                                                                     │
│  Zuddl Report Storage (S3)                                         │
│  ├─ WRITE: generated PDF blob                                      │
│  └─ WRITE: report JSON (for Notion sync, override tracking)        │
│                                                                     │
│  Notion API (external, Phase 1 + 2 weeks)                          │
│  ├─ WRITE: create page in user's workspace with report blocks      │
│  └─ WRITE: update page if organizer edits in Zuddl and re-syncs    │
│                                                                     │
│  Zuddl Dashboard UI (React)                                        │
│  ├─ READ: report JSON for preview rendering                        │
│  ├─ WRITE: override edits (user clicks inline, saves changes)      │
│  └─ TRIGGER: PDF download, Notion sync, email delivery             │
└─────────────────────────────────────────────────────────────────────┘

CORE MECHANIC:

When an event's status changes to "Ended" (triggered by organizer clicking "End Event" or auto-triggered 2 hours after scheduled end time if no manual trigger), the system queues a report generation job. The job runs in three stages:

  1. Data aggregation (5-10 min): Pull all registration, attendance, engagement, and session metadata from Zuddl's event database. Calculate session-level metrics (attendance rate, average watch time, drop-off curve, poll/chat/Q&A participation rate). Calculate event-level benchmarks (median session attendance rate, median engagement rate). Segment attendees by profile language (primary) and country (fallback). Generate cross-session journey map (which sessions did each attendee visit, in what order, for how long).

  2. AI insight generation (10-15 min): Send aggregated data to OpenAI GPT-4 API with structured prompt: "You are analyzing a virtual conference. Here is the session performance data: [JSON]. Here are the audience segments: [JSON]. Generate: (a) 3-5 key insights about what worked and what didn't, (b) session engagement scores with labels (e.g., 'High engagement - strong Q&A participation'), (c) 2-4 recommendations for the next event (e.g., 'Consider shorter sessions - 60min sessions had 18% higher drop-off than 30min sessions')." Parse API response, extract insight text, map to report sections.

  3. Report assembly and delivery (5 min): Render report as structured JSON (sections: executive summary, session performance table with engagement scores, drop-off heatmaps, audience segment breakdown, AI recommendations). Generate PDF from JSON using server-side renderer. Store PDF in S3, store JSON in report storage. Send email to organizer with PDF attachment + link to in-app report preview. Surface report in Zuddl dashboard under "Events > [Event Name] > Post-Event Report."
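Stage 1's cross-session journey map can be sketched as below. This is an illustrative sketch, not the production implementation: the row fields (`attendee_id`, `session_id`, `join_ts`, `leave_ts`) follow the integration map above, but the exact schema and timestamp units (epoch seconds here) are assumptions.

```python
from collections import defaultdict

def build_journey_map(attendance_logs):
    """Build the cross-session journey map: for each attendee, which
    sessions they visited, in what order, and for how many minutes.

    attendance_logs: iterable of dicts with attendee_id, session_id,
    join_ts, leave_ts (epoch seconds). Returns a dict mapping
    attendee_id -> ordered list of (session_id, minutes_watched).
    """
    by_attendee = defaultdict(list)
    for row in attendance_logs:
        by_attendee[row["attendee_id"]].append(row)

    journey_map = {}
    for attendee_id, rows in by_attendee.items():
        rows.sort(key=lambda r: r["join_ts"])  # visit order = join order
        journey_map[attendee_id] = [
            (r["session_id"], round((r["leave_ts"] - r["join_ts"]) / 60, 1))
            for r in rows
        ]
    return journey_map
```

In production this aggregation would run as a single SQL query over `attendance_logs`; the in-memory version only shows the shape of the output.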

PRIMARY USER FLOW:

┌─────────────────────────────────────────────────────────────────────┐
│ ORGANIZER JOURNEY — POST-EVENT REPORT                              │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  Day 0 (event day):                                                │
│  14:00 → Event ends, organizer clicks "End Event" in dashboard     │
│  14:01 → System queues report generation job, shows toast:         │
│          "Your post-event report will be ready within 24 hours.    │
│           We'll email you when it's done."                         │
│                                                                     │
│  Day 1 (next morning):                                             │
│  09:12 → Organizer receives email: "Your Zuddl Acme Summit 2025   │
│          Post-Event Report is ready" with PDF attached + link      │
│  09:15 → Organizer opens Zuddl dashboard, clicks "View Report"     │
│  09:16 → Report preview loads (web view, not just PDF):            │
│          - Executive summary (AI-generated, 3 key insights)        │
│          - Session performance table (12 sessions, engagement      │
│            scores 0-100, labels like "High engagement" or          │
│            "Significant drop-off after 22min")                     │
│          - Audience breakdown (47% English, 31% Spanish, 22% other)│
│          - Recommendations (4 bullets, e.g., "Shorter sessions     │
│            had 18% better retention")                              │
│  09:18 → Organizer notices AI labeled "Keynote Q&A" as "Low       │
│          engagement (42/100)" — knows this is wrong because chat   │
│          was very active, but AI only counted poll interactions    │
│  09:19 → Organizer clicks the "42/100" score inline, dropdown      │
│          appears: "Override this score?" — enters "78" and writes  │
│          note "High chat activity, AI missed this context"         │
│  09:20 → Organizer clicks "Save Overrides" — report re-renders    │
│          with updated score, adds footnote "* Edited by organizer" │
│  09:22 → Organizer clicks "Export to Notion" — OAuth modal opens  │
│          (first-time only), connects Zuddl to Notion workspace     │
│  09:24 → Report syncs to Notion as new page under "Events" folder │
│  09:30 → Organizer shares Notion link with exec team in Slack      │
│                                                                     │
│  Day 2 (debrief meeting):                                          │
│  10:00 → Exec asks "Why did the afternoon sessions lose people?"  │
│  10:02 → Organizer opens report, points to drop-off heatmap showing│
│          3pm session lost 40% of attendees in first 15min          │
│  10:03 → Exec asks "What should we do differently next time?"     │
│  10:04 → Organizer reads AI recommendation: "Consider moving       │
│          high-value content to morning slots when engagement was   │
│          22% higher on average"                                    │
│  10:10 → Debrief ends 20min early (historically took 90min)       │
└─────────────────────────────────────────────────────────────────────┘

KEY DESIGN DECISIONS:

  • Inline override mechanism: Clicking any AI-generated metric, label, or insight text opens an inline editor (text input for metrics, textarea for narrative insights). Organizer edits, clicks save, system writes override to report JSON, re-renders preview, marks edited sections with "* Edited by organizer" footnote. Overrides persist across PDF re-download and Notion re-sync. No version history in Phase 1 — latest edit wins.
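The override write can be sketched as a small transform over the report JSON. Field names such as `overridden` and `override_note` are illustrative assumptions, not the shipped schema; the "latest edit wins" behavior follows the Phase 1 decision above.

```python
def apply_override(report, section_id, field, new_value, note, author="organizer"):
    """Write an inline override into the report JSON. Latest edit wins
    (no version history in Phase 1); the section is flagged so the
    renderer can append the '* Edited by organizer' footnote."""
    section = report["sections"][section_id]
    section[field] = new_value
    section["overridden"] = True
    section["override_note"] = note
    section["override_author"] = author
    return report
```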

  • Engagement score formula (Phase 1): Simple weighted average of 4 normalized metrics: (attendance_rate × 0.25) + (poll_participation_rate × 0.25) + (chat_message_rate × 0.25) + ((1 - drop_off_rate) × 0.25). Each metric is min-max normalized to a 0-1 scale within the event (best session = 1.0, worst = 0.0), and the weighted sum is multiplied by 100 to produce a 0-100 score. Label thresholds: 0-40 = "Low engagement", 41-65 = "Moderate engagement", 66-85 = "High engagement", 86-100 = "Exceptional engagement". Thresholds tuned based on 47 historical events analyzed Aug-Sep 2025.
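A sketch of that formula, assuming per-event min-max normalization as described. The PRD does not specify the degenerate case where every session ties on a metric, so the sketch returns 0.5 there; that choice is an assumption.

```python
def minmax(values):
    """Min-max normalize within the event (best = 1.0, worst = 0.0).
    If all sessions tie, the spread is zero; return 0.5 (assumption)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def engagement_scores(sessions):
    """sessions: dicts with attendance_rate, poll_participation_rate,
    chat_message_rate, drop_off_rate. Returns (score, label) per session."""
    normed = {
        m: minmax([s[m] for s in sessions])
        for m in ("attendance_rate", "poll_participation_rate", "chat_message_rate")
    }
    # Drop-off is inverted so that lower drop-off raises the score.
    normed["retention"] = minmax([1 - s["drop_off_rate"] for s in sessions])

    results = []
    for i in range(len(sessions)):
        # Equal 0.25 weights = plain average of the four components.
        score = round(sum(col[i] for col in normed.values()) / 4 * 100)
        if score <= 40:
            label = "Low engagement"
        elif score <= 65:
            label = "Moderate engagement"
        elif score <= 85:
            label = "High engagement"
        else:
            label = "Exceptional engagement"
        results.append((score, label))
    return results
```

Because normalization is event-scoped, the best and worst sessions in any event pin the 100 and 0 ends of the scale, which is exactly the "relative benchmark" behavior chosen in the Strategic Decisions section.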

  • Drop-off heatmap: Per-session timeline chart (X-axis = minutes into session, Y-axis = % of attendees still watching). Data sampled at 1-minute intervals from attendance_logs table. Rendered as line chart with threshold shading: red zone (>30% cumulative drop-off), yellow zone (15-30%), green zone (<15%). AI insight triggered if any session crosses 30% drop-off threshold before the halfway mark: "Significant drop-off detected at [X] minutes — review content pacing or technical issues."
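The sampling and shading logic reduces to a few lines. In this sketch, join/leave times are assumed to be minutes relative to session start; the production version would sample the `attendance_logs` table directly.

```python
def drop_off_curve(join_leave_pairs, duration_min):
    """Sample retention at 1-minute intervals.

    join_leave_pairs: (join_min, leave_min) per attendee, minutes
    relative to session start. Returns % of minute-0 attendees still
    present at each minute 0..duration_min.
    """
    raw = []
    for minute in range(duration_min + 1):
        present = sum(1 for j, l in join_leave_pairs if j <= minute < l)
        raw.append(present)
    base = raw[0] or 1  # guard against an empty session
    return [round(100 * n / base, 1) for n in raw]

def zone(cumulative_drop_off_pct):
    """Threshold shading per the PRD: red >30%, yellow 15-30%, green <15%."""
    if cumulative_drop_off_pct > 30:
        return "red"
    if cumulative_drop_off_pct >= 15:
        return "yellow"
    return "green"
```

The AI insight trigger is then a scan for the first minute before the halfway mark where `100 - curve[minute]` exceeds 30.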

  • Multilingual audience breakdown: Pie chart showing % of attendees by profile language (self-reported during registration). Fallback logic: if profile_language is null, use registration_country as proxy via ISO language mapping (e.g., country=FR → language=French). If both null, group into "Language unknown" slice. Segment detail table shows engagement metrics per language group (avg sessions attended, avg engagement score). Minimum segment size for display: 5% of total attendees (prevents tiny slices cluttering the chart).
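The fallback chain can be sketched as follows. The `COUNTRY_TO_LANGUAGE` excerpt is illustrative only, and a country that is present but missing from the mapping is treated as unknown here, an assumption the PRD leaves open.

```python
# Illustrative excerpt; the real ISO mapping would cover all countries.
COUNTRY_TO_LANGUAGE = {"FR": "French", "ES": "Spanish", "DE": "German"}

def language_breakdown(attendees, min_share=0.05):
    """Segment attendees by profile_language, falling back to
    registration_country, then 'Language unknown'. Returns % per slice."""
    counts = {}
    for a in attendees:
        lang = (a.get("profile_language")
                or COUNTRY_TO_LANGUAGE.get(a.get("registration_country"))
                or "Language unknown")
        counts[lang] = counts.get(lang, 0) + 1

    total = len(attendees)
    grouped = {}
    for lang, n in counts.items():
        # Slices under min_share collapse into "Other languages";
        # "Language unknown" stays separate so the report can warn on it.
        key = lang
        if lang != "Language unknown" and n / total < min_share:
            key = "Other languages"
        grouped[key] = grouped.get(key, 0) + n
    return {k: round(100 * v / total, 1) for k, v in grouped.items()}
```

The US4 warning condition is then simply whether the "Language unknown" slice reaches 20%.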

SCOPE BOUNDARY:

IN SCOPE (Phase 1):

  • Auto-generated report within 24 hours of event end
  • Session engagement scores (0-100) with AI-generated labels
  • Drop-off heatmaps (per-session, minute-by-minute)
  • Audience breakdown by language and geography
  • AI-generated recommendations (format, timing, session length)
  • Inline override for any AI metric or label
  • PDF export (email + in-app download)
  • Notion sync (OAuth, live page creation)

OUT OF SCOPE (Phase 1):

  • Real-time report generation during event
  • Speaker-specific performance feedback
  • Topic or content recommendations (requires NLP on session transcripts)
  • Predictive modeling (e.g., "next event will attract X attendees")
  • Google Slides export (deferred to Phase 1.2)
  • Customizable report templates (single default template in Phase 1)
  • Version history for overrides (deferred to Phase 1.1)
  • Attendee-level journey drill-down (aggregate view only in Phase 1)

CRITICAL PATH INTEGRATION WORK:

  1. Event status webhook: Engineering must implement event.ended webhook that triggers report generation job. Currently, event status changes do not trigger background jobs — this is net-new infrastructure. Estimated 1 sprint to build job queue + worker pool + retry logic.

  2. OpenAI API rate limits: Zuddl does not currently use OpenAI API in production. Must provision API key, set up rate limit monitoring, implement exponential backoff for 429 errors. If rate limit is hit during high-volume event day (e.g., 10 events end simultaneously), reports queue and deliver sequentially. Engineering must define acceptable queue depth before alerting. Estimated 0.5 sprint.

  3. Notion OAuth flow: Zuddl does not currently integrate with Notion. Must implement OAuth 2.0 flow (authorize, token exchange, refresh token storage), Notion API client (create page, update page, handle pagination for large reports), error handling for workspace permission issues (e.g., user revokes Zuddl access mid-sync). Estimated 1.5 sprints.

  4. Report preview UI: New React component for rendering report JSON as interactive web view (not just PDF iframe). Must support inline editing (click to override), live preview updates, save/cancel actions. Estimated 1 sprint.

Acceptance Criteria

Phase 1 — MVP: 8 weeks

US1 — Report Auto-Generation on Event End

  • Given an event has status = "Active" and scheduled_end_time has passed by 2 hours
  • When the system cron job runs (every 15 minutes)
  • Then the system changes event status to "Ended", queues a report generation job, and sends in-app notification to organizer: "Your post-event report will be ready within 24 hours"
  • And the job completes data aggregation, AI insight generation, and report assembly within 24 hours for events with ≤1,000 attendees and ≤50 sessions (99th percentile event size per Aug 2025 data)
  • If this story fails, organizers receive no notification that a report is being generated and may contact support asking "where is my report?" — increasing ticket volume
  • Validated by QA lead (Priya) against 20-event test dataset with varied sizes (50-1,000 attendees, 5-50 sessions)
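The US1 cron pass can be sketched as below, a minimal in-memory illustration; the real job would query the event database, write status transitions transactionally, and enqueue to a durable queue rather than return a list.

```python
from datetime import datetime, timedelta

def auto_end_events(events, now, grace=timedelta(hours=2)):
    """One cron pass (assumed to run every 15 minutes): flip Active
    events whose scheduled end passed more than `grace` ago to Ended,
    and return the report generation jobs to enqueue."""
    jobs = []
    for event in events:
        if event["status"] == "Active" and now >= event["scheduled_end_time"] + grace:
            event["status"] = "Ended"
            jobs.append({"event_id": event["id"], "queued_at": now})
    return jobs
```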

US2 — Session Engagement Score Calculation

  • Given a completed event with ≥5 sessions and ≥50 attendees
  • When the report generation job runs
  • Then each session receives an engagement score from 0-100 calculated as: (attendance_rate × 0.25 + poll_participation_rate × 0.25 + chat_message_rate × 0.25 + (1 - drop_off_rate) × 0.25) × 100, with each component min-max normalized to 0-1 within the event
  • And each score is labeled: 0-40 = "Low engagement", 41-65 = "Moderate engagement", 66-85 = "High engagement", 86-100 = "Exceptional engagement"
  • And scores are displayed in the report's session performance table with label and raw score (e.g., "High engagement (72/100)")
  • If this story fails, organizers cannot compare session performance — the core value prop of the feature is missing
  • Validated by Analytics lead (Rohan) against 15 historical events with known high/low performers (manual spot-check that scores align with organizer intuition)

US3 — Drop-Off Heatmap Rendering

  • Given a session with ≥30 attendees and duration ≥15 minutes
  • When the report generation job runs
  • Then the system samples attendance_logs at 1-minute intervals, calculates % of original attendees still present at each interval, and renders a line chart with X-axis = minutes, Y-axis = % remaining
  • And the chart is shaded: red (>30% cumulative drop-off), yellow (15-30%), green (<15%)
  • And if any session crosses 30% drop-off before the halfway mark (e.g., 30% drop-off at 12min in a 30min session), the AI generates insight: "Significant drop-off detected at [X] minutes in [Session Name] — review content pacing or technical issues"
  • If this story fails, organizers cannot pinpoint when/why audiences left — they lose the "aha moment" insight the feature promises
  • Validated by Product Designer (Anya) against 8 test sessions with known drop-off events (e.g., technical glitch at 10min, boring speaker at 20min)

US4 — Multilingual Audience Breakdown

  • Given an event with attendees from ≥3 distinct profile languages (or ≥3 distinct countries if language is null)
  • When the report generation job runs
  • Then the system groups attendees by profile_language (primary), falls back to registration_country → ISO language mapping if profile_language is null, and generates pie chart showing % distribution
  • And any language group representing <5% of attendees is grouped into "Other languages" slice
  • And a detail table shows avg_sessions_attended and avg_engagement_score per language group
  • And if ≥20% of attendees have neither profile_language nor registration_country, the report surfaces warning: "Language data unavailable for [X]% of attendees — segment analysis may be incomplete"
  • If this story fails, global event organizers (primary customer segment) lose the geographic/cultural insight differentiation vs. competitors
  • Validated by Customer Success lead (Mei) with 3 enterprise customers running multilingual events in Q4 2025

US5 — AI-Generated Recommendations

  • Given a completed event with ≥10 sessions
  • When the report generation job sends event data to OpenAI GPT-4 API
  • Then the API returns 2-4 recommendations (format: bullet list, each ≤50 words) addressing: (a) session format (e.g., "Q&A sessions had 15% higher engagement than panels — consider more interactive formats"), (b) session length (e.g., "60min sessions had 22% higher drop-off than 30min — test shorter formats"), (c) scheduling (e.g., "Morning sessions (9-11am) had 18% better attendance — prioritize key content early")
  • And recommendations are included in the "What to Try Next" section of the report
  • And if the API call fails (timeout, rate limit, 500 error), the system retries 3× with exponential backoff (2s, 8s, 32s), and if all retries fail, the report ships with placeholder text: "Recommendations unavailable — contact support" and engineering is paged
  • If this story fails, organizers get data but no actionable guidance — the "insights" promise is half-delivered
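The US5 retry policy can be sketched as below. This is a sketch only: paging engineering and error classification (e.g., retrying 429s and timeouts but not 400s) are elided, and the injectable `sleep` parameter exists purely to make the schedule testable.

```python
import time

def call_with_backoff(request_fn, delays=(2, 8, 32), sleep=time.sleep):
    """Initial attempt plus one retry per delay (the PRD's 2s/8s/32s
    schedule). If every attempt fails, ship the placeholder copy."""
    for delay in (0,) + tuple(delays):
        if delay:
            sleep(delay)
        try:
            return request_fn()
        except Exception:
            pass
    return "Recommendations unavailable — contact support"
```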

Strategic Decisions Made

Decision: Report generation timing — real-time during event vs. post-event batch
Choice Made: Post-event batch generation triggered automatically when event status changes to "Ended," with a 24-hour SLA for delivery
Rationale: Real-time generation during the event would compete for compute resources with live event infrastructure (video streaming, chat, polls) and introduce latency risk during the highest-stakes moment. Organizers do not need the report while the event is running — they need it for the post-event debrief, which typically happens 1-3 days later. Batch generation allows us to use heavier ML models (embeddings for session similarity, clustering for audience segmentation) without impacting live event performance.
Rejected: Real-time dashboard (adds complexity, unclear user need); 48-hour SLA (too slow for the Monday-event, Wednesday-debrief cycle common in enterprise).

────────────────────────────────────────

Decision: AI insight override mechanism — inline editing vs. approval workflow vs. version history
Choice Made: Inline editing with single-click override — the organizer clicks any AI-generated label or insight text, edits directly in the report preview, saves, and the override persists in PDF/Notion export
Rationale: Organizers need to correct AI mistakes fast (e.g., the AI labels a Q&A session "low engagement" because it had no poll interactions, but the organizer knows the chat was highly active). An approval workflow (AI draft → review → approve/reject) adds friction and implies the AI output is a "proposal" rather than a working draft. Inline editing treats the AI output as a starting point the organizer refines, matching the mental model of the translation override flows already in Zuddl. Version history was considered but deferred to Phase 1.1 — it adds complexity for marginal value in an MVP.
Rejected: Approval workflow (too slow); no override capability (organizers won't trust black-box AI labels).

────────────────────────────────────────

Decision: Session engagement scoring algorithm — absolute metrics (poll count, chat volume) vs. relative benchmarks (this session vs. event avg) vs. ML clustering
Choice Made: Relative benchmarks with event-scoped normalization — each session gets an engagement score from 0-100 based on its performance vs. the event average across 4 dimensions (attendance rate, poll participation rate, chat message rate, drop-off rate), weighted equally in Phase 1
Rationale: Absolute metrics are meaningless across events (100 chat messages in a 50-person event is high; in a 500-person event it is low). ML clustering (k-means on engagement vectors) was considered but deferred — it requires training data we don't have yet, and explainability is harder ("why is this session in cluster 2?" is a harder question than "this session had 23% higher poll participation than the event average"). Relative benchmarks are explainable, require no training, and give organizers the comparison they actually want: "was this session better or worse than the rest of my event?"
Rejected: Absolute metrics (not comparable across events); ML clustering (deferred to Phase 1.2, after we have baseline data).

────────────────────────────────────────

Decision: Multilingual audience breakdown — detected language vs. self-reported profile language vs. geographic proxy
Choice Made: Self-reported profile language as the primary dimension, with geographic proxy (country) as a fallback when profile language is null
Rationale: Zuddl already captures profile language during registration (optional field, 67% completion rate per CRM data, Aug 2025). Detected language (from chat messages, poll responses) was considered but introduces bias — users who don't engage in chat/polls disappear from the breakdown, and language detection on short text is error-prone. Geographic proxy (IP-based country or the registration country field) is available for 98% of attendees but is a weak signal for language (e.g., India has 22 official languages; the US has a large Spanish-speaking population). Hybrid approach: use profile language when available, fall back to country, and surface the "language data unavailable" count in the report.
Rejected: Detected language only (excludes non-engagers); no multilingual breakdown (feedback from 3 enterprise customers in Q3 said this was a top-5 request).

────────────────────────────────────────

Decision: Export format — PDF only vs. Notion sync vs. Google Slides API vs. all three
Choice Made: PDF export (Phase 1 MVP) + Notion sync (Phase 1, 2 weeks post-MVP) — Google Slides API deferred to Phase 1.2
Rationale: PDF is table stakes and requires no third-party API integration (render server-side, deliver via email + in-app download). Notion sync is the wedge — 4 of 8 enterprise customers interviewed (Aug 2025) said they keep event retrospectives in Notion, and live sync (the report updates in Notion if the organizer edits in Zuddl) is a unique capability vs. "export to PDF and upload manually." The Google Slides API was considered, but its authentication flow is heavier (OAuth, token refresh, Drive permissions) and Slides is less commonly used for event retrospectives per customer interviews (2 of 8 mentioned Slides, 4 mentioned Notion, 2 mentioned Confluence). Notion first, Slides second.
Rejected: PDF only (misses the wedge opportunity); all three in Phase 1 (scope risk for marginal value).

────────────────────────────────────────

Decision: Recommendation engine scope — next-event format suggestions vs. session topic recommendations vs. speaker performance feedback vs. all three
Choice Made: Next-event format suggestions only (Phase 1) — e.g., "based on drop-off patterns, consider shorter sessions (30min vs. 45min)" or "based on engagement scores, Q&A format outperformed panel format" — no speaker-specific feedback, no topic recommendations
Rationale: Format recommendations are defensible from the data we have (session length, format type, engagement/drop-off metrics) and are actionable without requiring external data. Topic recommendations would require content analysis (session titles, descriptions, transcripts) and a topic taxonomy we don't have — deferred to Phase 1.2. Speaker performance feedback is politically sensitive (organizers may not want the AI saying "Speaker X underperformed") and introduces legal/HR risk if the data is misinterpreted — out of scope for Phase 1 entirely.
Rejected: Speaker feedback (legal risk, organizer discomfort); topic recommendations (requires content-analysis infrastructure not in scope).
