Data scientists and analysts at mid-sized banks and NBFCs today must manually aggregate, cleanse, and visualize credit portfolio data from disparate core banking and CRM systems. This process—often involving Excel, Tableau, and custom Python scripts—consumes an average of 11 hours per analyst per week on report generation alone (source: internal interviews with 8 target institutions, Feb 2025). The manual workflow introduces a 4-7% error rate in key risk metrics like PD and LGD, leading to mispriced loans and regulatory exposure.
Business case: 250 targetable mid-sized institutions (source: RBI 2023-24 bank list, filters applied) × 4.2 average analysts per institution (source: LinkedIn headcount sampling) × 11 hours saved weekly (source: validated baseline) × $42/hour fully-loaded cost (source: Regional Cost Benchmarks, India B2B SaaS) × 48 working weeks = $23.3M/year recoverable analyst capacity. If adoption is 40% of estimate: $9.3M/year. This MVP must prove we can capture at least 20% of that time savings.
ParallelHQ is a no-code dashboard that ingests raw credit data and renders regulatory-grade visualizations in 4 clicks. It is not a core banking system, a loan origination platform, or a replacement for a data warehouse—it is a read-only analytics layer on top of existing systems.
Primary Metrics:
| Metric | Baseline | Target | Kill Threshold | Measurement Method |
|---|---|---|---|---|
| Time to first viz | 67 min | ≤15 min | >30 min (D90) | Mixpanel workflow |
| Weekly active users (per pilot org) | 0 | 70% of pilots | <40% (D90) | Amplitude + Stripe |
| Report export rate (exports/user/week) | N/A | ≥2/user/week | <0.5/user/week (D90) | Dashboard telemetry |
Guardrail Metrics (must NOT degrade):
| Guardrail | Threshold | Action if Breached |
|---|---|---|
| Data upload error rate | <5% of sessions | Immediate eng swarm |
| P95 dashboard latency | <3 seconds | Scale backend, cache |
| CSAT (post-session) | ≥4.0/5.0 | User interview sprint |
What We Are NOT Measuring:
Decision: Authentication & Data Isolation Model
Choice Made: Email/password per institution (no SSO), database-level row isolation by org_id.
Rationale: SSO (Okta, Azure AD) adds roughly 3 weeks of integration time; email/password auth gets pilots live sooner. Row-level isolation by org_id is simpler to operate than schema-per-tenant in PostgreSQL.
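A minimal sketch of the row-isolation pattern, using stdlib SQLite as a stand-in for PostgreSQL. The `loans` table and the `loans_for_org` helper are illustrative; in production the same guarantee would more likely come from a PostgreSQL row-level-security policy than from an application-side filter.

```python
import sqlite3

# Illustrative multi-tenant table: every row carries the tenant's org_id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER, org_id TEXT, principal REAL)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [(1, "org_a", 50000.0), (2, "org_b", 75000.0), (3, "org_a", 12000.0)],
)

def loans_for_org(conn, org_id):
    # Every read path goes through a helper that appends the org_id
    # filter, so one tenant can never see another tenant's rows.
    return conn.execute(
        "SELECT id, principal FROM loans WHERE org_id = ?", (org_id,)
    ).fetchall()

print(loans_for_org(conn, "org_a"))  # only org_a's rows come back
```

The design choice is that isolation lives at the query layer keyed on a single column, which is cheap to test and audit; schema-per-tenant would isolate harder but multiplies migration and backup overhead during a 10-week pilot.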
────────────────────────────────────────────────
Decision: PDF Export Fidelity vs. Speed
Choice Made: Prioritize 100% data accuracy over pixel-perfect RBI formatting.
Rationale: Pilot institutions will accept "draft watermarked" PDFs if the numbers are correct. Perfect formatting is a scaling problem, not a validation problem.
────────────────────────────────────────────────
Decision: Industry Benchmark Data Source
Choice Made: Use synthetic benchmarks based on RBI annual reports, labeled "Illustrative".
Rationale: Licensing real benchmark data (CRISIL, CIBIL) requires 6-month negotiations. Synthetic data proves the UX; we can replace with real data post-validation.
────────────────────────────────────────────────
Pre-Mortem: It is 6 months from now and this feature has failed. The 3 most likely reasons are:
Success looks like: Pilot analysts stop their weekly "Python Thursday" scripting ritual. Compliance managers reference ParallelHQ screenshots in board meetings. The sales lead reports, "They're asking how to add their second loan book, not if the tool is secure." We have a 6-month product roadmap co-created with 4 paying institutions.
Competitors solve the credit analytics problem in two ways: FICO Dashboard provides entrenched but generic risk scores requiring expensive customization ($250K+ engagements). Tableau/Custom Python is hired for unlimited flexibility but requires scarce data science talent, creating a 3-6 week lag for new reports.
| Capability | FICO Analytics | Tableau + Python | ParallelHQ |
|---|---|---|---|
| No-code report builder | ❌ | ❌ | ✅ (unique) |
| Pre-built RBI compliance viz | ✅ | ❌ (build from 0) | ✅ (launch-ready) |
| API + CSV ingestion | ❌ (SFTP only) | ✅ | ✅ |
| Industry benchmark overlay | ✅ (extra cost) | ❌ | ✅ (included) |
| WHERE WE LOSE | Brand trust & enterprise SLAs | Unlimited customization depth | Ecosystem maturity & deep report complexity |
Our wedge is no-code simplicity for regulatory reporting because analysts need compliance-ready views today, not in 6 weeks, and cannot wait for IT or data science backlogs.
We hypothesize that credit analysts will replace their manual weekly reporting workflow (Excel → Python → Tableau) with ParallelHQ if it delivers audit-ready visualizations in under 15 minutes from raw data upload, with zero SQL or code required.
| Metric | Measured Baseline |
|---|---|
| Weekly portfolio health report creation | 11.2 hours avg (n=8 surveyed) |
| Error rate in manual PD/LGD calculations | 5.3% avg (n=120 sampled reports) |
| Time to add new metric to existing report | 3.5 days avg (IT ticket + dev) |
Business case math: 11.2 hours × $42/hour × 48 weeks = $22,579/year/analyst recoverable time. 1,050 analysts (250 inst × 4.2) × $22,579 = $23.7M total addressable time cost. Our 20% capture target = $4.74M/year.
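The arithmetic above, restated as a short script. The constants are the sourced figures from this section; nothing here is new data.

```python
# Business-case math from the measured baselines cited in this document.
INSTITUTIONS = 250          # RBI 2023-24 bank list, filtered
ANALYSTS_PER_INST = 4.2     # LinkedIn headcount sampling
HOURS_SAVED_WEEKLY = 11.2   # validated baseline (n=8)
HOURLY_COST = 42            # fully-loaded USD cost
WORKING_WEEKS = 48

per_analyst = HOURS_SAVED_WEEKLY * HOURLY_COST * WORKING_WEEKS
analysts = int(INSTITUTIONS * ANALYSTS_PER_INST)
total = analysts * per_analyst
capture_target = 0.20 * total

print(f"${per_analyst:,.0f}/year recoverable per analyst")  # $22,579
print(f"{analysts} analysts → ${total / 1e6:.1f}M total")    # 1050 → $23.7M
print(f"20% capture target = ${capture_target / 1e6:.2f}M")  # $4.74M
```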
P0 (Weeks 1-6): Core validation engine.
```
┌─────────────────────────────────────────────────────────────────┐
│ Credit Score Distribution              [Export PDF]   [← Back]  │
├─────────────────────────────────────────────────────────────────┤
│ ████████████████████  720-750: 24%                              │
│ ████████████          680-710: 18%                              │
│ █████████             650-680: 14%                              │
│                                                                 │
│ Industry Avg: █████████████  690 mean                           │
│ Your Mean:    705 (+15 pts)                                     │
└─────────────────────────────────────────────────────────────────┘
```
P1 (Weeks 7-10): User workflow.
- Drag-drop metrics builder (max 4 metrics per report).
- Export to PDF with company watermark.
- Role-based access: admin (upload), analyst (view/build), viewer (read-only).
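The three Phase 1 roles can be sketched as a simple permission map. The permission names (`upload`, `build`, `view`, `export`) are assumptions for illustration, not a final API surface.

```python
# Illustrative role → permission map for the three Phase 1 roles.
ROLE_PERMISSIONS = {
    "admin":   {"upload", "build", "view", "export"},  # admin (upload)
    "analyst": {"build", "view", "export"},            # analyst (view/build)
    "viewer":  {"view"},                               # viewer (read-only)
}

def can(role: str, action: str) -> bool:
    # Unknown roles get no permissions rather than raising.
    return action in ROLE_PERMISSIONS.get(role, set())

assert can("admin", "upload")
assert not can("analyst", "upload")
assert can("viewer", "view") and not can("viewer", "export")
```

A flat allow-list like this is deliberately dumb: it is enough for a 10-institution pilot and trivially replaceable by a real RBAC layer later.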
Phase 1 — MVP Pilot (10 weeks). Target: 10 paid pilot institutions ($1K/month, annual contract).
US#1 — Analyst Upload & Validate
US#2 — No-Code Report Creation
Out of Scope (Phase 1):
| Feature | Why Not Phase 1 |
|---|---|
| Real-time API data streams | Batch CSV covers 80% of monthly reporting |
| Custom calculation editor | Pre-built metrics cover RBI Pillar 3 needs |
| Advanced ML risk models | Overkill for validation; build if D90 NPS>40 |
| On-prem deployment | SaaS-only validates compliance risk first |
Compliance Acceptance: Analysts will trust and use an external SaaS tool for sensitive credit data without requiring on-prem deployment.
Data Schema Universality: Our predefined data validators will handle 80% of customer CSV formats without custom mapping.
No-Code Sufficiency: The drag-drop report builder will satisfy 70% of ad-hoc analysis needs without SQL fallback.
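The schema-universality assumption is cheap to smoke-test before writing mapping UI. A minimal sketch of a predefined validator follows; the required columns and the 300-900 score range are illustrative, not the real schema.

```python
import csv
import io

# Hypothetical required columns for a predefined validator.
REQUIRED_COLUMNS = {"loan_id", "outstanding_balance", "credit_score"}

def validate_csv(text: str) -> list[str]:
    """Return a list of validation errors; an empty list means the file passes."""
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    errors = []
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        try:
            score = float(row["credit_score"])
            if not 300 <= score <= 900:  # CIBIL-style score range (assumed)
                errors.append(f"row {i}: credit_score out of range")
        except ValueError:
            errors.append(f"row {i}: credit_score not numeric")
    return errors

sample = "loan_id,outstanding_balance,credit_score\nL001,50000,720\nL002,75000,abc\n"
print(validate_csv(sample))  # ['row 3: credit_score not numeric']
```

Running validators like this against real pilot exports in week 1 gives an early read on whether the 80% no-custom-mapping target is plausible.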
Kill Criteria — we pause and conduct a full review if ANY are met within 90 days of pilot launch:
≥50% of report-building sessions require IT assistance (measured via support-ticket tagging).
Minimum Viable Experiment (4 weeks, $15K burn): Instead of building the full dashboard, create a static PDF generator that takes a standardized CSV template and emails back 3 pre-rendered charts with benchmark overlays. Manual onboarding via Zoom call.