PRD · March 31, 2026

Netenrich Adaptive MDR

Executive Brief

Security operations teams today rely on quarterly manual audits to validate detection coverage against evolving threat landscapes. A detection engineer at a typical mid-market enterprise spends 40 hours each quarter grepping through SIEM queries, cross-referencing MITRE ATT&CK mappings in spreadsheets, and manually flagging gaps for backlog prioritization—creating a 90-day blind spot where emerging threats like new PowerShell obfuscation techniques or novel LOLBins go undetected. By the time the gap is identified, adversaries have already tested the organization's defenses.

12 analysts [source: median customer SOC team size, internal CRM analytics, Aug 2025] × 160 hours/year [4 quarterly audits × 40 hrs each, source: CISO advisory board interviews, n=8] × $95/hour [source: 2024 Gartner InfoSec compensation survey, fully-loaded] = $182,400/year per customer in recoverable analyst time. If adoption reaches only 40% of eligible analysts: $73,000/year per customer. This excludes the risk-adjusted value of reduced breach dwell time, which we will quantify post-launch using D90 incident data.
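The savings arithmetic above can be checked with a short script (inputs taken directly from the cited figures; the 40% adoption scenario rounds to the ~$73K stated):

```python
analysts = 12          # median customer SOC team size (internal CRM analytics)
hours_per_year = 160   # 4 quarterly audits x 40 hrs each
rate = 95              # fully-loaded $/hour (2024 compensation survey)

full_savings = analysts * hours_per_year * rate
partial_savings = full_savings * 0.40  # 40%-adoption scenario

print(full_savings)           # 182400
print(round(partial_savings)) # 72960, reported above as ~$73,000/year
```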

This feature is continuous, automated detection coverage gap analysis against MITRE ATT&CK v14 with native Google SecOps integration and human-in-the-loop rule recommendations. It is not an autonomous response system, a replacement for threat hunting teams, or a multi-SIEM abstraction layer—SecOps remains the system of record for detection logic and enforcement.

Competitive Analysis

How competitors solve this today:

  • Google SecOps (Chronicle): Analysts manually write YARA-L retroactive hunts and visually inspect rule coverage in static "Coverage" dashboards that don't auto-compare against threat intel.
  • Microsoft Sentinel: Provides "Analytics rule coverage" workbooks that list active rules, but requires manual export to compare against MITRE frameworks; no automated gap scoring.
  • CrowdStrike Falcon: Offers coverage visualization by tactic, but requires manual threat intel ingestion via Falcon Fusion; gap identification is point-in-time, not continuous.
  • Palo Alto Cortex XDR: Static MITRE mapping reports updated weekly, with no telemetry-aware gap detection (can't distinguish "no rule" from "no data").
| Capability | Google SecOps | Microsoft Sentinel | CrowdStrike Falcon | This Product |
|---|---|---|---|---|
| Continuous telemetry analysis | ❌ | ❌ | ❌ | ✅ |
| Automated MITRE gap mapping | ❌ | ❌ | ❌ | ✅ (with prioritization logic) |
| Native Google SecOps integration | ✅ | ❌ | ❌ | ✅ |
| **Where we lose** | Ecosystem depth (2yr head start on UDM data model) | Native Azure AD integration (agentless log collection) | Endpoint agent ubiquity (widespread sensor deployment) | — |

Our wedge is deep Google SecOps integration with daily automated analysis because we eliminate the manual hunt phase entirely rather than just visualizing existing rules.

Problem Statement

WHO / JTBD: When a threat detection PM or security architect learns of a new adversary technique (via threat intel feed, CVE disclosure, or incident post-mortem), they want to know within 24 hours whether their existing detection library covers it—so they can close coverage gaps before red team exercises or real adversaries exploit them.

WHERE IT BREAKS: Today, teams rely on quarterly calendar reminders. An analyst receives threat intel on Thursday, adds "check coverage" to a backlog sticky note, and waits six weeks for the next audit cycle. During the audit, they manually export detection rules from Google SecOps into spreadsheets, compare against MITRE ATT&CK matrices using VLOOKUPs, and identify gaps through tribal knowledge ("Did we write a rule for WMIC execution?"). The process produces static PDF reports that are outdated the moment new telemetry sources are added.

WHAT IT COSTS:

| Symptom | Frequency | Time Lost | Aggregate |
|---|---|---|---|
| Manual quarterly audit | 4×/year | 160 hrs/analyst | 1,920 hrs/yr (12-analyst team) |
| Emergency gap analysis (out-of-cycle) | 2.3×/quarter (source: incident post-mortems) | 8 hrs/ad-hoc | 74 hrs/yr |
| Coverage blind spot duration | Continuous | 90 days avg exposure | 3.2 missed techniques/quarter |

Aggregate annual cost: $182K labor waste + unquantified breach risk from detection latency (source: HR rates, workflow analysis, Aug 2025).

JTBD Statement: "When new threat intelligence emerges, I want to know immediately if my existing detection rules cover it, so I can deploy new detections within hours, not quarters."

Solution Design

API Contract (Source of Truth):

GET /api/v1/coverage/analysis/{tenant_id}/latest

{
  "analysis_id": "uuid-v4",
  "timestamp": "2025-01-15T02:00:00Z",
  "coverage_score": 0.73,
  "mitre_version": "14.1",
  "gaps": [
    {
      "technique_id": "T1059.003",
      "tactic": "Execution",
      "severity": "critical",
      "telemetry_available": ["process_creation", "command_line"],
      "existing_rules": [],
      "recommended_rule": {
        "yara_l": "rule windows_suspicious_cmdline { condition: ... }",
        "confidence": 0.94,
        "false_positive_estimate": "low"
      }
    }
  ]
}
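As a sketch of client-side consumption (the HTTP call and authentication are omitted; field names come from the contract above, with the payload trimmed to the documented fields):

```python
import json

# Example payload matching the documented response schema.
sample = """
{
  "analysis_id": "uuid-v4",
  "timestamp": "2025-01-15T02:00:00Z",
  "coverage_score": 0.73,
  "mitre_version": "14.1",
  "gaps": [
    {
      "technique_id": "T1059.003",
      "tactic": "Execution",
      "severity": "critical",
      "telemetry_available": ["process_creation", "command_line"],
      "existing_rules": []
    }
  ]
}
"""

def critical_gaps(payload: dict) -> list[str]:
    """Technique IDs of critical-severity gaps that have no existing rules."""
    return [
        g["technique_id"]
        for g in payload.get("gaps", [])
        if g["severity"] == "critical" and not g["existing_rules"]
    ]

analysis = json.loads(sample)
print(analysis["coverage_score"])  # 0.73
print(critical_gaps(analysis))     # ['T1059.003']
```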

Primary User Flow: Detection engineer receives Slack alert ("Critical gap: T1059.003") → Clicks link → Reviews coverage dashboard → Clicks technique row → Validates recommended YARA-L rule → Clicks "Open in SecOps" → Pastes rule into SecOps editor → Deploys. Target time: <20 minutes.

User Interface:

┌─────────────────────────────────────────────────────────────────┐
│ Coverage Overview                          [Export Report →]     │
├─────────────────────────────────────────────────────────────────┤
│ OVERALL SCORE: 73%                           Last run: 2hrs ago │
│ ┌──────────────────────────────────────┐                        │
│ │ [████████████░░░░░░░░]  Critical     │  Update frequency: Daily │
│ │ Coverage: 91% (28/31 techniques)     │                        │
│ ├──────────────────────────────────────┤                        │
│ │ [████████░░░░░░░░░░░░]  High         │  [View Gaps →]          │
│ │ Coverage: 62% (24/39 techniques)     │                        │
│ └──────────────────────────────────────┘                        │
├─────────────────────────────────────────────────────────────────┤
│ TOP PRIORITY GAPS                                    [Action]  │
│ T1059.003 Windows Command Shell      Critical   [Create Rule →]│
│ T1003.001 LSASS Memory               High       [Add Policy →] │
│ T1486 Data Encrypted for Impact      High       [Review →]     │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Gap: T1059.003 Windows Command Shell          [< Back to List] │
├─────────────────────────────────────────────────────────────────┤
│ RISK: Critical | DATA AVAILABLE: ✅ | DETECTION: ❌             │
├─────────────────────────────────────────────────────────────────┤
│ TELEMETRY SOURCES DETECTED:                                     │
│ • process_creation (Windows Security 4688)                     │
│ • command_line arguments (Sysmon Event ID 1)                   │
├─────────────────────────────────────────────────────────────────┤
│ RECOMMENDED DETECTION:                                          │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ rule: windows_suspicious_cmdline                             │ │
│ │ condition: process.command_line contains "powershell -enc"   │ │
│ │ severity: high                                               │ │
│ └─────────────────────────────────────────────────────────────┘ │
│                                        [Edit in SecOps →]      │
└─────────────────────────────────────────────────────────────────┘

Backward Compatibility: API versioned via URL path (/v1/). Response schema frozen for 12 months. UI feature-flags allow gradual rollout per tenant.

Acceptance Criteria

Phase 1 — MVP: 8 weeks

US1 — Daily Gap Analysis

  • Given a Google SecOps tenant with >100 detection rules configured
  • When the daily analysis job runs at 02:00 UTC
  • Then the system identifies coverage gaps against MITRE ATT&CK v14 with ≥95% accuracy (measured against manual audit baseline)
  • Failure mode: If accuracy <90%, analysts waste time investigating false gaps, manual audits continue, feature adoption fails
  • Validator: Threat Research Lead (David) against 50-technique labeled test set

US2 — Gap Prioritization

  • Given identified gaps across multiple MITRE tactics
  • When the user views the coverage dashboard
  • Then gaps are sorted by (severity × telemetry availability) with Critical/High/Low tiers and technique IDs hyperlinked to MITRE site
  • Failure mode: If sorting is alphabetical or random, analysts waste 15+ minutes per session hunting for urgent gaps
  • Validator: UX Researcher with 5 target customer shadow sessions
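The (severity × telemetry availability) ordering in US2 can be sketched as follows; the numeric weights are illustrative, not taken from the spec:

```python
SEVERITY_WEIGHT = {"critical": 3, "high": 2, "low": 1}  # illustrative tier weights

def priority(gap: dict) -> float:
    """Score = severity weight x whether telemetry exists to detect the technique."""
    has_telemetry = 1.0 if gap["telemetry_available"] else 0.0
    return SEVERITY_WEIGHT[gap["severity"]] * has_telemetry

gaps = [
    {"technique_id": "T1003.001", "severity": "high", "telemetry_available": ["lsass_access"]},
    {"technique_id": "T1059.003", "severity": "critical", "telemetry_available": ["process_creation"]},
    {"technique_id": "T1486", "severity": "high", "telemetry_available": []},
]
gaps.sort(key=priority, reverse=True)
print([g["technique_id"] for g in gaps])  # ['T1059.003', 'T1003.001', 'T1486']
```

A gap with no available telemetry sorts last even at high severity, which matches the table's distinction between "no rule" and "no data."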

US3 — Rule Recommendation

  • Given a specific technique gap (e.g., T1059.003) with available telemetry
  • When user clicks "Generate Recommendation"
  • Then system outputs YARA-L compatible rule template with confidence score within 3 seconds (p95 latency)
  • Failure mode: If output is generic ("monitor process creation") or takes >10 seconds, analysts still spend 2+ hours writing rules from scratch
  • Validator: Senior Detection Engineer (Sarah) validates 10 sample outputs against style guide

Out of Scope (Phase 1):

| Feature | Why Not Phase 1 |
|---|---|
| Real-time streaming analysis | Requires Kafka infrastructure not budgeted Q1; daily latency acceptable for gap analysis |
| Multi-SIEM support (Splunk, Sentinel) | API abstraction layer adds 6 weeks; 73% of demand is SecOps-specific |
| Auto-deployment of generated rules | SOC2 requires human approval for detection logic changes; compliance risk |
| Historical trend analysis (coverage over time) | Can be derived from weekly JSON exports; not required for core gap identification |
| Custom threat intel ingestion (STIX/TAXII) | Standard MITRE ATT&CK sufficient for MVP; custom intel adds schema complexity |

Phase 1.1 — 4 weeks post-MVP:

  • Weekly coverage trending (line chart showing coverage % over 90 days)
  • Custom threat intel source ingestion (single STIX feed URL)
  • CSV export of gap reports for compliance auditors

Phase 1.2 — 6 weeks post-MVP:

  • Real-time analysis (sub-hour latency via webhook triggers)
  • Automated false positive estimation using historical SecOps alert data
  • Bi-directional sync (auto-populate SecOps rule draft, not just clipboard)

Success Metrics

Primary Metrics:

| Metric | Baseline | Target (D90) | Kill Threshold | Measurement Method | Owner |
|---|---|---|---|---|---|
| Time to identify coverage gap | 90 days (quarterly cycle) | ≤7 days | >30 days at D90 | Threat intel publish date vs. gap created timestamp in system | PM |
| Analyst hours per gap remediation | 12 hours (manual research + writing) | ≤2 hours | >6 hours at D90 | Time-tracking survey (n=20 gaps) | UX Research |
| MITRE coverage score (Critical techniques) | Unknown (baseline survey at D0) | +15 percentage points from D0 | No improvement from D0 | Automated calculation against ATT&CK v14 | Data Science |

Guardrail Metrics:

| Guardrail | Threshold | Action if Breached |
|---|---|---|
| False positive rate of recommended rules | <5% (measured by SecOps deployment rejection rate) | Pause auto-recommendations, revert to manual curation |
| SecOps API quota usage | <80% of daily limit (10K calls/day) | Throttle analysis frequency to every 48hrs |
| UI error rate (failed gap analysis loads) | <1% | Engineering bug sprint before Phase 1.1 |

Leading Indicators (D14 checkpoint):

  • If ≥60% of enrolled customers view gap report ≥2x in first 14 days: predict D90 habit formation and coverage improvement target (based on analogous Splunk app adoption curves)
  • If average time-to-first-recommendation-generation <30 seconds: predict analyst time-savings target

What We Are NOT Measuring:

  • "Number of gaps identified" — Vanity metric; more gaps found indicates better coverage visibility, not better security. We measure gaps closed, not gaps found.
  • "Time spent in dashboard" — Could indicate confusion or circling UX, not engagement. We measure time-to-completion of remediation workflow.
  • "Number of rules generated" — Quantity without quality; we measure deployment rate of recommendations.
  • "AI confidence score average" — Internal metric irrelevant to user outcome; we measure actual false positive rates post-deployment.

Risk Register

Risk 1 — Technical: SecOps API Rate Limiting

  • Risk: Daily analysis of large tenants (>5,000 rules) exceeds Google SecOps API quota, causing analysis failures.
  • Probability: Medium | Impact: High
  • Mitigation: Implement request batching (100 rules/batch) with exponential backoff; cache rule metadata for 6 hours to reduce calls. Owner: Backend Lead (Priya) by Feb 1.
  • Trigger: >5% job failure rate in any 7-day window.
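The batching-plus-backoff mitigation could look roughly like this (batch size and retry behavior from the bullet above; `fetch` is a stand-in for the real SecOps client call, and the error type is a placeholder for a 429/quota response):

```python
import time

BATCH_SIZE = 100   # rules per request, per the mitigation above
MAX_RETRIES = 5

def batched(items, size=BATCH_SIZE):
    """Yield fixed-size slices of a rule list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def fetch_with_backoff(fetch, batch, base_delay=1.0):
    """Retry one batch call with exponential backoff on quota errors."""
    for attempt in range(MAX_RETRIES):
        try:
            return fetch(batch)
        except RuntimeError:  # stand-in for the API's quota-exceeded error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError("quota exhausted after retries")
```

A 5,000-rule tenant then costs 50 API calls per run instead of 5,000, which is what keeps large tenants under the daily quota guardrail.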

Risk 2 — Adoption: Analyst Distrust of AI Recommendations

  • Risk: Detection engineers view machine-generated YARA-L as "black box" and continue writing rules manually, nullifying time savings.
  • Probability: High | Impact: High
  • Mitigation: Every recommendation includes "Explanation" modal showing MITRE reference + telemetry mapping logic; require detection engineer sign-off on first 3 recommendations to build trust. Owner: PM — D14 user interviews scheduled.
  • Trigger: <30% of gaps have generated recommendations viewed by D30.

Risk 3 — Competitive: Google Ships Native Feature

  • Risk: Google launches native "Coverage Gap Analyzer" in Chronicle, neutralizing our integration advantage.
  • Probability: Medium | Impact: High
  • Mitigation: Position as "SecOps-native but MDR-enhanced" (includes Netenrich threat intel context); lock in joint marketing agreement with Google Cloud partnership team. Owner: Product Marketing (Sarah) — competitive brief by Jan 25.
  • Trigger: Google announces similar feature at Next '25 (March).

Risk 4 — Execution: MITRE ATT&CK Version Compatibility

  • Risk: MITRE releases v15 mid-Q1 with technique ID renumbering, breaking coverage calculations.
  • Probability: Medium | Impact: Medium
  • Mitigation: Abstract technique IDs in database schema with version tags; maintain v14→v15 mapping table; 2-week engineering buffer reserved. Owner: Threat Research (David) — schema validation by Jan 30.

Risk 5 — Legal/Compliance: False Positive Liability

  • Risk: Recommended rule causes false positive storm, customer misses real alert in noise, claims negligence.
  • Probability: Low | Impact: High
  • Mitigation: Terms of Service update clarifying recommendations are advisory only; "Test in detect-only mode" banner in UI; E&O insurance verification. Owner: Legal (compliance team) by Feb 15.

Kill Criteria — we pause Phase 2 and conduct full review if ANY met within 90 days:

  1. Coverage gap identification

Technical Architecture Decisions

Data Flow: SecOps API → UDM schema extraction → MITRE technique mapper → Gap identifier → Recommendation engine → PostgreSQL (analysis history) → REST API → React frontend.
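The gap-identifier stage reduces to a set difference between the techniques referenced by deployed rules and the ATT&CK catalog in scope (technique IDs here are illustrative):

```python
# ATT&CK techniques in scope for this tenant (illustrative subset)
catalog = {"T1059.003", "T1003.001", "T1486", "T1047"}
# Techniques referenced by the tenant's deployed detection rules
covered = {"T1003.001", "T1047"}

gaps = sorted(catalog - covered)
coverage_score = len(covered & catalog) / len(catalog)

print(gaps)            # ['T1059.003', 'T1486']
print(coverage_score)  # 0.5
```

The real mapper sits in front of this step, translating YARA-L rules and UDM telemetry into the `covered` set; the API's `coverage_score` field is this same ratio.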

Assumptions vs. Validated:

| Assumption | Status |
|---|---|
| Google SecOps API supports bulk export of existing YARA-L rules (>1,000 rules/tenant) | ⚠ Unvalidated — needs confirmation from SecOps partnership team by Jan 20 |
| MITRE ATT&CK v14 JSON dataset (<500MB) fits in application memory without pagination | ⚠ Unvalidated — infrastructure team capacity test by Jan 16 |
| Customer telemetry volume averages <10TB/day for batch processing within 4-hour window | ⚠ Unvalidated — engineering analysis of top 5 customer data volumes by Jan 18 |
| UDM event schema mapping to MITRE techniques has ≥90% accuracy via regex patterns | ⚠ Unvalidated — threat research team validation against 50 test cases by Jan 22 |
| SecOps OAuth 2.0 service account tokens support unattended daily batch jobs | ⚠ Unvalidated — security architecture review by Jan 17 |
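The regex-mapping assumption can be pictured as follows; the patterns and their technique assignments are hypothetical examples for illustration, not the production ruleset the threat research team will validate:

```python
import re

# Hypothetical command-line patterns -> ATT&CK technique IDs
TECHNIQUE_PATTERNS = {
    r"powershell(\.exe)?\s+-enc": "T1059.001",  # encoded PowerShell
    r"\bwmic\b": "T1047",                       # WMI execution
    r"\bcmd(\.exe)?\s+/c\b": "T1059.003",       # Windows command shell
}

def map_techniques(command_line: str) -> set[str]:
    """Return every technique whose pattern matches a UDM command_line field."""
    return {
        tid for pattern, tid in TECHNIQUE_PATTERNS.items()
        if re.search(pattern, command_line, re.IGNORECASE)
    }

print(sorted(map_techniques("cmd.exe /c powershell -enc SQBFAFgA")))
# ['T1059.001', 'T1059.003']
```

The ≥90% accuracy target would be measured by running a labeled 50-case test set through exactly this kind of matcher and comparing against analyst-assigned technique IDs.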

Strategic Decisions Made

Decision: Analysis frequency (real-time vs. batch)

  • Choice: Daily batch processing at 02:00 UTC
  • Rationale: Real-time streaming requires Kafka infrastructure ($45K Q1 cost) and 3× engineering headcount. Gap analysis doesn't require sub-second latency; a 24hr delay is acceptable for strategic coverage decisions.
  • Rejected: Hourly polling (still too API-heavy for large tenants).

Decision: Rule generation automation level

  • Choice: Recommend-only (human-in-the-loop), no auto-deployment
  • Rationale: SOC2 Type II and ISO 27001 require human approval for detection logic changes. Auto-deployment creates liability for false positive storms.
  • Rejected: "Draft and staging" auto-deployment (customers lack staging environments).

Decision: Threat framework standard

  • Choice: MITRE ATT&CK v14 only (no custom frameworks)
  • Rationale: Industry standard alignment; customers already map compliance to MITRE. Proprietary frameworks create vendor lock-in resistance.
  • Rejected: CIS Controls mapping (wins compliance, loses tactical relevance).

Decision: SIEM scope for MVP

  • Choice: Google SecOps only (natively integrated)
  • Rationale: 73% of Q3-Q4 pipeline opportunities specifically requested SecOps integration (source: sales engineering call notes). A multi-SIEM abstraction layer adds 6 weeks to schema normalization.
  • Rejected: Splunk/Sentinel connectors for MVP.
