Security operations teams today rely on quarterly manual audits to validate detection coverage against evolving threat landscapes. A detection engineer at a typical mid-market enterprise spends 40 hours each quarter grepping through SIEM queries, cross-referencing MITRE ATT&CK mappings in spreadsheets, and manually flagging gaps for backlog prioritization—creating a 90-day blind spot where emerging threats like new PowerShell obfuscation techniques or novel LOLBins go undetected. By the time the gap is identified, adversaries have already tested the organization's defenses.
12 analysts [source: median customer SOC team size, internal CRM analytics, Aug 2025] × 160 hours/year [4 quarterly audits × 40 hrs each, source: CISO advisory board interviews, n=8] × $95/hour [source: 2024 Gartner InfoSec compensation survey, fully-loaded] = $182,400/year per customer in recoverable analyst time. If adoption reaches only 40% of eligible analysts: $73,000/year per customer. This excludes the risk-adjusted value of reduced breach dwell time, which we will quantify post-launch using D90 incident data.
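The arithmetic above can be sanity-checked in a few lines; all inputs are the sourced figures, and the 40% adoption factor is the downside scenario stated above:

```python
# ROI sketch using the sourced figures cited in the text.
ANALYSTS = 12            # median customer SOC team size (CRM analytics, Aug 2025)
HOURS_PER_YEAR = 4 * 40  # 4 quarterly audits x 40 hrs each (CISO advisory board)
RATE = 95                # fully loaded $/hour (2024 Gartner InfoSec comp survey)

full_recovery = ANALYSTS * HOURS_PER_YEAR * RATE
partial = full_recovery * 0.40  # downside: only 40% of analysts adopt

print(full_recovery)   # 182400  -> $182,400/year per customer
print(round(partial))  # 72960   -> reported as ~$73,000/year
```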
This feature is continuous, automated detection coverage gap analysis against MITRE ATT&CK v14 with native Google SecOps integration and human-in-the-loop rule recommendations. It is not an autonomous response system, a replacement for threat hunting teams, or a multi-SIEM abstraction layer—SecOps remains the system of record for detection logic and enforcement.
How competitors solve this today:
| Capability | Google SecOps | Microsoft Sentinel | CrowdStrike Falcon | This Product |
|---|---|---|---|---|
| Continuous telemetry analysis | ❌ | ❌ | ❌ | ✅ |
| Automated MITRE gap mapping | ❌ | ❌ | ✅ | ✅ (with prioritization logic) |
| Native Google SecOps integration | ✅ | ❌ | ❌ | ✅ |
| WHERE WE LOSE | Ecosystem depth (2yr head start on UDM data model) | Native Azure AD integration (agentless log collection) | Endpoint agent ubiquity (widespread sensor deployment) | — |
Our wedge is deep Google SecOps integration with daily automated analysis: we eliminate the manual hunt phase entirely rather than merely visualizing existing rules.
WHO / JTBD: When a threat detection PM or security architect learns of a new adversary technique (via threat intel feed, CVE disclosure, or incident post-mortem), they want to know within 24 hours whether their existing detection library covers it—so they can close coverage gaps before red team exercises or real adversaries exploit them.
WHERE IT BREAKS: Today, teams rely on quarterly calendar reminders. An analyst receives threat intel on Thursday, adds "check coverage" to a backlog sticky note, and waits six weeks for the next audit cycle. During the audit, they manually export detection rules from Google SecOps into spreadsheets, compare against MITRE ATT&CK matrices using VLOOKUPs, and identify gaps through tribal knowledge ("Did we write a rule for WMIC execution?"). The process produces static PDF reports that are outdated the moment new telemetry sources are added.
WHAT IT COSTS:
| Symptom | Frequency | Time Lost | Aggregate |
|---|---|---|---|
| Manual quarterly audit | 4×/year | 40 hrs/audit (160 hrs/yr per analyst) | 1,920 hrs/yr (12-analyst team) |
| Emergency gap analysis (out-of-cycle) | 2.3×/quarter (source: incident post-mortems) | 8 hrs/ad-hoc | 74 hrs/yr |
| Coverage blind spot duration | Continuous | 90 days avg exposure | 3.2 missed techniques/quarter |
Aggregate annual cost: $182K labor waste + unquantified breach risk from detection latency (source: HR rates, workflow analysis, Aug 2025).
JTBD Statement: "When new threat intelligence emerges, I want to know immediately if my existing detection rules cover it, so I can deploy new detections within hours, not quarters."
API Contract (Source of Truth):
GET /api/v1/coverage/analysis/{tenant_id}/latest
{
"analysis_id": "uuid-v4",
"timestamp": "2025-01-15T02:00:00Z",
"coverage_score": 0.73,
"mitre_version": "14.1",
"gaps": [
{
"technique_id": "T1059.003",
"tactic": "Execution",
"severity": "critical",
"telemetry_available": ["process_creation", "command_line"],
"existing_rules": [],
"recommended_rule": {
"yara_l": "rule windows_suspicious_cmdline { condition: ... }",
"confidence": 0.94,
"false_positive_estimate": "low"
}
}
]
}
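A minimal consumer of the contract above, shown against an abridged sample payload. The second gap and its confidence value are illustrative additions, not part of the contract:

```python
import json

# Abridged sample response in the shape of the contract above.
# The T1003.001 entry and its 0.81 confidence are illustrative only.
payload = json.loads("""
{
  "analysis_id": "uuid-v4",
  "coverage_score": 0.73,
  "gaps": [
    {"technique_id": "T1059.003", "severity": "critical",
     "recommended_rule": {"confidence": 0.94}},
    {"technique_id": "T1003.001", "severity": "high",
     "recommended_rule": {"confidence": 0.81}}
  ]
}
""")

# Surface critical gaps first, highest-confidence recommendation on top.
critical = sorted(
    (g for g in payload["gaps"] if g["severity"] == "critical"),
    key=lambda g: g["recommended_rule"]["confidence"],
    reverse=True,
)
print([g["technique_id"] for g in critical])  # ['T1059.003']
```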
Primary User Flow: Detection engineer receives Slack alert ("Critical gap: T1059.003") → Clicks link → Reviews coverage dashboard → Clicks technique row → Validates recommended YARA-L rule → Clicks "Open in SecOps" → Pastes rule into SecOps editor → Deploys. Target time: <20 minutes.
User Interface:
┌─────────────────────────────────────────────────────────────────┐
│ Coverage Overview [Export Report →] │
├─────────────────────────────────────────────────────────────────┤
│ OVERALL SCORE: 73% Last run: 2hrs ago │
│ ┌──────────────────────────────────────┐ │
│ │ [████████████░░░░░░░░] Critical │ Update frequency: Daily │
│ │ Coverage: 90% (28/31 techniques) │ │
│ ├──────────────────────────────────────┤ │
│ │ [████████░░░░░░░░░░░░] High │ [View Gaps →] │
│ │ Coverage: 62% (24/39 techniques) │ │
│ └──────────────────────────────────────┘ │
├─────────────────────────────────────────────────────────────────┤
│ TOP PRIORITY GAPS [Action] │
│ T1059.003 Windows Command Shell Critical [Create Rule →]│
│ T1003.001 LSASS Memory High [Add Policy →] │
│ T1486 Data Encrypted for Impact High [Review →] │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Gap: T1059.003 Windows Command Shell [< Back to List] │
├─────────────────────────────────────────────────────────────────┤
│ RISK: Critical | DATA AVAILABLE: ✅ | DETECTION: ❌ │
├─────────────────────────────────────────────────────────────────┤
│ TELEMETRY SOURCES DETECTED: │
│ • process_creation (Windows Security 4688) │
│ • command_line arguments (Sysmon Event ID 1) │
├─────────────────────────────────────────────────────────────────┤
│ RECOMMENDED DETECTION: │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ rule: windows_suspicious_cmdline │ │
│ │ condition: process.command_line contains "cmd.exe /c"      │ │
│ │ severity: high │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ [Edit in SecOps →] │
└─────────────────────────────────────────────────────────────────┘
Backward Compatibility: API versioned via URL path (/v1/). Response schema frozen for 12 months. UI feature-flags allow gradual rollout per tenant.
Phase 1 — MVP: 8 weeks
US1 — Daily Gap Analysis
US2 — Gap Prioritization
US3 — Rule Recommendation
Out of Scope (Phase 1):
| Feature | Why Not Phase 1 |
|---|---|
| Real-time streaming analysis | Requires Kafka infrastructure not budgeted Q1; daily latency acceptable for gap analysis |
| Multi-SIEM support (Splunk, Sentinel) | API abstraction layer adds 6 weeks; 73% of demand is SecOps-specific |
| Auto-deployment of generated rules | SOC2 requires human approval for detection logic changes; compliance risk |
| Historical trend analysis (coverage over time) | Can be derived from weekly JSON exports; not required for core gap identification |
| Custom threat intel ingestion (STIX/TAXII) | Standard MITRE ATT&CK sufficient for MVP; custom intel adds schema complexity |
Phase 1.1 — 4 weeks post-MVP:
Phase 1.2 — 6 weeks post-MVP:
Primary Metrics:
| Metric | Baseline | Target (D90) | Kill Threshold | Measurement Method | Owner |
|---|---|---|---|---|---|
| Time to identify coverage gap | 90 days (quarterly cycle) | ≤7 days | >30 days at D90 | Threat intel publish date vs. gap created timestamp in system | PM |
| Analyst hours per gap remediation | 12 hours (manual research + writing) | ≤2 hours | >6 hours at D90 | Time-tracking survey (n=20 gaps) | UX Research |
| MITRE coverage score (Critical techniques) | Unknown (baseline survey at D0) | +15 percentage points from D0 | No improvement from D0 | Automated calculation against ATT&CK v14 | Data Science |
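The coverage-score metric reduces to a covered/total ratio per severity tier. A sketch using the figures from the dashboard mockup (Critical 28/31, High 24/39); how tiers combine into the overall 73% score is not specified here, so only the tier figures are computed:

```python
def tier_coverage(covered: int, total: int) -> float:
    """Fraction of ATT&CK techniques in a severity tier with >=1 active rule."""
    return covered / total

# Figures from the dashboard mockup above.
crit = tier_coverage(28, 31)
high = tier_coverage(24, 39)
print(f"{crit:.0%} {high:.0%}")  # 90% 62%
```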
Guardrail Metrics:
| Guardrail | Threshold | Action if Breached |
|---|---|---|
| False positive rate of recommended rules | <5% (measured by SecOps deployment rejection rate) | Pause auto-recommendations, revert to manual curation |
| SecOps API quota usage | <80% of daily limit (10K calls/day) | Throttle analysis frequency to every 48hrs |
| UI error rate (failed gap analysis loads) | <1% | Engineering bug sprint before Phase 1.1 |
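The API-quota guardrail can be expressed as a simple cadence decision; the 80% threshold, 10K daily limit, and 48-hour fallback are the values in the table above:

```python
def analysis_interval_hours(calls_used: int, daily_limit: int = 10_000) -> int:
    """Guardrail: drop from daily to 48h cadence when quota usage hits 80%."""
    return 48 if calls_used / daily_limit >= 0.80 else 24

print(analysis_interval_hours(7_900))  # 24 -- under threshold, daily cadence
print(analysis_interval_hours(8_200))  # 48 -- guardrail breached, throttle
```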
Leading Indicators (D14 checkpoint):
What We Are NOT Measuring:
Risk 1 — Technical: SecOps API Rate Limiting
Risk 2 — Adoption: Analyst Distrust of AI Recommendations
Risk 3 — Competitive: Google Ships Native Feature
Risk 4 — Execution: MITRE ATT&CK Version Compatibility
Risk 5 — Legal/Compliance: False Positive Liability
Kill Criteria — we pause Phase 2 and conduct a full review if ANY of the following is met within 90 days:
Data Flow: SecOps API → UDM schema extraction → MITRE technique mapper → Gap identifier → Recommendation engine → PostgreSQL (analysis history) → REST API → React frontend.
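A skeletal sketch of the pipeline stages above, with external calls (SecOps API, PostgreSQL) stubbed out; the rule set and technique list are illustrative, and all function names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Gap:
    technique_id: str
    severity: str

def extract_covered_techniques(tenant_id: str) -> set[str]:
    """Stub for SecOps API export + UDM extraction + MITRE mapping stages."""
    return {"T1003.001"}  # illustrative: this tenant only covers LSASS dumping

def identify_gaps(required: dict[str, str], covered: set[str]) -> list[Gap]:
    """Gap identifier: required techniques with no existing detection rule."""
    return [Gap(t, sev) for t, sev in required.items() if t not in covered]

# Illustrative requirement set keyed by technique ID -> severity.
required = {"T1059.003": "critical", "T1003.001": "high"}
gaps = identify_gaps(required, extract_covered_techniques("tenant-a"))
print([g.technique_id for g in gaps])  # ['T1059.003']
```

The real pipeline would persist each `Gap` to PostgreSQL and hand it to the recommendation engine; this sketch only shows the stage boundaries.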
Assumptions vs. Validated:
| Assumption | Status |
|---|---|
| Google SecOps API supports bulk export of existing YARA-L rules (>1,000 rules/tenant) | ⚠ Unvalidated — needs confirmation from SecOps partnership team by Jan 20 |
| MITRE ATT&CK v14 JSON dataset (<500MB) fits in application memory without pagination | ⚠ Unvalidated — infrastructure team capacity test by Jan 16 |
| Customer telemetry volume averages <10TB/day for batch processing within 4-hour window | ⚠ Unvalidated — engineering analysis of top 5 customer data volumes by Jan 18 |
| UDM event schema mapping to MITRE techniques has ≥90% accuracy via regex patterns | ⚠ Unvalidated — threat research team validation against 50 test cases by Jan 22 |
| SecOps OAuth 2.0 service account tokens support unattended daily batch jobs | ⚠ Unvalidated — security architecture review by Jan 17 |
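The regex-based UDM-to-technique mapping being validated in the last assumption can be sketched as a pattern table keyed by technique ID. Both patterns here are hypothetical stand-ins for the threat research team's real mapping table:

```python
import re

# Hypothetical patterns; the production table is owned by threat research
# and validated against the 50 test cases noted above.
TECHNIQUE_PATTERNS = {
    "T1059.003": re.compile(r"\bcmd\.exe\b.*(/c|/k)", re.I),       # Windows Command Shell
    "T1059.001": re.compile(r"\bpowershell(\.exe)?\b.*-enc", re.I),  # PowerShell (encoded)
}

def map_event(command_line: str) -> list[str]:
    """Map a UDM process-creation command line to candidate ATT&CK techniques."""
    return [tid for tid, pat in TECHNIQUE_PATTERNS.items()
            if pat.search(command_line)]

print(map_event("cmd.exe /c whoami"))           # ['T1059.003']
print(map_event("powershell.exe -enc SQBFAF"))  # ['T1059.001']
```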
Decision: Analysis frequency (real-time vs. batch)
Choice Made: Daily batch processing at 02:00 tenant-local time
Rationale: Real-time streaming requires Kafka infrastructure ($45K Q1 cost) and 3× engineering headcount. Gap analysis doesn't require sub-second latency; a 24-hour delay is acceptable for strategic coverage decisions.
Rejected: Hourly polling (still too API-heavy for large tenants).
────────────────────────
Decision: Rule generation automation level
Choice Made: Recommend-only (human-in-the-loop), no auto-deployment
Rationale: SOC2 Type II and ISO 27001 require human approval for detection logic changes. Auto-deployment creates liability for false-positive storms.
Rejected: "Draft and staging" auto-deployment (customers lack staging environments).
────────────────────────
Decision: Threat framework standard
Choice Made: MITRE ATT&CK v14 only (no custom frameworks)
Rationale: Industry-standard alignment; customers already map compliance to MITRE. Proprietary frameworks create vendor lock-in resistance.
Rejected: CIS Controls mapping (wins compliance, loses tactical relevance).
────────────────────────
Decision: SIEM scope for MVP
Choice Made: Google SecOps only (natively integrated)
Rationale: 73% of Q3–Q4 pipeline opportunities specifically requested SecOps integration (source: sales engineering call notes). A multi-SIEM abstraction layer adds 6 weeks of schema normalization.
Rejected: Splunk/Sentinel connectors for MVP.
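A sketch of the scheduler implied by the first decision: compute the next 02:00 run in the tenant's local timezone and convert it to UTC for the job queue. The function name and use of `zoneinfo` are assumptions, not the actual implementation:

```python
from datetime import datetime, timedelta, time
from zoneinfo import ZoneInfo

def next_run(now_utc: datetime, tenant_tz: str) -> datetime:
    """Next 02:00 in the tenant's local timezone, returned as a UTC instant."""
    tz = ZoneInfo(tenant_tz)
    local = now_utc.astimezone(tz)
    run = datetime.combine(local.date(), time(2, 0), tz)
    if run <= local:          # today's 02:00 already passed locally
        run += timedelta(days=1)
    return run.astimezone(ZoneInfo("UTC"))

now = datetime(2025, 1, 15, 12, 0, tzinfo=ZoneInfo("UTC"))
print(next_run(now, "America/New_York"))  # 2025-01-16 07:00:00+00:00
```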