Read all five PDFs in full. Here's the analysis. The research is high-quality and changes three things in your architecture. Most of it confirms what you already had, but three findings are load-bearing — they should affect implementation decisions, not just inform them.

What's confirmed (don't act on these — they validate your existing direction)

• Identity confidence tiers belong in the measurement graph, not bolted on. The Optimizely three-tier pattern (high/medium/low), plus a warehouse schema with confidence_score and confidence_tier columns, matches exactly what your _SDR_KIND_LOW_RESOLUTION gate is supposed to enforce. Your decision in AA.1 H.9 to add the low-resolution gate at a 0.50 threshold is consistent with the warehouse attribution practice of filtering WHERE confidence >= 0.9 for revenue allocation. The 0.50 threshold is more permissive than the industry standard for revenue work — flag this for SC.1 review, but don't change AA.1.

• Conservative-floor revenue is correct. The attribution research flat-out validates your H.8 decision: CRM-confirmed attribution runs 20–40% vs platform-claimed 70–80%, and 20–40% of revenue is structurally untraceable even in well-instrumented businesses. Constitution §08's 0.30 conservative floor isn't conservative — it's accurate. The legacy raw-multiplication approach was the optimistic outlier. You made the right call yesterday.

• The dual-channel Finding/SDR architecture is industry-correct. Attribution platforms in the research explicitly model "attributed revenue" vs "unmatched deals" as separate streams. Your Finding (measurement) + SDR (system design requirement) split is the same pattern. This validates the canonical signal architecture, not just AA.1.

What changes things — three findings

Finding 1: Your benchmark thresholds are too lenient. Material recalibration is needed before client engagements.

This is the most important finding. The operational research gives you actual percentile distributions. Compare them against your current canonical thresholds (mirrored from 5.7: {0.10, 0.25, 0.50}):

• Lead response time:
    • Industry median: 3–7 hours
    • P25 (better quartile): 15–30 minutes
    • "Below standard": >1 hour
  Your current _BAND_BELOW at 0.25 needs calibration against this distribution. The existing thresholds were internally consistent but not anchored to industry data.

• Missed call rate:
    • Median (P50): 20–30% missed
    • P25 (better): ≤10–15% missed
    • P75 (struggling): 35–60% missed
    • "Worth fixing": >10–15%; "alarming": >30–35%

• Stalled deals (your AA.1 signal):
    • Healthy: 5–10% stuck
    • Attention: 10–20%
    • Material failure: >20–25%

• Handoff failure (5.7):
    • Within 7 days: 80–90% should be scheduled
    • Within 14 days: 85–95%
    • Within 30 days: 90–98%
    • Implied healthy handoff-failure rate: 2–10%; problematic: >10–15%

The implication: your current thresholds may produce findings that read as "below benchmark" against arbitrary internal numbers. Industry-anchored thresholds let you say "your missed-call rate is 32%, the industry median is 25%, and P75 is 35%" — which is dramatically more credible to a client.

Recommendation: do NOT change AA.1 thresholds during implementation. Threshold recalibration is a separate, focused task that touches every signal module's _DEFAULT_BENCHMARK_THRESHOLDS constant. Schedule it as SC.4 — benchmark calibration — in the Normalization Plan, sequenced after SC.1 (graph integration) but before the first client engagement. Document the research source for each calibrated threshold so the values are defensible. A sketch of what the calibrated constant could look like follows.
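To make the SC.4 shape concrete, here is a minimal sketch of a research-anchored replacement for _DEFAULT_BENCHMARK_THRESHOLDS. Treat it as illustrative, not a decided design: the Benchmark dataclass, its field names, and the band() helper are hypothetical, and the cut-points are boundary values read off the ranges above. Only the underlying percentile ranges come from the research; the exact numbers get pinned down during SC.4 itself.

```python
# Hypothetical sketch of a research-anchored threshold constant.
# The dataclass, field names, and band() helper are illustrative
# assumptions -- only the percentile ranges come from the research.
from dataclasses import dataclass


@dataclass(frozen=True)
class Benchmark:
    """One industry-anchored threshold, with its source kept alongside
    the values so every number is defensible in a client engagement."""
    p25: float   # better-quartile boundary
    p50: float   # industry median
    p75: float   # struggling-quartile boundary
    source: str  # citation for the distribution (filled in during SC.4)


# Replaces the internally-consistent-but-arbitrary {0.10, 0.25, 0.50}
# bands with values traceable to the operational research. Lead response
# time is omitted: it is time-based, not a rate, and needs its own variant.
_DEFAULT_BENCHMARK_THRESHOLDS = {
    "missed_call_rate": Benchmark(
        p25=0.15,  # better: <=10-15% missed
        p50=0.25,  # median: 20-30% missed
        p75=0.35,  # struggling quartile starts at 35-60% missed
        source="<operational research PDF, cite in SC.4>",
    ),
    "stalled_deal_rate": Benchmark(
        p25=0.10,  # healthy: 5-10% stuck
        p50=0.20,  # attention: 10-20%
        p75=0.25,  # material failure: >20-25%
        source="<operational research PDF, cite in SC.4>",
    ),
    "handoff_failure_rate": Benchmark(
        p25=0.05,  # implied healthy range: 2-10%
        p50=0.10,  # problematic above this
        p75=0.15,  # >10-15%
        source="<operational research PDF, cite in SC.4>",
    ),
}


def band(signal: str, observed: float) -> str:
    """Map an observed rate onto the industry distribution, so a finding
    reads 'your missed-call rate is 32%, industry median is 25%, P75 is
    35%' instead of 'below an internal 0.25 band'."""
    bm = _DEFAULT_BENCHMARK_THRESHOLDS[signal]
    if observed <= bm.p25:
        return "better than P25"
    if observed <= bm.p50:
        return "around median"
    if observed <= bm.p75:
        return "below median (attention)"
    return "worse than P75 (material)"


# Example: band("missed_call_rate", 0.32) -> "below median (attention)",
# matching the "32% vs median 25%, P75 35%" framing above.
```

The design point is the source field: keeping the citation next to the value is what makes the SC.4 numbers defensible when a client pushes back on a finding.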
Finding 2: The ServiceTitan data model breaks one of your AA.1 assumptions.

Your H.7.1 canonical-event predicate assumes closed_won is a discrete event with a clean timestamp. The ServiceTitan research is explicit: there is no closed_won field in ServiceTitan. Win events are derived from estimate-status transitions and require following an estimate-status audit trail. Specifically:

• "Won" is reconstructed from EstimateStatus = "Sold" transitions.
• The exact timestamp depends on a status-change audit trail or report-level timestamps.
• Some tenants use opportunity close dates; others use estimate sold dates.
• Different tenants configure different enums.

This doesn't break AA.1. The compute logic operates on canonical events; it doesn't care how those events are derived. But it adds work to ingestion (csv_generic.py) and normalization (normalize.py) — the layer that maps ServiceTitan CSV exports to canonical events needs to handle multiple "win event" derivation strategies depending on tenant configuration.

Recommendation: add a contradiction-log entry in BUILD_NORMALIZATION_PLAN noting that the closed_won canonical event cannot be sourced from a single ServiceTitan field; it must be derived from estimate-status transitions. This is an ingestion concern, not an AA.1 concern — but it should be visible before the first client engagement so the ingestion mapping isn't built on a flawed assumption. (A sketch of one possible derivation appears at the end of this post.)

Finding 3: The "scheduled" predicate in your H.7.1 has a structural error.

H.7.1 Rule 2(b) checks for event_type == "job_scheduled". The ServiceTitan research clarifies: scheduling in ServiceTitan is the presence of an Appointment record with a non-null Start time, NOT a status field on the Job. This means the canonical event job_scheduled is itself a derived event — generated when an Appointment is created against a Job.

This is the same class of issue as Finding 2. The compute logic is unchanged. But the canonical-event derivation is non-trivial — and crucially, a Job can have multiple Appointments. If the first Appointment is created and then cancelled, and a second one is created later, what is job_scheduled.occurred_at?

Recommendation: add a sub-rule to H.7.1 specifying the canonical-event derivation rule for job_scheduled. This needs to land in AA_1_PLAN.md as a Rule 2(e) clarification before the Step 5 implementation runs. It's one paragraph, and it prevents implementation drift. (One candidate resolution for the multi-Appointment case is sketched at the end of this post.)

What to do with all this

Tonight, in priority order:

1. Run the AA.1 acceptance prompt I gave you earlier. The eleven resolutions and the constraint boundary are still correct; the research validates the direction.
2. Add a fifth edit to that prompt — the Rule 2(e) clarification in H.7.1, capturing the multi-Appointment derivation rule from Finding 3.
3. Defer Findings 1 and 2 to Phase 1+ tasks. Don't try to recalibrate thresholds tonight, and don't try to fix the ingestion closed_won derivation tonight. Both go in BUILD_NORMALIZATION_PLAN as new work items:
    • SC.4 — benchmark calibration (research-anchored thresholds for all signals)
    • Contradiction <N+1> — ServiceTitan estimate-status derivation for the canonical closed_won event (in Section 8 of the Normalization Plan)

Want me to regenerate the AA.1 acceptance prompt with the Rule 2(e) addition baked in, plus a small companion prompt that adds SC.4 and the new contradiction to BUILD_NORMALIZATION_PLAN? The companion prompt is small, and it's worth doing in the same session because the research will fade from your context — these two items are the only structural follow-ups the research generated.
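As promised above, here is a minimal sketch of the two derivations from Findings 2 and 3, under loud assumptions: the CanonicalEvent shape, every raw field name (to_status, changed_at, created_at, start, cancelled, opportunity_closed_at), and the tenant-strategy switch are hypothetical, not the ServiceTitan API or your ingestion code. Only the two rules themselves come from the research: a win is reconstructed from EstimateStatus = "Sold" transitions (or opportunity close dates, per tenant), and job_scheduled is derived from Appointment records rather than a Job status field. The multi-Appointment resolution shown (earliest surviving Appointment) is one defensible answer for Rule 2(e), not the decided rule.

```python
# Sketch of the two derived canonical events from Findings 2 and 3.
# All raw field names and the CanonicalEvent shape are hypothetical;
# only the derivation rules come from the ServiceTitan research.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass(frozen=True)
class CanonicalEvent:
    event_type: str
    occurred_at: datetime
    source: str  # which raw field / strategy produced this event


def derive_closed_won(records: list[dict],
                      tenant_strategy: str = "estimate_sold") -> Optional[CanonicalEvent]:
    """Finding 2: ServiceTitan has no closed_won field. Reconstruct the
    win from estimate-status transitions; which timestamp counts is
    tenant-dependent, so the strategy is explicit and recorded."""
    if tenant_strategy == "estimate_sold":
        # First transition into EstimateStatus == "Sold" marks the win.
        sold = [r for r in records if r.get("to_status") == "Sold"]
        if not sold:
            return None
        first = min(sold, key=lambda r: r["changed_at"])
        return CanonicalEvent("closed_won", first["changed_at"],
                              source="EstimateStatus -> Sold audit trail")
    if tenant_strategy == "opportunity_close":
        # Some tenants key wins off the opportunity close date instead.
        closed = [r for r in records if r.get("opportunity_closed_at")]
        if not closed:
            return None
        first = min(closed, key=lambda r: r["opportunity_closed_at"])
        return CanonicalEvent("closed_won", first["opportunity_closed_at"],
                              source="opportunity close date")
    raise ValueError(f"unknown tenant strategy: {tenant_strategy!r}")


def derive_job_scheduled(appointments: list[dict]) -> Optional[CanonicalEvent]:
    """Finding 3, candidate Rule 2(e): job_scheduled is derived from the
    earliest non-cancelled Appointment with a non-null Start time. A
    cancelled first Appointment does not count; the event re-derives
    from the next surviving one. Using created_at as occurred_at is
    itself an assumption Rule 2(e) must confirm (created_at vs Start)."""
    live = [a for a in appointments
            if a.get("start") is not None and not a.get("cancelled", False)]
    if not live:
        return None
    first = min(live, key=lambda a: a["created_at"])
    return CanonicalEvent("job_scheduled", first["created_at"],
                          source="earliest non-cancelled Appointment")
```

One design note worth carrying into the Rule 2(e) paragraph: stating the rule as a derivation over the current Appointment set, rather than a first-write that never moves, means a cancelled first Appointment automatically re-derives occurred_at from the next surviving one — which is exactly the ambiguity the Finding 3 question raises.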