Your conversions are probably lying to you.
When we rebuilt attribution for a Texas pest control client, we found that 73% of their "conversions" were contacts who never became paying customers. They were spending confidently on a fiction — and exposing that fiction is exactly what attribution is for.
73% of 'conversions' were fake — revealed by the attribution rebuild
1.04× ROAS on a Facebook campaign that looked healthy — barely breaking even
10× ROAS on a Bing campaign that was invisible — budget reallocated, revenue followed
The 28-field attribution stack
Full-funnel tracking stack
UTM parameters preserved server-side before ad blockers strip them. GCLIDs, FBCLIDs, MSCLKIDs all captured and stored. Every form submission logged with source, device, page, and timestamp. Seven data sources stitched: Meta Ads, Google Ads, Bing Ads, GA4, Search Console, GBP, and your CRM.
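In practice, server-side capture means pulling the tracking parameters off the landing URL before the browser has a chance to lose them. A minimal Python sketch — the `capture_tracking` helper and its field names are illustrative, not a production schema:

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

# Tracking parameters to preserve server-side, before any
# browser extension can strip them from the client.
TRACKING_PARAMS = ("utm_source", "utm_medium", "utm_campaign",
                   "gclid", "fbclid", "msclkid")

def capture_tracking(landing_url: str, device: str) -> dict:
    """Extract and timestamp every tracking parameter on a landing URL."""
    parsed = urlparse(landing_url)
    qs = parse_qs(parsed.query)
    record = {p: qs[p][0] for p in TRACKING_PARAMS if p in qs}
    record["page"] = parsed.path
    record["device"] = device
    record["captured_at"] = datetime.now(timezone.utc).isoformat()
    return record

hit = capture_tracking(
    "https://example.com/quote?utm_source=bing&utm_medium=cpc&msclkid=abc123",
    device="mobile",
)
```

Every form submission then gets logged alongside this record, so source, device, page, and timestamp travel together into the CRM.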
Customer classification
New customer, reactivation, in-CRM-no-sub, not-in-CRM. Every contact classified on entry. ROAS calculated per bucket, per channel — so you know what each channel is actually producing, not what the platform claims.
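Per-bucket, per-channel ROAS is just revenue grouped two ways before dividing by spend. A simplified sketch — the `roas_by_bucket` helper and the sample records are illustrative:

```python
from collections import defaultdict

def roas_by_bucket(contacts, spend_by_channel):
    """Sum invoiced revenue per (channel, bucket), then divide by channel spend."""
    revenue = defaultdict(float)
    for c in contacts:
        revenue[(c["channel"], c["bucket"])] += c["revenue"]
    return {
        (channel, bucket): round(rev / spend_by_channel[channel], 2)
        for (channel, bucket), rev in revenue.items()
    }

contacts = [
    {"channel": "meta", "bucket": "new_customer",   "revenue": 520.0},
    {"channel": "meta", "bucket": "in_crm_no_sub",  "revenue": 0.0},
    {"channel": "bing", "bucket": "new_customer",   "revenue": 3000.0},
]
spend = {"meta": 500.0, "bing": 300.0}
report = roas_by_bucket(contacts, spend)
```

Note what the platform never shows: the `in_crm_no_sub` row contributes zero revenue no matter how many "conversions" it logged.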
Self-reported attribution
"How did you hear about us?" captured at the form level and matched against UTM data for every customer. Surfaces yard signs, truck wraps, word-of-mouth, Nextdoor — the channels that drive decisions but can never be tracked with a pixel.
Call attribution
Incoming call phone numbers matched against existing CRM records to distinguish new inquiries from reactivations. Critical for service businesses where calls dominate over form fills — and where call attribution is otherwise a black box.
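The core of call matching is phone normalization: strip the formatting, then compare against what the CRM already holds. Illustrative only — real systems also have to handle shared household numbers and call-tracking pools:

```python
import re

def normalize_phone(raw: str) -> str:
    """Reduce any formatting to a bare 10-digit string for matching."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:]  # drop a leading country code if present

def classify_call(caller: str, crm_phones: set) -> str:
    """New inquiry if the caller is unknown to the CRM; otherwise an existing record."""
    return "existing_record" if normalize_phone(caller) in crm_phones else "new_inquiry"

crm = {normalize_phone("(512) 555-0142")}
classify_call("+1 512-555-0142", crm)  # matches despite the different formatting
classify_call("512-555-0199", crm)     # unknown number: a genuinely new inquiry
```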
CRM revenue pull
Actual dollars pulled from your field service platform — FieldRoutes, ServiceTitan, Jobber, or similar — into the reporting dashboard. ROAS calculated on real invoiced revenue, not lead count. No more proxy metrics standing in for the real number.
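The gap between proxy metrics and real revenue is easy to see in code. A toy example with invented numbers, contrasting cost per lead (what platforms report against) with invoiced-revenue ROAS (what pays the bills):

```python
def channel_report(channel, spend, leads, invoices):
    """Contrast the proxy metric (cost per lead) with real-revenue ROAS."""
    return {
        "channel": channel,
        "cost_per_lead": round(spend / leads, 2),   # looks fine in platform reporting
        "invoiced_revenue": sum(invoices),          # pulled from the field service CRM
        "roas": round(sum(invoices) / spend, 2),    # the number that matters
    }

# A channel can look cheap per lead and still lose money on invoiced revenue:
# 40 leads at $12.50 each, but only two ever became paying jobs.
r = channel_report("meta", spend=500.0, leads=40, invoices=[120.0, 0.0, 0.0, 150.0])
```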
LTV-aware reporting
Initial service price, annual contract value, and three-year LTV tracked separately. A $150 initial job that converts to a $718 LTV customer is a different acquisition than a $300 one-time job. Attribution has to reflect that distinction or it's optimizing toward the wrong outcome.
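The distinction can be modeled crudely as initial job plus recurring contract value. An intentionally naive sketch with invented numbers — a real LTV model would account for churn and discounting:

```python
def three_year_ltv(initial_job: float, annual_contract: float) -> float:
    """Simplest possible LTV model: initial service plus three annual renewals.
    Ignores churn and discounting; for illustration only."""
    return initial_job + 3 * annual_contract

one_off    = three_year_ltv(300.0, 0.0)    # bigger on day one, worth $300 total
recurring  = three_year_ltv(150.0, 200.0)  # smaller on day one, worth $750 over three years
```

Optimizing on initial job value alone would chase the $300 acquisition; LTV-aware reporting chases the other one.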
Same budget. 40% more revenue.
The client was spending across Google Ads, Meta, and Bing. Standard reporting showed campaigns performing. Nobody had looked at actual revenue by channel.
Attribution revealed the truth: a Facebook campaign at 1.04× ROAS had absorbed budget for months while a Bing campaign at 10× ROAS sat underfunded. The channels weren't broken — the measurement was.
Year-over-year revenue grew 40% on the same total ad spend. More revenue, no budget increase — just reallocation toward the channels that were actually producing.
73% fake conversion rate — before classification
40% YoY revenue growth
Cost per acquired customer
Avg customer LTV
Common questions about attribution modeling
What does 'closed-loop attribution' actually mean?
It means every marketing dollar is connected to a real customer outcome — not a form fill, not a lead, not a platform-reported conversion. Closed-loop attribution tracks a prospect from their first ad click through your website, through your form, and into your CRM — where it matches against actual invoiced revenue. The loop closes when you can say: this channel produced $X in revenue from Y customers at a cost of Z.
What was the Ashley Discovery?
When we rebuilt attribution for a Texas pest control client, we found that 73% of their 'conversions' were contacts who never became paying customers. Ashley was a real name in the CRM — a $0 revenue record counted as a win for months. She wasn't unique. Hundreds of similar contacts were inflating every channel's reported performance. Once we classified customers into four buckets (new customer, reactivation, in-CRM-no-subscription, never-entered), the actual conversion rate snapped into focus — and so did which channels were actually producing it.
What data sources does the attribution stack connect?
The full stack connects seven sources: Meta Ads, Google Ads, Bing Ads, Google Analytics 4, Google Search Console, Google Business Profile, and your field service CRM (FieldRoutes, ServiceTitan, Jobber, or similar). UTM parameters are captured server-side before browser-level ad blockers strip them. GCLIDs, FBCLIDs, and MSCLKIDs are all preserved. Phone call attribution is matched against existing CRM records to distinguish new inquiries from reactivations.
What is the 4-bucket customer classification system?
Standard CRM setups treat every contact as a lead. The 4-bucket system separates them: (1) new_customer — first-time paying client; (2) reactivation — returning customer who was inactive; (3) in_fr_no_sub — in the CRM but never subscribed, i.e. the 'fake conversion'; (4) not_in_fr — submitted a form, never entered the system at all. Each bucket has different ROAS implications. Optimizing toward the wrong bucket can look like success while producing none.
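Expressed as logic, the bucketing is a short decision chain. The boolean field names here are assumptions standing in for real CRM fields; the bucket codes match the ones above:

```python
def classify_contact(in_crm: bool, has_subscription: bool, was_inactive: bool) -> str:
    """Map one contact into one of the four buckets.
    Flag names are illustrative; a real CRM exposes its own fields."""
    if not in_crm:
        return "not_in_fr"      # submitted a form, never entered the system
    if not has_subscription:
        return "in_fr_no_sub"   # the 'fake conversion'
    if was_inactive:
        return "reactivation"   # returning customer who had gone quiet
    return "new_customer"       # first-time paying client

classify_contact(in_crm=True, has_subscription=False, was_inactive=False)  # the Ashley case
```

Run every contact through a chain like this on entry and the four buckets fall out automatically, instead of everything landing in one undifferentiated "lead" pile.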
What does the self-reported attribution layer capture?
UTMs only see what happens through a trackable link. 'How did you hear about us?' dropdown data captures what UTMs miss — yard signs, truck wraps, Nextdoor mentions, referrals, radio. We match self-reported data against UTM data for every customer, giving you a hybrid view that sees both the trackable and the dark funnel.
How long does attribution setup take?
The audit phase — mapping what you currently have and identifying the gaps — takes two weeks. Full implementation varies by CRM complexity and existing tracking setup, but most engagements are fully operational within 30–45 days. After that, every channel decision has data behind it.
What would change if your attribution was telling the truth?
That's not a rhetorical question. A two-week audit maps what you currently have, identifies what's wrong, and shows you exactly what your channels are actually producing.
Start with The Audit