Is marketing attribution a lie?
With Google and OpenAI launching ads in their AI products, we decided to talk about ad measurement. You won't want to miss this one.
If you rewind to 2017, marketing had a fairly clear-cut playbook. You bought ads, the pixel fired, the deterministic data matched up, and you knew exactly where your money was going.
Fast forward to 2026, and that playbook is gone.
In this week’s episode of GTMN, we’re doing something a little different. Austin is stepping into new fractional roles, as CMO at AtoB and working with Replit, and he keeps running into the same massive question from founders: how do we actually track performance when the old tools are broken?
We dove deep into the state of attribution for companies spending $20M+ on web and mobile, the “telemetry bias” that is killing brand strategy, the “poor man’s” version of incrementality, and the new ad inventory hitting Google’s and OpenAI’s AI surfaces.
The Death of Deterministic Attribution?
Austin kicked off the conversation with the question on every growth marketer’s mind: Is deterministic attribution (knowing exactly who clicked an ad via device ID or cookie) dead, alive, or on life support?
According to Pranav, it’s not dead—but it is dangerously misunderstood.
The industry has conflated “telemetry” (tracking events) with “causation” (what made the user buy). We are living in a world of fractional data. If you do things right today, you might get 70% visibility on web, but perhaps only 15% on iOS and 10% on desktop.
The problem isn’t that we can’t track people; it’s that executives are living with 2017 expectations, wanting to know exactly how every specific dollar printed money.
“It [deterministic attribution] is overtrusted by people... even the best of marketers might think that it implies causation or it implies that you can use it to do budget allocation but those assumptions are extremely flawed.” — Pranav
Deterministic attribution tells you what you can observe. Not what caused the outcome. It’s telemetry. Useful telemetry. Necessary telemetry. But still partial.
In an idealized world, deterministic attribution would neatly stitch together every touchpoint from impression → click → conversion. In the real world, it doesn’t even come close.
The “Telemetry Bias” Trap
One of the most insightful parts of the discussion centered on why companies over-index on Google and Meta while ignoring video, podcasts, or Out-of-Home (OOH) advertising.
Pranav coined this “Telemetry Bias.” We gravitate toward the channels that are easiest to measure because they look fantastic in a spreadsheet.
“The things that are easiest to measure will show that they’re working... This is why 70% or 80% of businesses are investing... [in] direct response on LinkedIn and Meta. But when was the last time you clicked on a podcast ad? You can’t.” — Pranav
If you rely solely on deterministic data, you will optimize your marketing mix for trackability rather than growth. You’ll cut the brand-building channels that actually drive demand simply because a pixel didn’t fire.
Why? The blockers keep stacking up:
- Browser-level blocking
- Cookie consent banners
- GDPR / CCPA
- iOS ATT + SKAdNetwork
- Platform-level obfuscation
In practice, the amount of deterministic data being captured correctly is dwindling:
- Web: ~60-70% in the best cases
- iOS app install attach rates: ~15-30%
That’s not a rounding error. That’s a fundamentally incomplete picture.
And when vendors claim “perfect matching,” they’re usually leaning on fingerprinting—which is probabilistic by definition and increasingly shut down by platforms.
The Solution: Incrementality and “Eyeballing It”
So, if the data is broken, what’s the fix?
For early-stage companies (and even Series B), you don’t always need expensive software. You need a scientific mindset. Pranav suggests a return to “eyeballing” big changes: if you double your podcast spend ($100k/month) and your total installs go up by 5% over that period, there is your answer (see the sketch below).
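To make the “eyeball” test concrete, here’s a minimal sketch in Python. The numbers and the single-period pre/post comparison are ours for illustration (not Paramark’s methodology); a real read would also sanity-check seasonality and whatever else launched that month.

```python
# Naive pre/post "eyeball" check: compare installs before vs. during
# a big spend change. All numbers are illustrative.
baseline_installs = 40_000   # monthly installs before the change
test_installs = 42_000       # monthly installs while podcast spend was doubled
extra_spend = 100_000        # incremental podcast spend for the month ($)

lift = test_installs - baseline_installs
lift_pct = lift / baseline_installs
cost_per_install = extra_spend / lift if lift > 0 else float("inf")

print(f"Lift: {lift} installs ({lift_pct:.1%})")
print(f"Cost per incremental install: ${cost_per_install:,.2f}")
# A visible lift at an acceptable incremental cost is "your answer";
# flat installs mean the extra spend probably wasn't incremental.
```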
At Paramark, the team is stripping away the vanity metrics (MQLs/SQLs) and focusing on a simple dashboard:
1. Total Ad Spend
2. Impressions/Engagements
3. Website Traffic & Search Query Volume
4. Demos (High-intent hand-raisers)
If you pull a big lever (like launching a campaign in New York) and you don’t see a lift in traffic or search volume from that geography, the campaign didn’t work. It doesn’t matter what the attribution software says.
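And that geo check is easy to rough out yourself. Here’s a minimal sketch with hypothetical traffic numbers, using Chicago as a comparable control market (neither the markets nor the figures are from the episode):

```python
# Minimal geo-lift sketch: compare the treatment market's change in weekly
# site traffic against a similar, untouched control market. Hypothetical data.
pre = {"new_york": 18_000, "chicago": 17_500}   # avg weekly sessions before launch
post = {"new_york": 21_500, "chicago": 17_800}  # avg weekly sessions during campaign

def pct_change(before: float, after: float) -> float:
    return (after - before) / before

treatment_lift = pct_change(pre["new_york"], post["new_york"])
control_drift = pct_change(pre["chicago"], post["chicago"])

print(f"New York change: {treatment_lift:.1%}")
print(f"Chicago change:  {control_drift:.1%}")
print(f"Implied incremental lift: {treatment_lift - control_drift:.1%}")
# If the treatment geo doesn't clearly outpace the control geo,
# the campaign didn't work, whatever the attribution software says.
```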
Larger businesses that have the scale and spend should probably invest in incrementality testing and Marketing Mix Modeling (MMM).
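To give a flavor of what MMM actually does, here’s a toy sketch: regress a KPI on weekly channel spend with ordinary least squares. Real MMMs layer on adstock, saturation curves, seasonality, and priors; this only shows the core idea, on synthetic data we made up.

```python
# Toy Marketing Mix Model: estimate each channel's contribution to
# conversions from aggregate weekly data -- no user-level tracking needed.
# Real MMMs add adstock, saturation, seasonality, and priors.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
search = rng.uniform(50, 150, weeks)    # weekly spend in $k, synthetic
podcast = rng.uniform(20, 80, weeks)
ooh = rng.uniform(10, 60, weeks)

# Synthetic "truth": baseline demand + channel effects + noise
conversions = 500 + 3.0 * search + 2.0 * podcast + 1.0 * ooh \
              + rng.normal(0, 50, weeks)

X = np.column_stack([np.ones(weeks), search, podcast, ooh])
coef, *_ = np.linalg.lstsq(X, conversions, rcond=None)

for name, c in zip(["baseline", "search", "podcast", "ooh"], coef):
    print(f"{name:>8}: {c:6.2f}")
# The fitted coefficients recover roughly 3, 2, and 1 conversions per $k,
# which is the budget-allocation signal attribution pixels can't give you.
```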
Setting up your team for success
In addition to all this, there are three more things you should consider doing.
Respect time horizons
Some effects show up fast. Others don’t. If you judge every campaign on a 2-week window, you’ll systematically kill your marketing engine. Some tests need to run for 4 weeks, or even 12, depending on the channel, the type of ad inventory, and your buying journey. Use leading indicators and lagging indicators, and know which is which.
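To see why the window matters, here’s a small illustration with made-up numbers: a channel whose effect ramps over several weeks looks dead on a 2-week read and healthy on a 12-week read.

```python
# A slow-burn channel: incremental demos per week ramp up, then plateau.
# Judging it at different horizons gives very different answers.
weekly_lift = [10, 20, 40, 80, 120, 140, 150, 150, 150, 150, 150, 150]

for window in (2, 4, 12):
    total = sum(weekly_lift[:window])
    print(f"{window:>2}-week read: {total:>5} incremental demos "
          f"({total / window:.0f}/week average)")
# The 2-week read sees only the ramp and kills the channel;
# the 12-week read captures the steady state.
```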
Don’t over-test at low volume
Incrementality testing is powerful—but only when you have scale, enough time, and limited overlap. Running five overlapping tests on tiny budgets just manufactures false confidence.
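A quick simulation shows where that false confidence comes from. Every test below has zero true effect, yet at small volume a meaningful share of them still shows a double-digit “lift” from noise alone (the visitor counts, base rate, and threshold are arbitrary):

```python
# Simulate many small-sample tests with NO true effect and count how
# often random noise alone produces a >10% apparent lift.
import random

random.seed(1)
trials, flagged = 2_000, 0
for _ in range(trials):
    control = sum(random.random() < 0.02 for _ in range(500))  # 2% base rate
    test = sum(random.random() < 0.02 for _ in range(500))     # same true rate
    if control and (test - control) / control > 0.10:
        flagged += 1

print(f"{flagged / trials:.0%} of no-effect tests showed a >10% 'lift'")
# At this volume, a large chunk of "wins" are pure noise.
```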
Hire for a scientific mindset
This one’s uncomfortable but true. Many marketing teams lack someone who can think clearly about hypotheses, controls, bias, and causation versus correlation. That gap matters more than your tooling, and hiring is the way to close it.
News: Ads in the AI Era
Finally, we touched on Google’s major update: Direct Offers in AI Overviews.
Google is now allowing advertisers to place price-promotion ads directly inside AI-generated answers for high-intent queries. While this opens up new inventory, there is a catch: it is currently focused on price discounts.
As Pranav noted, “It encourages the wrong behavior in my opinion which is discounting,” but marketers are inevitably going to flock to it to capture that high-intent traffic.