Your Attribution Model Is Lying to You. Here’s How to Catch It.

Forrester tested attribution models against holdout experiments and found something brutal: 72% of marketing leaders said they trusted their data. The models were wrong by 37% on average.

That's not a margin of error. That's a chasm between what you think is working and what's actually working.

Your attribution model didn't set out to deceive you. But it's lying all the same.

Why Can't Your Attribution Model See the Full Buyer Journey?

The problem starts with math. The average B2B buyer racks up 28 touchpoints on the way to a deal. An attribution model? It captures maybe 8 of them.

Here's what happens next: The model distributes credit across the 8 it can see. Whatever you spend the most on gets credit proportional to visibility. You end up looking in a mirror. The channel you fund the heaviest appears to work the hardest. The model confirms what your gut already believed, which feels like validation but is actually a feedback loop.
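
To make the mirror concrete, here's a minimal sketch, with made-up channel names and a simple linear split rather than any specific vendor's algorithm: credit is divided evenly across the 8 touches the model logged, and the 20 it never saw get exactly nothing.

```python
# Hypothetical illustration: a linear multi-touch model can only divide credit
# across the touchpoints it actually observed. Channel names are made up.
observed = ["paid_social", "paid_search", "email", "webinar",
            "retargeting", "organic_search", "paid_social", "email"]  # 8 tracked touches
untracked = 20  # dark social, word of mouth, peer referrals, sales conversations...

credit_per_touch = 1 / len(observed)  # linear model: equal weight per visible touch
credit = {}
for channel in observed:
    credit[channel] = credit.get(channel, 0) + credit_per_touch

for channel, share in sorted(credit.items(), key=lambda kv: -kv[1]):
    print(f"{channel:>15}: {share:.0%} of the credit")
print(f"{untracked} untracked touches: 0% of the credit, by construction")
```

The channel that generates the most trackable touches collects the most credit, whether or not it actually moved the deal.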

It's not deception. It's a blind spot masquerading as insight.

The second failure is architectural. An attribution model is built to see only what shows up as a trackable click or pixel. Up to 80% of social sharing happens in dark social: DMs, Slack threads, texts, group chats. Your model sees none of it. SparkToro's research suggests that a significant share of "direct" traffic is actually people arriving after a referral through channels that carry no URL parameters. Your best channel might be word-of-mouth. Your model hands the credit to something else.

You're measuring the wrong thing because you can only measure what touches a browser.

What Happens When You Actually Turn Off Your Top-Performing Channel?

A B2B SaaS company ran an attribution model. It said paid social drove 31% of pipeline.

They turned it off for 6 weeks. Just paused the whole channel.

Pipeline drop: 4%.

A 27-point gap between what the model said and what actually happened.

This wasn't a mistake in their setup. Their data was clean. Their tagging was correct. The model did exactly what it was designed to do. It took 28 touchpoints, captured 8, and allocated credit to the most visible ones. Paid social showed up early and often. The model crowned it a star.

Reality had a different opinion.

The attribution model wasn't wrong about seeing paid social. It was wrong about sizing its impact. And had they kept trusting the model, they would have kept throwing money at a channel that was barely moving the needle.

How Do You Test Whether Your Attribution Model Is Lying?

There's one test that cuts through attribution fog. It's called a geo test, and it's the closest thing marketing has to honest feedback.

Here's how it works:

1. Pick your top-performing channel per your attribution model.

2. Pause it completely in one geography. Keep it running in a comparable one.

3. Wait 4-6 weeks. Measure pipeline in both geos.

If the paused geography barely flinches, your model is overcrediting that channel. The attribution assumed it was driving value. The market just showed you it wasn't.
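
If you want to put a number on "barely flinches," here's a minimal sketch of the readout, assuming you can pull total pipeline for each geography for the window before the pause and the window during it. The figures and names are hypothetical; the comparison is a simple difference-in-differences.

```python
# Hypothetical geo-test readout: pipeline totals for a 6-week baseline window
# and the 6-week pause window, in the paused geo and a comparable control geo.
baseline     = {"paused_geo": 1_000_000, "control_geo": 1_050_000}
during_pause = {"paused_geo":   960_000, "control_geo": 1_040_000}

def pct_change(before, after):
    return (after - before) / before

paused_delta  = pct_change(baseline["paused_geo"],  during_pause["paused_geo"])   # -4.0%
control_delta = pct_change(baseline["control_geo"], during_pause["control_geo"])  # -1.0%

# Difference-in-differences: the change you can actually blame on the pause.
channel_impact = paused_delta - control_delta

print(f"Paused geo:  {paused_delta:+.1%}")
print(f"Control geo: {control_delta:+.1%}")
print(f"Estimated true impact of the channel: {abs(channel_impact):.1%} of pipeline")
```

Set that estimate next to whatever share of pipeline the attribution model claims the channel drives; the gap between the two numbers is the size of the overcrediting.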

This is the most straightforward diagnostic in marketing. It bypasses models entirely. It goes straight to behavior.

A company running a geo test discovers something unexpected: the channel getting 40% of the budget barely moves the needle when paused, and a channel the model under-weights shows 3x the impact it suggested.

You don't need perfect data to act on this. You just need a comparable geography and the guts to pause something you're supposedly winning with.

Dark Social and the Attribution Blind Spot

Your model can't see half the conversation.

Dark social research consistently shows that people share content through private channels more than public ones. A prospect gets an email from your sales team. They text their peer. "Hey, should we check this out?" The peer doesn't Google your company. They just come back and say yes.

Your model credits the email. The email gets the conversion. But the real driver of the decision was a text message your system never saw.

The same happens with Slack. An engineer shares your explainer video in a private Slack channel. 8 people see it. 1 gets curious and visits your site 3 days later. Your model attributes that visit to direct traffic or organic. It has no idea Slack moved the needle.

The result: Channels that create genuine word-of-mouth appeal look less valuable than channels that create trackable clicks. You defund the former. You overfund the latter. Your model has led you backward.

This is why the most honest marketing stacks stop trusting attribution alone.

What's Actually Replacing Attribution Models

The smartest teams aren't trying to perfect their attribution models anymore. They're replacing them.

The new stack: incrementality testing for channel-level truth, media mix modeling for budget allocation.

One team ran exactly this experiment. They'd built what they thought was a world-class attribution model. It ranked paid social as second-best. They ran an incrementality test on the same period. Layered in media mix modeling. Both methods agreed. Attribution disagreed.

Which do you trust: the two that agree with each other, or the one that disagrees with both?

Incrementality testing turns the question inside out. Instead of trying to credit past behavior, you isolate what happens when you change behavior. Turn off a channel. Measure the impact. Repeat with the next one. No assumptions. Just outcomes.
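
Here's a minimal sketch of that loop, assuming you have pipeline-per-account figures for a holdout group (channel off) and an exposed group (channel on). The channels and numbers are hypothetical.

```python
# Hypothetical holdout results: average pipeline per target account when the
# channel was running (exposed) vs. withheld (holdout), for each channel tested.
holdout_results = {
    "paid_social": {"exposed": 118.0, "holdout": 114.0},
    "paid_search": {"exposed": 131.0, "holdout": 102.0},
    "webinars":    {"exposed": 125.0, "holdout":  96.0},
}

for channel, groups in holdout_results.items():
    lift = (groups["exposed"] - groups["holdout"]) / groups["holdout"]
    print(f"{channel:>12}: {lift:+.0%} incremental lift")

# No credit rules, no touchpoint weights: just the difference between
# "channel on" and "channel off" for comparable groups of accounts.
```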

Media mix modeling sits on top and answers a different question: Given your constraints and your channels' true influence, how should you allocate budget?
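
Real media mix models handle carryover (adstock), saturation, and seasonality. The sketch below is deliberately minimal, with simulated weekly data and plain least squares, just to show the shape of the output: marginal pipeline per dollar of spend, per channel.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 52

# Simulated weekly spend per channel (paid_social, paid_search, webinars).
spend = rng.uniform([30_000, 20_000, 8_000], [50_000, 30_000, 15_000], size=(weeks, 3))

# Simulated ground truth: baseline demand plus a marginal return per channel, plus noise.
true_returns = np.array([0.5, 6.0, 9.0])  # dollars of pipeline per dollar of spend
pipeline = 250_000 + spend @ true_returns + rng.normal(0, 10_000, size=weeks)

# Minimal media mix model: ordinary least squares with an intercept for baseline demand.
X = np.column_stack([np.ones(weeks), spend])
coefs, *_ = np.linalg.lstsq(X, pipeline, rcond=None)

baseline_demand, *per_channel = coefs
for name, coef in zip(["paid_social", "paid_search", "webinars"], per_channel):
    print(f"{name:>12}: ~${coef:.2f} of pipeline per $1 of spend")
```

The output is a marginal return per channel, which is the number the budget question actually needs, not credit for last quarter's clicks.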

These two together are more honest than any attribution model can be.

The Measurement Tax: When Perfect Data Kills Progress

One team spent 9 months building a "perfect" attribution model. During those 9 months, they didn't reallocate a single dollar.

A competitor took a different path. They ran 4 holdout tests. Analyzed the results in 2 weeks. Moved 30% of budget based on what they learned.

18 months later, the competitor had grown pipeline 18%. The first team? Still using their perfect attribution model.

Measurement paralysis is the most expensive analytics problem in marketing because it looks like progress. You're building something sophisticated. You're getting sophisticated outputs. You feel like you're getting smarter about your money.

You're actually getting slower.

The 37% gap Forrester found wasn't the cost of bad attribution. It was the cost of trusting attribution. Every quarter you spend perfecting a model is a quarter you're not running tests that would move budget in the direction that actually matters.

Start small. Run a geo test on your strongest channel. Measure what happens. Reallocate. Do it again next quarter.

The test will tell you your model is wrong. That's the point.

What to Do Monday Morning

Run a geo test this week. Don't wait for perfect data. Don't build a new model first.

Pick one geography where you can pause your top-performing channel and a comparable one where you keep it running. Measure pipeline in both.

If it barely moves, your attribution model is overcrediting it. That's not a surprise. That's an opportunity.

Start there.

Then look at incrementality testing. Or media mix modeling. Or both. But stop spending cycles perfecting something that's wrong by 37%.

At a Glance

First-Touch Attribution | Multi-Touch Attribution
Gives all credit to first interaction | Spreads credit across all touchpoints
Simple to implement | Requires more data tracking
Tells you how people discover you | Tells you the full path (or what you can see)
Organic search gets all the credit | Paid gets credit too, but so does organic
Misses influence of later channels | Overcredits channels with high visibility
Risk of under-investing in closing channels | Risk of over-investing in visible channels

Frequently Asked Questions

Should I stop using attribution models?

Not completely. Use them as a starting point, not a destination. Run a geo test to validate what they're telling you. If a channel your model credits heavily barely moves the needle when you pause it, the model is overcrediting that channel.

What's dark social and why can't my attribution model see it?

Dark social is sharing through private channels: DMs, Slack, texts, group chats. A prospect reads your content, texts a peer, "should we check this out?" Your model never sees that text. It credits the original channel, not the text that convinced them.

How long should I run a geo test?

4 to 6 weeks minimum. B2B sales cycles are long, so a 2-week pause might not show up in pipeline yet. Give the geography 6 weeks. If barely anything moves, your model is lying about that channel's impact.

What should I do instead of attribution models?

Start with geo tests to validate channel impact. Layer in incrementality testing to isolate what changes when you adjust spend. Add media mix modeling to optimize budget allocation. These three together beat any attribution model.


About the Author

William DeCourcy is the founder of Professor Leads and a Forbes Business Development Council contributor. He's spent 15 years building lead generation systems for B2B companies. His writing on metrics, attribution, and pipeline strategy has been published in Forbes.

Want to stop trusting broken attribution models? Subscribe to the newsletter for weekly testing frameworks, or watch the breakdowns on YouTube.
