Incrementality Without the PhD

William DeCourcy · April 20, 2026

Your Attribution Model Agrees With Everything You Do

I've been running performance marketing campaigns for 15 years. Every attribution model I've used has told me the same thing: everything is working.

Google says Google drove the conversion. Facebook says Facebook did. The retargeting vendor says retargeting was the hero. Your email platform takes credit for the open that preceded the purchase.

They're all right, technically. And the sum of their claims adds up to roughly 3 times your actual revenue.

That's the problem with attribution. It measures exposure, not causation. The gap between those two concepts is where marketing budgets go to die.

What Incrementality Actually Means

Incrementality answers one question: "Would this conversion have happened anyway?"

Your attribution model can't answer that. It can only tell you which touchpoints existed before a conversion happened. It can't tell you which ones caused it.

An incrementality test removes a variable and measures what changes. In marketing, that means turning off a channel and watching what happens to revenue.

If revenue drops, the channel was doing real work. If revenue stays flat, you were paying for conversions that were going to happen regardless.

That's the entire concept. The rest is execution.

The 2-Week Holdout Test

Here's the setup. Any team can run this without a data scientist, a testing platform, or a PhD.

Week 0 (baseline). Pick the channel you want to test. Log your total revenue and conversion count for the previous 7 days. This is your baseline. Don't cherry-pick a good week. Use whatever happened last week.

Pick your test channel. Start with your second-highest spend channel. Second-highest gives you a meaningful test without betting the farm. Running a holdout on your primary channel first risks real revenue loss before you've built confidence in the method.

Weeks 1-2 (holdout). Turn the channel off completely. Pause all spend, all ads, all scheduled posts. Don't reduce budgets. Kill it entirely for 14 days.

Measure the same numbers. Total revenue and conversion count, same method as baseline. Compare.

Interpret the results. (A short script after the three outcomes below applies the same thresholds.)

Revenue dropped more than 10%? The channel was driving real incremental revenue. Turn it back on and test a different channel next.

Revenue stayed within 5% of baseline? The channel was probably taking credit for organic demand. You just found budget to reallocate.

Revenue went up? This happens more often than people expect. It usually means the channel was generating low-quality traffic that cluttered your funnel and distracted your sales team.
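
If you want the comparison in something more repeatable than a spreadsheet, here's a minimal Python sketch of that decision tree. Everything in it is illustrative: the interpret_holdout function name, the revenue figures, and the treatment of a 5-10% drop (which the thresholds above leave open) as inconclusive. One practical note: normalize the 14-day holdout to a weekly figure before comparing it to the 7-day baseline.

```python
# Minimal sketch of the holdout decision tree described above.
# All names and numbers are illustrative; plug in your own figures.

def interpret_holdout(baseline_revenue: float, holdout_revenue: float) -> str:
    """Apply the three outcomes above to a baseline-vs-holdout comparison.

    Both figures should cover the same window length, e.g. compare the
    7-day baseline against the holdout period's average week.
    """
    change = (holdout_revenue - baseline_revenue) / baseline_revenue

    if change > 0:
        # Revenue went UP with the channel off: the channel may have been
        # feeding low-quality traffic into the funnel.
        return "revenue rose: channel may have been hurting more than helping"
    if change >= -0.05:
        # Within 5% of baseline: the channel was likely claiming credit
        # for organic demand. Reallocate the budget.
        return "flat: likely non-incremental, reallocate the budget"
    if change <= -0.10:
        # Dropped more than 10%: real incremental revenue. Turn it back on.
        return "dropped >10%: incremental, turn the channel back on"
    # A 5-10% drop is ambiguous under these thresholds; extend the holdout.
    return "inconclusive: extend the holdout and re-measure"


# Illustrative numbers only: $210k baseline week vs. $205k average holdout week.
print(interpret_holdout(baseline_revenue=210_000, holdout_revenue=205_000))
```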

The incrementality guide at professorleads.com/tools/incrementality-guide walks through this setup with a downloadable template. It includes the baseline tracking sheet, the decision tree for interpreting results, and the one-page summary you can hand to your CFO.

The Channel That Didn't Survive

A marketing team ran this test on their display retargeting. They'd been spending $40,000 a month. The attribution dashboard showed retargeting participating in 35% of their conversions. It looked essential.

They turned it off for 2 weeks.

Revenue didn't move. Conversions didn't move. The only thing that changed was the ad bill went to zero.

Here's what was happening: their retargeting was following people who had already decided to buy. The banner showed up in the minutes or hours before a purchase that was going to happen anyway, and the attribution model gave the banner credit.

$480,000 a year. On a channel that was measuring demand, not creating it.

They reallocated the budget to LinkedIn content campaigns. CPL went up. Revenue went up more. The retargeting was never turned back on.

The Double-Count Problem

This is the math that makes attribution reports look better than they are.

A customer clicks a Google ad. Later that week, they see a retargeting banner. A few days later, they open an email. Then they buy.

Multi-touch attribution gives partial credit to all three. First-touch gives Google 100%. Last-touch gives email 100%.

None of these models tell you what would have happened if you'd removed one of those touchpoints. They count exposure. Incrementality measures causation.
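
To see the double-count in toy numbers, here's a small illustrative sketch: each platform's self-credited conversions summed against what the order system actually recorded. Every figure here is invented; the pattern is the point.

```python
# Toy illustration of the double-count: each platform credits itself,
# and the claims sum to far more than reality. All figures invented.

credited_conversions = {
    "google_ads": 520,   # Google's dashboard claims these
    "retargeting": 610,  # the retargeting vendor claims these
    "email": 480,        # the email platform claims these
}
actual_conversions = 575  # what the order system recorded

total_claimed = sum(credited_conversions.values())
inflation = total_claimed / actual_conversions

print(f"Claimed across channels: {total_claimed}")    # 1610
print(f"Actually recorded:       {actual_conversions}")
print(f"Inflation factor:        {inflation:.1f}x")   # 2.8x
```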

The sum of credited conversions across all your channels will almost always exceed your actual revenue. I've seen teams present dashboards that, taken at face value, implied they were generating 2.8x their real revenue. Everyone knew the numbers were inflated. Nobody had a better method.

The holdout test is the better method.

What Benchmarks to Expect

After watching teams run holdout tests across dozens of channels, here are the patterns that show up most often.

Display retargeting tends to show low incrementality. Attribution models credit retargeting for 30-50% of conversions, but holdout tests frequently show the actual incremental contribution is closer to 10-15%. Retargeting chases people who were already buying.

Branded paid search is another common surprise. If someone searches your brand name, they already know who you are. Branded search ads capture demand that organic search would have handled. The holdout test usually shows a 5-15% revenue impact, not the 30-40% the attribution model claims.

Cold paid social tends to hold up well, particularly for B2B. Cold social creates awareness that wouldn't exist otherwise. When you turn it off, pipeline starts thinning in 2-4 weeks.

Email nurture sequences are harder to test because the holdout period needs to be longer. The impact of pausing emails takes weeks to show up in close rates. Consider a 4-week holdout for email.

Your results will vary. That's the whole point. The holdout test tells you what your channels are doing for your business, not what someone else's channels did for theirs.

The CFO Conversation

Here's why this matters beyond the marketing team.

Every CFO I've worked with has the same complaint about attribution: "I don't trust these numbers." They're right. Attribution models are hypotheses. Most marketing teams present them as facts.

A holdout test gives you something a CFO respects: an experiment with a clear before-and-after.

Walk into the budget meeting with this: "We turned off display retargeting for 2 weeks. Revenue stayed flat. We're reallocating that $40,000 a month to channels with proven incremental impact."

That's a conversation about cause and effect. Real numbers. Before and after. Math anyone in the room can follow.

One test changes the entire dynamic between marketing and finance. You stop defending a dashboard. You start presenting evidence.

When to Run a Holdout Test (and When to Wait)

Run the test when you've been spending on a channel for at least 90 days without validating its incremental contribution. Any channel older than a quarter without a holdout test is running on trust, not data.

Run the test when you're planning a budget reallocation and need evidence. "The model says Channel X is working" is a hypothesis. "We turned Channel X off and revenue didn't change" is a fact.

Run the test when your CFO asks for proof. Give them the experiment instead of the report.

Wait if you just launched the channel. Give new channels 60-90 days to ramp before testing. Early holdout tests produce false negatives because the channel hasn't had time to build awareness.

Wait if you're in a seasonal peak. Running a holdout during Black Friday or your Q4 close will produce noisy data. Test during a normal business period.

It's fine to test one channel at a time. Run them sequentially: one per month gives you 12 channel validations in a year. That's more incrementality data than most teams generate in a decade.

The Template

I put together a free incrementality testing guide that walks through every step in this post. It includes the baseline tracking spreadsheet, the decision framework for interpreting results, and the one-page executive summary you can hand to your CFO.

Get the Incrementality Testing Guide

The math is simple. The template is free. The hardest part is the willingness to turn something off for 14 days.

Most teams would rather keep spending $40,000 a month than ask the uncomfortable question.

You don't have to be most teams.

William DeCourcy

William is the founder of Professor Leads and has spent 15+ years in performance marketing. He teaches B2B and B2C marketers how to make better decisions about lead generation.
