Scalversion Team · 7 min read

Why Your Email ROI Numbers Are Lying to You

The email marketing industry loves to cite a "36:1 ROI" statistic. For every dollar spent on email, you get $36 back. It is one of the most widely repeated numbers in digital marketing. It is also one of the most misleading. That 36:1 number comes from attribution models that credit email for revenue from customers who were going to buy anyway. It conflates correlation (customers who receive emails also spend money) with causation (the emails made them spend money). The real ROI of email marketing is almost certainly positive, but it is not 36:1. Understanding the gap between attributed and incremental ROI is essential for making smart marketing decisions.

How Attribution Inflates Email ROI

Most email platforms measure campaign revenue using some form of attribution: last-touch, first-touch, or multi-touch. The typical approach works like this:

1. Customer receives an email.
2. Customer opens the email (or the platform records a pixel load).
3. Customer makes a purchase within an attribution window (typically 3-7 days).
4. The platform credits the full purchase amount to the email campaign.

The problem is step 4. The customer may have made that purchase regardless of the email. They might have:

- Already been browsing the website before the email arrived
- Seen a paid ad on Instagram that morning
- Received a push notification from the app
- Simply been ready to reorder a product they buy regularly

When the email gets full credit for all of these purchases, the ROI number balloons. A customer who spends $200 on a routine reorder gets attributed to whatever email they happened to receive that week.

To illustrate the scale of the problem: imagine you send a promotional email to 100,000 customers. Over the next week, 2,000 of them make a purchase totaling $180,000. Your platform says the campaign drove $180,000 in revenue. But what if you had not sent the email? If 1,500 of those 2,000 customers would have purchased anyway (a reasonable assumption for a loyal customer base), only $45,000 of that revenue is truly attributable to the email. Your campaign's real impact is 75% smaller than what the dashboard shows.
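The arithmetic above can be sketched in a few lines. Every number here is an illustrative assumption from the example, not real campaign data; the key move is scaling attributed revenue by the share of purchases that would not have happened anyway.

```python
# Worked example: how much of the attributed revenue is actually
# incremental? All figures are illustrative assumptions from the text.

attributed_revenue = 180_000   # revenue the platform credits to the email
purchasers = 2_000             # recipients who bought within the window
baseline_purchasers = 1_500    # assumed: would have bought without the email

# Only purchases that would NOT have happened anyway are incremental.
# (This assumes baseline buyers spend the same on average as the rest.)
incremental_share = (purchasers - baseline_purchasers) / purchasers
incremental_revenue = attributed_revenue * incremental_share

overstatement = 1 - incremental_revenue / attributed_revenue

print(f"Incremental revenue: ${incremental_revenue:,.0f}")
print(f"Attribution overstates impact by {overstatement:.0%}")
```

With these inputs the sketch recovers the numbers in the example: $45,000 of incremental revenue, meaning the dashboard figure overstates the campaign's impact by 75%.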

The Overcounting Problem in Practice

Attribution overcounting creates a cascade of bad decisions:

**Budget misallocation.** If email appears to deliver 36:1 ROI while paid search delivers 8:1, the obvious move is to shift budget toward email. But if email's true ROI is 10:1, the two channels are much closer in effectiveness, and the reallocation might be wrong.

**False optimization signals.** You A/B test two subject lines. Subject line A gets a higher click rate and more attributed revenue. You declare it the winner. But attributed revenue is noisy. The real incremental difference might be negligible or even reversed. You just optimized for a metric that does not measure what you think it measures.

**Difficulty justifying budget cuts.** Every campaign appears profitable because the attribution model is generous. This makes it nearly impossible to identify and kill underperforming campaigns. Your CFO asks "which campaigns should we cut?" and the answer appears to be "none of them, they all have positive ROI." That cannot be true, but the data cannot tell you which ones to cut.

**Organizational credibility risk.** Savvy executives and finance teams are increasingly skeptical of marketing attribution numbers. If your email program claims $10 million in annual revenue impact and the CFO does not believe it, you have a credibility problem even if the true number is $5 million (which would still be impressive).

The Solution: Causal Measurement

The fix is to supplement attribution with causal measurement. This means using holdout groups to measure the incremental impact of your campaigns. Instead of asking "how much did people spend after receiving our email?" you ask "how much more did they spend compared to a randomly selected group that did not receive the email?" The difference is the causal, incremental impact.

Here is what this looks like for the example campaign above:

**Attribution-based result:** $180,000 in attributed revenue. 36:1 ROI.

**Holdout-measured result:** $45,000 in incremental revenue. 9:1 ROI.

Both numbers are "correct" in the sense that they measure real things. The $180,000 measures total revenue from email recipients. The $45,000 measures the revenue that would not have happened without the email. For decision-making, the second number is the one that matters.

This does not mean email marketing is bad. A 9:1 ROI is excellent. But it means your decisions should be based on the honest number, not the inflated one.

Scalversion's measurement approach is built on holdout testing from the ground up. Every campaign automatically runs alongside a randomized holdout group, and the monthly incrementality report gives you the causal, incremental revenue number with confidence intervals.

What to Do About It

You do not need to throw out your attribution data. It is still useful for operational monitoring: understanding email engagement, tracking campaign reach, and identifying trends. But for ROI calculations and budget decisions, you need a causal layer on top.

**Start with one campaign.** Pick your biggest lifecycle campaign (usually winback or cart abandonment) and run a holdout test. Compare the attributed revenue to the incremental revenue. The gap will tell you how much your current numbers are inflated.

**Report both numbers.** Present your results with both the attributed and incremental figures. This builds trust with finance and leadership teams and demonstrates analytical rigor.

**Optimize for incremental lift.** Once you have holdout-measured data, start making optimization decisions based on incremental lift rather than attributed revenue. This might change which campaigns you invest in and which you retire.

**Make it ongoing.** Holdout testing should not be a one-time experiment. The most valuable insight comes from continuous measurement across campaigns, over time. Trends in incremental lift tell you whether your program is actually getting better.
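The practical prerequisite for all of this is a clean random split before the send. One common approach, sketched below, is deterministic hashing: each customer lands in the same group every time for a given campaign, so repeated sends within the campaign cannot contaminate the holdout. The campaign name, holdout size, and function name are illustrative assumptions.

```python
# Minimal sketch of carving out a randomized holdout before a send.
# Deterministic hashing keeps each customer in the same group for the
# whole campaign; the 10% holdout size is an illustrative assumption.
import hashlib

def assign_group(customer_id: str, campaign: str,
                 holdout_pct: float = 0.10) -> str:
    """Stable pseudo-random assignment to 'holdout' or 'treatment'."""
    digest = hashlib.sha256(f"{campaign}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "holdout" if bucket < holdout_pct else "treatment"

groups = [assign_group(str(i), "winback-q3") for i in range(10_000)]
print(f"Holdout share: {groups.count('holdout') / len(groups):.1%}")
```

Because the assignment is a pure function of (campaign, customer), you can reconstruct the groups at analysis time without storing a separate assignment table, and re-sends to the same audience stay consistent.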

Conclusion

Your email ROI numbers are almost certainly inflated. This is not a failure of your team or your platform. It is a structural limitation of attribution-based measurement. The solution is not to abandon email (it works!) but to measure it honestly using holdout groups and incremental lift. The marketers who make this shift gain something invaluable: credibility. When your numbers are backed by causal measurement, you can defend your budget, justify your strategy, and make genuinely informed decisions about where to invest next.

Related Articles

Causal Lift vs. Attribution: What Marketers Get Wrong
Attribution tells you who converted after seeing your campaign. Causal lift tells you who converted because of it. Here is why the difference matters.
How to Measure If Your Email Campaigns Actually Work
Go beyond open rates and click rates. Learn how holdout testing reveals whether your email campaigns truly drive incremental revenue.
How to Prove Email Campaigns Drive Revenue
A practical guide for lifecycle marketers who need to prove their email campaigns drive real revenue. Build the case with holdout testing and incremental lift.

See holdout-based measurement in action

Scalversion runs every campaign with a built-in holdout group and delivers monthly incrementality reports. Start a free pilot.

Start Free Pilot