Most email marketing teams can tell you their open rate and click-through rate within seconds. What they can’t tell you is whether those numbers actually mean their program is working — and that gap is costing them real money.
According to Jeanne Jennings, CEO of Email Optimization Shop, open rate correctly predicted the winning version of an email only 20% of the time across years of A/B testing. Click-through rate performed even worse — it identified the true winner just 7% of the time. This tutorial walks through how to stop optimizing for the wrong outcomes, what metrics to track instead, and how to build an email measurement stack that actually ties to revenue.
What This Is
The email metrics problem is not new, but the scale of the mismeasurement is striking. This is a systematic, data-documented failure mode that affects everything from subject line testing to campaign evaluation to executive reporting.
Jeanne Jennings at Martech.org describes the pattern clearly: she has run subject line A/B split tests across a wide range of clients and audiences over many years. In those tests, the version with the highest open rate frequently produced fewer conversions or lower revenue per email than competing versions. The same pattern held for click-through rate tests — and the data is not ambiguous.
Here is what the numbers actually showed:
Open Rate as a KPI:
– 20% of the time: Highest open rate correctly predicted the highest conversion rate or revenue per email (RPE)
– 10% of the time: Highest open rate pointed to the wrong winner
– 70% of the time: Open rate variance fell within the margin of error — inconclusive even when a clear business-outcome winner existed
The conclusion from that distribution: open rate either misled the analysis or provided no useful signal 80% of the time.
Click-Through Rate as a KPI:
– 7% of the time: Highest CTR version also produced the highest conversion rate or RPE
– 36% of the time: CTR pointed to the wrong winner
– 57% of the time: CTR differences were not statistically significant, even though business metrics clearly were
CTR is an even weaker predictor than open rate: it proves reliable in fewer than one test in fourteen.
There is also an infrastructure reason open rates have become untrustworthy that goes beyond testing methodology. The NotebookLM research report on email deliverability documents what is called “Phantom Engagement”: Apple’s Mail Privacy Protection (MPP) artificially inflates open rates by pre-fetching email images before the recipient even opens the message. This means open rate data is polluted at the source — a portion of every “open” in your analytics is a machine-generated signal, not a human one.
The combination of these two problems — statistical unreliability as a conversion predictor, and technical inflation from MPP — makes open rate particularly dangerous as a primary KPI. Teams optimizing for opens are optimizing for a metric that is both statistically weak and technically corrupted.
The fix is not complicated: shift your primary KPIs to Conversion Rate and Revenue Per Email (RPE). These are the two metrics that directly answer whether your email program is producing business outcomes.
Why It Matters
If you are running email A/B tests and declaring winners based on open rate or CTR, you are almost certainly optimizing for the wrong thing. The data from Jeanne Jennings’ testing is unambiguous: 80% of the time, open rate either misleads or tells you nothing about business outcomes.
For practitioners, this matters in several concrete ways:
Subject line testing becomes sabotage. The most common use of open rate is to pick the winner of a subject line test. But if your winning subject line drove curiosity clicks — people opening the email to find out what it’s about, then bouncing — you have selected for shallow engagement over buying intent. Over time, you train your audience to open emails that underdeliver.
Campaign evaluation misleads leadership. When marketing teams report “our open rate went up 15%,” they are presenting a number that may have zero correlation to revenue outcomes. This erodes trust in marketing’s ability to demonstrate ROI and makes budget defense harder when it counts.
Click-through rate creates the wrong incentives in design. Design teams optimizing for CTR will pile in more links, bigger buttons, and more visual prominence for clickable elements. But Jennings’ data shows that 36% of the time, the highest CTR version produced lower conversions or RPE. More clicks can mean less revenue. That is a structural problem in how most email teams evaluate creative.
Apple MPP makes the problem worse. As documented in the NotebookLM research report, Apple’s Mail Privacy Protection pre-fetches images to protect user privacy. Since open tracking relies on a 1×1 tracking pixel embedded as an image, MPP fires that pixel before the user actually opens the email. The result is inflated open rates across all Apple Mail users — which, depending on your list composition, can represent 40–60% of your audience.
For agencies and consultants, this creates a direct accountability issue. If you are reporting client results using open rate and CTR as primary KPIs, you are presenting numbers that the data shows are unreliable predictors of actual performance. Sophisticated clients will eventually figure this out.
The Data
Here is a side-by-side view of how these metrics perform as predictors of actual business outcomes, based on Jennings’ multi-year testing data and the NotebookLM research report:
| Metric | Correctly Predicts Winner | Points to Wrong Winner | Inconclusive | Technically Reliable? |
|---|---|---|---|---|
| Open Rate | 20% of tests | 10% of tests | 70% of tests | No — inflated by Apple MPP |
| Click-Through Rate (CTR) | 7% of tests | 36% of tests | 57% of tests | Mostly — not inflated by MPP, though automated link scanners can register machine clicks |
| Conversion Rate | Direct measure — not a predictor | N/A | N/A | Yes — tracks actual outcomes |
| Revenue Per Email (RPE) | Direct measure — not a predictor | N/A | N/A | Yes — tied to transaction data |
Metric Formulas:
| Metric | Formula | Best For |
|---|---|---|
| Conversion Rate | Conversions ÷ (Emails Sent − Bounces) × 100 | Lead gen, SaaS, B2B programs |
| Revenue Per Email (RPE) | Total Revenue ÷ (Emails Sent − Bounces) | Ecommerce, transactional programs |
| Spam Complaint Rate | Complaints ÷ Emails Delivered | Deliverability health monitoring |
For spam complaint rate, the research report specifies that the hard limit is 0.3% and the safe operating threshold is 0.1%. Exceeding 0.3% triggers filtering across major inbox providers.
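As a sanity check, the three formulas above can be expressed as small helper functions. This is a minimal Python sketch; the function names and example values are illustrative, not from the source:

```python
def conversion_rate(conversions, emails_sent, bounces):
    """Conversion Rate = Conversions / (Emails Sent - Bounces) * 100."""
    return conversions / (emails_sent - bounces) * 100

def revenue_per_email(total_revenue, emails_sent, bounces):
    """RPE = Total Revenue / (Emails Sent - Bounces)."""
    return total_revenue / (emails_sent - bounces)

def spam_complaint_rate(complaints, emails_delivered):
    """Complaints / Emails Delivered, expressed as a percentage so it can be
    compared directly against the 0.1% safe and 0.3% hard thresholds."""
    return complaints / emails_delivered * 100
```

For example, a send of 10,500 emails with 500 bounces and 300 conversions yields a 3.0% conversion rate; $25,000 in attributed revenue over the same send gives an RPE of $2.50.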
Step-by-Step Tutorial: Transitioning to Conversion-Focused Email Measurement
This walkthrough covers how to rebuild your email measurement stack around the metrics that actually predict business outcomes. The process takes about two to four weeks depending on how your analytics infrastructure is currently set up.
Phase 1: Audit Your Current Measurement Stack
Before you change what you measure, document what you are currently measuring and where those numbers come from.
Step 1: List every KPI in your current email reporting.
Pull your most recent email performance report — whatever you send to stakeholders. Write down every metric included: open rate, CTR, unsubscribe rate, bounce rate, list growth rate, revenue, and so on. Note which metrics are presented as primary KPIs versus secondary diagnostics.
Step 2: Identify where each metric is generated.
For most teams, open rate and CTR come directly from your ESP (email service provider) — Klaviyo, Mailchimp, HubSpot, Salesforce Marketing Cloud, etc. Conversion data, if it exists at all, typically requires integration between your ESP and your analytics platform (Google Analytics 4, your CRM, or your ecommerce platform).
If conversion tracking does not exist in your current setup, note this explicitly. This is the most common gap.
Step 3: Determine what counts as a conversion for your program.
Per Jennings’ framework, a conversion can be: a purchase, a demo request, a webinar registration, a lead generation form completion, an app download, or a subscription renewal. Pick the one (or two) that directly represent revenue or qualified pipeline for your business. Do not track everything — prioritize the conversion event that maps to business value.
Phase 2: Implement Conversion Tracking
Step 4: Set up UTM parameters on every link in your emails.
Every link in your email campaigns should include UTM parameters that identify the source (email), medium (newsletter, promotional, transactional), and campaign name. This is the minimum requirement to track email-driven conversions in Google Analytics 4.
Example UTM structure:
https://yoursite.com/landing-page
?utm_source=email
&utm_medium=promotional
&utm_campaign=march-2026-spring-sale
&utm_content=cta-button-1
Use consistent naming conventions across your team. Inconsistent UTM naming is the single most common reason conversion data breaks down.
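One way to make the naming convention enforceable rather than aspirational is to centralize link construction in a small helper. This is a hedged sketch, not a standard tool; the function name and normalization rules (lowercase, hyphen-separated) are assumptions to adapt to your own convention:

```python
from urllib.parse import urlencode

def build_email_link(base_url, campaign, content, medium="promotional"):
    """Build an email link with consistently normalized UTM parameters,
    matching the structure shown in the example above."""
    params = {
        "utm_source": "email",
        "utm_medium": medium,
        "utm_campaign": campaign.lower().replace(" ", "-"),
        "utm_content": content.lower().replace(" ", "-"),
    }
    return f"{base_url}?{urlencode(params)}"

link = build_email_link(
    "https://yoursite.com/landing-page", "March 2026 Spring Sale", "CTA Button 1"
)
# link reproduces the example URL structure shown above
```

Routing every email link through one function like this means a typo in a campaign name breaks once, loudly, instead of silently fragmenting your GA4 reports.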
Step 5: Create a conversion goal or event in your analytics platform.
In Google Analytics 4: navigate to Admin → Events → Mark as Conversion. Select the event that fires when your conversion action completes (e.g., purchase, form_submit, demo_booked). In Klaviyo or HubSpot: set up a conversion metric in the campaign reporting settings that links back to your tracked event.
Step 6: Enable revenue tracking if you run an ecommerce program.
To calculate Revenue Per Email (RPE), you need your ESP to receive transaction revenue data. Most major ESPs support this via:
– Klaviyo: Native Shopify/WooCommerce integration — revenue attribution is automatic
– HubSpot: Connect to Stripe or your payment processor via native integration
– Salesforce Marketing Cloud: Einstein Analytics integration for revenue attribution
– Mailchimp: e-commerce integration via API or Zapier
Once revenue data flows into your ESP, RPE is calculated as: Total Revenue ÷ (Emails Sent − Bounces). Run this number at the campaign level, not just the program level, so you can compare campaign performance accurately.
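Once transaction data reaches your ESP, the campaign-level calculation is straightforward to script against an export. A minimal sketch, assuming a hypothetical export with one row per send:

```python
from collections import defaultdict

# Hypothetical export rows: (campaign, emails_sent, bounces, attributed_revenue)
rows = [
    ("spring-sale", 50000, 1200, 61000.0),
    ("spring-sale", 30000, 700, 35000.0),   # second send of the same campaign
    ("new-arrivals", 48000, 1100, 39000.0),
]

# Aggregate sends per campaign, then apply RPE = revenue / (sent - bounces)
totals = defaultdict(lambda: [0, 0, 0.0])
for campaign, sent, bounces, revenue in rows:
    totals[campaign][0] += sent
    totals[campaign][1] += bounces
    totals[campaign][2] += revenue

rpe = {c: rev / (sent - b) for c, (sent, b, rev) in totals.items()}
# rpe["spring-sale"] is about 1.23; rpe["new-arrivals"] is about 0.83
```

Aggregating before dividing matters here: averaging per-send RPE values would weight a 5,000-recipient send the same as a 50,000-recipient one.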
Phase 3: Redesign Your A/B Testing Framework
Step 7: Stop declaring subject line test winners on open rate.
This is the hardest behavioral change because it requires patience. Subject line tests optimized for open rate can declare a winner in 2–4 hours. Tests optimized for conversion rate or RPE need to run long enough for sufficient conversion events to accumulate — typically 48–72 hours for most email programs.
Change your test plan template to require a minimum sample size based on your program’s expected conversion rate, not your expected open rate. Use a sample size calculator set to your historical conversion rate as the baseline metric.
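The practical consequence of switching the baseline metric is a much larger required sample. A rough normal-approximation sketch (two-proportion test at 95% confidence and 80% power; treat the output as an estimate, not a substitute for a proper sample size calculator):

```python
import math

def sample_size_per_arm(baseline_rate, min_lift, alpha_z=1.96, power_z=0.84):
    """Approximate recipients needed per test arm to detect a relative lift,
    using the normal approximation for a two-proportion test.
    baseline_rate: historical rate for the chosen metric, e.g. 0.02 for 2%
    min_lift:      smallest relative lift worth detecting, e.g. 0.20 for +20%
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((alpha_z + power_z) ** 2 * variance / (p2 - p1) ** 2)
```

At a 2% conversion baseline with a +20% minimum detectable lift, this gives roughly 21,000 recipients per arm; the same lift at a 20% open-rate baseline needs only around 1,700, which is exactly why open-rate tests appear to finish so much faster.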
Step 8: Run a retrospective analysis on your last 12 months of tests.
Pull every A/B test you ran in the last year. For each test: what was the declared winner and on what metric? Now look at conversion data for that same period. Did the winner by open rate or CTR also perform better on conversion rate or RPE?
If you find significant discrepancies — and you likely will — document them. This becomes your internal case study for shifting stakeholder expectations away from open and click metrics.
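The comparison itself is mechanical once the test history is pulled. A minimal sketch with hypothetical test records, each holding the winner your team declared and the conversion rate each variant actually achieved:

```python
# Hypothetical retrospective: for each past A/B test, compare the declared
# winner (usually chosen on open rate or CTR) against the variant that
# actually won on conversion rate.
past_tests = [
    {"test": "subj-jan", "declared": "A", "cr": {"A": 0.018, "B": 0.024}},
    {"test": "subj-feb", "declared": "B", "cr": {"A": 0.021, "B": 0.029}},
    {"test": "cta-mar",  "declared": "A", "cr": {"A": 0.031, "B": 0.033}},
]

discrepancies = [
    t["test"]
    for t in past_tests
    if max(t["cr"], key=t["cr"].get) != t["declared"]
]
# discrepancies: ["subj-jan", "cta-mar"]
```

In this hypothetical history, two of three declared winners fail to hold up on conversion rate; your real ratio will vary, but the statistics above suggest discrepancies are the norm, not the exception.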
Step 9: Redesign your reporting dashboard.
Rebuild your email reporting template to lead with conversion rate and RPE as primary metrics. Move open rate and CTR to a secondary “diagnostic metrics” section. The framing matters: open rate is still useful for diagnosing deliverability issues or subject line resonance — it just should not be the metric on which you declare campaign success.
Phase 4: Manage Deliverability in the New Measurement Environment
Step 10: Audit your authentication setup.
The research report documents that email filtering systems now operate in three layers: infrastructure (SPF, DKIM, DMARC), visibility (semantic content classification), and reputation (user behavior signals). All three must be healthy for your email to reach the primary inbox.
Check your authentication using MXToolbox:
– SPF record: Should exist and include all sending IPs/domains
– DKIM: Should be configured for every sending domain
– DMARC: Should be at p=quarantine or p=reject, not p=none
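The DMARC check in particular is easy to automate once you have the published TXT record (fetched separately, e.g. via MXToolbox or `dig txt _dmarc.yourdomain.com`). A minimal parsing sketch; the record shown is a hypothetical example:

```python
def dmarc_policy(txt_record):
    """Return the p= policy tag from a DMARC TXT record ('none' if absent)."""
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.strip().rstrip(";").split(";")
        if "=" in part
    )
    return tags.get("p", "none")

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com"
policy = dmarc_policy(record)  # "quarantine", which passes the p=none check above
```

Wiring this into a weekly cron job against your sending domains catches the common failure mode where a DMARC record gets reset to p=none during a DNS migration and nobody notices.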
Step 11: Implement a sunset policy for non-engagers.
Per the research report, chronic non-engagement creates a “Graymail Effect” — it acts as a reputational anchor that shifts even high-quality communications toward spam classification. Implement a sunset policy that automatically moves contacts without any engagement signal for 90–180 days into a suppression segment. Send a re-engagement campaign before suppressing, but do not continue mailing unengaged contacts indefinitely.
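A sunset policy like this reduces to a date comparison once you know each contact's last engagement signal. A hedged sketch with hypothetical contact records (a real implementation would run inside your ESP's segment builder):

```python
from datetime import date, timedelta

# Sunset window: 180 days without any engagement signal (open, click, purchase)
# moves a contact into the suppression segment. 90 days is the stricter option.
SUNSET_DAYS = 180

contacts = [
    {"email": "a@example.com", "last_engaged": date(2026, 3, 1)},
    {"email": "b@example.com", "last_engaged": date(2025, 6, 10)},
    {"email": "c@example.com", "last_engaged": None},  # never engaged
]

def sunset_segment(contacts, today):
    """Return the addresses that should move to the suppression segment."""
    cutoff = today - timedelta(days=SUNSET_DAYS)
    return [
        c["email"] for c in contacts
        if c["last_engaged"] is None or c["last_engaged"] < cutoff
    ]
```

Run against an example date of April 1, 2026, this suppresses the stale and never-engaged contacts while keeping the recently engaged one mailable. Send the re-engagement sequence to this segment before suppressing it, as described above.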
Step 12: Monitor spam complaint rate as a deliverability health metric.
Maintain your spam complaint rate below 0.1% (the research report identifies 0.3% as the hard limit that triggers filtering). Use Google Postmaster Tools and Microsoft SNDS to monitor complaint rates by domain. Set up alerts if complaint rate exceeds 0.08% so you can investigate before hitting the threshold.
Expected Outcomes
After implementing this framework, expect to see:
– Cleaner A/B test results with clear conversion-rate winners
– More accurate campaign performance data tied to revenue
– Initial “drop” in reported open rates (as you stop optimizing for opens) — this is normal and correct
– Better deliverability health as you suppress non-engagers
– More defensible ROI reporting to leadership and clients
Real-World Use Cases
Use Case 1: SaaS Trial Conversion Campaign
Scenario: A B2B SaaS company sends a 5-email drip sequence to new trial users, trying to convert them to paid subscriptions. The team has been declaring subject line winners based on open rate.
Implementation: Redefine the conversion event as a “paid subscription started” event. Connect the ESP to Stripe via native integration to pass revenue data back. Set the A/B test minimum runtime to 72 hours with a conversion-rate winner declaration threshold. Run the retrospective analysis to show leadership why open rate was misleading them.
Expected Outcome: Subject line copy that drives genuine trial engagement (feature exploration, support ticket creation, app usage) will outperform curiosity-bait subject lines in conversion rate, even if it underperforms on open rate. The team stops rewarding clickbait subject lines.
Use Case 2: Ecommerce Promotional Campaign Evaluation
Scenario: A direct-to-consumer brand runs weekly promotional emails and has been optimizing for CTR. Per Jennings’ audio equipment example, the team tested whether to include product pricing in the email — the no-price version won on CTR because recipients clicked to find out the price. The company couldn’t determine which version actually drove more revenue.
Implementation: Implement Klaviyo’s native ecommerce revenue tracking so every email links back to purchase transactions. Calculate RPE for both versions — price-included versus price-excluded. Track whether the “curiosity click” converts to a purchase at the same rate as an informed click.
Expected Outcome: Pricing-included versions typically convert better because recipients who click already have pricing context and are qualified to buy. The team stops optimizing for curiosity clicks and starts optimizing for qualified buying intent.
Use Case 3: B2B Lead Generation Newsletter
Scenario: A marketing agency sends a weekly industry newsletter to a 40,000-subscriber list. Leadership wants to know if the newsletter is generating leads. The team reports open rate and CTR as the KPIs.
Implementation: Define conversion as a “demo request form submission” or “contact sales form completion.” Use UTM parameters in all newsletter links. Create a GA4 conversion event for the form submission. Track conversion rate and revenue pipeline influenced per email send.
Expected Outcome: The team can show the specific content categories (case studies, how-to guides, tool comparisons) that drive form completions, versus content that drives traffic but not leads. Newsletter strategy shifts from “what gets opens” to “what generates pipeline.”
Use Case 4: Re-engagement Campaign for Graymail Suppression
Scenario: An ecommerce brand has 200,000 subscribers. 80,000 have not opened or clicked in 6+ months. Deliverability is declining and spam complaint rates are rising.
Implementation: Per the research report’s sunset policy guidance, segment the 80,000 inactive subscribers. Send a 3-email re-engagement sequence over two weeks with clear “stay subscribed” CTAs. Track re-engagement conversion rate (clicks on the “keep me subscribed” CTA or any purchase within the re-engagement window). Suppress non-responders after the sequence completes.
Expected Outcome: List shrinks by 50–70% of the inactive segment, but deliverability metrics improve significantly. Remaining active list shows higher conversion rates because you have eliminated the low-engagement drag on your reputation.
Use Case 5: Agency Client Reporting Transition
Scenario: A marketing agency needs to transition client reporting from open rate / CTR to conversion rate / RPE. Clients are used to seeing “our open rate is 28%” as a success metric.
Implementation: Present the Jennings data in a client-facing briefing: open rate predicted the right winner only 20% of the time, CTR only 7% of the time. Show the client one retrospective case from their own program where open rate declared a winner that underperformed on conversions. Rebuild the dashboard with conversion rate and RPE as primary, open and CTR as secondary diagnostics.
Expected Outcome: Clients develop a more accurate understanding of email ROI. The agency can make stronger cases for budget investment based on demonstrable revenue attribution rather than engagement vanity metrics.
Common Pitfalls
Pitfall 1: Using open rate as a primary KPI without accounting for Apple MPP
Apple’s Mail Privacy Protection pre-fetches tracking pixels, artificially inflating open rates across Apple Mail users. Per the research report, this “Phantom Engagement” makes open rate data unreliable at the source. Teams that benchmark open rate targets based on historical data are comparing pre-MPP and post-MPP numbers as if they are the same metric. They are not. Audit your list for Apple Mail users and apply appropriate skepticism to open rate trend data.
Pitfall 2: Declaring A/B test winners too early
Conversion-rate-based tests need more time to reach statistical significance than open-rate tests. Most teams declare winners in 4–8 hours based on open rate. For conversion rate tests, 48–72 hours is typically the minimum runtime, and low-volume programs may need longer. Declaring early produces false winners and compounds the mismeasurement problem.
Pitfall 3: Tracking “curiosity clicks” as qualified engagement
The audio equipment case study illustrates this directly — the no-price email version drove more clicks because recipients clicked to find out the price. Without conversion tracking, the team called it a winner. Curiosity clicks and buying-intent clicks look identical in CTR data. They are not identical in conversion data.
Pitfall 4: Ignoring the Graymail Effect on deliverability
Continuing to mail unengaged subscribers does not just waste sends — it actively harms deliverability. Per the research report, chronic non-engagement shifts domain reputation, eventually causing high-quality emails to land in spam. Teams that do not implement sunset policies are trading short-term list size for long-term deliverability damage.
Pitfall 5: Failing to define a conversion before running the campaign
You cannot retroactively add conversion tracking after a campaign sends. If you do not have conversion events instrumented before the campaign goes out, you have no conversion data to analyze. Set up tracking before every campaign, not after.
Expert Tips
1. Run a “conversion rate retrospective” on your last 20 A/B tests. Pull every declared winner from the past year and check whether it also won on conversion rate or RPE. Document the discrepancies. This internal case study is your most powerful tool for getting stakeholder buy-in to change reporting standards.
2. Use click-to-conversion rate, not just CTR. CTR measures everyone who clicked. Click-to-conversion rate measures the percentage of clickers who completed the desired action. This metric reveals whether your landing page and offer are aligned with the email’s promise — a high CTR with low click-to-conversion rate means the email is generating interest that the landing page fails to capture.
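The curiosity-click pattern from the pricing example shows up immediately when both metrics sit side by side. A sketch with hypothetical variant data:

```python
# Hypothetical A/B variants: B "wins" on CTR, but a much smaller share of its
# clickers convert, the signature of curiosity clicks over buying intent.
variants = {
    "A": {"delivered": 20000, "clicks": 400, "conversions": 60},
    "B": {"delivered": 20000, "clicks": 700, "conversions": 42},
}

metrics = {
    name: {
        "ctr": v["clicks"] / v["delivered"] * 100,
        "click_to_conv": v["conversions"] / v["clicks"] * 100,
    }
    for name, v in variants.items()
}
# A: CTR 2.0%, click-to-conversion 15.0%
# B: CTR 3.5%, click-to-conversion  6.0%
```

Variant B would win a CTR-judged test while delivering fewer total conversions (42 vs. 60) from the same delivered volume.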
3. Track RPE by segment, not just by campaign. Revenue Per Email at the campaign level tells you whether the campaign worked. RPE by segment tells you which subscriber groups are most valuable. Use this to inform your segmentation and list growth strategy — not all subscribers have equal revenue potential.
4. Implement Google Postmaster Tools monitoring on a weekly cadence. The research report documents that spam complaint rates above 0.3% trigger filtering. Postmaster Tools shows you your complaint rate by sending domain. Set a weekly review cadence and alert thresholds at 0.08% — giving you buffer to investigate before hitting the 0.1% safety threshold.
5. Treat the Gmail Promotions tab as a feature, not a penalty. Per the research report, a Sinch Mailjet analysis notes that B2C promotional emails in the Promotions tab are less likely to be marked as spam because that is where users expect to find them. Stop trying to “trick” Gmail into placing promotional emails in Primary — focus on conversion rate from wherever the email lands.
FAQ
Q: Should we completely stop tracking open rate?
No — open rate remains a useful diagnostic metric for identifying delivery problems, testing send times, and evaluating subject line resonance patterns over time. The problem is using it as a primary KPI for campaign success evaluation or A/B test winner declaration. Per Jennings’ data, it correctly identifies winners only 20% of the time. Keep it as a secondary diagnostic; remove it from primary KPI status.
Q: How do I get leadership to stop asking about open rates?
Show them the data. The Martech.org article provides the specific percentages: open rate misleads or provides no insight 80% of the time; CTR is correct 7% of the time. Run one retrospective analysis on your own program to find a concrete example where the open-rate “winner” underperformed on revenue. Internal evidence is more persuasive than external data.
Q: Does this approach work for non-revenue conversion goals like event registrations?
Yes. The formula adapts directly: Conversion Rate = Registrations ÷ (Emails Sent − Bounces) × 100. Any defined action can be a conversion event — the key is that it must represent actual business value, not just email interaction. Per Jennings’ definition, valid conversions include purchases, demo requests, webinar registrations, lead gen form completions, app downloads, and subscription renewals.
Q: Our ESP doesn’t support revenue tracking. What do we do?
Use UTM parameters consistently on all email links and track conversions in Google Analytics 4 instead. GA4 can track revenue if you have ecommerce tracking enabled. You then pull email-attributed revenue by filtering sessions with utm_source=email in your GA4 reports. This is a manual but functional workaround for ESPs with limited native revenue attribution.
Q: How do we handle the Apple MPP inflation in our historical open rate data?
Segment your list by email client (most ESPs provide this data). Calculate the proportion of Apple Mail users in your subscriber base. Apply appropriate skepticism to open rate trend lines that span the MPP rollout period (September 2021 onward). For benchmarking purposes, treat post-MPP and pre-MPP open rates as different metrics — not a continuous series. Focus trend analysis on non-Apple clients for more reliable engagement signals.
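If your ESP export includes an email-client field, the non-Apple open rate is quick to recompute. A hedged sketch; the field names and client labels are hypothetical and will differ by ESP:

```python
# Toy recipient-level export with per-client open flags.
recipients = [
    {"client": "apple_mail", "opened": True},   # MPP may have fired this pixel
    {"client": "apple_mail", "opened": True},
    {"client": "gmail_web",  "opened": True},
    {"client": "gmail_web",  "opened": False},
    {"client": "outlook",    "opened": False},
]

raw_open_rate = sum(r["opened"] for r in recipients) / len(recipients) * 100

# Recompute on non-Apple clients only, where the open signal is more trustworthy.
non_apple = [r for r in recipients if r["client"] != "apple_mail"]
adjusted_open_rate = sum(r["opened"] for r in non_apple) / len(non_apple) * 100
```

In this toy list the headline open rate is 60%, but only a third of non-Apple recipients opened, illustrating how heavily MPP traffic can skew the aggregate.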
Bottom Line
Open rate and click-through rate are not performance metrics — they are interaction metrics. Per Jeanne Jennings’ multi-year testing data published on Martech.org, open rate correctly predicts email program winners only 20% of the time; CTR manages just 7%. The NotebookLM research report adds the technical dimension: Apple’s Mail Privacy Protection further corrupts open rate data by pre-fetching tracking pixels across millions of Apple Mail users. The fix is straightforward: switch primary KPIs to Conversion Rate and Revenue Per Email, implement proper UTM and conversion tracking before campaigns send, and redesign A/B tests to declare winners on business outcomes rather than email interactions. Teams that make this transition stop optimizing for curiosity and start optimizing for revenue — and that difference compounds over time.