ROAS has been the default performance marketing scorecard for nearly a decade, but mounting evidence shows it is actively misleading the teams that depend on it most. According to Martech.org’s May 2026 analysis by Jessica Hawthorne-Castro, organizations that optimize exclusively for return on ad spend are systematically undervaluing brand-building, over-investing in bottom-funnel tactics, and ignoring the customer relationships that actually drive long-term revenue. If your media planning still starts and ends with ROAS, you are making budget decisions on a metric designed to make your ad platform look good — not your business.
What Happened
The argument is not new, but it is landing differently in 2026. As reported by Martech.org on May 6, the performance marketing community is being forced to confront a structural problem baked into how it has historically measured success. Author Jessica Hawthorne-Castro, writing from a practitioner’s vantage point, makes the case that ROAS has outlived its usefulness as a primary KPI — and that clinging to it produces a predictable set of organizational and financial failures that appear not in campaign dashboards but in the actual business results.
The core problem is what the article calls an “efficiency versus effectiveness gap.” High-ROAS campaigns frequently look excellent in reporting dashboards while delivering almost nothing in terms of incremental business growth. The most common culprit is retargeting. Retargeting campaigns routinely post strong ROAS numbers precisely because they target users who were already likely to convert. The ad does not drive the purchase — it simply appears in the customer’s journey before the purchase completes, and the attribution model assigns it full or disproportionate credit. As Martech.org frames it, this is the fundamental flaw in ROAS-centric thinking: the metric captures the efficiency of spend, not its effectiveness in generating new demand or moving undecided customers toward a purchase.
Three structural biases emerge from ROAS over-reliance, according to Hawthorne-Castro. First, budget flows disproportionately to bottom-funnel tactics — retargeting, branded search, and conversion-optimized campaigns — because those reliably produce numbers that look strong in ROAS reports. This is not a deliberate strategic choice; it is an artifact of how the metric is constructed. Second, brand-building initiatives get chronically underfunded because their impact does not materialize within short attribution windows. A video campaign that builds awareness and purchase intent this quarter may drive significant conversions next quarter, but under a seven-day last-click model, ROAS records none of that contribution. Third, optimizing at the individual channel level creates reporting silos that obscure how channels actually function together. A customer who encounters a YouTube ad, reads an email, and then converts via branded search looks like a search and email win — while the upstream YouTube exposure that created the purchase intent receives zero credit.
The Martech.org analysis identifies four metrics that should sit alongside — or in key cases replace — ROAS as primary performance indicators: Customer Acquisition Cost (CAC), Customer Lifetime Value (LTV), incrementality, and retention and loyalty measurement. None of these concepts is novel to the industry. The argument is sharper than a call to adopt new ideas: the marketing profession has nominally accepted these metrics for years without operationalizing them into budget allocation decisions. Closing the gap between knowing that LTV matters and actually reallocating media spend on LTV-to-CAC ratios is the organizational capability most performance teams have not yet built.
Hawthorne-Castro also calls for two infrastructure shifts. The first is adopting holistic attribution through Media Mix Modeling (MMM) and Multi-Touch Attribution (MTA), which provide a portfolio-level view of how channels interact rather than evaluating each channel in isolation against its own attributed ROAS. The second is investing in first-party data systems capable of supporting incrementality testing — running controlled experiments to measure what marketing actually causes, not just what it correlates with. Both approaches have become more accessible in recent years due to open-source frameworks, improved tooling, and lower computing costs, which weakens the traditional objection that these methods are too expensive or technically complex for most teams.
The timing of this argument also reflects a specific technical context that makes it more urgent than it was in prior cycles. Privacy-driven signal loss — iOS App Tracking Transparency, the ongoing deprecation of third-party cookie support across browsers, and tightening enforcement under GDPR and expanding U.S. state privacy laws — has made last-click digital attribution increasingly unreliable as a decision-making foundation. Marketers are not just optimizing for the wrong question. In many cases they are also measuring it with degraded data. That compounding problem — wrong metric, inaccurate measurement — is what elevates the shift to alternative frameworks from a best practice to an operational necessity.
Why This Matters
Every performance marketer reading this has been in the meeting where leadership asks why customer acquisition is stalling despite what look like strong campaign metrics. ROAS as a primary metric does not just fail to capture true performance — it actively incentivizes behaviors that work against the business goals it is supposed to serve.
For agencies, ROAS creates a structural misalignment between what serves the metric and what serves the client’s business. If agency performance is evaluated on ROAS, the rational play is to concentrate client spend in tactics that reliably produce strong ROAS numbers: retargeting sequences, branded keyword campaigns, and bottom-funnel conversion optimization. This makes the reports look good and secures the retainer. It does not make the client’s business grow. Agencies that have shifted toward outcome-based contracts — structured around new customer revenue, cohort LTV performance, or agreed business metrics — operate with fundamentally different incentive alignment. That structural shift in how agency contracts are written is overdue as an industry standard, and it will accelerate as clients become more sophisticated about what their ROAS reports are and are not actually telling them.
For in-house performance teams, the problem is as much political as technical. ROAS is the metric that finance departments and executives have learned to recognize, so it becomes the shared vocabulary between marketing and the CFO’s office. When marketing leads with ROAS, the implicit message is “our ad spend is efficient.” But efficiency is not growth, and efficiency is not profitability. A team that has built its internal credibility and budget defense around ROAS reporting faces a genuine organizational challenge when it argues for brand investment, incrementality testing infrastructure, or a shift to LTV-weighted optimization — because all of those moves initially produce lower ROAS numbers before they produce better business outcomes. The measurement framework becomes an organizational trap that prevents capable teams from making strategically correct decisions.
For direct-to-consumer brands and e-commerce operators, the stakes are the most immediate. These businesses live on paid acquisition economics, and ROAS is deeply embedded in their optimization feedback loops. The ad platforms themselves — Meta’s Advantage+, Google’s Performance Max, TikTok’s Smart Performance Campaigns — are increasingly automated and built to find users who will convert at the lowest cost. That means they are optimized to find people who would have bought anyway, not people who need to be persuaded. The result is that performance budgets concentrate in the highest-efficiency-looking segments while the actual growth levers — new audience penetration, category expansion, brand storytelling, market education — get treated as discretionary costs that get cut in planning and eliminated when conditions tighten.
Signal degradation compounds all of this. Measured has documented how last-click attribution becomes more distorted as tracking fidelity declines from privacy changes — marketers are not just optimizing for the wrong metric, they are frequently measuring it with data that is itself increasingly inaccurate due to event loss from client-side tracking restrictions. That double degradation makes the case for moving to measurement frameworks that do not depend on cookie-based tracking more urgent with every successive privacy enforcement cycle.
The organizational alignment dimension cited in the Martech.org piece may be the most consequential of all. Marketing organizations need to communicate in C-suite language — revenue, profitability, market share — and ROAS is a marketing-department dialect, not the language of the business. When a CMO presents ROAS to a CFO, the CFO’s next question is almost always “what does that translate to in revenue terms?” Teams that can answer that question fluently — because they track CAC, LTV, and payback periods as first-class metrics alongside media efficiency data — are the ones that retain budget authority when economic conditions tighten, and the ones included in strategic planning conversations rather than summoned for periodic spend reviews.
The Data
The core measurement problem is most clearly illustrated by examining how different attribution frameworks treat the same customer journey. Consider a single customer who converts after touching four channels over thirty days.
| Attribution Model | YouTube (Awareness) | Facebook (Consideration) | Email (Re-engagement) | Branded Search (Conversion) | Actual Incremental Driver |
|---|---|---|---|---|---|
| Last-click | 0% | 0% | 0% | 100% | Unknown |
| First-click | 100% | 0% | 0% | 0% | Unknown |
| Linear | 25% | 25% | 25% | 25% | Unknown |
| Time-decay | ~5% | ~10% | ~20% | ~65% | Unknown |
| Data-driven (platform-modeled) | Modeled | Modeled | Modeled | Modeled | Unknown |
| Incrementality holdout test | N/A | N/A | N/A | N/A | Measured via control group delta |
| Media Mix Modeling | Portfolio share | Portfolio share | Portfolio share | Portfolio share | Estimated via econometric regression |
The critical observation across every rule-based model — last-click, first-click, linear, time-decay — is that they assign credit without measuring causation. They record where touchpoints appeared in a customer journey; they do not identify which touchpoint caused the purchase decision. Only holdout-based incrementality testing and MMM attempt to isolate causal contribution. That distinction, which is at the center of the argument in the Martech.org analysis, is why ROAS calculated from platform-reported attribution is structurally unreliable as a capital allocation signal.
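To make the mechanics concrete, here is a minimal sketch of how the rule-based models in the table distribute credit across the same journey. The journey timestamps and the seven-day half-life are illustrative assumptions, not values from the Martech.org piece.

```python
from datetime import datetime

# Illustrative four-touch journey ending in a conversion (dates assumed).
journey = [
    ("youtube", datetime(2026, 4, 1)),
    ("facebook", datetime(2026, 4, 10)),
    ("email", datetime(2026, 4, 22)),
    ("branded_search", datetime(2026, 4, 30)),  # final touch before purchase
]

def last_click(touches):
    return {ch: (1.0 if i == len(touches) - 1 else 0.0)
            for i, (ch, _) in enumerate(touches)}

def first_click(touches):
    return {ch: (1.0 if i == 0 else 0.0) for i, (ch, _) in enumerate(touches)}

def linear(touches):
    return {ch: 1.0 / len(touches) for ch, _ in touches}

def time_decay(touches, half_life_days=7.0):
    # Credit halves for every half_life_days between touch and conversion.
    conversion_time = touches[-1][1]
    raw = {ch: 0.5 ** ((conversion_time - t).days / half_life_days)
           for ch, t in touches}
    total = sum(raw.values())
    return {ch: w / total for ch, w in raw.items()}

for model in (last_click, first_click, linear, time_decay):
    print(model.__name__, {ch: round(w, 2) for ch, w in model(journey).items()})
```

Every model allocates exactly 100% of the credit by construction. None of them can output “this customer would have converted anyway,” which is precisely the question holdout testing exists to answer.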
The table below maps the metrics recommended in the Martech.org piece against the specific business question each framework is built to answer.
| Metric | What It Measures | Time Horizon | Business Question Answered |
|---|---|---|---|
| ROAS | Revenue per ad dollar (platform-attributed) | Campaign cycle — days to weeks | “Is our ad spend efficient?” |
| CAC | All-in cost to acquire one new customer | Monthly / quarterly | “How much does growth cost?” |
| LTV | Total revenue a customer generates over their relationship | 12–36 months | “Are we acquiring quality customers?” |
| LTV:CAC ratio | Return on customer acquisition investment | 12–36 months | “Is our acquisition model profitable at scale?” |
| Incrementality | Revenue caused by marketing, measured via holdout delta | Per campaign / per channel | “What would we lose if we cut this channel?” |
| Payback period | Months to recover CAC from gross margin contribution | Monthly rolling | “How fast does acquisition pay back?” |
| Retention rate | Percentage of acquired customers who repurchase | Monthly / quarterly | “Are we building a durable customer base?” |
As Measured puts it directly: “incrementality is the only way to prove to the CFO: What would the company lose if we did no marketing?” That framing — connecting spend to what would concretely be lost without it — is the language shift that makes marketing measurement legible to finance and leadership. Moving from ROAS to incrementality is not simply a methodology upgrade. It is a change in the question you are asking the data to answer, and that question change has downstream consequences for every budget, planning, and accountability conversation in the organization.
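To ground the table in arithmetic, here is a minimal worked example of the CAC, LTV:CAC, and payback calculations. All inputs are assumed for illustration, and conventions vary; this version applies gross margin to LTV before comparing it to CAC.

```python
# Illustrative unit economics; every input below is an assumed number.
media_spend = 250_000       # quarterly paid media spend ($)
other_acq_costs = 50_000    # agency fees, creative, tooling ($)
new_customers = 2_000       # net-new customers acquired in the quarter

cac = (media_spend + other_acq_costs) / new_customers   # all-in CAC: $150

ltv_12m = 420.0             # cohort-measured 12-month revenue per customer
gross_margin = 0.60         # contribution margin on that revenue

ltv_cac = (ltv_12m * gross_margin) / cac                # 1.68
monthly_margin = ltv_12m * gross_margin / 12            # $21/month
payback_months = cac / monthly_margin                   # ~7.1 months

print(f"CAC: ${cac:,.0f}")
print(f"LTV:CAC (margin-adjusted): {ltv_cac:.2f}")
print(f"Payback period: {payback_months:.1f} months")
```

The same few lines of arithmetic answer three of the business questions in the table above, which is exactly why these metrics translate to a finance audience in a way channel ROAS does not.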
Real-World Use Cases
Use Case 1: The Retargeting Audit
Scenario: A mid-market e-commerce brand spending $400K per month in paid media has allocated approximately a third of that budget to retargeting across Meta and Google. Their blended ROAS looks solid, but new customer acquisition has stalled and year-over-year revenue growth is flat. Leadership is puzzled because the campaign dashboards show healthy efficiency metrics.
Implementation: The marketing team designs a geo-based holdout test. They select six geographically isolated markets that have historically performed comparably in conversion rates and revenue per session. Three markets become the test group where all retargeting is suspended for four weeks, while three serve as controls with retargeting running normally. The team tracks conversion rates, direct-to-site traffic, branded search volume, and new-versus-returning customer mix in both groups. They also run a post-purchase survey asking “how did you hear about us?” to understand organic versus paid influence on completed purchases.
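A sketch of the read-out step follows, assuming daily sessions and conversions are logged per market with a group label and a period label. The file name, column names, and period labels are hypothetical, and this is a simple difference-in-differences read rather than a full causal model.

```python
import pandas as pd

# Hypothetical daily results: market, group ("test" = retargeting suspended,
# "control" = retargeting running), period ("pre_period" or "test_window"),
# sessions, conversions.
df = pd.read_csv("geo_holdout_results.csv")

rates = (
    df.groupby(["group", "period"])[["conversions", "sessions"]].sum()
      .assign(cvr=lambda x: x["conversions"] / x["sessions"])["cvr"]
      .unstack("period")
)

# Retargeting's contribution = how much more the control group moved from
# its own baseline than the test group moved from its baseline.
lift = (
    (rates.loc["control", "test_window"] - rates.loc["control", "pre_period"])
    - (rates.loc["test", "test_window"] - rates.loc["test", "pre_period"])
)
print(f"Estimated conversion-rate contribution of retargeting: {lift:.4%}")
```

If that number lands near zero, the retargeting spend was harvesting conversions that would have happened anyway.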
Expected Outcome: The holdout test produces a direct answer to the question Measured identifies as the central one for CFO credibility — what would the business actually lose without this spend? If the test reveals minimal conversion rate differences between markets with and without retargeting, the team has evidence — not assumption — to reallocate that budget toward prospecting campaigns targeting net-new audiences. Near-term ROAS will decline as the mix shifts toward top-funnel investment with longer attribution cycles. New customer acquisition volume is the metric to monitor as the reallocation takes effect.
Use Case 2: LTV-Weighted Campaign Optimization for B2B SaaS
Scenario: A B2B SaaS company runs paid acquisition across LinkedIn, Google Search, and Meta. The marketing team reports to leadership on cost per acquisition. LinkedIn looks expensive compared to Google and Meta, and there is pressure to reduce or eliminate LinkedIn spend heading into the next planning cycle.
Implementation: Before cutting anything, the analytics team connects CRM revenue data to acquisition source attribution for all customers acquired in the prior 18 months. They calculate average contract value, expansion revenue, and renewal rates by acquisition channel, producing a channel-level LTV comparison. This data already exists in most B2B SaaS organizations — it simply has not been connected in this specific way. The team then rebuilds its CPA targets to reflect LTV-weighted efficiency: a higher allowable CPA for a channel that demonstrably delivers higher-value customers is not an inefficiency, it is the rational allocation decision.
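A sketch of that join and target rebuild follows. The CRM export schema and the 3x planning multiple are assumptions for illustration.

```python
import pandas as pd

# Hypothetical CRM export: one row per customer acquired in the last 18
# months, with acquisition channel, contract value, and expansion revenue.
crm = pd.read_csv("customers_18mo.csv")  # channel, acv, expansion_rev

ltv = (
    crm.assign(revenue=crm["acv"] + crm["expansion_rev"])
       .groupby("channel")["revenue"].mean()
       .rename("avg_ltv")
)

# Set each channel's allowable CPA so every channel targets the same
# LTV-to-CPA multiple (3x is an assumed planning target, not a benchmark).
target_multiple = 3.0
allowable_cpa = (ltv / target_multiple).rename("allowable_cpa")

print(pd.concat([ltv, allowable_cpa], axis=1))
```

A channel delivering twice the average LTV earns twice the allowable CPA; a flat CPA threshold across channels would have scored that as inefficiency.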
Expected Outcome: The LTV analysis reframes the budget decision entirely. If LinkedIn customers demonstrate substantially higher 12-month LTV because they match the ideal customer profile more closely and churn at lower rates, then the flat CPA comparison was misleading the allocation all along. Budget that was slated for cuts instead gets defended — and potentially scaled — with a unit economics argument that finance can evaluate on its own terms: revenue per acquisition dollar at 12-month LTV. This is the kind of business-outcome framing that Martech.org identifies as essential for marketing to move from cost-center positioning to growth-investment positioning within the organization.
Use Case 3: Media Mix Modeling for Omnichannel Budget Allocation
Scenario: A national retail chain runs paid media across TV, connected TV and streaming, paid search, paid social, and out-of-home advertising. Each channel reports to the media agency in its own attribution silo. Branded search shows the highest ROAS; TV and OOH show the weakest measurable ROAS and are on the budget chopping block heading into annual planning.
Implementation: The brand commissions an MMM study covering 18-24 months of historical weekly spend and revenue data, supplemented by external variables including seasonal indices, promotional calendars, and competitor spend proxies where available. Critically, the study is calibrated with geo-level holdout test data from digital channels where the brand has previously run experiments, using those real-world lift signals to ground the econometric model in actual measured causation rather than pure regression. As Measured makes explicit, “your Media Mix Modeling is only as strong as the real-world lift signals you feed into it.” The model is built specifically to capture halo effects — the mechanism by which TV and OOH exposure drives downstream branded search volume, which then appears in digital attribution as high-ROAS branded search conversions that search gets full credit for.
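At its core, an MMM is a regression of revenue on transformed spend series. The heavily simplified sketch below uses a geometric adstock transform and ordinary least squares via statsmodels; production-grade studies add saturation curves, seasonality, Bayesian priors, and the holdout-test calibration described above. The data layout and decay rate are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly data: one row per week, revenue plus spend by channel.
df = pd.read_csv("weekly_spend_revenue.csv")  # week, revenue, tv, ooh, search, social

def adstock(spend, decay=0.5):
    """Geometric adstock: each week's effect carries over at rate `decay`."""
    out, carry = np.zeros(len(spend)), 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

channels = ["tv", "ooh", "search", "social"]
X = sm.add_constant(pd.DataFrame({ch: adstock(df[ch].to_numpy()) for ch in channels}))

model = sm.OLS(df["revenue"], X).fit()
print(model.params)  # modeled revenue per adstocked dollar, by channel
```

If TV’s coefficient is material, a share of the branded-search conversions that last-click credits to search actually belongs to TV, which is the halo effect described above.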
Expected Outcome: The MMM surfaces the contribution of offline channels to conversions that last-click models attributed entirely to digital channels. Branded search ROAS looks elevated precisely because TV and OOH are creating the demand that branded search harvests — but last-click attribution gives search all the credit for the conversion. With the portfolio-level view, the allocation decision changes: rather than cutting TV and OOH based on their isolated ROAS figures, the team optimizes the full media mix based on modeled incremental revenue per dollar across all channels combined. This is the holistic attribution methodology that Martech.org identifies as one of the two core infrastructure investments performance marketers need to make.
Use Case 4: First-Party Data Infrastructure for Signal Recovery
Scenario: A DTC beauty brand has watched platform-reported ROAS decline steadily over two years. The team has attributed this primarily to rising media costs and competitive pressure, but has not examined whether measurement degradation from iOS ATT and browser tracking restrictions is making the reported numbers less accurate and the platform optimization algorithms less effective at finding the right audiences.
Implementation: The team begins with a measurement audit, comparing server-side Conversions API data against pixel-reported events to quantify the event gap across Meta, Google, and TikTok. This surfaces the scale of signal loss from client-side tracking limitations. They then implement full Conversions API integration across all platforms to restore server-side event signal. Simultaneously, they build a first-party audience strategy — capturing email through loyalty enrollment, post-purchase flows, and on-site opt-ins — and use that owned data to build modeled lookalike audiences for prospecting rather than relying exclusively on pixel-based audience signals. The team also establishes a quarterly incrementality testing cadence, running holdout experiments on their three main campaign types — prospecting, retargeting, and retention — to build a channel-by-channel library of measured lift benchmarks that is independent of platform-reported attribution.
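A sketch of the event-gap audit step follows, assuming daily purchase-event counts have been exported from both collection paths and that the deduplicated server-side counts represent the fuller picture. File names and columns are hypothetical.

```python
import pandas as pd

# Hypothetical daily purchase counts from both collection paths.
pixel = pd.read_csv("pixel_events.csv")   # date, platform, purchases
server = pd.read_csv("capi_events.csv")   # date, platform, purchases

merged = pixel.merge(server, on=["date", "platform"],
                     suffixes=("_pixel", "_server"))

totals = merged.groupby("platform")[["purchases_pixel", "purchases_server"]].sum()
totals["pixel_gap"] = 1 - totals["purchases_pixel"] / totals["purchases_server"]
print(totals)  # share of purchase events the pixel alone never sees
```

Whatever the gap turns out to be, it quantifies how much conversion signal client-side tracking is losing, which is the number that justifies the Conversions API investment.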
Expected Outcome: Conversions API implementation improves the quality of the event signal feeding platform optimization algorithms, which perform better as their training data becomes more complete and accurate. First-party modeled audiences provide a tracking-independent prospecting foundation that does not degrade with each new privacy restriction. The quarterly incrementality cadence gives the team a measurement system immune to platform attribution model updates — budget decisions are grounded in holdout-measured lift rather than numbers that shift whenever a platform revises its attribution methodology.
Use Case 5: C-Suite Reporting Rebuild for a Growth-Stage Brand
Scenario: A growth-stage consumer brand’s CMO presents quarterly results to the board using channel ROAS, click-through rates, and impression share data. The board asks why the business is not growing faster despite what the CMO is characterizing as strong marketing performance. The CMO cannot bridge the gap because the metrics being reported do not connect to the business outcomes the board tracks.
Implementation: The marketing team rebuilds executive reporting from scratch. The new board report centers on four business-level metrics: CAC by acquisition cohort, LTV:CAC ratio at 12 months, payback period in months, and net new customers added in the period. ROAS is retired from the board deck and retained as an internal operational tool for campaign managers to use in day-to-day optimization — appropriate for that role, but no longer misapplied as a proxy for overall business performance. The team also builds a channel-level incrementality summary showing which channels have passed holdout testing, what their measured lift contribution is, and how that evidence base supports the current budget allocation decisions. This gives the board visibility into the methodology behind the numbers, not just the results.
Expected Outcome: Board conversations shift from “why is our Meta ROAS down?” to “what is our LTV:CAC for customers acquired this quarter, and how does that compare to our target payback period?” That framing shift, which Martech.org identifies as a fundamental organizational requirement, positions marketing as a capital allocation function with measurable return on investment rather than a cost center with opaque efficiency metrics. The practical consequence is that marketing gets evaluated and funded more like any other growth investment the business makes — which is the environment where smart marketing teams do their best work.
The Bigger Picture
The push to move beyond ROAS is happening at the intersection of three converging forces: measurement degradation from privacy changes, the rapid automation of platform media buying, and sustained C-suite scrutiny of marketing spend.
Measurement degradation is the most immediate technical driver. iOS App Tracking Transparency, the ongoing deprecation of third-party cookie support across browsers and web environments, and expanding privacy legislation under GDPR, UK data law, and multiple U.S. state frameworks have progressively eroded the signal quality that last-click attribution depends on. Platform-reported ROAS is increasingly built on modeled attribution fills — educated estimates — rather than direct event tracking. The metric that was already asking the wrong question is now also being calculated with less accurate inputs. Teams that continue treating platform ROAS as objective measurement ground truth are working with a degrading instrument, even when that degradation is not visible in the dashboard itself.
Platform automation has fundamentally changed the practical scope of a performance marketer’s work. Meta’s Advantage+, Google’s Performance Max, and TikTok’s Smart Performance Campaigns have absorbed substantial portions of the targeting, bidding, and creative testing decisions that used to occupy performance teams directly. The marketer’s job has shifted from managing individual ad placements and audience segments to setting the right campaign objectives, providing clean measurement signals, and evaluating portfolio-level performance. In that context, the quality of the signal you feed the algorithm matters enormously. If you are directing Meta’s algorithm to optimize for purchase events while feeding it degraded pixel data, you are training a powerful automated system on flawed inputs. The brands that perform best in an increasingly automated platform environment are those with the cleanest first-party data infrastructure and the most rigorous measurement frameworks — not those with the most sophisticated manual optimization tactics.
C-suite accountability has been a persistent structural force since the 2022-2023 period when growth-at-all-costs spending came under sustained financial pressure across consumer and technology sectors. Marketing teams that could not connect spend to revenue in terms leadership understood had their budgets cut. Teams that survived and maintained influence were those who could demonstrate business outcomes — not just efficiency metrics. That experience has permanently reshaped what effective marketing leadership looks like, and it has made the organizational alignment recommendation in the Martech.org piece — that marketing should speak in revenue, profitability, and market share rather than in ROAS and CPM — a practical operational necessity rather than an aspirational posture.
This is also part of a longer-term industry trend toward what practitioners call “causal marketing measurement” — the recognition that correlation-based attribution frameworks are insufficient foundations for capital allocation decisions. Measured describes the methodological frontier as “Causal MMM calibrated with incrementality tests,” combining econometric modeling of historical spend patterns with controlled real-world experiments to produce measurement that explains what caused a result rather than what simply appeared near one. The tools to implement this approach — open-source frameworks, accessible incrementality platforms, first-party data infrastructure — are more mature and available now than at any previous point. The question for most marketing organizations is no longer whether this approach is feasible. It is how quickly they can build the data foundations and organizational capabilities to support it in practice.
What Smart Marketers Should Do Now
1. Run a holdout test on your retargeting spend before the next budget planning cycle.
If you have retargeting or remarketing spend, run a geo-based holdout before your next quarterly or annual planning session. Suspend retargeting in two or three matched geographic markets for three to four weeks and measure conversion rates, branded search volume, direct traffic, and new-versus-returning customer mix against control markets running normally. This produces real-world incrementality data — evidence of what you actually lose when the spend is absent — rather than another month of platform-attributed ROAS that is structurally biased toward credit-taking. The methodology does not need to be perfect to be decision-useful. A reasonably designed holdout with carefully matched markets will tell you far more than continued ROAS monitoring alone, and it gives you something concrete to bring to budget discussions with leadership.
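One common way to pick matched markets is to correlate pre-period daily conversion series between candidate geos, as in the sketch below; the data layout is assumed (one column per market, one row per day).

```python
import pandas as pd

# Hypothetical pre-period daily conversions, wide format: date index,
# one column per geographic market.
pre = pd.read_csv("pre_period_by_market.csv", index_col="date")

candidate = "market_A"  # geo where retargeting will be suspended

# Rank the remaining markets by how closely they track the candidate.
matches = pre.corr()[candidate].drop(candidate).sort_values(ascending=False)
print(matches.head(3))  # the top-correlated markets make natural controls
```

Correlation over the pre-period is a pragmatic starting point; more rigorous designs use synthetic-control weighting, but the simple version is usually enough to avoid an obviously mismatched control.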
2. Pull your 12-month LTV data segmented by acquisition source and build LTV-weighted CPA targets.
Most marketing teams have the data needed to do this analysis in their existing analytics and CRM infrastructure — it simply has not been connected in this specific configuration. Build a cohort analysis showing 12-month LTV for customers acquired through each major channel and campaign type. If you find meaningful LTV variation by source — which is the norm, not the exception — you have the evidence base to build LTV-weighted CPA targets rather than applying flat CPA thresholds across channels that are actually delivering very different quality customers. This analysis has changed budget allocation for every team I have seen do it rigorously. Start imperfect. The goal is to stop making allocation decisions that are completely LTV-blind, not to build a perfect predictive model on the first iteration.
3. Implement server-side event tracking across your major paid channels.
Conversions API for Meta, enhanced conversions for Google, and equivalent server-side implementations for TikTok and other platforms are now baseline infrastructure requirements for any serious performance marketing operation. Client-side pixel tracking alone is an inadequate signal foundation given the cumulative impact of browser privacy restrictions, iOS ATT enforcement, and consent management requirements that are in place in 2026. Server-side event tracking restores meaningful portions of the event signal that client-side restrictions degrade, and it improves the accuracy of automated optimization algorithms that are making an increasing share of campaign decisions. Clean signal in means better optimization out — this is the most immediately actionable and highest-leverage measurement infrastructure investment most teams can make right now.
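For Meta’s Conversions API specifically, a server-side purchase event has roughly the shape sketched below. The API version, pixel ID, token, and event values are placeholders, and field names should be verified against current Meta documentation before use.

```python
import hashlib
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def hash_email(email: str) -> str:
    # Meta expects customer identifiers normalized and SHA-256 hashed.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "event_id": "order-12345",  # shared with the pixel event for deduplication
    "user_data": {"em": [hash_email("customer@example.com")]},
    "custom_data": {"currency": "USD", "value": 129.99},
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json={"data": [event]},
    timeout=10,
)
print(resp.status_code, resp.json())
```

The `event_id` shared between the pixel and server events is what lets the platform deduplicate the two streams rather than double-count conversions.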
4. Audit your data infrastructure to understand your readiness for Media Mix Modeling.
A rigorous MMM study requires roughly 18-24 months of historical data on weekly spend by channel, revenue by period, promotional activity, and external variables including seasonal indices and macroeconomic context. Before commissioning a study or investing in an MMM platform — whether open-source options like Google’s Meridian or Meta’s Robyn, or commercial platforms — audit what you actually have available. Is spend data organized and complete by channel and week? Is revenue data separable from promotional lifts and seasonal effects? Do you have any existing holdout or geo-experiment data that could calibrate the model? Understanding your data gaps now tells you what infrastructure needs to be built before MMM investment produces reliable outputs, and it surfaces data hygiene improvements that pay off regardless of whether MMM is your immediate next step.
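A lightweight version of that audit can be automated before engaging a vendor or an open-source framework. The sketch below assumes a wide weekly extract, sorted ascending with one row per week; file and column names are illustrative.

```python
import pandas as pd

# Assumed layout: one row per week, sorted ascending, spend by channel
# plus revenue, and a promo flag.
df = pd.read_csv("weekly_spend_revenue.csv", parse_dates=["week"])

checks = {
    # MMM guidance commonly calls for roughly 18-24 months (~78+ weeks).
    "history >= 78 weeks": df["week"].nunique() >= 78,
    "no missing weeks": df["week"].diff().dropna().eq(pd.Timedelta(weeks=1)).all(),
    "no null spend/revenue": df.drop(columns="week").notna().all().all(),
    "promo flag present": "promo_active" in df.columns,  # assumed column name
}

for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```

Any failing check here is a data-hygiene project worth doing regardless of whether the MMM study proceeds this quarter.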
5. Rebuild your executive reporting framework around business metrics, not marketing-specific metrics.
Stop leading board and C-suite presentations with ROAS, CTR, impression share, and cost-per-click. Lead with CAC by cohort, LTV:CAC at 12 months, payback period in months, and net new customers added in the period. Keep ROAS as an internal operational metric for campaign managers — it is useful in that role — but remove it from the frame you use to justify budgets and communicate performance to leadership. As Martech.org argues, this organizational language shift is as strategically important as any measurement methodology upgrade. Marketing teams that can speak fluently in revenue per acquired customer and months-to-payback are evaluated as growth investments with a clear return on capital. Teams that lead with ROAS are perpetually in the position of translating their results for an audience that does not share their vocabulary — and that translation gap has real budget consequences that compound over time.
What to Watch Next
Incrementality-as-a-service tooling reaching mid-market scale. Over the next six to twelve months, watch for holdout testing infrastructure to become substantially more accessible to teams operating below the enterprise spend tier. Enterprise-grade incrementality testing has historically required meaningful data science resources and minimum spend thresholds that excluded mid-market advertisers from running rigorous experiments. Multiple platforms are explicitly investing in more automated holdout testing frameworks to lower this barrier. As that friction drops, incrementality-grounded budget decisions will shift from an enterprise differentiator to a mid-market expectation.
Google Meridian and Meta Robyn adoption curves expanding. Both Google and Meta have published open-source MMM frameworks — Meridian and Robyn respectively — that allow in-house analytics teams to run MMM studies without purchasing commercial platforms. Watch how adoption of these frameworks develops through Q2-Q4 2026, and specifically whether either company moves toward integrating MMM outputs directly into ad platform budget recommendation features. If that integration happens at meaningful scale, MMM transitions from a specialist analytics capability requiring dedicated resources to a standard feature of campaign management — and teams without MMM literacy will find themselves unable to interpret or act on the outputs their platforms are surfacing.
Privacy regulation implementation timelines continuing. Multiple U.S. state comprehensive privacy laws are in active implementation and enforcement phases through 2026 and 2027. Each successive enforcement cycle further degrades the signal available to client-side tracking and last-click attribution. Marketing teams that have built first-party data infrastructure and measurement frameworks that do not depend on third-party tracking signal are better positioned for each new restriction wave — not just by surviving the tracking degradation, but by operating with optimization systems that continue to function accurately while competitors’ systems degrade with each new restriction.
CFO-marketing alignment appearing in hiring requirements. Pay attention to how CMO and VP Marketing job descriptions evolve through the next two to four quarters. The specific indicator worth tracking is whether financial fluency — specifically literacy in LTV, unit economics, and payback periods — appears consistently as a stated requirement rather than a bonus qualification. When that language becomes standard in senior marketing hiring specifications, it signals that the organizational alignment shift described in the Martech.org piece has moved from a leading-edge practice to an industry expectation. That transition will be visible in hiring specifications before it shows up in industry surveys or benchmarks.
Causal inference methods in attribution tooling maturing. Watch the category of startups and platform features building attribution approaches grounded in structural causal inference and Bayesian methods. These approaches attempt to measure causal contribution without requiring the large holdout sample sizes that traditional geo-split testing demands — which would make causal measurement accessible for smaller product catalogs, regional campaigns, and shorter test windows where conventional holdout methodology breaks down. By Q3-Q4 2026, several early-stage entrants in this space are expected to have production-ready products, and the practical question will be whether their methods prove robust enough in real-world deployment to supplement or eventually replace traditional holdout-based incrementality testing at scale.
Bottom Line
ROAS is not disappearing from marketing dashboards, and it should not. It remains a useful operational signal for comparing campaign efficiency within a channel and over a defined campaign cycle. The problem is its role as the primary frame for evaluating overall marketing performance, defending budgets, and making capital allocation decisions. In that role, as Martech.org argues compellingly, it actively misleads: it rewards retargeting over prospecting, bottom-funnel investment over brand-building, and short-term efficiency over durable growth, while becoming progressively less accurate as privacy changes erode its underlying data quality. The solution is to demote ROAS from primary scorecard to one data point among several — one that sits alongside CAC, LTV, incrementality, and retention metrics that answer the questions business leaders are actually asking. The measurement infrastructure to make this shift is more accessible now than at any prior point: server-side tracking, holdout testing platforms, open-source MMM frameworks, and first-party data tooling are all within reach for teams below enterprise scale. Marketing teams that complete this transition in 2026 will be making budget decisions on causal evidence and business-outcome metrics while their competitors continue optimizing a metric that grows less meaningful with every privacy regulation that passes and every platform attribution model that gets quietly updated.