The rules of PPC measurement have fundamentally changed, and most reporting frameworks haven’t caught up. AI-driven campaign types like Performance Max and AI Max have dismantled the cause-and-effect relationship between advertiser inputs and conversion outcomes — leaving teams reporting on inputs they no longer control and metrics that increasingly mislead them. The practitioners who figure out the new framework first will hold a durable advantage over every competitor still optimizing keyword-level ROAS.
What Happened
On April 13, 2026, Brooke Osmundson, writing for Search Engine Journal, published one of the most operationally useful frameworks I’ve seen for navigating the AI auction era: a four-layer measurement stack that replaces input-focused reporting with business-outcome confirmation. It is a direct response to three platform shifts that have converged simultaneously to make traditional PPC measurement close to obsolete.
AI Max’s Intent-Based Matching Has Eliminated Keyword Control
Google’s AI Max campaign type enables ads to appear for queries never explicitly targeted by the advertiser. A retailer bidding on “trail running shoes” may now serve ads for searches like “best shoes for rocky terrain running” or “ultra marathon footwear” — queries they never wrote a keyword for, never set a bid for, and have no direct visibility into at the keyword level. Per Osmundson’s reporting, this is not a bug or an edge case; it is the architectural design. The recommendation that follows is a practical one: stop analyzing performance through individual keyword strings and start analyzing through “intent clusters” using Google Ads’ Search Terms Insights report grouped by search category. The vocabulary of PPC has changed, and the reporting structure needs to match it.
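The shift from keyword strings to intent clusters can be illustrated with a toy grouping exercise. The cluster rules below are naive substring heuristics I invented for the sketch, standing in for the category grouping that Google's Search Terms Insights report applies automatically; the point is only that reporting rolls up to the cluster, not the exact string.

```python
# Toy illustration of moving from keyword strings to intent clusters.
# CLUSTER_RULES is a hypothetical hand-written mapping, NOT how Google
# actually categorizes terms -- it just demonstrates the reporting shift.

CLUSTER_RULES = {
    "trail_running_footwear": ("trail", "rocky terrain", "ultra marathon"),
    "general_running_shoes": ("running shoes",),
}

def assign_cluster(search_term: str) -> str:
    """Return the first cluster whose marker phrase appears in the term."""
    term = search_term.lower()
    for cluster, markers in CLUSTER_RULES.items():
        if any(marker in term for marker in markers):
            return cluster
    return "unclustered"

terms = [
    "trail running shoes",
    "best shoes for rocky terrain running",
    "ultra marathon footwear",
    "running shoes for beginners",
]
for t in terms:
    print(t, "->", assign_cluster(t))
```

All three of the example queries from the article land in the same intent cluster, which is exactly the level where performance comparison still makes sense when the individual strings are no longer advertiser-chosen.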
Performance Max’s Multi-Channel Budget Distribution Is Now Reportable
Performance Max campaigns distribute budget simultaneously across Search, YouTube, Display, Discover, Gmail, and Maps. Until April 2025, advertisers had almost no visibility into where their money was actually going within these campaigns — the black box complaint was legitimate. According to the Search Engine Journal article, channel-level reporting became available in April 2025, and it changed what practitioners can actually act on. The practical implication is significant: if a Performance Max campaign is routing 35-40% of budget to YouTube but YouTube conversion rates are substantially below the campaign average, you now have the data to respond — separate branded search into a standalone campaign, refine asset groups, or adjust campaign strategy based on real channel performance rather than informed guesses.
This transparency has been reinforced by the March 2026 Performance Max feature release, which added first-party audience exclusions (allowing advertisers to exclude existing customers to focus on new acquisition), budget forecasting reports with end-of-month spend projection, expanded demographic reporting by age and gender, and network-level placement reporting accessible under the “When and where ads showed” tab. These are features that arguably should have existed from the campaign type’s launch, but their availability now means the “I can’t see where my money goes” objection to Performance Max is substantially weaker than it was eighteen months ago.
AI Conversations Have Become Ad Placements Without Mature Attribution Infrastructure
Google is actively testing shopping results embedded inside AI Mode, while ChatGPT began testing ads in January 2026 with U.S. users on Free and Go plans. The attribution challenge at this layer is genuinely unsolved: when a user reads an AI-generated product recommendation, closes the tab, does additional research over several days, and converts through a branded search, which touchpoint receives credit? Linear attribution models built for keyword-click-conversion paths have no coherent answer for AI-mediated conversion journeys. These new placements represent demand being shaped before any click — before any event a pixel can register — and that invisible influence is systematically absent from current measurement architectures.
Why This Matters
The stakes here are higher than a reporting inconvenience. Teams that continue measuring AI-driven campaigns with traditional input-focused metrics are making budget allocation decisions on data that misrepresents actual business performance. The gap between what the numbers show and what the business is actually experiencing will widen as AI campaign types capture an increasing share of total paid search spend.
The Attribution Window Problem Is Systematically Undercounting Conversions
Research from Google and Boston Consulting Group on “4S behaviors” — streaming, scrolling, searching, shopping — cited in the Search Engine Journal piece demonstrates that AI recommendations earlier in the research phase are reshaping customer journeys. AI influences decisions well before the final search query, but conversion attribution still registers at the click that closes the loop. This creates a systematic undercount: campaigns appear less efficient than they actually are, because AI-assisted awareness and consideration touchpoints go unmeasured. The practical fix is specific: review your conversion lag reports by campaign type. If your product has a 45-day average decision cycle and your attribution window is set at the platform default of 30 days, you are structurally missing conversions from every report you generate. Extending windows to 60-90 days for longer-consideration categories changes both the performance read and the strategic decisions that follow from it.
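The undercount is easy to quantify once you have lag data. A minimal sketch, using hypothetical lag values in place of a real conversion lag export joined with CRM close dates:

```python
# Illustrative sketch (not the article's methodology): estimate how many
# conversions a given attribution window captures. The lag values are
# hypothetical placeholders for a real conversion lag report export.
from statistics import mean

# Days from click to conversion for twelve hypothetical conversions.
conversion_lags = [3, 7, 12, 18, 25, 33, 41, 47, 52, 61, 74, 88]

def captured_share(lags, window_days):
    """Fraction of conversions that land inside the attribution window."""
    return sum(1 for lag in lags if lag <= window_days) / len(lags)

for window in (30, 60, 90):
    share = captured_share(conversion_lags, window)
    print(f"{window}-day window captures {share:.0%} of conversions")

print(f"average lag: {mean(conversion_lags):.1f} days")
```

With a lag distribution like this one, a 30-day default window reports well under half of the conversions that a 90-day window would, which is the distortion the extended-window recommendation is aimed at.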
Organic Erosion Forces a Structural Rethink of Paid Measurement
The relationship between paid and organic search has shifted from complementary to substitutive. Research from SparkToro and Datos cited in the SEJ article found that nearly 60% of Google searches now end without a click — AI Overviews are answering questions directly in the SERP, capturing intent without routing users to publisher sites. For businesses with historically strong SEO performance, this creates a compounding measurement problem: paid search spend must absorb more of the acquisition burden to compensate for organic decline, but evaluating paid search in isolation — as though organic were still delivering its historical volume — produces a distorted picture of actual acquisition economics. Blended metrics across channels become essential infrastructure, not optional analytical sophistication.
ROAS Is a Misleading Primary Metric in AI-Managed Campaigns
Here is the uncomfortable reality that most PPC reporting avoids: ROAS, the metric most executive teams use to evaluate paid search performance, is structurally misleading when AI systems are optimizing campaigns. A Performance Max campaign reporting 700% ROAS sounds exceptional. But if that efficiency is driven primarily by capturing existing demand — returning customers searching branded terms who were going to convert regardless of the campaign — it is generating attribution credit for conversions the business would have seen without the campaign spend. Meanwhile, a campaign at 300% ROAS that is genuinely acquiring new customers who would not otherwise have discovered the brand may be delivering far more long-term business value.
Osmundson makes this case explicitly in the SEJ article: automation excels at intercepting already-converting users, which means high ROAS from AI-managed campaigns is often a measurement artifact rather than a genuine indicator of marketing efficiency. Performance Max and AI Max are highly capable demand capture machines. Whether they are also creating new demand — expanding the buyer pool rather than more efficiently harvesting the existing one — requires incrementality measurement to determine, and the vast majority of accounts are not running those tests.
Agency and In-House Teams Face Different Versions of the Same Problem
Agency teams managing client accounts across multiple verticals face the sharpest version of this challenge. When a client asks why branded keyword CPC is up 40% while Performance Max is running in parallel, and the honest answer involves explaining that the two campaigns may be competing against each other in the same auction, the measurement gap becomes a client management crisis, not just a technical configuration issue. In-house marketing teams face a different pressure: finance and executive stakeholders who built their budget planning models around keyword-level ROAS and impression share data are working with metrics the platform is actively making less granular and less meaningful. Rebuilding those reporting relationships around business outcomes rather than campaign mechanics requires both technical investment and organizational patience — but there is no longer a credible alternative.
The Data
The transition from traditional PPC measurement to an AI-era framework is comprehensive. It is not a matter of swapping one metric for another — the data sources, reporting cadences, success metrics, and optimization targets all change.
| Measurement Dimension | Traditional PPC Measurement | AI-Era Measurement Framework |
|---|---|---|
| Primary success metric | ROAS by campaign / ad group | Contribution margin; blended CAC |
| Attribution model | Last-click or data-driven (7–30 day window) | Data-driven with 60–90 day window plus incrementality layer |
| Targeting visibility | Keyword-level data by match type | Intent cluster analysis via Search Terms Insights |
| Campaign type focus | Standard Search, Shopping, Display | Performance Max, AI Max, Demand Gen |
| Channel visibility | Single-channel per campaign | Cross-channel breakdown (available in PMax since April 2025) |
| Incrementality testing | Not standard practice | Geo holdout tests; Google testing at $5,000 minimum threshold |
| Conversion inputs | On-site click events only | Offline imports, CRM revenue mapping, CLV value signals |
| Organic relationship | Measured independently | Blended CAC across paid + organic acquisition |
| Reporting frame | Campaign mechanics for PPC manager | Business outcomes for finance and leadership |
| Emerging placements | Search and Shopping | Google AI Mode, ChatGPT ads (launched Jan 2026), AI conversations |
Sources: SEJ — PPC Measurement, SEJ — ChatGPT Ads, SEJ — Performance Max Updates
Every row in this table represents a change in how data is collected, interpreted, and acted upon. The bottom row is particularly significant: the ad placement universe that existed in 2023 is different from the one that exists today, and any measurement framework built before January 2026 is missing at least one active placement category entirely.
Real-World Use Cases
Use Case 1: E-Commerce Brand Auditing Performance Max Channel Allocation
Scenario: A mid-sized outdoor apparel brand is running a Performance Max campaign with $50,000 in monthly budget and a reported 620% ROAS. The marketing director suspects the campaign is primarily capturing repeat customers searching branded terms, but has had no channel-level data to confirm or refute the hypothesis.
Implementation: Using channel-level reporting now available in Performance Max — released April 2025 and extended by the March 2026 Performance Max update — the team breaks spend down by network. They find that 38% of budget is routing to YouTube at a conversion rate 60% below the campaign average. They apply the new first-party audience exclusion feature to remove existing customers from the Performance Max campaign, launch a standalone branded search campaign to capture that intent separately, and implement CLV-weighted offline conversion imports that assign higher revenue values to new customer purchases versus repeat orders. The signal the optimization system receives changes from “all conversions are equal” to “new customer acquisition is worth more to the business.”
Expected Outcome: Within 60 days, the campaign’s new customer acquisition rate increases. Overall reported ROAS drops from 620% to approximately 470% — a number that looks worse in any dashboard built around ROAS as the primary metric, but represents substantially higher business value. The campaign is now correctly optimizing for customer growth rather than efficiently re-converting an existing base.
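The CLV-weighted import in this use case comes down to one decision: what value accompanies each conversion when it is sent back to the platform. A minimal sketch, where the multipliers and field names are assumptions I chose for illustration rather than the brand's actual configuration:

```python
# Hypothetical sketch of CLV-weighted conversion values for offline import.
# The multipliers are invented for illustration: the idea is simply that a
# new-customer order sends a stronger value signal than a repeat order of
# the same revenue, so the AI optimizes toward acquisition.

NEW_CUSTOMER_MULTIPLIER = 1.5    # assumed weighting for first purchases
REPEAT_CUSTOMER_MULTIPLIER = 0.5  # assumed down-weighting for repeat orders

def conversion_value(order_value: float, is_new_customer: bool) -> float:
    """Value passed back to the ad platform instead of raw order revenue."""
    multiplier = NEW_CUSTOMER_MULTIPLIER if is_new_customer else REPEAT_CUSTOMER_MULTIPLIER
    return round(order_value * multiplier, 2)

orders = [
    {"click_id": "abc123", "value": 120.0, "new_customer": True},
    {"click_id": "def456", "value": 120.0, "new_customer": False},
]

upload_rows = [
    {"click_id": o["click_id"],
     "conversion_value": conversion_value(o["value"], o["new_customer"])}
    for o in orders
]
for row in upload_rows:
    print(row)
```

Two orders of identical revenue now report very different values upstream, which is precisely the change from "all conversions are equal" to "new customer acquisition is worth more."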
Use Case 2: B2B SaaS Agency Running an Incrementality Test to Validate Paid Search ROI
Scenario: A B2B SaaS client is questioning whether their $80,000 monthly Google Ads spend is generating genuinely incremental pipeline, or whether those leads would have found the product through organic search and software review sites regardless of ad spend. The account manager needs an evidence-based answer that does not rely on last-click attribution.
Implementation: The agency sets up a geo holdout test using Google’s built-in incrementality testing, now accessible at a $5,000 minimum threshold per Osmundson’s reporting. Three comparable metropolitan areas are designated as holdout markets where ads pause for 30 days; three matched control markets maintain normal spend. Simultaneously, they run a branded search suppression test to isolate non-brand incrementality specifically. CRM data is imported weekly to map form completions to sales-accepted leads, providing a conversion quality signal rather than raw volume.
Expected Outcome: The test reveals that approximately 65% of PPC-attributed conversions represent genuinely incremental pipeline — the other 35% were already in a purchase decision cycle and converted through alternative paths when ads were paused. The agency uses this baseline to rightsize the budget discussion with the client and shifts optimization targets from form volume toward sales-accepted lead count, providing a credible answer to the CFO’s question that doesn’t rest on platform-reported attribution alone.
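The arithmetic behind a geo holdout read-out is simple, even if the market matching is not. A rough sketch with hypothetical totals; a real analysis would also need matched-market selection and a significance check, both omitted here:

```python
# Rough sketch of the incrementality arithmetic behind a geo holdout test.
# Numbers are hypothetical 30-day totals across matched market groups.

def incremental_share(control_conversions: int, holdout_conversions: int) -> float:
    """Share of control-market conversions that disappear when ads pause.

    Holdout markets keep converting through organic and other paths; the
    gap versus matched control markets estimates true incrementality.
    """
    return (control_conversions - holdout_conversions) / control_conversions

control = 400   # matched markets with ads running normally
holdout = 140   # markets with ads paused for 30 days
print(f"~{incremental_share(control, holdout):.0%} of attributed conversions appear incremental")
```

With these placeholder totals the incremental share works out to 65%, matching the scenario above: the 35% of conversions that still occur with ads paused are the ones last-click attribution was over-crediting.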
Use Case 3: Retailer Building a Blended CAC Dashboard as Organic Traffic Declines
Scenario: An e-commerce retailer with historically strong SEO performance is watching organic traffic decline consistently quarter-over-quarter as Google AI Overviews answer product-related queries without routing users to the site. Paid search spend has increased 35% over 18 months to compensate, but the CFO is questioning whether total marketing investment makes economic sense.
Implementation: The marketing team builds a blended customer acquisition cost dashboard aggregating total new customer acquisition spend — paid search, paid social, and email acquisition costs — divided by total new customers acquired across all channels in the same period. This metric is reported monthly alongside channel-level last-click attribution retained for operational use. They add a time-series view tracking paid search’s share of new customer acquisition over 24 months, making the organic-to-paid demand shift visible as a structural trend rather than an unexplained cost increase.
Expected Outcome: The blended CAC dashboard provides finance stakeholders a stable, defensible number that accounts for channel mix shifts rather than treating each channel as if it operates independently. The team demonstrates that while paid search ROAS appears to be declining — because it is absorbing demand that previously arrived through organic at no incremental cost — overall acquisition economics remain within planned thresholds. Budget planning conversations shift from defending ROAS numbers to managing blended CAC within strategic targets, a framing that finance teams find considerably more credible.
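The blended CAC metric described here is a single division, which is part of why finance teams trust it. A minimal sketch with hypothetical channel figures:

```python
# Minimal blended-CAC sketch matching the dashboard described above.
# Channel names and dollar figures are hypothetical placeholders.

monthly_spend = {
    "paid_search": 90_000,
    "paid_social": 35_000,
    "email_acquisition": 5_000,
}
new_customers_all_channels = 1_600  # includes organic/direct acquisitions

blended_cac = sum(monthly_spend.values()) / new_customers_all_channels
print(f"blended CAC: ${blended_cac:,.2f}")
```

Because the denominator counts every new customer regardless of channel, the number stays stable when demand shifts from organic to paid, which is exactly the property channel-isolated ROAS lacks.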
Use Case 4: Higher-Education Institution Evaluating ChatGPT Ads for Enrollment Campaigns
Scenario: A graduate education institution wants to assess whether ChatGPT’s advertising platform — launched in testing in January 2026 — merits budget allocation for graduate program enrollment campaigns, given that prospective students increasingly use AI assistants for program research.
Implementation: Per Search Engine Journal’s reporting on ChatGPT ads, initial advertiser access requires commitments of $50,000 to $100,000, with self-serve capabilities expected in April 2026. Rather than committing at that scale without conversion measurement infrastructure in place, the institution monitors the self-serve launch and benchmarks the current reported CTR of 0.91% against their own enrollment funnel metrics. They construct an attribution approach that treats ChatGPT as a top-of-funnel awareness placement, tracking whether branded search volume increases in markets where ChatGPT campaigns run versus control markets — the same approach they apply to linear television and podcast sponsorships.
Expected Outcome: The institution defers meaningful budget commitment until self-serve access launches with lower minimums, using the interim period to build measurement infrastructure appropriate for an awareness-layer channel. When they commit test budget, success is measured not through last-click enrollment attribution, but through incremental branded search lift and application volume in exposed geographies — a methodology consistent with a channel where CTR is one-seventh of Google search norms.
Use Case 5: Financial Services Company Restructuring Conversion Quality Signals
Scenario: A financial services company running lead generation campaigns optimized toward contact form completions has watched lead quality degrade significantly over twelve months. Volume is up, but the sales team reports fewer leads meeting qualification criteria. The marketing team suspects AI optimization is finding the cheapest form fills rather than the most valuable prospects, because all conversion events pass to the platform as equally valuable.
Implementation: Following the conversion quality framework outlined in Osmundson’s piece, the team audits their conversion setup and confirms the problem: every form completion passes to Google Ads as an identical conversion event with no value differentiation. They implement a tiered offline conversion import mapped to CRM outcomes: marketing-qualified leads import at a value of $50, sales-accepted leads at $200, and closed-won opportunities at $800. These values are passed back to Google Ads with a 30-day lag, giving the AI optimization system real signals about which queries, audiences, and creative combinations produce customers worth acquiring rather than just cheap form submits.
Expected Outcome: Within 90 days of value-based conversion tracking, CPL increases 25% as the system stops routing budget toward cheap form fills. However, the sales-accepted lead rate improves from 18% to 31%, meaning the actual cost per sales-accepted lead decreases by roughly 27% (a 25% higher CPL divided by a qualification rate that rose from 18% to 31%). The AI system, now receiving differentiated value signals, reallocates spend toward intent patterns and audience segments that produce qualified pipeline — results the team had no systematic way to engineer manually.
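The tiered mapping at the core of this fix is a small lookup table. The dollar values come from the use case above; the CRM stage names are assumptions about how this company's pipeline might be labeled:

```python
# Sketch of the tiered offline conversion values described above.
# Dollar amounts are from the scenario; stage names are assumed labels.

STAGE_VALUES = {
    "marketing_qualified": 50,
    "sales_accepted": 200,
    "closed_won": 800,
}

def offline_conversion_value(crm_stage: str) -> int:
    """Value sent back with the offline conversion import for a lead."""
    # Unrecognized stages send no value signal rather than a misleading one.
    return STAGE_VALUES.get(crm_stage, 0)

# Three leads that cost the same to acquire now send very different
# signals to the optimization system.
for stage in ("marketing_qualified", "sales_accepted", "closed_won"):
    print(stage, offline_conversion_value(stage))
```

A closed-won opportunity is worth sixteen times a marketing-qualified lead in this scheme, which is what gives the optimization system something to differentiate on instead of treating every form fill as a $0-value event of equal weight.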
The Bigger Picture
The PPC measurement crisis is not a temporary inconvenience or a Google-specific problem. It is a structural consequence of transitioning from advertiser-controlled targeting to AI-controlled targeting, and the industry is still in the early stages of adapting its frameworks to match the new reality.
Google’s platform trajectory has been consistent across five years. Performance Max, launched broadly in 2021, progressively reduced direct advertiser control: no keyword targeting, limited placement exclusions, no device bid adjustments. AI Max extends this model further, expanding match territory beyond what any advertiser would write explicit targeting rules for. Each campaign type generation represents another step toward a system where Google’s AI determines what inventory to buy, at what price, for which user, and in which context — and the advertiser provides creative assets, conversion signals, and budget. The March 2026 Performance Max updates — audience exclusions, budget forecasting, demographic breakdowns, placement reporting — represent meaningful progress in transparency. But they arrive years after the campaign type shipped, reinforcing the pattern: Google builds the black box, then gradually adds windows.
The ChatGPT advertising platform introduces a different dimension of complexity. OpenAI generated approximately $100 million in annualized advertising revenue within the first six weeks of testing, despite reported CTRs of 0.91% — roughly one-seventh of Google’s search benchmark of 6.4%. That revenue figure signals something important: advertisers are willing to pay for access to AI assistant users even in the absence of mature conversion attribution infrastructure, because they recognize that AI recommendation systems are where purchase decisions are increasingly being shaped. The platform monetizes on the basis of audience quality and decision-making influence rather than demonstrable click performance — a fundamentally different value proposition from search advertising, requiring a fundamentally different measurement approach.
The SparkToro and Datos finding that nearly 60% of Google searches now end without a click explains why all of this is happening simultaneously. AI answers questions. When AI answers questions, organic publisher traffic declines. When organic traffic declines, paid search must absorb a larger share of acquisition. When paid search absorbs more of the mix, measurement accuracy becomes more financially consequential — and simultaneously, the AI systems managing those paid campaigns make traditional measurement harder. The industry is in a reinforcing cycle, and practitioners who understand the full loop will make better decisions at every point within it.
The deeper implication is about where competitive advantage now lives in paid search. It used to reside with advertisers who had more sophisticated keyword strategies, better Quality Scores, and tighter bid management. Today, with AI managing those mechanics, the advantage belongs to advertisers with better first-party data infrastructure — richer conversion signals, cleaner CRM integrations, more differentiated customer lifetime value inputs — and the measurement frameworks to confirm their AI optimization systems are producing genuine business outcomes.
What Smart Marketers Should Do Now
1. Audit your conversion quality inputs before making any other changes.
The fastest leverage point in AI-driven PPC is not bid strategy or campaign structure — it is the quality of conversion signals feeding the optimization system. If every conversion event passes to Google Ads with equal weight regardless of customer value, you are telling the AI that a $15 repeat purchase of a clearance item is worth the same as an $800 first-time purchase of a high-margin product. The system optimizes for what you measure. Pull your current conversion action configuration, map each event to its actual business value, and implement offline conversion imports with differentiated values within the next 30 days. Per Osmundson’s framework, this single infrastructure change — not bid strategy adjustments, not campaign restructuring — has more impact on downstream AI performance than any other lever available to practitioners working within AI-managed campaigns.
2. Extend your attribution window and review conversion lag data this week.
Open Google Ads and pull your conversion lag report segmented by campaign type. If you have products or services with 30-plus day consideration cycles — high-consideration purchases, B2B software, financial products, higher education, home services — and your attribution window sits at the platform default of 30 days, you are structurally excluding a portion of actual conversions from every performance report you produce. Extending attribution windows to 60-90 days for categories where purchase decision cycles warrant it changes not just the raw conversion count, but the efficiency metrics that stakeholders are using to make budget decisions. More importantly, it changes how you defend channel investment when finance teams are seeing numbers that understate actual performance.
3. Build a blended CAC metric and report it alongside ROAS starting this quarter.
If organic search is declining in your category — and if AI Overviews are active in your primary SERPs, it almost certainly is — evaluating paid search in isolation creates a picture that obscures rather than illuminates actual acquisition economics. Calculate total acquisition spend across all relevant channels divided by total new customers acquired in the same period, and report this monthly alongside channel-level last-click data retained for operational optimization. Blended CAC gives finance and executive stakeholders a consistent number to track even as the channel mix shifts quarter over quarter. It also provides the framing needed to explain why paid search investment is increasing: not because paid search is becoming less efficient in absolute terms, but because it is absorbing demand that organic search no longer delivers at zero incremental cost.
4. Run at least one incrementality test before the end of Q2 2026.
Google’s incrementality testing is now accessible at a $5,000 minimum threshold, placing it within reach for most mid-market accounts that previously assumed this type of measurement was an enterprise-only capability. A geo holdout test does not require sophisticated statistical methodology — it requires defining comparable geographic markets, pausing ads in holdout markets for 30 days, maintaining normal spend in control markets, and comparing conversion outcomes. The output answers the question your leadership team is actually asking when they push back on budget: would these conversions have happened regardless of the campaign? Per Osmundson’s reporting, every account should have an incrementality baseline before Q3 budget planning. Without it, all ROAS and CAC numbers rest on attribution assumptions that may be meaningfully wrong in either direction.
5. Audit your Performance Max channel allocation and act on what the data shows.
Channel-level reporting, audience exclusions, and demographic breakdowns are now available in Performance Max following the March 2026 updates. If you have not conducted a channel audit since these features became available, you are making campaign decisions based on aggregate numbers that conceal what is actually happening at the channel level. Pull network-level spend distribution and compare conversion rates by channel. If YouTube is receiving substantial budget but converting at a rate significantly below the Search network, make an explicit strategic decision: separate the campaign by objective, set a YouTube-specific conversion goal aligned with awareness, or use audience exclusions to concentrate spend on higher-intent placements. Leaving default channel allocation intact while the data to interrogate it now exists is not a defensible position.
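The channel audit above can be sketched as a simple pass over the network-level report. The channel rows and the 2x-CPA review threshold here are arbitrary choices for illustration, not figures from any real account:

```python
# Illustrative Performance Max channel audit. The rows are placeholders
# standing in for the network-level report Google now exposes; the 2x-CPA
# flag threshold is an arbitrary choice for this sketch.

channels = [
    {"channel": "Search",  "spend": 24_000, "conversions": 960},
    {"channel": "YouTube", "spend": 19_000, "conversions": 190},
    {"channel": "Display", "spend": 7_000,  "conversions": 140},
]

total_spend = sum(c["spend"] for c in channels)
total_conv = sum(c["conversions"] for c in channels)
campaign_cpa = total_spend / total_conv  # blended CPA hides channel spread

for c in channels:
    cpa = c["spend"] / c["conversions"]
    share = c["spend"] / total_spend
    flag = "REVIEW" if cpa > 2 * campaign_cpa else "ok"
    print(f"{c['channel']:8} {share:5.0%} of spend, CPA ${cpa:,.2f} [{flag}]")
```

In this hypothetical, YouTube absorbs a large spend share at a CPA more than double the campaign blend, the pattern that the aggregate number conceals and that would trigger one of the explicit decisions listed above.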
What to Watch Next
Google AI Mode Shopping Attribution Model
Google’s testing of shopping results within AI Mode represents the most immediate measurement frontier in paid search. The attribution framework for these placements remains undefined — when a user browses AI-generated shopping recommendations and converts through a branded search several days later, the current infrastructure has no standard mechanism to credit the AI Mode touchpoint. Watch for Google to announce an attribution model for AI Mode placements in Q2 or Q3 2026. When it arrives, implementing it will likely require verified product feed integration and enhanced conversion tracking already configured, making now the appropriate time to audit those technical foundations.
ChatGPT Self-Serve Advertising Launch
OpenAI is launching self-serve advertiser capabilities in April 2026, per Search Engine Journal’s reporting. Current access requires $50,000 to $100,000 minimum commitments; self-serve will lower that barrier significantly. The two benchmark metrics to track in Q2 and Q3 2026 are CTR improvement from the current 0.91% baseline and whether OpenAI introduces conversion tracking infrastructure that integrates with existing analytics and attribution tools. Until direct conversion measurement is available and validated, ChatGPT ads remain an awareness-layer investment without the accountability infrastructure most performance marketing teams require before scaling budget allocation.
Performance Max Exclusion and Control Expansion
The first-party audience exclusion capability launched in March 2026 covers existing customer suppression. Watch over the next two quarters for Google to expand exclusion capabilities to include low-value segment suppression and finer-grained placement controls. Combined with the demographic reporting now available, these features will progressively give practitioners more meaningful influence over Performance Max targeting than has existed at any point since the campaign type launched. Each new control capability represents an opportunity to provide cleaner optimization signals to the underlying AI system.
Regulatory Transparency Requirements on AI Auction Mechanics
The EU’s Digital Markets Act has begun scrutinizing Google’s auction mechanics, and as AI-managed campaign types become the dominant format in paid search, regulatory attention will increasingly focus on whether advertisers receive adequate transparency into how AI systems allocate their budgets. Any significant enforcement actions or transparency requirements in the second half of 2026 could compel Google to expose more auction-level data — a development that would be a meaningful net positive for measurement capabilities across the global advertiser community.
Bottom Line
PPC measurement is not broken — but the framework most teams are using is built for a platform that no longer exists. Traditional input-focused reporting made sense when advertisers directly controlled keyword targeting, match types, and bid adjustments. The shift to AI-controlled auctions requires a parallel shift in measurement: from validating inputs toward confirming business outcomes — contribution margin over ROAS, incrementality over attributed conversions, blended CAC over channel-isolated efficiency, conversion quality signals over raw volume. The four-layer framework outlined by Brooke Osmundson at Search Engine Journal — profitability first, incrementality testing, blended acquisition cost, and first-party conversion quality — is the most operationally coherent architecture available for this environment. Implementing it requires first-party data infrastructure, CRM integration, and a willingness to report numbers that look worse on existing dashboards while representing better actual business performance. The teams building this measurement foundation in 2026 will hold a compounding advantage as AI platforms continue abstracting away the auction mechanics that legacy PPC reporting depended on.